[OpenJDK 2D-Dev] X11 uniform scaled wide lines and dashed lines; STROKE_CONTROL in Pisces

Denis Lila dlila at redhat.com
Mon Sep 27 14:43:51 UTC 2010

Hi Jim.
> How much faster?  I'm worried about this, especially given our tiled 
> approach to requesting the data.  What was the bottleneck before? 
> (It's been a while since I visited the code - we weren't computing the 
> crossings for every curve in the path for every tile being generated 
> were we?)

    Not much faster. I'm working on something better.
    I'm not sure about the bottleneck, but what we were doing before is:
1. Flatten (by subdividing) every curve so that we deal only with lines.
2. Add each line to a list sorted by y0. When end_rendering was called, for
each scanline we found the crossings of that scanline with every line
in our line list, which we used to compute the alpha for that scanline's
pixel row. All of this was put into RLE-encoded temporary storage and then
read back and converted into tile form by PiscesTileGenerator.
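The crossing computation in step 2 can be sketched roughly like this. This is an illustrative toy, not the Pisces code; the class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the old scanline approach: flattened lines are kept
// in a list, and for each scanline we compute where every active line crosses
// it. Names are illustrative, not those used in Renderer.java.
public class ScanlineSketch {
    static final class Edge {
        final double x0, y0, x1, y1; // stored so that y0 <= y1
        Edge(double x0, double y0, double x1, double y1) {
            if (y0 <= y1) { this.x0 = x0; this.y0 = y0; this.x1 = x1; this.y1 = y1; }
            else          { this.x0 = x1; this.y0 = y1; this.x1 = x0; this.y1 = y0; }
        }
        // x-coordinate where this edge crosses the horizontal line at y
        double crossingAt(double y) {
            return x0 + (y - y0) * (x1 - x0) / (y1 - y0);
        }
    }

    // For one scanline, collect the sorted crossings of every active edge.
    static double[] crossings(List<Edge> edges, double y) {
        List<Double> xs = new ArrayList<>();
        for (Edge e : edges) {
            if (y >= e.y0 && y < e.y1) { // half-open rule avoids double counting
                xs.add(e.crossingAt(y));
            }
        }
        double[] out = new double[xs.size()];
        for (int i = 0; i < out.length; i++) out[i] = xs.get(i);
        Arrays.sort(out);
        return out;
    }

    public static void main(String[] args) {
        // Unit square: the scanline y=0.5 should cross at x=0 and x=1.
        List<Edge> square = new ArrayList<>();
        square.add(new Edge(0, 0, 0, 1));
        square.add(new Edge(1, 0, 1, 1));
        double[] xs = crossings(square, 0.5);
        System.out.println(xs[0] + " " + xs[1]); // prints "0.0 1.0"
    }
}
```

The pairs of sorted crossings then give the covered spans from which the alpha row is accumulated.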

    Speaking of which, would it not be better to get rid of PiscesCache and
just keep a buffer with the current tile row in Renderer.java? This would
be possible because the specification for AATileGenerator says the iteration
goes like: for (y...) for (x...);.
Why is PiscesCache there? It isn't being used as a cache at all. Could it be?
Also, why do we output tiles instead of just pixel rows (which I guess would
just be nx1 tiles)? Is it because we would like to use getTypicalAlpha to eliminate
completely transparent or completely opaque regions as soon as possible (and the
longer a tile is, the less chance it has of being either of those two)?

> I can see your points here.  I think there are solutions to avoid much 
> of the untransforming we can consider, but your solution works well so 
> let's get it in and then we can look at optimizations if we feel they 
> are causing a measurable problem later.

    I should say this isn't quite as bad as I might have made it seem. Firstly,
the IO handler class I made eliminates transformations when Dasher
communicates with Stroker. More importantly, no untransforming is done
when the transformation is just a translation, the identity, or singular,
and when STROKE_CONTROL is off we only transform the output path. That's
because the most important reason for handling transforms the way I do now
is that we can't normalize untransformed paths, otherwise the coordinate
adjustments might be magnified too much. So, we need to transform paths
before normalization. But we also can't do the stroking and widening
before normalization. If normalization is turned off, though, we can just pass
untransformed paths into Stroker and transform its output (which is still
somewhat more expensive than only transforming the input path, since
Stroker produces 3-7 curves for each input curve).
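The order of operations above can be sketched with a toy round trip. This is not the actual Pisces code; the method name is hypothetical, and I use AffineTransform directly just to show why normalization has to happen in device space:

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.NoninvertibleTransformException;
import java.awt.geom.Point2D;

// Illustrative sketch of the pipeline described above: transform to device
// space, normalize there, untransform so the stroking/widening can run with
// a constant line width in user space, then transform the output.
public class TransformOrder {
    // Snap a user-space point to pixel centers in device space, then return
    // to user space (where the hypothetical Stroker would run).
    static Point2D normalizeInDeviceSpace(Point2D p, AffineTransform at)
            throws NoninvertibleTransformException {
        Point2D dev = at.transform(p, null);                // user -> device
        // Snapping must happen here: a sub-pixel nudge applied in user space
        // would be magnified by the scale part of the transform.
        Point2D snapped = new Point2D.Double(
                Math.floor(dev.getX()) + 0.5,
                Math.floor(dev.getY()) + 0.5);
        return at.createInverse().transform(snapped, null); // device -> user
    }

    public static void main(String[] args) throws NoninvertibleTransformException {
        AffineTransform at = AffineTransform.getScaleInstance(2, 3);
        Point2D user = normalizeInDeviceSpace(new Point2D.Double(1.3, 2.7), at);
        // ... Stroker would widen the path in user space here ...
        Point2D out = at.transform(user, null);             // output -> device
        System.out.println(out.getX() + " " + out.getY());  // prints "2.5 8.5"
    }
}
```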

> Note that these kinds of heuristics are the kinds of things that
> people could probably create dissertations on, so my hat is off to you. 
> Hopefully I'll be able to wrap my head around it without too much
> trouble.

    Thanks :)

> I'm not sure I understand the reasoning of the control point 
> calculation.  I'll have to look at the code to register an opinion.

    I'm sorry, my explanation wasn't very clear. I attached a picture that
will hopefully clarify things.
But, in a way, the computation I use is forced on us. Suppose we have a
quadratic curve B and we need to compute one of its offsets C. C'(0)
and C'(1) will be parallel to B'(0) and B'(1) so we need to make sure
our computed offset has this property too (or it would look weird around
the endpoints). Now, B'(0) and B'(1) are parallel to p2-p1 and p3-p2
where p1,p2,p3 are the 3 control points that define B, so if the control
points of C are q1, q2, q3 then q2-q1 and q3-q2 must be parallel to p2-p1
and p3-p2 respectively. At this point we need more constraints, since
our system is underdetermined. We use the constraints that q1 = C(0)
and q3 = C(1) (so the endpoints of the computed offset are equal to the
endpoints of the ideal offset). All we have left to compute is q2, but
we know the direction of q2-q1 and the direction of q3-q2, so q2 must
lie on both of the lines defined by q1+t*(q2-q1) and q3+t*(q3-q2);
therefore q2 must be the intersection of these lines.
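The intersection step boils down to solving a 2x2 linear system. Here is a minimal sketch (illustrative names, not the Stroker code), with q2 as the intersection of the line through q1 with direction p2-p1 and the line through q3 with direction p3-p2:

```java
// Sketch of the control-point computation described above. Not the actual
// Stroker code; parameter names are illustrative.
public class QuadOffsetCtrl {
    // Intersect the line q1 + t*d1 with the line q3 + s*d2; returns {x, y}.
    static double[] intersect(double q1x, double q1y, double d1x, double d1y,
                              double q3x, double q3y, double d2x, double d2y) {
        // Solve q1 + t*d1 = q3 + s*d2 for t via Cramer's rule.
        double det = d1x * (-d2y) - d1y * (-d2x);
        // det == 0 means the two tangents are parallel (degenerate curve);
        // real code must subdivide or fall back in that case.
        double t = ((q3x - q1x) * (-d2y) - (q3y - q1y) * (-d2x)) / det;
        return new double[] { q1x + t * d1x, q1y + t * d1y };
    }

    public static void main(String[] args) {
        // Line through (0,0) with direction (1,1) meets the line through
        // (2,0) with direction (1,-1) at (1,1).
        double[] q2 = intersect(0, 0, 1, 1, 2, 0, 1, -1);
        System.out.println(q2[0] + " " + q2[1]); // prints "1.0 1.0"
    }
}
```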

> It sounds like you are correct here.  What does the closed source code
> draw?

    I thought closed source java simply didn't draw the round joins in
these cases, but I did some more testing and it turns out it does for
some curves and it doesn't for others. I've included the results of a
test I wrote that tries to draw paths like: p.moveTo(0,0); p.lineTo(cos,sin); p.lineTo(0,0);
where cos and sin are coordinates on a circle (source1.png is the output
of closed java; source2.png is my output). As you can see, my
version draws the round joins on all tested cases, while closed java
is inconsistent.
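The test is essentially the following (a hypothetical reconstruction of my harness, not the exact code; eight sample angles here instead of however many the real test uses):

```java
import java.awt.BasicStroke;
import java.awt.Shape;
import java.awt.geom.Path2D;

// Sketch of the degenerate-path round-join test described above: a path
// that goes out to a point on the unit circle and straight back, stroked
// with round joins. The question is whether a join is drawn at the tip.
public class RoundJoinTest {
    public static void main(String[] args) {
        BasicStroke stroke = new BasicStroke(10f, BasicStroke.CAP_BUTT,
                                             BasicStroke.JOIN_ROUND);
        for (int i = 0; i < 8; i++) {
            double theta = 2 * Math.PI * i / 8;
            Path2D.Double p = new Path2D.Double();
            p.moveTo(0, 0);
            p.lineTo(Math.cos(theta), Math.sin(theta));
            p.lineTo(0, 0); // back along the same segment: join at the tip
            Shape outline = stroke.createStrokedShape(p);
            System.out.println(i + ": empty="
                    + outline.getPathIterator(null).isDone());
        }
    }
}
```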

> Are you saying that I can ignore that question now?


> let's work on reviewing the code with an eye towards getting it in
> as an important milestone before we muck with it too much more...

    That sounds good. Hopefully by the end of today I'll have a
less memory hungry AA implementation that is also faster.

Thank you,

----- "Jim Graham" <james.graham at oracle.com> wrote:

> Hi Denis,
> Sorry for being so distracted (on the other hand, it was for JavaOne 
> ;-), but it looks like you made some incredible progress in the
> meantime!
> On 9/22/2010 2:22 PM, Denis Lila wrote:
> > I also ran a 2D graphics test suite:
> http://icedtea.classpath.org/hg/gfx-test/
> > icedtea6 version 1.8, openjdk7 with my patch does much better. The
> number of
> > generated images that are identical to closed java has more than
> doubled. No
> Woohoo!
> > I should give a high level overview of how things have changed:
> > 1) Antialiasing uses adaptive forward differencing. Now I rasterize
> each curve
> > as soon as I receive it and store the crossings instead of storing
> curves and computing
> > the crossings as needed. This is faster, but not as memory friendly
> so I'm a bit uneasy
> > about this decision. What do you think?

> > 2) In Dasher.java I implemented the buffer to store initial
> segments.
> > 3) For dashing, I compute curve lengths using the algorithm you told
> me about.
> Cool.  Keep in mind that I never did an exhaustive search for "best way 
> to measure curve length" so we may very well find a better algorithm if 
> we need to.
> > 4) Transforms are handled differently. We used to transform
> everything before feeding
> > it to pisces. Since pisces has to compute offsets, it needed to know
> about these
> > transforms. This made things very confusing. I have introduced a
> class which Stroker
> > and Dasher use for IO that knows about transformations. So, when a
> shape is being
> > rendered its path iterator transforms the points. These transformed
> points are
> > then normalized (if normalization is on). Then they are passed
> through this IO
> > handler which untransforms the points and feeds them to Dasher or
> Stroker, which
> > pass their output through the IO handler again which transforms
> them. Unfortunately,
> > this will do many more transformations than before. The reason I did
> this is to
> > avoid going back and forth between user space and device space
> coordinates in the
> > same file.

> > 5) In stroker, we used to keep variables that stored the previous
> point (px0,py0)
> > and the second point (sx1 and sy1, I think). I eliminated these.
> They were
> > misleading and unnecessary and just don't make sense if we have
> curves. They were
> > misleading because the only way they were ever used was to get
> tangent vectors at
> > the start and current position in the path. I replaced them with
> variables that
> > hold these tangents. This is much clearer and saves us many
> subtractions. Because
> > of this some of the methods parameters have changed. The
> computeMiter parameters
> > are a good example of ones that should have changed but didn't
> because I didn't
> > have time to refactor it. This should be done because if we use
> vectors it will
> > be clearer and will help with 9).
> Cool!
> > 6) 0 length curves and lines were being ignored. This isn't what
> proprietary java
> > does (which is drawing caps/joins as if a tiny horizontal line going
> right had
> > just been drawn). I "fixed" this to match the behaviour of
> proprietary java.
> > Because of the change above, this turned out to be a 3 liner.
> I'm glad these restructuring changes are starting to pay off. 
> Hopefully a lot more tweaks will get easier in the future.
> > 7) I put code that draws joins in its own method (drawJoin). This
> made the close
> > and finish methods much clearer and allowed me to fix this
> createStrokedShape
> > issue: http://bugs.openjdk.java.net/show_bug.cgi?id=100046
> Saner outlines are a good indication of cleaner internal processes.
> > 8) To compute cubic offset curves first I subdivide the original
> curve at points
> > where it starts bending too much (i.e. more than 90 degrees since
> the last
> > subdivision), has inflection points, and where one of the offset
> curves has
> > cusps (see comments in the file for more on this). Finding the
> offset cusps was
> > the main reason for 4, since if we worked with transformed
> coordinates there
> > could be shears and the linewidth would not be constant (we need the
> line width
> > because offset cusps occur when the radius of curvature is equal to
> the width).
> > Then for each of these curves, I find a left offset and a right
> offset using
> > constraints that the curve and the offsets must have parallel
> tangents at t=0
> > and t=1 and that the computed offsets at t=0.5 must be equal to the
> ideal
> > true offsets at t=0.5.

> > 9) To compute quadratic offsets I subdivide as above (except for the
> inflection
> > points, of which there are none). Then for each subdivision I
> compute the start
> > and end point in the obvious manner, and I compute the control point
> as the
> > intersection of the lines that go through the end points and are
> parallel to
> > the tangents at the end points.

> > 10) I have included my old patch for drawing round joins using
> bezier curves.
> Yay!
> > 11) The side and isCCW methods have changed and are almost exactly
> the same
> > as each other. I still keep them both so that round joins can be
> drawn in cases
> > such as: p.moveTo(x,y);p.lineTo(x+c,y);p.lineTo(x,y). Proprietary
> java does not
> > do this, but I think it should, since joins are meant to join
> different segments
> > in a path. In the example above there are 2 segments, and it is
> possible to
> > draw a round join, so why not? Does the specification say anything
> about this case?
> > I changed the side() algorithm because I think what I use now also
> works (or at
> > least it certainly works for all cases in which we use it), it is
> faster and
> > it is more accurate.

> > I think the above are all the important changes.
> > Some things I wanted to discuss:
> > 1. Should I change antialiasing and make it store the curves instead
> of the crossings?
> It's a topic for discussion.  I'm wary of the memory footprint, but I
> don't want to kill performance.  Is it a simple change to switch?  I'm
> also a fan of "get what you have in now as a stake in the ground and 
> then deal with some of these issues as follow-on optimizations".
> > 2. I'm not completely comfortable with the way special cases are
> handled in
> > the somethingTo, computeOffsetQuad, and computeOffsetCubic methods
> in Stroker.java
> > ("special cases" being very tiny curves).
> I'll take a look and report my comfort level too.
> > 3. From tests it looks to me like offsets for quads might not be
> good enough. It's
> > barely visible, but I don't know if I'm comfortable with it (fixing
> this would be
> > pretty simple - either add some subdivisions, or approximate quad
> curves with cubics
> > (the latter might be especially nice, since we could use the same
> code for both
> > curve types)).
> I'll need to look at code here too before I can comment.
> >>> 3. Should line 862 in Stroker be enabled (I give some reasons for
> wanting to
> >>> in a comment below it).
> >> That'll take me a day or two just to understand and figure out...
> >> <crosseyed smiley>
> > I'm sorry, I should have explained it. That line was a check for
> lineWidth2 > 1.5 &&
> > subdivisionsSoFar < 5. It was meant to avoid trying to find points
> where the offset
> > curves have cusps if we already have "enough" subdivisions. Now I
> always try to find
> > these points because I don't think this procedure is expensive
> enough to jeopardize
> > rendering quality.
> > I would very much appreciate any comments or suggestions.
> I'm afraid to make too many more before you get a chance to check in 
> what you have now and at least get us several concrete steps towards a
> quality implementation.

> 			...jim
-------------- next part --------------
A non-text attachment was scrubbed...
Name: quadCtrlComp.png
Type: image/png
Size: 20208 bytes
Desc: not available
URL: <http://mail.openjdk.java.net/pipermail/2d-dev/attachments/20100927/eeedfcfc/quadCtrlComp.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: source1.png
Type: image/png
Size: 7286 bytes
Desc: not available
URL: <http://mail.openjdk.java.net/pipermail/2d-dev/attachments/20100927/eeedfcfc/source1.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: source2.png
Type: image/png
Size: 7615 bytes
Desc: not available
URL: <http://mail.openjdk.java.net/pipermail/2d-dev/attachments/20100927/eeedfcfc/source2.png>
