[OpenJDK 2D-Dev] xrender-pipeline prototype/hack uploaded

Dmitri Trembovetski Dmitri.Trembovetski at Sun.COM
Mon Jul 21 17:43:35 UTC 2008

   Hi Clemens,

Clemens Eisserer wrote:
> 1.) I've created a separate struct to hold the per-surface xrender
> data structures, similar to ShmPixmapData.
> I initialize it in JX11SD_InitWindow for windows, because initSurface,
> where I do the xrender initialization for pixmaps, is not called for
> windows.
> Is there a better place to do it for windows, maybe unified for both
> windows and pixmaps?

   I would indeed suggest unifying your xrender-related initialization,
   so that the same code is used to initialize the xrender-related state
   for both windows and pixmaps. From that perspective, doing it in
   InitWindow sounds fine.

   I assume that you expect both X11 renderer and XRender renderer
   to be able to render to the same surface, right?

> 2.) For now I also leak xrender resources, because
> Java_sun_java2d_x11_X11SurfaceData_initOps is also called at
> window resize. What should I do about this?
> Should I store xrender-related data in the peers, like it's done with
> the window handle?

   When the window is resized, the X11SurfaceData is replaced. So you
   should dispose of your xrender resources for the old surface in
   X11SD_Dispose, just like everything else is disposed. A new surface
   will be created and initialized when needed.

> 3.) I am currently using lock/unlock/getrasinfo for MaskFill. Is it
> worth writing separate upload routines because of the overhead those
> functions have (JNI calls, locking, XSync, ...)?


> 4.) In X11SD_InitWindow I currently have RGB24 hardcoded, but I guess
> this assumption will be wrong once translucent windows are supported
> by Java. How can I get the surface type of a window?

   You can call getSurfaceType(), which is defined in SurfaceData.

   Or did you mean something different?

> 5.) Currently I set the color by passing SunGraphics2D.pixel down to
> the native library, which premultiplies it. I guess this "hardcoded"
> way of doing things is not really good.
> Is the right way to install an appropriate ColorModel?
> For premultiplied surfaces I need to specify a ColorSpace; which one
> is the right one for me?

   SurfaceData has an associated SurfaceType, which defines a PixelConverter,
   which converts rgb values to pixels for a particular surface type. So when
   Graphics.setColor is called, the current surfaceData's pixel converter is
   called to convert the color's rgb to the surface's pixel format.

   So if your surface has the correct surface type, the conversion should
   be done automatically. See how it is done for OGL or D3DSurfaceData.
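   For reference, the premultiplication step itself is cheap. This is a
   minimal, self-contained sketch of what a pixel converter for a
   premultiplied ARGB surface effectively does; the class and method names
   are hypothetical, not Java2D API:

```java
public class PremulSketch {
    // Convert a non-premultiplied ARGB pixel to premultiplied ARGB.
    // Each color component is scaled by alpha/255, with rounding.
    static int premultiply(int argb) {
        int a = (argb >>> 24) & 0xFF;
        int r = (argb >>> 16) & 0xFF;
        int g = (argb >>>  8) & 0xFF;
        int b =  argb         & 0xFF;
        r = (r * a + 127) / 255;
        g = (g * a + 127) / 255;
        b = (b * a + 127) / 255;
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        // 50% opaque pure red: 0x80FF0000 -> 0x80800000
        System.out.printf("%08X%n", premultiply(0x80FF0000));
    }
}
```

   A real PixelConverter for a given SurfaceType would do this plus the
   packing into the surface's native pixel layout.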

> 6.) Is there a way to force a BufferedImage to be cached?
> For now texture paints only work when the image has been drawn already;
> otherwise it's not cached and my pipeline can't handle this case.

   Yes, take a look at D3D/OGLBufImgOps for example. There's a piece
   of code which forces the surface to be cached:

         if (!(srcData instanceof D3DSurfaceData)) {
             // REMIND: this hack tries to ensure that we have a cached texture
             srcData =
                 dstData.getSourceSurfaceData(img, sg.TRANSFORM_ISIDENT,
                                              CompositeType.SrcOver, null);
             if (!(srcData instanceof D3DSurfaceData)) {
                 return false;
             }
         }
> 7.) In X11PMBlitLoops/X11PMTransformedBlit/Transform I simply use the
> inverted transformation on the source. However, this means my drawing
> area runs from 0,0 to bounds.x/bounds.y.
> If I don't do it this way, e.g. the rotation point is moved.
> Is there a way to translate the inverted transform, so that I don't
> need to composite from 0,0?

   Can you elaborate on this one? I don't quite follow.
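   If the question is about shifting the inverse transform so that
   compositing can start at a destination offset instead of 0,0, the usual
   trick with java.awt.geom.AffineTransform is to fold a translation into
   the inverse. A small sketch (the particular transform values are made
   up for illustration):

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.NoninvertibleTransformException;

public class InverseTranslateSketch {
    public static void main(String[] args) throws NoninvertibleTransformException {
        // An arbitrary example transform: translate then rotate.
        AffineTransform tx = new AffineTransform();
        tx.translate(100, 50);
        tx.rotate(Math.PI / 4);

        // The inverse maps destination coordinates back to source coordinates.
        AffineTransform inv = tx.createInverse();

        // To composite starting at destination offset (dx, dy) instead of
        // (0, 0), concatenate that offset onto the inverse, so it is applied
        // first, in destination space:
        double dx = 20, dy = 30;
        AffineTransform shifted = new AffineTransform(inv);
        shifted.translate(dx, dy);

        // shifted(p) == inv(p + (dx, dy)); check at one point:
        double[] a = new double[2], b = new double[2];
        shifted.transform(new double[] {0, 0}, 0, a, 0, 1);
        inv.transform(new double[] {dx, dy}, 0, b, 0, 1);
        System.out.println(Math.abs(a[0] - b[0]) < 1e-9
                        && Math.abs(a[1] - b[1]) < 1e-9); // prints true
    }
}
```

   Whether this is what you're after depends on what exactly moves the
   rotation point in your pipeline, so please elaborate.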

> 8.) I am a bit puzzled by the problem described at:
> http://thread.gmane.org/gmane.comp.freedesktop.xorg/30450/focus=30476
> Is this a bug, or an unwelcome feature? Any ideas how I can work around it?
> I already clip to the shape outline, but due to rounding errors
> I either cut something away or get a thin black line.

   That looks like an xrender bug to me, unless you're using some
   mode which tells it to update pixels other than those mapped
   from the source image to the destination.

> 9.) When scaling images, e.g. only on the x-axis, the result is also
> blurred and smeared over the y-axis.
> http://article.gmane.org/gmane.comp.freedesktop.xorg/30507
> (One of the screenshots is the wrong one; simply have a look at the
> other images in the album of the right image.)
> Any ideas where this could come from, or how I could avoid it?
> Setting RepeatPad helps a bit (at least Nimbus looks right), but seems
> to fall back to software.

   I think it could be caused by an incorrect pixel-to-texel mapping
   (or the equivalent in xrender). Basically, your pixels don't
   fall where they should, so filtering is applied.
   See my comments below for 10.

   I don't know how similar xrender is to d3d in this regard,
   but it's worth a look.
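   To make the pixel-to-texel issue concrete: in Direct3D 9, texels are
   sampled at their centers, so if a naive mapping puts every sample on a
   texel corner, bilinear filtering averages neighboring texels and blurs
   even a 1:1 copy; a half-pixel correction fixes it. A small arithmetic
   sketch of that idea (not d3d or xrender API, just the mapping):

```java
public class TexelMappingSketch {
    // Distance from a sample position to the nearest texel center, where
    // texel centers sit at integer + 0.5 (the Direct3D 9 convention).
    // 0.0 means the sample hits a center exactly (no filtering blur);
    // 0.5 is the worst case, where two texels get averaged.
    static double distToTexelCenter(double samplePos) {
        double f = (samplePos - 0.5) - Math.floor(samplePos - 0.5);
        return Math.min(f, 1.0 - f);
    }

    public static void main(String[] args) {
        // Naive 1:1 mapping: pixel i samples at coordinate i (a texel corner).
        System.out.println(distToTexelCenter(3.0));       // 0.5 -> maximal blur
        // With the half-pixel correction applied:
        System.out.println(distToTexelCenter(3.0 + 0.5)); // 0.0 -> exact copy
    }
}
```

   If xrender's filter sampling works similarly, an analogous offset in the
   picture transform may be what's missing.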

> 10.) Just from a quick look at
> http://linuxhippy.blogspot.com/2008/07/nimbus_20.html, any ideas what
> could cause this artifact? It has puzzled me for quite some time ;)

   Very similar artifacts were addressed by fixes in the d3d pipeline,
   and were caused by an incorrect pixel-to-texel mapping.

   For the fix, see the SetTransform() method in D3DContext.cpp.

   You can use this rather exhaustive regression test (from the
   2d repo, not yet in the jdk7 master):
