felix.bembrick at gmail.com
Sat Oct 31 00:40:05 UTC 2015
I have experimented with all those options with the following results:
1. Turning on verbose proves hardware acceleration is being used.
2. Increasing the texture size and fiddling with the amount of VRAM have no
effect on performance.
3. Turning off Hi DPI changes the appearance of the app (i.e. controls
are too small etc.) but has no effect on performance.
4. Disabling hardware acceleration makes it another order of magnitude
slower than before.
So none of the options improved performance at all. All we know for sure
is that it's using D3D and that it is running so much slower than I
expected that it is unusable.
Here's some of the initial output:

Prism pipeline init order: d3d
Using native-based Pisces rasterizer
Using dirty region optimizations
Not using texture mask for primitives
Not forcing power of 2 sizes for textures
Using hardware CLAMP_TO_ZERO mode
Opting in for HiDPI pixel scaling
Prism pipeline name = com.sun.prism.d3d.D3DPipeline
Loading D3D native library ... succeeded.
D3DPipelineManager: Created D3D9Ex device
Direct3D initialization succeeded
(X) Got class = class com.sun.prism.d3d.D3DPipeline
Initialized prism pipeline: com.sun.prism.d3d.D3DPipeline
Maximum supported texture size: 16384
Maximum texture size clamped to 8192
OS Information:
    Windows version 10.0 build 10240
D3D Driver Information:
    NVIDIA GeForce GTX TITAN X
    \\.\DISPLAY1
    Driver nvd3dum.dll, version 10.18.13.5850
    Pixel Shader version 3.0
    Device : ven_10DE, dev_17C2, subsys_113210DE
    Max Multisamples supported: 4
vsync: true vpipe: true
Loading Prism common native library ... succeeded.
PPSRenderer: scenario.effect - createShader: LinearConvolveShadow_28
PPSRenderer: scenario.effect - createShader: LinearConvolve_8
PPSRenderer: scenario.effect - createShader: LinearConvolve_64
PPSRenderer: scenario.effect - createShader: Blend_ADD
PPSRenderer: scenario.effect - createShader: LinearConvolveShadow_16
PPSRenderer: scenario.effect - createShader:
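For anyone trying to reproduce this, output like the above is what Prism's verbose logging prints at startup; it is enabled on the command line (the jar name here is just a placeholder):

```shell
# Enable Prism's verbose pipeline logging at launch.
# MyFxApp.jar is a placeholder for the actual application jar.
java -Dprism.verbose=true -jar MyFxApp.jar
```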
On 31 October 2015 at 07:21, Jim Graham <james.graham at oracle.com> wrote:
> Other things to try:
> -Dprism.verbose=true (output should show the following options
> are working)
> -Dglass.win.uiScale=1.0 (disables HiDPI)
> -Dprism.order=sw (disables HW acceleration)
> -Dprism.maxTextureSize=8192 (mentioned before - increases max texture
> size)
> -Dprism.maxvram=2G (increases maximum texture pool to 2GB)
> -Dprism.targetvram=2G (combined with maxvram, increases initial
> pool to 2GB)
> On 10/30/15 12:59 PM, Felix Bembrick wrote:
>> Hi Jim,
>> I had Windows 10 on my previous machine and my wife's low-end PC is also
>> running Win10 and the same version of Java.
>> But I have what is supposed to be the fastest graphics card of all
>> (GeForce GTX Titan X) and she has a very basic card.
>> The only real difference is that she has a 22" monitor with a resolution
>> of 1920 X 1024 (?) and I have 2 4K monitors.
>> Hi-DPI is supported in the sense that everything renders at the correct
>> size etc (unlike Swing) but it performs so slowly that there must be
>> something fundamentally wrong, especially since JavaFX seems to be the only
>> technology that's affected.
>> On 31 Oct 2015, at 06:49, Jim Graham <james.graham at oracle.com> wrote:
>>> It should be supported. Which version of Windows were you using
>>> before? We've supported HiDPI on Windows since JDK8u60 on all supported
>>> versions of Windows...
>>> On 10/27/15 11:24 PM, Felix Bembrick wrote:
>>>> I just installed JavaFX on my new Windows 10 machine, which is extremely
>>>> powerful but has two 4K monitors, and while everything looks great and the
>>>> right "size", the performance is very sluggish to say the least.
>>>> Is this because Hi-DPI is not yet supported in JavaFX on Windows?
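As an aside for anyone experimenting with the flags Jim lists above: they are normally passed as -D options on the java command line, but they can also be set programmatically, as long as that happens before the JavaFX toolkit (and thus Prism) initializes. A minimal sketch; the Application.launch call and MyApp class are hypothetical placeholders:

```java
// Sketch: setting the Prism tuning flags discussed in this thread from code.
// These properties are only read once, when the JavaFX toolkit starts, so
// they must be set at the very top of main(), before Application.launch().
public class PrismTuning {
    public static void main(String[] args) {
        System.setProperty("prism.verbose", "true");        // log pipeline init
        System.setProperty("prism.maxTextureSize", "8192"); // raise max texture size
        System.setProperty("prism.maxvram", "2G");          // texture pool cap
        System.setProperty("prism.targetvram", "2G");       // initial pool size
        // System.setProperty("prism.order", "sw");         // force software pipeline
        // System.setProperty("glass.win.uiScale", "1.0");  // disable HiDPI scaling

        System.out.println("prism.maxvram = " + System.getProperty("prism.maxvram"));
        // Application.launch(MyApp.class, args); // MyApp is a hypothetical JavaFX Application
    }
}
```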