Affine transforms - matrix algebra: equals
james.graham at oracle.com
Thu Aug 23 15:02:30 PDT 2012
This is an interesting caveat. I don't think it warrants the conclusion
you draw, but it does complicate matters.
For the question "does this pair of transforms produce the same visible
results as each other?", the answer is governed by "how big a difference
in the *output* is visible?", not by the mantissa qualities of the
numbers used to represent the answers.
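For concreteness, here is a minimal sketch of that kind of output-space comparison, using java.awt.geom.AffineTransform as a stand-in for an FX Transform. The class name TransformCompare and the helper visiblySimilar are hypothetical, not proposed API; the point is that the tolerance is an absolute distance in output units (pixels), not a mantissa-bit count:

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public final class TransformCompare {
    private TransformCompare() {}

    // Hypothetical helper (not an FX API): do t1 and t2 map every corner
    // of the region to points less than 'pixelTolerance' apart in output
    // (device) space?
    public static boolean visiblySimilar(AffineTransform t1, AffineTransform t2,
                                         double minX, double minY,
                                         double maxX, double maxY,
                                         double pixelTolerance) {
        double[] corners = {minX, minY, maxX, minY, minX, maxY, maxX, maxY};
        for (int i = 0; i < corners.length; i += 2) {
            Point2D src = new Point2D.Double(corners[i], corners[i + 1]);
            Point2D a = t1.transform(src, null);
            Point2D b = t2.transform(src, null);
            if (a.distance(b) >= pixelTolerance) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        AffineTransform t1 = AffineTransform.getRotateInstance(Math.toRadians(30));
        AffineTransform t2 = AffineTransform.getRotateInstance(Math.toRadians(30) + 1e-9);
        // A 1e-9 radian rotation moves points on a 2K screen by far less
        // than a pixel, so the transforms are visibly equivalent there.
        System.out.println(visiblySimilar(t1, t2, 0, 0, 2048, 2048, 1.0 / 256));
    }
}
```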
The problem is, the developer may not be considering the entire
operation end to end. This type of query is absolutely compatible with
a full 4x4 3D final rendering transform and a 3D box region as the
input. It's just that we don't (yet) provide sophisticated enough
transforms to specify the full 4x4 matrix to ask the question on. When
we get there, though, the nature of the query will still apply and it
will still want an absolute error estimation.
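As a sketch of why the query generalizes, assuming a row-major 4x4 matrix with a perspective divide (the names PerspectiveCompare and similarOverBox are illustrative only, and sampling just the box corners is merely an approximation under perspective, not a rigorous bound): the comparison is still an absolute tolerance on post-projection output coordinates.

```java
public final class PerspectiveCompare {
    private PerspectiveCompare() {}

    // Apply a 4x4 row-major matrix to (x, y, z) with perspective divide,
    // returning the projected (x', y') in output (screen) space.
    static double[] project(double[] m, double x, double y, double z) {
        double w = m[12] * x + m[13] * y + m[14] * z + m[15];
        return new double[] {
            (m[0] * x + m[1] * y + m[2] * z + m[3]) / w,
            (m[4] * x + m[5] * y + m[6] * z + m[7]) / w
        };
    }

    // Hypothetical query: do the two matrices project every corner of a
    // 3D box (lo..hi) to within an absolute tolerance in output units?
    static boolean similarOverBox(double[] m1, double[] m2,
                                  double[] lo, double[] hi, double tol) {
        for (int i = 0; i < 8; i++) {
            double x = ((i & 1) == 0) ? lo[0] : hi[0];
            double y = ((i & 2) == 0) ? lo[1] : hi[1];
            double z = ((i & 4) == 0) ? lo[2] : hi[2];
            double[] a = project(m1, x, y, z);
            double[] b = project(m2, x, y, z);
            if (Math.hypot(a[0] - b[0], a[1] - b[1]) >= tol) {
                return false;
            }
        }
        return true;
    }
}
```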
Another option is to offer the same method with a camera as an argument,
but I don't think that works because our current cameras compute a
matrix that depends on the bounds of a Scene/Stage and so they cannot
provide a proper matrix in isolation.
But, until then, the answers will only be fully accurate in a
non-perspective world. (The transforms are 3x3, so they will work for
non-perspective 3D scenes, though I don't think those are very
interesting, and they will work just fine for ordinary 2D-only apps.)
So, are we offering this API prematurely? Or are we simply offering one
that is useful for a subset of FX apps, with the potential to work for
all apps once we flesh out our Transform subclasses?
If we used a ULP/mantissa-based error in this API, we would simply make
it hard for developers to use, both now and in the future when our
Transform objects become sophisticated enough to perform these tests
properly for perspective 3D scenes. It is simply the wrong model for
how the caller wants to frame their error estimates for this type of
comparison. A developer could, with some work, come up with a ULP
value for such a method, but the number of mantissa bits representing a
visible error on one side of the region of interest would differ from
the number of bits on the other side. So which do they choose: the bit
count for the side with the liberal mantissas, or the side with the
conservative mantissas? In other words, do they accept an answer that
conservatively reports visible differences when there are none, or one
that sometimes liberally reports nothing to see when the difference
might be obvious?
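A small illustration of that asymmetry, using plain Math.ulp (nothing FX-specific): the absolute size of one ulp grows with the magnitude of the coordinate, so a fixed ulp count cannot express one uniform visible tolerance across a region.

```java
public final class UlpDemo {
    public static void main(String[] args) {
        // One ulp is an absolute error whose size depends on the magnitude
        // of the value, so "within N ulps" means a different number of
        // pixels at different places in the region of interest.
        System.out.println(Math.ulp(1.0f));     // 2^-23: ~8.4 million ulps per pixel near x = 1
        System.out.println(Math.ulp(2048.0f));  // 2^-12: only 4096 ulps per pixel near x = 2048
        // The same 1-pixel tolerance therefore spans 2048x more ulps on
        // one side of a 0..2048 region than on the other.
        System.out.println(Math.ulp(2048.0f) / Math.ulp(1.0f));  // 2048.0
    }
}
```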
ULP/mantissa error bounds are definitely useful for API testing, but
when you are worried only about visible differences, they are the
wrong model.
On 8/23/2012 2:15 PM, Kirill.Prazdnikov wrote:
> On 8/24/2012 12:52 AM, Jim Graham wrote:
>> In other words, you'd specify "within 1/Nth of a pixel", not "within N
>> bits of mantissa" and if you are measuring over the dimensions of a
>> typical screen (0-2K for example) then "1/Nth of a pixel" is around 12
>> bits of mantissa for N=1 (or a ~(2^12)/N multiple of ulp) for a "float",
>> and around 40 bits of mantissa for N=1 for a "double". This form of
>> the test would probably do better with an absolute error measurement.
> You are thinking as if everything is screen space 2D. But when we see a
> space ship flying around then the math above is different ... ( + camera
> transformation errors )
> ULP-precision is the absolute measurement of precision. Despite of
More information about the openjfx-dev mailing list