Re: [eigen] precision

Here are a couple more thoughts on the subject of "using context in
fuzzy compares". It's mostly for reference.

Suppose you want to test whether x is approximately zero.

The most stupid fuzzy compare is:

if( abs(x) < epsilon )

however, there's a big problem here: there is no universal definition
of "small" for floating-point numbers, since e.g. doubles have about
15 digits of precision but exponents ranging from -300 to +300. So if,
for double, you set epsilon = 1e-15 here, then when you apply this in
a context where your numbers are very small, like 1e-50, this test
considers all your numbers to be zero! So that first version is
completely bogus with respect to floating-point arithmetic.
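
To make this concrete, here is a minimal sketch (hypothetical code,
not from Eigen) showing the failure mode:

  #include <cmath>
  #include <cstdio>

  // Naive absolute-threshold test: epsilon is an additive cutoff.
  bool isZeroNaive(double x, double epsilon = 1e-15)
  {
    return std::abs(x) < epsilon;
  }

  int main()
  {
    // With data living around 1e-50, nonzero values pass the test
    // just as well as an actual zero does:
    std::printf("%d\n", isZeroNaive(3e-50)); // prints 1: wrongly "zero"
    std::printf("%d\n", isZeroNaive(0.0));   // prints 1
  }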

A better approach consists in saying that "being close to zero"
doesn't have an absolute meaning, only a relative one. In other words,
all one can implement is "x is much smaller than y" with respect to a
certain precision, that is, to a certain number of significant digits.
That's exactly Eigen's ei_isMuchSmallerThan. It works like this:

if( abs(x) < abs(y) * precision )

Here, the precision level is multiplicative, while in the first
solution it was additive. As a result, we now support the full range
of floating-point exponents: this solution really makes sense from the
point of view of floating-point precision. When you want to test
whether  "x is zero" in this approach, you need to choose a "reference
value" y to compare it with. So this is a first level of "context
awareness".

However, with this approach, while the reference value y depends on
context, we still haven't said anything about the choice of
'precision'. Clearly it should be "a little coarser than machine
epsilon" to take into account some accumulated imprecision, but how
much coarser exactly?

So far in Eigen, we had standard values for each scalar type and
didn't think about it further. What happened recently in LDLt and
yesterday in LU is that we made context-aware formulas for that choice
of "precision" -- a second level of context-awareness, removing the
last arbitrary choice.
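
For illustration, a context-aware threshold in that spirit might scale
the machine epsilon by the problem size, so that bigger computations
get a coarser cutoff. The linear-in-size formula below is only an
assumed example, not Eigen's exact choice:

  #include <limits>

  // Hypothetical: coarsen machine epsilon with the matrix size to
  // absorb rounding error accumulated over the computation.
  double decompositionPrecision(int matrixSize)
  {
    // about 2.2e-16 for double
    const double eps = std::numeric_limits<double>::epsilon();
    return eps * matrixSize;
  }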

Cheers,
Benoit

2009/5/8, Benoit Jacob <jacob.benoit.1@xxxxxxxxx>:
> Hi,
>
> 2009/5/8 Michael Meyer <mjhmeyer@xxxxxxxxx>:
>>
>> Hello,
>>
>> This is my first post to the list.
>> I am very impressed with the design of this library and could not believe
>> the benchmarks when I saw them.
>> So the following comments are intended to be constructive:
>>
>> I think the precision (MathFunctions.h) is generally set too low
>> (I am on version 2.01, haven't switched to 2.1 yet).
>> Of course it depends on the context what a reasonable value is.
>
> In fact, your post is very timely as we've been working on similar
> issues ... today.
>
> A very, very long time ago, in the times of Eigen 0.9, I decided to
> introduce global precision levels (precision<T>()) and all-purpose
> fuzzy comparisons: that is, ei_isMuchSmallerThan (when used with the
> default precision parameter), etc., which you spotted.
>
> For a very long time, these were used absolutely everywhere in Eigen,
> regardless of context. Including in decomposition algorithms.
>
> Then a few months ago, Keir coded the LDLt decomposition based on a
> different fuzzy comparison paradigm: he found a research paper on LDLt
> explaining the optimal fuzzy comparison to use internally in the
> computation of the LDLt, as a function of the machine epsilon and of
> the matrix size.
>
> Then today, Hauke and I investigated the best fuzzy comparison to
> use internally in the LU decomposition; see the other thread.
>
> Conclusions:
> 1) When we know enough _context_, for example when we're in the middle
> of a big decomposition algorithm, we can and should use specially
> tailored fuzzy comparisons. Only experiment can tell which fuzzy
> comparison is best in a particular situation. You mention the
> Hessenberg decomposition; well, this decomposition needs to be
> reworked in the first place, and once that is done we need to
> experiment with various precision levels, as in the other thread
> about LU.
> 2) However there will always be a need for the general "context-blind"
> fuzzy compare. Here, what's open for discussion is the particular
> value of precision<T>() to use. I was just thinking that the value for
> double, 1e-11, was a bit too coarse. I was about to propose replacing
> it by 1e-13. But it certainly doesn't make sense to replace it by
> 1e-20 as that would mean that ei_isApprox is just the same as
> operator== !
>
>>
>> I played around with the Hessenberg tridiagonalization applied to
>> the Hilbert matrix H in dimension 400 (scalar type double, H = QTQ',
>> T tridiagonal, Q orthogonal) and checked how close H - QTQ' is to
>> the zero matrix using the L1-norm.
>>
>> The norm of the Hilbert matrix is || H || = 550
>>
>> With the value of precision<double> as in the distribution I got
>>
>> || H - QTQ' || = 4.4e-4
>>
>> I then set precision<double> to 1e-20 and got
>>
>> || H - QTQ' || = 2.8e-8.
>>
>> Setting the precision even higher did not improve the result.
>> The code is attached; sorry, no includes since I renamed some of the files:
>
> What this says is rather, as I discuss above in 1), that this
> decomposition algorithm should use a custom fuzzy compare.
>
> Cheers,
> Benoit
>


