Re: [eigen] precision



2009/5/8 Michael Meyer <mjhmeyer@xxxxxxxxx>:
> Hello,
> This is my first post to the list.
> I am very impressed with the design of this library and could not believe the benchmarks when I saw them.
> So the following comments are intended to be constructive:
> I think the precision (MathFunctions.h) is generally set too low
> (I am on version 2.01, haven't switched to 2.1 yet).
> Of course it depends on the context what a reasonable value is.

In fact, your post is very timely as we've been working on similar
issues ... today.

A very, very long time ago, in the times of Eigen 0.9, I decided to
introduce global precision levels (precision<T>()) and the all-purpose
fuzzy comparisons built on them: ei_isApprox, ei_isMuchSmallerThan,
etc., used with their default precision parameter. These are what you
spotted.

For a very long time, these were used absolutely everywhere in Eigen,
regardless of context. Including in decomposition algorithms.

Then a few months ago, Keir coded the LDLt decomposition based on a
different fuzzy comparison paradigm: he found a research paper on LDLt
explaining the optimal fuzzy comparison to use internally in the
computation of the LDLt, as a function of the machine epsilon and of
the matrix size.

Then today, Hauke and I investigated the best fuzzy comparison to use
internally in the LU decomposition; see the other thread about LU. Two
conclusions emerged:
1) When we know enough _context_, for example when we're in the middle
of a big decomposition algorithm, we can and should use specially
tailored fuzzy comparisons. Only experiment can tell which fuzzy
comparison is best in a particular situation. You mention Hessenberg
decomposition, well this decomposition needs to be reworked in the
first place and once this is done we need to experiment with various
precision levels as in the other thread about LU.
2) However there will always be a need for the general "context-blind"
fuzzy compare. Here, what's open for discussion is the particular
value of precision<T>() to use. I was just thinking that the value for
double, 1e-11, is a bit too coarse, and I was about to propose replacing
it with 1e-13. But it certainly doesn't make sense to replace it with
1e-20, as that would make ei_isApprox the same as operator== !

> I played around with the Hessenberg tridiagonalization applied to the Hilbert matrix H in dimension 400
> (scalar type double, H=QTQ', T tridiagonal, Q orthogonal) and checked how close H-QTQ' is to the zero matrix
> using the L1-norm.
> The norm of the Hilbert matrix is || H || = 550
> With the value of precision<double> as in the distribution I got
> || H - QTQ' || = 4.4e-4
> I then set precision<double> to 1e-20 and got
> || H - QTQ' || = 2.8e-8.
> Setting the precision even higher did not improve the result.
> The code is attached; sorry, no includes, since I renamed some of the files:

What this rather suggests, as discussed in 1) above, is that this
decomposition algorithm should use a custom fuzzy comparison.

