Re: [eigen] Machine precision<> too coarse


2009/2/3 Keir Mierle <mierle@xxxxxxxxx>:
> Is there a reason that 1e-11 is used for double precision instead of the
> more conventional 2.2e-16? And same for float precision; why 1e-5 instead of
> 1.2e-7 for floats?

Let me ask the question the converse way: is there a reason why the
"epsilon" 2.2e-16 should play any specific role here?

Matrix operations involve many, many floating-point operations. The
imprecision accumulates, and at the end of a matrix operation one
cannot hope to be within the "epsilon" of the correct result.

Here's an example. Invert a random 1000x1000 matrix of doubles, with
entries picked uniformly in the interval [-1,1]. Check your computation:

double error = (matrix * matrix.inverse()
                - MatrixXd::Identity(1000, 1000)).norm();

If you used full-pivoting LU, you'll get an error on the order of 1e-13.
If you used partial-pivoting LU, you'll get an error on the order of 1e-11.

Now, Eigen's internal choice of 1e-11 isn't meant for that kind of use
case; it's meant rather for smaller internal operations, like the ones
the full-pivoting LU performs in order to determine the rank. There the
imprecision doesn't grow as large, but it is still much bigger than the
epsilon.

In Eigen's unit tests, a far coarser precision is used to sanity-check
the results of large end-user operations. Our unit tests generally
don't aim to test precision, except for a few (we should have more)
which compare our precision against other libraries such as GSL, when
those are found. (It's really hard to say in the abstract, "this
computation must have this precision".)
