Re: [eigen] Problem inverting a Matrix4f with Eigen 2.0.0


On Sat, 17 Jul 2010, Benoit Jacob wrote:

If (lhs * solution - rhs) is small compared to rhs, then what more
could you ask for?

It might be that lhs is ill-conditioned but (perhaps just by chance)
your solution is still good.

Exactly. And in that case you want to know that the lhs is ill-conditioned, because it means that rounding and other errors in the construction of the lhs will have a big effect on the solution.

Perhaps a better reason is what Helmut said. One measure for the accuracy of the solution x_computed, as computed by Eigen, is the distance to the true solution, x_true. The relative error in the solution is bounded in terms of the residual by

 | x_computed - x_true | / | x_true | <= \kappa(A) | r | / | b |

where r = A x_computed - b is the residual and \kappa(A) is the condition number, | A | * | A^{-1} |.
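The bound above can be checked numerically on a small system. The following is an illustrative sketch in pure Python (stdlib only, not Eigen); the matrix, right-hand side, and "computed" solution are made-up numbers chosen so the true solution is known, and the infinity norm is used throughout.

```python
# Check the bound  |x_computed - x_true| / |x_true| <= kappa(A) |r| / |b|
# on a hand-made 2x2 system. All numbers are invented for illustration.

A = [[1.0, 1.0],
     [1.0, 1.0001]]          # a mildly ill-conditioned matrix
b = [2.0, 2.0001]            # chosen so the true solution is x = (1, 1)
x_true = [1.0, 1.0]
x_hat = [1.01, 0.99]         # a slightly perturbed "computed" solution

def inf_norm_vec(v):
    return max(abs(c) for c in v)

def inf_norm_mat(M):
    # induced infinity norm: max absolute row sum
    return max(sum(abs(c) for c in row) for row in M)

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# residual r = A x_computed - b
Ax = matvec(A, x_hat)
r = [Ax[i] - b[i] for i in range(2)]

# kappa(A) = |A| * |A^{-1}|, with the 2x2 inverse written out by hand
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]
kappa = inf_norm_mat(A) * inf_norm_mat(A_inv)

rel_error = inf_norm_vec([x_hat[i] - x_true[i] for i in range(2)]) / inf_norm_vec(x_true)
bound = kappa * inf_norm_vec(r) / inf_norm_vec(b)

print("kappa(A) = %.0f, relative error = %.3g, bound = %.3g" % (kappa, rel_error, bound))
```

Here the residual alone looks tiny (about 1e-6 relative to b), but kappa(A) is around 4e4, so the bound on the relative error of the solution is around 2e-2, and indeed the actual error is 1e-2.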

ok, honestly I think I understand what you mean, so here is my more
serious answer. I understand you mean that it is still useful to
compute condition numbers and not just residuals, because in some
situations you know that your condition number should never be higher
than a certain value, so you want to check that to detect potential
problems that could have happened earlier (leading to the formation of
this ill-conditioned matrix).

I hadn't actually thought of this. I'm not sure that such situations arise often in practice.

I am tempted to leave that for the user to do, because this is a
really advanced use case and even in the context of a given
decomposition, there are different inequivalent notions of condition
numbers. For example in the case of LU, in this discussion we've
already seen 3 inequivalent notions of condition number:
- max/min absolute value of diagonal coeff of U
- matrix.L1_norm() / inverse.L1_norm()   (as proposed by LAPACK)
- matrix.Linf_norm() / inverse.Linf_norm()   (as proposed by LAPACK too)
and this is only considering the condition numbers that can be
computed directly from LU! Not even mentioning the "good" condition
number max/min singval...
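To make the inequivalence concrete, here is a small pure-Python illustration (not Eigen code) of the three notions listed above. The example matrix is invented and already upper triangular, so the U factor of an unpivoted LU decomposition is the matrix itself, and its exact inverse can be written out by hand.

```python
# Three inequivalent "condition numbers" for the same matrix.
# The matrix is upper triangular, so U = A in an unpivoted LU decomposition.

A = [[1.0, 1000.0],
     [0.0,    1.0]]
A_inv = [[1.0, -1000.0],     # exact inverse of this particular A
         [0.0,     1.0]]

# 1) max/min absolute value of the diagonal coefficients of U (here U = A)
diags = [abs(A[i][i]) for i in range(2)]
cond_diag = max(diags) / min(diags)

def l1_norm(M):    # induced 1-norm: max absolute column sum
    return max(sum(abs(row[j]) for row in M) for j in range(len(M)))

def linf_norm(M):  # induced infinity norm: max absolute row sum
    return max(sum(abs(c) for c in row) for row in M)

# 2) and 3) the norm-based condition numbers
cond_l1 = l1_norm(A) * l1_norm(A_inv)
cond_linf = linf_norm(A) * linf_norm(A_inv)

print(cond_diag, cond_l1, cond_linf)
```

For this matrix the diagonal-based notion reports a perfectly conditioned matrix (ratio 1), while the norm-based notions report a condition number around 1e6 — six orders of magnitude apart.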

For the second and third one, how is the user supposed to compute these? LAPACK does not actually construct the inverse, because that costs O(n^3) even if you already have the LU decomposition. Instead, LAPACK does something smarter: for not much more than the cost of computing the residual, it produces an *estimate* of the condition number; the book of Golub and Van Loan describes the algorithm. It is this estimate that I'd like to see in Eigen; it's a nontrivial algorithm with (I believe) many use cases. I agree that a one-line function which just computes A.someNorm() * A.inverse().someNorm() is not very useful.
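The idea behind that estimator (Hager's method, which is roughly what LAPACK's xLACON does) can be sketched as follows. This is a simplified pure-Python illustration, not LAPACK's actual code: it estimates |A^{-1}|_1 from a few solves with A and A^T, never forming the inverse; a plain Gaussian-elimination solve stands in for reusing a stored LU factorization, and the example matrix is invented.

```python
# Sketch of a Hager-style 1-norm condition estimator: estimate |A^{-1}|_1
# using only solves with A and A^T. The estimate is a lower bound on the
# true norm, and is often exact or nearly so.

def solve(M, rhs):
    """Solve M x = rhs by Gaussian elimination with partial pivoting."""
    n = len(M)
    a = [row[:] + [r] for row, r in zip(M, rhs)]   # augmented working copy
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]
            for j in range(k, n + 1):
                a[i][j] -= f * a[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (a[i][n] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
    return x

def transpose(M):
    return [list(col) for col in zip(*M)]

def one_norm(M):
    return max(sum(abs(row[j]) for row in M) for j in range(len(M)))

def est_inv_one_norm(A, max_iter=5):
    """Hager-style lower-bound estimate of |A^{-1}|_1."""
    n = len(A)
    At = transpose(A)
    x = [1.0 / n] * n
    est = 0.0
    for _ in range(max_iter):
        y = solve(A, x)                          # y = A^{-1} x
        est = sum(abs(c) for c in y)             # |y|_1 is the current estimate
        xi = [1.0 if c >= 0.0 else -1.0 for c in y]
        z = solve(At, xi)                        # z = A^{-T} sign(y)
        j = max(range(n), key=lambda i: abs(z[i]))
        if abs(z[j]) <= sum(z[i] * x[i] for i in range(n)):
            break                                # no direction improves the estimate
        x = [0.0] * n
        x[j] = 1.0                               # restart from the unit vector e_j
    return est

A = [[2.0, 0.0],
     [0.0, 1.0 / 1024.0]]                        # invented example matrix
kappa_est = one_norm(A) * est_inv_one_norm(A)
exact_inv = [[0.5, 0.0], [0.0, 1024.0]]          # exact inverse of this A
kappa_exact = one_norm(A) * one_norm(exact_inv)
print(kappa_est, kappa_exact)
```

For this example the estimator finds the exact 1-norm condition number (2048) in two solves; in general it yields a lower bound that is rarely far off, at a cost comparable to a couple of triangular solves when the LU factorization is already available.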

Unfortunately, the different notions of condition numbers can give
really different orders of magnitude, at least for large matrices.

Yes, there are several things that can go wrong. One is that the norms are inequivalent, as you point out: it's tempting to conclude that the error in the solution is small in the 2-norm when you actually computed the condition number in the 1-norm. The second is that the estimate for the condition number may not be accurate. The third is that the bound for the error in the solution may not be tight.

But this is how scientific computing usually works. You cannot prove that your computation is correct, but you collect different pieces of data which give you more and more confidence in the reliability of your computation. One such piece is the residual, another one is the (estimate for the) condition number. If they are low, it suggests you may be fine. If they are high, it serves as a warning light.

Of course, this whole discussion is academic until somebody actually implements the routine for estimating the condition number.

