Re: [eigen] Problem inverting a Matrix4f with Eigen 2.0.0



2010/7/17 Jitse Niesen <jitse@xxxxxxxxxxxxxxxxx>:
> On Sat, 17 Jul 2010, Benoit Jacob wrote:
>
>> Another reason why we decided not to expose condition numbers in Eigen
>> is that for the main purpose they're used for, namely checking if a
>> result is reliable, there is a better approach which is: check the
>> result itself. For example, if you want to check how accurate your
>> matrix inverse is, just compute matrix*inverse and see how close it is
>> to the identity matrix. Nothing beats that! When it comes to more
>> general solving with potentially non full rank matrices, this is even
>> better, because the condition number of the lhs matrix alone doesn't
>> tell all you need to know (it also depends on your particular rhs), so
>> the approach we're recommending in Eigen, to compute lhs*solution and
>> compare with rhs, is the only way to know for sure how good your
>> solution is.
>
> I'm not sure I agree. The residual (lhs * solution - rhs) is certainly
> useful, but so is the condition number (norm of A multiplied by norm of
> A^{-1}, where I deliberately don't specify the norm used). If the residual
> is zero but the condition number is huge then you probably should not trust
> the solution,

If (lhs * solution - rhs) is small compared to rhs, then what more
could you ask for?

It might be that lhs is ill-conditioned but (perhaps just by chance)
your solution is still good.
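To make the residual check concrete, here is a sketch in Python/NumPy (illustration only; in Eigen you would write the analogous expressions with something like `(matrix * solution - rhs).norm()`):

```python
import numpy as np

# A small example system, for illustration only.
A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
b = np.array([1.0, 3.0])

x = np.linalg.solve(A, b)

# Residual check: how well does the computed solution satisfy A x = b,
# relative to the size of b?
relative_residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)

# Same idea for an inverse: how close is A * A^{-1} to the identity?
A_inv = np.linalg.inv(A)
inverse_error = np.linalg.norm(A @ A_inv - np.eye(2))

print(relative_residual)  # tiny: the solution is good for this lhs/rhs
print(inverse_error)
```

If the relative residual is small compared to 1, the computed solution solves the system you actually handed to the library, regardless of what any condition number says.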

> because even though Eigen may have found the exact solution to
> the problem you gave it, it is likely that the lhs and rhs that you gave to
> Eigen are not quite exact,

In the sense of being not quite what the user had in mind? We can't
catch PEBKAC mistakes :-)

OK, honestly, I think I understand what you mean, so here is my more
serious answer. I understand you mean that it is still useful to
compute condition numbers and not just residuals, because in some
situations you know that your condition number should never be higher
than a certain value, so you want to check that to detect potential
problems that could have happened earlier (leading to the formation of
this ill-conditioned matrix).

I just want to make it clear that trying to check this is a much more
advanced use case than just trying to check whether the solution is
correct. The residual check does guarantee that the solution is
correct for the given lhs and rhs. You're trying to do something more
involved than that: when the user knows that a certain condition
number shouldn't exceed a certain value, you want to enable checking
that.

I am tempted to leave that for the user to do, because this is a
really advanced use case and even in the context of a given
decomposition, there are different inequivalent notions of condition
numbers. For example in the case of LU, in this discussion we've
already seen 3 inequivalent notions of condition number:
 - max/min absolute value of diagonal coeff of U
 - matrix.L1_norm() * inverse.L1_norm()   (as proposed by LAPACK)
 - matrix.Linf_norm() * inverse.Linf_norm()   (as proposed by LAPACK too)
and this is only considering the condition numbers that can be
computed directly from LU! Not even mentioning the "good" condition
number, the ratio of the largest to smallest singular value...
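As a sketch of how distinct these notions are even on a tiny matrix, here is an illustration in NumPy rather than Eigen's API, using an unpivoted LU purely for brevity (Eigen's LU decompositions pivot, so its U diagonal would differ):

```python
import numpy as np

# Illustration only: three inequivalent "condition numbers" of one matrix.
A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

# Notion 1: max/min absolute value of the diagonal of U from an LU
# factorization (here: Doolittle LU without pivoting, for brevity).
def lu_nopivot(M):
    U = M.astype(float).copy()
    n = U.shape[0]
    for k in range(n - 1):
        U[k+1:, k] /= U[k, k]                        # multipliers (L part)
        U[k+1:, k+1:] -= np.outer(U[k+1:, k], U[k, k+1:])
    return np.triu(U)                                # keep only U

d = np.abs(np.diag(lu_nopivot(A)))
cond_diag = d.max() / d.min()

# Notions 2 and 3: ||A|| * ||A^{-1}|| in the induced 1-norm and
# infinity-norm (the LAPACK-style choices mentioned above).
cond_1   = np.linalg.cond(A, 1)
cond_inf = np.linalg.cond(A, np.inf)

# The "good" condition number: max/min singular value.
cond_2 = np.linalg.cond(A, 2)

print(cond_diag, cond_1, cond_inf, cond_2)  # already disagree for a 2x2
```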

Since the use case for that is that the user knows that a *certain*
condition number shouldn't exceed a certain threshold, and since a
mismatched notion of condition number would amount to changing that
threshold, I think it's best to let the user do that himself.

> This implies (if correct) that it is useful to have Eigen compute estimates
> for the condition number. The operative word here is 'estimate': I think the
> order of magnitude is what is usually important, not the exact value.

Unfortunately, the different notions of condition numbers can give
really different orders of magnitude, at least for large matrices.

Let's check L1 and Linf norms since that's what LAPACK is doing. I
don't know if they mean entrywise or induced norms so let's give
examples in both cases.

First let's consider entrywise norms:

For example, consider a NxN matrix all of whose coefficients are equal to 1.

Its Linf entrywise norm is 1.
Its L1 entrywise norm is N^2.

So if you have N=1000, the ratio between the two is 1e+6!
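The entrywise computation can be spelled out in a few lines of NumPy (illustration only, not Eigen's API):

```python
import numpy as np

N = 1000
A = np.ones((N, N))  # the all-ones NxN matrix from the example above

entrywise_linf = np.abs(A).max()   # max |a_ij|         -> 1
entrywise_l1   = np.abs(A).sum()   # sum of all |a_ij|  -> N^2

print(entrywise_l1 / entrywise_linf)  # 1e6 for N = 1000
```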

Now let's consider induced norms. (http://en.wikipedia.org/wiki/Matrix_norm)

Consider a NxN matrix whose first column is filled with 1's and the
rest with 0's.

Its L1 induced norm is N.
Its Linf induced norm is 1.

So now if you have N=1000, the ratio between the two is still 1000.
Not as bad, but still bad!
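And the induced-norm version of the same check, again as a NumPy sketch (the induced 1-norm is the max column abs-sum, the induced Linf norm the max row abs-sum):

```python
import numpy as np

N = 1000
A = np.zeros((N, N))
A[:, 0] = 1.0  # first column filled with 1's, rest with 0's

induced_l1   = np.linalg.norm(A, 1)       # max column abs-sum -> N
induced_linf = np.linalg.norm(A, np.inf)  # max row abs-sum    -> 1

print(induced_l1 / induced_linf)  # 1000 for N = 1000
```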

In conclusion, inequivalent notions of condition numbers are really
inequivalent, even if you just need an order of magnitude.

Benoit

>
> Cheers,
> Jitse


