Re: [eigen] Problem inverting a Matrix4f with Eigen 2.0.0


2010/7/17 Aron Ahmadia <aja2111@xxxxxxxxxxxx>:
> LAPACK is a set of algorithms for the direct solution of problems whose
> matrices have very straightforward structure (dense, banded, etc.).  You
> get a matrix and you compute the solution in finite time.  I agree that
> you can check the quality of your result simply by direct comparison.
>
> When you get into iterative methods such as Conjugate Gradients,
> GMRES, etc., the condition number plays a huge role in the
> effectiveness of the method, how quickly it converges, and frequently,
> whether it will converge at all.  That said, it is usually impossible
> or very expensive to calculate the condition number of these large
> matrices exactly, so it is usually estimated.
>
> I hope this is taken in the spirit of information, and not argument :)

Sure! Thanks for sharing this. And my reply would be that whatever
condition number you want is straightforward to compute from a
suitable decomposition: e.g. the "good" condition number falls straight
out of the SVD, or even out of a selfadjoint eigensolver.
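
For instance, here is a minimal sketch of that, assuming the JacobiSVD
class from the current Eigen API (the 2.0 series spells its SVD module
a bit differently, so treat the names as assumptions):

  #include <Eigen/Dense>
  #include <iostream>

  int main()
  {
    Eigen::Matrix4f A = Eigen::Matrix4f::Random();

    // Singular values are returned sorted in decreasing order.
    Eigen::JacobiSVD<Eigen::Matrix4f> svd(A);
    const float sigma_max = svd.singularValues()(0);
    const float sigma_min = svd.singularValues()(3);

    // 2-norm condition number: ||A|| * ||A^-1|| = sigma_max / sigma_min.
    std::cout << "cond(A) = " << sigma_max / sigma_min << std::endl;
  }

For a symmetric matrix, a selfadjoint eigensolver gives the same
quantity as the ratio of the extreme absolute eigenvalues.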

Benoit

>
> A
>
> On Sat, Jul 17, 2010 at 7:18 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:
>> But here, as usual, I think it's LAPACK that is right, not MATLAB and NumPy!
>>
>> Even if the mutually inequivalent notions of condition number used by
>> LAPACK feel complicated and inelegant, they are actually more relevant
>> to the problems at hand than a single unified notion of condition
>> number could be.
>>
>> Another reason why we decided not to expose condition numbers in Eigen
>> is that for their main use, namely checking whether a result is
>> reliable, there is a better approach: check the result itself. For
>> example, if you want to check how accurate your matrix inverse is,
>> just compute matrix*inverse and see how close it is to the identity
>> matrix. Nothing beats that! When it comes to more general solving with
>> potentially non-full-rank matrices, this is even more important,
>> because the condition number of the lhs matrix alone doesn't tell you
>> all you need to know (the accuracy also depends on your particular
>> rhs), so the approach we recommend in Eigen, computing lhs*solution
>> and comparing it with rhs, is the only way to know for sure how good
>> your solution is.
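>>
>> As a concrete illustration, here is a minimal sketch of that residual
>> check (the decomposition/solve API shown is the current Eigen style,
>> and the 1e-4 tolerance is an arbitrary choice for float, so treat both
>> as assumptions rather than recommendations):
>>
>>   #include <Eigen/Dense>
>>   #include <iostream>
>>
>>   int main()
>>   {
>>     Eigen::Matrix4f m = Eigen::Matrix4f::Random();
>>     Eigen::Matrix4f inv = m.inverse();
>>
>>     // Residual of the inverse: how far is m*inv from the identity?
>>     float inverse_error = (m * inv - Eigen::Matrix4f::Identity()).norm();
>>
>>     // Same idea for a linear solve: compare lhs*solution against rhs.
>>     Eigen::Vector4f rhs = Eigen::Vector4f::Random();
>>     Eigen::Vector4f x = m.fullPivLu().solve(rhs);
>>     float solve_error = (m * x - rhs).norm() / rhs.norm();
>>
>>     std::cout << "||m*inv - I|| = " << inverse_error << "\n"
>>               << "relative solve residual = " << solve_error << std::endl;
>>
>>     if (inverse_error < 1e-4f && solve_error < 1e-4f)
>>       std::cout << "results look reliable" << std::endl;
>>   }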
>>
>> Benoit
>>
>> 2010/7/17 Aron Ahmadia <aja2111@xxxxxxxxxxxx>:
>>> I admit that pointing to LAPACK was a bad example (of the packages
>>> mentioned, it is the one I am least familiar with); however, MATLAB and
>>> NumPy are far more common these days as computational interfaces than
>>> LAPACK (even if they are built on top of its routines).
>>>
>>> A
>>>
>>> On Sat, Jul 17, 2010 at 6:55 PM, Aron Ahmadia <aja2111@xxxxxxxxxxxx> wrote:
>>>> Hi Benoit,
>>>>
>>>> Sorry, I meant the inverse in this sense; this is something that
>>>> arises when solving the two problems:
>>>>
>>>> Ab = x
>>>> Ax = b
>>>>
>>>> where I leave the unknown as x and the fixed quantity as b.  Both
>>>> problems admit a condition number bounding how perturbations in b
>>>> propagate to x:
>>>>
>>>> \kappa = ||A||*||b||/||x||    <= ||A||*||A^-1||   (forward,  x = A*b)
>>>> \kappa = ||A^-1||*||b||/||x|| <= ||A||*||A^-1||   (backward, x = A^-1*b)
>>>>
>>>> The term ||A||*||A^-1||, since it arises in both the forward and the
>>>> backward problem, is called the condition number of A.  This usage is
>>>> pretty solidly established in the literature, and you wouldn't confuse
>>>> anybody by having a general "calculate the condition number of a
>>>> matrix" function alongside more specialized ones for the condition
>>>> numbers of other specific operations.
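>>>>
>>>> For concreteness, a small sketch that evaluates both sides of the
>>>> forward bound in the 2-norm (class names follow the current Eigen
>>>> API, so treat them as assumptions):
>>>>
>>>>   #include <Eigen/Dense>
>>>>   #include <iostream>
>>>>
>>>>   int main()
>>>>   {
>>>>     Eigen::Matrix4f A = Eigen::Matrix4f::Random();
>>>>     Eigen::Vector4f b = Eigen::Vector4f::Random();
>>>>     Eigen::Vector4f x = A * b;                        // forward problem
>>>>
>>>>     Eigen::JacobiSVD<Eigen::Matrix4f> svd(A);
>>>>     float normA    = svd.singularValues()(0);         // ||A||_2
>>>>     float normAinv = 1.0f / svd.singularValues()(3);  // ||A^-1||_2
>>>>
>>>>     std::cout << "kappa_forward = " << normA * b.norm() / x.norm()
>>>>               << " <= " << normA * normAinv << std::endl;
>>>>   }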
>>>>
>>>> A


