Re: [eigen] [patch] LDLt decomposition with rank-deficient matrices



2010/2/27 Ben Goodrich <bgokgm@xxxxxxxxxxxxxx>:
> On Wed, Feb 24, 2010 at 12:45 PM, Ben Goodrich <bgokgm@xxxxxxxxxxxxxx> wrote:
>> This is a great example, thank you. In my opinion, it would be good to
>> have a way in eigen to define D uniquely by utilizing the unit lower
>> triangular restriction on L.

Why do you care about D? What is the use case?

The only use cases that I know of for the Cholesky factorization go
through "solving AX=B systems where A is self-adjoint". For that, the
current LLt and LDLt classes are enough.

>> In other words, would it be acceptable to
>> add an option to the vectorD() and matrixL() methods that reversed the
>> pivoting so that the result was in textbook form with respect to the
>> original matrix?

The vectorD() and matrixL() methods are zero-cost accessors that just
return data already computed during the decomposition. So if you
wanted to implement something like that, it would have to be
 - either an option to the decomposition itself,
 - or new separate methods.

I still don't understand what "textbook form" is and why it's useful :/


>
> Since no one has objected so far, maybe I should try to implement this
> myself. But let me make sure I understand what eigen is doing first.
>
> We have the pivoted decomposition
>
> A = P^T L D L^* P
>
> and the vectorD() method and the matrixL() method get the diagonal of
> D and the unit lower-triangular matrix L but those correspond to the
> pivoted version of A (unless P = I). So to get the factors that
> correspond to the original version of A, we need to
>
> (1) pre-multiply matrixL() by P.transpose() to get the leftmost factor
>
> and
>
> (2) multiply the original A from the left by the inverse of the
> leftmost factor and from the right by its inverse transpose to get
> the diagonal matrix
>
> Is there a faster way to accomplish (2)? It seems like in order to get
> the unpivoted D, a method would have to call reconstructedMatrix(), do
> (1), invert that matrix, and do two more matrix multiplications. That
> would be computationally expensive even if all the steps were
> optimized. Users could skip the reconstructedMatrix() step because
> they have the original A. Maybe it would be better to write a method
> to do (1) and write a separate function that takes A and its
> decomposition to do (2)?

I'm sorry, this is hard for me to understand, but I would be more
motivated if I understood first what you really want to do. Why does
the order of the terms in the vectorD matter to you?

Benoit

>
> Thanks,
> Ben


