Re: [eigen] [patch] LDLt decomposition with rank-deficient matrices
- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: Re: [eigen] [patch] LDLt decomposition with rank-deficient matrices
- From: Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
- Date: Sat, 27 Feb 2010 17:27:15 -0500
2010/2/27 Ben Goodrich <bgokgm@xxxxxxxxxxxxxx>:
> On Wed, Feb 24, 2010 at 12:45 PM, Ben Goodrich <bgokgm@xxxxxxxxxxxxxx> wrote:
>> This is a great example, thank you. In my opinion, it would be good to
>> have a way in eigen to define D uniquely by utilizing the unit lower
>> triangular restriction on L.
Why do you care about D? What is the use case?
The only use cases that I know of for Cholesky factorizations go
through "solving AX=B systems where A is self-adjoint". For that, the
current LLt and LDLt classes are enough.
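For example, the whole solve use case looks roughly like this (a
minimal sketch, assuming the current API; adjust to whatever version
you are on):

  #include <Eigen/Dense>
  using namespace Eigen;

  int main()
  {
    // A self-adjoint (here: symmetric positive semidefinite) system A x = b,
    // deliberately rank-deficient.
    MatrixXd M = MatrixXd::Random(5, 3);
    MatrixXd A = M * M.transpose();
    VectorXd xtrue = VectorXd::Random(5);
    VectorXd b = A * xtrue;            // consistent right-hand side

    // Robust Cholesky (LDLt) solve; L, D and the permutation never need
    // to be looked at explicitly for this use case.
    VectorXd x = A.ldlt().solve(b);
    return 0;
  }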
>> In other words, would it be acceptable to
>> add an option to the vectorD() and matrixL() methods that reversed the
>> pivoting so that the result was in textbook form with respect to the
>> original matrix?
The vectorD and matrixL methods are zero-cost accessors: they just
return data that was already computed during the decomposition. So, if
you wanted to implement something like that, it would rather have to be
- either an option to the decomposition itself,
- or new, separate methods.
I still don't understand what "textbook form" is and why it's useful :/
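For reference, this is roughly what the accessors give you today (a
sketch with the current accessor names):

  #include <Eigen/Dense>
  using namespace Eigen;

  int main()
  {
    MatrixXd M = MatrixXd::Random(4, 2);
    MatrixXd A = M * M.transpose();    // self-adjoint, rank-deficient

    LDLT<MatrixXd> ldlt(A);            // all the work happens here

    // Zero-cost accessors: no further computation, they only expose what
    // the decomposition already stored, and they refer to the pivoted
    // matrix, i.e. P A P^T = L D L^*.
    MatrixXd L = ldlt.matrixL();       // unit lower triangular
    VectorXd D = ldlt.vectorD();       // diagonal of D, in pivoted order
    return 0;
  }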
>
> Since no one has objected so far, maybe I should try to implement this
> myself. But let me make sure I understand what eigen is doing first.
>
> We have the pivoted decomposition
>
> A = P^T L D L^* P
>
> and the vectorD() method and the matrixL() method get the diagonal of
> D and the unit lower-triangular matrix L but those correspond to the
> pivoted version of A (unless P = I). So to get the factors that
> correspond to the original version of A, we need to
>
> (1) pre-multiply matrixL() by P.transpose() to get the leftmost factor
>
> and
>
> (2) multiply the original A from the left by the inverse of the
> leftmost factor and from the right by the inverse transposed to get
> the diagonal matrix
>
> Is there a faster way to accomplish (2)? It seems like in order to get
> the unpivoted D, a method would have to call reconstructedMatrix(), do
> (1), invert that matrix, and do two more matrix multiplications. That
> would be computationally expensive even if all the steps were
> optimized. Users could skip the reconstructedMatrix() step because
> they have the original A. Maybe it would be better to write a method
> to do (1) and write a separate function that takes A and its
> decomposition to do (2)?
I'm sorry, this is hard for me to understand, but I would be more
motivated if I understood first what you really want to do. Why does
the order of the terms in vectorD() matter to you?
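For what it's worth, your steps (1) and (2) can already be written with
the existing accessors, roughly like this (just a sketch, assuming the
current accessor names matrixL(), vectorD(), transpositionsP()); note
how much more work (2) is than simply reading vectorD():

  #include <Eigen/Dense>
  using namespace Eigen;

  int main()
  {
    MatrixXd M = MatrixXd::Random(5, 3);
    MatrixXd A = M * M.transpose();          // self-adjoint, rank-deficient

    LDLT<MatrixXd> ldlt(A);                  // P A P^T = L D L^*
    MatrixXd L = ldlt.matrixL();

    // (1) the leftmost factor of A = (P^T L) D (P^T L)^*
    MatrixXd PtL = ldlt.transpositionsP().transpose() * L;

    // (2) (P^T L)^{-1} A (P^T L)^{-*}, which should come out (numerically)
    //     diagonal; this needs a factorization of PtL and two solves.
    PartialPivLU<MatrixXd> lu(PtL);
    MatrixXd D2 = lu.solve(A);               // (P^T L)^{-1} A
    MatrixXd tmp = lu.solve(D2.transpose()); // (P^T L)^{-1} A^T (P^T L)^{-T}, transposed
    D2 = tmp.transpose();
    return 0;
  }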
Benoit
>
> Thanks,
> Ben