Re: [eigen] [patch] LDLt decomposition with rank-deficient matrices

Hi Benoit,

On Sat, Feb 27, 2010 at 5:27 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:
> 2010/2/27 Ben Goodrich <bgokgm@xxxxxxxxxxxxxx>:
>> On Wed, Feb 24, 2010 at 12:45 PM, Ben Goodrich <bgokgm@xxxxxxxxxxxxxx> wrote:
>>> This is a great example, thank you. In my opinion, it would be good to
>>> have a way in eigen to define D uniquely by utilizing the unit lower
>>> triangular restriction on L.
>
> Why do you care about D? What is the use case?

Definitely not a standard use case :) Basically, it is not about solving
AX = B systems, but about numerical optimization / statistics, where the
matrix A is a function of a parameter vector theta and the objective
function being maximized, f(theta), is defined in terms of the D in
"the" LDLt decomposition of A(theta).

[Or at least one could define the objective function that way. I have a
paper under review that defines the objective function in terms of the
eigenvalues of A(theta), and I am trying to see whether the LDLt
decomposition would work better or worse for the revision. It seems to
have some pros and cons in my testing so far.

For what it is worth, although I am interested in this for my own
research, the paper by Benson and Vanderbei I referred to earlier on
this thread

http://www.pages.drexel.edu/~hvb22/dimacsppr3.pdf

talks in more general terms about how D is useful for operationalizing
the constraint in semidefinite programming problems (mine is just one
specific instance of that), because the diagonal entries of D are
convex functions of the entries of A.]

One big problem is the situation I outlined earlier in test_noise.cc:

VectorXd D(5);
D << /* feed data */;
MatrixXd L(5, 5);
L << /* feed data */;

MatrixXd A = L * D.asDiagonal() * L.transpose();
LDLT<MatrixXd> A_factored = A.ldlt();

The numerical noise causes ldlt() to pivot differently, and thus
A_factored.vectorD() is very different from the D that "actually"
generated A. However, if I do

A.diagonal().setOnes();
A_factored = A.ldlt();

then A_factored.vectorD() is essentially the same as the D that
"actually" generated A (in exact symbolic math, A would have had 1 in
every diagonal entry anyway).
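
One way to see what is going on (just a diagnostic sketch reusing D, A,
and A_factored from above, and assuming the transpositionsP() accessor
and <iostream>):

#include <iostream>

// The transposition indices show which pivots ldlt() picked; when the
// noise flips a pivot, vectorD() comes back reshuffled.
std::cout << "transpositions: "
          << A_factored.transpositionsP().indices().transpose() << "\n";
// With the unit diagonal imposed, this difference should be tiny.
std::cout << "||vectorD - D|| = "
          << (A_factored.vectorD() - D).norm() << "\n";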

Thus, I am worried that numerical noise can cause (more)
discontinuities in my objective function, because the pivoting can
change during the optimization. I would like a "consistent" (maybe
"textbook" was a poor choice of word) way to define D, so that f(theta)
is smooth and f(theta + epsilon) is always approximately equal to
f(theta).

> The vectorD and matrixL methods are zero-cost accessors that just
> return stuff that has previously been computed during the
> decomposition. So, if you wanted to implement something like that,
> that would rather be
>  - either as an option to the decomposition itself
>  - or as new separate methods.
>
> I still don't understand what is "textbook form" and why it's useful :/

Right. And I am basically just trying to figure out the fastest way to
get a consistent D with decent numerical precision even when A(theta)
is nearly singular (but not indefinite). If I understand correctly, I
need to unpivot in basically the same way that reconstructMatrix()
does. But unpivoting is computationally expensive when it has to be
done repeatedly inside an optimization, so I was hoping someone had a
short-cut relative to the steps that first came to my mind :)
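
The most direct of those steps is to skip pivoting altogether. Here is
a rough sketch of a textbook, unpivoted LDLt that returns only D; no
claims about speed or stability, and the zero-pivot handling is just a
guess for the rank-deficient case:

#include <Eigen/Dense>
using namespace Eigen;

// Unpivoted (textbook) LDL^T of a symmetric matrix A, returning just
// the diagonal of D. With L constrained to be unit lower triangular
// and no pivoting, D is unique, so it cannot get reshuffled the way
// the pivoted vectorD() does. No pivoting also means no protection
// against bad pivots, so this assumes A is (nearly) positive
// semidefinite.
VectorXd unpivotedVectorD(const MatrixXd& A)
{
  const int n = A.rows();
  MatrixXd L = MatrixXd::Identity(n, n);
  VectorXd d(n);
  for (int j = 0; j < n; ++j) {
    // d_j = A(j,j) - sum_{k<j} L(j,k)^2 d_k
    double s = A(j, j);
    for (int k = 0; k < j; ++k)
      s -= L(j, k) * L(j, k) * d(k);
    d(j) = s;
    // L(i,j) = (A(i,j) - sum_{k<j} L(i,k) L(j,k) d_k) / d_j
    for (int i = j + 1; i < n; ++i) {
      double t = A(i, j);
      for (int k = 0; k < j; ++k)
        t -= L(i, k) * L(j, k) * d(k);
      L(i, j) = (d(j) != 0.0) ? t / d(j) : 0.0; // zero pivot: leave column at identity
    }
  }
  return d;
}

That at least gives me something to benchmark against unpivoting the
pivoted factorization.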

So, it sounds like I just need to implement it every way I can think of
and see how well / how fast each one works out.

Thanks,
Ben


