Re: [eigen] [patch] LDLt decomposition with rank-deficient matrices
- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: Re: [eigen] [patch] LDLt decomposition with rank-deficient matrices
- From: Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
- Date: Sun, 28 Feb 2010 00:44:01 -0500
2010/2/27 Ben Goodrich <bgokgm@xxxxxxxxxxxxxx>:
> Hi Benoit,
>
> On Sat, Feb 27, 2010 at 5:27 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:
>> 2010/2/27 Ben Goodrich <bgokgm@xxxxxxxxxxxxxx>:
>>> On Wed, Feb 24, 2010 at 12:45 PM, Ben Goodrich <bgokgm@xxxxxxxxxxxxxx> wrote:
>>>> This is a great example, thank you. In my opinion, it would be good to
>>>> have a way in eigen to define D uniquely by utilizing the unit lower
>>>> triangular restriction on L.
>>
>> Why do you care about D? What is the use case?
>
> Definitely not a standard use case :) Basically, it is not solving AX
> = B systems, but rather numerical optimization / statistics where the
> A matrix is a function of a parameter vector and the objective
> function being maximized, f(theta), is defined with respect to the D
> in "the" LDLt decomposition of A(theta).
>
> [Or at least one could define the objective function that way. I have
> a paper under review that operationalizes the objective function on
> the eigenvalues of A(theta), and I am trying to see if the LDLt
> decomposition would work better or worse for the revision. It seems to
> have some pros and cons in my testing so far.
>
> For what it is worth, although I am interested in this for my own
> research, the paper by Benson and Vanderbei I referred to earlier on
> this thread
>
> http://www.pages.drexel.edu/~hvb22/dimacsppr3.pdf
>
> talks in more general terms about how D is useful for operationalizing
> the constraint in semidefinite programming problems (mine is just one
> specific instance of that) because the diagonal cells of D are convex
> functions of the cells of A.]
Ah ok, sounds reasonable.
>
> One big problem was the situation I outlined earlier in test_noise.cc
>
> VectorXd D(5);
> D << // feed data
> MatrixXd L(5,5);
> L << // feed data
>
> MatrixXd A = L * D.asDiagonal() * L.transpose();
> LDLT<MatrixXd> A_factored = A.ldlt();
>
> The numerical noise causes ldlt() to pivot differently and thus
> A_factored.vectorD() is very different from the D that "actually"
> generated A. However if I do
>
> A.diagonal().setOnes();
> A_factored = A.ldlt();
>
> then A_factored.vectorD() is essentially the same as the D that
> "actually" generated A (which in symbolic math would have had 1 in all
> its diagonal cells).
>
> Thus, I am worried that numerical noise can cause (more)
> discontinuities in my objective function, because pivoting decisions
> can flip during the optimization. So I would like a "consistent"
> (maybe textbook was a poor choice of word) way to define D, so that
> f(theta) is smooth and always approximately the same as f(theta +
> epsilon).
ok, I understand. You want something continuous, so pivoting is not
good for you.
>
>> The vectorD and matrixL methods are zero-cost accessors that just
>> return stuff that has previously been computed during the
>> decomposition. So, if you wanted to implement something like that,
>> that would rather be
>> - either as an option to the decomposition itself
>> - or as new separate methods.
>>
>> I still don't understand what is "textbook form" and why it's useful :/
>
> Right. And I am basically just trying to figure out the fastest way to
> get a consistent D that has decent numerical precision even when
> A(theta) is nearly singular (but not indefinite).
Indeed: if A were indefinite, then LDLt without pivoting would be unusable.
Even staying in the definite case, as A approaches indefiniteness, the
stability of LDLt-without-pivoting will get increasingly bad. But I
guess that that's something you're prepared to live with :)
> If I am
> understanding correctly, I need to unpivot in basically the same way
> that reconstructMatrix() does. But unpivoting is going to be
> computationally expensive when it has to be done repeatedly in an
> optimization context, so I was hoping someone had a short-cut relative
> to the steps that first came to my mind :) .
>
> So, it sounds like I just need to try to implement it every way that I
> can think of and see how well / fast they work out.
If I understand your other email correctly, you'd be happy with a
non-pivoting LDLt instead. That sounds more reasonable than
"unpivoting" a pivoting LDLt.
So: OK to add a non-pivoting option to LDLt.
Benoit
>
> Thanks,
> Ben