Re: [eigen] Re: news from the LU / rank / solving API front


*To*: eigen@xxxxxxxxxxxxxxxxxxx
*Subject*: Re: [eigen] Re: news from the LU / rank / solving API front
*From*: Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
*Date*: Mon, 26 Oct 2009 13:04:38 -0400

2009/10/26 Gael Guennebaud <gael.guennebaud@xxxxxxxxx>:
>
> On Mon, Oct 26, 2009 at 4:29 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:
>>
>> Hi again,
>>
>> I can finally give some more news. I've essentially done Inverse.h now
>> (in my fork), which was the big scary part (as far as API design is
>> concerned). I've also fixed the 4x4 inverse (Euler's 2x2 trick) so it
>> should now be numerically stable. So I expect the rest of this API
>> reworking to go fast now.
>>
>> But while I'm at it I'd like to discuss one more change, so that users
>> of the devel branch don't have to adapt again soon.
>>
>> About the naming of the decomposition classes, we need some homogeneity.
>> Currently, "LU" does full pivoting; on the other hand, the equivalent
>> in QR is called "FullPivotingQR".
>> I propose:
>> * harmonize that: always mention the pivoting in the name
>> * abbreviate Pivoting as Piv
>> * keep shorthand methods such as lu(), even introduce qr() etc.
>
> ok for me.
>
>> So:
>> LU ---> FullPivLU
>> PartialLU ---> PartialPivLU
>> in QR, replace Pivoting by Piv
>> add fullPivLu()
>> keep lu() as shortcut to fullPivLu()
>> partialLU() ---> partialPivLu()
>> add qr() as shortcut to householderQr()
>> add rrqr() as shortcut to colPivHouseholderQr()
>> (note: rr for rank-revealing, that's standard)
>
> why would lu() refer to full pivoting, and qr() to no pivoting?
> Perhaps both should refer to their respective partial-pivoting variants,
> which seem to be good default choices: good accuracy and good scalability.

Argh. Actually your suggestion turns out to be the better one. (For those who are new here: Gael died 1500 years ago and rose again only to put this project in the right direction.) The reason why your suggestion is the better one is that shortcuts like lu() would really be all about quickly meeting users' expectations, and in this case, a huge part of that would be to accommodate users coming from LAPACK.
Now in LAPACK, not only dgesv but also dgesvx assume the matrix to be square. This means that it's actually perfectly ok to have .lu() assume that the matrix is square. So my biggest objection vanishes. Also, dgesv/dgesvx aren't full-pivoting. Moreover, as you mention, partial-pivoting LU is more scalable thanks to your blocked implementation, so that settles the deal... it's not a bad thing to come out good in uninformed luser benchmarks.

Benoit

> gael.
>
>> Opinions?
>>
>> Benoit
>>
>> 2009/10/18 Benoit Jacob <jacob.benoit.1@xxxxxxxxx>:
>> > Hi,
>> >
>> > This is mostly for Gael and Hauke (if you're busy, skip to 5. below),
>> > who I bugged recently about that, and a followup to the good old
>> > thread "LU precision tuning" on this list.
>> >
>> > It's been very very tedious, but I think we're now on the good side of
>> > the API refactoring in LU (and once I've finished LU, the rest should
>> > be the same).
>> > See my fork http://bitbucket.org/bjacob/refactor-solve-api/
>> >
>> > 1) What Eigen 2.0 does.
>> > In Eigen 2.0, we determine the rank in the LU decomposition itself. As
>> > soon as some pivot is small enough to be considered zero (we have a
>> > threshold for that, see "LU precision tuning" on this list), we stop
>> > the LU decomposition. In LU::solve(), we also use that threshold to
>> > determine whether a solution exists. If no solution exists according
>> > to that test, then we don't try to return a solution.
>> >
>> > 2) What LAPACK does.
>> > In dgesvx, there are 2 cases. If a pivot is exactly zero, so the
>> > matrix is exactly singular, dgesvx bails out and doesn't give a
>> > solution. This means that dgesvx can't be used on simple 2x2 examples
>> > like the matrix (1&0\\0&0) and the right-hand side (1\\0). If no pivot
>> > is exactly zero, dgesvx just computes a solution without worrying
>> > about accuracy if a pivot was near zero.
>> >
>> > 3) What the new Eigen (in the fork) *cough* 3.0 *cough* does.
>> > As previously discussed, I realized that we should try as hard as
>> > possible to get a solution, even an approximate one; the only case in
>> > which we need to bail out is if continuing would generate inf/nan values
>> > (which would be worthless to the user anyway); and the precision check
>> > is something that the user can always do by himself afterwards. So
>> > that means that we get much closer to LAPACK. On the other hand, I
>> > don't see the need to restrict to nonsingular matrices like LAPACK
>> > does. If some pivot is zero, we have an equation of the form 0*X=Y; we
>> > can just ignore Y and set X=0, which can give a useful approximate
>> > solution if Y was itself meant to be zero.
>> >
>> > 4) Precision of the rank determination (problem solved, was a PEBKAC).
>> > As you may know, in the past we've found it very difficult to get
>> > good accuracy for rank() methods in e.g. LU. We always had a small
>> > number of failures in the LU test.
>> > I just found the reason. Our LU-with-full-pivoting algorithm consists in
>> > taking the biggest element in the remaining corner and moving it to
>> > become the next pivot. From there we assumed that the pivots were
>> > automatically sorted in decreasing order. That was approximately true,
>> > but not exactly true. In practice it was possible to get cases where
>> > the (n+1)-th pivot was several times bigger than the n-th pivot. So
>> > now we no longer make this assumption; we manually sort the pivots as
>> > needed (not possible to do in the decomposition itself), and bingo! I
>> > can now run "test_lu r1000" and it succeeds!
>> >
>> > 5) The threshold to determine when a pivot is nonzero is, by default,
>> > the same as before (Higham's formula). The user can override that by
>> > calling usePrecision(value), and I introduced a special type Default_t
>> > so one can come back to the default by doing useThreshold(Default).
>> > Ah yes, and never mind what I said about virtual methods; I had been
>> > smoking some good raccoon fur blend (a Canadian Thanksgiving
>> > tradition).
>> >
>> > Benoit

**Follow-Ups**:
- **Re: [eigen] Re: news from the LU / rank / solving API front**, *From:* Gael Guennebaud

**References**:
- **[eigen] news from the LU / rank / solving API front**, *From:* Benoit Jacob
- **[eigen] Re: news from the LU / rank / solving API front**, *From:* Benoit Jacob
- **Re: [eigen] Re: news from the LU / rank / solving API front**, *From:* Gael Guennebaud

