>
> gael.
>
>>
>> Opinions?
>>
>> Benoit
>>
>> 2009/10/18 Benoit Jacob <jacob.benoit.1@xxxxxxxxx>:
>> > Hi,
>> >
>> > This is mostly for Gael and Hauke (if you're busy, skip to 5. below),
>> > whom I bugged recently about this; it's also a follow-up to the good
>> > old thread "LU precision tuning" on this list.
>> >
>> > It's been very, very tedious, but I think we're now on the good side
>> > of the API refactoring in LU (and once I've finished LU, the rest
>> > should follow the same pattern).
>> > See my fork: http://bitbucket.org/bjacob/refactor-solve-api/
>> >
>> > 1) What Eigen 2.0 does.
>> > In Eigen 2.0, we determine the rank in the LU decomposition itself. As
>> > soon as some pivot is small enough to be considered zero (we have a
>> > threshold for that, see "LU precision tuning" on this list), we stop
>> > the LU decomposition. In LU::solve(), we also use that threshold to
>> > determine whether a solution exists. If no solution exists according
>> > to that test, then we don't try to return a solution.
>> >
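>> > To make that concrete, here is a minimal sketch (my own illustration,
>> > not Eigen 2.0 source) of that behaviour: full-pivoting elimination
>> > that stops as soon as the chosen pivot falls below the threshold, so
>> > the rank is simply how far we got.
>> >
>> >   #include <cmath>
>> >   #include <utility>
>> >   #include <vector>
>> >
>> >   using Matrix = std::vector<std::vector<double>>;
>> >
>> >   // Returns the rank found before the first "zero" pivot.
>> >   int rank_by_early_stop(Matrix a, double threshold)
>> >   {
>> >       const int n = static_cast<int>(a.size());
>> >       int rank = 0;
>> >       for (int k = 0; k < n; ++k) {
>> >           // Full pivoting: biggest element of the remaining corner.
>> >           int pr = k, pc = k;
>> >           for (int i = k; i < n; ++i)
>> >               for (int j = k; j < n; ++j)
>> >                   if (std::abs(a[i][j]) > std::abs(a[pr][pc]))
>> >                       { pr = i; pc = j; }
>> >           if (std::abs(a[pr][pc]) <= threshold)
>> >               return rank;              // pivot considered zero: stop
>> >           std::swap(a[k], a[pr]);                  // row swap
>> >           for (int i = 0; i < n; ++i)
>> >               std::swap(a[i][k], a[i][pc]);        // column swap
>> >           ++rank;
>> >           for (int i = k + 1; i < n; ++i) {        // eliminate below
>> >               const double f = a[i][k] / a[k][k];
>> >               for (int j = k; j < n; ++j)
>> >                   a[i][j] -= f * a[k][j];
>> >           }
>> >       }
>> >       return rank;
>> >   }
>> >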
>> > 2) What LAPACK does.
>> > In dgesvx there are two cases. If a pivot is exactly zero, i.e. the
>> > matrix is exactly singular, dgesvx bails out and doesn't give a
>> > solution. This means that dgesvx can't be used even on simple 2x2
>> > examples like the matrix (1&0\\0&0) with the right-hand side (1\\0).
>> > If no pivot is exactly zero, dgesvx just computes a solution, without
>> > worrying about accuracy if a pivot was near zero.
>> >
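>> > For concreteness, here is that 2x2 example in Eigen code (I'm using
>> > the fullPivLu() spelling from the current development tree; names in
>> > the fork may differ): the system is exactly singular, so dgesvx
>> > refuses it, yet it has the obvious solution x = (1, 0).
>> >
>> >   #include <Eigen/Dense>
>> >   #include <iostream>
>> >
>> >   int main()
>> >   {
>> >       Eigen::Matrix2d A;
>> >       A << 1, 0,
>> >            0, 0;                  // exactly singular: 2nd pivot is 0
>> >       Eigen::Vector2d b(1, 0);    // consistent right-hand side
>> >       // dgesvx would bail out here; the behaviour proposed in 3)
>> >       // below returns the exact solution instead.
>> >       Eigen::Vector2d x = A.fullPivLu().solve(b);
>> >       std::cout << x.transpose() << std::endl;    // expect "1 0"
>> >   }
>> >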
>> > 3) What the new Eigen (in the fork) *cough* 3.0 *cough* does.
>> > As previously discussed, I realized that we should try as hard as
>> > possible to return a solution, even an approximate one. The only case
>> > in which we need to bail out is when continuing would generate inf/nan
>> > values (which would be worthless to the user anyway), and the
>> > precision check is something the user can always do themselves
>> > afterwards. So that brings us much closer to LAPACK. On the other
>> > hand, I don't see the need to restrict to nonsingular matrices like
>> > LAPACK does. If some pivot is zero, we have an equation of the form
>> > 0*X=Y; we can just ignore Y and set X=0, which can give a useful
>> > approximate solution if Y was itself meant to be zero.
>> >
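>> > A minimal sketch of that rule (my illustration, not the fork's actual
>> > code): back-substitution on the upper-triangular factor that tolerates
>> > zero pivots. For a row whose pivot is zero, the equation reads
>> > 0*x[k] = y[k]; we ignore y[k] and set x[k] = 0, so no inf/nan is ever
>> > produced, and the result is exact whenever y[k] was itself zero.
>> >
>> >   #include <cmath>
>> >   #include <vector>
>> >
>> >   std::vector<double>
>> >   solve_upper(const std::vector<std::vector<double>>& U,
>> >               const std::vector<double>& y, double threshold)
>> >   {
>> >       const int n = static_cast<int>(y.size());
>> >       std::vector<double> x(n, 0.0);
>> >       for (int k = n - 1; k >= 0; --k) {
>> >           double s = y[k];
>> >           for (int j = k + 1; j < n; ++j)
>> >               s -= U[k][j] * x[j];
>> >           if (std::abs(U[k][k]) <= threshold)
>> >               x[k] = 0.0;        // 0*x = y: ignore y, take x = 0
>> >           else
>> >               x[k] = s / U[k][k];
>> >       }
>> >       return x;
>> >   }
>> >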
>> > 4) precision of the rank determination (problem solved, was a PEBKAC)
>> > As you may know, in the past we've found it very difficult to get
>> > good accuracy from the rank() methods in e.g. LU. We always had a
>> > small number of failures in the LU test.
>> > I just found the reason. Our full-pivoting LU algorithm consists of
>> > taking the biggest element in the remaining corner and moving it into
>> > place to become the next pivot. From there we assumed that the pivots
>> > were automatically sorted in decreasing order. That was approximately
>> > true, but not exactly true: in practice it was possible to get cases
>> > where the (n+1)-th pivot was several times bigger than the n-th pivot.
>> > So we no longer make this assumption; we manually sort the pivots as
>> > needed (it's not possible to do this in the decomposition itself), and
>> > bingo! I can now run "test_lu r1000" and it succeeds!
>> >
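>> > In code, the fix looks roughly like this (a sketch, not the actual
>> > patch): collect the pivot magnitudes, sort them in decreasing order
>> > ourselves, and only then count how many exceed the threshold.
>> >
>> >   #include <algorithm>
>> >   #include <cmath>
>> >   #include <functional>
>> >   #include <vector>
>> >
>> >   int rank_from_pivots(std::vector<double> pivots, double threshold)
>> >   {
>> >       for (double& p : pivots)
>> >           p = std::abs(p);
>> >       // Don't assume full pivoting sorted them; do it manually.
>> >       std::sort(pivots.begin(), pivots.end(), std::greater<double>());
>> >       int rank = 0;
>> >       while (rank < static_cast<int>(pivots.size())
>> >              && pivots[rank] > threshold)
>> >           ++rank;        // magnitudes are now genuinely decreasing
>> >       return rank;
>> >   }
>> >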
>> > 5) The threshold to determine when a pivot is nonzero is, by default,
>> > the same as before (Higham's formula). The user can override it by
>> > calling useThreshold(value), and I introduced a special type Default_t
>> > so one can come back to the default by calling useThreshold(Default).
>> > Ah yes, and never mind what I said about virtual methods; I had been
>> > smoking some good raccoon fur blend (a Canadian Thanksgiving
>> > tradition).
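>> >
>> > To show the shape of that API, here is a hypothetical sketch (the
>> > Default_t tag is from the fork, but the member names and the exact
>> > default formula below are my assumptions, not the fork's code):
>> >
>> >   #include <limits>
>> >
>> >   struct Default_t {};
>> >   const Default_t Default = {};
>> >
>> >   class LU {
>> >   public:
>> >       LU& useThreshold(double value)    // user-prescribed threshold
>> >       { m_prescribed = true; m_threshold = value; return *this; }
>> >
>> >       LU& useThreshold(Default_t)       // back to the default
>> >       { m_prescribed = false; return *this; }
>> >
>> >       double threshold(int size) const
>> >       {
>> >           // Assumed Higham-style default: epsilon scaled by size.
>> >           return m_prescribed
>> >                ? m_threshold
>> >                : size * std::numeric_limits<double>::epsilon();
>> >       }
>> >   private:
>> >       bool m_prescribed = false;
>> >       double m_threshold = 0.0;
>> >   };
>> >
>> > so that lu.useThreshold(1e-10) prescribes a value and
>> > lu.useThreshold(Default) restores the formula.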
>> >
>> > Benoit
>> >
>>
>>
>
>