Re: [eigen] Precision in Cholesky Decomposition

I am using Eigen 2.0.2; I will try the development branch.

I just double-checked the precision necessary: 1e-16 works, but
1e-15 does not.  The condition number of the matrix is
2525609.87787008 (computed in MATLAB), so it is well within your 1e15
criterion.
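
For context on those thresholds: 1e-16 is already below the double
machine epsilon (about 2.22e-16), which a quick standalone check
confirms.  This is plain C++, nothing Eigen-specific:

    #include <iostream>
    #include <limits>

    int main()
    {
        // Prints roughly 2.22045e-16: the spacing of doubles near 1.0.
        std::cout << std::numeric_limits<double>::epsilon() << std::endl;
        return 0;
    }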

For my purposes LDLT is not an option, because I actually need the
lower triangular factor itself, not just to solve a system (this is one
step of a home-brewed quadratic programming algorithm optimized for
specific properties of the constraint and quadratic-term matrices).  I
may be able to rework the algorithm as a whole to keep some of its
desirable properties and avoid the LLT step somehow, but I haven't
found a way around it with my current algorithm.
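
In case it helps to see what I mean by needing the factor itself, the
step in question looks roughly like this (sketched from memory against
the 2.0 API, so the exact class and method names may need adjusting):

    #include <Eigen/Core>
    #include <Eigen/Cholesky>

    // A is the symmetric positive definite quadratic-term matrix.
    // I need L itself, with A = L * L^T, not just the solution of a system.
    Eigen::MatrixXd choleskyFactor(const Eigen::MatrixXd& A)
    {
        Eigen::LLT<Eigen::MatrixXd> llt(A);
        // This is the step that fails when a pivot falls under the
        // precision<double>()-based cut-off.
        Eigen::MatrixXd L = llt.matrixL();
        return L;
    }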

On Mon, Jun 8, 2009 at 9:09 AM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:
> Wait, there's a misunderstanding!!
>
> From what you say I guess that you're using 2.0, right?
>
> What I was talking about was the development branch (pre 2.1).
>
> Can you try it?
>
> Benoit
>
> 2009/6/8 Benoit Jacob <jacob.benoit.1@xxxxxxxxx>:
>> There's been some recent discussion about this value, and indeed we
>> should reduce it a bit, perhaps down to 1e-13.
>>
>> There's no way it can make sense to make it lower than the machine
>> epsilon for double, which is roughly 1e-16.  See the source code of
>> ei_isMuchSmallerThan and ei_isApprox: basically, that would turn these
>> fuzzy comparisons into exact comparisons!!
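
My reading of that check (paraphrased, not copied from the Eigen
source) is essentially this:

    #include <cmath>

    // Paraphrase of the idea behind ei_isMuchSmallerThan, not the actual
    // implementation: "x is much smaller than y" means |x| <= prec * |y|.
    // If prec is pushed below the double epsilon (~2.2e-16), ordinary
    // rounding noise of size |y| * eps already fails the test, so the
    // fuzzy comparison effectively degenerates into an exact one.
    bool isMuchSmallerThan(double x, double y, double prec)
    {
        return std::abs(x) <= std::abs(y) * prec;
    }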
>>
>> 2009/6/8 Scott Stephens <stephens.js@xxxxxxxxx>:
>>> I was having a problem with a Cholesky decomposition failing with the
>>> indication that the matrix was not positive definite.  I was surprised,
>>> because I was able to successfully perform the Cholesky decomposition
>>> in MATLAB.  I tracked this down to the value returned by
>>> precision<double>() as defined in src/Core/MathFunctions.h, whose square
>>> root is used as the cut-off for what counts as numerically zero in the
>>> LLT code.  The return value is hard-coded to 1e-11; I was able to match
>>> MATLAB by setting this to 1e-18 instead.
>>
>> If you really had to use such a small value, smaller than the machine
>> epsilon, that probably means that your matrix isn't significantly
>> distinguishable from an indefinite one, in which case you should use
>> LDLT instead of LLT.  Or else you found a bug in our code.  It would be
>> interesting if you could run an SVD on this matrix and find out its
>> condition number.  If it's greater than 1e15 then your matrix isn't
>> significantly definite.
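
I did that in MATLAB rather than with Eigen's SVD, but in Eigen terms I
assume something along these lines is meant (I'm not certain of the
exact SVD class name across 2.0 and the development branch):

    #include <Eigen/Core>
    #include <Eigen/SVD>

    // Condition number as the ratio of the extreme singular values.
    double conditionNumber(const Eigen::MatrixXd& A)
    {
        Eigen::SVD<Eigen::MatrixXd> svd(A);
        Eigen::VectorXd sv = svd.singularValues();
        return sv.maxCoeff() / sv.minCoeff();
    }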
>>
>>>
>>> Is there a particular reason 1e-11 was chosen for this value?  Would
>>> it make sense to go with a smaller value by default, or at least offer
>>> the ability to configure this number as part of the LLT interface?
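
To make that concrete, the kind of knob I have in mind would behave
like the stand-alone loop below (purely hypothetical, not a patch to
Eigen's LLT), where the cut-off for a "numerically zero" pivot is an
argument instead of coming from precision<double>():

    #include <cmath>
    #include <Eigen/Core>

    // Hypothetical illustration only: textbook Cholesky with a
    // user-supplied pivot cut-off.  Returns false if a pivot falls at or
    // below the cut-off, i.e. the matrix is treated as not positive
    // definite.
    bool choleskyWithCutoff(const Eigen::MatrixXd& A, Eigen::MatrixXd& L,
                            double cutoff)
    {
        const int n = A.rows();
        L = Eigen::MatrixXd::Zero(n, n);
        for (int j = 0; j < n; ++j)
        {
            double d = A(j, j);
            for (int k = 0; k < j; ++k) d -= L(j, k) * L(j, k);
            if (d <= cutoff) return false;  // pivot treated as zero
            L(j, j) = std::sqrt(d);
            for (int i = j + 1; i < n; ++i)
            {
                double s = A(i, j);
                for (int k = 0; k < j; ++k) s -= L(i, k) * L(j, k);
                L(i, j) = s / L(j, j);
            }
        }
        return true;
    }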
>>
>> The current value is determined by a formula that Keir found in a
>> paper by Higham (see the source code) and that we later rederived
>> experimentally (see the thread "LU precision tuning").  So we can be
>> very confident that this is the right formula, and there's hardly a
>> chance that the user might know better...  Can you check the condition
>> number of your matrix, check whether MATLAB uses plain LLT or LDLT
>> (that would explain it), and check the accuracy of the result it
>> gives...
>>
>>>  Or is there a better way for me to solve this problem? (Maybe some kind
>>> of scaling before the decomposition, then rescaling when I'm done?)
>>
>> Scaling won't change the condition number :)
>> That said, I'm no big expert in numerical questions.  All I can say is
>> try LDLT... maybe someone else will know better.
>>
>> Benoit
>>
>
>
>


