Re: [eigen] AutoDiffScalar



OK, I had a look at the devel branch, and the fix is good.
I'm running a comparison against Sacado and getting results like these:

Computing first-order derivatives:

        Fixed   Dynamic
Eigen   15 ms   94 ms
Sacado  47 ms   125 ms

Computing second-order derivatives:

        Fixed   Dynamic
Eigen   140 ms  2094 ms
Sacado  281 ms  1047 ms

For second-order dynamic derivatives you begin to see a severe penalty
from the resize fix, and the key to resolving this is the operator
overloading you have begun to implement.
That overloading doesn't work for second (and higher) derivatives, but
I attach a suggestion for a fix.
There are still a couple of things I can't figure out myself.

Björn

On 15 October 2009 at 19:50, Björn Piltz <bjornpiltz@xxxxxxxxxxxxxx> wrote:
> Thanks for the quick reply!
> I will have a look at the dev-branch.
> You are probably right that constants can be avoided, but it would be
> nice to be able to use code like
>
> template<typename T>
> void f(const T* in, T* out)
> {
>    const T& x = in[0];
> ...
>    out[0] = 2*x + sin(2*pi*y) + pow(z, 3) +...;
> }
>
> It would be quite verbose to convert such code to avoid constants,
> and given ADS<->constant overloads there shouldn't be any performance
> penalty.
>
> As for use cases, I think this class would be a perfect fit for the
> Levenberg-Marquardt algorithm when the user doesn't supply
> derivatives.
> Basically all the Functors in
> eigen2-cminpack/src/tip/unsupported/test/NonLinear.cpp could be called
> with the ADS.
>
> I've had a quick look at AutoDiffVector, but I'll check it out for
> sure once I've got the Scalar working.
>
> Björn
>
> 2009/10/15 Gael Guennebaud <gael.guennebaud@xxxxxxxxx>:
>>
>>
>> 2009/10/15 Björn Piltz <bjornpiltz@xxxxxxxxxxxxxx>
>>>
>>> Hi all,
>>> I've been following the work on the cminpack branch with interest and
>>> right now I'm looking at AutoDiffScalar.
>>> I've written some tests and seen that it has the potential to be very
>>> efficient, mainly thanks to the lazy evaluation I guess, but I have
>>> some questions.
>>> The fixed size implementation already seems to be mostly done, but
>>> there are some problems with the dynamic version.
>>> Look for example at the implementation of addition:
>>>
>>> template<typename OtherDerType>
>>> inline const AutoDiffScalar<CwiseBinaryOp<ei_scalar_sum_op<Scalar>, DerType, OtherDerType> >
>>> operator+(const AutoDiffScalar<OtherDerType>& other) const
>>> {
>>>     return AutoDiffScalar<CwiseBinaryOp<ei_scalar_sum_op<Scalar>, DerType, OtherDerType> >(
>>>         m_value + other.value(),
>>>         m_derivatives + other.derivatives());
>>> }
>>>
>>> This implementation crashes when "other" was initialized through the
>>> constructor AutoDiffScalar(const Scalar& value), so that
>>> other.derivatives() has size zero.
>>
>> Oh, I see, but do you have a use case for that? I mean, this happens only
>> when you convert a constant to an active scalar type, and I think this can
>> be avoided in most cases. For instance,
>>
>> Scalar sum = 0;
>> for(i=0 ...) sum += v[i];
>>
>> can be changed to:
>>
>> Scalar sum = v[0];
>> for(i=1....) sum += v[i];
>>
>> So following this idea, I guess we should also add overloads for
>> "AutoDiffScalar + constant" and the like.
>>
>> Anyway, can you retry with the devel branch? I've just committed a fix
>> which resizes and sets to zero the derivatives of one argument if the
>> other is a null Matrix. Yes, I know this is not optimal, but I don't know
>> how to do it more efficiently.
>>
>>>
>>> The fix is not obvious to me, since we need to return a binary
>>> expression and just m_derivatives won't do.
>>> I could check the size of other.m_derivatives and fill it up with
>>> zeros when appropriate, but I would have to do that check at compile
>>> time, since "OtherDerType" could also be a binary expression or
>>> something similar. I haven't found a way to check at compile time
>>> whether a type is a "normal" matrix or some kind of expression.
>>>
>>> I hope somebody has an idea of how to resolve this, because a fast
>>> forward differentiation implementation with expression templates
>>> supporting dynamic-size vectors would be a really cool feature. I've
>>> compared this implementation to Sacado, the only other good template
>>> implementation I could find out there, and this one compares very
>>> favorably (i.e., it is fast).
>>
>> I did not know about Sacado, but I'm glad we are already faster :) I've
>> seen it is part of the Trilinos framework. That framework really provides
>> a lot of features!
>>
>> By the way, about this module: there is still a work-in-progress
>> AutoDiffVector class. Its goal is to efficiently represent a vector of
>> active variables. Basically it will behave like a
>> Matrix<AutoDiffScalar,Size,1>, but internally it will directly store the
>> Jacobian matrix, allowing much higher performance and ease of use
>> (directly initialize the Jacobian to the identity, directly get the
>> Jacobian as a matrix, etc.).
>>
>> gael.
>>
>>
>>>
>>> Any feedback will be appreciated
>>> Björn
>>>
>>>
>>
>

Attachment: AutoDiffScalar.patch
Description: Binary data

Attachment: test.zip
Description: Zip archive


