Re: [eigen] AutoDiffScalar |

[ Thread Index | Date Index | More lists.tuxfamily.org/eigen Archives ]

*To*: eigen@xxxxxxxxxxxxxxxxxxx
*Subject*: Re: [eigen] AutoDiffScalar
*From*: Gael Guennebaud <gael.guennebaud@xxxxxxxxx>
*Date*: Thu, 15 Oct 2009 18:59:33 +0200

2009/10/15 Björn Piltz <bjornpiltz@xxxxxxxxxxxxxx>


Hi all,

I've been following the work on the cminpack branch with interest, and right now I'm looking at AutoDiffScalar. I've written some tests and seen that it has the potential to be very efficient, mainly thanks to the lazy evaluation I guess, but I have some questions.

The fixed-size implementation already seems to be mostly done, but there are some problems with the dynamic version. Look for example at the implementation of addition:

template<typename OtherDerType>
inline const AutoDiffScalar<CwiseBinaryOp<ei_scalar_sum_op<Scalar>, DerType, OtherDerType> >
operator+(const AutoDiffScalar<OtherDerType>& other) const
{
  return AutoDiffScalar<CwiseBinaryOp<ei_scalar_sum_op<Scalar>, DerType, OtherDerType> >(
    m_value + other.value(),
    m_derivatives + other.derivatives());
}

This implementation crashes when "other" was initialized through the c'tor AutoDiffScalar(const Scalar& value) and other.derivatives() is of size zero.

Oh I see, but do you have a use case for that? I mean, this happens only when you convert a constant to an active scalar type, and I think this could be avoided in most cases. For instance,

Scalar sum = 0;

for(i=0 ...) sum += v[i];

can be changed to:

Scalar sum = v[0];

for(i=1 ...) sum += v[i];

So following this idea, I guess we should also add overloads for "AutoDiffScalar + constant" and the like...

Anyway, can you retry with the devel branch? I've just committed a fix which resizes and sets to zero the derivatives of one argument if the other is a null matrix. Yes, I know this is not optimal, but I don't know how to do it more efficiently.

The fix is not obvious to me, since we need to return a binary expression and just m_derivatives won't do. I could check the size of other.m_derivatives and fill it up with zeros when appropriate, but I would have to do that check at compile time, since "OtherDerType" could also be a binary expression or something similar. I haven't found a way to check at compile time whether a type is a "normal matrix" or some kind of expression.

I hope somebody has an idea of how to resolve this, because a fast forward-differentiation implementation with expression templates supporting dynamic-size vectors would be a really cool feature. I've compared this implementation to Sacado, the only other good template implementation I could find out there, and this one compares very favorably (i.e., it is fast).

I did not know about Sacado, but I'm glad we are already faster :) I've seen it is part of the Trilinos framework. This framework really provides a lot of features!

btw, about this module, there is still a work-in-progress AutoDiffVector class. Its goal is to efficiently represent a vector of active variables. Basically it will behave like a Matrix<AutoDiffScalar,Size,1>, but internally it will directly store the Jacobian matrix, allowing much higher performance and ease of use (directly initialize the Jacobian to the identity, directly get the Jacobian as a matrix, etc.).

gael.

Any feedback will be appreciated.

Björn

**Follow-Ups**: **Re: [eigen] AutoDiffScalar**, *From:* Björn Piltz

**References**: **[eigen] AutoDiffScalar**, *From:* Björn Piltz


Mail converted by MHonArc 2.6.19+ | http://listengine.tuxfamily.org/ |