Re: [eigen] AutoDiffScalar


*To*: eigen@xxxxxxxxxxxxxxxxxxx
*Subject*: Re: [eigen] AutoDiffScalar
*From*: Björn Piltz <bjornpiltz@xxxxxxxxxxxxxx>
*Date*: Thu, 15 Oct 2009 19:50:16 +0200

Thanks for the quick reply! I will have a look at the dev branch.

You are probably right that constants can be avoided, but it would be
nice to be able to use code like

    template<typename T>
    void f(const T* in, T* out)
    {
        const T& x = in[0];
        ...
        out[0] = 2*x + sin(2*pi*y) + pow(z, 3) + ...;
    }

It would be quite verbose to convert it to avoid constants, and given
ADS<->constant overloads there shouldn't be any performance penalty.

As for use cases, I would think this class is a perfect fit for the
Levenberg-Marquardt algorithm when the user doesn't supply derivatives.
Basically all the functors in
eigen2-cminpack/src/tip/unsupported/test/NonLinear.cpp could be called
with the ADS.

I've had a quick look at AutoDiffVector, but I'll check it out for sure
once I've got the Scalar working.

Björn

2009/10/15 Gael Guennebaud <gael.guennebaud@xxxxxxxxx>:
>
>
> 2009/10/15 Björn Piltz <bjornpiltz@xxxxxxxxxxxxxx>
>>
>> Hi all,
>> I've been following the work on the cminpack branch with interest, and
>> right now I'm looking at AutoDiffScalar.
>> I've written some tests and seen that it has the potential to be very
>> efficient, mainly thanks to the lazy evaluation I guess, but I have
>> some questions.
>> The fixed-size implementation already seems to be mostly done, but
>> there are some problems with the dynamic version.
>> Look, for example, at the implementation of addition:
>>
>> template<typename OtherDerType>
>> inline const
>> AutoDiffScalar<CwiseBinaryOp<ei_scalar_sum_op<Scalar>, DerType, OtherDerType> >
>> operator+(const AutoDiffScalar<OtherDerType>& other) const
>> {
>>     return
>>     AutoDiffScalar<CwiseBinaryOp<ei_scalar_sum_op<Scalar>, DerType, OtherDerType> >(
>>         m_value + other.value(),
>>         m_derivatives + other.derivatives());
>> }
>>
>> This implementation crashes when "other" was initialized through the
>> c'tor AutoDiffScalar(const Scalar& value) and other.derivatives() is
>> of size zero.
>
> Oh I see, but do you have a use case for that?
> I mean this happens only when you convert a constant to an active
> scalar type, and I think this could be avoided in most cases. For
> instance,
>
>     Scalar sum = 0;
>     for(i=0 ...) sum += v[i];
>
> can be changed to:
>
>     Scalar sum = v[0];
>     for(i=1 ...) sum += v[i];
>
> So, following this idea, I guess we should also add overloads for
> "AutoDiffScalar + constant" and similar...
>
> Anyway, can you retry with the devel branch? I've just committed a fix
> which resizes and sets to zero the derivatives of one argument if the
> other is a null matrix. Yes, I know this is not optimal, but I don't
> know how to do it more efficiently.
>
>>
>> The fix is not obvious to me, since we need to return a binary
>> expression and just m_derivatives won't do.
>> I could check the size of other.m_derivatives and fill it up with
>> zeros when appropriate, but I would have to do that check at compile
>> time, since "OtherDerType" could also be a binary expression or
>> something similar. I haven't found a way to check at compile time
>> whether a type is a "normal matrix" or some kind of expression.
>>
>> I hope somebody has an idea of how to resolve this, because a fast
>> forward-differentiation implementation with expression templates
>> supporting dynamic-size vectors would be a really cool feature. I've
>> compared this implementation to Sacado, the only other good template
>> implementation I could find out there, and this one compares very
>> favorably (it's fast).
>
> I did not know about Sacado, but I'm glad we are already faster :) I've
> seen it is part of the Trilinos framework. This framework really
> provides a lot of features!
>
> By the way, about this module: there is still a work-in-progress
> AutoDiffVector class. Its goal is to efficiently represent a vector of
> active variables.
> Basically it will behave like a Matrix<AutoDiffScalar, Size, 1>, but
> internally it will directly store the Jacobian matrix, allowing much
> higher performance and greater ease of use (directly initialize the
> Jacobian to the identity, directly get the Jacobian as a matrix,
> etc.).
>
> gael.
>
>>
>> Any feedback will be appreciated.
>> Björn

**Follow-Ups**: **Re: [eigen] AutoDiffScalar**, *From:* Björn Piltz

**References**: **[eigen] AutoDiffScalar**, *From:* Björn Piltz

**Re: [eigen] AutoDiffScalar**, *From:* Gael Guennebaud


Mail converted by MHonArc 2.6.19+ | http://listengine.tuxfamily.org/ |