Re: [eigen] Using custom scalar types |


*To:* eigen@xxxxxxxxxxxxxxxxxxx
*Subject:* Re: [eigen] Using custom scalar types
*From:* Bob Carpenter <carp@xxxxxxxxxxx>
*Date:* Sat, 16 Jun 2012 16:45:23 -0400

On 6/16/12 10:55 AM, Brad Bell wrote:

> Looking at the example in http://eigen.tuxfamily.org/dox/TopicCustomizingEigen.html#CustomScalarType one sees
>
>     ... snip ...
>     IsInteger = 0,
>     IsSigned,
>     ReadCost = 1,
>
> I suspect that IsSigned should be 1 for this example?

That's how we did it -- IsSigned = 1. There are examples for scalar and complex types in Eigen itself, in Core/NumTraits.h, that we followed.

We have a fairly extensive integration of our reverse-mode auto-dif with Eigen in the Stan project, for MCMC sampling in Bayesian models (we use Hamiltonian Monte Carlo, which requires gradients). The basic auto-dif is a more OO take on the design in David Gay's RAD (and it also bears some resemblance to CppAD):

http://code.google.com/p/stan/source/browse/src/stan/agrad/agrad.hpp

Here are the definitions for matrices:

http://code.google.com/p/stan/source/browse/src/stan/agrad/matrix.hpp

For Eigen, we also defined scalar_product_traits<var,double> and scalar_product_traits<double,var>, which gives us some mixed-operation capability, but not enough to carry through all the expression templates. We wound up short-circuiting all of the expression templates with a fairly expensive conversion of double values to auto-dif vars for mixed matrix ops. You can see this in the to_var functions (defined for both double and var so we could still write relatively generic templated matrix ops). We're now looking into whether we can add an implicit constructor for Matrix<var,...> from a Matrix<double,...>.

We have had no problem at all using Eigen with all-var inputs. We've used the LLT solvers, determinants, inverses, eigenvalues/eigenvectors, dot products, norms, etc., and we've validated the derivatives against known matrix derivatives. We found the Magnus and Neudecker book and The Matrix Cookbook really useful in this regard, as they have complementary sets of formulas for matrix derivatives.

Compile times got so long with header-only use of the combination of auto-dif and Eigen that we eventually broke out a .cpp file for the functions that were really slowing us down. We could move more from the .hpp to the .cpp.
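[Editor's note: a minimal sketch of the NumTraits specialization being discussed, with IsSigned = 1. The adouble type here is a toy forward-mode stand-in for a real auto-dif scalar (Stan's var, CppAD's AD<double>, etc.), and NumTraits is forward-declared only so the sketch compiles standalone; real code would include Eigen/Core and typically inherit from NumTraits<double>, following Core/NumTraits.h as described above.]

```cpp
#include <limits>

// Hypothetical custom scalar: a toy forward-mode auto-dif type.
struct adouble {
    double val;  // value
    double dot;  // tangent (directional derivative)
    adouble(double v = 0.0, double d = 0.0) : val(v), dot(d) {}
};

namespace Eigen {
// Forward declaration so this sketch compiles without Eigen installed;
// normally the primary template comes from Eigen/Core.
template<typename T> struct NumTraits;

// Specialization following the pattern in Eigen/src/Core/NumTraits.h.
template<> struct NumTraits<adouble> {
    typedef adouble Real;
    typedef adouble NonInteger;
    typedef adouble Nested;
    enum {
        IsComplex = 0,
        IsInteger = 0,
        IsSigned  = 1,  // the point of the question: a real-valued AD scalar is signed
        RequireInitialization = 1,
        ReadCost = 1,
        AddCost  = 3,   // costs are rough guesses for an AD type
        MulCost  = 3
    };
    static adouble epsilon()   { return adouble(std::numeric_limits<double>::epsilon()); }
    static adouble dummy_precision() { return adouble(1e-12); }
    static adouble highest()   { return adouble( std::numeric_limits<double>::max()); }
    static adouble lowest()    { return adouble(-std::numeric_limits<double>::max()); }
};
}
```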
The real win performance-wise was vectorizing the derivatives for dot products (we're mainly using this for statistical models like regressions, where there are a lot of dot products -- and matrix products are just bunches of dot products). This unfortunately circumvents all the expression templates in Eigen, so it's not the optimal solution -- that would be to write auto-dif template specializations for all the expression templates.
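[Editor's note: a sketch of the vectorization idea above, under assumed names (DotNode, chain are illustrative, not Stan's actual API). Instead of building one reverse-mode graph node per multiply-add (2N-1 nodes for an N-dimensional dot product), a single node records both operand vectors and propagates the whole gradient in one pass, using d(x.y)/dx_i = y_i and d(x.y)/dy_i = x_i.]

```cpp
#include <cstddef>
#include <vector>

// One reverse-mode node for an entire dot product.
struct DotNode {
    std::vector<double> x, y;  // stored operand values
    double adj;                // adjoint of the result, seeded by the caller

    DotNode(std::vector<double> xv, std::vector<double> yv)
        : x(std::move(xv)), y(std::move(yv)), adj(0.0) {}

    // Forward pass: the dot product itself.
    double value() const {
        double v = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i) v += x[i] * y[i];
        return v;
    }

    // Reverse pass: one sweep pushes the adjoint into both operand
    // gradients, instead of chaining through 2N-1 intermediate nodes.
    void chain(std::vector<double>& gx, std::vector<double>& gy) const {
        for (std::size_t i = 0; i < x.size(); ++i) {
            gx[i] += adj * y[i];  // d(x.y)/dx_i = y_i
            gy[i] += adj * x[i];  // d(x.y)/dy_i = x_i
        }
    }
};
```

For a regression likelihood built from many such dot products, this collapses most of the expression graph, at the cost of bypassing Eigen's expression templates as noted above.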

> Furthermore, one also sees
>
>     ... snip ...
>     inline adouble internal::sin(const adouble& x) { return sin(x); }
>     inline adouble internal::cos(const adouble& x) { return cos(x); }
>     inline adouble internal::pow(const adouble& x, adouble y) { return pow(x, y); }
>     ... snip ...
>
> Is this a typo, or should the argument y be passed by value instead of by const reference?

We've had no problem just using argument-dependent lookup, so we never defined any internal:: versions of the special functions.

- Bob Carpenter
  Columbia Uni., Dept. of Statistics
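[Editor's note: a sketch of the argument-dependent-lookup approach described above. The mylib namespace and its adouble are illustrative; the point is that when the custom scalar and its math functions live in the same namespace, an unqualified call like sin(x) finds the right overload without any internal:: definitions. Passing y by const reference, as here, also answers the by-value question: either works, but const reference is the conventional signature.]

```cpp
#include <cmath>

namespace mylib {
// Toy forward-mode scalar: value plus tangent.
struct adouble { double val, dot; };

// Overloads in the same namespace as adouble, found via ADL.
inline adouble sin(const adouble& x) {
    return { std::sin(x.val), std::cos(x.val) * x.dot };
}
inline adouble cos(const adouble& x) {
    return { std::cos(x.val), -std::sin(x.val) * x.dot };
}
inline adouble pow(const adouble& x, const adouble& y) {
    double v = std::pow(x.val, y.val);
    // d/dt x^y = x^y * (y' * ln(x) + y * x' / x); assumes x.val > 0.
    return { v, v * (y.dot * std::log(x.val) + y.val * x.dot / x.val) };
}
}  // namespace mylib

// Generic code just writes sin(x): for adouble arguments ADL resolves to
// mylib::sin, and the using-declaration covers plain double.
template <typename T>
T apply_sin(const T& x) {
    using std::sin;
    return sin(x);
}
```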

**References:**

- **[eigen] Using custom scalar types**, *From:* Brad Bell
