Re: [eigen] projects using Eigen2 ?

and do note that if you go the comma-initializer way, it will check
that the sizes match so you can remove your asserts ;)
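
A minimal sketch of what that buys you, with the names (A, n, Minvminus,
gG, Q) taken from the code quoted below:

Eigen::MatrixXd A(2*n, 2*n);
// no manual size asserts needed: operator<< asserts (in debug builds)
// that the comma-separated pieces exactly tile A
A << Eigen::MatrixXd::Zero(n,n), Eigen::MatrixXd::Identity(n,n),
     Minvminus*gG,               Minvminus*Q;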

Cheers,
Benoit

2008/12/6 Benoit Jacob <jacob.benoit.1@xxxxxxxxx>:
> arf i can't resist:
>
>  131   A.block(0,0,n,n).setZero();
>  132   A.block(0,n,n,n)=Eigen::MatrixXd::Identity(n,n);
>  133   A.block(n,0,n,n)=Minvminus*gG;
>  134   A.block(n,n,n,n)=Minvminus*Q;
>
> purely cosmetic, but this is one of the cases where i think that corner() is neat:
>
>  131   A.corner(Eigen::TopLeft, n,n).setZero();
>  132   A.corner(Eigen::TopRight, n,n)=Eigen::MatrixXd::Identity(n,n);
>  133   A.corner(Eigen::BottomLeft, n,n)=Minvminus*gG;
>  134   A.corner(Eigen::BottomRight, n,n)=Minvminus*Q;
>
> Or alternatively the comma-initializer looks even more futuristic
> (though if you go for it, do benchmark as some compilers like ICC have
> had trouble with it in the past):
>
> A << Eigen::MatrixXd::Zero(n,n), Eigen::MatrixXd::Identity(n,n),
>        Minvminus*gG, Minvminus*Q;
>
> Cheers,
> Benoit
>
>
> 2008/12/6 Benoit Jacob <jacob.benoit.1@xxxxxxxxx>:
>> One more remark: for what you're doing it's really much better if
>> christoffel is actually stored in row-major order, as this makes the
>> rowwise sum do only contiguous memory accesses. Especially if n is
>> large enough (i guess it is, say >= 10, otherwise you'd be using
>> fixed-size matrices, right?), you'll benefit from vectorization.
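>>
>> A minimal sketch of that, reusing the map_christoffel idea from the
>> quoted message below, and assuming christoffel's data really is
>> contiguous and row-major (array_of_coefficients() is still just a
>> placeholder for whatever accessor returns the raw data pointer):
>>
>> typedef Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic,
>>                       Eigen::RowMajor> RowMajorMatrixXd;
>> Eigen::Map<RowMajorMatrixXd>
>>   map_christoffel(christoffel.array_of_coefficients(), n*n, n);
>> // each row of map_christoffel is now contiguous in memory, so
>> // map_christoffel.rowwise().sum() only does contiguous accesses and
>> // can be vectorized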
>>
>> Cheers,
>> Benoit
>>
>> 2008/12/6 Benoit Jacob <jacob.benoit.1@xxxxxxxxx>:
>>> Thanks, i'll add the link to the wiki.
>>>
>>> 2008/12/6 Timothy Hunter <tjhunter@xxxxxxxxxxxx>:
>>>> while the eigen-specific parts are located at:
>>>> http://personalrobots.svn.sourceforge.net/viewvc/personalrobots/pkg/trunk/controllers/control_toolbox/
>>>> (in include/ for the template code, src/serial_chain_model.cpp and
>>>> test/ for the test units)
>>>
>>> OK, so I searched for occurrences of "Eigen::" in this file and
>>> comment below:
>>>
>>>   90   Eigen::MatrixXd M(n,n);
>>>   91   NEWMAT::Matrix mass(n,n);
>>>   92   chain_->computeMassMatrix(kdl_q,kdl_torque_,mass);
>>>   93   for(int i=0;i<n;i++)
>>>   94     for(int j=0;j<n;j++)
>>>   95       M(i,j)=mass(i+1,j+1);
>>>
>>> Here, if the storage order (column-major or row-major; Eigen
>>> defaults to column-major) is the same for M and mass, then instead of
>>> this double for loop you are better off using an Eigen::Map, as it
>>> will potentially vectorize.
>>>
>>> Something like  (using Eigen trunk):
>>> M = Eigen::MatrixXd::Map(mass.array_of_coefficients(), n, n);
>>>
>>> Now if mass is row-major and it is a priority for you to optimize
>>> the copying from mass to M, then you can just declare M as a
>>> row-major matrix:
>>> Eigen::Matrix<double,Eigen::Dynamic,Eigen::Dynamic,Eigen::RowMajor> M(n,n);
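>>>
>>> A minimal sketch of that combination, assuming mass's data really is
>>> contiguous and row-major (array_of_coefficients() is again just a
>>> placeholder for the accessor returning the raw pointer):
>>>
>>> typedef Eigen::Matrix<double,Eigen::Dynamic,Eigen::Dynamic,
>>>                       Eigen::RowMajor> RowMajorMatrixXd;
>>> RowMajorMatrixXd M(n,n);
>>> // one potentially vectorized copy instead of the double for loop:
>>> M = RowMajorMatrixXd::Map(mass.array_of_coefficients(), n, n);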
>>>
>>>  102 //   ROS_DEBUG("5-");
>>>  103   for(int i=0;i<n;i++)
>>>  104     for(int j=0;j<n;j++)
>>>  105       for(int k=0;k<n;k++)
>>>  106         Q(i,j)+=christoffel(i*n+j+1,k+1)*kdl_q_dot(j);
>>>
>>> When I see this i want to cry "use Eigen expressions", but it's
>>> nontrivial as you're mixing Eigen with NEWMAT matrices.
>>> Again, assuming that you know the storage order of NEWMAT, you can
>>> get away with an Eigen::Map. Assuming it's column-major:
>>> Eigen::Map<Eigen::MatrixXd>
>>> map_christoffel(christoffel.array_of_coefficients(), n*n, n);
>>> for(int i=0;i<n;i++)
>>>  for(int j=0;j<n;j++)
>>>    Q(i,j) += map_christoffel.row(i*n+j).sum() * kdl_q_dot(j);
>>>     // notice that you were doing unnecessary multiplications in
>>>     // the inner loop, as kdl_q_dot(j) didn't depend on k
>>>
>>> Now from here we can see a further simplification using
>>> rowwise().sum() and a coefficient-wise product with kdl_q_dot:
>>>
>>> #include <Eigen/Array>
>>> Eigen::Map<Eigen::MatrixXd>
>>>   map_christoffel(christoffel.array_of_coefficients(), n*n, n);
>>> for(int i=0;i<n;i++)
>>>   Q.row(i) += (map_christoffel.rowwise().sum().segment(i*n,n)
>>>                .cwise() * kdl_q_dot).transpose();
>>>
>>> Here I'm handling kdl_q_dot as an Eigen vector; i know that's not
>>> the case, but assuming it's stored contiguously in memory you can
>>> handle that again with an Eigen::Map.
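>>>
>>> A hedged sketch (q_dot_data is a hypothetical name for whatever
>>> pointer gives you kdl_q_dot's n contiguous doubles):
>>>
>>> Eigen::Map<Eigen::VectorXd> q_dot_map(q_dot_data, n);
>>> // q_dot_map can then be used wherever the code above treats
>>> // kdl_q_dot as an Eigen vector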
>>>
>>> As you can see, using Eigen expressions you go from 3 for loops down
>>> to 1. Assuming the compiler isn't confused by that C++ abstraction,
>>> you can normally get much better performance. Do benchmark it before
>>> using it in production, though: we sometimes have bad surprises, and
>>> depending on the version of GCC, complex expressions are sometimes
>>> poorly optimized.
>>>
>>> I'll stop here for now :)
>>>
>>> Cheers,
>>> Benoit
>>>
>>
>
