[eigen] update

Hi List,

I thought that I'd focus on my "real" job but didn't manage to ;) so here's an 
update on what has happened. Wish me luck focusing on my job now!

I've been working locally with git and git-svn for the past 2 weeks, because I 
had really big changes in mind. I've completely refactored eigen2, and the 
most obvious result is that there are now only 714 lines of code, down from 
1093, and we no longer even use those big preprocessor macros to define 
operators -- so the real drop in code size is even bigger!

Concretely, I've unified MatrixXpr and MatrixBase into a single base class 
template, EigenBase. This way, one can declare/define a function f taking any 
kind of matrix/expression/vector/whatever as follows:
	template<typename T, typename U> void f(EigenBase<T,U>& m);
whereas the former design forced one to define f separately for expressions 
and for actual objects. This accounts for the larger part of the code-size 
drop. Of course this is a "curious" base class, as in the CRTP (curiously 
recurring template pattern).
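
To make this concrete, here is a compilable toy sketch of the curious-base 
idea (not the actual eigen2 code -- I'm guessing here that the two template 
parameters are the scalar type and the derived type, and printSize/MyMatrix 
are just made-up names for illustration):

#include <iostream>

// Toy sketch of the "curious" base class; parameter names are illustrative.
template<typename Scalar, typename Derived>
struct EigenBase
{
    // CRTP: the base can downcast itself to the derived type.
    const Derived& derived() const { return static_cast<const Derived&>(*this); }
    int rows() const { return derived().rows(); }
    int cols() const { return derived().cols(); }
};

// One single function template now accepts matrices, vectors and expressions.
template<typename Scalar, typename Derived>
void printSize(const EigenBase<Scalar, Derived>& m)
{
    std::cout << m.rows() << " x " << m.cols() << "\n";
}

// A toy matrix type deriving from the curious base.
struct MyMatrix : EigenBase<double, MyMatrix>
{
    int rows() const { return 3; }
    int cols() const { return 4; }
};

int main()
{
    MyMatrix m;
    printSize(m); // prints "3 x 4"
}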

There are no longer separate Matrix/MatrixX/Vector/VectorX classes, each 
reimplementing its own constructors, etc. There is only one "Matrix" class 
(it's what used to be called MatrixBase), and there are typedefs emulating 
Matrix/MatrixX/Vector/VectorX.
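
Roughly, the idea is something like this (an illustrative sketch only -- the 
actual names and template parameters in eigen2 may differ):

// One single class template:
template<typename Scalar, int Rows, int Cols> class Matrix { /* storage... */ };

// and typedefs recovering the old convenience names
// (-1 means a dynamic size, see below):
typedef Matrix<double, 4, 4>   Matrix4d;   // what fixed-size Matrix gave you
typedef Matrix<double, -1, -1> MatrixXd;   // what MatrixX gave you
typedef Matrix<double, 4, 1>   Vector4d;   // what fixed-size Vector gave you
typedef Matrix<double, -1, 1>  VectorXd;   // what VectorX gave you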

It is now possible to "evaluate" expressions, i.e. you do
	eval(some_expression)
and some_expression automatically determines which Matrix type must be created. 
This means that all our expression types now recursively remember their number 
of rows/columns as a compile-time constant, with a magic value (currently -1) 
meaning a dynamic size (unknown at compile time). This will be very useful 
later when we implement heavy operations that need to create matrices -- such 
as LU decomposition.
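
Here is a compilable toy sketch of that mechanism (my reconstruction for 
illustration, not the actual eigen2 code; the names Sum, RowsAtCompileTime, 
etc. are just placeholders):

#include <iostream>
#include <type_traits>

template<typename Scalar_, int Rows, int Cols>
struct Matrix
{
    typedef Scalar_ Scalar;
    enum { RowsAtCompileTime = Rows, ColsAtCompileTime = Cols };
    // (real storage and coefficient access omitted)
};

// A toy "sum" expression: its compile-time sizes are those of its operands.
template<typename Lhs, typename Rhs>
struct Sum
{
    typedef typename Lhs::Scalar Scalar;
    enum { RowsAtCompileTime = Lhs::RowsAtCompileTime,
           ColsAtCompileTime = Lhs::ColsAtCompileTime };
};

// eval() maps any expression to the plain Matrix type it should be stored into.
template<typename Expr>
Matrix<typename Expr::Scalar, Expr::RowsAtCompileTime, Expr::ColsAtCompileTime>
eval(const Expr&)
{
    return Matrix<typename Expr::Scalar,
                  Expr::RowsAtCompileTime, Expr::ColsAtCompileTime>();
}

int main()
{
    typedef Matrix<float, 4, 4> Matrix4f;
    Sum<Matrix4f, Matrix4f> expr;
    static_assert(std::is_same<decltype(eval(expr)), Matrix4f>::value,
                  "eval() recovers the right Matrix type");
    std::cout << "ok\n";
}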

This makes the aliasing system unnecessary, so I removed it.
So instead of
	m.alias() = m * m;
you now do
	m = eval(m * m);
which feels more natural. But wait, that is not all. I have realized that it 
is in general very wrong to have m * m return an expression! Indeed, 
immediately evaluating the matrix product is actually an optimization. Let 
me explain. If you do a chained matrix product like:
	m1 * m2 * m3
then the second product will evaluate each entry of m1*m2 MANY TIMES -- namely, 
as many times as there are columns in m3. Now an entry of a matrix product is 
expensive to compute, so having m1*m2 return a temporary matrix (instead of 
an expression) is an optimization because it CACHES the results of these 
expensive computations! Now of course there are cases where one wouldn't want 
the temporary to be created, like
	m1 = m2 * m3;
but then one should consider that the cost of a matrix multiplication (n^3 
muls, n^2(n-1) adds, etc.) is so high that the cost of the useless copy (n^2 
movs) is not a big deal!
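
If you want to see the recomputation with your own eyes, here is a small 
standalone toy program (not eigen code, just an illustration of the argument) 
that counts how many coefficients of m1*m2 get computed when (m1*m2)*m3 is 
evaluated lazily, coefficient by coefficient:

#include <iostream>
#include <vector>

// A bare-bones matrix of ones, enough to demonstrate the counting.
struct Mat
{
    int n; std::vector<double> d;
    Mat(int n_) : n(n_), d(n_*n_, 1.0) {}
    double coeff(int i, int j) const { return d[i*n + j]; }
};

long innerEvals = 0; // how many coefficients of m1*m2 were computed

// Coefficient (i,j) of m1*m2, computed on demand (the "expression" behaviour).
double lazyProductCoeff(const Mat& m1, const Mat& m2, int i, int j)
{
    ++innerEvals;
    double s = 0;
    for (int k = 0; k < m1.n; ++k) s += m1.coeff(i,k) * m2.coeff(k,j);
    return s;
}

int main()
{
    int n = 4;
    Mat m1(n), m2(n), m3(n), result(n);
    // Evaluate (m1*m2)*m3 coefficient by coefficient, with m1*m2 left lazy.
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double s = 0;
            for (int k = 0; k < n; ++k)
                s += lazyProductCoeff(m1, m2, i, k) * m3.coeff(k, j);
            result.d[i*n + j] = s;
        }
    // Each of the n*n coefficients of m1*m2 was computed n times (once per
    // column of m3): n^3 evaluations instead of the n^2 a temporary needs.
    std::cout << innerEvals << " evaluations of m1*m2 coefficients, versus "
              << n*n << " with a cached temporary\n";
}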

Conclusion: we'll have operator* between matrices return a matrix, that's all. 
So
	m = m * m;
will just work.

And for those who really know what they're doing we'll provide the 
expression-returning flavor under a different name, like
	m = m.lazyMul(m);
and actually operator* will just do, in pseudo-code:
operator*(const EigenBase& a, const EigenBase& b)
{
	return eval(a.lazyMul(b));
}

Cheers,
Benoit

PS Bensch: I know you won't like eval() being a global function, but I've yet 
to find a way to make the compiler tolerate it as a member. Probably that's 
doable, I'm just tired.
