Re: [eigen] first benchmark of large and sparse matrix



I can't really comment on the sparse stuff; all I can say is "I agree" 
and "thank you for the explanations" :)

But it is very good that you post all these explanations here. There are some 
people on this list with an interest in sparse matrices, so it is not impossible 
that someone will step up to help you (hint to lurkers: that would be 
awesome), and then your explanations here and on the wiki will be very useful.

> actually there is room for some factorization of the code since we now
> have the same vectorized loops at several places:
>  - normal product (row major * col major)
>  - dot product
>  - redux
>  - sum
>
> so sum could call redux, dot product could call
> (a.conjugate().cwiseProduct(b)).sum(); and the product could call
> a.row(row).cwiseProduct(b.col(col)).sum();

The keyword here is "compilation time"!

The dot product especially is a very important basic operation, used massively 
by users and by Eigen's own algorithms (like fuzzy compares). So I don't want it 
to compile slowly because it has to construct a cwiseProduct and a conjugate() 
expression. Also, since the dot product only applies to vectors, it was possible 
to simplify a few things there.
That said, I haven't benchmarked compilation time.

For Product, the same remarks apply; probably the best thing to do is a proper 
benchmark of compilation time.

For sum(), things are different. Since sum() is a far less common operation, 
compilation time matters less, so I'm OK with merging it back into redux once 
redux is vectorized (again, sum is a starting point for that).

Cheers,

Benoit



