Re: [eigen] Parallelizable operations


Thanks for the info. Wouldn't it make sense to parallelize sums and other simple operations when the matrices are big enough? I know the problem would be determining what counts as big enough.

But maybe a new parameter could be added to force parallelism between two DenseBase<> instances, for example. I may be quite naive with this proposal, but I guess it could be great for some applications when multiprocessing is available. What do you think? Would this at least be possible, and not too complicated to implement with the current API?

On Wed, Jul 28, 2010 at 3:36 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:
2010/7/28 Carlos Becker <carlosbecker@xxxxxxxxx>:
> Hi everyone, I was just wondering: which are the parallelizable operations
> with Eigen? I mean, if I enable OpenMP and EIGEN_DONT_PARALLELIZE is not
> defined, which operations would get parallelized? According to what I see in
> the source code it is mostly dedicated to products.

Yes, that is all that's currently parallelized. However, do note that
products are where blocking (a.k.a. "level 3"-implemented) decompositions
spend most of their time, so this already benefits a large part of the
decompositions. In the future, we'll parallelize more and more stuff;
e.g. my new divide-and-conquer SVD is going to be fully parallelized.
> I am asking this because
> I am now choosing between parallelizing some code myself or letting Eigen do
> it. I guess that using EIGEN_DONT_PARALLELIZE would allow me to parallelize
> larger blocks and, knowing what I am doing, I suppose I can get better
> performance, especially when the operations are not purely products.

I can't give a general answer to that question, it depends too much on
your specifics!


> Thanks!
> Carlos
