Re: [eigen] Sparse Matrices in Eigen


*To*: eigen@xxxxxxxxxxxxxxxxxxx
*Subject*: Re: [eigen] Sparse Matrices in Eigen
*From*: Aron Ahmadia <aja2111@xxxxxxxxxxxx>
*Date*: Fri, 18 Jun 2010 14:43:37 +0300

Just a few thoughts and an update from my end.

Here at KAUST and at IBM Research I have been talking to some of the HPC guys who deal with academic and commercial research codes, and they too are very interested in sparse matrix operations.

I agree with Gael's assessment that it is hard to get any sort of reasonable performance out of SpMV without prior knowledge of the matrix's internals. Additionally, static templating is unlikely to help in taking advantage of the matrix's internal data layout; you really need to be able to represent the sparse matrix's structure dynamically. A good API for assembling the matrix is essential, but I think it would likely need to offer a variety of different ways to add components to the matrix:

* stencil/diagonal operators

* blocks of different sizes

* classical compressed row/column

OSKI definitely beats naive strategies, but I think you can always do better if you know more about the structure of your problem.
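As a toy illustration of that point (my example, not anything from OSKI): if you know the matrix is, say, tridiagonal, you can apply it from three contiguous arrays with no column-index array and no indirect loads at all, which a generic compressed format cannot match.

```cpp
#include <vector>

// y = A*x for a tridiagonal A stored as three diagonals (lower, main, upper).
// lo has n-1 entries (A(i, i-1)), di has n (A(i, i)), up has n-1 (A(i, i+1)).
// Structure knowledge removes all index indirection from the kernel.
std::vector<double> tridiagApply(const std::vector<double>& lo,
                                 const std::vector<double>& di,
                                 const std::vector<double>& up,
                                 const std::vector<double>& x) {
    const std::size_t n = x.size();
    std::vector<double> y(n);
    for (std::size_t i = 0; i < n; ++i) {
        y[i] = di[i] * x[i];
        if (i > 0)     y[i] += lo[i - 1] * x[i - 1];
        if (i + 1 < n) y[i] += up[i] * x[i + 1];
    }
    return y;
}
```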

I am happy to help out with some of the tuning, particularly if we can also test and tune against KAUST's 65,536-core BlueGene/P supercomputer and there are some interesting academic things we can do on the machine. (KAUST is open to commercial research as well, but we would need some sort of MOU or research collaboration in place, which would probably not be worth it for Google.)

Cheers,

Aron Ahmadia

On Fri, Jun 18, 2010 at 2:00 PM, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:

On Fri, Jun 18, 2010 at 9:50 AM, Sameer Agarwal
<sameeragarwal@xxxxxxxxxx> wrote:

> Dear Eigen Folks,
> Keir and I are quite interested in Eigen having world-class sparse matrix
> support. We spent some time talking about it, and here are some thoughts.
> Some of this stuff is from my previous email on the subject.
>
> 1. Construction.
> Eigen should support (i,j,s) triplet-based constructors of the form
>
> SparseMatrix* M = BuildSparseMatrix(vector<int> rows, vector<int> cols,
>                                     vector<double> values, int num_rows, int num_cols);
> SparseMatrix* M = BuildSparseMatrix(int* rows, int* cols, double* values,
>                                     int nnz, int num_rows, int num_cols);
>
> I think this is a much more useful way of building a sparse matrix than
> the DynamicSparseMatrix route.

Yes, I agree such a construction method can be quite convenient.

Regarding the API, maybe it would be better to directly have
SparseMatrix constructors taking a list of Triplets, as follows:

template<typename TripletIterator>
SparseMatrix(Index rows, Index cols, const TripletIterator& begin,
             const TripletIterator& end);

where typeof(*TripletIterator) must provide the rowId(), columnId(),
and value() functions (or whatever other names if you don't like
them). Of course we would provide a default template Triplet class.
Here TripletIterator could be std::vector<Triplet>::iterator, Triplet*, etc.

Do you really want to be able to use three different buffers?
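For what it's worth, what such a triplet-range constructor has to do internally can be sketched in a few lines: sort the triplets by (row, column), merge duplicates, and compress into row pointers. The `Triplet` below follows the rowId()/columnId()/value() naming from the proposal; the `Csr`/`fromTriplets` names are invented for this sketch and are not Eigen's actual implementation.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// A minimal Triplet in the spirit of the proposal above.
struct Triplet {
    int r, c;
    double v;
    int rowId() const { return r; }
    int columnId() const { return c; }
    double value() const { return v; }
};

// Plain CSR storage: rowPtr has rows+1 entries, colIdx/vals have nnz entries.
struct Csr {
    std::vector<int> rowPtr, colIdx;
    std::vector<double> vals;
};

template <typename TripletIterator>
Csr fromTriplets(int rows, TripletIterator begin, TripletIterator end) {
    std::vector<Triplet> t(begin, end);
    // sort lexicographically by (row, column)
    std::sort(t.begin(), t.end(), [](const Triplet& a, const Triplet& b) {
        if (a.rowId() != b.rowId()) return a.rowId() < b.rowId();
        return a.columnId() < b.columnId();
    });
    Csr m;
    m.rowPtr.assign(rows + 1, 0);
    for (std::size_t k = 0; k < t.size(); ++k) {
        if (k > 0 && t[k].rowId() == t[k - 1].rowId() &&
            t[k].columnId() == t[k - 1].columnId()) {
            m.vals.back() += t[k].value();   // accumulate duplicate (i,j) entries
        } else {
            m.colIdx.push_back(t[k].columnId());
            m.vals.push_back(t[k].value());
            ++m.rowPtr[t[k].rowId() + 1];    // count unique entries per row
        }
    }
    // prefix sum turns per-row counts into row start offsets
    for (int i = 0; i < rows; ++i) m.rowPtr[i + 1] += m.rowPtr[i];
    return m;
}
```

The sort dominates the cost, which is one argument for accepting the whole range at once rather than inserting entries one by one.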

> 2. BLAS
> We should first focus on building a sparse BLAS, but we don't actually need
> the whole thing.

agree.

> The most common operations that we should focus on are
>
> vector * scalar
> vector / scalar

already OK

> vector + scalar (maybe)
> vector - scalar (maybe)

not available, and I'm not sure it makes sense, because theoretically
that means all zeros become equal to the scalar, and so the matrix
becomes dense. Maybe we could offer a method to return the list of
nonzeros as an Eigen dense vector, so that the user can do all kinds of
low-level operations on these values.

> vector.norm() (all p norms)

OK for norm() and squaredNorm(); we have to add the other p norms.

> vector +/- vector

already OK

> dot(vector, vector)

already OK

> Matrix * vector

OK

> Matrix^T * vector

OK

> If we have well-optimized, robust implementations of these operations, a
> majority of sparse matrix usage will be satisfied. In particular, it gives us
> enough operators to build iterative solvers with. Which brings up the issue
> of efficient implementation of SpMV - the sparse matrix * dense vector product.
> This has been the source of much activity in high performance computing
> lately, and a naive implementation of SpMV has extremely poor performance.
> It is hard to hard-code a strategy that works well for all matrices.
> One direction we could look into is wrapping the OSKI library:
> http://bebop.cs.berkeley.edu/oski/

yes, I've checked a few recent papers on that topic, but it seems the
only way to really speed things up is through an analysis of the sparse
matrix, which is often as expensive as the product itself, and so the
only situation with a significant speedup is when the SpMV product is
performed multiple times on a matrix having the same nonzero pattern.
Nevertheless, I'll bench OSKI.
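For context, the naive CSR kernel under discussion is only a few lines; its poor performance comes from the indirect, irregular loads of x[colIdx[k]], not from any algorithmic complexity. A plain C++ sketch (not Eigen code):

```cpp
#include <vector>

// Naive CSR sparse matrix * dense vector product: y = A * x.
// rowPtr has rows+1 entries; colIdx/vals have one entry per nonzero.
// The indirect load x[colIdx[k]] per nonzero makes this memory-bound and
// hard to vectorize without prior knowledge of the nonzero structure.
std::vector<double> spmv(const std::vector<int>& rowPtr,
                         const std::vector<int>& colIdx,
                         const std::vector<double>& vals,
                         const std::vector<double>& x) {
    const int rows = int(rowPtr.size()) - 1;
    std::vector<double> y(rows, 0.0);
    for (int i = 0; i < rows; ++i)
        for (int k = rowPtr[i]; k < rowPtr[i + 1]; ++k)
            y[i] += vals[k] * x[colIdx[k]];
    return y;
}
```

Autotuners like OSKI essentially pay an up-front inspection cost to replace this loop with a structure-specialized one, which is why the speedup only amortizes over repeated products on the same nonzero pattern.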

> 3. Iterative Methods
> A library of sparse iterative solvers built on top of the sparse matrix
> support. This may or may not be part of Eigen proper. Very useful things to
> have here are implementations of a variety of basic preconditioning schemes,
> not just for square/symmetric systems but also for rectangular systems and
> non-symmetric systems.

This is definitely a must-have, and it is also a good way to test that
the basic API is fine. Recently, Daniel Lowengrub kindly proposed his
help to improve the Sparse module, and I oriented him to this
particular task.
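The claim that the operations listed earlier suffice to build iterative solvers can be made concrete: unpreconditioned conjugate gradient for a symmetric positive-definite system needs nothing beyond matrix*vector, dot, and scaled vector addition. A self-contained sketch (my own, not the Sparse module's code), with `matvec` standing in for whatever sparse type provides A*x:

```cpp
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

using Vec = std::vector<double>;

// Unpreconditioned conjugate gradient for SPD A, starting from x0 = 0,
// written only in terms of matvec, dot products, and axpy-style updates.
Vec conjugateGradient(const std::function<Vec(const Vec&)>& matvec,
                      const Vec& b, int maxIter = 100, double tol = 1e-10) {
    auto dot = [](const Vec& u, const Vec& v) {
        double s = 0.0;
        for (std::size_t i = 0; i < u.size(); ++i) s += u[i] * v[i];
        return s;
    };
    Vec x(b.size(), 0.0), r = b, p = r;       // x0 = 0 implies r0 = b
    double rr = dot(r, r);
    for (int it = 0; it < maxIter && std::sqrt(rr) > tol; ++it) {
        Vec Ap = matvec(p);
        double alpha = rr / dot(p, Ap);       // optimal step along p
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += alpha * p[i];             // x += alpha * p
            r[i] -= alpha * Ap[i];            // r -= alpha * A p
        }
        double rrNew = dot(r, r);
        for (std::size_t i = 0; i < p.size(); ++i)
            p[i] = r[i] + (rrNew / rr) * p[i];  // p = r + beta * p
        rr = rrNew;
    }
    return x;
}
```

Since the matrix only ever appears through `matvec`, this is exactly the setting where an up-front SpMV tuning pass amortizes: the same nonzero pattern is applied at every iteration.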

> 4. Direct methods
> Here we need better support for exposing the interface to symbolic
> factorization reuse, and the various ordering options that solvers provide.

Yes, and we also need a better way to declare and instantiate the
solvers, because the current proposal is not really scalable and is
error prone. The API should also minimize the need for temporaries. As
you said earlier, people can already do their own cooking to use Eigen's
SparseMatrix with such libraries, so this part is less critical and can
be delayed to 3.1.

> Both Keir and I are interested in helping with this.

That's great :) With the addition of Daniel Lowengrub we should have
enough manpower to get it right :)

Gael

> Sameer


**References**:
- **[eigen] Sparse Matrices in Eigen**, *From:* Sameer Agarwal
- **Re: [eigen] Sparse Matrices in Eigen**, *From:* Gael Guennebaud
