Re: [eigen] Sparse Matrices in Eigen
- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: Re: [eigen] Sparse Matrices in Eigen
- From: Gael Guennebaud <gael.guennebaud@xxxxxxxxx>
- Date: Fri, 18 Jun 2010 13:00:27 +0200
On Fri, Jun 18, 2010 at 9:50 AM, Sameer Agarwal
<sameeragarwal@xxxxxxxxxx> wrote:
> Dear Eigen Folks,
> Keir and I are quite interested in Eigen having world class sparse matrix
> support. We spent some time talking about it and here are some thoughts.
> Some of this stuff is from my previous email on the subject.
> 1. Construction.
> Eigen should support (i, j, s) triplet-based constructors of the form
> SparseMatrix* M = BuildSparseMatrix( vector<int> rows, vector<int> cols,
> vector<double> values, int num_rows, int num_cols);
> SparseMatrix* M = BuildSparseMatrix( int* rows, int* cols, double* values,
> int nnz, int num_rows, int num_cols);
> I think this is a much more useful way of building a sparse matrix than
> the DynamicSparseMatrix route.
yes, I agree such a construction method can be quite convenient.
Regarding the API, maybe it would be better to have SparseMatrix
constructors directly taking a range of Triplets, as follows:
template<typename TripletIterator>
SparseMatrix(Index rows, Index cols, const TripletIterator& begin,
const TripletIterator& end);
where typeof(*TripletIterator) must provide the rowId(), columnId(),
and value() functions (or any other names if you don't like these).
Of course we would provide a default Triplet class template. Here
TripletIterator could be std::vector<Triplet>::iterator, Triplet*, etc.
Do you really need to be able to use three separate buffers?
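For concreteness, here is a rough sketch of what such triplet-based
assembly could boil down to internally. The Triplet struct and the
buildCCS helper below are purely illustrative names, not Eigen API;
it just shows the classic two-pass count-then-scatter construction of
compressed-column arrays from an (i, j, s) triplet range:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical default Triplet class exposing the proposed accessors.
struct Triplet {
    int r, c;
    double v;
    int rowId() const { return r; }
    int columnId() const { return c; }
    double value() const { return v; }
};

// Sketch of the constructor logic: count the entries of each column,
// take a prefix sum, then scatter the triplets into compressed-column
// (CCS) arrays. Duplicate handling and sorting are omitted.
template <typename TripletIterator>
void buildCCS(int rows, int cols, TripletIterator begin, TripletIterator end,
              std::vector<int>& outerIndex,    // size cols + 1
              std::vector<int>& innerIndex,    // row index of each nonzero
              std::vector<double>& values) {   // nonzero values
    (void)rows;
    outerIndex.assign(cols + 1, 0);
    for (TripletIterator it = begin; it != end; ++it)
        ++outerIndex[it->columnId() + 1];
    for (int c = 0; c < cols; ++c)             // prefix sum -> column starts
        outerIndex[c + 1] += outerIndex[c];
    innerIndex.resize(outerIndex[cols]);
    values.resize(outerIndex[cols]);
    std::vector<int> pos(outerIndex.begin(), outerIndex.end() - 1);
    for (TripletIterator it = begin; it != end; ++it) {
        int p = pos[it->columnId()]++;
        innerIndex[p] = it->rowId();
        values[p] = it->value();
    }
}
```

With a templated iterator range as proposed, std::vector<Triplet>::iterator
and a plain Triplet* both work without any extra overload.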
> 2. BLAS
> We should first focus on building a sparse BLAS, but we don't actually need
> the whole thing.
agree.
> The most common operations that we should focus on are
> vector * scalar
> vector / scalar
already OK
> vector + scalar (maybe)
> vector - scalar (maybe)
not available, and I'm not sure it makes sense: theoretically that
means all the zeros become equal to the scalar, and so the matrix
becomes dense. Maybe we could offer a method returning the list of
nonzeros as an Eigen dense vector, so that the user can perform all
kinds of low-level operations on these values.
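To make the distinction concrete, here is a tiny illustrative sketch
(the function name is hypothetical, not an Eigen API) of operating on
the stored coefficients only, which is what a "nonzeros as a dense
vector" accessor would permit, as opposed to a true dense vector + scalar
that would also fill in every implicit zero:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Add a scalar to the explicitly stored coefficients of a sparse
// vector only; the implicit zeros are left untouched, so the storage
// stays sparse.
void addToNonZeros(std::vector<double>& storedValues, double s) {
    for (std::size_t i = 0; i < storedValues.size(); ++i)
        storedValues[i] += s;
}
```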
> vector.norm() (all p norms)
OK for norm() and squaredNorm(); the other p norms still have to be added
> vector +/- vector
already OK
> dot(vector, vector)
already OK
> Matrix * vector
OK
> Matrix^T * vector
OK
> If we have well optimized robust implementations of these operations, a
> majority of sparse matrix usage will be satisfied. In particular it gives us
> enough operators to build iterative solvers with. Which brings up the issue
> of efficient implementation of SpMV - sparse matrix dense vector product.
> This has been the source of much activity in high-performance computing
> lately, and a naive implementation of SpMV has extremely poor performance.
> It is hard to hard-code a strategy that works well for all matrices.
> One direction we could look into is wrapping the OSKI library
> http://bebop.cs.berkeley.edu/oski/
yes, I've checked a few recent papers on that topic, but it seems the
only way to really speed things up is through an analysis of the
sparse matrix, which is often as expensive as the product itself; so
the only situation yielding a significant speed-up is when the SpMV
product is performed multiple times on matrices sharing the same
nonzero pattern. Nevertheless, I'll benchmark OSKI.
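For reference, the naive kernel everyone benchmarks against is the
plain compressed-row SpMV below (a sketch over raw arrays, not Eigen's
internal storage). Its performance problem is visible in the code: the
access x[inner[k]] is indirect and irregular, so the loop is bound by
memory latency rather than arithmetic:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Naive CRS sparse matrix * dense vector product: y = A * x.
// outer[i]..outer[i+1] delimits row i; inner holds column indices.
void spmv(const std::vector<int>& outer, const std::vector<int>& inner,
          const std::vector<double>& val, const std::vector<double>& x,
          std::vector<double>& y) {
    for (std::size_t i = 0; i + 1 < outer.size(); ++i) {
        double sum = 0.0;
        for (int k = outer[i]; k < outer[i + 1]; ++k)
            sum += val[k] * x[inner[k]];  // indirect gather: cache-unfriendly
        y[i] = sum;
    }
}
```

Blocking and pattern-analysis schemes such as OSKI's amortize their
setup cost only when this loop runs many times on the same pattern,
which matches the observation above.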
> 3. Iterative Methods
> A library of sparse iterative solvers built on top of the sparse matrix
> support. This may or may not be part of Eigen proper. A very useful thing to
> have here are implementations of a variety of basic preconditioning schemes.
> Not just for square/symmetric systems but also for rectangular systems and
> non-symmetric systems.
This is definitely a must-have, and it is also a good way to check
that the basic API is right. Recently, Daniel Lowengrub kindly offered
his help to improve the Sparse module, and I pointed him to this
particular task.
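As a concrete illustration of why the BLAS subset above suffices: the
textbook unpreconditioned conjugate gradient uses exactly those kernels
(SpMV, dot, scaled vector sums). A self-contained sketch follows, with
the sparse matrix-vector product abstracted as a callback since it is
not tied to any particular storage; this is a plain reference version,
not Eigen's solver:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

typedef std::vector<double> Vec;

static double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Unpreconditioned conjugate gradient for an SPD system A*x = b.
// matvec(p) stands in for the sparse matrix-vector product A*p.
Vec cg(const std::function<Vec(const Vec&)>& matvec, const Vec& b,
       double tol = 1e-10, int maxIter = 1000) {
    Vec x(b.size(), 0.0), r = b, p = b;    // x0 = 0, so r0 = b
    double rsold = dot(r, r);
    for (int it = 0; it < maxIter && rsold > tol * tol; ++it) {
        Vec Ap = matvec(p);
        double alpha = rsold / dot(p, Ap); // step length along p
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        double rsnew = dot(r, r);
        for (std::size_t i = 0; i < p.size(); ++i)
            p[i] = r[i] + (rsnew / rsold) * p[i]; // new search direction
        rsold = rsnew;
    }
    return x;
}
```

A preconditioned variant only adds one extra solve M^-1 * r per
iteration, which is where the preconditioning schemes mentioned above
would plug in.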
> 4. Direct methods
> Here we need better support for exposing the interface to symbolic
> factorization reuse, and the various ordering options that solvers provide.
Yes, and we also need a better way to declare and instantiate the
solvers, because the current design does not scale well and is error
prone. The API should also minimize the need for temporaries. As you
said earlier, people can already do their own cooking to use Eigen's
SparseMatrix with such libraries, so this part is less critical and
can be delayed to 3.1.
> Both Keir and I are interested in helping with this.
That's great :) With the addition of Daniel Lowengrub we should have
enough manpower to get it right :)
Gael
> Sameer
>