Re: [eigen] Sparse Matrices in Eigen



Hi Gael,

Thanks for the detailed reply.
 
> Yes, I agree such a construction method can be quite convenient.
> Regarding the API, maybe it would be better to directly have
> SparseMatrix constructors taking a list of Triplets, as follows:
>
> template<typename TripletIterator>
> SparseMatrix(Index rows, Index cols, const TripletIterator& begin,
>              const TripletIterator& end);
>
> where the type of *TripletIterator must provide the rowId(), columnId(),
> and value() functions (or whatever other names if you don't like them).
> Of course we would provide a default template Triplet class.
>
> Here TripletIterator could be std::vector<Triplet>::iterator, Triplet*, etc.
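
Just to make sure we are talking about the same thing, here is a rough
sketch of what such a default Triplet class and the constructor call could
look like (the names follow your proposal above and are only illustrative,
not an existing API):

template<typename Scalar, typename Index>
class Triplet
{
  public:
    Triplet(Index row, Index col, const Scalar& v)
      : m_row(row), m_col(col), m_value(v) {}
    Index rowId() const    { return m_row; }     // row of the entry
    Index columnId() const { return m_col; }     // column of the entry
    Scalar value() const   { return m_value; }   // stored value
  private:
    Index m_row, m_col;
    Scalar m_value;
};

// intended usage with the proposed constructor:
//   std::vector<Triplet<double,int> > entries;
//   entries.push_back(Triplet<double,int>(0, 0, 3.0));
//   entries.push_back(Triplet<double,int>(1, 2, 0.5));
//   SparseMatrix<double> A(rows, cols, entries.begin(), entries.end());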

> Do you really want to be able to use three different buffers?

As I mentioned in my other emails, three buffers is what we get from the Fortran/MATLAB/CHOLMOD world. It is also quite often the case in optimization algorithms that the sparsity pattern stays the same and only the values array changes, so I like being able to access just that array rather than passing a whole triplet iterator around. It also lets me store and access contiguous blocks in the values array. A high-level interface like an iterator is nice, but I would like a low-level three-buffer interface too.
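
To make the three-buffer case concrete, this is roughly what I mean (plain
compressed-column arrays; nothing here is tied to a particular API, and
refreshValues() is just an illustration):

// compressed column storage of the 3x3 matrix
//   [ 1 0 4 ]
//   [ 0 3 0 ]
//   [ 2 0 5 ]
// (column j lives in values[outerIndex[j] .. outerIndex[j+1]-1])
double values[]     = { 1.0, 2.0, 3.0, 4.0, 5.0 };  // non-zero values
int    innerIndex[] = { 0,   2,   1,   0,   2   };  // row index of each value
int    outerIndex[] = { 0, 2, 3, 5 };               // start of each column, plus total nnz

// in an optimization loop the pattern stays fixed, so only the values
// buffer has to be refreshed between iterations:
void refreshValues(double* values, int nnz)
{
  for (int k = 0; k < nnz; ++k)
    values[k] *= 0.5;   // e.g. rescale every stored coefficient in place
}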

>> vector + scalar (maybe)
>> vector - scalar (maybe)

> Not available, and I'm not sure it makes sense: theoretically that means
> all the zeros become equal to the scalar, and so the matrix becomes dense.
> Maybe we could offer a method returning the list of non-zeros as an Eigen
> dense vector, so that the user can do all kinds of low-level operations
> on these values.

An iterator over the non-zero values of the sparse object would also do the job.
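
Something like this, for example (assuming the InnerIterator-style traversal
of the sparse module; the name of the writable accessor is a guess on my part):

#include <Eigen/Sparse>
using namespace Eigen;

// shift every stored non-zero by s; the implicit zeros are left untouched
void addToNonZeros(SparseMatrix<double>& A, double s)
{
  for (int j = 0; j < A.outerSize(); ++j)                        // each column (outer dim)
    for (SparseMatrix<double>::InnerIterator it(A, j); it; ++it) // each stored entry
      it.valueRef() += s;                                        // valueRef() may be named differently
}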

 
>> One direction we could look into is wrapping the OSKI library
>> http://bebop.cs.berkeley.edu/oski/

> Yes, I've checked a few recent papers on that topic, but it seems the
> only way to really speed things up is through an analysis of the sparse
> matrix, which is often as expensive as the product itself, so the only
> situation with a significant speed-up is when the spmv product is
> performed many times on a matrix with the same non-zero pattern.
> Nevertheless, I'll benchmark OSKI.

So OSKI only tunes the multiplies if it thinks the computational effort is worth it. You can give it hints about the workload and such.
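
To illustrate the hint/tune/multiply workflow (this is from memory of the
OSKI user's guide, so treat the exact names and signatures as approximate):

#include <oski/oski.h>

// repeatedly compute y <- A*x + y, letting OSKI decide whether the
// expected number of calls justifies the cost of analysing/reorganising A
void repeated_spmv(int* Aptr, int* Aind, double* Aval, int rows, int cols,
                   double* x, double* y, int num_calls)
{
  oski_Init();
  oski_matrix_t  A  = oski_CreateMatCSR(Aptr, Aind, Aval, rows, cols,
                                        SHARE_INPUTMAT, 1, INDEX_ZERO_BASED);
  oski_vecview_t xv = oski_CreateVecView(x, cols, STRIDE_UNIT);
  oski_vecview_t yv = oski_CreateVecView(y, rows, STRIDE_UNIT);

  // workload hint: this matrix will be used for num_calls SpMVs
  oski_SetHintMatMult(A, OP_NORMAL, 1.0, xv, 1.0, yv, num_calls);
  oski_TuneMat(A);   // tunes only if it thinks the payoff is worth it

  for (int i = 0; i < num_calls; ++i)
    oski_MatMult(A, OP_NORMAL, 1.0, xv, 1.0, yv);   // y <- 1.0*A*x + 1.0*y

  oski_DestroyVecView(xv);
  oski_DestroyVecView(yv);
  oski_DestroyMat(A);
  oski_Close();
}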

>> 3. Iterative Methods
>> A library of sparse iterative solvers built on top of the sparse matrix
>> support. This may or may not be part of Eigen proper. A very useful thing
>> to have here is implementations of a variety of basic preconditioning
>> schemes, not just for square/symmetric systems but also for rectangular
>> and non-symmetric systems.

> This is definitely a must-have, and it is also a good way to check that
> the basic API is sound. Recently, Daniel Lowengrub kindly offered his
> help to improve the Sparse module, and I pointed him to this
> particular task.
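
For concreteness, here is a rough sketch of the simplest instance of this:
a Jacobi (diagonal) preconditioned conjugate gradient on top of
SparseMatrix. Everything below is only illustrative (the names, the
diagonal extraction, the stopping test), not a proposal for the actual API:

#include <Eigen/Sparse>
#include <Eigen/Dense>
using namespace Eigen;

// Jacobi-preconditioned CG for a symmetric positive definite A
VectorXd pcg(const SparseMatrix<double>& A, const VectorXd& b,
             int maxIters = 1000, double tol = 1e-10)
{
  int n = A.rows();
  VectorXd invDiag(n);
  for (int i = 0; i < n; ++i)
    invDiag[i] = 1.0 / A.coeff(i, i);      // M^{-1} = diag(A)^{-1}

  VectorXd x = VectorXd::Zero(n);
  VectorXd r = b;                          // residual b - A*x with x = 0
  VectorXd z = invDiag.cwiseProduct(r);    // z = M^{-1} r
  VectorXd p = z;
  double rz = r.dot(z);

  for (int k = 0; k < maxIters && r.norm() > tol; ++k)
  {
    VectorXd Ap = A * p;                   // sparse * dense product
    double alpha = rz / p.dot(Ap);
    x += alpha * p;
    r -= alpha * Ap;
    z = invDiag.cwiseProduct(r);
    double rzNew = r.dot(z);
    p = z + (rzNew / rz) * p;
    rz = rzNew;
  }
  return x;
}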

Is there a plan for what you and Daniel intend to do on this front?

Sameer


