Re: [tng-devel] MUMPS binding in tmath/sparse
It would be interesting to compare the efficiency of SymSparseMatrix and SparseMatrix for
- inserting values (perhaps tested by converting both from a SparseMatrix),
- products with vectors, matrices and scalars,
- sums with other symmetric sparse matrices.
That way, we can ensure that the better of the two algorithms is used in each case.
Furthermore, I could imagine a new data type which stores its values in separate arrays for 'value', 'i' and 'j' (i.e. coordinate/triplet storage). It should follow the design of tmath.SparseMatrix, though.
Quoting Sebastian Wolff <sw@xxxxxxxxxxxxxxxxxxxx>:
I made some changes to the interface class of sparse solvers and
applied them to MUMPS.
It is now possible to
- solve A*x=b, where x and b are dense vectors AND matrices,
- solve transpose(A)*x=b.
In addition, the factorizing method SparseSolver:Compute() distinguishes between
symmetric and general sparse matrices.
Regarding optimizing the memory use of MUMPS, I checked the MUMPS
documentation. The problem is that the data buffers holding the
input data (the arrays for x, y and value) may be altered by MUMPS. For
example, duplicate entries are merged (fine, we can ensure this before
pushing data to MUMPS), and nonzero entries are created where MUMPS
believes it is necessary (which we cannot prevent, unless we want to
scan MUMPS' source code with each release).
I therefore discourage using the data buffers of our sparse
matrices as direct input for MUMPS, and recommend creating a copy instead.