Re: [tng-users] TMATH - bug in symmetric sparse matrices!


After talking with Prof. Bucher we decided not to wait for Eigen 3.0.

The core functionality of SymSparseMatrix is now restored (including most of the external matrix functions). The user must, however, ensure that only the lower triangular part
is stored.
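As a conceptual illustration of what "storing only the lower triangular part" means (this sketch uses SciPy rather than the tmath API, so the names below are not part of TMATH):

```python
import numpy as np
import scipy.sparse as sp

# A full symmetric matrix (dense here only for clarity)
A_full = np.array([[4.0, 1.0, 0.0],
                   [1.0, 3.0, 2.0],
                   [0.0, 2.0, 5.0]])

# Store only the lower triangle, as a symmetric sparse class expects
A_lower = sp.tril(sp.csc_matrix(A_full))

# The implied symmetric matrix can be reconstructed on demand:
# lower triangle plus the mirrored strictly-lower part
A_sym = A_lower + sp.tril(A_lower, k=-1).T

assert np.allclose(A_sym.toarray(), A_full)
```

Operators on such a matrix must behave as if the mirrored upper triangle existed, even though it is never stored.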

Furthermore, there are some new options when using the sparse solver MUMPS. It is now possible to:
- solve A*x=b, where x and b are dense vectors AND matrices,
- solve transpose(A)*x=b.
- The factorizing method SparseSolver:Compute() distinguishes between symmetric and
general sparse matrices.
- Via SetPositive you can tell MUMPS that the matrix is guaranteed to be symmetric and
positive definite, which enables further optimizations.
- When constructing the MUMPS solver object, you can optionally use
# solver = tmath.MUMPS(true)
which tells MUMPS to use the internal data buffer directly (minimal RAM usage). This option should be used with care: MUMPS may alter the data buffer, which can lead to
unforeseen behavior if the matrix is used again later.
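The factorize-once-then-solve workflow described above can be sketched as follows. This is a SciPy analogue, not the tmath/MUMPS API; `splu` stands in for the factorization step (Compute) and its `solve` method for the various right-hand-side and transpose solves:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

A = sp.csc_matrix(np.array([[4.0, 1.0, 0.0],
                            [1.0, 3.0, 2.0],
                            [0.0, 2.0, 5.0]]))

lu = splu(A)                      # factorize once (analogous to Compute())

b_vec = np.array([1.0, 2.0, 3.0])
x_vec = lu.solve(b_vec)           # solve A*x = b with a dense vector

B_mat = np.eye(3)
X_mat = lu.solve(B_mat)           # solve A*X = B with a dense matrix

x_t = lu.solve(b_vec, trans='T')  # solve transpose(A)*x = b

assert np.allclose(A @ x_vec, b_vec)
assert np.allclose(A @ X_mat, B_mat)
assert np.allclose(A.T @ x_t, b_vec)
```

Reusing one factorization for many right-hand sides is the main point: the expensive step is the factorization, while each subsequent solve is cheap.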

Best regards

Zitat von Sebastian Wolff <sw@xxxxxxxxxxxxxxxxxxxx>:

There is a serious bug in the Eigen library regarding symmetric sparse matrices. Therefore I recommend not using them.

Symmetric sparse matrices should contain only the lower triangular part of the matrix and behave as if the (non-stored) upper triangular part existed. This appeared to be the case. But Eigen always stored the full matrix, regardless of the specified memory layout. That means: operators worked, but backends like MUMPS did not. I made some experiments with the code to work around that, but as a result the class tmath.SymSparseMatrix should be considered broken. We have to wait for Eigen 3.0, which is supposed to be released within the next 12 months.
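A simplified sketch of why storing the full matrix breaks symmetric backends (the function `symmetric_assemble` below is a hypothetical stand-in for a symmetric coordinate-format input, where entries are folded into one triangle and duplicates are summed, as MUMPS does in symmetric mode):

```python
import numpy as np

def symmetric_assemble(rows, cols, vals, n):
    # A symmetric backend keeps only one triangle: every entry (i, j)
    # is folded into the lower triangle and duplicate entries are SUMMED.
    A = np.zeros((n, n))
    for i, j, v in zip(rows, cols, vals):
        A[max(i, j), min(i, j)] += v
    # mirror the strictly-lower part to get the implied full matrix
    return A + np.tril(A, k=-1).T

# Passing only the lower triangle of [[4, 1], [1, 3]] gives the intended matrix:
ok = symmetric_assemble([0, 1, 1], [0, 0, 1], [4.0, 1.0, 3.0], 2)

# Passing the FULL matrix double-counts the off-diagonal entries,
# because (0, 1) and (1, 0) are folded onto the same slot and summed:
bad = symmetric_assemble([0, 0, 1, 1], [0, 1, 0, 1], [4.0, 1.0, 1.0, 3.0], 2)
```

Here `ok` equals the intended matrix, while `bad` has its off-diagonal entries doubled: operators that read the full storage still work, but a symmetric backend silently factorizes the wrong matrix.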

Best regards SW
