RE: [eigen] SparseMatrix with non POD-"Scalar" type (e.g. SparseBlockMatrix) yields unnecessary initialisations of Scalar-type?



Hi Gael,

thank you for your detailed comments.

> Your example is a nice starting
> proof-of-concept. To get it right, we certainly need a new
> SparseBlockMatrix class and some way to specify which algorithm to use
> for a/b, i.e., a dense Cholesky in SimplicialCholesky, LU in SparseLU,
> etc.

In my case, I am / was expecting to write the inversion / solution myself
anyway (or at least I'll probably need a very particular preconditioner), so I
was interested in Eigen's dense functionality (to work on the blocks) and in
storing the sparse matrix. I wasn't expecting Eigen's solvers to work on my
bastardised sparse matrix.
But yes, one could either specify those different settings to the solver, or
add the desired invert() or whatever implementation to the block-base class
and control it that way.
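
To make that second option a bit more concrete, here is a minimal sketch of what I
mean by adding invert() to a block-base class; the BlockBase name, the fixed 3x3
size and the plain inverse() are only placeholders for whatever dense algorithm one
actually wants to run on the blocks:

#include <Eigen/Dense>

// Hypothetical block type: inherits the dense 3x3 matrix and adds the hook
// that decides which dense algorithm is applied to the blocks.
struct BlockBase : Eigen::Matrix<double, 3, 3>
{
  typedef Eigen::Matrix<double, 3, 3> Base;

  BlockBase() : Base() {}
  template<typename Other>
  BlockBase(const Eigen::MatrixBase<Other>& other) : Base(other) {}

  // Here a direct inverse; a dense Cholesky or LU would go here instead.
  BlockBase invert() const { return Base::inverse(); }
};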

> abs is probably not the only free function you/we will have to
> specify, but also operator/, and in some cases sqrt (LLT). In this
> precise case of AmbiVector, abs is only needed for non-conservative sparse
> matrix products, i.e.:  (A*B).prune(eps);

Yes, I saw these pop up when I naïvely tried to use the existing solvers. In my own
code I just wanted to get the matrix multiply to work.
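
For what it's worth, the following is roughly the kind of thing one would have to
scatter around to get further; only a sketch, since the exact set of free functions
(and what they should even mean for a block type) depends on which algorithms end
up being used:

#include <Eigen/Dense>

typedef Eigen::Matrix<double, 3, 3> Block;

namespace Eigen {

// AmbiVector's pruning compares magnitudes, so some notion of abs is needed;
// a coefficient-wise absolute value is just one possible interpretation.
inline Block abs(const Block& b) { return b.cwiseAbs(); }

// A block-level "division", e.g. for normalisation steps; b.inverse() * a is
// only one of the two possible conventions and may be the wrong one for a
// given factorisation. sqrt (for LLT) would be another such function.
inline Block operator/(const Block& a, const Block& b) { return b.inverse() * a; }

} // namespace Eigen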

> We need these initializations because it is allowed to do:
> 
> mat.insert(i,j);
> 
> to insert an explicit zero. One solution could be to return a wrapper
> object that would set the wrapped value to zero in the destructor if
> operator= has not been called, but that sounds a bit overkill.

If it were up to me, I'd make the distinction at the API level, i.e. leave insert() as is
and add a variant that doesn't make such a guarantee (and then has some performance
advantages).
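
Something like the following is what I have in mind; insertUninitialized() is of
course made up and doesn't exist in Eigen:

#include <Eigen/Sparse>

int main()
{
  Eigen::SparseMatrix<double> mat(10, 10);

  // Current contract: insert() returns a reference to a value that has
  // already been set to Scalar(0), so this alone stores an explicit zero.
  mat.insert(0, 1);

  // The usual pattern, where that zero-initialisation is overwritten right away.
  mat.insert(2, 3) = 4.0;

  // Hypothetical variant: identical, except the returned value is left
  // uninitialised, so the caller must assign before anything reads it.
  // mat.insertUninitialized(4, 5) = 6.0;
}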

> Actually, all these constant-to-Scalar conversions are problematic for
> any heavy-weight scalar types (AutoDiff, multi-precision, etc.), and
> an old idea was to add in NumTraits a Constant (maybe that name is not
> good) type for that usage. This way, if a custom scalar type is fully
> compatible with double, then NumTraits<Custom>::Constant could be
> double. For fixed-size matrices it could be a CwiseNullaryOp<> thus
> making the assignment more efficient and optimizable by the compiler.

I don't know Eigen very well, but how does this help (and what is CwiseNullaryOp<>'s role
in this)? The examples I've pointed out were more aimed at unneeded construction / initialisation
(i.e. removing unnecessary work, at least in my case) and not at performing the work more efficiently.
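
To illustrate the distinction I'm drawing (with a 3x3 block standing in for the heavy
scalar type): as far as I understand it, a nullary expression like Block::Constant(0.0)
makes writing the zeros cheaper because no temporary Block is built, but the zeros still
get written; in the insert() case above, no value would need to be written at all.

#include <Eigen/Dense>

typedef Eigen::Matrix<double, 3, 3> Block;

void assignZero(Block& destination)
{
  // Going through a temporary: the zeros are materialised in 'zero' first,
  // then copied into 'destination'.
  Block zero = Block::Zero();
  destination = zero;

  // Going through a nullary expression: the zeros are written directly into
  // 'destination', with no temporary Block. Cheaper, but still not free.
  destination = Block::Constant(0.0);
}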

> As I said, there is currently no recommended way to create
> sparse-block matrices but this is something we'd like to have, and
> we'll probably work on it soon. So if you're willing to share your
> advances in this direction and participate in any way to this
> development, you're very welcome!!

I'm not sure I'm experienced enough with Eigen to do this, and I'm not even sure we'll use it
(at the moment I'm playing with Eigen to see whether it'd be suitable for our purposes). But if
there's any input and code I can provide, I'll gladly do so.

> Do you have some measure of the overhead?

Not yet, I'm far away from any realistic implementation, but if I get that far, I'll collect it.

As for Sameer's comment on the different block sizes, I'm mostly interested in fixed (even
compile-time constant) block sizes. I'd think the scenario with a fixed block size warrants
a specialised / separate implementation, because its structure is so much more uniform
(and exploitable).
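
As a rough, purely hypothetical illustration of what I mean by exploitable: with a
compile-time block size, every stored block occupies the same number of contiguous
values, so the storage can look just like an ordinary compressed-row format over
blocks, and each block can be mapped in place:

#include <vector>
#include <Eigen/Dense>

template<int BlockSize>
struct FixedBlockCSR
{
  typedef Eigen::Matrix<double, BlockSize, BlockSize> Block;

  std::vector<double> values;      // nnzBlocks * BlockSize * BlockSize entries
  std::vector<int>    innerIndex;  // block-column index of each stored block
  std::vector<int>    outerStart;  // start of each block-row in innerIndex

  // View of the k-th stored block, without copying it.
  Eigen::Map<Block> block(int k)
  {
    return Eigen::Map<Block>(&values[k * BlockSize * BlockSize]);
  }
};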


Best regards

Daniel Vollmer

--------------------------
Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR)
German Aerospace Center
Institute of Aerodynamics and Flow Technology | Lilienthalplatz 7 | 38108 Braunschweig | Germany

