Re: [eigen] SparseMatrix with non POD-"Scalar" type (e.g. SparseBlockMatrix) yields unnecessary initialisations of Scalar-type?



Excellent. Yes of course, and that block size could even be a template
parameter to allow compile-time optimization where possible.
Sameer


On Sun, Mar 17, 2013 at 3:00 PM, Gael Guennebaud
<gael.guennebaud@xxxxxxxxx> wrote:
> What you describe is exactly what I have in mind because it's what I
> need too, but of course I'd also like to have the possibility to have
> all blocks the same fixed size.
>
> gael
>
> On Sun, Mar 17, 2013 at 9:25 PM, Sameer Agarwal
> <sameeragarwal@xxxxxxxxxx> wrote:
>> Gael,
>> Sparse block matrices in Eigen would be a really great addition.
>>
>> Is the plan to have all blocks of the same size?
>>
>> One direction of interest to us would be sparse matrices that support
>> row and column block size vectors, e.g
>>
>> consider "3x3" sparse block matrix
>>
>> col_blocks = [2 4 2]
>> row_blocks = [3 4 5]
>>
>> Then the block structure will look like
>>
>> M = [3x2 3x4 3x2]
>>        [4x2 4x4 4x2]
>>        [5x2 5x4 5x2]
>>
>> Sameer
>>
>>
>> On Sun, Mar 17, 2013 at 11:38 AM, Gael Guennebaud
>> <gael.guennebaud@xxxxxxxxx> wrote:
>>> Hi Daniel,
>>>
>>> On Sat, Mar 16, 2013 at 7:00 PM,  <Daniel.Vollmer@xxxxxx> wrote:
>>>
>>>> I've attached my experiment that essentially uses a dense 5x5 matrix as the "Scalar" type. Basically, this works (which is already pretty nice) but I have noticed two things which seem sub-optimal, and so I was curious whether I'm missing something or doing it wrong.
>>>
>>> adding explicit support for sparse block matrices is something we
>>> planned to address soon. Your example is a nice starting
>>> proof-of-concept. To get it right, we certainly need a new
>>> SparseBlockMatrix class and some way to specify which algorithm to use
>>> for block-level division (a/b), i.e., a dense Cholesky in
>>> SimplicialCholesky, an LU in SparseLU, etc.
>>>
>>>> 1) The Scalar-type used in SparseMatrix seems to need an abs() function in order to (I assume) only iterate over non-zero elements (AmbiVector.h:308 & 335). Oddly enough, my abs-version was needed to compile, but never got called.
>>>> It might be advantageous to provide an iterator that iterates over all elements without making such a guarantee, but then the point shouldn't matter once it is compressed.
>>>
>>> abs is probably not the only free function you/we will have to
>>> specify; operator/ will be needed too, and in some cases sqrt (LLT).
>>> In this precise case of AmbiVector, abs is only needed for
>>> non-conservative sparse matrix products, i.e.:  (A*B).prune(eps);
>>>
>>>> 2) Many times the single-argument constructor is called for the Scalar-type of SparseMatrix in instances where I don't think this is necessary. As this becomes more and more expensive with a more heavy-weight Scalar type, I'd like to avoid this where possible, for example
>>>> - SparseMatrix.h:1108 (insertUncompressed): Element is initialised to 0 even though it is later overwritten by the inserted value. Worked fine for me if I removed the = 0.
>>>> - SparseMatrix.h:382 (insertBackByOuterInner) and SparseMatrix.h:392 (insertBackByOuterInnerUnordered): Element is initialised to 0 even though it is later overwritten by the inserted value. I hacked this by making a variant of CompressedStorage::append() that only takes an Index and not a value-ref and calling that instead (because again, the value is later overwritten via the reference we return)
>>>> - AmbiVector.h:197/206/239 (coeffRef): I don't quite understand this, but I'm not sure the element needs to be initialised here either.
>>>
>>> We need these initializations because it is allowed to do:
>>>
>>> mat.insert(i,j);
>>>
>>> to insert an explicit zero. One solution could be to return a wrapper
>>> object that would set the wrapped value to zero in the destructor if
>>> operator= has not been called, but that sounds a bit overkill.
>>>
>>>> - AmbiVector.h:171 (setZero): Shouldn't / couldn't this use assignment from m_zero instead?
>>>
>>>
>>> Actually, all these constant-to-Scalar conversions are problematic for
>>> any heavy-weight scalar type (AutoDiff, multi-precision, etc.), and
>>> an old idea was to add in NumTraits a Constant type (maybe that name
>>> is not good) for that usage. This way, if a custom scalar type is fully
>>> compatible with double, then NumTraits<Custom>::Constant could be
>>> double. For fixed-size matrices it could be a CwiseNullaryOp<>, thus
>>> making the assignment more efficient and optimizable by the compiler.
>>>
>>>> Is this approach (using a custom scalar type) still the recommended way to create sparse block-matrices?
>>>
>>> As I said, there is currently no recommended way to create
>>> sparse block matrices, but this is something we'd like to have, and
>>> we'll probably work on it soon. So if you're willing to share your
>>> progress in this direction and participate in this development in
>>> any way, you're very welcome!!
>>>
>>>> Can we reduce the overhead of these initialisations or make them optional?
>>>
>>> Do you have some measure of the overhead?
>>>
>>> gael
>>>
>>>>
>>>> I'd welcome any feedback. :)
>>>>
>>>> Thanks!
>>>>     Daniel Vollmer
>>>>
>>>> --------------------------
>>>> Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR)
>>>> German Aerospace Center
>>>> Institute of Aerodynamics and Flow Technology | Lilienthalplatz 7 | 38108 Braunschweig | Germany
>>>
>>>
>>
>>
>
>


