Re: [eigen] Geometry module - Quaternion fitting alternative

On Wed, May 27, 2009 at 2:37 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:
> 2009/5/27 Benoit Jacob <jacob.benoit.1@xxxxxxxxx>:
>> 2009/5/27 Hauke Heibel <hauke.heibel@xxxxxxxxxxxxxx>:
>>> Regarding the storage type (i.e. RowMajor vs. ColMajor) I would
>>> refrain from implementing any algorithm in a way that makes
>>> assumptions about it.
>>
>> We agree with this but sometimes it's very hard to make code that will
>> be equally fast in both cases. Look at the PartialLU: it could equally
>> well be done with row operations or col operations, and of course
>> better speed is achieved if this matches the storage order. Currently
>> it is done with col operations and I'm not sure how to best make it
>> work in the row-major case without writing the code twice. Or is this
>> inevitable? Just transposing doesn't cut it as that would interchange
>> L and U.transpose() and they're not interchangeable (L has 1's on the
>> diagonal).
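The reason plain transposition swaps the factors can be written out explicitly (standard linear algebra, notation mine, not from the thread):

```latex
% Doolittle LU: L has a unit diagonal, U carries the pivots.
P A = L U
\;\Longrightarrow\;
A^T P^T = U^T L^T .
% Here U^T is lower triangular but has the pivots on its diagonal,
% while L^T is unit *upper* triangular: the unit diagonal sits on the
% wrong factor (a Crout-style factorization of A^T P^T).
% Setting D = \mathrm{diag}(U) and renormalizing,
U^T = \tilde{L} D,
\qquad
A^T P^T = \tilde{L}\,\bigl(D L^T\bigr),
% with \tilde{L} = U^T D^{-1} unit lower triangular and D L^T upper
% triangular, recovers a Doolittle LU -- i.e. transposing the stored
% LU matrix as well and rescaling by the pivots makes it work.
```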
>
> You got me thinking :)
>
> It's not that hopeless, just very hard for my limited brain to think.
> If we also transpose the LU matrix here, all should be fine. So
> transposing should be one possible way of doing this.
>
> Another, even more "agnostic" way would be to introduce
> "storage-order" versions of rows/cols/blocks/coeff access methods.
> These would ignore the actual storage order and instead work as if the
> storage order were always col-major. Then I could take my LU algorithm
> that uses column operations and do everything with these methods and
> the result would be the same, only it would be equally optimized
> regardless of storage order.

This is what I did in the Sparse module: with a col-major sparse
matrix there is no way to loop over the elements of a row, whence the
outerSize(), innerSize(), innerVector(), etc. functions.
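A minimal sketch of the idea (not Eigen's actual API; the names Mat and coeffByOuterInner are invented here for illustration): coefficient access is expressed in "inner"/"outer" coordinates, where the inner direction is rows for a col-major matrix and columns for a row-major one, so one loop nest is cache-friendly for both storage orders.

```cpp
#include <vector>

// Hypothetical dense matrix with storage-order-agnostic access.
struct Mat {
    int rows, cols;
    bool rowMajor;
    std::vector<double> data;  // one contiguous buffer

    // Inner = the direction that is contiguous in memory.
    int innerSize() const { return rowMajor ? cols : rows; }
    int outerSize() const { return rowMajor ? rows : cols; }

    // Consecutive 'inner' indices are adjacent in memory for
    // either storage order.
    double& coeffByOuterInner(int outer, int inner) {
        return data[outer * innerSize() + inner];
    }
};

// Example algorithm written once against the agnostic interface:
// it traverses memory contiguously regardless of storage order.
double sum(Mat& m) {
    double s = 0;
    for (int o = 0; o < m.outerSize(); ++o)
        for (int i = 0; i < m.innerSize(); ++i)
            s += m.coeffByOuterInner(o, i);
    return s;
}
```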

This is also similar to what is done in the matrix product: the
function takes a pointer/stride pair and assumes the matrices are
col-major. The caller takes care of transposing/swapping the arguments
when needed.
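A sketch of that convention (a hypothetical kernel, not Eigen's real product code): the kernel only ever sees pointer + stride and assumes col-major storage; a row-major caller exploits (A*B)^T = B^T * A^T by swapping operands and dimensions instead of copying data.

```cpp
// Naive C = A * B, all operands assumed col-major.
// lda/ldb/ldc = distance between consecutive columns.
void gemm_colmajor(const double* A, int lda,
                   const double* B, int ldb,
                   double* C, int ldc,
                   int m, int n, int k) {
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < m; ++i) {
            double s = 0;
            for (int p = 0; p < k; ++p)
                s += A[i + p * lda] * B[p + j * ldb];
            C[i + j * ldc] = s;
        }
}
// A row-major m x k matrix with stride lda is exactly a col-major
// k x m matrix with the same pointer and stride. So a caller holding
// row-major data computes C^T = B^T * A^T by swapping the operand
// pointers and the m/n dimensions; the result comes out row-major.
```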

> I have yet to decide between the 2 approaches, transposing is more
> high-level and elegant and safe and requires no changes to the core of
> Eigen, but the other approach will give faster compilation and will
> allow writing algorithms like LU without having to act in two different
> ways (transpose or not) depending on storage order.
>
> Benoit