Re: [eigen] SelfAdjointView<> as LHS expression for optimizations?



The implementation is in, and the following now takes roughly 0.75x the time (checked):

R.triangularView<...>() = (H * A.selfadjointView<...>()).eval() * H.adjoint();
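
A minimal, self-contained sketch of how the line above could be exercised
(the sizes, names, and final check below are illustrative assumptions, not
taken from this thread):

  #include <Eigen/Dense>
  #include <iostream>

  int main()
  {
    const int n = 6, m = 4;                        // arbitrary example sizes
    Eigen::MatrixXd H  = Eigen::MatrixXd::Random(m, n);
    Eigen::MatrixXd A0 = Eigen::MatrixXd::Random(n, n);
    Eigen::MatrixXd A  = A0 + A0.transpose();      // symmetric; only its lower half is read

    // Only the lower triangle of R = H * A * H^T is computed and written.
    Eigen::MatrixXd R = Eigen::MatrixXd::Zero(m, m);
    R.triangularView<Eigen::Lower>() =
        (H * A.selfadjointView<Eigen::Lower>()).eval() * H.adjoint();

    // Mirror the computed half and compare against the plain product.
    R.triangularView<Eigen::StrictlyUpper>() = R.adjoint();
    Eigen::MatrixXd Ref = H * A * H.transpose();
    std::cout << (R - Ref).norm() << std::endl;    // ~machine precision
  }

Note that the assignment touches only the lower triangle, so the explicit
mirroring step is needed whenever the full matrix is required afterwards.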

gael

On Tue, Nov 9, 2010 at 9:20 AM, Gael Guennebaud
<gael.guennebaud@xxxxxxxxx> wrote:
> For the implementation, see what I wrote above; it is just a matter of
> a few cosmetic changes in internal::selfadjoint_product, so don't
> worry!
>
> gael
>
> On Tue, Nov 9, 2010 at 1:18 AM, Jose Luis Blanco
> <joseluisblancoc@xxxxxxxxx> wrote:
>> Hi Gael, Benoit,
>>
>> Thanks for the pointers!
>>
>> On second thought, I also think an RHS-only approach would be
>> clearer and perhaps also easier to implement.
>>
>> I like the:
>>  result  =  (H * C * H.transpose()).assumeSymmetric();
>>
>> method, since it would be more generic than a dedicated H*C*H^t operation.
>>
>> As for twist() as a method, it would probably be the easiest way to
>> go, although the operators I know as "twists" are for converting between
>> SO(3) and so(3) and the like, and I don't see the resemblance at first
>> sight...
>>
>> If you give me some basic starting points, I could *try* to
>> implement such a "C.twist(H)" method.
>> Naively, I would write it as part of MatrixBase<> as something
>> like http://reference.mrpt.org/0.9.1/ops__matrices_8h-source.html#l00469
>> , but obviously it can be done much better with Eigen, so please point
>> me to other "similar" methods already in Eigen and/or docs I can study
>> to implement it more efficiently, taking advantage of vectorization...
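
For illustration, a rough sketch of such a hand-rolled routine (hypothetical
code, not the MRPT function at the URL above): it fills only the lower
triangle of R = H*C*H^t, assuming C is symmetric, and mirrors it afterwards:

  #include <Eigen/Dense>

  Eigen::MatrixXd naiveHCHt(const Eigen::MatrixXd& H, const Eigen::MatrixXd& C)
  {
    const int m = static_cast<int>(H.rows());
    const Eigen::MatrixXd T = H * C;                 // intermediate product H*C
    Eigen::MatrixXd R(m, m);
    for (int i = 0; i < m; ++i)
      for (int j = 0; j <= i; ++j)                   // lower half only
        R(i, j) = R(j, i) = T.row(i).dot(H.row(j));  // (H*C*H^t)(i,j), mirrored
    return R;
  }

Eigen's blocked, vectorized product kernels should easily outperform this
kind of coefficient-level loop, hence the interest in an expression-level
solution.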
>>
>> In the field I work in, a very common operation in inner loops is
>> H*C*H^t, with C a covariance matrix and H a Jacobian; that is why I am
>> so interested in optimizing this operation as much as possible.
>>
>> JL
>>
>>
>> On Tue, Nov 9, 2010 at 12:07 AM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:
>>> 2010/11/8 Gael Guennebaud <gael.guennebaud@xxxxxxxxx>:
>>>> On Fri, Nov 5, 2010 at 8:49 PM, Jose Luis Blanco
>>>> <joseluisblancoc@xxxxxxxxx> wrote:
>>>>> Hi all,
>>>>>
>>>>> The operation I have in mind is:
>>>>>  RESULT = H * C * H.transpose()
>>>>> with C symmetric and square.
>>>>>
>>>>> The point is that two "problems" (well... I have to admit, small
>>>>> ones!) occur with a naive Eigen expression "H * C * H.transpose()":
>>>>>
>>>>> 1) Many multiplications could be saved by realizing that the result is
>>>>> exactly symmetric. I know the implications may be far-reaching, but
>>>>> could SelfAdjointView<> be extended so it can be used to store
>>>>> results? I mean, something like:
>>>>>
>>>>> Eigen::MatrixXd RESULT;
>>>>> RESULT.selfadjointView<Eigen::Lower>() =
>>>>> H*C.selfadjointView<Eigen::Lower>()*H.transpose();
>>>>>
>>>>> or an alternative that wouldn't require an assignment operator (LHS =) for SelfAdjointView<>:
>>>>>
>>>>> Eigen::MatrixXd RESULT =
>>>>> (H*C.selfadjointView<Eigen::Lower>()*H.transpose()).selfAdjointView<Eigen::Lower>();
>>>>>
>>>>> Any ideas about the difficulty of implementing any of these two ideas?
>>>>> Or... is there anything similar already done?
>>>
>>> I would say that a really good API here would only be about the
>>> right-hand side, since the left-hand side is irrelevant to the fact
>>> that one can do this optimization. We could either (that's the
>>> simplest to implement) have a specialized method for that, which,
>>> according to mathematicians' jargon, could be called twist(), like
>>>
>>>   result = C.twist(H)
>>>
>>> Or, once bug 99 is implemented, we could modify the product expression
>>> after it's constructed to tell Eigen that we know it's symmetric:
>>>
>>>   result  =  (H * C * H.transpose()).assumeSymmetric();
>>>
>>> In any case I would prefer to try solutions involving only the RHS
>>> before we look at a LHS API.
>>>
>>> Benoit
>>>
>>>>
>>>> Good question. Actually I don't see any particular optimizations besides
>>>> computing only half of the second product. Basically, this would be
>>>> implemented as follows:
>>>>
>>>> Temp = H * C.selfadjointView<Lower>();
>>>> Result.triangularView<Lower>() = Temp * H.adjoint();
>>>>
>>>> The second line currently does not work; however, this is very easy to
>>>> support by extending our internal selfadjoint product routine to compute
>>>> half of a general A*B product instead of handling only AA* or A*A.
>>>> Actually, that routine cannot exploit the fact that both sides come
>>>> from the same matrix anyway....
>>>>
>>>> Regarding the API, we could simply let the user write:
>>>>
>>>> Result.triangularView<Lower>() = H * C.selfadjointView<Lower>() * H.adjoint();
>>>>
>>>> For consistency reasons,
>>>>
>>>> Result.selfadjointView<Lower>() = H * C.selfadjointView<Lower>() * H.adjoint();
>>>>
>>>> would do the same.
>>>>
>>>> gael
>>>>
>>>>>
>>>>> 2) Apart from efficiency, there is the numerical accuracy problem of
>>>>> the resulting matrix not being *exactly* symmetric: in floating point,
>>>>> a*b*c != c*b*a on most architectures. OK, the errors are tiny, but they
>>>>> may lead to inconsistencies; some time ago such an inconsistency affected
>>>>> the Cholesky implementation I used to use... that's why I worry about
>>>>> all this in the first place!
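
A small sketch illustrating both the symptom and a generic workaround
(hypothetical code; the explicit 0.5*(P + P^t) symmetrization is a
general-purpose trick, not something proposed in this thread):

  #include <Eigen/Dense>
  #include <iostream>

  int main()
  {
    Eigen::MatrixXd H  = Eigen::MatrixXd::Random(4, 6);
    Eigen::MatrixXd C0 = Eigen::MatrixXd::Random(6, 6);
    Eigen::MatrixXd C  = C0 + C0.transpose();           // symmetric input

    // Naive product: mathematically symmetric, but rounding typically makes
    // the two triangles differ by a few ulps.
    Eigen::MatrixXd P = H * C * H.transpose();
    std::cout << (P - P.transpose()).cwiseAbs().maxCoeff() << std::endl;

    // Generic workaround: explicit symmetrization (written to a fresh matrix
    // to avoid aliasing with P.transpose()).
    Eigen::MatrixXd Psym = 0.5 * (P + P.transpose());
    std::cout << (Psym - Psym.transpose()).cwiseAbs().maxCoeff() << std::endl;  // exactly 0
  }

Computing only one triangle and mirroring it, as in the half-product
expressions discussed above, sidesteps the issue entirely, since the two
halves are then bitwise identical.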
>>>>>
>>>>>
>>>>> Best,
>>>>> JL
>>>>>
>>
>>
>>
>


