Re: [eigen] SelfAdjointView<> as LHS expression for optimizations?
- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: Re: [eigen] SelfAdjointView<> as LHS expression for optimizations?
- From: Jose Luis Blanco <joseluisblancoc@xxxxxxxxx>
- Date: Tue, 9 Nov 2010 01:18:01 +0100
Hi Gael, Benoit,
Thanks for the pointers!
On second thought, I also think an RHS approach would be clearer and
probably easier to implement as well.
I like the
result = (H * C * H.transpose()).assumeSymmetric();
approach, since it would be more generic than an operation specific to H*C*H^t.
As for twist() as a method, it would probably be the easiest way to
go, although the operators I know as "twists" are for converting
between SO(3) and so(3) and the like, so I don't see the resemblance
at first sight...
If you give me some basic starting points, I could *try* to implement
such a C.twist(H) method.
Naively I would write it as part of MatrixBase<>, along the lines of
http://reference.mrpt.org/0.9.1/ops__matrices_8h-source.html#l00469
, but it can obviously be done much better with Eigen, so please point
me to similar methods already in Eigen and/or docs I can study to
implement it more efficiently, taking advantage of vectorization...
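To make the idea concrete, here is a rough sketch of the kind of helper I
have in mind, using only plain Eigen (the function name and the explicit
loop are purely illustrative, not a proposal for the actual API): it computes
only the lower triangle of H*C*H^t and then mirrors it, so the result is
exactly symmetric by construction.

#include <Eigen/Dense>

// Illustrative helper (not an Eigen API): R = H * C * H^T with C symmetric.
// Only the on-and-below-diagonal entries of the second product are computed
// explicitly; the upper triangle is filled in by mirroring, so the returned
// matrix is exactly symmetric.
Eigen::MatrixXd symmetricSandwich(const Eigen::MatrixXd& H,
                                  const Eigen::MatrixXd& C)
{
  // First product: exploit the symmetry of C.
  const Eigen::MatrixXd T = H * C.selfadjointView<Eigen::Lower>();

  const Eigen::Index n = H.rows();
  Eigen::MatrixXd R(n, n);

  // Second product: compute only half of the entries.
  for (Eigen::Index j = 0; j < n; ++j)
    for (Eigen::Index i = j; i < n; ++i)
      R(i, j) = T.row(i).dot(H.row(j));

  // Mirror the lower triangle into the upper one.
  Eigen::MatrixXd result = R.selfadjointView<Eigen::Lower>();
  return result;
}

Of course the plain loop in the second product throws away vectorization,
which is exactly the part I would like Eigen's own product kernels to handle.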
In the field I work in, a very common operation in inner loops is
H*C*H^t, with C a covariance matrix and H a Jacobian, which is why I'm
interested in optimizing this operation as much as possible.
JL
On Tue, Nov 9, 2010 at 12:07 AM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:
> 2010/11/8 Gael Guennebaud <gael.guennebaud@xxxxxxxxx>:
>> On Fri, Nov 5, 2010 at 8:49 PM, Jose Luis Blanco
>> <joseluisblancoc@xxxxxxxxx> wrote:
>>> Hi all,
>>>
>>> The operation I have in mind is:
>>> RESULT = H * C * H.transpose()
>>> with C symmetric and square.
>>>
>>> The point is that two "problems" (well... I have to admit, small
>>> ones!) occur with a naive Eigen expression "H * C * H.transpose()":
>>>
>>> 1) Many multiplications could be saved by realizing that the result is
>>> exactly symmetric. I know the implications may be far reaching, but,
>>> could SelfAdjointView<> be extended so it can be used to store
>>> results? I mean, something like:
>>>
>>> Eigen::MatrixXd RESULT;
>>> RESULT.selfadjointView<Eigen::Lower>() =
>>> H*C.selfadjointView<Eigen::Lower>()*H.transpose();
>>>
>>> or an alternative that wouldn't require SelfAdjointView<> to support
>>> operator= on the LHS:
>>>
>>> Eigen::MatrixXd RESULT =
>>> (H*C.selfadjointView<Eigen::Lower>()*H.transpose()).selfadjointView<Eigen::Lower>();
>>>
>>> Any ideas about the difficulty of implementing any of these two ideas?
>>> Or... is there anything similar already done?
>
> I would say that a really good API here would only be about the
> right-hand side, since the left-hand side is irrelevant to the fact
> that one can do this optimization. We could either have a specialized
> method for that (that's the simplest to implement), which in
> mathematicians' jargon could be called twist(), like
>
> result = C.twist(H)
>
> Or, once bug 99 is implemented, we could modify the product expression
> after it's constructed to tell Eigen that we know it's symmetric:
>
> result = (H * C * H.transpose()).assumeSymmetric();
>
> In any case I would prefer to try solutions involving only the RHS
> before we look at an LHS API.
>
> Benoit
>
>>
>> Good question. Actually I don't see any particular optimizations besides
>> computing only half of the second product. Basically, this would be
>> implemented as follows:
>>
>> Temp = H * C.selfadjointView<Lower>();
>> Result.triangularView<Lower>() = Temp * H.adjoint();
>>
>> The second line currently does not work; however, it would be very easy
>> to support by extending our internal selfadjoint product routine to
>> compute half of a general A*B product instead of handling only AA* or
>> A*A. Actually, that routine cannot exploit the fact that both sides come
>> from the same matrix anyway....
>>
>> Regarding the API we could simply let the user write:
>>
>> Result.triangularView<Lower>() = H * C.selfadjointView<Lower>() * H.adjoint();
>>
>> For consistency reasons,
>>
>> Result.selfadjointView<Lower>() = H * C.selfadjointView<Lower>() * H.adjoint();
>>
>> would do the same.
>>
>> gael
>>
>>>
>>> 2) Apart from efficiency, there is the numerical accuracy problem of
>>> the resulting matrix not being *exactly* symmetric... in floating point,
>>> a*b*c != c*b*a on most architectures. OK, the errors are tiny, but they
>>> may lead to inconsistencies that some time ago affected the Cholesky
>>> implementation I used to use... that's why I worry about all this in
>>> the first place!
>>>
>>>
>>> Best,
>>> JL
>>>