- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: Re: [eigen] SelfAdjointView<> as LHS expression for optimizations?
- From: Gael Guennebaud <gael.guennebaud@xxxxxxxxx>
- Date: Tue, 9 Nov 2010 10:50:45 +0100
OK, regarding the API there are two use cases depending on the LHS:
either you want the full matrix, or only one half. The former is just
an extension of the latter with an additional selfadjointView-to-dense
copy.
Before inventing a new API, let's see what we already have:
1 - X = (A+A).triangularView<Lower>();
    evaluates the lower part of A+A into X and sets the strict upper part of X to 0
2 - X.triangularView<Lower>() = A+A;
    evaluates the lower part of A+A into X and leaves the strict upper part of X unchanged
3 - X.triangularView<Lower>() = (A+A).triangularView<Lower>();
    a more explicit version of 2
4 - X = (A+A).selfadjointView<Lower>();
    evaluates the lower part of A+A into X and copies the adjoint of its strict lower part into its strict upper part
5 - X.selfadjointView<Lower>() = A;
6 - X.selfadjointView<Lower>() = (A+A).selfadjointView<Lower>();
5 and 6 are not allowed; you have to use 2 or 3 instead.
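For reference, here is a minimal self-contained sketch of cases 1 to 4
(assuming Eigen 3; just an illustration, nothing clever going on):

  #include <Eigen/Dense>
  using namespace Eigen;

  int main()
  {
    MatrixXd A = MatrixXd::Random(4,4);
    MatrixXd X(4,4);

    X = (A+A).triangularView<Lower>();                          // case 1
    X.triangularView<Lower>() = A+A;                            // case 2
    X.triangularView<Lower>() = (A+A).triangularView<Lower>();  // case 3
    X = (A+A).selfadjointView<Lower>();                         // case 4
    return 0;
  }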
So, to be consistent, we should allow all of the above with A+A replaced
by H*H', H*C*H', or any A*B product. No need to introduce any
assumeSelfadjoint().
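Concretely, the kind of usage this would enable is something like the
following (a sketch of the proposed API, not something that compiles
today):

  // proposed: compute only the lower half of the symmetric product H*C*H'
  X.triangularView<Lower>() = H * C.selfadjointView<Lower>() * H.adjoint();
  // proposed: same, but also mirror the result into the upper half of X
  X = (H * C.selfadjointView<Lower>() * H.adjoint()).selfadjointView<Lower>();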
Now, currently we also have a special function for X += alpha * H * H'
where we want to update only one half of X:
X.selfadjointView<Lower>().rankUpdate(H,alpha);
which would be equivalent to:
X.triangularView<Lower>() += alpha * H * H.adjoint();
Such a special method is quite useful to avoid writing complex block
expressions twice, though the current rankUpdate API is quite
inconsistent with your proposal.
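For completeness, here is the existing rankUpdate() route in a small
compilable example (assuming Eigen 3):

  #include <Eigen/Dense>
  using namespace Eigen;

  int main()
  {
    MatrixXd X = MatrixXd::Zero(4,4);
    MatrixXd H = MatrixXd::Random(4,3);
    double alpha = 2.0;

    // accumulates alpha * H * H.adjoint() into the lower triangle of X;
    // the strict upper triangle of X is left untouched
    X.selfadjointView<Lower>().rankUpdate(H, alpha);
    return 0;
  }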
Also, like Jose, I have never heard "twist" used to refer to such
transformations, but I have heard of "similarity transformation",
since X = H A H' is the definition of X and A being similar matrices.
Besides the function name, I don't clearly see why A.twist(B) should
mean B * A * B' rather than A * B * A'. So overall, in this case I'm
quite reluctant to introduce a special function.
gael
On Tue, Nov 9, 2010 at 12:07 AM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:
> 2010/11/8 Gael Guennebaud <gael.guennebaud@xxxxxxxxx>:
>> On Fri, Nov 5, 2010 at 8:49 PM, Jose Luis Blanco
>> <joseluisblancoc@xxxxxxxxx> wrote:
>>> Hi all,
>>>
>>> The operation I have in mind is:
>>> RESULT = H * C * H.transpose()
>>> with C symmetric and square.
>>>
>>> The point is that two "problems" (well... I have to admit, small
>>> ones!) occur with a naive Eigen expression "H * C * H.transpose()":
>>>
>>> 1) Many multiplications could be saved by realizing that the result is
>>> exactly symmetric. I know the implications may be far reaching, but,
>>> could SelfAdjointView<> be extended so it can be used to store
>>> results? I mean, something like:
>>>
>>> Eigen::MatrixXd RESULT;
>>> RESULT.selfadjointView<Eigen::Lower>() =
>>> H*C.selfadjointView<Eigen::Lower>()*H.transpose();
>>>
>>> or an alternative that wouldn't require operator= on SelfAdjointView<> as the LHS:
>>>
>>> Eigen::MatrixXd RESULT =
>>> (H*C.selfadjointView<Eigen::Lower>()*H.transpose()).selfadjointView<Eigen::Lower>();
>>>
>>> Any ideas about the difficulty of implementing any of these two ideas?
>>> Or... is there anything similar already done?
>
> I would say that a really good API here would only be about the
> right-hand side, since the left-hand side is irrelevant to the fact
> that one can do this optimization. We could either (that's the
> simplest to implement) have a specialized method for that, which
> according to mathematicians' jargon could be called twist(), like
>
> result = C.twist(H)
>
> Or, once bug 99 is implemented, we could modify the product expression
> after it's constructed to tell Eigen that we know it's symmetric:
>
> result = (H * C * H.transpose()).assumeSymmetric();
>
> In any case I would prefer to try solutions involving only the RHS
> before we look at a LHS API.
>
> Benoit
>
>>
>> Good question. Actually I don't see any particular optimization besides
>> computing only half of the second product. Basically, this would be
>> implemented as follows:
>>
>> Temp = H * C.selfadjointView<Lower>();
>> Result.triangularView<Lower>() = Temp * H.adjoint();
>>
>> The second line currently does not work; however, this is very easy to
>> support by extending our internal selfadjoint product routine to compute
>> half of a general A*B product instead of handling only A*A' or A'*A.
>> Actually, this routine cannot exploit the fact that both sides come
>> from the same matrix anyway....
>>
>> Regarding the API we could simply let the user write:
>>
>> Result.triangularView<Lower>() = H * C.selfadjointView<Lower>() * H.adjoint();
>>
>> For consistency reasons,
>>
>> Result.selfadjointView<Lower>() = H * C.selfadjointView<Lower>() * H.adjoint();
>>
>> would do the same.
>>
>> gael
>>
>>>
>>> 2) Apart from efficiency, there is the numerical accuracy problem of
>>> the resulting matrix not being *exactly* symmetric... a*b*c != c*b*a
>>> on most architectures. OK, the errors are tiny, but they may lead to
>>> inconsistencies like the ones that some time ago affected the Cholesky
>>> implementation I used to use... that's why I worry about all this in
>>> the first place!
>>>
>>>
>>> Best,
>>> JL