Re: [eigen] SelfAdjointView<> as LHS expression for optimizations?



OK, regarding the API there are two use cases depending on the lhs:
either you want the full matrix, or only one half. The former is just
an extension of the latter with an additional selfadjointView-to-dense
copy.

Before inventing a new API let's see what we already have:

1 - X = (A+A).triangularView<Lower>();

evaluates the lower part of A+A into X and sets the strict upper part of X to 0


2 -  X.triangularView<Lower>() = A+A;

evaluates the lower part of A+A into X and leaves the strict upper part
of X unchanged


3 -  X.triangularView<Lower>() = (A+A).triangularView<Lower>();

a more explicit version of 2


4 -  X = (A+A).selfadjointView<Lower>();

evaluates the lower part of A+A into X and copies the adjoint of its
strict lower part into its strict upper part

5 - X.selfadjointView<Lower>() = A;
6 - X.selfadjointView<Lower>() = (A+A).selfadjointView<Lower>();

5 and 6 are not allowed. You have to use 2 or 3 instead.
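
For reference, here is a minimal compilable sketch of cases 1, 2 and 4
(A and X are just placeholder names, and this assumes the Eigen 3
semantics described above):

#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::Matrix3d A = Eigen::Matrix3d::Random();
  Eigen::Matrix3d X = Eigen::Matrix3d::Constant(7.0);

  // case 1: lower part of A+A copied into X, strict upper part of X set to 0
  X = (A + A).triangularView<Eigen::Lower>();

  // case 2: lower part of A+A copied into X, strict upper part of X untouched
  X = Eigen::Matrix3d::Constant(7.0);
  X.triangularView<Eigen::Lower>() = A + A;

  // case 4: lower part of A+A copied into X and its adjoint mirrored into
  // the strict upper part, so X ends up exactly selfadjoint
  X = (A + A).selfadjointView<Eigen::Lower>();

  std::cout << X << std::endl;
}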



So, to be consistent, we should allow all of the above with A+A replaced
by either H*H' or H*C*H' or any A * B product. No need to
introduce any assumeSelfadjoint().

Now, currently we also have a special function for X += alpha * H * H'
 where we want to update only one half of X:

X.selfadjointView<Lower>().rankUpdate(H,alpha);

which would be equivalent to:

X.triangularView<Lower>() += alpha * H * H.adjoint();

A special method is quite useful to avoid writing complex block
expressions twice, though the current rankUpdate API is quite
inconsistent with your proposal.
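
As a small sketch of that equivalence (assuming current Eigen behaviour;
the dense reference below is only there for the comparison):

#include <iostream>
#include <Eigen/Dense>

int main()
{
  const int n = 4, k = 2;
  const double alpha = 0.5;
  Eigen::MatrixXd H = Eigen::MatrixXd::Random(n, k);
  Eigen::MatrixXd X = Eigen::MatrixXd::Zero(n, n);

  // the rank update writes only the lower half of X: X_lower += alpha * H * H'
  X.selfadjointView<Eigen::Lower>().rankUpdate(H, alpha);

  // reference computed densely, then only the lower triangles are compared
  Eigen::MatrixXd ref = alpha * H * H.adjoint();
  Eigen::MatrixXd lowerX = X.triangularView<Eigen::Lower>();
  Eigen::MatrixXd lowerRef = ref.triangularView<Eigen::Lower>();
  std::cout << "max lower-triangle difference: "
            << (lowerX - lowerRef).cwiseAbs().maxCoeff() << std::endl;
}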

Also, like Jose, I've never heard "twist" used to refer to such
transformations, but I have heard of "similarity transformation",
since X = H A H' is the definition of X and A being similar matrices.
Besides the function name, I don't clearly see why A.twist(B) would
mean B * A * B' rather than A * B * A'. So overall, in this case I'm
quite reluctant to introduce a special function.


gael

On Tue, Nov 9, 2010 at 12:07 AM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:
> 2010/11/8 Gael Guennebaud <gael.guennebaud@xxxxxxxxx>:
>> On Fri, Nov 5, 2010 at 8:49 PM, Jose Luis Blanco
>> <joseluisblancoc@xxxxxxxxx> wrote:
>>> Hi all,
>>>
>>> The operation I have in mind is:
>>>  RESULT = H * C * H.transpose()
>>> with C symmetric and square.
>>>
>>> The point is that two "problems" (well... I have to admit, small
>>> ones!) occur with a naive Eigen expression "H * C * H.transpose()":
>>>
>>> 1) Many multiplications could be saved by realizing that the result is
>>> exactly symmetric. I know the implications may be far reaching, but,
>>> could SelfAdjointView<> be extended so it can be used to store
>>> results? I mean, something like:
>>>
>>> Eigen::MatrixXd RESULT;
>>> RESULT.selfadjointView<Eigen::Lower>() =
>>> H*C.selfadjointView<Eigen::Lower>()*H.transpose();
>>>
>>> or an alternative that wouldn't require an LHS operator= for SelfAdjointView<>:
>>>
>>> Eigen::MatrixXd RESULT =
>>> (H*C.selfadjointView<Eigen::Lower>()*H.transpose()).selfadjointView<Eigen::Lower>();
>>>
>>> Any ideas about the difficulty of implementing any of these two ideas?
>>> Or... is there anything similar already done?
>
> I would say that a really good API here would only be about the
> right-hand side, since the left-hand side is irrelevant to the fact
> that one can do this optimization. We could either (that's the
> simplest to implement) have a specialized method for that, which
> according to mathematicians' jargon could be called twist(), like
>
>   result = C.twist(H)
>
> Or, once bug 99 is implemented, we could modify the product expression
> after it's constructed to tell Eigen that we know it's symmetric:
>
>   result  =  (H * C * H.transpose()).assumeSymmetric();
>
> In any case I would prefer to try solutions involving only the RHS
> before we look at a LHS API.
>
> Benoit
>
>>
>> Good question. Actually I don't see any particular optimizations besides
>> computing only half of the second product. Basically, this would be
>> implemented as follows:
>>
>> Temp = H * C.selfadjointView<Lower>();
>> Result.triangularView<Lower>() = Temp * H.adjoint();
>>
>> The second line currently does not work; however, this is very easy to
>> add by extending our internal selfadjoint product routine to compute
>> half of a general A*B product instead of handling only AA* or A*A.
>> Actually this routine cannot exploit the fact that both sides come
>> from the same matrix anyway....
>>
>> Regarding the API we could simply let the user write:
>>
>> Result.triangularView<Lower>() = H * C.selfadjointView<Lower>() * H.adjoint();
>>
>> For consistency reasons,
>>
>> Result.selfadjointView<Lower>() = H * C.selfadjointView<Lower>() * H.adjoint();
>>
>> would do the same.
>>
>> gael
>>
>>>
>>> 2) Apart from efficiency, there is the numerical accuracy problem of
>>> the resulting matrix not being *exactly* symmetric... a*b*c != c*b*a
>>> on most architectures. OK, the errors are tiny, but they may lead to
>>> inconsistencies that some time ago affected the Cholesky
>>> implementation I used to use... that's why I worry about all this in
>>> the first place!
>>>
>>>
>>> Best,
>>> JL
>>>
>>>
>>>
>>
>>
>>
>
>
>


