Re: [eigen] Rigid Transformations in eigen: discussion thread
• To: eigen@xxxxxxxxxxxxxxxxxxx
• Subject: Re: [eigen] Rigid Transformations in eigen: discussion thread
• From: Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
• Date: Fri, 18 Sep 2009 07:56:12 -0400

```
2009/9/18 Rohit Garg <rpg.314@xxxxxxxxx>:
>>> Another quite simple solution is to add a second template argument
>>> being the type of the coeffs. By default it would be a Vector8_. For
>>> instance operator* would be implemented as:
>>>
>>> DualQuaternion<Scalar, CwiseUnaryOp<ei_scalar_multiple_op<Scalar>,
>>> CoeffsTypeOfThis> > operator*(Scalar s)
>>> {
>>>  return s * m_coeffs;
>>> }
>>>
>>>
>>> same for operator+:
>>>
>>> template<typename OtherCoeffsType>
>>> DualQuaternion<Scalar, CwiseBinaryOp<ei_scalar_sum_op<Scalar>,
>>> CoeffsTypeOfThis, OtherCoeffsType> >
>>> operator+(const DualQuaternion<Scalar, OtherCoeffsType>& other)
>>> {
>>>  return m_coeffs + other.m_coeffs;
>>> }
>>>
>>> Then you add a ctor and operator= like this:
>>>
>>> template<typename OtherCoeffsType>
>>> DualQuaternion& operator=(const DualQuaternion<Scalar, OtherCoeffsType>& other)
>>> {
>>>  m_coeffs = other.m_coeffs;
>>>  return *this;
>>> }
>>>
>>>
>>> and you're almost done: you still have to take care to evaluate the
>>> coeffs if needed in the compositing operator* and similar functions.
>>
>> Excellent, somehow that possibility totally escaped me.
>>
>> By the way, this solution also fixes another of Rohit's problems: how
>> to map existing data as a Quaternion (there was already the
>> reinterpret_cast trick, but now there is also the possibility of using
>> Map<....> as the data type).
>
> Wait, we saw that fixed size vectorizable types don't play nice with
> Map. You saw that a Vector4i wasn't getting vectorized when I mapped a
> block of memory to it.

Yes, there is a limitation with Map vectorization at the moment, so
reinterpret_cast is still useful; I'm just letting you know that using
Map will at least be possible as another solution.

>>
>> Rohit, just in case this isn't clear from Gael's email, he's
>> suggesting that the data type of the "Vector8 m_coeffs;" data member
>> can be freely chosen, so instead of a "plain vector" type as Vector8
>> you can have any expression type. So your operator+ may now return a
>> DualQuaternion where the underlying Vector8 type is now a "sum of two
>> Vectors" expression. That brings all the ETs to DualQuaternion.
>
> I don't understand. If I just return say m_coeffs + other.m_coeffs,
> wouldn't that bring et to dualquats?

It would be inconvenient, because then

(dualquat1 + dualquat2).someDualQuatMethod()

wouldn't compile: the sum would be a plain vector expression, without
the dual quaternion interface. More generally, one wants dualquat1 +
dualquat2 to return a "dualquat expression", something that really
behaves like a dualquat.

Benoit

```

 Mail converted by MHonArc 2.6.19+ http://listengine.tuxfamily.org/