Re: [eigen] Rigid Transformations in eigen: discussion thread
- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: Re: [eigen] Rigid Transformations in eigen: discussion thread
- From: Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
- Date: Fri, 18 Sep 2009 07:56:12 -0400
2009/9/18 Rohit Garg <rpg.314@xxxxxxxxx>:
>>> Another quite simple solution is to add a second template argument
>>> being the type of the coeffs. By default it would be a Vector8. For
>>> instance operator* would be implemented as:
>>>
>>> DualQuaternion<Scalar, CwiseUnaryOp<ei_scalar_multiple_op<Scalar>,
>>> CoeffsTypeOfThis> > operator*(Scalar s)
>>> {
>>> return s * m_coeffs;
>>> }
>>>
>>>
>>> same for operator+:
>>>
>>> template<typename OtherCoeffsType>
>>> DualQuaternion<Scalar, CwiseBinaryOp<ei_scalar_sum_op<Scalar>,
>>> CoeffsTypeOfThis, OtherCoeffsType> >
>>> operator+(const DualQuaternion<Scalar, OtherCoeffsType>& other)
>>> {
>>> return m_coeffs + other.m_coeffs;
>>> }
>>>
>>> Then you add a ctor and operator= like this:
>>>
>>> template<typename OtherCoeffsType>
>>> DualQuaternion& operator=(const DualQuaternion<Scalar, OtherCoeffsType>& other)
>>> {
>>> m_coeffs = other.m_coeffs;
>>> return *this;
>>> }
>>>
>>>
>>> and you're almost done: you still have to take care to evaluate the
>>> coeffs if needed in the compositing operator* and similar functions.
>>
>> Excellent, somehow that possibility totally escaped me.
>>
>> This solution also fixes another of Rohit's problems, by the way: how
>> to map existing data as a Quaternion (there already was the
>> reinterpret_cast trick, but now there is also the possibility of using
>> Map<....> as the data type).
>
> Wait, we saw that fixed size vectorizable types don't play nice with
> Map. You saw that a Vector4i wasn't getting vectorized when I mapped a
> block of memory to it.
Yes, there is a limitation with Map vectorization at the moment, so
reinterpret_cast is still useful; I'm just letting you know that using
Map will at least be possible as another solution.
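
For reference, here is roughly what the two options look like with a
plain fixed-size vector (just a sketch of the idea, with the usual
alignment and strict-aliasing caveats for the cast; the same would
apply to the coeffs of a quaternion class):

#include <Eigen/Core>
using namespace Eigen;

int main()
{
  // 16-byte aligned storage, as required by the reinterpret_cast trick
  // (and by aligned, vectorized loads in general).
  EIGEN_ALIGN16 float data[4] = {1.f, 2.f, 3.f, 4.f};

  // The reinterpret_cast trick: view the existing memory as a Vector4f.
  Vector4f& v1 = *reinterpret_cast<Vector4f*>(data);
  v1 *= 2.f;                 // operates directly on 'data'

  // The Map alternative: same memory, no cast, but with the
  // vectorization limitation discussed above.
  Map<Vector4f> v2(data);
  v2 += Vector4f::Ones();    // also operates directly on 'data'

  return 0;
}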
>>
>> Rohit, just in case this isn't clear from Gael's email, he's
>> suggesting that the data type of the "Vector8 m_coeffs;" data member
>> can be freely chosen, so instead of a "plain vector" type as Vector8
>> you can have any expression type. So your operator+ may now return a
>> DualQuaternion where the underlying Vector8 type is now a "sum of two
>> Vectors" expression. That brings all the ETs to DualQuaternion.
>
> I don't understand. If I just return say m_coeffs + other.m_coeffs,
> wouldn't that bring et to dualquats?
It would be inconvenient: m_coeffs + other.m_coeffs is just a plain
vector expression, not a DualQuaternion, so doing
(dualquat1 + dualquat2).someDualQuatMethod()
wouldn't work. And more generally, one wishes that doing dualquat1 +
dualquat2 returns a "dualquat expression", something that really
behaves like a dualquat.
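
To make this concrete, here is a rough sketch of the idea (just an
illustration, not real Eigen API: DualQuaternion, makeDualQuat and the
use of norm() are made-up names, and I use auto return types simply to
avoid spelling out Eigen's internal expression types):

#include <Eigen/Core>
using namespace Eigen;

template<typename Scalar, typename CoeffsType> class DualQuaternion;

// Helper deducing the coefficients type, so we never have to name
// CwiseBinaryOp, CwiseUnaryOp, etc. explicitly.
template<typename Scalar, typename CoeffsExpr>
DualQuaternion<Scalar, CoeffsExpr> makeDualQuat(const CoeffsExpr& coeffs)
{
  return DualQuaternion<Scalar, CoeffsExpr>(coeffs);
}

template<typename Scalar, typename CoeffsType = Matrix<Scalar, 8, 1> >
class DualQuaternion
{
public:
  CoeffsType m_coeffs;

  DualQuaternion() {}
  explicit DualQuaternion(const CoeffsType& coeffs) : m_coeffs(coeffs) {}

  // Returns a DualQuaternion whose coefficients type is the
  // "sum of two vectors" expression: no 8-vector temporary is created here.
  template<typename OtherCoeffsType>
  auto operator+(const DualQuaternion<Scalar, OtherCoeffsType>& other) const
  {
    return makeDualQuat<Scalar>(m_coeffs + other.m_coeffs);
  }

  // Scaling likewise returns an expression-backed DualQuaternion.
  auto operator*(Scalar s) const
  {
    return makeDualQuat<Scalar>(s * m_coeffs);
  }

  // Assigning from any DualQuaternion expression evaluates it into
  // plain storage.
  template<typename OtherCoeffsType>
  DualQuaternion& operator=(const DualQuaternion<Scalar, OtherCoeffsType>& other)
  {
    m_coeffs = other.m_coeffs;
    return *this;
  }

  // Because operator+ returns a DualQuaternion (not a bare coefficients
  // expression), any method defined here is usable on the result directly.
  Scalar norm() const { return m_coeffs.norm(); }
};

int main()
{
  DualQuaternion<float> dq1, dq2, dq3;
  dq1.m_coeffs.setRandom();
  dq2.m_coeffs.setRandom();

  float n = (dq1 + dq2).norm(); // the sum already behaves like a dualquat
  dq3 = dq1 + dq2;              // the expression is evaluated only here
  (void)n;
  return 0;
}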
Benoit