Re: [eigen] Rotations and unit vectors


On 07.05.2013 13:12, Hauke Heibel wrote:
> I understand these issues but personally, I am more concerned about the
> performance in release mode and prefer to have errors raised when I am
> using a library in the wrong way in debug mode.

Maybe we could introduce a new flag such as EIGEN_TOLERANCE_CHECKING or EIGEN_INPUT_CHECKING (similar to EIGEN_INTERNAL_DEBUGGING)? Then we could also check whether user-supplied rotation matrices are actually rotation matrices, etc.
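A minimal sketch of what such an opt-in check could look like (plain C++ without Eigen; the flag name is the one proposed above, the assert macro and helper are hypothetical):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical opt-in flag, as proposed above; would be off in release builds.
#define EIGEN_TOLERANCE_CHECKING 1

#if EIGEN_TOLERANCE_CHECKING
  #define eigen_tolerance_assert(x) assert(x)
#else
  #define eigen_tolerance_assert(x) ((void)0)
#endif

// Example input check: is a user-given 3-vector (approximately) unit length?
inline bool isApproxUnit(const double v[3], double eps = 1e-12) {
  double sq = v[0]*v[0] + v[1]*v[1] + v[2]*v[2];
  return std::abs(sq - 1.0) <= eps;
}
```

With the flag set to 0, the checks compile away entirely, so release-mode performance is untouched.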

> The issue about mutating functions is tricky. We have it in quite a few
> places in the Geometry module, e.g. Transform::linear()/matrix(). These
> mutating functions pretty much break the encapsulation of the classes.
> Fixing these (as opposed to removing them) is a little bit more
> complicated, since the only solution I see is the introduction of proxy
> objects that would do the checking, which already sounds messy. :)
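For what it's worth, a proxy could re-check the invariant when it goes out of scope, i.e. after the caller is done mutating. A rough sketch (all names hypothetical, plain C++ instead of Eigen types):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

inline bool isUnit(const Vec3& v, double eps = 1e-12) {
  double sq = v[0]*v[0] + v[1]*v[1] + v[2]*v[2];
  return std::abs(sq - 1.0) <= eps;
}

// Proxy handed out instead of a raw mutable reference: its destructor
// re-checks the unit-length invariant once the mutation is finished.
class AxisProxy {
  Vec3& axis_;
public:
  explicit AxisProxy(Vec3& a) : axis_(a) {}
  ~AxisProxy() { assert(isUnit(axis_) && "axis left non-unit after mutation"); }
  Vec3& get() { return axis_; }
};

class AngleAxisLike {
  Vec3 axis_{{0.0, 0.0, 1.0}};
public:
  const Vec3& axis() const { return axis_; }    // read access, no check
  AxisProxy axis() { return AxisProxy(axis_); } // write access via proxy
};
```

The messiness Hauke mentions is visible here: the proxy only checks at end of scope, so a sequence of writes that is transiently non-unit has to happen within one proxy lifetime.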

This still leaves a tiny loophole if someone const_casts the result of the const .axis() accessor, or otherwise gets direct access to the data of the axis. But I guess we must accept that C++ will always leave possibilities to shoot oneself in the foot.

Another possibility would be to check for unit length every time the object is used -- that, however, can have a significant performance impact, and the assertion would not be raised when the error happens but arbitrarily later.
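To illustrate the check-at-use variant (hypothetical class, plain C++; not Eigen's actual AngleAxis):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

struct AngleAxisLike {
  Vec3 axis{{0.0, 0.0, 1.0}};
  double angle = 0.0;

  // Check-at-use: the invariant is validated inside every conversion, so a
  // bad axis is detected only here -- possibly long after the faulty write.
  double rotationTrace() const {
    double sq = axis[0]*axis[0] + axis[1]*axis[1] + axis[2]*axis[2];
    assert(std::abs(sq - 1.0) <= 1e-12 && "axis not unit length");
    // The trace of the resulting rotation matrix is 1 + 2*cos(angle).
    return 1.0 + 2.0 * std::cos(angle);
  }
};
```

The check runs on every conversion (the performance concern), and when it fires, the stack trace points at the use site rather than at whoever corrupted the axis.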

>> 3 - Which tolerance? Shall we strictly enforce that
>> axis.normalized() == axis? Shall we tolerate some epsilon?

> Maybe epsilon could be computed as the upper bound of the error which
> could be imposed by calling axis.normalized(), but I am not sure --
> probably this error depends on the magnitude of the original axis.
> Furthermore, I

I'm pretty sure that it does not depend on the order of magnitude: I don't see any way that a.normalized() and (2*a).normalized() can lead to different results in IEEE754 math (unless one of them over- or underflows). However, I guess it's not trivial to determine exactly how big that error can get, although extensive random testing and multiplying the observed maximum error by, say, 2.0 or 4.0 should be sufficient.

Furthermore, there might be users who accept certain errors and normalize their vectors using the fast SSE rsqrt, which gives a relative precision of only about 11 bits, so hard-coding an epsilon is quite likely a bad idea.
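The power-of-two case can even be checked bit-for-bit: multiplying by 2 is exact in IEEE754, and rounding to nearest commutes with power-of-two scaling (absent over-/underflow), so the two normalizations should compare exactly equal. A plain C++ sketch, where normalized() stands in for Eigen's .normalized():

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Plain normalization; stands in for Eigen's a.normalized().
inline Vec3 normalized(const Vec3& v) {
  double n = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
  return {{v[0]/n, v[1]/n, v[2]/n}};
}

// Every intermediate (squares, sum, sqrt, divisions) scales by an exact
// power of two, so normalized(a) and normalized(2*a) are bit-identical
// as long as nothing over- or underflows.
inline bool sameAfterDoubling(const Vec3& a) {
  Vec3 a2 = {{2*a[0], 2*a[1], 2*a[2]}};
  return normalized(a) == normalized(a2);
}
```

Non-power-of-two scalings can of course perturb the rounding of the intermediates, which is where the random-testing approach above would come in.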


Dipl.-Inf., Dipl.-Math. Christoph Hertzberg
Cartesium 0.049
Universität Bremen
Enrique-Schmidt-Straße 5
28359 Bremen

Tel: +49 (421) 218-64252
