Re: [eigen] Geometry module - Quaternion fitting alternative


2009/5/26 Hauke Heibel <hauke.heibel@xxxxxxxxxxxxxx>:
> Thank you for the detailed review.
> The points covered by your review:
> - KDE comment is gone
> - RealScalar has been introduced for the statistics normalization
> - Code is bound to identical floating point types, i.e. all casts are gone
> - Introduced squaredNorm instead of dot()
> - Introduced ei_isMuchSmallerThan for the rank computation
> - Introduced coef instead of operator()


> Additional changes:
> - Added missing preproc guard against multiple includes
> - Changed the expression/template helper names to be Eigen conform ei_...
> - Added a specialized version for complex numbers that does nothing but
>  issue a compile time error when using it
> - Added doxygen guard for the template helpers in the anonymous namespace
> - Added doxygen guard for the complex specialization
> - Improved the return type deduction - it should now return minimal sized
>  transformations
> - Improved the docs (added ingroup stuff)
> - Modified doc/ to support the amsmath package
> - Added fixed size tests to the unit test


> The modified doxygen setup allows using LaTeX environments like
> 'align' and adds support for the matrix, bmatrix, etc. environments.
> The attached screenshot shows the docs.

That's looking very nice.

>> +  for (int i=0; i<n; ++i)
>> +  {
>> +    src_mean += src.col(i).cast<Scalar>();
>> +    dst_mean += dst.col(i).cast<Scalar>();
>> +  }
>> OK, this is a rowwise().sum() but you didn't want to #include Array
>> just for that, right? makes sense.
> Here it would be interesting to know, just performance wise, whether
> looping twice over the data yields better performance than doing it
> just once. It probably depends if the vectorization can process four
> scalars at once or just two.

OK, I overlooked the performance aspect here in my review because I
didn't see how it could be performance critical. Now I see: this part
takes time proportional to the number of points, while the SVD below
is independent of the number of points, right? So this part becomes
critical when there are sufficiently many points.

So yes we want vectorization here. But then we want to use Eigen
expressions so the vectorization will be handled in Eigen.

My understanding is that in the matrices src and dst, the number of
rows is typically small (it's the geometric dimension) while the
number of columns is typically large (it's the number of points). Is
this right?

Since the number of rows is small, and often fixed-size, we can't rely
on each column being decomposable into packets. For example, with
float, a packet is 4 scalars, so with 3 rows that's impossible; and
with 5 rows, columns won't start at aligned locations, so in general
they won't contain any packet, and even if they did, that would be too
costly to determine at runtime.

This means that the only way to vectorize this is by decomposing each
_row_ into packets, so a prerequisite for vectorization is that your
matrices src and dst be RowMajor.

Then, if src and dst are RowMajor, you can write your vectorized
variant just with rowwise().sum() if you want to include Array, or you
can do this:

for (int i=0; i<m; ++i)
{
  src_mean.coeffRef(i) = src.row(i).sum();
  dst_mean.coeffRef(i) = dst.row(i).sum();
}

Notice that you can then also remove the initialization to Zero above.

>> +  for (int i=0; i<n; ++i)
>> +  {
>> +    src_demean.col(i) = src.col(i).cast<Scalar>() - src_mean;
>> +    dst_demean.col(i) = dst.col(i).cast<Scalar>() - dst_mean;
>> +  }
>> again i understand that you prefer to avoid including Array.
> You can really do this with colwise? This is a vector valued operation
> not scalar valued - I thought colwise only supports scalar valued
> operations.

Ah, I forgot that vector broadcasting isn't currently supported in
colwise/rowwise. Gael and I agreed that it was needed, but no one has
taken the time to do it. Gael already made a replicate() expression
for that kind of operation.

> By the way, did you realize that I did not add the file to the
> unsupported directory but dared to put it right into the Geometry
> module? Is that ok as is?

That's OK because:
 - this is adding a not-too-big file to an already existing module
 - it's clearly useful, is going to be used, and we had something
similar in the TODO

> Also, just out of curiosity, is dot() slower than squaredNorm()? My
> assumption was that it should vectorize as well as squaredNorm() does.

For real numbers, and given a sufficiently good compiler, it does. However:
- the compiler needs to be clever enough to remember that it's the
same x twice. Otherwise you get useless memory accesses.
- for complex numbers, x^2+y^2 is faster than x^2+y^2+i(yx-xy)

> Regarding the complex support I tried to make the two helper methods
> from the unit test generic in the sense that they can now create
> random matrices belonging to U(n) and SU(n) but the actual algorithm
> has a few lines left for which I am unsure on how to handle the
> complex case.

randMatrixUnitary looks good to me, I would just comment toward the end:

+   // final check
+    const RealScalar diagonal =
+    error = diagonal - size;
+    --max_tries;
+  }
+  if (max_tries == 0)
+    throw std::runtime_error("randMatrixO: Could not construct
orthogonal matrix!");

Here, instead of .transpose().conjugate() you can do .adjoint(). Also,
I have real trouble understanding what this 'diagonal' variable means:
it is homogeneous to a coefficient^4, and then you compare it to 'size'.

It seems that all you want to do here is check whether Q is unitary?
Then use isUnitary(); we have this method for exactly that :)

Your test didn't work; for example, according to your test this matrix
Q was "unitary":
1/2   1/2
1/2   1/2

Also, about the runtime_error: first, this would AFAIK be the first
time we use custom exception throwing to report failed tests, and I
don't know exceptions well enough to say whether that is OK; ask Gael.
Second, replace "orthogonal" with "unitary".

In randMatrixSpecialUnitary:

+ // initialize orthogonal matrix
+ MatrixType Q = randMatrixUnitary<Scalar>(size);

replace "orthogonal" by "unitary" in the comment

+ // correct for reflections, we only want rotations
+  Q.col(0) *= ei_conj(Q.determinant());

remove that comment, or say "tweak the first column to make the
determinant be 1"

> Also, since I do not fully understand what this type of
> transformation means in the complex domain

Linear transformations on complex spaces can't be understood as
intuitively as their real counterparts, because already the
2-dimensional complex case is 4-dimensional! So it's normal that you
can't 'understand' their geometric meaning, nobody does. Instead, you
can develop an intuition of how they behave based on real mental
analogues. So for "unitary" think of a good old real rotation in
3-space, a reflection, etc... this mental picture is still quite
useful even though it's not accurate.

Another way to think of unitaries: in finite dimension, they are
exactly the same thing as isometries.

> I think I am not the right
> person to adapt the code. Currently, the usage of complex matrices
> will result in a compile time assert.

You're probably underestimating yourself, because, aside from the
isUnitary check (which was already wrong in the real case), your code
is fine. But anyway, if you don't feel comfortable, that's OK; no need
to do it for now.

> In case you or anybody else wants to check the code once more from an
> existing repository you need to do:
> hg qpop -a // unapply all patches
> cd .hg/patches
> hg pull // retrieve the new patches
> cd ../..
> hg qpush -a // reapply the updated patches
> In case it is a fresh checkout just do:
> hg qclone https://hauke@xxxxxxxxxxxxx/hauke/umeyama-integration/ eigen_umeyama
> cd eigen_umeyama
> hg qpush -a

Thanks for the instructions; just in case someone else is interested:
you have to do "hg pull -u" instead of "hg pull".

Your code looks great already, feel free to push.

