Re: [eigen] Geometry module - Quaternion fitting alternative
- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: Re: [eigen] Geometry module - Quaternion fitting alternative
- From: Hauke Heibel <hauke.heibel@xxxxxxxxxxxxxx>
- Date: Wed, 27 May 2009 09:37:51 +0200
On Tue, May 26, 2009 at 4:28 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:
> 2009/5/26 Gael Guennebaud <gael.guennebaud@xxxxxxxxx>:
>> wow, that's a great contribution!
>>
>> I did not read the code but I still have a few comments.
>>
>> On Tue, May 26, 2009 at 3:31 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:
>>> 2009/5/26 Benoit Jacob <jacob.benoit.1@xxxxxxxxx>:
>>>> This means that a prerequisite to vectorize that is that your matrices
>>>> src and dst be RowMajor.
>>>
>>> Oh but I overlooked the fact that dst and src are the arguments passed
>>> to the function. So we don't control their storage order, the user
>>> does, and the default is col-major. We could say that you change the
>>> implementation so that now the _rows_ are the points, so vectorization
>>> can be done in the col-major case... but that would mean interleaved
>>> storage, so the construction of the matrices in the first place would
>>> be less efficient (less memory locality) so it's probably not worth
>>> it.
>>
>> then I would say use the highest-level Eigen expressions you can,
>> and hope that in some situations the vectorization will be enabled.
>> For instance there are still a few possibilities to improve the
>> vectorization of partial reductions. Also note that the Geometry
>> module already requires the Array module.
Well, right away a few important things. First, I Eigen'fied the
covariance computation. The mean and the standard deviation are now
computed purely via rowwise(). The only thing that is still done
manually is the de-meaning.
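Just to illustrate what I mean, a minimal sketch (assuming the points
are stored as the columns of a d-by-n matrix; the function name and
the use of replicate() are mine, not necessarily what the patch does):

  #include <Eigen/Core>

  // Points are the columns of a d-by-n matrix.
  void meanAndDemean(const Eigen::MatrixXd& src,
                     Eigen::VectorXd& mean, Eigen::MatrixXd& demeaned)
  {
    mean = src.rowwise().mean();                    // per-coordinate mean, d-by-1
    demeaned = src - mean.replicate(1, src.cols()); // the manual de-meaning step
  }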
Regarding the storage order (i.e. RowMajor vs. ColMajor), I would
refrain from implementing any algorithm in a way that makes
assumptions about it. In my own programs I always default the storage
order to RowMajor. I don't want to start a discussion here, since I am
convinced that the amortized cost in a typical program will be
identical for both storage orders and that the choice is merely a
matter of taste, or maybe of interoperability in case you want to
interface with other libraries.
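To illustrate what I mean by defaulting to RowMajor (this typedef is
just an example, not part of the patch):

  #include <Eigen/Core>

  // The storage order is a template parameter chosen by the user;
  // the algorithm itself should not depend on it.
  typedef Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic,
                        Eigen::RowMajor> RowMajorMatrixXd;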
>> Also it would be very cool to describe the underlying algorithm in 2/3
>> sentences so that the user knows what to expect (complexity, etc.). If
>> this is already the case, just forget what I said!
I did that and added O-notation run-time approximations - it would be
cool if someone could double-check the values I have given. The most
costly operation (of the Umeyama?) is the covariance computation - the
matrix product of the de-meaned input matrices, i.e. an m-by-n times
n-by-m product, where m is the dimension and n the number of input
points.
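As a rough sketch of that dominant step (the names and the
normalization by n are mine - treat it as illustrative, not as the
committed code):

  #include <Eigen/Core>

  // Cross-covariance of two already de-meaned point sets stored column-wise:
  // both inputs are m-by-n, the product is m-by-m, i.e. O(m^2 * n) work.
  Eigen::MatrixXd crossCovariance(const Eigen::MatrixXd& src_demeaned,
                                  const Eigen::MatrixXd& dst_demeaned)
  {
    const double n = static_cast<double>(src_demeaned.cols());
    return dst_demeaned * src_demeaned.transpose() / n;
  }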
>> I'm wondering whether the Geometry module is the right place for
>> that? More generally, I'm wondering whether we should favor few but
>> fat modules, or many but light ones? For instance, if we want to go
>> for the few-but-fat solution, then we could also have a single
>> Decomposition module with LU, LLT, QR, SVD, etc.! Personally, I
>> prefer the "many but light" solution. This is why I'm suggesting to
>> put this algorithm in a new module, just like a Levenberg-Marquardt
>> solver should have its own module. Again, this is just a suggestion...
>
> Actually I 100% agree.
>
> Maybe the modules should be named after what they offer (like
> "Umeyama" and "GeometricPolarDecomposition") rather than how they do
> it (like "GeometryUsingSVD"), so as to keep us the freedom to
> reimplement in the future.
This is a huge topic - I say so because it is so tricky to decide
about module membership. Here, I would still vote for the Geometry
module, because the algorithm actually computes the geometric
relationship between two point sets. We could also put it in a module
called 'Registration', because the outcome of the algorithm can be
seen as an alignment (registration) of the data sets. Modules like
'Umeyama' and 'GeometricPolarDecomposition' sound so specific to me
that I am afraid we would end up with dozens of modules in short order
once we follow that path. For Levenberg-Marquardt it seems easier -
possibly something like 'Optimization'. In general, I think it is more
convenient for the user to find things when the modules are
partitioned in a "what is the algorithm doing" fashion as opposed to a
"how is the algorithm doing it" way. Consider the case where somebody
pops up and implements the quaternion fitting. If that ended up in a
different module than the Umeyama, I would find it rather awkward,
considering that the two algorithms are more or less trying to achieve
the same thing.
That said, for the initial commit I kept the algorithm in the Geometry
module - until we (or you) have decided.
Cheers,
Hauke