Re: [eigen] Switched libmv to Eigen2; some thoughts and questions.



On Saturday 15 November 2008 22:53:20 Keir Mierle wrote:
> I switched libmv to Eigen2 from FLENS.

Great news!

> I got to delete quite a bit of code 
> in the process. Also, I ported the FLENS tests over to Eigen2 to ensure
> that the functionality we are using works; it all did, with no local
> modifications. I am excited about not needing local modifications! The only
> non-superficial change I had to make was to libmv's Nullspace() call, which
> finds the nullspace of a possibly wide matrix for solving homogeneous
> equations: when A is wide, it pads the matrix A in Ax=0 with zero rows to
> make A square. Benoit suggested the LU's kernel() function, which I will
> investigate next, though I am unsure whether it is as stable as the SVD
> method.

For a general rectangular matrix A, I think that our LU is as stable as it can
get. We do complete pivoting, so at each step we take the biggest possible
pivot, and the result is that our LU decomposition and solver don't have
trouble even at large sizes such as 1000x1000.

Of course, QR and SVD can offer even better stability, because you can ask
them to produce the eigenspace of the smallest eigenvalue or the singular
vectors of the smallest singular value, which is more robust than asking for
the nullspace in cases where numerical damage makes the matrix look
invertible.

But for a 3x4 matrix, and more generally for an nxp matrix with n<p,
regardless of the matrix coefficients (hence regardless of any numerical
damage), the LU decomposition will always find a nontrivial nullspace, so you
should be fine in this particular use case.
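
Concretely, a minimal sketch of what that looks like (assuming the Eigen2
spelling lu() and kernel(); in later Eigen versions this became
fullPivLu().kernel()):

#include <Eigen/Core>
#include <Eigen/LU>
#include <iostream>
using namespace Eigen;

int main()
{
  // A wide 3x4 matrix: its nullspace is guaranteed to be nontrivial.
  Matrix<double, 3, 4> A;
  A << 1, 2,  3,  4,
       5, 6,  7,  8,
       9, 10, 11, 12;

  // Complete-pivoting LU; kernel() returns a basis of the nullspace,
  // one basis vector per column.
  MatrixXd k = A.lu().kernel();

  std::cout << "nullspace basis:\n" << k << "\n";
  std::cout << "A * k (should be ~ 0):\n" << A * k << "\n";
  return 0;
}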

The situation where LU might fail is rather when you have an nxp matrix with
n>=p and you need to compute its nullspace; then numerical damage can make
the nullspace look like it's {0}. Eigen uses some tolerance to handle
numerical damage to a certain extent, but beyond that it is indeed necessary
to use QR or SVD.
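
In that case, the standard trick is the one you already use in libmv: take
the right singular vector belonging to the smallest singular value. A minimal
sketch, assuming the SVD class from <Eigen/SVD> (the helper name NullspaceSVD
is made up for the example, not an Eigen API):

#include <Eigen/Core>
#include <Eigen/SVD>
using namespace Eigen;

// Nullspace of an nxp matrix with n >= p: the right singular vector
// belonging to the smallest singular value. This stays meaningful even
// when numerical damage makes the matrix look invertible.
VectorXd NullspaceSVD(const MatrixXd &A)
{
  SVD<MatrixXd> svd(A);
  VectorXd s = svd.singularValues();

  // Find the index of the smallest singular value explicitly, rather
  // than relying on any particular ordering of the singular values.
  int minIndex = 0;
  for (int i = 1; i < s.size(); ++i)
    if (s[i] < s[minIndex])
      minIndex = i;

  return svd.matrixV().col(minIndex);
}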

>
> Thoughts:

Thanks for sharing them!

> The compiler errors from Eigen2 were much better than with
> FLENS. The fixed-size and dynamically sized matrices worked together
> flawlessly, requiring no fixes to Eigen2. There were no surprising aliasing
> effects; Eigen2 Does The Right Thing with respect to possibly aliased
> matrices. The array-type functionality is quite convenient. Eigen2
> expressions are concise and readable; much more so than with FLENS, where
> it is necessary to make explicit temporaries (rather than letting the
> compiler decide whether temporaries are necessary).

Glad you noticed this! By the way, here is documentation on these matters, if
you're interested:

http://eigen.tuxfamily.org/api/TutorialCore.html#TutorialLazyEvaluation
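
For a quick taste, a minimal sketch (norm2() is the Eigen2 name for the
squared norm; it was later renamed squaredNorm()):

#include <Eigen/Core>
#include <iostream>
using namespace Eigen;

int main()
{
  MatrixXd A = MatrixXd::Random(100, 100);
  VectorXd x = VectorXd::Random(100);
  VectorXd b = VectorXd::Random(100);

  // The whole expression is written inline: Eigen decides where a
  // temporary is needed (here, for the product A*x) and fuses the rest
  // (the sum and the squared norm) into a single traversal.
  double residual = (A * x + b).norm2();

  std::cout << "residual = " << residual << "\n";
  return 0;
}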

> For example, it's really nice that
> you can do (A*x + b).norm2() inline, without the boilerplate. I expected
> compile times to be really bad; they weren't.

Yes, at some point we became extremely careful about that. If you browse the
archives of this list, you'll see that Gael produced detailed stats of compile
time as a function of the SVN revision number; we studied closely what is
costly in terms of compilation time and what isn't...

> One of the best things about 
> using Eigen2 is that we get better than MKL or ATLAS performance,

Indeed, our benchmarks show that we're ahead of ATLAS; as to MKL, it depends:
thanks to our expression templates, we tend to do better for nested
expressions like "(A*x + b).norm2()", which you mention above; but MKL has
some very optimized heavy algorithms for matrix products, eigensolving, etc.,
that are hard to beat. See:

http://eigen.tuxfamily.org/index.php?title=Benchmark

> while 
> eliminating dependencies on crufty old FORTRAN libraries that aren't
> installed everywhere. That's really fantastic; getting the right LAPACK
> version is hard.
>
> Overall I am quite impressed. Great job guys!

Thanks!

>
> I spent a bit of time hacking the SVD code to extract the first part
> (bidiagonalization), but didn't end up finishing it because I wasn't sure
> of the right way to avoid copying the resulting U and V matrices. If you
> make a function Bidiagonalize(Matrix *A, Matrix *U, Vector *diagonal,
> Vector *super_diagonal, Matrix *V), where U and V can be passed as NULL to
> avoid computing them, you can't do 'local' typedefs to remove some of the
> template complexity like you can with a class. So I am not sure what the
> right approach is here. Also, the current SVD algorithm supports skipping
> the computation of U and V, but does not expose this; I am not sure of the
> best way to extend the API.

Hehe, I can believe that getting SVD right is going to require a lot of 
thought.

Maybe, instead of starting from the JAMA code, it will be faster to start
from scratch using a book that gives pseudocode. Indeed, Eigen expressions
make it easy to transcribe pseudocode from a book and get at least an initial
implementation (then optimizing it for small fixed sizes can be another
story...).
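
As to the API question: one way around the typedef problem you mention is a
small decomposition class in the style of our LU class, so the template
parameters are named once and optional outputs become flags instead of NULL
pointers. A purely hypothetical sketch (none of these names are existing
Eigen API, and it assumes a dynamic-size MatrixType for simplicity):

#include <Eigen/Core>
#include <algorithm>
using namespace Eigen;

// Hypothetical class-based bidiagonalization API: a class allows 'local'
// typedefs for the template-heavy types, and the optional U and V become
// constructor flags rather than NULL output pointers.
template <typename MatrixType>
class Bidiagonalization
{
public:
  typedef typename MatrixType::Scalar Scalar;
  typedef Matrix<Scalar, Dynamic, 1> VectorType;

  Bidiagonalization(const MatrixType &A,
                    bool computeU = true, bool computeV = true)
    : m_diag(std::min(A.rows(), A.cols())),
      m_superDiag(std::min(A.rows(), A.cols()) - 1)
  {
    if (computeU) m_U.resize(A.rows(), A.rows());
    if (computeV) m_V.resize(A.cols(), A.cols());
    // ... Householder bidiagonalization of A would go here, skipping
    // the accumulation of U and/or V when they were not requested ...
  }

  const MatrixType &matrixU() const { return m_U; }
  const MatrixType &matrixV() const { return m_V; }
  const VectorType &diagonal() const { return m_diag; }
  const VectorType &superDiagonal() const { return m_superDiag; }

private:
  MatrixType m_U, m_V;
  VectorType m_diag, m_superDiag;
};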

>
> One issue with Eigen2 is that the default formatting for output is not very
> nice.

Gael already fixed this :) Using the IOFormat class, you get very
customizable IO.

See this:

http://eigen.tuxfamily.org/api/structEigen_1_1IOFormat.html
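
For example, to get output that pastes straight into Python or Octave (a
minimal sketch; the exact constructor arguments are documented on the page
above):

#include <Eigen/Core>
#include <iostream>
using namespace Eigen;

int main()
{
  MatrixXd m(2, 3);
  m << 1, 2, 3,
       4, 5, 6;

  // IOFormat(precision, flags, coeffSeparator, rowSeparator,
  //          rowPrefix, rowSuffix, matPrefix, matSuffix)
  IOFormat PythonFmt(8, 0, ", ", ",\n", "[", "]", "[", "]");
  IOFormat OctaveFmt(8, 0, ", ", ";\n", "", "", "[", "]");

  std::cout << m.format(PythonFmt) << "\n\n";
  std::cout << m.format(OctaveFmt) << "\n";
  return 0;
}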

> I may make local modifications so that the array display is Matlab- and
> Python-copy-&-pastable (and has aligned columns). I am not sure what
> benefit the current format has over a more conventional display format.
> Also, some of the examples in the docs are hard to understand, because the
> example output matrices are not aligned (does that have 3 or 4 columns?).

So, maybe we could consider using another default format at least for the 
docs.

>
> Going forward I am excited to work with Eigen2.

Cheers,
Benoit
