On Saturday 15 November 2008 22:53:20 Keir Mierle wrote:
> I switched libmv to Eigen2 from FLENS.
Great news!
> I got to delete quite a bit of code
> in the process. Also, I ported the FLENS tests over to Eigen2 to ensure the
> functionality we are using works, which it all did with no local
> modifications. I am excited about not needing local modifications! The only
> non-superficial change I had to make was to libmv's Nullspace() call,
> which finds the nullspace of a possibly wide matrix for solving homogeneous
> equations: when the A in Ax=0 is wide, I pad it with zero rows to make it
> square. Benoit suggested the LU's kernel() function, which I will
> investigate next, though I am unsure if it is as stable as the SVD method.
For a general rectangular matrix A, I think that our LU is as stable as it can
get. We do complete pivoting, so at each step we take the biggest possible
pivot; the result is that our LU decomposition and solver don't have
trouble even at large sizes such as 1000x1000.
Of course, it's true that QR and SVD can allow even better stability, because
you can ask them to produce the eigenspace of the smallest eigenvalue or
singular value; that is more stable than asking for the nullspace in cases
where numerical damage makes the matrix look invertible.
Yes, in many cases we have matrices that 'should' be singular, but aren't because of measurement noise (not necessarily numerical precision). In those cases SVD is the best way.
But for a 3x4 matrix, and more generally for an n x p matrix with n < p, regardless
of the matrix coefficients (hence regardless of any numerical damage) the LU
decomposition will always find a nontrivial nullspace, so you should be fine
in this particular use case.
Ok! Great.
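If that holds, the zero-row padding can go away entirely. Here's a minimal, untested sketch of what I have in mind, assuming kernel() returns a matrix whose columns span the nullspace:

    #include <iostream>
    #include <Eigen/Core>
    #include <Eigen/LU>

    int main() {
      // A wide (3x4) matrix: the kernel is guaranteed nontrivial, so no
      // zero-row padding is needed.
      Eigen::Matrix<double, 3, 4> A;
      A << 1, 2,  3,  4,
           5, 6,  7,  8,
           9, 10, 11, 12;
      // The columns of the returned matrix span the nullspace of A.
      std::cout << A.lu().kernel() << std::endl;
      return 0;
    }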
The situation where LU might fail is rather when you have an n x p matrix with
n >= p and you need to compute its nullspace; then numerical damage can make
the nullspace look like it's {0}. Eigen uses some tolerance to handle
numerical damage to a certain extent, but beyond that it is indeed necessary
to use QR or SVD.
In typical vision applications, a common occurrence is solving Ax = 0 where A contains measurements (or numbers derived from measurements) from images. In theory the A matrix should be singular, but because of measurement noise it is not. So the SVD provides the 'best' null vector given the noise.
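Concretely, the pattern looks like the sketch below (untested; assumes a tall A with rows >= cols, and that the singular values come out sorted in decreasing order, as in the JAMA-derived code):

    #include <iostream>
    #include <Eigen/Core>
    #include <Eigen/SVD>

    int main() {
      // Overdetermined homogeneous system: in theory A is rank deficient,
      // but measurement noise makes it numerically full rank.
      Eigen::MatrixXd A = Eigen::MatrixXd::Random(5, 4);
      // The minimizer of |Ax| subject to |x| = 1 is the right singular
      // vector for the smallest singular value, i.e. the last column of V.
      Eigen::SVD<Eigen::MatrixXd> svd(A);
      Eigen::VectorXd x = svd.matrixV().col(A.cols() - 1);
      std::cout << "residual |Ax| = " << (A * x).norm() << std::endl;
      return 0;
    }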
>
> Thoughts:
Thanks for sharing them!
> The compile errors from Eigen2 were much better than with
> FLENS. The fixed size and dynamically sized matrices worked together
> flawlessly, requiring no fixes to Eigen2. There were no surprising aliasing
> effects; Eigen2 Does The Right Thing with respect to possibly aliased
> matrices. The array type functionality is quite convenient. Eigen2
> expressions are concise and readable; much more so than with FLENS where it
> is necessary to make explicit temporaries (rather than letting the compiler
> decide if temporaries are necessary).
Glad you noticed this! By the way, here is documentation on these matters if
you're interested:
http://eigen.tuxfamily.org/api/TutorialCore.html#TutorialLazyEvaluation
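For instance, the following evaluates with no user-written temporaries; the sum and the (squared) norm are evaluated lazily, and Eigen only introduces a temporary internally for the product A*x, which is flagged to evaluate before nesting. A small sketch:

    #include <iostream>
    #include <Eigen/Core>
    using namespace Eigen;

    int main() {
      Matrix3d A;
      Vector3d x, b;
      A.setRandom();
      x.setRandom();
      b.setRandom();
      // No explicit temporaries needed: Eigen decides internally where
      // one is beneficial (here, for the matrix-vector product).
      double squared_residual = (A*x + b).norm2();
      std::cout << squared_residual << std::endl;
      return 0;
    }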
> For example, it's really nice that
> you can do (A*x + b).norm2() inline, without the boilerplate. I expected
> compile times to be really bad, but they weren't.
Yes, at some point we became extremely careful about that. If you browse the
archives of this list, you'll see that Gael produced detailed stats of compile
time as a function of the SVN revision number; we studied closely what is
costly in terms of compilation time and what isn't...
Neat! That is way more C++ magic than I'll ever know...
> One of the best things about
> using Eigen2 is that we get better than MKL or ATLAS performance,
Indeed, our benchmarks show that we're ahead of ATLAS; as to MKL, it depends:
thanks to our expression templates we tend to do better for nested
expressions like the "(A*x + b).norm2()" you mention above; but they have
some very optimized heavy algorithms for matrix products, eigensolving, etc.,
that are hard to beat. See:
http://eigen.tuxfamily.org/index.php?title=Benchmark
Actually, for us the bigger win is not the performance but not having dependencies on LAPACK and related libraries. These are a barrier for contributors, who have to jump through extra hoops to compile the software.
> while
> eliminating dependencies on crufty old FORTRAN libraries that aren't
> installed everywhere. That's really fantastic; getting the right LAPACK
> version is hard.
>
> Overall I am quite impressed. Great job guys!
Thanks!
>
> I spent a bit of time hacking the SVD code to extract the first part
> (bidiagonalization) but didn't end up finishing it because I wasn't sure
> what the right way was to avoid copying the resulting U and V matrices. If you
> make a function Bidiagonalize(Matrix *A, Matrix *U, Vector *diagonal,
> Vector *super_diagonal, Matrix *V), where U and V can be passed as NULL to
> avoid computing them, you can't do 'local' typedefs to remove some of the
> template complexity like you can with a class. So I am not sure what the
> right approach is here. Also, the current SVD algorithm supports avoiding
> computing U and V, but does not expose this; I am not sure what the best
> way to extend the API is.
Hehe, I can believe that getting SVD right is going to require a lot of
thought.
Maybe, instead of starting from the JAMA code, it will be faster to start from
scratch using a book that gives pseudocode. Indeed, Eigen expressions make it
easy to transcribe pseudocode from a book into at least an initial
implementation (then optimizing it for small fixed sizes can be another
story...)
I tried finding a copy of Matrix Computations (Golub & Van Loan) online, but sadly the critical pages are missing from Google's book preview. I will see about finding a copy next week; I didn't find any good descriptions of how to implement the SVD online. It appears that computing the SVD breaks down into two phases:
1. Bidiagonalize A into U B V^T (via Householder reflections or some such).
2. Eliminate the superdiagonal via rotations (QR) or similar.
Bidiagonalization is useful on its own, and there are multiple ways of accomplishing it, so separating it out seems good.
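To make phase 1 concrete, here is a rough, untested sketch written straight from the textbook Householder formulas; it is completely unoptimized, always computes U and V, and the function name and pointer-based API are just the strawman from my earlier mail:

    #include <iostream>
    #include <Eigen/Core>

    using namespace Eigen;

    // Householder vector v such that (I - 2 v v^T / v^T v) x = alpha*e0,
    // with the sign of alpha chosen to avoid cancellation.
    VectorXd HouseholderVector(const VectorXd &x) {
      VectorXd v = x;
      double alpha = x.norm();
      if (x(0) > 0) alpha = -alpha;
      v(0) -= alpha;
      return v;
    }

    // Phase 1 of the SVD: reduce A in place to upper bidiagonal B, with
    // A_original = U B V^T, by alternating left reflections (zeroing a
    // column below the diagonal) and right reflections (zeroing a row to
    // the right of the superdiagonal). Assumes rows >= cols.
    void Bidiagonalize(MatrixXd *A, MatrixXd *U, MatrixXd *V) {
      int m = A->rows(), n = A->cols();
      *U = MatrixXd::Identity(m, m);
      *V = MatrixXd::Identity(n, n);
      for (int k = 0; k < n; ++k) {
        // Left reflection H: zero A(k+1..m-1, k); A <- H A, U <- U H.
        VectorXd v = HouseholderVector(A->block(k, k, m - k, 1));
        if (v.norm2() > 0) {
          double beta = 2.0 / v.norm2();  // norm2() is the squared norm
          A->block(k, k, m - k, n - k) -=
              beta * v * (v.transpose() * A->block(k, k, m - k, n - k));
          U->block(0, k, m, m - k) -=
              beta * (U->block(0, k, m, m - k) * v) * v.transpose();
        }
        if (k < n - 2) {
          // Right reflection G: zero A(k, k+2..n-1); A <- A G, V <- V G.
          VectorXd w =
              HouseholderVector(A->block(k, k + 1, 1, n - k - 1).transpose());
          if (w.norm2() > 0) {
            double gamma = 2.0 / w.norm2();
            A->block(k, k + 1, m - k, n - k - 1) -=
                gamma * (A->block(k, k + 1, m - k, n - k - 1) * w) * w.transpose();
            V->block(0, k + 1, n, n - k - 1) -=
                gamma * (V->block(0, k + 1, n, n - k - 1) * w) * w.transpose();
          }
        }
      }
    }

    int main() {
      MatrixXd A = MatrixXd::Random(5, 4);
      MatrixXd B = A, U, V;
      Bidiagonalize(&B, &U, &V);
      // Sanity check: the factorization should reconstruct A.
      std::cout << "|U B V^T - A| = "
                << (U * B * V.transpose() - A).norm() << "\n";
      return 0;
    }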
One issue with reusing the QR and LU methods already in Eigen as building blocks for an SVD solver is that it does not appear possible to use existing matrices as the storage for the results; they are private members of the QR/LU class. Have you guys considered making the API such that the user can supply already-allocated matrices for the results? I may be misunderstanding the API.
> One issue with Eigen2 is that the default formatting for output is not very
Gael already fixed this :) Using the IOFormat class you get very customizable
IO.
See this:
http://eigen.tuxfamily.org/api/structEigen_1_1IOFormat.html
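For example, to get octave/matlab-pastable output (from memory, so the exact parameters may be slightly off):

    #include <iostream>
    #include <Eigen/Core>

    int main() {
      Eigen::Matrix2d M;
      M << 1, 2,
           3, 4;
      // precision, flags, coefficient separator, row separator,
      // row prefix/suffix, matrix prefix/suffix.
      Eigen::IOFormat OctaveFmt(4, 0, ", ", ";\n", "", "", "[", "]");
      std::cout << M.format(OctaveFmt) << std::endl;
      // should print:
      //   [1, 2;
      //   3, 4]
      return 0;
    }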
Yes, I saw this; my issue is not that it can't be done, it's that the default is not good. I am not sure what the justification for the current default format is; why is it not aligned?
> I may make local modifications so that the array display is Matlab-
> and Python-copy-&-pastable (and has aligned columns). I am not sure what
> benefit the current format has over a more conventional display format.
> Also, some of the examples in the docs are hard to understand because the
> example output matrices are not aligned (does that have 3 or 4 columns?).
So, maybe we could consider using another default format at least for the
docs.
How about the following:
- Make the default aligned, at least.
- Offer several built-in formatters for common formats (Matlab (which also covers Octave and Scilab), R, numpy, C arrays, Eigen2 arrays, CSV).
- Have a compile-time define to switch the default, if so desired (not sure how else this could be done).
I use the copy-and-paste functionality as a poor man's interop with numpy, to analyze results produced by C++ code (whatever numeric library it uses). This is very handy; I can print an array in the middle of some code, paste it into numpy, and plot or otherwise analyze the result.
Having the ability to output to C and Eigen2 formats is very useful as well, because sometimes you want to take the result of a computation and put it back into source code (particularly for unit testing). I also did this for libmv1, where I had a function 'output_to_c(name)' which produced
double name[2][2] = { {1, 2}, {3, 4} };
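If I'm reading the IOFormat docs right, that output mostly falls out of the new API already; an untested sketch:

    #include <iostream>
    #include <Eigen/Core>

    int main() {
      Eigen::Matrix2d M;
      M << 1, 2,
           3, 4;
      // Rows wrapped in braces and comma-separated; the whole matrix
      // wrapped in braces; trailing semicolon appended by hand.
      Eigen::IOFormat CFmt(4, 0, ", ", ", ", "{", "}", "{ ", " }");
      std::cout << "double name[2][2] = " << M.format(CFmt) << ";" << std::endl;
      // should print: double name[2][2] = { {1, 2}, {3, 4} };
      return 0;
    }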
My overall goal is to make it so that 90% of Eigen2 users never have to reach for a custom formatter; Eigen should already provide the format they want out of the box.
>
> Going forward I am excited to work with Eigen2.
Cheers,
Benoit