Re: [eigen] Usage of Eigen2 in a closed source application


2008/12/14 Keir Mierle <mierle@xxxxxxxxx>:
> I have looked at it, and started sketching some interfaces, but didn't end
> up with anything. I read the relevant parts of the Golub book. Actually
> implementing the SVD is not hard; the tricky part is to make the API
> reusable. The SVD algorithm breaks down into a few clear parts which would
> almost certainly be useful outside of the SVD itself.

I see, that's interesting.

> I am not sure about the following things:
> Should we maintain the current API for the SVD and just change under the
> covers?

No, we don't have to maintain the current API. When I say that the
current SVD is experimental, I mean that we make no API-stability
guarantees for it yet.

> If so, then it is not obvious to me how to break the algorithm into
> smaller reusable pieces, because the SVD object owns the matrix with the
> results. In particular, should the bidiagonalization also be a
> Bidiagonalization object? Should it own the result matrices?

Whatever makes the most sense technically (I haven't looked at it
yet); we don't have to preserve any existing API for its own sake.

> Householder matrices are used in several parts of the code; it would be nice
> to have a simple mechanism for accumulating householder transforms (without
> actually applying them) to reuse throughout.

I see.
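To make the "accumulate Householder transforms without applying them" idea concrete, here is a minimal self-contained sketch (plain C++, not Eigen code; all names are made up). Each reflector H = I - tau * v * v^T is stored as the pair (v, tau), and a whole sequence of them can be kept in a vector and applied to data later, without ever forming an n-by-n matrix:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// One Householder reflector H = I - tau * v * v^T, stored implicitly.
struct Householder {
    std::vector<double> v;  // the reflector vector
    double tau;             // scaling coefficient, 2 / ||v||^2
};

// Build a reflector that maps x onto (alpha, 0, ..., 0).
Householder make_householder(const std::vector<double>& x) {
    double norm = 0.0;
    for (double xi : x) norm += xi * xi;
    norm = std::sqrt(norm);
    Householder h;
    h.v = x;
    // Pick the sign of alpha opposite to x[0] to avoid cancellation.
    double alpha = (x[0] >= 0.0) ? -norm : norm;
    h.v[0] -= alpha;
    double vnorm2 = 0.0;
    for (double vi : h.v) vnorm2 += vi * vi;
    h.tau = (vnorm2 == 0.0) ? 0.0 : 2.0 / vnorm2;
    return h;
}

// Apply H to a vector: y <- y - tau * (v^T y) * v.  O(n), no matrix formed.
void apply_householder(const Householder& h, std::vector<double>& y) {
    double dot = 0.0;
    for (std::size_t i = 0; i < y.size(); ++i) dot += h.v[i] * y[i];
    for (std::size_t i = 0; i < y.size(); ++i) y[i] -= h.tau * dot * h.v[i];
}

// Apply an accumulated sequence of reflectors H1, H2, ... in order.
void apply_all(const std::vector<Householder>& hs, std::vector<double>& y) {
    for (const Householder& h : hs) apply_householder(h, y);
}
```

The point of the (v, tau) representation is exactly the reuse the mail asks about: bidiagonalization, QR, and the SVD could all push reflectors into such a sequence and apply (or accumulate) them only when needed.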

> I don't understand how lazy evaluation would work with the SVD; does it even
> make sense?

Don't worry about it. Lazy evaluation is for expressions, but in
decompositions such as LU and QR we compute the decomposition right
away, in the constructor of the decomposition class. SVD should be no
different. Decompositions aren't lazy.

> My template-metaprogramming-fu is not strong.

Not that you should need much of it, but just in case you're
interested, this can help you learn how stuff works:

> I am not sure how to evaluate the numerical precision of the resulting SVD
> algorithm, which is important for many applications (however this is an
> entire other can of worms).

AFAIK there are at least three different SVD algorithms, each
representing a different compromise between speed and numerical
stability.

My suggestion is that we first reimplement the generic SVD (for
dynamic-size matrices) using the most stable algorithm.

Then, when we do fixed-size specializations, we can use a faster, less
stable algorithm, since fixed-size matrices are small anyway.

Numerical stability is extremely important. For dynamic-size matrices
we want to cover sizes up to 1000x1000. For the LU decomposition and
matrix inversion, I initially experimented with the usual
partial-pivoting LU, as it is faster and is what most libraries do,
but its numerical stability was not good enough at sizes of 200x200
and beyond: matrix inversion simply failed and returned a badly wrong
result. The problem was solved by switching to a full-pivoting LU. I
want the same in the SVD, so that we can be confident in the results
Eigen returns.
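For illustration, here is a minimal self-contained full-pivoting LU (a sketch of the idea, not Eigen's implementation): at each step the pivot is searched over the entire remaining submatrix, which is what buys the extra stability over partial pivoting's column-only search.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>
#include <utility>

constexpr std::size_t N = 3;
using Mat = std::array<std::array<double, N>, N>;

// In-place LU with full pivoting: P * A * Q = L * U.  Row/column swaps are
// recorded in rowp/colp; L (unit diagonal) and U end up packed in `a`.
// Zero-pivot / singular-matrix handling is omitted for brevity.
void full_piv_lu(Mat& a, std::array<std::size_t, N>& rowp,
                 std::array<std::size_t, N>& colp) {
    for (std::size_t i = 0; i < N; ++i) { rowp[i] = i; colp[i] = i; }
    for (std::size_t k = 0; k < N; ++k) {
        // Full pivoting: search the WHOLE remaining submatrix for the
        // largest entry (partial pivoting would only scan column k).
        std::size_t pr = k, pc = k;
        double best = 0.0;
        for (std::size_t i = k; i < N; ++i)
            for (std::size_t j = k; j < N; ++j)
                if (std::fabs(a[i][j]) > best) {
                    best = std::fabs(a[i][j]); pr = i; pc = j;
                }
        std::swap(a[k], a[pr]);
        std::swap(rowp[k], rowp[pr]);
        for (std::size_t i = 0; i < N; ++i) std::swap(a[i][k], a[i][pc]);
        std::swap(colp[k], colp[pc]);
        // Eliminate below the pivot; multipliers are stored in place.
        for (std::size_t i = k + 1; i < N; ++i) {
            a[i][k] /= a[k][k];
            for (std::size_t j = k + 1; j < N; ++j)
                a[i][j] -= a[i][k] * a[k][j];
        }
    }
}

// Check max |A[rowp[i]][colp[j]] - (L*U)[i][j]| -- the permuted original
// should match the product of the packed factors.
double lu_reconstruction_error(const Mat& orig, const Mat& lu,
                               const std::array<std::size_t, N>& rowp,
                               const std::array<std::size_t, N>& colp) {
    double err = 0.0;
    for (std::size_t i = 0; i < N; ++i)
        for (std::size_t j = 0; j < N; ++j) {
            double s = 0.0;
            for (std::size_t k = 0; k <= std::min(i, j); ++k)
                s += ((k == i) ? 1.0 : lu[i][k]) * lu[k][j];
            err = std::max(err, std::fabs(orig[rowp[i]][colp[j]] - s));
        }
    return err;
}
```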

This is easy to check in unit tests: compute the SVD, reconstruct the
original matrix, and compare. Gael went further and made unit tests
that compare our precision with other libraries', to check that we're
doing as well as they are.
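A self-contained sketch of such a reconstruct-and-compare test, using a textbook one-sided Jacobi SVD as a stand-in (this is not Eigen's SVD; all names are made up). The test computes U, sigma, V from A, then checks that U * diag(sigma) * V^T reproduces A:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>

constexpr std::size_t N = 3;
using Mat = std::array<std::array<double, N>, N>;

// One-sided Jacobi SVD: repeatedly rotate pairs of columns of A until they
// are mutually orthogonal.  On return, `a` holds U (orthonormal columns),
// `sigma` the singular values, `v` the right singular vectors, so that
// original_a ~= U * diag(sigma) * V^T.
void jacobi_svd(Mat& a, std::array<double, N>& sigma, Mat& v) {
    for (std::size_t i = 0; i < N; ++i)
        for (std::size_t j = 0; j < N; ++j) v[i][j] = (i == j) ? 1.0 : 0.0;
    for (int sweep = 0; sweep < 30; ++sweep) {
        double off = 0.0;
        for (std::size_t p = 0; p + 1 < N; ++p)
            for (std::size_t q = p + 1; q < N; ++q) {
                double alpha = 0, beta = 0, gamma = 0;
                for (std::size_t i = 0; i < N; ++i) {
                    alpha += a[i][p] * a[i][p];
                    beta  += a[i][q] * a[i][q];
                    gamma += a[i][p] * a[i][q];
                }
                off += std::fabs(gamma);
                if (std::fabs(gamma) < 1e-15) continue;
                // Rotation angle that orthogonalizes columns p and q.
                double zeta = (beta - alpha) / (2.0 * gamma);
                double t = (zeta >= 0 ? 1.0 : -1.0) /
                           (std::fabs(zeta) + std::sqrt(1.0 + zeta * zeta));
                double c = 1.0 / std::sqrt(1.0 + t * t);
                double s = c * t;
                // Apply the same rotation to the columns of A and of V.
                for (std::size_t i = 0; i < N; ++i) {
                    double ap = a[i][p], aq = a[i][q];
                    a[i][p] = c * ap - s * aq;
                    a[i][q] = s * ap + c * aq;
                    double vp = v[i][p], vq = v[i][q];
                    v[i][p] = c * vp - s * vq;
                    v[i][q] = s * vp + c * vq;
                }
            }
        if (off < 1e-14) break;
    }
    // Singular values are the column norms; normalizing the columns gives U.
    for (std::size_t j = 0; j < N; ++j) {
        double norm = 0;
        for (std::size_t i = 0; i < N; ++i) norm += a[i][j] * a[i][j];
        sigma[j] = std::sqrt(norm);
        for (std::size_t i = 0; i < N; ++i) a[i][j] /= sigma[j];
    }
}

// The reconstruct-and-compare check: max entry of |A - U * diag(sigma) * V^T|.
double reconstruction_error(const Mat& orig, const Mat& u,
                            const std::array<double, N>& sigma, const Mat& v) {
    double err = 0;
    for (std::size_t i = 0; i < N; ++i)
        for (std::size_t j = 0; j < N; ++j) {
            double r = 0;
            for (std::size_t k = 0; k < N; ++k)
                r += u[i][k] * sigma[k] * v[j][k];
            err = std::max(err, std::fabs(orig[i][j] - r));
        }
    return err;
}
```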


