RE: [eigen] Lanczos Algorithm



Hello,

My name is Chris Cowing-Zitron. I'm not a core Eigen contributor, but do use Eigen quite a bit, and I follow the email list. I thought I might be able to give some helpful pointers regarding your eigenvalue project.

The Lanczos algorithm in its original form is rather limited by two fundamental issues which apply to essentially all Krylov processes: the loss of orthogonality, and uneven convergence. Both of these problems can be largely resolved, though, by incorporating other techniques (e.g. the polynomial methods you mentioned), and when the Lanczos algorithm is supported by those methods, it becomes very powerful. I'll discuss these two issues and ways to resolve them below. But first, I would like to emphasize, as this is especially important for a student project, that while the Lanczos algorithm itself is very easy to implement in any language, some of these supplementary methods are far more complicated. That being said, Eigen's expression templates and powerful syntax make it the ideal platform in which to implement these techniques, both in terms of ease of implementation and efficiency of the results, and I think your project has great potential.

The loss of orthogonality means that the basis vectors of the Krylov subspace you are generating eventually become linearly dependent, which is a problem both because you stop exploring an expanding subspace and because the process can break down as a result of division by zero. The genius of the Lanczos process is that in exact arithmetic the generated vectors are automatically pairwise orthogonal without any extra work, but in finite-precision arithmetic, numerical round-off *always* results in the loss of orthogonality. Therefore, you will have to implement some form of reorthogonalization in your project to make it reliable. There are many ways to do this, basically all of which involve manually orthogonalizing (usually just a subset of) the basis vectors via, e.g., the modified Gram-Schmidt process or a QR decomposition. This requires that you save a subset of the basis vectors you've generated, which costs extra memory, or regenerate them selectively when you decide to reorthogonalize.

The reorthogonalization can be very expensive in terms of flops relative to the Lanczos process used to generate the basis vectors, but implementing the reorthogonalization itself should be very easy using Eigen. The complicated part is actually deciding when to reorthogonalize, and which basis vectors to do it with, and quite a bit of work has been done on this. Luckily, and ironically, the loss of orthogonality is directly tied to the convergence of Ritz pairs, so it is often sufficient to reorthogonalize against just the converged Ritz vectors (which you'd have saved anyway if the user wants eigenvectors as well). I would recommend looking at the following projects to get ideas: TRLan/nuTRLan (http://crd-legacy.lbl.gov/~kewu/trlan/) for adaptive techniques, and PROPACK (http://sun.stanford.edu/~rmunk/PROPACK/) for more traditional methods.

The second issue I mentioned (uneven convergence) means that at any given point during the Lanczos process prior to completion, that is, before you've generated a tridiagonal matrix of the same dimensions as your original matrix, the approximations to the largest eigenvalues of your matrix (in absolute value) will be of much higher relative accuracy than the approximations to the smaller eigenvalues. Thus, in its native form, the Lanczos process is really only good for computing a few of the largest eigenvalues of a matrix. Importantly, DO NOT attempt to run the Lanczos process to completion, as the numerical issues alone will make the results very inaccurate, and the cost in terms of flops will end up being quite a bit more than that of direct methods, which would produce more accurate solutions anyway. To get small eigenvalues, shift-and-invert techniques are sufficient: by running Lanczos on (A - sigma*I)^-1, the eigenvalues of A nearest the shift sigma become the largest, fastest-converging ones. For interior eigenvalues (neither the largest nor the smallest), conjugate polynomial methods come into play, as they allow you to get accurate approximations to the interior eigenvalues much earlier in the Lanczos process. An excellent implementation of these methods is FILTLAN (http://www-users.cs.umn.edu/~saad/software/filtlan/index.html), written by a student of Yousef Saad. The only real problem with FILTLAN is that the author didn't use Eigen :-p By that I mean, these polynomial methods require applying a polynomial in the matrix to a vector, and Eigen's expression templates and intelligent delayed evaluation should make this much more efficient at runtime, and much easier to program, whereas FILTLAN is based upon a custom (and much less efficient) C++ linear algebra library.

For any of these methods, I would recommend looking at the following books. The most recent edition of Golub and Van Loan's "Matrix Computations" has a much expanded treatment of large, sparse eigenvalue methods, and gives a solid, up-to-date introduction to the issues I mentioned above, with a great collection of bibliographical references. Yousef Saad's pair of books "Numerical Methods for Large Eigenvalue Problems" and "Iterative Methods for Sparse Linear Systems", both of which you can download from his site (http://www-users.cs.umn.edu/~saad/books.html), are thorough treatments of Krylov methods, though you may benefit more from the linear systems book than the eigenvalue one, as it has been updated more recently. Finally, "Templates for the Solution of Algebraic Eigenvalue Problems" (www.cs.ucdavis.edu/~bai/ET/contents.html) is a great reference as well.

I hope this was of some help. If you have any questions about the numerical linear algebra involved, I'm happy to try to help, and if I can't, I may be able to direct you to someone who can. I look forward to seeing (and using) your results!

-- Chris


From: Imène Tendjaoui [imene.tendjaoui@xxxxxxxxx]
Sent: Friday, May 30, 2014 8:06 AM
To: eigen@xxxxxxxxxxxxxxxxxxx
Cc: matthieu.moy@xxxxxxxxxxxxxxx; Tristan Sahel; Bediss Cherif
Subject: [eigen] Lanczos Algorithm

Hi,

Tristan, Bediss and I have been working on an implementation of the Lanczos algorithm to provide eigenvalue decompositions for sparse matrices, and we have started with a version for dense matrices to get familiar with both the library and the algorithm.

The Lanczos algorithm produces a tridiagonal matrix, so we were wondering whether it would be a good idea to use the characteristic polynomials to find its eigenvalues - given that the polynomials have particular properties - or to use a more general algorithm.

We do not know all the algorithms, so feel free to make any suggestions.

Thank you for your help!

Bediss, Tristan and Imène




