Re: [eigen] MAXITER in the Eigenvalue module
• To: eigen@xxxxxxxxxxxxxxxxxxx
• Subject: Re: [eigen] MAXITER in the Eigenvalue module
• From: Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
• Date: Mon, 31 May 2010 21:12:00 -0400

```
Hi,

Very good questions and remarks :)

2010/5/31 Jitse Niesen <jitse@xxxxxxxxxxxxxxxxx>:
> Hello,
>
> I am wondering what should happen if the iterative process does not
> converge. The Schur and tridiagonalization decomposition, and indirectly all
> eigensolvers, are based on an iteration. At the moment, the code prints
> "MAXITER" to cerr and leaves the object in an inconsistent state when the
> maximum number of iterations (hardcoded to be 30) is reached.
> I think that the SVD procedure has similar issues.

Yes. Basically, any algorithm able to compute eigenvalues or
singular values has to be iterative. That is actually easy to prove
using Galois theory :) since eigenvalues are the roots of the
characteristic polynomial, and for matrices of size 5 and above there
is no general closed-form formula in radicals for those roots.

Hardcoding values like 30 is not good practice; no single magic value
is right for every size. In my experience, the number of iterations
needed tends to grow like the logarithm of the matrix size. It would
be worth running some experiments to find the worst-case number f(n)
of iterations needed over, say, 1000 random matrices of size n, and
trying to infer a formula. It can't be worse than just taking
f(n) = 30 for all n.

In Jacobi algorithms, we know that the iteration converges eventually,
so we can just keep iterating until the precision stops improving
significantly. In Givens and divide-and-conquer algorithms (basically
everything except JacobiSVD, currently), on the other hand,
convergence is not guaranteed at all: these algorithms can actually
fail on some "unlucky" matrices, and it is then better to exit with a
poor result than to loop infinitely.

It would be good, then, to have an "info" field reporting the outcome
of the iterative algorithm, as in LAPACK: something the user could
query with a method like info().

> As I understand it, it is extremely unlikely that you need more than 30
> iterations. I haven't yet seen an example in which you reach this limit, but on
> the other hand there does not seem to be a proof that the algorithm
> terminates.

Indeed. The algorithms typically rely on the singular values all being
distinct (of multiplicity 1). When that's not the case, convergence is
not guaranteed. JacobiSVD still guarantees convergence in that case,
albeit more slowly.

> So I guess we should design the API while keeping in mind that
> the algorithm may not succeed.

Right. OK for the info() method?

>
> One possibility is to add a member function isInitialized() which returns
> true if the computation was successful. That would not break any existing
> programs.

Currently, m_isInitialized only guards against bad calls to our API. I
would like to keep it that way: it should guard only against
programming mistakes, not against numerical bad luck, and stay
internal (used only in our own assertions). The separate concept of
info() would then be fully dedicated to reporting numerical bad luck.
Does that sound acceptable?

> Ideally, compute() would return a bool indicating success, but at
> the moment compute() in EigenSolver and SelfAdjointEigenSolver returns
> *this. That is actually inconsistent, because compute() in
> ComplexEigenSolver and the various matrix decomposition classes return void.

I don't know exactly why, but I'm not a fan of using method return
values to report success or error. Between returning void and *this, I
would rather return *this in all compute() methods, because that costs
nearly nothing and pleases users who like method chaining.

Cheers,
Benoit

>
> Cheers.
> Jitse

```

 Mail converted by MHonArc 2.6.19+ http://listengine.tuxfamily.org/