[eigen] Progress and plans

Hello,

I thought it might be useful to give you a general update so that you can take it into account when making plans during the meeting. I hope that you'll be able to report some of your conclusions back to the list.
The MatrixFunctions module is almost ready for release (more details further down). I plan to continue working on Eigen, but the course I'm teaching at the moment is proving to be a lot of work, so I have little spare time. Thus I'm glad I decided not to come to the meeting. The students start their Easter break on 22 March; I hope that will give me a bit of breathing room. Initially, my plan was to work on precision-oriented testing, and I have thought a bit about it (see below), but at the moment I'm not sure whether I'll have enough time to get something in place for 3.0. An alternative would be for me to improve the tutorial, which has the advantage that even a little time yields results.
I welcome suggestions on what areas it would be most useful for me to contribute to.

Status of MatrixFunctions module
--------------------------------

The module contains implementations of good algorithms for computing the matrix exponential, the (hyperbolic) sine and cosine, and entire functions in general. The main missing function is the logarithm.
I think the module is usable at the moment. However, there are some problems when the matrices have clustered eigenvalues (this is the most difficult case). One issue is that ComplexSchur often reaches the maximum number of iterations. I don't know whether that's an inherent weakness of the algorithm or only of the implementation - it may be as simple as increasing the maximum number of iterations. There seems to be another issue in matrix_function_3, which I haven't investigated yet. My guess is that the test matrices are simply too difficult to evaluate the matrix functions at the desired precision, which is dummy_precision() for floats = 1e-5 = 200 * machine epsilon. The test with double seems to go okay, and dummy_precision() for doubles = 1e-12 = 10000 * machine epsilon; is there any reason why we're so much stricter with floats?
As far as I'm concerned, the API is quite okay. The module provides functions like

```
template <typename Derived>
MatrixExponentialReturnValue<Derived>
ei_matrix_exponential(const MatrixBase<Derived> &M);
```

The return type is a derived class of ReturnByValue. Issues I'd like to discuss:

* Is the ei_ prefix necessary, given that the function is in the Eigen namespace? The wiki has the rule "global functions start with ei_".
* Should it be a member function instead? Is this possible while maintaining the separation between stable and unstable modules?
* Relatedly, what needs to be done to get the module out of unstable/?

Precision-oriented tests
------------------------

I've thought a bit about this - nothing profound, but perhaps you're interested. The ToDo list on the wiki seems to indicate three possibilities:
1. We write a BLAS/LAPACK-compatible interface with Fortran (or C?) bindings so that we can run the LAPACK test suite directly.

2. We port the LAPACK test suite to call the Eigen routines in C++.

3. We write our own test suite.

My preference is option 2. It's difficult to find good test matrices, so leveraging LAPACK's work is valuable. Furthermore, it's a good selling point to be able to say that we pass the LAPACK test suite. My guess is that option 1 is more work than option 2, especially to make it portable. Fortran/C bindings may be useful in themselves, but we would be up against a crowded field.
The LAPACK test suite is described in LAPACK Working Note 41, available from http://www.netlib.org/lapack/lawns/lawn41.ps

Have a good meeting!
Jitse

