We currently have thousands of unit tests checking the correctness of the implementation and, in some cases, its numerical robustness.
Checking performance, and in particular tracking performance regressions, is also very important for a library like Eigen, but this task is far trickier, and there are no tools to help us.
A while back, I started hacking together some scripts in bench/perf_monitoring, and a few weeks ago I finally extended them so that they are now quite usable (disclaimer: some parts are rather naive and there are plenty of shortcomings, but with proper inputs they do the job!). You can see the outcome there:
These scripts do not really track performance regressions as such; rather, they generate performance charts for various tests (gemm, gemv, cholesky, etc.), with various predefined matrix sizes, and with a selected set of changesets on the abscissa. We can then visually examine the behavior of Eigen over time. This strategy is quite convenient because it makes it easy to add a new test and look back in the history to check whether some previous state of Eigen was faster for a given problem.
See the above page for details on how to run and adjust these scripts.
If someone wants to take the lead on this aspect, you are very welcome. The bash scripts could be improved in numerous ways, starting with rewriting them in Python; we could add more tests, and we could also imagine a small web app to ease navigation across the different benchmarks, hardware, etc.