Re: [eigen] Eigenvalues of (lower) Hessenberg matrix
• To: eigen@xxxxxxxxxxxxxxxxxxx
• Subject: Re: [eigen] Eigenvalues of (lower) Hessenberg matrix
• From: Ian Bell <ian.h.bell@xxxxxxxxx>
• Date: Fri, 7 Apr 2017 18:43:26 -0600

More concretely, the first set of (generally complex) eigenvalues comes from the eigenvalues function:

(8.00369,0)
(6.98671,3.79456)
(6.98671,-3.79456)
(4.20697,6.57094)
(4.20697,-6.57094)
(0.406905,7.58543)
(0.406905,-7.58543)
(-3.39597,6.56783)
(-3.39597,-6.56783)
(-7.20075,0)
(-6.18116,3.79158)
(-6.18116,-3.79158)
(-0.992713,0.497434)
(-0.992713,-0.497434)
(-0.514681,0)
(-0.983973,0)
(-1.49944,0)

and the second set from the Hessenberg Schur decomposition:

8.00369
5.69511
8.2783
6.6756
1.73834
0.33335
0.48046
-3.44542
-3.34651
-2.84165
-9.52066
-7.20075
-1.49944
-1.22097
-0.764456
-0.983973
-0.514681

As you can see, the purely real eigenvalues appear in both lists (yay!), but the second list also contains entries coming from the complex eigenvalues (boo!).

Ian

On Fri, Apr 7, 2017 at 6:40 PM, Ian Bell wrote:
Ok, so I got that mostly working (I can find all the real roots), but it also gives me a bunch of other roots in addition to the real roots I am looking for; I believe they arise from the complex roots. How do I discard the entries that would normally be associated with the complex roots?  The code I used is

Eigen::RealSchur<Eigen::MatrixXd> schur;
schur.computeFromHessenberg(Abalanced.transpose(), Eigen::MatrixXd::Zero(Abalanced.rows(), Abalanced.cols()), false);
Eigen::VectorXd eigs = schur.matrixT().diagonal();

I haven't done a speed benchmark, but I will do so once I figure out how to keep the roots I want.

Kind Regards,

Ian

On Fri, Apr 7, 2017 at 8:35 AM, Ian Bell wrote:
Gael,

I'm afraid a patch is probably not forthcoming from me, as I don't know the guts of how the eigenvalue solver operates.  On the other hand, I could contribute my matrix balancing routine, without which the eigenvalue solver is totally useless on my problem.

For computeFromHessenberg, do you have to operate on fixed-size matrices?  I gave the example of a 16 x 16 matrix, but in general my matrices are dynamically sized.

Ian

On Thu, Apr 6, 2017 at 2:00 PM, Gael Guennebaud wrote:
Hi,

I really thought EigenSolver had a computeFromHessenberg shortcut, but it looks like it is still missing, though it would be easy to add (similar to SelfAdjointEigenSolver::computeFromTridiagonal, which does exist!). Patches welcome.

In the meantime, since you only care about the eigenvalues, and if they are real, you can use RealSchur::computeFromHessenberg(H, Matrix<double,16,16>::Zero(), false) and then RealSchur::matrixT().diagonal() to get the real eigenvalues. If they are not real, then implementing EigenSolver::computeFromHessenberg based on EigenSolver::compute() and RealSchur::computeFromHessenberg() would be the easiest route.

gael

On Thu, Apr 6, 2017 at 4:16 PM, Ian Bell wrote:
Nope, using dynamically sized arrays; for 16x16, Eigen is about twice as slow as a LAPACK routine tuned for Hessenberg matrices.

On Wed, Apr 5, 2017 at 1:12 AM, Julian Kent wrote:
Hi Ian,

Just to check, are you using fixed-size matrices? This might let the compiler do considerably more in terms of loop unrolling and SIMD, and could account for the factor of 2 difference you see with LAPACK.

Cheers
Julian

On 5 April 2017 at 00:07, Ian Bell wrote:
Yixuan,

That's certainly much better - on par at these sizes now with the naive eigenvalues() function in Eigen.  I guess if you are focused on larger matrices (as most people seem to be), my matrices are rather tiny and perhaps less interesting.

I certainly took many replicates - the timings were actually carried out in a Python library built with pybind11 on top of your code, Eigen, and some of my own code.  The library is for working with Chebyshev expansions of continuous functions and doing rootfinding with them.  I'm relatively familiar with the pitfalls of profiling, so I think this is a pretty fair test.

For these sizes, there just isn't much more to do in Eigen-based eigenvalue solving.  The LAPACK solver for Hessenberg matrices is roughly two times faster for 16 x 16 matrices.  So this is probably not the end, but it might be the end of my capabilities in the world of linear algebra.

For future reference, how does one select computeEigenvectors = false in EigenSolver?

Kind Regards, and thanks for your help,
Ian

On Tue, Apr 4, 2017 at 2:47 PM, Yixuan Qiu wrote:
Hi Ian,

Given that your matrices are at a very small scale, I don't expect my class to show a visible improvement. You actually found it to be slower, which I think may be due to several reasons:
1. My class does some basic scaling of the matrix before factorization.
2. Did you set computeEigenvectors = false in EigenSolver? My class always computes eigenvectors, so it does some calculations that are unneeded in your case. You can try the reduced version attached.
3. Did you randomize the order of execution? Did you have enough replications of the experiment? Was the difference statistically significant relative to the standard error? Personally I don't fully trust benchmark results measured by elapsed time at the scale of microseconds; I would recommend Callgrind as the profiler for such tasks.

Hope this helps.

Best,
Yixuan

2017-04-04 15:40 GMT-04:00 Ian Bell :
Sadly, it seems to be consistently slower than the naive eigenvalues() method of the MatrixXd class (see attached).  Any idea why?  Have you done any performance benchmarking with the UpperHessenberg code?

On Mon, Apr 3, 2017 at 8:16 PM, Ian Bell wrote:
Amazing! I'll try that tomorrow at work.  From your testing, how long approximately would it take to find the eigenvalues of a 10x10 Hessenberg matrix?

On Mon, Apr 3, 2017 at 8:00 PM, Yixuan Qiu wrote:
Eigen implements the upper Hessenberg eigen solver internally in the EigenSolver class.

I have extracted the relevant code and created an independent class in my Spectra library, so probably this is what you want: https://github.com/yixuan/spectra/blob/master/include/LinAlg/UpperHessenbergEigen.h
You can drop the namespace declaration if you want.

The usage of this class is similar to other eigen solvers in Eigen: https://github.com/yixuan/spectra/blob/master/test/Eigen.cpp#L27-L29

Best,
Yixuan

2017-04-03 20:59 GMT-04:00 Ian Bell :
I have a matrix that by its construction is known to be Hessenberg (strictly speaking, lower Hessenberg, but that doesn't matter, because the transpose of a matrix has the same eigenvalues as the original and all I care about is eigenvalues).  Is there any magic trick in Eigen that allows more efficient evaluation of the eigenvalues?  The standard eigenvalues() method doesn't seem to do anything smart about checking for the Hessenberg-ness of the matrix.  LAPACK has the function dhseqr; is there anything similar in Eigen?

Ian

--
Yixuan Qiu <yixuanq@xxxxxxxxx>
Department of Statistics,
Purdue University

