Re: [eigen] Eigenvalues of (lower) Hessenberg matrix
Eigen::RealSchur<Eigen::MatrixXd> schur(Abalanced.rows());
schur.computeFromHessenberg(Abalanced.transpose(), Eigen::MatrixXd::Zero(Abalanced.rows(), Abalanced.cols()), false);
Eigen::VectorXd eigs = schur.matrixT().diagonal();
I haven't done a speed benchmark, but I will do so once I figure out how to keep the roots I want.
Gael,

I'm afraid that a patch is probably not forthcoming from me, sadly, as I don't know the guts of how the eigenvalue solver operates. On the other hand, I could contribute my matrix balancing routine, without which the eigenvalue solver is totally useless on my problem.

For computeFromHessenberg, do you have to operate on fixed-size matrices? I gave the example of a 16 x 16 matrix, but in general they are dynamically sized.

Ian

On Thu, Apr 6, 2017 at 2:00 PM, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:

Hi,

I really thought EigenSolver had a computeFromHessenberg shortcut, but it looks like it is still missing, though it would be easy to add (similar to SelfAdjointEigenSolver::computeFromTridiagonal, which does exist!). Patch welcome.

In the meantime, since you only care about the eigenvalues, and if they are real, you can use RealSchur::computeFromHessenberg(H, Matrix<double,16,16>::Zero(), false) and then RealSchur::matrixT().diagonal() to get the real eigenvalues. If they are not real, then implementing EigenSolver::computeFromHessenberg based on EigenSolver::compute() and RealSchur::computeFromHessenberg() would be the easiest.

gael

On Thu, Apr 6, 2017 at 4:16 PM, Ian Bell <ian.h.bell@xxxxxxxxx> wrote:

Nope, using dynamically sized arrays; for 16x16, Eigen is about twice as slow as the LAPACK routine tuned for Hessenberg matrices.

On Wed, Apr 5, 2017 at 1:12 AM, Julian Kent <jkflying@xxxxxxxxx> wrote:

Hi Ian,

Just to check, are you using fixed-size matrices? This might let the compiler do considerably more in terms of loop unrolling and SIMD, and could account for the factor of 2 difference you see with LAPACK.

Julian
On 5 April 2017 at 00:07, Ian Bell <ian.h.bell@xxxxxxxxx> wrote:

For future reference, how does one select "computeEigenvectors = false" in EigenSolver?

For these sizes, there just isn't much more to do in Eigen-based eigenvalue solving. The solver in LAPACK for Hessenberg matrices is about two times (very roughly) faster for 16 x 16 matrices. So this is not the end, I guess, but it might be the end of my capabilities in the world of linear algebra.

Kind regards, and thanks for your help,

Yixuan,

For sure I took many replicates - the timings were actually carried out in a Python library built with pybind11 on top of your code, Eigen, and some of my own code. The library is for working with Chebyshev expansions of continuous functions and doing rootfinding with them. I'm relatively familiar with the pitfalls of profiling, so I think this is a pretty fair test.
That's certainly much better - on par at these sizes now with the naive eigenvalues() function in Eigen. I guess if you are focused on larger matrices (as most people seem to be), my matrices are rather tiny and perhaps less interesting.

Ian

On Tue, Apr 4, 2017 at 2:47 PM, Yixuan Qiu <yixuanq@xxxxxxxxx> wrote:

Hi Ian,

Given that your matrices are at a very small scale, I don't expect my class to show a visible improvement. Actually you found it to be slower, which I think may be due to several reasons:

1. My class does some basic scaling of the matrix before the factorization.
2. Did you select "computeEigenvectors = false" in EigenSolver? My class assumes computing eigenvectors, so some calculations are unneeded in your case. You can try the reduced version attached.
3. Did you randomize the order of execution? Did you have enough replications of the experiment? Was the difference statistically significant relative to the standard error? Personally I don't fully trust benchmark results measured by elapsed time at the scale of microseconds. I would recommend Callgrind as the profiler for such tasks.

Hope this helps.

Best,
Yixuan

2017-04-04 15:40 GMT-04:00 Ian Bell <ian.h.bell@xxxxxxxxx>:

Sadly, it seems to be consistently slower than the naive eigenvalues() method of the MatrixXd class (see attached). Any idea why? Have you done any performance benchmarking with the UpperHessenberg code?

On Mon, Apr 3, 2017 at 8:16 PM, Ian Bell <ian.h.bell@xxxxxxxxx> wrote:

Amazing! I'll try that tomorrow at work. From your testing, how long approximately would it take to find the eigenvalues of a 10x10 Hessenberg matrix?

On Mon, Apr 3, 2017 at 8:00 PM, Yixuan Qiu <yixuanq@xxxxxxxxx> wrote:

Eigen implements the upper Hessenberg eigen solver internally in the EigenSolver class.
I have extracted the relevant code and created an independent class in my Spectra library, so probably this is what you want: https://github.com/yixuan/spectra/blob/master/include/LinAlg/UpperHessenbergEigen.h

You can drop the namespace declaration if you want. The usage of this class is similar to other eigen solvers in Eigen: https://github.com/yixuan/spectra/blob/master/test/Eigen.cpp#L27-L29

2017-04-03 20:59 GMT-04:00 Ian Bell <ian.h.bell@xxxxxxxxx>:

I have a matrix that by its construction is known to be Hessenberg (rigorously, lower Hessenberg, but that doesn't matter, because the transpose of a matrix has the same eigenvalues as the original and all I care about is the eigenvalues). Is there any magic trick in Eigen that allows for more efficient evaluation of the eigenvalues? The standard method eigenvalues() doesn't seem to do anything smart about checking for the Hessenberg-ness of the matrix. LAPACK has the function dhseqr; is there anything similar in Eigen?

Ian
--Yixuan Qiu <yixuanq@xxxxxxxxx>
Department of Statistics,