Re: [eigen] BDCSVD update



Congrats! With your new solver I now get:

(versus MKL, times in seconds)
BDCSVD:         0.61278
BDCSVD  U+V:    5.05408
dgesdd_:        0.486337
dgesdd_ U+V:    1.20376


(versus LAPACK, times in seconds)
BDCSVD:         1.94448
BDCSVD  U+V:    11.7599
dgesdd_:        1.82727
dgesdd_ U+V:    3.36355

FYI, here the bidiagonalization step takes about 1.7s.

So, as you guessed, there is major room for improvement in the computation of U and V.
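
For reference, here is a minimal sketch of the kind of measurement reported above: BDCSVD of a 1000 x 1000 random matrix, once for the singular values only and once with U and V as well. This is not the benchmark code actually used for the numbers in this thread; it only assumes an Eigen version where BDCSVD is reachable through <Eigen/Dense>.

// Sketch only: times Eigen's BDCSVD on a 1000 x 1000 random matrix,
// singular values only vs. with the full U and V factors.
#include <Eigen/Dense>
#include <chrono>
#include <iostream>

int main()
{
  const int n = 1000;
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, n);

  auto time_it = [](auto&& f) {
    auto t0 = std::chrono::steady_clock::now();
    f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
  };

  // Singular values only.
  double t_sv = time_it([&] { Eigen::BDCSVD<Eigen::MatrixXd> svd(A); });

  // Singular values plus the full U and V factors.
  double t_uv = time_it([&] {
    Eigen::BDCSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeFullU | Eigen::ComputeFullV);
  });

  std::cout << "BDCSVD:       " << t_sv << "\n"
            << "BDCSVD  U+V:  " << t_uv << "\n";
  return 0;
}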


gael.


On Tue, Aug 27, 2013 at 5:44 PM, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:

On Tue, Aug 27, 2013 at 4:54 PM, Jitse Niesen <jitse@xxxxxxxxxxxxxxxxx> wrote:
Gael, thanks for your work on bidiagonalization. Hopefully we are getting close to LAPACK now.

Yes. The bidiagonalization step is now as fast as LAPACK or Intel MKL (with a single thread), even though it is still dominated by matrix-vector products.

Before your new solving method, I got these numbers (for a 1000 x 1000 random matrix):

On a Xeon X5570, versus Intel MKL (times in seconds):
BDCSVD:         1.03396
BDCSVD  U+V:    5.39393
dgesdd_(MKL):        0.53134
dgesdd_(MKL) U+V:    1.25072

On a Core 2, versus LAPACK (times in seconds):
BDCSVD:         2.35741
BDCSVD  U+V:    12.1212
dgesdd_:        1.81886
dgesdd_ U+V:    3.46479
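
(For completeness: the dgesdd_ timings above correspond to calls along the following lines. This is only an illustrative sketch of the standard LAPACK Fortran interface with a workspace query; the helper name run_dgesdd and the assumption of 32-bit LAPACK integers are mine, and this is not the harness used for the numbers above.)

// Sketch only: calls LAPACK's dgesdd_ on an n x n column-major matrix,
// either singular values only (jobz = 'N') or with full U and V^T (jobz = 'A').
#include <vector>

extern "C" void dgesdd_(const char* jobz, const int* m, const int* n,
                        double* a, const int* lda, double* s,
                        double* u, const int* ldu, double* vt, const int* ldvt,
                        double* work, const int* lwork, int* iwork, int* info);

void run_dgesdd(std::vector<double> a /* n*n, column-major; overwritten */, int n, bool want_uv)
{
  const char jobz = want_uv ? 'A' : 'N';
  std::vector<double> s(n);
  std::vector<double> u(want_uv ? std::size_t(n) * n : 1);
  std::vector<double> vt(want_uv ? std::size_t(n) * n : 1);
  std::vector<int> iwork(8 * n);
  int info = 0, lwork = -1;
  double wkopt = 0;

  // Workspace query (lwork = -1), then the actual decomposition.
  dgesdd_(&jobz, &n, &n, a.data(), &n, s.data(), u.data(), &n, vt.data(), &n,
          &wkopt, &lwork, iwork.data(), &info);
  lwork = static_cast<int>(wkopt);
  std::vector<double> work(lwork);
  dgesdd_(&jobz, &n, &n, a.data(), &n, s.data(), u.data(), &n, vt.data(), &n,
          work.data(), &lwork, iwork.data(), &info);
}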

gael


