Re: [eigen] RFC: BasicSVD class


I've drifted far away from the project now, but I just want to say that I'm very happy that people are taking care of our SVD situation. Clearly JacobiSVD leaves much to be desired in terms of speed. There is room for a fast-but-less-reliable SVD, which is what I understand BasicSVD to be. I would just suggest a more specific name than "BasicSVD": use a name that is somehow specific to the particular method that your SVD employs.

Do you know how BasicSVD compares to Householder-bidiagonalization SVDs such as the LAPACK methods you mention? Those are what I thought we wanted; but it's been so long now, I'd rather have another thing than nothing at all.


2012/12/10 Mark Borgerding <mark@xxxxxxxxxxxxxx>
Thanks for your assessment of "not useless" ;).  To make your statement below more specific: the algorithm struggles with getting the smallest singular values from ill-conditioned matrices.  This is, to be sure, a harder problem than finding the dominant dimensions.  It is impressive that JacobiSVD can do such a great job of pulling out dimensions that are 10 orders of magnitude down.

However ...

Getting the weakest dimensions is not what I need an SVD for.  I want the first few dominant dimensions.  I suspect this is a common requirement (hence the matlab eigs and svds functions). I have been frustrated in the past by attempts to use JacobiSVD because of the hundred-fold speed difference compared to the LAPACK functions ( sgesdd, cgesdd, dgesdd, zgesdd ).

I'm not suggesting BasicSVD replace JacobiSVD.  I'm offering it as an alternative.

On the subject of power iteration: note that what I implemented is not textbook power iteration.  I use feedback guided by the maximization of the Rayleigh quotient. If you want to disable the feedback, comment out the lines with "damp" in them.

On 12/10/2012 09:12 AM, Jitse Niesen wrote:
On Fri, 7 Dec 2012, Gael Guennebaud wrote:

Indeed, this simple algorithm seems pretty fast and it could be even
faster once we have fully optimized SelfAdjointEigenSolver. However
I'm skeptical about its accuracy since it squares the entries, which is
typically something you want to avoid when using an SVD solver.

The algorithm indeed seems to struggle with ill-conditioned matrices, which have both small and large singular values. The test program below uses a 6-by-6 matrix with doubles and gives the following output:

Singular values (exact):
     1   0.01 0.0001  1e-06  1e-08  1e-10

Singular values (with JacobiSVD):
     1   0.01 0.0001  1e-06  1e-08  1e-10

Singular values (with BasicSVD):
          1        0.01      0.0001 9.99981e-07 1.05497e-08 1.43198e-09

Of course this does not mean that it's useless; there are probably circumstances in which BasicSVD is good enough as Mark indicates, and it would make a nice addition. The power method by itself would already be very nice.



#include <Eigen/Dense>
#include "BasicSVD.h"
#include <iostream>

using namespace Eigen;

int main()
{
  // The singular values
  const int size = 6;
  VectorXd singvals(size);
  singvals << 1, 1e-2, 1e-4, 1e-6, 1e-8, 1e-10;

  // Generate two random orthogonal matrices
  MatrixXd Uexact = MatrixXd::Random(size, size).householderQr().householderQ();
  MatrixXd Vexact = MatrixXd::Random(size, size).householderQr().householderQ();

  // Generate matrix whose SVD is to be computed
  MatrixXd A = Uexact * singvals.asDiagonal() * Vexact;
  std::cout << "Singular values (exact):\n";
  std::cout << singvals.transpose() << "\n\n";

  // Compute SVD using standard algorithm
  VectorXd singvalsJac = A.jacobiSvd().singularValues();
  std::cout << "Singular values (with JacobiSVD):\n";
  std::cout << singvalsJac.transpose() << "\n\n";

  // Compute SVD using new algorithm
  BasicSVD<MatrixXd> svdBasic(A, size);
  VectorXd singvalsBas = svdBasic.singularValues();
  std::cout << "Singular values (with BasicSVD):\n";
  std::cout << singvalsBas.transpose() << "\n";
  return 0;
}
