RE: [eigen] Bug(s) report: SparseQR on tall-thin matrices


Hi there,

We’ve done a fair amount of work on rationalizing SparseQR so that the “tall thin” and “full” cases interoperate efficiently, although we haven’t fully packaged it for submission yet. Just posting here in case it helps others.

Take a look at [link], which includes some fixes related to those you’ve already pointed to, and a revamp of LevenbergMarquardt using much faster specialist QRs:

BlockDiagonalSparseQR -- QR for block-diagonal matrices
BlockSparseQR -- QR for matrices with structure [Block1 Block2], where the blocks have special QR solvers
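As a rough illustration of why a dedicated block-diagonal QR can be much faster: the thin QR of a block-diagonal matrix is itself block-diagonal, so each block can be factored independently. A minimal NumPy sketch of that property (illustrative only, not the Eigen code; the block sizes are arbitrary):

```python
import numpy as np

def block_diag(*blocks):
    # Assemble a dense block-diagonal matrix from the given blocks.
    rows = sum(b.shape[0] for b in blocks)
    cols = sum(b.shape[1] for b in blocks)
    out = np.zeros((rows, cols))
    r = c = 0
    for b in blocks:
        out[r:r + b.shape[0], c:c + b.shape[1]] = b
        r += b.shape[0]
        c += b.shape[1]
    return out

rng = np.random.default_rng(0)
blocks = [rng.standard_normal((5, 2)), rng.standard_normal((4, 3))]
A = block_diag(*blocks)

# Factor each block independently, then assemble: the thin QR of a
# block-diagonal matrix is block-diagonal in both Q and R.
qrs = [np.linalg.qr(b) for b in blocks]  # mode='reduced' by default
Q = block_diag(*[q for q, _ in qrs])
R = block_diag(*[r for _, r in qrs])

assert np.allclose(Q @ R, A)
```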


Dr Andrew Fitzgibbon FREng FBCS FIAPR
Partner Scientist
Microsoft HoloLens, Cambridge, UK

From: Julian Kent [mailto:jkflying@xxxxxxxxx] 
Sent: 13 January 2017 10:25
To: eigen@xxxxxxxxxxxxxxxxxxx
Subject: Re: [eigen] Bug(s) report: SparseQR on tall-thin matrices

Whoops, sorry about the auto-signature that slipped in there. Switching to personal email.

On 13 January 2017 at 11:18, Julian Kent <julian.kent@xxxxxxxxx> wrote:
Ah, yes, I guess I am more used to the 'thin' QR decomposition Q/R sizes. Regardless, we need a way of correctly accessing the 'thin' matrices somehow. Perhaps add a 'leftColumns' function for matrixQ? This would let Q be m x m and R be m x n, but you could easily access the thin factors with Q.leftColumns(n) and R.topLeftCorner(n, n) for all matrix sizes.
I also have some ideas for making SparseQR_QProduct faster using a gather-dense-distribute pattern which would allow for improved handling of dense blocks, although I'm not sure if you've already tried this approach. If you think it is promising I could probably spend some time on it.
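For what it's worth, the gather-dense-distribute idea can be sketched in NumPy (purely illustrative, not the SparseQR_QProduct code): gather the rows in a Householder reflector's sparse support into a small dense buffer, apply the update densely there, and scatter the result back, instead of touching all m rows:

```python
import numpy as np

rng = np.random.default_rng(1)
m, k = 8, 3
Y = rng.standard_normal((m, k))  # dense matrix being multiplied by Q

# A sparse Householder reflector: nonzero only on a few rows (its support).
support = np.array([1, 4, 6])
v = np.zeros(m)
v[support] = rng.standard_normal(len(support))
beta = 2.0 / (v @ v)

# Reference: full dense update  Y -= beta * v (v^T Y)
Y_ref = Y - beta * np.outer(v, v @ Y)

# Gather-dense-distribute: only the supported rows are read and written.
Y_gds = Y.copy()
buf = Y_gds[support, :]               # gather (fancy indexing copies)
vs = v[support]
buf -= beta * np.outer(vs, vs @ buf)  # dense update on the small buffer
Y_gds[support, :] = buf               # distribute back

assert np.allclose(Y_gds, Y_ref)
```

Rows outside the support are never touched, which is where the savings would come from when the reflectors are sparse but the right-hand side is dense.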
Julian Kent | Computer Vision Engineer
EPFL Innovation Park | Building F | 1015 Lausanne, Switzerland | julian.kent@xxxxxxxxx

On Thu, Jan 12, 2017 at 10:44 PM, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:


This is mainly an issue of "full QR" versus "thin QR", and the actual implementation is inconsistent here. If the input matrix is m x n, m >= n, then:

qr.matrixQ().cols() == n

whereas, once evaluated into an actual matrix:

SparseMatrix<double> Q = qr.matrixQ();
Q.cols() == m

To be consistent with the dense world, qr.matrixQ().cols() should be equal to m by default, plus some mechanism to extract the thin part. We could also add a thinQ() method. Then regarding matrixR(), we could add a thinR() method for convenience.
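For reference, the dense-world convention described here maps directly onto NumPy's 'complete' vs 'reduced' QR modes, and the thin factors are just the leading blocks of the full ones. A small sketch of that relationship (illustrative only, not Eigen code):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 6, 3
A = rng.standard_normal((m, n))  # tall-thin input

# Full ("complete") QR: Q is m x m, R is m x n (bottom m-n rows of R are zero).
Qf, Rf = np.linalg.qr(A, mode='complete')
assert Qf.shape == (m, m) and Rf.shape == (m, n)

# Thin ("reduced") QR: Q is m x n, R is n x n.
Qt, Rt = np.linalg.qr(A, mode='reduced')
assert Qt.shape == (m, n) and Rt.shape == (n, n)

# The thin factors are the leading blocks of the full ones,
# i.e. the leftColumns(n) of Q and topRows(n) of R in Eigen terms.
assert np.allclose(Qf[:, :n], Qt)
assert np.allclose(Rf[:n, :], Rt)
```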


On Mon, Jan 9, 2017 at 3:51 PM, Julian Kent <julian.kent@xxxxxxxxx> wrote:
While trying to use SparseQR on a matrix A with rows > cols, I found 2 bugs:
1) The size of qr.matrixR() is m x n, instead of the expected n x n. SparseQR.h:305 initialises m_R with size (m, n), and nothing does any resizing. For now I'm just taking topRows(n), but I'm not entirely sure this is correct, and it certainly isn't the behaviour I expect. Shouldn't there be a non-destructive resize, if the extra rows are really necessary for intermediate processing?
2) qr.matrixQ() claims to be size m x n, as expected. However, trying to multiply qr.matrixQ() with a n x k dense matrix gives an assertion error:
Eigen/src/SparseQR/SparseQR.h:640: void Eigen::SparseQR_QProduct<SparseQRType, Derived>::evalTo(DesType&) const [with DesType = Eigen::Matrix<double, -1, -1>; SparseQRType = Eigen::SparseQR<Eigen::SparseMatrix<double>, Eigen::NaturalOrdering<int> >; Derived = Eigen::Matrix<double, -1, -1>]: Assertion `m_qr.m_Q.rows() == m_other.rows() && "Non conforming object sizes"' failed.
In .solve(...), only matrixQ().transpose() is used, which is probably why this hasn't shown up earlier.

These bugs may be interacting with each other to fool any accuracy tests using A*P = Q*R on tall-thin matrices: the extra rows in R make the product conform with the oversized Q, so the size assert passes.
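A NumPy sketch of how the two size bugs can cancel in such a test (illustrative; the dimensions mirror the tall-thin case, without column pivoting):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 7, 3
A = rng.standard_normal((m, n))

Q, R = np.linalg.qr(A, mode='complete')  # Q: m x m, R: m x n

# The accuracy test A == Q @ R still passes with the "oversized" m x n R,
# because its bottom m - n rows are zero and multiply harmlessly into the
# extra columns of the full Q -- so a size bug in R can hide behind a
# size bug in Q.
assert np.allclose(Q @ R, A)
assert np.allclose(R[n:, :], 0.0)

# The workaround from the report: take topRows(n) of R and the matching
# leftColumns(n) of Q to recover the thin factorization.
assert np.allclose(Q[:, :n] @ R[:n, :], A)
```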
Let me know if you need example matrices to work with.

Julian Kent
