|Re: [eigen] Bug(s) report: SparseQR on tall-thin matrices|
- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: Re: [eigen] Bug(s) report: SparseQR on tall-thin matrices
- From: Julian Kent <julian.kent@xxxxxxxxx>
- Date: Fri, 13 Jan 2017 11:18:11 +0100
Ah, yes, I guess I am more used to the 'thin' QR decomposition's Q/R sizes. Regardless, we still need a way of correctly accessing the 'thin' factors. Perhaps add a 'leftColumns' function for matrixQ? Q would remain m x m and R would remain m x n, but the thin factors would be easy to access via Q.leftColumns(n) and R.topLeftCorner(n, n) for all matrix sizes.
I also have some ideas for making SparseQR_QProduct faster using a gather-dense-distribute pattern, which would improve handling of dense blocks, though I'm not sure whether you've already tried this approach. If you think it is promising, I could probably spend some time on it.