Re: [eigen] cache-friendly matrix inverse |


*To*: eigen@xxxxxxxxxxxxxxxxxxx
*Subject*: Re: [eigen] cache-friendly matrix inverse
*From*: Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
*Date*: Thu, 14 May 2009 15:39:42 +0200

2009/5/14 Jitse Niesen <jitse@xxxxxxxxxxxxxxxxx>:

> I tested anyway. Indeed the inverse is massively faster for small matrices.
> But it's also massively faster for small dynamic-size matrices (there is no
> loop unrolling in this case, is there?). And it's still clearly faster for
> 100 x 100 matrices.

Indeed, for dynamic sizes there is no unrolling, but another issue comes into play: cache-friendliness. Gael wrote a very good cache-friendly matrix-vector product implementation (according to the benchmarks on the wiki, the best all-around), and cache behavior becomes increasingly important as the matrix size grows. By contrast, our triangular solver does not yet have comparable optimizations (and, according to our benchmarks, neither do those of other libraries). So again the benchmark is skewed.

> One important caveat: I did not test for accuracy. Using the LU
> decomposition is supposed to be more stable.

I agree, and I didn't dispute that: precomputing the inverse and then multiplying by it accumulates more inaccuracy than using the LU decomposition to solve directly.

> I can see two possibilities. Either the perceived wisdom in numerical
> analysis is wrong. Or there is a lot of scope to optimize LU solve.

It's the second: there is a lot of scope to optimize LU solve. However, doing so is harder than optimizing the matrix-vector product, and won't give quite as good results, because dealing with a triangular matrix is rather painful from the point of view of packet alignment. So the difference may become negligible for large enough matrices; on the other hand, for small enough matrices the matrix-vector product will remain faster, and I still have no idea where the crossover point will be.

Cheers,
Benoit
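[Editor's note: a minimal self-contained sketch of the idea behind a cache-friendly matrix-vector product, not Gael's actual Eigen implementation. For a column-major matrix, iterating column by column reads `A` contiguously, so the hardware prefetcher and cache lines are used well; the function name and layout here are illustrative assumptions.]

```cpp
#include <cassert>

// y = A * x for a column-major matrix A (rows x cols).
// Sketch only: the outer loop walks columns, so each inner loop
// reads one contiguous column of A, which is cache-friendly for
// column-major storage. A naive row-by-row dot-product loop would
// instead stride through memory with step `rows`.
void gemv_colmajor(const double* A, const double* x, double* y,
                   int rows, int cols) {
    for (int i = 0; i < rows; ++i)
        y[i] = 0.0;
    for (int j = 0; j < cols; ++j) {
        const double  xj  = x[j];           // reused across the whole column
        const double* col = A + static_cast<long>(j) * rows; // contiguous
        for (int i = 0; i < rows; ++i)
            y[i] += col[i] * xj;            // unit-stride accumulation
    }
}
```

The same traversal order is what makes blocked/vectorized products effective: contiguous loads map directly onto aligned packet loads, which is exactly what the triangular solver's irregular access pattern makes difficult.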
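[Editor's note: a hedged 2x2 sketch of the two solution paths discussed above, written from scratch rather than taken from Eigen. Path (a) solves A x = b once by elimination; path (b) forms the explicit inverse and multiplies. Both are essentially exact on this tiny well-conditioned example; the structural point is that path (b) layers the rounding errors of an extra matrix-vector product on top of those already committed while forming the inverse.]

```cpp
#include <cassert>
#include <cmath>

// (a) LU-style solve: one elimination step plus substitution.
// Partial pivoting is omitted for brevity; a real solver would pivot.
void solve_lu_2x2(const double A[2][2], const double b[2], double x[2]) {
    double l   = A[1][0] / A[0][0];         // elimination multiplier
    double u11 = A[1][1] - l * A[0][1];     // Schur complement
    double y1  = b[1] - l * b[0];           // forward substitution
    x[1] = y1 / u11;                        // back substitution
    x[0] = (b[0] - A[0][1] * x[1]) / A[0][0];
}

// (b) Explicit inverse by cofactors, then a matrix-vector product.
void solve_via_inverse_2x2(const double A[2][2], const double b[2],
                           double x[2]) {
    double det = A[0][0] * A[1][1] - A[0][1] * A[1][0];
    double inv[2][2] = {{  A[1][1] / det, -A[0][1] / det },
                        { -A[1][0] / det,  A[0][0] / det }};
    x[0] = inv[0][0] * b[0] + inv[0][1] * b[1];
    x[1] = inv[1][0] * b[0] + inv[1][1] * b[1];
}
```

Path (b) wins on speed when the same A is reused for many right-hand sides (each solve becomes one cache-friendly matrix-vector product), which is precisely why the benchmark above favors it; the accuracy caveat is why LU solve remains the default recommendation.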

**Follow-Ups**:
- **Re: [eigen] cache-friendly matrix inverse**, *From:* Benoit Jacob

**References**:
- **[eigen] cache-friendly matrix inverse**, *From:* Benoit Jacob

- **Re: [eigen] cache-friendly matrix inverse**, *From:* Robert Lupton the Good

- **Re: [eigen] cache-friendly matrix inverse**, *From:* Christian Mayer

- **Re: [eigen] cache-friendly matrix inverse**, *From:* Benoit Jacob

- **Re: [eigen] cache-friendly matrix inverse**, *From:* Christian Mayer

- **Re: [eigen] cache-friendly matrix inverse**, *From:* Benoit Jacob

- **Re: [eigen] cache-friendly matrix inverse**, *From:* Benoit Jacob

- **Re: [eigen] cache-friendly matrix inverse**, *From:* Jitse Niesen

