Re: [eigen] cache-friendly matrix inverse
- To: eigen <eigen@xxxxxxxxxxxxxxxxxxx>
- Subject: Re: [eigen] cache-friendly matrix inverse
- From: Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
- Date: Thu, 14 May 2009 01:28:19 +0200
[Oops, my turn to reply to Jitse instead of the list...]
2009/5/14, Benoit Jacob <jacob.benoit.1@xxxxxxxxx>:
> 2009/5/14, Jitse Niesen <jitse@xxxxxxxxxxxxxxxxx>:
>>> Good point. I think you're right for very large sizes: asymptotically,
>>> they're the same.
>>> [...]
>>> But for a size like 10x10, the matrix-vector product will be much
>>> faster.
>>
>> Can you give some proof for that?
>>
>> Perhaps you're right (though I think the cross-over point is closer to 3
>> than 10), but it flies in the face of what people in numerical analysis
>> have been teaching for the last thirty years. I don't know myself, I only
>> work with matrices of larger sizes, and for e.g. 100 x 100 I can only
>> repeat what Christian said:
>> * an LU decomposition instead of an inverse takes the same time, AND
>> * an LU decomposition instead of an inverse is more precise
>
> So, we agree that multiplying by the inverse will be faster for small
> enough sizes and that LU solving will be faster for large enough
> sizes; the only question is where the crossover point lies.
>
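For concreteness, here is a minimal sketch of the two alternatives being weighed. It is written against today's Eigen 3 API (MatrixXd, PartialPivLU), which postdates this thread, so the class names are assumptions about the current library rather than what was available in 2009:

#include <Eigen/Dense>
#include <iostream>

int main()
{
  const int n = 10;  // the small-size regime under discussion
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, n);
  Eigen::VectorXd b = Eigen::VectorXd::Random(n);

  // Alternative 1: precompute the inverse once; every subsequent solve
  // is then a plain matrix-vector product.
  Eigen::MatrixXd Ainv = A.inverse();
  Eigen::VectorXd x1 = Ainv * b;

  // Alternative 2: factor once with partial-pivoting LU; every solve is
  // then a pair of triangular substitutions.
  Eigen::PartialPivLU<Eigen::MatrixXd> lu(A);
  Eigen::VectorXd x2 = lu.solve(b);

  // The accuracy argument: compare the residuals of the two solutions.
  std::cout << "residual, inverse * b : " << (A * x1 - b).norm() << "\n"
            << "residual, lu.solve(b) : " << (A * x2 - b).norm() << "\n";
  return 0;
}

Per solve, the inverse route costs one n-by-n matrix-vector product and the LU route costs two n-by-n triangular solves, roughly the same flop count, which is why the crossover is decided by constants such as unrolling and SIMD rather than by asymptotics.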
> To be honest, I don't have any idea about that. Since we're talking
> about small sizes here, it will even depend on details such as whether
> the size of a column (assuming column-major storage) is a multiple of
> the SIMD vector size, so one can't give a very general answer. As for
> testing with Eigen: since we're talking about small sizes, unrolling
> will be very important, and currently the matrix-vector product is
> unrolled but the triangular solver is not, so I don't think this would
> produce useful numbers.
>
> Cheers,
> Benoit
>
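Along the lines of the benchmarking caveat in the last paragraph, here is a rough sketch of what such a timing test could look like, using std::chrono and today's Eigen 3 API. The 10x10 size and the repeat count are arbitrary, and, as noted above, the result would mostly reflect which code paths happen to be unrolled:

#include <Eigen/Dense>
#include <chrono>
#include <iostream>

// Time many repeated solves against one 10x10 matrix that has already
// been inverted / factored, the regime where the two options differ
// only in the cost of the per-solve step.
int main()
{
  const int n = 10;
  const int repeats = 1000000;
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, n);
  Eigen::VectorXd b = Eigen::VectorXd::Random(n);
  Eigen::MatrixXd Ainv = A.inverse();
  Eigen::PartialPivLU<Eigen::MatrixXd> lu(A);
  Eigen::VectorXd x(n);
  double sink = 0;  // keep the loops from being optimized away

  auto t0 = std::chrono::steady_clock::now();
  for (int i = 0; i < repeats; ++i) { x.noalias() = Ainv * b; sink += x(0); }
  auto t1 = std::chrono::steady_clock::now();
  for (int i = 0; i < repeats; ++i) { x = lu.solve(b); sink += x(0); }
  auto t2 = std::chrono::steady_clock::now();

  std::cout << "inverse * b : "
            << std::chrono::duration<double>(t1 - t0).count() << " s\n"
            << "lu.solve(b) : "
            << std::chrono::duration<double>(t2 - t1).count() << " s\n"
            << "(ignore: " << sink << ")\n";
  return 0;
}

A serious measurement would of course be compiled with optimizations and vectorization enabled, and would vary n around the suspected crossover point instead of fixing it at 10.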