Re: [eigen] cache-friendly matrix inverse


[Oops, my turn to reply to Jitse instead of the list...]

2009/5/14, Benoit Jacob <jacob.benoit.1@xxxxxxxxx>:
> 2009/5/14, Jitse Niesen <jitse@xxxxxxxxxxxxxxxxx>:
>>> Good point. I think you're right for very large sizes, asymptotically
>>> they're the same.
>>> [...]
>>> But for a size like 10x10, the matrix-vector product will be much
>>> faster.
>>
>> Can you give some proof for that?
>>
>> Perhaps you're right (though I think the cross-over point is closer to 3
>> than 10), but it flies in the face of what people in numerical analysis have
>> been
>> teaching for the last thirty years. I don't know myself, I only do
>> matrices of larger size, and for e.g. 100 x 100 I can only repeat what
>> Christian said:
>> * LU decomp instead of inverse takes the same time, AND
>> * LU decomp instead of inverse is more precise
>
> So, we agree that multiplying by the inverse will be faster for small
> enough sizes, and that LU solving will be faster for large enough
> sizes, and the only question is what the crossover point is.
>
> I don't have any idea about that, to be honest. Since we're talking
> about small sizes here, it'll even depend on details such as whether the
> size of a column (assuming column-major storage) is a multiple of the
> SIMD vector size, so one can't give a very general answer. As for
> testing with Eigen: since we're talking about small sizes, unrolling
> will be very important, and currently the matrix-vector product is
> unrolled but the triangular solver is not, so I don't think this will
> produce useful numbers.
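>
> [To make the "asymptotically the same" point concrete, here is a rough
> flop-count sketch. It is not Eigen code, just back-of-the-envelope
> arithmetic: per right-hand side, multiplying by a precomputed inverse
> costs one dense n x n matrix-vector product, while reusing a precomputed
> LU factorization costs a forward plus a backward triangular solve. Both
> come out to ~2n^2 flops, so the small-size advantage of the
> matrix-vector product has to come from vectorization and unrolling, not
> from the operation count.]
>
> ```cpp
> #include <cstdio>
>
> // Flops for y = A_inv * b with a dense n x n matrix:
> // n^2 multiplications plus n(n-1) additions, i.e. 2n^2 - n.
> long matvec_flops(long n) { return n * n + n * (n - 1); }
>
> // Flops for solving L U x = b via two triangular solves: each solve
> // does ~n(n-1)/2 multiplications, ~n(n-1)/2 additions and n divisions,
> // i.e. ~n^2 flops per solve, ~2n^2 for both.
> long lu_solve_flops(long n) {
>     long one_triangular = n * (n - 1) + n;
>     return 2 * one_triangular;
> }
>
> int main() {
>     for (long n : {3L, 10L, 100L}) {
>         std::printf("n=%3ld  matvec=%6ld  lu_solve=%6ld\n",
>                     n, matvec_flops(n), lu_solve_flops(n));
>     }
>     return 0;
> }
> ```
>
> [The leading terms agree (e.g. 19900 vs 20000 flops at n = 100), which
> is why any measured crossover would be down to implementation details
> like SIMD alignment and unrolling rather than arithmetic cost.]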
>
> Cheers,
> Benoit
>
