Re: [eigen] Matrix decompositions
- To: eigen <eigen@xxxxxxxxxxxxxxxxxxx>
- Subject: Re: [eigen] Matrix decompositions
- From: Gael Guennebaud <gael.guennebaud@xxxxxxxxx>
- Date: Thu, 28 Feb 2013 23:52:27 +0100
Again, if you map dynamic-size matrices or vectors, then yes,
vectorization will apply, but of course the performance with respect
to alignment will highly depend on the object sizes and the kind of
operations:
- For matrix-matrix operations, such as those found in factorizations,
the aligned flag is not used, and the actual alignment does not matter.
- For matrix-vector products, the aligned flag is not used, but the
actual alignment does improve performance (the best case is when all
columns and the result vector have the same alignment).
- For coefficient-wise operations like A = B + C, the vectorization is
applied such that A is written using aligned stores. Therefore, if A,
B, and C all have the aligned flag, then B and C can also be read with
aligned loads. Otherwise we have to access them using unaligned loads,
which are more costly, especially if the data are really not aligned.
Here the overhead is higher for small objects.
I see that you are now talking about autodiff, which mostly involves
coefficient-wise operations on usually not-too-large vectors. So yes,
in this case the aligned flag does matter.
As you see, alignment is a rather complex topic and the optimal
memory layout really depends on the use case. So if you want more
precise information, please be as specific as possible.
On Thu, Feb 28, 2013 at 10:47 PM, Sameer Agarwal wrote:
> On Thu, Feb 28, 2013 at 1:24 PM, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:
>> On Thu, Feb 28, 2013 at 10:00 PM, Sameer Agarwal
>> <sameeragarwal@xxxxxxxxxx> wrote:
>> > So why is SIMD/vectorization disabled on unaligned matrices? And, in
>> > turn, on Map objects?
>> SIMD is disabled only for fixed-size matrices, which are supposed to be
>> small, and so not worth trying to adapt to the actual alignment at
>> run-time. For dynamic-size matrices, everything is vectorized.
> Two questions here.
> 1. So I can take an arbitrary block of memory, make a Map object out of it
> and all operations on it are vectorized?
> 2. What are the performance implications of using an aligned versus an
> unaligned matrix then? Because in the automatic differentiation code in
> Ceres, where we use unaligned Eigen matrices, declaring them aligned makes a
> 2x difference in performance. And we thought this was because vectorization
> was getting disabled.