Re: [eigen] Support for true Array



Actually, Eigen already allows you to do either SoA or AoS. For instance, let's consider a list of Vector3f. To get the AoS behavior you'd do:

Matrix<float,3,Dynamic,ColMajor> data;

and for the SoA:

Matrix<float,3,Dynamic,RowMajor> data;

Unlike with a std::vector of Vector3f, you access the i-th element using .col(i) instead of [i].
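
For instance, a minimal sketch of the AoS (column-major) variant, filling and reading back one element via .col(i) (illustrative code only, assuming Eigen 3-style headers):

  #include <Eigen/Dense>

  int main()
  {
    // AoS layout: each column is one 3D point, stored contiguously.
    Eigen::Matrix<float, 3, Eigen::Dynamic, Eigen::ColMajor> data(3, 4);

    // Access the i-th "Vector3f" through .col(i).
    data.col(0) = Eigen::Vector3f(1.f, 2.f, 3.f);
    Eigen::Vector3f p = data.col(0);   // copy the element back out
    return p.sum() > 0.f ? 0 : 1;
  }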

Then you can for instance normalize all vectors:

data.colwise().normalize()

If you opted for the SoA layout, this normalization will be vectorized (though there is still room for optimization here).

Note that all basic linear operations (+, -, products by a scalar) are automatically vectorized in both cases, because Eigen will then interpret such a matrix as a linear vector of 3*size coefficients.
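
A rough sketch combining both points, the column-wise normalization on the SoA layout and a flat linear operation (the sizes and typedef name are arbitrary, just for illustration):

  #include <Eigen/Dense>

  int main()
  {
    const int n = 1024;
    typedef Eigen::Matrix<float, 3, Eigen::Dynamic, Eigen::RowMajor> SoA3Xf;

    // SoA layout: all x's, then all y's, then all z's.
    SoA3Xf a = SoA3Xf::Random(3, n);
    SoA3Xf b = SoA3Xf::Random(3, n);

    // Per-vector operation: normalize every column; vectorizable thanks to
    // the SoA layout.
    a.colwise().normalize();

    // Coefficient-wise linear operations are vectorized with either storage
    // order, since the whole matrix can be treated as one flat vector of
    // 3*n coefficients.
    SoA3Xf c = 2.f * a + b;
    return c(0, 0) > 0.f ? 0 : 1;
  }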

Finally, this SoA layout is still not optimal. A better option is to group the data per packet, e.g., for 2D vectors of floats you would do:

x0, x1, x2, x3, y0, y1, y2, y3, x4, x5, x6, x7, y4, y5, y6, y7, ...

but that one is more difficult to implement because it requires special care.
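
A rough sketch of that interleaved layout (sometimes called "array of structures of arrays"), assuming a SIMD packet of 4 floats; this is plain C++ for illustration, not something Eigen provides:

  #include <cstddef>
  #include <vector>

  // One packet-sized block: four x's followed by four y's.
  struct Float2Packet {
    float x[4];
    float y[4];
  };

  // N 2D vectors stored as ceil(N/4) interleaved blocks, giving the layout
  // x0 x1 x2 x3 y0 y1 y2 y3 x4 x5 x6 x7 y4 y5 y6 y7 ...
  struct Float2AoSoA {
    std::vector<Float2Packet> blocks;

    explicit Float2AoSoA(std::size_t n) : blocks((n + 3) / 4) {}

    float& x(std::size_t i) { return blocks[i / 4].x[i % 4]; }
    float& y(std::size_t i) { return blocks[i / 4].y[i % 4]; }
  };

  int main()
  {
    Float2AoSoA v(10);
    v.x(7) = 1.f;   // lands in blocks[1].x[3]
    return 0;
  }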

Gael.

On Wed, Nov 25, 2009 at 3:02 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:
2009/11/25 Rohit Garg <rpg.314@xxxxxxxxx>:
> On Wed, Nov 25, 2009 at 7:12 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:
>> 2009/11/25 Rohit Garg <rpg.314@xxxxxxxxx>:
>>> Array Of Structures - AoS
>>> ===================
>>> struct float2 {
>>>     float x, y;
>>> };
>>>
>>> float2 arr[N];
>>>
>>> Structure Of Arrays - SoA
>>> ===================
>>> struct float2 {
>>>     float x[N], y[N];
>>> };
>>>
>>> SoA form has *big* advantages over AoS for SIMD ops.
>>
>> Yes indeed; so: AoS is what we had so far in Eigen; SoA is what we'll
>> get thanks to the new Array class, when using e.g. Matrix<Array,3,1>.
>
> Just so that I understand it: if I define the Array type at compile time,
> will Eigen change the data layout into SoA form and then vectorize it?
> Changing the data layout is the key thing here; that's what gives SoA its
> SIMD efficiency. I may have misunderstood what has been presented so far,
> but I don't think changing the data layout has been proposed yet.

AFAICS, Eigen is not going to do anything clever here.

If you do:

 Matrix<Array,3,1>

then Eigen will just have the same Matrix layout as usual, that is:

 Array ; Array ; Array

So if your Arrays are dynamically allocated 1-dimensional arrays, each
consisting of a pointer and an int (the size), then the layout is:

 pointer0; size0; pointer1; size1; pointer2; size2

each pointer points to a traditional C array.

If, on the other hand, Array has a fixed size N, then the layout becomes:

 array0[0] ; ...; array0[N-1] ; array1[0] ; ... ; array1[N-1] ;
array2[0] ; ... ; array2[N-1]

In any case it is true that each Array coefficient is stored
contiguously, which enables SIMD on each coeff operation.
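
To make the fixed-size case concrete, here is a toy sketch of that layout in plain C++; the "Array" below is only a stand-in for the proposed class, not existing Eigen code:

  // Stand-in for the proposed fixed-size Array-as-scalar type.
  template <int N>
  struct Array {
    float data[N];
  };

  // The equivalent of Matrix<Array,3,1>: three coefficients laid out back to
  // back, i.e. array0[0..N-1] ; array1[0..N-1] ; array2[0..N-1]. Each
  // coefficient's N floats are contiguous, which is what makes a
  // per-coefficient operation like u[0] = v[0] + w[0] SIMD-friendly.
  struct Vector3OfArrays {
    Array<8> coeff[3];
  };

  static_assert(sizeof(Vector3OfArrays) == 3 * 8 * sizeof(float),
                "coefficients are stored back to back with no padding");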

Is this OK?

Benoit

>>
>> Benoit
>>
>>>>
>>>> This is about creating an Array class that can be used as a scalar
>>>> type, and on which the arithmetic ops are vectorized. So when you have
>>>>   Matrix<Array,3,1> u,v,w;
>>>> and you do:
>>>>   u = v+w;
>>>> this expands (thanks to Eigen expression templates and unrollers) to:
>>>>   u[0] = v[0]+w[0];
>>>>   u[1] = v[1]+w[1];
>>>>   u[2] = v[2]+w[2];
>>>> Each of these 3 lines is now an operation on Array objects and expands
>>>> to a vectorized loop doing the addition.
>>>>
>>>> Benoit
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Rohit Garg
>>>
>>> http://rpg-314.blogspot.com/
>>>
>>> Senior Undergraduate
>>> Department of Physics
>>> Indian Institute of Technology
>>> Bombay






