I’ve managed to make it work and, for the scalar product, your code is as fast as my code with a structure of arrays. That’s well done.
It’s too bad that we cannot get full speed for the linear combination with complex numbers. As you said, these optimizations only help when we are not limited by memory bandwidth, i.e. when the vectors are small enough to stay in cache.
For pzero, you need the devel branch of Eigen, or just replace it with pset1<Packet>(0).
For the linear combination with complex coefficients, I don't think it's possible to get close to SoA performance for such small (i.e. in-cache) arrays.
You are right about the operations. To be clear, for the “linear combination” I am interested in the case where the coefficients rk are complex numbers.
I have tried to compile your code that implements the scalar product, but I can't find any definition of pzero. Where is it?
> On 10 Jan 2019, at 22:25, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:
> just to be sure I fully understood the operations you're considering, let's assume you have:
> VectorXcf A1(300), A2(300), ...;
> complex<float> c1, c2, ...;
> float r1, r2, ...;
> then "Scalar product" means:
> A1.dot(A2)
> and "Linear combination" means:
> r1*A1 + r2*A2 + ...
> If this is right, then linear combination with real coefficients should already be 100% optimal, and the only problematic operation could be dot products, because of the complex*complex product overhead. I think those can be optimized while keeping an AoS layout by accumulating within multiple packets that we combine only at the end of the reduction, instead of doing an expensive combination for every product. This is similar to what we do in GEMM.
> I gave it a shot, and I get similar speed to an SoA layout. See the attached file. This is a proof of concept: AVX only, no conjugation, n%16==0, etc.
> I think this can be integrated within Eigen by generalizing a bit the current reduction code to allow for custom accumulation packets.