Re: [eigen] State of eigen support for small integers(16 bit, signed/unsigned)


2009/8/20 Rohit Garg <rpg.314@xxxxxxxxx>:
>> First, read this,
>> The two most important things are to edit the files NumTraits.h and
>> MathFunctions.h in Core/.
>> For vectorization, you'll have to edit the files in Core/arch/... at
>> least Core/arch/SSE/PacketMath.h. Not sure about how it works for
>> small ints (are there specific SSE instructions for e.g. adding two
>> packets of 8 int16's ? you know better than me)
> I had a look at that file. But it doesn't mention anything about the
> vectorization. So how is that handled? Somewhere, eigen will have to
> be told that operation on multiple of 8 can be vectorized and so on.

Yes, that is done in the PacketMath.h file (as that is architecture-dependent).

See in src/Core/arch/SSE/PacketMath.h:

template<> struct ei_packet_traits<float>  : ei_default_packet_traits
{
  typedef Packet4f type; enum {size=4};
  enum {
    HasSin  = 1,
    HasCos  = 1,
    HasLog  = 1,
    HasExp  = 1,
    HasSqrt = 1
  };
};
template<> struct ei_packet_traits<double> : ei_default_packet_traits
{ typedef Packet2d type; enum {size=2}; };
template<> struct ei_packet_traits<int>    : ei_default_packet_traits
{ typedef Packet4i type; enum {size=4}; };

template<> struct ei_unpacket_traits<Packet4f> { typedef float  type; enum {size=4}; };
template<> struct ei_unpacket_traits<Packet2d> { typedef double type; enum {size=2}; };
template<> struct ei_unpacket_traits<Packet4i> { typedef int    type; enum {size=4}; };

So for int16, the "size" enums will be set to 8 (eight 16-bit values fit in a 128-bit register).

There's one thing that scares me. In ei_unpacket_traits, we assume that the scalar type (e.g. int) can be deduced back from the packet type (e.g. Packet4i, which is __m128i). That assumption breaks if several small integer types end up sharing the same packet type (on SSE, every integer packet is __m128i). If that happens, we'll have to reorganize that code a little.

> Also I don't understand the point of editing the NumTraits file. How
> does that help?

Each supported type has a specialization of NumTraits that provides
useful information about the type. For example, for int:

template<> struct NumTraits<int>
{
  typedef int Real;
  typedef double FloatingPoint;
  enum {
    IsComplex = 0,
    HasFloatingPoint = 0,
    ReadCost = 1,
    AddCost = 1,
    MulCost = 1
  };
};

These lines tell, respectively:
- that ei_real(int) should return an int,
- that if a cast to a floating-point type is ever needed, "double" should be used,
- that int is not a complex number type,
- that int is not a floating-point type,
- the last three enums are used in the cost analysis; they don't correspond to any natural unit, so for small ints leave them at 1.

Actually, for small ints the only difference is that "float" is enough as the FloatingPoint typedef.

