|Re: [eigen] State of eigen support for small integers(16 bit, signed/unsigned)|
- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: Re: [eigen] State of eigen support for small integers(16 bit, signed/unsigned)
- From: Rohit Garg <rpg.314@xxxxxxxxx>
- Date: Thu, 20 Aug 2009 06:51:15 -0700
> First, read this,
> The two most important things are to edit the files NumTraits.h and
> MathFunctions.h in Core/.
So one adds new scalar types to Eigen by following the method described on that page, then.
> For vectorization, you'll have to edit the files in Core/arch/... at
> least Core/arch/SSE/PacketMath.h. Not sure about how it works for
> small ints (are there specific SSE instructions for e.g. adding two
> packets of 8 int16's? you know better than me)
> For the bitwise ops, in the non-vector case i don't think you need to
> do anything special since operators & |... work natively; you can
> always check the Functors.h either in Core/ or in Array/ ; for
> vectorization I'm very optimistic too -- we already have ei_pand(),
> ei_por() etc in PacketMath.h and since for bitwise ops the integer
> size is irrelevant, you should be able to use that.
What about shifts?
>> BTW, are such features even welcome in eigen?
> Yes, most welcome. When I look at forums linking to our website, this
> is a frequently requested feature. The fact that people on this list
> have asked for it several times proves that it's a wanted feature.
> Vectorization is most welcome here too -- the prospect of 8x
> boosts on int16 and 16x boosts on int8 is very interesting...
Department of Physics
Indian Institute of Technology