Re: [eigen] Tensor module - Einstein notation



Ah yes cool :)

By the way, one would expect that the "contract" member function would naturally do a tensor product if given an empty array of index pairs (I think there is a static_assert preventing it).
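
A minimal sketch of what that call would look like, assuming the static_assert were relaxed (the sizes here are arbitrary, just for illustration):

#include <unsupported/Eigen/CXX11/Tensor>

int main() {
  Eigen::Tensor<float, 2> a(2, 3), b(4, 5);
  a.setRandom();
  b.setRandom();

  // Contracting over an empty set of index pairs is, mathematically,
  // the tensor (outer) product: a 4th-order tensor of size 2x3x4x5.
  Eigen::array<Eigen::IndexPair<int>, 0> no_pairs;
  Eigen::Tensor<float, 4> outer = a.contract(b, no_pairs);

  return 0;
}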

2015-07-01 1:00 GMT+02:00 Benoit Steiner <benoit.steiner.goog@xxxxxxxxx>:
That's great progress! For transpositions you can use the tensor shuffling op.
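
For instance, a minimal sketch of a generalized transposition of a rank-3 tensor via shuffle (sizes are arbitrary):

#include <unsupported/Eigen/CXX11/Tensor>

int main() {
  Eigen::Tensor<float, 3> t(2, 3, 4);
  t.setRandom();

  // Permute the dimensions: output.dimension(i) == t.dimension(perm[i]),
  // so reversing the order gives a result of size 4 x 3 x 2.
  Eigen::array<int, 3> perm = {2, 1, 0};
  Eigen::Tensor<float, 3> transposed = t.shuffle(perm);

  return 0;
}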

On Tue, Jun 30, 2015 at 3:57 PM, Godeffroy VALET <godeffroy.valet@xxxxxxxxxxxxxxxxx> wrote:
Ok, thank you for your answers !

On my current draft, chipping is done, and I will do contractions soon. Then, for tensor products and assignments, I will probably need to implement new classes (a tensor product expression and a transposition expression).

I will ask if I need help. Feel free to contact me if you like.

Best,
Godeffroy




2015-06-29 20:04 GMT+02:00 Benoit Steiner <benoit.steiner.goog@xxxxxxxxx>:
Einstein's notation is very elegant, and supporting it would indeed make a number of tensor expressions easier to read and write.

I would also favor avoiding implicit summation, at least for the time being. Developing efficient kernels is challenging and extremely time consuming. It's more reasonable for a first implementation to leverage the existing contraction and chipping code, even if this means that we can't support implicit summation.

Good luck,
Benoit

On Thu, Jun 25, 2015 at 6:19 AM, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:

Hi,

The main author of this Tensor module (Benoit Steiner, in CC) should be able to give you more accurate feedback.

Regarding the implementation (question 1), for performance reasons it is necessary to fall back to the most appropriate evaluation path depending on the actual operation, and re-use the optimized kernels we have in /Core for 1D and 2D tensors. This is already what is done in the Tensor module: for instance, (some) contractions are implemented on top of matrix-matrix products.
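
For instance, a contraction of two 2D tensors over one index pair is exactly a matrix-matrix product; a minimal sketch (sizes are arbitrary):

#include <unsupported/Eigen/CXX11/Tensor>

int main() {
  Eigen::Tensor<float, 2> a(2, 3), b(3, 4);
  a.setRandom();
  b.setRandom();

  // Contract a's second dimension against b's first dimension:
  // c(i,k) = sum_j a(i,j) * b(j,k), i.e. the matrix product a * b,
  // which can be routed to the optimized kernels from /Core.
  Eigen::array<Eigen::IndexPair<int>, 1> dims = { Eigen::IndexPair<int>(1, 0) };
  Eigen::Tensor<float, 2> c = a.contract(b, dims);

  return 0;
}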

I also confirm that in the current Tensor module there is no tensor product yet, only contractions, mostly because without Einstein-like notations it is pretty difficult to express general tensor products.

Regarding implicit summation, I'd recommend restricting to Einstein summation only for now: at first glance, converting the general implicit summation to optimized routines seems to be significantly more challenging.

Cheers,
Gael 

On Sun, Jun 21, 2015 at 1:00 PM, Godeffroy VALET <godeffroy.valet@xxxxxxxxxxxxxxxxx> wrote:
Hello,

As I said some time ago on the forum (https://forum.kde.org/viewtopic.php?f=74&t=125958), it would be nice to have an "implicit summation" notation such as:

r(i,k)     = a(i,j)*b(j,k);  // this contraction is the matrix product
r          = a(0,i)*b(i,1);  // dot product of the 0th row of a with the 1st
                             //   column of b, using slicing and contraction
r(i,j,k,l) = a(i,j)*b(k,l);  // tensor product resulting in a 4th-order tensor
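
For comparison, the second example could be written today with the existing slicing/chipping and contraction ops; a minimal sketch (arbitrary sizes), which is roughly what the proposed notation would desugar to:

#include <unsupported/Eigen/CXX11/Tensor>

int main() {
  Eigen::Tensor<float, 2> a(2, 3), b(3, 4);
  a.setRandom();
  b.setRandom();

  // r = a(0,i)*b(i,1): chip out row 0 of a and column 1 of b,
  // then contract the two resulting 1D tensors over their single
  // dimension, i.e. a dot product yielding a rank-0 tensor.
  Eigen::Tensor<float, 1> row0 = a.chip(0, 0);
  Eigen::Tensor<float, 1> col1 = b.chip(1, 1);
  Eigen::array<Eigen::IndexPair<int>, 1> pair = { Eigen::IndexPair<int>(0, 0) };
  Eigen::Tensor<float, 0> r = row0.contract(col1, pair);

  return 0;
}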

The motivation is simplicity, scalability of the notation and readability.


Now there are a few implementation decisions to make:
1) do we implement this feature over the existing operations (chip, contraction, tensor product), or do we implement it on its own and rewrite the existing operations over it?

The first choice seems more reasonable at first and is also easier. But this notation is actually much more general, and the code might be factored into a unified implementation. Are there specific optimisations for each operation that cannot be factored, etc.?
Besides, I haven't found the tensor product in the currently available operations yet.

2) do we allow general implicit summation, or only Einstein notation?

With Einstein notation, the indices are normally "up" or "down", so that at most two instances of the same index are allowed in an expression on the right side of the equal sign (one "up" and one "down" in mathematical writing; in code they look the same):
r(i,k)  =a(i,j)*b(j,k);        //ok, only 2 'j's
r(i,k,l)=a(i,j)*b(j,k)*c(j,l); //not ok, more than 2 'j's

The second expression would be allowed with general implicit summation.
Therefore, general implicit summation is more powerful, but adds constraints to the implementation. In particular, if we allow it, we cannot reduce the subexpression a(i,j)*b(j,k) to a contraction operation.
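
In explicit loop form (a sketch with arbitrary sizes), the second expression reads as follows; note that the single sum over j ties all three factors together, which is why a(i,j)*b(j,k) cannot be contracted over j on its own:

#include <unsupported/Eigen/CXX11/Tensor>

int main() {
  const int I = 2, J = 3, K = 4, L = 5;
  Eigen::Tensor<float, 2> a(I, J), b(J, K), c(J, L);
  a.setRandom(); b.setRandom(); c.setRandom();

  // r(i,k,l) = sum_j a(i,j) * b(j,k) * c(j,l)
  Eigen::Tensor<float, 3> r(I, K, L);
  for (int i = 0; i < I; ++i)
    for (int k = 0; k < K; ++k)
      for (int l = 0; l < L; ++l) {
        float s = 0.f;
        for (int j = 0; j < J; ++j)
          s += a(i, j) * b(j, k) * c(j, l);
        r(i, k, l) = s;
      }
  return 0;
}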

I think that this choice depends on the decision for the first question. If the notation has its own implementation, it is natural to allow general implicit summation. If it relies on underlying operations, then it would be easier to stay with only Einstein summation for now.

What do you think ?

On the forum, I said that I would try to implement this myself. As I can only work on it in my free time, I do not promise I will finish it any time soon. But if someone needs it or wants to do it quicker than me, you can contact me.


Best,
Godeffroy




