Re: [eigen] Tensor module - Einstein notation


Einstein's notation is very elegant, and supporting it would indeed make a number of tensor expressions easier to read and write.

I would also favor avoiding implicit summation, at least for the time being. Developing efficient kernels is challenging and extremely time consuming. It's more reasonable for a first implementation to leverage the existing contraction and chipping code, even if this means that we can't support implicit summation.

Good luck,

On Thu, Jun 25, 2015 at 6:19 AM, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:


the main author of this Tensor module (Benoit Steiner, in CC) should be able to give you more accurate feedback.

Regarding the implementation (question 1), for performance reasons it is necessary to fall back to the most appropriate evaluation path depending on the actual operation, and reuse the optimized kernels we have in /Core for 1D and 2D tensors. This is already what is done in the Tensor module: for instance, (some) contractions are implemented on top of matrix-matrix products.

I also confirm that in the current Tensor module there is no tensor product yet, only contractions, mostly because without an Einstein-like notation it is pretty difficult to express general tensor products.

Regarding implicit summation, I'd recommend restricting to Einstein summation only for now: at first glance, converting general implicit summation to optimized routines seems significantly more challenging.


On Sun, Jun 21, 2015 at 1:00 PM, Godeffroy VALET <godeffroy.valet@xxxxxxxxxxxxxxxxx> wrote:

As I said some time ago on the forum, it would be nice to have an "implicit summation" notation such as:

r(i,k)=a(i,j)*b(j,k);      // this contraction is the matrix product
r     =a(0,i)*b(i,1);      // dot product of the 0th row of a with the
                           //   1st column of b, using slicing and contraction
r(i,j,k,l)=a(i,j)*b(k,l);  // tensor product resulting in a 4th-order tensor

The motivation is simplicity, scalability of the notation and readability.

Now there are a few implementation decisions to make:
1) do we implement this feature on top of the existing operations (chip, contraction, tensor product), or do we implement it on its own and rewrite the existing operations on top of it?

The first choice seems more reasonable at first and is also easier. But this notation is actually much more general, and the code might be factored into a unified implementation. Are there operation-specific optimisations that cannot be factored out, etc.?
Besides, I haven't found the tensor product in the currently available operations yet.

2) do we allow general implicit summation or only Einstein notation?

With Einstein notation, the indices are normally "up" or "down", so that at most two instances of the same index are allowed in an expression on the right side of the equal sign (one "up" and one "down" in mathematical writing; in code they look the same):
r(i,k)  =a(i,j)*b(j,k);        //ok, only 2 'j's
r(i,k,l)=a(i,j)*b(j,k)*c(j,l); //not ok, more than 2 'j's

The second expression would be allowed under general implicit summation.
General implicit summation is therefore more powerful, but it adds constraints to the implementation. In particular, if we allow it, we cannot reduce the subexpression a(i,j)*b(j,k) to a contraction operation.

I think that this choice depends on the decision for the first question. If the notation has its own implementation, it is natural to allow general implicit summation. If it relies on underlying operations, then it would be easier to stay with only Einstein summation for now.

What do you think ?

On the forum, I said that I would try to implement this myself. As I can only work on it in my free time, I cannot promise to finish it any time soon. But if someone needs it or wants to do it quicker than me, you can contact me.


