Re: [eigen] Tensor module - Einstein notation


*To*: eigen@xxxxxxxxxxxxxxxxxxx
*Subject*: Re: [eigen] Tensor module - Einstein notation
*From*: Benoit Steiner <benoit.steiner.goog@xxxxxxxxx>
*Date*: Tue, 30 Jun 2015 16:00:12 -0700
*Cc*: Benoit Steiner <bsteiner@xxxxxxxxxx>

That's great progress! For transpositions you can use the tensor shuffling op.
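Eigen's actual API for this is the Tensor module's `shuffle` op; the self-contained sketch below only mimics its semantics (a transposition is a permutation of index order) without depending on Eigen. The `Tensor3` struct and the free function `shuffle` are illustrative assumptions, not Eigen code, and the permutation convention shown (result dimension `d` comes from input dimension `p[d]`) is one common choice.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Minimal rank-3 "tensor" with row-major storage, just to illustrate
// what a shuffle (index permutation) does. NOT Eigen code.
struct Tensor3 {
    std::array<int, 3> dims;
    std::vector<double> data;
    Tensor3(int d0, int d1, int d2)
        : dims{d0, d1, d2},
          data(static_cast<std::size_t>(d0) * d1 * d2, 0.0) {}
    double& operator()(int i, int j, int k) {
        return data[(static_cast<std::size_t>(i) * dims[1] + j) * dims[2] + k];
    }
};

// shuffle(t, {p0,p1,p2}): dimension d of the result is dimension p[d]
// of the input. A full transposition of a rank-3 tensor is p = {2,1,0};
// any other permutation is a partial transposition.
Tensor3 shuffle(Tensor3& t, std::array<int, 3> p) {
    Tensor3 out(t.dims[p[0]], t.dims[p[1]], t.dims[p[2]]);
    for (int i = 0; i < t.dims[0]; ++i)
        for (int j = 0; j < t.dims[1]; ++j)
            for (int k = 0; k < t.dims[2]; ++k) {
                int idx[3] = {i, j, k};
                // Result coordinate d is the input coordinate along axis p[d].
                out(idx[p[0]], idx[p[1]], idx[p[2]]) = t(i, j, k);
            }
    return out;
}
```

In Eigen itself the equivalent would be expressed lazily on a `Tensor` expression, so no copy is made until assignment; the loop above materializes the result only to keep the sketch dependency-free.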

On Tue, Jun 30, 2015 at 3:57 PM, Godeffroy VALET <godeffroy.valet@xxxxxxxxxxxxxxxxx> wrote:

Ok, thank you for your answers!

In my current draft, chipping is done, and I will do contractions soon. Then, for tensor products and assignments, I will probably need to implement new classes (a tensor product expression and a transposition expression). I will ask if I need help. Feel free to contact me if you like.

Best,
Godeffroy

2015-06-29 20:04 GMT+02:00 Benoit Steiner <benoit.steiner.goog@xxxxxxxxx>:

Einstein's notation is very elegant, and supporting it would indeed make a number of tensor expressions easier to read and write. I would also favor avoiding implicit summation, at least for the time being. Developing efficient kernels is challenging and extremely time consuming. It is more reasonable for a first implementation to leverage the existing contraction and chipping code, even if this means that we can't support implicit summation.

Good luck,
Benoit

On Thu, Jun 25, 2015 at 6:19 AM, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:

Hi,

The main author of this Tensor module (Benoit Steiner, in CC) should be able to give you more accurate feedback.

Regarding the implementation (question 1), for performance reasons it is necessary to fall back to the most appropriate evaluation path depending on the actual operation, and to reuse the optimized kernels we have in /Core for 1D and 2D tensors. This is already what is done in the Tensor module: for instance, (some) contractions are implemented on top of matrix-matrix products.

I also confirm that in the current Tensor module there is no tensor product yet, only contractions.
Mostly because, without Einstein-like notations, it is pretty difficult to express general tensor products.

Regarding implicit summation, I'd recommend restricting to Einstein summation only for now: at first glance, converting general implicit summation to optimized routines seems to be significantly more challenging.

Cheers,
Gael

On Sun, Jun 21, 2015 at 1:00 PM, Godeffroy VALET <godeffroy.valet@xxxxxxxxxxxxxxxxx> wrote:

Hello,

As I said some time ago on the forum (https://forum.kde.org/viewtopic.php?f=74&t=125958), it would be nice to have an "implicit summation" notation such as:

`r(i,k) = a(i,j)*b(j,k);     // this contraction is the matrix product`
`r = a(0,i)*b(i,1);          // dot product of the 0th row of a with the 1st column of b, using slicing and contraction`
`r(i,j,k,l) = a(i,j)*b(k,l); // tensor product resulting in a 4th-order tensor`

The motivation is simplicity, scalability of the notation, and readability.

Now there are a few implementation decisions to make:

1) Do we implement this feature over the existing operations (chip, contraction, tensor product), or do we implement it on its own and rewrite the existing operations over it?

The first choice seems more reasonable at first and is also easier. But this notation is actually much more general, and the code might be factored into a unified implementation. Is there a specific optimisation for each operation that cannot be factored, etc.? Besides, I haven't found the tensor product among the currently available operations yet.

2) Do we allow general implicit summation, or only Einstein notation?

With Einstein notation, the indices are normally "up" or "down", so that at most two instances of the same index are allowed in an expression on the right side of the equal sign (one "up" and one "down" in mathematical writing; in code they look the same):

`r(i,k) = a(i,j)*b(j,k);          // ok, only 2 'j's`
`r(i,k,l) = a(i,j)*b(j,k)*c(j,l); // not ok, more than 2 'j's`

The second expression would be allowed with general implicit summation. Therefore, general implicit summation is more powerful, but it adds constraints to the implementation. In particular, if we allow it, we cannot reduce the subexpression `a(i,j)*b(j,k)` to a contraction operation.

I think this choice depends on the decision for the first question. If the notation has its own implementation, it is natural to allow general implicit summation. If it relies on underlying operations, then it is easier to stay with only Einstein summation for now.

What do you think?

On the forum, I said that I would try to implement this myself. As it is only in my free time, I do not promise I will finish it any time soon. But if someone needs it or wants to do it quicker than me, you can contact me.

Best,
Godeffroy
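To make the distinction in question 2 concrete, here is a self-contained, plain-loop sketch of the two summations being compared. This is not the proposed Eigen notation and not Eigen code; the function names `contract` and `general_sum` and the `Matrix` alias are illustrative assumptions. The loops simply spell out what each index expression would mean.

```cpp
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Einstein summation: the repeated index j appears exactly twice, so
// r(i,k) = a(i,j)*b(j,k) is an ordinary contraction (the matrix product).
Matrix contract(const Matrix& a, const Matrix& b) {
    std::size_t I = a.size(), J = b.size(), K = b[0].size();
    Matrix r(I, std::vector<double>(K, 0.0));
    for (std::size_t i = 0; i < I; ++i)
        for (std::size_t k = 0; k < K; ++k)
            for (std::size_t j = 0; j < J; ++j)
                r[i][k] += a[i][j] * b[j][k];
    return r;
}

// General implicit summation: j appears three times in
// r(i,k,l) = a(i,j)*b(j,k)*c(j,l).
// Note that j must stay "alive" across all three factors, which is why
// the subexpression a(i,j)*b(j,k) cannot be folded into a standalone
// pairwise contraction first.
std::vector<Matrix> general_sum(const Matrix& a, const Matrix& b,
                                const Matrix& c) {
    std::size_t I = a.size(), J = b.size(), K = b[0].size(), L = c[0].size();
    std::vector<Matrix> r(I, Matrix(K, std::vector<double>(L, 0.0)));
    for (std::size_t i = 0; i < I; ++i)
        for (std::size_t k = 0; k < K; ++k)
            for (std::size_t l = 0; l < L; ++l)
                for (std::size_t j = 0; j < J; ++j)
                    r[i][k][l] += a[i][j] * b[j][k] * c[j][l];
    return r;
}
```

The sketch also shows why the Einstein-only restriction maps cleanly onto the existing Tensor module: `contract` is exactly what `Tensor::contract` already provides, while `general_sum` would need a dedicated multi-operand kernel.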

**Follow-Ups**: **Re: [eigen] Tensor module - Einstein notation**, *From:* Godeffroy VALET

**References**: **Re: [eigen] Tensor module - Einstein notation**, *From:* Godeffroy VALET

