Re: [eigen] [Review] Pull request 66, Huge Tensor module improvements



I have created a new pull request: https://bitbucket.org/eigen/eigen/pull-request/68/tensor-support/diff. It is essentially the same code as in pull request 66, except for two changes:
  * I deleted the .device() API (a rough sketch of what it enabled is below).
  * I deleted the new pblend code and the vectorized version of the tensor .select() code.
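
For context, the .device() API was meant to let you choose where a tensor expression gets evaluated, along these general lines (a rough sketch, not the exact code from the pull request):

  #include <unsupported/Eigen/CXX11/Tensor>

  // Rough sketch only: route the evaluation of a tensor expression through
  // an explicit device object. DefaultDevice is the plain single-threaded
  // CPU path; the interesting cases would be thread-pool or GPU devices.
  Eigen::Tensor<float, 2> a(64, 64), b(64, 64), c(64, 64);
  a.setRandom();
  b.setRandom();

  Eigen::DefaultDevice device;
  c.device(device) = a + b;   // evaluate a + b on the chosen device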


On Thu, Jun 12, 2014 at 1:25 AM, Christian Seiler <christian@xxxxxxxx> wrote:
Hi,

> Sounds good.
>
>
>      * we come to an agreement about the general direction. I see
>        basically the following things we disagree about:
>            - how closely we should follow Eigen's current structure
>            - whether there should be a Tensor <-> TensorArray distinction
>
>
> I initially thought about creating a TensorArray class similar to the
> Eigen Array class, and make the Tensor class an equivalent to the Eigen
> Matrix class. I changed my mind after realizing that unlike the matrix
> case there are various variations on tensor algebra out there (see for
> example the various tensor products in
> http://en.wikipedia.org/wiki/Tensor_product#Other_examples_of_tensor_products),
> and we would end up having to create one Tensor class for each case.
> I think that it will be easier to have a single Tensor class with
> various products implemented as functions instead of the * operator. We'll
> end up with a lot less code if we create kroneckerProduct,
> topologicalProduct, ... methods instead of KroneckerTensor,
> TopologicalTensor,... classes.

So, I completely agree that having kroneckerProduct() etc. as free
functions rather than operators is the right way to go; Tensor itself
should not implement operator* in my eyes.

I just want to follow the rest of Eigen, where Array is the type on
which everything happens coefficient-wise and Matrix is the type that is
treated as a single mathematical object.

Therefore my idea would be: cwise operator* and operator/ for
TensorArray, but no tensor products; tensor product functions for Tensor.

I think this separation is good conceptually.
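
Concretely, I am picturing roughly this kind of split (nothing here exists
yet; the class name TensorArray and the product functions are just
placeholders to illustrate the idea):

  // Illustrative only: neither TensorArray nor these free functions exist
  // yet; names and signatures are placeholders.

  // TensorArray: everything is coefficient-wise, like Eigen::Array.
  TensorArray<float, 3> a(4, 4, 4), b(4, 4, 4);
  TensorArray<float, 3> c = a * b;   // cwise product
  TensorArray<float, 3> d = a / b;   // cwise quotient

  // Tensor: treated as a whole mathematical object, like Eigen::Matrix.
  // No operator*; each product is an explicitly named free function.
  Tensor<float, 2> t(3, 3), u(2, 2);
  auto kp = kroneckerProduct(t, u);     // 6 x 6 Kronecker product
  auto tp = topologicalProduct(t, u);   // and so on for the other products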

> That said I don't have a strong bias against creating a
> TensorArray class.

So, since Christoph also agrees with me on this point, and you don't
really strongly object, I'll go in that direction.

>      * I merge the pull request and start working on the cleanups / things
>        I've mentioned in this thread - not the new features themselves
>        initially, so the LinearAccessBit restriction will stay for the
>        moment, for example, but rather the other things like merging fixed
>        and dynamic sized tensors, getting rid of Sizes<> etc.
>
> Ok. I wasn't planning on working on the cleanups any time soon, so there
> won't be conflicts here.

Ok, great!

>      * In the meantime, you could work on Blend / .device() support for
>        Eigen in general, because I do think this is something that could be
>        really useful in general.
>
>
> Over the summer I'll be working on the tensor expression evaluation
> mechanism in order to support that. I might also start migrating some of the Torch
> library code to use Eigen tensors, which would require some form of
> slicing, so I'll probably also add that.

So before you start with that, we should have a discussion on what we
think the API should be. Not up to the very last detail (I know a lot of
things become clear only when you start implementing), but at least in
general terms.
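
For slicing, for example, I could picture something along these lines
purely as a starting point for that discussion (the names and types are
placeholders, nothing here is decided):

  // Hypothetical slicing interface, only meant as a discussion starter;
  // slice(), offsets and extents are placeholder names.
  Eigen::Tensor<float, 3> t(10, 20, 30);
  t.setRandom();

  Eigen::DSizes<Eigen::DenseIndex, 3> offsets(2, 5, 0);   // where the block starts
  Eigen::DSizes<Eigen::DenseIndex, 3> extents(4, 4, 30);  // its size along each dimension

  Eigen::Tensor<float, 3> s = t.slice(offsets, extents);  // a 4 x 4 x 30 sub-tensor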

>     In that case, we wouldn't be working on the same thing and at the end
>     hopefully we'll have a really solid basis to build upon and add further
>     features.
>
>     Do you think that is reasonable?
>
>
> I do.

Fantastic! :)

Regards,
Christian

--
Benoit

