Re: [eigen] [Review] Pull request 66, Huge Tensor module improvements



 
You've got two things in your pull request that affect Eigen in general,
not just the Tensor module: Blend and .device(). So maybe we can come to
the following arrangement:

 * you resubmit your pull request without Blend and .device() (but with
   EIGEN_DEVICE_FUNC)

Sounds good.
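
For concreteness, the annotation is just a macro placed in front of the functions that need to run on the device; a minimal sketch (the accessor below is only an illustrative example, not code from the pull request; EIGEN_DEVICE_FUNC expands to __host__ __device__ when compiling with nvcc and to nothing otherwise):

    EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE
    const Scalar& coeff(Index index) const
    {
      // callable from both host and device code once the macro is applied
      return m_data[index];
    }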
 
 * we come to an agreement about the general direction. I see
   basically the following things we disagree about:
       - how closely we should follow Eigen's current structure
       - whether there should be a Tensor <-> TensorArray distinction

I initially thought about creating a TensorArray class similar to the Eigen Array class, and making the Tensor class the equivalent of the Eigen Matrix class. I changed my mind after realizing that, unlike the matrix case, there are many variants of tensor algebra out there (see for example the various tensor products in http://en.wikipedia.org/wiki/Tensor_product#Other_examples_of_tensor_products), and we would end up having to create one Tensor class for each of them. I think it will be easier to have a single Tensor class with the various products implemented as functions instead of an * operator. We'll end up with a lot less code if we create kroneckerProduct, topologicalProduct, ... methods instead of KroneckerTensor, TopologicalTensor, ... classes. That said, I don't have a strong bias against creating a TensorArray class.
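
To make that concrete, here is the kind of usage I have in mind (the product function names below are purely illustrative and don't exist yet):

    Tensor<float, 3> a(2, 3, 4);
    Tensor<float, 2> b(5, 6);

    // a single Tensor class, with each product flavour exposed as a function...
    auto k = kroneckerProduct(a, b);
    auto t = topologicalProduct(a, b);

    // ...instead of one dedicated tensor class per product:
    //   KroneckerTensor<float> k(a, b);
    //   TopologicalTensor<float> t(a, b);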

 * I merge the pull request and start working on the cleanups / things
   I've mentioned in this thread - not the new features themselves
   initially, so the LinearAccessBit restriction will stay for the
   moment, for example, but rather the other things like merging fixed
   and dynamic sized tensors, getting rid of Sizes<> etc.

Ok. I wasn't planning on working on the cleanups any time soon, so there won't be conflicts here.
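
For reference, the distinction being discussed looks roughly like this in the current code (signatures approximate): dynamically sized tensors carry their dimensions at run time, while fixed sized tensors encode them in the type through Sizes<>.

    // dynamically sized: dimensions given at run time
    Eigen::Tensor<float, 3> dyn(2, 3, 4);

    // statically sized: dimensions carried in the type via Sizes<>
    Eigen::TensorFixedSize<float, Eigen::Sizes<2, 3, 4> > fix;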
 
 * In the mean time, you could work on Blend / .device() support for
   Eigen in general, because I do think this is something that could be
   really useful in general.

Over the summer I'll be working on the tensor expression evaluation mechanism in order to support this. I might also start migrating some of the Torch library code to use Eigen tensors, which would require some form of slicing, so I'll probably add that as well.
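
To give an idea of what I have in mind for .device() and slicing, here is a rough sketch of the intended usage (class names and signatures are not final; it assumes EIGEN_USE_THREADS is defined and <unsupported/Eigen/CXX11/Tensor> is included):

    Eigen::Tensor<float, 2> a(100, 100), b(100, 100), c(100, 100);
    a.setRandom();
    b.setRandom();

    // choose where an expression is evaluated instead of always using
    // the default single threaded evaluator
    Eigen::ThreadPool pool(4);
    Eigen::ThreadPoolDevice device(&pool, 4);
    c.device(device) = a + b;

    // slicing: per-dimension offsets and extents
    Eigen::array<Eigen::Index, 2> offsets = {10, 20};
    Eigen::array<Eigen::Index, 2> extents = {30, 30};
    Eigen::Tensor<float, 2> s = a.slice(offsets, extents);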
 
In that case, we wouldn't be working on the same thing and at the end
hopefully we'll have a really solid basis to build upon and add further
features.

Do you think that is reasonable?

I do. 


--
Benoit

