Hi,
Feel free to object with arguments.
Basically, the Tensor module supports the following features:
 - arbitrary tensor rank (up to 5 in C++03)
 - fully-dynamic or fully-fixed dimensions
 - forward or backward storage order (like row-major/column-major)
 - slicing, chipping (extracting a rank R-1 sub-tensor), reductions,
 - contractions using Eigen's fast matrix-matrix and matrix-vector kernels,
 - reshape, shuffling of the dimensions,
 - convolutions
 - etc.
There are also a few features aimed at being integrated into Eigen's core, in particular the evaluation of expressions on CUDA and a fast CUDA matrix-matrix kernel.
Before it can be integrated within Eigen's official modules, we will have to unify everything that can be unified:
 - the evaluator mechanisms (more on that later)
 - unify Tensor and FixedSizeTensor while enabling mixed fixed-dynamic sizes
 - TensorMap -> Map<Tensor>
 - CUDA support (more on that later)
 - move the documentation to Doxygen
 - clean Eigen::array, Eigen::DSizes if possible
 - fix Eigen::IndexList (it currently requires C++14)
 - etc.
and also stabilize the naming conventions (more on that later).
Additional welcome features could be:
 - Ref<Tensor>
 - iterator-based evaluation for large-rank expressions with complex indexing
 - cost model (could allow switching between index-based and iterator-based evaluation)
 - unify 1D/2D tensors with MatrixBase?
Nonetheless, I think this is good timing for the merge: the current implementation is already quite mature, and the merge will give this module the visibility it deserves.
Cheers,
Gael.