Re: [eigen] [Review] Pull request 66, Huge Tensor module improvements



Hi,

>     1. C++03 compatibility emulation. I see that you went down that rabbit
>        hole and came out alive and well, where I thought it would be far
>        too complicated to emulate that with lots of variants of templates.
>        Seeing your code, I actually see it's quite doable and not really
>        that difficult. But I think there are some things that can be
>        improved:
> 
>        a. Since Tensor/ does not necessarily need C++11 anymore, we should
>           probably move it to unsupported/Eigen/Tensor. I don't see you
>           implementing TensorSymmetry/ in C++03 however, so I think that
>           will have to stay under unsupported/Eigen/CXX11/.
> 
>  
> The emulation code is not perfect, and one gets better functionality
> with C++11, so it might be worth keeping it explicit that there is a
> dependency on C++11.

But looking over the code, other than a *truly* variable number of
indices and C++11-specific things such as initializer lists, I'm quite
convinced that everything can be implemented in C++03 - maybe not quite
as efficiently, but at least working.
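
Just to make that concrete for anyone following along, here's roughly
the pattern I mean (a standalone sketch with made-up names, not the code
from the pull request): one operator() overload per rank instead of a
single variadic template, all valid C++03:

    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Standalone C++03 sketch (hypothetical names): emulate "variadic"
    // index access by providing one operator() overload per supported
    // rank, up to whatever maximum we agree on (5, 8, 10, ...).
    template <typename Scalar>
    class TensorSketch {
    public:
      explicit TensorSketch(const std::vector<int>& dims)
        : m_dims(dims), m_data(totalSize(dims)) {}

      Scalar& operator()(int i0)
      { int ix[1] = { i0 }; return coeffRef(ix, 1); }

      Scalar& operator()(int i0, int i1)
      { int ix[2] = { i0, i1 }; return coeffRef(ix, 2); }

      Scalar& operator()(int i0, int i1, int i2)
      { int ix[3] = { i0, i1, i2 }; return coeffRef(ix, 3); }

      // ... and so on, one overload per additional index.

    private:
      static std::size_t totalSize(const std::vector<int>& d) {
        std::size_t n = 1;
        for (std::size_t k = 0; k < d.size(); ++k) n *= d[k];
        return n;
      }

      // Column-major linearization of the index tuple.
      Scalar& coeffRef(const int* ix, int rank) {
        assert(rank == static_cast<int>(m_dims.size()));
        std::size_t offset = 0, stride = 1;
        for (int k = 0; k < rank; ++k) {
          offset += static_cast<std::size_t>(ix[k]) * stride;
          stride *= m_dims[k];
        }
        return m_data[offset];
      }

      std::vector<int> m_dims;
      std::vector<Scalar> m_data;
    };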

> I don't like the Sizes template much either. At the time I couldn't find
> a way to emulate a variadic version of the TensorFixedSize template
> class without having to rewrite a fair amount of code. Over time I
> moved most of that code to TensorBase and TensorDimensions, so it should
> now be possible to avoid the Sizes class entirely in the public API.

Ok, then that should be a longer-term TODO item.

>        e. Nitpicking: you support up to 5 indices in C++03, but IIRC Fortran
>           supports 7, so maybe we should match that?
> 
> 
> I'll need to go to 8 in the future ;)

:-)

Ok, since we're looking for an arbitrary number anyway, why don't we
just say 10?

>     3. General class structure:
> 
>        a. I don't like having two classes, TensorFixedSize and Tensor.
>           I think there should be only one class, like Matrix or Vector.
> 
> 
> I'd also like to see them merged. This would be cleaner and would open
> the door to a mixed tensor for which some dimensions are known at
> compile time and others only at run time.

I agree.
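
Just to illustrate what I'd hope a merged class could eventually express
(purely hypothetical syntax, nothing like this exists in the pull
request):

    // Hypothetical syntax, mirroring Matrix<float, 3, Dynamic>: fixed
    // extents as compile-time constants, Dynamic for run-time extents.
    Tensor<float, 3, Dynamic, 5> t(7);        // 3 x 7 x 5, one run-time extent
    Tensor<float, Dynamic, Dynamic> m(4, 4);  // fully dynamic rank-2 tensor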

>        b. You only have one base class right now, which is fine for the
>           features implemented so far, but I think at this stage we should
>           really go and replicate the entire class hierarchy of Eigen,
>           even if some subclasses are not really needed right now, just to
>           make it easier later to add more features.
> 
> 
> I don't think we need to replicate the entire Eigen class hierarchy. The
> tensor class is a generalization of the Eigen Array class,

I disagree here. Eigen's Array class provides coefficient-wise access to
everything. By the way, I just saw in your code (after writing this
comment) that you implemented operator* like in the Array class. I don't
think that's a good idea.

A tensor, to me, is a mathematical object with specific properties and
operations. If one were to define an operator* between two tensors at
all, it would be a Kronecker product; I would *never* *ever* expect
operator* to be a coefficient-wise product for tensors. On the other
hand, I think the best option is to make any expression tensor1 *
tensor2 simply fail to compile, because different people have different
expectations here (in contrast to matrix products).

So I do think that there is an important distinction between a Tensor
and an Array view on top of a Tensor.
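
For comparison, this is exactly the split Eigen already enforces in the
matrix world; tensor1 * tensor2 would be the analogous trap:

    #include <Eigen/Dense>
    #include <iostream>

    int main() {
      Eigen::Matrix2f m;
      m << 1, 2,
           3, 4;

      // Matrix world: operator* is the mathematical (matrix) product.
      Eigen::Matrix2f p = m * m;                  // [[ 7, 10], [15, 22]]

      // Array world: operator* is the coefficient-wise product.
      Eigen::Array22f a = m.array() * m.array();  // [[ 1,  4], [ 9, 16]]

      std::cout << p << "\n\n" << a << "\n";
      return 0;
    }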

> however I
> don't see an equivalent to the Eigen Matrix class in the tensor world.

There are a couple of main differences between Matrix and Array:

 * operator* does something else
 * operator/ is not defined in Matrix
 * .inverse(), .lu(), .llt(), .eigenvalues(), etc. are not defined in
   Array

Applying this to tensors: as I said above, I don't think operator* is a
good idea for tensors as objects (due to the possible ambiguity), so the
initial difference would just be (once TensorArray is implemented) that
operator* and operator/ are not defined in Tensor itself.

But I do think there are potential future operations on tensors as
mathematical objects that shouldn't necessarily be offered on
coefficient-wise structures.

So I do think it's a good idea to separate those two.
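
And the "fail to compile" part is easy to arrange even in C++03, where
we don't have = delete; something along these lines (a standalone sketch
of the idea, not a proposed implementation):

    // Standalone sketch: declare operator* private and leave it
    // unimplemented, so any tensor1 * tensor2 fails at compile time with
    // an access error, while TensorArray (the coefficient-wise view)
    // defines it normally.
    template <typename Derived>
    class TensorBaseSketch {
    public:
      // contractions, reductions etc. would live here
    private:
      template <typename Other>
      Derived operator*(const Other&) const;  // intentionally unusable
    };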

> As a result, we don't need to introduce an equivalent to the Matrix,
> MatrixBase, DenseBase, and DenseCoeffsBase classes. If/when we add
> support for sparse tensors we'll need a common base class for the dense
> and sparse tensor hierarchies (similar to EigenBase), but this can wait. 

Here, I also disagree. If we have the same type of hierarchy in both the
tensor and the matrix world, people don't have to rethink things;
everything is analogous. Even though the code may then contain some
classes that are, strictly speaking, not needed, I do think this
simplifies maintenance (and possible future mergeability) quite a bit.

>        a. For all evaluators, you assume something like LinearAccessBit,
>           which is fine as long as one doesn't implement some kind of
>           generalized .block() for tensor and/or Maps with strides.
> 
> 
> That's right. It will be tricky to figure out how to properly vectorize
> an expression that contains calls to .block() and .reshape(), so I
> punted on this completely.

Why? I think it's rather straightforward, just a lot of work. But before
we start implementing .block() in the first place, we should think about
how we want to design the API, since subtensor support is probably going
to be one of the major features of the tensor class and we should design
it right.
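
To make the vectorization problem concrete: a block of a column-major
tensor is no longer contiguous in memory, which is exactly what breaks
the linear-access assumption. A standalone rank-2 sketch (not Eigen
API):

    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Copy a (br x bc) block starting at (r0, c0) out of a column-major
    // rows x cols buffer. The source index jumps by 'rows' between
    // columns, so the block cannot be traversed as one linear run -
    // hence no LinearAccessBit for such expressions.
    std::vector<float> block2d(const std::vector<float>& src,
                               int rows, int cols,
                               int r0, int c0, int br, int bc) {
      assert(r0 + br <= rows && c0 + bc <= cols);
      std::vector<float> dst(static_cast<std::size_t>(br) * bc);
      for (int c = 0; c < bc; ++c)
        for (int r = 0; r < br; ++r)
          dst[r + c * br] = src[(r0 + r) + (c0 + c) * rows];
      return dst;
    }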

> There are two possible, non-exclusive short-term solutions:
>   * disable all vectorization whenever a .block() expression is
> encountered, or whenever a tensor map is created with a non-default
> stride.
>   * evaluate the .block() expression immediately (when used in an
> rvalue expression) by making a dense copy that can be accessed linearly.

No, I only mentioned the LinearAccessBit thing because it was not said
anywhere. I didn't mean to say this is something that urgently needs to
be changed.

>        b. The HasBlend / pblend() stuff is probably also something that
>           could benefit the rest of Eigen, right? OTOH, the for loop over
>           the packet size seems potentially inefficient to me (at least on
>           CPUs) - can't you just do a packet() on m_condImpl and store that
>           in the selector?
> 
> The downside of the pblend approach is that the then and else
> expressions always have to be evaluated. When using a packet of 4 there
> is a 12.5% chance that one of them could have been skipped entirely, so
> the non-vectorized implementation of select might be faster in some
> cases. Before using the pblend instruction in Eigen Core I'd like to
> update the evaluation cost model to reflect this.
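
To spell out that trade-off (a standalone sketch of the two strategies,
not Eigen's actual select/pblend kernels; thenExpr/elseExpr stand in for
the per-coefficient evaluators of the two sub-expressions):

    #include <cstddef>

    // Placeholders for potentially expensive sub-expression evaluators.
    inline float thenExpr(std::size_t i) { return float(i) * 2.0f; }
    inline float elseExpr(std::size_t i) { return float(i) + 1.0f; }

    // Scalar select: per coefficient, only the chosen branch is evaluated.
    void selectScalar(const bool* cond, float* out, std::size_t n) {
      for (std::size_t i = 0; i < n; ++i)
        out[i] = cond[i] ? thenExpr(i) : elseExpr(i);
    }

    // Blend-style select on packets of 4: both branches are evaluated
    // for all four lanes and then merged with the mask (standing in for
    // pblend) - even in the packets where one branch's results end up
    // entirely discarded.
    void selectBlend4(const bool* cond, float* out, std::size_t n) {
      for (std::size_t i = 0; i + 4 <= n; i += 4) {
        float t[4], e[4];
        for (int k = 0; k < 4; ++k) t[k] = thenExpr(i + k);
        for (int k = 0; k < 4; ++k) e[k] = elseExpr(i + k);
        for (int k = 0; k < 4; ++k)
          out[i + k] = cond[i + k] ? t[k] : e[k];
      }
    }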


>        d. Nitpicking: Why did you change #include <...> to #include "..."
>           for things that should be in the include path? The only things
>           in "..." should be references to src/... from within the main
>           include headers, at least if I follow the style of the rest of
>           Eigen.
> 
>  
> #include "..." for non-system includes seems to be the default in Eigen
> (at least judging from the Eigen/Core header).

Oh, all the unsupported/ modules I initially looked at include Eigen/Core
with angle brackets. OTOH, non-unsupported/ modules don't. Bah, I don't
really care...

>           That said, I really don't like names like TensorMap<> etc. that
>           people will then use from the public API. Also, later on there's
>           probably going to be something like Ref<> etc. I think it would
>           be better to reuse the same names in the Tensor module, and I
>           think I have an idea of how to do that without making the rest
>           of Eigen fall apart, but I have to try it to see if it works
>           first.
> 
>  
> The Map code assumes a 2D layout in several places. For example, the
> notion of inner and outer strides assumes there are exactly 2 strides.
> Similarly, referring to a coefficient can be done by row and column. On
> the other hand, n-dimensional accessors are missing. Long term I'd like
> to see the Tensor code moved into Eigen Core and TensorMap merged with
> the Map class, but for the time being I thought it was safer to create
> a TensorMap class.

No, you completely misunderstand me: I don't want to reuse the code of
Map<>. It's clear that this won't work. I just want to reuse the name,
so that the user has a consistent API.

See here:
http://news.gmane.org/find-root.php?message_id=de4e566a8ddd3d68222353ba6dd84eb8%40iwakd.de
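
Roughly what I have in mind, as a standalone toy (the real Map has extra
Options/Stride template parameters that I'm ignoring here): a partial
specialization for tensor types can derive from a completely separate
n-dimensional implementation, so the 2D machinery is never touched but
users keep writing Map<...>:

    namespace sketch {

    template <typename Scalar, int Rank>
    struct Tensor { /* ... */ };

    // Entirely separate n-dimensional map implementation: its own
    // strides, its own operator()(i0, ..., iN), nothing shared with the
    // 2D Map code.
    template <typename PlainObject>
    struct TensorMapImpl { /* ... */ };

    template <typename PlainObject>
    struct Map;  // primary template (stands in for Eigen's Map)

    // Name reuse without code reuse: Map<Tensor<...> > forwards to the
    // n-dimensional implementation.
    template <typename Scalar, int Rank>
    struct Map<Tensor<Scalar, Rank> >
        : TensorMapImpl<Tensor<Scalar, Rank> > {};

    } // namespace sketch

    // Usage: sketch::Map<sketch::Tensor<float, 3> > m3; - same spelling
    // as Map<MatrixXf>, completely different implementation underneath.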





--------------------------------------------------------------------


Generally speaking, just so we're on the same page about the plan going
forward: I'd also like to continue working on the Tensor module myself
(that's why I started this thread in the first place), so we need to do
a couple of things:


  * Agree on what should go into the official repository immediately
    and what should wait - and then merge at least large parts of your
    stuff sooner rather than later.

  * Come to a common understanding of what the direction forward is.
    When I listed all these points w.r.t. your pull request, I didn't
    mean that you should do all of them; on the contrary, a lot of the
    API cleanup (and adding all the other classes) is something I'd be
    willing to do myself. I already have ideas for how to do this and
    that, and it's probably easier to code than to explain. Therefore,
    we need to have an idea of who's working on what.

You've got two things in your pull request that affect Eigen in general,
not just the Tensor module: Blend and .device(). So maybe we can come to
the following arrangement:

 * you resubmit your pull request without Blend and .device() (but with
   EIGEN_DEVICE_FUNC)

 * we come to an agreement about the general direction. I see
   basically the following things we disagree about:
       - how closely we should follow Eigen's current structure
       - whether there should be a Tensor <-> TensorArray distinction

 * I merge the pull request and start working on the cleanups / things
   I've mentioned in this thread - not the new features themselves
   initially (so the LinearAccessBit restriction will stay for the
   moment, for example), but rather the other things, like merging
   fixed- and dynamic-sized tensors, getting rid of Sizes<>, etc.

 * In the meantime, you could work on Blend / .device() support for
   Eigen in general, because I do think it could be really useful beyond
   the Tensor module.

That way we wouldn't be working on the same things, and in the end we'll
hopefully have a really solid basis to build upon and add further
features.

Do you think that is reasonable?

Regards,
Christian



