Re: [eigen] Support for trace of tensors




The best way to get started is probably to look at an existing operation and adapt it to your needs. The chipping operation in TensorChipping.h is probably the one closest to what you need that doesn't have a lot of optimizations obscuring the code.

The first class in TensorChipping.h is the TensorChippingOp class. It's a purely declarative class that describes what the operation is supposed to do. It takes 2 template parameters: the id of the dimension along which we're chipping, and the subexpression to chip. In your case, you need to specify a collection of dimensions to compute the trace over instead of a single dim, so we'll end up with something like this:

template<typename Dims, typename XprType>
class TensorTraceOp : public TensorBase<TensorTraceOp<Dims, XprType> >
{
  public:
  EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE TensorTraceOp(const XprType& expr, const Dims& dims)
      : m_xpr(expr), m_dims(dims) {
  }

  EIGEN_DEVICE_FUNC
  const Dims& dims() const { return m_dims; }

  EIGEN_DEVICE_FUNC
  const typename internal::remove_all<typename XprType::Nested>::type&
  expression() const { return m_xpr; }

  ...

  protected:
  typename XprType::Nested m_xpr;
  const Dims m_dims;
};

The second class in TensorChipping.h is the TensorEvaluator for the TensorChippingOp. The evaluator is responsible for doing the actual computation whenever requested. The chipping operation has 2 evaluators: the first one applies when chipping is used on the right hand side of an assignment, the second one handles the case of chipping on the left hand side of an assignment. It doesn't make sense to use the trace operation as a lhs (how would we define what T.trace(dims) = Y does?), so we'll only have a single evaluator for the trace. The evaluator will look something like this:

template<typename Dims, typename ArgType, typename Device>
struct TensorEvaluator<const TensorTraceOp<Dims, ArgType>, Device>
{
  typedef TensorTraceOp<Dims, ArgType> XprType;
  static const int NumInputDims = internal::array_size<typename TensorEvaluator<ArgType, Device>::Dimensions>::value;
  static const int NumDims = NumInputDims - internal::array_size<Dims>::value;
  typedef typename XprType::Index Index;
  typedef DSizes<Index, NumDims> Dimensions;
  typedef typename XprType::Scalar Scalar;
  typedef typename XprType::CoeffReturnType CoeffReturnType;
  typedef typename PacketType<CoeffReturnType, Device>::type PacketReturnType;

  enum {
    // Alignment can't be guaranteed at compile time
    IsAligned = false,
    PacketAccess = false,  // disable vectorization to start with. That way we won't need to implement a packet() method.
    Layout = TensorEvaluator<ArgType, Device>::Layout,
    CoordAccess = false,
    RawAccess = false
  };

  EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE TensorEvaluator(const XprType& op, const Device& device)
      : m_impl(op.expression(), device), m_device(device)
  {
     // Compute the dimensions of the result.
     const typename TensorEvaluator<ArgType, Device>::Dimensions& input_dims = m_impl.dimensions();

     int j = 0;
     for (int i = 0; i < NumInputDims; ++i) {
       // Skip the traced dimensions: they don't appear in the result.
       bool traced = false;
       for (int k = 0; k < internal::array_size<Dims>::value; ++k) {
         if (op.dims()[k] == i) {
           traced = true;
           break;
         }
       }
       if (traced) {
         continue;
       }
       m_dimensions[j] = input_dims[i];
       ++j;
     }
  }

  EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE const Dimensions& dimensions() const { return m_dimensions; }

  EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE bool evalSubExprsIfNeeded(Scalar* /*data*/) {
    m_impl.evalSubExprsIfNeeded(NULL);
    return true;
  }

  EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void cleanup() {
    m_impl.cleanup();
  }

  EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE CoeffReturnType coeff(Index index) const
  {
     // This is where we compute the trace itself. Here the 'index'
     // parameter is the index of the coefficient to compute in the
     // result. Coefficients at location i in the input tensor can be
     // accessed with m_impl.coeff(i).
  }

  Dimensions m_dimensions;
  TensorEvaluator<ArgType, Device> m_impl;
  const Device& m_device;
};

The 3rd thing to do is to define the traits for the trace operation. This is going to be very similar to what's going on in the chipping op. Beware, this should be at the beginning of the TensorTrace.h file, and in the internal namespace.

namespace internal {
template<typename Dims, typename XprType>
struct traits<TensorTraceOp<Dims, XprType> > : public traits<XprType>
{
  typedef typename XprType::Scalar Scalar;
  typedef traits<XprType> XprTraits;
  typedef typename XprTraits::StorageKind StorageKind;
  typedef typename XprTraits::Index Index;
  typedef typename XprType::Nested Nested;
  typedef typename remove_reference<Nested>::type _Nested;
  static const int NumDimensions = XprTraits::NumDimensions - array_size<Dims>::value;
  static const int Layout = XprTraits::Layout;
};

template<typename Dims, typename XprType>
struct eval<TensorTraceOp<Dims, XprType>, Eigen::Dense>
{
  typedef const TensorTraceOp<Dims, XprType>& type;
};

template<typename Dims, typename XprType>
struct nested<TensorTraceOp<Dims, XprType>, 1, typename eval<TensorTraceOp<Dims, XprType>, Eigen::Dense>::type>
{
  typedef TensorTraceOp<Dims, XprType> type;
};
}

The 4th step is to forward declare the TensorTraceOp class in TensorForwardDeclarations.h. Just add this:

template<typename Dims, typename XprType> class TensorTraceOp;

The 5th step is to add a trace() method to the tensor API. For that you simply need to add a method to the TensorBase class (in TensorBase.h). There are 2 TensorBase classes. You need to add the trace API to the first one; the second one is used for expressions that reside on the left hand side of an assignment.

   template <typename Dims> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE
    const TensorTraceOp<Dims, const Derived>
    trace(const Dims& dims) const {
      return TensorTraceOp<Dims, const Derived>(derived(), dims);
    }

The last step is to add a test. All the tensor tests reside in unsupported/test/, so it's probably best to add yours there as well. The tests for the chipping operation reside in unsupported/test/cxx11_tensor_chipping.cpp.


Hope this helps. Let me know if you need more information; I realize that I didn't go into much detail.






On Wed, Mar 30, 2016 at 6:09 AM, Christophe Prud'homme <prudhomme@xxxxxxxxxx> wrote:

On Tue, Mar 29, 2016 at 8:20 PM, Benoit Steiner <benoit.steiner.goog@xxxxxxxxx> wrote:
The trace operator would be a nice addition to the library. Unfortunately I don't have the time to implement the operation myself at the moment, but I can help you get started if you're interested in contributing.

Thanks Benoit

I am listening. Where should I start?

the interface would be

Eigen::Tensor<float,4> T(2,2,2,2);

Eigen::array<dimpair_t, 1> dims = {{dimpair_t(2, 3)}};           

// rank 2 tensor after contraction of indices 2 and 3
Eigen::Tensor<float,2> res = T.trace(dims); 

or
Eigen::Tensor<float,2> res = T.trace(2,3); 

// indices 2 and 3 must have the same dimensions

for(int i = 0; i < 2; ++i)
for(int j = 0; j < 2; ++j)
{
  for(int c = 0; c < 2; ++c)
    res(i,j) += T(i,j,c,c);
}


Best regards
C.
--
Professor in Scientific Computing
IRMA, Cemosis, Master CSMI, AMIES
IRMA: http://www-irma.u-strasbg.fr/~prudhomm
Twitter: http://twitter.com/prudhomm 
Téléphone : +33 (0)3 68 85 00 89 - Bureau : P-210    
Mobile: +33 (0)6 87 64 60 51



--
Benoit

