Re: [eigen] Componentwise Operations on an Arbitrary Number of Tensors




oh, actually I forgot that you were mostly interested in the Tensor/GPU stuff, so what I said does not really apply. Nonetheless, step 5 should not be so different, and since steps 1-4 are just a matter of API for which it is not yet fully decided what we want, you could just try to adapt what I said about step 5 to the Tensor lib and start with the simplest-to-implement API, something like:

fused_assign(std::tie(R1, R2, ...), std::tie(expr1, expr2, ...)).
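For what it's worth, here is a very rough user-land sketch of what such a fused_assign could look like for plain dense expressions. None of this exists in Eigen; the name and interface are only the suggestion above, and a Tensor-module version would build evaluators as described in step 5 rather than calling coeff() on the expressions:

  // Hypothetical sketch: one scalar loop evaluating every tied expression.
  // Assumes all operands have the same size and linear access; a real
  // implementation would build the evaluators once outside the loop.
  #include <Eigen/Core>
  #include <tuple>
  #include <utility>

  template <class Dsts, class Srcs, std::size_t... I>
  void fused_assign_impl(Dsts& dsts, const Srcs& srcs, std::index_sequence<I...>)
  {
    const Eigen::Index n = std::get<0>(dsts).size();
    for (Eigen::Index i = 0; i < n; ++i)
      ((std::get<I>(dsts).coeffRef(i) = std::get<I>(srcs).coeff(i)), ...);  // C++17 fold
  }

  template <class... R, class... E>
  void fused_assign(std::tuple<R&...> dsts, const std::tuple<E...>& srcs)
  {
    static_assert(sizeof...(R) == sizeof...(E), "one destination per expression");
    fused_assign_impl(dsts, srcs, std::index_sequence_for<R...>());
  }

used as, e.g., fused_assign(std::tie(A, B), std::forward_as_tuple(X.sin(), X.cos())); with A and B already sized like X (std::forward_as_tuple rather than std::tie on the right-hand side, because those expressions are temporaries).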

gael

On Tue, Jan 3, 2017 at 6:56 PM, Graham Neubig <gneubig@xxxxxxxxxx> wrote:
Thanks for the pointers! I'll take a look, but this isn't trivial so I won't make any guarantees that I can finish it. (If someone else who has more familiarity is interested I'd be perfectly happy to hand it off!)

Graham

On Tue, Jan 3, 2017 at 5:00 AM, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:
That does not sound trivial, but you're welcome to give it a try. Following the:

  tie(R1,R2,...) = E1,E2,...;

approach (which, BTW, does not allow mixing the =, +=, -= assignment operators), I see the following steps (a rough sketch follows below):

1 - Implement a template Tie<T1,T2,...,TN> class storing N references to the nested expressions
2 - Implement the "Tie<...> tie(...)" function
3 - Implement "template<OtherDerived> Tie<Derived,OtherDerived> DenseBase::operator,(const DenseBase<OtherDerived>& other) const;"
4 - Implement "template<OtherDerived> Tie<...,OtherDerived> Tie<...>::operator,(const DenseBase<OtherDerived>& other) const;"
5 - Implement "Tie<...>::operator=(Tie<...>)"; to this end, look at the bottom of the file src/Core/AssignEvaluator.h. The easiest approach is probably to implement your own "kernel" similar to "generic_dense_assignment_kernel" but looping through each nested expression, and then call it as in "call_dense_assignment_loop" from Tie::operator=, neglecting implicit transposition/aliasing issues for now.
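To make steps 1-4 a bit more concrete, here is a very rough skeleton, written outside of Eigen and with the DenseBase::operator, of step 3 omitted. None of these names exist in Eigen yet, and const-ness of the right-hand-side expressions is glossed over:

  #include <Eigen/Core>
  #include <tuple>

  template <typename... Ts>
  struct Tie
  {
    std::tuple<Ts&...> refs;                        // step 1: N references

    // step 4: grow the pack through the comma operator
    template <typename Other>
    Tie<Ts..., Other> operator,(Eigen::DenseBase<Other>& other) const
    { return { std::tuple_cat(refs, std::tie(other.derived())) }; }

    // step 5 goes here: operator=(const Tie<...>& rhs) running one fused
    // assignment loop, modeled on src/Core/AssignEvaluator.h.
  };

  // step 2: the tie(...) factory
  template <typename... Ts>
  Tie<Ts...> tie(Eigen::DenseBase<Ts>&... xs)
  { return { std::tie(xs.derived()...) }; }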

gael


On Tue, Jan 3, 2017 at 4:17 AM, Graham Neubig <gneubig@xxxxxxxxxx> wrote:
Hi Christoph and all,

Thanks for the reference to the issue, and glad to see that this is on the radar. I honestly don't know where to start on this, but if someone could give me some pointers to where in the code I should look, I could take a look and see if I can cook something up.

Graham

On Dec 30, 2016 8:06 AM, "Christoph Hertzberg" <chtz@xxxxxxxxxxxxxxxxxxxxxxxx> wrote:
Hi!

For reference, here is a bugzilla entry for this feature request:
http://eigen.tuxfamily.org/bz/show_bug.cgi?id=984

An idea we had back then was to introduce Eigen::tie and then allow assignments like:

    Eigen::ArrayXd A, B, X;
    Eigen::tie(A,B) = sin(X), cos(X);

(and of course, one could introduce sincos, minmax, ... operators that are assignable to tie expressions).

Christoph


On 28.12.2016 at 22:48, Gael Guennebaud wrote:
Hi,

it seems that what you're looking for is a means to merge multiple
evaluation loops of the same size into a single one (the fact that they run
on the GPU is not really important here). Actually, this need already
shows up for stuff like:

a = vec.minCoeff();
b = vec.maxCoeff();

that currently requires two loops. I remember that we already talked about
that with Benoit S., and I don't think there is a general solution
implemented in the Tensor module yet.
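In other words, a fused version would do by hand something like the following single pass (assuming vec is a non-empty VectorXd):

a = vec[0]; b = vec[0];
for (Eigen::Index i = 1; i < vec.size(); ++i) {
  a = std::min(a, vec[i]);   // running minimum
  b = std::max(b, vec[i]);   // running maximum
}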

Technically, I don't think that's very difficult though. The main
difficulty is perhaps on the API side. We could imagine something like:

auto E1 = (R1.deferred() = expr1);
auto E2 = (R2.deferred() = expr2);
....
merged_eval(E1, E2, ...);

that would essentially generate:

(parallel/GPU/whatever) for loop {
  R1[i] = expr1.coeff(i);
  R2[i] = expr2.coeff(i);
  ...
}

In Eigen/Core, "R.deferred().operator=(expr)" would return an
Eigen::internal::Assignment expression (without calling run) that would be
merged by the merged_eval function.


gael


On Wed, Dec 28, 2016 at 3:22 PM, Graham Neubig <gneubig@xxxxxxxxxx> wrote:

Hi Eigen Folks,

First, thanks for the great library. We're using it in our machine
learning library DyNet with great success.

I had a quick question about something that seems like it should be
possible, but I haven't found a reference. I currently have code here:
https://github.com/clab/dynet/blob/master/dynet/training.cc#L280

That implements the "Adam" update rule for stochastic gradient descent
found in this paper:
https://arxiv.org/abs/1412.6980

Here, all places with "tvec()" are Eigen one-dimensional Tensors. The
thing that bugs me here is that I'm calling 4 different operations, which
results in 4 different GPU kernel launches, for an operation that is
inherently componentwise. If possible, I'd like to be able to create a
single functor that takes 4 floats, modifies them appropriately, and then
run this as a single GPU operation.
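To illustrate, the fused update I have in mind would look roughly like the hand-written loop below (a simplified variant of the Adam step with made-up variable names; the actual code is at the link above):

for (Eigen::Index i = 0; i < n; ++i) {
  m[i] = beta1 * m[i] + (1 - beta1) * g[i];                      // first moment
  v[i] = beta2 * v[i] + (1 - beta2) * g[i] * g[i];               // second moment
  p[i] -= lr * (m[i] / bias1) / (std::sqrt(v[i] / bias2) + eps); // parameter step
}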

I know this is possible using binaryExpr() for binary expressions, but I
couldn't find it for operations with a larger number of arguments. Is there
any chance that there is an elegant way to do this within Eigen (i.e.
without writing my own kernel)?
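(For reference, the two-argument version I mean is something along these lines, here with a lambda on the CPU; a GPU version would need a device-callable functor:)

#include <unsupported/Eigen/CXX11/Tensor>

// componentwise combination of two tensors with a custom binary functor
Eigen::Tensor<float, 1> a(n), b(n), c(n);
c = a.binaryExpr(b, [](float x, float y) { return x * y + 1.0f; });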

Graham




--
 Dipl. Inf., Dipl. Math. Christoph Hertzberg

 Universität Bremen
 FB 3 - Mathematics and Computer Science
 Robotics Group
 Robert-Hooke-Straße 1
 28359 Bremen, Germany

 Switchboard: +49 421 178 45-6611

 Visiting address of the branch office:
 Robert-Hooke-Straße 5
 28359 Bremen, Germany

 Tel.:      +49 421 178 45-4021
 Reception: +49 421 178 45-6600
 Fax:       +49 421 178 45-4150
 E-Mail:    chtz@xxxxxxxxxxxxxxxxxxxxxxxx

 Further information: http://www.informatik.uni-bremen.de/robotik






