Re: [eigen] (co-)mentoring for Google Summer of Code




On Wed, Feb 11, 2015 at 12:02 PM, Schleimer, Ben <bensch128@xxxxxxxxx> wrote:
Hi Gael,
   I believe that once an OpenCL kernel is built, it can be reused as many times as you want. I was under the impression that the device keeps the kernel in device memory, but I could be wrong.

My problem was to find a way to store the generated kernels and then be able to find them again. Perhaps a static variable of a function templated by the whole expression type would do. Then, at the first evaluation of an expression, we would still have to generate the kernel source code from the expression tree, uniquely name the leaves (i.e., the matrices), and compile the kernel.
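Roughly, a minimal sketch of that idea could look like the following (buildKernelSource, cachedKernelFor and the "eval_expr" entry point are just hypothetical placeholders, not existing Eigen or OpenCL names):

#include <CL/cl.h>
#include <string>

template <typename Expr>
std::string buildKernelSource(const Expr& /*expr*/)
{
  // Hypothetical: the real version would traverse the expression tree,
  // uniquely name the leaves (the matrices) and emit the OpenCL C source.
  return "__kernel void eval_expr() {}";
}

inline cl_kernel compileKernel(cl_context ctx, cl_device_id dev, const std::string& src)
{
  const char* text = src.c_str();
  size_t length = src.size();
  cl_int err = CL_SUCCESS;
  cl_program program = clCreateProgramWithSource(ctx, 1, &text, &length, &err);
  clBuildProgram(program, 1, &dev, "", NULL, NULL);
  return clCreateKernel(program, "eval_expr", &err);
}

template <typename Expr>
cl_kernel cachedKernelFor(const Expr& expr, cl_context ctx, cl_device_id dev)
{
  // Compiled on the first evaluation of this expression type only,
  // then reused by every subsequent evaluation of the same type.
  static cl_kernel kernel = compileKernel(ctx, dev, buildKernelSource(expr));
  return kernel;
}

Since there is one static per expression type, the compilation cost is paid only once even when the expression sits inside a loop.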
 
Also, the C++ wrapper for the OpenCL buffer object is fairly straightforward... copy the data to the device, run the kernel, copy the data back from the device...

Sure, but if you want to avoid useless device-to-host followed by host-to-device copies and keep data on the GPU when possible, then you have to ask the user to manually trigger the copies.
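That is the point: any such wrapper ends up exposing explicit transfer calls to the user, something along these lines (GpuMatrix is a made-up name for illustration only):

#include <CL/cl.h>
#include <vector>

// Hypothetical wrapper, not an Eigen class: the user decides when data moves,
// so intermediate results can stay on the GPU between kernel launches.
struct GpuMatrix
{
  GpuMatrix(cl_context ctx, cl_command_queue q, size_t n)
    : queue(q), size(n), host(n)
  {
    cl_int err = CL_SUCCESS;
    device = clCreateBuffer(ctx, CL_MEM_READ_WRITE, n * sizeof(float), NULL, &err);
  }

  // Nothing is transferred implicitly; the user has to call these explicitly.
  void copyToDevice()
  {
    clEnqueueWriteBuffer(queue, device, CL_TRUE, 0, size * sizeof(float),
                         host.data(), 0, NULL, NULL);
  }

  void copyToHost()
  {
    clEnqueueReadBuffer(queue, device, CL_TRUE, 0, size * sizeof(float),
                        host.data(), 0, NULL, NULL);
  }

  cl_command_queue queue;
  size_t size;
  std::vector<float> host;
  cl_mem device;
};

Typical use would be to call copyToDevice() once, chain several kernels on the cl_mem handle, and call copyToHost() only when the result is actually needed on the CPU side; the library cannot guess that last point by itself.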

All these difficulties are avoided with CUDA.

gael
 

Sigh, I would volunteer to do this but I'm too busy right now...
so I'm going to shut up now.
Ben


On Wednesday, February 11, 2015 12:05 AM, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:


Hi,

unfortunately handling OpenCL is much more complicated because you have to generate and compile the code at runtime, and if the expression is within a loop you would also like to cache the compiled code somehow. You would also have to deal with memory transfers, whereas CUDA can do that for us.
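For comparison, here is a rough CUDA sketch of what I mean (a generic kernel using unified memory, not Eigen's actual CUDA evaluator): the code is compiled ahead of time by nvcc, and the runtime migrates the data for us.

#include <cuda_runtime.h>

// Generic axpy-style kernel; compiled at build time by nvcc, no runtime codegen.
__global__ void axpy(int n, float a, const float* x, float* y)
{
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) y[i] = a * x[i] + y[i];
}

int main()
{
  const int n = 1 << 20;
  float *x, *y;
  cudaMallocManaged(&x, n * sizeof(float));  // unified memory: visible to host and device
  cudaMallocManaged(&y, n * sizeof(float));
  for (int i = 0; i < n; ++i) { x[i] = 1.f; y[i] = 2.f; }

  axpy<<<(n + 255) / 256, 256>>>(n, 3.f, x, y);  // no explicit host<->device copies
  cudaDeviceSynchronize();                       // data is migrated by the runtime

  cudaFree(x);
  cudaFree(y);
  return 0;
}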

gael

On Wed, Feb 11, 2015 at 2:00 AM, Schleimer, Ben <bensch128@xxxxxxxxx> wrote:
Hi Gael,
  I wanted to ask: what about OpenCL support for expressions?
Or does that not make any sense from a performance POV?

Cheers
Ben


On Monday, February 9, 2015 1:38 PM, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:



Hi developers,

it's a shame that Eigen still hasn't participated in GSoC. The deadline for organisations is in two weeks, and if I'm not the only one willing to act as a mentor, I'll prepare the forms to apply.

So, would any of you be interested in participating in GSoC 2015 as a mentor/co-mentor?

Here are some initial project ideas:

- blocking/tiling in ColPivHouseholderQR and other decompositions
- sparse eigenvalue decomposition (follow-up of last spring's student project)
- performance-oriented unit testing with a web interface to track performance regressions
- finalize sparse-block storage format and extend direct solvers to exploit it
- evaluate expressions through CUDA (based on Benoit Steiner's work on Tensors)

cheers,
Gael







