|Re: [eigen] (co-)mentoring for Google Summer of Code|
Hi Gael,

I believe that once an OpenCL kernel is built, it can be reused as many times as you want. I was under the impression that the device keeps the kernel in device memory, but I could be wrong.

Also, the C++ wrapper for the OpenCL buffer object is fairly straightforward: write data to the device, run the kernel, read data back from the device.
Sigh, I would volunteer to do this but I'm too busy right now... so I'm going to shut up now.

Ben

On Wednesday, February 11, 2015 12:05 AM, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:
Hi,

Unfortunately, handling OpenCL is much more complicated, as you have to generate and compile the code at runtime, and if the expression is within a loop you would also like to cache the compiled code somehow. You would also have to deal with memory transfers, while CUDA can do that for us.

gael

On Wed, Feb 11, 2015 at 2:00 AM, Schleimer, Ben <bensch128@xxxxxxxxx> wrote:

Hi Gael,

I wanted to add in: what about OpenCL support for expressions? Or does that not make any sense from a performance POV?

Cheers,
Ben

On Monday, February 9, 2015 1:38 PM, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:
Hi developers,

It's a shame that Eigen still hasn't participated in GSoC. The deadline for organisations is within two weeks, and if I'm not the only one willing to act as a mentor, I'll prepare the forms to apply. So, would any of you be interested in participating in GSoC 2015 as a mentor/co-mentor?

Here are some initial project ideas:
- blocking/tiling in ColPivHouseholderQR and other decompositions
- sparse eigen decomposition (follow-up of last spring's student project)
- performance-oriented unit testing with a web interface to track perf regressions
- finalize the sparse-block storage format and extend the direct solvers to exploit it
- evaluate expressions through CUDA (based on Benoit Steiner's work on Tensors)

Cheers,
Gael