Re: [eigen] Parallel matrix multiplication causes heap allocation





On Sun, Dec 18, 2016 at 8:12 PM, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:
Sorry, my intervention was too brief to really make sense. I was thinking more about the practical difficulties of implementing it in a header-only way, without dependencies, with C++03 features only, and in a portable way... With C++11 this should be doable, so why not give it a try, with the current strategy as a fallback.


Is some native threading library (Pthreads on Linux, something else on Windows, Solaris, etc.) an acceptable dependency?

If you are willing to add a branch on an is_initialized flag, at least on the is_parallel code path, then you can hide the one-time heap allocation in a pthread_once (or equivalent).
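
Roughly along these lines (just a sketch; the names and the buffer size are placeholders, not anything from Eigen's actual internals):

    #include <pthread.h>
    #include <cstddef>
    #include <cstdlib>

    static pthread_once_t pack_buffer_once = PTHREAD_ONCE_INIT;
    static void* pack_buffer = 0;
    static const std::size_t pack_buffer_bytes = std::size_t(1) << 24;  // placeholder size

    static void allocate_pack_buffer()
    {
      // one-time heap allocation, executed by exactly one thread
      pack_buffer = std::malloc(pack_buffer_bytes);
    }

    void* get_pack_buffer()
    {
      // after the first call, every caller pays only the cheap "already done" check
      pthread_once(&pack_buffer_once, allocate_pack_buffer);
      return pack_buffer;
    }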

Obviously, I omitted lots of details.  I'll see if I find time over the holiday to prototype this.

Best,

Jeff

 
BTW, I should add that we also make use of static stack allocation for small enough matrices, as controlled by EIGEN_STACK_ALLOCATION_LIMIT.
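
If I remember correctly, that limit can be overridden by defining the macro (in bytes) before including any Eigen header, e.g.:

    // Override the stack-allocation threshold (in bytes) before including Eigen.
    #define EIGEN_STACK_ALLOCATION_LIMIT 16384
    #include <Eigen/Dense>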

gael

On Mon, Dec 19, 2016 at 2:35 AM, Jeff Hammond <jeff.science@xxxxxxxxx> wrote:


On Dec 18, 2016, at 2:46 PM, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:



On Sun, Dec 18, 2016 at 11:15 PM, Jeff Hammond <jeff.science@xxxxxxxxx> wrote:


All good GEMM implementations copy into temporaries, but they should not have to dynamically allocate them on every call.

Some BLAS libraries allocate the buffer only once, during the first call. GotoBLAS used to do this. I don't know if OpenBLAS still does this, or in which cases. 

At one time, BLIS (a cousin of OpenBLAS) used a static buffer that was part of the data segment, so no malloc was necessary to get the temporary buffers. I think it can dynamically allocate instead now, but only once, during the first call.
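
Schematically, the idea is something like this (not BLIS's actual code; sizes and names are made up):

    // A fixed-size buffer in the data segment: no malloc at all.
    // The size is a placeholder; a real implementation derives it from the
    // cache-blocking parameters.
    enum { kStaticPackBytes = 8 * 1024 * 1024 };
    static unsigned char static_pack_buffer[kStaticPackBytes];

    // Or allocate dynamically, but only once, on first use
    // (C++11 guarantees thread-safe initialization of function-local statics):
    static void* lazy_pack_buffer()
    {
      static void* buf = ::operator new(kStaticPackBytes);
      return buf;
    }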


but what about multi-threading?  (GEMM being called from multiple threads)


You mean N threads calling 1 GEMM, N threads each calling their own GEMM, or the truly general case (M GEMMs spread across N threads, perhaps unevenly)?

In any case, I don't see the problem: allocate a buffer sufficient to cover the last-level cache and give 1/N of it to each thread, where N is the number of cores.

I will have to look at what various implementations actually do, but I don't see any fundamental issue here.

If you look at the 2014 paper I linked before, it may discuss this or at least provide the necessary details to derive a good strategy.
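
For concreteness, the partitioning I have in mind is roughly this (the LLC size, the names, and the even split are simplifications):

    #include <cstddef>
    #include <vector>

    // One shared packing buffer sized to roughly cover the last-level cache,
    // split into equal per-thread slices. All sizes are placeholders.
    struct ThreadSlice {
      unsigned char* ptr;
      std::size_t bytes;
    };

    std::vector<ThreadSlice> partition_buffer(unsigned char* buffer,
                                              std::size_t llc_bytes,
                                              int num_threads)
    {
      std::vector<ThreadSlice> slices(num_threads);
      const std::size_t per_thread = llc_bytes / num_threads;
      for (int t = 0; t < num_threads; ++t) {
        slices[t].ptr = buffer + t * per_thread;
        slices[t].bytes = per_thread;
      }
      return slices;
    }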

Jeff 

gael
 


Jeff

gael
 

François



On 18 Dec 2016, at 01:06, Rene Ahlsdorf <ahlsdorf@xxxxxxxxxxxxxxxxxx> wrote:

Dear Eigen team,

first of all, thank you for all the effort you have put into creating such a great math library. I really love using it.

I've got a question about your parallelization routines. I want to perform a parallel (OpenMP-based) matrix multiplication (result: a 500 x 250 matrix) without allocating any heap memory along the way, so I have activated "Eigen::internal::set_is_malloc_allowed(false)" to check that nothing goes wrong. However, my program crashes with the error message
"Assertion failed: (is_malloc_allowed() && "heap allocation is forbidden (EIGEN_RUNTIME_NO_MALLOC is defined and g_is_malloc_allowed is false)"), function check_that_malloc_is_allowed, file /Users/xxx//libs/eigen/Eigen/src/Core/util/Memory.h, line 143." Is this behaviour intended? Is an allocation expected before doing parallel calculations? Or am I doing something wrong?
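
A stripped-down version of what I am doing looks roughly like this (the inner dimension and the exact compile flags here are just for illustration):

    // Minimal reproducer sketch, assuming Eigen 3.3.1 compiled with
    // -fopenmp -DEIGEN_RUNTIME_NO_MALLOC; the inner dimension 1000 is arbitrary.
    #include <Eigen/Dense>
    #include <iostream>

    int main()
    {
      Eigen::MatrixXd A = Eigen::MatrixXd::Random(500, 1000);
      Eigen::MatrixXd B = Eigen::MatrixXd::Random(1000, 250);
      Eigen::MatrixXd C(500, 250);                    // result is pre-allocated

      Eigen::internal::set_is_malloc_allowed(false);  // forbid further heap allocation
      C.noalias() = A * B;                            // the parallel GEMM path asserts here
      Eigen::internal::set_is_malloc_allowed(true);

      std::cout << C.sum() << std::endl;
      return 0;
    }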

Thanks in advance.

Regards,
René Ahlsdorf

Eigen Version: 3.3.1 (commit f562a193118d)


Attached: Screenshot showing the last function calls 
<Screenshot 2016-12-18 01.01.42.png>





