Re: [eigen] Help on solving a race condition



Maybe this is too simple, but what would be wrong with a sort of implicit initialization step?

Couldn't you just have a simple header file that does this cache computation and stores the result in a global variable in the Eigen namespace? For instance, as a toy example:
==========================
#include <stdlib.h>
#include <time.h>

// Computed once, during static initialization, before main() runs.
inline int function() {
    srand(time(NULL));
    return rand() % 10 + 1;
}

static int L1CACHE = function();
===========================

Then in your other functions, you can just use L1CACHE (or whatever you call it). Its value will be constant for the lifetime of the program, right?

For instance:

#include <iostream>
#include "init.h"

int main() {
    std::cout << "L1CACHE " << L1CACHE << std::endl;
}

There will only be one call to the function this way, it happens outside of any parallel blocks, and it should work with any threading system. And it won't require users to call init(); it all happens behind the scenes.

Maybe I'm missing something.

Cheers,

-- Trevor



On 8 June 2012 09:37, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:
On Fri, Jun 8, 2012 at 5:32 PM, Brad Bell <bradbell@xxxxxxxxxx> wrote:
> This solution is openmp specific; what if someone wants to use pthreads or
> some other threading system ?

Very good point!

Then I guess the only safe and clean solution is to ask users to
call an Eigen::init_parallel() function or something similar.

Gael.

> On 06/08/2012 07:24 AM, Hauke Heibel wrote:
>>
>> Hi Gael,
>>
>> I think the most efficient solution would be to use atomics. I am not
>> sure whether the code below is allowed and fixes the problem
>>
>> #pragma omp atomic
>> static std::ptrdiff_t m_l1CacheSize =
>> manage_caching_sizes_helper(queryL1CacheSize(),8 * 1024);
>>
>> #pragma omp atomic
>> static std::ptrdiff_t m_l2CacheSize =
>> manage_caching_sizes_helper(queryTopLevelCacheSize(),1*1024*1024);
>>
>> Another alternative might be #pragma omp critical.
>>
>> Yet another idea might be to look at 'atomic_do_once' (see [1] and
>> [2]) from Intel's TBB. The code from [2] probably gets much simpler
>> for the use case at hand because the cache size itself could be stored
>> as an atomic.
>>
>> Regards,
>> Hauke
>>
>> [1]
>> http://stackoverflow.com/questions/9344739/thread-safe-lazy-creation-with-tbb
>> [2]
>> https://akazarov.web.cern.ch/akazarov/cmt/releases/nightly/tbb/src/tbb/tbb_misc.h
>>
>> On Fri, Jun 8, 2012 at 3:12 PM, Gael Guennebaud
>> <gael.guennebaud@xxxxxxxxx>  wrote:
>>>
>>> Hi,
>>>
>>> I'm looking for some help on understanding how the following piece of
>>> code in Eigen can lead to a race condition when the function
>>> manage_caching_sizes is called from different threads:
>>>
>>>
>>> inline std::ptrdiff_t manage_caching_sizes_helper(std::ptrdiff_t a,
>>> std::ptrdiff_t b)
>>> {
>>>  return a<=0 ? b : a;
>>> }
>>>
>>> inline void manage_caching_sizes(Action action, std::ptrdiff_t* l1=0,
>>> std::ptrdiff_t* l2=0)
>>> {
>>>  static std::ptrdiff_t m_l1CacheSize =
>>> manage_caching_sizes_helper(queryL1CacheSize(),8 * 1024);
>>>  static std::ptrdiff_t m_l2CacheSize =
>>> manage_caching_sizes_helper(queryTopLevelCacheSize(),1*1024*1024);
>>>  ...
>>> }
>>>
>>>
>>> On its first call, this function computes and stores the cache sizes.
>>>
>>> I would like to find a solution that avoids making the static variable
>>> "thread private" as suggested there:
>>>
>>> http://stackoverflow.com/questions/8828466/using-openmp-and-eigen-causes-infinite-loop-deadlock/10540025
>>> The problem with this approach is that the cache sizes are recomputed
>>> for every thread.
>>>
>>>
>>> thanks,
>>>
>>> Gaël
>>>
>>>
>>
>>
>
>
>
