|Re: [eigen] a branch for SMP (openmp) experimentations|
- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: Re: [eigen] a branch for SMP (openmp) experimentations
- From: Aron Ahmadia <aja2111@xxxxxxxxxxxx>
- Date: Tue, 23 Feb 2010 12:34:18 +0100
Oh very cool! I am going to start playing with this very soon. I
want to look into getting some automatic numbers up from each build so
that it's easy to see how the code is performing across several
different HPC architectures (BlueGene/P, Nehalem, Clovertown). The
bench code should be what I'm looking for.
I'm curious about the difference between OpenMP and pthreads. I know
there is a bit of overhead in OpenMP but I didn't realize it would
be so meaningful. This will of course vary between architectures and
compilers, but I will query the IBM Researchers (Gunnels is more of an
expert with both pthreads and OpenMP and worked in van de Geijn's
group so may have more direct thoughts).
On Mon, Feb 22, 2010 at 11:43 AM, Hauke Heibel wrote:
> Works like a charm - see attachment.
> - Hauke
> On Mon, Feb 22, 2010 at 11:28 AM, Gael Guennebaud
> <gael.guennebaud@xxxxxxxxx> wrote:
>> I have just created a fork there:
>> to play with SMP support, and more precisely, with OpenMP.
>> Currently only the general matrix-matrix product is parallelized. I've
>> implemented a general 1D parallelizer to factor out the parallelization code. It
>> is defined there:
>> and used at the end of this file:
>> In the bench/ folder there are two bench_gemm*.cpp files to try it and
>> compare to BLAS.
>> On my core2 duo, I've observed a speed-up of around 1.9 for relatively small matrices.
>> At work I have an "Intel(R) Core(TM)2 Quad CPU Q9400 @ 2.66GHz" but it
>> is currently too busy to do really meaningful experiments. Nevertheless, it
>> seems that gotoblas, which uses pthreads directly, reports more
>> consistent speedups. So perhaps OpenMP is trying to do some overly smart
>> scheduling, and it might be useful to deal with pthreads directly?
>> For the lazy but interested reader, the interesting piece of code is there:
>> int threads = omp_get_num_procs();
>> int blockSize = size / threads;
>> #pragma omp parallel for schedule(static,1)
>> for(int i=0; i<threads; ++i)
>> {
>>   int blockStart = i*blockSize;
>>   int actualBlockSize = std::min(blockSize, size - blockStart);
>>   func(blockStart, actualBlockSize);
>> }
>> Feel free to play with it and have fun!
>> PS: to Aron, yesterday the main problem was that our bench timer reported the
>> total execution time, not the "real" one... the other was that my system was
>> a bit busy because of daemons, and stuff like that...