Re: [eigen] slides of my talk
- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: Re: [eigen] slides of my talk
- From: Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
- Date: Mon, 26 Jan 2009 16:53:15 +0100
Well, thinking about that, they might have added fixed-size paths for
such common operations, so that they only pay the price of a runtime
switch.
But take a slightly more complex operation, e.g. a*u+b*v+c*w for
arrays of numbers a,b,c and arrays of vectors u,v,w.
The point is that even if they optimize some common operations for
small fixed sizes, they can only do it for a small, fixed number of
them.
Benoit
2009/1/26 Benoit Jacob <jacob.benoit.1@xxxxxxxxx>:
> Just FYI, if you want something that's guaranteed to kill matlab or
> more generally any framework focusing on dynamic-size objects, try a
> heavy computation on a large set of fixed-size vectors or matrices.
>
> E.g. take a 4x4 matrix m, take a large array of 4-vectors
> v[100000000]; and multiply each of these vectors by the matrix m;
>
> Benoit
>
> 2009/1/26 Andre Krause <post@xxxxxxxxxxxxxxxx>:
>> Gael Guennebaud schrieb:
>>
>> well, it seems the matlab team did do something about this as of
>> version 6.5 of matlab. see this link:
>>
>> http://www.caspur.it/risorse/softappl/doc/matlab_help/techdoc/matlab_prog/ch7_pe11.html
>>
>> Operating System   MATLAB 6.1          MATLAB 6.5   Performance Gain
>> Windows            1 min., 25.9 sec.   1.1 sec.     x 78.1
>> Linux              3 min., 13.8 sec.   1.7 sec.     x 114.0
>> Solaris            8 min., 51.0 sec.   23.5 sec.    x 22.6
>>
>> looking at those results, it would be even more interesting to see
>> how well matlab does now compared to native c++.
>>
>> it seems they did indeed add a jit compiler...
>>
>> p.s. just googled - and found this:
>>
>> http://www.mathworks.se/products/matlab/whatsnew.html
>>
>> so matlab really has JIT acceleration.
>>
>>
>>> indeed, if you use matlab on relatively large datasets and you only
>>> use high-level operators/routines then matlab is very fast, because
>>> it relies on other state-of-the-art libraries. There even exist
>>> plugins to perform some computations on the GPU...
>>>
>>> That said, matlab is extremely slow as soon as you are doing more
>>> low-level stuff involving complex expressions or a lot of small
>>> computations... Matlab can be a couple of orders of magnitude slower
>>> than equivalent C/C++ code. It seems Matlab is missing a JIT
>>> compiler.
>>>
>>> btw, benoit thanks for the slides !
>>>
>>> Gael.
>>>
>>> On Mon, Jan 26, 2009 at 3:35 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:
>>>> I have no clue how matlab does in general, but on the forum we've had
>>>> a user benchmark matrix products against matlab, and matlab was
>>>> performing like MKL and Goto, which suggests that it's using one of
>>>> those libraries.
>>>>
>>>> If you're on *nix, you could try ldd or nm to get a clue about what
>>>> library they're using.
>>>>
>>>> Cheers,
>>>> Benoit
>>>>
>>>> 2009/1/26 Andre Krause <post@xxxxxxxxxxxxxxxx>:
>>>>> Benoit Jacob schrieb:
>>>>>> Hi,
>>>>>>
>>>>>> Today morning I gave a talk on Eigen at the CS department. Here are my
>>>>>> slides, in case they might be helpful.
>>>>>> When you see (SHOW BENCHMARKS) that's where I showed the benchmark
>>>>>> graphs that are displayed on the Benchmark page of the wiki.
>>>>>> I didn't have time to cover the last part on Eigen internals, I just
>>>>>> finished with the benchmarks. However people were interested and asked
>>>>>> questions about internals so in effect I got to explain as much of the
>>>>>> internals as I had planned to.
>>>>>>
>>>>>> Thanks a LOT to Keir: he translated the C++ snippets into Matlab for
>>>>>> me, and gave me a good briefing as to how to organize a talk for an
>>>>>> audience of CS people.
>>>>>>
>>>>>> I am just afraid that using \documentclass{slides} for an audience
>>>>>> of CS people makes me look like a dinosaur mathematician.
>>>>>>
>>>>>> Cheers,
>>>>>> Benoit
>>>>> dear benoit, you show matlab code corresponding to eigen2 code. but in the
>>>>> benchmark section on the wiki, there is no matlab benchmark. i would be very
>>>>> curious how slow (or fast?) matlab is compared to eigen. i taught myself
>>>>> matlab over christmas, and now i am curious how matlab compares to c++ (with
>>>>> eigen2).