Re: [eigen] cppduals - dual number implementation with Eigen specializations
• To: eigen@xxxxxxxxxxxxxxxxxxx
• Subject: Re: [eigen] cppduals - dual number implementation with Eigen specializations
• From: Michael Tesch <tesch1@xxxxxxxxx>
• Date: Tue, 3 Dec 2019 16:15:44 +0100

Yes, of course.  It's mentioned in the paper.  The dual approach is faster (one multiplication fewer per operation) and exact (it doesn't rely on a small step h).  The only situation where the complex step might be better is when a) your function is real-valued, b) you don't need much accuracy, and c) you have a BLAS whose complex-complex matrix operations are faster than Eigen's dual-valued matrix ops.

There is/was some performance problem with Eigen's mat-mult of complex-valued matrices (and, by extension, my dual-valued Eigen matrices).  The real-valued operations are/were very close to OpenBLAS on my machines, but the complex-valued ops are quite a bit slower.  I mentioned it in the Eigen chat room once, but I haven't had time to track it down and write a proper bug report.  (Sorry, it will happen some day, in case it hasn't already been fixed.)

On Tue, Dec 3, 2019 at 4:04 PM Ian Bell <ian.h.bell@xxxxxxxxx> wrote:
Have you compared against naive complex-step derivatives (https://sinews.siam.org/Details-Page/differentiation-without-a-difference)?  For first derivatives, CSD is trivial to apply and doesn't require any additional machinery.  I think that would serve as a nice benchmark.

On Tue, Dec 3, 2019, 8:36 AM Michael Tesch <tesch1@xxxxxxxxx> wrote:
Hello,

I've written (yet another!) dual number implementation for automatic differentiation.  It is meant to be used as the value type in Eigen matrices, and has templates for vectorization (shockingly) similar to, and based on, Eigen's complex-type vectorizations.  It is quite fast for first-order forward diff, and imho pretty easy to use.  There are also SSE/SSE3/AVX vectorizations for std::complex<dual< float | double >> types.

The library is here: https://gitlab.com/tesch1/cppduals , and there's a small paper in JOSS too: https://doi.org/10.21105/joss.01487

I hope this could be useful for someone and would be glad for any feedback, improvements, etc.

It would be interesting to compare this approach to others; by hand-wavy arguments, I believe it should ultimately be faster in certain cases.

Cheers,
Michael

