Re: FFT module thoughts, was Re: [eigen] Eigen 3.0.5 Could NOT find FFTW
- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: Re: FFT module thoughts, was Re: [eigen] Eigen 3.0.5 Could NOT find FFTW
- From: Márton Danóczy <marton78@xxxxxxxxx>
- Date: Thu, 5 Apr 2012 11:09:52 +0200
That's true, of course, but it would be useful for people working with
single precision on ARM, x86 and PowerPC.
Marton
On 5 April 2012 11:05, Pavel Holoborodko <pavel@xxxxxxxxxxxxxxx> wrote:
> Unfortunately it is not applicable to arbitrary-precision computing (which is
> independent of hardware floats).
>
>
> On Thu, Apr 5, 2012 at 5:53 PM, Márton Danóczy <marton78@xxxxxxxxx> wrote:
>>
>> Hi Pavel and Mark,
>>
>> Julien Pommier (the guy who wrote the SSE-accelerated sin/cos/log/exp
>> code) has written a simple, small and fast SSE/Altivec/NEON-accelerated FFT
>> implementation; maybe it would be worth checking out, either as a backend
>> or to integrate into kissfft?
>>
>> https://bitbucket.org/jpommier/pffft
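>>
>> For reference, using it directly looks roughly like this - a rough sketch
>> from memory of the entry points in pffft.h (pffft_new_setup,
>> pffft_transform_ordered, pffft_aligned_malloc), so double-check against the
>> actual header:
>>
>> #include <cstdio>
>> #include "pffft.h"
>>
>> int main()
>> {
>>     const int N = 1024;  // pffft only handles sizes with factors 2, 3 and 5
>>     float *in   = (float*)pffft_aligned_malloc(N * sizeof(float));
>>     float *out  = (float*)pffft_aligned_malloc(N * sizeof(float));
>>     float *work = (float*)pffft_aligned_malloc(N * sizeof(float));
>>     for (int i = 0; i < N; ++i) in[i] = float(i) / N;  // some test signal
>>
>>     // single-precision real-to-complex forward FFT, output in natural order
>>     PFFFT_Setup *s = pffft_new_setup(N, PFFFT_REAL);
>>     pffft_transform_ordered(s, in, out, work, PFFFT_FORWARD);
>>     std::printf("DC bin = %f\n", out[0]);
>>
>>     pffft_destroy_setup(s);
>>     pffft_aligned_free(work);
>>     pffft_aligned_free(out);
>>     pffft_aligned_free(in);
>>     return 0;
>> }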
>>
>> Also, he implemented sin/cos/log/exp on NEON, see here:
>> http://gruntthepeon.free.fr/ssemath/neon_mathfun.html
>>
>> Greets,
>> Marton
>>
>> On 5 April 2012 08:22, Pavel Holoborodko <pavel@xxxxxxxxxxxxxxx> wrote:
>> > Hello Mark,
>> >
>> > Thank you for creating wonderful FFT-library and integrating it with
>> > Eigen!
>> >
>> > Today I did some tests using Eigen::FFT (with kissfft as a back-end)
>> > together with arbitrary-precision floating-point scalars, mpreal -
>> > another "unsupported" module in Eigen.
>> >
>> > I found that the kissfft back-end is very well written and agnostic to the
>> > scalar type - no additional conversions were required.
>> > However, there is one thing I don't understand - I hope you (or other
>> > gurus) can help.
>> >
>> > I use a multiple-precision floating-point numeric type, set up for 300
>> > decimal digits of accuracy.
>> > Then I apply fwd to random data, apply the inverse inv, and compare its
>> > output with the original data.
>> > In a perfect world, the data returned by inv should match the original
>> > up to full precision (approx. 300 digits).
>> >
>> > This is the case only if the input data length is a power of 2 - then all
>> > digits are correct (the error is less than 1e-300).
>> > However, for data of any other length (even or odd), I get only
>> > 16 correct digits (the usual double precision).
>> > This suggests that double arithmetic was used somewhere along the way in
>> > fwd or inv.
>> >
>> > I've checked the code and couldn't find any "obvious" places myself. Could
>> > you suggest where the problem could be?
>> >
>> > Thank you in advance; the code is below.
>> >
>> > // headers needed to make the fragment self-contained
>> > #include <unsupported/Eigen/MPRealSupport>
>> > #include <unsupported/Eigen/FFT>
>> > #include <vector>
>> > #include <complex>
>> > #include <iostream>
>> > #include <cstdlib>
>> >
>> > // Set up precision to ~308 decimal digits (1024 bits)
>> > mpfr::mpreal::set_default_prec(1024);
>> >
>> > typedef mpfr::mpreal Real;
>> > typedef std::complex<Real> Complex;
>> >
>> > const int N = 16; // 2^n is "ok", others are _not_
>> > std::vector<Real> x(N);
>> > std::vector<Real> z;
>> > std::vector<Complex> y;
>> > Real norm = 0;
>> >
>> > for (int i = 0; i < x.size(); i++)
>> >   x[i] = Real(rand() / (double)RAND_MAX);
>> >
>> > Eigen::FFT<Real> fft;
>> > fft.fwd(y, x);
>> >
>> > fft.inv(z, y);
>> >
>> > // Compare with original
>> > for (int i = 0; i < x.size(); i++) norm += (x[i]-z[i]) * (x[i]-z[i]);
>> >
>> > std::cout<<"norm = "<<sqrt(norm)<<std::endl;
>> >
>> >
>> > On Sat, Mar 10, 2012 at 4:33 AM, Mark Borgerding <mark@xxxxxxxxxxxxxx>
>> > wrote:
>> >>
>> >> On 03/05/2012 04:21 PM, Trevor Irons wrote:
>> >>>
>> >>> ... if Mark is reading:
>> >>>
>> >>> You had threatened to update the FFT module with 2D in the past. Are
>> >>> there still plans for this?
>> >>
>> >>
>> >>
>> >> Plans? Yes. Timely plans? Hmmm.... I am currently pursuing a Ph.D.,
>> >> while working nearly full-time and trying to spend time with my family.
>> >> Idle nights suitable for hacking are few and far between.
>> >>
>> >> Here are some of the thoughts I have for ...
>> >>
>> >> ... cleaning up the interface:
>> >>
>> >> * Make correctly typed complex input a requirement for FFT::fft --
>> >> the current version has a template that tries to do the right thing no
>> >> matter what (overambitious, at least for an initial version)
>> >> * Make separate real-optimized methods: rfft, rifft
>> >> o the original version, which overloads on scalar vs. complex
>> >> types, is of minimal value and maximal headache -- e.g. when
>> >> trying to perform proxied inverse FFTs, there is no way to
>> >> infer whether the result should be real
>> >> * Once the above is done, it should be easier to make the
>> >> returned proxy mode work properly, to allow code like:
>> >> o crossSpec += _dft.fft(x).cwiseProduct( _dft.fft(y) );
>> >> * Make the FFT operation unitary (all eigenvalues of unit modulus) by
>> >> scaling by 1/sqrt(n) in both directions -- this might be controversial
>> >> because it disagrees with Matlab (see the sketch after this list).
>> >> Thoughts anyone?
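>> >>
>> >> To make that concrete, here is a toy O(n^2) DFT sketch (purely
>> >> illustrative, not FFT-module code; the helper name unitary_dft is made
>> >> up): applying the same 1/sqrt(n) factor in both directions makes the
>> >> transform unitary, so the round trip is an identity and Parseval's
>> >> identity holds without extra factors, whereas Matlab applies no factor
>> >> forward and 1/n on the inverse.
>> >>
>> >> #include <complex>
>> >> #include <vector>
>> >> #include <cmath>
>> >>
>> >> // sign = -1 for forward, +1 for inverse; both use the 1/sqrt(n) scaling
>> >> std::vector<std::complex<double> >
>> >> unitary_dft(const std::vector<std::complex<double> >& x, int sign)
>> >> {
>> >>     const std::size_t n = x.size();
>> >>     const double pi = 3.14159265358979323846;
>> >>     std::vector<std::complex<double> > X(n);
>> >>     for (std::size_t k = 0; k < n; ++k)
>> >>         for (std::size_t j = 0; j < n; ++j)
>> >>             X[k] += x[j] * std::polar(1.0, sign * 2.0 * pi * double(k * j) / double(n));
>> >>     for (std::size_t k = 0; k < n; ++k)
>> >>         X[k] /= std::sqrt(double(n));   // same factor in both directions
>> >>     return X;
>> >> }
>> >> // round trip: unitary_dft(unitary_dft(x, -1), +1) == x up to rounding error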
>> >>
>> >>
>> >> ... stability, testing enhancements:
>> >>
>> >> * fix the CMake script FindFFTW -- I don't really know much about
>> >> CMake. FindFFTW seems to do the right thing under Linux, but
>> >> people are apparently having trouble on Windows. -- If you find a
>> >> bug, then you have 90% of the credentials needed to fix it!
>> >>
>> >>
>> >> ... features
>> >>
>> >> * Add the Intel MKL backend
>> >> * 2-d FFTs
>> >>
>> >>
>> >> Anyone with thoughts on the above, or observations from their use of
>> >> FFT?
>> >>
>> >> -- Mark
>> >>
>> >>
>> >
>>
>>
>