On 5 April 2012 08:22, Pavel Holoborodko <pavel@xxxxxxxxxxxxxxx> wrote:
> Hello Mark,
>
> Thank you for creating a wonderful FFT library and integrating it with Eigen!
>
> Today I've done some tests on using Eigen::FFT (with kissfft as a back-end)
> together with arbitrary precision floating-point scalars, mpreal - another
> "unsupported" module in Eigen.
>
> I found that the kissfft back-end is very well written and invariant of the
> scalar type - no additional conversions were required.
> However, there is one thing I don't understand - I hope you (or other
> gurus) can help.
>
> I use a multiple-precision floating-point numerical type and set 300 decimal
> digits of accuracy for the computations.
> Then I apply fwd to random data, apply the inverse inv, and compare its output
> with the original data.
> In a perfect world, the difference between the original data and the data
> returned by inv should match up to full precision (approx. 300 digits).
>
> This is the case only if the input data length is a power of 2 - then all
> digits are correct (the error is less than 1e-300).
> However, if the data length is of any other size (even or odd), I get only
> 16 correct digits (ordinary double precision).
> This suggests that double arithmetic was used somewhere along the way in
> fwd or inv.
>
> I've checked the code and couldn't find any "obvious" places myself. Could you
> suggest where the problem could be?
>
> Thank you in advance, code is below.
>
> #include <iostream>
> #include <vector>
> #include <complex>
> #include <cstdlib>
> #include <unsupported/Eigen/MPRealSupport>  // mpfr::mpreal + Eigen NumTraits
> #include <unsupported/Eigen/FFT>
>
> int main()
> {
>     // Set precision to 1024 bits (roughly 308 decimal digits)
>     mpfr::mpreal::set_default_prec(1024);
>
>     typedef mpfr::mpreal Real;
>     typedef std::complex<Real> Complex;
>
>     const int N = 16;          // 2^n is "ok", other lengths are _not_
>     std::vector<Real> x(N);    // real input
>     std::vector<Complex> y;    // forward transform
>     std::vector<Real> z;       // round-trip output
>     Real norm = 0;
>
>     for (size_t i = 0; i < x.size(); i++)
>         x[i] = Real(rand() / (double)RAND_MAX);
>
>     Eigen::FFT<Real> fft;
>     fft.fwd(y, x);   // forward, real -> complex
>     fft.inv(z, y);   // inverse, complex -> real
>
>     // Compare the round trip with the original data
>     for (size_t i = 0; i < x.size(); i++)
>         norm += (x[i] - z[i]) * (x[i] - z[i]);
>
>     std::cout << "norm = " << sqrt(norm) << std::endl;
>     return 0;
> }
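>
> The kind of code I would expect to cause this (purely a hypothetical sketch,
> not taken from kissfft) is a constant such as pi written as a double literal:
> the whole expression is then evaluated in double before it is converted to
> Real, which caps the result at roughly 16 digits no matter how much precision
> mpreal carries.
>
> #include <cmath>
> #include <complex>
>
> // Hypothetical sketch: a twiddle factor built from a double literal
> // versus one computed in the working precision of Scalar.
> template <typename Scalar>
> std::complex<Scalar> twiddle_lossy(int k, int n)
> {
>     using std::cos; using std::sin;
>     // Evaluated entirely in double, then converted: only ~16 digits survive.
>     Scalar phase = Scalar(-2.0 * 3.141592653589793 * k / n);
>     return std::complex<Scalar>(cos(phase), sin(phase));
> }
>
> template <typename Scalar>
> std::complex<Scalar> twiddle_exact(int k, int n)
> {
>     using std::acos; using std::cos; using std::sin;
>     // acos(Scalar(-1)) yields pi at the precision of Scalar
>     // (for mpreal, mpfr::const_pi() is another option).
>     const Scalar pi = acos(Scalar(-1));
>     Scalar phase = Scalar(-2) * pi * Scalar(k) / Scalar(n);
>     return std::complex<Scalar>(cos(phase), sin(phase));
> }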
>
>
> On Sat, Mar 10, 2012 at 4:33 AM, Mark Borgerding <mark@xxxxxxxxxxxxxx> wrote:
>>
>> On 03/05/2012 04:21 PM, Trevor Irons wrote:
>>>
>>> ... if Mark is reading:
>>>
>>> You had threatened to update the FFT module with 2D in the past. Are
>>> there still plans for this?
>>
>>
>>
>> Plans? Yes. Timely plans? Hmmm.... I am currently pursuing a Ph.D.,
>> while working nearly full-time and trying to spend time with my family.
>> Idle nights suitable for hacking are few and far between.
>>
>> Here are some of the thoughts I have for ...
>>
>> ... cleaning up the interface:
>>
>> * Make correctly typed complex input a requirement for FFT::fft --
>> the current version has a template that tries to do the right thing no
>> matter what (overambitious, at least for an initial version)
>> * Make separate real-optimized methods: rfft, rifft
>> o the original version that overloads on scalar vs. complex
>> types is of minimal value and maximal headache -- e.g. when
>> trying to perform proxied inverse FFTs, there is no way to
>> infer whether the result should be real
>> * Once the above is done, it should be easier to make
>> the returned proxy mode work properly, to allow code like:
>> o crossSpec += _dft.fft(x).cwiseProduct( _dft.fft(y) );
>> * Make the FFT operation unitary (norm-preserving: eigenvalues of unit
>> magnitude) by scaling by 1/sqrt(n) in both directions -- this might be
>> controversial because it disagrees with Matlab. Thoughts, anyone? (A
>> small standalone sketch of this convention follows this list.)
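>>
>> For concreteness, a small standalone sketch of that convention (it builds a
>> dense DFT matrix directly and uses nothing from the FFT module, so it is
>> only an illustration): with both directions scaled by 1/sqrt(n), the
>> transform matrix times its conjugate transpose is the identity, so a round
>> trip needs no extra factor and Parseval's identity holds without one.
>>
>> #include <Eigen/Dense>
>> #include <complex>
>> #include <cmath>
>> #include <iostream>
>>
>> int main()
>> {
>>     const int n = 8;
>>     const double pi = std::acos(-1.0);
>>
>>     // DFT matrix with the 1/sqrt(n) scaling baked in.
>>     Eigen::MatrixXcd F(n, n);
>>     for (int r = 0; r < n; ++r)
>>         for (int c = 0; c < n; ++c)
>>             F(r, c) = std::polar(1.0 / std::sqrt(double(n)),
>>                                  -2.0 * pi * r * c / n);
>>
>>     // For a unitary matrix, F * F^H is the identity; the deviation
>>     // printed here is at round-off level (~1e-15).
>>     std::cout << (F * F.adjoint()
>>                   - Eigen::MatrixXcd::Identity(n, n)).norm() << std::endl;
>>     return 0;
>> }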
>>
>>
>> ... stability, testing enhancements:
>>
>> * Fix the CMake script FindFFTW -- I don't really know much about
>> CMake. The FindFFTW script seems to do the right thing under Linux, but
>> people are apparently having trouble on Windows. -- If you find a
>> bug, then you have 90% of the credentials needed to fix it!
>>
>>
>> ... features
>>
>> * Add the Intel MKL backend
>> * 2-D FFTs (a row-column sketch built on the current 1-D interface
>> follows this list)
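>>
>> Until a native 2-D interface exists, a 2-D transform can be composed from
>> the existing 1-D calls by transforming the rows and then the columns. The
>> sketch below assumes complex data stored row-major in a flat std::vector;
>> it only illustrates the row-column decomposition, not the planned API.
>>
>> #include <unsupported/Eigen/FFT>
>> #include <algorithm>
>> #include <complex>
>> #include <vector>
>>
>> // Row-column 2-D FFT built from repeated 1-D transforms.
>> // 'a' holds rows*cols complex samples, row-major; transformed in place.
>> void fft2d(std::vector<std::complex<double> >& a, int rows, int cols)
>> {
>>     Eigen::FFT<double> fft;
>>     std::vector<std::complex<double> > line, spec;
>>
>>     // 1-D FFT of every row.
>>     for (int r = 0; r < rows; ++r) {
>>         line.assign(a.begin() + r * cols, a.begin() + (r + 1) * cols);
>>         fft.fwd(spec, line);
>>         std::copy(spec.begin(), spec.end(), a.begin() + r * cols);
>>     }
>>
>>     // 1-D FFT of every column.
>>     for (int c = 0; c < cols; ++c) {
>>         line.resize(rows);
>>         for (int r = 0; r < rows; ++r) line[r] = a[r * cols + c];
>>         fft.fwd(spec, line);
>>         for (int r = 0; r < rows; ++r) a[r * cols + c] = spec[r];
>>     }
>> }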
>>
>>
>> Anyone with thoughts on the above, or observations from their use of FFT?
>>
>> -- Mark
>>
>>
>