Re: [eigen] Precision difference on 32-bit and 64-bit linux using float?
- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: Re: [eigen] Precision difference on 32-bit and 64-bit linux using float?
- From: Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
- Date: Fri, 26 Feb 2010 06:51:17 -0500
2010/2/26 Gael Guennebaud <gael.guennebaud@xxxxxxxxx>:
>
> First, you should not test for equality but do fuzzy comparisons; you
> know, a*b != b*a etc.
Just for the record, a nitpick: a*b = b*a (commutativity) actually holds
exactly in floating-point arithmetic, but a*(b*c) = (a*b)*c (associativity)
and a*(b+c) = a*b + a*c (distributivity) do not.
>
> Second, are you comparing 64-bit against 32-bit without SSE? Or 32-bit with
> SSE? In the former case, it is perfectly normal to get significant
> differences, because the x87 FPU does all computations using 80 bits of
> precision, even for floats. So if you are not enabling SSE on the 32-bit
> platform, I'd say that the 32-bit results are more correct than the 64-bit
> ones (on 64-bit, SSE is enabled by default).
>
> If you think you still have trouble, please be more specific about the kind
> of operations you are doing.
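A sketch of how to put the two 32-bit configurations side by side with GCC (the flags are GCC's; -msse2 assumes SSE2-capable hardware, and test.cpp is a placeholder for your own test):

```shell
# 32-bit build using the x87 FPU (GCC's default for -m32):
# intermediate results are kept in 80-bit extended precision.
g++ -m32 -O2 test.cpp -o test_x87

# 32-bit build using SSE for float math, matching the 64-bit default:
g++ -m32 -msse2 -mfpmath=sse -O2 test.cpp -o test_sse
```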
>
> gael
>
> On Fri, Feb 26, 2010 at 10:47 AM, Atle Rudshaug <atle@xxxxxxxxxxxxxxxxxx>
> wrote:
>>
>> Hi!
>>
>> I have a unit test where I check a function, using a lot of Eigen math,
>> against some hard-coded results for equality. However, the precision on
>> 32-bit and 64-bit platforms differs, which causes the test to fail on our
>> 32-bit nightly test. I am using floats, not doubles. Is this a known
>> "problem"?
>>
>> Here are the results from the 32-bit run (the 64-bit run gets the correct
>> answers):
>> Expected: -8.333e-12 Actual: -8.34754e-12
>> Expected: -1.111e-06 Actual: -1.11111e-06
>> Expected: -1.389e-12 Actual: -1.39126e-12
>> Expected: -1.389e-12 Actual: -1.39126e-12
>> Expected: -5.556e-07 Actual: -5.55554e-07
>> Expected: -5.556e-07 Actual: -5.55554e-07
>> Expected: -5.556e-07 Actual: -5.55554e-07
>> Expected: -5.556e-07 Actual: -5.55554e-07
>> Expected: -6.944e-13 Actual: -6.95629e-13
>> Expected: -6.944e-13 Actual: -6.95629e-13
>> Expected: 1.111e-06 Actual: 1.11111e-06
>> Expected: -1.389e-12 Actual: -1.39126e-12
>> Expected: -1.389e-12 Actual: -1.39126e-12
>> Expected: -1.389e-12 Actual: -1.39126e-12
>> Expected: 5.556e-07 Actual: 5.55557e-07
>> Expected: 5.556e-07 Actual: 5.55557e-07
>> Expected: 5.556e-07 Actual: 5.55557e-07
>> Expected: 5.556e-07 Actual: 5.55557e-07
>> Expected: -6.944e-13 Actual: -6.95629e-13
>> Expected: -6.944e-13 Actual: -6.95629e-13
>>
>> - Atle
>>
>>
>
>