On Fri, Feb 26, 2010 at 10:47 AM, Atle Rudshaug
<atle@xxxxxxxxxxxxxxxxxx> wrote:
Hi!
I have a unit test that checks a function, which uses a lot of Eigen math, against some hard-coded results for exact equality. However, the precision on 32-bit and 64-bit platforms differs, which causes the test to fail on our 32-bit nightly test. I am using floats, not doubles. Is this a known "problem"?
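
To be concrete, here is roughly the kind of check that fails, using two of the value pairs from the output below (the harness and the tolerance are just illustrative, not our actual test code):

#include <Eigen/Dense>
#include <iostream>

int main() {
    // Two of the hard-coded expected values vs. the 32-bit actuals
    // from the output below.
    Eigen::Vector2f expected(-8.333e-12f,   -1.111e-06f);
    Eigen::Vector2f actual  (-8.34754e-12f, -1.11111e-06f);

    // Exact, coefficient-wise equality: what the test effectively
    // does now, and what breaks when the two platforms round
    // intermediate results differently.
    bool exact = (expected.array() == actual.array()).all();

    // A coefficient-wise relative-error check with a float-sized
    // tolerance, which passes on both platforms for these values.
    bool approx = ((expected - actual).array().abs()
                   <= 1e-2f * expected.array().abs()).all();

    std::cout << "exact: " << exact << "  approx: " << approx << "\n";
    return 0;
}

(Eigen's isApprox() does a similar fuzzy comparison at the matrix level, so maybe that is the cleaner fix on my end?)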
Here are the results from the 32-bit run (the 64-bit run gets the correct answers):
Expected: -8.333e-12 Actual: -8.34754e-12
Expected: -1.111e-06 Actual: -1.11111e-06
Expected: -1.389e-12 Actual: -1.39126e-12
Expected: -1.389e-12 Actual: -1.39126e-12
Expected: -5.556e-07 Actual: -5.55554e-07
Expected: -5.556e-07 Actual: -5.55554e-07
Expected: -5.556e-07 Actual: -5.55554e-07
Expected: -5.556e-07 Actual: -5.55554e-07
Expected: -6.944e-13 Actual: -6.95629e-13
Expected: -6.944e-13 Actual: -6.95629e-13
Expected: 1.111e-06 Actual: 1.11111e-06
Expected: -1.389e-12 Actual: -1.39126e-12
Expected: -1.389e-12 Actual: -1.39126e-12
Expected: -1.389e-12 Actual: -1.39126e-12
Expected: 5.556e-07 Actual: 5.55557e-07
Expected: 5.556e-07 Actual: 5.55557e-07
Expected: 5.556e-07 Actual: 5.55557e-07
Expected: 5.556e-07 Actual: 5.55557e-07
Expected: -6.944e-13 Actual: -6.95629e-13
Expected: -6.944e-13 Actual: -6.95629e-13
- Atle