Re: [eigen] again a unit test precision question
- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: Re: [eigen] again a unit test precision question
- From: Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
- Date: Sat, 3 Jul 2010 10:13:50 -0400
2010/7/3 Jitse Niesen <jitse@xxxxxxxxxxxxxxxxx>:
> On Sat, 3 Jul 2010, Hauke Heibel wrote:
>> Anyways, the redux_8 test is failing every now and then in redux.cpp
>> line 105 (e.g. seed 1278148098), and it is failing with values
>> 6.1214e-005 and 6.10948e-005.
> The difference between these two values is 2e-7. The test adds 33 floats
> between -1 and 1. The error analysis for addition uses absolute error, not
> relative error, so the correct test is whether
> abs(sum computed by one method - sum computed by another method) < tolerance.
> For tolerance, you can use length of vector * epsilon for floats, which is
> bigger than 2e-7, so the error committed is fine.
Yes, using length*epsilon will work here; I agree with Jitse's solution.
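To make that concrete, here is a minimal sketch of the suggested absolute-error check (sums_agree is a hypothetical helper name, not an Eigen function):

```cpp
#include <cmath>
#include <cstddef>
#include <limits>

// Hypothetical helper, not part of Eigen: compare two sums of n floats
// (each of magnitude at most 1) using an absolute tolerance of
// n * machine epsilon, the worst-case accumulated rounding error.
bool sums_agree(float a, float b, std::size_t n) {
    const float tol =
        static_cast<float>(n) * std::numeric_limits<float>::epsilon();
    return std::fabs(a - b) < tol;
}
```

With the failing seed's values, 6.1214e-005f and 6.10948e-005f differ by roughly 2e-7 (as you computed), well below 33 * epsilon, which is about 3.9e-6, so the check passes.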
But I would also like to explain why the above test, which on the
surface of it looks perfectly right, is actually unsafe, and why you
get failures "only now and then".
So: in principle, the test is a very fuzzy comparison (in the unit tests
we use a very low precision threshold) of two versions of what is
mathematically the same sum. If, say, s is 0.638573, the expression
v.tail(size-i).sum() will be something very close to that, and the
test will just succeed, because the error margin is small compared to
s. As Jitse notes, the error margin is at worst length*epsilon (in
practice I would guess even less, since the central limit theorem
predicts a smaller deviation), and length*epsilon is very small
compared to the typical value of s.

So when do problems happen? When, out of pure chance, s is near zero.
In that case, length*epsilon suddenly isn't negligible anymore
compared to s, so the relative fuzzy compare fails. In other words, in
this particular case the error is relative to the order of magnitude
of the elements being summed, which is 1, not to the order of
magnitude of s. So an absolute, not relative, fuzzy compare is what we
need. I'm all for Jitse's solution above.
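This failure mode is easy to reproduce outside Eigen. In the following sketch (the helper names are mine, not Eigen's), the same three floats are summed in two orders; when the true sum is near zero, a relative fuzzy compare on the two results fails even at a loose precision, while an absolute compare against the magnitude of the summands (which is 1) passes:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Relative fuzzy compare, in the spirit of VERIFY_IS_APPROX:
// |a - b| <= prec * min(|a|, |b|). Unsafe when a and b are near zero.
bool approx_relative(float a, float b, float prec) {
    return std::fabs(a - b) <= prec * std::min(std::fabs(a), std::fabs(b));
}

// Absolute fuzzy compare against a reference magnitude of 1, in the
// spirit of VERIFY_IS_MUCH_SMALLER_THAN(a - b, 1).
bool approx_absolute(float a, float b, float prec) {
    return std::fabs(a - b) <= prec;
}

// The same mathematical sum, accumulated in two different orders: the
// rounding errors differ, so the float results can differ too.
float sum_forward(const std::vector<float>& v) {
    float s = 0.0f;
    for (std::size_t i = 0; i < v.size(); ++i) s += v[i];
    return s;
}
float sum_backward(const std::vector<float>& v) {
    float s = 0.0f;
    for (std::size_t i = v.size(); i > 0; --i) s += v[i - 1];
    return s;
}
```

For example, with {1.0f, -1.0f, 1e-8f} the forward sum is 1e-8f while the backward sum is exactly 0.0f (the 1e-8f is absorbed when added to -1.0f), so the relative compare fails at precision 1e-3 while the absolute one passes.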
>> and those values are probably occurring because v's values are uniform
>> random values from [-1,1]. I am quite bad in numerics but I assume we
>> are seeing such differences because of cancellation errors?
>> Or is it simply addition arithmetic?
>> Whatever it is, I think a trivial fix would be to compute the absolute
>> sum. It is not the same test, but in order to succeed it requires both
>> the sum and the absolute value to be working.
> I think we should simply change the test along the above lines.
> Something like
> VERIFY_IS_MUCH_SMALLER_THAN(m1.sum() - s, Scalar(1));
Yes, this is another way of phrasing the solution you proposed above;
whichever you prefer. The first solution above was more of a precision
test, while this one is much looser (it uses the unit tests' default
precision, which is low).
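For reference, here is a rough model of what that check does (a sketch of its semantics, not Eigen's actual macro; the 1e-3f default is an assumed float test precision):

```cpp
#include <cmath>

// Sketch, not the actual Eigen macro: VERIFY_IS_MUCH_SMALLER_THAN(x, ref)
// essentially verifies |x| <= prec * |ref|, where prec is the unit
// tests' default (loose) precision. The 1e-3f default below is an
// assumption for illustration.
bool is_much_smaller_than(float x, float ref, float prec = 1e-3f) {
    return std::fabs(x) <= prec * std::fabs(ref);
}
```

Applied to the failing seed: m1.sum() - s is on the order of 2e-7, which is indeed much smaller than Scalar(1) at that precision, so the test passes regardless of whether the sum itself happens to land near zero.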