- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: Re: [eigen] Testing 2.0
- From: Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
- Date: Sun, 7 Feb 2010 13:03:32 -0500
2010/2/7 Jitse Niesen <jitse@xxxxxxxxxxxxxxxxx>:
> As you can see on
> I ran the test suite yesterday for the 2.0 branch with different versions of
> gcc on my ancient laptop (hence the long compile times). The good news is
> that it all went okay. I manually re-ran each test that failed a few
> times, and they all passed, so it was just a bad roll of the dice.
OK, cool! Thanks!
> However, at some point we should perhaps think about what to do with these
> tests; a lot of them fail fairly frequently (for instance the lu tests)
Ah yes, the LU tests typically fail on the exact rank computation, and
the reason they fail is that Eigen 2.0's algorithm for that is really
not very accurate.
This has been fixed in the development branch: the exact rank
computation now works very consistently, so you can run the LU tests
with 1000 repetitions.
This is too heavy to backport to 2.0, as it makes part of the 2.0 API
obsolete (the bool return value of solve(), for example).
So at this point I'd just keep things as-is in 2.0; it's fixed in the
development branch.
For the other tests, I didn't look.
> which makes them far less useful. Perhaps we should look at the lapack test
> suite, they have both hard fails (if the result is completely wrong) and
> soft fails.
> PS: Good luck with the xpr template stuff which is way too complicated for