Re: [eigen] non-linear optimization test summary


*To*: eigen@xxxxxxxxxxxxxxxxxxx
*Subject*: Re: [eigen] non-linear optimization test summary
*From*: Thomas Capricelli <orzel@xxxxxxxxxxxxxxx>
*Date*: Sat, 12 Jun 2010 04:19:29 +0200
*Organization*: freehackers.org

Yes, we are aware of those failures. They are actually regressions introduced by a change from Benoit which "should not change any behaviour". I've spent a lot of time trying to understand what happens, some of it with Benoit, but I still don't know what the problem is. Please do not change this file yet. We need to fix it. Adding fuzziness is NOT the solution to this. At least not to fix this regression (then, once I've made sure the tests pass on several OSes/compilers... maybe).

A note on the different problems there are:

* bad 'info': info is what the algorithm returns to indicate the reason for stopping. It is a huge problem when it changes.
* nfev (number of function evaluations): this is slightly less important, but still important. At least on my computer (the very same where the tests passed with the revision preceding Benoit's regression) we should get the same number.
* error on 'squared norm': this one is tricky to explain. It is not the usual "stuff computed on a computer may differ from one computer/OS/compiler to another". What we check here is the result of an optimization algorithm: the value at the minimum of the function. That is the very purpose of the algorithm, and even if we might need some more steps on another computer, we should get the same result. The URLs in NonLinearOptimization.cpp give the source of some (very important) reference tests, and until now we have almost always gotten exactly the same results as those references. If we do not anymore, this is very, very bad (tm). Not just the usual "computers are fuzzy".

Note to Benoit: when you got a much smaller nfev, this is probably because the algorithm completely failed and stopped on a wrong value ('info' is probably different too, but it is checked later on in the test file).

As a side note, I intend to split the file into several tests, but I want to have this regression fixed first, as the split does not help while I hunt it.
I tend to do a lot of going backward/forward in history, merging changes... So, anyway, I have yet to fix those, I know; I have not (yet?) given up.

Regards,

--
Thomas Capricelli <orzel@xxxxxxxxxxxxxxx>
http://www.freehackers.org/thomas

On Friday, 11 June 2010 12:42:29, Benoit Jacob wrote:
> We have discussed this a lot with Thomas already; we're a bit clueless
> about them. These failures started to appear with a seemingly
> unrelated changeset. If it were just 602->606, I'd say add fuzziness.
> But these numbers of iterations can vary a lot more, sometimes much
> larger, sometimes much smaller. In this test I have had a 98 (while
> 602 was expected), and the worst is that this was not reproducible on
> Thomas's machine. Since these numbers of iterations are so erratic, my
> guess was that the termination criterion used by this iterative
> algorithm was wrong; but a quick look at the code hasn't revealed
> anything obvious.
>
> Benoit
>
> 2010/6/11 Hauke Heibel <hauke.heibel@xxxxxxxxxxxxxx>:
> > Hi,
> >
> > I am just posting this as a summary and to get some idea of which
> > tests I should really look into and where we can simply adapt the
> > thresholds.
> >
> > We have the following tests failing (on all systems):
> > NonLinearOptimization_7
> > NonLinearOptimization_8
> > NonLinearOptimization_10
> > NonLinearOptimization_12
> >
> > NonLinearOptimization_7:
> > - number of function evaluations (line 1341, 603-606 where 602 is expected)
> >
> > My guess is that here something fuzzy with an upper limit on function
> > evaluations might be more appropriate.
> >
> > NonLinearOptimization_8:
> > - squared norm (line 1019, 1.42986e-25, 1.42932e-25, 1.42897e-25,
> > 1.42977e-25 where 1.4309e-25 expected)
> >
> > Probably again, we need to be more fuzzy.
> > NonLinearOptimization_10:
> > - info return result (2 where 3 expected)
> > - number of function evaluations (on line 1180 we get 289 where 284 is expected)
> >
> > Maybe here we need to look more deeply into what is going wrong,
> > because the info value should probably be the same.
> >
> > NonLinearOptimization_12:
> > - number of function evaluations (on line 1428 we get 498, 509 where
> > 490 is expected, and on line 1429 we get 378 where 378 is expected)
> >
> > Once again we need fuzziness.
> >
> > I don't know whether I recall it well, but didn't you (Thomas and
> > Benoit) already have a discussion about that topic once on IRC?
> >
> > - Hauke

**Follow-Ups**:
- **Re: [eigen] non-linear optimization test summary** *From:* Benoit Jacob

**References**:
- **[eigen] non-linear optimization test summary** *From:* Hauke Heibel
- **Re: [eigen] non-linear optimization test summary** *From:* Benoit Jacob


Mail converted by MHonArc 2.6.19+ | http://listengine.tuxfamily.org/