Re: [eigen] port of (c)minpack to eigen : status

On Fri, Sep 25, 2009 at 12:35 AM, Thomas Capricelli <orzel@xxxxxxxxxxxxxxx> wrote:
On Thursday, 24 September 2009 at 15:50:38, Hauke Heibel wrote:
> Actually, in our framework we have that even for optimizers since we also
> like to switch between let's say Downhill Simplex (or Nelder Mead) and
> LevenbergMarquardt. But that is really optional - though in our group we
> utilize it a lot.

Well, I don't really know (yet) about this one.

What is this framework you are speaking about?

It's our group-internal framework - we have a unified interface wrapping different optimizers like LBFGS, Nelder-Mead (aka Downhill Simplex) and Levenberg-Marquardt. It's for research purposes only; we are wrapping the LBFGS of Jorge Nocedal, the Nelder-Mead from Numerical Recipes and the Levenberg-Marquardt from Manolis Lourakis (levmar, in C/C++).
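Roughly, the common interface amounts to something like this (only a sketch with made-up names, not our actual code):

  #include <Eigen/Core>

  // Sketch only: a common base for cost functions...
  struct CostFunction {
      virtual ~CostFunction() {}
      virtual double operator()(const Eigen::VectorXd& x) const = 0;
  };

  // ...and a common base for optimizers; each backend (LBFGS, Nelder-Mead,
  // Levenberg-Marquardt) derives from it and forwards to the wrapped library.
  struct Optimizer {
      virtual ~Optimizer() {}
      virtual void minimize(const CostFunction& f, Eigen::VectorXd& x) = 0; // result written back into x
  };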

In our unit tests I have some standard cost functions (Rosenbrock, Helical Valley, ...) which might be interesting here as well. When I find some time I could add them to your unit tests - by the way, do we have write access to your fork?
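The Rosenbrock one, for instance, is just a tiny functor (sketch only, the real test code may be organized differently):

  #include <Eigen/Core>

  // Rosenbrock: f(x,y) = (1-x)^2 + 100*(y - x^2)^2, global minimum f(1,1) = 0.
  struct Rosenbrock {
      double operator()(const Eigen::Vector2d& p) const {
          const double x = p.x(), y = p.y();
          return (1.0 - x) * (1.0 - x) + 100.0 * (y - x * x) * (y - x * x);
      }
  };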
 
> 3) It seems, as if currently different functions are called for numerical
> differentiation vs. optimal storage vs. analytical differentiation. I am
> just wondering whether these special cases are not candidates for
> templating. These are basically policies, right? Like 'use as much mem as
> you want' vs. 'use minimal memory' and 'differentiate analytically' vs.
> 'differentiate by forward diffs'.

Yes, definitely. I thought about this, but I don't know exactly how to do it
yet - still, I really think we should do that.
Currently, all three 'variants' are indeed almost the same code. It was this
way in cminpack, and I haven't factored it yet. I did pay attention, though,
that the 'diff' between variants stays as small as possible, to ease factoring
later on. If someone tells me the best way to do that using template arguments,
I can go forward.

Afaik, one defines either enums or empty structs and specializes worker structs performing the desired task (differentiation or storage handling), depending on which enum/struct they have been specialized with.
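Something along these lines (only a sketch, all names made up):

  // Empty structs serve as policy tags...
  struct NumericalDiffPolicy {};
  struct AnalyticalDiffPolicy {};

  // ...and a worker struct is specialized once per policy.
  template <typename Functor, typename Policy> struct JacobianWorker;

  template <typename Functor>
  struct JacobianWorker<Functor, NumericalDiffPolicy> {
      // compute the Jacobian of Functor by forward differences
  };

  template <typename Functor>
  struct JacobianWorker<Functor, AnalyticalDiffPolicy> {
      // call the analytical Jacobian provided by Functor
  };

  // The solver itself is then templated on the policy, e.g.
  //   template <typename Functor, typename Policy = NumericalDiffPolicy>
  //   class LevenbergMarquardt { /* uses JacobianWorker<Functor, Policy> */ };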

> 4) Optional: A callback function would be cool. Basically, some way of
> getting updates of intermediate optimization results. It oftentimes helps a
> lot in the course of debugging/tuning specific optimization problems
>  because you can visualize the intermediate results.

There was something like this in minpack. There was an argument 'nprint', and
every 'nprint' steps some callback was called so that the user could do
whatever he wants (display variables, for instance). The return value of the
callback was also used to ask the algorithm to stop. See my commit
baaf3f4c39dd, which removed it.
This is related to question 2: there are two 'ways' of using the optimizers -
one function that does it all by itself, and the *Init()/*OneStep() way where
the loop is handled by the caller. If you want to print stuff and debug, you
could use the second way. Just copy the loop from (for example)
minimizeNumericalDiff() (it's four lines). You could also display from the
callbacks.
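Roughly (typed from memory, the exact names and status codes may differ a bit from what is in the fork right now):

  Eigen::VectorXd x = x0;                            // x0: the starting point
  int status = lm.minimizeNumericalDiffInit(x);
  while (status == Running) {                        // 'Running' stands in for the real status value
      status = lm.minimizeNumericalDiffOneStep(x);
      std::cout << "x = " << x.transpose() << "\n";  // print/visualize the intermediate result
  }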

I see it now. I admit that this is possible, though the question arises whether it's preferable to come up with a single, slim interface that still offers all the bells and whistles. The problem here would be how we define callbacks - in case we wanted to use them. Should we go for std::tr1::function (like boost::function, but it only exists for newer compilers) or implement it ourselves?
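Just to make the question concrete, the callback type could look like this (purely illustrative):

  #include <tr1/functional>   // gcc; MSVC ships it in <functional>, boost::function is the portable fallback
  #include <Eigen/Core>

  // Illustrative only: called every few iterations with the current estimate;
  // returning false would ask the optimizer to stop early.
  typedef std::tr1::function<bool (const Eigen::VectorXd&, int /*iteration*/)>
      IterationCallback;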

Hauke

