Re: [eigen] Specializing max_coeff_visitor for some number types



On 07.04.2014 20:33, Marc Glisse wrote:
> The attached patch doesn't change the result of make check (adolc is
> still broken, there is a missing -lGL somewhere, and parmetis doesn't
> choose any MPI, so it finds none).

The adolc test is passing on my machine (that is, with the current devel
branch; I have not tested your patch yet):
http://manao.inria.fr/CDash/testSummary.php?project=1&name=forward_adolc&date=2014-04-08

I don't have METIS installed, and strangely, OpenGL is not detected here.

> As you can see in LU, there is a subtlety I hadn't thought of: max(abs)
> was returning a RealScalar, not a Scalar. That's not a big problem when
> we only want to compare it with 0.
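
(For illustration of that subtlety -- a minimal sketch, not from the
patch: with a complex matrix, the coefficient-wise absolute value is
real, so the maximum is a RealScalar.)

  #include <Eigen/Dense>

  int main() {
    Eigen::MatrixXcd m = Eigen::MatrixXcd::Random(3, 3);
    // Scalar is std::complex<double>, but cwiseAbs() yields RealScalar
    // entries, so maxCoeff() returns double, not std::complex<double>.
    double maxAbs = m.cwiseAbs().maxCoeff();
    return maxAbs > 0 ? 0 : 1;  // comparing with 0 works on a RealScalar
  }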

I think that m_maxpivot would technically not be needed for rational
types or for interval types, because in neither case do we need any kind
of thresholding to determine whether a pivot is exactly zero (rational)
or possibly zero (interval). But specializing rank() is really a
separate problem, and it likely requires different specializations for
all the decompositions again.
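
(A minimal sketch of that point, with made-up names, nothing from
Eigen: for an exact scalar type, counting nonzero pivots needs no
threshold at all.)

  #include <vector>

  // Hypothetical: Scalar is an exact type such as a rational, so a
  // pivot is zero iff it compares equal to zero -- no m_maxpivot and
  // no fuzzy threshold needed.
  template <typename Scalar>
  int exactRank(const std::vector<Scalar>& pivots) {
    int r = 0;
    for (const Scalar& p : pivots)
      if (p != Scalar(0))
        ++r;  // exact comparison
    return r;
  }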

> I think I didn't break maxPivot() in FullPivLU.

maxPivot() wouldn't really make sense for rationals, because you are
picking the 'best' pivot using some other criterion -- of course, you
can still compute your maxPivot, but it carries no real meaning.

> Householder, SVD, etc. would likely need a similar treatment.

Of course, but let us do that after we have agreed on how to finish
this.

> I want the best in my case (well, not necessarily the best, but one
> good enough). And the biggest lower bound is just a simple heuristic:
> given the choice between (1,1) and (2,20) I'd pick the first interval
> as the better pivot ;-) But it may not be worth doing anything more
> subtle.

Hm, well, for a matrix such as
 [ (1,1)  row1 ]
 [ (2,20) row2 ]
the question is whether
 row2 - (2,20)/(1,1) * row1
or
 row1 - (1,1)/(2,20) * row2
is better (a simplified example). But I agree that simply using the
element with the bigger lower bound may not always be the best choice.
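
(To make that heuristic concrete, here is a stand-alone sketch;
Interval, absLower and betterPivot are made-up names, not from the
patch:)

  #include <algorithm>
  #include <cmath>

  struct Interval { double lo, hi; };  // toy interval type

  // Guaranteed lower bound of |x| over the interval
  // (0 if the interval straddles 0).
  double absLower(const Interval& x) {
    if (x.lo <= 0 && x.hi >= 0) return 0;
    return std::min(std::abs(x.lo), std::abs(x.hi));
  }

  // The "biggest lower bound" heuristic: a beats b as a pivot if the
  // magnitude of a is guaranteed to be larger.
  bool betterPivot(const Interval& a, const Interval& b) {
    return absLower(a) > absLower(b);
  }

Note that this heuristic picks (2,20) over (1,1); preferring the exact
interval, as you would, needs the interval width as an additional
criterion.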

> I believe you need to consider at least a third row. The uncertainty
> on the pivot will propagate to the whole matrix (except its own
> row/col).

Ok, but then you'd actually also need to consider the contents of the
rest of the rows. E.g., choosing a row
  [(100,100)  (-1e6,+1e6)  (-2e6,1e3)  (-1e3,2e6)]
might not be a good choice, even if its first element makes a good
pivot. Note that none of the other elements of that row can be used as
a pivot at all, because their reciprocal is (-inf,inf).
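
(Continuing the toy Interval sketch from above, this is why such
elements are unusable: the reciprocal of an interval containing 0 is
unbounded.)

  #include <limits>

  // Reuses the toy Interval struct from the earlier sketch.
  Interval reciprocal(const Interval& x) {
    if (x.lo <= 0 && x.hi >= 0) {
      const double inf = std::numeric_limits<double>::infinity();
      return Interval{-inf, inf};  // straddles 0: useless as a pivot
    }
    return Interval{1.0 / x.hi, 1.0 / x.lo};  // 1/x is monotone here
  }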

That said, maybe we need to let the user provide different pivoting
strategies, depending on their needs? Unfortunately, that risks
bloating the interface.

> Is the patch roughly the right approach? Where do we go from there?

I would consider this the right approach, especially as long as we
don't change existing behavior. But I'd like to hear the opinions of
other developers before going further.

Of course, to make your patch worth the effort, we'd also need an example of a specialized scalar_better_coeff_op, demonstrating the benefits.
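
(Purely as a guess at the shape such a specialization could take -- the
functor in your patch may well look different; Interval and absLower()
are the toy definitions from the sketches above:)

  #include <cmath>

  // Guessed primary template: "better" defaults to larger magnitude.
  template <typename Scalar>
  struct scalar_better_coeff_op {
    bool operator()(const Scalar& a, const Scalar& b) const {
      using std::abs;
      return abs(a) > abs(b);
    }
  };

  // Toy specialization: compare guaranteed magnitudes instead, so a
  // wide interval with a small lower bound does not win the pivot
  // search just because its upper bound is huge.
  template <>
  struct scalar_better_coeff_op<Interval> {
    bool operator()(const Interval& a, const Interval& b) const {
      return absLower(a) > absLower(b);
    }
  };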

Christoph




--
----------------------------------------------
Dipl.-Inf., Dipl.-Math. Christoph Hertzberg
Cartesium 0.049
Universität Bremen
Enrique-Schmidt-Straße 5
28359 Bremen

Tel: +49 (421) 218-64252
----------------------------------------------


