Re: [eigen] on fuzzy comparisons

Hi,

Executive summary:
- for vectors, the current approach is correct, and I can justify it.
- for matrices, the good solution would be extremely costly, so the current
  solution is a compromise, and I am open to suggestions if you think another
  compromise is better.

*** Vectors ***

There is absolutely no reason whatsoever why
(v << [a,...,a]) should be equivalent to (v << a).

As n increases, the vector [a,...,a] with a repeated n times, gets bigger and 
bigger. Its norm, which is equal to abs(a)*sqrt(n), gets arbitrarily large.
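For instance (my own numeric example, using present-day Eigen names, just to
illustrate the point):

#include <Eigen/Core>
#include <iostream>

int main()
{
  // a = 0.1 repeated n = 100 times: every coefficient is tiny,
  // but the norm is sqrt(100)*0.1 = 1, which is not small at all.
  Eigen::VectorXd v = Eigen::VectorXd::Constant(100, 0.1);
  std::cout << v.norm() << std::endl;  // prints 1
}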

When you suggest that (v << [a,...,a]) should be equivalent to (v << a), you
implicitly assume that [a,...,a] intuitively has the "same size" regardless
of n. But that is not the case at all.

To make this precise, we need to agree on what we call the "size" of a
vector. I'm saying that the only good such notion is the vector norm. You
were proposing to look at the sizes (absolute values) of the individual
coefficients of the vector, and I'm saying that's the wrong approach! Indeed,
I suppose that we agree that rotating a vector does not change its size. In
general n-space, "rotating" means "applying a unitary matrix", where unitary
(a.k.a. orthogonal in the real case) means a matrix u such that uu*=u*u=1.
The vector norm is invariant under such a "rotation": whenever u is unitary
and v is a vector, norm(uv)=norm(v). But the absolute values of the
coefficients (or the max of them) are not at all invariant under such
"rotations"!

I hope to have convinced you that:
(v << [a,...,a]) should be equivalent to
(v << norm([a,...,a])), that is, (v << sqrt(n)*abs(a)).

And more generally that:
v << w should be equivalent to v << norm(w)
which should be equivalent to norm(v) << norm(w).
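
A minimal sketch of that rule in Eigen-style C++ (the helper names, the scalar
overload and the choice of epsilon are mine, not the actual
isMuchSmallerThan() implementation):

#include <Eigen/Core>
#include <cmath>

// v << w  <=>  norm(v) << norm(w).
template <typename Derived, typename OtherDerived>
bool isMuchSmallerThanSketch(const Eigen::MatrixBase<Derived>& v,
                             const Eigen::MatrixBase<OtherDerived>& w,
                             double eps = 1e-12)
{
  // Compare norms, not individual coefficients: this is invariant under
  // multiplication of v and w by a unitary matrix (a "rotation").
  return v.norm() <= eps * w.norm();
}

// Scalar variant:  v << a  <=>  norm(v) << abs(a).
template <typename Derived>
bool isMuchSmallerThanSketch(const Eigen::MatrixBase<Derived>& v,
                             double a, double eps = 1e-12)
{
  return v.norm() <= eps * std::abs(a);
}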

*** Matrices ***

Let's state right away what the really good notion of size for a matrix is:
it's matrixNorm(). So theoretically,

m1 << m2 should be equivalent to m1.matrixNorm() << m2.matrixNorm()
and
m << x should be equivalent to m.matrixNorm() << abs(x)  for any scalar x.

The matrix norm has all the good properties you expect; m and umu* have the 
same norm for any unitary u so it's invariant under rotation; also m and m* 
have the same norm.
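
To make the idea concrete, here is a sketch of what a matrixNorm()-based
comparison could look like; the spectral norm is computed here as the largest
singular value, and JacobiSVD is just a present-day Eigen class used for
illustration, not what matrixNorm() actually does:

#include <Eigen/Core>
#include <Eigen/SVD>

// Sketch only: the operator (spectral) norm is the largest singular value.
double operatorNormSketch(const Eigen::MatrixXd& m)
{
  // singularValues() is sorted in decreasing order, so (0) is the largest.
  return Eigen::JacobiSVD<Eigen::MatrixXd>(m).singularValues()(0);
}

// m1 << m2  <=>  operatorNorm(m1) << operatorNorm(m2)
bool isMuchSmallerThanSpectral(const Eigen::MatrixXd& m1,
                               const Eigen::MatrixXd& m2,
                               double eps = 1e-12)
{
  return operatorNormSketch(m1) <= eps * operatorNormSketch(m2);
}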

Unfortunately... this is very costly to compute and depends on the QR module! 

If you think that we don't need matrix comparisons in the Core module and that
it's OK for them to be quite costly, then if you like we can move that
stuff to QR and use matrixNorm(). In fact, AFAICR only the unit-tests use
matrix comparisons, and it's quite OK to have them depend on QR.

So yes, the current solution is suboptimal, but at least it retains some
mathematical properties.

What the current solution means is that:

*****************************************************************
m1 << m2 if and only if we have m1*v << m2*v for all vectors v
*****************************************************************

Sounds reasonable, doesn't it? I am very attached to the fact that this
relation has such a simple geometrical meaning. The implemented algorithm
amounts to checking this for v running over the canonical basis, which is
equivalent to checking it for all vectors v.
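
In code, that column-wise criterion looks like this (my own sketch, not the
actual Eigen source; the function name and epsilon handling are mine):

#include <Eigen/Core>

// m1 << m2  iff  m1*e_j << m2*e_j for every canonical basis vector e_j,
// i.e. iff  norm(m1.col(j)) <= eps * norm(m2.col(j))  for every column j.
bool isMuchSmallerThanColumnwise(const Eigen::MatrixXd& m1,
                                 const Eigen::MatrixXd& m2,
                                 double eps = 1e-12)
{
  for (int j = 0; j < m1.cols(); ++j)
    if (m1.col(j).norm() > eps * m2.col(j).norm())
      return false;
  return true;
}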

It is inspired by the so-called "strong topology" on operators.

If it is really important to you that the matrix comparisons behave well with
respect to taking adjoints, then what you're after is called the "*-strong
topology", and concretely this better relation (let's denote it by <<<) is
just defined by:

m1 <<< m2 if and only if m1 << m2 and m1* << m2*.

So if you like, we can do this instead; it solves your concern about
transposes; it is just twice as slow, unless marked<SelfAdjoint>().

There remains the possibility, mentioned above, of using matrixNorm() to get
the best possible notion of matrix comparisons.

Cheers,

Benoit


> Things get weirder when we compare matrices: the comparison is done per
> column and each column has to satisfy the above criterion.
> Then it is very easy to build examples where:
>
> m << a   !=    m' << a
>
> (where ' denotes the transpose)
>
> I understand such a definition comes from math when you interpret a matrix
> as an operator, but here I think they should be considered like big 1D
> vectors ?
>
> Actually a related question is whether or not all the vector/matrix entries
> are assumed to be of the same order or not. I guess there is no other
> choice but to assume that, and if it's not the case then it's up to the user
> to do the comparisons per block...
>
> Then what about comparisons based on the max norm:
>
> v.isApprox(w)   ==  (v-w).cwiseAbs().maxCoeff() < eps *  min( v.order() ,
> w.order() );
>
> v << w == v.cwiseAbs().maxCoeff() < eps * w.order()
>
> v << a  ==  v.cwiseAbs().maxCoeff() < eps * |a|
>
> with the same definitions for vectors and matrices, and order could be
> defined like that:
>
> m.order() == m.cwiseAbs().maxCoeff();   // not very stable I guess
>
> m.order() == m.norm() / m.size();
>
> m.order() == m.cwiseAbs().sum()  / m.size();
>
>
> cheers,
> Gael.

