[eigen] more naming debates
• To: eigen@xxxxxxxxxxxxxxxxxxx
• Subject: [eigen] more naming debates
• From: "Gael Guennebaud" <gael.guennebaud@xxxxxxxxx>
• Date: Sun, 20 Jul 2008 13:50:19 +0200

Hi list,

I'd like to put some new (and old) naming discussion on the table.

The first one is about the current MatrixBase::inverseProduct(matrix)
and <any_suitable_decomposition>::solve(matrix) method names.
Currently <any_suitable_decomposition> only holds for the two
Cholesky decompositions, but such a method could also be applied to QR,
EigenDecomposition, the soon-coming LU and the not-so-soon-coming SVD. All
those methods basically aim at doing the same thing. An example:

x = A.cholesky().solve(b);

computes A^-1 * b using a Cholesky decomposition of the matrix A (which
has to be positive definite). From a slightly more high-level point of
view, this solves the linear system Ax = b.
Moreover, note that here "b" can also be a matrix (not only a vector),
which can be interpreted as either a batch solve or just an "inverse
product".
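To make the "batch solve" reading concrete, here is a rough sketch in plain C++ (invented helper names, not Eigen code): solving A X = B with a matrix right-hand side B is just solving A x = b once per column of B. The example uses a lower-triangular A so that forward substitution suffices.

```cpp
#include <cassert>
#include <cstddef>

// Forward substitution: solve L x = b for a lower-triangular n x n matrix L
// (row-major, non-zero diagonal), writing the result into x.
void forward_subst(const double* L, const double* b, double* x, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        double s = b[i];
        for (std::size_t j = 0; j < i; ++j)
            s -= L[i * n + j] * x[j];
        x[i] = s / L[i * n + i];
    }
}

// "Batch" solve L X = B for an n x m right-hand side B: extract each column,
// solve it as an ordinary vector system, and write it back into X.
void forward_subst_batch(const double* L, const double* B, double* X,
                         std::size_t n, std::size_t m) {
    for (std::size_t c = 0; c < m; ++c) {
        double b[16], x[16];  // small fixed scratch, enough for this sketch
        for (std::size_t i = 0; i < n; ++i) b[i] = B[i * m + c];
        forward_subst(L, b, x, n);
        for (std::size_t i = 0; i < n; ++i) X[i * m + c] = x[i];
    }
}
```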

I guess we should provide the same API for each decomposition:

x = A.LU().solve(b);
x = A.QR().solve(b);
x = A.SVD().solve(b);

First question: do you like such an API? Or did you have something
different in mind?
Now we also have the MatrixBase::inverseProduct(matrix) method, which
does exactly the same thing but only for a triangular matrix A (so no
decomposition is needed):

x = A.marked<Upper>().inverseProduct(b);

which is not really consistent. We should either go for names based on
"solve" or on "inverse product". I think that someone looking for a
triangular solver in Eigen won't find it easily, so triSolve() would
be more explicit (while keeping the above .solve() for decompositions).
On the other hand, those methods really do perform an inverse product,
which sounds a bit more generic. What do you think?
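For reference, what inverseProduct (or a would-be triSolve()) amounts to for an upper-triangular A is plain back substitution: x = A^-1 * b is computed without ever forming A^-1. A minimal sketch in plain C++, not Eigen's implementation:

```cpp
#include <cassert>
#include <cstddef>

// Back substitution: compute x = U^-1 * b for an upper-triangular n x n
// matrix U (row-major, non-zero diagonal). The inverse is never formed;
// rows are solved from the bottom up.
void back_subst(const double* U, const double* b, double* x, std::size_t n) {
    for (std::size_t i = n; i-- > 0;) {
        double s = b[i];
        for (std::size_t j = i + 1; j < n; ++j)
            s -= U[i * n + j] * x[j];
        x[i] = s / U[i * n + i];
    }
}
```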

Now I'd like to restart the debate about triangular matrices and
marked/part/extract. Let's summarise the current state:

1 - We have "A.marked<Upper>()", which marks A as upper-triangular:
the strict lower part is assumed to be zero, and it *must* be zero.
In practice it simply adds the proper bit flags so that some high-level
algorithms can follow an optimized path.
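The "mark, don't copy" idea can be sketched like this (hypothetical names, plain C++; Eigen uses bit flags rather than the tag-dispatch shown here): the mark carries no data and changes no coefficients, it only lets overload resolution pick a path that skips the strict lower part.

```cpp
#include <cassert>

// A compile-time mark: attaching it to a matrix promises that the strict
// lower part is zero. It stores nothing beyond a pointer to the data.
struct Upper {};

template <typename Part>
struct Marked {
    const double* data;  // 2x2 row-major storage, left untouched
};

// Generic path: full 2x2 matrix-vector product.
void product(const double* m, const double* v, double* out) {
    out[0] = m[0] * v[0] + m[1] * v[1];
    out[1] = m[2] * v[0] + m[3] * v[1];
}

// Optimized path selected by the Upper mark: m[2] is assumed to be zero,
// so it is never even read.
void product(Marked<Upper> m, const double* v, double* out) {
    out[0] = m.data[0] * v[0] + m.data[1] * v[1];
    out[1] = m.data[3] * v[1];
}
```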

2 - We have "A.extract<Upper>()", which extracts the upper-triangular
part of an arbitrary matrix, much like .block(). To be compatible with
other matrices, querying a coefficient in the strict lower part
returns 0. So you can build a random triangular matrix like this:
Matrix4f tm = Matrix4f::random().extract<Upper>();
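The read semantics of extract boil down to this one-liner (an invented helper, not Eigen's implementation): coefficient access below the diagonal yields 0 instead of touching the stored value.

```cpp
#include <cassert>
#include <cstddef>

// Read-only "extract<Upper>" semantics for a row-major n x n matrix:
// coefficients in the strict lower part read as 0, everything else
// reads through to the underlying storage.
double upper_coeff(const double* m, std::size_t n, std::size_t i, std::size_t j) {
    return (i > j) ? 0.0 : m[i * n + j];
}
```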

3 - We also have "A.part<Upper>()", which allows writing to the
triangular part of a matrix, e.g.:
Matrix4f tm(Matrix4f::zero());
tm.part<Upper>() = Matrix4f::random();
So it behaves more like .block(), but as a write-only proxy.
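And the write side of part comes down to this (again a sketch with invented names, not Eigen code): the assignment only touches coefficients on or above the diagonal, leaving the rest of the destination exactly as it was.

```cpp
#include <cassert>
#include <cstddef>

// Write-side "part<Upper>" semantics: copy src into dst, but only the
// coefficients on or above the diagonal; the strict lower part of dst
// is left untouched.
void assign_upper_part(double* dst, const double* src, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = i; j < n; ++j)
            dst[i * n + j] = src[i * n + j];
}
```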

I know we've already discussed this a lot, but I still think the
current state is not satisfactory: "extract" and "part" should be
merged, since they conceptually do exactly the same thing, namely
select a triangular part and allow either reading from or writing to
it. The difference is that writing outside the selected part is
obviously forbidden, while reading outside the triangular part returns
0, which also makes perfect sense. I don't think that requires two
different functions, which IMO only introduces confusion.

So in case we agree to keep only one name, I suggest keeping "part".
(We could also imagine names based on "view", like .viewAs(), but that
would be confusing with .marked().)

Cheers,
Gael.


 Mail converted by MHonArc 2.6.19+ http://listengine.tuxfamily.org/