Re: [eigen] Usage of Eigen2 in a closed source application


-- sorry for the previous message, sometimes I really hate Gmail --

Hi Keir,

indeed, the major difficulty is to come up with an API:
 - maximizing ease of use,
 - maximizing performance,
 - minimizing matrix copies,
 - minimizing code duplication,
 - maximizing ease of reuse (e.g., we want to be able to use them on
sub-matrices to implement block algorithms).

Obviously, that holds for all high-level linear algebra modules (LU,
Cholesky, QR, SVD), and clearly the current design fails to satisfy all
these objectives. However, I think that having an object (e.g., LU,
SVD, EigenSolver, etc.) which holds/knows the result and on which we
can perform additional queries is really nice, and I'm quite in
favor of keeping this kind of high-level interface. Also note that those
objects provide a compute() function which allows reusing the
allocated matrices with other inputs. As you noticed, what is really
missing is a lower-level interface allowing us to reuse already-allocated
matrices to hold the results, etc. Let me try to sketch something.

An option would be to provide a C-like low-level interface (more like
Lapack routines), and make the high-level classes use it. The main
advantage of this approach is that it is pretty simple to implement.

Another option would be to make our high level classes more flexible
such that we could specify whether the object should:
 - allocate its own data,
 - use the provided matrix to store the results,
 - or even overwrite the input if that makes sense for the given computation.

Actually, those two options are rather complementary, so perhaps we
could also go for both. What is really important is that the core
computation codes are instantiated only once per combination of scalar
type/kind of dimensions (fixed/dynamic)/storage order, and here having
C-like low-level functions could help.

For ease of implementation, and in order to reduce code
generation, I guess we have to enforce a consistent storage order for
all inputs/outputs (C-Lapack enforces that as well). For dynamic
sizes, I think that's not too difficult to achieve. Basically, we can
use some Map<> objects to store the results and some runtime flags to
know whether the memory has to be released when the object is deleted. For
instance, for a computation involving 1 input matrix and 2 results:

template<typename MatrixType>
class SomeClass {

  typedef MatrixType<...> ResultType1;
  typedef MapWithStride<ResultType1> MappedResultType1;

  typedef MatrixType<...> ResultType2;
  typedef MapWithStride<ResultType2> MappedResultType2;

  MappedResultType1 m_result1;
  MappedResultType2 m_result2;
  bool m_isOwner1, m_isOwner2;

  template<typename OtherResultType1, typename OtherResultType2>
  SomeClass(const MatrixType& input,
            OtherResultType1& optResultMatrix1 = NullXpr(),
            OtherResultType2& optResultMatrix2 = NullXpr())
  {
    if (m_isOwner1 = ei_isNull<OtherResultType1>())
      m_result1.remap(new Scalar[...], ...);

    if (m_isOwner2 = ei_isNull<OtherResultType2>())
      m_result2.remap(new Scalar[...], ...);
  }

  ~SomeClass() { if (m_isOwner1) delete[] m_result1.data(); ... }
};


This design assumes we add a MapWithStride expression (basically a
clone of Block) and a NullXpr expression (or something similar).

The pro:
 - quite easy to implement,
 - works with matrices or sub-matrix expressions without generating
multiple versions of the same code,
 - we can provide storage for only a few of the computed data, eg:
SomeClass<MyMatrix>(input, NullXpr(), myresult2);

The cons:
 - currently, not very well designed for fixed sizes,
 - we lose the AlignedBit flag for the results. This is probably not
a big deal, because those algorithms never work on a complete matrix
but always involve complex sub-matrix expressions for which this flag
is lost anyway.

So, for fixed sizes, perhaps we could add some preallocated members:

  ResultType1 m_result1Data;
  ResultType2 m_result2Data;

and replace the "new Scalar[...]" allocations by pointers to the
respective m_result1Data/m_result2Data members. The cons:
 - in case the user provides their own storage matrix, we waste
some stack memory (no big deal?),
 - I guess the compiler could not assume that m_result1 refers to the
stack, and then, perhaps, we would lose some low-level
optimizations... ???

OK, I hope that was not too confusing, and if someone has some ideas
to refine this proposal, or to come up with a brand new design,
you're welcome!

About Householder transformations, yes, factoring them out was in the
TODO. Note that here the problem is a bit different because they are,
by nature, very low-level, and I think it's fine if they are inlined
into the higher-level algorithms (so less of a code generation issue).


On Sun, Dec 14, 2008 at 2:50 AM, Keir Mierle <mierle@xxxxxxxxx> wrote:
> On Sat, Dec 13, 2008 at 12:37 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:
>> Hi Keir,
>> Thanks for answering Frank's e-mail.
>> 2008/12/13 Keir Mierle <mierle@xxxxxxxxx>:
>> > that it didn't at the time have an SVD,
>> By the way, have you looked at reimplementing the SVD? I ask because
>> I'll have some time during the holidays and SVD is near the top of my
>> priority list, so if you haven't started it already then maybe I'll
>> try to do it.
> I have looked at it, and started sketching some interfaces, but didn't end up with anything. I read the relevant parts of the Golub book. Actually implementing the SVD is not hard; the tricky part is to make the API reusable. The SVD algorithm breaks down into a few clear parts which would almost certainly be useful outside of the SVD itself.
> I am not sure about the following things:
> Should we maintain the current API for the SVD and just change under the covers? If so, then it is not obvious to me how to break the algorithm into smaller reusable pieces, because the SVD object owns the matrix with the results. In particular, should the bidiagonalization also be a Bidiagonalization object? Should it own the result matrices?
> Householder matrices are used in several parts of the code; it would be nice to have a simple mechanism for accumulating householder transforms (without actually applying them) to reuse throughout.
> I don't understand how lazy evaluation would work with the SVD; does it even make sense? My template-metaprogramming-fu is not strong.
> I am not sure how to evaluate the numerical precision of the resulting SVD algorithm, which is important for many applications (however this is an entire other can of worms).
> If we can figure these things out, I'd happily write some or all of the SVD for Eigen.
> Keir
>> > Benoit, perhaps it is worth adding an entry to the license FAQ explicitly
>> > explaining that eigen can be used in commercial applications, provided
>> > source modifications to eigen itself are released back to the community?
>> Yes, i'll do it, good idea.
>> Cheers,
>> Benoit
>> ---

