Re: [eigen] General design issue



2009/11/16 Gael Guennebaud <gael.guennebaud@xxxxxxxxx>:
>
>
> On Sun, Nov 15, 2009 at 6:29 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
> wrote:
>>
>> 2009/11/15 Gael Guennebaud <gael.guennebaud@xxxxxxxxx>:
>> >
>> >
>> > On Sun, Nov 15, 2009 at 5:00 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
>> > wrote:
>> >>
>> >> Just one thing though: I am still not convinced that we want
>> >> arithmetic operators (let alone expression templates) for all of the
>> >> special matrix classes. It sure would be nice, but I am not convinced
>> >> that it's worth the extra work and complexity.
>> >
>> > Well, yes and no.
>> >
>> > Let's assume we agree that supporting matrix products is important.
>>
>> Matrix product of two special matrices? I really don't see the use
>> case for taking the matrix product of e.g. two BandMatrix... do you
>> have something in mind?
>
> well, for many kinds of physics problems where it is advantageous to use a
> band storage rather than a general sparse storage.

Well, OK, I'll just trust you that there exist such problems, in which
one actually needs to do matrix products... so obviously I don't mind
if you want to work on this, but personally I still need motivating
examples if you would like me to work on this myself. In other words,
I still find it very tempting to say that special matrix classes are
only for special uses like solving.

>
> In order to factor out the code implementing the operators and functions (+,
> *, .transpose(), etc.) across all the different matrix types, I see three
> options:
>
> 1 - Move everything to AnyMatrixBase and find a nice way to generate
> explicit compilation errors when an operation is not supported by the given
> storage type.
>
> 2 - Split them into a set of "base" matrix classes, one per type of
> expression (e.g., CwiseUnaryOps, TransposeOps, etc.), and then let the main
> matrix base class of each kind of storage inherit the set of supported base
> operations. So the current MatrixBase class for dense storage would inherit
> CwiseUnaryOps, TransposeOps, CwiseBinaryOps, ProductOps, etc. One issue is
> that we cannot use plain multiple inheritance because:
>  a) MSVC does not handle multiple inheritance of empty classes well: it
> generates additional storage cost.
>  b) operator- and operator* would be implemented by several different base
> classes.
> To overcome this issue, one common meta-programming solution is to use a
> derivation chain, e.g.:
>
> template<typename Derived> class MatrixBase
>   : public DerivationChain<Derived, CwiseUnaryOps, TransposeOps,
>                            CwiseBinaryOps, ProductOps> {};
>
> The DerivationChain helper class will generate the following inheritance tree:
>
> MatrixBase<Derived> -> CwiseUnaryOps<Derived> -> TransposeOps<Derived> ->
> CwiseBinaryOps<Derived> -> ProductOps<Derived> -> AnyMatrixBase<Derived>
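>
> (Just to fix ideas, here is a rough, purely illustrative sketch of how such a
> DerivationChain helper could be written in C++03; the names, and the way each
> Ops class takes the next base as an extra template parameter, are hypothetical:)
>
> template<typename Derived> class AnyMatrixBase { /* ... */ };
>
> // each Ops class is parameterized by Derived and by the next base in the chain
> template<typename Derived, typename Base> class ProductOps     : public Base { /* operator*, ... */ };
> template<typename Derived, typename Base> class CwiseBinaryOps : public Base { /* operator+, ... */ };
> template<typename Derived, typename Base> class TransposeOps   : public Base { /* transpose(), ... */ };
> template<typename Derived, typename Base> class CwiseUnaryOps  : public Base { /* operator-, ... */ };
>
> // DerivationChain simply nests them, terminating the chain at AnyMatrixBase
> template<typename Derived,
>          template<typename,typename> class Op1,
>          template<typename,typename> class Op2,
>          template<typename,typename> class Op3,
>          template<typename,typename> class Op4>
> class DerivationChain
>   : public Op1<Derived, Op2<Derived, Op3<Derived, Op4<Derived, AnyMatrixBase<Derived> > > > >
> {};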
>
> 3 - The third option is to use the preprocessor: not macros, but rather one
> #include per set of operations, e.g.:
>
> template<typename Derived> class MatrixBase
> {
>   // ...
>   #include <CwiseUnaryOps>
>   #include <TransposeOps>
>   #include <CwiseBinaryOps>
>   #include <ProductOps>
> };
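>
> (Each of those files would contain no class or namespace scope at all, only
> the member declarations/definitions themselves, meant to be textually pasted
> into whatever base class includes them. A hypothetical sketch of what
> <TransposeOps> could reduce to:)
>
> // <TransposeOps> -- a fragment, not a standalone header: it relies on the
> // enclosing class providing the usual Derived typedef and derived() method
> Transpose<Derived> transpose() { return Transpose<Derived>(derived()); }
> // (plus the const variant, adjoint(), etc.)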
>
>
> Because of the difficulty of generating good documentation and good
> compilation errors, I don't really like solution 1.
>
> Solution 2 is the C++ solution. It also splits the documentation, which
> could be an advantage (or not?). Still regarding the documentation, we
> could cheat a bit by showing a flat inheritance tree hiding the derivation
> chain stuff.
>
> Finally, solution 3 is much easier, both for us and for the compiler,
> but I'm not sure how doxygen handles that. I'll try and let you know.
>

If solution 3 is easier for both us and the compiler, that dwarfs any
drawbacks it might have for doxygen...

There's something I'm not sure I understand: this solution only takes
care of defining operators/methods as just returning xpr objects,
right? The actual implementation of the xprs can't be shared at all
across base matrix types, as far as I can see.

Then frankly, I'd say don't worry too much about this aspect (of the
operators returning xprs) because that is only a relatively small part
of the code and effort... anyhow, the bulk of the work will be to
write all the xpr classes.

Benoit

> gael.
>
>>
>> Benoit
>>
>> > The problem is that in most cases you don't only want a simple product:
>> > you also want to be able to take the adjoint of the factors, do some
>> > scaling, and maybe accumulate the result at the same time. It is no
>> > accident that the BLAS GEMM routine does all that in a single call. To
>> > make things clear, let me recall that GEMM allows one to do:
>> >
>> > C = C + alpha * op(A) * op(B)
>> >
>> > However, to support this kind of high-level product we need to be able
>> > to build an expression tree containing Transpose, CwiseUnaryOp, and
>> > Product. Hence the "no".
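>> >
>> > (Concretely, the goal is that user code like the following, written
>> > against a special matrix type, gets caught as a whole and forwarded to a
>> > single GEMM-like routine; this is just an illustrative sketch, with A, B,
>> > C standing for hypothetical band matrices and alpha for a scalar:)
>> >
>> > // the whole right-hand side should be recognized as alpha * op(A) * op(B)
>> > // and accumulated into C by one kernel, instead of first evaluating
>> > // A.adjoint() or the scaling into temporaries
>> > C += alpha * A.adjoint() * B;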
>> >
>> > Now let's consider an arbitrary fancy storage Foo. A funny thing with my
>> > proposal is that if you implement the product for the Foo storage as we
>> > did for the dense path (i.e., via a kind of GEMM routine), then you don't
>> > even need to implement the specializations of {Transpose|CwiseUnaryOp}Impl
>> > for the storage Foo! Hence the "yes".
>> >
>> > Nevertheless, both Transpose and CwiseUnaryOp are usually trivial to
>> > implement. The most difficult ones are Block, CwiseBinary, and partial
>> > reductions, which are less useful.
>> >
>> > Gael.
>> >
>> >>
>> >> I agree about all the rest. And it's true that your changes, which are
>> >> useful anyway, would also be the first thing to do if one wanted
>> >> arithmetic operators for special matrices. Finally, such operators and
>> >> expression templates can be added at a later date.
>> >>
>> >> Benoit
>> >>
>> >> 2009/11/13 Gael Guennebaud <gael.guennebaud@xxxxxxxxx>:
>> >> >
>> >> >
>> >> > On Fri, Nov 13, 2009 at 9:46 PM, Benoit Jacob
>> >> > <jacob.benoit.1@xxxxxxxxx>
>> >> > wrote:
>> >> >>
>> >> >> 2009/11/13 Gael Guennebaud <gael.guennebaud@xxxxxxxxx>:
>> >> >> >
>> >> >> >
>> >> >> > On Fri, Nov 13, 2009 at 8:58 PM, Benoit Jacob
>> >> >> > <jacob.benoit.1@xxxxxxxxx>
>> >> >> > wrote:
>> >> >> >>
>> >> >> >> Hi,
>> >> >> >>
>> >> >> >> Wow, your mail took me a lot of effort to understand, but I think
>> >> >> >> it's mostly OK now...
>> >> >> >>
>> >> >> >
>> >> >> > sorry for that, I should have been more explicit.
>> >> >> >
>> >> >> >>
>> >> >> >> First of all, I agree with your summary of the current
>> >> >> >> limitations. Yes, I agree we must take care of that before beta1.
>> >> >> >>
>> >> >> >> Let me rephrase some stuff in my own baby words to see if I got it
>> >> >> >> right. The two basic problems you want to address are:
>> >> >> >> * code sharing between dense and sparse xprs
>> >> >> >> * in xpr trees, only the leaves should be affected by what the
>> >> >> >> leaves actually are.
>> >> >> >>
>> >> >> >> The "naive" approach to that would probably be to let e.g.
>> >> >> >> Transpose and SparseTranspose inherit from a common base
>> >> >> >> AnyTranspose, making it a curiously-recurring base, i.e.
>> >> >> >> AnyTranspose<Derived>... then .transpose() methods would return
>> >> >> >> objects of type AnyTranspose<Derived>, which is ugly and may
>> >> >> >> (from empirical experience) give compiler warnings about strict
>> >> >> >> aliasing when casting the returned object to the derived type.
>> >> >> >>
>> >> >> >> So your solution, if I understand correctly, instead consists in
>> >> >> >> letting Transpose itself take one more (integer) template
>> >> >> >> parameter, which you call StorageType, and which could have the
>> >> >> >> special values Dense, Sparse, Band...
>> >> >> >>
>> >> >> >> At this point, I think I have understood how you solve the "the
>> >> >> >> rest of the tree doesn't completely change if the leaves are
>> >> >> >> different" problem.
>> >> >> >>
>> >> >> >> But can you enlighten me as to how you solve the "code sharing"
>> >> >> >> problem? If Transpose<Dense> and Transpose<Sparse> can share
>> >> >> >> code, where do you put that? In a common base class TransposeBase?
>> >> >> >> So that problem and its solution are actually completely orthogonal
>> >> >> >> to the above?
>> >> >> >
>> >> >> > What I suggest is a bit different. Let's pick an example:
>> >> >> >
>> >> >> > template<typename MatrixType> class Transpose
>> >> >> >   : public TransposeImpl<MatrixType, ei_traits<MatrixType>::StorageType>
>> >> >> > {
>> >> >> >   public:
>> >> >> >     int rows() const;
>> >> >> >     int cols() const;
>> >> >> >     typename MatrixType::Nested m_matrix;
>> >> >> >     // etc.
>> >> >> > };
>> >> >> >
>> >> >> > then TransposeImpl<MatrixType,Dense> would inherit
>> >> >> > MatrixBase<Transpose<> > and implement the coeff() methods, while
>> >> >> > TransposeImpl<MatrixType,Sparse> would inherit SparseMatrixBase<>
>> >> >> > and implement the InnerIterator mechanism.
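>> >> >> >
>> >> >> > (To fix ideas, here is a rough and purely hypothetical sketch of what
>> >> >> > the two specializations could look like, assuming the primary template
>> >> >> > TransposeImpl<MatrixType,StorageType> has been declared elsewhere:)
>> >> >> >
>> >> >> > // dense world: provides random access to the coefficients
>> >> >> > template<typename MatrixType>
>> >> >> > class TransposeImpl<MatrixType, Dense>
>> >> >> >   : public MatrixBase<Transpose<MatrixType> >
>> >> >> > {
>> >> >> >   public:
>> >> >> >     typename MatrixType::Scalar coeff(int row, int col) const
>> >> >> >     { return this->derived().m_matrix.coeff(col, row); } // swapped indices
>> >> >> >     // coeffRef(), packet(), ...
>> >> >> > };
>> >> >> >
>> >> >> > // sparse world: provides the InnerIterator mechanism instead
>> >> >> > template<typename MatrixType>
>> >> >> > class TransposeImpl<MatrixType, Sparse>
>> >> >> >   : public SparseMatrixBase<Transpose<MatrixType> >
>> >> >> > {
>> >> >> >   public:
>> >> >> >     class InnerIterator; // wraps MatrixType::InnerIterator, swapping inner/outer
>> >> >> > };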
>> >> >>
>> >> >> OK, so actually in your scheme the code sharing happens in the
>> >> >> derived class Transpose, while the non-shared implementation happens
>> >> >> in the base class TransposeImpl... clever!
>> >> >>
>> >> >> What I like the most here is that the class that is used as a node
>> >> >> in the xpr tree, Transpose, doesn't get any additional template
>> >> >> parameter: our xpr trees were already complex enough as they are...
>> >> >>
>> >> >> Are you going to set up a new fork for that? And add it to the
>> >> >> todo-3.0 page? It seems quite a long project to complete.
>> >> >
>> >> > yep, this is definitely not a trivial change, so of course I'll start
>> >> > a fork for that and we'll see!
>> >> >
>> >> > gael
>> >> >
>> >> >>
>> >> >> Benoit
>> >> >>


