Re: [eigen] General design issue





On Mon, Nov 16, 2009 at 2:41 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:
2009/11/16 Gael Guennebaud <gael.guennebaud@xxxxxxxxx>:
>
>
> On Sun, Nov 15, 2009 at 6:29 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
> wrote:
>>
>> 2009/11/15 Gael Guennebaud <gael.guennebaud@xxxxxxxxx>:
>> >
>> >
>> > On Sun, Nov 15, 2009 at 5:00 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
>> > wrote:
>> >>
>> >> Just one thing though: I am still not convinced that we want
>> >> arithmetic operators (let alone expression templates) for all of the
>> >> special matrix classes. It sure would be nice, but I am not convinced
>> >> that it's worth the extra work and complexity.
>> >
>> > Well, yes and no.
>> >
>> > Let's assume we agree that supporting matrix products is important.
>>
>> Matrix product of two special matrices? I really don't see the use
>> case for taking the matrix product of e.g. two BandMatrix... do you
>> have something in mind?
>
> well, for many kinds of physics problems where it is advantageous to use
> a band storage rather than a general sparse storage.

Well ok, I'll just trust you that there exist such problems, in which
one actually needs to do matrix products... so obviously I don't mind
if you want to work on this, but personally I still need motivating
examples if you would like me to work on this myself. In other words,
I still find it very tempting to say that special matrix classes are
only for special uses like solving.

>
> In order to factorize the code implementing the operators and functions (+,
> *, .transpose(), etc.) over all the different matrix types, I see three
> options:
>
> 1 - Move everything to AnyMatrixBase and find a nice way to generate
> explicit compilation errors when an operation is not supported by the given
> storage type.
>
> 2 - Split them into a set of "base" matrix classes, one per type of
> expression (e.g., CwiseUnaryOps, TransposeOps, etc.) and then let the main
> matrix base class of each kind of storage inherit the set of supported base
> operations. So the current MatrixBase class for dense storage would inherit
> CwiseUnaryOps, TransposeOps, CwiseBinaryOps, ProductOps, etc. One issue is
> that we cannot use multiple inheritance because:
>  a) MSVC does not like multiple inheritance of empty classes: it generates
> additional storage cost.
>  b) operators - and * would be implemented by various base classes.
> To overcome this issue, one common meta-programming solution is to use a
> derivation chain, e.g.:
>
> template<typename Derived> class MatrixBase : public
> DerivationChain<Derived,CwiseUnaryOps,TransposeOps, CwiseBinaryOps,
> ProductOps> {};
>
> The DerivationChain helper class would generate this inheritance chain:
>
> MatrixBase<Derived> -> CwiseUnaryOps<Derived> -> TransposeOps<Derived> ->
> CwiseBinaryOps<Derived> -> ProductOps<Derived> -> AnyMatrixBase<Derived>
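>
> For instance (a rough sketch only, with made-up signatures for the Ops
> mixins), the helper could chain CRTP mixins like this:
>
> template<typename Derived, typename Base> struct CwiseUnaryOps : Base
> { /* operator-(), cast(), unaryExpr(), ... */ };
> template<typename Derived, typename Base> struct TransposeOps : Base
> { /* transpose(), adjoint(), ... */ };
> // etc.
>
> template<typename Derived,
>          template<typename,typename> class Op1,
>          template<typename,typename> class Op2,
>          template<typename,typename> class Op3,
>          template<typename,typename> class Op4>
> struct DerivationChain
>   : Op1<Derived, Op2<Derived, Op3<Derived,
>     Op4<Derived, AnyMatrixBase<Derived> > > > >
> {};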
>
> 3 - The third option is to use the preprocessor: not macros, but rather one
> #include per set of operations, e.g.:
>
> template<typename Derived> class MatrixBase
> {
>   // ...
>   #include <CwiseUnaryOps>
>   #include <TransposeOps>
>   #include <CwiseBinaryOps>
>   #include <ProductOps>
> };
>
>
> Because of the difficulty of generating good documentation and good
> compilation errors, I don't really like solution 1.
>
> Solution 2 is the C++ solution. It also splits the documentation, which
> could be an advantage (or not?). Still regarding the documentation, we
> could cheat a bit by showing a flat inheritance tree hiding the derivation
> chain stuff.
>
> Finally, solution 3 is much easier, both for us and for the compiler, but
> I'm not sure how doxygen handles that. I'll try and let you know.
>

If solution 3 is easier for both us and the compiler, that dwarfs any
drawbacks it might have for doxygen...

There's something I'm not sure I understand: this solution only takes
care of defining operators/methods as just returning xpr objects,
right? The actual implementation of the xprs can't be shared at all
across base matrix types, as far as I can see.

It takes care of both the declaration and the implementation of the operators/methods returning xpr objects. And this is a lot of code. Here is a selection of them, with only the declarations:

****************************


    using ei_special_scalar_op_base<Derived,typename ei_traits<Derived>::Scalar,
                typename NumTraits<typename ei_traits<Derived>::Scalar>::Real>::operator*;

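    // compound assignment, unary minus, and coefficient-wise sums/differences: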
    template<typename OtherDerived>
    Derived& operator+=(const AnyMatrixBase<OtherDerived> &other);

    template<typename OtherDerived>
    Derived& operator-=(const AnyMatrixBase<OtherDerived> &other);

    const CwiseUnaryOp<ei_scalar_opposite_op<typename ei_traits<Derived>::Scalar>,Derived> operator-() const;

    template<typename OtherDerived>
    const CwiseBinaryOp<ei_scalar_sum_op<typename ei_traits<Derived>::Scalar>, Derived, OtherDerived>
    operator+(const MatrixBase<OtherDerived> &other) const;

    template<typename OtherDerived>
    const CwiseBinaryOp<ei_scalar_difference_op<typename ei_traits<Derived>::Scalar>, Derived, OtherDerived>
    operator-(const MatrixBase<OtherDerived> &other) const;

    template<typename OtherDerived>
    Derived& operator+=(const MatrixBase<OtherDerived>& other);
    template<typename OtherDerived>
    Derived& operator-=(const MatrixBase<OtherDerived>& other);

    Derived& operator*=(const Scalar& other);
    Derived& operator/=(const Scalar& other);

    const ScalarMultipleReturnType operator*(const Scalar& scalar) const;
    #ifdef EIGEN_PARSED_BY_DOXYGEN
    const ScalarMultipleReturnType operator*(const RealScalar& scalar) const;
    #endif
    const CwiseUnaryOp<ei_scalar_quotient1_op<typename ei_traits<Derived>::Scalar>, Derived>
    operator/(const Scalar& scalar) const;

    const CwiseUnaryOp<ei_scalar_multiple2_op<Scalar,std::complex<Scalar> >, Derived>
    operator*(const std::complex<Scalar>& scalar) const;

    inline friend const ScalarMultipleReturnType
    operator*(const Scalar& scalar, const MatrixBase& matrix)
    { return matrix*scalar; }

    inline friend const CwiseUnaryOp<ei_scalar_multiple2_op<Scalar,std::complex<Scalar> >, Derived>
    operator*(const std::complex<Scalar>& scalar, const MatrixBase& matrix)
    { return matrix*scalar; }

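    // transposition, adjoint, diagonals, and triangular/self-adjoint views: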
    Eigen::Transpose<Derived> transpose();
    const Eigen::Transpose<Derived> transpose() const;
    const AdjointReturnType adjoint() const;

    Diagonal<Derived,0> diagonal();
    const Diagonal<Derived,0> diagonal() const;

    template<int Index> Diagonal<Derived,Index> diagonal();
    template<int Index> const Diagonal<Derived,Index> diagonal() const;

    Diagonal<Derived, Dynamic> diagonal(int index);
    const Diagonal<Derived, Dynamic> diagonal(int index) const;

    template<unsigned int Mode> TriangularView<Derived, Mode> triangularView();
    template<unsigned int Mode> const TriangularView<Derived, Mode> triangularView() const;

    template<unsigned int UpLo> SelfAdjointView<Derived, UpLo> selfadjointView();
    template<unsigned int UpLo> const SelfAdjointView<Derived, UpLo> selfadjointView() const;

    const DiagonalWrapper<Derived> asDiagonal() const;

    template<typename OtherDerived>
    inline bool operator==(const MatrixBase<OtherDerived>& other) const
    { return (cwise() == other).all(); }

    template<typename OtherDerived>
    inline bool operator!=(const MatrixBase<OtherDerived>& other) const
    { return (cwise() != other).any(); }

    template<typename NewType>
    typename ei_cast_return_type<
        Derived,
        const CwiseUnaryOp<ei_scalar_cast_op<typename ei_traits<Derived>::Scalar, NewType>, Derived>
      >::type
    cast() const;

    NoAlias<Derived> noalias();

    ConjugateReturnType conjugate() const;
    RealReturnType real() const;
    NonConstRealReturnType real();
    const ImagReturnType imag() const;
    NonConstImagReturnType imag();

    template<typename CustomUnaryOp>
    const CwiseUnaryOp<CustomUnaryOp, Derived> unaryExpr(const CustomUnaryOp& func = CustomUnaryOp()) const;

    template<typename CustomViewOp>
    const CwiseUnaryView<CustomViewOp, Derived> unaryViewExpr(const CustomViewOp& func = CustomViewOp()) const;

    template<typename CustomBinaryOp, typename OtherDerived>
    const CwiseBinaryOp<CustomBinaryOp, Derived, OtherDerived>
    binaryExpr(const MatrixBase<OtherDerived> &other, const CustomBinaryOp& func = CustomBinaryOp()) const;


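    // reductions and visitors: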
    Scalar sum() const;
    Scalar mean() const;
    Scalar trace() const;

    Scalar prod() const;

    typename ei_traits<Derived>::Scalar minCoeff() const;
    typename ei_traits<Derived>::Scalar maxCoeff() const;

    typename ei_traits<Derived>::Scalar minCoeff(int* row, int* col) const;
    typename ei_traits<Derived>::Scalar maxCoeff(int* row, int* col) const;

    typename ei_traits<Derived>::Scalar minCoeff(int* index) const;
    typename ei_traits<Derived>::Scalar maxCoeff(int* index) const;

    template<typename BinaryOp>
    typename ei_result_of<BinaryOp(typename ei_traits<Derived>::Scalar)>::type
    redux(const BinaryOp& func) const;

    template<typename Visitor>
    void visit(Visitor& func) const;

    const Cwise<Derived> cwise() const;
    Cwise<Derived> cwise();

    bool all(void) const;
    bool any(void) const;
    int count() const;

    static const CwiseNullaryOp<ei_scalar_random_op<Scalar>,Derived> Random(int rows, int cols);
    static const CwiseNullaryOp<ei_scalar_random_op<Scalar>,Derived> Random(int size);
    static const CwiseNullaryOp<ei_scalar_random_op<Scalar>,Derived> Random();


*********


Now, if you add their respective implementations, it seems worthwhile to try to share that very boring piece of code.
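
To illustrate (a rough sketch only; the file layout and names are made up),
such a shared file would have no include guard, since it is meant to be
pasted into the body of every matrix base class supporting these operations,
and it would rely on the including class providing the Derived and Scalar
typedefs and the derived() accessor:

****************************

// CwiseUnaryOps: #included into the class body of MatrixBase,
// SparseMatrixBase, etc. Declaration and implementation live together,
// so nothing is duplicated across the base classes.

const CwiseUnaryOp<ei_scalar_opposite_op<Scalar>, Derived>
operator-() const
{
  return CwiseUnaryOp<ei_scalar_opposite_op<Scalar>, Derived>(derived());
}

****************************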

gael


 

Then frankly, I'd say don't worry too much about this aspect (of the
operators returning xprs) because that is only a relatively small part
of the code and effort... anyhow, the bulk of the work will be to
write all the xpr classes.

Benoit

> gael.
>
>>
>> Benoit
>>
>> > The problem is that in most cases you don't only want a simple product:
>> > you also want to be able to take the adjoint of the factors, do some
>> > scaling, and maybe accumulate the result at the same time. It is no
>> > accident that the BLAS GEMM routine does all that in a single call. To
>> > make things clear, let me recall that GEMM allows one to compute:
>> >
>> > C = C + alpha * op(A) * op(B)
>> >
>> > However, to support this kind of high-level product we need to be able
>> > to build an expression tree containing Transpose, CwiseUnaryOp, and
>> > Product nodes. Whence the "no".
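>> >
>> > In Eigen terms, the goal is that an expression like the following gets
>> > evaluated by a single GEMM-like call, without temporaries (just an
>> > illustration):
>> >
>> > C.noalias() += alpha * A.adjoint() * B;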
>> >
>> > Now let's consider an arbitrary fancy storage Foo. A funny thing with my
>> > proposal is that if you implement the product for the Foo storage as we
>> > did for the dense path (i.e., via a kind of GEMM routine), then you don't
>> > even need to implement the specializations of {Transpose|CwiseUnaryOp}Impl
>> > for the storage Foo! Whence the "yes".
>> >
>> > Nevertheless, both Transpose and CwiseUnaryOp are usually trivial to
>> > implement. The most difficult ones are Block, CwiseBinaryOp, and the
>> > partial reductions, which are less useful.
>> >
>> > Gael.
>> >
>> >>
>> >> I agree about all the rest. And it's true that your changes, which are
>> >> useful anyway, would also be the first thing to do if one wanted
>> >> arithmetic operators for special matrices. Finally, such operators and
>> >> expression templates can be added at a later date.
>> >>
>> >> Benoit
>> >>
>> >> 2009/11/13 Gael Guennebaud <gael.guennebaud@xxxxxxxxx>:
>> >> >
>> >> >
>> >> > On Fri, Nov 13, 2009 at 9:46 PM, Benoit Jacob
>> >> > <jacob.benoit.1@xxxxxxxxx>
>> >> > wrote:
>> >> >>
>> >> >> 2009/11/13 Gael Guennebaud <gael.guennebaud@xxxxxxxxx>:
>> >> >> >
>> >> >> >
>> >> >> > On Fri, Nov 13, 2009 at 8:58 PM, Benoit Jacob
>> >> >> > <jacob.benoit.1@xxxxxxxxx>
>> >> >> > wrote:
>> >> >> >>
>> >> >> >> Hi,
>> >> >> >>
>> >> >> >> Wow, your mail took me a lot of effort to understand, I think
>> >> >> >> it's mostly ok now...
>> >> >> >>
>> >> >> >
>> >> >> > sorry for that, I should have been more explicit.
>> >> >> >
>> >> >> >>
>> >> >> >> First of all, I agree with your summary of current limitations.
>> >> >> >> Yes, I agree we must take care of that before beta1.
>> >> >> >>
>> >> >> >> Let me rephrase some stuff in my own baby words to see if I got
>> >> >> >> it right. The 2 basic problems you want to address are:
>> >> >> >> * code sharing between dense and sparse xprs
>> >> >> >> * in xpr trees, only the leaves should be affected by what the
>> >> >> >> leaves actually are.
>> >> >> >>
>> >> >> >> The "naive" approach to that would probably be to let e.g.
>> >> >> >> Transpose
>> >> >> >> and SparseTranspose inherit a common base AnyTranspose, making it
>> >> >> >> a
>> >> >> >> curious base so AnyTranspose<Derived>... then .transpose()
>> >> >> >> methods
>> >> >> >> should return objects of AnyTranspose<Derived> which is ugly and
>> >> >> >> may
>> >> >> >> (out of empirical experience) give compiler warnings about strict
>> >> >> >> aliasing when casting the returned object to the derived type.
>> >> >> >>
>> >> >> >> So your solution, if I understand well, consists instead in letting
>> >> >> >> Transpose itself be a template with one more parameter, which you
>> >> >> >> call StorageType, and which could have special values Dense, Sparse,
>> >> >> >> Band...
>> >> >> >>
>> >> >> >> At this point, I think I have understood how you solve the "the
>> >> >> >> rest of the tree doesn't completely change if the leaves are
>> >> >> >> different" problem.
>> >> >> >>
>> >> >> >> But can you enlighten me as to how you solve the "code sharing"
>> >> >> >> problem? If Transpose<Dense> and Transpose<Sparse> can share code,
>> >> >> >> where do you put it? In a common base class TransposeBase? So that
>> >> >> >> problem and its solution are actually completely orthogonal to the
>> >> >> >> above?
>> >> >> >
>> >> >> > What I suggest is a bit different. Let's pick an example:
>> >> >> >
>> >> >> > template<typename MatrixType> class Transpose
>> >> >> >   : public TransposeImpl<MatrixType,
>> >> >> >                          typename ei_traits<MatrixType>::StorageType>
>> >> >> > {
>> >> >> >   int rows() const;
>> >> >> >   int cols() const;
>> >> >> >   typename MatrixType::Nested m_matrix;
>> >> >> >   // etc.
>> >> >> > };
>> >> >> >
>> >> >> > Then TransposeImpl<MatrixType,Dense> would inherit
>> >> >> > MatrixBase<Transpose<MatrixType> > and implement the coeff() methods,
>> >> >> > while TransposeImpl<MatrixType,Sparse> would inherit
>> >> >> > SparseMatrixBase<Transpose<MatrixType> > and implement the
>> >> >> > InnerIterator mechanism.
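>> >> >> >
>> >> >> > Concretely, the two specializations could look roughly like this (a
>> >> >> > sketch only; the exact base classes and helpers remain to be worked
>> >> >> > out):
>> >> >> >
>> >> >> > template<typename MatrixType, typename StorageType>
>> >> >> > class TransposeImpl;
>> >> >> >
>> >> >> > template<typename MatrixType>
>> >> >> > class TransposeImpl<MatrixType, Dense>
>> >> >> >   : public MatrixBase<Transpose<MatrixType> >
>> >> >> > {
>> >> >> >   public:
>> >> >> >     // dense path: coefficient access with swapped indices
>> >> >> >     typename MatrixType::Scalar coeff(int row, int col) const
>> >> >> >     { return derived().m_matrix.coeff(col, row); }
>> >> >> > };
>> >> >> >
>> >> >> > template<typename MatrixType>
>> >> >> > class TransposeImpl<MatrixType, Sparse>
>> >> >> >   : public SparseMatrixBase<Transpose<MatrixType> >
>> >> >> > {
>> >> >> >   public:
>> >> >> >     // sparse path: expose the InnerIterator mechanism instead
>> >> >> >     class InnerIterator;
>> >> >> > };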
>> >> >>
>> >> >> ok, so actually in your scheme the code sharing happens in the derived
>> >> >> class Transpose, while the non-shared implementation happens in the
>> >> >> base class TransposeImpl... clever!
>> >> >>
>> >> >> What I like the most here is that the class that is used as a node in
>> >> >> the xpr tree, Transpose, doesn't get any additional template parameter:
>> >> >> our xpr trees were already complex enough as they are...
>> >> >>
>> >> >> Are you going to set up a new fork for that? And add it to the
>> >> >> todo-3.0 page? It seems quite a long project to complete.
>> >> >
>> >> > yep, this is definitely not a trivial change, so of course I'll start
>> >> > a fork for that and we'll see!
>> >> >
>> >> > gael
>> >> >
>> >> >>
>> >> >> Benoit


