Re: [eigen] Iterators with dense matrices: (was: Areas of eigen3 variation and using eigen2/3 in generic code)



2010/1/3 Paul C. Leopardi <paul.leopardi@xxxxxxxxxxxx>:
> On Monday 04 January 2010 06:02:43 Benoit Jacob wrote:
>> 2010/1/3 Jesse Perla <jesseperla@xxxxxxxxx>:
>> >  Or any way to return a
>> > random access iterator for begin() and end() that I could use with
>> > std::transform, etc?  The "start" and "end" seem different than normal
>> > iterators
>>
>> Indeed, we do not have any iterators in Eigen for dense matrices and
>> vectors. Do you have a use case for that? So far, index-based access
>> has been all we have needed, as we were 100% focused on linear
>> algebra.
>
> Hi all,
> Sorry to break into the discussion at this point, but I am also thinking of
> migrating from uBLAS to Eigen. My GluCat code ( http://glucat.sf.net )
> compiles with an optional macro definition, _GLUCAT_USE_DENSE_MATRICES,
> which is used as follows.
>
> typedef ublas::column_major  orientation_t;
> typedef ublas::compressed_matrix< Scalar_T, orientation_t > basis_matrix_t;
> #ifdef _GLUCAT_USE_DENSE_MATRICES
>    typedef ublas::matrix< Scalar_T, orientation_t > matrix_t;
> #else
>    typedef basis_matrix_t matrix_t;
> #endif
>
> Code which uses matrices does not know if the matrix is dense or sparse, and
> so uses uBLAS begin1(), end1(), begin(), end(), as follows:
>
>  /// Square of Frobenius norm
>  template< typename Matrix_T >
>  typename Matrix_T::value_type
>  norm_frob2(const Matrix_T& val)
>  {
>    typedef typename Matrix_T::value_type Scalar_T;
>    typedef typename Matrix_T::size_type matrix_index_t;
>
>    Scalar_T result = Scalar_T(0);
>    for (typename Matrix_T::const_iterator1
>        val_it1 = val.begin1();
>        val_it1 != val.end1();
>        ++val_it1)
>      for (typename Matrix_T::const_iterator2
>          val_it2 = val_it1.begin();
>          val_it2 != val_it1.end();
>          ++val_it2)
>      {
>        if (numeric_traits<Scalar_T>::isNaN(*val_it2))
>          return numeric_traits<Scalar_T>::NaN();
>        result += (*val_it2) * (*val_it2);
>      }
>    return result;
>  }
>
> What is the equivalent code using Eigen? Do I need to code dense and sparse
> versions of every matrix routine so that the code can take advantage of
> iterators when using sparse matrices?

OK, thanks for the example, now I understand much better where you are
coming from.

Let me explain the Eigen approach to this problem. We try to let the user avoid,
as much as possible, writing their own loops over the coefficients
of a dense matrix. The reason is that it is extremely
difficult for the user to write such loops efficiently. For
example, the loops that you write above have two problems in the dense
case:
 a) they are not vectorized; vectorizing them for all architectures (SSE...)
   would require a lot of work;
 b) if the matrix has a small fixed size at compile time, you are relying on
  the compiler to cleverly unroll these loops; but compilers are not that clever
  about it.

So Eigen's approach has been to let the user write higher-level code in
terms of /expressions/ and /functors/. The user can provide their own
expressions and functors, which describe how coefficient (i,j) is computed,
optionally providing a SIMD implementation, for which we have a portable
infrastructure; and then the user uses that with Eigen's built-in loops.

To take your example, and disregarding the fact that Eigen provides the
squaredNorm() method that does exactly that, here is how you would have
implemented it:

    // abs2 means squared absolute value
    return matrix.cwise().abs2().sum();

This works with both dense and sparse matrices, will generate vectorized code
whenever possible for SSE and AltiVec, will unroll loops as appropriate if sizes
are known at compile time... so this is much better than anything that can be
achieved by letting the user write his own loops.

As I said, the user can provide his own expressions and functors, so the
only limitation is about the built-in loop types. We have essentially
two loop types:
 - assignment
 - reduction

For example, the above sum() is a reduction with a "sum" functor. Again, the
user can provide their own functors here.

I still wonder: would it still be a good idea to add iterators to dense
matrices? So far, for matrices, i.e. linear algebra, our approach is doing
well. But, Gael: once your fork brings in a general-purpose Array class,
maybe people will also need general-purpose array processing, so our
built-in loops might no longer be enough and iterators might become
really needed?

Benoit


> Best, Paul


