Re: [eigen] Iterators with dense matrices: (was: Areas of eigen3 variation and using eigen2/3 in generic code)
- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: Re: [eigen] Iterators with dense matrices: (was: Areas of eigen3 variation and using eigen2/3 in generic code)
- From: Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
- Date: Mon, 4 Jan 2010 08:16:57 -0500
OK, so let's assume that we agree on adding iterators to Eigen dense
types, using begin() and end(). I have questions:
1) Are there more methods to add besides begin() and end(), or are those
the only methods that need to be added to the existing Eigen classes?
2) Are the member typedefs const_iterator1 and const_iterator2 standard
names that we should standardize on? If so, does "1" mean outer and "2"
mean inner, or does "1" mean rows and "2" mean columns? It matters,
because Eigen supports both row-major and column-major matrices.
3) What is the complete list of member typedefs for iterator types that
we should add, and what exactly should each of them mean?
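To make question 2) concrete, here is roughly the usage pattern I have in
mind, reusing the ublas-style names from Paul's code below. None of this
exists in Eigen yet; the names and the little summation loop are only
placeholders:

  // hypothetical API -- nothing below is implemented yet
  typedef Eigen::MatrixXd matrix_t;
  matrix_t m(3, 4);
  double sum = 0;
  // "1" iterates over the outer dimension, "2" over the inner one
  for (matrix_t::const_iterator1 it1 = m.begin1(); it1 != m.end1(); ++it1)
    for (matrix_t::const_iterator2 it2 = it1.begin(); it2 != it1.end(); ++it2)
      sum += *it2;  // it2.index1(), it2.index2() give the coefficient's position

If "1" means outer and "2" means inner, this loop traverses memory
contiguously for both row-major and column-major matrices; if "1" means
rows, we would have to pick one traversal order regardless of storage.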
Benoit
2010/1/4 Benoit Jacob <jacob.benoit.1@xxxxxxxxx>:
> 2010/1/4 Paul C. Leopardi <paul.leopardi@xxxxxxxxxxxx>:
>> On Monday 04 January 2010 12:59:27 Benoit Jacob wrote:
>>> Let me explain the Eigen approach to this problem. We try to let the user
>>> avoid, as much as possible, writing his own loops iterating over the
>>> coefficients of a dense matrix. The reason is that it is extremely
>>> complicated for the user to write such loops in an efficient way.
>>
>>> As I said, the user can provide his own expressions and functors, so the
>>> only limitation is about the built-in loop types. We have essentially
>>> two loop types:
>>> - assignment
>>> - reduction
>>>
>>> For example, the above sum() is a reduction with a "sum" functor. Again, the
>>> user can provide his own functors here.
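(A quick aside on the paragraph quoted above: with what is in Eigen
today, a user-supplied functor already plugs into both built-in loop
types, along these lines -- the clamp functor is just a made-up example:

  #include <Eigen/Core>

  // user-defined coefficient-wise functor
  struct clamp_op {
    typedef double result_type;
    double lo, hi;
    clamp_op(double l, double h) : lo(l), hi(h) {}
    double operator()(double x) const
    { return x < lo ? lo : (x > hi ? hi : x); }
  };

  double example(const Eigen::MatrixXd& a, Eigen::MatrixXd& b)
  {
    b = a.unaryExpr(clamp_op(-0.5, 0.5));           // "assignment" loop type
    return a.unaryExpr(clamp_op(-0.5, 0.5)).sum();  // "reduction" loop type
  }

The point of the built-in loops is that the user only writes the functor,
never the loop itself.)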
>>
>> Benoit,
>> Thanks for the explanation. I have two more examples where I don't understand
>> what the Eigen approach would be:
>>
>> 1. Left inverse of Kronecker product. nork(A, kron(A, B)) is close to B.
>>
>> /// Left inverse of Kronecker product
>> template< typename LHS_T, typename RHS_T >
>> const
>> RHS_T
>> nork(const LHS_T& lhs, const RHS_T& rhs, const bool mono)
>> {
>>   // nork(A, kron(A, B)) is close to B
>>   typedef RHS_T matrix_t;
>>   typedef typename LHS_T::size_type lhs_index_t;
>>   typedef typename matrix_t::size_type matrix_index_t;
>>   const lhs_index_t lhs_s1 = lhs.size1();
>>   const lhs_index_t lhs_s2 = lhs.size2();
>>   const matrix_index_t rhs_s1 = rhs.size1();
>>   const matrix_index_t rhs_s2 = rhs.size2();
>>   const matrix_index_t res_s1 = rhs_s1 / lhs_s1;
>>   const matrix_index_t res_s2 = rhs_s2 / lhs_s2;
>>   typedef typename matrix_t::value_type Scalar_T;
>>   const Scalar_T& nnz_lhs = Scalar_T( double(mono ? lhs_s1 : nnz(lhs)) );
>>   if (!mono)
>>   {
>>     typedef error<matrix_t> error_t;
>>     if (rhs_s1 == 0)
>>       throw error_t("matrix", "nork: number of rows must not be 0");
>>     if (rhs_s2 == 0)
>>       throw error_t("matrix", "nork: number of cols must not be 0");
>>     if (res_s1 * lhs_s1 != rhs_s1)
>>       throw error_t("matrix", "nork: incompatible numbers of rows");
>>     if (res_s2 * lhs_s2 != rhs_s2)
>>       throw error_t("matrix", "nork: incompatible numbers of cols");
>>     if (nnz_lhs == Scalar_T(0))
>>       throw error_t("matrix", "nork: LHS must not be 0");
>>   }
>>   matrix_t result(res_s1, res_s2);
>>   result.clear();
>>   for (typename LHS_T::const_iterator1
>>        lhs_it1 = lhs.begin1();
>>        lhs_it1 != lhs.end1();
>>        ++lhs_it1)
>>     for (typename LHS_T::const_iterator2
>>          lhs_it2 = lhs_it1.begin();
>>          lhs_it2 != lhs_it1.end();
>>          ++lhs_it2)
>>       if (*lhs_it2 != Scalar_T(0))
>>       {
>>         typedef ublas::matrix_range<const matrix_t> matrix_range_t;
>>         const Scalar_T& lhs_scale = *lhs_it2 * nnz_lhs;
>>         const matrix_index_t start1 = res_s1 * lhs_it2.index1();
>>         const matrix_index_t start2 = res_s2 * lhs_it2.index2();
>>         using ublas::range;
>>         const range& range1 = range(start1, start1 + res_s1);
>>         const range& range2 = range(start2, start2 + res_s2);
>>         const matrix_range_t& rhs_range =
>>           matrix_range_t(rhs, range1, range2);
>>         for (typename matrix_range_t::const_iterator1
>>              rhs_it1 = rhs_range.begin1();
>>              rhs_it1 != rhs_range.end1();
>>              ++rhs_it1)
>>           for (typename matrix_range_t::const_iterator2
>>                rhs_it2 = rhs_it1.begin();
>>                rhs_it2 != rhs_it1.end();
>>                ++rhs_it2)
>>             result(rhs_it2.index1(), rhs_it2.index2()) +=
>>               *rhs_it2 / lhs_scale;
>>       }
>>   return result;
>> }
>
> So, I am already convinced by the std::binary_search example of the
> usefulness of STL algorithms, but just for reference, here's how you
> write this with Eigen for dense matrices:
>
> matrix_t result = matrix_t::Zero(res_rows, res_cols);
> for(int row = 0; row < lhs.rows(); ++row)
> {
>   for(int col = 0; col < lhs.cols(); ++col)
>   {
>     Scalar_T x = lhs(row,col);
>     if(x != Scalar_T(0))
>     {
>       const Scalar_T lhs_scale = x * nnz_lhs;
>       const int start_row = res_rows * row;
>       const int start_col = res_cols * col;
>       result += rhs.block(start_row, start_col, res_rows, res_cols) / lhs_scale;
>     }
>   }
> }
>
> As you can see, the two innermost for loops have been replaced by a
> single Eigen expression assignment. Eigen will then easily vectorize
> and/or unroll that.
>
> For sparse matrices, of course Eigen already works with iterators.
>
> I agree that the two outermost loops would benefit from using
> iterators over nonzero elements, so that the same code applies readily
> to both dense and sparse matrices.
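(For reference, this is roughly what the existing sparse iteration looks
like -- from memory, so take the exact names with a grain of salt:

  #include <Eigen/Sparse>

  typedef Eigen::SparseMatrix<double> sparse_t;

  double sum_nonzeros(const sparse_t& mat)
  {
    double sum = 0;
    // visit only the nonzero coefficients, one outer vector at a time
    // (a column, for the default column-major storage)
    for (int k = 0; k < mat.outerSize(); ++k)
      for (sparse_t::InnerIterator it(mat, k); it; ++it)
        sum += it.value();  // it.row(), it.col(), it.index() give the position
    return sum;
  }

Something equivalent on the dense side would let code like nork() above
be written once for both cases.)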
>
>>
>> 2. Product of monomial matrices. Called with RHS_T a sparse matrix type.
>>
>> /// Product of monomial matrices
>> template< typename LHS_T, typename RHS_T >
>> const typename RHS_T::expression_type
>> mono_prod(const ublas::matrix_expression<LHS_T>& lhs,
>> const ublas::matrix_expression<RHS_T>& rhs)
>> {
>>   typedef const LHS_T lhs_expression_t;
>>   typedef const RHS_T rhs_expression_t;
>>   typedef typename RHS_T::expression_type matrix_t;
>>   typedef typename matrix_t::size_type matrix_index_t;
>>   typedef typename matrix_t::value_type Scalar_T;
>>   typedef typename lhs_expression_t::const_iterator1 lhs_const_iterator1;
>>   typedef typename lhs_expression_t::const_iterator2 lhs_const_iterator2;
>>   typedef typename ublas::matrix_row<rhs_expression_t> matrix_row_t;
>>   typedef typename matrix_row_t::const_iterator row_const_iterator;
>>
>>   const matrix_index_t dim = lhs().size1();
>>   // The following assumes that matrix_t is a sparse matrix type.
>>   matrix_t result = matrix_t(dim, dim, dim);
>>   for (lhs_const_iterator1
>>        lhs_row = lhs().begin1();
>>        lhs_row != lhs().end1();
>>        ++lhs_row)
>>   {
>>     const lhs_const_iterator2& lhs_it = lhs_row.begin();
>>     if (lhs_it != lhs_row.end())
>>     {
>>       const matrix_row_t& rhs_row = matrix_row_t(rhs(), lhs_it.index2());
>>       const row_const_iterator& rhs_it = rhs_row.begin();
>>       if (rhs_it != rhs_row.end())
>>         result(lhs_it.index1(), rhs_it.index()) = (*lhs_it) * (*rhs_it);
>>     }
>>   }
>>   return result;
>> }
>
> Eigen doesn't do 3D matrices at the moment, although the upcoming
> Array class can be used as a Scalar type, so you could use a Matrix of
> Arrays.
>
>>
>> To use these routines with Eigen matrices, should I try to wrap the Eigen
>> classes into classes which have the required iterators, or is there a way to
>> use the built-in loop types with these routines?
>
> I understand that we need to add iterators to cover your needs, so
> let's do that.
>
> Benoit
>