Re: [eigen] Enhanced AlignedBox



On Wed, Dec 23, 2009 at 7:47 PM, Manuel Yguel <manuel.yguel@xxxxxxxxx> wrote:
> On Tue, Dec 22, 2009 at 2:48 PM, Gael Guennebaud
> <gael.guennebaud@xxxxxxxxx> wrote:
>> Hi Manuel,
>>
>> Thanks for the update, but I still have a few remarks (I forgot some
>> in my previous email):
>>
>> * for the implementation of sizes you can simply do:
>>
>> return m_max - m_min;
>>
>> that is much simpler. If you wonder, yes, the type of "m_max-m_min" is
>> "CwiseBinaryOp< ei_scalar_difference_op<Scalar>, VectorType,
>> VectorType>".
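
For illustration, a minimal sketch of what sizes() could look like with
that lazy return type (using the m_min/m_max members named in this
thread; not necessarily the final implementation):

inline const CwiseBinaryOp<ei_scalar_difference_op<Scalar>,
        VectorType, VectorType>
sizes() const
{
  // returns a lazy expression, no temporary vector is created
  return m_max - m_min;
}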
>>
>>
>> * for center(), in order to have zero temporaries you have to nest
>> multiple expressions, like this:
>>
>> inline const CwiseUnaryOp<ei_scalar_quotient1_op<Scalar>,
>>         CwiseBinaryOp<ei_scalar_sum_op<Scalar>, VectorType, VectorType> >
>> center() const
>> {
>>   return (m_min + m_max)/2;
>> }
> Thanks I learn a lot there.
>>
>> * Why did you template the functions taking another aligned box
>> (e.g., contains(), extend(), clamp(), etc.)? I mean, both the
>> dimension and the scalar type have to be exactly the same, so there is
>> no need to template them.
> Why? I agree for the dimension, but not necessarily for the scalar type.
> One can want float for hardware reasons and double for precision in
> another part of the application.

Actually, this is mainly because we cannot rely on automatic type
conversion when enabling vectorization. Therefore, throughout Eigen we
enforce that both operands of a binary operation have the same scalar
type. Consequently, even if you allow contains() to accept different
scalar types, the code won't compile. To mix different scalar types you
have to do an explicit cast. So there are two options:

1 - contains() only accepts the same scalar type, and the user has to
cast one of the operands;
2 - contains() accepts different scalar types, and the implementation
of contains() has to be updated to ensure consistent types, which is
quite complicated because you have to determine whether to cast
*this's members or the argument.

For the latter reason, to be consistent with the rest of Eigen, and so
as not to hide any implicit conversions, I really vote for option 1.
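
To illustrate option 1: with the enhanced AlignedBox discussed in this
thread, a call mixing scalar types would need an explicit cast, e.g.
(hypothetical names, just a sketch):

AlignedBox<float,3> box;
Vector3d p;
// cast<>() makes the conversion explicit, so both operands end up
// with the same scalar type
bool inside = box.contains(p.cast<float>());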


>>
>> * Likewise, you templated the functions initially taking a VectorType;
>> that is a good thing to avoid unnecessary copies when the actual
>> parameter is not exactly of the same type (e.g., a Block expression).
>> However, you must honor the "Nested" type of the expression. Let me
>> explain with this example:
>>
>> template<typename Derived>
>> inline bool contains(const MatrixBase<Derived>& p) const
>> { return (m_min.cwise()<=p).all() && (p.cwise()<=m_max).all(); }
>>
>> Here "p" is used twice. So for instance, if someone does:
>>
>> MyAABB aabb(XX,YY);
>>
>> aabb.contains((a+b)/2);
>>
>> since the type of "(a+b)/2" is an expression of the sum of two vectors
>> divided by two, this expression will be evaluated twice by the
>> function contains(), which is not very good. This can be even worse if
>> the expression is more complicated, like a product or the solution of
>> a solver, e.g.:
>>
>> aabb.contains(A.lu().solve(b));
>>
>> The equation Ax=b will be solved twice! So the solution is to copy
>> each such generic argument to its respective ideal nesting type
>> according to the number of evaluations, e.g.:
>>
>> template<typename Derived>
>> inline bool contains(const MatrixBase<Derived>& a_p) const
>> {
>>  const typename ei_nested<Derived,2>::type p(a_p.derived());
>>  return (m_min.cwise()<=p).all() && (p.cwise()<=m_max).all();
>> }
>>
>> Here the "2" is because p is going to be evaluated twice. Eigen will
>> compare the cost of evaluating a_p twice to the cost of evaluating it
>> once into a temporary and then accessing the temporary twice, and
>> according to the result of this comparison, typename
>> ei_nested<Derived,2>::type will be either a reference "Derived&" or,
>> in this case, the same as "VectorType".
> oh, oh that is a great tool.
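
For illustration, the same nesting pattern would apply to the other
templated members; e.g., a rough sketch of extend() (the argument is
referenced twice, hence ei_nested<Derived,2>; not the actual
implementation):

template<typename Derived>
inline void extend(const MatrixBase<Derived>& a_p)
{
  // evaluate a_p into a temporary only if that is cheaper than
  // evaluating the expression twice
  const typename ei_nested<Derived,2>::type p(a_p.derived());
  m_min = m_min.cwise().min(p);
  m_max = m_max.cwise().max(p);
}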
>>
>> side note:
>>
>> Now I'm also thinking that this expression is not as efficient as it
>> could be, because it would be more efficient to loop through p only
>> once using a ternary operator like:
>> p.cwise().isInRange(m_min,m_max).all(); but that's not currently
>> possible. Note that such an optimization would only make sense for
>> large dimensions.
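
For illustration, such a single-pass test would amount to something
like the following plain loop (hypothetical, since no such cwise
operation exists yet):

// single pass over p: stop as soon as one coordinate is out of range
bool inside = true;
for (int i = 0; i < p.size() && inside; ++i)
  inside = (m_min[i] <= p[i]) && (p[i] <= m_max[i]);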
>>
>> * Finally, it would be cool if you could configure your editor to use
>> two spaces instead of tabs for indentation.
> My mistake.
>>
>> gael
>>
>>
>
> Here is the last problem I see, this kind of code:
>
> m_min.setConstant( std::numeric_limits<Scalar>::max());
> m_max.setConstant(-std::numeric_limits<Scalar>::max());
>
> is valid for floating-point types; however, it does not hold for general types.

Yes, you are right, this is definitely not very good!

> so usually I prefer to use the boost definitions,
> defined in the following header:
> #include <boost/numeric/conversion/bounds.hpp>
>
> m_min.setConstant( boost::numeric::bounds<float>::highest() );
> m_max.setConstant( boost::numeric::bounds<float>::lowest() );
>
>
> Do you think it is the Eigen way to add some fields and/or static
> methods to NumTraits in order to provide the equivalent
> information?

So far we have used global functions.

> std::numeric_limits also provides the minimum positive number, the
> precision, codes for NaN, +INF, etc.
> I saw that you define dummy_precision as a template function; should
> it not be a static method of NumTraits, so that someone can easily
> change the default behaviour by defining their own NumTraits?

Why not. At least, what you propose would allow centralizing everything
in a single place.
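
For concreteness, a rough sketch of what such a centralized NumTraits
could look like (hypothetical, just to illustrate the proposal):

template<> struct NumTraits<float>
{
  // ... existing typedefs and flags ...
  static inline float highest() { return  std::numeric_limits<float>::max(); }
  static inline float lowest()  { return -std::numeric_limits<float>::max(); }
  static inline float dummy_precision() { return 1e-5f; }
};

// AlignedBox::setNull() (or similar) could then simply do:
//   m_min.setConstant( NumTraits<Scalar>::highest() );
//   m_max.setConstant( NumTraits<Scalar>::lowest()  );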

Benoit, what do you think ?

>
> If you agree with that, I would make the changes, adding the static functions:
> highest()
> lowest()
> dummy_precision()
> also adding the value for -INF, etc.
>
> and afterwards we can mimic std::numeric_limits:
> epsilon()
> has_infinity()
> has_quiet_NaN()
> has_signaling_NaN()
> etc ...

Then why not simply make NumTraits<Foo> inherit from std::numeric_limits<Foo>?
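
Something along these lines (again, just a sketch of the idea):

template<typename T> struct NumTraits : std::numeric_limits<T>
{
  // Eigen-specific additions (dummy_precision(), etc.) would stay here
};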

cheers,
gael

> it should be a mere copy-paste from the STL + Boost (a bit boring, I
> agree, but I can do it)
>
> - best regards and merry christmas
>
> Manuel
>
>
>


