Re: [eigen] Documentation : it's a sprint!!

Thanks Johan, that is true. This is an important point to mention, since the examples are sometimes given with Matrix and not Array, and there are usually equivalents between them.


On Wed, Jul 7, 2010 at 3:52 PM, Johan Pauwels <johan.pauwels@xxxxxxxxxxxxx> wrote:
On 2010-07-07 12:26, Carlos Becker wrote:
Ok, here is a preview:  http://carlosbecker.com.ar/eigen/doc/TutorialReductionsVisitorsBroadcasting.html
I still have to merge Benoit's text and table on reductions.

Hi Carlos,

I'm just a regular user of Eigen and understood everything, so that must mean the tutorial serves its purpose :-) I do have one remark though. You write "It is important to point out that the vector to be added column-wise or row-wise must be of type Vector, and cannot be a Matrix." It appears also to work with ArrayXx (obviously not with ArrayXXx), so the sentence might be rephrased more generally to include all 1-D types.
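
For what it's worth, this is roughly what I tried (assuming #include <Eigen/Dense> and using namespace Eigen; the names are only made up for the example):

  MatrixXf m = MatrixXf::Random(3, 4);
  VectorXf v = VectorXf::Random(3);
  m.colwise() += v;            // the Matrix/Vector case from the tutorial

  ArrayXXf a  = ArrayXXf::Random(3, 4);
  ArrayXf  a1 = ArrayXf::Random(3);
  a.colwise() += a1;           // seems to work just as well with 1-D array types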

I also have a question about tutorial page 3. There it says that "On the other hand, assigning a matrix expression to an array expression is allowed." I'm wondering about the other way around. Whether that works or not, I would state it explicitly to make it clearer.
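
(For what it's worth, a quick test along these lines compiles for me, so both directions of the assignment seem to work, and the explicit .matrix()/.array() conversions are of course always available; again assuming <Eigen/Dense> and using namespace Eigen:)

  MatrixXf m = MatrixXf::Random(2, 2);
  ArrayXXf a(2, 2);
  a = m;               // matrix expression assigned to an array, as the tutorial says
  m = a;               // the other way around also seems to compile
  m = a.matrix() + m;  // explicit conversion, needed when mixing the two in one expression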

Kind regards,
Johan




On Wed, Jul 7, 2010 at 10:40 AM, Carlos Becker <carlosbecker@xxxxxxxxx> wrote:
Yes, that sounds much better; I forgot about squaredNorm(). Regarding ::Index, I thought that maybe there was something like Eigen::Index, but I guess there is not.




On Wed, Jul 7, 2010 at 10:37 AM, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:
On Wed, Jul 7, 2010 at 11:30 AM, Carlos Becker <carlosbecker@xxxxxxxxx> wrote:
> Ok, finally back to the reductions/visitors tutorial. I was thinking of
> including a nice example at the end, using broadcasting and partial
> reductions to find the nearest neighbour of a vector among all the columns
> of a given matrix, with something like:
> VectorXf v;
> MatrixXf m;
> int index;
> (m.colwise() - v).array().square().colwise().sum().minCoeff(&index);

why not, but here is a simpler version:

MatrixXf::Index index;
(m.colwise() - v).colwise().squaredNorm().minCoeff(&index);

note that you have to use MatrixXf::Index instead of int.
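
for completeness, a self-contained version would look something like this (written from memory, not compiled):

#include <Eigen/Dense>
#include <iostream>
using namespace Eigen;

int main()
{
  MatrixXf m = MatrixXf::Random(2, 5);   // 5 candidate points, one per column
  VectorXf v = VectorXf::Random(2);      // the query point

  MatrixXf::Index index;
  // broadcast the subtraction, reduce each column to its squared norm,
  // then take the index of the minimum
  (m.colwise() - v).colwise().squaredNorm().minCoeff(&index);

  std::cout << "nearest neighbour is column " << index
            << ": " << m.col(index).transpose() << std::endl;
  return 0;
}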

gael.

> I think it would give a good insight into Eigen and, since this is tutorial
> number 7, the user should already have a good background on Eigen. I could
> explain what is happening at each '.' in the previous expression to make it
> clear and show how powerful Eigen is.
> However, I know that this might be a bit advanced for a starters' tutorial,
> so I will wait for your thoughts on this.
> Cheers,
> Carlos
>
>
> On Thu, Jul 1, 2010 at 5:12 PM, Carlos Becker <carlosbecker@xxxxxxxxx>
> wrote:
>>
>> Thanks. I asked in order to know what to put in each subsection, since I made
>> one for visitors, one for reductions, and one for broadcasting. So now it is clear.
>> Thanks
>>
>>
>> On Thu, Jul 1, 2010 at 3:10 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
>> wrote:
>>>
>>> 2010/7/1 Carlos Becker <carlosbecker@xxxxxxxxx>:
>>> > I thought that with matrix.colwise().sum(), colwise() is treated as a
>>> > visitor, and with += it is a broadcasting operation. This is because I
>>> > read this in your tutorial proposal:
>>> > 7. Reductions, visitors, and broadcasting
>>> >  - for all kinds of matrices/arrays
>>> >  - sum() etc...
>>> >  - mention partial reductions with .colwise()...
>>> >  - mention broadcasting e.g. m.colwise() += vector;
>>> > I thought that partial reductions were done with visitors and
>>> > broadcasting
>>> > was something similar but with 'write access'. Or maybe both are
>>> > visitors; I guess I am missing many concepts here.
>>>
>>> computing a sum is a reduction (or partial reduction), not a visitor.
>>>
>>> a visitor is when you want to find, as output, a location inside of a
>>> matrix.
>>>
>>> For example, this is a reduction:
>>>
>>>   result = matrix.maxCoeff();
>>>
>>> this is a visitor:
>>>
>>>   Index i, j;
>>>   matrix.maxCoeff(&i, &j);
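>>>
>>> a complete toy example, just to make the difference concrete (untested):
>>>
>>>   #include <Eigen/Dense>
>>>   #include <iostream>
>>>   using namespace Eigen;
>>>
>>>   int main()
>>>   {
>>>     MatrixXf matrix = MatrixXf::Random(4, 4);
>>>
>>>     // reduction: the result is a single value
>>>     float result = matrix.maxCoeff();
>>>
>>>     // visitor: the result is a location inside the matrix
>>>     MatrixXf::Index i, j;
>>>     matrix.maxCoeff(&i, &j);
>>>
>>>     std::cout << "max " << result << " at (" << i << "," << j << ")\n";
>>>   }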
>>>
>>> Anyway, don't bother learning a new area of Eigen just to write docs
>>> about it :-)
>>>
>>> Benoit
>>>
>>>
>>>
>>>
>>> >
>>> >
>>> > On Thu, Jul 1, 2010 at 2:46 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxxx>
>>> > wrote:
>>> >>
>>> >> Broadcasting means e.g.
>>> >>
>>> >>     matrix.colwise() += vector;
>>> >>
>>> >> Visitors are e.g.
>>> >>
>>> >>    Index i, j;
>>> >>    matrix.maxCoeff(&i, &j);
>>> >>
>>> >> These are two really different things, no?
>>> >> Benoit
>>> >>
>>> >> 2010/7/1 Carlos Becker <carlosbecker@xxxxxxxxx>:
>>> >> > Just a quick question: what would be the difference between visitors
>>> >> > and
>>> >> > broadcasting? It seems to me that broadcasting is able to 'visit'
>>> >> > column- or row-wise, also modifying the data inside the matrix/array
>>> >> > object, am I right? I haven't used broadcasting with Eigen before.
>>> >> >
>>> >> >
>>> >> >
>>> >> > On Wed, Jun 30, 2010 at 1:30 PM, Carlos Becker
>>> >> > <carlosbecker@xxxxxxxxx>
>>> >> > wrote:
>>> >> >>
>>> >> >> oh ok, sorry, I got confused since I thought that someone was already
>>> >> >> writing the sparse tutorial
>>> >> >>
>>> >> >>
>>> >> >> On Wed, Jun 30, 2010 at 1:29 PM, Gael Guennebaud
>>> >> >> <gael.guennebaud@xxxxxxxxx> wrote:
>>> >> >>>
>>> >> >>> nevermind, this C07_TutorialSparse.dox file is an old one...
>>> >> >>>
>>> >> >>> gael
>>> >> >>>
>>> >> >>> On Wed, Jun 30, 2010 at 2:26 PM, Carlos Becker
>>> >> >>> <carlosbecker@xxxxxxxxx>
>>> >> >>> wrote:
>>> >> >>> > I am starting with the reductions/visitors/broadcasting tutorial
>>> >> >>> > and
>>> >> >>> > just
>>> >> >>> > noticed that the sparse tutorial is named C07_TutorialSparse.dox.
>>> >> >>> > According to the order in http://eigen.tuxfamily.org/dox-devel/ it
>>> >> >>> > should be C08, and C07 should be the one I am doing. This is a silly
>>> >> >>> > question, but I just wanted to make sure we are all following the
>>> >> >>> > same conventions.
>>> >> >>> > Carlos
>>> >> >>> >
>>> >> >>> > On Sun, Jun 27, 2010 at 1:59 PM, Gael Guennebaud
>>> >> >>> > <gael.guennebaud@xxxxxxxxx>
>>> >> >>> > wrote:
>>> >> >>> >>
>>> >> >>> >> On Sun, Jun 27, 2010 at 12:41 PM, Carlos Becker
>>> >> >>> >> <carlosbecker@xxxxxxxxx>
>>> >> >>> >> wrote:
>>> >> >>> >> > Mmm, I am trying to think of a straightforward explanation for
>>> >> >>> >> > this. What do you think about calling them fixed-size and
>>> >> >>> >> > dynamic-size blocks, where the former differs from the latter
>>> >> >>> >> > because its size is known at compile-time?
>>> >> >>> >>
>>> >> >>> >> a slightly more precise variant: "the former are optimized
>>> >> >>> >> versions of the latter when the size is known at compile-time."
>>> >> >>> >> I just added the word "optimized"
>>> >> >>> >>
>>> >> >>> >> you might also have a look at the reference tables, section "Sub
>>> >> >>> >> matrices", to see how they are presented. The old version of this
>>> >> >>> >> section is available online there:
>>> >> >>> >>
>>> >> >>> >> http://eigen.tuxfamily.org/dox-devel/TutorialCore.html#TutorialCoreMatrixBlocks
>>> >> >>> >>
>>> >> >>> >> gael
>>> >> >>> >>
>>> >> >>> >> >
>>> >> >>> >> >
>>> >> >>> >> > On Sun, Jun 27, 2010 at 11:02 AM, Carlos Becker
>>> >> >>> >> > <carlosbecker@xxxxxxxxx>
>>> >> >>> >> > wrote:
>>> >> >>> >> >>
>>> >> >>> >> >> Yes, I got that and actually it was my mistake since I supposed
>>> >> >>> >> >> that it was only for fixed-size matrices, so now I am changing it.
>>> >> >>> >> >> Thanks,
>>> >> >>> >> >>
>>> >> >>> >> >>
>>> >> >>> >> >> 2010/6/27 Björn Piltz <bjornpiltz@xxxxxxxxxxxxxx>
>>> >> >>> >> >>>
>>> >> >>> >> >>> "The following tables show a summary of Eigen's block
>>> >> >>> >> >>> operations
>>> >> >>> >> >>> and
>>> >> >>> >> >>> how
>>> >> >>> >> >>> they are applied to fixed- and dynamic-sized Eigen
>>> >> >>> >> >>> objects."
>>> >> >>> >> >>> This quote and the following table gives the impression
>>> >> >>> >> >>> that
>>> >> >>> >> >>>  the
>>> >> >>> >> >>> fixed
>>> >> >>> >> >>> size functions are only available for fixed size matrices.
>>> >> >>> >> >>> But
>>> >> >>> >> >>> using
>>> >> >>> >> >>> fixed
>>> >> >>> >> >>> size vs dynamic size functions acually only determin the
>>> >> >>> >> >>> return
>>> >> >>> >> >>> type.
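>>> >> >>> >> >>>
>>> >> >>> >> >>> e.g., both of these compile on a plain MatrixXf as far as I can
>>> >> >>> >> >>> tell (quick sketch, untested):
>>> >> >>> >> >>>
>>> >> >>> >> >>>   MatrixXf m(10, 10);
>>> >> >>> >> >>>   m.block(1, 1, 2, 2).setZero();  // dynamic-size block
>>> >> >>> >> >>>   m.block<2, 2>(1, 1).setZero();  // fixed-size block: same effect,
>>> >> >>> >> >>>                                   // only the return type differs
>>> >> >>> >> >>>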
>>> >> >>> >> >>> Björn
>>> >> >>> >> >
>>> >> >>> >> >
>>> >> >>> >>
>>> >> >>> >>
>>> >> >>> >
>>> >> >>> >
>>> >> >>>
>>> >> >>>
>>> >> >>
>>> >> >
>>> >> >
>>> >>
>>> >>
>>> >
>>> >
>>>
>>>
>>
>
>