Re: [eigen] ISO C++ working groups on Linear Algebra / Machine Learning |


*To*: eigen@xxxxxxxxxxxxxxxxxxx
*Subject*: Re: [eigen] ISO C++ working groups on Linear Algebra / Machine Learning
*From*: Patrik Huber <patrikhuber@xxxxxxxxx>
*Date*: Fri, 15 Feb 2019 11:33:29 +0000

Hi Gael,

I'm glad that the information was helpful to you and some others.

I am actually firmly in the camp of "operator* as the matrix-matrix product", as in Eigen. I personally don't like numpy's "@" syntax, as I really can't associate multiplication with that sign, but that's just personal taste. On a more rational level, I don't think C++ would (ever?) get the @ sign for matrix-matrix multiplication, so we would be stuck having to use a mul/dot function: basically the numpy approach, but without "@". That really would not be great. I just would not want to write expressions like

`A = B.mul(C.mul(D));` or (slightly better, perhaps) `A = mul(B, C, D);`.

Like you, I also mainly use matrix-matrix and matrix-vector multiplications, and on the rare occasions when I do need component-wise operations, using .array() or a dedicated function results in much clearer code.

I do like Matlab's approach of ".*" for component-wise multiplication, as I can associate something related to "multiplication" with it. But I don't think there's a way this could happen in C++.

The only real argument I can see for using * for component-wise multiplication, and a function for the matrix-matrix product, is perhaps higher-dimensional tensor multiplication, where the * sign might be ambiguous or might not make much sense; but I am not too familiar with that field.

It might be a bit of bikeshedding about a technicality, but if more people feel like it should be "the Eigen way", now would probably be the time to give input to the SGs and discuss that further. Do you (or anyone else) think it would make sense to open a thread there? I'd be happy to do so.

Best wishes,

Patrik

On Thu, 14 Feb 2019 at 18:03, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:

Hi Patrik,

On Wed, Feb 13, 2019 at 6:19 PM Patrik Huber <patrikhuber@xxxxxxxxx> wrote:

> I recently found out that the C++ standardisation committee now created a Special Interest Group (SIG) for Linear Algebra within SG14 (the study group for game-dev & low-latency), and that there's also a new study group on machine learning, SG19. Both groups have been formed within the last few months, and I don't think there was too much noise around it, so I thought it might be quite interesting for some people on the Eigen list. I also just joined their forums/list,

Thanks a lot for this information! I joined both the SG14 and SG19 lists too.

> and I didn't recognise any familiar name from the Eigen list so far.

There is Matthieu Brucher, who is a member of this list and has posted here a few times.

> At first glance, I saw that they seem to make a few design decisions that are different from Eigen (e.g. operator* is only for scalar multiplications; or there are separate row/col_vector classes currently).

Regarding operator*, from their discussions we can already see clear disagreements between "linear algebra" people and more general "data scientists"... Some cannot stand operator* being a matrix-matrix product and are basically looking for a "numpy on steroids". Personally, as I mostly do linear algebra, I almost never use the component-wise product, and I'd have a hard time giving up operator* for matrix-matrix products. On the other hand, I find myself using .array() frequently for scalar addition, abs, min/max and comparisons, and I have to admit that our .array()/.matrix() approach is not ideal in this respect.

Nevertheless, following the idea of a "numpy on steroids", if that's really what developers want, we might think about making our "array" world more friendly to the linear-algebra world by:
- adding a prod(,) (or dot(,)) function
- moving more MatrixBase functions to DenseBase (most of them could move, except inverse())
- allowing Array<> as input to decompositions
- enabling "safe" binary operations between Array and Matrix (returning an Array)

This way, people who don't want operator* as a matrix product, or who have a strong numpy background, could simply forget about Matrix<>/.matrix()/.array() and exclusively use Array<>. Then time will tell whether, as with numpy::matrix vs numpy::array, everybody gives up on Matrix<>... (I strongly doubt it).

Gaël.

> I think it would be really great to get some people from Eigen (us!) aboard that process. Here are some links:
> SG14 mailing list: http://lists.isocpp.org/sg14/
> SG19 mailing list: https://groups.google.com/a/isocpp.org/forum/?fromgroups#!forum/sg19
> There are two repos where they started mapping out ideas/code/paper:
>
> Best wishes,
> Patrik
>
> --
> Dr. Patrik Huber
> Founder & CEO, 4dface Ltd
> 3D face models & personal 3D face avatars for professional applications
> United Kingdom
> Web: www.4dface.io

**Follow-Ups**:
- **Re: [eigen] ISO C++ working groups on Linear Algebra / Machine Learning**, *From:* Christoph Hertzberg

**References**:
- **[eigen] ISO C++ working groups on Linear Algebra / Machine Learning**, *From:* Patrik Huber
- **Re: [eigen] ISO C++ working groups on Linear Algebra / Machine Learning**, *From:* Gael Guennebaud

