Re: [eigen] ISO C++ working groups on Linear Algebra / Machine Learning


*To*: eigen@xxxxxxxxxxxxxxxxxxx
*Subject*: Re: [eigen] ISO C++ working groups on Linear Algebra / Machine Learning
*From*: Matthieu Brucher <matthieu.brucher@xxxxxxxxx>
*Date*: Fri, 15 Feb 2019 22:51:44 +0000

Haha, it's not that we can't stand matrix multiplication; it's that, objectively, a huge chunk of scientists (probably the majority) do not use '*' as the matrix multiplication operator (from Fortran to Numpy...).
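To make the point concrete (a minimal illustration, not from the original thread): in Numpy, '*' on arrays is elementwise, and the matrix product goes through the dedicated '@' operator or np.dot, the opposite of Eigen's convention:

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])

elementwise = a * b  # Hadamard (component-wise) product
matmul = a @ b       # true matrix product, via the dedicated '@' operator

print(elementwise)
print(matmul)
```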

Another thing to notice from data scientists is that we don't consider everything to be 2D (or 2D+, as in Matlab); we use 1D arrays as much as nD ones, which makes a big difference.

I use Eigen a lot for my personal projects that do some linear algebra, and even there it can be annoying to jump from arrays to matrices and back to arrays. That's one of the biggest advantages of Numpy: just one type (the Numpy matrix type is not relevant nowadays).
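The "just one type" point can be sketched as follows (my illustration, not from the email): the same ndarray supports elementwise operations, linear algebra, and decompositions, with no casting between an array world and a matrix world:

```python
import numpy as np

a = np.array([[2.0, 0.0], [0.0, 2.0]])

shifted = a + 1.0       # scalar addition, elementwise
squared = a @ a         # matrix product, no cast to a separate matrix type
inv = np.linalg.inv(a)  # decompositions accept the plain array type too

print(shifted)
print(squared)
print(inv)
```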

Cheers,

Matthieu

On Thu, 14 Feb 2019 at 18:29, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:

Hi Patrik,

On Wed, Feb 13, 2019 at 6:19 PM Patrik Huber <patrikhuber@xxxxxxxxx> wrote:

> I recently found out that the C++ standardisation committee now created a Special Interest Group (SIG) for Linear Algebra within SG14 (the study group for game-dev & low-latency), and that there's also a new study group on machine learning, SG19. Both groups have been formed within the last few months, and I don't think there was too much noise around it, so I thought it might be quite interesting for some people on the Eigen list. I also just joined their forums/list,

Thanks a lot for this information! I joined both the SG14 and SG19 lists too.

> and I didn't recognise any familiar name from the Eigen list so far.

There is Matthieu Brucher, who is a member of this list and has posted here a few times.

> On a first glance, I saw that they seem to make a few design decisions that are different from Eigen (e.g. operator* is only for scalar multiplications; or there are separate row/col_vector classes currently).

Regarding operator*, from their discussions we can already see clear disagreements between "linear algebra" people and more general "data scientists"... Some cannot stand operator* being a matrix-matrix product and are basically looking for a Numpy on steroids. Personally, as I mostly do linear algebra, I almost never use the component-wise product, and I'd have a hard time giving up operator* for matrix-matrix products. On the other hand, I find myself using .array() frequently for scalar addition, abs, min/max and comparisons, and I have to admit that our .array()/.matrix() approach is not ideal in this respect.

Nevertheless, following the idea of a "Numpy on steroids", if that's really what developers want, we might think about making our "array world" more friendly to the linear-algebra world by:
- adding a prod(,) (or dot(,)) function
- moving more MatrixBase functions to DenseBase (most of them could move, except inverse())
- allowing Array<> as input to decompositions
- enabling "safe" binary operations between Array and Matrix (and returning an Array)

This way, people who don't want operator* as a matrix product, or who have a strong background in Numpy, could simply forget about Matrix<>/.matrix()/.array() and exclusively use Array<>. Then time will tell us if, as with numpy.matrix vs numpy.array, everybody gives up on Matrix<>... (I strongly doubt it).

Gaël.

> I think it would be really great to get some people from Eigen (us!) aboard that process.
>
> Here are some links:
> SG14 mailing list: http://lists.isocpp.org/sg14/
> SG19 mailing list: https://groups.google.com/a/isocpp.org/forum/?fromgroups#!forum/sg19
> There are two repos where they started mapping out ideas/code/paper:
>
> Best wishes,
> Patrik
>
> --
> Dr. Patrik Huber
> Founder & CEO, 4dface Ltd
> 3D face models & personal 3D face avatars for professional applications
> United Kingdom
> Web: www.4dface.io
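Gael's sketch of an array-only workflow maps closely onto how Numpy ended up: elementwise operators on one array type, a named function for the matrix product, and decompositions that accept plain arrays. A minimal Python illustration of that model (my analogy, not an Eigen API):

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.eye(2)

# '*' stays elementwise; the matrix product is a named function,
# analogous to the proposed prod(,)/dot(,) for Eigen's Array world:
hadamard = a * b
product = np.dot(a, b)

# Decompositions take the plain array type directly:
q, r = np.linalg.qr(a)

print(hadamard)
print(product)
```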

Quantitative analyst, Ph.D.

Blog: http://blog.audio-tk.com/

LinkedIn: http://www.linkedin.com/in/matthieubrucher


**Follow-Ups**:
- **Re: [eigen] ISO C++ working groups on Linear Algebra / Machine Learning**, *From:* Gabriel Nützi
- **The "star" problem, was Re: [eigen] ISO C++ working groups on Linear Algebra / Machine Learning**, *From:* Mark Borgerding

**References**:
- **[eigen] ISO C++ working groups on Linear Algebra / Machine Learning**, *From:* Patrik Huber
- **Re: [eigen] ISO C++ working groups on Linear Algebra / Machine Learning**, *From:* Gael Guennebaud


Mail converted by MHonArc 2.6.19+ | http://listengine.tuxfamily.org/