[eigen] eigen-mpi and more


*To*: "eigen@xxxxxxxxxxxxxxxxxxx" <eigen@xxxxxxxxxxxxxxxxxxx>
*Subject*: [eigen] eigen-mpi and more
*From*: "Cowing-Zitron, Christopher" <ccowingzitron@xxxxxxxx>
*Date*: Tue, 9 Oct 2012 20:00:55 +0000

Hello,
I just discovered that there is a fork of Eigen called eigen-mpi on Bitbucket. About a month ago, before discovering eigen-mpi, I started working on a similar project, named Spectrum, which is intended to be a full distributed dense and sparse BLAS built on Eigen and MPI, via an extremely lightweight C++ MPI library called MPP (https://github.com/motonacciu/mpp). MPP is header-only and provides a full communicator class along with very easy-to-use, templated, synchronous and asynchronous message passing, all within just a few header files (unlike, say, boost::mpi, which is large, requires compilation, and depends on external Boost libraries).

The overall design is that of distributed block matrix operations, with the individual block operations executed by Eigen, using processor distribution methods that are hidden from, yet customizable by, the user. Thus far I've coded:

1) an extremely flexible, irregular block distribution scheme class, accommodating both 1-D and full 2-D Mondriaan processor distributions;
2) distributed dense and sparse matrices that are laid out via the distribution object; and
3) the necessary plugins for Eigen::MatrixBase and Eigen::SparseMatrixBase to make them suitable as processor-local submatrices of distributed objects, and to enable MPP message passing of them for distributed BLAS operations such as Cannon matrix multiplication.

My distributed matrices permit rotation and permutation of rows, columns, or both simultaneously, and I have begun to implement the basic BLAS operations. While I think a full suite of sparse BLAS is quite feasible, compile-time optimization of expressions would be very difficult and likely inefficient, given the lack of processor topology information at compile time. I'm planning to include support for ParMetis, Parkway, Mondriaan, and a number of distributed linear and eigensolver libraries.
I intend to complete this project and make it usable within a not-too-long timeframe, as I very much need it for a bioinformatics project of mine; I am therefore currently working on it almost full-time. That said, I would very much appreciate any help other people can provide. How would the main Eigen developers like me to coordinate this, given that Spectrum has very similar (though apparently broader) design goals to eigen-mpi, but an essentially incompatible structure, and is apparently moving much more quickly?

-- Chris Cowing-Zitron

**Follow-Ups**:
**Re: [eigen] eigen-mpi and more**, *From:* Gael Guennebaud


Mail converted by MHonArc 2.6.19+ | http://listengine.tuxfamily.org/