Re: [eigen] Small dynamic matrix optimization



Is there any way to access the alignment through a type trait of the
matrix class?  Passing the correct alignment of the underlying storage
to the Eigen::Map would then be a simple, albeit ugly, task.
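
For what it's worth, here is the kind of thing I have in mind -- an untested
sketch that sidesteps the trait question by taking the buffer from Eigen's own
aligned allocator, so that EIGEN_MAX_ALIGN_BYTES / AlignedMax (Eigen 3.3+)
should match the actual alignment of the storage:

   #include <Eigen/Core>
   #include <vector>

   void fill_random(Eigen::Index rows, Eigen::Index cols)
   {
     // aligned_allocator guarantees EIGEN_MAX_ALIGN_BYTES alignment, so the
     // matching AlignedMax option can be passed to the Map.
     std::vector<float, Eigen::aligned_allocator<float>> buf(rows * cols);
     Eigen::Map<Eigen::MatrixXf, Eigen::AlignedMax> m(buf.data(), rows, cols);
     m.setRandom();
   }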

Cheers,

Darcy

On Mon, 2019-10-21 at 16:35 +0200, Christoph Hertzberg wrote:
> If you can ensure that the allocated memory is aligned, you can use
> `Map<Matrix<...>, Aligned>`.  Also, we do allow vectorization of
> non-aligned data (with newer CPUs there is barely a performance
> difference, except when loads/stores are split across cache lines).
> 
> Btw: There is a macro `ei_declare_aligned_stack_constructed_variable` in
> Eigen/src/Core/util/Memory.h which already does most of what you need to do
> (grep the source code to see where/how it is used).
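> 
> A rough, untested sketch of the usage pattern (the macro is internal, so its
> exact shape may change between versions; the last argument is an optional
> pre-existing buffer, 0 meaning "allocate for me"):
> 
>    #include <Eigen/Core>
> 
>    void fill_zero(Eigen::Index rows, Eigen::Index cols)
>    {
>      // Small sizes come from alloca, large ones from aligned_malloc; either
>      // way the memory is released automatically at the end of the scope.
>      ei_declare_aligned_stack_constructed_variable(float, buf, rows*cols, 0);
> 
>      // One could probably pass an Aligned option here as well, since the
>      // macro is meant to hand back suitably aligned memory.
>      Eigen::Map<Eigen::MatrixXf> m(buf, rows, cols);
>      m.setZero();
>    }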
> 
> Cheers,
> Christoph
> 
> 
> On 21/10/2019 16.21, Rob Conde wrote:
> > Thanks...I will think about this and maybe try to prototype
> > something.
> > 
> > Using the Map approach... I think this would disable SIMD, right?
> > Because you don't know whether the mapped data is aligned?
> > 
> > Rob
> > ________________________________
> > From: Christoph Hertzberg <chtz@xxxxxxxxxxxxxxxxxxxxxxxx>
> > Sent: Monday, October 21, 2019 10:15 AM
> > To: eigen@xxxxxxxxxxxxxxxxxxx <eigen@xxxxxxxxxxxxxxxxxxx>
> > Subject: Re: [eigen] Small dynamic matrix optimization
> > 
> > This is already done for some temporaries (like blocks in
> > cache-optimized GEMM). Doing that for custom types would be
> > interesting, but not trivial (e.g., move behavior is completely
> > different if you want to move the data to an object higher in the
> > stack).
> > 
> > If you are OK with managing the data yourself, you can achieve this
> > manually by alloca-ing (or otherwise allocating) the data and using an
> > Eigen::Map to refer to it.
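> > 
> > Something like this untested sketch -- with the usual alloca caveats about
> > lifetime and stack size:
> > 
> >    #include <Eigen/Core>
> >    #include <alloca.h>   // _alloca from <malloc.h> on MSVC
> > 
> >    void make_identity(Eigen::Index n)
> >    {
> >      // Raw stack allocation: cheap, but it only lives until this function
> >      // returns, and a large n will overflow the stack.
> >      float* buf = static_cast<float*>(alloca(sizeof(float) * n * n));
> >      Eigen::Map<Eigen::MatrixXf> m(buf, n, n);   // Unaligned by default
> >      m.setIdentity();
> >    }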
> > 
> > If you have a proof of concept for how this could be integrated into the
> > Eigen API, we are open to suggestions!
> > 
> > Another idea would be to allow custom allocators for matrices -- in that
> > case the custom allocator could very easily be a stack allocator (which
> > would not even need to use the "main" stack). But that's also not too
> > easy to integrate without breaking the current API (adding a new type
> > would of course be an option for that).
> > 
> > Cheers,
> > Christoph
> > 
> > 
> > 
> > On 21/10/2019 15.39, Rob Conde wrote:
> > > I'm wondering whether a small-dynamic-matrix optimization has been
> > > considered -- in other words, allowing a stack allocation for matrices
> > > whose size is only known at run time, up to a certain limit, and a heap
> > > allocation beyond that.
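> > > 
> > > Something in the spirit of the existing fixed-maximum-size matrices, but
> > > with a heap fallback instead of an assertion once the limit is exceeded.
> > > For reference (untested), today one can already write:
> > > 
> > >    #include <Eigen/Core>
> > > 
> > >    // Runtime-sized, but stored in a fixed 8x8 buffer inside the object;
> > >    // exceeding the maximum asserts rather than falling back to the heap.
> > >    typedef Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic,
> > >                          Eigen::ColMajor, 8, 8> SmallMatrixXd;
> > > 
> > >    SmallMatrixXd m(3, 5);   // 3x5 at run time, no heap allocation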
> > > 
> > > Rob Conde
> > > 
> > 
> > --
> >    Dr.-Ing. Christoph Hertzberg
> > 
> >    Visiting address (branch office):
> >    DFKI GmbH
> >    Robotics Innovation Center
> >    Robert-Hooke-Straße 5
> >    28359 Bremen, Germany
> > 
> >    Postal address (head office, Bremen site):
> >    DFKI GmbH
> >    Robotics Innovation Center
> >    Robert-Hooke-Straße 1
> >    28359 Bremen, Germany
> > 
> >    Tel.:     +49 421 178 45-4021
> >    Switchboard: +49 421 178 45-0
> >    E-Mail:   christoph.hertzberg@xxxxxxx
> > 
> >    Further information: http://www.dfki.de/robotik
> >     -------------------------------------------------------------
> >     Deutsches Forschungszentrum für Künstliche Intelligenz GmbH
> >     Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany
> > 
> >     Management:
> >     Prof. Dr. Jana Koehler (Chair)
> >     Dr. Walter Olthoff
> > 
> >     Chairman of the Supervisory Board:
> >     Prof. Dr. h.c. Hans A. Aukes
> >     Amtsgericht Kaiserslautern, HRB 2313
> >     -------------------------------------------------------------
> > 
> > 
> > 
> > 



