Re: [eigen] Tensor Module: Index mapping support



Thank you very much Benoit for the answers :-)

Really appreciate it!

How would you approach the custom index type?
Should we adapt internal::traits in the Tensor class,
or should we give all functions the ability to be called with an IndexType and forward it into the variadic functions you have already written,

 like in my example:

            template<typename IndexType>
            void resize(const IndexType& dimensions) {
                resize(dimensions, std::make_index_sequence<NumIndices>{});
            }

            template<typename IndexType, std::size_t... Is>
            void resize(const IndexType& dimensions, std::index_sequence<Is...>) {
                m_tensor.resize(dimensions(Is)...);
            }
However, I don't know whether the compiler optimizes away the by-value pass of the unnamed, empty type std::index_sequence<Is...>.
Does it? (Since the type is empty and stateless, I would expect compilers to, but I haven't verified it.)

I think this approach is nice because the user can then additionally use their own index type (and it could be generated with some preprocessor macros for all affected functions).
I don't have much detailed insight into Eigen, but I could provide a patch implementation for your review :-)
I need this functionality :-), it makes things so handy =)

std::index_sequence is C++14, sadly, so for C++11 one needs some meta machinery (maybe there is already something in your meta header?).


Did somebody check out Meta by Eric Niebler, https://github.com/ericniebler/meta? It's awesome :-) and would be an awesome metaprogramming addition to use in the Eigen library ;-)

BR Gabriel




On 08.10.2015 at 20:13, Benoit Steiner <bsteiner@xxxxxxxxxx> wrote:



On Wed, Oct 7, 2015 at 9:31 AM, Nathan Yonkee <nathan.yonkee@xxxxxxxxx> wrote:
That is correct, Gabriel: the data is stored in a single, fixed-size array, and the data is also aligned in case you have vectorized CPU instructions (SSE or AVX). I'm only confident about fixed-size objects; it may be different if you use dynamically sized ones.

This is correct for both fixed-size and dynamically sized tensors. If a dynamically sized tensor needs to grow, a new contiguous chunk of memory is allocated to accommodate the additional coefficients.



