Re: [eigen] Optimizing Eigen::Tensor operations for tiny level 1 cache


Hi Pete,

The way to optimize the tensor library for hardware with limited cache sizes would be to 

1. Reduce the size of the buffer used for the ".block()" interface. I believe we currently try to fit them in L1, but perhaps the detection doesn't work correctly on your hardware.
2. Reduce the block sizes used in TensorContraction. 

1. By default the block size is chosen such that each block fits in L1:

 Each evaluator in an expression reports how much scratch memory it needs to compute a block's worth of data through the getResourceRequirements() API.

 These values are then merged by the executor.

2. The tensor contraction blocking uses a number of heuristics to choose block sizes and level of parallelism. In particular, it tries to pack the lhs into L2, and rhs into L3.

I hope these pointers help.


On Tue, May 28, 2019 at 7:38 AM Pete Blacker <pete.blacker@xxxxxxxxx> wrote:
Hi there,

I'm currently using the Eigen::Tensor module on a relatively small processor which has a very limited cache: 16KB of level 1 and no level 2 at all! I've been looking for a way to optimise the blocking of operations performed by Eigen for a particular block size, but I can't find anything so far.

Is there a way to optimise the Tensor operations for this type of small cache?


