Hi Pete,
The way to optimize the tensor library for hardware with limited cache sizes would be to:
1. Reduce the size of the buffer used for the ".block()" interface. By default the block size is chosen such that the blocks fit in L1: each evaluator in an expression reports how much scratch memory it needs to compute a block's worth of data through the getResourceRequirements() API, and these values are then merged by the executor. I believe we currently try to fit the blocks in L1, but perhaps the cache size detection doesn't work correctly on your hardware (the first sketch below shows how to override the detected cache sizes, and the second illustrates the merge step).
2. Reduce the block sizes used in TensorContraction. The tensor contraction blocking uses a number of heuristics to choose the block sizes and the level of parallelism; in particular, it tries to pack the lhs into L2 and the rhs into L3 (see the last sketch below).
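
If the cache detection is indeed off, one workaround worth trying is to override the detected sizes with Eigen::setCpuCacheSizes() from Eigen core. The contraction blocking consults these values; whether the block-based executor path picks them up on your version is worth verifying. A minimal sketch (the sizes below are made up; substitute your hardware's actual cache sizes in bytes):

```cpp
#include <cstdio>
#include <Eigen/Core>

int main() {
  // Print what Eigen auto-detected; if these numbers do not match your
  // hardware, the automatic block-size choices will be off as well.
  std::printf("L1: %td  L2: %td  L3: %td\n",
              Eigen::l1CacheSize(), Eigen::l2CacheSize(), Eigen::l3CacheSize());

  // Override the detected sizes (all in bytes). Subsequent block-size
  // computations in the product/contraction kernels use these values.
  Eigen::setCpuCacheSizes(/*l1=*/16 * 1024,
                          /*l2=*/128 * 1024,
                          /*l3=*/1024 * 1024);
  return 0;
}
```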
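
To make the merge step in point 1 concrete, here is a toy sketch of the pattern. ResourceRequirements and chooseBlockSize are hypothetical stand-ins for illustration, not Eigen's actual types, and the real merge tracks more than a single per-coefficient byte count:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for what each evaluator in the expression tree
// reports via its getResourceRequirements()-style hook.
struct ResourceRequirements {
  std::size_t bytes_per_coeff;  // scratch needed per output coefficient
};

// Merge all requirements and derive a block size (in coefficients) so
// that the combined scratch for one block stays within the L1 budget.
std::size_t chooseBlockSize(const std::vector<ResourceRequirements>& reqs,
                            std::size_t l1_bytes) {
  std::size_t worst = 1;
  for (const auto& r : reqs)
    worst = std::max(worst, r.bytes_per_coeff);  // most demanding evaluator wins
  return std::max<std::size_t>(1, l1_bytes / worst);
}
```

Shrinking the budget passed in here is what recommendation 1 amounts to.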
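
For point 2, this is the shape of the Goto-style heuristic, as a simplified hypothetical sketch. The function name, the fixed kc cap, and the formulas are illustrative only; Eigen's actual logic also accounts for register blocking and the number of threads:

```cpp
#include <algorithm>
#include <cstddef>

struct BlockSizes { std::size_t mc, kc, nc; };

// Pick (mc, kc, nc) for an (m x k) * (k x n) contraction so the packed
// lhs block (mc x kc) fits in L2 and the packed rhs panel (kc x nc)
// fits in L3, which is the packing scheme described in point 2.
BlockSizes chooseContractionBlocks(std::size_t m, std::size_t n, std::size_t k,
                                   std::size_t l2_bytes, std::size_t l3_bytes,
                                   std::size_t elem_bytes) {
  // Cap the shared dimension first, then size mc and nc to the caches.
  std::size_t kc = std::max<std::size_t>(1, std::min<std::size_t>(k, 256));
  std::size_t mc = std::min(m, l2_bytes / (kc * elem_bytes));  // lhs block in L2
  std::size_t nc = std::min(n, l3_bytes / (kc * elem_bytes));  // rhs panel in L3
  return {std::max<std::size_t>(mc, 1), kc, std::max<std::size_t>(nc, 1)};
}
```

Reducing l2_bytes/l3_bytes (or capping the resulting block sizes directly) is what recommendation 2 amounts to on a cache-starved part.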
I hope these pointers help.
Rasmus