The SpSparse library does offer one more thing of interest not mentioned previously: sparse address spaces. For example, what if indices in your vector space can range from 0 to 10^9, but you use only 10,000 of those indices? You're dealing with, effectively, a 10,000-dimensional space embedded in a billion-dimensional space. A sparse matrix in that space might have O(10,000) non-zero elements, not the O(10^9) you might assume from the number of rows/columns of the sparse matrix, let alone the O(10^18) elements of the corresponding dense matrix.
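To make the idea concrete, here is a minimal sketch (not SpSparse's actual API) of a sparse vector whose storage depends only on the number of non-zero entries, never on the size of the address space:

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// A sparse vector over a huge index range (e.g. 0..10^9): storage is
// proportional to the number of non-zeros, not the address-space size.
using SparseVec = std::map<std::int64_t, double>;

double dot(SparseVec const &a, SparseVec const &b)
{
    // Walk the smaller map, probing the larger one for matching indices.
    SparseVec const &small = (a.size() <= b.size()) ? a : b;
    SparseVec const &big   = (a.size() <= b.size()) ? b : a;
    double sum = 0.;
    for (auto const &[ix, val] : small) {
        auto it = big.find(ix);
        if (it != big.end()) sum += val * it->second;
    }
    return sum;
}
```

Nothing here ever allocates an array of length 10^9; indices are just keys.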
That's actually the kind of situation I deal with in my modeling, where at least 90% of the indices in my vector space are unused (they will NEVER be used in my part of the model). It can also come up, for example, in a domain decomposition setting with MPI. Or if you combine two geometric dimensions into one mathematical dimension using fast bit operations rather than multiplication and division.
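The bit-operations case can be sketched as follows (my own illustration, not code from SpSparse): pack an (i,j) pair into one 64-bit index with shifts and masks. Even if each dimension only uses, say, 100,000 values, the packed index range spans far more than that, so most of the combined address space is unused by construction.

```cpp
#include <cassert>
#include <cstdint>

// Combine two geometric dimensions (i,j) into one mathematical index
// using bit operations instead of multiplication/division.
// Assumes j fits in 32 bits.
inline std::int64_t pack(std::int32_t i, std::int32_t j)
{
    return (static_cast<std::int64_t>(i) << 32)
         | static_cast<std::uint32_t>(j);
}

inline std::int32_t unpack_i(std::int64_t ix)
{
    return static_cast<std::int32_t>(ix >> 32);
}

inline std::int32_t unpack_j(std::int64_t ix)
{
    return static_cast<std::int32_t>(ix & 0xffffffff);
}
```

The shift/mask round trip is cheap, but the resulting indices are only usable by tools that tolerate a mostly-empty address space.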
The SpSparse code I wrote handles this case directly: it never uses an index to index into an array, so indices can be anything, as long as they're ordered. They could even be crazy things such as tuples, or strings if your dimensions have symbolic meanings. Sparse address spaces probably incur a time cost, and probably aren't appropriate for serious linear algebra. But if your tools can work with them directly, they can at least be convenient.
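As a sketch of the "indices only need to be ordered" point (again my own illustration, not the SpSparse implementation), any ordered type works as a key. Here a matrix's rows and columns are named symbolically, with duplicate entries accumulating triplet-style:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Any ordered type can serve as an index.  Here, rows and columns are
// named symbolically rather than numbered.
using Key = std::pair<std::string, std::string>;
using SymbolicMatrix = std::map<Key, double>;

// Duplicate (row, col) entries add together, as in triplet/COO formats.
void add_entry(SymbolicMatrix &M, std::string row, std::string col, double v)
{
    M[{std::move(row), std::move(col)}] += v;
}
```

`std::map` only ever compares keys with `<`; it never uses them as array offsets, which is exactly what makes arbitrary index types possible.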
The alternative is to convert to a dense address space. That conversion is a pain, reducing code readability and debuggability. It also incurs the overhead of building hash maps from your sparse to your dense address space. If your problem involves enough linear algebra, I'm sure the conversion would be worth it anyway. But for more "casual" linear algebra, user code complexity could be the bigger issue.
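The conversion itself is straightforward, which is part of why it's tempting. A minimal sketch (hypothetical helper, not from SpSparse): assign each distinct sparse index the next free dense slot, keeping the reverse mapping so results can be translated back.

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Sparse-to-dense translation: each distinct sparse index gets the next
// dense slot, so 10,000 used indices out of a 10^9 address space become
// a dense 0..9999 space suitable for standard linear-algebra libraries.
struct Densifier {
    std::unordered_map<std::int64_t, int> to_dense;
    std::vector<std::int64_t> to_sparse;   // reverse map: dense -> sparse

    int densify(std::int64_t sparse_ix)
    {
        auto [it, inserted] =
            to_dense.try_emplace(sparse_ix, static_cast<int>(to_sparse.size()));
        if (inserted) to_sparse.push_back(sparse_ix);
        return it->second;
    }
};
```

This is the hash-map overhead mentioned above; and every index that crosses the boundary between your code and the solver has to be translated in one direction or the other, which is where the readability cost comes from.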