Hi,
I find it unintuitive that a submatrix of a matrix can change its majorness depending on whether the submatrix is a row or a column. Say M is a column-major matrix. Then M.row(i) is conceptually row-major. This becomes an issue once you start to reshape the resulting 1D quantity. For example, you might want to treat the pixels of a 2D image as one row of a big 3-row matrix (one row per channel), and at the end unpack a row back into a 2D image. Naturally, when you reshape it back, you should preserve the majorness of your application's convention. But the row is actually row-major, so if the reshape follows the majorness of the expression it is given, you get a row-major matrix at the end.
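For concreteness, here is a minimal sketch of the surprise. I am writing it against Eigen 3.4's reshaped() with its AutoOrder option, which is an assumption on my part that this is the relevant API; the sizes and values are made up for illustration:

#include <Eigen/Dense>
#include <iostream>

int main() {
    using namespace Eigen;

    // 3 x 6 column-major matrix (the default order): row r holds channel r
    // of a 2 x 3 image whose pixels were flattened column-major.
    MatrixXd M(3, 6);
    for (int c = 0; c < 6; ++c)      // pixel index
        for (int r = 0; r < 3; ++r)  // channel index
            M(r, c) = 100 * r + c;

    // Unpack channel 1 back into its 2 x 3 image. Traversing in the
    // app-wide column-major convention recovers the original image:
    std::cout << M.row(1).reshaped(2, 3) << "\n\n";
    // 100 102 104
    // 101 103 105

    // But a single-row block is itself flagged row-major, so an order that
    // follows the expression's own majorness scrambles the pixels:
    std::cout << M.row(1).reshaped<AutoOrder>(2, 3) << "\n";
    // 100 101 102
    // 103 104 105
    return 0;
}

The second call is exactly the "row-major matrix at the end" I described: the traversal order silently changed because the sub-expression happened to be a row.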
To summarize: having submatrix operations change the majorness of the resulting submatrix depending on its shape is a library wart. Can we change this behavior? Essentially, once you choose a majorness convention for your application, no matrix operation should change the majorness of its result, unless EXPLICITLY requested by the user, e.g., for efficiency.
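An explicit escape hatch would of course be fine. Sketching again against the same assumed Eigen 3.4 API, this is the kind of explicit request I have in mind:

#include <Eigen/Dense>

// Hypothetical helper: unpack channel r of the 3 x 6 matrix from the sketch
// above into a 2 x 3 image, deliberately deviating from the app-wide
// column-major convention. The deviation is spelled out at the call site,
// so nothing changes majorness behind my back.
Eigen::MatrixXd unpack_channel_row_major(const Eigen::MatrixXd& M, int r) {
    return M.row(r).reshaped<Eigen::RowMajor>(2, 3);
}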
Thanks,
Yuanchen