Re: [eigen] Signed or unsigned indexing



Well-made points. They greatly expand on points I made on this list in May 2010.


From the linked video...

"I think one of the sad things about the standard library is that the indices are unsigned whereas array indices are unsigned. You are sort of doomed to have confusion and problems with that. [...] It is a major bug source. Which is why I'm saying, 'stay as simple as you can. Use [signed] integers until you really, really need something else.' "
-- Bjarne Stroustrop

This opinion was echoed by several of the esteemed panel and opposed by none: Bjarne Stroustrup, Andrei Alexandrescu, Herb Sutter, Scott Meyers, Chandler Carruth, Sean Parent, Michael Wong, and Stephan T. Lavavej.

" They [the unsigned ordinals in the STL] are wrong. We're sorry. We were young."


I would add...
It is human nature to forget that the "integer" data types we're discussing are only approximations of the integers over a finite range. The approximation breaks most horribly at the point of over/underflow. For signed types, that point is usually comfortably far from where our calculations happen.
For unsigned types, that boundary sits at the busiest spot: zero.
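
For example, assuming a 32-bit unsigned int, the breakage is one step below zero:

#include <iostream>

int main() {
  unsigned int n = 0;
  std::cout << n - 1 << "\n";  // wraps to 4294967295: the discontinuity sits right next to zero
}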


-- Mark






On 01/20/2017 10:54 AM, Francois Fayard wrote:
Let me give you my 2 cents on this signed/unsigned problem:

- The core C++ language uses signed integers for indexing: if p is a pointer, p[k] is defined for k of type std::ptrdiff_t, which is the Index type used by Eigen.
- The C++ standard library uses unsigned integers for indexing: std::size_t.

So if you rank them as (core C++) > (STL) > (Eigen), you have to go with signed integers. Most people on the standard committee think that using unsigned integers for indexing was a mistake, including Bjarne Stroustrup, Herb Sutter and Chandler Carruth. You can see their arguments in this video, at 42:38 and 1:02:50 ( https://www.youtube.com/watch?v=Puio5dly9N8 ).

Using unsigned integers is just error-prone. For instance, suppose you have an array v and you want to find, going from index b down to index a, the first element k such that v[k] == 0. With signed integers, you just write:
for (int k = b; k >= a; --k) {
   if (v[k] == 0) {
     return k;
   }
}
Good luck writing something equally simple with unsigned integers that does not go crazy when a = 0. (This was a real bug I found in an image-processing program that was searching for something from right to left inside a rectangle of interest.)
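
With an unsigned index, the naive translation is broken: when a = 0 the condition k >= a is always true, so --k wraps from 0 to the largest value of the type and you read out of bounds. There is a correct unsigned idiom, but it is the kind of trick you have to stop and think about (a sketch using the same v, a, b as above):

for (std::size_t k = b + 1; k-- > a; ) {  // the decrement happens after the test, so the body never sees k below a
   if (v[k] == 0) {
     return k;
   }
}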

In terms of performance, the compiler can also do more with signed integers because signed overflow is undefined behavior. This is not the case for unsigned integers (they wrap). As a consequence, a compiler can assume that p[k] and p[k + 1] are next to each other in memory if k is signed; it cannot assume that if k is unsigned (think of the mess this creates for people working on auto-vectorization). With signed integers, 10 * k / 2 == 5 * k; this is not true with unsigned integers. I believe that "unsigned integers" should be renamed "modulo integers", because unsigned integers are just Z/(2^n Z). And I have taught enough mathematics to know that most people are (or should be) scared of Z/(2^n Z) :-)
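
A concrete illustration of that last point (a small standalone example, assuming a 32-bit unsigned int):

#include <iostream>

int main() {
  unsigned int k = 500000000u;        // 10 * k = 5e9, which does not fit in 32 bits
  std::cout << 10u * k / 2u << "\n";  // prints 352516352: 10 * k wrapped modulo 2^32, then halved
  std::cout << 5u * k << "\n";        // prints 2500000000: no wrap
}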

From a performance point of view, the only advantage of unsigned integers is division: it is faster with unsigned integers. But I have never had that bottleneck in a real program.

An argument used by "unsigned" people is that you get an extra bit for array indices. This is just nonsense. First, it only matters on 32-bit systems with char arrays of more than 2 billion elements. Have you ever dealt with *char* arrays that long? Now, the most amusing part is that if you look at the implementation of std::vector<T>, it stores 3 pointers, including the begin and the end of the array. To get its size, the vector returns static_cast<std::size_t>(end - begin). And end - begin is just a std::ptrdiff_t, so if the vector had more than 2 billion elements you would get undefined behavior. So std::vector can't even use that extra bit! By the way, this is why std::vector<T>::max_size() is about 2 billion on 32-bit systems with libc++ (clang). And nobody seems to complain.
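
You can check the signedness of that pointer difference directly (a minimal standalone snippet):

#include <cstddef>
#include <type_traits>

int main() {
  char buffer[16];
  auto diff = (buffer + 16) - buffer;  // pointer subtraction
  static_assert(std::is_same<decltype(diff), std::ptrdiff_t>::value,
                "the difference of two pointers is the signed std::ptrdiff_t");
  (void)diff;
}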

On the other hand, I understand people who use std::size_t for indexing because it is consistent with the standard library.

François Fayard
Founder & Consultant - Inside Loop
Applied Mathematics & High Performance Computing

On Jan 20, 2017, at 3:33 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:

It's complicated. The use of signed indexing has a long history specifically in numerical linear algebra, see FORTRAN/BLAS/LAPACK. Also some large C++ shops such as Google have entirely turned away from unsigned indexing:
https://google.github.io/styleguide/cppguide.html#Integer_Types says:
You should not use the unsigned integer types such as uint32_t, unless there is a valid reason such as representing a bit pattern rather than a number, or you need defined overflow modulo 2^N. In particular, do not use unsigned types to say a number will never be negative. Instead, use assertions for this.

I only mention this to say, there are valid points on both sides of this argument. Neither choice will make much more than half of users happy. (I actually don't like the Google C++ style that much, and I wouldn't mind unsigned personally).

The only thing that would be really bad would be trying to fully support both signednesses. That would mean keeping only the intersection of the advantages of either choice while having to deal with the union of their constraints.

Benoit

2017-01-20 8:25 GMT-05:00 Henrik Mannerström <henrik.mannerstrom@xxxxxxxxx>:
This issue seems to warrant a separate message thread.

I'd like to offer my two cents: as much as I like Eigen, I think there is a strict ordering "C++ language" > "STL" > "Eigen". More than once I have developed something and only brought in Eigen at some later point. Code written in std::size_t fashion has then needed refactoring. So if there were a vote, I'd vote for size_t indexing. I think smooth interoperability with the STL is valuable.

Best,
Henrik

On Thu, Jan 19, 2017 at 10:58 PM, Márton Danóczy <marton78@xxxxxxxxx> wrote:
Hi all,

while I would not want to argue with Gael nor with the numerous C++ experts advocating signed integers, today's reality is different. Some libraries, the most prominent being the standard library, use unsigned integers as indices. Therefore, mixing signed and unsigned types is sadly unavoidable when using Eigen, which is a major annoyance (at least for me, working with -pedantic -Werror).

In my opinion, using Eigen with -DEIGEN_DEFAULT_DENSE_INDEX_TYPE=size_t should just work, regardless of the developers' personal preference for signed integers.

Best,
Marton

On 19 January 2017 at 13:00, Gael Guennebaud <gael.guennebaud@xxxxxxxxx> wrote:


On Thu, Jan 19, 2017 at 11:31 AM, Andrew Fitzgibbon <awf@xxxxxxxxxxxxx> wrote:


I wonder if a rethink of reshape could allow a move to unsigned index types, assuming I understand correctly that Dynamic would be of another type. It's always been a bit clunky getting "size_t-correctness" right for mixed Eigen/STL code, and compilers complain increasingly nowadays. Perhaps now might be a time to give it a try?


See also: http://eigen.tuxfamily.org/index.php?title=FAQ#Why_Eigen.27s_API_is_using_signed_integers_for_sizes.2C_indices.2C_etc..3F

Maybe one day we'll get a "fixed" std-v2 that would be more compatible with libraries that made the right choice of using signed types.

For sparse matrices, I agree that we might try to allow unsigned types as the StorageIndex type. This should be doable while keeping signed 64-bit integers for the API (rows, cols, nonZeros, etc.).

We might also think about solutions to ease the mix of Eigen/STL code...

gael
I see the "downcounting" argument at https://listengine.tuxfamily.org/lists.tuxfamily.org/eigen/2009/03/msg00099.html, but that appears fairly strongly to be a special case where one would anyway want to benchmark, check sizes, etc.

Finally, I think we are in a world where sparse arrays with entries in the 2-4 billion range are reasonably common, and one could conceivably be pleased to get the extra bit back…

Thanks again for a great library!

A.

Dr Andrew Fitzgibbon FREng FBCS FIAPR

Partner Scientist
Microsoft HoloLens, Cambridge, UK

http://aka.ms/awf

From: Gael Guennebaud [mailto:gael.guennebaud@xxxxxxxxx]
Sent: 13 January 2017 12:26
To: eigen <eigen@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [eigen] Let's get reshape in shape for 3.4

On Fri, Jan 13, 2017 at 6:14 AM, Jason Newton <nevion@xxxxxxxxx> wrote:

Also, regarding the RowMajor/ColMajor int/type issue - perhaps stuff them in a new namespace or class - storage? Too bad StorageOrder is already used in so many places. Honestly, I'm all for you making them types and having things work uniformly from there. I have used them myself as integers with the flags bitset, but only for enable_if logic, which would be rendered obsolete if you had a collection of C++11-inspired type traits (instead they get repeated in a few places on the web). Sorry if I'm not being very detailed - it's been a while since I've needed these - but my point is that it was basically a flaw to use them as ints in user code in the first place, so I encourage you to change things so it all works fluidly in the new API without fear of upsetting users. Although perhaps that is a daunting task...

I think you are mixing Eigen::RowMajor with Eigen::RowMajorBit. I agree that the bit flags could be managed differently using individual type traits, but regarding Eigen::RowMajor, it is currently used as a template parameter to Matrix, Array, SparseMatrix, etc.:

Matrix<...., RowMajor|DontAlign>

which is pretty convenient to write compared to having to subclass some default_matrix_traits class to customize the options. With RowMajor|DontAlign, RowMajor could still be an instance of an integral_constant-like type with operator| overloaded... Actually, I've started to think about such an approach for Eigen::Dynamic, so that one can write:

M = N*2+1

and get M==Dynamic if N==Dynamic. Currently we always have to write M = N==Dynamic ? Dynamic : 2*N+1, which is error-prone because it's easy to forget to check for Dynamic, especially when combining multiple compile-time identifiers.
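
For illustration, here is a minimal C++14 sketch of such an integral_constant-like size type whose arithmetic propagates Dynamic automatically (SizeC is just a placeholder name, not an actual Eigen type):

#include <iostream>

constexpr int Dynamic = -1;

template<int N>
struct SizeC {
  static constexpr int value = N;
};

template<int A, int B>
constexpr auto operator*(SizeC<A>, SizeC<B>) {
  return SizeC<(A == Dynamic || B == Dynamic) ? Dynamic : A * B>{};
}

template<int A, int B>
constexpr auto operator+(SizeC<A>, SizeC<B>) {
  return SizeC<(A == Dynamic || B == Dynamic) ? Dynamic : A + B>{};
}

int main() {
  constexpr auto N = SizeC<Dynamic>{};
  constexpr auto M = N * SizeC<2>{} + SizeC<1>{};  // stays Dynamic automatically
  std::cout << M.value << "\n";                    // prints -1, i.e. Dynamic
}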

gael


-Jason


On Thu, Jan 12, 2017 at 11:56 PM, Jason Newton <nevion@xxxxxxxxx> wrote:
Hi Gael,

Glad to see all the new api's you're moving in for the new year.

I actually prefer C if C is a superset of B - that is the way it works in NumPy: order is overridable in several places, but mainly things follow the matrix types you are working with (which would be expressions here).

I haven't thought about the details, but is there any reason A.reshaped(4, n/2) couldn't work via constexpr or something on the 4? I imagine that even if it did, you're trying to cover C++98, but I think fix<4> is a fair bit ugly.

As for the placeholder for a solvable dimension - the MATLAB convention is the empty matrix, and I welcome that notion (wrapped as a type). How about any of, in no particular order: Null, Nil, Empty, Filled, DontCare, Placeholder, CalcSize (this and the next are more explicit), or AutoSized?


-Jason

On Thu, Jan 12, 2017 at 10:35 AM, Gael Guennebaud
<gael.guennebaud@xxxxxxxxx> wrote:
Hi everyone,

just after generic indexing/slicing, another long-standing missing feature is reshape. So let's make it happen for 3.4.

This is not the first time we have discussed it. There is an old bug report entry [1] and an old pull request with various discussions [2]. The Tensor module also supports reshape [3].

However, the feature is still not there because we never converged on how to properly handle the ambiguity between column-major and row-major orders, also called Fortran versus C style orders (e.g., in the numpy doc [4]).

We have several options:

A) Interpret the indices in column major only, regardless of the storage
order.
   - used in MatLab and Armadillo
   - pros: simple strategy
   - cons: not very friendly for row-major inputs (needs to transpose twice)

B) Follows the storage order of the given expression
   - used by the Tensor module
   - pros: easiest implementation
   - cons:
      * results depend on the storage order (need to be careful in generic code)
      * not all expressions have a natural storage order (e.g., a+a^T, a*b)
      * needs a hard copy if, e.g., the user wants to stack the columns of a row-major input

C) Give the user an option to decide which order to use between: ColMajor,
RowMajor, Auto
   - used by numpy [4] with default to RowMajor (aka C-like order)
   - pros: gives full control to the user
   - cons: the API is a bit more complicated

At this stage, option C) seems to be the only reasonable one. However, we have yet to specify how to pass this option at compile time, what Auto means, and what the default strategy is.

Regarding 'Auto', it is similar to option (B) above. However, as I already mentioned, some expressions do not have any natural storage order. We could address this issue by limiting the use of 'Auto' to expressions for which the storage order is "strongly" defined, where "strong" could mean:
  - Any expression with the DirectAccessBit flag (it means we are dealing with a Matrix, Map, sub-matrix, Ref, etc., but not with a generic expression)
  - Any expression with the LinearAccessBit flag: it means the expression can be efficiently processed as a 1D vector.

Any other situation would raise a static_assert.

But what if I really don't care and just want to, e.g., get a linear view with no constraint on the stacking order? Then we could add a fourth option meaning 'IDontCare', perhaps 'AnyOrder'?


For the default behavior, I would propose 'ColMajor' which is perhaps the
most common and predictable choice given that the default storage is column
major too.


Then, for the API, nothing fancy (I use c++11 for brevity):

template<typename RowsType=Index,typename ColType=Index,typename Order=Xxxx>
DenseBase::reshaped(RowsType rows,ColType cols,Order = Order());

with one variant to output a 1D array/vector:

template<typename Order=Xxxx>
DenseBase::reshaped(Order = Order());

Note that I used "reshaped" with a "d" on purpose.

The storage order of the resulting expression would match the optional
order.
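
For instance, usage could look like this (just a sketch of the proposed API, nothing that exists yet; Xxxx stands for whatever order tag we settle on):

MatrixXd A(4,6);
auto B = A.reshaped(8,3);            // 2D reshape, default (column-major) order
auto v = A.reshaped();               // 1D linear view of the 24 coefficients
auto C = A.reshaped(2,12, Xxxx());   // explicit order tag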

Then, for the names of the options, we cannot use "RowMajor"/"ColMajor" because they are already defined as "static const int" and we need objects with distinct types here. Moreover, col-major/row-major does not extend well to multi-dimensional tensors. I also don't really like the reference to Fortran/C as in numpy. "Forward"/"Backward" are confusing too. Any ideas?

The rows/cols parameters could also be a mix of compile-time & runtime
values, like:

A.reshaped(fix<4>,n/2);

And maybe we could even allow a placeholder to automatically compute one of the dimensions to match the given matrix size. We cannot reuse "Auto" here because that would be too confusing:

A.reshaped(5,Auto);

Again, any ideas for a good placeholder name? (numpy uses -1 but we need a
compile-time identifier)


cheers,

gael

[1] http://eigen.tuxfamily.org/bz/show_bug.cgi?id=437
[2] https://bitbucket.org/eigen/eigen/pull-requests/41
[3]
https://bitbucket.org/eigen/eigen/src/default/unsupported/Eigen/CXX11/src/Tensor/README.md?fileviewer=file-view-default#markdown-header-operation-reshapeconst-dimensions-new_dims
[4]
https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.reshape.html









