Re: [hatari-devel] fixing errors reported by GCC 10 -fanalyzer



Hi,

On 5/11/20 8:08 AM, Thomas Huth wrote:
On Sun, 10 May 2020 23:19:24 +0200, Nicolas Pomarède
<npomarede@xxxxxxxxxxxx> wrote:
....
NOTE: assert() is used in several places; as the condition/error
tested by assert() can also happen when assert is compiled to no-code
"{}", maybe assert() should be replaced by a proper printf+exit (or
return if possible)?

The only reason to disable asserts should be somebody trying
to make a very small Hatari binary for something like the Firebee.

There aren't so many of them that they would increase code
size significantly (I hope), and they (or allocs) shouldn't
be in performance-critical places.
(Except maybe the one assert in newcpu.c::do_interrupt()
and a couple in cycInt.c.)
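
To illustrate Nicolas' point above, an untested sketch
(alloc_buffer() is just a made-up example): with -DNDEBUG the
assert() statement compiles away entirely, while the explicit
check stays:

  #include <assert.h>
  #include <stdio.h>
  #include <stdlib.h>

  static void *alloc_buffer(size_t size)
  {
          void *buf = malloc(size);

          /* disappears completely in -DNDEBUG builds,
           * so a NULL buf would slip through */
          assert(buf != NULL);

          /* survives -DNDEBUG builds */
          if (buf == NULL) {
                  fprintf(stderr, "ERROR: allocating %zu bytes failed!\n",
                          size);
                  exit(EXIT_FAILURE);
          }
          return buf;
  }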


Asserts are there to catch unexpected logic errors. Although
they should be enabled by default, I think it's OK to disable
them in corner cases.


Actually, I think the best thing we can do is to introduce a
wrapper around malloc() in our code base that prints an error
message and exits the program completely in case of errors, and
then use that wrapper for allocating small amounts of memory
instead of malloc + error handling all around the code.

The point of asserts is to show places where logic
errors happen. Malloc failure isn't a logic error,
so I think hiding the error location with
a wrapper is fine.


malloc()
should never fail for small amounts of memory; otherwise the system is
in a catastrophic out-of-memory situation anyway, where the OOM killer
of the kernel will soon reap some processes. So aborting the
program is likely the best thing you can do in such a situation.

FYI: Systems that use overcommit, like Linux,
aren't likely(?) to run out of memory when memory
is allocated, because allocation just sets the
process's page table entries to point to a common
system zero page.  They run out of memory when
that memory is used, i.e. when the allocated memory
pages get written (the page table entry then has to
point to a new, process-specific page, which the
kernel needs to find from somewhere).

This write often happens right after allocation,
but the allocation itself may succeed even in
low-memory situations.
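
A quick untested sketch of that behavior (the 16 GB figure
is made up; touching the pages is what can invoke the OOM
killer, so don't run this on a machine you care about):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(void)
  {
          /* (probably) much more than the machine has free */
          size_t size = (size_t)16 * 1024 * 1024 * 1024;
          char *p = malloc(size);

          if (p == NULL) {
                  puts("malloc failed");     /* likely with overcommit disabled */
                  return 1;
          }
          puts("malloc succeeded");          /* likely with overcommit enabled */

          /* only now do the pages get backed by real memory;
           * in a low-memory situation this write is what can
           * trigger the kernel OOM killer */
          memset(p, 1, size);

          free(p);
          return 0;
  }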


Overcommit can be disabled (the vm.overcommit_memory
sysctl), but nearly all Linux distros, and I guess also
derivatives like Android, have it enabled.

(Overcommit is a memory usage optimization, very
useful for forking and threading, because fork
duplicates the parent's address space, and threads
have statically allocated, mostly unused stacks.)


AFAIK Windows doesn't use overcommit, partly
because it creates new processes in a different
way.  There, allocations can actually fail.
I have no idea whether Mac / BSD / Mach
use overcommit.


This is e.g. also what the normal glib malloc functions do -
they abort the program in OOM situations, and you have to use the
try_malloc functions instead if you want to avoid that, see e.g.:

  https://developer.gnome.org/glib/unstable/glib-Memory-Allocation.html#g-try-malloc
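
I.e. something like this (untested sketch; alloc_example()
is just a made-up name):

  #include <glib.h>

  static void alloc_example(void)
  {
          /* g_malloc() never returns NULL: it aborts the
           * whole program if the allocation fails */
          gchar *small = g_malloc(1024);

          /* g_try_malloc() returns NULL on failure instead,
           * so the caller can handle it */
          gchar *big = g_try_malloc(64 * 1024 * 1024);
          if (big == NULL)
                  g_print("large allocation failed\n");

          g_free(big);    /* g_free(NULL) is a no-op */
          g_free(small);
  }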

And while we're at it, our malloc wrapper could also clear the
allocated memory block - this is currently done manually in
most spots that malloc some memory anyway, so we could simplify
the code in many places that way. Thus, what about calling the
wrapper mzalloc() or malloc0() or something similar?

Some libraries indirectly linked by Hatari might
export alloc functions of their own, so the name
should be clearly Hatari-specific.

The ANSI C function for cleared memory is calloc()
-> maybe call it Hatari_Calloc()?
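
I.e. an untested sketch of what that could look like:

  #include <stdio.h>
  #include <stdlib.h>

  /* Allocate zeroed memory, or print an error and exit.
   * Using calloc() also guards against nmemb*size overflow. */
  void *Hatari_Calloc(size_t nmemb, size_t size)
  {
          void *p = calloc(nmemb, size);
          if (p == NULL) {
                  fprintf(stderr, "ERROR: failed to allocate %zu x %zu bytes!\n",
                          nmemb, size);
                  exit(EXIT_FAILURE);
          }
          return p;
  }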


	- Eero


