Right now, tilegx is right on the verge of timing out when this test
runs, so adding a bit of headroom seems like the right thing; we
see failures when running tests in parallel.
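In a glibc test, such headroom would be added along these lines (a
sketch; the value here is illustrative, not the one actually chosen):

  /* Raise the timeout before including the test skeleton.  */
  #define TIMEOUT 100
  #include "test-skeleton.c"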
Before this change, the while loop in reused_arena which avoids
returning a corrupt arena would never execute its body if the selected
arena were not corrupt. As a result, result == begin after the loop,
and the function returns NULL, triggering fallback to mmap.
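One plausible shape of the bug, as a simplified sketch (illustrative
only, not the actual glibc source):

  mstate begin = result;
  while (arena_is_corrupt (result))
    {
      result = result->next;
      if (result == begin)
        break;
    }
  if (result == begin)
    /* Intended to mean "every arena is corrupt", but also true when
       begin itself was healthy and the loop body never ran.  */
    return NULL;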
__malloc_initialize_hook is interposed by application code, so
the usual approach of defining a compatibility symbol does not work.
This commit adds a new mechanism based on #pragma GCC poison in
<stdc-predef.h>.
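Roughly, the mechanism works like this (a sketch; the guards that
surround the pragma in <stdc-predef.h> are omitted):

  #pragma GCC poison __malloc_initialize_hook

  /* Any later use, such as
       void (*__malloc_initialize_hook) (void) = my_init;
     now fails to compile with:
       error: attempt to use poisoned "__malloc_initialize_hook"  */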
For regular mmapped chunks there are two size fields (hence a reduction
by 2 * SIZE_SZ bytes), but for fake chunks, we only have one size field,
so we need to subtract SIZE_SZ bytes.
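In terms of the usable-size computation, the distinction looks
roughly like this (macro names as in malloc/malloc.c; the exact code
is a sketch):

  if (DUMPED_MAIN_ARENA_CHUNK (p))
    return chunksize (p) - SIZE_SZ;      /* fake chunk: one size field */
  else if (chunk_is_mmapped (p))
    return chunksize (p) - 2 * SIZE_SZ;  /* mmapped: two size fields */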
This was initially reported as Emacs bug 23726.
After the heap rewriting added in commit
4cf6c72fd2 (malloc: Rewrite dumped heap
for compatibility in __malloc_set_state), we can change malloc alignment
for new allocations because the alignment of old allocations no longer
matters.
We need to increase the malloc state version number so that binaries
containing dumped heaps of the new layout will not try to run on
previous versions of glibc, which would result in obscure crashes.
This commit addresses a failure of tst-malloc-thread-fail on the
affected architectures (32-bit ppc and mips) because the test checks
pointer alignment.
The first SIGUSR1 signal could arrive when sigusr1_sender_pid
was still 0. As a result, kill would send SIGSTOP to the
entire process group. This would cause the test to hang before
printing any output.
This commit also adds a sched_yield to the signal source, so that
it does not flood the parent process with signals it never has a
chance to handle.
Even with these changes, tst-mallocfork2 still fails reliably
after the fix in commit 56290d6e76
(Increase fork signal safety for single-threaded processes) is
backed out.
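Sketched, the two fixes look like this (sigusr1_sender_pid is the
test's variable; the surrounding code is illustrative):

  /* SIGUSR1 handler: only stop the sender once its PID is known;
     kill (0, SIGSTOP) would stop the entire process group.  */
  if (sigusr1_sender_pid > 0)
    kill (sigusr1_sender_pid, SIGSTOP);

  /* Signal source: yield after each signal so the parent gets a
     chance to run its handler.  */
  while (true)
    {
      kill (parent_pid, SIGUSR1);
      sched_yield ();
    }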
This will allow us to change many aspects of the malloc implementation
while preserving compatibility with existing Emacs binaries.
As a result, existing Emacs binaries will have a larger RSS, and Emacs
needs a few more milliseconds to start. This overhead is specific
to Emacs (and will go away once Emacs switches to its internal malloc).
The new checks to make free and realloc compatible with the dumped heap
are confined to the mmap paths, which are already quite slow due to the
munmap overhead.
This commit weakens some security checks, but only for heap pointers
in the dumped main arena. By default, this area is empty, so those
checks are as effective as before.
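On the free path, the added compatibility check has this shape (a
sketch; the macro name is assumed from the same patch):

  if (chunk_is_mmapped (p))
    {
      /* Chunks in the dumped main arena were not created by mmap and
         must never reach munmap; leak them instead.  */
      if (DUMPED_MAIN_ARENA_CHUNK (p))
        return;
      munmap_chunk (p);
      return;
    }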
This provides a band-aid and addresses the scenario where fork is
called from a signal handler while the process is in the malloc
subsystem (or has acquired the libio list lock). It does not
address the general issue of async-signal-safety of fork;
multi-threaded processes are not covered, and some glibc
subsystems have fork handlers which are not async-signal-safe.
The fork handler now runs so late that there is no risk anymore that
other fork handlers in the same thread use malloc, so it is no
longer necessary to install malloc hooks which made a subset
of malloc functionality available to the thread that called fork.
Previously, a thread M invoking fork would acquire locks in this order:
(M1) malloc arena locks (in the registered fork handler)
(M2) libio list lock
A thread F invoking fflush (NULL) would acquire locks in this order:
(F1) libio list lock
(F2) individual _IO_FILE locks
A thread G running getdelim would use this order:
(G1) _IO_FILE lock
(G2) malloc arena lock
After executing (M1), (F1), (G1), none of the threads can make progress.
This commit changes the fork lock order to:
(M'1) libio list lock
(M'2) malloc arena locks
It explicitly encodes the lock order in the implementation of fork,
and does not rely on the registration order, thus avoiding the deadlock.
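In pseudo-code, the fork path now takes the locks itself, in a fixed
order (helper names are illustrative, not the actual ones):

  _IO_list_lock ();            /* (M'1) libio list lock first ...  */
  lock_all_malloc_arenas ();   /* (M'2) ... then the arena locks.  */
  pid = fork_syscall ();
  /* The parent unlocks in reverse order; the child reinitializes
     the locks instead.  */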
* malloc/Makefile ($(objpfx)tst-malloc-backtrace,
$(objpfx)tst-malloc-thread-exit, $(objpfx)tst-malloc-thread-fail): Use
$(shared-thread-library) instead of hardcoding the path to libpthread.
This test case exercises unusual code paths in allocation functions,
related to allocation failures. Specifically, the test can reveal
the following bugs:
(a) calloc returns non-zero memory on fallback to sysmalloc.
(b) calloc can self-deadlock because it fails to release
the arena lock on certain allocation failures.
(c) pvalloc can dereference a NULL arena pointer.
(a) and (b) appear specific to a faulty downstream backport.
(c) was fixed as part of commit 10ad46bc65.
The test for (a) was inspired by a reproducer supplied by Jeff Layton.
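Bug (a), for example, amounts to a check of this shape (a sketch;
allocation_size stands for whatever size forces the sysmalloc
fallback):

  unsigned char *p = calloc (1, allocation_size);
  if (p != NULL)
    for (size_t i = 0; i < allocation_size; ++i)
      if (p[i] != 0)
        abort ();   /* calloc returned non-zeroed memory */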
* malloc/arena.c (list_lock): Document lock ordering requirements.
(free_list_lock): New lock.
(ptmalloc_lock_all): Comment on free_list_lock.
(ptmalloc_unlock_all2): Reinitialize free_list_lock.
(detach_arena): Update comment. free_list_lock is now needed.
(_int_new_arena): Use free_list_lock around detach_arena call.
Acquire arena lock after list_lock. Add comment, including FIXME
about incorrect synchronization.
(get_free_list): Switch to free_list_lock.
(reused_arena): Acquire free_list_lock around detach_arena call
and attached threads counter update. Add two FIXMEs about
incorrect synchronization.
(arena_thread_freeres): Switch to free_list_lock.
* malloc/malloc.c (struct malloc_state): Update comments to
mention free_list_lock.
reused_arena can increase the attached thread count of arenas on the
free list. This means that the assertion that the reference count is
zero is incorrect. In this case, the reference count initialization
is incorrect as well and could cause arenas to be put on the free
list too early (while they still have attached threads).
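After the fix, get_free_list adjusts the count instead of asserting
on it, roughly (a sketch based on the ChangeLog below):

  result = free_list;
  if (result != NULL)
    {
      free_list = result->next_free;
      /* reused_arena may already have attached threads to this
         arena, so increment rather than assert a zero count.  */
      ++result->attached_threads;
    }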
* malloc/arena.c (get_free_list): Remove assert and adjust
reference count handling. Add comment about reused_arena
interaction.
(reused_arena): Add comments about get_free_list interaction.
* malloc/tst-malloc-thread-exit.c: New file.
* malloc/Makefile (tests): Add tst-malloc-thread-exit.
(tst-malloc-thread-exit): Link against libpthread.
This patch converts a few more function definitions in glibc from
old-style K&R to prototype style. This is sufficient to build and
test on x86_64 and x86 with -Wold-style-definition (I'll test on some
more architectures before proposing the actual addition of
-Wold-style-definition).
Tested for x86_64 and x86 with -Wold-style-definition in use
(testsuite - this patch affects files containing assertions).
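For reference, the conversion is purely syntactic. For a hypothetical
function:

  /* Old-style (K&R) definition:  */
  int
  add (a, b)
       int a;
       int b;
  {
    return a + b;
  }

  /* Prototype-style definition:  */
  int
  add (int a, int b)
  {
    return a + b;
  }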
* io/fts.c (fts_open): Convert to prototype-style function
definition.
* malloc/mcheck.c (mcheck): Likewise.
(mcheck_pedantic): Likewise.
* posix/regexec.c (re_search_2_stub): Likewise. Use
internal_function.
(re_search_internal): Likewise.
* resolv/res_init.c [RESOLVSORT] (net_mask): Convert to
prototype-style function definition.
* sunrpc/clnt_udp.c (clntudp_call): Likewise.
* sunrpc/pmap_rmt.c (clnt_broadcast): Likewise.
* sunrpc/rpcsvc/rusers.x (xdr_utmp): Likewise.
(xdr_utmpptr): Likewise.
(xdr_utmparr): Likewise.
(xdr_utmpidle): Likewise.
(xdr_utmpidleptr): Likewise.
(xdr_utmpidlearr): Likewise.
This mostly automatically-generated patch converts 113 function
definitions in glibc from old-style K&R to prototype-style. Following
my other recent such patches, this one deals with the case of function
definitions in files that either contain assertions or where grep
suggested they might contain assertions - and thus where it isn't
possible to use a simple object code comparison as a sanity check on
the correctness of the patch, because line numbers are changed.
A few such automatically-generated changes needed to be supplemented
by manual changes for the result to compile. openat64 had a prototype
declaration with "..." but an old-style definition in
sysdeps/unix/sysv/linux/dl-openat64.c, and "..." needed adding to the
generated prototype in the definition (I've filed
<https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68024> for diagnosing
such cases in GCC; the old state was undefined behavior not requiring
a diagnostic, but one seems a good idea). In addition, as Florian has
noted regparm attribute mismatches between declaration and definition
are only diagnosed for prototype definitions, and five functions
needed internal_function added to their definitions (in the case of
__pthread_mutex_cond_lock, via the macro definition of
__pthread_mutex_lock) to compile on i386.
After this patch is in, remaining old-style definitions are probably
most readily fixed manually before we can turn on
-Wold-style-definition for all builds.
Tested for x86_64 and x86 (testsuite).
* crypt/md5-crypt.c (__md5_crypt_r): Convert to prototype-style
function definition.
* crypt/sha256-crypt.c (__sha256_crypt_r): Likewise.
* crypt/sha512-crypt.c (__sha512_crypt_r): Likewise.
* debug/backtracesyms.c (__backtrace_symbols): Likewise.
* elf/dl-minimal.c (_itoa): Likewise.
* hurd/hurdmalloc.c (malloc): Likewise.
(free): Likewise.
(realloc): Likewise.
* inet/inet6_option.c (inet6_option_space): Likewise.
(inet6_option_init): Likewise.
(inet6_option_append): Likewise.
(inet6_option_alloc): Likewise.
(inet6_option_next): Likewise.
(inet6_option_find): Likewise.
* io/ftw.c (FTW_NAME): Likewise.
(NFTW_NAME): Likewise.
(NFTW_NEW_NAME): Likewise.
(NFTW_OLD_NAME): Likewise.
* libio/iofwide.c (_IO_fwide): Likewise.
* libio/strops.c (_IO_str_init_static_internal): Likewise.
(_IO_str_init_static): Likewise.
(_IO_str_init_readonly): Likewise.
(_IO_str_overflow): Likewise.
(_IO_str_underflow): Likewise.
(_IO_str_count): Likewise.
(_IO_str_seekoff): Likewise.
(_IO_str_pbackfail): Likewise.
(_IO_str_finish): Likewise.
* libio/wstrops.c (_IO_wstr_init_static): Likewise.
(_IO_wstr_overflow): Likewise.
(_IO_wstr_underflow): Likewise.
(_IO_wstr_count): Likewise.
(_IO_wstr_seekoff): Likewise.
(_IO_wstr_pbackfail): Likewise.
(_IO_wstr_finish): Likewise.
* locale/programs/localedef.c (normalize_codeset): Likewise.
* locale/programs/locarchive.c (add_locale_to_archive): Likewise.
(add_locales_to_archive): Likewise.
(delete_locales_from_archive): Likewise.
* malloc/malloc.c (__libc_mallinfo): Likewise.
* math/gen-auto-libm-tests.c (init_fp_formats): Likewise.
* misc/tsearch.c (__tfind): Likewise.
* nptl/pthread_attr_destroy.c (__pthread_attr_destroy): Likewise.
* nptl/pthread_attr_getdetachstate.c
(__pthread_attr_getdetachstate): Likewise.
* nptl/pthread_attr_getguardsize.c (pthread_attr_getguardsize):
Likewise.
* nptl/pthread_attr_getinheritsched.c
(__pthread_attr_getinheritsched): Likewise.
* nptl/pthread_attr_getschedparam.c
(__pthread_attr_getschedparam): Likewise.
* nptl/pthread_attr_getschedpolicy.c
(__pthread_attr_getschedpolicy): Likewise.
* nptl/pthread_attr_getscope.c (__pthread_attr_getscope):
Likewise.
* nptl/pthread_attr_getstack.c (__pthread_attr_getstack):
Likewise.
* nptl/pthread_attr_getstackaddr.c (__pthread_attr_getstackaddr):
Likewise.
* nptl/pthread_attr_getstacksize.c (__pthread_attr_getstacksize):
Likewise.
* nptl/pthread_attr_init.c (__pthread_attr_init_2_1): Likewise.
(__pthread_attr_init_2_0): Likewise.
* nptl/pthread_attr_setdetachstate.c
(__pthread_attr_setdetachstate): Likewise.
* nptl/pthread_attr_setguardsize.c (pthread_attr_setguardsize):
Likewise.
* nptl/pthread_attr_setinheritsched.c
(__pthread_attr_setinheritsched): Likewise.
* nptl/pthread_attr_setschedparam.c
(__pthread_attr_setschedparam): Likewise.
* nptl/pthread_attr_setschedpolicy.c
(__pthread_attr_setschedpolicy): Likewise.
* nptl/pthread_attr_setscope.c (__pthread_attr_setscope):
Likewise.
* nptl/pthread_attr_setstack.c (__pthread_attr_setstack):
Likewise.
* nptl/pthread_attr_setstackaddr.c (__pthread_attr_setstackaddr):
Likewise.
* nptl/pthread_attr_setstacksize.c (__pthread_attr_setstacksize):
Likewise.
* nptl/pthread_condattr_setclock.c (pthread_condattr_setclock):
Likewise.
* nptl/pthread_create.c (__find_in_stack_list): Likewise.
* nptl/pthread_getattr_np.c (pthread_getattr_np): Likewise.
* nptl/pthread_mutex_cond_lock.c (__pthread_mutex_lock): Define to
use internal_function.
* nptl/pthread_mutex_init.c (__pthread_mutex_init): Convert to
prototype-style function definition.
* nptl/pthread_mutex_lock.c (__pthread_mutex_lock): Likewise.
(__pthread_mutex_cond_lock_adjust): Likewise. Use
internal_function.
* nptl/pthread_mutex_timedlock.c (pthread_mutex_timedlock):
Convert to prototype-style function definition.
* nptl/pthread_mutex_trylock.c (__pthread_mutex_trylock):
Likewise.
* nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_usercnt):
Likewise.
(__pthread_mutex_unlock): Likewise.
* nptl_db/td_ta_clear_event.c (td_ta_clear_event): Likewise.
* nptl_db/td_ta_set_event.c (td_ta_set_event): Likewise.
* nptl_db/td_thr_clear_event.c (td_thr_clear_event): Likewise.
* nptl_db/td_thr_event_enable.c (td_thr_event_enable): Likewise.
* nptl_db/td_thr_set_event.c (td_thr_set_event): Likewise.
* nss/makedb.c (process_input): Likewise.
* posix/fnmatch.c (__strchrnul): Likewise.
(__wcschrnul): Likewise.
(fnmatch): Likewise.
* posix/fnmatch_loop.c (FCT): Likewise.
* posix/glob.c (globfree): Likewise.
(__glob_pattern_type): Likewise.
(__glob_pattern_p): Likewise.
* posix/regcomp.c (re_compile_pattern): Likewise.
(re_set_syntax): Likewise.
(re_compile_fastmap): Likewise.
(regcomp): Likewise.
(regerror): Likewise.
(regfree): Likewise.
* posix/regexec.c (regexec): Likewise.
(re_match): Likewise.
(re_search): Likewise.
(re_match_2): Likewise.
(re_search_2): Likewise.
(re_search_stub): Likewise. Use internal_function.
(re_copy_regs): Likewise.
(re_set_registers): Convert to prototype-style function
definition.
(prune_impossible_nodes): Likewise. Use internal_function.
* resolv/inet_net_pton.c (inet_net_pton): Convert to
prototype-style function definition.
(inet_net_pton_ipv4): Likewise.
* stdlib/strtod_l.c (____STRTOF_INTERNAL): Likewise.
* sysdeps/pthread/aio_cancel.c (aio_cancel): Likewise.
* sysdeps/pthread/aio_suspend.c (aio_suspend): Likewise.
* sysdeps/pthread/timer_delete.c (timer_delete): Likewise.
* sysdeps/unix/sysv/linux/dl-openat64.c (openat64): Likewise.
Make variadic.
* time/strptime_l.c (localtime_r): Convert to prototype-style
function definition.
* wcsmbs/mbsnrtowcs.c (__mbsnrtowcs): Likewise.
* wcsmbs/mbsrtowcs_l.c (__mbsrtowcs_l): Likewise.
* wcsmbs/wcsnrtombs.c (__wcsnrtombs): Likewise.
* wcsmbs/wcsrtombs.c (__wcsrtombs): Likewise.
In the per-thread arenas we apply trim_threshold-based checks
to the extra space between the pad and the top_area. This isn't
quite accurate; instead we should harmonize with the way in
which trim_threshold is applied everywhere else, like systrim
and _int_free. The trimming check should be based on the size of
the top chunk and only the size of the top chunk. The following
patch harmonizes the trimming and makes it consistent for the main
arena and thread arenas.
In the old code a large padding request might have meant that
trimming was not triggered. Now trimming is considered first based
on the chunk, then the pad is subtracted, and the remainder trimmed.
This is how all the other trimmings operate. I didn't measure the
performance difference of this change because it corrects what I
consider to be a behavioural anomaly. We'll need some profile-driven
optimization to make this code better, and even there Ondrej and
others have better ideas on how to speed up malloc.
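Sketched, the new order of operations is (names as in the malloc
sources; the exact code is illustrative):

  top_size = chunksize (top_chunk);
  if (top_size < mp_.trim_threshold)    /* decide based on the chunk alone */
    return 0;
  extra = ALIGN_DOWN (top_size - pad - MINSIZE, pagesz);  /* subtract pad */
  if (extra > 0)
    shrink_heap (heap, extra);          /* trim the remainder */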
Tested on x86_64 with no regressions. Already reviewed by Siddhesh
Poyarekar and Mel Gorman, and discussed here:
https://sourceware.org/ml/libc-alpha/2015-05/msg00002.html
While doing code review I converted another bespoke round down, and
corrected a comment.
The comment spoke about keeping at least one page allocated even
during systrim, which is not correct. The code does nothing to keep
a page allocated. The code does attempt to keep PAD padding as
documented in comments and MINSIZE as required by design.
Historically in 2002 when Ulrich wrote the code (fa8d436c) the math
was inlined into one statement which did reserve an extra page:
extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
There is no reason given for this extra page.
In 2010 Anton Blanchard's change (b9b42ee0) from division
to shifts removed the extra page by dropping the "+ (pagesz-1)", which
meant we might have attempted to return -0 via MORECORE. The fix by Will
Newton in 2014 added a check for extra being zero (51a7380b).
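The progression, roughly:

  /* 2002 (fa8d436c): division form; effectively reserves one page:  */
  extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;

  /* 2010 (b9b42ee0): mask form; the "+ (pagesz-1)" is gone, so extra
     can now be 0:  */
  extra = (top_size - pad - MINSIZE - 1) & ~(pagesz - 1);

  /* 2014 (51a7380b): guard against the zero case:  */
  if (extra == 0)
    return 0;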
From first principles I see no reason why we should keep an extra
page of memory from being trimmed back to the OS. The only sensible
interface is to honour PAD padding as the function is documented,
with the caveat that MINSIZE is maintained for the top chunk.
That we've been using this code for 5+ years with no extra
page allocated is sufficient evidence that the comment should be changed
to match the code that I'm touching.
Tested on x86_64 and i686, no regressions.
If allocation on a non-main arena fails, the main arena is used
without checking to see if it is corrupt. Add a check that avoids the
main arena if it is corrupt.
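The check in arena_get_retry has this shape (a sketch; the exact
code is illustrative):

  if (ar_ptr != &main_arena)
    {
      (void) mutex_unlock (&ar_ptr->mutex);
      /* Don't touch the main arena if it is corrupt; the caller
         falls back to mmap.  */
      if (arena_is_corrupt (&main_arena))
        return NULL;
      ar_ptr = &main_arena;
      (void) mutex_lock (&ar_ptr->mutex);
    }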
* malloc/arena.c (arena_get_retry): Don't use main_arena if it is
corrupt.
The arena pointer in the first argument to arena_get2 was used in the
old days before per-thread arenas. It's unused now and hence can
be dropped.
ChangeLog:
* malloc/arena.c (arena_get2): Drop unused argument.
(arena_lock): Adjust.
(arena_get_retry): Likewise.
mksquashfs was reported in openSUSE to be causing segmentation faults when
creating installation images. Testing showed that mksquashfs sometimes
failed, and the failure could be reproduced within 10 attempts. The core
dump looked like the heap top was corrupted and was pointing to an unmapped
area. In other cases, this has been due to an application corrupting glibc
structures, but mksquashfs appears to be fine in this regard.
The problem is that heap_trim is "growing" the top into unmapped space.
If the top chunk == MINSIZE then top_area is -1 and this check does not
behave as expected due to a signed/unsigned comparison:

  if (top_area <= pad)
    return 0;
The next calculation, extra = ALIGN_DOWN(top_area - pad, pagesz), computes
extra as a negative number, which also goes unnoticed due to a
signed/unsigned comparison. We then call shrink_heap(heap, negative_number),
which crashes later. This patch adds a simple check against MINSIZE to make
sure extra does not become negative. It adds a cast to hint to the reader
that this is a signed vs unsigned issue.
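The added guard has this shape (a sketch):

  top_area = top_size - MINSIZE - 1;
  if (top_area < 0 || (unsigned long) top_area <= pad)
    return 0;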
Without the patch, mksquashfs failed within 10 attempts. With it applied,
it completed 1000 runs without error. The standard test suite "make check"
showed no changes in the summary of test results.
[BZ #17581] The checking chain of unused chunks was terminated by a hash of
the block pointer, which was sometimes confused with the chunk length byte.
We now avoid using a length byte equal to the magic byte.
When the malloc subsystem detects some kind of memory corruption,
depending on the configuration it prints the error, a backtrace, a
memory map and then aborts the process. In this process, the
backtrace() call may result in a call to malloc, resulting in
various kinds of problematic behavior.
In one case, the malloc it calls may detect a corruption and call
backtrace again, and a stack overflow may result due to the infinite
recursion. In another case, the malloc it calls may deadlock on an
arena lock with the malloc (or free, realloc, etc.) that detected the
corruption. In yet another case, if the program is linked with
pthreads, backtrace may do a pthread_once initialization, which
deadlocks on itself.
In all these cases, the program exit is not as intended. This is
avoidable by marking the arena in which malloc detected the corruption
as unusable. The following patch does that. Features of this patch
are as follows:
- A flag is added to the mstate struct of the arena to indicate if the
arena is corrupt.
- The flag is checked whenever malloc functions try to get a lock on
an arena. If the arena is unusable, a NULL is returned, causing the
malloc to use mmap or try the next arena.
- malloc_printerr sets the corrupt flag on the arena when it detects a
corruption.
- free does not concern itself with the flag at all. It is not
important since the backtrace workflow does not need free. A free
in a parallel thread may cause another corruption, but that's not
new.
- The flag check and set are not atomic and may race. This is fine
since we don't care about contention during the flag check. We want
to make sure that the malloc call in the backtrace does not trip on
itself and all that action happens in the same thread and not across
threads.
I verified that the test suite does not show any regressions due to
this patch. I also ran the malloc benchmarks and found an
insignificant difference in timings (< 2%).
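The flag mechanism, sketched (macro names follow the ChangeLog below;
the exact values and code are illustrative):

  #define ARENA_CORRUPTION_BIT (4U)
  #define arena_is_corrupt(A)  (((A)->flags & ARENA_CORRUPTION_BIT) != 0)
  #define set_arena_corrupt(A) ((A)->flags |= ARENA_CORRUPTION_BIT)

  /* When taking an arena lock:  */
  if (arena_is_corrupt (ar_ptr))
    ar_ptr = NULL;   /* caller falls back to mmap or the next arena */
  else
    (void) mutex_lock (&ar_ptr->mutex);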
* malloc/Makefile (tests): New test case tst-malloc-backtrace.
* malloc/arena.c (arena_lock): Check if arena is corrupt.
(reused_arena): Find a non-corrupt arena.
(heap_trim): Pass arena to unlink.
* malloc/hooks.c (malloc_check_get_size): Pass arena to
malloc_printerr.
(top_check): Likewise.
(free_check): Likewise.
(realloc_check): Likewise.
* malloc/malloc.c (malloc_printerr): Add arena argument.
(unlink): Likewise.
(munmap_chunk): Adjust.
(ARENA_CORRUPTION_BIT): New macro.
(arena_is_corrupt): Likewise.
(set_arena_corrupt): Likewise.
(sysmalloc): Use mmap if there are no usable arenas.
(_int_malloc): Likewise.
(__libc_malloc): Don't fail if arena_get returns NULL.
(_mid_memalign): Likewise.
(__libc_calloc): Likewise.
(__libc_realloc): Adjust for additional argument to
malloc_printerr.
(_int_free): Likewise.
(malloc_consolidate): Likewise.
(_int_realloc): Likewise.
(_int_memalign): Don't touch corrupt arenas.
* malloc/tst-malloc-backtrace.c: New test case.