In libthread_db, use the exported GLIBC_PRIVATE symbols directly
instead of relying on _thread_db_* variables in libpthread
(which used to be created by the DB_FUNCTION macros).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The initialization of the report_events TCB field is now performed
in __tls_init_tp instead of __pthread_initialize_minimal_internal
(in libpthread).
The events interface is difficult to test because GDB stopped using it
in 2015. The change to td_thr_get_info to ignore lookup failures is
enough to keep GDB working after this change.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
To help detect common kinds of memory (and other resource) management
bugs, GCC 11 adds support for the detection of mismatched calls to
allocation and deallocation functions. At each call site to a known
deallocation function GCC checks the set of allocation functions
the former can be paired with and, if the two don't match, issues
a -Wmismatched-dealloc warning (something similar happens in C++
for mismatched calls to new and delete). GCC also uses the same
mechanism to detect attempts to deallocate objects not allocated
by any allocation function (or pointers past the first byte into
allocated objects) by -Wfree-nonheap-object.
This support is enabled for built-in functions like malloc and free.
To extend it beyond those, GCC extends attribute malloc to designate
a deallocation function to which pointers returned from the allocation
function may be passed to deallocate the allocated objects. Another,
optional argument designates the positional argument to which
the pointer must be passed.
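For illustration, a minimal sketch using hypothetical my_alloc and
my_free functions (not part of this change; the attribute syntax is as
documented for GCC 11):

  #include <stddef.h>

  void my_free (void *ptr);

  /* Plain malloc marks my_alloc as allocator-like; the two-argument
     form pairs it with my_free, whose first argument receives the
     pointer.  */
  __attribute__ ((malloc, malloc (my_free, 1)))
  void *my_alloc (size_t size);

  void
  example (void)
  {
    void *p = my_alloc (16);
    /* Calling free (p) here would trigger -Wmismatched-dealloc.  */
    my_free (p);
  }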
This change is the first step in enabling this extended support for
Glibc.
This is similar to the fix for elf/tst-pldd (2f9046fb05):
it checks the ptrace_scope value (values higher than 2 are too restrictive
to allow the test to run) and it rearranges the spawned processes
to make the target process the gdb child.
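A minimal sketch of such a check (a hypothetical helper, not the
actual test code):

  #include <stdio.h>

  /* Return the Yama ptrace_scope value, or 0 if the sysctl does not
     exist (no Yama restrictions).  */
  static int
  get_ptrace_scope (void)
  {
    FILE *f = fopen ("/proc/sys/kernel/yama/ptrace_scope", "r");
    if (f == NULL)
      return 0;
    int value = 0;
    if (fscanf (f, "%d", &value) != 1)
      value = 0;
    fclose (f);
    return value;
  }

The test is then skipped when the returned value is too restrictive.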
Checked on x86_64-linux-gnu with ptrace_scope set to 1.
Keep __exit_funcs_lock held almost all the time and unlock it only to
execute callbacks. This fixes two issues.
1. f->func.cxa was modified outside the lock, allowing a rare data race
like:
thread 0: __run_exit_handlers unlock __exit_funcs_lock
thread 1: __internal_atexit locks __exit_funcs_lock
thread 0: f->flavor = ef_free;
thread 1: sees ef_free and uses it as new
thread 1: new->func.cxa.fn = (void (*) (void *, int)) func;
thread 1: new->func.cxa.arg = arg;
thread 1: new->flavor = ef_cxa;
thread 0: cxafct = f->func.cxa.fn; // it's the wrong fn!
thread 0: cxafct (f->func.cxa.arg, status); // it's the wrong arg!
thread 0: goto restart;
thread 0: calls the same exit_function again since it's ef_cxa
2. Don't unlock in the main while loop after *listp = cur->next. If
*listp is NULL and __exit_funcs_done is false, another thread may fail
in __new_exitfn on assert (l != NULL):
thread 0: *listp = cur->next; // It can be the last: *listp = NULL.
thread 0: __libc_lock_unlock
thread 1: __libc_lock_lock in __on_exit
thread 1: __new_exitfn
thread 1: if (__exit_funcs_done) // false: thread 0 isn't there yet.
thread 1: l = *listp
thread 1: moves on and crashes on assert (l != NULL);
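Both issues are avoided by the locking discipline sketched below
(simplified, not the literal stdlib/exit.c code; cur is the current
struct exit_function_list and status is the exit status):

  __libc_lock_lock (__exit_funcs_lock);
  while (cur != NULL && cur->idx > 0)
    {
      struct exit_function *f = &cur->fns[--cur->idx];
      if (f->flavor == ef_cxa)
        {
          void (*cxafct) (void *, int) = f->func.cxa.fn;
          void *arg = f->func.cxa.arg;
          /* Mark the slot free while still holding the lock, so a
             concurrent __internal_atexit cannot expose a
             half-updated entry.  */
          f->flavor = ef_free;
          /* Unlock only around the user callback itself.  */
          __libc_lock_unlock (__exit_funcs_lock);
          cxafct (arg, status);
          __libc_lock_lock (__exit_funcs_lock);
        }
    }
  __libc_lock_unlock (__exit_funcs_lock);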
The test needs multiple iterations to consistently fail without the fix.
Fixes https://sourceware.org/bugzilla/show_bug.cgi?id=27749
Checked on x86_64-linux-gnu.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The error paths of __check_native would leave the socket FD open on
return, resulting in an FD leak. Rework function exit paths so that
the fd is always closed on return.
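A minimal sketch of the reworked shape (a hypothetical simplification,
not the actual __check_native code):

  #include <linux/netlink.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  static void
  check_native_sketch (void)
  {
    int fd = socket (AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
    if (fd < 0)
      return;

    struct sockaddr_nl nladdr;
    memset (&nladdr, 0, sizeof (nladdr));
    nladdr.nl_family = AF_NETLINK;
    if (bind (fd, (struct sockaddr *) &nladdr, sizeof (nladdr)) != 0)
      goto out;

    /* ... send the request and parse the reply; every failure
       branch also does 'goto out' ... */

  out:
    /* Single exit point: the descriptor cannot leak.  */
    close (fd);
  }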
(FYI, this is a repost of
https://sourceware.org/pipermail/libc-alpha/2019-July/105035.html now
that FSF papers have been signed and confirmed on FSF side).
This trivial patch attempts to fix BZ 24106. Basically, the bash used
locally when building glibc on the host must not leak into the installed
glibc, as the system where it is installed might be different and use
another bash location.
So I have looked for all occurrences of @BASH@ or $(BASH) in installed
files and replaced them with /bin/bash. This was suggested by Florian
Weimer in the bug report.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
The symbols were moved using scripts/move-symbol-to-libc.py,
in one commit due to their dependency on the internal
__concurrency_level variable.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbols were moved using scripts/move-symbol-to-libc.py.
Also clean up some unwinder linking leftover in the same spot
in nptl/pthreadP.h.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
It is necessary to arrange for a
__libpthread_version_placeholder@GLIBC_2.6 on some of the powerpc
targets.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
For some reason only dlopen failure caused dtv gaps to be reused.
It is possible that the intent was to never reuse modids for a
different module, but after dlopen failure all gaps are reused,
not just the ones caused by the unfinished dlopen.
So the code already has to handle reused modids, which seems to
work; however, the data races at thread creation and TLS access
(see bug 19329 and bug 27111) may be more severe if slots are
reused, so this is scheduled after those fixes. I think fixing
the races is not simpler if reuse is disallowed, and reuse has
other benefits, so set GL(dl_tls_dtv_gaps) whenever entries are
removed from the middle of the slotinfo list. The value does not
have to be correct: an incorrect true value causes the next modid
query to do a slotinfo walk; an incorrect false value leaves the
gaps unused and new entries are added at the end.
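In sketch form (idx is an illustrative modid variable; the real logic
lives in the dlopen/dlclose slotinfo handling):

  /* Removing an entry from the middle of the slotinfo list marks
     possible gaps; an incorrect true value merely costs a slotinfo
     walk on the next modid query.  */
  if (idx != GL(dl_tls_max_dtv_idx))
    GL(dl_tls_dtv_gaps) = true;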
Fixes bug 27135.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Test concurrent dlopen and pthread_create when the loaded modules have
TLS. This triggers dl-tls assertion failures more reliably than the
nptl/tst-stack4 test.
The dlopened module has 100 DT_NEEDED dependencies with TLS; they were
reused from an existing TLS test. The number of created threads during
dlopen depends on filesystem speed and hardware, but at most 3 threads
are alive at a time to limit resource usage.
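In outline, the test does something like this (a hedged sketch, not
the actual test source; the module name is a placeholder):

  #include <dlfcn.h>
  #include <pthread.h>

  static __thread int tls_var;  /* Touching TLS exercises the DTV.  */

  static void *
  thr (void *arg)
  {
    (void) arg;
    tls_var = 1;
    return NULL;
  }

  int
  main (void)
  {
    pthread_t t[3];
    for (int i = 0; i < 3; i++)
      pthread_create (&t[i], NULL, thr, NULL);
    /* Races DTV setup in the new threads against dlopen of a module
       with many TLS-using dependencies.  */
    void *h = dlopen ("tst-tls-mod.so", RTLD_NOW);
    for (int i = 0; i < 3; i++)
      pthread_join (t[i], NULL);
    if (h != NULL)
      dlclose (h);
    return 0;
  }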
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This is a follow up patch to the fix for bug 19329. This adds relaxed
MO atomics to accesses that were previously data races but are now
race conditions, and where relaxed MO is sufficient.
The race conditions all follow the pattern that the write is behind the
dlopen lock, but a read can happen concurrently (e.g. during tls access)
without holding the lock. For slotinfo entries the read value only
matters if it reads from a synchronized write in dlopen or dlclose,
otherwise the related dtv entry is not valid to access so it is fine
to leave it in an inconsistent state. The same applies for
GL(dl_tls_max_dtv_idx) and GL(dl_tls_generation), but there the
algorithm relies on the fact that the read of the last synchronized
write is an increasing value.
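The pattern, as a hedged sketch using glibc's internal relaxed atomic
macros (listp, idx, new_map and newgen are illustrative names):

  /* Writer side, serialized by the dlopen lock: */
  atomic_store_relaxed (&listp->slotinfo[idx].map, new_map);
  atomic_store_relaxed (&listp->slotinfo[idx].gen, newgen);

  /* Reader side, e.g. during TLS access, without the lock; the
     value only matters if it reads from a synchronized write: */
  struct link_map *map = atomic_load_relaxed (&listp->slotinfo[idx].map);
  size_t gen = atomic_load_relaxed (&listp->slotinfo[idx].gen);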
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
DTV setup at thread creation (_dl_allocate_tls_init) is changed
to take the dlopen lock, GL(dl_load_lock). Avoiding data races
here without locks would require design changes: the map that is
accessed for static TLS initialization here may be concurrently
freed by dlclose. That use-after-free may be solved by locking
only around static TLS setup, or by ensuring dlclose does not
free modules with static TLS; however, currently every link map
with TLS has to be accessed at least to see if it needs static
TLS. And even if that were solved, a lot of atomics would still
be needed to synchronize DTV-related globals without a lock. So fix
both bug 19329 and bug 27111 with a lock that prevents DTV setup
running concurrently with dlopen or dlclose.
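In outline (a simplified sketch of the fix, not the full function):

  void *
  _dl_allocate_tls_init (void *result)
  {
    /* Hold the dlopen lock so the slotinfo list and the link maps
       used for static TLS setup cannot change concurrently.  */
    __rtld_lock_lock_recursive (GL(dl_load_lock));
    /* ... walk the slotinfo list, set up the DTV and static TLS ... */
    __rtld_lock_unlock_recursive (GL(dl_load_lock));
    return result;
  }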
_dl_update_slotinfo at TLS access still does not use any locks, so
CONCURRENCY NOTES are added to explain the synchronization.
The early exit from the slotinfo walk when max_modid is reached
is not strictly necessary, but does not hurt either.
An incorrect acquire load was removed from _dl_resize_dtv: it
did not synchronize with any release store or fence and
synchronization is now handled separately at thread creation
and TLS access time.
There are still a number of racy read accesses to globals that
will be changed to relaxed MO atomics in a followup patch. This
should not introduce regressions compared to existing behaviour and
avoids cluttering the main part of the fix.
Not all TLS access related data races got fixed here: there are
additional races at lazy tlsdesc relocations; see bug 27137.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbols pthread_clockjoin_np, pthread_join, pthread_timedjoin_np,
pthread_tryjoin_np, thrd_join were moved using
scripts/move-symbol-to-libc.py.
Moving the symbols at the same time avoids the need for temporary
exports.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This replaces the FREE_P macro with the __nptl_stack_in_use inline
function. stack_list_del is renamed to __nptl_stack_list_del,
stack_list_add to __nptl_stack_list_add, __deallocate_stack to
__nptl_deallocate_stack, free_stacks to __nptl_free_stacks.
It is convenient to move __libpthread_freeres into libc at the
same time. This removes the temporary __default_pthread_attr_freeres
export and restores full freeres coverage for __default_pthread_attr.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
The export of __default_pthread_attr_freeres is temporary. There
is a minor regression in freeres coverage because in the dynamic case,
__default_pthread_attr_freeres is no longer called if libpthread is
not linked in.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This removes the DEBUGGING_P macro and the __pthread_debug variable.
The __find_in_stack_list function is now unused and deleted as well.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The nptl version is used as the default since, with the symbol now
always present, the single-thread optimization is tricky.
Hurd is not changed; it uses its own lock scheme (which calls
_cthreads_funlockfile).
Checked on x86_64-linux-gnu.
The nptl version is used as the default since, with the symbol now
always present, the single-thread optimization is tricky.
Hurd is not changed; it uses its own lock scheme (which calls
_cthreads_ftrylockfile).
Checked on x86_64-linux-gnu.
The nptl version is used as the default since, with the symbol now
always present, the single-thread optimization is tricky.
Hurd is not changed; it uses its own lock scheme (which calls
_cthreads_flockfile).
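For reference, the nptl implementation that becomes the default is
essentially the following (a simplified sketch; ftrylockfile and
funlockfile are analogous with _IO_lock_trylock and _IO_lock_unlock,
and the real file also sets up the public aliases):

  #include <stdio.h>

  void
  __flockfile (FILE *stream)
  {
    _IO_lock_lock (*stream->_lock);
  }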
Checked on x86_64-linux-gnu.
Linux 5.12 adds the constants PTRACE_SYSEMU and
PTRACE_SYSEMU_SINGLESTEP for s390. Add these to glibc.
Tested with build-many-glibcs.py for s390-linux-gnu and
s390x-linux-gnu.
These workload traces cover the whole "long double" range.
This patch was prepared with the help of Adhemerval Zanella.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
The GLIBC_PRIVATE exports for these symbols are expected to be
temporary.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
stack_list_del overwrites the in-flight stack variable.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
All the stack lists are now in _rtld_global, so it is possible
to change stack permissions directly from there, instead of
calling into libpthread to do the change.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Only ia64 needs the page mask, and it is straightforward
to compute the value within the function itself.
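Presumably along these lines (a sketch):

  /* Derive the page mask locally from the run-time page size
     instead of receiving it as a parameter.  */
  const uintptr_t pagemask = ~((uintptr_t) GLRO(dl_pagesize) - 1);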
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Permissions of the cached stacks may have to be updated if an object
is loaded that requires executable stacks, so the dynamic loader
needs to know about these cached stacks.
The move of in_flight_stack and stack_cache_actsize is a requirement for
merging __reclaim_stacks into the fork implementation in libc.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>