Linux 6.11 has getrandom() in the vDSO. It operates on a thread-local opaque
state allocated with mmap using flags specified by the vDSO.
Multiple states are allocated at once, as many as fit into a page, and
these are held in an array of available states to be doled out to each
thread upon first use, and recycled when a thread terminates. As these
states run low, more are allocated.
To make this procedure async-signal-safe, a simple guard is used in the
LSB of the opaque state address, falling back to the syscall if there's
reentrancy contention.
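In rough pseudocode, the guard works as follows (illustrative names
only, not the actual symbols; the real code is in the Linux getrandom
implementation):

  #include <stdint.h>
  #include <unistd.h>
  #include <sys/syscall.h>

  static __thread void *grnd_state;   /* opaque per-thread vDSO state  */

  /* Assumed wrapper around the vDSO entry point.  */
  extern ssize_t vdso_getrandom (void *state, void *buf, size_t len,
                                 unsigned int flags);

  static ssize_t
  getrandom_guarded (void *buf, size_t len, unsigned int flags)
  {
    void *state = grnd_state;
    /* A set LSB means a signal handler interrupted an in-progress
       getrandom call in this same thread: fall back to the syscall.  */
    if (state == NULL || ((uintptr_t) state & 1) != 0)
      return syscall (SYS_getrandom, buf, len, flags);
    grnd_state = (void *) ((uintptr_t) state | 1);   /* mark in use  */
    ssize_t ret = vdso_getrandom (state, buf, len, flags);
    grnd_state = state;                              /* release  */
    return ret;
  }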
Also, _Fork() is handled by blocking signals on opaque state allocation
(so _Fork() always sees a consistent state even if it interrupts a
getrandom() call) and by iterating over the thread stack cache on
reclaim_stack. Each opaque state will be in the free states list
(grnd_alloc.states) or allocated to a running thread.
The cancellation is handled by always passing the GRND_NONBLOCK flag
when calling the vDSO, and falling back to the cancellable syscall if
the kernel returns EAGAIN (would block). Since getrandom is not
defined by POSIX and cancellation is supported as an extension, the
cancellation is handled as 'may occur' instead of 'shall occur' [1],
meaning that if the vDSO does not block (the expected behavior)
getrandom will not act as a cancellation entry point. This avoids a
pthread_testcancel call on the fast path (unlike 'shall occur'
functions such as sem_wait()).
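Continuing the sketch above, the flag handling amounts to
(getrandom_cancellable_syscall is an illustrative name, and the vDSO
call is assumed to report the would-block case as -EAGAIN):

  /* Never let the vDSO block.  */
  ssize_t ret = vdso_getrandom (state, buf, len, flags | GRND_NONBLOCK);
  if (ret == -EAGAIN)
    /* Would block: take the cancellable syscall path, which is a
       proper cancellation point.  */
    return getrandom_cancellable_syscall (buf, len, flags);
  return ret;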
It is currently enabled for x86_64, which is available in Linux 6.11,
and aarch64, powerpc32, powerpc64, loongarch64, and s390x, which are
available in Linux 6.12.
Link: https://pubs.opengroup.org/onlinepubs/9799919799/nframe.html [1]
Co-developed-by: Jason A. Donenfeld <Jason@zx2c4.com>
Tested-by: Jason A. Donenfeld <Jason@zx2c4.com> # x86_64
Tested-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> # x86_64, aarch64
Tested-by: Xi Ruoyao <xry111@xry111.site> # x86_64, aarch64, loongarch64
Tested-by: Stefan Liebler <stli@linux.ibm.com> # s390x
Linux 4.5 removed thread stack annotations due to the complexity of
computing them [1], and Linux 5.17 added PR_SET_VMA_ANON_NAME
as a way to name anonymous virtual memory areas.
This patch adds decoration on the stack created and used by
pthread_create; for a glibc-created thread stack, /proc/self/maps will
now show:
[anon: glibc: pthread stack: <tid>]
And for user-provided stacks:
[anon: glibc: pthread user stack: <tid>]
The guard page is not decorated, and the mapping name is cleared when
the thread finishes its execution (so the cached stack does not have any
name associated).
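The decoration itself boils down to a single prctl call; a minimal
sketch, where stack, size, and tid are assumed to come from the
pthread_create setup code:

  #include <stdint.h>
  #include <stdio.h>
  #include <sys/prctl.h>  /* PR_SET_VMA, PR_SET_VMA_ANON_NAME (5.17+)  */

  static void
  name_stack (void *stack, size_t size, int tid)
  {
    char name[64];
    snprintf (name, sizeof name, "glibc: pthread stack: %d", tid);
    /* Kernels without CONFIG_ANON_VMA_NAME reject this with EINVAL;
       the failure can simply be ignored.  */
    prctl (PR_SET_VMA, PR_SET_VMA_ANON_NAME, (uintptr_t) stack, size,
           (uintptr_t) name);
  }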
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
[1] 65376df582
Co-authored-by: Ian Rogers <irogers@google.com>
Reviewed-by: DJ Delorie <dj@redhat.com>
Created tunable glibc.pthread.stack_hugetlb to control when hugepages
can be used for stack allocation.
In case THP are enabled and glibc.pthread.stack_hugetlb is set to
0, glibc will madvise the kernel not to use hugepages for stack
allocations.
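A sketch of the effect (assuming __nptl_stack_hugetlb is the variable
backing the tunable, and mem/size describe the new stack mapping):

  #include <sys/mman.h>

  /* Right after the stack is mmap'ed and before it is first touched,
     opt the whole range out of transparent hugepages.  */
  if (__nptl_stack_hugetlb == 0)
    madvise (mem, size, MADV_NOHUGEPAGE);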
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The implementation is based on scalar ChaCha20 with a per-thread cache.
It uses getrandom or /dev/urandom as fallback to get the initial entropy,
and reseeds the internal state on every 16MB of consumed buffer.
To improve performance and lower memory consumption, the per-thread
cache is allocated lazily on the first call to an arc4random function,
and if the memory allocation fails getentropy or /dev/urandom is used
as a fallback.
The cache is also cleared on thread exit iff it was initialized (so if
arc4random is not called it is not touched).
Although it is lock-free, arc4random is still not async-signal-safe
(the per thread state is not updated atomically).
The ChaCha20 implementation is based on RFC8439 [1], omitting the final
XOR of the keystream with the plaintext because the plaintext is a
stream of zeros. This strategy is similar to what OpenBSD arc4random
does.
The arc4random_uniform implementation is based on previous work by
Florian Weimer, where the algorithm follows Jérémie Lumbroso's paper
Optimal Discrete Uniform Generation from Coin Flips, and Applications
(2013) [2], which credits Donald E. Knuth and Andrew C. Yao, The
Complexity of Nonuniform Random Number Generation (1976), for solving
the general case.
The main advantage of this method is that the unit of randomness is
not the uniform random variable (uint32_t), but a random bit. It
optimizes the internal buffer sampling by initially consuming a 32-bit
random variable and then sampling byte by byte. Depending on the upper
bound requested, it might lead to better CPU utilization.
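A self-contained sketch of the coin-flip method from [2] (the 'Fast
Dice Roller'); here random_bit() stands in for the buffered bit
sampling described above:

  #include <stdint.h>
  #include <stdlib.h>

  static uint32_t
  random_bit (void)
  {
    /* One fresh random bit; the real code amortizes this by consuming
       a 32-bit word first and then one byte at a time.  */
    return arc4random () & 1;
  }

  /* Return a uniform value in [0, n), n > 0, consuming random bits.  */
  uint32_t
  uniform (uint32_t n)
  {
    uint64_t v = 1, c = 0;
    for (;;)
      {
        v <<= 1;
        c = (c << 1) | random_bit ();
        if (v >= n)
          {
            if (c < n)
              return c;     /* accept  */
            v -= n;         /* reject: recycle the leftover range  */
            c -= n;
          }
      }
  }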
Checked on x86_64-linux-gnu, aarch64-linux-gnu, and powerpc64le-linux-gnu.
Co-authored-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Yann Droneaud <ydroneaud@opteya.com>
[1] https://datatracker.ietf.org/doc/html/rfc8439
[2] https://arxiv.org/pdf/1304.1916.pdf
It also fixes the SINGLE_THREAD_P macro for SINGLE_THREAD_BY_GLOBAL:
the single-thread.h header was included in the wrong order, since the
define needs to come before including sysdeps/unix/sysdep.h. The macro
is now moved to a per-arch single-thread.h header.
SINGLE_THREAD_P is now used in some more places.
Checked on aarch64-linux-gnu and x86_64-linux-gnu.
Some Linux interfaces never restart after being interrupted by a signal
handler, regardless of the use of SA_RESTART [1]. It means that for
pthread cancellation, if the target thread disables cancellation with
pthread_setcancelstate and calls such interfaces (like poll or select),
it should not see spurious EINTR failures due to the internal
SIGCANCEL.
However, recent changes made pthread_cancel always send the internal
signal, regardless of the target thread's cancellation status or type.
To fix it, the previous semantics are restored, where the cancel
signal is only sent if the target thread has cancellation enabled in
asynchronous mode.
The cancel state and cancel type are moved back to cancelhandling,
and atomic operations are used to synchronize between threads. The
patch essentially reverts the following commits:
8c1c0aae20 nptl: Move cancel type out of cancelhandling
2b51742531 nptl: Move cancel state out of cancelhandling
26cfbb7162 nptl: Remove CANCELING_BITMASK
However, I changed the atomic operations to follow the internal C11
semantics and removed the macro usage, which simplifies the resulting
code a bit (and removes another usage of the old atomic macros).
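A simplified sketch of the restored decision (bitmask names follow
the reverted commits; the real code also handles the exiting and
canceling states):

  /* Mark the target as canceled; only interrupt it with SIGCANCEL if
     cancellation is enabled (state bit clear) and the type is
     asynchronous (type bit set).  Deferred threads act on the bit
     themselves at the next cancellation point.  */
  int oldch = atomic_fetch_or_acquire (&pd->cancelhandling,
                                       CANCELED_BITMASK);
  if ((oldch & CANCELSTATE_BITMASK) == 0
      && (oldch & CANCELTYPE_BITMASK) != 0)
    __pthread_kill_internal (th, SIGCANCEL);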
Checked on x86_64-linux-gnu, i686-linux-gnu, aarch64-linux-gnu,
and powerpc64-linux-gnu.
[1] https://man7.org/linux/man-pages/man7/signal.7.html
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
For audit modules and dependencies with initial-exec TLS, we cannot
set the initial TLS image on default loader initialization because it
would already be set by the audit setup. However, subsequent thread
creation would need to follow the default behaviour.
This patch fixes it by setting the l_auditing link_map field not only
for the audit modules, but also for all their dependencies. This is
used in _dl_allocate_tls_init to avoid the static TLS initialization
at load time.
Checked on x86_64-linux-gnu, i686-linux-gnu, and aarch64-linux-gnu.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
I used these shell commands:
../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")
and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 7061 files FOO.
I then removed trailing white space from math/tgmath.h,
support/tst-support-open-dev-null-range.c, and
sysdeps/x86_64/multiarch/strlen-vec.S, to work around the following
obscure pre-commit check failure diagnostics from Savannah. I don't
know why I run into these diagnostics whereas others evidently do not.
remote: *** 912-#endif
remote: *** 913:
remote: *** 914-
remote: *** error: lines with trailing whitespace found
...
remote: *** error: sysdeps/unix/sysv/linux/statx_cp.c: trailing lines
A new thread exit lock and flag are introduced. They are used to
detect that the thread is about to exit or has exited in
__pthread_kill_internal, and the signal is not sent in this case.
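A sketch of the idea (field names are assumed from the description
above):

  /* In __pthread_kill_internal, with signals blocked: the exiting
     thread sets the flag under the same lock before it exits.  */
  __libc_lock_lock (pd->exit_lock);
  int ret;
  if (pd->exit_pending)
    /* The thread is exiting or gone; its TID may already be reused,
       so do not send the signal.  */
    ret = pd->tid == 0 ? ESRCH : 0;
  else
    ret = tgkill (getpid (), pd->tid, signo) == 0 ? 0 : errno;
  __libc_lock_unlock (pd->exit_lock);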
The test sysdeps/pthread/tst-pthread_cancel-select-loop.c is derived
from a downstream test originally written by Marek Polacek.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
We stopped adding "Contributed by" or similar lines in sources in 2012
in favour of git logs and keeping the Contributors section of the
glibc manual up to date. Removing these lines makes the license
header a bit more consistent across files and also removes the
possibility of error in attribution when license blocks or files are
copied across since the contributed-by lines don't actually reflect
reality in those cases.
Move all "Contributed by" and similar lines (Written by, Test by,
etc.) into a new file CONTRIBUTED-BY to retain record of these
contributions. These contributors are also mentioned in
manual/contrib.texi, so we just maintain this additional record as a
courtesy to the earlier developers.
The following scripts were used to filter a list of files to edit in
place and to clean up the CONTRIBUTED-BY file respectively. These
were not added to the glibc sources because they're not expected to be
of any use in future given that this is a one time task:
https://gist.github.com/siddhesh/b5ecac94eabfd72ed2916d6d8157e7dc
https://gist.github.com/siddhesh/15ea1f5e435ace9774f485030695ee02
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
The clone3 system call (since Linux 5.3) provides a superset of the
functionality of clone and clone2. It also provides a number of API
improvements, including the ability to specify the size of the child's
stack area, which can be used by the kernel to compute the shadow
stack size when allocating the shadow stack. Add:
extern int __clone_internal (struct clone_args *__cl_args,
                             int (*__func) (void *__arg), void *__arg);
to provide an abstract interface for clone, clone2 and clone3.
1. Simplify stack management for thread creation by passing both stack
base and size to create_thread.
2. Consolidate clone vs clone2 differences into a single file.
3. Call __clone3 if HAVE_CLONE3_WRAPPER is defined. If __clone3
returns -1 with ENOSYS, fall back to clone or clone2 (see the sketch
after this list).
4. Use only __clone_internal to clone a thread. Since the stack size
argument for create_thread is now unconditional, always pass stack size
to create_thread.
5. Enable the public clone3 wrapper in the future after it has been
added to all targets.
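A sketch of the fallback logic in __clone_internal (simplified;
clone_internal_fallback is an illustrative name for the legacy
clone/clone2 path):

  #include <errno.h>

  int
  __clone_internal (struct clone_args *cl_args,
                    int (*func) (void *), void *arg)
  {
  #ifdef HAVE_CLONE3_WRAPPER
    int ret = __clone3 (cl_args, sizeof (*cl_args), func, arg);
    /* A kernel or seccomp sandbox without clone3 reports ENOSYS;
       only then take the legacy path.  */
    if (ret != -1 || errno != ENOSYS)
      return ret;
  #endif
    /* Translate struct clone_args back to the clone/clone2 calling
       convention (details omitted).  */
    return clone_internal_fallback (cl_args, func, arg);
  }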
NB: The sandbox will return ENOSYS for clone3 in both Chromium:
The following revision refers to this bug:
218438259d
commit 218438259dd795456f0a48f67cbe5b4e520db88b
Author: Matthew Denton <mpdenton@chromium.org>
Date: Thu Jun 03 20:06:13 2021
Linux sandbox: return ENOSYS for clone3
Because clone3 uses a pointer argument rather than a flags argument, we
cannot examine the contents with seccomp, which is essential to
preventing sandboxed processes from starting other processes. So, we
won't be able to support clone3 in Chromium. This CL modifies the
BPF policy to return ENOSYS for clone3 so glibc always uses the fallback
to clone.
Bug: 1213452
Change-Id: I7c7c585a319e0264eac5b1ebee1a45be2d782303
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/2936184
Reviewed-by: Robert Sesek <rsesek@chromium.org>
Commit-Queue: Matthew Denton <mpdenton@chromium.org>
Cr-Commit-Position: refs/heads/master@{#888980}
[modify] https://crrev.com/218438259dd795456f0a48f67cbe5b4e520db88b/sandbox/linux/seccomp-bpf-helpers/baseline_policy.cc
and Firefox:
https://hg.mozilla.org/integration/autoland/rev/ecb4011a0c76
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Now that the thread cancellation type is not accessed concurrently
anymore, it is possible to move it out of cancelhandling.
With the cancel state moved out of the internal thread cancel handling
state, there is no need to check whether the cancelled bit was set in
the CAS operation.
This allows simplifying the cancellation wrappers, and the
CANCEL_CANCELED_AND_ASYNCHRONOUS macro is removed.
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Now that the thread cancellation state is not accessed concurrently
anymore, it is possible to move it out of 'cancelhandling'.
The code is also simplified: CANCELLATION_P is replaced with an
internal pthread_testcancel call, and the CANCELSTATE_BIT{MASK} is
removed.
With this behavior pthread_setcancelstate does not need to act on
cancellation if the cancel type is asynchronous (it is already handled
either by pthread_setcanceltype or by the signal handler).
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
To set up either the thread scheduling parameters or the affinity,
pthread_create enforces synchronization on the created thread, which
waits until its parent either releases PD ownership or sends a
cancellation signal if a failure occurs.
However, cancelling the thread does not deallocate the newly created
stack, since cancellation expects a pthread_join to deallocate any
allocated thread resources (thread stack or TLS).
This patch makes the thread resource deallocation in case of failure
synchronous: the creating thread signals the created thread to exit
early so it can be joined, and the creating thread is responsible for
the resource cleanup before returning to the caller.
To signal the creating thread that a failure has occurred, an unused
'struct pthread' member, parent_cancelhandling_unsed, now indicates
whether the setup has failed so the creating thread can exit properly.
This strategy also simplifies things by not using thread cancellation,
and thus not loading libgcc_s in the signal handler (which is avoided
in thread cancellation, since pthread_cancel is the one responsible
for dlopening libgcc_s). Another advantage is that since the early
exit is moved to the first step of thread creation, the signal mask is
not yet set, so it cannot interfere with the setxid change-ID handler.
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Use the __nptl_tls_static_size_for_stack inline function instead,
and the GLRO (dl_tls_static_align) value directly.
The computation of GLRO (dl_tls_static_align) in
_dl_determine_tlsoffset ensures that the alignment is at least
TLS_TCB_ALIGN, which is at least STACK_ALIGN (see allocate_stack).
Therefore, the additional rounding-up step is removed.
Also move the initialization of the default stack size from
__pthread_initialize_minimal_internal to __pthread_early_init.
This introduces an extra system call during single-threaded startup,
but this simplifies the initialization sequence. No locking is
needed around the writes to __default_pthread_attr because the
process is single-threaded at this point.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This replaces the FREE_P macro with the __nptl_stack_in_use inline
function. stack_list_del is renamed to __nptl_stack_list_del,
stack_list_add to __nptl_stack_list_add, __deallocate_stack to
__nptl_deallocate_stack, free_stacks to __nptl_free_stacks.
It is convenient to move __libpthread_freeres into libc at the
same time. This removes the temporary __default_pthread_attr_freeres
export and restores full freeres coverage for __default_pthread_attr.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
stack_list_del overwrites the in-flight stack variable.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
All the stack lists are now in _rtld_global, so it is possible
to change stack permissions directly from there, instead of
calling into libpthread to do the change.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Only ia64 needs the page mask, and it is straightforward
to compute the value within the function itself.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Permissions of the cached stacks may have to be updated if an object
is loaded that requires executable stacks, so the dynamic loader
needs to know about these cached stacks.
The move of in_flight_stack and stack_cache_actsize is a requirement for
merging __reclaim_stacks into the fork implementation in libc.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
It is no longer needed after the SINGLE_THREAD_P consolidation.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This allows the elimination of the __libc_multiple_threads_ptr
variable in libpthread and its initialization procedure.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
The stack list is available in ld.so since commit
1daccf403b ("nptl: Move stack list
variables into _rtld_global"), so it's possible to walk the stack
list directly in ld.so and perform the initialization there.
This eliminates an unprotected function pointer from _rtld_global
and reduces the libpthread initialization code.
The signal handler is exported as __nptl_setxid_sighandler, so
that the libpthread initialization code can install it. This
is sufficient for now because it is guaranteed to happen before
the first pthread_create call.
I used these shell commands:
../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")
and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 6694 files FOO.
I then removed trailing white space from benchtests/bench-pthread-locks.c
and iconvdata/tst-iconv-big5-hkscs-to-2ucs4.c, to work around this
diagnostic from Savannah:
remote: *** pre-commit check failed ...
remote: *** error: lines with trailing whitespace found
remote: error: hook declined to update refs/heads/master
Now __thread_gscope_wait (the function behind THREAD_GSCOPE_WAIT,
formerly __wait_lookup_done) can be implemented directly in ld.so,
eliminating the unprotected GL (dl_wait_lookup_done) function
pointer.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
There are several compiler implementations that allow large stack
allocations to jump over the guard page at the end of the stack and
corrupt memory beyond that. See CVE-2017-1000364.
Compilers can emit code to probe the stack such that the guard page
cannot be skipped, but on aarch64 the probe interval is 64K by default
instead of the minimum supported page size (4K).
This patch enforces at least 64K guard on aarch64 unless the guard
is disabled by setting its size to 0. For backward compatibility
reasons the increased guard is not reported, so it is only observable
by exhausting the address space or parsing /proc/self/maps on Linux.
On other targets the patch has no effect. If the stack probe interval
is larger than a page size on a target, then ARCH_MIN_GUARD_SIZE can
be defined to get a large enough stack guard on libc-allocated stacks.
The patch does not affect threads with user allocated stacks.
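A sketch of the enforcement in the allocation path (illustrative):

  #ifndef ARCH_MIN_GUARD_SIZE
  # define ARCH_MIN_GUARD_SIZE 0  /* no larger probe interval  */
  #endif

  /* A guard size of 0 disables the guard entirely; any nonzero
     request is silently raised to the architecture minimum.  */
  if (guardsize > 0 && guardsize < ARCH_MIN_GUARD_SIZE)
    guardsize = ARCH_MIN_GUARD_SIZE;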
Fixes bug 26691.
The per-thread state is refactored to use two strategies:
1. The default one uses a TLS structure, which will be placed in the
static TLS space (using the __thread keyword).
2. Linux allocates it via struct pthread and accesses it through
THREAD_* macros (both strategies are sketched below).
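A sketch of the two strategies (the accessor and field names are
assumed for illustration):

  #if USE_STRUCT_PTHREAD          /* hypothetical switch  */
  /* 2. Linux: reuse space inside struct pthread via THREAD_SELF,
     adding no static TLS cost.  */
  # define get_tls_state() (&THREAD_SELF->tls_state)
  #else
  /* 1. Generic: a __thread object placed in libc.so static TLS.  */
  static __thread struct tls_state tls_state;
  # define get_tls_state() (&tls_state)
  #endif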
The default strategy has the disadvantage of increasing the libc.so
static TLS consumption and thus decreasing the possible surplus used
in some scenarios (which might be mitigated by the BZ#25051 fix).
It is used only on Hurd, where accessing the thread storage in the
single-thread case is not straightforward (afaiu; Hurd developers
could correct me here).
The fallback static allocation used for allocation failure is also
removed: defining its size is problematic without synchronizing with
translated messages (to avoid partial translation) and the resulting
usage is not thread-safe.
Checked on x86_64-linux-gnu, i686-linux-gnu, powerpc64le-linux-gnu,
and s390x-linux-gnu.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
union pthread_attr_transparent always has the correct size, even if
pthread_attr_t has padding that is not present in struct pthread_attr.
This should not result in an observable behavioral change. The
existing code appears to have been correct, but it was brittle because
it was not clear which functions were allowed to write to an entire
pthread_attr_t argument (e.g., by copying it).
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
With all Linux ABIs using the expected Linux kABI to indicate syscall
errors, INTERNAL_SYSCALL_DECL is an empty declaration on all ports.
This patch removes the 'err' argument from the INTERNAL_SYSCALL*
macros and removes the INTERNAL_SYSCALL_DECL usage.
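At a call site the change looks roughly like this (syscall chosen
arbitrarily):

  /* Before:  */
  INTERNAL_SYSCALL_DECL (err);
  long int ret = INTERNAL_SYSCALL (close, err, 1, fd);
  if (INTERNAL_SYSCALL_ERROR_P (ret, err))
    ...

  /* After:  */
  long int ret = INTERNAL_SYSCALL (close, 1, fd);
  if (INTERNAL_SYSCALL_ERROR_P (ret))
    ...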
Checked with a build against all affected ABIs.
All nptl targets have these signal definitions nowadays. This change
also replaces the nptl-generic version of pthread_sigmask
with the Linux version.
Tested on x86_64-linux-gnu and i686-linux-gnu. Built with
build-many-glibcs.py.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This patch removes CLOCK_THREAD_CPUTIME_ID and CLOCK_PROCESS_CPUTIME_ID
support from the clock_gettime and clock_settime generic
implementations. For Linux, the kernel already provides support
through the syscall, and Hurd HTL lacks the __pthread_clock_gettime
and __pthread_clock_settime internal implementations.
As described in the clock_gettime man-page [1] under 'Historical note
for SMP systems', implementing CLOCK_{THREAD,PROCESS}_CPUTIME_ID with
timer registers is error-prone and susceptible to timing and accuracy
issues that libc cannot handle without kernel support.
This allows removing unused code which, however, still incurs some
runtime overhead in thread creation (the struct pthread
cpuclock_offset initialization).
If Hurd eventually wants to support them, it should either implement
them as a kernel facility (or something related, given its
architecture) or in a system-specific implementation.
Checked on aarch64-linux-gnu, x86_64-linux-gnu, and i686-linux-gnu. I
also checked on an i686-gnu build.
* nptl/Makefile (libpthread-routines): Remove pthread_clock_gettime and
pthread_clock_settime.
* nptl/pthreadP.h (__find_thread_by_id): Remove prototype.
* elf/dl-support.c [!HP_TIMING_NOAVAIL] (_dl_cpuclock_offset): Remove.
(_dl_non_dynamic_init): Remove _dl_cpuclock_offset setting.
* elf/rtld.c (_dl_start_final): Likewise.
* nptl/allocatestack.c (__find_thread_by_id): Remove function.
* sysdeps/generic/ldsodefs.h [!HP_TIMING_NOAVAIL] (_dl_cpuclock_offset):
Remove.
* sysdeps/mach/hurd/dl-sysdep.c [!HP_TIMING_NOAVAIL]
(_dl_cpuclock_offset): Remove.
* nptl/descr.h (struct pthread): Rename cpuclock_offset to
cpuclock_offset_ununsed.
* nptl/nptl-init.c (__pthread_initialize_minimal_internal): Remove
cpuclock_offset set.
* nptl/pthread_create.c (START_THREAD_DEFN): Likewise.
* sysdeps/nptl/fork.c (__libc_fork): Likewise.
* nptl/pthread_clock_gettime.c: Remove file.
* nptl/pthread_clock_settime.c: Likewise.
* sysdeps/unix/clock_gettime.c (hp_timing_gettime): Remove function.
[HP_TIMING_AVAIL] (realtime_gettime): Remove CLOCK_THREAD_CPUTIME_ID
and CLOCK_PROCESS_CPUTIME_ID support.
* sysdeps/unix/clock_settime.c (hp_timing_gettime): Likewise.
[HP_TIMING_AVAIL] (realtime_gettime): Likewise.
* sysdeps/posix/clock_getres.c (hp_timing_getres): Likewise.
[HP_TIMING_AVAIL] (__clock_getres): Likewise.
* sysdeps/unix/clock_nanosleep.c (CPUCLOCK_P, INVALID_CLOCK_P):
Likewise.
(__clock_nanosleep): Remove CPUCLOCK_P and INVALID_CLOCK_P usage.
[1] http://man7.org/linux/man-pages/man2/clock_gettime.2.html
One group of warnings seen with -Wextra is warnings for static or
inline not at the start of a declaration (-Wold-style-declaration).
This patch fixes various such cases for inline, ensuring it comes at
the start of the declaration (after any static). A common case of the
fix is "static inline <type> __always_inline"; the definition of
__always_inline starts with __inline, so the natural change is to
"static __always_inline <type>". Other cases of the warning may be
harder to fix (one pattern is a function definition that gets
rewritten to be static by an including file, "#define funcname static
wrapped_funcname" or similar), but it seems worth fixing these cases
with inline anyway.
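For instance (hypothetical function):

  #include <sys/cdefs.h>

  /* Old form, flagged by -Wold-style-declaration because
     __always_inline (which begins with __inline) follows the return
     type:

       static int __always_inline square (int x) { return x * x; }

     Fixed form, with the inline keywords at the start:  */
  static __always_inline int
  square (int x)
  {
    return x * x;
  }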
Tested for x86_64.
* elf/dl-load.h (_dl_postprocess_loadcmd): Use __always_inline
before return type, without separate inline.
* elf/dl-tunables.c (maybe_enable_malloc_check): Likewise.
* elf/dl-tunables.h (tunable_is_name): Likewise.
* malloc/malloc.c (do_set_trim_threshold): Likewise.
(do_set_top_pad): Likewise.
(do_set_mmap_threshold): Likewise.
(do_set_mmaps_max): Likewise.
(do_set_mallopt_check): Likewise.
(do_set_perturb_byte): Likewise.
(do_set_arena_test): Likewise.
(do_set_arena_max): Likewise.
(do_set_tcache_max): Likewise.
(do_set_tcache_count): Likewise.
(do_set_tcache_unsorted_limit): Likewise.
* nis/nis_subr.c (count_dots): Likewise.
* nptl/allocatestack.c (advise_stack_range): Likewise.
* sysdeps/ieee754/dbl-64/s_sin.c (do_cos): Likewise.
(do_sin): Likewise.
(reduce_sincos): Likewise.
(do_sincos): Likewise.
* sysdeps/unix/sysv/linux/x86/elision-conf.c
(do_set_elision_enable): Likewise.
(TUNABLE_CALLBACK_FNDECL): Likewise.
The alignment of TLS variables is wrong if accessed from within a
thread for architectures with the TLS variant TLS_TCB_AT_TP.
For the main thread the static TLS data is properly aligned.
For other threads the alignment depends on the alignment of the thread
pointer, as the static TLS data is located relative to this pointer.
This patch adds this alignment for TLS_TCB_AT_TP variants in the same
way as it is already done for TLS_DTV_AT_TP. The thread pointer is
also already
properly aligned if the user provides its own stack for the new thread.
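A sketch of the idea (illustrative names; the actual change is in
allocate_stack):

  /* TLS_TCB_AT_TP: static TLS lives below the thread pointer, so the
     TCB position must honor the static TLS alignment.  Rounding the
     candidate position down keeps every negative offset aligned.  */
  uintptr_t tp = (uintptr_t) stack_top - TLS_TCB_SIZE;
  tp &= ~(tls_static_align - 1);   /* power-of-two alignment  */
  struct pthread *pd = (struct pthread *) tp;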
This patch extends the testcase nptl/tst-tls1.c in order to check the
alignment of the TLS variables and adds a pthread_create invocation
with a user-provided stack.
The test itself is migrated from test-skeleton.c to test-driver.c
and the missing support functions xpthread_attr_setstack and xposix_memalign
are added.
ChangeLog:
[BZ #23403]
* nptl/allocatestack.c (allocate_stack): Align pointer pd for
TLS_TCB_AT_TP tls variant.
* nptl/tst-tls1.c: Migrate to support/test-driver.c.
Add alignment checks.
* support/Makefile (libsupport-routines): Add xposix_memalign and
xpthread_attr_setstack.
* support/support.h: Add xposix_memalign.
* support/xthread.h: Add xpthread_attr_setstack.
* support/xposix_memalign.c: New File.
* support/xpthread_attr_setstack.c: Likewise.
The __libc_freeres framework does not extend to non-libc.so objects.
This causes problems in general for valgrind and mtrace detecting
unfreed objects in both libdl.so and libpthread.so. This change is
a pre-requisite to properly moving the malloc hooks out of malloc
since such a move now requires precise accounting of all allocated
data before destructors are run.
This commit adds a proper hook in libc.so.6 for both libdl.so and
for libpthread.so; this ensures that shm-directory.c, which uses
freeit () to free memory, is handled properly. We also remove the
nptl_freeres hook and fall back to using the weak-ref-and-check idiom
for a loaded libpthread.so, thus making this process similar for
all DSOs.
Lastly we follow best practice and use explicit free calls for
both libdl.so and libpthread.so instead of the generic hook process
which has undefined order.
Tested on x86_64 with no regressions.
Signed-off-by: DJ Delorie <dj@redhat.com>
Signed-off-by: Carlos O'Donell <carlos@redhat.com>
Previously, if the user requested an S stack and G guard when creating
a thread, the total mapping was S, and the actual available stack was
S - G - static_tls, which is not what the user requested.
This patch fixes the guard size accounting by pretending the user
requested S+G stack. This way all later logic works out except
when reporting the user requested stack size (pthread_getattr_np)
or when computing the minimal stack size (__pthread_get_minstack).
Normally this will increase thread stack allocations by one page.
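For example, with S = 8 MiB and G = 4 KiB (one page), the mapping is
now 8 MiB + 4 KiB, and the thread gets the full 8 MiB of usable stack
instead of 8 MiB - 4 KiB - static_tls as before.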
TLS accounting is not affected; that will require a separate fix.
[BZ #22637]
* nptl/descr.h (stackblock, stackblock_size): Update comments.
* nptl/allocatestack.c (allocate_stack): Add guardsize to stacksize.
* nptl/nptl-init.c (__pthread_get_minstack): Remove guardsize from
stacksize.
* nptl/pthread_getattr_np.c (pthread_getattr_np): Likewise.
This patch adds two new internal defines to set the internal
pthread_mutex_t layout required by the supported ABIs:
1. __PTHREAD_MUTEX_NUSERS_AFTER_KIND, which controls whether to define
the __nusers field before or after __kind. The preferred value
is 0 for new ports, and it sets __nusers before __kind.
2. __PTHREAD_MUTEX_USE_UNION, which controls whether the internal
__spins and __list members are placed inside a union for linuxthreads
compatibility. The preferred value is 0 for new ports, and it sets
both fields without a union.
It fixes the wrong offset of the __kind value on x86_64-linux-gnu-x32.
Checked with a make check run-built-tests=no on all affected ABIs.
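A simplified sketch of how the new defines drive the layout in
bits/thread-shared-types.h:

  struct __pthread_mutex_s
  {
    int __lock;
    unsigned int __count;
    int __owner;
  #if !__PTHREAD_MUTEX_NUSERS_AFTER_KIND
    unsigned int __nusers;
  #endif
    int __kind;
  #if __PTHREAD_MUTEX_NUSERS_AFTER_KIND
    unsigned int __nusers;
  #endif
    /* ... __spins and __list follow, wrapped in a union only when
       __PTHREAD_MUTEX_USE_UNION is nonzero.  */
  };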
[BZ #22298]
* nptl/allocatestack.c (allocate_stack): Check if
__PTHREAD_MUTEX_HAVE_PREV is non-zero, instead if
__PTHREAD_MUTEX_HAVE_PREV is defined.
* nptl/descr.h (pthread): Likewise.
* nptl/nptl-init.c (__pthread_initialize_minimal_internal):
Likewise.
* nptl/pthread_create.c (START_THREAD_DEFN): Likewise.
* sysdeps/nptl/fork.c (__libc_fork): Likewise.
* sysdeps/nptl/pthread.h (PTHREAD_MUTEX_INITIALIZER): Likewise.
* sysdeps/nptl/bits/thread-shared-types.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New
defines.
(__pthread_internal_list): Check __PTHREAD_MUTEX_USE_UNION instead
of __WORDSIZE for internal layout.
(__pthread_mutex_s): Check __PTHREAD_MUTEX_NUSERS_AFTER_KIND instead
of __WORDSIZE for internal __nusers layout and __PTHREAD_MUTEX_USE_UNION
instead of __WORDSIZE whether to use an union for __spins and __list
fields.
(__PTHREAD_MUTEX_HAVE_PREV): Define also for __PTHREAD_MUTEX_USE_UNION
case.
* sysdeps/aarch64/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New
defines.
* sysdeps/alpha/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/arm/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/hppa/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/ia64/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/m68k/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/microblaze/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/mips/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/nios2/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/powerpc/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/s390/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/sh/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/sparc/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/tile/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/x86/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>