The sem_clockwait and sem_timedwait have been converted to support 64 bit time.
This change reuses the futex_abstimed_wait_cancelable64 function introduced earlier.
The sem_{clock|timed}wait functions only accept absolute time. Moreover, there is
no need to check for NULL passed as the *abstime pointer to the syscalls, as both
calls have exported symbols marked with the __nonnull attribute for abstime.
For systems with __TIMESIZE != 64 && __WORDSIZE == 32:
- Conversion from 32 bit time to 64 bit struct __timespec64 was necessary
- Redirection to __sem_{clock|timed}wait64 will provide support for 64 bit
time
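For reference, a minimal usage example of the public interface (not part of the
patch); sem_clockwait expects an absolute deadline measured against the given clock:
  #define _GNU_SOURCE
  #include <semaphore.h>
  #include <time.h>
  #include <errno.h>

  /* Wait on SEM for at most two seconds, measured on CLOCK_MONOTONIC.
     Returns 0 on success, otherwise an errno value such as ETIMEDOUT.  */
  int
  wait_with_deadline (sem_t *sem)
  {
    struct timespec abstime;
    clock_gettime (CLOCK_MONOTONIC, &abstime);
    abstime.tv_sec += 2;
    return sem_clockwait (sem, CLOCK_MONOTONIC, &abstime) == 0 ? 0 : errno;
  }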
Build tests:
./src/scripts/build-many-glibcs.py glibcs
Run-time tests:
- Run specific tests on ARM/x86 32bit systems (qemu):
https://github.com/lmajewski/meta-y2038 and run tests:
https://github.com/lmajewski/y2038-tests/commits/master
The above tests were performed both with and without the Y2038 redirection
applied, to check the proper usage of both __sem_{clock|timed}wait64 and
__sem_{clock|timed}wait.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
It replaces the internal usage of __{f,l}xstat{at}{64} with
__{f,l}stat{at}{64}. It should not change the generated code, since
sys/stat.h explicitly defines redirections of the internal calls back to
the xstat* symbols.
Checked with a build for all affected ABIs. I also checked on
x86_64-linux-gnu and i686-linux-gnu.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
The pthread_cond_clockwait and pthread_cond_timedwait have been converted
to support 64 bit time.
This change introduces a new futex_abstimed_wait_cancelable64 function in
./sysdeps/nptl/futex-helpers.c, which uses futex_time64 where possible
and replaces the low-level preprocessor macros from
lowlevellock-futex.h.
The pthread_cond_{clock|timed}wait functions only accept absolute time.
Moreover, there is no need to check for NULL passed as the *abstime pointer,
as __pthread_cond_wait_common() always passes a non-NULL struct __timespec64
pointer to futex_abstimed_wait_cancellable64().
For systems with __TIMESIZE != 64 && __WORDSIZE == 32:
- Conversions between 64 bit and 32 bit time are necessary
- Redirection to __pthread_cond_{clock|timed}wait64 will provide support
for 64 bit time
The futex_abstimed_wait_cancelable64 function has been put into a separate
file on purpose, to avoid issues apparent on the m68k architecture
related to its small number of available registers (there are not enough
registers to hold all the necessary arguments if the above function
were added to futex-internal.h with the __always_inline attribute).
In fact, a new function, namely __futex_abstimed_wait_cancellable32, is
used to reduce the number of needed registers (as some in-register values
are stored on the stack when a function call is made).
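For illustration only, a self-contained sketch of the kind of operation the
helper wraps (this is not the glibc code): an absolute-timeout futex wait
expressed with FUTEX_WAIT_BITSET. On 32-bit systems the helper prefers the
futex_time64 variant of the syscall when the kernel provides it.
  #include <linux/futex.h>
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <stdint.h>
  #include <time.h>

  /* Sleep if *FUTEX_WORD still equals EXPECTED, until woken or ABSTIME
     expires.  FUTEX_WAIT_BITSET interprets the timeout as absolute time
     (CLOCK_MONOTONIC unless FUTEX_CLOCK_REALTIME is also set).  */
  static long
  futex_wait_abstime (uint32_t *futex_word, uint32_t expected,
                      const struct timespec *abstime)
  {
    return syscall (SYS_futex, futex_word,
                    FUTEX_WAIT_BITSET | FUTEX_PRIVATE_FLAG,
                    expected, abstime, NULL, FUTEX_BITSET_MATCH_ANY);
  }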
Build tests:
./src/scripts/build-many-glibcs.py glibcs
Run-time tests:
- Run specific tests on ARM/x86 32bit systems (qemu):
https://github.com/lmajewski/meta-y2038 and run tests:
https://github.com/lmajewski/y2038-tests/commits/master
The above tests were performed both with and without the Y2038 redirection
applied, to check the proper usage of both __pthread_cond_{clock|timed}wait64
and __pthread_cond_{clock|timed}wait.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The pthread_clockjoin_np and pthread_timedjoin_np have been converted to
support 64 bit time.
This change introduces a new futex_timed_wait_cancel64 function in
./sysdeps/nptl/futex-internal.h, which uses futex_time64 where possible
and replaces the low-level preprocessor macros from
lowlevellock-futex.h.
The pthread_{timed|clock}join_np functions only accept absolute time.
Moreover, there is no need to check for NULL passed as the *abstime pointer,
as clockwait_tid() always passes a struct __timespec64.
For systems with __TIMESIZE != 64 && __WORDSIZE == 32:
- Conversions between 64 bit and 32 bit time are necessary
- Redirection to __pthread_{clock|timed}join_np64 will provide support
for 64 bit time
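For reference, a minimal usage example of the public interface (not part of the
patch); pthread_clockjoin_np likewise takes an absolute deadline:
  #define _GNU_SOURCE
  #include <pthread.h>
  #include <time.h>
  #include <errno.h>

  static void *
  worker (void *arg)
  {
    return arg;
  }

  int
  main (void)
  {
    pthread_t thr;
    if (pthread_create (&thr, NULL, worker, NULL) != 0)
      return 1;

    /* Absolute deadline five seconds from now on CLOCK_MONOTONIC.  */
    struct timespec abstime;
    clock_gettime (CLOCK_MONOTONIC, &abstime);
    abstime.tv_sec += 5;

    void *result;
    int err = pthread_clockjoin_np (thr, &result, CLOCK_MONOTONIC, &abstime);
    return err == 0 || err == ETIMEDOUT ? 0 : 1;
  }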
Build tests:
./src/scripts/build-many-glibcs.py glibcs
Run-time tests:
- Run specific tests on ARM/x86 32bit systems (qemu):
https://github.com/lmajewski/meta-y2038 and run tests:
https://github.com/lmajewski/y2038-tests/commits/master
The above tests were performed both with and without the Y2038 redirection
applied, to check the proper usage of both __pthread_{timed|clock}join_np64
and __pthread_{timed|clock}join_np.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
nptl has
  /* Opcodes and data types for communication with the signal handler to
     change user/group IDs.  */
  struct xid_command
  {
    int syscall_no;
    long int id[3];
    volatile int cntr;
    volatile int error;
  };
  /* This must be last, otherwise the current thread might not have
     permissions to send SIGSETXID syscall to the other threads.  */
  result = INTERNAL_SYSCALL_NCS (cmdp->syscall_no, 3,
                                 cmdp->id[0], cmdp->id[1], cmdp->id[2]);
But the second argument of the setgroups syscall is a pointer:
  int setgroups (size_t size, const gid_t *list);
On x32, pointers passed to syscalls must have pointer type so that
they will be zero-extended. The kernel XID arguments are unsigned and
do not require sign extension. Change xid_command to
  struct xid_command
  {
    int syscall_no;
    unsigned long int id[3];
    volatile int cntr;
    volatile int error;
  };
so that all arguments are zero-extended. A test case is added for x32;
without the fix, setgroups returned EFAULT when running as root.
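A standalone illustration of the extension difference on an LP64 host (not code
from the patch; the pointer value is hypothetical). The sign-extended value is
what the kernel would see if the id fields stayed signed:
  #include <stdio.h>
  #include <stdint.h>

  int
  main (void)
  {
    uint32_t ptr32 = 0xffffe000u;           /* hypothetical 32-bit pointer value */
    long int as_signed = (int32_t) ptr32;   /* sign-extends to 0xffffffffffffe000 */
    unsigned long int as_unsigned = ptr32;  /* zero-extends to 0x00000000ffffe000 */
    printf ("signed:   %#lx\n", (unsigned long int) as_signed);
    printf ("unsigned: %#lx\n", as_unsigned);
    return 0;
  }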
The kernel ABI is not finalized, and there are now various proposals
to change the size of struct rseq, which would make the glibc ABI
dependent on the version of the kernels used for building glibc.
This is of course not acceptable.
This reverts commit 48699da1c4 ("elf:
Support at least 32-byte alignment in static dlopen"), commit
8f4632deb3 ("Linux: rseq registration
tests"), commit 6e29cb3f61 ("Linux: Use
rseq in sched_getcpu if available"), and commit
0c76fc3c2b ("Linux: Perform rseq
registration at C startup and thread creation"), resolving the conflicts
introduced by the ARC port and the TLS static surplus changes.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
The per-thread state is refactored to use two strategies:
1. The default one uses a TLS structure, which will be placed in the
static TLS space (using the __thread keyword).
2. Linux allocates it within struct pthread and accesses it through the
THREAD_* macros.
The default strategy has the disadvantage of increasing libc.so static
TLS consumption and thus decreasing the possible surplus used in
some scenarios (which might be mitigated by the BZ#25051 fix).
It is used only on Hurd, where accessing the thread storage in the
single-thread case is not straightforward (afaiu; Hurd developers could
correct me here).
The fallback static allocation used on allocation failure is also
removed: defining its size is problematic without synchronizing with the
translated messages (to avoid partial translation), and the resulting
usage is not thread-safe.
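A minimal sketch of strategy 1, assuming an illustrative field name (not
necessarily the actual glibc layout):
  /* Per-thread buffer kept in static TLS via the __thread keyword; it is
     allocated lazily and freed at thread exit.  */
  struct tls_internal_t
  {
    char *strsignal_buf;
  };
  static __thread struct tls_internal_t tls_internal;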
Checked on x86_64-linux-gnu, i686-linux-gnu, powerpc64le-linux-gnu,
and s390x-linux-gnu.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
The __NSIG_WORDS value is based on the minimum number of words needed to
hold the maximum number of signals supported by the architecture.
This patch also adds __NSIG_BYTES, which is the number of bytes
required to represent the supported number of signals. It is used in
syscalls which take a sigset_t.
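A simplified sketch of the relationship (not the exact glibc definitions),
assuming signals are packed into unsigned long words:
  #include <signal.h>   /* for _NSIG */

  #define __NSIG_WORDS ((_NSIG + 8 * sizeof (unsigned long int) - 1) \
                        / (8 * sizeof (unsigned long int)))
  #define __NSIG_BYTES (__NSIG_WORDS * sizeof (unsigned long int))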
Checked on x86_64-linux-gnu and i686-linux-gnu.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
The variable is placed in libc.so, and it can be true only in
an outer libc, not in libcs loaded via dlmopen or static dlopen.
Since thread creation from inner namespaces does not work,
pthread_create can update __libc_single_threaded directly.
Using __libc_early_init and its initial flag, the implementation of this
variable is very straightforward. A future version may reset the flag
during fork (but not in an inner namespace), or after joining all
threads except one.
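A usage example of the new variable (not part of the patch); callers can skip
atomic operations while the process is known to be single-threaded:
  #include <sys/single_threaded.h>

  static int counter;

  void
  increment (void)
  {
    if (__libc_single_threaded)
      ++counter;                                   /* no other threads exist */
    else
      __atomic_fetch_add (&counter, 1, __ATOMIC_RELAXED);
  }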
Reviewed-by: DJ Delorie <dj@redhat.com>
Register rseq TLS for each thread (including main), and unregister for
each thread (excluding main). "rseq" stands for Restartable Sequences.
See the rseq(2) man page proposed here:
https://lkml.org/lkml/2018/9/19/647
Those are based on glibc master branch commit 3ee1e0ec5c.
The rseq system call was merged into Linux 4.18.
The TLS_STATIC_SURPLUS define is increased to leave additional room for
dlopen'd initial-exec TLS, which keeps elf/tst-auditmany working.
The increase (76 bytes) is larger than 32 bytes because it has not been
increased in quite a while. The cost in terms of additional TLS storage
is quite significant, but it will also obscure some initial-exec-related
dlopen failures.
The Hurd port doesn't have support for sigwaitinfo, sigtimedwait, and msgget
yet, so let us ignore the test for these when they return ENOSYS.
* nptl/tst-cancel4.c (tf_sigwaitinfo): Fall back to sigwait when
sigwaitinfo returns ENOSYS.
(tf_sigtimedwait): Likewise with sigtimedwait.
(tf_msgrcv, tf_msgsnd): Fall back to tf_usleep when msgget returns ENOSYS.
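A simplified sketch of the fallback pattern (the helper name is illustrative,
not the actual test code):
  #include <signal.h>
  #include <errno.h>

  /* Wait for a signal in SET; fall back to sigwait when sigwaitinfo is
     not implemented (ENOSYS).  Returns the signal number or -1.  */
  static int
  wait_for_signal (const sigset_t *set)
  {
    int r = sigwaitinfo (set, NULL);
    if (r < 0 && errno == ENOSYS)
      {
        int sig;
        if (sigwait (set, &sig) == 0)
          r = sig;
      }
    return r;
  }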
PF_UNIX was actually never intended to be passed as the protocol parameter to
socket() calls: it is a protocol family, not a protocol. It happens that
Linux started accepting it during its 2.0 development, but it shouldn't have.
OpenBSD kernels accept it as well, but FreeBSD and NetBSD rightfully do not.
GNU/Hurd does not either.
* nptl/tst-cancel4-common.c (do_test): Pass 0 instead of PF_UNIX as
protocol.
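The portable form, illustrated (not the literal test code):
  #include <sys/socket.h>

  /* The third argument of socket () is a protocol number, not a protocol
     family, so 0 (the default protocol) is the portable choice.  */
  int
  make_unix_socket (void)
  {
    return socket (AF_UNIX, SOCK_STREAM, 0);
  }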
A user-provided stack should not be released nor madvised at
thread exit because it is owned by the user.
If the memory is shared or file-based, then MADV_DONTNEED
can have unwanted effects. With memory tagging on AArch64
Linux, the tags are dropped, which may invalidate
pointers.
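A usage illustration of the case being fixed (not code from the patch): the
stack below is owned by the caller, so glibc must not munmap or madvise it
when the thread exits.
  #include <stddef.h>
  #include <pthread.h>
  #include <sys/mman.h>

  /* Create a thread running FN on a caller-owned stack of STACKSIZE bytes
     (STACKSIZE must be at least PTHREAD_STACK_MIN and suitably aligned).
     The mapping remains the caller's responsibility after the thread exits.  */
  int
  start_on_user_stack (pthread_t *thr, void *(*fn) (void *), size_t stacksize)
  {
    void *stack = mmap (NULL, stacksize, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
    if (stack == MAP_FAILED)
      return -1;

    pthread_attr_t attr;
    pthread_attr_init (&attr);
    pthread_attr_setstack (&attr, stack, stacksize);
    int err = pthread_create (thr, &attr, fn, NULL);
    pthread_attr_destroy (&attr);
    return err;
  }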
Tested on aarch64-linux-gnu with MTE, it fixes
FAIL: nptl/tst-stack3
FAIL: nptl/tst-stack3-mem
By aligning its implementation with that of pthread_cond_wait.
* sysdeps/htl/sem-timedwait.c (cancel_ctx): New structure.
(cancel_hook): New function.
(__sem_timedwait_internal): Check for cancellation and register
cancellation hook that wakes the thread up, and check again for
cancellation on exit.
* nptl/tst-cancel13.c, nptl/tst-cancelx13.c: Move to...
* sysdeps/pthread/: ... here.
* nptl/Makefile: Move corresponding references and rules to...
* sysdeps/pthread/Makefile: ... here.
* nptl/tst-cancel25.c: Move to...
* sysdeps/pthread/tst-cancel25.c: ... here.
(tf2): Do not test for SIGCANCEL when it is not defined.
* nptl/Makefile: Move corresponding reference to...
* sysdeps/pthread/Makefile: ... here.
They were to be moved to sysdeps/pthread/Makefile in 45fce058f ('htl:
Enable more cancellation tests')
* nptl/Makefile (tests): Remove tst-cancelx9.
(CFLAGS-tst-cancelx9.c): Remove.
d6d74ec16 ('htl: Enable more tests') moved the linking rules from
nptl/Makefile and htl/Makefile to the shared sysdeps/pthread/Makefile. But
e.g. on powerpc some tests are added in sysdeps/powerpc/Makefile, which is
included *after* sysdeps/pthread/Makefile, and thus the tests don't get
affected by the rules and fail to link. For now, let's just copy the
set of rules over to both nptl/Makefile and htl/Makefile.
* sysdeps/pthread/Makefile: Move libpthread linking rules to...
* htl/Makefile: ... here and...
* nptl/Makefile: ... there.
This introduces the function __pthread_attr_extension to allocate the
extension space, which is freed by pthread_attr_destroy.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
union pthread_attr_transparent always has the correct size, even if
pthread_attr_t has padding that is not present in struct pthread_attr.
This should not result in an observable behavioral change. The
existing code appears to have been correct, but it was brittle because
it was not clear which functions were allowed to write to an entire
pthread_attr_t argument (e.g., by copying it).
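A sketch of the idea (struct pthread_attr is glibc-internal; shown here only
for illustration):
  /* Copying the union always copies a full pthread_attr_t, even when the
     internal struct pthread_attr is smaller than the public type.  */
  union pthread_attr_transparent
  {
    pthread_attr_t external;
    struct pthread_attr internal;
  };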
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This allows the storage to be reused after calling pthread_cond_destroy.
* sysdeps/htl/bits/types/struct___pthread_cond.h (__pthread_cond):
Replace unused struct __pthread_condimpl *__impl field with unsigned int
__wrefs.
(__PTHREAD_COND_INITIALIZER): Update accordingly.
* sysdeps/htl/pt-cond-timedwait.c (__pthread_cond_timedwait_internal):
Register as waiter in __wrefs field. On unregistering, wake any pending
pthread_cond_destroy.
* sysdeps/htl/pt-cond-destroy.c (__pthread_cond_destroy): Register wake
request in __wrefs.
* nptl/Makefile (tests): Move tst-cond20 tst-cond21 to...
* sysdeps/pthread/Makefile (tests): ... here.
* nptl/tst-cond20.c nptl/tst-cond21.c: Move to...
* sysdeps/pthread/tst-cond20.c sysdeps/pthread/tst-cond21.c: ... here.
This needs a few test adjustments: in some cases, sigignore was
used for convenience (replaced with xsignal and SIG_IGN). Tests
for the deprecated functions need to disable
-Wdeprecated-declarations, and for the sigmask deprecation,
-Wno-error.
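The convenience replacement looks roughly like this (xsignal is the
test-support wrapper around signal that fails the test on error):
  /* Previously: sigignore (SIGUSR1);  */
  xsignal (SIGUSR1, SIG_IGN);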
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Also add the private type union pthread_attr_transparent, to reduce
the amount of casting that is required.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
pthread_attr_destroy needs to be a weak alias to avoid future
linknamespace failures.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
This is part of the libpthread removal project:
<https://sourceware.org/ml/libc-alpha/2019-10/msg00080.html>
Use __getline instead of __getdelim to avoid a localplt failure.
Likewise for __getrlimit/getrlimit.
The abilist updates were performed by:
  git ls-files 'sysdeps/unix/sysv/linux/**/libc.abilist' \
  | while read x ; do
      echo "GLIBC_2.32 pthread_getattr_np F" >> $x
    done
  python3 scripts/move-symbol-to-libc.py --only-linux pthread_getattr_np
The private export of __pthread_getaffinity_np is no longer needed, but
the hidden alias is still necessary so that the symbol can be exported
with versioned_symbol.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
This is part of the libpthread removal project:
<https://sourceware.org/ml/libc-alpha/2019-10/msg00080.html>
The abilist updates were performed by:
  git ls-files 'sysdeps/unix/sysv/linux/**/libc.abilist' \
  | while read x ; do
      echo "GLIBC_2.32 pthread_getaffinity_np F" >> $x
    done
  python3 scripts/move-symbol-to-libc.py pthread_getaffinity_np
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
This is part of the libpthread removal project:
<https://sourceware.org/ml/libc-alpha/2019-10/msg00080.html>
The symbol did not previously exist in libc, so a new GLIBC_2.32
symbol is needed in order to get a correct dependency for binaries which
use the symbol but no longer link against libpthread.
The abilist updates were performed by:
  git ls-files 'sysdeps/unix/sysv/linux/**/libc.abilist' \
  | while read x ; do
      echo "GLIBC_2.32 pthread_attr_setaffinity_np F" >> $x
    done
  python3 scripts/move-symbol-to-libc.py pthread_attr_setaffinity_np
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The stubs for pthread_getaffinity_np, pthread_getname_np,
pthread_setaffinity_np, pthread_setname_np are replaced, and corresponding
tests are moved.
After the removal of the NaCl port, nptl is Linux-specific, and the stubs
are no longer needed. This effectively reverts commit
c76d1ff514 ("NPTL: Add stubs for Linux-only
extension functions.").
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
There is a race between __nptl_setxid and an exiting detached thread, which
causes a deadlock on stack_cache_lock. The deadlock happens in this
state:
T1: setgroups -> __nptl_setxid (holding stack_cache_lock, waiting on cmdp->cntr == 0)
T2 (detached, exiting): start_thread -> __deallocate_stack (waiting on stack_cache_lock)
(more threads are waiting on stack_cache_lock in pthread_create)
For non-detached threads, start_thread waits for its own setxid handler to
finish before exiting. Do this for detached threads as well.
New threads inherit the signal mask from the current thread. This
means that signal handlers can run on the newly created thread
immediately after the kernel has created the userspace thread, even
before glibc has initialized the TCB. Consequently, new threads can
observe uninitialized ctype data, among other things.
To address this, block all signals before starting the thread, and
pass the original signal mask to the start routine wrapper. On the
new thread, first perform all thread initialization, and then unblock
signals.
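A simplified sketch of the sequence, using the public pthread_sigmask as a
stand-in for the internal mask operations (illustrative only):
  sigset_t original_mask, all_blocked;
  sigfillset (&all_blocked);

  /* Creating thread: block everything, remember the previous mask, and
     hand ORIGINAL_MASK to the new thread via its descriptor.  */
  pthread_sigmask (SIG_SETMASK, &all_blocked, &original_mask);
  /* ... create the new thread here ... */
  pthread_sigmask (SIG_SETMASK, &original_mask, NULL);   /* restore own mask */

  /* New thread, in the start routine wrapper, only after the TCB and TLS
     are fully initialized:  */
  pthread_sigmask (SIG_SETMASK, &original_mask, NULL);   /* unblock signals */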
The cost of doing this is two rt_sigprocmask system calls on the old
thread, and one rt_sigprocmask system call on the new thread. (If
there were a way to clone a new thread with signals disabled, this
could be brought down to one system call each.) The thread descriptor
increases in size, too, and sigset_t is fairly large. This increase
could be brought down by reusing space in the descriptor which is
not needed before running user code, or by switching to an internal
sigset_t definition which only covers the signals supported by the
kernel definition. (Part of the thread descriptor size increase is
already offset by reduced stack usage in the thread start wrapper
routine after this commit.)
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
The upper bits of the sigset_t are not fully initialized in the signal
mask calls that return information from the kernel (sigprocmask,
sigpending, and pthread_sigmask), since the exported sigset_t size
(1024 bits) is larger than the one Linux supports (64 or 128 bits).
This might make sigisemptyset/sigorset/sigandset fail if the mask
is filled prior to the call.
This patch changes the internal signal functions to handle up to the
supported Linux signal number (_NSIG); the remaining bits are left
untouched.
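A small illustration of the failure mode (on affected configurations before
the fix, this could report a non-empty set even with no signals pending):
  #define _GNU_SOURCE
  #include <signal.h>
  #include <stdio.h>

  int
  main (void)
  {
    sigset_t pending;
    sigfillset (&pending);      /* pre-fill: the upper bits are now set */
    sigpending (&pending);      /* the kernel writes only the low _NSIG bits */
    printf ("empty: %d\n", sigisemptyset (&pending));
    return 0;
  }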
Checked on x86_64-linux-gnu and i686-linux-gnu.