Per the rseq syscall documentation, 3 fields are required to be
initialized by userspace prior to registration: 'cpu_id', 'rseq_cs'
and 'flags'. Since we have no guarantee that 'struct pthread'
is cleared on all architectures, explicitly set those 3 fields prior to
registration.
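A minimal sketch of that initialization, assuming the rseq area is the
kernel-defined 'struct rseq' embedded in the thread descriptor (the helper
name is illustrative, not glibc's actual code):

  #include <string.h>
  #include <linux/rseq.h>

  /* Illustrative helper: put the rseq area into the state the kernel
     expects before the registration syscall.  */
  static void
  rseq_prepare_area (struct rseq *area)
  {
    /* Zeroing the whole area clears 'rseq_cs' and 'flags'.  */
    memset (area, 0, sizeof *area);
    /* 'cpu_id' must start out as "uninitialized"; the kernel fills it in
       once registration succeeds.  */
    area->cpu_id = RSEQ_CPU_ID_UNINITIALIZED;
  }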
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
There are various existing tests that call pthread_attr_init and then
verify properties of the resulting initial values retrieved with
pthread_attr_get* functions. However, those are missing coverage of
the initial values retrieved with pthread_attr_getschedparam and
pthread_attr_getstacksize. Add testing for initial values from those
functions as well.
(tst-attr2 covers pthread_attr_getdetachstate,
pthread_attr_getguardsize, pthread_attr_getinheritsched,
pthread_attr_getschedpolicy, pthread_attr_getscope. tst-attr3 covers
some of those together with pthread_attr_getaffinity_np.
tst-pthread-attr-sigmask covers pthread_attr_getsigmask_np.
pthread_attr_getstack has unspecified results if called before the
relevant attributes have been set, while pthread_attr_getstackaddr is
deprecated.)
Tested for x86_64.
The pthread_timedjoin_np and pthread_clockjoin_np functions do not
check that a valid time has been specified. The documentation for
these functions in the glibc manual isn't sufficiently detailed to say
if they should, but consistency with POSIX functions such as
pthread_mutex_timedlock and pthread_cond_timedwait strongly indicates
that an EINVAL error is appropriate (even if there might be some
ambiguity about exactly where such a check should go in relation to
other checks for whether the thread exists, whether it's immediately
joinable, etc.). Copy the logic for such a check used in
pthread_rwlock_common.c.
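For illustration, a minimal sketch of that check (valid_nanoseconds is a
glibc-internal helper, reproduced here as an assumption rather than copied
from the tree):

  #include <errno.h>
  #include <stdbool.h>
  #include <time.h>

  /* Sketch of the timeout validation, modelled on pthread_rwlock_common.c.  */
  static bool
  valid_nanoseconds (long int ns)
  {
    return ns >= 0 && ns < 1000000000;
  }

  static int
  check_join_abstime (const struct timespec *abstime)
  {
    /* An invalid tv_nsec yields EINVAL, matching pthread_mutex_timedlock
       and pthread_cond_timedwait.  */
    if (! valid_nanoseconds (abstime->tv_nsec))
      return EINVAL;
    return 0;
  }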
pthread_join_common had some logic calling valid_nanoseconds before
commit 9e92278ffa, "nptl: Remove
clockwait_tid"; I haven't checked exactly what cases that detected.
Tested for x86_64 and x86.
The recursive lock used on abort does not synchronize with a new process
creation (either by fork-like interfaces or posix_spawn ones), nor is
it reinitialized after fork().
Also, the SIGABRT unblock before raise() shows another race condition,
where a fork or posix_spawn() call by another thread, just after the
recursive lock release and before the SIGABRT signal, might create
programs with an unexpected signal mask. With the default option
(without POSIX_SPAWN_SETSIGDEF), the process can see SIG_DFL for
SIGABRT, where it should be SIG_IGN.
To keep abort() AS-safe, raise() no longer changes the process signal
mask, and an AS-safe lock is used if a SIGABRT handler is installed or
the signal is blocked or ignored. With the signal mask change removed,
there is no need for a recursive lock. The lock is also taken on
both _Fork() and posix_spawn(), to avoid the spawned process seeing the
abort handler as SIG_DFL.
A read-write lock is used to avoid serializing _Fork and posix_spawn
execution. Both sigaction (SIGABRT) and abort() need to take the lock
as writers (since both change the disposition).
The fallback is also simplified: there is no need to use a loop of
ABORT_INSTRUCTION after _exit() (if the syscall does not terminate the
process, the system is broken).
The proposed fix changes how setjmp works on a SIGABRT handler, where
glibc does not save the signal mask. So usage like the below will now
always abort.
static volatile int chk_fail_ok;
static jmp_buf chk_fail_buf;

static void
handler (int sig)
{
  if (chk_fail_ok)
    {
      chk_fail_ok = 0;
      longjmp (chk_fail_buf, 1);
    }
  else
    _exit (127);
}
[...]
  signal (SIGABRT, handler);
[...]
  chk_fail_ok = 1;
  if (! setjmp (chk_fail_buf))
    {
      // Something that can call abort, like a failed fortify function.
      chk_fail_ok = 0;
      printf ("FAIL\n");
    }
Such cases will need to use sigsetjmp instead.
The _dl_start_profile calls sigaction through _profil, and to avoid
pulling abort() into the loader the call is replaced with __libc_sigaction.
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Reviewed-by: DJ Delorie <dj@redhat.com>
Use the setresuid32 system call if it is available, preferring
it over setresuid. If both system calls exist, setresuid
is the 16-bit variant. This fixes a build failure on
sparcv9-linux-gnu.
The current racy approach is to enable asynchronous cancellation
before making the syscall and restore the previous cancellation
type once the syscall returns, and check if cancellation has happened
during the cancellation entrypoint.
As described in BZ#12683, this approach shows 2 problems:
1. Cancellation can act after the syscall has returned from the
kernel, but before userspace saves the return value. It might
result in a resource leak if the syscall allocated a resource or had a
side effect (partial read/write), and there is no way for the program
to handle it with cancellation handlers.
2. If a signal is handled while the thread is blocked at a cancellable
syscall, the entire signal handler runs with asynchronous
cancellation enabled. This can lead to issues if the signal
handler calls functions which are async-signal-safe but not
async-cancel-safe.
For the cancellation to work correctly, there are 5 points at which the
cancellation signal could arrive:
[ ... )[ ... )[ syscall ]( ...
   1      2     3     4    5
1. Before initial testcancel, e.g. [*... testcancel)
2. Between testcancel and syscall start, e.g. [testcancel...syscall start)
3. While syscall is blocked and no side effects have yet taken
place, e.g. [ syscall ]
4. Same as 3 but with side-effects having occurred (e.g. a partial
read or write).
5. After syscall end e.g. (syscall end...*]
And libc wants to act on cancellation in cases 1, 2, and 3 but not
in cases 4 or 5. For cases 4 and 5, the cancellation will eventually
happen in the next cancellable entrypoint without any further external
event.
The proposed solution for each case is:
1. Do a conditional branch based on whether the thread has received
a cancellation request;
2. It can be caught by the signal handler determining that the saved
program counter (from the ucontext_t) is in some address range
beginning just before the "testcancel" and ending with the
syscall instruction.
3. SIGCANCEL can be caught by the signal handler and determine that
the saved program counter (from the ucontext_t) is in the address
range beginning just before "testcancel" and ending with the first
uninterruptable (via a signal) syscall instruction that enters the
kernel.
4. In this case, except for certain syscalls that ALWAYS fail with
EINTR even for non-interrupting signals, the kernel will reset
the program counter to point at the syscall instruction during
signal handling, so that the syscall is restarted when the signal
handler returns. So, from the signal handler's standpoint, this
looks the same as case 2, and thus it's taken care of.
5. For syscalls with side-effects, the kernel cannot restart the
syscall; when it's interrupted by a signal, the kernel must cause
the syscall to return with whatever partial result is obtained
(e.g. partial read or write).
6. The saved program counter points just after the syscall
instruction, so the signal handler won't act on cancellation.
This is similar to 4. since the program counter is past the syscall
instruction.
So the proposed fixes are:
1. Remove the enable_asynccancel/disable_asynccancel function usage in
cancellable syscall definition and instead make them call a common
symbol that will check if cancellation is enabled (__syscall_cancel
at nptl/cancellation.c), call the arch-specific cancellable
entry-point (__syscall_cancel_arch), and cancel the thread when
required.
2. Provide an arch-specific generic system call wrapper function
that contains global markers. These markers will be used by the
SIGCANCEL signal handler to check if the interruption happened within
a valid syscall and if the syscall has side-effects.
A reference implementation sysdeps/unix/sysv/linux/syscall_cancel.c
is provided. However, the markers may not be placed at the expected
locations depending on how INTERNAL_SYSCALL_NCS is
implemented by the architecture. It is expected that all
architectures add an arch-specific implementation.
3. Rewrite the SIGCANCEL asynchronous handler to check both the
cancellation type and whether the current IP from the signal handler
falls between the global markers, and act accordingly (see the sketch
after this list).
4. Adjust libc code to replace LIBC_CANCEL_ASYNC/LIBC_CANCEL_RESET with
the appropriate cancelable syscalls.
5. Adjust 'lowlevellock-futex.h' arch-specific implementations to
provide cancelable futex calls.
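As a rough illustration of the handler check from item 3 (an x86_64-only
sketch; the marker symbol names and the exact range test are assumptions,
not necessarily what glibc uses):

  #define _GNU_SOURCE
  #include <stdbool.h>
  #include <stdint.h>
  #include <ucontext.h>

  /* Markers assumed to delimit the arch-specific syscall-cancel bridge.  */
  extern const char __syscall_cancel_arch_start[];
  extern const char __syscall_cancel_arch_end[];

  /* Act on cancellation only if the interrupted PC lies inside the bridge,
     i.e. before the syscall instruction has produced any side effect.  */
  static bool
  pc_in_cancellable_window (const ucontext_t *uc)
  {
    uintptr_t pc = (uintptr_t) uc->uc_mcontext.gregs[REG_RIP];
    return pc >= (uintptr_t) __syscall_cancel_arch_start
           && pc < (uintptr_t) __syscall_cancel_arch_end;
  }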
Some architectures require specific support on syscall handling:
* On i386 the syscall cancel bridge needs to use the old int80
instruction because, with the optimized vDSO symbol, the resulting PC
value for an interrupted syscall points to an address outside the
expected markers in __syscall_cancel_arch. It has been discussed on
LKML [1] how the kernel could help userland accomplish this, but as
far as I know the discussion has stalled.
Also, sysenter should not be used directly by libc since its calling
convention is set by the kernel depending on the underlying x86 chip
(check kernel commit 30bfa7b3488bfb1bb75c9f50a5fcac1832970c60).
* mips o32 is the only kABI that requires 7-argument syscalls, and to
avoid adding a requirement on all architectures to support them, mips
support is added with extra internal defines.
Checked on aarch64-linux-gnu, arm-linux-gnueabihf, powerpc-linux-gnu,
powerpc64-linux-gnu, powerpc64le-linux-gnu, i686-linux-gnu, and
x86_64-linux-gnu.
[1] https://lkml.org/lkml/2016/3/8/1105
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Fix an issue with commit b74121ae4b ("Update.") and prevent a stray
process from being left behind by tst-cancel7 (and also tst-cancelx7,
which is the same test built with '-fexceptions' additionally supplied
to the compiler), which then blocks remote testing until the process has
been killed by hand.
This test case creates a thread that runs an extra copy of the test via
system(3), using the '--direct' option so that the test wrapper does
not interfere with this instance. This extra copy executes its business
and calls sigsuspend(2) and then never terminates by itself. Instead it
relies on being killed by the main test process directly via a thread
cancellation request or, should that fail, by issuing SIGKILL either at
the conclusion of 'do_test' or by the test driver via 'do_cleanup' when
the test timeout has been hit or the test driver has been interrupted.
However, if the main test process has instead been killed by a signal,
e.g. due to incorrect execution, before it had a chance to kill the
extra copy of the test case, then the test wrapper will terminate
without running 'do_cleanup' and consequently the extra copy of the test
case will remain forever in its suspended state, and in the remote case
in particular it means that the remote test wrapper will wait forever
for the SSH command to complete.
This has been observed with the 'alpha-linux-gnu' target, where the main
test process triggers SIGSEGV and the test wrapper correctly records:
Didn't expect signal from child: got `Segmentation fault'
in nptl/tst-cancel7.out and terminates, but then the calling SSH command
continues waiting for the remaining process started in the same session
on the remote target to complete.
Address this problem by also registering 'do_cleanup' via atexit(3),
observing that 'support_delete_temp_files' is registered by the test
wrapper before the test initialization function 'do_prepare' is called
and that exit(3) calls the registered functions in the reverse of the
order in which they were registered, so it is safe to refer to
'pidfilename' in 'do_cleanup' invoked by exit(3) because by that time
temporary files have not yet been deleted.
A minor inconvenience is that if 'signal_handler' is invoked in the test
wrapper as a result of SIGALRM rather than SIGINT, then 'do_cleanup'
will be called twice, once as a cleanup handler and again by exit(3).
In reality it is harmless though, because issuing SIGKILL is guarded by
a record lock, so if the first call has succeeded in killing the extra
copy of the test case, then the subsequent call will do nothing.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Move the release of the semaphore used to synchronize between an extra
copy of the test run as a separate process and the main test process
until after the PID file has been locked. This is so that if the cleanup
function gets called by the test driver due to premature termination of
the main test process, then the function does not get at the PID file
before it has been locked and conclude that the extra copy of the test
has already terminated. This won't usually happen due to a relatively
high amount of time required to elapse before timeout triggers in the
test driver, but it will change with the next change.
There is still a small time window remaining with this change in place
where the main test process gets killed for some reason between the
point where the extra copy of the test has already been started by
pthread_create(3) and
a successful return from the call to sem_wait(3), in which case the
cleanup function can be reached before PID has been written to the PID
file and the file locked. It seems that with the test case structured
as it is now and PID-based process management we have no means to avoid
it.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Fix an issue with commit 2af4e3e566 ("Test of semaphores.") by making
the tst-sem11 and tst-sem12 tests use the test driver, preventing them
from ever causing testing to hang forever and never complete, such as
currently happening with the 'mips-linux-gnu' (o32 ABI) target. Adjust
the name of the PREPARE macro, which clashes with the interpretation of
its presence by the test driver, by using a TF_ prefix in reference to
the name of the 'tf' function.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Add a copyright notice to the tst-sem11 and tst-sem12 tests, observing
that they have been originally contributed back in 2007, with commit
2af4e3e566 ("Test of semaphores.").
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
By default, if the C++ toolchain lacks support for static linking,
configure fails to find the C++ header files and the glibc build fails.
The --disable-static-c++-link-check option allows the glibc build to
finish, but static C++ tests will fail if the C++ toolchain doesn't
have the necessary static C++ libraries, which may not be easily installed.
Add the --disable-static-c++-tests option to skip the static C++ link check
and tests. This fixes BZ #31797.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
The conditionals for several mtrace-based tests in catgets, elf, libio,
malloc, misc, nptl, posix, and stdio-common were incorrect, leading to
test failures when bootstrapping glibc without perl.
The correct conditional for mtrace-based tests requires three checks:
first checking for run-built-tests, then build-shared, and lastly that
PERL is not equal to "no" (missing perl).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Testing for `None`-ness with the `==` operator is frowned upon and causes
warnings in at least the "LGTM" Python linter. Fix that.
Signed-off-by: Konstantin Kharlamov <Hi-Angel@yandex.ru>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This ensures that the test still links with a linker that refuses
to create an executable stack marker automatically.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
So that the test is harder to confuse with elf/tst-execstack
(although the tests are supposed to be the same).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Add anonymous mmap annotations on loader malloc, malloc when it
allocates memory with mmap, and on the malloc arena. /proc/self/maps
will now show:
[anon: glibc: malloc arena]
[anon: glibc: malloc]
[anon: glibc: loader malloc]
On arena allocation, glibc annotates only the read/write mapping.
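The annotation itself uses the PR_SET_VMA_ANON_NAME prctl added in Linux
5.17; roughly as follows (a sketch, not glibc's actual call sites, and the
constants require sufficiently new kernel headers):

  #include <stddef.h>
  #include <sys/prctl.h>

  /* Name an anonymous mapping so it shows up in /proc/self/maps.  Kernels
     without CONFIG_ANON_VMA_NAME (or older than 5.17) fail with EINVAL and
     the mapping simply stays unnamed.  */
  static void
  name_anonymous_mapping (void *start, size_t len, const char *name)
  {
    (void) prctl (PR_SET_VMA, PR_SET_VMA_ANON_NAME,
                  (unsigned long) start, len, (unsigned long) name);
  }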
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Reviewed-by: DJ Delorie <dj@redhat.com>
Linux 4.5 removed thread stack annotations due to the complexity of
computing them [1], and Linux added PR_SET_VMA_ANON_NAME on 5.17
as a way to name anonymous virtual memory areas.
This patch adds decoration on the stack created and used by
pthread_create; for a glibc-created thread stack /proc/self/maps will
now show:
[anon: glibc: pthread stack: <tid>]
And for user-provided stacks:
[anon: glibc: pthread user stack: <tid>]
The guard page is not decorated, and the mapping name is cleared when
the thread finishes its execution (so the cached stack does not have any
name associated).
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
[1] 65376df582
Co-authored-by: Ian Rogers <irogers@google.com>
Reviewed-by: DJ Delorie <dj@redhat.com>
If the kernel headers provide a larger struct rseq, we used that
size as the argument to the rseq system call. As a result,
rseq registration would fail on older kernels which only accept
size 32.
Fixes the following test-time errors, which lead to FAILs, on toolchains
that set -z now out of the box, such as the one used on Gentoo Hardened:
.../build-x86-x86_64-pc-linux-gnu-nptl $ grep '' nptl/tst-tls3*.out
nptl/tst-tls3.out:dlopen failed
nptl/tst-tls3-malloc.out:dlopen failed
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Signed-off-by: Andreas K. Hüttel <dilfridge@gentoo.org>
* nptl/descr.h (struct pthread): Remove end_padding member, which
made this type incomplete.
(PTHREAD_STRUCT_END_PADDING): Stop using end_padding.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
With fortification enabled, the return value of system calls needs to be
checked, as it gets the __wur macro applied.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
With fortification enabled, the return value of read calls needs to be
checked, as it gets the __wur macro applied.
Note on read call removal from sysdeps/pthread/tst-cancel20.c and
sysdeps/pthread/tst-cancel21.c:
It is assumed that this second read call was there to overcome the race
condition between pipe closure and thread cancellation that could happen
in the original code. Since this race condition was fixed by commit
d0e3ffb7a5, the second call seems
superfluous. Hence, instead of checking for the return value of read, it
looks reasonable to simply remove it.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reflow all long lines adding comment terminators.
Rename files that cause inconsistent ordering.
Sort all reflowed text using scripts/sort-makefile-lines.py.
No code generation changes observed in binary artifacts.
No regressions on x86_64 and i686.
Created tunable glibc.pthread.stack_hugetlb to control when hugepages
can be used for stack allocation.
In case THP is enabled and glibc.pthread.stack_hugetlb is set to
0, glibc will madvise the kernel not to use hugepages for stack
allocations.
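Roughly, the effect when the tunable is 0 (a sketch, not the exact code or
its placement in the stack allocation path):

  #include <stddef.h>
  #include <sys/mman.h>

  /* When glibc.pthread.stack_hugetlb is 0, advise the kernel not to back
     the freshly mmap'ed stack with transparent huge pages.  */
  static void
  maybe_disable_stack_thp (void *stack, size_t size, int stack_hugetlb)
  {
    if (stack_hugetlb == 0)
      (void) madvise (stack, size, MADV_NOHUGEPAGE);
  }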
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
And make it always supported. The configure option was added in glibc 2.25
and some features require it (such as the hwcap mask, huge pages support,
and lock elision tuning). It also simplifies the build permutations.
Changes from v1:
* Remove glibc.rtld.dynamic_sort changes, it is orthogonal and needs
more discussion.
* Cleanup more code.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
They are both used by __libc_freeres to free all malloc-allocated
library resources, to help tooling like mtrace or valgrind with
memory leak tracking.
The current scheme uses assembly markers and linker script entries
to consolidate the free routine function pointers in the RELRO segment
and the to-be-freed buffers in BSS.
This patch changes it to use specific free functions for
libc_freeres_ptrs buffers and call the function pointer array directly
with call_function_static_weak.
It allows the removal of both the internal macros and the linker
script sections.
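In spirit, the new scheme calls each subsystem's free routine only if it was
linked in, e.g. (a self-contained sketch; the names are illustrative and the
real mechanism is the call_function_static_weak macro):

  /* A weak hook: defined only by the subsystem that owns the buffers.  */
  extern void subsystem_freeres (void) __attribute__ ((weak));

  /* Called from __libc_freeres-like code: skip the hook if the subsystem
     was never linked in.  */
  static void
  free_subsystem_resources (void)
  {
    if (subsystem_freeres != NULL)
      subsystem_freeres ();
  }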
Checked on x86_64-linux-gnu, i686-linux-gnu, and aarch64-linux-gnu.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Currently glibc uses in_time_t_range to detect time_t overflow,
and if it occurs falls back to the 64-bit syscall version.
The function name is confusing because internally time_t might be
either 32 bits or 64 bits (depending on __TIMESIZE).
This patch refactors in_time_t_range by replacing it with
in_int32_t_range for the case of checking whether the 64-bit time_t
syscall should be used.
in_time_t_range is still used to detect overflow of the
syscall return value.
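A minimal sketch of the new helper under that naming (the actual glibc
definition may differ slightly):

  #include <stdbool.h>
  #include <stdint.h>

  /* True if the value fits in a 32-bit signed integer, i.e. the 32-bit
     time_t syscall can be used; otherwise the 64-bit variant is needed.  */
  static inline bool
  in_int32_t_range (int64_t t)
  {
    int32_t s = t;
    return s == t;
  }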
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The old exception handling implementation used function interposition
to replace the dynamic loader implementation (no TLS support) with the
libc implementation (TLS support). This results in problems if the
link order between the dynamic loader and libc is reversed (bug 25486).
The new implementation moves the entire implementation of the
exception handling functions back into the dynamic loader, using
THREAD_GETMEM and THREAD_SETMEM for thread-local data support.
This depends on Hurd support for these macros, added in commit
b65a82e4e7 ("hurd: Add THREAD_GET/SETMEM/_NC").
One small obstacle is that the exception handling facilities are used
before the TCB has been set up, so a check is needed whether the TCB
is available. If not, a regular global variable is used to store the
exception handling information.
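Conceptually (a self-contained sketch with made-up names; the real code uses
THREAD_GETMEM/THREAD_SETMEM on the thread descriptor rather than __thread):

  #include <stdbool.h>

  /* Per-thread catch state once TLS works, plus a single global slot used
     for exceptions raised before the TCB has been set up.  */
  struct catch_state { void *buf; const char **errstring; };

  static struct catch_state early_catch;        /* pre-TCB fallback */
  static __thread struct catch_state tls_catch; /* per-thread state  */
  static bool tcb_ready;                        /* set once the TCB is usable */

  static struct catch_state *
  current_catch (void)
  {
    return tcb_ready ? &tls_catch : &early_catch;
  }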
Also rename dl-error.c to dl-catch.c, to avoid confusion with the
dlerror function.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
clang complains that libc_hidden_data_def (__nptl_threads_events)
creates an invalid alias:
pthread_create.c:50:1: error: alias must point to a defined variable or function
libc_hidden_data_def (__nptl_threads_events)
^
../include/libc-symbols.h:621:37: note: expanded from macro
'libc_hidden_data_def'
It seems that clang requires that a proper declaration is present
prior to the hidden alias creation.
Reviewed-by: Fangrui Song <maskray@google.com>
The current macros use pid as a signed value, which triggers a compiler
warning for process and thread timers. Replace MAKE_PROCESS_CPUCLOCK
with a static inline function that expects the pid as unsigned. This
is similar to what Linux does internally.
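A sketch of such a function, following the kernel's clockid encoding (the
negated PID in the upper bits and the clock type in the low 3 bits); the
exact glibc helper may differ:

  #include <time.h>

  /* Build a process/thread CPU clockid from an unsigned pid, avoiding the
     signedness warnings the old macro triggered.  */
  static inline clockid_t
  make_process_cpuclock (unsigned int pid, clockid_t clock)
  {
    return (clockid_t) ((~pid) << 3) | clock;
  }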
Checked on x86_64-linux-gnu.
Reviewed-by: Arjun Shankar <arjun@redhat.com>
Use <support/test-driver.c> and replace pthread calls to its xpthread
equivalents.
Signed-off-by: Yu Chien Peter Lin <peterlin@andestech.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Rename atomic_exchange_rel/acq to use atomic_exchange_release/acquire
since these map to the standard C11 atomic builtins.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Replace atomic_decrement_and_test with atomic_fetch_add_relaxed.
These are simple counters which do not protect any shared data from
concurrent accesses. Also remove the unused file cond-perf.c.
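For illustration, the C11 equivalent of such a relaxed counter update (plain
statistics counters need no ordering with respect to other memory accesses):

  #include <stdatomic.h>

  static atomic_uint nthreads_counter;

  static void
  count_thread_start (void)
  {
    atomic_fetch_add_explicit (&nthreads_counter, 1, memory_order_relaxed);
  }

  static void
  count_thread_exit (void)
  {
    /* Adding -1 (i.e. UINT_MAX) decrements the unsigned counter.  */
    atomic_fetch_add_explicit (&nthreads_counter, -1u, memory_order_relaxed);
  }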
Passes regress on AArch64.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Replace atomic_increment and atomic_increment_val with atomic_fetch_add_relaxed.
One case in sem_post.c uses release semantics (see comment above it).
The others are simple counters and do not protect any shared data from
concurrent accesses.
Passes regress on AArch64.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Replace the 4 uses of atomic_and and atomic_or with atomic_fetch_and_acquire
and atomic_fetch_or_acquire. This preserves the existing implied semantics;
however, relaxed MO on FUTEX_OWNER_DIED accesses may be correct.
Passes regress on AArch64.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The Z modifier is a nonstandard synonym for z (that predates z
itself) and the compiler might issue a warning for an invalid
conversion specifier.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Replace the 3 uses of atomic_bit_set and atomic_bit_test_set with
atomic_fetch_or_relaxed. Using relaxed MO is correct since the
atomics are used to ensure memory is released only once.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
The implementation is based on scalar ChaCha20 with a per-thread cache.
It uses getrandom or /dev/urandom as fallback to get the initial entropy,
and reseeds the internal state on every 16MB of consumed buffer.
To improve performance and lower memory consumption the per-thread cache
is allocated lazily on the first arc4random function call, and if the
memory allocation fails getentropy or /dev/urandom is used as fallback.
The cache is also cleared on thread exit iff it was initialized (so if
arc4random is not called it is not touched).
Although it is lock-free, arc4random is still not async-signal-safe
(the per thread state is not updated atomically).
The ChaCha20 implementation is based on RFC8439 [1], omitting the final
XOR of the keystream with the plaintext because the plaintext is a
stream of zeros. This strategy is similar to what OpenBSD arc4random
does.
The arc4random_uniform implementation is based on previous work by
Florian Weimer, where the algorithm follows Jérémie Lumbroso's paper
Optimal Discrete Uniform Generation from Coin Flips, and Applications
(2013) [2], which credits Donald E. Knuth and Andrew C. Yao, The
complexity of nonuniform
random number generation (1976), for solving the general case.
The main advantage of this method is that the unit of randomness is not
the uniform random variable (uint32_t), but a random bit. It optimizes the
internal buffer sampling by initially consuming a 32-bit random variable
and then sampling byte by byte. Depending on the upper bound requested,
it might lead to better CPU utilization.
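Usage requires no seeding or error handling; e.g. an unbiased selection from
a small range:

  #include <stdint.h>
  #include <stdlib.h>

  /* Returns a uniformly distributed value in [1, 6]; arc4random_uniform
     avoids the modulo bias of arc4random () % 6.  */
  uint32_t
  roll_die (void)
  {
    return arc4random_uniform (6) + 1;
  }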
Checked on x86_64-linux-gnu, aarch64-linux, and powerpc64le-linux-gnu.
Co-authored-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Yann Droneaud <ydroneaud@opteya.com>
[1] https://datatracker.ietf.org/doc/html/rfc8439
[2] https://arxiv.org/pdf/1304.1916.pdf
And also fixes the SINGLE_THREAD_P macro for SINGLE_THREAD_BY_GLOBAL:
since the single-thread.h header inclusion is in the wrong order, the
define needs to come before including sysdeps/unix/sysdep.h. The macro
is now moved to a per-arch single-thread.h header.
SINGLE_THREAD_P is now used in some more places.
Checked on aarch64-linux-gnu and x86_64-linux-gnu.