Rename atomic_exchange_rel/acq to atomic_exchange_release/acquire,
since these names map directly to the standard C11 atomic built-ins.
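A minimal sketch of the correspondence, using ISO C11 <stdatomic.h> (the
expansion shown here is an illustration, not the glibc-internal
definition):

    #include <stdatomic.h>

    static _Atomic int futex_word;

    void
    example (void)
    {
      /* atomic_exchange_acquire (mem, 1) corresponds to:  */
      int old = atomic_exchange_explicit (&futex_word, 1,
                                          memory_order_acquire);
      /* atomic_exchange_release (mem, 0) corresponds to:  */
      old = atomic_exchange_explicit (&futex_word, 0,
                                      memory_order_release);
      (void) old;
    }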
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
I used these shell commands:
../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")
and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 7061 files FOO.
I then removed trailing white space from math/tgmath.h,
support/tst-support-open-dev-null-range.c, and
sysdeps/x86_64/multiarch/strlen-vec.S, to work around the following
obscure pre-commit check failure diagnostics from Savannah. I don't
know why I run into these diagnostics whereas others evidently do not.
remote: *** 912-#endif
remote: *** 913:
remote: *** 914-
remote: *** error: lines with trailing whitespace found
...
remote: *** error: sysdeps/unix/sysv/linux/statx_cp.c: trailing lines
This slightly reduces code size, as can be seen below.
__libc_lock_unlock is usually used along with __libc_lock_lock in
the same function. __libc_lock_lock already has an out-of-line
slow path, so this change should not introduce many additional
non-leaf functions.
This change also fixes a link failure in 32-bit Arm thumb mode
because commit 1f9c804fbd
("nptl: Use internal low-level lock type for !IS_IN (libc)")
introduced __libc_do_syscall calls outside of libc.
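A hedged sketch of the resulting shape (the helper name is an
assumption): the releasing exchange stays inline, and only the contended
wake-up is a call into libc.

    #define __libc_lock_unlock(NAME)                                       \
      do                                                                   \
        {                                                                  \
          int *__futex = &(NAME);                                          \
          /* 1 -> 0: uncontended, nothing else to do.  2 -> 0: wake.  */   \
          if (__glibc_unlikely (atomic_exchange_release (__futex, 0) > 1)) \
            __lll_lock_wake_private (__futex);  /* out of line, in libc */ \
        }                                                                  \
      while (0)

This keeps the futex-wake syscall (and, on Arm thumb, the
__libc_do_syscall helper it needs) inside libc.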
Before x86-64:
text data bss dec hex filename
1937748 20456 54896 2013100 1eb7ac libc.so.6
25601 856 12768 39225 9939 nss/libnss_db.so.2
40310 952 25144 66406 10366 nss/libnss_files.so.2
After x86-64:
text data bss dec hex filename
1935312 20456 54896 2010664 1eae28 libc.so.6
25559 864 12768 39191 9917 nss/libnss_db.so.2
39764 960 25144 65868 1014c nss/libnss_files.so.2
Before i686:
text data bss dec hex filename
2110961 11272 39144 2161377 20fae1 libc.so.6
27243 428 12652 40323 9d83 nss/libnss_db.so.2
43062 476 25028 68566 10bd6 nss/libnss_files.so.2
After i686:
text data bss dec hex filename
2107347 11272 39144 2157763 20ecc3 libc.so.6
26929 432 12652 40013 9c4d nss/libnss_db.so.2
43132 480 25028 68640 10c20 nss/libnss_files.so.2
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The elision interfaces are closely aligned between the targets that
implement them, so declare them in the generic <lowlevellock.h>
file.
Empty .c stubs are provided, so that fewer makefile updates
under sysdeps are needed. Also simplify initialization via
__libc_early_init.
The symbols __lll_clocklock_elision, __lll_lock_elision,
__lll_trylock_elision, __lll_unlock_elision, __pthread_force_elision
move into libc. For the time being, non-hidden references are used
from libpthread to access them, but once that part of libpthread
is moved into libc, hidden symbols will be used again. (Hidden
references seem desirable to reduce the likelihood of transaction
aborts.)
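For reference, a sketch of the now-generic declarations (the exact
signatures are assumptions modeled on the x86 variants; powerpc and s390
differ in detail):

    extern int __lll_clocklock_elision (int *futex, short *adapt_count,
                                        clockid_t clockid,
                                        const struct timespec *timeout,
                                        int private);
    extern int __lll_lock_elision (int *futex, short *adapt_count,
                                   int private);
    extern int __lll_trylock_elision (int *futex, short *adapt_count);
    extern int __lll_unlock_elision (int *lock, int private);
    extern int __pthread_force_elision;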
I used these shell commands:
../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")
and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 6694 files FOO.
I then removed trailing white space from benchtests/bench-pthread-locks.c
and iconvdata/tst-iconv-big5-hkscs-to-2ucs4.c, to work around this
diagnostic from Savannah:
remote: *** pre-commit check failed ...
remote: *** error: lines with trailing whitespace found
remote: error: hook declined to update refs/heads/master
To help the y2038 work avoid duplicating all the nanosleep logic in a
non-cancellable version, this patch replaces it with a new futex
operation, lll_timedwait. The changes are:
- Add an expected value to __lll_clocklock_wait, so it can be used
to wait for generic values.
- Remove its internal atomic operation and move the logic to
__lll_clocklock (see the sketch after this list). This makes
__lll_clocklock_wait even more generic and __lll_clocklock slightly
faster on the fast path (since it no longer requires a function call).
- Add lll_timedwait, which uses __lll_clocklock_wait, to replace both
__pause_nocancel and __nanosleep_nocancel.
It also allows removing the sparc32 __lll_clocklock_wait implementation
(since it is similar to the generic one).
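A hedged sketch of the reworked __lll_clocklock fast path (close to, but
not necessarily identical to, the actual macro):

    #define __lll_clocklock(futex, clockid, abstime, private)        \
      ({                                                             \
        int *__futex = (futex);                                      \
        int __ret = 0;                                               \
        /* Fast path: acquire the lock (0 -> 1) inline.  */          \
        if (__glibc_unlikely                                         \
            (atomic_compare_and_exchange_bool_acq (__futex, 1, 0)))  \
          /* Contended: mark waiters (-> 2), then wait until the     \
             expected value 2 changes or the timeout expires.  */    \
          while (atomic_exchange_acquire (__futex, 2) != 0)          \
            {                                                        \
              __ret = __lll_clocklock_wait (__futex, 2, clockid,     \
                                            abstime, private);       \
              if (__ret == EINVAL || __ret == ETIMEDOUT)             \
                break;                                               \
            }                                                        \
        __ret;                                                       \
      })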
Checked on x86_64-linux-gnu, sparcv9-linux-gnu, and i686-linux-gnu.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Rename lll_timedlock to lll_clocklock and add a clockid
parameter to indicate the clock that the abstime parameter should
be measured against, in preparation for adding
pthread_mutex_clocklock.
The name change mirrors the naming of the exposed pthread functions:
timed => absolute timeout measured against CLOCK_REALTIME (or the clock
specified by the attribute in the case of pthread_cond_timedwait).
clock => absolute timeout measured against the clock specified in the
preceding parameter.
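Illustratively (the exact prototypes are assumptions based on the
description above):

    /* Before: "timed", absolute timeout implicitly against
       CLOCK_REALTIME.  */
    int __lll_timedlock (int *futex, const struct timespec *abstime,
                         int private);

    /* After: "clock", absolute timeout against the clock named by
       clockid.  */
    int __lll_clocklock (int *futex, clockid_t clockid,
                         const struct timespec *abstime, int private);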
* sysdeps/nptl/lowlevellock.h (lll_clocklock): Rename from
lll_timedlock and add clockid parameter. (__lll_clocklock): Rename
from __lll_timedlock and add clockid parameter.
* sysdeps/unix/sysv/linux/sparc/lowlevellock.h (lll_clocklock):
Likewise.
* nptl/lll_timedlock_wait.c (__lll_clocklock_wait): Rename from
__lll_timedlock_wait and add clockid parameter. Use __clock_gettime
rather than __gettimeofday so that clockid can be used. This means
that conversion from struct timeval is no longer required.
* sysdeps/sparc/sparc32/lowlevellock.c (lll_clocklock_wait):
Likewise.
* sysdeps/sparc/sparc32/lll_timedlock_wait.c: Update comment to
refer to __lll_clocklock_wait rather than __lll_timedlock_wait.
* nptl/pthread_mutex_timedlock.c (lll_clocklock_elision): Rename
from lll_timedlock_elision, add clockid parameter and use
meaningful names for other parameters. (__pthread_mutex_timedlock):
Pass CLOCK_REALTIME where necessary to lll_clocklock and
lll_clocklock_elision.
* sysdeps/unix/sysv/linux/powerpc/lowlevellock.h
(lll_clocklock_elision): Rename from lll_timedlock_elision and add
clockid parameter. (__lll_clocklock_elision): Rename from
__lll_timedlock_elision and add clockid parameter.
* sysdeps/unix/sysv/linux/s390/lowlevellock.h: Likewise.
* sysdeps/unix/sysv/linux/x86/lowlevellock.h: Likewise.
* sysdeps/unix/sysv/linux/powerpc/elision-timed.c
(__lll_lock_elision): Call __lll_clocklock_elision rather than
__lll_timedlock_elision. (EXTRAARG): Add clockid parameter.
(LLL_LOCK): Likewise.
* sysdeps/unix/sysv/linux/s390/elision-timed.c: Likewise.
* sysdeps/unix/sysv/linux/x86/elision-timed.c: Likewise.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This patch removes the arch-specific x86 assembly implementation of
low-level locking and consolidates the 64-bit and 32-bit variants into
a single implementation.
Unlike other architectures, the x86 lll_trylock, lll_lock, and
lll_unlock implement a single-thread optimization that avoids the
atomic operation, using a plain cmpxchgl instead. This patch implements
the same optimization generically, using the new single-thread.h
definitions, while preserving the previous semantics.
The lll_cond_trylock, lll_cond_lock, and lll_timedlock just use
atomic operations plus calls to lll_lock_wait*.
For __lll_lock_wait_private and __lll_lock_wait, the generic
implementation is used, since there is no indication that an assembly
implementation is required performance-wise.
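A hedged sketch of how the single-thread optimization can be expressed
with the single-thread.h definitions (SINGLE_THREAD_P is the glibc
predicate; the surrounding code is an assumption):

    static inline int
    __lll_trylock (int *futex)
    {
      /* With only one thread, nothing can race with us, so a plain
         compare-and-swap suffices; on x86 the compiler can emit
         cmpxchgl without the lock prefix.  */
      if (SINGLE_THREAD_P)
        {
          if (*futex == 0)
            {
              *futex = 1;
              return 0;       /* Lock acquired.  */
            }
          return 1;           /* Lock busy.  */
        }
      /* Multi-threaded: a real atomic operation is required.  */
      return atomic_compare_and_exchange_bool_acq (futex, 1, 0);
    }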
Checked on x86_64-linux-gnu and i686-linux-gnu.
* sysdeps/nptl/lowlevellock.h (__lll_trylock): New macro.
(lll_trylock): Call __lll_trylock.
* sysdeps/unix/sysv/linux/i386/libc-lowlevellock.S: Remove file.
* sysdeps/unix/sysv/linux/i386/lll_timedlock_wait.c: Likewise.
* sysdeps/unix/sysv/linux/i386/lowlevellock.S: Likewise.
* sysdeps/unix/sysv/linux/i386/lowlevellock.h: Likewise.
* sysdeps/unix/sysv/linux/x86_64/libc-lowlevellock.S: Likewise.
* sysdeps/unix/sysv/linux/x86_64/lll_timedlock_wait.c: Likewise.
* sysdeps/unix/sysv/linux/x86_64/lowlevellock.S: Likewise.
* sysdeps/unix/sysv/linux/x86_64/lowlevellock.h: Likewise.
* sysdeps/unix/sysv/linux/x86/lowlevellock.h: New file.
* sysdeps/unix/sysv/linux/x86_64/cancellation.S: Include
lowlevellock-futex.h.
Patch ce7eb0e903 ("nptl: Cleanup cancellation macros") changed the
join sequence of the common internal __pthread_timedjoin_ex to use the
new macro lll_wait_tid. The idea was that this macro would issue the
cancellable futex operation depending on whether the timeout is used or
not. However, if a timeout is used, __lll_timedwait_tid is called, and
it is not a cancellable entry point.
This patch fixes it by simplifying the code in various ways:
- Instead of adding cancellation handling to __lll_timedwait_tid, the
generic implementation moves to pthread_join_common.c (now called
timedwait_tid, with some fixes to use the correct type for the pid).
- The lll_wait_tid macro is removed, along with its copies in the
x86_64, i686, and sparc arch-specific lowlevellock.h.
- The sparc32 __lll_timedwait_tid is also removed, since the code is
similar to the generic one.
- The x86_64 and i386 arch-specific __lll_timedwait_tid implementations
are also removed, since they are similar in functionality to the
generic C code and there is no indication they are better than
compiler-generated code.
New tests, tst-join8 and tst-join9, are provided to check if
pthread_timedjoin_np acts as a cancellation point.
Checked on x86_64-linux-gnu, i686-linux-gnu, sparcv9-linux-gnu, and
aarch64-linux-gnu.
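A hedged sketch of the generic cancellable futex macro used here
(modeled on the names in the ChangeLog below; the exact expansion is an
assumption):

    #define lll_futex_timed_wait_cancel(futexp, val, timeout, private)  \
      ({                                                                \
        /* Enable asynchronous cancellation around the futex wait, so   \
           the blocking syscall acts as a cancellation point.  */       \
        int __oldtype = LIBC_CANCEL_ASYNC ();                           \
        long int __err = lll_futex_timed_wait (futexp, val, timeout,    \
                                               private);                \
        LIBC_CANCEL_RESET (__oldtype);                                  \
        __err;                                                          \
      })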
[BZ #24215]
* nptl/Makefile (libpthread-routines): Remove lll_timedwait_tid.
(tests): Add tst-join8 tst-join9.
* nptl/lll_timedwait_tid.c: Remove file.
* sysdeps/sparc/sparc32/lll_timedwait_tid.c: Likewise.
* sysdeps/unix/sysv/linux/i386/lll_timedwait_tid.c: Likewise.
* sysdeps/unix/sysv/linux/x86_64/lll_timedwait_tid.c: Likewise.
* nptl/pthread_join_common.c (timedwait_tid): New function.
(__pthread_timedjoin_ex): Act as a cancellation entry point if block
is set.
* nptl/tst-join5.c (thread_join): New function.
(tf1, tf2, do_test): Use libsupport and add pthread_timedjoin_np
check.
* nptl/tst-join8.c: New file.
* nptl/tst-join9.c: Likewise.
* sysdeps/nptl/lowlevellock-futex.h (lll_futex_wait_cancel,
lll_futex_timed_wait_cancel): Add generic macros.
* sysdeps/nptl/lowlevellock.h (__lll_timedwait_tid, lll_wait_tid):
Remove definitions.
* sysdeps/unix/sysv/linux/i386/lowlevellock.h: Likewise.
* sysdeps/unix/sysv/linux/sparc/lowlevellock.h: Likewise.
* sysdeps/unix/sysv/linux/x86_64/lowlevellock.h: Likewise.
* sysdeps/sparc/sparc32/lowlevellock.c (__lll_timedwait_tid):
Remove function.
* sysdeps/unix/sysv/linux/i386/lowlevellock.S (__lll_timedwait_tid):
Likewise.
* sysdeps/unix/sysv/linux/x86_64/lowlevellock.S: Likewise.
* sysdeps/unix/sysv/linux/lowlevellock-futex.h
(lll_futex_timed_wait_cancel): New macro.
This patch wraps all uses of *_{enable,disable}_asynccancel and
*_CANCEL_{ASYNC,RESET} in either already provided macros
(lll_futex_timed_wait_cancel) or new ones created where the
functionality was not provided (SYSCALL_CANCEL_NCS,
lll_futex_wait_cancel, and lll_futex_timed_wait_cancel).
Also, for some generic implementations, the direct calls to these
macros are removed, since the underlying symbols are supposed to
provide cancellation support.
This is a preliminary patch intended to simplify the work required
for the BZ#12683 fix. It is a refactoring; no semantic changes are
expected.
Checked on x86_64-linux-gnu and i686-linux-gnu.
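For illustration, the shape of the refactoring at a typical call site
(a sketch; the variable names are assumptions):

    /* Before: the bracketing was open-coded at each call site.  */
    int oldtype = LIBC_CANCEL_ASYNC ();
    err = lll_futex_wait (futexp, val, private);
    LIBC_CANCEL_RESET (oldtype);

    /* After: a single macro hides the bracketing.  */
    err = lll_futex_wait_cancel (futexp, val, private);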
* nptl/pthread_join_common.c (__pthread_timedjoin_ex): Use
lll_wait_tid with timeout.
* nptl/sem_wait.c (__old_sem_wait): Use lll_futex_wait_cancel.
* sysdeps/nptl/aio_misc.h (AIO_MISC_WAIT): Use
futex_reltimed_wait_cancelable for cancellable mode.
* sysdeps/nptl/gai_misc.h (GAI_MISC_WAIT): Likewise.
* sysdeps/posix/open64.c (__libc_open64): Do not call cancellation
macros.
* sysdeps/posix/sigwait.c (__sigwait): Likewise.
* sysdeps/posix/waitid.c (__waitid): Likewise.
* sysdeps/unix/sysdep.h (__SYSCALL_CANCEL_CALL,
SYSCALL_CANCEL_NCS): New macro.
* sysdeps/nptl/lowlevellock.h (lll_wait_tid): Add timeout argument.
(lll_timedwait_tid): Remove macro.
* sysdeps/unix/sysv/linux/i386/lowlevellock.h (lll_wait_tid):
Likewise.
(lll_timedwait_tid): Likewise.
* sysdeps/unix/sysv/linux/sparc/lowlevellock.h (lll_wait_tid):
Likewise.
(lll_timedwait_tid): Likewise.
* sysdeps/unix/sysv/linux/x86_64/lowlevellock.h (lll_wait_tid):
Likewise.
(lll_timedwait_tid): Likewise.
* sysdeps/unix/sysv/linux/clock_nanosleep.c (__clock_nanosleep):
Use INTERNAL_SYSCALL_CANCEL.
* sysdeps/unix/sysv/linux/futex-internal.h
(futex_reltimed_wait_cancelable): Use LIBC_CANCEL_{ASYNC,RESET}
instead of __pthread_{enable,disable}_asynccancel.
* sysdeps/unix/sysv/linux/lowlevellock-futex.h
(lll_futex_wait_cancel): New macro.
On s390 (31-bit), if glibc is built with -Os, pthread_join sometimes
blocks indefinitely. This is observable with, e.g., the test case
intl/tst-gettext6.
pthread_join is calling lll_wait_tid(tid), which performs the futex-wait
syscall in a loop as long as tid != 0 (thread is alive).
On s390 (when built with -Os), tid is loaded from memory before
being compared against zero, and then tid is loaded a second time
in order to pass it to the futex-wait syscall.
If the thread exits in between, then the futex-wait syscall is
called with the value zero, and it waits until a futex-wake occurs.
As the thread has already exited, there won't be a futex-wake.
In lll_wait_tid, the tid is stored to the local variable __tid,
which is then used as the argument for the futex-wait syscall.
But unfortunately, the compiler is allowed to reload the value
from memory.
With this patch, the tid is loaded with atomic_load_acquire.
Then the compiler is not allowed to reload the value for __tid from memory.
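The fixed macro, roughly (a sketch consistent with the ChangeLog entry
below):

    #define lll_wait_tid(tid)                                     \
      do                                                          \
        {                                                         \
          __typeof (tid) __tid;                                   \
          /* The acquire load pins the value: the compiler must   \
             not reload tid between the comparison and the        \
             syscall argument.  */                                \
          while ((__tid = atomic_load_acquire (&(tid))) != 0)     \
            lll_futex_wait (&(tid), __tid, LLL_SHARED);           \
        }                                                         \
      while (0)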
ChangeLog:
[BZ #23137]
* sysdeps/nptl/lowlevellock.h (lll_wait_tid):
Use atomic_load_acquire to load __tid.
The macros lll_trylock and lll_cond_trylock are extended with a
__glibc_unlikely hint. Now the trylock macros are based on the same
assumption about a free/busy lock as lll_lock.
With the hint, gcc emits code in e.g. pthread_mutex_trylock which does
not use jumps if the lock is free. Without the hint, it had to jump
away if the lock is free.
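The hinted macro, roughly (per the ChangeLog entry below):

    #define lll_trylock(lock)                                           \
      __glibc_unlikely (atomic_compare_and_exchange_bool_acq (&(lock),  \
                                                              1, 0))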
Tested on s390x, ppc.
ChangeLog:
* sysdeps/nptl/lowlevellock.h (lll_trylock, lll_cond_trylock):
Add __glibc_unlikely hint.
lll_robust_unlock on i386 and x86_64 first sets the futex word to
FUTEX_WAITERS|0 before calling __lll_unlock_wake, which will set the
futex word to 0. If the thread is killed between these steps, then the
futex word will be FUTEX_WAITERS|0, and the kernel (at least current
upstream) will not set it to FUTEX_OWNER_DIED|FUTEX_WAITERS because 0 is
not equal to the TID of the crashed thread.
The lll_robust_lock assembly code on i386 and x86_64 is not prepared to
deal with this case because the fastpath tries to only CAS 0 to TID and
not FUTEX_WAITERS|0 to TID; the slowpath simply waits until it can CAS 0
to TID or the futex_word has the FUTEX_OWNER_DIED bit set.
This issue is fixed by removing the custom x86 assembly code and using
the generic C code instead. However, instead of adding more duplicate
code to the custom x86 lowlevellock.h, the code of the lll_robust* functions
is inlined into the single call sites that exist for each of these functions
in the pthread_mutex_* functions. The robust mutex paths in the latter
have been slightly reorganized to make them simpler.
This patch is meant to be easy to backport, so C11-style atomics are not
used.
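A hedged sketch of the difference (illustrative, not the literal code;
C11-style atomics are deliberately avoided, as in the patch):

    /* Old i386/x86_64 assembly path, in effect two steps:  */
    atomic_store_relaxed (futex, FUTEX_WAITERS);  /* thread may die here */
    __lll_unlock_wake (futex, private);           /* then stores 0 */

    /* Generic C path, a single atomic step: a crash leaves either
       TID|FUTEX_WAITERS (which the kernel marks FUTEX_OWNER_DIED)
       or 0, never FUTEX_WAITERS|0.  */
    if ((atomic_exchange_rel (futex, 0) & FUTEX_WAITERS) != 0)
      lll_futex_wake (futex, 1, private);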
[BZ #20985]
* nptl/Makefile: Adapt.
* nptl/pthread_mutex_cond_lock.c (LLL_ROBUST_MUTEX_LOCK): Remove.
(LLL_ROBUST_MUTEX_LOCK_MODIFIER): New.
* nptl/pthread_mutex_lock.c (LLL_ROBUST_MUTEX_LOCK): Remove.
(LLL_ROBUST_MUTEX_LOCK_MODIFIER): New.
(__pthread_mutex_lock_full): Inline lll_robust* functions and adapt.
* nptl/pthread_mutex_timedlock.c (pthread_mutex_timedlock): Inline
lll_robust* functions and adapt.
* nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_full): Likewise.
* sysdeps/nptl/lowlevellock.h (__lll_robust_lock_wait,
__lll_robust_lock, lll_robust_cond_lock, __lll_robust_timedlock_wait,
__lll_robust_timedlock, __lll_robust_unlock): Remove.
* sysdeps/unix/sysv/linux/i386/lowlevellock.h (lll_robust_lock,
lll_robust_cond_lock, lll_robust_timedlock, lll_robust_unlock): Remove.
* sysdeps/unix/sysv/linux/x86_64/lowlevellock.h (lll_robust_lock,
lll_robust_cond_lock, lll_robust_timedlock, lll_robust_unlock): Remove.
* sysdeps/unix/sysv/linux/sparc/lowlevellock.h (__lll_robust_lock_wait,
__lll_robust_lock, lll_robust_cond_lock, __lll_robust_timedlock_wait,
__lll_robust_timedlock, __lll_robust_unlock): Remove.
* nptl/lowlevelrobustlock.c: Remove file.
* nptl/lowlevelrobustlock.sym: Likewise.
* sysdeps/unix/sysv/linux/i386/lowlevelrobustlock.S: Likewise.
* sysdeps/unix/sysv/linux/x86_64/lowlevelrobustlock.S: Likewise.
POSIX and C++11 require that a thread can destroy a mutex if no other
thread owns the mutex, is blocked on the mutex, or will try to acquire
it in the future. After destroying the mutex, it can reuse or unmap the
underlying memory. Thus, we must not access a mutex's memory after
releasing it. Currently, we can load the private flag after releasing
the mutex, which is fixed by this patch.
See https://sourceware.org/bugzilla/show_bug.cgi?id=13690 for more
background.
We need to call futex_wake on the lock after releasing it, however. This
is by design, and can lead to spurious wake-ups on unrelated futex words
(e.g., when the mutex memory is reused for another mutex). This behavior
is documented in the glibc-internal futex API and in recent drafts of the
Linux kernel's futex documentation (see the draft_futex branch of
git://git.kernel.org/pub/scm/docs/man-pages/man-pages.git).
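A hedged sketch of the fix (details are assumptions): everything that
is needed after the releasing step, here the private flag, is copied
into a local before the releasing exchange.

    #define __lll_unlock(futex, private)                    \
      ((void)                                               \
       ({                                                   \
         int *__futex = (futex);                            \
         /* Copy private first: once the exchange below     \
            releases the lock, the mutex memory may already \
            be reused or unmapped.  */                      \
         int __private = (private);                         \
         int __oldval = atomic_exchange_rel (__futex, 0);   \
         if (__glibc_unlikely (__oldval > 1))               \
           lll_futex_wake (__futex, 1, __private);          \
       }))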
2014-08-12 Bernard Ogden <bernie.ogden@linaro.org>
[BZ #16892]
* sysdeps/nptl/lowlevellock.h (__lll_timedlock): Use
atomic_compare_and_exchange_bool_acq rather than atomic_exchange_acq.
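The fixed fast path, roughly (a sketch): the unconditional exchange
could overwrite the "waiters present" value 2 with 1, whereas the
compare-and-exchange leaves it intact.

    #define __lll_timedlock(futex, abstime, private)                 \
      ({                                                             \
        int *__futex = (futex);                                      \
        int __ret = 0;                                               \
        /* Only 0 -> 1; a value of 2 must not be clobbered.  */      \
        if (__glibc_unlikely                                         \
            (atomic_compare_and_exchange_bool_acq (__futex, 1, 0)))  \
          __ret = __lll_timedlock_wait (__futex, abstime, private);  \
        __ret;                                                       \
      })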