This patch reserves space for HWCAP3/HWCAP4 in the powerpc TCB.
These hardware capability bits will be used by future Power
architectures.
The versioned symbol '__parse_hwcap_3_4_and_convert_at_platform'
advertises the availability of the new HWCAP3/HWCAP4 data in the TCB.
This is an ABI change for GLIBC 2.39.
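For illustration, a minimal sketch of the idea (the struct and field
names below are hypothetical, not the actual tcbhead_t layout):
  #include <stdint.h>
  /* Hypothetical illustration: two reserved words at fixed offsets in
     the TCB, so future HWCAP3/HWCAP4 bits can be read without
     consulting the auxiliary vector.  */
  typedef struct
  {
    uint64_t hwcap3_reserved;   /* assumed field name */
    uint64_t hwcap4_reserved;   /* assumed field name */
  } tcb_hwcap_sketch_t;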
Suggested-by: Peter Bergner <bergner@linux.ibm.com>
Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
This makes it more likely that the compiler can compute the strlen
argument in _startup_fatal at compile time, which is required to
avoid a dependency on strlen this early during process startup.
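As an illustration of why this helps, a sketch (the helper below is
hypothetical, not the glibc implementation):
  #include <string.h>
  #include <unistd.h>
  /* When the message is a string literal visible at the call site, the
     compiler can fold strlen to a constant, so no strlen call is
     emitted this early in startup.  */
  static inline void
  startup_fatal_sketch (const char *msg)
  {
    write (STDERR_FILENO, msg, strlen (msg));
    _exit (127);
  }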
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
Rename atomic_exchange_rel/acq to use atomic_exchange_release/acquire
since these map to the standard C11 atomic builtins.
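For reference, a sketch of the mapping (simplified; the exact
definitions in include/atomic.h may differ):
  #define atomic_exchange_acquire(mem, desired) \
    __atomic_exchange_n ((mem), (desired), __ATOMIC_ACQUIRE)
  #define atomic_exchange_release(mem, desired) \
    __atomic_exchange_n ((mem), (desired), __ATOMIC_RELEASE)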
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
libgcc ifunc resolvers that access hwcap via a field in the TCB can't
be called until the thread pointer is set up. Other ifunc resolvers
might need access to at_platform. This patch sets up a fake thread
pointer early, pointing at a copy of tcbhead_t. hwcapinfo.c already
had local variables for hwcap and at_platform; replace them with an
entire tcbhead_t. It's not that large, and this way we easily ensure
hwcap and at_platform are at the same relative offsets as they are in
the real thread block.
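A minimal sketch of the approach (not the actual glibc code; the field
names and the use of r13 as the powerpc thread register are
assumptions):
  #include <stdint.h>
  typedef struct
  {
    uint32_t at_platform;   /* assumed field */
    uint64_t hwcap;         /* assumed field */
  } tcbhead_sketch_t;
  static tcbhead_sketch_t fake_tcb;
  register uintptr_t __thread_register __asm__ ("r13");
  static void
  setup_fake_thread_pointer (uint64_t hwcap, uint32_t platform)
  {
    fake_tcb.hwcap = hwcap;
    fake_tcb.at_platform = platform;
    /* Point r13 just past the sketch TCB, mirroring how the real
       thread pointer sits at a fixed offset from tcbhead_t.  */
    __thread_register = (uintptr_t) (&fake_tcb + 1);
  }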
The patch also conditionally disables part of tst-tlsifunc-static,
"bar address read from IFUNC resolver is incorrect". We can't get a
proper address for a thread variable before glibc initialises TLS.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
I used these shell commands:
../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")
and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 7061 files FOO.
I then removed trailing white space from math/tgmath.h,
support/tst-support-open-dev-null-range.c, and
sysdeps/x86_64/multiarch/strlen-vec.S, to work around the following
obscure pre-commit check failure diagnostics from Savannah. I don't
know why I run into these diagnostics whereas others evidently do not.
remote: *** 912-#endif
remote: *** 913:
remote: *** 914-
remote: *** error: lines with trailing whitespace found
...
remote: *** error: sysdeps/unix/sysv/linux/statx_cp.c: trailing lines
A local register variable is merely a compiler hint, and so not
appropriate in this context. Move the global register variable into
<thread_pointer.h> and include it from <tls.h>, as there can only
be one global definition for one particular register.
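For illustration, a sketch of the distinction (using powerpc's r13 as
an assumed example):
  /* A global register variable reserves the register for the whole
     translation unit; there can be only one such definition per
     register, hence the move into a single shared header.  */
  register void *__thread_pointer_sketch __asm__ ("r13");
  static inline void *
  thread_pointer (void)
  {
    /* A local declaration like
         register void *tp __asm__ ("r13");
       is only a hint and need not read r13 at all.  */
    return __thread_pointer_sketch;
  }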
Fixes commit 8dbeb0561e
("nptl: Add <thread_pointer.h> for defining __thread_pointer").
Reported-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
TLS_INIT_TCB_ALIGN is not actually used. TLS_TCB_ALIGN was likely
introduced to support a configuration where the thread pointer does
not have the same alignment as THREAD_SELF. Only ia64 seems to use
that, but for the stack/pointer guard, not for storing tcbhead_t.
Some ports use TLS_TCB_OFFSET and TLS_PRE_TCB_SIZE to shift
the thread pointer, potentially landing in a different residue class
modulo the alignment, but the changes should not impact that.
In general, given that TLS variables have their own alignment
requirements, having different alignment for the (unshifted) thread
pointer and struct pthread would potentially result in dynamic
offsets, leading to more complexity.
hppa had different values before: __alignof__ (tcbhead_t), which
seems to be 4, and __alignof__ (struct pthread), which was 8
(old default) and is now 32. However, it defines THREAD_SELF as:
/* Return the thread descriptor for the current thread.  */
# define THREAD_SELF \
  ({ struct pthread *__self; \
     __self = __get_cr27 (); \
     __self - 1; \
   })
So the thread pointer points after struct pthread (hence __self - 1),
and they have to have the same alignment on hppa as well.
Similarly, on ia64, the definitions were different. We have:
# define TLS_PRE_TCB_SIZE \
  (sizeof (struct pthread) \
   + (PTHREAD_STRUCT_END_PADDING < 2 * sizeof (uintptr_t) \
      ? ((2 * sizeof (uintptr_t) + __alignof__ (struct pthread) - 1) \
         & ~(__alignof__ (struct pthread) - 1)) \
      : 0))
# define THREAD_SELF \
  ((struct pthread *) ((char *) __thread_self - TLS_PRE_TCB_SIZE))
And TLS_PRE_TCB_SIZE is a multiple of the struct pthread alignment
(confirmed by the new _Static_assert in sysdeps/ia64/libc-tls.c).
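The check is of this form (a sketch; the exact assertion text is an
assumption):
  _Static_assert (TLS_PRE_TCB_SIZE % __alignof__ (struct pthread) == 0,
                  "TLS_PRE_TCB_SIZE must be a multiple of the struct "
                  "pthread alignment");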
On m68k, we have a larger gap between tcbhead_t and struct pthread.
But as far as I can tell, the port is fine with that. The definition
of TCB_OFFSET is sufficient to handle the shifted TCB scenario.
This fixes commit 23c77f6018
("nptl: Increase default TCB alignment to 32").
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
<tls.h> already contains a definition that is quite similar,
but it is not consistent across architectures.
Only architectures for which rseq support is added are covered.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
rseq support will use a 32-byte aligned field in struct pthread,
so the whole struct needs to have at least that alignment.
nptl/tst-tls3mod.c uses TCB_ALIGNMENT; therefore, include <descr.h>
to obtain the fallback definition.
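A minimal sketch of the alignment propagation (illustrative struct,
not the real struct pthread layout):
  #include <stdalign.h>
  /* A 32-byte aligned member forces the containing struct (and thus
     the TCB) to at least 32-byte alignment.  */
  struct pthread_sketch
  {
    void *tcb;
    alignas (32) char rseq_area[32];   /* stand-in for the rseq field */
  };
  _Static_assert (alignof (struct pthread_sketch) >= 32,
                  "alignment follows the most-aligned member");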
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
All the ports now have THREAD_GSCOPE_IN_TCB set to 1. Remove all
support for !THREAD_GSCOPE_IN_TCB, along with the definition itself.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20210915171110.226187-4-bugaevc@gmail.com>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
We stopped adding "Contributed by" or similar lines in sources in 2012
in favour of git logs and keeping the Contributors section of the
glibc manual up to date. Removing these lines makes the license
header a bit more consistent across files and also removes the
possibility of error in attribution when license blocks or files are
copied across, since the contributed-by lines don't actually reflect
reality in those cases.
Move all "Contributed by" and similar lines (Written by, Test by,
etc.) into a new file CONTRIBUTED-BY to retain record of these
contributions. These contributors are also mentioned in
manual/contrib.texi, so we just maintain this additional record as a
courtesy to the earlier developers.
The following scripts were used to filter a list of files to edit in
place and to clean up the CONTRIBUTED-BY file respectively. These
were not added to the glibc sources because they're not expected to be
of any use in the future, given that this is a one-time task:
https://gist.github.com/siddhesh/b5ecac94eabfd72ed2916d6d8157e7dc
https://gist.github.com/siddhesh/15ea1f5e435ace9774f485030695ee02
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
For some architectures, the two functions are aliased, so these
symbols need to be moved at the same time.
The symbols were moved using scripts/move-symbol-to-libc.py.
I used these shell commands:
../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")
and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 6694 files FOO.
I then removed trailing white space from benchtests/bench-pthread-locks.c
and iconvdata/tst-iconv-big5-hkscs-to-2ucs4.c, to work around this
diagnostic from Savannah:
remote: *** pre-commit check failed ...
remote: *** error: lines with trailing whitespace found
remote: error: hook declined to update refs/heads/master
PT_THREAD_POINTER is currently defined inside a #ifndef __ASSEMBLER__
block, but its usage should not be limited to C code, as it can be
useful when accessing TLS from assembly code as well.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
Now __thread_gscope_wait (the function behind THREAD_GSCOPE_WAIT,
formerly __wait_lookup_done) can be implemented directly in ld.so,
eliminating the unprotected GL (dl_wait_lookup_done) function
pointer.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
There are several compiler implementations that allow large stack
allocations to jump over the guard page at the end of the stack and
corrupt memory beyond that. See CVE-2017-1000364.
Compilers can emit code to probe the stack such that the guard page
cannot be skipped, but on aarch64 the probe interval is 64K by default
instead of the minimum supported page size (4K).
This patch enforces at least a 64K guard on aarch64 unless the guard
is disabled by setting its size to 0. For backward-compatibility
reasons the increased guard is not reported, so it is only observable
by exhausting the address space or by parsing /proc/self/maps on
Linux.
On other targets the patch has no effect. If the stack probe interval
is larger than the page size on a target, then ARCH_MIN_GUARD_SIZE can
be defined to get a large enough stack guard on libc-allocated stacks.
The patch does not affect threads with user allocated stacks.
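A sketch of the resulting policy (ARCH_MIN_GUARD_SIZE is the macro
named above; the clamp function is illustrative):
  #include <stddef.h>
  #define ARCH_MIN_GUARD_SIZE (64 * 1024)
  static size_t
  effective_guard_size (size_t user_guard, size_t pagesize)
  {
    if (user_guard == 0)                /* guard explicitly disabled */
      return 0;
    size_t guard = user_guard < pagesize ? pagesize : user_guard;
    return guard < ARCH_MIN_GUARD_SIZE ? ARCH_MIN_GUARD_SIZE : guard;
  }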
Fixes bug 26691.
This patch adds a default pthreadtypes-arch.h; the idea is to simplify
the inclusion of new ports, so an override is required only if the
architecture adds some arch-specific extensions or requirements.
The default values in the new generic header are based on the current
architecture-defined values, and they are not optimal relative to
current code requirements, as shown below:
- On 64 bits, __SIZEOF_PTHREAD_BARRIER_T is defined as 32 while
sizeof (struct pthread_barrier) is 20 bytes.
- On 32 bits, __SIZEOF_PTHREAD_ATTR_T is defined as 36 while
sizeof (struct pthread_attr) is 32.
The default values are not changed, so the generic header can be used
by some architectures.
Checked with a build on affected ABIs.
Change-Id: Ie0cd586258a2650f715c1af0c9fe4e7063b0409a
This patch adds a new generic __pthread_rwlock_arch_t definition meant
to be used by new ports. Its layout mimics the current usage on some
64-bit ports and it allows some ports to use the generic definition.
The arch __pthread_rwlock_arch_t definition is moved from
pthreadtypes-arch.h to another arch-specific header (struct_rwlock.h).
Also, the static initialization macro for pthread_rwlock_t is set to
use an arch-defined macro (__PTHREAD_RWLOCK_INITIALIZER), which
simplifies its implementation.
The default pthread_rwlock_t layout differs from current ports as
follows (see the sketch after this list):
1. The internal layout is the same for 32 bits and 64 bits.
2. The internal flag is an unsigned short, so it should not require
additional padding to align on a word boundary (if the ABI requires
that).
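A sketch of such a layout (field names and widths are assumptions,
not the exact generic struct_rwlock.h):
  /* One layout shared by 32-bit and 64-bit ABIs, with an unsigned
     short flag that needs no extra alignment padding.  */
  struct pthread_rwlock_arch_sketch
  {
    unsigned int readers;
    unsigned int writers;
    unsigned int wrphase_futex;
    unsigned int writers_futex;
    int cur_writer;
    unsigned short shared;
    unsigned short flags;
  };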
Checked with a build on affected ABIs.
Change-Id: I776a6a986c23199929d28a3dcd30272db21cd1d0
The current way of defining the common mutex definition for POSIX and
C11 in pthreadtypes-arch.h (added by commit 06be6368da) is not really
the best option for newer ports. It requires defining some misleading
flags that should always be defined as 0 (__PTHREAD_COMPAT_PADDING_MID
and __PTHREAD_COMPAT_PADDING_END), it exposes options used solely for
linuxthreads compat mode (__PTHREAD_MUTEX_USE_UNION and
__PTHREAD_MUTEX_NUSERS_AFTER_KIND), and it requires newer ports to
explicitly define them (adding more boilerplate code).
This patch adds a new default __pthread_mutex_s definition meant to
be used by newer ports. Its layout mimics the current usage on both
32 and 64 bits ports and it allows most ports to use the generic
definition. Only ports that use some arch-specific definition (such
as hardware lock-elision or linuxthreads compat) requires specific
headers.
For 32 bits, the generic definition mimics the other 32-bit ports in
using a union to define the fields used by adaptive and robust mutexes
(thus not allowing both usages at the same time) and in using a single
linked list for robust mutexes. Both decisions seem to follow what
recent ports have done and make the resulting pthread_mutex_t/mtx_t
object smaller.
Also, the static initialization macro for pthread_mutex_t is set to
use a macro __PTHREAD_MUTEX_INITIALIZER, which the architecture can
redefine in its struct_mutex.h if it requires additional fields to be
initialized.
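A sketch of the pattern (the exact initializer bodies are
assumptions):
  /* Generic default; an architecture's struct_mutex.h may provide its
     own __PTHREAD_MUTEX_INITIALIZER with extra fields.  */
  #ifndef __PTHREAD_MUTEX_INITIALIZER
  # define __PTHREAD_MUTEX_INITIALIZER(__kind) \
     0, 0, 0, 0, __kind, { 0, 0 }
  #endif
  #define PTHREAD_MUTEX_INITIALIZER \
    { { __PTHREAD_MUTEX_INITIALIZER (PTHREAD_MUTEX_TIMED_NP) } }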
Checked with a build on affected ABIs.
Change-Id: I30a22c3e3497805fd6e52994c5925897cffcfe13
The new rwlock implementation added by cc25c8b4c1 (2.25) removed
support for lock elision. This patch removes the remaining unused
arch-specific definitions.
Checked with a build against all affected ABIs.
Change-Id: I5dec8af50e3cd56d7351c52ceff4aa3771b53cd6
This patch adds new build tests to check the offsets of internal
fields in the internal pthread_rwlock_t definition. Although the
'__data.__flags' field layout should be preserved due to static
initializers, the patch also adds tests for the futexes that may be
used in shared memory (although using different libc versions in such
a scenario is not really supported).
Checked with a build against all affected ABIs.
Change-Id: Iccc103d557de13d17e4a3f59a0cad2f4a640c148
The offsets of the pthread_mutex_t fields __data.__nusers,
__data.__spins, __data.__elision, and __data.__list are not required
to be constant across releases. Only __data.__kind is used for static
initializers.
This patch also adds an additional size check for __data.__kind.
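Of roughly this form (a sketch; the exact check in glibc is an
assumption):
  #include <pthread.h>
  _Static_assert (sizeof (((pthread_mutex_t *) 0)->__data.__kind) == 4,
                  "__data.__kind must remain a 32-bit field");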
Checked with a build against affected ABIs.
Change-Id: I7a4e48cc91b4c4ada57e9a5d1b151fb702bfaa9f
Linux kernels from 3.9 through 4.2 do not abort HTM transactions on
syscalls; instead they suspend the transaction and resume it when
leaving the kernel. The side effects of the syscall will always remain
visible, even if the transaction is aborted. This is an issue when a
transaction is used along with the futex syscall, on pthread_cond_wait
for instance, where the futex call might succeed but the transaction
is rolled back, leaving the pthread_cond object in an inconsistent
state.
Glibc used to prevent this by always aborting a transaction before
issuing a syscall. Linux 4.2 also decided to abort active transactions
in syscalls, which makes the glibc workaround superfluous. Worse, the
glibc transaction abort leads to a performance issue on recent kernels
where the HTM state is saved/restored lazily (v4.9). By aborting a
transaction on every syscall, regardless of whether a transaction has
been initiated before, glibc makes the kernel always save/restore the
HTM state (it cannot even lazily disable it after a certain number of
syscall iterations).
Because of this shortcoming, Transactional Lock Elision is now enabled
only when it has been explicitly requested (either by tunables or by a
configure switch) and when the kernel aborts HTM transactions on
syscalls (PPC_FEATURE2_HTM_NOSC). It is reported that with a simple
benchmark [1], the context switch is about 5% faster by not issuing a
tabort on every syscall on newer kernels.
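A sketch of the resulting check in elision_init (simplified; the
declaration mirrors a glibc internal and is an assumption here;
PPC_FEATURE2_HTM_NOSC is from <bits/hwcap.h>):
  extern int __pthread_force_elision;
  static void
  elision_init_sketch (unsigned long int hwcap2)
  {
    /* Even if elision was explicitly requested, keep it off unless
       the kernel aborts HTM transactions on syscalls.  */
    if (!(hwcap2 & PPC_FEATURE2_HTM_NOSC))
      __pthread_force_elision = 0;
  }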
Checked on powerpc64le-linux-gnu with 4.4.0 kernel (Ubuntu 16.04).
* NEWS: Add note about new TLE support on powerpc64le.
* sysdeps/powerpc/nptl/tcb-offsets.sym (TM_CAPABLE): Remove.
* sysdeps/powerpc/nptl/tls.h (tcbhead_t): Rename tm_capable to
__ununsed1.
(TLS_INIT_TP, TLS_DEFINE_INIT_TP): Remove tm_capable setup.
(THREAD_GET_TM_CAPABLE, THREAD_SET_TM_CAPABLE): Remove macros.
* sysdeps/powerpc/powerpc32/sysdep.h,
sysdeps/powerpc/powerpc64/sysdep.h (ABORT_TRANSACTION_IMPL,
ABORT_TRANSACTION): Remove macros.
* sysdeps/powerpc/sysdep.h (ABORT_TRANSACTION): Likewise.
* sysdeps/unix/sysv/linux/powerpc/elision-conf.c (elision_init): Set
__pthread_force_elision iff PPC_FEATURE2_HTM_NOSC is set.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/sysdep.h,
sysdeps/unix/sysv/linux/powerpc/powerpc64/sysdep.h
sysdeps/unix/sysv/linux/powerpc/syscall.S (ABORT_TRANSACTION): Remove
usage.
* sysdeps/unix/sysv/linux/powerpc/not-errno.h: Remove file.
Reported-by: Breno Leitão <leitao@debian.org>
This patch adds several new tunables to control the behavior of
elision on supported platforms[1]. Since elision now depends
on tunables, we should always *compile* with elision enabled,
and leave the code disabled, but available for runtime
selection. This gives us *much* better compile-time testing of
the existing code to avoid bit-rot[2].
Tested on ppc, ppc64, ppc64le, s390x and x86_64.
[1] This part of the patch was initially proposed by
Paul Murphy but had stalled because the framework changed
since the patch was originally proposed:
https://patchwork.sourceware.org/patch/10342/
[2] This part of the patch was initially proposed as an RFC by
Carlos O'Donell. It makes sense to me to integrate it into this patch:
https://sourceware.org/ml/libc-alpha/2017-05/msg00335.html
* elf/dl-tunables.list: Add elision parameters.
* manual/tunables.texi: Add entries about elision tunable.
* sysdeps/unix/sysv/linux/powerpc/elision-conf.c:
Add callback functions to dynamically enable/disable elision.
Add multiple callback functions to set elision parameters.
Deleted __libc_enable_secure check.
* sysdeps/unix/sysv/linux/s390/elision-conf.c: Likewise.
* sysdeps/unix/sysv/linux/x86/elision-conf.c: Likewise.
* configure: Regenerated.
* configure.ac: Option enable_lock_elision was deleted.
* config.h.in: ENABLE_LOCK_ELISION flag was deleted.
* config.make.in: Remove references to enable_lock_elision.
* manual/install.texi: Elision configure option was removed.
* INSTALL: Regenerated to remove enable_lock_elision.
* nptl/Makefile:
Disable elision so it can verify the error case for destroying a mutex.
* sysdeps/powerpc/nptl/elide.h:
Cleanup ENABLE_LOCK_ELISION check.
Deleted macros for the case when ENABLE_LOCK_ELISION was not defined.
* sysdeps/s390/configure: Regenerated.
* sysdeps/s390/configure.ac: Remove references to enable_lock_elision.
* nptl/tst-mutex8.c:
Deleted all #ifndef ENABLE_LOCK_ELISION from the test.
* sysdeps/powerpc/powerpc32/sysdep.h:
Deleted all ENABLE_LOCK_ELISION checks.
* sysdeps/powerpc/powerpc64/sysdep.h: Likewise.
* sysdeps/powerpc/sysdep.h: Likewise.
* sysdeps/s390/nptl/bits/pthreadtypes-arch.h: Likewise.
* sysdeps/unix/sysv/linux/powerpc/force-elision.h: Likewise.
* sysdeps/unix/sysv/linux/s390/elision-conf.h: Likewise.
* sysdeps/unix/sysv/linux/s390/force-elision.h: Likewise.
* sysdeps/unix/sysv/linux/s390/lowlevellock.h: Likewise.
* sysdeps/unix/sysv/linux/s390/Makefile: Remove references to
enable-lock-elision.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.vnet.ibm.com>
This patch adds two new internal defines to set the internal
pthread_mutex_t layout required by the supported ABIs:
1. __PTHREAD_MUTEX_NUSERS_AFTER_KIND, which controls whether the
__nusers field is defined before or after __kind. The preferred value
for new ports is 0, which places __nusers before __kind.
2. __PTHREAD_MUTEX_USE_UNION, which controls whether the internal
__spins and __list members are placed inside a union for linuxthreads
compatibility. The preferred value for new ports is 0, which avoids
the union; a sketch of both flags follows below.
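A sketch of how the two flags shape the layout (simplified, not the
exact glibc struct):
  struct pthread_mutex_sketch
  {
    int lock;
    unsigned int count;
    int owner;
  #if __PTHREAD_MUTEX_NUSERS_AFTER_KIND
    int kind;
    unsigned int nusers;
  #else
    unsigned int nusers;
    int kind;
  #endif
  #if __PTHREAD_MUTEX_USE_UNION
    union
    {
      int spins;               /* linuxthreads-compatible overlay */
      void *list;
    } u;
  #else
    short spins;
    short elision;
    void *list;
  #endif
  };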
It fixes the wrong offset of the __kind field on x86_64-linux-gnu-x32.
Checked with a make check run-built-tests=no on all affected ABIs.
[BZ #22298]
* nptl/allocatestack.c (allocate_stack): Check if
__PTHREAD_MUTEX_HAVE_PREV is non-zero, instead of whether
__PTHREAD_MUTEX_HAVE_PREV is defined.
* nptl/descr.h (pthread): Likewise.
* nptl/nptl-init.c (__pthread_initialize_minimal_internal):
Likewise.
* nptl/pthread_create.c (START_THREAD_DEFN): Likewise.
* sysdeps/nptl/fork.c (__libc_fork): Likewise.
* sysdeps/nptl/pthread.h (PTHREAD_MUTEX_INITIALIZER): Likewise.
* sysdeps/nptl/bits/thread-shared-types.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New
defines.
(__pthread_internal_list): Check __PTHREAD_MUTEX_USE_UNION instead
of __WORDSIZE for internal layout.
(__pthread_mutex_s): Check __PTHREAD_MUTEX_NUSERS_AFTER_KIND instead
of __WORDSIZE for the internal __nusers layout, and
__PTHREAD_MUTEX_USE_UNION instead of __WORDSIZE to decide whether to
use a union for the __spins and __list fields.
(__PTHREAD_MUTEX_HAVE_PREV): Define also for __PTHREAD_MUTEX_USE_UNION
case.
* sysdeps/aarch64/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New
defines.
* sysdeps/alpha/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/arm/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/hppa/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/ia64/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/m68k/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/microblaze/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/mips/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/nios2/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/powerpc/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/s390/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/sh/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/sparc/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/tile/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/x86/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This patch adds a new build test to check the offsets of user-visible
internal fields. Although currently the only field which is statically
initialized to a non-zero value is pthread_mutex_t.__data.__kind, the
tests also check the offsets of the __kind, __spins, __elision (if
supported), and __list internal members. An internal header
(pthread-offsets.h) is added to each major ABI with the reference
values.
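The checks are of roughly this form (a sketch; the assertion text is
an assumption):
  #include <stddef.h>
  #include <pthread.h>
  _Static_assert (offsetof (pthread_mutex_t, __data.__kind)
                  == __PTHREAD_MUTEX_KIND_OFFSET,
                  "__kind offset must match the ABI reference value");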
Checked on x86_64-linux-gnu and with a build check for all affected
ABIs (aarch64-linux-gnu, alpha-linux-gnu, arm-linux-gnueabihf,
hppa-linux-gnu, i686-linux-gnu, ia64-linux-gnu, m68k-linux-gnu,
microblaze-linux-gnu, mips64-linux-gnu, mips64-n32-linux-gnu,
mips-linux-gnu, powerpc64le-linux-gnu, powerpc-linux-gnu,
s390-linux-gnu, s390x-linux-gnu, sh4-linux-gnu, sparc64-linux-gnu,
sparcv9-linux-gnu, tilegx-linux-gnu, tilegx-linux-gnu-x32,
tilepro-linux-gnu, x86_64-linux-gnu, and x86_64-linux-x32).
* nptl/pthreadP.h (ASSERT_PTHREAD_STRING,
ASSERT_PTHREAD_INTERNAL_OFFSET): New macro.
* nptl/pthread_mutex_init.c (__pthread_mutex_init): Add build time
checks for internal pthread_mutex_t offsets.
* sysdeps/aarch64/nptl/pthread-offsets.h
(__PTHREAD_MUTEX_NUSERS_OFFSET, __PTHREAD_MUTEX_KIND_OFFSET,
__PTHREAD_MUTEX_SPINS_OFFSET, __PTHREAD_MUTEX_ELISION_OFFSET,
__PTHREAD_MUTEX_LIST_OFFSET): New macro.
* sysdeps/alpha/nptl/pthread-offsets.h: Likewise.
* sysdeps/arm/nptl/pthread-offsets.h: Likewise.
* sysdeps/hppa/nptl/pthread-offsets.h: Likewise.
* sysdeps/i386/nptl/pthread-offsets.h: Likewise.
* sysdeps/ia64/nptl/pthread-offsets.h: Likewise.
* sysdeps/m68k/nptl/pthread-offsets.h: Likewise.
* sysdeps/microblaze/nptl/pthread-offsets.h: Likewise.
* sysdeps/mips/nptl/pthread-offsets.h: Likewise.
* sysdeps/nios2/nptl/pthread-offsets.h: Likewise.
* sysdeps/powerpc/nptl/pthread-offsets.h: Likewise.
* sysdeps/s390/nptl/pthread-offsets.h: Likewise.
* sysdeps/sh/nptl/pthread-offsets.h: Likewise.
* sysdeps/sparc/nptl/pthread-offsets.h: Likewise.
* sysdeps/tile/nptl/pthread-offsets.h: Likewise.
* sysdeps/x86_64/nptl/pthread-offsets.h: Likewise.
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This patch removes all the replicated pthread definitions across the
architectures and consolidates them in shared headers. The new
organization is as follows:
* Architecture-specific definitions (such as pthread type sizes) are
placed in the new pthreadtypes-arch.h header in the arch-specific
path.
* All shared structure definitions are moved to a common NPTL header
at sysdeps/nptl/bits/pthreadtypes.h (which now includes the
arch-specific one for internal definitions).
* Also, for future C11 thread support, both the mutex and condition
variable definitions are placed in a common header at
sysdeps/nptl/bits/thread-shared-types.h.
This is a refactoring patch with no expected functional changes.
Checked with a build for all major ABIs (aarch64-linux-gnu, alpha-linux-gnu,
arm-linux-gnueabi, i386-linux-gnu, ia64-linux-gnu,
m68k-linux-gnu, microblaze-linux-gnu, mips{64}-linux-gnu, nios2-linux-gnu,
powerpc{64le}-linux-gnu, s390{x}-linux-gnu, sparc{64}-linux-gnu,
tile{pro,gx}-linux-gnu, and x86_64-linux-gnu).
* posix/Makefile (headers): Add pthreadtypes-arch.h and
thread-shared-types.h.
* sysdeps/aarch64/nptl/bits/pthreadtypes-arch.h: New file: arch
specific thread definition.
* sysdeps/alpha/nptl/bits/pthreadtypes-arch.h: Likewise.
* sysdeps/arm/nptl/bits/pthreadtypes-arch.h: Likewise.
* sysdeps/hppa/nptl/bits/pthreadtypes-arch.h: Likewise.
* sysdeps/ia64/nptl/bits/pthreadtypes-arch.h: Likewise.
* sysdeps/m68k/nptl/bits/pthreadtypes-arch.h: Likewise.
* sysdeps/microblaze/nptl/bits/pthreadtypes-arch.h: Likewise.
* sysdeps/mips/nptl/bits/pthreadtypes-arch.h: Likewise.
* sysdeps/nios2/nptl/bits/pthreadtypes-arch.h: Likewise.
* sysdeps/powerpc/nptl/bits/pthreadtypes-arch.h: Likewise.
* sysdeps/s390/nptl/bits/pthreadtypes-arch.h: Likewise.
* sysdeps/sh/nptl/bits/pthreadtypes-arch.h: Likewise.
* sysdeps/sparc/nptl/bits/pthreadtypes-arch.h: Likewise.
* sysdeps/tile/nptl/bits/pthreadtypes-arch.h: Likewise.
* sysdeps/x86/nptl/bits/pthreadtypes-arch.h: Likewise.
* sysdeps/nptl/bits/thread-shared-types.h: New file: shared
thread definition between POSIX and C11.
* sysdeps/aarch64/nptl/bits/pthreadtypes.h.: Remove file.
* sysdeps/alpha/nptl/bits/pthreadtypes.h: Likewise.
* sysdeps/arm/nptl/bits/pthreadtypes.h: Likewise.
* sysdeps/hppa/nptl/bits/pthreadtypes.h: Likewise.
* sysdeps/m68k/nptl/bits/pthreadtypes.h: Likewise.
* sysdeps/microblaze/nptl/bits/pthreadtypes.h: Likewise.
* sysdeps/mips/nptl/bits/pthreadtypes.h: Likewise.
* sysdeps/nios2/nptl/bits/pthreadtypes.h: Likewise.
* sysdeps/ia64/nptl/bits/pthreadtypes.h: Likewise.
* sysdeps/powerpc/nptl/bits/pthreadtypes.h: Likewise.
* sysdeps/s390/nptl/bits/pthreadtypes.h: Likewise.
* sysdeps/sh/nptl/bits/pthreadtypes.h: Likewise.
* sysdeps/sparc/nptl/bits/pthreadtypes.h: Likewise.
* sysdeps/tile/nptl/bits/pthreadtypes.h: Likewise.
* sysdeps/x86/nptl/bits/pthreadtypes.h: Likewise.
* sysdeps/nptl/bits/pthreadtypes.h: New file: common thread
definitions shared across all architectures.
This patch moves each arch-specific pthreadtypes.h to a similar path
for all architectures (sysdeps/<arch>/nptl/bits). No functional or
build change is expected. The idea is mainly to organize the header
placement for all architectures.
Checked with a build for all major ABIs (aarch64-linux-gnu, alpha-linux-gnu,
arm-linux-gnueabi, i386-linux-gnu, ia64-linux-gnu,
m68k-linux-gnu, microblaze-linux-gnu [1], mips{64}-linux-gnu, nios2-linux-gnu,
powerpc{64le}-linux-gnu, s390{x}-linux-gnu, sparc{64}-linux-gnu,
tile{pro,gx}-linux-gnu, and x86_64-linux-gnu).
* sysdeps/unix/sysv/linux/x86/Implies: New file.
* sysdeps/unix/sysv/linux/alpha/bits/pthreadtypes.h: Move to ...
* sysdeps/alpha/nptl/bits/pthreadtypes.h: ... here.
* sysdeps/unix/sysv/linux/powerpc/bits/pthreadtypes.h: Move to ...
* sysdeps/powerpc/nptl/bits/pthreadtypes.h: ... here.
* sysdeps/x86/bits/pthreadtypes.h: Move to ...
* sysdeps/x86/nptl/bits/pthreadtypes.h: ... here.
This patch removes the PID cache and its usage in current glibc code.
The caching is mainly a performance optimization to avoid the getpid
syscall; however, it adds some issues:
- The exposed clone syscall will try to set pid/tid to make the new
thread somewhat compatible with current glibc assumptions. This causes
a set of issues with new workloads and use cases (such as BZ#17214 and
[1]), as well as with new internal usage of clone to optimize other
algorithms (such as clone plus CLONE_VM for posix_spawn, BZ#19957).
- The caching complexity has also caused some bugs in the past [2] [3]
and requires more effort from each port to handle such requirements
(for both the clone and vfork implementations).
- The caching performance gain is mainly on getpid and some specific
code paths. The getpid performance leverage is questionable [4], both
for the idea of getpid being a hotspot and for the getpid
implementation itself (if it is indeed a justifiable hotspot, a vDSO
symbol could lead to a much simpler solution).
Other usage is mainly in unusual code paths, such as pthread
cancellation signal handling.
For thread creation (in stack allocation) the code simplification in
fact adds some performance gain, since there is no need to traverse
the stack cache and invalidate the pid of each element.
Other thread usages will require a direct getpid syscall, such as the
cancellation/setxid signal, thread cancellation, the thread failure
path (at create_thread), and thread signaling (pthread_kill and
pthread_sigqueue). However, these are hardly usual hotspots, and I
think adding a syscall there is justifiable.
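After the removal, getpid reduces to the plain syscall, along the
lines of (a sketch, not the exact implementation):
  #include <sys/syscall.h>
  #include <unistd.h>
  static pid_t
  getpid_sketch (void)
  {
    return (pid_t) syscall (SYS_getpid);
  }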
It also simplifies both the clone and vfork arch-specific
implementations. And reviewing each fork implementation revealed some
discrepancies that this patch also solves:
- The microblaze clone/vfork does not set/reset the pid/tid field.
- hppa uses the default vfork implementation that falls back to fork.
Since vfork is deprecated, I do not think we should bother with it.
The patch also removes the TID caching in clone. My understanding of
that semantic is that it tries to provide some pthread usability after
a user program issues clone directly (as done by thread creation with
CLONE_PARENT_SETTID and the pthread tid member). However, as stated
before in multiple discussion threads, glibc provides the clone
syscall without further supporting all these semantics.
I ran a full make check on x86_64, x32, i686, armhf, aarch64, and powerpc64le.
For sparc32, sparc64, and mips I ran the basic fork and vfork tests from
the posix/ folder (on a QEMU system). So it would require further testing
on alpha, hppa, ia64, m68k, nios2, s390, sh, and tile (I excluded
microblaze because it already implements the patch semantics regarding
clone/vfork).
[1] https://codereview.chromium.org/800183004/
[2] https://sourceware.org/ml/libc-alpha/2006-07/msg00123.html
[3] https://sourceware.org/bugzilla/show_bug.cgi?id=15368
[4] http://yarchive.net/comp/linux/getpid_caching.html
* sysdeps/nptl/fork.c (__libc_fork): Remove pid cache setting.
* nptl/allocatestack.c (allocate_stack): Likewise.
(__reclaim_stacks): Likewise.
(setxid_signal_thread): Obtain pid through syscall.
* nptl/nptl-init.c (sigcancel_handler): Likewise.
(sighandle_setxid): Likewise.
* nptl/pthread_cancel.c (pthread_cancel): Likewise.
* sysdeps/unix/sysv/linux/pthread_kill.c (__pthread_kill): Likewise.
* sysdeps/unix/sysv/linux/pthread_sigqueue.c (pthread_sigqueue):
Likewise.
* sysdeps/unix/sysv/linux/createthread.c (create_thread): Likewise.
* sysdeps/unix/sysv/linux/getpid.c: Remove file.
* nptl/descr.h (struct pthread): Change comment about pid value.
* nptl/pthread_getattr_np.c (pthread_getattr_np): Remove thread
pid assert.
* sysdeps/unix/sysv/linux/pthread-pids.h (__pthread_initialize_pids):
Do not set pid value.
* nptl_db/td_ta_thr_iter.c (iterate_thread_list): Remove thread
pid cache check.
* nptl_db/td_thr_validate.c (td_thr_validate): Likewise.
* sysdeps/aarch64/nptl/tcb-offsets.sym: Remove pid offset.
* sysdeps/alpha/nptl/tcb-offsets.sym: Likewise.
* sysdeps/arm/nptl/tcb-offsets.sym: Likewise.
* sysdeps/hppa/nptl/tcb-offsets.sym: Likewise.
* sysdeps/i386/nptl/tcb-offsets.sym: Likewise.
* sysdeps/ia64/nptl/tcb-offsets.sym: Likewise.
* sysdeps/m68k/nptl/tcb-offsets.sym: Likewise.
* sysdeps/microblaze/nptl/tcb-offsets.sym: Likewise.
* sysdeps/mips/nptl/tcb-offsets.sym: Likewise.
* sysdeps/nios2/nptl/tcb-offsets.sym: Likewise.
* sysdeps/powerpc/nptl/tcb-offsets.sym: Likewise.
* sysdeps/s390/nptl/tcb-offsets.sym: Likewise.
* sysdeps/sh/nptl/tcb-offsets.sym: Likewise.
* sysdeps/sparc/nptl/tcb-offsets.sym: Likewise.
* sysdeps/tile/nptl/tcb-offsets.sym: Likewise.
* sysdeps/x86_64/nptl/tcb-offsets.sym: Likewise.
* sysdeps/unix/sysv/linux/aarch64/clone.S: Remove pid and tid caching.
* sysdeps/unix/sysv/linux/alpha/clone.S: Likewise.
* sysdeps/unix/sysv/linux/arm/clone.S: Likewise.
* sysdeps/unix/sysv/linux/hppa/clone.S: Likewise.
* sysdeps/unix/sysv/linux/i386/clone.S: Likewise.
* sysdeps/unix/sysv/linux/ia64/clone2.S: Likewise.
* sysdeps/unix/sysv/linux/mips/clone.S: Likewise.
* sysdeps/unix/sysv/linux/nios2/clone.S: Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/clone.S: Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S: Likewise.
* sysdeps/unix/sysv/linux/s390/s390-32/clone.S: Likewise.
* sysdeps/unix/sysv/linux/s390/s390-64/clone.S: Likewise.
* sysdeps/unix/sysv/linux/sh/clone.S: Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc32/clone.S: Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc64/clone.S: Likewise.
* sysdeps/unix/sysv/linux/tile/clone.S: Likewise.
* sysdeps/unix/sysv/linux/x86_64/clone.S: Likewise.
* sysdeps/unix/sysv/linux/aarch64/vfork.S: Remove pid set and reset.
* sysdeps/unix/sysv/linux/alpha/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/arm/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/i386/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/ia64/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/m68k/clone.S: Likewise.
* sysdeps/unix/sysv/linux/m68k/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/mips/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/nios2/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/s390/s390-32/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/s390/s390-64/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/sh/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc32/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc64/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/tile/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/x86_64/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/tst-clone2.c (f): Remove direct pthread
struct access.
(clone_test): Remove function.
(do_test): Rewrite to take in consideration pid is not cached anymore.
Work around a GCC behavior with hardware transactional memory built-ins.
GCC doesn't treat the PowerPC transactional built-ins as compiler
barriers, moving instructions past the transaction boundaries and
altering their atomicity.
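A minimal sketch of the workaround idea (assuming empty asm memory
clobbers around the built-in; not the exact glibc macro, and built
with -mhtm):
  /* The empty asm statements with a "memory" clobber act as compiler
     barriers, so GCC cannot move memory accesses across the
     transaction boundary.  */
  #define tbegin_with_barrier()                     \
    ({ __asm__ volatile ("" ::: "memory");          \
       unsigned int __ret = __builtin_tbegin (0);   \
       __asm__ volatile ("" ::: "memory");          \
       __ret; })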
This patch adds a new feature for powerpc. In order to get faster access to
the HWCAP/HWCAP2 bits and the platform number (i.e. for implementing
__builtin_cpu_is () / __builtin_cpu_supports () in GCC) without the overhead
of reading them from the auxiliary vector, we now reserve space for them in
the TCB.
This is an ABI change for GLIBC 2.23.
A new versioned symbol '__parse_hwcap_and_convert_at_platform' is available to
get the data from the auxiliary vector and parse it, and store it for later use
in the TLS initialization code. This function is called very early
(in _dl_sysdep_start () via DL_PLATFORM_INFO for the dynamic linking case, and
in __libc_start_main () for the static linking case) to make sure the data is
available at the time of TLS initialization.
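A sketch of the intended use of the new accessors (THREAD_GET_HWCAP
and THREAD_GET_AT_PLATFORM come from the tls.h changes listed below;
PPC_FEATURE_HAS_VSX is from <bits/hwcap.h>):
  static int
  have_vsx (void)
  {
    return (THREAD_GET_HWCAP () & PPC_FEATURE_HAS_VSX) != 0;
  }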
* sysdeps/powerpc/Makefile (sysdep-dl-routines): Add hwcapinfo.
(sysdep_routines): Likewise.
(sysdep-rtld-routines): Likewise.
[$(subdir) = nptl](tests): Add test-get_hwcap and test-get_hwcap-static
[$(subdir) = nptl](tests-static): Add test-get_hwcap-static.
* sysdeps/powerpc/Versions: Added new
__parse_hwcap_and_convert_at_platform symbol to GLIBC-2.23.
* sysdeps/powerpc/hwcapinfo.c: New file.
(__tcb_parse_hwcap_and_convert_at_platform): New function to initialize
and parse hwcap, hwcap2 and platform number information.
* sysdeps/powerpc/hwcapinfo.h: New file. Creates global variables
to store HWCAP+HWCAP2 and platform number.
* sysdeps/powerpc/nptl/tcb-offsets.sym: Added new offsets
for HWCAP+HWCAP2 and platform number in the TCB.
* sysdeps/powerpc/nptl/tls.h: New functionality. Stores
the HWCAP, HWCAP2 and platform number in the TCB.
(dtv): Added new fields for HWCAP+HWCAP2 and platform number.
(TLS_INIT_TP): Included calls to add the hwcap and
at_platform values in the TCB in TP initialization.
(TLS_DEFINE_INIT_TP): Likewise.
(THREAD_GET_HWCAP): New macro.
(THREAD_SET_HWCAP): Likewise.
(THREAD_GET_AT_PLATFORM): Likewise.
(THREAD_SET_AT_PLATFORM): Likewise.
* sysdeps/powerpc/powerpc32/dl-machine.h:
(dl_platform_init): New function that calls
__parse_hwcap_and_convert_at_platform for the dynamic linking case for
powerpc32.
* sysdeps/powerpc/powerpc64/dl-machine.h: Likewise, for powerpc64.
* sysdeps/powerpc/test-get_hwcap-static.c: New file. Testcase for
this functionality, static linking case.
* sysdeps/powerpc/test-get_hwcap.c: New file. Likewise, dynamic
linking case.
* sysdeps/unix/sysv/linux/powerpc/libc-start.c: Added call to
__parse_hwcap_and_convert_at_platform for the static linking case.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/ld.abilist:
Included the new __parse_hwcap_and_convert_at_platform symbol in the
ABI list for GLIBC 2.23.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/ld-le.abilist:
Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/ld.abilist:
Likewise.
This patch optimizes the powerpc spinlock implementation by:
* Using the correct EH hint bit on the larx for supported ISAs. For
lock acquisition, the thread that acquired the lock with a successful
stcx does not want to give away write ownership of the cacheline. The
idea is to make the load reservation "sticky" about retaining write
authority to the line. That way, the store that must inevitably come
to release the lock can succeed quickly and not contend with other
threads issuing lwarx. If another thread does a store to the line
(false sharing), the winning thread must give up write authority. The
proper value of EH for the larx for a lock acquisition is 1; see the
sketch after this list.
* Increasing contended lock performance by up to 40%, with no
measurable impact on uncontended locks on P8.
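A sketch of an acquire attempt with EH=1 on the lwarx (ISA 2.06+
syntax; simplified from the actual glibc macros):
  static inline int
  spin_trylock_sketch (int *lock)
  {
    int old;
    __asm__ __volatile__
      ("1: lwarx  %0,0,%2,1\n"  /* load-reserve, EH=1: keep write authority */
       "   cmpwi  %0,0\n"
       "   bne    2f\n"         /* already locked */
       "   stwcx. %3,0,%2\n"
       "   bne-   1b\n"         /* lost reservation, retry */
       "2:"
       : "=&r" (old), "+m" (*lock)
       : "r" (lock), "r" (1)
       : "cr0", "memory");
    /* The real code also issues an acquire barrier on success.  */
    return old;                 /* 0 if the lock was taken */
  }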
Thanks to Adhemerval Zanella who did most of the work. I've run some
tests, and addressed some minor feedback.
* sysdeps/powerpc/nptl/pthread_spin_lock.c (pthread_spin_lock):
Add lwarx hint, and use macro for acquire instruction.
* sysdeps/powerpc/nptl/pthread_spin_trylock.c (pthread_spin_trylock):
Likewise.
* sysdeps/unix/sysv/linux/powerpc/pthread_spin_unlock.c: Move to ...
* sysdeps/powerpc/nptl/pthread_spin_unlock.c: ... here, and
update to use new atomic macros.
The skip_lock_out_of_tbegin_retries adaptive parameter was
not being used correctly, nor as described. This prevents
a fallback for all users of the lock if a transient abort
occurs within the accepted number of retries.
[BZ #19174]
* sysdeps/powerpc/nptl/elide.h (__elide_lock): Fix usage of
.skip_lock_out_of_tbegin_retries.
* sysdeps/unix/sysv/linux/powerpc/elision-lock.c
(__lll_lock_elision): Likewise, and respect a value of
try_tbegin <= 0.
The previous code evaluated the preprocessor token is_lock_free into a
variable before starting a transaction. This behavior can cause an
error if another thread acquires the lock (without using a transaction)
between the evaluation of the token and the beginning of the
transaction.
This bug can be triggered with the following order of events:
1. The lock accessed by is_lock_free is free.
2. Thread T1 evaluates is_lock_free and stores into register R1 that the
lock is free.
3. Thread T2 acquires the same lock used in is_lock_free.
4. T1 begins the transaction, creating a memory barrier where is_lock_free
is false, but R1 is true.
5. T1 reads R1 and doesn't abort the transaction.
6. T1 calls ELIDE_UNLOCK, which reads false from is_lock_free and decides
to unlock a lock acquired by T2, leading to undefined behavior.
This patch delays the evaluation of is_lock_free until inside the
transaction, by moving this part of the code to the macro ELIDE_LOCK.
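A sketch of the difference (simplified, not the actual elide.h code;
build with -mhtm):
  /* Stand-in for the lock word that the is_lock_free token reads.  */
  extern volatile int lock_is_free;
  /* Buggy shape: the read happens before the transaction begins, so
     another thread can take the lock in between (steps 2-3 above).  */
  static int
  elide_lock_buggy (void)
  {
    int cached = lock_is_free;          /* step 2 */
    if (__builtin_tbegin (0))           /* step 4 */
      {
        if (cached)                     /* step 5: stale value */
          return 1;                     /* elided, but lock may be held */
        __builtin_tabort (0);
      }
    return 0;                           /* fall back to normal locking */
  }
  /* Fixed shape: the read is inside the transaction, so a concurrent
     lock acquisition conflicts and aborts the transaction instead.  */
  static int
  elide_lock_fixed (void)
  {
    if (__builtin_tbegin (0))
      {
        if (lock_is_free)
          return 1;
        __builtin_tabort (0);
      }
    return 0;
  }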
[BZ #18743]
* sysdeps/powerpc/nptl/elide.h (__elide_lock): Move most of this
code to...
(ELIDE_LOCK): ...here.
(__get_new_count): New function with part of the code from
__elide_lock that updates the value of adapt_count after a
transaction abort.
(__elided_trylock): Moved this code to...
(ELIDE_TRYLOCK): ...here.