The n32 ABI will require the 64-bit time symbols for y2038 support.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
Commit 68ab82f566 added support for the scv
syscall ABI on powerpc. Since then, systems with kernel and processor
support have started using scv. However, adding proper support for a new
syscall ABI requires changes to several other projects (e.g. qemu, valgrind,
strace, kernel), which are gradually receiving support.
Meanwhile, having a way to disable scv on glibc at build time can be useful for
distros that may encounter conflicts with projects that still do not support the
scv ABI, buying time until proper support is added.
This commit adds a --disable-scv option that disables scv support and uses sc
for all syscalls, like before commit 68ab82f566.
Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
Now that pthread_kill is provided by libc.so it is possible to
implement the generic POSIX implementation as
'pthread_kill(pthread_self(), sig)'.
In the Linux implementation, pthread_kill reads the target TID from
the TCB. For raise, this is not possible because it would make raise
fail when issued after vfork (where the resulting process has a
different TID from the parent, but its TCB is not updated as it is for
pthread_create). To make raise use pthread_kill, the latter is made
usable from vfork by obtaining the target thread ID through the gettid
syscall.
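As a rough sketch (function names are placeholders and error handling
is simplified; the real implementations differ in detail):

#define _GNU_SOURCE
#include <errno.h>
#include <pthread.h>
#include <signal.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Generic POSIX version described above.  */
static int
raise_generic (int sig)
{
  return pthread_kill (pthread_self (), sig);
}

/* Linux idea: read the TID with the gettid syscall instead of from the
   TCB, so the result stays correct in a vfork child.  */
static int
raise_linux (int sig)
{
  pid_t tid = syscall (SYS_gettid);
  return syscall (SYS_tgkill, getpid (), tid, sig) == 0 ? 0 : errno;
}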
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Now that the thread cancellation type is not accessed concurrently
anymore, it is possible to move it out of 'cancelhandling'.
By removing the cancel state from the internal thread cancel handling
state, there is no need to check whether the cancelled bit was set in
the CAS operation.
This allows simplifying the cancellation wrappers, and
CANCEL_CANCELED_AND_ASYNCHRONOUS is removed.
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Now that the thread cancellation state is not accessed concurrently anymore,
it is possible to move it out of 'cancelhandling'.
The code is also simplified: CANCELLATION_P is replaced with an
internal pthread_testcancel call, and the CANCELSTATE_BIT{MASK} is
removed.
With this behavior, pthread_setcancelstate does not need to act on
cancellation if the cancel type is asynchronous (it is already handled
either by pthread_setcanceltype or by the signal handler).
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Since commit 0c1c3a771e
("dlfcn: Move dlopen into libc") libdl.a is empty, so linking
against it is no longer necessary.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Move all gconv-modules configuration files to gconv-modules.conf.
That is, the S390 extensions now become gconv-modules-s390.conf. Move
both configuration files into gconv-modules.d.
Now GCONV_PATH/gconv-modules is read only for backward compatibility
with third-party gconv module directories.
Reviewed-by: DJ Delorie <dj@redhat.com>
Add inline assembler for the roundeven functions.
Passes GLIBC regression. Note GCC does not inline the builtin (PR100966),
so this cannot be used for now.
This patch replaces the obsolete AC_TRY_COMPILE with AC_COMPILE_IFELSE or
AC_PREPROC_IFELSE.
It has been confirmed that GNU autoconf 2.69 no longer emits
obsolete-construct warnings. Regenerating updated the following files:
- configure
- sysdeps/mach/configure
- sysdeps/mach/hurd/configure
- sysdeps/s390/configure
- sysdeps/unix/sysv/linux/configure
and didn't change the following files:
- sysdeps/ieee754/ldbl-opt/configure
- sysdeps/unix/sysv/linux/powerpc/configure
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Consolidate all hooks structures into a single one. There are
no static dlopen ABI concerns because glibc 2.34 already comes
with substantial ABI-incompatible changes in this area. (Static
dlopen requires the exact same dynamic glibc version that was used
for static linking.)
The new approach uses a pointer to the hooks structure in
_rtld_global_ro and initializes it in __rtld_static_init. This avoids
a back-and-forth with various callback functions.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This commit removes the ELF constructor and internal variables from
dlfcn/dlfcn.c. The file now serves the same purpose as
nptl/libpthread-compat.c, so it is renamed to dlfcn/libdl-compat.c.
The use of libdl-shared-only-routines ensures that libdl.a is empty.
This commit adjusts the test suite not to use $(libdl). The libdl.so
symbolic link is no longer installed.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
In elf/Makefile, remove the $(libdl) dependency from testobj1.so
because the now-unused libdl DSO causes elf/tst-unused-deps to
fail.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
There is a minor functionality enhancement: dlerror now sets
errno if it was set as part of the exception. (This is the result
of using %m in asprintf, to avoid the strerror PLT call.) The
previous errno value upon function return was unpredictable.
Documenting this as a feature is premature; we need to make sure
that the error codes are meaningful when they are set by the dynamic
loader.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Some targets have a GLIBC_2.0 baseline for libdl, while using
GLIBC_2.2 for libc. This means that the generated libc.map file
does not have any version nodes for GLIBC_2.0 or GLIBC_2.1. However,
moving symbols from libdl into libc needs such version nodes.
(Future symbol moves from librt will need this as well.)
This kludge is only necessary for symbols predating GLIBC_2.2 because
the affected targets use GLIBC_2.2 as the baseline for libc. Given
the small number and fixed set of affected architectures, no generic
mechanism is implemented, and instead the map file fragment is
hard-coded in scripts/versions.mk.
The compat_symbol macro already emits the appropriate version strings,
so no adjustments are needed there.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Some symbols have explicit versioned_symbol or compat_symbol markers
in the sources, but no corresponding entry in the Versions files.
This presently works because the local: * directive is only applied
to the base version.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
__pthread_attr_copy can fail and does not initialize the attribute
structure in that case.
If __pthread_attr_copy is never called and there is no allocated
attribute, pthread_attr_destroy should not be called, otherwise
there is a null pointer dereference in rt/tst-mqueue6.
Fixes commit 42d3593505
("Use __pthread_attr_copy in mq_notify (bug 27896)").
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
The symbol has never been exported, so no compatibility symbol is
needed. Removing this file prevents ld from creating an exported
symbol in case GLIBC_2_0 expands to a symbol version which
does not have a local: *; directive in the symbol version map file.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This patch was based on the __memcmp_power8 and the recent
__strlen_power10.
Improvements from __memcmp_power8:
1. Don't need alignment code.
On POWER10 lxvp and lxvl do not generate alignment interrupts, so
they are safe for use on caching-inhibited memory. Notice that the
comparison in the main loop will wait for both VSRs to be ready.
Therefore aligning one of the input addresses does not improve
performance. In order to align both registers, a vperm would be
necessary, which adds too much overhead.
2. Uses new POWER10 instructions
This code uses lxvp to decrease contention on load by loading 32 bytes
per instruction.
The vextractbm instruction is used to keep the tail code that
calculates the return value small.
3. Performance improvement
This version has around 35% better performance on average. I saw no
performance regressions for any length or alignment.
Thanks Matheus for helping me out with some details.
Co-authored-by: Matheus Castanho <msc@linux.ibm.com>
Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
This patch optimizes the performance of memset for A64FX [1] which
implements ARMv8-A SVE and has L1 64KB cache per core and L2 8MB cache
per NUMA node.
The performance optimization makes use of the Scalable Vector Registers
with several techniques such as loop unrolling, memory access
alignment, cache zero fill and prefetch.
SVE assembler code for memset is implemented as Vector Length Agnostic
code, so theoretically it can run on any SoC which supports the ARMv8-A
SVE standard.
We confirmed that all test cases pass by running 'make check' and
'make xcheck' not only on A64FX but also on ThunderX2.
We also confirmed, by running 'make bench', that the 512-bit SVE vector
register performance is roughly 4 times better than the 128-bit Advanced
SIMD registers and 8 times better than the 64-bit scalar registers.
[1] https://github.com/fujitsu/A64FX
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Reviewed-by: Szabolcs Nagy <Szabolcs.Nagy@arm.com>
This patch optimizes the performance of memcpy/memmove for A64FX [1]
which implements ARMv8-A SVE and has L1 64KB cache per core and L2 8MB
cache per NUMA node.
The performance optimization makes use of the Scalable Vector Registers
with several techniques such as loop unrolling, memory access
alignment, cache zero fill, and software pipelining.
SVE assembler code for memcpy/memmove is implemented as Vector Length
Agnostic code, so theoretically it can run on any SoC which supports
the ARMv8-A SVE standard.
We confirmed that all test cases pass by running 'make check' and
'make xcheck' not only on A64FX but also on ThunderX2.
We also confirmed, by running 'make bench', that the 512-bit SVE vector
register performance is roughly 4 times better than the 128-bit Advanced
SIMD registers and 8 times better than the 64-bit scalar registers.
[1] https://github.com/fujitsu/A64FX
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Reviewed-by: Szabolcs Nagy <Szabolcs.Nagy@arm.com>
This patch adds a test helper script to change the vector length for a
child process. The script can be used as a test wrapper for 'make check'.
Usage examples:
~/build$ make check subdirs=string \
test-wrapper='~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16'
~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16 \
make test t=string/test-memcpy
~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 32 \
./debugglibc.sh string/test-memmove
~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 64 \
./testrun.sh string/test-memset
This patch defines BTI_C and BTI_J macros conditionally for
performance.
If HAVE_AARCH64_BTI is true, BTI_C and BTI_J are defined as HINT
instruction for ARMv8.5 BTI (Branch Target Identification).
If HAVE_AARCH64_BTI is false, both BTI_C and BTI_J are defined as
NOP.
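A hedged sketch of the conditional definitions (the exact spelling in
the aarch64 sysdep header may differ):

#if HAVE_AARCH64_BTI
# define BTI_C	hint	34	/* bti c */
# define BTI_J	hint	36	/* bti j */
#else
# define BTI_C	nop
# define BTI_J	nop
#endif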
Since the variable expands to nothing under Linux, it is no longer
necessary to clutter the makefiles with it.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
When using scv for templated ASM syscalls, the current code interprets any
negative return value as an error, but the only valid error codes are in
the range -4095..-1 according to the ABI.
This commit also fixes the 'signal.gen.test' strace test, where the issue
was first identified.
Reviewed-by: Matheus Castanho <msc@linux.ibm.com>
1. Replace
if ((((uintptr_t) &_d) & (__alignof (double) - 1)) != 0)
which may be optimized out by the compiler, with
int
__attribute__ ((weak, noclone, noinline))
is_aligned (void *p, int align)
{
return (((uintptr_t) p) & (align - 1)) != 0;
}
2. Add TEST_STACK_ALIGN_INIT to TEST_STACK_ALIGN.
3. Add a common TEST_STACK_ALIGN_INIT to check 16-byte stack alignment
for both i386 and x86-64.
4. Update powerpc to use TEST_STACK_ALIGN_INIT.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Only the placeholder compatibility symbols are left now.
The __errno_location symbol was removed (moved) using
scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbols were moved using scripts/move-symbol-to-libc.py.
The libpthread placeholder symbols need some changes because some
symbol versions have gone away completely. But
__errno_location@@GLIBC_2.0 still exists, so the GLIBC_2.0 version
is still there.
The internal __pthread_create symbol now points to the correct
function, so the sysdeps/nptl/thrd_create.c override is no longer
necessary.
There was an issue with how the hidden alias of pthread_getattr_default_np
was defined, so this commit cleans up that aspect and removes the
GLIBC_PRIVATE export altogether.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Use the __nptl_tls_static_size_for_stack inline function instead,
and the GLRO (dl_tls_static_align) value directly.
The computation of GLRO (dl_tls_static_align) in
_dl_determine_tlsoffset ensures that the alignment is at least
TLS_TCB_ALIGN, which is at least STACK_ALIGN (see allocate_stack).
Therefore, the additional rounding-up step is removed.
Also move the initialization of the default stack size from
__pthread_initialize_minimal_internal to __pthread_early_init.
This introduces an extra system call during single-threaded startup,
but this simplifies the initialization sequence. No locking is
needed around the writes to __default_pthread_attr because the
process is single-threaded at this point.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
No bug. This commit makes a few small improvements to
memset-vec-unaligned-erms.S. The changes are 1) only aligning to 64
instead of 128. Either alignment will perform equally well in a loop
and 128 just increases the odds of having to do an extra iteration
which can be significant overhead for small values. 2) Align some
targets and the loop. 3) Remove an ALU from the alignment process. 4)
Reorder the last 4x VEC so that they are stored after the loop. 5)
Move the condition for leq 8x VEC to before the alignment
process. test-memset and test-wmemset are both passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
When compiled with GCC 11.1 and -march=z14 -O3 build flags, running
ld.so (or any dynamically linked program) prints:
Fatal glibc error: CPU lacks VXE support (z14 or later required)
Co-Authored-By: Stefan Liebler <stli@linux.ibm.com>
Reviewed-by: Stefan Liebler <stli@linux.ibm.com>
When built with GCC 11.1 and -mcpu=power9, ld.so prints this error
message when running on POWER8:
Fatal glibc error: CPU lacks ISA 3.00 support (POWER9 or later required)
No bug. This commit optimizes memcmp-evex.S. The optimizations include
adding a new vec compare path for small sizes, reorganizing the entry
control flow, removing some unnecessary ALU instructions from the main
loop, and most importantly replacing the heavy use of vpcmp + kand
logic with vpxor + vptern. test-memcmp and test-wmemcmp are both
passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
No bug. This commit optimizes memcmp-avx2.S. The optimizations include
adding a new vec compare path for small sizes, reorganizing the entry
control flow, and removing some unnecessary ALU instructions from the
main loop. test-memcmp and test-wmemcmp are both passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
The tst-timespec_getres test (e5ac7bd679) triggers an issue on 32-bit
architectures on Linux kernels older than 5.1, where the fallback syscall
is used.
Checked on powerpc-linux-gnu.
ISO C2X adds a timespec_getres function alongside the C11
timespec_get, with functionality similar to that of POSIX clock_getres
(including allowing a NULL pointer to be passed to the function).
Implement this function for glibc, similarly to the implementation of
timespec_get.
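A minimal usage sketch, mirroring how timespec_get is used:

#include <stdio.h>
#include <time.h>

int
main (void)
{
  struct timespec res;
  /* Like clock_getres for the TIME_UTC base: returns the base on
     success and 0 on failure; a null pointer argument is allowed.  */
  if (timespec_getres (&res, TIME_UTC) == TIME_UTC)
    printf ("resolution: %lld s %ld ns\n",
            (long long) res.tv_sec, res.tv_nsec);
  return 0;
}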
This includes a basic test like that of timespec_get, but no
documentation in the manual, given that TIME_UTC and timespec_get
aren't documented in the manual at all. The handling of 64-bit time
follows that in timespec_get; people maintaining patch series for
64-bit time will need to update them accordingly (to export
__timespec_getres64, redirect calls in time.h and run the test for
_TIME_BITS=64).
Tested for x86_64 and x86, and (previous version; only testcase
differs) with build-many-glibcs.py.
Reuse code for optimized strlen to implement a faster version of rawmemchr.
This takes advantage of the same benefits provided by the strlen implementation,
but needs some extra steps. __strlen_power10 code should be unchanged after this
change.
rawmemchr returns a pointer to the char found, while strlen returns only the
length, so we have to take that into account when preparing the return value.
To quickly check 64B, the loop in __strlen_power10 merges the whole block
into 16B by using unsigned minimum vector operations (vminub) and checks if
there are any \0 in the resulting vector. The same code is used by rawmemchr
if the char c is 0. However, this approach does not work when c != 0. We
first need to subtract c from each byte, so that the value we are looking
for is converted to a 0; then taking the minimum and checking for nulls
works again.
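As a scalar illustration of the trick (the real code operates on
16-byte VSX vectors with vsububm/vminub; the helper below is only for
exposition):

#include <stddef.h>

/* Return nonzero if BLOCK of LEN bytes contains the byte C.  After
   subtracting C from every byte, the byte being searched for becomes
   0, so the same zero-byte detection used for the c == 0 case can be
   reused unchanged.  */
static int
block_has_byte (const unsigned char *block, size_t len, unsigned char c)
{
  for (size_t i = 0; i < len; i++)
    if ((unsigned char) (block[i] - c) == 0)
      return 1;
  return 0;
}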
The new code branches after it has compared ~256 bytes and chooses which of the
two strategies above will be used in the main loop, based on the char c. This
extra branch adds some overhead (~5%) for length ~256, but is quickly amortized
by the faster loop for larger sizes.
Compared to __rawmemchr_power9, this version is ~20% faster for length < 256.
Because of the optimized main loop, the improvement becomes ~35% for c != 0
and ~50% for c = 0 for strings longer than 256.
Reviewed-by: Lucas A. M. Magalhaes <lamm@linux.ibm.com>
Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
The GLIBC_2.11 version is now empty, so add a placeholder symbol.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
The GLIBC_2.3.4 version is now empty, so add a placeholder symbol.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
Add __libpthread_version_placeholder@@GLIBC_2.12 for the targets
that need it.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
__libpthread_version_placeholder@@GLIBC_2.2 is needed by this change;
the Versions entry for GLIBC_2.2 in libpthread had leftover symbols
due to an error in a previous conflict resolution. The condition
for the placeholder symbol is complicated because some architectures
have earlier symbols at the GLIBC_2.2 symbol versions, so the
placeholder is not required there (yet).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
A new placeholder symbol __libpthread_version_placeholder@GLIBC_2.18
is needed to keep the GLIBC_2.18 symbol version in libpthread.
The __pthread_getattr_default_np@@GLIBC_PRIVATE export is used
from pthread_create.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This helps to clarify that the caching of these fields in libpthread
(in __static_tls_size, __static_tls_align_m1) is unnecessary.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
After static dlopen, a copy of ld.so is loaded into the inner
namespace, but that copy is not initialized at all. Some
architectures run into serious problems as result, which is why the
_dl_var_init mechanism was invented. With libpthread moving into
libc and parts into ld.so, more architectures impacted, so it makes
sense to switch to a generic mechanism which performs the partial
initialization.
As a result, getauxval now works after static dlopen (bug 20802).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The initialization of the report_events TCB field is now performed
in __tls_init_tp instead of __pthread_initialize_minimal_internal
(in libpthread).
The events interface is difficult to test because GDB stopped using it
in 2015. The td_thr_get_info change to ignore lookup issues is enough
to support GDB with this change.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The error paths of __check_native would leave the socket FD open on
return, resulting in an FD leak. Rework function exit paths so that
the fd is always closed on return.
The symbols were moved using scripts/move-symbol-to-libc.py,
in one commit due to their dependency on the internal
__concurrency_level variable.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbols were moved using scripts/move-symbol-to-libc.py.
Also clean up some unwinder linking leftover in the same spot
in nptl/pthreadP.h.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
It is necessary to arrange for a
__libpthread_version_placeholder@GLIBC_2.6 on some of the powerpc
targets.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This is a follow up patch to the fix for bug 19329. This adds relaxed
MO atomics to accesses that were previously data races but are now
race conditions, and where relaxed MO is sufficient.
The race conditions all follow the pattern that the write is behind the
dlopen lock, but a read can happen concurrently (e.g. during tls access)
without holding the lock. For slotinfo entries the read value only
matters if it reads from a synchronized write in dlopen or dlclose,
otherwise the related dtv entry is not valid to access so it is fine
to leave it in an inconsistent state. The same applies for
GL(dl_tls_max_dtv_idx) and GL(dl_tls_generation), but there the
algorithm relies on the fact that the read of the last synchronized
write is an increasing value.
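An illustrative C11 sketch of the pattern (glibc uses its internal
atomic_load_relaxed/atomic_store_relaxed macros; the structure and
function names below are made up for the example):

#include <stdatomic.h>
#include <stddef.h>

struct slotinfo_entry { _Atomic size_t gen; void *map; };

/* Writer: runs with the dlopen lock held.  */
static void
slot_set_generation (struct slotinfo_entry *e, size_t newgen)
{
  atomic_store_explicit (&e->gen, newgen, memory_order_relaxed);
}

/* Reader: may run concurrently during TLS access without the lock;
   the value is only acted upon if it comes from a synchronized
   dlopen/dlclose.  */
static size_t
slot_get_generation (struct slotinfo_entry *e)
{
  return atomic_load_explicit (&e->gen, memory_order_relaxed);
}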
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbols pthread_clockjoin_np, pthread_join, pthread_timedjoin_np,
pthread_tryjoin_np, thrd_join were moved using
scripts/move-symbol-to-libc.py.
Moving the symbols at the same time avoids the need for temporary
exports.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
The export of __default_pthread_attr_freeres is temporary. There
is a minor regression in freeres coverage because in the dynamic case,
__default_pthread_attr_freeres is no longer called if libpthread is
not linked in.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The nptl version is used as the default since, with the symbol now always
present, the single-thread optimization is tricky.
Hurd is not changed; it uses its own lock scheme (which calls
_cthreads_funlockfile).
Checked on x86_64-linux-gnu.
The nptl version is used as the default since, with the symbol now always
present, the single-thread optimization is tricky.
Hurd is not changed; it uses its own lock scheme (which calls
_cthreads_ftrylockfile).
Checked on x86_64-linux-gnu.
The nptl version is used as the default since, with the symbol now always
present, the single-thread optimization is tricky.
Hurd is not changed; it uses its own lock scheme (which calls
_cthreads_flockfile).
Checked on x86_64-linux-gnu.
Linux 5.12 adds the constants PTRACE_SYSEMU and
PTRACE_SYSEMU_SINGLESTEP for s390. Add these to glibc.
Tested with build-many-glibcs.py for s390-linux-gnu and
s390x-linux-gnu.
These workload traces cover the whole "long double" range.
This patch was prepared with the help of Adhemerval Zanella.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
All the stack lists are now in _rtld_global, so it is possible
to change stack permissions directly from there, instead of
calling into libpthread to do the change.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Permissions of the cached stacks may have to be updated if an object
is loaded that requires executable stacks, so the dynamic loader
needs to know about these cached stacks.
The move of in_flight_stack and stack_cache_actsize is a requirement for
merging __reclaim_stacks into the fork implementation in libc.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This is an early variant of __tls_init_tp, primarily for initializing
thread-related elements of _rtld_global/GL.
Some existing initialization code not needed for NPTL is moved into
the generic version of this function.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Always use __libc_multiple_threads if beneficial, and do not assume
that the dynamic loader is single-threaded. This assumption could
become incorrect by accident once more code is moved from libpthread
into it. The previous commit introducing the
NO_SYSCALL_CANCEL_CHECKING macro enables this change.
Do not hint to the compiler that multi-threaded programs are unlikely
(which is not quite true anymore).
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Historically, SINGLE_THREAD_P is defined to 1 in the dynamic loader.
This has the side effect of disabling cancellation points. In order
to enable future use of SINGLE_THREAD_P for single-thread
optimizations in the dynamic loader (which becomes important once
more code is moved from libpthread), introduce a new
NO_SYSCALL_CANCEL_CHECKING macro which is always 1 for IS_IN (rtld),
independently of the actual SINGLE_THREAD_P value.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This allows the elimination of the __libc_multiple_threads_ptr
variable in libpthread and its initialization procedure.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
If libpthread is included in libc, it is not necessary to delay
initialization of the lock/unlock function pointers until libpthread
is loaded. This eliminates two unprotected function pointers
from _rtld_global and removes some initialization code from
libpthread.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
No bug.
This commit adds a new implementation for EVEX memchr that is not safe
for RTM because it uses vzeroupper. The benefit is that by using
ymm0-ymm15 it can use vpcmpeq and vpternlogd in the 4x loop which is
faster than the RTM safe version which cannot use vpcmpeq because
there is no EVEX encoding for the instruction. All parts of the
implementation aside from the 4x loop are the same for the two
versions and the optimization is only relevant for large sizes.
Tigerlake:
size , algn , Pos , Cur T , New T , Win , Dif
512 , 6 , 192 , 9.2 , 9.04 , no-RTM , 0.16
512 , 7 , 224 , 9.19 , 8.98 , no-RTM , 0.21
2048 , 0 , 256 , 10.74 , 10.54 , no-RTM , 0.2
2048 , 0 , 512 , 14.81 , 14.87 , RTM , 0.06
2048 , 0 , 1024 , 22.97 , 22.57 , no-RTM , 0.4
2048 , 0 , 2048 , 37.49 , 34.51 , no-RTM , 2.98 <--
Icelake:
size , algn , Pos , Cur T , New T , Win , Dif
512 , 6 , 192 , 7.6 , 7.3 , no-RTM , 0.3
512 , 7 , 224 , 7.63 , 7.27 , no-RTM , 0.36
2048 , 0 , 256 , 8.48 , 8.38 , no-RTM , 0.1
2048 , 0 , 512 , 11.57 , 11.42 , no-RTM , 0.15
2048 , 0 , 1024 , 17.92 , 17.38 , no-RTM , 0.54
2048 , 0 , 2048 , 30.37 , 27.34 , no-RTM , 3.03 <--
test-memchr, test-wmemchr, and test-rawmemchr are all passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
An unknown vector operation occurred in commit 2a76821c30. Fixed it
by using "ymm{k1}{z}" instead of "ymm {k1} {z}".
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
The hwcap2 check for the aforementioned functions should check for
both PPC_FEATURE2_ARCH_3_1 and PPC_FEATURE2_HAS_ISEL but was
mistakenly checking for any one of them, enabling the ISA 3.1 version
of the functions on incompatible processors, like POWER8.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
Big win in binary size and avoids duplicating the logic in multiple
places.
On x86_64, dropped from 1883206 to 1881790, a 1416 byte decrease.
Also changed logic to track if ttyname_buf has been allocated by
checking if it's NULL instead of tracking buflen as an additional
variable.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Simplifies the logic and makes intent clearer, while at the same time
decreasing binary size.
On x86_64, dropped from 1883270 to 1883206, a 64 byte decrease.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
There is not much gain in falling back to cpuinfo if sysfs is not present;
usually, in restricted environments, neither will be present. It also
simplifies the code and makes all architectures use sched_getaffinity
as the sysfs fallback.
Checked on sparc64-linux-gnu.
Both the sysfs and procfs parsing (through GET_NPROCS_PARSER) are
removed in favor of the syscall. The initial scratch buffer should
fit most common usage (1024 bytes, which maps to 8192 CPUs).
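A hedged sketch of the idea (the in-tree code uses the raw syscall with
a growing scratch buffer; the helper name is illustrative):

#define _GNU_SOURCE
#include <sched.h>

static int
count_online_cpus (void)
{
  cpu_set_t set;
  if (sched_getaffinity (0, sizeof set, &set) != 0)
    return 1;   /* Conservative fallback.  */
  return CPU_COUNT (&set);
}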
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
And replace the generic algorithm with Brian Kernighan's. GCC
optimizes it with popcnt if the architecture supports it, so there
is no need to add an extra POPCNT define to enable it.
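For reference, Kernighan's loop (the compiler can turn this into a
popcnt instruction where available):

static int
countbits (unsigned long mask)
{
  int count = 0;
  while (mask != 0)
    {
      mask &= mask - 1;   /* Clear the lowest set bit.  */
      count++;
    }
  return count;
}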
This is really a micro-optimization that only adds complexity: recent
ABIs already support popcnt (x86-64-v2 or powerpc64le), and removing it
simplifies the code for internal usage, since i686 does not allow an
internal iFUNC call.
Checked on x86_64-linux-gnu, aarch64-linux-gnu, and
powerpc64le-linux-gnu.
This change continues the improvements to compile-time out of bounds
checking by decorating more APIs with either attribute access, or by
explicitly providing the array bound in APIs such as tmpnam() that
expect arrays of some minimum size as arguments. (The latter feature
is new in GCC 11.)
The only effects of the attribute and/or the array bound is to check
and diagnose calls to the functions that fail to provide a sufficient
number of elements, and the definitions of the functions that access
elements outside the specified bounds. (There is no interplay with
_FORTIFY_SOURCE here yet.)
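A hedged sketch of the two styles of decoration (declarations
simplified; the installed headers use internal macros rather than the
raw attribute):

/* Array bound: GCC 11 can diagnose callers that pass a smaller array.  */
extern char *tmpnam (char buf[20 /* L_tmpnam */]);

/* attribute access: the function writes at most N bytes into BUF and
   never reads from it.  */
extern char *fgets (char *buf, int n, void *stream)
  __attribute__ ((__access__ (__write_only__, 1, 2)));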
Tested with GCC 7 through 11 on x86_64-linux.
The symbol was moved using scripts/move-symbol-to-libc.py.
A small adjustment to the sem_unlink implementation is necessary to avoid
a check-localplt failure.
A placeholder symbol to keep the GLIBC_2.1.1 version alive in
libpthread is added with this commit.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbols were moved using move-symbol-to-libc.py.
Both functions are moved at the same time because they depend
on internal functions in sysdeps/pthread/sem_routines.c, which
are moved in this commit as well. Additional hidden prototypes
are required to avoid check-localplt failures.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
A new placeholder version is added at version GLIBC_2.30, to
preserve that version in libpthread.so.0.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Previously, the source file nptl/cancellation.c was compiled multiple
times, for libc, libpthread, librt. This commit switches to a single
implementation, with new __pthread_enable_asynccancel@@GLIBC_PRIVATE,
__pthread_disable_asynccancel@@GLIBC_PRIVATE exports.
The almost-unused CANCEL_ASYNC and CANCEL_RESET macros are replaced
by the LIBC_CANCEL_ASYNC and LIBC_CANCEL_RESET macros. They call the
__pthread_* functions unconditionally now. The macros are still
needed because shared code uses them; Hurd has different definitions.
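A rough sketch of how the macro pair is used around a blocking
operation inside glibc (do_blocking_call is a placeholder):

int oldtype = LIBC_CANCEL_ASYNC ();   /* __pthread_enable_asynccancel  */
int result = do_blocking_call (fd, buf, len);
LIBC_CANCEL_RESET (oldtype);          /* __pthread_disable_asynccancel  */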
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
A temporary __pthread_testcancel@@GLIBC_PRIVATE export is created
because it is needed by the semaphore implementation.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The stack list is available in ld.so since commit
1daccf403b ("nptl: Move stack list
variables into _rtld_global"), so it's possible to walk the stack
list directly in ld.so and perform the initialization there.
This eliminates an unprotected function pointer from _rtld_global
and reduces the libpthread initialization code.
No bug. This commit optimizes memchr-evex.S. The optimizations include
replacing some branches with cmovcc, avoiding some branches entirely
in the less_4x_vec case, making the page cross logic less strict,
saving some ALU instructions in the alignment process, and most importantly
increasing ILP in the 4x loop. test-memchr, test-rawmemchr, and
test-wmemchr are all passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
No bug. This commit optimizes memchr-avx2.S. The optimizations include
replacing some branches with cmovcc, avoiding some branches entirely
in the less_4x_vec case, making the page cross logic less strict,
and saving a few instructions in the loop return. test-memchr,
test-rawmemchr, and test-wmemchr are all passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
It operates similarly to execve and is already used to implement
fexecve without requiring /proc to be mounted. However, unlike
fexecve, if the syscall is not supported by the kernel, an error
is returned instead of trying a fallback.
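A minimal usage sketch (the path and helper name are illustrative; on
kernels without execveat the call fails with ENOSYS):

#define _GNU_SOURCE
#include <unistd.h>

/* Execute "tool" relative to DIRFD; returns only on failure.  */
static int
run_from_dirfd (int dirfd, char *const argv[], char *const envp[])
{
  return execveat (dirfd, "tool", argv, envp, 0);
}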
Checked on x86_64-linux-gnu and powerpc64le-linux-gnu.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
And deprecate it in <pthread.h>, redirecting it to sched_yield
for the time being.
The symbol was moved using scripts/move-symbol-to-libc.py.
No GLIBC_2.34 symbol version is added because of the compatibility
symbol status.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
And __pthread_rwlock_trywrlock as a compatibility symbol.
Remove the unused __libc_rwlock_trywrlock macro.
The symbols were moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
And __pthread_rwlock_tryrdlock as a compatibility symbol.
Remove the unused __libc_rwlock_tryrdlock macro.
The symbols were moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
And __pthread_rwlock_init as a compatibility symbol.
__libc_rwlock_init is changed to call __pthread_rwlock_init directly.
The symbols were moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
And __pthread_rwlock_destroy as a compatibility symbol.
rwlocks do not need finalization, so change __libc_rwlock_fini to do
nothing.
The symbols were moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
__pthread_setspecific@@GLIBC_2.34 is no longer needed after the move,
so it is removed with this commit, too.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
__pthread_getspecific@@GLIBC_2.34 is no longer needed after the move,
so it is removed with this commit, too.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
__pthread_key_delete@@GLIBC_PRIVATE is no longer needed after that,
so it is removed as well.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
__pthread_key_create@@GLIBC_2.34 is no longer needed by glibc
itself with this change, but __pthread_key_create is used by
libstdc++, so it still has to be exported as a public symbol.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
The __pthread_exit@@GLIBC_PRIVATE symbol is no longer needed
after this change, so remove it.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
__pthread_mutex_unlock@GLIBC_2.34 is not removed in this commit
because it is still used from nptl/nptl-init.c.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
The __pthread_mutex_trylock@@GLIBC_2.34 symbol version is no longer
needed because the call is now internal to libc, so remove it with
this commit.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
The __pthread_mutex_timedlock@@GLIBC_PRIVATE export is no longer
needed, so it is removed with this commit.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
__pthread_mutex_lock@GLIBC_2.34 is not removed in this commit
because it is still used from nptl/nptl-init.c.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
The calls to __pthread_mutex_init, __pthread_mutexattr_init,
__pthread_mutexattr_settype are now private and no longer need
to be exported. This allows the removal of the newly added
GLIBC_2.34 symbol versions for those functions.
Also clean up some weak declarations in <libc-lockP.h> for
these functions. They are not needed and potentially incorrect
for static linking of mtx_init.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
The __pthread_mutex_destroy@@GLIBC_2.34 symbol is no longer
needed because this commit makes __pthread_mutex_destroy@GLIBC_2.0
a compatibility symbol, so remove the new symbol version.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
The __pthread_cond_wait@@GLIBC_PRIVATE symbol is no longer
needed, so remove that as well.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
The __pthread_cond_timedwait@@GLIBC_PRIVATE symbol is no longer
needed, so remove that as well.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
The __pthread_cond_signal@@GLIBC_PRIVATE symbol is no longer
needed, so remove that as well.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
The __pthread_cond_init@@GLIBC_PRIVATE symbol is no longer
needed, so remove that as well.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
The __pthread_cond_destroy@@GLIBC_PRIVATE symbol is no longer
needed, so remove that as well.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
The __pthread_cond_broadcast@@GLIBC_PRIVATE symbol is no longer
needed, so remove that as well.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
This change also turns __pthread_once into a compatibility symbol
because after the call_once move, an internal call to __pthread_once
can be used. This needs an adjustment to __libc_once: outside libc (e.g.,
in nscd), it has to call pthread_once. With __pthread_once as a
compatibility symbol, it is no longer necessary to add a new GLIBC_2.34
version after the move from libpthread, and this commit removes
the new __pthread_once@@GLIBC_2.34 version.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
These make variables can be used to add routines to different
libraries for the Hurd and Linux builds.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
This implementation is based on __memset_power8 and integrates a lot
of suggestions from Anton Blanchard.
The biggest difference is that it makes extensive use of stxvl in the
alignment and tail code to avoid branches and small stores. It has
three main execution paths:
a) "Short lengths" for lengths up to 64 bytes, avoiding as many
branches as possible.
b) "General case" for larger lengths, it has an alignment section
using stxvl to avoid branches, a 128 bytes loop and then a tail
code, again using stxvl with few branches.
c) "Zeroing cache blocks" for lengths from 256 bytes upwards when the set
value is zero. It is mostly the __memset_power8 code, but the
alignment phase was simplified because, at this point, the address is
already 16-byte aligned, and it was also changed to use vector stores.
The tail code was also simplified to reuse the general case tail.
All unaligned stores use stxvl instructions that do not generate
alignment interrupts on POWER10, making it safe to use on
caching-inhibited memory.
On average, this implementation provides something around 30%
improvement when compared to __memset_power8.
Reviewed-by: Matheus Castanho <msc@linux.ibm.com>
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
This implementation is based on __memcpy_power8_cached and integrates
suggestions from Anton Blanchard.
It benefits from loads and stores with length for short lengths and for
tail code, simplifying the code.
All unaligned memory accesses use instructions that do not generate
alignment interrupts on POWER10, making it safe to use on
caching-inhibited memory.
The main loop has also been modified in order to increase instruction
throughput by reducing the dependency on updates from previous iterations.
On average, this implementation provides around 30% improvement when
compared to __memcpy_power7 and 10% improvement in comparison to
__memcpy_power8_cached.
This patch was initially based on __memmove_power7 with some ideas
from the strncpy implementation for POWER9.
Improvements from __memmove_power7:
1. Use lxvl/stxvl for alignment code.
The code for Power 7 uses branches when the input is not naturally
aligned to the width of a vector. The new implementation uses
lxvl/stxvl instead which reduces pressure on GPRs. It also allows
the removal of branch instructions, implicitly removing branch stalls
and mispredictions.
2. The lxv/stxv and lxvl/stxvl pairs are safe to use on cache-inhibited
memory.
On POWER10 vector loads and stores are safe to use on CI memory for
addresses not aligned to 16B. This code takes advantage of that to
do unaligned loads.
The unaligned loads don't have a significant performance impact by
themselves. However, doing so decreases register pressure on GPRs
and interdependence stalls on load/store pairs. This also improves
readability, as there are now fewer code paths for different alignments.
Finally, this reduces the overall code size.
3. Improved performance.
This version runs on average about 30% better than memmove_power7
for lengths larger than 8KB. For input lengths shorter than 8KB
the improvement is smaller; it has on average about 17% better
performance.
This version has a degradation of about 50% for input lengths
in the 0 to 31 bytes range when dest is unaligned.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
This patch updates the kernel version in the test tst-mman-consts.py
to 5.12. (There are no new MAP_* constants covered by this test in
5.12 that need any other header changes.)
Tested with build-many-glibcs.py.
Linux 5.12 has one new syscall, mount_setattr. Update
syscall-names.list and regenerate the arch-syscall.h headers with
build-many-glibcs.py update-syscalls.
Tested with build-many-glibcs.py.
On x86_64, when configuring glibc with CFLAGS="-O2 -g -march=native",
some tests fail. After this patch, "make check" succeeds.
Tested on Intel Core i5-4590 with gcc 10.2.1.
GCC 11 warns when a pointer to an uninitialized object is passed
to a function that takes a const-qualified argument. This is done
on the assumption that most such functions read from the object.
For the rare case of a function that doesn't, GCC 11 extends
attribute access to add a new mode called none.
POSIX pthread_setspecific() is one such rare function that takes
a const void* argument but that doesn't read from the object it
points to. To suppress the -Wmaybe-uninitialized warning issued by GCC
11 when the address of an uninitialized object is passed to it
(e.g., the result of malloc()), this change #defines
__attr_access_none in cdefs.h and uses the macro on the function
in sysdeps/htl/pthread.h and sysdeps/nptl/pthread.h.
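A hedged illustration of the new mode (the function name is a
placeholder; glibc wraps the attribute in the __attr_access_none
macro):

#include <stdlib.h>

/* VALUE is only stored, never dereferenced, so GCC is told it is not
   read.  */
extern int my_setspecific (unsigned key, const void *value)
  __attribute__ ((__access__ (__none__, 2)));

void
store_uninitialized (unsigned key)
{
  void *p = malloc (16);
  /* Without the attribute, GCC 11 may warn that *p is used
     uninitialized; with it, no warning is issued.  */
  my_setspecific (key, p);
}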
No bug. This commit optimizes strchr-evex.S. The optimizations are
mostly small things such as saving an ALU instruction in the alignment process and
saving a few instructions in the loop return. The one significant
change is saving 2 instructions in the 4x loop. test-strchr,
test-strchrnul, test-wcschr, and test-wcschrnul are all passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
No bug. This commit optimizes strchr-avx2.S. The optimizations are all
small things such as saving an ALU instruction in the alignment process, saving a
few instructions in the loop return, saving some bytes in the main
loop, and increasing the ILP in the return cases. test-strchr,
test-strchrnul, test-wcschr, and test-wcschrnul are all passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
For some architectures, the two functions are aliased, so these
symbols need to be moved at the same time.
The symbols were moved using scripts/move-symbol-to-libc.py.
And pthread_mutexattr_setkind_np as a compatibility symbol.
__pthread_mutexattr_settype is used in mtx_init from libpthread,
so this commit adds a GLIBC_2.34 symbol version for it.
The symbols were moved using scripts/move-symbol-to-libc.py.