This patch modifies the current POWER9 implementation of strcpy and
stpcpy to optimize it for POWER9/10.
Since no new POWER10 instructions are used, the original POWER9 strcpy is
modified instead of creating a new implementation for POWER10. This
implementation is based on both the original POWER9 implementation of
strcpy and the preamble of the new POWER10 implementation of strlen.
The changes also affect stpcpy, which uses the same implementation with
some additional code before returning.
On POWER9, averaging improvements across the benchmark
inputs (length/source alignment/destination alignment), for an
experiment that ran the benchmark five times, bench-strcpy showed an
improvement of 5.23%, and bench-stpcpy showed an improvement of 6.59%.
On POWER10, bench-strcpy showed 13.16%, and bench-stpcpy showed 13.59%.
The changes are:
1. Removed the null string optimization.
Although this results in a few extra cycles for the null string, in
combination with the second change, this resulted in improvements
for other cases.
2. Adapted the preamble from strlen for POWER10.
This is the part of the function that handles up to the first 16 bytes
of the string.
3. Increased number of unrolled iterations in the main loop to 6.
Reviewed-by: Matheus Castanho <msc@linux.ibm.com>
Tested-by: Matheus Castanho <msc@linux.ibm.com>
From
https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
* Intel TSX will be disabled by default.
* The processor will force abort all Restricted Transactional Memory (RTM)
transactions by default.
* A new CPUID bit CPUID.07H.0H.EDX[11](RTM_ALWAYS_ABORT) will be enumerated,
which is set to indicate to updated software that the loaded microcode is
forcing RTM abort.
* On processors that enumerate support for RTM, the CPUID enumeration bits
for Intel TSX (CPUID.07H.0H.EBX[11] and CPUID.07H.0H.EBX[4]) continue to
be set by default after microcode update.
* Workloads that benefited from Intel TSX might experience a change
in performance.
* System software may use a new bit in Model-Specific Register (MSR) 0x10F
TSX_FORCE_ABORT[TSX_CPUID_CLEAR] functionality to clear the Hardware Lock
Elision (HLE) and RTM bits to indicate to software that Intel TSX is
disabled.
1. Add RTM_ALWAYS_ABORT to CPUID features.
2. Set RTM usable only if RTM_ALWAYS_ABORT isn't set. This skips the
string/tst-memchr-rtm etc. testcases on the affected processors, which
always fail after a microcode update.
3. Check RTM feature, instead of usability, against /proc/cpuinfo.
This fixes BZ #28033.
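As a rough illustration only (not part of the change), updated software
could probe the new bit with GCC's <cpuid.h> helpers; the macro name
below is hypothetical:
#include <cpuid.h>
#include <stdio.h>

/* Hypothetical helper: bit 11 of CPUID.(EAX=07H,ECX=0H):EDX signals
   RTM_ALWAYS_ABORT, as described in the Intel article quoted above.  */
#define RTM_ALWAYS_ABORT_BIT (1u << 11)

int
main (void)
{
  unsigned int eax, ebx, ecx, edx;
  if (__get_cpuid_count (7, 0, &eax, &ebx, &ecx, &edx)
      && (edx & RTM_ALWAYS_ABORT_BIT) != 0)
    puts ("loaded microcode force-aborts RTM transactions");
  return 0;
}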
Linux 5.13 has three new syscalls (landlock_create_ruleset,
landlock_add_rule, landlock_restrict_self). Update syscall-names.list
and regenerate the arch-syscall.h headers with build-many-glibcs.py
update-syscalls.
Tested with build-many-glibcs.py.
On s390 (31-bit), the pointer to the first byte after s always wraps
around when n >= 0x80000000, which can cause the search to stop before
the end of s.
Thus this patch uses NULL as the byte after s in this case, and the
srst instruction then stops searching with "not found" when wrapping
around from the top address to zero.
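A minimal C sketch of that guard, for illustration only (the actual fix
is in the s390 assembly implementation):
#include <stddef.h>
#include <stdint.h>

/* In 31-bit mode the end address may wrap around; clamping it to zero
   makes the srst instruction stop with "not found" once the search
   wraps from the top address to zero.  */
static uintptr_t
search_end (uintptr_t start, size_t n)
{
  uintptr_t end = start + n;   /* May wrap around the address space.  */
  if (end < start)
    end = 0;                   /* Use NULL as the byte after s.  */
  return end;
}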
This is observable with testcase string/test-memchr
starting with commit "String: Add overflow tests for strnlen, memchr,
and strncat [BZ #27974]"
https://sourceware.org/git/?p=glibc.git;a=commit;h=da5a6fba0febbfc90896ce1b2eb75c6d8a88a72d
Starting with recent commit 84f7ce8447
"posix: Add glob64 with 64-bit time_t support", elf/check-localplt
fails due to an extra PLT reference to __glob64_time64 within
__glob64_time64 itself.
This is observable with gcc 7.5 on x86_64 with -m32 or s390x with
-m31. E.g. when built with gcc 10, gcc generates a call to
__glob64_time64.localalias.
This patch adds a hidden version of __glob64_time64 in the
same way as for __globfree64_time64.
Add hp-timing.h using the cntvct_el0 counter. Return timing in nanoseconds
so it is fully compatible with generic hp-timing. Don't set HP_TIMING_INLINE
in the dynamic linker since it adds unnecessary overheads and some ancient
kernels may not handle emulating cntvct correctly. Currently cntvct_el0 is
only used for timing in the benchtests.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
Optimize strnlen by avoiding UMINV which is slow on most cores. On Neoverse N1
large strings are 1.8x faster than the current version, and bench-strnlen is
50% faster overall. This version is MTE compatible.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
malloc initialization depends on __get_nprocs, so using
scratch buffers in __get_nprocs may result in infinite recursion.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
The symbols forkpty, login, login_tty, logout, logwtmp, openpty
were moved using scripts/move-symbol-to-libc.py.
This is a single commit because most of the symbols are tied together
via forkpty, for example.
Several changes to use hidden prototypes are needed. This commit
also updates pseudoterminal terminology on modified lines.
For s390 (31-bit), this commit follows the existing style for the
compat symbol version creation.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
RFC 8335 defines the network utility PROBE, which builds off of the
capabilities of Ping to query more detailed interface information from
networking nodes.
The definitions included in this patchset have been accepted into the
linux net-next branch and will be included in Linux 5.13. This
patchset adds the same definitions to glibc for use in the
iputils package.
The relevant commits for the Linux definitions can be found here:
e542d29ca8750f4fc2a1
These changes have been tested by running the glibc tests on x86_64
Signed-off-by: Andreas Roeseler <andreas.a.roeseler@gmail.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
After recent commit
447954a206
"math: redirect roundeven function", building on
s390x fails with:
Error: symbol `__roundevenl' is already defined
Similar to the aarch64/riscv fix in commit 3213ed770c
"Update math: redirect roundeven function", this patch redirects the
target-specific functions for s390x.
Austin Group issue 62 [1] dropped the async-signal-safe requirement
for fork and provided an async-signal-safe _Fork replacement that
does not run the atfork handlers. It will be included in the next
POSIX standard.
This allows closing a long-standing issue about making fork AS-safe
(BZ#4737). As indicated in the bug, besides the internal lock for the
atfork handlers themselves, there is no guarantee that the handlers
will not introduce further AS-safety issues.
The idea is to synchronize fork with the required internal locks so
that children of multithreaded processes can use most standard
functions (even though POSIX states that only AS-safe functions should
be used). In signal handlers, _Fork should be used instead, and only
AS-safe functions should be called.
For testing, the new tst-_Fork only checks basic usage. I also added
a new tst-mallocfork3, which uses the same strategy as tst-mallocfork2
to check for deadlocks but uses threads instead of subprocesses (and it
does deadlock if _Fork is replaced with fork).
[1] https://austingroupbugs.net/view.php?id=62
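As a rough usage sketch (not part of this patch), a signal handler that
needs to spawn a helper would use _Fork instead of fork, since _Fork
skips the atfork handlers and is async-signal-safe; the handler and
helper path below are hypothetical:
#define _GNU_SOURCE
#include <signal.h>
#include <unistd.h>

/* Hypothetical handler: only async-signal-safe functions are used.  */
static void
handler (int sig)
{
  pid_t pid = _Fork ();   /* Does not run the atfork handlers.  */
  if (pid == 0)
    {
      execl ("/bin/true", "true", (char *) NULL);
      _exit (127);
    }
}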
The valgrind/helgrind test suite needs a way to make stack deallocation
more prompt, and this feature seems to be generally useful.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
librt.so is no longer installed for PTHREAD_IN_LIBC, and tests
are not linked against it. $(librt) is introduced globally for
shared tests that need to be linked for both PTHREAD_IN_LIBC
and !PTHREAD_IN_LIBC.
GLIBC_PRIVATE symbols that were needed during the transition are
removed again.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
The symbols were moved using scripts/move-symbol-to-libc.py.
The way the int/timer_t ABI transition is implemented is changed with this
commit: the implementation is now consolidated in one file with a
TIMER_T_WAS_INT_COMPAT check.
The shared librt is now empty, so this commit adds a placeholder
symbol at the base version, GLIBC_2.2, and potentially at the
GLIBC_2.3.3 version as well (the leftover from the int/timer_t ABI
transition).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbols were moved using scripts/move-symbol-to-libc.py.
The way the int/timer_t ABI transition is implemented is changed with this
commit: the implementation is now consolidated in one file with a
TIMER_T_WAS_INT_COMPAT check.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
The way the int/timer_t ABI transition is implemented is changed with this
commit: the implementation is now consolidated in one file with a
TIMER_T_WAS_INT_COMPAT check.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbols were moved using scripts/move-symbol-to-libc.py.
timer_create and timer_delete are tied together via the int/timer_t
compatibility code. The way the int/timer_t ABI transition is implemented
is changed with this commit: the implementation is now consolidated
in one file with a TIMER_T_WAS_INT_COMPAT check.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This is almost equivalent to __WORDSIZE == 64
&& OTHER_SHLIB_COMPAT (librt, GLIBC_2_1, GLIBC_2_3_3), except
that this expression is true for mips64/n64 targets as well,
even though those did not undergo the timer_t transition.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This patch uses the corresponding GCC builtins for roundevenf,
roundeven and roundevenl if the USE_FUNCTION_BUILTIN macros are defined
to one in math-use-builtins.h.
These builtin functions have been supported since GCC 10.
The code of the generic implementation is not changed.
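A minimal sketch of the selection pattern (the macro spelling follows the
USE_*_BUILTIN convention of math-use-builtins.h; the fallback shown is
only a stand-in for the unchanged generic code):
#include <math.h>

static double
roundeven_sketch (double x)
{
#if USE_ROUNDEVEN_BUILTIN
  /* GCC >= 10 provides the builtin directly.  */
  return __builtin_roundeven (x);
#else
  /* Stand-in for the generic implementation: nearbyint rounds to
     nearest with ties to even in the default rounding mode.  */
  return nearbyint (x);
#endif
}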
Signed-off-by: Shen-Ta Hsieh <ibmibmibm.tw@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
This patch redirects the roundeven functions in preparation for further changes.
Signed-off-by: Shen-Ta Hsieh <ibmibmibm.tw@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
This adds several temporary GLIBC_PRIVATE exports. The symbol names
are changed so that they all start with __timer_.
It is now possible to invoke the fork handler directly, so
pthread_atfork is no longer necessary. The associated error cannot
happen anymore, and cancellation handling can be removed from
the helper thread routine.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
A placeholder symbol is needed on some architectures for the
GLIBC_2.3.4 version.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbols were moved using scripts/move-symbol-to-libc.py.
A placeholder symbol is required to keep the GLIBC_2.7 version.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
An explicit call from fork into the mq_notify implementation replaces
the previous use of pthread_atfork.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
To introduce the proper symbol versioning, the implementation of
the system call wrapper is moved to a C file.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbols were moved using scripts/move-symbol-to-libc.py.
Placeholder symbols are needed on some architectures, to keep the
GLIBC_2.1 and GLIBC_2.4 symbol versions around.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Move the common code into rt/lio_listio-common.c and include
the file in both rt/lio_listio.c and rt/lio_listio64.c. The common
code automatically defines both public symbols for __WORDSIZE == 64.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Both symbols have to be moved at the same time because they
are intertwined for __WORDSIZE == 64. The treatment of this case
is also changed to match more closely how the other files suppress
the declaration of the *64 identifier.
The symbols were moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbols were moved using scripts/move-symbol-to-libc.py.
There is a minor oddity here: This is generic code shared with Hurd,
and Hurd does not have time64 support. This is why the
versioned_symbol export for __aio_suspend_time64 is restricted to
the PTHREAD_IN_LIBC code.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Both symbols have to be moved at the same time because they
are intertwined for __WORDSIZE == 64. The treatment of this case
is also changed to match more closely how the other files suppress
the declaration of the *64 identifier.
The symbols were moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbols were moved using scripts/move-symbol-to-libc.py.
A version placeholder symbol is needed on alpha and sparc because
of the additional symbols formerly at version GLIBC_2.3.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This commit also moves the aio_misc and aio_sigqueue helpers,
so GLIBC_PRIVATE exports need to be added.
The symbol was moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
No bug. The check wcsnlen uses to detect that it is near the end of
maxlen is the following macro:
mov %r11, %rsi; \
subq %rax, %rsi; \
andq $-64, %rax; \
testq $-64, %rsi; \
je L(strnlen_ret)
which works independently of s + maxlen overflowing. So the
second overflow check is unnecessary for correctness and
just extra overhead in the common no-overflow case.
test-strlen.c, test-wcslen.c, test-strnlen.c and test-wcsnlen.c are
all passing
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
The pthread_atfork implementation is similar between Linux and Hurd;
only the compat version bits differ. The generic version is placed at
sysdeps/pthread with a common name.
It also fixes an issue with the Hurd license, where the static-only
object did not use LGPL + exception.
Checked on x86_64-linux-gnu, i686-linux-gnu, and with a build for
i686-gnu.
The Linux nptl implementation is used as the base for the generic fork
implementation to handle the internal locks and mutexes. The
system-specific bits are moved to a new internal _Fork symbol.
(This new implementation will be used to provide an async-signal-safe
_Fork now that POSIX has clarified that fork might not be
async-signal-safe [1]).
For Hurd it means that the __nss_database_fork_prepare_parent and
__nss_database_fork_subprocess will be run in a slightly different
order.
[1] https://austingroupbugs.net/view.php?id=62
AMD defines different flags for IBPB, IBRS, and STIBP [1], so new
x86_64_cpu values are added and IBRS_IBPB is only tested for Intel.
SSBD is also defined and implemented differently on AMD [2],
so a new AMD_SSBD flag is added. It should map to the
cpuinfo 'ssbd' flag on recent AMD CPUs.
This fixes tst-cpu-features-cpuinfo and tst-cpu-features-cpuinfo-static
on recent AMD CPUs.
Checked on x86_64-linux-gnu on AMD Ryzen 9 5900X.
[1] https://developer.amd.com/wp-content/resources/Architecture_Guidelines_Update_Indirect_Branch_Control.pdf
[2] https://bugzilla.kernel.org/show_bug.cgi?id=199889
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
IBT and SHSTK usable bits are copied from CPUID feature bits and later
cleared if kernel doesn't support CET. Copy IBT and SHSTK usable only
if CET is enabled so that they aren't set on CET-capable processors
running with a non-CET-enabled glibc.
This commit fixes the bug mentioned in the previous commit.
The previous implementations of wmemchr in these files relied
on maxlen * sizeof(wchar_t), which was not guaranteed by the standard.
The new overflow tests added in the previous commit now
pass (as well as all the other tests).
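As an illustration of the problem (not code from the patch), converting
the element count into a byte count can wrap around:
#include <stdint.h>
#include <stdio.h>
#include <wchar.h>

int
main (void)
{
  /* With a huge maxlen, maxlen * sizeof (wchar_t) wraps modulo
     SIZE_MAX + 1, so bounding the search by that many bytes would
     stop far too early.  */
  size_t maxlen = SIZE_MAX / 2 + 2;
  size_t bytes = maxlen * sizeof (wchar_t);
  printf ("maxlen = %zu, wrapped byte count = %zu\n", maxlen, bytes);
  return 0;
}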
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
This commit fixes the bug mentioned in the previous commit.
The previous implementations of wmemchr in these files relied
on n * sizeof(wchar_t), which was not guaranteed by the standard.
The new overflow tests added in the previous commit now
pass (as well as all the other tests).
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
No bug. This commit adds the ifunc / build infrastructure
necessary for wcslen to prefer the sse4.1 implementation
in strlen-vec.S. test-wcslen.c is passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
Since strlen.S contains SSE2 version of strlen/strnlen and SSE4.1
version of wcslen/wcsnlen, move strlen.S to multiarch/strlen-vec.S
and include multiarch/strlen-vec.S from SSE2 and SSE4.1 variants.
This also removes the unused symbols, __GI___strlen_sse2 and
__GI___wcsnlen_sse4_1.
For !__ASSUME_TIME64_SYSCALLS there is no need to issue a 64-bit syscall
if the provided timeout fits in a 32-bit one. The 64-bit usage should
be rare since the timeout is a relative one.
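A hedged sketch of the decision these wrappers share; the type and
helpers below only mirror glibc's internal __timespec64/in_time_t_range
support and are not the actual code:
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for the internal 64-bit timespec type and the
   range check used on 32-bit time_t ABIs.  */
struct timespec64_sketch { int64_t tv_sec; int64_t tv_nsec; };

static bool
fits_in_time32 (int64_t sec)
{
  return sec >= INT32_MIN && sec <= INT32_MAX;
}

/* A relative timeout almost always fits in 32 bits, so the plain 32-bit
   time syscall can be used; only the rare out-of-range case needs the
   64-bit time syscall.  */
static bool
use_32bit_time_syscall (const struct timespec64_sketch *timeout)
{
  return timeout == NULL || fits_in_time32 (timeout->tv_sec);
}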
Checked on i686-linux-gnu on a 4.15 kernel and on a 5.11 kernel
(with and without --enable-kernel=5.1) and on x86_64-linux-gnu.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
For !__ASSUME_TIME64_SYSCALLS there is no need to issue a 64-bit syscall
if the provided timeout fits in a 32-bit one. The 64-bit usage should
be rare since the timeout is a relative one.
Checked on i686-linux-gnu on a 4.15 kernel and on a 5.11 kernel
(with and without --enable-kernel=5.1) and on x86_64-linux-gnu.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
For !__ASSUME_TIME64_SYSCALLS there is no need to issue a 64-bit syscall
if the provided timeout fits in a 32-bit one. The 64-bit usage should
be rare since the timeout is a relative one.
Large timeouts are already tested by io/tst-utimensat-skeleton.c.
Checked on i686-linux-gnu on a 4.15 kernel and on a 5.11 kernel
(with and without --enable-kernel=5.1) and on x86_64-linux-gnu.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
For !__ASSUME_TIME64_SYSCALLS there is no need to issue a 64-bit syscall
if the provided timeout fits in a 32-bit one. The 64-bit usage should
be rare since the timeout is a relative one.
Checked on i686-linux-gnu on a 4.15 kernel and on a 5.11 kernel
(with and without --enable-kernel=5.1) and on x86_64-linux-gnu.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
For !__ASSUME_TIME64_SYSCALLS there is no need to issue a 64-bit syscall
if the provided timeout fits in a 32-bit one. The 64-bit usage should
be rare since the timeout is a relative one.
Checked on i686-linux-gnu on a 4.15 kernel and on a 5.11 kernel
(with and without --enable-kernel=5.1) and on x86_64-linux-gnu.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
For !__ASSUME_TIME64_SYSCALLS there is no need to issue a 64-bit syscall
if the provided timeout fits in a 32-bit one. The 64-bit usage should
be rare since the timeout is a relative one.
Checked on i686-linux-gnu on a 4.15 kernel and on a 5.11 kernel
(with and without --enable-kernel=5.1) and on x86_64-linux-gnu.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
For !__ASSUME_TIME64_SYSCALLS there is no need to issue a 64-bit syscall
if the provided timeout fits in a 32-bit one. The 64-bit usage should
be rare since the timeout is a relative one.
Checked on i686-linux-gnu on a 4.15 kernel and on a 5.11 kernel
(with and without --enable-kernel=5.1) and on x86_64-linux-gnu.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
For !__ASSUME_TIME64_SYSCALLS there is no need to issue a 64-bit syscall
if the provided timeout fits in a 32-bit one. The 64-bit usage should
be rare since the timeout is a relative one.
Checked on i686-linux-gnu on a 4.15 kernel and on a 5.11 kernel
(with and without --enable-kernel=5.1) and on x86_64-linux-gnu.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
It breaks the use case of live migration with CRIU or similar tools,
and most usages can be optimized away by either building glibc with
a minimum kernel of 5.1 or by using the 32-bit syscall for the common
case.
Checked on i686-linux-gnu on a 4.15 kernel and on a 5.11 kernel
(with and without --enable-kernel=5.1) and on x86_64-linux-gnu.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
It breaks the use case of live migration with CRIU or similar tools.
The performance drawback is that it would require an extra syscall
on older kernels without 64-bit time support.
Checked on i686-linux-gnu on a 4.15 kernel and on a 5.11 kernel
(with and without --enable-kernel=5.1) and on x86_64-linux-gnu.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
It breaks the use case of live migration with CRIU or similar tools.
The performance drawback is that it would require an extra syscall
on older kernels without 64-bit time support.
Checked on i686-linux-gnu on a 4.15 kernel and on a 5.11 kernel
(with and without --enable-kernel=5.1) and on x86_64-linux-gnu.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
For !__ASSUME_TIME64_SYSCALLS there is no need to issue a 64-bit syscall
if the provided timeout fits in a 32-bit one. The 64-bit usage should
be rare since the timeout is a relative one. This also avoids the need
to use supports_time64() (which breaks the use case of live migration
with CRIU or similar tools).
It also fixes an issue with the 32-bit select call for !__ASSUME_PSELECT
(microblaze with older kernels only), where the expected timeout
is a 'struct timeval' instead of a 'struct timespec'.
Checked on i686-linux-gnu on a 4.15 kernel and on a 5.11 kernel
(with and without --enable-kernel=5.1) and on x86_64-linux-gnu.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
For !__ASSUME_TIME64_SYSCALLS there is no need to issue a 64-bit syscall
if the provided timeout fits in a 32-bit one. The 64-bit usage should
be rare since the timeout is a relative one. This also avoids the need
to use supports_time64() (which breaks the use case of live migration
with CRIU or similar tools).
Checked on i686-linux-gnu on a 4.15 kernel and on a 5.11 kernel
(with and without --enable-kernel=5.1) and on x86_64-linux-gnu.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
For !__ASSUME_TIME64_SYSCALLS there is no need to issue a 64-bit syscall
if the provided timeout fits in a 32-bit one. The 64-bit usage should
be rare since the timeout is a relative one. This also avoids the need
to use supports_time64() (which breaks the use case of live migration
with CRIU or similar tools).
Checked on i686-linux-gnu on a 4.15 kernel and on a 5.11 kernel
(with and without --enable-kernel=5.1) and on x86_64-linux-gnu.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
For legacy ABIs that support 32-bit time_t, it calls the 64-bit
time variants directly, since the LFS symbols call the 64-bit time_t
ones internally.
Checked on i686-linux-gnu and x86_64-linux-gnu.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
This mirrors the situation on Hurd. These directories are on
the include search path, so #include <pthreadP.h> works after this
change on both Hurd and nptl.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The pthread-based implementation is the generic one. Replacing
the stubs makes it clear that they do not have to be adjusted for
the libpthread move.
Result of:
git mv -f sysdeps/pthread/aio_misc.h sysdeps/generic/
git mv sysdeps/pthread/timer_routines.c sysdeps/htl/
git mv -f sysdeps/pthread/{aio,lio,timer}_*.c rt/
Followed by manual adjustment of the #include paths in
sysdeps/unix/sysv/linux/wordsize-64, and a move of the version
definitions formerly in sysdeps/pthread/Versions.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This function has no dependency on libpthread, so the move is also
applied to Hurd.
The symbol was moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This function has no dependency on libpthread, so the move is also
applied to Hurd.
To avoid localplt failures, use __open64_nocancel instead of
pthread_setcancelstate and open.
The symbol was moved using scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Result of: git mv -f sysdeps/posix/shm_unlink.c rt
and manual removal of the _POSIX_MAPPED_FILES preprocessor condition.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Result of: git mv -f sysdeps/posix/shm_open.c rt
and manual removal of the _POSIX_MAPPED_FILES preprocessor condition.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
These were turned into compat symbols as part of the libpthread
move. It turns out they are used by language run-time libraries
(e.g., the GCC D front end), so it makes sense to preserve them as
external symbols even though they are not declared in any header
file.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Starting with recent commit 92a7d13439
"x86-64: Align child stack to 16 bytes [BZ #27902]"
the new test misc/tst-misalign-clone has failed on s390x/s390.
This patch now aligns the stack to a double-word
boundary, as is also done in the start.S files.
Similar to fts, the ftw routines pass a stat pointer that might
differ in size and layout when the 64-bit time API is used.
Checked on i686-linux-gnu and x86_64-linux-gnu.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
Similar to glob, the fts routines pass a stat pointer that might
differ in size and layout when the 64-bit time API is used.
Checked on i686-linux-gnu and x86_64-linux-gnu.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
glob might pass a different stat struct for gl_stat and gl_lstat
when GLOB_ALTDIRFUNC is used. This requires adding a new 64-bit time
version that also uses the 64-bit time stat functions.
Checked on i686-linux-gnu and x86_64-linux-gnu.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
A new build flag, _TIME_BITS, enables the use of the newer 64-bit
time symbols for legacy ABIs (where 32-bit time_t is the default). The
64-bit time support is only enabled if LFS (_FILE_OFFSET_BITS=64) is
also used.
Unlike LFS support, the y2038 symbols are added only for the
required ABIs (armhf, csky, hppa, i386, m68k, microblaze, mips32,
mips64-n32, nios2, powerpc32, sparc32, s390-32, and sh). The ABIs with
64-bit time support are unchanged, both for symbol and type
redirection.
On Linux, full 64-bit time support requires a minimum kernel
version of v5.1. Otherwise, the 32-bit fallbacks are used and might
result in an error with the overflow return code (EOVERFLOW).
i686-gnu does not yet support 64-bit time.
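A small, hypothetical usage example (file name and compiler invocation
are illustrative): on an affected 32-bit ABI both macros have to be
defined to get the 64-bit interfaces.
/* Build on e.g. i386 with:
     gcc -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 check-time.c
   Without both definitions, time_t stays 32 bits on these ABIs.  */
#include <stdio.h>
#include <time.h>

int
main (void)
{
  printf ("sizeof (time_t) = %zu\n", sizeof (time_t));
  return 0;
}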
This patch exports the following redirections to support 64-bit time:
* libc:
adjtime
adjtimex
clock_adjtime
clock_getres
clock_gettime
clock_nanosleep
clock_settime
cnd_timedwait
ctime
ctime_r
difftime
fstat
fstatat
futimens
futimes
futimesat
getitimer
getrusage
gettimeofday
gmtime
gmtime_r
localtime
localtime_r
lstat
lutimes
mktime
msgctl
mtx_timedlock
nanosleep
ntp_gettime
ntp_gettimex
ppoll
pselect
pthread_clockjoin_np
pthread_cond_clockwait
pthread_cond_timedwait
pthread_mutex_clocklock
pthread_mutex_timedlock
pthread_rwlock_clockrdlock
pthread_rwlock_clockwrlock
pthread_rwlock_timedrdlock
pthread_rwlock_timedwrlock
pthread_timedjoin_np
recvmmsg
sched_rr_get_interval
select
sem_clockwait
semctl
semtimedop
sem_timedwait
setitimer
settimeofday
shmctl
sigtimedwait
stat
thrd_sleep
time
timegm
timerfd_gettime
timerfd_settime
timespec_get
utime
utimensat
utimes
wait3
wait4
* librt:
aio_suspend
mq_timedreceive
mq_timedsend
timer_gettime
timer_settime
* libanl:
gai_suspend
Reviewed-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
It is only used for !__USE_MISC; the default path uses the kernel
headers. The patch also adds SO_TIMESTAMP, SO_TIMESTAMPNS, and
SO_TIMESTAMPING, which use new values for the 64-bit time_t kernel
interfaces.
The __USE_TIME_BITS64 is not defined internally yet, although the
internal header is used when building the 64-bit stat implementations.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
Instead of replicating the same definitions from struct_shmid64_ds.h
in the multiple struct_shmid_ds.h headers, use a common header which is
included when required (struct_shmid64_ds_helper.h).
The __USE_TIME_BITS64 is not defined internally yet, although the
internal header is used when building the 64-bit shmctl implementation.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
Instead of replicating the same definitions from struct_semid64_ds.h
in the multiple struct_semid_ds.h headers, use a common header which is
included when required (struct_semid64_ds_helper.h).
The __USE_TIME_BITS64 is not defined internally yet, although the
internal header is used when building the 64-bit semctl implementation.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
Instead of replicating the same definitions from struct_msqid64_ds.h
in the multiple struct_msqid_ds.h headers, use a common header which is
included when required (struct_msqid64_ds_helper.h).
The __USE_TIME_BITS64 is not defined internally yet, although the
internal header is used when building the 64-bit msgctl implementation.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
Instead of replicating the same definitions from struct_stat_time64.h
in the multiple struct_stat.h headers, use a common header which is
included when required (struct_stat_time64_helper.h). The 64-bit time support
is added only for LFS support.
The __USE_TIME_BITS64 is not defined internally yet, although the
internal header is used when building the 64-bit stat implementations.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The __USE_TIME_BITS64 is not defined internally yet.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
Handle SO_TIMESTAMP{NS} similarly to recvmsg: for
!__ASSUME_TIME64_SYSCALLS it converts the first 32-bit time SO_TIMESTAMP
or SO_TIMESTAMPNS message and appends it to the control buffer if there
is extra space, or returns MSG_CTRUNC otherwise. The 32-bit time field
is kept as-is.
Also, for !__ASSUME_TIME64_SYSCALLS it limits the maximum number of
'struct mmsghdr *' entries to IOV_MAX (and also increases the stack size
requirement to IOV_MAX times sizeof (socklen_t)). Linux imposes
a similar limit on sendmmsg, so bounding the array size in recvmmsg is
not unreasonable. This path will be used only on older kernels when
building with 32-bit time support.
Checked on x86_64-linux-gnu and i686-linux-gnu (on 5.4 and on 4.15
kernel).
Reviewed-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The recvmsg handling is more complicated because it requires checking
the returned kernel control messages and performing some conversions.
For !__ASSUME_TIME64_SYSCALLS it converts the first 32-bit time
SO_TIMESTAMP or SO_TIMESTAMPNS message and appends it to the control
buffer if there is extra space, or returns MSG_CTRUNC otherwise. The
32-bit time field is kept as-is.
Calls with __TIMESIZE=32 will see the converted 64-bit time control
messages as spurious control messages of unknown type. Calls with
__TIMESIZE=64 running on pre-time64 kernels will see the original
message as a spurious control message of unknown type, while those
running on kernels with native 64-bit time support will only see the
time64 version of the control message.
Checked on x86_64-linux-gnu and i686-linux-gnu (on 5.4 and on 4.15
kernel).
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The constant values will be changed for __TIMESIZE=64, so binaries built
with 64-bit time support might fail to work properly on old kernels.
Both {get,set}sockopt will retry the syscall with the old constant
values and the timeout value adjusted when the kernel returns
ENOPROTOOPT.
It also adds internal-only SO_{RCV,SND}TIMEO constants, where
COMPAT_SO_{RCV,SND}TIMEO_OLD indicates pre-time64 (32-bit time) support
and COMPAT_SO_{RCV,SND}TIMEO_NEW indicates time64 support. This allows
referring to the constants independently of the time_t ABI and kernel
version used.
Checked on x86_64-linux-gnu and i686-linux-gnu (on 5.4 and on 4.15
kernel).
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The s390 will require the 64-bit time symbols for y2038 support.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The n32 will require the 64-bit time symbols for y2038 support.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The n32 will require the 64-bit time symbols for y2038 support.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
Commit 68ab82f566 added support for the scv
syscall ABI on powerpc. Since then systems that have kernel and processor
support started using scv. However adding the proper support for a new syscall
ABI requires changes to several other projects (e.g. qemu, valgrind, strace,
kernel), which are gradually receiving support.
Meanwhile, having a way to disable scv on glibc at build time can be useful for
distros that may encounter conflicts with projects that still do not support the
scv ABI, buying time until proper support is added.
This commit adds a --disable-scv option that disables scv support and uses sc
for all syscalls, like before commit 68ab82f566.
Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
Now that pthread_kill is provided by libc.so, it is possible to
implement the generic POSIX version of raise as
'pthread_kill (pthread_self (), sig)'.
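A minimal sketch of that generic version (the Linux-specific
vfork/gettid handling described below is omitted):
#include <pthread.h>
#include <signal.h>

/* raise expressed in terms of pthread_kill on the calling thread.  */
static int
raise_sketch (int sig)
{
  return pthread_kill (pthread_self (), sig);
}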
In the Linux implementation, pthread_kill reads the target TID from
the TCB. For raise, this is not possible because it would make raise
fail when issued after vfork (where the resulting process
has a different TID from the parent, but its TCB is not updated as it is
for pthread_create). To make raise use pthread_kill, the latter is made
usable from vfork by getting the target thread ID through the gettid
syscall.
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Now that the thread cancellation type is not accessed concurrently
anymore, it is possible to move it out of 'cancelhandling'.
By removing the cancel state from the internal thread cancel handling
state, there is no need to check whether the cancelled bit was set in
the CAS operation.
This allows simplifying the cancellation wrappers, and
CANCEL_CANCELED_AND_ASYNCHRONOUS is removed.
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Now that the thread cancellation state is not accessed concurrently anymore,
it is possible to move it out of 'cancelhandling'.
The code is also simplified: CANCELLATION_P is replaced with an
internal pthread_testcancel call, and the CANCELSTATE_BIT{MASK} is
removed.
With this behavior, pthread_setcancelstate does not need to act on
cancellation if the cancel type is asynchronous (that is already handled
either by pthread_setcanceltype or by the signal handler).
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Since commit 0c1c3a771e
("dlfcn: Move dlopen into libc") libdl.a is empty, so linking
against it is no longer necessary.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Move all gconv-modules configuration files to gconv-modules.conf.
That is, the S390 extensions now become gconv-modules-s390.conf. Move
both configuration files into gconv-modules.d.
Now GCONV_PATH/gconv-modules is read only for backward compatibility
for third-party gconv modules directories.
Reviewed-by: DJ Delorie <dj@redhat.com>
Add inline assembler for the roundeven functions.
Passes GLIBC regression. Note GCC does not inline the builtin (PR100966),
so this cannot be used for now.
This patch replaces the obsolete AC_TRY_COMPILE with AC_COMPILE_IFELSE or
AC_PREPROC_IFELSE.
It has been confirmed that GNU 'autoconf' 2.69 no longer emits warnings
about the obsolete macros; regenerating updated the following files:
- configure
- sysdeps/mach/configure
- sysdeps/mach/hurd/configure
- sysdeps/s390/configure
- sysdeps/unix/sysv/linux/configure
and didn't change the following files:
- sysdeps/ieee754/ldbl-opt/configure
- sysdeps/unix/sysv/linux/powerpc/configure
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Consolidate all hooks structures into a single one. There are
no static dlopen ABI concerns because glibc 2.34 already comes
with substantial ABI-incompatible changes in this area. (Static
dlopen requires the exact same dynamic glibc version that was used
for static linking.)
The new approach stores a pointer to the hooks structure in
_rtld_global_ro and initializes it in __rtld_static_init. This avoids
a back-and-forth with various callback functions.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This commit removes the ELF constructor and internal variables from
dlfcn/dlfcn.c. The file now serves the same purpose as
nptl/libpthread-compat.c, so it is renamed to dlfcn/libdl-compat.c.
The use of libdl-shared-only-routines ensures that libdl.a is empty.
This commit adjusts the test suite not to use $(libdl). The libdl.so
symbolic link is no longer installed.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
In elf/Makefile, remove the $(libdl) dependency from testobj1.so
because the unused libdl DSO now causes elf/tst-unused-deps to
fail.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
There is a minor functionality enhancement: dlerror now sets
errno if it was set as part of the exception. (This is the result
of using %m in asprintf, to avoid the strerror PLT call.) The
previous errno value upon function return was unpredictable.
Documenting this as a feature is premature; we need to make sure
that the error codes are meaningful when they are set by the dynamic
loader.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Some targets have a GLIBC_2.0 baseline for libdl, while using
GLIBC_2.2 for libc. This means that the generated libc.map file
does not have any version nodes for GLIBC_2.0 or GLIBC_2.1. However,
moving symbols from libdl into libc needs such version nodes.
(Future symbol moves from librt will need this as well.)
This kludge is only necessary for symbols predating GLIBC_2.2 because
the affected targets use GLIBC_2.2 as the baseline for libc. Given
the small number and fixed set of affected architectures, no generic
mechanism is implemented, and instead the map file fragment is
hard-coded in scripts/versions.mk.
The compat_symbol macro already emits the appropriate version strings,
so no adjustments are needed there.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Some symbols have explicit versioned_symbol or compat_symbol markers
in the sources, but no corresponding entry in the Versions files.
This presently works because the local: * directive is only applied
to the base version.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
__pthread_attr_copy can fail and does not initialize the attribute
structure in that case.
If __pthread_attr_copy is never called and there is no allocated
attribute, pthread_attr_destroy should not be called, otherwise
there is a null pointer dereference in rt/tst-mqueue6.
Fixes commit 42d3593505
("Use __pthread_attr_copy in mq_notify (bug 27896)").
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
The symbol has never been exported, so no compatibility symbol is
needed. Removing this file prevents ld from creating an exported
symbol in case GLIBC_2_0 expands to a symbol version which
does not have a local: *; directive in the symbol version map file.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This patch was based on the __memcmp_power8 and the recent
__strlen_power10.
Improvements from __memcmp_power8:
1. Don't need alignment code.
On POWER10 lxvp and lxvl do not generate alignment interrupts, so
they are safe for use on caching-inhibited memory. Notice that the
comparison in the main loop will wait for both VSRs to be ready.
Therefore aligning one of the input address does not improve
performance. In order to align both registers a vperm is necessary
which adds too much overhead.
2. Uses new POWER10 instructions
This code uses lxvp to decrease contention on load by loading 32 bytes
per instruction.
vextractbm is used to produce smaller tail code for calculating the
return value.
3. Performance improvement
This version has around 35% better performance on average. I saw no
performance regressions for any length or alignment.
Thanks Matheus for helping me out with some details.
Co-authored-by: Matheus Castanho <msc@linux.ibm.com>
Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
This patch optimizes the performance of memset for A64FX [1] which
implements ARMv8-A SVE and has a 64 KB L1 cache per core and an 8 MB L2
cache per NUMA node.
The performance optimization makes use of Scalable Vector Register
with several techniques such as loop unrolling, memory access
alignment, cache zero fill and prefetch.
SVE assembler code for memset is implemented as Vector Length Agnostic
code so theoretically it can be run on any SOC which supports ARMv8-A
SVE standard.
We confirmed that all testcases have been passed by running 'make
check' and 'make xcheck' not only on A64FX but also on ThunderX2.
And also we confirmed that the SVE 512 bit vector register performance
is roughly 4 times better than Advanced SIMD 128 bit register and 8
times better than scalar 64 bit register by running 'make bench'.
[1] https://github.com/fujitsu/A64FX
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Reviewed-by: Szabolcs Nagy <Szabolcs.Nagy@arm.com>
This patch optimizes the performance of memcpy/memmove for A64FX [1]
which implements ARMv8-A SVE and has a 64 KB L1 cache per core and an
8 MB L2 cache per NUMA node.
The performance optimization makes use of Scalable Vector Register
with several techniques such as loop unrolling, memory access
alignment, cache zero fill, and software pipelining.
SVE assembler code for memcpy/memmove is implemented as Vector Length
Agnostic code so theoretically it can be run on any SOC which supports
ARMv8-A SVE standard.
We confirmed that all testcases have been passed by running 'make
check' and 'make xcheck' not only on A64FX but also on ThunderX2.
And also we confirmed that the SVE 512 bit vector register performance
is roughly 4 times better than Advanced SIMD 128 bit register and 8
times better than scalar 64 bit register by running 'make bench'.
[1] https://github.com/fujitsu/A64FX
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Reviewed-by: Szabolcs Nagy <Szabolcs.Nagy@arm.com>
This patch adds a test helper script to change the vector length for a
child process. The script can be used as a test-wrapper for 'make check'.
Usage examples:
~/build$ make check subdirs=string \
test-wrapper='~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16'
~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16 \
make test t=string/test-memcpy
~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 32 \
./debugglibc.sh string/test-memmove
~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 64 \
./testrun.sh string/test-memset
This patch defines BTI_C and BTI_J macros conditionally for
performance.
If HAVE_AARCH64_BTI is true, BTI_C and BTI_J are defined as HINT
instruction for ARMv8.5 BTI (Branch Target Identification).
If HAVE_AARCH64_BTI is false, both BTI_C and BTI_J are defined as
NOP.
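A hedged sketch of the conditional definitions (the HINT immediates
shown, 34 for "bti c" and 36 for "bti j", are the standard ARMv8.5
encodings, which execute as NOPs on older cores):
/* Sketch only; the real definitions live in the AArch64 sysdep header.  */
#if HAVE_AARCH64_BTI
# define BTI_C  hint 34   /* bti c */
# define BTI_J  hint 36   /* bti j */
#else
# define BTI_C  nop
# define BTI_J  nop
#endif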
Since the variable expands to nothing under Linux, it is no longer
necessary to clutter the makefiles with it.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
When using scv for templated ASM syscalls, the current code interprets any
negative return value as an error, but the only valid error codes are in
the range -4095..-1 according to the ABI.
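A hedged C sketch of the corrected check (the templated assembly
performs the equivalent comparison):
#include <errno.h>

/* Under the scv ABI only values in [-4095, -1] are error codes; any
   other negative value is a legitimate result.  */
static long
scv_check_error (long ret)
{
  if ((unsigned long) ret >= (unsigned long) -4095L)
    {
      errno = (int) -ret;
      return -1;
    }
  return ret;
}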
This commit also fixes 'signal.gen.test' strace test, where the issue
was first identified.
Reviewed-by: Matheus Castanho <msc@linux.ibm.com>
1. Replace
if ((((uintptr_t) &_d) & (__alignof (double) - 1)) != 0)
which may be optimized out by the compiler, with
int
__attribute__ ((weak, noclone, noinline))
is_aligned (void *p, int align)
{
return (((uintptr_t) p) & (align - 1)) != 0;
}
2. Add TEST_STACK_ALIGN_INIT to TEST_STACK_ALIGN.
3. Add a common TEST_STACK_ALIGN_INIT to check 16-byte stack alignment
for both i386 and x86-64.
4. Update powerpc to use TEST_STACK_ALIGN_INIT.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Only the placeholder compatibility symbols are left now.
The __errno_location symbol was removed (moved) using
scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbols were moved using scripts/move-symbol-to-libc.py.
The libpthread placeholder symbols need some changes because some
symbol versions have gone away completely. But
__errno_location@@GLIBC_2.0 still exists, so the GLIBC_2.0 version
is still there.
The internal __pthread_create symbol now points to the correct
function, so the sysdeps/nptl/thrd_create.c override is no longer
necessary.
There was an issue with how the hidden alias of pthread_getattr_default_np
was defined, so this commit cleans up that aspect and removes the
GLIBC_PRIVATE export altogether.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Use the __nptl_tls_static_size_for_stack inline function instead,
and the GLRO (dl_tls_static_align) value directly.
The computation of GLRO (dl_tls_static_align) in
_dl_determine_tlsoffset ensures that the alignment is at least
TLS_TCB_ALIGN, which is at least STACK_ALIGN (see allocate_stack).
Therefore, the additional rounding-up step is removed.
Also move the initialization of the default stack size from
__pthread_initialize_minimal_internal to __pthread_early_init.
This introduces an extra system call during single-threaded startup,
but this simplifies the initialization sequence. No locking is
needed around the writes to __default_pthread_attr because the
process is single-threaded at this point.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
No bug. This commit makes a few small improvements to
memset-vec-unaligned-erms.S. The changes are:
1. Only align to 64 instead of 128. Either alignment will perform
equally well in a loop, and 128 just increases the odds of having to do
an extra iteration, which can be significant overhead for small values.
2. Align some targets and the loop.
3. Remove an ALU from the alignment process.
4. Reorder the last 4x VEC so that they are stored after the loop.
5. Move the condition for leq 8x VEC to before the alignment process.
test-memset and test-wmemset are both passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
When compiled with GCC 11.1 and -march=z14 -O3 build flags, running
ld.so (or any dynamically linked program) prints:
Fatal glibc error: CPU lacks VXE support (z14 or later required)
Co-Authored-By: Stefan Liebler <stli@linux.ibm.com>
Reviewed-by: Stefan Liebler <stli@linux.ibm.com>
When built with GCC 11.1 and -mcpu=power9, ld.so prints this error
message when running on POWER8:
Fatal glibc error: CPU lacks ISA 3.00 support (POWER9 or later required)
No bug. This commit optimizes memcmp-evex.S. The optimizations include
adding a new vec compare path for small sizes, reorganizing the entry
control flow, removing some unnecessary ALU instructions from the main
loop, and most importantly replacing the heavy use of vpcmp + kand
logic with vpxor + vptern. test-memcmp and test-wmemcmp are both
passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
No bug. This commit optimizes memcmp-avx2.S. The optimizations include
adding a new vec compare path for small sizes, reorganizing the entry
control flow, and removing some unnecessary ALU instructions from the
main loop. test-memcmp and test-wmemcmp are both passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
The tst-timespec_getres test (e5ac7bd679) triggers an issue on 32-bit
architectures on Linux older than 5.1, where the fallback syscall
is used.
Checked on powerpc-linux-gnu.
ISO C2X adds a timespec_getres function alongside the C11
timespec_get, with functionality similar to that of POSIX clock_getres
(including allowing a NULL pointer to be passed to the function).
Implement this function for glibc, similarly to the implementation of
timespec_get.
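A short usage example (per C2X, the function returns the base on
success and zero otherwise):
#include <stdio.h>
#include <time.h>

int
main (void)
{
  struct timespec res;
  /* Like clock_getres, but for the TIME_UTC time base; a NULL pointer
     is also allowed.  */
  if (timespec_getres (&res, TIME_UTC) == TIME_UTC)
    printf ("TIME_UTC resolution: %lld s %ld ns\n",
            (long long) res.tv_sec, (long) res.tv_nsec);
  return 0;
}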
This includes a basic test like that of timespec_get, but no
documentation in the manual, given that TIME_UTC and timespec_get
aren't documented in the manual at all. The handling of 64-bit time
follows that in timespec_get; people maintaining patch series for
64-bit time will need to update them accordingly (to export
__timespec_getres64, redirect calls in time.h and run the test for
_TIME_BITS=64).
Tested for x86_64 and x86, and (previous version; only testcase
differs) with build-many-glibcs.py.
Reuse code for optimized strlen to implement a faster version of rawmemchr.
This takes advantage of the same benefits provided by the strlen implementation,
but needs some extra steps. __strlen_power10 code should be unchanged after this
change.
rawmemchr returns a pointer to the char found, while strlen returns only the
length, so we have to take that into account when preparing the return value.
To quickly check 64B, the loop in __strlen_power10 merges the whole block into
16B by using unsigned minimum vector operations (vminub) and checks if there are
any \0 bytes in the resulting vector. The same code is used by rawmemchr if the
char c is 0. However, this approach does not work when c != 0. We first need to
subtract c from each byte, so that the value we are looking for is converted to
0; then taking the minimum and checking for nulls works again.
The new code branches after it has compared ~256 bytes and chooses which of the
two strategies above will be used in the main loop, based on the char c. This
extra branch adds some overhead (~5%) for length ~256, but is quickly amortized
by the faster loop for larger sizes.
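A scalar C sketch of the byte-subtraction idea (the real code does this
16 bytes at a time with vminub):
#include <stdbool.h>

/* For c != 0, subtracting c from every byte turns the byte we are
   looking for into 0, so the same "find a zero byte" machinery used
   for strlen works again.  */
static bool
block_contains (const unsigned char *block, int len, unsigned char c)
{
  unsigned char min = 0xff;
  for (int i = 0; i < len; i++)
    {
      unsigned char adjusted = (unsigned char) (block[i] - c);
      if (adjusted < min)
        min = adjusted;   /* vminub computes this minimum per vector.  */
    }
  return min == 0;        /* A zero minimum means some byte equals c.  */
}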
Compared to __rawmemchr_power9, this version is ~20% faster for length < 256.
Because of the optimized main loop, the improvement becomes ~35% for c != 0
and ~50% for c = 0 for strings longer than 256.
Reviewed-by: Lucas A. M. Magalhaes <lamm@linux.ibm.com>
Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
The symbol was moved using scripts/move-symbol-to-libc.py.
The GLIBC_2.11 version is now empty, so add a placeholder symbol.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
The GLIBC_2.3.4 version is now empty, so add a placeholder symbol.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
Add __libpthread_version_placeholder@@GLIBC_2.12 for the targets
that need it.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
__libpthread_version_placeholder@@GLIBC_2.2 is needed by this change;
the Versions entry for GLIBC_2.2 in libpthread had leftover symbols
due to an error in a previous conflict resolution. The condition
for the placeholder symbol is complicated because some architectures
have earlier symbols at the GLIBC_2.2 symbol versions, so the
placeholder is not required there (yet).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
A new placeholder symbol __libpthread_version_placeholder@GLIBC_2.18
is needed to keep the GLIBC_2.18 symbol version in libpthread.
The __pthread_getattr_default_np@@GLIBC_PRIVATE export is used
from pthread_create.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This helps to clarify that the caching of these fields in libpthread
(in __static_tls_size, __static_tls_align_m1) is unnecessary.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
After static dlopen, a copy of ld.so is loaded into the inner
namespace, but that copy is not initialized at all. Some
architectures run into serious problems as a result, which is why the
_dl_var_init mechanism was invented. With libpthread moving into
libc and parts into ld.so, more architectures are impacted, so it makes
sense to switch to a generic mechanism which performs the partial
initialization.
As a result, getauxval now works after static dlopen (bug 20802).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The initialization of the report_events TCB field is now performed
in __tls_init_tp instead of __pthread_initialize_minimal_internal
(in libpthread).
The events interface is difficult to test because GDB stopped using it
in 2015. The td_thr_get_info change to ignore lookup issues is enough
to support GDB with this change.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The error paths of __check_native would leave the socket FD open on
return, resulting in an FD leak. Rework function exit paths so that
the fd is always closed on return.
The symbols were moved using scripts/move-symbol-to-libc.py,
in one commit due to their dependency on the internal
__concurrency_level variable.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbols were moved using scripts/move-symbol-to-libc.py.
Also clean up some unwinder linking leftover in the same spot
in nptl/pthreadP.h.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
It is necessary to arrange for a
__libpthread_version_placeholder@GLIBC_2.6 on some of the powerpc
targets.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This is a follow-up patch to the fix for bug 19329. This adds relaxed
MO atomics to accesses that were previously data races but are now
race conditions, and where relaxed MO is sufficient.
The race conditions all follow the pattern that the write is behind the
dlopen lock, but a read can happen concurrently (e.g. during tls access)
without holding the lock. For slotinfo entries the read value only
matters if it reads from a synchronized write in dlopen or dlclose,
otherwise the related dtv entry is not valid to access so it is fine
to leave it in an inconsistent state. The same applies for
GL(dl_tls_max_dtv_idx) and GL(dl_tls_generation), but there the
algorithm relies on the fact that the read of the last synchronized
write is an increasing value.
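A schematic of that pattern in plain C11 atomics (names are
illustrative; glibc's atomic_store_relaxed/atomic_load_relaxed wrappers
correspond to the explicit relaxed calls used here):
#include <stdatomic.h>
#include <stddef.h>

/* Illustrative slot-info entry: the generation counter is written under
   the dlopen lock but may be read concurrently during TLS access.  */
struct slotinfo_sketch
{
  _Atomic size_t gen;
};

/* Writer side (dlopen/dlclose, lock held).  */
static void
publish_generation (struct slotinfo_sketch *e, size_t newgen)
{
  atomic_store_explicit (&e->gen, newgen, memory_order_relaxed);
}

/* Reader side (TLS access, no lock held): a relaxed load is enough
   because a stale value only refers to a dtv entry that is not valid
   to access anyway.  */
static size_t
read_generation (struct slotinfo_sketch *e)
{
  return atomic_load_explicit (&e->gen, memory_order_relaxed);
}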
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbols pthread_clockjoin_np, pthread_join, pthread_timedjoin_np,
pthread_tryjoin_np, thrd_join were moved using
scripts/move-symbol-to-libc.py.
Moving the symbols at the same time avoids the need for temporary
exports.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The symbol was moved using scripts/move-symbol-to-libc.py.
The export of __default_pthread_attr_freeres is temporary. There
is a minor regression in freeres coverage because in the dynamic case,
__default_pthread_attr_freeres is no longer called if libpthread is
not linked in.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The nptl version is used as the default, since with the symbol now
always present the single-thread optimization is tricky.
Hurd is not changed; it uses its own lock scheme (which calls
_cthreads_funlockfile).
Checked on x86_64-linux-gnu.
The nptl version is used as the default, since with the symbol now
always present the single-thread optimization is tricky.
Hurd is not changed; it uses its own lock scheme (which calls
_cthreads_ftrylockfile).
Checked on x86_64-linux-gnu.
The nptl version is used as the default, since with the symbol now
always present the single-thread optimization is tricky.
Hurd is not changed; it uses its own lock scheme (which calls
_cthreads_flockfile).
Checked on x86_64-linux-gnu.
Linux 5.12 adds the constants PTRACE_SYSEMU and
PTRACE_SYSEMU_SINGLESTEP for s390. Add these to glibc.
Tested with build-many-glibcs.py for s390-linux-gnu and
s390x-linux-gnu.
These workload traces cover the whole "long double" range.
This patch was prepared with the help of Adhemerval Zanella.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
All the stack lists are now in _rtld_global, so it is possible
to change stack permissions directly from there, instead of
calling into libpthread to do the change.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Permissions of the cached stacks may have to be updated if an object
is loaded that requires executable stacks, so the dynamic loader
needs to know about these cached stacks.
The move of in_flight_stack and stack_cache_actsize is a requirement for
merging __reclaim_stacks into the fork implementation in libc.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This is an early variant of __tls_init_tp, primarily for initializing
thread-related elements of _rtld_global/GL.
Some existing initialization code not needed for NPTL is moved into
the generic version of this function.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Always use __libc_multiple_threads if beneficial, and do not assume
that the dynamic loader is single-threaded. This assumption could
become incorrect by accident once more code is moved from libpthread
into it. The previous commit introducing the
NO_SYSCALL_CANCEL_CHECKING macro enables this change.
Do not hint to the compiler that multi-threaded programs are unlikely
(which is not quite true anymore).
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Historically, SINGLE_THREAD_P is defined to 1 in the dynamic loader.
This has the side effect of disabling cancellation points. In order
to enable future use of SINGLE_THREAD_P for single-thread
optimizations in the dynamic loader (which becomes important once
more code is moved from libpthread), introduce a new
NO_SYSCALL_CANCEL_CHECKING macro which is always 1 for IS_IN (rtld),
independently of the actual SINGLE_THREAD_P value.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This allows the elimination of the __libc_multiple_threads_ptr
variable in libpthread and its initialization procedure.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>