Commit Graph

15396 Commits

Author SHA1 Message Date
Adhemerval Zanella
774058d729 linux: Fix sys/mount.h usage with kernel headers
Now that the kernel exports linux/mount.h and includes it in linux/fs.h,
its definitions might clash with the glibc-provided sys/mount.h.  To avoid
the need to always rearrange the Linux header to come after the glibc one,
the glibc sys/mount.h is changed to:

  1. Undefine the macros also used as enum constants.  This covers prior
     inclusion of <linux/mount.h> (for instance MS_RDONLY).

  2. Include <linux/mount.h> based on the usual __has_include check
     (it needs to use __has_include ("linux/mount.h") to paper over GCC
     bugs).

  3. Define enum fsconfig_command only if FSOPEN_CLOEXEC is not defined.
     (FSOPEN_CLOEXEC should be a very close proxy.)

  4. Define struct mount_attr if MOUNT_ATTR_SIZE_VER0 is not defined.
     (Added in the same commit on the Linux side.)

This patch also adds some tests to check that including linux/fs.h and
linux/mount.h both before and after sys/mount.h works.
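
A rough sketch of the header pattern described above (illustrative only;
the actual glibc header is more elaborate):

    /* 1. Undefine macros (e.g. from a prior <linux/mount.h> inclusion)
       whose names glibc also uses as enum constants; likewise for the
       other MS_* constants.  */
    #undef MS_RDONLY

    /* 2. Pull in the kernel header when available; the quoted form papers
       over __has_include bugs in some GCC versions.  */
    #if defined __has_include
    # if __has_include ("linux/mount.h")
    #  include <linux/mount.h>
    # endif
    #endif

    /* 3. and 4. Provide the definitions only if the kernel header did not.  */
    #ifndef FSOPEN_CLOEXEC
    enum fsconfig_command
    {
      FSCONFIG_SET_FLAG = 0,
      /* ... remaining commands ...  */
    };
    #endif
    #ifndef MOUNT_ATTR_SIZE_VER0
    struct mount_attr
    {
      unsigned long long attr_set;
      unsigned long long attr_clr;
      unsigned long long propagation;
      unsigned long long userns_fd;
    };
    #endif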

Checked on x86_64-linux-gnu.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2022-08-12 09:15:28 -03:00
Adhemerval Zanella
e1226cdc6b linux: Use compile_c_snippet to check linux/mount.h availability
Checked on x86_64-linux-gnu.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2022-08-12 09:15:23 -03:00
Adhemerval Zanella
c68b6044bc linux: Mimic kernel definition for BLOCK_SIZE
To avoid possible warnings if the kernel header is included before
sys/mount.h.
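
For reference, the kernel header defines it roughly as follows, and the
glibc header now mirrors that form (sketch, not the verbatim header text):

    #define BLOCK_SIZE_BITS 10
    #define BLOCK_SIZE (1 << BLOCK_SIZE_BITS)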

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2022-08-12 09:15:21 -03:00
Adhemerval Zanella
1542019b69 linux: Use compile_c_snippet to check linux/pidfd.h availability
Instead of tying it to a specific kernel version.

Checked on x86_64-linux-gnu.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2022-08-12 09:15:11 -03:00
caiyinyu
1c9bc1b6e5 LoongArch: Add pointer mangling support. 2022-08-12 09:30:56 +08:00
Wilco Dijkstra
12182ba18d AArch64: Fix typo in sve configure check (BZ# 29394)
Fix a typo in the SVE configure check. This fixes [BZ# 29394].
2022-08-11 17:52:00 +01:00
Wilco Dijkstra
c51c483d2b libio: Improve performance of IO locks
Improve performance of recursive IO locks by adding a fast path for
the single-threaded case. To reduce the number of memory accesses for
locking/unlocking, only increment the recursion counter if the lock
is already taken.

On Neoverse V1, a microbenchmark with many small freads improved by
2.9x. Multithreaded performance improved by 2%.
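
A rough sketch of the idea (hypothetical names; not the actual libio
internals):

    #include <pthread.h>
    #include <stddef.h>

    struct rec_lock
    {
      pthread_mutex_t mutex;
      void *owner;
      int count;
    };

    static char single_threaded = 1;  /* stand-in for __libc_single_threaded */

    static void
    rec_lock_acquire (struct rec_lock *l, void *self)
    {
      if (l->owner == self)
        {
          /* Recursive acquisition: only now touch the recursion counter.  */
          ++l->count;
          return;
        }
      if (!single_threaded)
        pthread_mutex_lock (&l->mutex);   /* multi-threaded slow path */
      l->owner = self;                    /* count stays 0 on the first level */
    }

    static void
    rec_lock_release (struct rec_lock *l)
    {
      if (l->count > 0)
        {
          --l->count;                     /* unwind a nested acquisition */
          return;
        }
      l->owner = NULL;
      if (!single_threaded)
        pthread_mutex_unlock (&l->mutex);
    }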

Reviewed-by: Cristian Rodríguez  <crrodriguez@opensuse.org>
2022-08-11 16:47:45 +01:00
Stefan Liebler
11f09947f3 tst-process_madvise: Check process_madvise-syscall support.
So far this test only checks whether the pidfd_open syscall is supported,
which was introduced with Linux 5.3.

The process_madvise syscall was introduced with Linux 5.10.
Thus you'll get FAILs if you are running a kernel in between.

This patch adds a check: if the first process_madvise syscall
returns ENOSYS, the test fails as UNSUPPORTED.
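
A sketch of the kind of check described (simplified, not the actual test
code; it assumes SYS_process_madvise and MADV_COLD are available):

    #include <errno.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <sys/uio.h>
    #include <unistd.h>
    #include <support/check.h>

    static void
    require_process_madvise (int pidfd, struct iovec *iv)
    {
      long int r = syscall (SYS_process_madvise, pidfd, iv, 1, MADV_COLD, 0);
      if (r == -1 && errno == ENOSYS)
        FAIL_UNSUPPORTED ("kernel does not support process_madvise");
    }
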
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2022-08-11 12:21:05 +02:00
Noah Goldstein
312ded0d63 x86: Fix #define STRCPY guard in strcpy-sse2.S
`#ifndef STPCPY` is incorrect for checking if `STRCPY` is already
defined.  It doesn't end up mattering as the whole check is
guarded by `#if IS_IN (libc)` but is incorrect nonetheless.
2022-08-09 17:00:03 +08:00
Adhemerval Zanella
26a3499cdb i386: Use cmpl instead of cmp
Clang cannot assemble cmp in the AT&T dialect mode.
2022-08-05 09:28:39 -03:00
Adhemerval Zanella
1ed5869c4c i386: Use fldt instead of fld on e_logl.S
Clang cannot assemble fldt in the AT&T dialect mode.
2022-08-05 09:28:33 -03:00
Fangrui Song
525ca33a61 i386: Replace movzx with movzbl
Similar to 6720d36b66 for x86-64.

Clang cannot assemble movzx in the AT&T dialect mode.  Change movzx to
movzbl, which follows the AT&T dialect and is used elsewhere in the
file.
2022-08-04 14:06:50 -07:00
Adhemerval Zanella
3698f5a9dd i386: Remove RELA support
Now that prelink is no longer supported, there is no need to keep
supporting RELA for non-bootstrap builds.
2022-08-04 10:03:46 -03:00
Adhemerval Zanella
c3f5682215 arm: Remove RELA support
Now that prelink is no longer supported, there is no need to keep
supporting RELA for non-bootstrap builds.
2022-08-04 10:03:46 -03:00
Adhemerval Zanella
36676f5e5d Remove ldd libc4 support
The older libc versions have been obsolete for over twenty years now.
2022-08-04 10:03:45 -03:00
Lucas A. M. Magalhaes
8ee878592c Assume only FLAG_ELF_LIBC6 support
The older libc versions have been obsolete for over twenty years now.
This patch removes the special flags for libc5 and libc4 and assumes
that all libraries cached are libc6 compatible and use FLAG_ELF_LIBC6.

Checked with a build for all affected architectures.

Co-authored-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-08-04 09:09:48 -03:00
Adhemerval Zanella
5a57ad23ba Remove leftover LD_LIBRARY_VERSION usages
The environment variable was removed by
d2db60d8d8.
2022-08-04 09:09:48 -03:00
Florian Weimer
8fabe0e632 Linux: Remove exit system call from _exit
exit only terminates the current thread, not the whole process, so it
is the wrong fallback system call in this context.  All supported
Linux versions implement the exit_group system call anyway.
2022-08-04 06:17:50 +02:00
caiyinyu
3e83843637 LoongArch: Add vdso support for gettimeofday. 2022-08-04 09:19:36 +08:00
Joseph Myers
085030b957 Update kernel version to 5.19 in header constant tests
This patch updates the kernel version in the tests tst-mman-consts.py,
tst-mount-consts.py and tst-pidfd-consts.py to 5.19.  (There are no
new constants covered by these tests in 5.19, or in 5.17 or 5.18 in
the case of tst-mount-consts.py that previously used version 5.16,
that need any other header changes.)

Tested with build-many-glibcs.py.
2022-08-03 16:31:58 +00:00
Florian Weimer
68e036f27f nptl: Remove uses of assert_perror
__pthread_sigmask cannot actually fail with valid pointer arguments
(it would need a really broken seccomp filter), and we do not check
for errors elsewhere.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-08-03 11:42:49 +02:00
Florian Weimer
cca9684f2d stdio: Clean up __libc_message after unconditional abort
Since commit ec2c1fcefb ("malloc:
Abort on heap corruption, without a backtrace [BZ #21754]"),
__libc_message always terminates the process.  Since commit
a289ea09ea ("Do not print backtraces
on fatal glibc errors"), the backtrace facility has been removed.
Therefore, remove enum __libc_message_action and the action
argument of __libc_message, and mark __libc_message as _Noreturn.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-08-03 11:42:39 +02:00
Joseph Myers
fccadcdf5b Update syscall lists for Linux 5.19
Linux 5.19 has no new syscalls, but enables memfd_secret in the uapi
headers for RISC-V.  Update the version number in syscall-names.list
to reflect that it is still current for 5.19 and regenerate the
arch-syscall.h headers with build-many-glibcs.py update-syscalls.

Tested with build-many-glibcs.py.
2022-08-02 21:05:07 +00:00
Arjun Shankar
9c443ac455 socket: Check lengths before advancing pointer in CMSG_NXTHDR
The inline and library functions that the CMSG_NXTHDR macro may expand
to increment the pointer to the header before checking the stride of
the increment against available space.  Since C only allows incrementing
pointers to one past the end of an array, the increment must be done
after a length check.  This commit fixes that and includes a regression
test for CMSG_FIRSTHDR and CMSG_NXTHDR.
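
A simplified sketch of the ordering described above (not the actual glibc
macro or helper function):

    #include <stddef.h>
    #include <sys/socket.h>

    /* Return the next control-message header, or NULL if advancing would
       run past the available space.  The length check happens before the
       pointer is moved.  */
    static struct cmsghdr *
    next_cmsg (struct msghdr *mhdr, struct cmsghdr *cmsg)
    {
      unsigned char *ctl = mhdr->msg_control;
      size_t avail = mhdr->msg_controllen - ((unsigned char *) cmsg - ctl);
      size_t stride = CMSG_ALIGN (cmsg->cmsg_len);

      if (cmsg->cmsg_len < sizeof (struct cmsghdr)
          || stride + sizeof (struct cmsghdr) > avail)
        return NULL;
      return (struct cmsghdr *) ((unsigned char *) cmsg + stride);
    }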

The Linux, Hurd, and generic headers are all changed.

Tested on Linux on armv7hl, i686, x86_64, aarch64, ppc64le, and s390x.

[BZ #28846]

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-08-02 11:10:25 +02:00
Mark Wielaard
325ba824b0 tst-pidfd.c: UNSUPPORTED if we get EPERM on valid pidfd_getfd call
pidfd_getfd can fail for a valid pidfd with errno EPERM for various
reasons in a restricted environment. Use FAIL_UNSUPPORTED in that case.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-07-29 18:52:12 +02:00
caiyinyu
bce0218d9a LoongArch: Add greg_t and gregset_t. 2022-07-29 09:15:21 +08:00
caiyinyu
033e76ea9c LoongArch: Fix VDSO_HASH and VDSO_NAME. 2022-07-29 09:15:21 +08:00
Darius Rad
7c5db7931f riscv: Update rv64 libm test ulps
Generated on a Microsemi Polarfire Icicle Kit running Linux version
5.15.32.  Same ULPs were also produced on QEMU 5.2.0 running Linux
5.18.0.
2022-07-27 10:50:20 -03:00
Darius Rad
5b6d8a650d riscv: Update nofpu libm test ulps 2022-07-27 10:50:10 -03:00
Jason A. Donenfeld
eaad4f9e8f arc4random: simplify design for better safety
Rather than buffering 16 MiB of entropy in userspace (by way of
chacha20), simply call getrandom() every time.

This approach is doubtlessly slower, for now, but trying to prematurely
optimize arc4random appears to be leading toward all sorts of nasty
properties and gotchas. Instead, this patch takes a much more
conservative approach. The interface is added as a basic loop wrapper
around getrandom(); later, the kernel and libc can
work together on optimizing it.
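
A minimal sketch of that loop-wrapper idea (not the actual glibc code; the
/dev/urandom fallback described below is omitted):

    #include <errno.h>
    #include <stdlib.h>
    #include <sys/random.h>
    #include <sys/types.h>

    static void
    fill_bytes (void *buf, size_t len)
    {
      unsigned char *p = buf;
      while (len > 0)
        {
          /* Flags 0, i.e. getrandom(0): blocks until the RNG is seeded.  */
          ssize_t n = getrandom (p, len, 0);
          if (n < 0)
            {
              if (errno == EINTR)
                continue;
              abort ();   /* the real code would fall back to /dev/urandom */
            }
          p += n;
          len -= n;
        }
    }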

This prevents numerous issues in which userspace is unaware of when it
really must throw away its buffer, since we avoid buffering altogether.
Future improvements may include userspace learning more from
the kernel about when to do that, which might make these sorts of
chacha20-based optimizations more possible. The current heuristic of 16
MiB is meaningless garbage that doesn't correspond to anything the
kernel might know about. So for now, let's just do something
conservative that we know is correct and won't lead to cryptographic
issues for users of this function.

This patch might be considered along the lines of, "optimization is the
root of all evil," in that the much more complex implementation it
replaces moves too fast without considering security implications,
whereas the incremental approach done here is a much safer way of going
about things. Once this lands, we can take our time in optimizing this
properly using new interplay between the kernel and userspace.

getrandom(0) is used, since that's the one that ensures the bytes
returned are cryptographically secure. But on systems without it, we
fall back to using /dev/urandom. This is unfortunate because it means
opening a file descriptor, but there's not much of a choice. Secondly,
as part of the fallback, in order to get more or less the same
properties of getrandom(0), we poll on /dev/random, and if the poll
succeeds at least once, then we assume the RNG is initialized. This is a
rough approximation, as the ancient "non-blocking pool" was initialized
after the "blocking pool", not before, and it may not port back to all
ancient kernels, though it does to all kernels supported by glibc
(≥3.2), so generally it's the best approximation we can do.

The motivation for including arc4random, in the first place, is to have
source-level compatibility with existing code. That means this patch
doesn't attempt to litigate the interface itself. It does, however,
choose a conservative approach for implementing it.

Cc: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: Cristian Rodríguez <crrodriguez@opensuse.org>
Cc: Paul Eggert <eggert@cs.ucla.edu>
Cc: Mark Harris <mark.hsj@gmail.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-07-27 08:58:27 -03:00
caiyinyu
68d61026d5 LoongArch: Hard Float Support 2022-07-26 12:35:12 -03:00
caiyinyu
3d87c89815 LoongArch: Build Infrastructure 2022-07-26 12:35:12 -03:00
caiyinyu
0d4a891a7c LoongArch: Add ABI Lists 2022-07-26 12:35:12 -03:00
caiyinyu
f2037efbb3 LoongArch: Linux ABI 2022-07-26 12:35:12 -03:00
caiyinyu
45955fe618 LoongArch: Linux Syscall Interface 2022-07-26 12:35:12 -03:00
caiyinyu
3275882261 LoongArch: Atomic and Locking Routines 2022-07-26 12:35:12 -03:00
caiyinyu
c742795dce LoongArch: Generic <math.h> and soft-fp Routines 2022-07-26 12:35:12 -03:00
caiyinyu
619bfc6770 LoongArch: Thread-Local Storage Support 2022-07-26 12:35:12 -03:00
caiyinyu
a133942025 LoongArch: ABI Implementation 2022-07-26 12:35:12 -03:00
Arnout Vandecappelle (Essensium/Mind)
794c27446f struct stat is not POSIX conformant on microblaze with __USE_FILE_OFFSET64
Commit a06b40cdf5 updated stat.h to use
__USE_XOPEN2K8 instead of __USE_MISC to add the st_atim, st_mtim and
st_ctim members to struct stat. However, for microblaze, there are two
definitions of struct stat, depending on the __USE_FILE_OFFSET64 macro.
The second one was not updated.

Change __USE_MISC to __USE_XOPEN2K8 in the __USE_FILE_OFFSET64 version
of struct stat for microblaze.
2022-07-25 11:06:49 -03:00
Florian Weimer
0c5605989f Linux: dirent/tst-readdir64-compat needs to use TEST_COMPAT (bug 27654)
The hppa port starts libc at GLIBC_2.2, but has earlier symbol
versions in other shared objects.  This means that the compat
symbol for readdir64 is not actually present in libc even though
have-GLIBC_2.1.3 is defined as yes at the make level.

Fixes commit 15e50e6c96 ("Linux:
dirent/tst-readdir64-compat can be a regular test") by mostly
reverting it.
2022-07-25 11:39:03 +02:00
Adhemerval Zanella Netto
3b56f944c5 s390x: Add optimized chacha20
It adds a vectorized ChaCha20 implementation based on libgcrypt
cipher/chacha20-s390x.S.  The final state register clearing is
omitted.

On a z15 it shows the following improvements (using formatted
bench-arc4random data):

GENERIC                                    MB/s
-----------------------------------------------
arc4random [single-thread]               198.92
arc4random_buf(16) [single-thread]       244.49
arc4random_buf(32) [single-thread]       282.73
arc4random_buf(48) [single-thread]       286.64
arc4random_buf(64) [single-thread]       320.06
arc4random_buf(80) [single-thread]       297.43
arc4random_buf(96) [single-thread]       310.96
arc4random_buf(112) [single-thread]      308.10
arc4random_buf(128) [single-thread]      309.90
-----------------------------------------------

VX.                                        MB/s
-----------------------------------------------
arc4random [single-thread]               430.26
arc4random_buf(16) [single-thread]       735.14
arc4random_buf(32) [single-thread]      1029.99
arc4random_buf(48) [single-thread]      1206.76
arc4random_buf(64) [single-thread]      1311.92
arc4random_buf(80) [single-thread]      1378.74
arc4random_buf(96) [single-thread]      1445.06
arc4random_buf(112) [single-thread]     1484.32
arc4random_buf(128) [single-thread]     1517.30
-----------------------------------------------

Checked on s390x-linux-gnu.
2022-07-22 11:58:27 -03:00
Adhemerval Zanella Netto
b7060acfe8 powerpc64: Add optimized chacha20
It adds a vectorized ChaCha20 implementation based on libgcrypt
cipher/chacha20-ppc.c.  It targets POWER8 and is used by default
for LE.

On a POWER8 it shows the following improvements (using formatted
bench-arc4random data):

POWER8

GENERIC                                    MB/s
-----------------------------------------------
arc4random [single-thread]               138.77
arc4random_buf(16) [single-thread]       174.36
arc4random_buf(32) [single-thread]       228.11
arc4random_buf(48) [single-thread]       252.31
arc4random_buf(64) [single-thread]       270.11
arc4random_buf(80) [single-thread]       278.97
arc4random_buf(96) [single-thread]       287.78
arc4random_buf(112) [single-thread]      291.92
arc4random_buf(128) [single-thread]      295.25

POWER8                                     MB/s
-----------------------------------------------
arc4random [single-thread]               198.06
arc4random_buf(16) [single-thread]       278.79
arc4random_buf(32) [single-thread]       448.89
arc4random_buf(48) [single-thread]       551.09
arc4random_buf(64) [single-thread]       646.12
arc4random_buf(80) [single-thread]       698.04
arc4random_buf(96) [single-thread]       756.06
arc4random_buf(112) [single-thread]      784.12
arc4random_buf(128) [single-thread]      808.04
-----------------------------------------------

Checked on powerpc64-linux-gnu and powerpc64le-linux-gnu.
Reviewed-by: Paul E. Murphy <murphyp@linux.ibm.com>
2022-07-22 11:58:27 -03:00
Adhemerval Zanella Netto
84cfc6479b x86: Add AVX2 optimized chacha20
It adds a vectorized ChaCha20 implementation based on libgcrypt
cipher/chacha20-amd64-avx2.S.  It is used only if AVX2 is supported
and enabled by the architecture.

As with the generic implementation, the last step that XORs the keystream
with the input is omitted.  The final state register clearing is also
omitted.

On a Ryzen 9 5900X it shows the following improvements (using
formatted bench-arc4random data):

SSE                                        MB/s
-----------------------------------------------
arc4random [single-thread]               704.25
arc4random_buf(16) [single-thread]      1018.17
arc4random_buf(32) [single-thread]      1315.27
arc4random_buf(48) [single-thread]      1449.36
arc4random_buf(64) [single-thread]      1511.16
arc4random_buf(80) [single-thread]      1539.48
arc4random_buf(96) [single-thread]      1571.06
arc4random_buf(112) [single-thread]     1596.16
arc4random_buf(128) [single-thread]     1613.48
-----------------------------------------------

AVX2                                       MB/s
-----------------------------------------------
arc4random [single-thread]               922.61
arc4random_buf(16) [single-thread]      1478.70
arc4random_buf(32) [single-thread]      2241.80
arc4random_buf(48) [single-thread]      2681.28
arc4random_buf(64) [single-thread]      2913.43
arc4random_buf(80) [single-thread]      3009.73
arc4random_buf(96) [single-thread]      3141.16
arc4random_buf(112) [single-thread]     3254.46
arc4random_buf(128) [single-thread]     3305.02
-----------------------------------------------

Checked on x86_64-linux-gnu.
2022-07-22 11:58:27 -03:00
Adhemerval Zanella Netto
e169aff0e9 x86: Add SSE2 optimized chacha20
It adds a vectorized ChaCha20 implementation based on libgcrypt
cipher/chacha20-amd64-ssse3.S.  It replaces ROTATE_SHUF_2 (which
uses pshufb) with ROTATE2, thus making the original implementation
SSE2.

As with the generic implementation, the last step that XORs the keystream
with the input is omitted.  The final state register clearing is also
omitted.

On a Ryzen 9 5900X it shows the following improvements (using
formatted bench-arc4random data):

GENERIC                                    MB/s
-----------------------------------------------
arc4random [single-thread]               443.11
arc4random_buf(16) [single-thread]       552.27
arc4random_buf(32) [single-thread]       626.86
arc4random_buf(48) [single-thread]       649.81
arc4random_buf(64) [single-thread]       663.95
arc4random_buf(80) [single-thread]       674.78
arc4random_buf(96) [single-thread]       675.17
arc4random_buf(112) [single-thread]      680.69
arc4random_buf(128) [single-thread]      683.20
-----------------------------------------------

SSE                                        MB/s
-----------------------------------------------
arc4random [single-thread]               704.25
arc4random_buf(16) [single-thread]      1018.17
arc4random_buf(32) [single-thread]      1315.27
arc4random_buf(48) [single-thread]      1449.36
arc4random_buf(64) [single-thread]      1511.16
arc4random_buf(80) [single-thread]      1539.48
arc4random_buf(96) [single-thread]      1571.06
arc4random_buf(112) [single-thread]     1596.16
arc4random_buf(128) [single-thread]     1613.48
-----------------------------------------------

Checked on x86_64-linux-gnu.
2022-07-22 11:58:27 -03:00
Adhemerval Zanella Netto
4c128c7823 aarch64: Add optimized chacha20
It adds a vectorized ChaCha20 implementation based on libgcrypt
cipher/chacha20-aarch64.S.  It is used by default and only
little-endian is supported (BE uses the generic code).

As with the generic implementation, the last step that XORs the keystream
with the input is omitted.  The final state register clearing is also
omitted.

On a virtualized Linux on Apple M1 it shows the following
improvements (using formatted bench-arc4random data):

GENERIC                                    MB/s
-----------------------------------------------
arc4random [single-thread]               380.89
arc4random_buf(16) [single-thread]       500.73
arc4random_buf(32) [single-thread]       552.61
arc4random_buf(48) [single-thread]       566.82
arc4random_buf(64) [single-thread]       574.01
arc4random_buf(80) [single-thread]       581.02
arc4random_buf(96) [single-thread]       591.19
arc4random_buf(112) [single-thread]      592.29
arc4random_buf(128) [single-thread]      596.43
-----------------------------------------------

OPTIMIZED                                  MB/s
-----------------------------------------------
arc4random [single-thread]               569.60
arc4random_buf(16) [single-thread]       825.78
arc4random_buf(32) [single-thread]       987.03
arc4random_buf(48) [single-thread]      1042.39
arc4random_buf(64) [single-thread]      1075.50
arc4random_buf(80) [single-thread]      1094.68
arc4random_buf(96) [single-thread]      1130.16
arc4random_buf(112) [single-thread]     1129.58
arc4random_buf(128) [single-thread]     1137.91
-----------------------------------------------

Checked on aarch64-linux-gnu.
2022-07-22 11:58:27 -03:00
Adhemerval Zanella Netto
6f4e0fcfa2 stdlib: Add arc4random, arc4random_buf, and arc4random_uniform (BZ #4417)
The implementation is based on scalar Chacha20 with per-thread cache.
It uses getrandom or /dev/urandom as fallback to get the initial entropy,
and reseeds the internal state on every 16MB of consumed buffer.

To improve performance and lower memory consumption the per-thread cache
is allocated lazily on the first arc4random function call, and if the
memory allocation fails getentropy or /dev/urandom is used as fallback.
The cache is also cleared on thread exit iff it was initialized (so if
arc4random is not called it is not touched).

Although it is lock-free, arc4random is still not async-signal-safe
(the per thread state is not updated atomically).

The ChaCha20 implementation is based on RFC8439 [1], omitting the final
XOR of the keystream with the plaintext because the plaintext is a
stream of zeros.  This strategy is similar to what OpenBSD arc4random
does.

The arc4random_uniform implementation is based on previous work by Florian
Weimer; the algorithm follows Jérémie Lumbroso's paper Optimal Discrete
Uniform Generation from Coin Flips, and Applications (2013) [2], which
credits Donald E. Knuth and Andrew C. Yao, The complexity of nonuniform
random number generation (1976), for solving the general case.

The main advantage of this method is that the unit of randomness is not
the uniform random variable (uint32_t), but a random bit.  It optimizes the
internal buffer sampling by initially consuming a 32-bit random variable
and then sampling byte by byte.  Depending on the upper bound requested,
it might lead to better CPU utilization.
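
A compact sketch of the bit-at-a-time idea (the "Fast Dice Roller" from the
paper; random_bit is a hypothetical callback returning one random bit):

    #include <stdint.h>

    /* Return a uniform value in [0, n), n > 0, consuming one bit per step.  */
    static uint32_t
    uniform_below (uint32_t n, int (*random_bit) (void))
    {
      uint64_t v = 1, c = 0;
      for (;;)
        {
          v <<= 1;
          c = (c << 1) | (uint64_t) (random_bit () & 1);
          if (v >= n)
            {
              if (c < n)
                return (uint32_t) c;
              v -= n;   /* rejection: fold the leftover range back in */
              c -= n;
            }
        }
    }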

Checked on x86_64-linux-gnu, aarch64-linux, and powerpc64le-linux-gnu.

Co-authored-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Yann Droneaud <ydroneaud@opteya.com>

[1] https://datatracker.ietf.org/doc/html/rfc8439
[2] https://arxiv.org/pdf/1304.1916.pdf
2022-07-22 11:58:27 -03:00
Michael Hudson-Doyle
1f4e90d468 linux: return UNSUPPORTED from tst-mount if entering mount namespace fails
Before this the test fails if run in a chroot by a non-root user:

warning: could not become root outside namespace (Operation not permitted)
../sysdeps/unix/sysv/linux/tst-mount.c:36: numeric comparison failure
   left: 1 (0x1); from: errno
  right: 19 (0x13); from: ENODEV
error: ../sysdeps/unix/sysv/linux/tst-mount.c:39: not true: fd != -1
error: ../sysdeps/unix/sysv/linux/tst-mount.c:46: not true: r != -1
error: ../sysdeps/unix/sysv/linux/tst-mount.c:48: not true: r != -1
../sysdeps/unix/sysv/linux/tst-mount.c:52: numeric comparison failure
   left: 1 (0x1); from: errno
  right: 9 (0x9); from: EBADF
error: ../sysdeps/unix/sysv/linux/tst-mount.c:55: not true: mfd != -1
../sysdeps/unix/sysv/linux/tst-mount.c:58: numeric comparison failure
   left: 1 (0x1); from: errno
  right: 2 (0x2); from: ENOENT
error: ../sysdeps/unix/sysv/linux/tst-mount.c:61: not true: r != -1
../sysdeps/unix/sysv/linux/tst-mount.c:65: numeric comparison failure
   left: 1 (0x1); from: errno
  right: 2 (0x2); from: ENOENT
error: ../sysdeps/unix/sysv/linux/tst-mount.c:68: not true: pfd != -1
error: ../sysdeps/unix/sysv/linux/tst-mount.c:75: not true: fd_tree != -1
../sysdeps/unix/sysv/linux/tst-mount.c:88: numeric comparison failure
   left: 1 (0x1); from: errno
  right: 38 (0x26); from: ENOSYS
error: 12 test failures

Checking that the test can enter a new mount namespace is more correct
than just checking the return value of support_become_root(): the test
code changes the mount namespace it runs in, so running it as root on a
system that does not support mount namespaces should still skip.

Also change the test to remove the unnecessary fork.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-07-19 06:55:49 +12:00
Noah Goldstein
49889fb256 x86: Add support to build st{p|r}{n}{cpy|cat} with explicit ISA level
1. Add default ISA level selection in non-multiarch/rtld
   implementations.

2. Add ISA level build guards to different implementations.
    - I.e strcpy-avx2.S which is ISA level 3 will only build if
      compiled ISA level <= 3. Otherwise there is no reason to
      include it as we will always use one of the ISA level 4
      implementations (strcpy-evex.S).

3. Refactor the ifunc selector and ifunc implementation list to use
   the ISA level aware wrapper macros that allow functions below the
   compiled ISA level (with a guaranteed replacement) to be skipped.

Tested with and without multiarch on x86_64 for ISA levels:
{generic, x86-64-v2, x86-64-v3, x86-64-v4}

And m32 with and without multiarch.
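
The build guards mentioned in item 2 have roughly this shape (a sketch;
MINIMUM_X86_ISA_LEVEL comes from isa-level.h, the guarded function is just
an illustration):

    #include <isa-level.h>

    #if MINIMUM_X86_ISA_LEVEL <= 3
    /* The AVX2 (x86-64-v3) implementation is built only when the configured
       baseline does not already guarantee the EVEX (v4) version; otherwise
       this translation unit stays empty.  */
    int example_avx2_stub (void) { return 3; }
    #endif
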
2022-07-16 03:07:59 -07:00
Noah Goldstein
192979ee35 x86: Add support to build wcscpy with explicit ISA level
1. Add ISA level build guards to different implementations.
    - wcscpy-ssse3.S is used as ISA level 2/3/4.
    - wcscpy-generic.c is only used at ISA level 1 and will
      only build if compiled with ISA level == 1. Otherwise
      there is no reason to include it as we will always use
      wcscpy-ssse3.S

2. Refactor the ifunc selector and ifunc implementation list to use
   the ISA level aware wrapper macros that allow functions below the
   compiled ISA level (with a guaranteed replacement) to be skipped.

Tested with and without multiarch on x86_64 for ISA levels:
{generic, x86-64-v2, x86-64-v3, x86-64-v4}

And m32 with and without multiarch.
2022-07-16 03:07:59 -07:00
Noah Goldstein
ceabdcd130 x86: Add support to build strcmp/strlen/strchr with explicit ISA level
1. Add default ISA level selection in non-multiarch/rtld
   implementations.

2. Add ISA level build guards to different implementations.
    - I.e strcmp-avx2.S which is ISA level 3 will only build if
      compiled ISA level <= 3. Otherwise there is no reason to
      include it as we will always use one of the ISA level 4
      implementations (strcmp-evex.S).

3. Refactor the ifunc selector and ifunc implementation list to use
   the ISA level aware wrapper macros that allow functions below the
   compiled ISA level (with a guaranteed replacement) to be skipped.

Tested with and without multiarch on x86_64 for ISA levels:
{generic, x86-64-v2, x86-64-v3, x86-64-v4}

And m32 with and without multiarch.
2022-07-16 03:07:59 -07:00
Stefan Liebler
779aa039fc S390: Define SINGLE_THREAD_BY_GLOBAL only on s390x
Starting with commit e070501d12
"Replace __libc_multiple_threads with __libc_single_threaded"
the testcases nptl/tst-cancel-self and
nptl/tst-cancel-self-cancelstate are failing.

This is fixed by only defining SINGLE_THREAD_BY_GLOBAL on s390x,
but not on s390.

Starting with commit 09c76a7409
"Linux: Consolidate {RTLD_}SINGLE_THREAD_P definition",
SINGLE_THREAD_BY_GLOBAL was defined in
sysdeps/unix/sysv/linux/s390/s390-64/sysdep.h.

Later on, commit 9a973da617
"s390: Consolidate Linux syscall definition" consolidated the sysdep.h files
from the s390-32/s390-64 subdirectories.  Unfortunately, the macro is now
always defined instead of only on s390-64.

As information:
TLS_MULTIPLE_THREADS_IN_TCB is also only defined for s390.
See: sysdeps/s390/nptl/tls.h
2022-07-14 13:39:09 +02:00
Noah Goldstein
7c8ca17893 x86: Add missing rtm tests for strcmp family
Add new tests for:
    strcasecmp
    strncasecmp
    strcmp
    wcscmp

These functions all have avx2_rtm implementations so should be tested.
2022-07-13 14:55:31 -07:00
Noah Goldstein
42b014dd1b x86: Remove unneeded rtld-wmemcmp
wmemcmp isn't used by the dynamic loader so there is no need to add an
RTLD stub for it.

Tested with and without multiarch on x86_64 for ISA levels:
{generic, x86-64-v2, x86-64-v3, x86-64-v4}

And m32 with and without multiarch.
2022-07-13 14:55:31 -07:00
Noah Goldstein
e19bb87c97 x86: Move wcslen SSE2 implementation to multiarch/wcslen-sse2.S
This commit doesn't affect libc.so.6, it's just housekeeping to prepare
for adding explicit ISA level support.

Tested build on x86_64 and x86_32 with/without multiarch.
2022-07-13 14:55:31 -07:00
Noah Goldstein
64479f11b7 x86: Move wcschr SSE2 implementation to multiarch/wcschr-sse2.S
This commit doesn't affect libc.so.6, it's just housekeeping to prepare
for adding explicit ISA level support.

Tested build on x86_64 and x86_32 with/without multiarch.
2022-07-13 14:55:31 -07:00
Noah Goldstein
72a48ec0f7 x86: Move strcat SSE2 implementation to multiarch/strcat-sse2.S
This commit doesn't affect libc.so.6, it's just housekeeping to prepare
for adding explicit ISA level support.

Tested build on x86_64 and x86_32 with/without multiarch.
2022-07-13 14:55:31 -07:00
Noah Goldstein
cd080d0741 x86: Move strchr SSE2 implementation to multiarch/strchr-sse2.S
This commit doesn't affect libc.so.6, it's just housekeeping to prepare
for adding explicit ISA level support.

Tested build on x86_64 and x86_32 with/without multiarch.
2022-07-13 14:55:31 -07:00
Noah Goldstein
425647458b x86: Move strrchr SSE2 implementation to multiarch/strrchr-sse2.S
This commit doesn't affect libc.so.6, it's just housekeeping to prepare
for adding explicit ISA level support.

Tested build on x86_64 and x86_32 with/without multiarch.
2022-07-13 14:55:31 -07:00
Noah Goldstein
08af081ffd x86: Move memrchr SSE2 implementation to multiarch/memrchr-sse2.S
This commit doesn't affect libc.so.6, it's just housekeeping to prepare
for adding explicit ISA level support.

Tested build on x86_64 and x86_32 with/without multiarch.
2022-07-13 14:55:31 -07:00
Noah Goldstein
6b9006bfb0 x86: Move strcpy SSE2 implementation to multiarch/strcpy-sse2.S
This commit doesn't affect libc.so.6, it's just housekeeping to prepare
for adding explicit ISA level support.

Tested build on x86_64 and x86_32 with/without multiarch.
2022-07-13 14:55:31 -07:00
Noah Goldstein
58e6cd4bcb x86: Move strlen SSE2 implementation to multiarch/strlen-sse2.S
This commit doesn't affect libc.so.6, it's just housekeeping to prepare
for adding explicit ISA level support.

Tested build on x86_64 and x86_32 with/without multiarch.
2022-07-13 14:55:31 -07:00
Noah Goldstein
60a583ec60 x86: Move strcmp SSE42 implementation to multiarch/strcmp-sse4_2.S
This commit doesn't affect libc.so.6, it's just housekeeping to prepare
for adding explicit ISA level support.

Tested build on x86_64 and x86_32 with/without multiarch.
2022-07-13 14:55:31 -07:00
Noah Goldstein
427eaa2c85 x86: Move wcscmp SSE2 implementation to multiarch/wcscmp-sse2.S
This commit doesn't affect libc.so.6, it's just housekeeping to prepare
for adding explicit ISA level support.

Tested build on x86_64 and x86_32 with/without multiarch.
2022-07-13 14:55:31 -07:00
Noah Goldstein
d561fbb041 x86: Move strcmp SSE2 implementation to multiarch/strcmp-sse2.S
This commit doesn't affect libc.so.6, it's just housekeeping to prepare
for adding explicit ISA level support.

Because strcmp-sse2.S implements so many functions (more from
avx2/evex/sse42), add a new file 'strcmp-naming.h' to assist in
getting the correct symbol name for all the functions across
multiarch/non-multiarch builds.

Tested build on x86_64 and x86_32 with/without multiarch.
2022-07-13 14:55:31 -07:00
Noah Goldstein
30e57e0a21 x86: Rename STRCASECMP_NONASCII macro to STRCASECMP_L_NONASCII
The previous macro name can be confusing given that both
`__strcasecmp_l_nonascii` and `__strcasecmp_nonascii` are
functions and we use the `_l` version.
2022-07-13 14:55:31 -07:00
Noah Goldstein
f2698954ff x86: Remove __mmask intrinsics in strstr-avx512.c
The intrinsics are not available before GCC7 and using standard
operators generates code of equivalent or better quality.

Removed:
    _cvtmask64_u64
    _kshiftri_mask64
    _kand_mask64

Geometric Mean of 5 Runs of Full Benchmark Suite New / Old: 0.958
2022-07-12 15:41:14 -07:00
Noah Goldstein
9c38deec96 x86: Remove generic strncat, strncpy, and stpncpy implementations
These functions all have optimized versions:
__strncat_sse2_unaligned, __strncpy_sse2_unaligned, and
__stpncpy_sse2_unaligned which are faster than their respective generic
implementations.  Since the sse2 versions can run on baseline x86_64,
we should use these as the baseline implementation and can remove the
generic implementations.

Geometric mean of N=20 runs of the entire benchmark suite on:
11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz (Tigerlake)

__strncat_sse2_unaligned / __strncat_generic: .944
__strncpy_sse2_unaligned / __strncpy_generic: .726
__stpncpy_sse2_unaligned / __stpncpy_generic: .650

Tested build with and without multiarch and full check with multiarch.
2022-07-12 11:44:12 -07:00
Fangrui Song
c5bec9d491 i386: Remove -Wa,-mtune=i686
gas -mtune= may change NOP generating patterns but -mtune=i686 has no
difference from the default by inspecting .o and .os files.

Note: Clang doesn't support -Wa,-mtune=i686.
2022-07-12 11:14:32 -07:00
H.J. Lu
ec9013727d x86-64: Remove redundant strcspn-generic/strpbrk-generic/strspn-generic
Remove redundant strcspn-generic, strpbrk-generic and strspn-generic
from sysdep_routines in sysdeps/x86_64/multiarch/Makefile added by

commit c69f960b01
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Sun Jul 3 21:28:07 2022 -0700

    x86: Add support for building str{c|p}{brk|spn} with explicit ISA level

since they have been added to sysdep_routines in sysdeps/x86_64/Makefile.
2022-07-08 16:06:04 -07:00
H.J. Lu
eedf7886ed x86-64: Don't mark symbols as hidden in strcmp-XXX.S
Don't mark symbols as hidden in strcmp-avx2.S, strcmp-evex.S and
strcmp-sse42.S since they are marked as hidden in the IFUNC selectors.
2022-07-07 16:38:11 -07:00
Tom Honermann
8bcca1db3d stdlib: Implement mbrtoc8, c8rtomb, and the char8_t typedef.
This change provides implementations for the mbrtoc8 and c8rtomb
functions adopted for C++20 via WG21 P0482R6 and for C2X via WG14
N2653.  It also provides the char8_t typedef from WG14 N2653.

The mbrtoc8 and c8rtomb functions are declared in uchar.h in C2X
mode or when the _GNU_SOURCE macro or C++20 __cpp_char8_t feature
test macro is defined.

The char8_t typedef is declared in uchar.h in C2X mode or when the
_GNU_SOURCE macro is defined and the C++20 __cpp_char8_t feature
test macro is not defined (if __cpp_char8_t is defined, then char8_t
is a builtin type).
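
A minimal usage sketch (assumes a C2X or _GNU_SOURCE compilation; ASCII is
used so the result is locale-independent):

    #define _GNU_SOURCE
    #include <assert.h>
    #include <limits.h>
    #include <uchar.h>

    int
    main (void)
    {
      mbstate_t st1 = { 0 }, st2 = { 0 };
      char8_t c8;
      char buf[MB_LEN_MAX];

      /* Multibyte to UTF-8 code unit: "A" converts to a single unit.  */
      assert (mbrtoc8 (&c8, "A", 1, &st1) == 1 && c8 == 'A');

      /* And back again.  */
      assert (c8rtomb (buf, c8, &st2) == 1 && buf[0] == 'A');
      return 0;
    }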

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-07-06 09:29:42 -03:00
Danila Kutenin
3c99806989 aarch64: Optimize string functions with shrn instruction
We found that string functions were using AND+ADDP
to find the nibble/syndrome mask, but there is an easier
opportunity through `SHRN dst.8b, src.8h, 4` (shift
right every 2 bytes by 4 and narrow to 1 byte), which has
the same latency as ADDP on all SIMD ARMv8 targets. There
are also possible gaps for memcmp, but that's for
another patch.

We see 10-20% savings for small-mid size cases (<=128)
which are primary cases for general workloads.
2022-07-06 09:26:20 +01:00
Noah Goldstein
ae308947ff x86: Add support for building {w}memcmp{eq} with explicit ISA level
1. Refactor files so that all implementations are in the multiarch
   directory
    - Moved the implementation portion of memcmp sse2 from memcmp.S to
      multiarch/memcmp-sse2.S

    - The non-multiarch file now only includes one of the
      implementations in the multiarch directory based on the compiled
      ISA level (only used for non-multiarch builds.  Otherwise we go
      through the ifunc selector).

2. Add ISA level build guards to different implementations.
    - I.e memcmp-avx2-movbe.S which is ISA level 3 will only build if
      compiled ISA level <= 3. Otherwise there is no reason to include
      it as we will always use one of the ISA level 4
      implementations (memcmp-evex-movbe.S).

3. Add new multiarch/rtld-{w}memcmp{eq}.S that just include the
   non-multiarch {w}memcmp{eq}.S which will in turn select the best
   implementation based on the compiled ISA level.

4. Refactor the ifunc selector and ifunc implementation list to use
   the ISA level aware wrapper macros that allow functions below the
   compiled ISA level (with a guaranteed replacement) to be skipped.

Tested with and without multiarch on x86_64 for ISA levels:
{generic, x86-64-v2, x86-64-v3, x86-64-v4}

And m32 with and without multiarch.
2022-07-05 16:42:42 -07:00
Noah Goldstein
37ecc657b2 x86: Add support for building {w}memset{_chk} with explicit ISA level
1. Refactor files so that all implementations are in the multiarch
   directory
    - Moved the implementation portion of memset sse2 from memset.S to
      multiarch/memset-sse2.S

    - The non-multiarch file now only includes one of the
      implementations in the multiarch directory based on the compiled
      ISA level (only used for non-multiarch builds.  Otherwise we go
      through the ifunc selector).

2. Add ISA level build guards to different implementations.
    - I.e memset-avx2-unaligned-erms.S which is ISA level 3 will only
      build if compiled ISA level <= 3. Otherwise there is no reason
      to include it as we will always use one of the ISA level 4
      implementations (memset-evex-unaligned-erms.S).

3. Add new multiarch/rtld-memset.S that just include the
   non-multiarch memset.S which will in turn select the best
   implementation based on the compiled ISA level.

4. Refactor the ifunc selector and ifunc implementation list to use
   the ISA level aware wrapper macros that allow functions below the
   compiled ISA level (with a guaranteed replacement) to be skipped.

Tested with and without multiarch on x86_64 for ISA levels:
{generic, x86-64-v2, x86-64-v3, x86-64-v4}

And m32 with and without multiarch.
2022-07-05 16:42:42 -07:00
Noah Goldstein
b6a02c3606 x86: Add support for building {w}memmove{_chk} with explicit ISA level
1. Refactor files so that all implementations are in the multiarch
   directory
    - Moved the implementation portion of memmove sse2 from memmove.S
      to multiarch/memmove-sse2.S

    - The non-multiarch file now only includes one of the
      implementations in the multiarch directory based on the compiled
      ISA level (only used for non-multiarch builds.  Otherwise we go
      through the ifunc selector).

2. Add ISA level build guards to different implementations.
    - I.e memmove-avx2-unaligned-erms.S which is ISA level 3 will only
      build if compiled ISA level <= 3. Otherwise there is no reason
      to include it as we will always use one of the ISA level 4
      implementations (memmove-evex-unaligned-erms.S).

3. Add new multiarch/rtld-memmove.S that just include the
   non-multiarch memmove.S which will in turn select the best
   implementation based on the compiled ISA level.

4. Refactor the ifunc selector and ifunc implementation list to use
   the ISA level aware wrapper macros that allow functions below the
   compiled ISA level (with a guranteed replacement) to be skipped.

Tested with and without multiarch on x86_64 for ISA levels:
{generic, x86-64-v2, x86-64-v3, x86-64-v4}

And m32 with and without multiarch.
2022-07-05 16:42:42 -07:00
Noah Goldstein
c69f960b01 x86: Add support for building str{c|p}{brk|spn} with explicit ISA level
The changes for these functions are different than the others because
the best implementation (sse4_2) requires the generic
implementation as a fallback to be built as well.

Changes are:

1. Add non-multiarch functions for str{c|p}{brk|spn}.c to statically
   select the best implementation based on the configured ISA build
   level.

2. Add stubs for str{c|p}{brk|spn}-generic and varshift.c in the
   sysdeps/x86_64 directory so that the sse4 implementation will
   have all of its dependencies for the non-multiarch / rtld build
   when ISA level >= 2.

3. Add new multiarch/rtld-strcspn.c that just include the
   non-multiarch strcspn.c which will in turn select the best
   implementation based on the compiled ISA level.

4. Refactor the ifunc selector and ifunc implementation list to use
   the ISA level aware wrapper macros that allow functions below the
   compiled ISA level (with a guaranteed replacement) to be skipped.

Tested with and without multiarch on x86_64 for ISA levels:
{generic, x86-64-v2, x86-64-v3, x86-64-v4}

And m32 with and without multiarch.
2022-07-05 16:42:42 -07:00
Noah Goldstein
baeae86fb8 x86: Add comment explaining no Slow_SSE4_2 check in ifunc-sse4_2
Just for clarity's sake and so that if a future implementation is
added we remember to add the check.
2022-07-05 16:42:42 -07:00
Adhemerval Zanella
e070501d12 Replace __libc_multiple_threads with __libc_single_threaded
It also fixes the SINGLE_THREAD_P macro for SINGLE_THREAD_BY_GLOBAL:
the single-thread.h header was included in the wrong order, and the define
needs to come before including sysdeps/unix/sysdep.h.  The macro
is now moved to a per-arch single-thread.h header.

SINGLE_THREAD_P is now used in some more places.

Checked on aarch64-linux-gnu and x86_64-linux-gnu.
2022-07-05 10:14:47 -03:00
Adhemerval Zanella
af1aa36c61 linux: Add mount_setattr
It was added on Linux 5.12 (2a1867219c7b27f928e2545782b86daaf9ad50bd)
to allow changing the properties of a mount or a mount tree using the
file descriptors on which the new mount API is based.

Checked on x86_64-linux-gnu.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-07-05 10:08:48 -03:00
Adhemerval Zanella
c3b02b6567 linux: Add tst-mount to check for Linux new mount API
The new mount API was added on Linux 5.2 with six new syscalls:
fsopen, fsconfig, fsmount, move_mount, fspick, and open_tree.

The new test verifies minimal functionality along with error paths
for specific arguments and their corner cases.
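
A rough sketch of how some of these calls compose (error handling omitted;
the sequence needs privileges such as CAP_SYS_ADMIN to actually succeed):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sys/mount.h>
    #include <unistd.h>

    int
    main (void)
    {
      /* Create and configure a filesystem context, then build a mount.  */
      int fsfd = fsopen ("tmpfs", FSOPEN_CLOEXEC);
      fsconfig (fsfd, FSCONFIG_SET_STRING, "size", "1M", 0);
      fsconfig (fsfd, FSCONFIG_CMD_CREATE, NULL, NULL, 0);
      int mfd = fsmount (fsfd, FSMOUNT_CLOEXEC, 0);

      /* Attach the detached mount object at /mnt.  */
      move_mount (mfd, "", AT_FDCWD, "/mnt", MOVE_MOUNT_F_EMPTY_PATH);

      close (mfd);
      close (fsfd);
      return 0;
    }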

Checked on x86_64-linux-gnu.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-07-05 10:08:48 -03:00
Adhemerval Zanella
78a408ee7b linux: Add open_tree
It was added on Linux 5.2 (a07b20004793d8926f78d63eb5980559f7813404)
to return an O_PATH-opened file descriptor to an existing mountpoint.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-07-05 10:08:48 -03:00
Adhemerval Zanella
60f574e140 linux: Add fspick
It was added on Linux 5.2 (cf3cba4a429be43e5527a3f78859b1bfd9ebc5fb)
that can be used to pick an existing mountpoint into a filesystem
context which can thereafter be used to reconfigure a superblock
with the fsconfig syscall.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-07-05 10:08:48 -03:00
Adhemerval Zanella
7eae6a91e9 linux: Add fsconfig
It was added on Linux 5.2 (ecdab150fddb42fe6a739335257949220033b782)
as a way to configure a filesystem creation context and trigger
actions upon it, to be used in conjunction with fsopen, fspick and
fsmount.

The fsconfig_command commands are currently only defined as an enum,
so they can't be checked on tst-mount-consts.py with current test
support.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-07-05 10:08:48 -03:00
Tejas Belagod
05844d18f7 AArch64: Reset HWCAP2_AFP bits in FPCR for default fenv
The AFP feature (Alternate floating-point behavior) was added in armv8.7 and
introduced new FPCR bits.

Currently, HWCAP2_AFP bits (bit 0, 1, 2) in FPCR are preserved when fenv is
set to default environment.  This is a deviation from standard behaviour.
Clear these bits when setting the fenv to default.

There is no libc API to modify the new FPCR bits.  Restoring those bits matters
if the user changed them directly.
2022-07-05 14:01:17 +01:00
Adhemerval Zanella
8ee2c043cf Fix hurd namespace issues for internal signal functions
The issue was introduced by "Refactor internal-signals.h
(a1bdd81664)".  Use the internal symbols instead.

Checked with a build for i686-gnu.
2022-07-04 11:10:06 -03:00
Adhemerval Zanella
a1bdd81664 Refactor internal-signals.h
The main driver is to optimize the internal usage and required size
when sigset_t is embedded in other data structures.  On Linux, the
currently supported signal set requires up to 8 bytes (16 on mips),
which is lower than the user-defined sigset_t (128 bytes).

A new internal type internal_sigset_t is added, along with
functions to operate on it similar to the ones for sigset_t.
internal-signals.h is also refactored to remove unused functions.

Besides smaller stack usage in some functions (posix_spawn, abort),
it lowers struct pthread by about 120 bytes (112 on mips).

Checked on x86_64-linux-gnu.

Reviewed-by: Arjun Shankar <arjun@redhat.com>
2022-06-30 14:56:21 -03:00
Kito Cheng
c22d2021a9
riscv: Use memcpy to handle unaligned access when fixing R_RISCV_RELATIVE
Although RISC-V Linux will enable the unaligned memory access handler by
default, that is quite expensive in general; using memcpy will be much cheaper
- it just breaks the access down into several load/store byte instructions.
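
An illustrative sketch of the approach (generic names, not the actual
dl-machine.h code):

    #include <stdint.h>
    #include <string.h>

    static void
    apply_relative_reloc (void *reloc_addr, uintptr_t load_base, uintptr_t addend)
    {
      uintptr_t value = load_base + addend;
      /* memcpy makes no alignment assumption; the compiler lowers it to
         byte (or otherwise safe) accesses for an unaligned destination.  */
      memcpy (reloc_addr, &value, sizeof value);
    }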

ARM and MIPS have a similar issue:

ARM: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=51456
MIPS: https://gcc.gnu.org/legacy-ml/gcc-help/2005-07/msg00325.html

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-06-30 08:04:52 -07:00
Tejas Belagod
e9dd368296 AArch64: Add asymmetric faulting mode for tag violations in mem.tagging tunable
The new asymmetric mode is available when HWCAP2_MTE3 is set (support is
available), bit2 is set in the tunable (user request per application),
and the system is configured such that the asymmetric mode is preferred over
sync or async (per-cpu system-wide setting).

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2022-06-30 14:01:08 +01:00
Adhemerval Zanella
71d87d85bf linux: Fix mq_timereceive check for 32 bit fallback code (BZ 29304)
On success, mq_receive() and mq_timedreceive() return the number of
bytes in the received message, so the code needs to check whether the
value is larger than 0.

Checked on i686-linux-gnu.
2022-06-30 09:12:59 -03:00
Noah Goldstein
96ac447d91 x86: Add missing IS_IN (libc) check to strncmp-sse4_2.S
The check was missing, so for the multiarch build rtld-strncmp-sse4_2.os was
being built and exporting symbols:

build/glibc/string/rtld-strncmp-sse4_2.os:
0000000000000000 T __strncmp_sse42

Introduced in:

commit 11ffcacb64
Author: H.J. Lu <hjl.tools@gmail.com>
Date:   Wed Jun 21 12:10:50 2017 -0700

    x86-64: Implement strcmp family IFUNC selectors in C
2022-06-29 19:47:52 -07:00
Noah Goldstein
0aa294fb88 x86: Add missing IS_IN (libc) check to strcspn-sse4.c
The check was missing, so for the multiarch build rtld-strcspn-sse4.os was
being built and exporting symbols:

build/glibc/string/rtld-strcspn-sse4.os:
                 U ___m128i_shift_right
                 U __strcspn_generic
0000000000000000 T __strcspn_sse42
                 U strlen

build/glibc/string/rtld-varshift.os:
0000000000000000 R ___m128i_shift_right

Introduced in:

commit 06e51c8f3d
Author: H.J. Lu <hongjiu.lu@intel.com>
Date:   Fri Jul 3 02:48:56 2009 -0700

    Add SSE4.2 support for strcspn, strpbrk, and strspn on x86-64.
2022-06-29 19:47:52 -07:00
Noah Goldstein
8cfbbbcdf9 x86: Add missing IS_IN (libc) check to memmove-ssse3.S
The check was missing, so for the multiarch build rtld-memmove-ssse3.os was
being built and exporting symbols:

>$ nm string/rtld-memmove-ssse3.os
                 U __GI___chk_fail
0000000000000020 T __memcpy_chk_ssse3
0000000000000040 T __memcpy_ssse3
0000000000000020 T __memmove_chk_ssse3
0000000000000040 T __memmove_ssse3
0000000000000000 T __mempcpy_chk_ssse3
0000000000000010 T __mempcpy_ssse3
                 U __x86_shared_cache_size_half

Introduced after 2.35 in:

commit 26b2478322
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Thu Apr 14 11:47:40 2022 -0500

    x86: Reduce code size of mem{move|pcpy|cpy}-ssse3
2022-06-29 19:47:52 -07:00
H.J. Lu
88070acdd0 x86-64: Properly indent X86_IFUNC_IMPL_ADD_VN arguments
Properly indent X86_IFUNC_IMPL_ADD_VN arguments for memchr, rawmemchr
and wmemchr.

Co-authored-by: H.J. Lu <hjl.tools@gmail.com>
2022-06-29 19:47:52 -07:00
Noah Goldstein
58bcf7b71a x86-64: Small improvements to dl-trampoline.S
1.  Remove sse2 instructions when using the avx512 or avx version.

2.  Fix up some format nits in how the address offsets were aligned.

3.  Use more space efficient instructions in the conditional AVX
    restoral.
        - vpcmpeqq          -> vpcmpeqb
        - cmp imm32, r; jz  -> inc r; jz

4.  Use `rep movsb` instead of `rep movsq`. The former is guaranteed to
    be fast with the ERMS flags, the latter is not. The latter also
    wastes an instruction in size setup.
2022-06-29 19:47:52 -07:00
Noah Goldstein
21925f6473 x86: Move mem{p}{mov|cpy}_{chk_}erms to its own file
The primary memmove_{impl}_unaligned_erms implementations don't
interact with this function. Putting them in same file both
wastes space and unnecessarily bloats a hot code section.
2022-06-29 19:47:52 -07:00
Noah Goldstein
4a3f29e7e4 x86: Move and slightly improve memset_erms
Implementation wise:
    1. Remove the VZEROUPPER as memset_{impl}_unaligned_erms does not
       use the L(stosb) label that was previously defined.

    2. Don't give the hotpath (fallthrough) to zero size.

Code positioning wise:

Move memset_{chk}_erms to its own file.  Leaving it in between the
memset_{impl}_unaligned both adds unnecessary complexity to the
file and wastes space in a relatively hot cache section.
2022-06-29 19:47:52 -07:00
Noah Goldstein
2a1099020c x86: Add definition for __wmemset_chk AVX2 RTM in ifunc impl list
This was simply missing and meant we weren't testing it properly.
2022-06-29 19:47:52 -07:00
Arjun Shankar
2c4e368a41 linux: Remove unnecessary nice.c and signal.c
These files simply include the sysdeps/posix implementations which would
be used even in the absence of the files.  They have been unnecessary
since 7b17aeda0c when nice and signal were removed from the
syscalls.list file.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-06-30 00:37:34 +02:00
Florian Weimer
ecd0fbebc0 Linux: Forward declaration of struct iovec for process_madvise
This maintains compatibility between <sys/mman.h> and <linux/uio.h>.
Before that, the addition of process_madvise made those two header
files incompatible.  This has been observed resulting in a build
failure in LLDB's Process/Linux/NativeRegisterContextLinux_s390x.cpp
source file.

Fixes commit d19ee3473d
("linux: Add process_madvise").

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-06-29 13:50:50 +02:00
Noah Goldstein
a3563f3f36 x86: Add more feature definitions to isa-level.h
This commit doesn't change anything in itself.  It is just to add
definitions that will be needed by future patches.
2022-06-28 08:24:56 -07:00
H.J. Lu
cfdc4df66c x86-64: Only define used SSE/AVX/AVX512 run-time resolvers
When glibc is built with x86-64 ISA level v3, SSE run-time resolvers
aren't used.  For x86-64 ISA level v4 build, both SSE and AVX resolvers
are unused.  Check the minimum x86-64 ISA level to exclude the unused
run-time resolvers.
2022-06-27 14:17:52 -07:00
H.J. Lu
f56c497d2b x86: Move CPU_FEATURE{S}_{USABLE|ARCH}_P to isa-level.h
Move X86_ISA_CPU_FEATURE_USABLE_P and X86_ISA_CPU_FEATURES_ARCH_P to
where MINIMUM_X86_ISA_LEVEL and XXX_X86_ISA_LEVEL are defined.
2022-06-27 12:52:58 -07:00
Noah Goldstein
4fc321dc58 x86: Fix backwards Prefer_No_VZEROUPPER check in ifunc-evex.h
Add third argument to X86_ISA_CPU_FEATURES_ARCH_P macro so the runtime
CPU_FEATURES_ARCH_P check can be inverted if the
MINIMUM_X86_ISA_LEVEL is not high enough to evaluate
the check at compile time.

Use this new macro to correct the backwards check in ifunc-evex.h
2022-06-27 08:35:51 -07:00
Noah Goldstein
d912127bde x86: Rename strstr_sse2 to strstr_generic as it uses string/strstr.c
This is in accordance with other files in the multiarch directory.
2022-06-27 08:35:51 -07:00
Noah Goldstein
d1e931125b x86: Remove unused file wmemcmp-sse4
The memcmp-sse4 was removed in:

commit 7cbc03d030
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Fri Apr 15 12:28:00 2022 -0500

    x86: Remove memcmp-sse4.S

so this file does nothing.
2022-06-27 08:35:51 -07:00
Noah Goldstein
afc6e4328f x86: Put wcs{n}len-sse4.1 in the sse4.1 text section
This was previously missing; the two implementations shouldn't end up in
the sse2 (generic) text section.
2022-06-27 08:35:51 -07:00
Noah Goldstein
227afaa672 x86: Align entry for memrchr to 64-bytes.
The function was tuned around 64-byte entry alignment and performs
better for all sizes with it.

As well, different code paths were explicitly written to touch the
minimum number of cache lines, i.e. sizes <= 32 touch only the entry
cache line.
2022-06-27 08:35:51 -07:00
Andreas Schwab
01c60dc90c m68k: optimize RTLD_START 2022-06-25 00:22:02 +02:00
Adhemerval Zanella
baf2a265c7 misc: Optimize internal usage of __libc_single_threaded
By adding an internal alias to avoid the GOT indirection.
On some architectures, __libc_single_threaded may be accessed through
copy relocations, and thus the copies also need to be updated.

This is done by adding a new internal macro,
libc_hidden_data_{proto,def}, which has an additional argument that
specifies the alias name (instead of the default __GI_ one).

Checked on x86_64-linux-gnu and i686-linux-gnu.

Reviewed-by: Fangrui Song <maskray@google.com>
2022-06-24 17:45:58 -03:00
Adhemerval Zanella
5b41b2659d linux: Add move_mount
It was added on Linux 5.2 (2db154b3ea8e14b04fee23e3fdfd5e9d17fbc6ae)
as way t move a mount from one place to another and, in the next
commit, allow to attach an unattached mount tree.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
2022-06-24 16:03:38 -03:00
Adhemerval Zanella
b4deb7beb8 linux: Add fsmount
It was added on 5.2 (93766fbd2696c2c4453dd8e1070977e9cd4e6b6d) to
provide a way by which a filesystem opened with fsopen and configured
by a series of fsconfig calls can have a detached mount object
created for it.

Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-06-24 16:03:31 -03:00
Adhemerval Zanella
6c0eedd97e linux: Add fsopen
It was added on Linux 5.2 (24dcb3d90a1f67fe08c68a004af37df059d74005)
to start the process of preparing to create a superblock that will
then be mountable, using an fd as a context handle.

Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-06-24 16:03:15 -03:00
Kito Cheng
58fc66a91c
riscv: Use elf_machine_rela_relative to handle R_RISCV_RELATIVE
Minor clean-up: we need to change this part in a following patch, so clean it
up now to avoid duplicating the change.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-06-23 21:07:19 -07:00
Noah Goldstein
bd42891bb3 x86: Remove faulty sanity tests for RTLD build with no multiarch
The sanity tests were meant to ensure that the default implementation
was only being built without multiarch, with the exception of the
multiarch/rtld-*.S files.

The code used IS_IN (rtld) to check if the build was for a
multiarch/rtld-*.S file, which is incorrect as IS_IN (rtld) is set for
the non-multiarch build as well.
2022-06-23 11:14:08 -07:00
Noah Goldstein
3079f652d7 x86: Replace all sse instructions with vex equivilent in avx+ files
Most of these don't really matter as there was no dirty upper state,
but we should generally avoid stray sse when it's not needed.

The one case that really matters is in svml_d_tanh4_core_avx2.S:

blendvps %xmm0, %xmm8, %xmm7

When there was a dirty upper state.

Tested on x86_64-linux
2022-06-22 19:42:17 -07:00
Noah Goldstein
3edda6a0f0 x86: Add support for compiling {raw|w}memchr with high ISA level
1. Refactor files so that all implementations are in the multiarch
   directory.
    - Essentially moved sse2 {raw|w}memchr.S implementation to
      multiarch/{raw|w}memchr-sse2.S

    - The non-multiarch {raw|w}memchr.S file now only includes one of
      the implementations in the multiarch directory based on the
      compiled ISA level (only used for non-multiarch builds.
      Otherwise we go through the ifunc selector).

2. Add ISA level build guards to different implementations.
    - I.e memchr-avx2.S which is ISA level 3 will only build if
      compiled ISA level <= 3. Otherwise there is no reason to include
      it as we will always use one of the ISA level 4
      implementations (memchr-evex{-rtm}.S).

3. Add new multiarch/rtld-{raw}memchr.S that just include the
   non-multiarch {raw}memchr.S which will in turn select the best
   implementation based on the compiled ISA level.

4. Refactor the ifunc selector and ifunc implementation list to use
   the ISA level aware wrapper macros that allow functions below the
   compiled ISA level (with a guaranteed replacement) to be skipped.
    - Guaranteed replacement essentially means that for any ISA level
      build there must be a function that the baseline of the ISA
      supports. So for {raw|w}memchr.S, since there is no ISA level 2
      function, the ISA level 2 build still includes the ISA level
      1 (sse2) function. Once we reach the ISA level 3 build, however,
      {raw|w}memchr-avx2{-rtm}.S will always be sufficient so the ISA
      level 1 implementation ({raw|w}memchr-sse2.S) will not be built.

Tested with and without multiarch on x86_64 for ISA levels:
{generic, x86-64-v2, x86-64-v3, x86-64-v4}

And m32 with and without multiarch.
2022-06-22 19:41:35 -07:00
Noah Goldstein
703f434108 x86: Add defines / utilities for making ISA specific x86 builds
1. Factor out some of the ISA level defines in isa-level.c to
   standalone header isa-level.h

2. Add new headers with ISA level dependent macros for handling
   ifuncs.

Note, this file does not change any code.

Tested with and without multiarch on x86_64 for ISA levels:
{generic, x86-64-v2, x86-64-v3, x86-64-v4}

And m32 with and without multiarch.
2022-06-22 19:41:35 -07:00
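To make the guard idea in the two commits above concrete, here is a hedged sketch of what such an ISA-level build guard can look like; the header name isa-level.h comes from the commit message, while the macro name MINIMUM_X86_ISA_LEVEL is an assumption for illustration only.

```
/* Hedged sketch of an ISA-level build guard.  MINIMUM_X86_ISA_LEVEL is an
   assumed macro name for the ISA level the library is compiled for.  */
#include <isa-level.h>

#if MINIMUM_X86_ISA_LEVEL <= 3
/* memchr-avx2.S body: an ISA level 3 implementation is only emitted when
   the compiled ISA level is <= 3; at level 4 the evex implementations
   always supersede it, so the file can compile to nothing.  */
#endif
```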
Sam James
2249ec60a9 s390: use LC_ALL=C for readelf call
Let's use LC_ALL=C as we do elsewhere for consistency.

Tested on s390x-ibm-linux-gnu.

See: 72bd208846
Signed-off-by: Sam James <sam@gentoo.org>
Reviewed-by: Stefan Liebler <stli@linux.ibm.com>
2022-06-21 10:16:44 +02:00
Sam James
c376ff3287 s390: use $READELF
We already check for it in root configure.ac with AC_CHECK_TOOL. Let's
use the result.

Tested on s390x-ibm-linux-gnu.

Signed-off-by: Sam James <sam@gentoo.org>
Reviewed-by: Stefan Liebler <stli@linux.ibm.com>
2022-06-21 10:16:44 +02:00
Noah Goldstein
e5446dfea1 i386: Fix include paths for strspn, strcspn, and strpbrk
commit c22eb807b0
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Thu Jun 16 15:07:12 2022 -0700

    x86: Rename generic functions with unique postfix for clarity

Changed the names of the strspn-c, strcspn-c, and strpbrk-c files
in a general refactor. It didn't change the include paths for the
i386 files, breaking the i386 build. This commit fixes that.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
2022-06-17 16:25:27 -07:00
Noah Goldstein
c22eb807b0 x86: Rename generic functions with unique postfix for clarity
No functions are changed. It just renames generic implementations from
'{func}_sse2' to '{func}_generic'. This is just because the postfix
"_sse2" was overloaded and was used for files that had hand-optimized
sse2 assembly implementations and files that just redirected back
to the generic implementation.

Full xcheck passed on x86_64.
2022-06-16 20:17:45 -07:00
Noah Goldstein
8da9f346cb x86: Add BMI1/BMI2 checks for ISA_V3 check
BMI1/BMI2 are part of the ISA V3 requirements:
https://en.wikipedia.org/wiki/X86-64

And defined by GCC when building with `-march=x86-64-v3`
2022-06-16 20:17:45 -07:00
Fangrui Song
4ef05df5ef x86-64: Handle fewer relocation types for RTLD_BOOTSTRAP
The RTLD_BOOTSTRAP branch is used to relocate ld.so itself.  It only
needs to handle RELATIVE, GLOB_DAT, and JUMP_SLOT.  RELATIVE has been
handled (by _ELF_DYNAMIC_DO_RELOC due to DT_RELACOUNT, or RELR), so the
switch statement only needs to handle GLOB_DAT and JUMP_SLOT.

We can drop these `#if[n]def RTLD_BOOTSTRAP` and add a large
`# ifndef RTLD_BOOTSTRAP` instead.
2022-06-16 11:48:15 -07:00
Fangrui Song
e89913d0aa aarch64: Handle fewer relocations for RTLD_BOOTSTRAP
The RTLD_BOOTSTRAP branch is used to relocate ld.so itself.  It only
needs to handle RELATIVE, GLOB_DAT, and JUMP_SLOT.
TLSDESC/TLS_DTPMOD/TLS_DTPREL handling can be removed.  Remove
`case AARCH64_R(RELATIVE)` as well, as elf_machine_rela has already checked it.

Tested on aarch64-linux-gnu.
2022-06-15 19:21:53 -07:00
Fangrui Song
57919813e7 riscv: Change the relocations handled for RTLD_BOOTSTRAP
The RTLD_BOOTSTRAP branch is used to relocate ld.so itself.  It only
needs to handle RELATIVE, GLOB_DAT, and the symbolic relocation type
(R_RISCV_{32,64}).  NONE and IRELATIVE can be removed.

The code relies on ld.so having DT_RELACOUNT so that the RTLD_BOOTSTRAP
branch does not need to handle RELATIVE.  Drop this minor size
optimization for clarity.

Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-06-15 18:42:03 -07:00
Noah Goldstein
89a25c6f64 x86: Cleanup bounds checking in large memcpy case
1. Fix incorrect lower-bound threshold in L(large_memcpy_2x).
   Previously was using `__x86_rep_movsb_threshold` and should
   have been using `__x86_shared_non_temporal_threshold`.

2. Avoid reloading __x86_shared_non_temporal_threshold before
   the L(large_memcpy_4x) bounds check.

3. Document the second bounds check for L(large_memcpy_4x)
   more clearly.
2022-06-15 14:25:55 -07:00
Noah Goldstein
b446822b6a x86: Add bounds x86_non_temporal_threshold
The lower-bound (16448) and upper-bound (SIZE_MAX / 16) are assumed
by memmove-vec-unaligned-erms.

The lower-bound is needed because memmove-vec-unaligned-erms unrolls
the loop aggressively in the L(large_memset_4x) case.

The upper-bound is needed because memmove-vec-unaligned-erms
right-shifts the value of `x86_non_temporal_threshold` by
LOG_4X_MEMCPY_THRESH (4) which without a bound may overflow.

The lack of lower-bound can be a correctness issue. The lack of
upper-bound cannot.
2022-06-15 14:25:55 -07:00
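As a hedged illustration of the bounds described above (the values 16448 and SIZE_MAX / 16 are taken from the commit message, the function and variable names are invented here), the clamp amounts to:

```
#include <stddef.h>
#include <stdint.h>

/* Sketch of the described bounds: keep the tunable inside
   [16448, SIZE_MAX / 16] so the memmove-vec-unaligned-erms assumptions
   hold.  Names are illustrative only.  */
static inline size_t
clamp_non_temporal_threshold (size_t value)
{
  if (value < 16448)
    return 16448;              /* lower bound needed for correctness  */
  if (value > SIZE_MAX / 16)
    return SIZE_MAX / 16;      /* avoid overflow when the value is shifted  */
  return value;
}
```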
Fangrui Song
686216945a Remove remnant reference to ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA
This fixes the nios2 build after commit de38b2a343.
2022-06-15 13:02:17 -07:00
Fangrui Song
de38b2a343 elf: Remove ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA
If an executable has copy relocations for extern protected data, that
can only work if the library containing the definition is built with
assumptions (a) the compiler emits GOT-generating relocations (b) the
linker produces R_*_GLOB_DAT instead of R_*_RELATIVE.  Otherwise the
library uses its own definition directly and the executable accesses a
stale copy.  Note: the GOT relocations defeat the purpose of protected
visibility as an optimization, but allow rtld to make the executable and
library use the same copy when copy relocations are present, but it
turns out this never worked perfectly.

ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA has strange semantics when both
a.so and b.so define protected var and the executable copy relocates
var: b.so accesses its own copy even with GLOB_DAT.  The behavior change
is from commit 62da1e3b00 (x86) and then
copied to nios2 (ae5eae7cfc) and arc
(0e7d930c4c).

Without ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA, b.so accesses the copy
relocated data like a.so.

There is now a warning for copy relocation on protected symbol since
commit 7374c02b68.  It's extremely
unlikely anyone relies on the ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA
behavior, so let's remove it: this removes a check in the symbol lookup
code.
2022-06-15 11:29:55 -07:00
Noah Goldstein
ff439c4717 x86: Add sse42 implementation to strcmp's ifunc
This has been missing since the ifuncs were added.

The performance of SSE4.2 is preferable to SSE2.

Measured on Tigerlake with N = 20 runs.
Geometric Mean of all benchmarks SSE4.2 / SSE2: 0.906
2022-06-14 20:58:09 -07:00
Noah Goldstein
0355915514 x86: Fix misordered logic for setting rep_movsb_stop_threshold
Move the setting of `rep_movsb_stop_threshold` to after the tunables
have been collected so that the `rep_movsb_stop_threshold` (which
is used to redirect control flow to the non_temporal case) will
use any user value for `non_temporal_threshold` (set using
glibc.cpu.x86_non_temporal_threshold)
2022-06-14 20:58:07 -07:00
Fangrui Song
7374c02b68 elf: Refine direct extern access diagnostics to protected symbol
Refine commit 349b0441da:

1. Copy relocations for extern protected data do not work properly,
regardless of whether GNU_PROPERTY_1_NEEDED_INDIRECT_EXTERN_ACCESS is used.
It makes sense to produce a warning unconditionally.

2. Non-zero value of an undefined function symbol may break pointer
equality, but may be benign in many cases (many programs don't take the
address in the shared object then compare it with the address in the
executable).  Reword the diagnostic to be clearer.

3. Remove the unneeded condition !(undef_map->l_1_needed &
GNU_PROPERTY_1_NEEDED_INDIRECT_EXTERN_ACCESS). If the executable does
not have GNU_PROPERTY_1_NEEDED_INDIRECT_EXTERN_ACCESS (can only
occur in error cases), the diagnostic should be emitted as well.

When the defining shared object has
GNU_PROPERTY_1_NEEDED_INDIRECT_EXTERN_ACCESS, report an error to apply
the intended enforcement.
2022-06-14 13:07:27 -07:00
Wilco Dijkstra
fdaf78656f Add bounds check to __libc_ifunc_impl_list
Add a proper bounds check to __libc_ifunc_impl_list. This makes MAX_IFUNC
redundant and fixes several targets that will write outside the array.
To avoid unnecessary large diffs, pass the maximum in the argument 'i' to
IFUNC_IMPL_ADD - 'max' can be used in new ifunc definitions and existing
ones can be updated if desired.

Passes buildmanyglibc.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-06-10 17:13:29 +01:00
Noah Goldstein
cffb9414c5 x86: Optimize svml_s_tanhf4_core_sse4.S
Optimizations are:
    1. Reduce code size (-112 bytes).
    2. Remove redundant move instructions.
    3. Slightly improve instruction selection/scheduling where
       possible.
    4. Prefer registers which get short instruction encoding.
    5. Reduce rodata size (-4k+ rodata is shared with avx2).

Result is roughly a 15-16% speedup:

       Function, New Time, Old Time, New / Old
 _ZGVbN4v_tanhf,    3.158,    3.749,     0.842
2022-06-09 12:51:25 -07:00
Noah Goldstein
bcc41f66a4 x86: Optimize svml_s_tanhf8_core_avx2.S
Optimizations are:
    1. Reduce code size (-81 bytes).
    2. Remove redundant move instructions.
    3. Slightly improve instruction selection/scheduling where
       possible.
    4. Prefer registers which get short instruction encoding.
    5. Reduce rodata size (-32 bytes).

Result is roughly a 17-18% speedup:

       Function, New Time, Old Time, New / Old
_ZGVdN8v_tanhf,     1.977,    2.402,     0.823
2022-06-09 12:51:22 -07:00
Noah Goldstein
3a49ce8799 x86: Add data file that can be shared by tanhf-avx2 and tanhf-sse4
tanhf-avx2 and tanhf-sse4 use the same data tables, so we can save
over 4kb using a shared data table. This does increase the memory
footprint of the sse4 version (as now all the targets are 32 bytes
instead of 16), but generally it seems worth the code size savings.

NB: This patch doesn't do anything itself, it is setup for future
patches.
2022-06-09 12:51:15 -07:00
Noah Goldstein
e560b3c2d2 x86: Optimize svml_s_tanhf16_core_avx512.S
Optimizations are:
    1. Reduce code size (-67 bytes).
    2. Remove redundant move instructions.
    3. Slightly improve instruction selection/scheduling where
       possible.
    4. Reduce rodata usage (-448 bytes).

Result is roughly a 14% speedup:

       Function, New Time, Old Time, New / Old
_ZGVeN16v_tanhf,    0.649,    0.752,     0.863
2022-06-09 12:51:12 -07:00
Noah Goldstein
fe1915d4f6 x86: Improve svml_s_atanhf4_core_sse4.S
Improvements are:
    1. Reduce code size (-62 bytes).
    2. Remove redundant move instructions.
    3. Slightly improve instruction selection/scheduling where
       possible.
    4. Prefer registers which get short instruction encoding.
    5. Reduce rodata usage (-16 bytes).

The throughput improvement is not significant as the port 0 bottleneck
is unavoidable.

       Function, New Time, Old Time, New / Old
_ZGVbN4v_atanhf,    8.821,    8.903,     0.991
2022-06-09 12:51:09 -07:00
Noah Goldstein
65897e9916 x86: Improve svml_s_atanhf8_core_avx2.S
Improvements are:
    1. Reduce code size (-60 bytes).
    2. Remove redundant move instructions.
    3. Slightly improve instruction selection/scheduling where
       possible.
    4. Prefer registers which get short instruction encoding.
    5. Shrink rodata usage (-32 bytes).

The throughput improvement is not that significant (3-5%) as the
port 0 bottleneck is unavoidable.

       Function, New Time, Old Time, New / Old
_ZGVdN8v_atanhf,    2.799,    2.923,     0.958
2022-06-09 12:51:04 -07:00
Noah Goldstein
73bae395cf x86: Improve svml_s_atanhf16_core_avx512.S
Improvements are:
    1. Reduce code size (-64 bytes).
    2. Remove redundant move instructions.
    3. Slightly improve instruction selection/scheduling where
       possible.
    4. Reduce rodata size ([-128, -188] bytes).

The throughput improvement is not significant as the port 0 bottleneck
is unavoidable.

        Function, New Time, Old Time, New / Old
_ZGVeN16v_atanhf,     1.39,    1.408,     0.987
2022-06-09 12:50:58 -07:00
Noah Goldstein
0f91811333 x86: Align varshift table to 32-bytes
This ensures the load will never split a cache line.
2022-06-09 12:50:26 -07:00
Noah Goldstein
4654e7fd5a x86: Add copyright to strpbrk-c.c 2022-06-09 12:50:00 -07:00
Noah Goldstein
2c9af8421d x86: Fix page cross case in rawmemchr-avx2 [BZ #29234]
commit 6dcbb7d95d
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Mon Jun 6 21:11:33 2022 -0700

    x86: Shrink code size of memchr-avx2.S

Changed how the page cross case aligned the string (rdi) in
rawmemchr. This was incompatible with how
`L(cross_page_continue)` expected the pointer to be aligned and
would cause rawmemchr to read data starting before the
beginning of the string. What it would read was in valid memory
but could count CHAR matches, resulting in an incorrect return
value.

This commit fixes that issue by essentially reverting the changes to
the L(page_cross) case as they didn't really matter.

Test cases were added and all pass with the new code (and were confirmed
to fail with the old code).
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-06-08 17:07:34 -07:00
Adhemerval Zanella
c7d36dcecc nptl: Fix __libc_cleanup_pop_restore asynchronous restore (BZ#29214)
This was due to a wrong revert done in 404656009b.

Checked on x86_64-linux-gnu.
2022-06-08 09:23:02 -03:00
Noah Goldstein
c28db9cb29 x86: ZERO_UPPER_VEC_REGISTERS_RETURN_XTEST expect no transactions
Give fall-through path to `vzeroupper` and taken-path to `vzeroall`.

Generally even on machines with RTM the expectation is the
string-library functions will not be called in transactions.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-06-07 13:10:32 -07:00
Noah Goldstein
56da3fe1dd x86: Shrink code size of memchr-evex.S
This is not meant as a performance optimization. The previous code was
far too liberal in aligning targets and wasted code size unnecessarily.

The total code size saving is: 64 bytes

There are no non-negligible changes in the benchmarks.
Geometric Mean of all benchmarks New / Old: 1.000

Full xcheck passes on x86_64.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-06-07 13:10:32 -07:00
Noah Goldstein
6dcbb7d95d x86: Shrink code size of memchr-avx2.S
This is not meant as a performance optimization. The previous code was
far too liberal in aligning targets and wasted code size unnecessarily.

The total code size saving is: 59 bytes

There are no major changes in the benchmarks.
Geometric Mean of all benchmarks New / Old: 0.967

Full xcheck passes on x86_64.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-06-07 13:10:31 -07:00
Noah Goldstein
af5306a735 x86: Optimize memrchr-avx2.S
The new code:
    1. prioritizes smaller user-arg lengths more.
    2. optimizes target placement more carefully
    3. reuses logic more
    4. fixes up various inefficiencies in the logic. The biggest
       case here is the `lzcnt` logic for checking returns which
       saves either a branch or multiple instructions.

The total code size saving is: 306 bytes
Geometric Mean of all benchmarks New / Old: 0.760

Regressions:
There are some regressions. Particularly where the length (user arg
length) is large but the position of the match char is near the
beginning of the string (in first VEC). This case has roughly a
10-20% regression.

This is because the new logic gives the hot path for immediate matches
to shorter lengths (the more common input). This case has roughly
a 15-45% speedup.

Full xcheck passes on x86_64.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-06-07 13:10:27 -07:00
Noah Goldstein
b4209615a0 x86: Optimize memrchr-evex.S
The new code:
    1. prioritizes smaller user-arg lengths more.
    2. optimizes target placement more carefully
    3. reuses logic more
    4. fixes up various inefficiencies in the logic. The biggest
       case here is the `lzcnt` logic for checking returns which
       saves either a branch or multiple instructions.

The total code size saving is: 263 bytes
Geometric Mean of all benchmarks New / Old: 0.755

Regressions:
There are some regressions. Particularly where the length (user arg
length) is large but the position of the match char is near the
beginning of the string (in first VEC). This case has roughly a
20% regression.

This is because the new logic gives the hot path for immediate matches
to shorter lengths (the more common input). This case has roughly
a 35% speedup.

Full xcheck passes on x86_64.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-06-07 13:10:24 -07:00
Noah Goldstein
731feee386 x86: Optimize memrchr-sse2.S
The new code:
    1. prioritizes smaller lengths more.
    2. optimizes target placement more carefully.
    3. reuses logic more.
    4. fixes up various inefficiencies in the logic.

The total code size saving is: 394 bytes
Geometric Mean of all benchmarks New / Old: 0.874

Regressions:
    1. The page cross case is now colder, especially re-entry from the
       page cross case if a match is not found in the first VEC
       (roughly 50%). My general opinion with this patch is this is
       acceptable given the "coldness" of this case (less than 4%) and
       generally performance improvement in the other far more common
       cases.

    2. There are some regressions 5-15% for medium/large user-arg
       lengths that have a match in the first VEC. This is because the
       logic was rewritten to optimize finds in the first VEC if the
       user-arg length is shorter (where we see roughly 20-50%
       performance improvements). It is not always the case this is a
       regression. My intuition is some frontend quirk is partially
       explaining the data although I haven't been able to find the
       root cause.

Full xcheck passes on x86_64.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-06-07 13:09:36 -07:00
Noah Goldstein
dd5c483b25 x86: Add COND_VZEROUPPER that can replace vzeroupper if no ret
The RTM vzeroupper mitigation has no way of replacing an inline
vzeroupper that is not immediately before a return.

This can be useful when hoisting a vzeroupper to save code size
for example:

```
L(foo):
	cmpl	%eax, %edx
	jz	L(bar)
	tzcntl	%eax, %eax
	addq	%rdi, %rax
	VZEROUPPER_RETURN

L(bar):
	xorl	%eax, %eax
	VZEROUPPER_RETURN
```

Can become:

```
L(foo):
	COND_VZEROUPPER
	cmpl	%eax, %edx
	jz	L(bar)
	tzcntl	%eax, %eax
	addq	%rdi, %rax
	ret

L(bar):
	xorl	%eax, %eax
	ret
```

This code does not change any existing functionality.

There is no difference in the objdump of libc.so before and after this
patch.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-06-07 13:08:28 -07:00
Noah Goldstein
8a780a6b91 x86: Create header for VEC classes in x86 strings library
This patch does not touch any existing code and is only meant to be a
tool for future patches so that simple source files can more easily be
maintained to target multiple VEC classes.

There is no difference in the objdump of libc.so before and after this
patch.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-06-07 13:08:28 -07:00
Matheus Castanho
0218463dd8 powerpc: Fix VSX register number on __strncpy_power9 [BZ #29197]
__strncpy_power9 initializes VR 18 with zeroes to be used throughout the
code, including when zero-padding the destination string. However, the
v18 reference was mistakenly being used for stxv and stxvl, which take a
VSX vector as operand. The code ended up using the uninitialized VSR 18
register by mistake.

Both occurrences have been changed to use the proper VSX number for VR 18
(i.e. VSR 50).

Tested on powerpc, powerpc64 and powerpc64le.

Signed-off-by: Kewen Lin <linkw@gcc.gnu.org>
2022-06-07 15:07:25 -03:00
Wilco Dijkstra
eea282d9c6 AArch64: Sort makefile entries
Sort makefile entries to reduce conflicts.
2022-06-07 16:58:15 +01:00
Wilco Dijkstra
9f298bfe1f AArch64: Add SVE memcpy
Add an initial SVE memcpy implementation.  Copies up to 32 bytes use SVE
vectors which improves the random memcpy benchmark significantly.
Cleanup the memcpy and memmove ifunc selectors.
2022-06-07 16:58:03 +01:00
Raghuveer Devulapalli
5082a287d5 x86_64: Add strstr function with 512-bit EVEX
Adding a 512-bit EVEX version of strstr. The algorithm works as follows:

(1) We spend a few cycles at the beginning to peek into the needle. We
locate an edge in the needle (first occurrence of 2 consecutive distinct
characters) and also store the first 64-bytes into a zmm register.

(2) We search for the edge in the haystack by looking into one cache
line of the haystack at a time. This avoids having to read past a page
boundary which can cause a seg fault.

(3) If an edge is found in the haystack we first compare the first
64-bytes of the needle (already stored in a zmm register) before we
proceed with a full string compare performed byte by byte.

Benchmarking results: (old = strstr_sse2_unaligned, new = strstr_avx512)

Geometric mean of all benchmarks: new / old =  0.66

Difficult skiptable(0) : new / old =  0.02
Difficult skiptable(1) : new / old =  0.01
Difficult 2-way : new / old =  0.25
Difficult testing first 2 : new / old =  1.26
Difficult skiptable(0) : new / old =  0.05
Difficult skiptable(1) : new / old =  0.06
Difficult 2-way : new / old =  0.26
Difficult testing first 2 : new / old =  1.05
Difficult skiptable(0) : new / old =  0.42
Difficult skiptable(1) : new / old =  0.24
Difficult 2-way : new / old =  0.21
Difficult testing first 2 : new / old =  1.04
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-06-06 19:46:55 -07:00
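For step (1) above, the "edge" is simply the first position in the needle where two consecutive characters differ.  A scalar sketch of that selection (not part of the actual EVEX implementation, names invented here) looks like this:

```
#include <stddef.h>

/* Scalar sketch of the edge-selection step: return the index of the first
   pair of consecutive distinct characters in NEEDLE.  The vectorized code
   then searches the haystack for this edge one cache line at a time.  */
static size_t
find_needle_edge (const char *needle, size_t len)
{
  for (size_t i = 0; i + 1 < len; i++)
    if (needle[i] != needle[i + 1])
      return i;
  return 0;  /* degenerate needle: all characters identical  */
}
```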
Sam James
7df596a58c grep: egrep -> grep -E, fgrep -> grep -F
Newer versions of GNU grep (anything after grep 3.7) will warn on
'egrep' and 'fgrep' invocations.

Convert usages within the tree to their expanded non-aliased counterparts
to avoid irritating warnings during ./configure and the test suite.

Signed-off-by: Sam James <sam@gentoo.org>
Reviewed-by: Fangrui Song <maskray@google.com>
2022-06-05 12:09:02 -07:00
Adhemerval Zanella
1002f1af1c linux: Add process_mrelease
Added in Linux 5.15 (884a7e5964e06ed93c7771c0d7cf19c09a8946f1), the new
syscall allows a caller to free the memory of a dying target process.

Checked on x86_64-linux-gnu.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-06-02 15:43:28 -03:00
Adhemerval Zanella
d19ee3473d linux: Add process_madvise
It was added on Linux 5.10 (ecb8ac8b1f146915aa6b96449b66dd48984caacc)
with the same functionality as madvise but using a pidfd of the target
process.

Checked on x86_64-linux-gnu and i686-linux-gnu.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-06-02 15:43:28 -03:00
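The two commits above both operate on a target process identified by a pidfd.  A hedged usage sketch follows; it assumes pidfd_open is declared in <sys/pidfd.h> and process_madvise in <sys/mman.h> under _GNU_SOURCE, and the helper name is invented here.

```
#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/pidfd.h>
#include <sys/uio.h>
#include <unistd.h>

/* Hedged sketch: advise a region of another process's address space via
   its pidfd.  ADDR/LEN must describe memory in the *target* process.  */
static int
advise_cold (pid_t pid, void *addr, size_t len)
{
  int pidfd = pidfd_open (pid, 0);
  if (pidfd < 0)
    return -1;

  struct iovec iov = { .iov_base = addr, .iov_len = len };
  ssize_t r = process_madvise (pidfd, &iov, 1, MADV_COLD, 0);
  close (pidfd);
  return r < 0 ? -1 : 0;
}
```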
Adhemerval Zanella
7d3e91ba19 linux: Set tst-pidfd-consts unsupported for kernels headers older than 5.10
Instead of failing while trying to build the compare source file.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Matheus Castanho <msc@linux.ibm.com>
Reviewed-by: Matheus Castanho <msc@linux.ibm.com>
2022-06-02 15:43:25 -03:00
Florian Weimer
4b527650e0 Linux: Adjust struct rseq definition to current kernel version
This definition is only used as a fallback with old kernel headers.
The change follows kernel commit bfdf4e6208051ed7165b2e92035b4bf11
("rseq: Remove broken uapi field layout on 32-bit little endian").

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-06-02 16:29:59 +02:00
Adhemerval Zanella
87f1ec12e7 socket: Use 64 bit stat for isfdtype (BZ# 29209)
This is a spot initially missed in 52a5fe70a2.

Checked on i686-linux-gnu.
2022-06-01 13:23:16 -03:00
Adhemerval Zanella
6e7137f28c posix: Use 64 bit stat for fpathconf (_PC_ASYNC_IO) (BZ# 29208)
This is a spot initially missed in 52a5fe70a2.

Checked on i686-linux-gnu.
2022-06-01 13:23:16 -03:00
Adhemerval Zanella
574ba60fc8 posix: Use 64 bit stat for posix_fallocate fallback (BZ# 29207)
This is a spot initially missed in 52a5fe70a2.

Checked on i686-linux-gnu.
2022-06-01 13:23:16 -03:00
WANG Xuerui
e6547d635b linux: use statx for fstat if neither newfstatat nor fstatat64 is present
LoongArch is going to be the first architecture supported by Linux that
has neither fstat* nor newfstatat [1], instead exclusively relying on
statx. So in fstatat64's implementation, we need to also enable statx
usage if neither fstatat64 nor newfstatat is present, to prepare for
this new case of kernel ABI.

[1]: https://lore.kernel.org/all/20220518092619.1269111-1-chenhuacai@loongson.cn/

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2022-06-01 12:29:01 -03:00
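An fstat-style query can indeed be expressed with statx alone by using AT_EMPTY_PATH on the file descriptor.  A minimal sketch of that idea (not the actual glibc fallback code; the helper name is invented):

```
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/stat.h>

/* Hedged sketch of an fstat-style query implemented purely with statx:
   an empty path plus AT_EMPTY_PATH makes statx operate on FD itself.  */
static int
fstat_via_statx (int fd, struct statx *stx)
{
  return statx (fd, "", AT_EMPTY_PATH, STATX_BASIC_STATS, stx);
}
```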
Joseph Myers
de3501d60f Add MADV_DONTNEED_LOCKED from Linux 5.18 to bits/mman-linux.h
Linux 5.18 adds a constant MADV_DONTNEED_LOCKED (defined in multiple
header files, but with the same value on all architectures).  Add this
constant to bits/mman-linux.h.

Tested for x86_64.
2022-06-01 14:45:48 +00:00
Joseph Myers
9d03bac7e7 Add HWCAP2_MTE3 from Linux 5.18 to AArch64 bits/hwcap.h
Linux 5.18 defines a new AArch64 HWCAP value HWCAP2_MTE3; add it to
glibc's sysdeps/unix/sysv/linux/aarch64/bits/hwcap.h.

Tested with build-many-glibcs.py for aarch64-linux-gnu.
2022-06-01 14:43:06 +00:00
Adhemerval Zanella
5a6f2cabb6 i686: Use generic sincosf implementation for SSE2 version
The generic implementation shows slightly better performance
(gcc 11.2.1 on a Ryzen 9 5900X):

* s_sincosf-sse2.S:
  "sincosf": {
   "workload-random": {
    "duration": 3.89961e+09,
    "iterations": 9.5472e+07,
    "reciprocal-throughput": 40.8429,
    "latency": 40.8483,
    "max-throughput": 2.4484e+07,
    "min-throughput": 2.44808e+07
   }
  }

* generic s_sincosf.c:
  "sincosf": {
   "workload-random": {
    "duration": 3.71953e+09,
    "iterations": 1.48512e+08,
    "reciprocal-throughput": 25.0515,
    "latency": 25.0391,
    "max-throughput": 3.99177e+07,
    "min-throughput": 3.99375e+07
   }
  }

Checked on i686-linux-gnu.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-06-01 10:47:44 -03:00
Adhemerval Zanella
3323476641 i686: Use generic sinf implementation for SSE2 version
Performance seems to be similar (gcc 11.2.1 on a Ryzen 9 5900X);
the generic algorithm shows slightly better performance for
the 'workload-huge.wrf' input set.

* s_sinf-sse2.S:
  "sinf": {
   "": {
    "duration": 3.72405e+09,
    "iterations": 2.38374e+08,
    "max": 63.973,
    "min": 11.211,
    "mean": 15.6227
   },
   "workload-random.wrf": {
    "duration": 3.76923e+09,
    "iterations": 8.4e+07,
    "reciprocal-throughput": 17.6355,
    "latency": 72.108,
    "max-throughput": 5.67037e+07,
    "min-throughput": 1.38681e+07
   },
   "workload-huge.wrf": {
    "duration": 3.76943e+09,
    "iterations": 6e+07,
    "reciprocal-throughput": 29.3493,
    "latency": 96.2985,
    "max-throughput": 3.40724e+07,
    "min-throughput": 1.03844e+07
   }
  }

* generic s_sinf.c:
  "sinf": {
   "": {
    "duration": 3.70989e+09,
    "iterations": 2.18025e+08,
    "max": 69.782,
    "min": 11.1,
    "mean": 17.0159
   },
   "workload-random.wrf": {
    "duration": 3.77213e+09,
    "iterations": 9.6e+07,
    "reciprocal-throughput": 17.5402,
    "latency": 61.0459,
    "max-throughput": 5.70119e+07,
    "min-throughput": 1.63811e+07
   },
   "workload-huge.wrf": {
    "duration": 3.81576e+09,
    "iterations": 5.6e+07,
    "reciprocal-throughput": 38.2111,
    "latency": 98.0659,
    "max-throughput": 2.61704e+07,
    "min-throughput": 1.01972e+07
   }
  }

Checked on i686-linux-gnu.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-06-01 10:47:44 -03:00
Adhemerval Zanella
da39afa4ff i686: Use generic cosf implementation for SSE2 version
Performance seems to be similar (gcc 11.2.1 on a Ryzen 9 5900X):

* s_cosf-sse2.S:
  "cosf": {
   "workload-random": {
    "duration": 3.74987e+09,
    "iterations": 9.616e+07,
    "reciprocal-throughput": 15.8141,
    "latency": 62.1782,
    "max-throughput": 6.32346e+07,
    "min-throughput": 1.60828e+07
   }
  }

* generic s_cosf.c:
  "cosf": {
   "workload-random": {
    "duration": 3.87298e+09,
    "iterations": 1.00968e+08,
    "reciprocal-throughput": 18.3448,
    "latency": 58.3722,
    "max-throughput": 5.45113e+07,
    "min-throughput": 1.71314e+07
   }
  }

Checked on i686-linux-gnu.
2022-06-01 10:47:44 -03:00
Andreas Schwab
dc1e5eeb25 x86_64: Optimize sincos where sin/cos is optimized (bug 29193)
The compiler may substitute calls to sin or cos with calls to sincos, thus
we should have the same optimized implementations for sincos.  The
optimized implementations may produce results that differ; this also makes
sure that the sincos call agrees with the sin and cos calls.
2022-06-01 10:29:52 +02:00
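An example of the kind of code where such a substitution can happen (whether it does depends on the compiler and flags, e.g. GCC with -ffast-math or when errno handling is not required; this snippet is purely illustrative):

```
#include <math.h>

/* The separate sin and cos calls on the same argument may be fused by the
   compiler into a single sincos call, which is why sincos needs the same
   optimized implementations to keep the results consistent.  */
void
rotate (double angle, double *x, double *y)
{
  double s = sin (angle);   /* may be fused ...            */
  double c = cos (angle);   /* ... into sincos (angle, &s, &c)  */
  double nx = *x * c - *y * s;
  double ny = *x * s + *y * c;
  *x = nx;
  *y = ny;
}
```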
Joseph Myers
6488f4d006 Add SOL_SMC from Linux 5.18 to bits/socket.h
Linux 5.18 adds a constant SOL_SMC to the getsockopt / setsockopt
levels; add this constant to bits/socket.h.

Tested for x86_64.
2022-05-31 13:49:53 +00:00
Adhemerval Zanella
81e7fdd7cc elf: Remove _dl_skip_args
Now that no architecture uses it anymore.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-05-30 16:33:54 -03:00
Adhemerval Zanella
ec7bc492b6 x86_64: Remove _dl_skip_args usage
Since ad43cac44a the generic code already shuffles the argv/envp/auxv
on the stack to remove the ld.so own arguments and thus _dl_skip_args
is always 0.   So there is no need to adjust the argc or argv.

Checked on x86_64-linux-gnu and i686-linux-gnu.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-05-30 16:33:34 -03:00
Adhemerval Zanella
b6712b137f sparc: Remove _dl_skip_args usage
Since ad43cac44a the generic code already shuffles the argv/envp/auxv
on the stack to remove the ld.so own arguments and thus _dl_skip_args
is always 0.   So there is no need to adjust the argc or argv.

Checked on sparc64-linux-gnu and sparcv9-linux-gnu.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-05-30 16:33:31 -03:00
Adhemerval Zanella
4dc1f6530e sh: Remove _dl_skip_args usage
Since ad43cac44a the generic code already shuffles the argv/envp/auxv
on the stack to remove the ld.so own arguments and thus _dl_skip_args
is always 0.   So there is no need to adjust the argc or argv.

Checked with qemu-user that arguments are correctly passed on both
constructors and main program.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-05-30 16:33:28 -03:00
Adhemerval Zanella
22d8935d1d s390: Remove _dl_skip_args usage
Since ad43cac44a the generic code already shuffles the argv/envp/auxv
on the stack to remove the ld.so own arguments and thus _dl_skip_args
is always 0.   So there is no need to adjust the argc or argv.

Checked on s390x-linux-gnu and s390-linux-gnu.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-05-30 16:33:25 -03:00
Adhemerval Zanella
d62123c1ed riscv: Remove _dl_skip_args usage
Since ad43cac44a the generic code already shuffles the argv/envp/auxv
on the stack to remove the ld.so own arguments and thus _dl_skip_args
is always 0.   So there is no need to adjust the argc or argv.

Checked with qemu-user that arguments are correctly passed on both
constructors and main program.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-05-30 16:33:22 -03:00
Adhemerval Zanella
4868ba5d25 nios2: Remove _dl_skip_args usage (BZ# 29187)
Since ad43cac44a the generic code already shuffles the argv/envp/auxv
on the stack to remove the ld.so own arguments and thus _dl_skip_args
is always 0.   So there is no need to adjust the argc or argv.

Checked with qemu-user that arguments are correctly passed on both
constructors and main program.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-05-30 16:33:20 -03:00
Adhemerval Zanella
44fc092c0d mips: Remove _dl_skip_args usage
Since ad43cac44a the generic code already shuffles the argv/envp/auxv
on the stack to remove the ld.so own arguments and thus _dl_skip_args
is always 0.   So there is no need to adjust the argc or argv.

Checked with qemu-user that arguments are correctly passed on both
constructors and main program.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-05-30 16:33:16 -03:00
Adhemerval Zanella
90cf8e6f0a microblaze: Remove _dl_skip_args usage
Since ad43cac44a the generic code already shuffles the argv/envp/auxv
on the stack to remove the ld.so own arguments and thus _dl_skip_args
is always 0.   So there is no need to adjust the argc or argv.

Checked with qemu-user that arguments are correctly passed on both
constructors and main program.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-05-30 16:33:14 -03:00
Adhemerval Zanella
ee39fafa98 m68k: Remove _dl_skip_args usage
Since ad43cac44a the generic code already shuffles the argv/envp/auxv
on the stack to remove the ld.so own arguments and thus _dl_skip_args
is always 0.  So there is no need to adjust the argc or argv.

Checked with qemu-user that arguments are correctly passed on both
constructors and main program.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-05-30 16:33:11 -03:00
Adhemerval Zanella
57bb1e5b9f ia64: Remove _dl_skip_args usage
Since ad43cac44a the generic code already shuffles the argv/envp/auxv
on the stack to remove the ld.so own arguments and thus _dl_skip_args
is always 0.

The startup code is changed to read the _dl_argc and _dl_argv values,
and envp is calculated from argc and argv.

Checked on ia64-linux-gnu.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-05-30 16:33:08 -03:00
Adhemerval Zanella
1b7f05d11e i686: Remove _dl_skip_args usage
Since ad43cac44a the generic code already shuffles the argv/envp/auxv
on the stack to remove the ld.so own arguments and thus _dl_skip_args
is always 0.  So there is no need to adjust the argc or argv.

Checked on i686-linux-gnu.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-05-30 16:32:57 -03:00
Adhemerval Zanella
6242602273 hppa: Remove _dl_skip_args usage (BZ# 29165)
Unlike other architectures, hppa creates an unrelated stack
frame where the ld.so argc/argv adjustments done by ad43cac44a
are not applied to the argc/argv saved/restored by _dl_start_user.

Instead, load _dl_argc and _dl_argv directly instead of adjusting them
using the _dl_skip_args value.

Checked on hppa-linux-gnu.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-05-30 16:32:35 -03:00
Adhemerval Zanella
00477963c6 csky: Remove _dl_skip_args usage
Since ad43cac44a the generic code already shuffles the argv/envp/auxv
on the stack to remove the ld.so own arguments and thus _dl_skip_args
is always 0.  It makes the fixup_stack branch unused.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-05-30 16:32:33 -03:00
Adhemerval Zanella
f20464e9e4 arc: Remove _dl_skip_args usage
Since ad43cac44a the generic code already shuffles the argv/envp/auxv
on the stack to remove the ld.so own arguments and thus _dl_skip_args
is always 0.  So there is no need to adjust the argc or argv.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-05-30 16:32:29 -03:00
Adhemerval Zanella
49d877a80b arm: Remove _dl_skip_args usage
Since ad43cac44a the generic code already shuffles the argv/envp/auxv
on the stack to remove the ld.so own arguments and thus _dl_skip_args
is always 0.  It makes the _fixup_stack branch unused.

Checked with qemu-user that arguments are correctly passed on both
constructors and main program.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-05-30 16:32:26 -03:00
Adhemerval Zanella
1e4fb2e1ab alpha: Remove _dl_skip_args usage
Since ad43cac44a the generic code already shuffles the argv/envp/auxv
on the stack to remove the ld.so own arguments and thus _dl_skip_args
is always 0.  It makes the fixup_stack branch unused.

Checked with qemu-user that arguments are correctly passed on both
constructors and main program.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2022-05-30 16:32:22 -03:00
H.J. Lu
f8587a6189 x86-64: Ignore r_addend for R_X86_64_GLOB_DAT/R_X86_64_JUMP_SLOT
According to x86-64 psABI, r_addend should be ignored for R_X86_64_GLOB_DAT
and R_X86_64_JUMP_SLOT.  Since linkers always set their r_addends to 0, we
can ignore their r_addends.

Reviewed-by: Fangrui Song <maskray@google.com>
2022-05-26 14:00:25 -07:00
Sunil K Pandey
9c66efb86f x86_64: Implement evex512 version of strlen, strnlen, wcslen and wcsnlen
This patch implements the following evex512 versions of string functions.
Perf gain for the evex512 version is up to 50% as compared to evex,
depending on length and alignment.

Placeholder function, not used by any processor at the moment.

- String length function using 512 bit vectors.
- String N length using 512 bit vectors.
- Wide string length using 512 bit vectors.
- Wide string N length using 512 bit vectors.

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
2022-05-26 13:11:36 -07:00
Joseph Myers
8d6c44ee7d Update kernel version to 5.18 in header constant tests
This patch updates the kernel version in the tests tst-mman-consts.py
and tst-pidfd-consts.py to 5.18.  (There are no new constants covered
by these tests in 5.18, or in 5.17 in the case of tst-pidfd-consts.py
that previously used version 5.16, that need any other header
changes.)

Tested with build-many-glibcs.py.
2022-05-26 13:51:17 +00:00
Joseph Myers
3d9926663c Update syscall-names.list for Linux 5.18
Linux 5.18 has no new syscalls.  Update the version number in
syscall-names.list to reflect that it is still current for 5.18.

Tested with build-many-glibcs.py.
2022-05-25 14:37:28 +00:00
Arjun Shankar
52a103e237 Fix deadlock when pthread_atfork handler calls pthread_atfork or dlclose
In multi-threaded programs, registering via pthread_atfork,
de-registering implicitly via dlclose, or running pthread_atfork
handlers during fork was protected by an internal lock.  This meant
that a pthread_atfork handler attempting to register another handler or
dlclose a dynamically loaded library would lead to a deadlock.

This commit fixes the deadlock in the following way:

During the execution of handlers at fork time, the atfork lock is
released prior to the execution of each handler and taken again upon its
return.  Any handler registrations or de-registrations that occurred
during the execution of the handler are accounted for before proceeding
with further handler execution.

If a handler that hasn't been executed yet gets de-registered by another
handler during fork, it will not be executed.   If a handler gets
registered by another handler during fork, it will not be executed
during that particular fork.

The possibility that handlers may now be registered or de-registered
during handler execution means that identifying the next handler to be
run, after a given handler may have registered or de-registered others,
requires some bookkeeping.  The fork_handler struct has an additional field, 'id',
which is assigned sequentially during registration.  Thus, handlers are
executed in ascending order of 'id' during 'prepare', and descending
order of 'id' during parent/child handler execution after the fork.

Two tests are included:

* tst-atfork3: Adhemerval Zanella <adhemerval.zanella@linaro.org>
  This test exercises calling dlclose from prepare, parent, and child
  handlers.

* tst-atfork4: This test exercises calling pthread_atfork and dlclose
  from the prepare handler.

[BZ #24595, BZ #27054]

Co-authored-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-05-25 11:27:31 +02:00
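A hedged illustration of the pattern this fix enables, a prepare handler that itself calls pthread_atfork, which could previously deadlock on the internal atfork lock (the function names are invented; this mirrors the scenario of tst-atfork4 only loosely):

```
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void
late_prepare (void)
{
  puts ("late prepare handler");
}

static void
prepare (void)
{
  /* Registering from inside a handler is now safe; per the commit, the
     newly registered handler does not run during this particular fork.  */
  pthread_atfork (late_prepare, NULL, NULL);
}

int
main (void)
{
  pthread_atfork (prepare, NULL, NULL);
  pid_t pid = fork ();
  if (pid == 0)
    _exit (0);
  return 0;
}
```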
Adhemerval Zanella
efeb2bd1ab math: Add math-use-builtins-fabs (BZ#29027)
float, double, and _Float128 are all assumed to be supported
(float and double already only use builtins).  Only long double
is parametrized due to GCC bug 29253, which prevents its usage on
powerpc.

It allows removing the i686, ia64, x86_64, powerpc, and sparc
arch-specific implementations.

On ia64 it also fixes the sNAN handling:

  math/test-float64x-fabs
  math/test-ldouble-fabs

Checked on x86_64-linux-gnu, i686-linux-gnu, powerpc-linux-gnu,
powerpc64-linux-gnu, sparc64-linux-gnu, and ia64-linux-gnu.
2022-05-23 17:49:18 -03:00
Adhemerval Zanella
04b30fe4f8 linux: Add CLONE_NEWTIME from Linux 5.6 to bits/sched.h
It was added in commit 769071ac9f20b6a447410c7eaa55d1a5233ef40c.
2022-05-23 17:49:18 -03:00
Fangrui Song
a7629b1c1b Revert "[ARM][BZ #17711] Fix extern protected data handling"
This reverts commit 3bcea719dd.

Similar to commit e555954e02 for aarch64.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2022-05-23 13:42:10 -07:00
Fangrui Song
e555954e02 Revert "[AArch64][BZ #17711] Fix extern protected data handling"
This reverts commit 0910702c4d.

Say both a.so and b.so define protected data symbol `var` and the executable
copy relocates var.  ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA has strange
semantics: a.so accesses the copy in the executable while b.so accesses its
own.  This behavior requires that (a) the compiler emits GOT-generating
relocations (b) the linker produces GLOB_DAT instead of RELATIVE.

Without the ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA code, b.so's GLOB_DAT
will bind to the executable (normal behavior).

For aarch64 it makes sense to restore the original behavior and not
pay the ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA cost.  The behavior is very
unlikely to be used by anyone.

* Clang code generator treats STV_PROTECTED the same way as STV_HIDDEN:
  no GOT-generating relocation in the first place.
* gold and lld reject copy relocation on a STV_PROTECTED symbol.
* Nowadays -fpie/-fpic modes are popular.  GCC/Clang's codegen uses a
  GOT-generating relocation when accessing a default visibility
  external symbol, which avoids copy relocation.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2022-05-23 13:37:05 -07:00
Noah Goldstein
9a421348cd elf: Optimize _dl_new_hash in dl-new-hash.h
Unroll slightly and enforce good instruction scheduling. This improves
performance on out-of-order machines. The unrolling allows for
pipelined multiplies.

As well, as an optional sysdep, reorder the operations and prevent
reassociation for better scheduling and higher ILP. This commit
only adds the barrier for x86, although it should be either no
change or a win for any architecture.

Unrolling further started to induce slowdowns for sizes [0, 4],
but can help the loop, so if larger sizes are the target, further
unrolling can be beneficial.

Results for _dl_new_hash
Benchmarked on Tigerlake: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz

Time as Geometric Mean of N=30 runs
Geometric of all benchmark New / Old: 0.674
  type, length, New Time, Old Time, New Time / Old Time
 fixed,      0,    2.865,     2.72,               1.053
 fixed,      1,    3.567,    2.489,               1.433
 fixed,      2,    2.577,    3.649,               0.706
 fixed,      3,    3.644,    5.983,               0.609
 fixed,      4,    4.211,    6.833,               0.616
 fixed,      5,    4.741,    9.372,               0.506
 fixed,      6,    5.415,    9.561,               0.566
 fixed,      7,    6.649,   10.789,               0.616
 fixed,      8,    8.081,   11.808,               0.684
 fixed,      9,    8.427,   12.935,               0.651
 fixed,     10,    8.673,   14.134,               0.614
 fixed,     11,    10.69,   15.408,               0.694
 fixed,     12,   10.789,   16.982,               0.635
 fixed,     13,   12.169,   18.411,               0.661
 fixed,     14,   12.659,   19.914,               0.636
 fixed,     15,   13.526,   21.541,               0.628
 fixed,     16,   14.211,   23.088,               0.616
 fixed,     32,   29.412,   52.722,               0.558
 fixed,     64,    65.41,  142.351,               0.459
 fixed,    128,  138.505,  295.625,               0.469
 fixed,    256,  291.707,  601.983,               0.485
random,      2,   12.698,   12.849,               0.988
random,      4,   16.065,   15.857,               1.013
random,      8,   19.564,   21.105,               0.927
random,     16,   23.919,   26.823,               0.892
random,     32,   31.987,   39.591,               0.808
random,     64,   49.282,   71.487,               0.689
random,    128,    82.23,  145.364,               0.566
random,    256,  152.209,  298.434,                0.51

Co-authored-by: Alexander Monakov <amonakov@ispras.ru>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-05-23 10:38:40 -05:00
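For reference, the hash being optimized is the classic GNU symbol hash (h = h * 33 + c, seeded with 5381).  Below is a minimal scalar sketch with a two-byte unrolled loop hinting at the scheduling idea; it does not reproduce the real header's exact unrolling or the x86 reassociation barrier, and the function name is invented.

```
#include <stdint.h>

/* Sketch of the GNU symbol hash computed by _dl_new_hash, processing two
   bytes per iteration.  Applying h = h * 33 + c twice per iteration is
   mathematically identical to the byte-at-a-time loop.  */
static uint32_t
dl_new_hash_sketch (const char *s)
{
  uint32_t h = 5381;
  while (s[0] != '\0' && s[1] != '\0')
    {
      h = h * 33 + (unsigned char) s[0];
      h = h * 33 + (unsigned char) s[1];
      s += 2;
    }
  if (s[0] != '\0')
    h = h * 33 + (unsigned char) s[0];
  return h;
}
```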
Stefan Liebler
728894dba4 S390: Enable static PIE
This commit enables static PIE on 64bit.  On 31bit, static PIE is
not supported.

A new configure check in sysdeps/s390/s390-64/configure.ac also performs
a minimal test for requirements in ld:
Ensure you also have those patches for:
- binutils (ld)
  - "[PR ld/22263] s390: Avoid dynamic TLS relocs in PIE"
    https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=26b1426577b5dcb32d149c64cca3e603b81948a9
    (Tested by configure check above)
    Otherwise there will be a R_390_TLS_TPOFF relocation, which fails to
    be processed in _dl_relocate_static_pie() as static TLS map is not setup.
  - "s390: Add DT_JMPREL pointing to .rela.[i]plt with static-pie"
    https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=d942d8db12adf4c9e5c7d9ed6496a779ece7149e
    (We can't test it in configure as we are not able to link a static PIE
    executable if the system glibc lacks static PIE support)
    Otherwise there won't be DT_JMPREL, DT_PLTRELA, DT_PLTRELASZ entries
    and the IFUNC symbols are not processed, which leads to crashes.

- kernel (the mentioned links to the commits belong to 5.19 merge window):
  - "s390/mmap: increase stack/mmap gap to 128MB"
    https://git.kernel.org/pub/scm/linux/kernel/git/s390/linux.git/commit/?h=features&id=f2f47d0ef72c30622e62471903ea19446ea79ee2
  - "s390/vdso: move vdso mapping to its own function"
    https://git.kernel.org/pub/scm/linux/kernel/git/s390/linux.git/commit/?h=features&id=57761da4dc5cd60bed2c81ba0edb7495c3c740b8
  - "s390/vdso: map vdso above stack"
    https://git.kernel.org/pub/scm/linux/kernel/git/s390/linux.git/commit/?h=features&id=9e37a2e8546f9e48ea76c839116fa5174d14e033
  - "s390/vdso: add vdso randomization"
    https://git.kernel.org/pub/scm/linux/kernel/git/s390/linux.git/commit/?h=features&id=41cd81abafdc4e58a93fcb677712a76885e3ca25
  (We can't test the kernel of the target system)
  Otherwise if /proc/sys/kernel/randomize_va_space is turned off (0),
  static PIE executables like ldconfig will crash.  During startup, sbrk is
  used to enlarge the heap.  Unfortunately the underlying brk syscall fails
  as there is not enough space after the heap.  Then the address of the TLS
  image is invalid and the following memcpy in __libc_setup_tls() leads
  to a segfault.
  If /proc/sys/kernel/randomize_va_space is activated (default: 2), there
  is enough space after HEAP.

- glibc
  - "Linux: Define MMAP_CALL_INTERNAL"
    https://sourceware.org/git/?p=glibc.git;a=commit;h=c1b68685d438373efe64e5f076f4215723004dfb
  - "i386: Remove OPTIMIZE_FOR_GCC_5 from Linux libc-do-syscall.S"
    https://sourceware.org/git/?p=glibc.git;a=commit;h=6e5c7a1e262961adb52443ab91bd2c9b72316402
  - "i386: Honor I386_USE_SYSENTER for 6-argument Linux system calls"
    https://sourceware.org/git/?p=glibc.git;a=commit;h=60f0f2130d30cfd008ca39743027f1e200592dff
  - "ia64: Always define IA64_USE_NEW_STUB as a flag macro"
    https://sourceware.org/git/?p=glibc.git;a=commit;h=18bd9c3d3b1b6a9182698c85354578d1d58e9d64
  - "Linux: Implement a useful version of _startup_fatal"
    https://sourceware.org/git/?p=glibc.git;a=commit;h=a2a6bce7d7e52c1c34369a7da62c501cc350bc31
  - "Linux: Introduce __brk_call for invoking the brk system call"
    https://sourceware.org/git/?p=glibc.git;a=commit;h=b57ab258c1140bc45464b4b9908713e3e0ee35aa
  - "csu: Implement and use _dl_early_allocate during static startup"
    https://sourceware.org/git/?p=glibc.git;a=commit;h=f787e138aa0bf677bf74fa2a08595c446292f3d7
  The mentioned patch series by Florian Weimer avoids the mentioned failing
  sbrk syscall by falling back to mmap.

This commit also adjusts startup code in start.S to be ready for static PIE.
We have to add a wrapper function for main as we are not allowed to use
GOT relocations before __libc_start_main is called.
(Compare also to:
- commit 14d886edbd
  "aarch64: fix start code for static pie"
- commit 3d1d79283e
  "aarch64: fix static pie enabled libc when main is in a shared library"
)
2022-05-18 14:31:26 +02:00
Adhemerval Zanella
d2a1ec2097 linux: Add tst-pidfd.c
To check for the pidfd functions pidfd_open, pidfd_getfd, pidfd_send_signal,
and waitid with P_PIDFD.

Checked on x86_64-linux-gnu and i686-linux-gnu.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
2022-05-17 10:36:59 -03:00
Adhemerval Zanella
b3528b0048 linux: Add P_PIDFD
It was added on Linux 5.4 (3695eae5fee0605f316fbaad0b9e3de791d7dfaf)
to extend waitid to wait on pidfd.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
2022-05-17 10:34:36 -03:00
Adhemerval Zanella
56cf9e8eec linux: Add pidfd_send_signal
This was added on Linux 5.1 (3eb39f47934f9d5a3027fe00d906a45fe3a15fad)
as a way to avoid the race condition of using kill (where the PID might be
reused by the kernel between obtaining the pid and sending the
signal).

If the siginfo_t argument is NULL then pidfd_send_signal is equivalent
to kill.  If it is not NULL pidfd_send_signal is equivalent to
rt_sigqueueinfo.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
2022-05-17 10:33:46 -03:00
Adhemerval Zanella
32dd8c251a linux: Add pidfd_getfd
This was added on Linux 5.6 (8649c322f75c96e7ced2fec201e123b2b073bf09)
as a way to retrieve file descriptors from another process through a
pidfd (created either with CLONE_PIDFD or pidfd_open).  The
functionality is similar to recvmsg with SCM_RIGHTS.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
2022-05-17 10:33:07 -03:00
Adhemerval Zanella
97f5d19c45 linux: Add pidfd_open
This was added on Linux 5.3 (32fcb426ec001cb6d5a4a195091a8486ea77e2df)
as a way to retrieve a pid file descriptor for a process that has not
been created with CLONE_PIDFD (i.e. by the usual fork/clone).

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
2022-05-17 10:32:28 -03:00
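A hedged sketch tying the pidfd-related additions above together: obtain a pidfd for a child created with plain fork, signal it without the PID-reuse race, and reap it with waitid and P_PIDFD.  It assumes the wrappers are declared in <sys/pidfd.h>; the helper name is invented.

```
#define _GNU_SOURCE
#include <sys/pidfd.h>
#include <sys/wait.h>
#include <signal.h>
#include <unistd.h>

int
signal_and_reap (pid_t child)
{
  int pidfd = pidfd_open (child, 0);
  if (pidfd < 0)
    return -1;

  /* Like kill, but cannot hit a recycled PID.  */
  if (pidfd_send_signal (pidfd, SIGTERM, NULL, 0) < 0)
    {
      close (pidfd);
      return -1;
    }

  siginfo_t info;
  int r = waitid (P_PIDFD, pidfd, &info, WEXITED);
  close (pidfd);
  return r;
}
```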
Szabolcs Nagy
1da064c015 aarch64: Move ld.so _start to separate file and drop _dl_skip_args
A separate asm file is easier to maintain than a macro that expands to
inline asm.

The RTLD_START macro is only needed now because _dl_start is local in
rtld.c, but _start has to call it, if _dl_start was made hidden then it
could be empty.

_dl_skip_args is no longer needed.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2022-05-17 10:14:03 +01:00
Szabolcs Nagy
9faf5262c7 linux: Add a getauxval test [BZ #23293]
This is for bug 23293 and it relies on the glibc test system running
tests via explicit ld.so invocation by default.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2022-05-17 10:14:03 +01:00
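For context, getauxval simply reads values out of the auxiliary vector, which is why a clobbered auxv (the subject of bug 23293) is observable from user code.  A trivial, purely illustrative example:

```
#include <sys/auxv.h>
#include <stdio.h>

int
main (void)
{
  /* Returns 0 (with ENOENT) if the entry is missing or auxv is unusable.  */
  unsigned long pagesz = getauxval (AT_PAGESZ);
  printf ("AT_PAGESZ = %lu\n", pagesz);
  return 0;
}
```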
Szabolcs Nagy
86147bbeec rtld: Remove DL_ARGV_NOT_RELRO and make _dl_skip_args const
_dl_skip_args is always 0, so the target specific code that modifies
argv after relro protection is applied is no longer used.

After the patch relro protection is applied to _dl_argv consistently
on all targets.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2022-05-17 10:14:03 +01:00
Szabolcs Nagy
ad43cac44a rtld: Use generic argv adjustment in ld.so [BZ #23293]
When an executable is invoked as

  ./ld.so [ld.so-args] ./exe [exe-args]

then the argv is adjusted in ld.so before calling the entry point of
the executable so ld.so args are not visible to it.  On most targets
this requires moving argv, env and auxv on the stack to ensure correct
stack alignment at the entry point.  This had several issues:

- The code for this adjustment on the stack is written in asm as part
  of the target specific ld.so _start code which is hard to maintain.

- The adjustment is done after _dl_start returns, where it's too late
  to update GLRO(dl_auxv), as it is already readonly, so it points to
  memory that was clobbered by the adjustment. This is bug 23293.

- _environ is also wrong in ld.so after the adjustment, but it is
  likely not used after _dl_start returns so this is not user visible.

- _dl_argv was updated, but for this it was moved out of relro, which
  changes security properties across targets unnecessarily.

This patch introduces a generic _dl_start_args_adjust function that
handles the argument adjustments after ld.so processed its own args
and before relro protection is applied.

The same algorithm is used on all targets, _dl_skip_args is now 0, so
existing target specific adjustment code is no longer used.  The bug
affects aarch64, alpha, arc, arm, csky, ia64, nios2, s390-32 and sparc,
other targets don't need the change in principle, only for consistency.

The GNU Hurd start code relied on _dl_skip_args after dl_main returned;
now it checks directly if args were adjusted and fixes the Hurd startup
data accordingly.

Follow up patches can remove _dl_skip_args and DL_ARGV_NOT_RELRO.

Tested on aarch64-linux-gnu and cross tested on i686-gnu.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-05-17 10:14:03 +01:00
Adhemerval Zanella
d2db60d8d8 Remove dl-librecon.h header.
The Linux version used by i686 and m68k provides three overrides for
generic code:

  1. DISTINGUISH_LIB_VERSIONS to print additional information when
     libc5 is used by a dependency.

  2. EXTRA_LD_ENVVARS that enabled the LD_LIBRARY_VERSION environment
     variable.

  3. EXTRA_UNSECURE_ENVVARS to add two environment variables related
     to aout support.

None are really required; it has been decades since libc5 and a.out
support were removed, and Linux even removed support for a.out files.
The LD_LIBRARY_VERSION handling is also dead code: dl_correct_cache_id is not
used anywhere.

Checked on x86_64-linux-gnu and i686-linux-gnu.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2022-05-16 15:03:49 -03:00
Adhemerval Zanella
c628c22963 elf: Remove ldconfig kernel version check
Now that it was removed on libc.so.
2022-05-16 15:03:49 -03:00
Adhemerval Zanella
b46d250656 Remove kernel version check
The kernel version check is used to avoid running glibc on older
kernels where some syscalls are not available and no fallback code is
enabled to handle the failure gracefully.  However, it does not help
if the kernel does not correctly advertise its version through the
vDSO note, uname, or procfs.

Also, kernel version checks are sometimes not desirable for users
who want to deploy on different systems with different kernel
versions, knowing that the minimum set of syscalls is always present on
such systems.

The kernel version check has been removed along with the
LD_ASSUME_KERNEL environment variable.  The minimum kernel used to
build glibc is still provided through the NT_GNU_ABI_TAG ELF note and
is also printed when libc.so is executed directly.

Checked on x86_64-linux-gnu.
2022-05-16 15:03:49 -03:00
Adhemerval Zanella
97a912f7a8 linux: Use /sys/devices/system/cpu on __get_nprocs_conf (BZ#28991)
Currently on Linux __get_nprocs_conf first tries to enumerate the
CPUs present in the system by iterating over the /sys/devices/system/cpuX
directories.  This only enumerates the CPUs that are present in the
system (although possibly offline), not taking into account CPUs that
might be added to the system through hotplugging.

Linux provides the maximum number of configured CPUs in the
/sys/devices/system/cpu file.  Although it might report a larger
number of possible CPUs on some systems (where the kernel either gets
the information from firmware or is configured at boot time), this is
the information the kernel presents to userland.

This also changes the value returned for _SC_NPROCESSORS_CONF, which
now reports the maximum number of configured CPUs in the system.
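
A minimal sketch of the idea (the "possible" file name and the
"first-last" parsing below are assumptions for illustration, not the
glibc code):

  #include <stdio.h>

  static int
  get_nprocs_conf_sketch (void)
  {
    /* Read a "0-N" style CPU range and derive the configured count;
       fall back to 1 if the file or format is unavailable.  */
    FILE *f = fopen ("/sys/devices/system/cpu/possible", "r");
    int first, last, count = 1;
    if (f != NULL)
      {
        if (fscanf (f, "%d-%d", &first, &last) == 2)
          count = last - first + 1;
        fclose (f);
      }
    return count;
  }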

Checked on x86_64-linux-gnu.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2022-05-16 14:26:49 -03:00
Florian Weimer
f787e138aa csu: Implement and use _dl_early_allocate during static startup
This implements mmap fallback for a brk failure during TLS
allocation.
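
A minimal sketch of the fallback idea (illustrative only; the real
_dl_early_allocate uses internal brk/mmap wrappers and runs before the
TCB is available):

  #include <unistd.h>
  #include <sys/mman.h>

  static void *
  early_allocate_sketch (size_t size)
  {
    /* Try to grow the heap first; if brk/sbrk fails, fall back to an
       anonymous mapping.  */
    void *p = sbrk (size);
    if (p != (void *) -1)
      return p;
    p = mmap (NULL, size, PROT_READ | PROT_WRITE,
              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return p == MAP_FAILED ? NULL : p;
  }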

scripts/tls-elf-edit.py is updated to support the new patching method.
The script no longer requires that the input object be of ET_DYN type.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-05-16 18:42:03 +02:00
Florian Weimer
b57ab258c1 Linux: Introduce __brk_call for invoking the brk system call
Alpha and sparc can now use the generic implementation.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-05-16 18:41:52 +02:00
Adhemerval Zanella
9403b71ae9 x86_64: Remove bzero optimization
Both symbols are marked as legacy in POSIX.1-2001 and removed in
POSIX.1-2008, although the prototypes are still defined for _GNU_SOURCE
or _DEFAULT_SOURCE.

GCC also replaces bcopy with a memmove and bzero with memset in its
default configuration (to actually get a bzero libc call the code must
omit the string.h inclusion and be built with -fno-builtin), so it is
highly unlikely that programs are actually calling the libc bzero symbol.

On a recent Linux distro (Ubuntu 22.04), there are no bzero calls
in the installed binaries.

  $ cat count_bstring.sh
  #!/bin/bash

  files=`IFS=':';for i in $PATH; do test -d "$i" && find "$i" -maxdepth 1 -executable -type f; done`
  total=0
  for file in $files; do
    symbols=`objdump -R "$file" 2>&1`
    if [ $? -eq 0 ]; then
      # Quote "$symbols" so the relocation entries stay one per line.
      ncalls=`echo "$symbols" | grep -w "$1" | wc -l`
      ((total=total+ncalls))
      if [ $ncalls -gt 0 ]; then
        echo "$file: $ncalls"
      fi
    fi
  done
  echo "TOTAL=$total"
  $ ./count_bstring.sh bzero
  TOTAL=0

Checked on x86_64-linux-gnu.
2022-05-16 09:36:06 -03:00
Maciej W. Rozycki
7b1cfba79e RISC-V: Use an autoconf template to produce `preconfigure'
Avoid fiddling with autoconf internals and use AC_DEFINE_UNQUOTED to
define macros in the configuration headers rather than handcoding an
equivalent shell sequence with the use of the `as_echo' undocumented
variable.

Switch to using AC_MSG_ERROR rather than `echo' and `exit' directly for
error handling.  Owing to the lack of any error annotation, the message
is difficult to spot in the output flood of a parallel build, and it is
not logged in `config.log' either.

Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
2022-05-13 17:07:23 +01:00
Maciej W. Rozycki
353a1220e3 MIPS: Use an autoconf template to produce `preconfigure'
Avoid fiddling with autoconf internals and use AC_DEFINE_UNQUOTED to
define macros in the configuration headers rather than handcoding an
equivalent shell sequence with the use of the `as_echo' undocumented
variable.

Similarly, use AC_MSG_ERROR for error handling rather than the internal
undocumented `as_fn_error' helper.  Switch to using 1 as the exit code,
as it makes no sense to refer to $? in the contexts involved; no command
failure is being handled there.
2022-05-13 17:07:23 +01:00
Maciej W. Rozycki
fe7dd93db3 m68k: Use an autoconf template to produce `preconfigure'
Switch to using AC_MSG_ERROR rather than `echo' and `exit' directly for
error handling.  Owing to the lack of any error annotation, the message
is difficult to spot in the output flood of a parallel build, and it is
not logged in `config.log' either.
2022-05-13 17:07:23 +01:00
Maciej W. Rozycki
7c20479d08 C-SKY: Use an autoconf template to produce `preconfigure'
Avoid fiddling with autoconf internals and use AC_DEFINE_UNQUOTED to
define macros in the configuration headers rather than handcoding an
equivalent shell sequence with the use of the `as_echo' undocumented
variable.

Switch to using AC_MSG_ERROR rather than `echo' and `exit' directly for
error handling.  Owing to the lack of any error annotation, the message
is difficult to spot in the output flood of a parallel build, and it is
not logged in `config.log' either.
2022-05-13 17:07:23 +01:00
Adhemerval Zanella
6fad891dfd stdio: Remove the usage of $(fno-unit-at-a-time) for siglist.c
siglist.c is built with -fno-toplevel-reorder to keep the compiler
from reordering the compat assembly directives, due to an assembler
issue [1] (fixed in 2.39).

This patch removes the compiler flag by splitting the compat symbol
generation into two phases.  First, __sys_siglist and __sys_sigabbrev
are preprocessed, without any compat symbol directive, to generate an
assembly source file.  This generated assembly is then used as input
to a platform-agnostic siglist.S, which creates the compat definitions.
This prevents the compiler from moving any compat directive before the
symbol definitions themselves.

Checked on a make check run-built-tests=no on all affected ABIs.

[1] https://sourceware.org/bugzilla/show_bug.cgi?id=29012

Reviewed-by: Fangrui Song <maskray@google.com>
2022-05-13 10:54:41 -03:00
Adhemerval Zanella
900fa25736 stdio: Remove the usage of $(fno-unit-at-a-time) for errlist.c
errlist.c is built with -fno-toplevel-reorder to keep the compiler from
reordering the compat assembly directives, due to an assembler issue [1]
(fixed in 2.39).

This patch removes the compiler flag by splitting the compat symbol
generation into two phases.  First, the internal _sys_errlist_internal
is preprocessed, without any compat symbol directive, to generate an
assembly source file.  This generated assembly is then used as input
to a platform-agnostic errlist-data.S, which creates the compat
definitions.  This prevents the compiler from moving any compat
directive before the _sys_errlist_internal definition itself.

Checked on a make check run-built-tests=no on all affected ABIs.

[1] https://sourceware.org/bugzilla/show_bug.cgi?id=29012
2022-05-13 10:54:41 -03:00
Wangyang Guo
8162147872 nptl: Add backoff mechanism to spinlock loop
When multiple threads are waiting for a lock at the same time, once the
lock owner releases it, all waiters see the lock as available and try to
acquire it, which may cause an expensive CAS storm.

Binary exponential backoff with random jitter is introduced.  As the
number of try-lock attempts increases, it is more likely that a larger
number of threads are competing for the adaptive mutex, so the wait time
is increased exponentially.  A random jitter is also added to avoid
synchronized try-lock attempts from other threads.
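
As a rough sketch of the mechanism (the helper name, the cap and the
jitter source are illustrative, not the glibc implementation):

  #include <x86intrin.h>        /* _mm_pause, __rdtsc */

  #define MAX_BACKOFF_SHIFT 6   /* illustrative cap on the wait */

  static inline void
  spin_backoff (unsigned int *shift)
  {
    unsigned int limit = 1u << *shift;
    /* Random jitter decorrelates retries from different waiters.  */
    unsigned int spins = limit + (unsigned int) __rdtsc () % limit;
    while (spins-- > 0)
      _mm_pause ();
    /* Binary exponential growth, capped to bound the added latency.  */
    if (*shift < MAX_BACKOFF_SHIFT)
      ++*shift;
  }

A waiter would call this between failed try-lock attempts and reset its
counter once the lock is acquired.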

v2: Remove read-check before try-lock for performance.

v3:
1. Restore read-check since it works well in some platform.
2. Make backoff arch dependent, and enable it for x86_64.
3. Limit max backoff to reduce latency in large critical section.

v4: Fix strict-prototypes error in sysdeps/nptl/pthread_mutex_backoff.h

v5: Commit log updated for regression in large critical section.

Result of pthread-mutex-locks bench

Test Platform: Xeon 8280L (2 socket, 112 CPUs in total)
First Row: thread number
First Col: critical section length
Values: backoff vs upstream, time based, low is better

non-critical-length: 1
	1	2	4	8	16	32	64	112	140
0	0.99	0.58	0.52	0.49	0.43	0.44	0.46	0.52	0.54
1	0.98	0.43	0.56	0.50	0.44	0.45	0.50	0.56	0.57
2	0.99	0.41	0.57	0.51	0.45	0.47	0.48	0.60	0.61
4	0.99	0.45	0.59	0.53	0.48	0.49	0.52	0.64	0.65
8	1.00	0.66	0.71	0.63	0.56	0.59	0.66	0.72	0.71
16	0.97	0.78	0.91	0.73	0.67	0.70	0.79	0.80	0.80
32	0.95	1.17	0.98	0.87	0.82	0.86	0.89	0.90	0.90
64	0.96	0.95	1.01	1.01	0.98	1.00	1.03	0.99	0.99
128	0.99	1.01	1.01	1.17	1.08	1.12	1.02	0.97	1.02

non-critical-length: 32
	1	2	4	8	16	32	64	112	140
0	1.03	0.97	0.75	0.65	0.58	0.58	0.56	0.70	0.70
1	0.94	0.95	0.76	0.65	0.58	0.58	0.61	0.71	0.72
2	0.97	0.96	0.77	0.66	0.58	0.59	0.62	0.74	0.74
4	0.99	0.96	0.78	0.66	0.60	0.61	0.66	0.76	0.77
8	0.99	0.99	0.84	0.70	0.64	0.66	0.71	0.80	0.80
16	0.98	0.97	0.95	0.76	0.70	0.73	0.81	0.85	0.84
32	1.04	1.12	1.04	0.89	0.82	0.86	0.93	0.91	0.91
64	0.99	1.15	1.07	1.00	0.99	1.01	1.05	0.99	0.99
128	1.00	1.21	1.20	1.22	1.25	1.31	1.12	1.10	0.99

non-critical-length: 128
	1	2	4	8	16	32	64	112	140
0	1.02	1.00	0.99	0.67	0.61	0.61	0.61	0.74	0.73
1	0.95	0.99	1.00	0.68	0.61	0.60	0.60	0.74	0.74
2	1.00	1.04	1.00	0.68	0.59	0.61	0.65	0.76	0.76
4	1.00	0.96	0.98	0.70	0.63	0.63	0.67	0.78	0.77
8	1.01	1.02	0.89	0.73	0.65	0.67	0.71	0.81	0.80
16	0.99	0.96	0.96	0.79	0.71	0.73	0.80	0.84	0.84
32	0.99	0.95	1.05	0.89	0.84	0.85	0.94	0.92	0.91
64	1.00	0.99	1.16	1.04	1.00	1.02	1.06	0.99	0.99
128	1.00	1.06	0.98	1.14	1.39	1.26	1.08	1.02	0.98

There is a regression for large critical sections, but the adaptive
mutex is aimed at "quick" locks.  Small critical sections are more
common when users choose to use an adaptive pthread_mutex.

Signed-off-by: Wangyang Guo <wangyang.guo@intel.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-05-09 14:38:40 -07:00
Florian Weimer
a2a6bce7d7 Linux: Implement a useful version of _startup_fatal
On i386 and ia64, the TCB is not available at this point.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-05-09 18:15:16 +02:00
Florian Weimer
18bd9c3d3b ia64: Always define IA64_USE_NEW_STUB as a flag macro
And keep the previous definition if it exists.  This allows
disabling IA64_USE_NEW_STUB while keeping USE_DL_SYSINFO defined.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-05-09 18:15:16 +02:00
Adhemerval Zanella
71e2a681f1 linux: Fix posix_spawn return code if clone fails (BZ#29109)
__clone_internal reports the error via errno, so posix_spawn must take
its error code from there.

Checked on x86_64-linux-gnu.
2022-05-06 10:48:30 -03:00
Xiaoming Ni
ed2ddeffa5 clock_adjtime: Use __nonnull to avoid null pointer
clock_adjtime()/clock_adjtime64()
Add __nonnull((2)) to avoid null pointer access.

Link: https://sourceware.org/bugzilla/show_bug.cgi?id=27662
Link: https://sourceware.org/bugzilla/show_bug.cgi?id=29084
Signed-off-by: Xiaoming Ni <nixiaoming@huawei.com>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-05-05 17:48:04 +05:30
Xiaoming Ni
6a9786b8ec ntp_xxxtimex: Use __nonnull to avoid null pointer
ntp_gettime()
ntp_gettime64()
ntp_gettimex()
ntp_gettimex64()
ntp_adjtime()
Add __nonnull((1)) to avoid null pointer access.

Link: https://sourceware.org/bugzilla/show_bug.cgi?id=27662
Link: https://sourceware.org/bugzilla/show_bug.cgi?id=29084
Signed-off-by: Xiaoming Ni <nixiaoming@huawei.com>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-05-05 17:48:04 +05:30
Xiaoming Ni
d62a70fda8 adjtimex/adjtimex64: Use __nonnull to avoid null pointer
Add __nonnull((1)) to the adjtimex()/adjtimex64() function declarations
to avoid null pointer access.
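
For illustration only (a simplified prototype, not the verbatim glibc
header), the attribute lets GCC/Clang diagnose a null pointer constant
argument at compile time (-Wnonnull):

  struct timex;

  /* __nonnull ((1)) marks the first argument as must-not-be-NULL.  */
  extern int adjtimex (struct timex *__buf) __attribute__ ((__nonnull__ (1)));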

Link: https://sourceware.org/bugzilla/show_bug.cgi?id=27662
Link: https://sourceware.org/bugzilla/show_bug.cgi?id=29084
Signed-off-by: Xiaoming Ni <nixiaoming@huawei.com>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-05-05 17:48:04 +05:30
Samuel Thibault
eff158b75d hurd spawni: Fix reauthenticating closed fds
When an fd is closed, the port cell remains, but the port becomes
MACH_PORT_NULL, so we have to guard against it.
2022-05-05 02:14:43 +02:00
Florian Weimer
c1b68685d4 Linux: Define MMAP_CALL_INTERNAL
Unlike MMAP_CALL, this avoids a TCB dependency for an errno update
on failure.

<mmap_internal.h> cannot be included as is on several architectures
due to the definition of page_unit, so introduce a separate header
file for the definition of MMAP_CALL and MMAP_CALL_INTERNAL,
<mmap_call.h>.

Reviewed-by: Stefan Liebler <stli@linux.ibm.com>
2022-05-04 15:37:21 +02:00
Florian Weimer
60f0f2130d i386: Honor I386_USE_SYSENTER for 6-argument Linux system calls
Introduce an int-80h-based version of __libc_do_syscall and use
it if I386_USE_SYSENTER is defined as 0.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-05-04 15:37:21 +02:00
Florian Weimer
6e5c7a1e26 i386: Remove OPTIMIZE_FOR_GCC_5 from Linux libc-do-syscall.S
After commit a78e6a10d0
("i386: Remove broken CAN_USE_REGISTER_ASM_EBP (bug 28771)"),
it is never defined.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-05-04 15:37:21 +02:00
Fangrui Song
4e7e4f3b4b powerpc32: Remove unused HAVE_PPC_SECURE_PLT
82a79e7d18 removed the only user of
HAVE_PPC_SECURE_PLT.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2022-05-02 08:55:36 -07:00
Siddhesh Poyarekar
5b5b1012d5 benchtests: Better libmvec integration
Improve libmvec benchmark integration so that in future other
architectures may be able to run their libmvec benchmarks as well.  This
now allows libmvec benchmarks to be run with `make BENCHSET=bench-math`.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-04-29 11:48:18 +05:30
Siddhesh Poyarekar
944afe6d95 benchtests: Add UNSUPPORTED benchmark status
The libmvec benchmarks print a message indicating that a certain CPU
feature is unsupported and exit prematurely, which breaks the JSON in
bench.out.

Handle this more elegantly in the bench makefile target by adding
support for an UNSUPPORTED exit status (77) so that bench.out continues
to have output for valid tests.
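
For illustration (the feature probe below is hypothetical; real
benchmarks use their own CPU checks), a benchmark reports this state by
exiting with status 77:

  #include <stdio.h>

  /* Hypothetical probe; a real libmvec benchmark checks the required
     CPU feature (e.g. AVX2 or AVX-512).  */
  static int
  feature_usable (void)
  {
    return 0;
  }

  int
  main (void)
  {
    if (!feature_usable ())
      {
        fprintf (stderr, "skipping: required CPU feature not available\n");
        return 77;   /* UNSUPPORTED: handled specially by the bench target */
      }
    /* ... run the benchmark and emit its JSON output ... */
    return 0;
  }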

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2022-04-29 11:48:16 +05:30
Adhemerval Zanella
118a2aee07 linux: Fix fchmodat with AT_SYMLINK_NOFOLLOW for 64 bit time_t (BZ#29097)
The AT_SYMLINK_NOFOLLOW emulation uses the default 32-bit internal stat
calls, which fail with EOVERFLOW if the file contains timestamps beyond
2038.

Checked on i686-linux-gnu.
2022-04-28 09:58:44 -03:00
Noah Goldstein
911c63a51c sysdeps: Add 'get_fast_jitter' interface in fast-jitter.h
'get_fast_jitter' is meant to be used purely for performance
purposes.  In all cases where it is used, it should be acceptable to
get no randomness (see the default case).  An example use case is
setting jitter for retries between threads at a lock.  There is a
performance benefit to having jitter, but only if the jitter can be
generated very quickly; ultimately there is no serious issue if no
jitter is generated.

The implementation generally uses 'HP_TIMING_NOW' iff it is
inlined (to avoid any potential syscall paths).
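
A minimal sketch of the idea (not the actual fast-jitter.h): use a cheap
inlined time source when one is available, and otherwise return a
constant, since callers must tolerate getting no jitter at all:

  static inline unsigned int
  get_fast_jitter_sketch (void)
  {
  #if defined __x86_64__ || defined __i386__
    unsigned int lo;
    /* Cheap, inlined time source; low-quality randomness is fine here.  */
    __asm__ volatile ("rdtsc" : "=a" (lo) : : "edx");
    return lo;
  #else
    return 0;   /* no fast source: callers must cope without jitter */
  #endif
  }
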
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-04-27 17:17:43 -05:00
DJ Delorie
7c477b57a3 posix/glob.c: update from gnulib
Copied from gnulib/lib/glob.c in order to fix rhbz 1982608.
Also fixes swbz 25659.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
2022-04-27 17:19:31 -04:00
Adhemerval Zanella
834ddd0432 linux: Fix missing internal 64 bit time_t stat usage
These are two spots missed by the change initially done in 52a5fe70a2.

Checked on i686-linux-gnu.
2022-04-27 14:21:07 -03:00
Adhemerval Zanella
4f7b7d00e0 posix: Remove unused definition on _Fork
Checked on x86_64-linux-gnu.
2022-04-26 14:21:08 -03:00
Fangrui Song
098a657fe4 elf: Replace PI_STATIC_AND_HIDDEN with opposite HIDDEN_VAR_NEEDS_DYNAMIC_RELOC
PI_STATIC_AND_HIDDEN indicates whether accesses to internal linkage
variables and hidden visibility variables in a shared object (ld.so)
need dynamic relocations (usually R_*_RELATIVE). PI (position
independent) in the macro name is a misnomer: a code sequence using GOT
is typically position-independent as well, but using dynamic relocations
does not meet the requirement.

Not defining PI_STATIC_AND_HIDDEN is legacy and we expect that all new
ports will define PI_STATIC_AND_HIDDEN.  The ports that currently define
PI_STATIC_AND_HIDDEN outnumber those that do not.  Change the configure
default.

No functional change.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-04-26 09:26:22 -07:00
Carlos O'Donell
e465d97653 i386: Regenerate ulps
These failures were caught while building glibc master for Fedora
Rawhide which is built with '-mtune=generic -msse2 -mfpmath=sse'
using gcc 11.3 (gcc-11.3.1-2.fc35) on a Cascadelake Intel Xeon
processor.
2022-04-26 10:52:41 -04:00
Fangrui Song
693517b922 elf: Remove unused enum allowmask
Unused since 52a01100ad
("elf: Remove ad-hoc restrictions on dlopen callers [BZ #22787]").

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2022-04-25 01:01:02 -07:00
Noah Goldstein
c966099cdc x86: Optimize {str|wcs}rchr-evex
The new code unrolls the main loop slightly without adding too much
overhead and minimizes the comparisons for the search CHAR.

Geometric Mean of all benchmarks New / Old: 0.755
See email for all results.

Full xcheck passes on x86_64 with and without multiarch enabled.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-04-22 23:08:43 -05:00
Noah Goldstein
df7e295d18 x86: Optimize {str|wcs}rchr-avx2
The new code unrolls the main loop slightly without adding too much
overhead and minimizes the comparisons for the search CHAR.

Geometric Mean of all benchmarks New / Old: 0.832
See email for all results.

Full xcheck passes on x86_64 with and without multiarch enabled.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-04-22 23:08:40 -05:00
Noah Goldstein
5307aa9c18 x86: Optimize {str|wcs}rchr-sse2
The new code unrolls the main loop slightly without adding too much
overhead and minimizes the comparisons for the search CHAR.

Geometric Mean of all benchmarks New / Old: 0.741
See email for all results.

Full xcheck passes on x86_64 with and without multiarch enabled.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2022-04-22 23:08:36 -05:00
H.J. Lu
8ea20ee5f6 x86-64: Fix SSE2 memcmp and SSSE3 memmove for x32
Clear the upper 32 bits in RDX (memory size) for x32 to fix

FAIL: string/tst-size_t-memcmp
FAIL: string/tst-size_t-memcmp-2
FAIL: string/tst-size_t-memcpy
FAIL: wcsmbs/tst-size_t-wmemcmp

on x32 introduced by

8804157ad9 x86: Optimize memcmp SSE2 in memcmp.S
26b2478322 x86: Reduce code size of mem{move|pcpy|cpy}-ssse3

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
2022-04-22 11:23:15 -07:00
Florian Weimer
198abcbb94 Default to --with-default-link=no (bug 25812)
This is necessary to place the libio vtables into the RELRO segment.
New tests elf/tst-relro-ldso and elf/tst-relro-libc are added to
verify that this is what actually happens.

The new tests fail on ia64 due to the lack of (default) RELRO support
in binutils, so they are XFAILed there.
2022-04-22 10:59:03 +02:00