Add inline assembler for the scalbn functions. Passes GLIBC regression.
GCC 13 on LoongArch supports __builtin_scalbn{,f} with -fno-math-errno,
but in GLIBC only libm can be built with -fno-math-errno, and scalbn is
in libc rather than libm because __printf_fp calls it.
GCC 13 compiles these built-ins instead of the generic
implementation for the logb functions.
Link: https://gcc.gnu.org/r13-3922
Co-Authored-By: Xi Ruoyao <xry111@xry111.site>
This patch uses the corresponding GCC built-in for logbf, logb,
logbl and logbf128 if the USE_FUNCTION_BUILTIN macros are defined to one
in math-use-builtins-function.h.
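As a rough sketch of the mechanism (the macro and function names below
are illustrative, not the exact template code):
```
#include <math.h>

/* Sketch: USE_LOGB_BUILTIN stands in for the per-type macro defined
   in math-use-builtins-function.h.  */
#ifndef USE_LOGB_BUILTIN
# define USE_LOGB_BUILTIN 1
#endif

double
logb_impl (double x)  /* illustrative name */
{
#if USE_LOGB_BUILTIN
  return __builtin_logb (x);  /* GCC 13 expands this inline.  */
#else
  return logb (x);            /* stand-in for the generic code */
#endif
}
```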
Co-Authored-By: Xi Ruoyao <xry111@xry111.site>
GCC 13 compiles these built-ins instead of the generic
implementation for the llrint functions.
Link: https://gcc.gnu.org/r13-3920
Co-Authored-By: Xi Ruoyao <xry111@xry111.site>
This patch uses the corresponding GCC built-in for llrintf, llrint,
llrintl and llrintf128 if the USE_FUNCTION_BUILTIN macros are defined to one
in math-use-builtins-function.h.
Co-Authored-By: Xi Ruoyao <xry111@xry111.site>
GCC 13 compiles these built-ins instead of the generic
implementation for the lrint functions.
Link: https://gcc.gnu.org/r13-3920
Co-Authored-By: Xi Ruoyao <xry111@xry111.site>
This patch uses the corresponding GCC built-in for lrintf, lrint,
lrintl and lrintf128 if the USE_FUNCTION_BUILTIN macros are defined to one
in math-use-builtins-function.h.
Co-Authored-By: Xi Ruoyao <xry111@xry111.site>
lld does not implement all the linker optimizations to avoid the GOT
relocation as done by binutils (bfd/elf32-i386.c:elf_i386_convert_load_reloc).
The current 'movl main@GOT(%ebx), %eax' will then create a GOT
relocation when building with lld, which makes a static-pie binary
unable to start the provided main function.
The change uses a __wrap_main local symbol, which in turn calls main
(similar to what aarch64 and s390x do).
Checked on i686-linux-gnu with binutils and lld.
Reviewed-by: Fangrui Song <maskray@google.com>
This patch fixes two problems with audit:
1. The DL_OFFSET_RV_VPCS offset was mixed up with DL_OFFSET_RG_VPCS,
resulting in the x2 register value being nulled in the RG structure.
2. We need to preserve the x8 register before the function call, but
we don't have to save its new value and restore it before returning.
In any case, the final restore was using the OFFSET_RV value instead of
OFFSET_RG, which is wrong (although it doesn't affect anything).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Currently glibc uses in_time_t_range to detect time_t overflow,
and if it occurs falls back to the 64 bit syscall version.
The function name is confusing because internally time_t might be
either 32 bits or 64 bits (depending on __TIMESIZE).
This patch refactors in_time_t_range by replacing it with
in_int32_t_range for the case of checking whether the 64 bit time_t
syscall should be used.
in_time_t_range is kept to detect overflow of the
syscall return value.
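A minimal sketch of the distinction, assuming these shapes (the real
helpers are glibc-internal):
```
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

/* Sketch: does a 64-bit value fit the 32-bit kernel interface?  */
static inline bool
in_int32_t_range (int64_t t)
{
  int32_t s = t;
  return s == t;
}

/* Sketch: does a value fit the user-visible time_t, whatever size
   __TIMESIZE gives it?  */
static inline bool
in_time_t_range (int64_t t)
{
  time_t s = t;
  return s == t;
}
```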
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Use __builtin_{fma, fmaf} to implement the {fma, fmaf} functions instead
of the generic implementation.
* sysdeps/loongarch/fpu/math-use-builtins-fma.h: New file.
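Plausible contents of the new header (a sketch; the real file may
differ in which types it enables):
```
/* sysdeps/loongarch/fpu/math-use-builtins-fma.h (sketch) */
#define USE_FMA_BUILTIN 1
#define USE_FMAF_BUILTIN 1
#define USE_FMAL_BUILTIN 0
#define USE_FMAF128_BUILTIN 0
```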
Old applications pass __IPC_64 as part of the command argument because
old glibc did not check for unknown commands, and passed through the
arguments directly to the kernel, without adding __IPC_64.
Applications need to continue doing that for old glibc compatibility,
so this commit enables this approach in current glibc.
For msgctl and shmctl, if no translation is required, make
direct system calls, as we did before the time64 changes. If
translation is required, mask __IPC_64 from the command argument.
For semctl, the union-in-vararg argument handling means that
translation is needed on all architectures.
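A hedged sketch of the masking step; the __IPC_64 value shown is the
common Linux one and varies by port:
```
/* Sketch: old binaries may OR __IPC_64 into the command themselves,
   so strip it before interpreting the command when arguments have to
   be translated.  */
#define __IPC_64 0x100  /* assumed common Linux value */

static inline int
ipc_cmd_without_ipc64 (int cmd)
{
  return cmd & ~__IPC_64;
}
```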
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The RISC-V architecture extends the cache information for the level 3
cache in the AUX vector in Linux v6.1-rc1.  This patch adds sysconf
support for retrieving the level 3 cache information.
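For example, the information is then visible through the existing
sysconf parameters:
```
#include <stdio.h>
#include <unistd.h>

int
main (void)
{
  /* On RISC-V with Linux >= 6.1 these are now filled from the new
     AUX vector entries.  */
  printf ("L3 size:      %ld\n", sysconf (_SC_LEVEL3_CACHE_SIZE));
  printf ("L3 assoc:     %ld\n", sysconf (_SC_LEVEL3_CACHE_ASSOC));
  printf ("L3 line size: %ld\n", sysconf (_SC_LEVEL3_CACHE_LINESIZE));
  return 0;
}
```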
Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
Implemented:
wcscat-avx2 (+ 744 bytes)
wcscpy-avx2 (+ 539 bytes)
wcpcpy-avx2 (+ 577 bytes)
wcsncpy-avx2 (+1108 bytes)
wcpncpy-avx2 (+1214 bytes)
wcsncat-avx2 (+1085 bytes)
Performance Changes:
Times are from N = 10 runs of the benchmark suite and are reported
as geometric mean of all ratios of New Implementation / Best Old
Implementation. Best Old Implementation was determined with the
highest ISA implementation.
wcscat-avx2 -> 0.975
wcscpy-avx2 -> 0.591
wcpcpy-avx2 -> 0.698
wcsncpy-avx2 -> 0.730
wcpncpy-avx2 -> 0.711
wcsncat-avx2 -> 0.954
Code Size Changes:
This change increases the size of libc.so by ~5.5kb. For
reference the patch optimizing the normal strcpy family functions
decreases libc.so by ~5.2kb.
Full check passes on x86-64 and build succeeds for all ISA levels w/
and w/o multiarch.
Implemented:
wcscat-evex (+ 905 bytes)
wcscpy-evex (+ 674 bytes)
wcpcpy-evex (+ 709 bytes)
wcsncpy-evex (+1358 bytes)
wcpncpy-evex (+1467 bytes)
wcsncat-evex (+1213 bytes)
Performance Changes:
Times are from N = 10 runs of the benchmark suite and are reported
as geometric mean of all ratios of New Implementation / Best Old
Implementation. Best Old Implementation was determined with the
highest ISA implementation.
wcscat-evex -> 0.991
wcscpy-evex -> 0.587
wcpcpy-evex -> 0.695
wcsncpy-evex -> 0.719
wcpncpy-evex -> 0.694
wcsncat-evex -> 0.979
Code Size Changes:
This change increases the size of libc.so by ~6.3kb. For
reference the patch optimizing the normal strcpy family functions
decreases libc.so by ~5.7kb.
Full check passes on x86-64 and build succeeds for all ISA levels w/
and w/o multiarch.
Optimizations are:
1. Use more overlapping stores to avoid branches.
2. Reduce how unrolled the aligning copies are (this is more of a
code-size save, it's a negative for some sizes in terms of
perf).
3. For st{r|p}n{cat|cpy} re-order the branches to minimize the
number that are taken.
Performance Changes:
Times are from N = 10 runs of the benchmark suite and are
reported as geometric mean of all ratios of
New Implementation / Old Implementation.
strcat-avx2 -> 0.998
strcpy-avx2 -> 0.937
stpcpy-avx2 -> 0.971
strncpy-avx2 -> 0.793
stpncpy-avx2 -> 0.775
strncat-avx2 -> 0.962
Code Size Changes:
function -> Bytes New / Bytes Old -> Ratio
strcat-avx2 -> 685 / 1639 -> 0.418
strcpy-avx2 -> 560 / 903 -> 0.620
stpcpy-avx2 -> 592 / 939 -> 0.630
strncpy-avx2 -> 1176 / 2390 -> 0.492
stpncpy-avx2 -> 1268 / 2438 -> 0.520
strncat-avx2 -> 1042 / 2563 -> 0.407
Notes:
1. Because of the significant difference between the
implementations they are split into three files.
strcpy-avx2.S -> strcpy, stpcpy, strcat
strncpy-avx2.S -> strncpy
strncat-avx2.S -> strncat
I couldn't find a way to merge them without making the
ifdefs incredibly difficult to follow.
Full check passes on x86-64 and build succeeds for all ISA levels w/
and w/o multiarch.
Optimizations are:
1. Use more overlapping stores to avoid branches.
2. Reduce how unrolled the aligning copies are (this is more of a
code-size save, it's a negative for some sizes in terms of
perf).
3. Improve the loop a bit (similar to what we do in strlen with
2x vpminu + kortest instead of 3x vpminu + kmov + test).
4. For st{r|p}n{cat|cpy} re-order the branches to minimize the
number that are taken.
Performance Changes:
Times are from N = 10 runs of the benchmark suite and are
reported as geometric mean of all ratios of
New Implementation / Old Implementation.
stpcpy-evex -> 0.922
strcat-evex -> 0.985
strcpy-evex -> 0.880
strncpy-evex -> 0.831
stpncpy-evex -> 0.780
strncat-evex -> 0.958
Code Size Changes:
function -> Bytes New / Bytes Old -> Ratio
strcat-evex -> 819 / 1874 -> 0.437
strcpy-evex -> 700 / 1074 -> 0.652
stpcpy-evex -> 735 / 1094 -> 0.672
strncpy-evex -> 1397 / 2611 -> 0.535
stpncpy-evex -> 1489 / 2691 -> 0.553
strncat-evex -> 1184 / 2832 -> 0.418
Notes:
1. Because of the significant difference between the
implementations they are split into three files.
strcpy-evex.S -> strcpy, stpcpy, strcat
strncpy-evex.S -> strncpy
strncat-evex.S -> strncat
I couldn't find a way to merge them without making the
ifdefs incredibly difficult to follow.
2. All implementations can be made evex512 by including
"x86-evex512-vecs.h" at the top.
3. All implementations have an optional define:
`USE_EVEX_MASKED_STORE`
Setting it to one uses evex-masked stores for handling short
strings. This saves code size and branches. It's disabled
for all implementations at the moment as there are some
serious drawbacks to masked stores in certain cases, but
that may be fixed on future architectures.
Full check passes on x86-64 and build succeeds for all ISA levels w/
and w/o multiarch.
Changes to generated code are:
1. In a few places use `vpcmpeqb` instead of `vpcmpneq` to save a
byte of code size.
2. Add a branch for length <= (VEC_SIZE * 6) as opposed to doing
the entire block of [VEC_SIZE * 4 + 1, VEC_SIZE * 8] in a
single basic-block (the space to add the extra branch without
changing code size is bought with the above change).
Change (2) gives roughly a 20-25% speedup for sizes in [VEC_SIZE * 4 +
1, VEC_SIZE * 6] and is negligible to cost-free for [VEC_SIZE * 6 + 1,
VEC_SIZE * 8].
From N=10 runs on Tigerlake:
align1,align2 ,length ,result ,New Time ,Cur Time ,New Time / Old Time
0 ,0 ,129 ,0 ,5.404 ,6.887 ,0.785
0 ,0 ,129 ,1 ,5.308 ,6.826 ,0.778
0 ,0 ,129 ,18446744073709551615 ,5.359 ,6.823 ,0.785
0 ,0 ,161 ,0 ,5.284 ,6.827 ,0.774
0 ,0 ,161 ,1 ,5.317 ,6.745 ,0.788
0 ,0 ,161 ,18446744073709551615 ,5.406 ,6.778 ,0.798
0 ,0 ,193 ,0 ,6.804 ,6.802 ,1.000
0 ,0 ,193 ,1 ,6.950 ,6.754 ,1.029
0 ,0 ,193 ,18446744073709551615 ,6.792 ,6.719 ,1.011
0 ,0 ,225 ,0 ,6.625 ,6.699 ,0.989
0 ,0 ,225 ,1 ,6.776 ,6.735 ,1.003
0 ,0 ,225 ,18446744073709551615 ,6.758 ,6.738 ,0.992
0 ,0 ,256 ,0 ,5.402 ,5.462 ,0.989
0 ,0 ,256 ,1 ,5.364 ,5.483 ,0.978
0 ,0 ,256 ,18446744073709551615 ,5.341 ,5.539 ,0.964
Rewriting with VMM API allows for memcmpeq-evex to be used with
evex512 by including "x86-evex512-vecs.h" at the top.
Complete check passes on x86-64.
The only change to the existing generated code is `tzcnt` -> `bsf` to
save a byte of code size here and there.
Rewriting with VMM API allows for memcmp-evex-movbe to be used with
evex512 by including "x86-evex512-vecs.h" at the top.
Complete check passes on x86-64.
Similar to ppoll, the poll.h header needs to redirect the poll call
to a proper fortified ppoll with 64 bit time_t support.
The implementation is straightforward, just need to add a similar
check as __poll_chk and call the 64 bit time_t ppoll version. The
debug fortify tests are also extended to cover 64 bit time_t for
affected ABIs.
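A minimal sketch of such a check, assuming a __ppoll64_chk-style entry
point (all symbol names here are illustrative):
```
#include <poll.h>
#include <signal.h>
#include <stddef.h>
#include <time.h>

/* Sketch only: __chk_fail and __ppoll64 are glibc-internal, and the
   new symbol name is an assumption.  */
extern void __chk_fail (void) __attribute__ ((noreturn));
extern int __ppoll64 (struct pollfd *, nfds_t,
                      const struct timespec *, const sigset_t *);

int
__ppoll64_chk (struct pollfd *fds, nfds_t nfds,
               const struct timespec *tmo, const sigset_t *ss,
               size_t fdslen)
{
  /* Same shape as __poll_chk: fail hard if the compiler-known buffer
     size cannot hold nfds entries.  */
  if (fdslen / sizeof (*fds) < nfds)
    __chk_fail ();
  return __ppoll64 (fds, nfds, tmo, ss);
}
```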
Unfortunately it requires an additional symbol, which makes backporting
tricky. One possibility is to add a static inline version if the
compiler supports it and call abort instead of __chk_fail, so the
fortified version will call __poll64 in the end.
Another possibility is to just remove the fortify support for
_TIME_BITS=64.
Checked on i686-linux-gnu.
Changes from v1:
Use the vec API for registers.
Replace VPCMP with VPCMPEQ.
Restructure and remove 1 unconditional jump.
Change page cross logic to use sall.
This patch implements the following evex512 versions of string
functions.  The evex512 versions take up to 30% fewer cycles than evex,
depending on length and alignment.
- strrchr function using 512 bit vectors.
- wcsrchr function using 512 bit vectors.
Code size data:
strrchr-evex.o 879 bytes
strrchr-evex512.o 601 bytes (-32%)
wcsrchr-evex.o 882 bytes
wcsrchr-evex512.o 572 bytes (-35%)
Placeholder function, not used by any processor at the moment.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
This makes it more likely that the compiler can compute the strlen
argument in _startup_fatal at compile time, which is required to
avoid a dependency on strlen this early during process startup.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
The old exception handling implementation used function interposition
to replace the dynamic loader implementation (no TLS support) with the
libc implementation (TLS support). This results in problems if the
link order between the dynamic loader and libc is reversed (bug 25486).
The new implementation moves the entire implementation of the
exception handling functions back into the dynamic loader, using
THREAD_GETMEM and THREAD_SETMEM for thread-local data support.
This depends on Hurd support for these macros, added in commit
b65a82e4e7 ("hurd: Add THREAD_GET/SETMEM/_NC").
One small obstacle is that the exception handling facilities are used
before the TCB has been set up, so a check is needed if the TCB is
available. If not, a regular global variable is used to store the
exception handling information.
Also rename dl-error.c to dl-catch.c, to avoid confusion with the
dlerror function.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Commit 6e8a0aac2f ("time: Fix overflow itimer tests on 32-bit
systems") changed in_time_t_range to assume a 32-bit time_t. This broke
fstatat on MIPSn64, which was using it with a 64-bit time_t due to the
difference between stat and stat64. This commit fixes that by adding a
MIPSn64-specific version, which bypasses the EOVERFLOW tests.
Resolves: BZ #29730
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
clang emits a warning when a double alias redirection is used, to warn
that the original symbol will be used even when the weak definition is
overridden. However, this is a common pattern for weak_alias, where
multiple aliases are set to the same symbol.
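The pattern in question looks roughly like this (names are
illustrative):
```
/* Two weak aliases of the same strong symbol; clang warns that
   overriding one alias does not affect callers of the other.  */
#define weak_alias(name, aliasname) \
  extern __typeof (name) aliasname __attribute__ ((weak, alias (#name)));

int __impl (void) { return 0; }
weak_alias (__impl, first_alias)
weak_alias (__impl, second_alias)
```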
Reviewed-by: Fangrui Song <maskray@google.com>
GCC 13 has added more _FloatN and _FloatNx versions of existing
<math.h> and <complex.h> built-in functions, for use in libstdc++-v3.
This breaks the glibc build because of how those functions are defined
as aliases to functions with the same ABI but different types. Add
appropriate -fno-builtin-* options for compiling relevant files, as
already done for the case of long double functions aliasing double
ones and based on the list of files used there.
I fixed some mistakes in that list of double files that I noticed
while implementing this fix, but there may well be more such
(harmless) cases, in this list or the new one (files that don't
actually exist or don't define the named functions as aliases so don't
need the options). I did try to exclude cases where glibc doesn't
define certain functions for _FloatN or _FloatNx types at all from the
new uses of -fno-builtin-* options. As with the options for double
files (see the commit message for commit
49348beafe, "Fix build with GCC 10 when
long double = double."), it's deliberate that the options are used
even if GCC currently doesn't have a built-in version of a given
function, thus providing some level of future-proofing against more
such built-in functions being added in the future.
Tested with build-many-glibcs.py for aarch64-linux-gnu
powerpc-linux-gnu powerpc64le-linux-gnu x86_64-linux-gnu (compilers
and glibcs builds) with GCC mainline.
This patch makes the following improvements:
- Replace VPCMP with VPCMPEQ.
- Replace page cross check logic with sall.
- Remove extra lea from align_more.
- Remove unconditional loop jump.
- Use bsf to check max length in first vector.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
According to the specification of ISO/IEC TS 18661-1:2014,
The strfromd, strfromf, and strfroml functions are equivalent to
snprintf(s, n, format, fp) (7.21.6.5), except the format string contains only
the character %, an optional precision that does not contain an asterisk *, and
one of the conversion specifiers a, A, e, E, f, F, g, or G, which applies to
the type (double, float, or long double) indicated by the function suffix
(rather than by a length modifier). Use of these functions with any other
format string results in undefined behavior.
strfromf will convert the argument with type float to double first.
According to the latest version of IEEE754 which is published in 2019,
Conversion of a quiet NaN from a narrower format to a wider format in the same
radix, and then back to the same narrower format, should not change the quiet
NaN payload in any way except to make it canonical.
When either an input or result is a NaN, this standard does not interpret the
sign of a NaN. However, operations on bit strings—copy, negate, abs,
copySign—specify the sign bit of a NaN result, sometimes based upon the sign
bit of a NaN operand. The logical predicates totalOrder and isSignMinus are
also affected by the sign bit of a NaN operand. For all other operations, this
standard does not specify the sign bit of a NaN result, even when there is only
one input NaN, or when the NaN is produced from an invalid operation.
So converting NAN or -NAN of type float to double doesn't need to keep
the signbit. As a result, this test case isn't mandatory.
The problem is that according to RISC-V ISA manual in chapter 11.3 of
riscv-isa-20191213,
Except when otherwise stated, if the result of a floating-point operation is
NaN, it is the canonical NaN. The canonical NaN has a positive sign and all
significand bits clear except the MSB, a.k.a. the quiet bit. For
single-precision floating-point, this corresponds to the pattern 0x7fc00000.
which means that converting -NAN from float to double won't keep the signbit.
Since glibc ought to be consistent here between types and architectures, this
patch adds copysign to fix this problem if the string is NAN. This patch
adds two different functions under the sysdeps directory to work around the
issue.
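A hedged sketch of the idea behind those functions (the real helpers
and their names differ):
```
#include <math.h>

/* Widen float to double but force the NaN sign bit to survive, so
   strfromf prints "-nan" for -NAN even where the FPU canonicalizes
   the conversion result (as RISC-V does).  */
static inline double
widen_keeping_sign (float f)   /* illustrative name */
{
  double d = f;
  if (__builtin_expect (isnan (f), 0))
    d = copysign (d, f);
  return d;
}
```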
This patch has been tested on x86_64 and riscv64.
Resolves: BZ #29501
v2: Change from macros to different inline functions.
v3: Add unlikely check to isnan.
v4: Fix wrong commit message header.
v5: Fix style: add space before parentheses.
v6: Add copyright.
Signed-off-by: Letu Ren <fantasquex@gmail.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The extension header is two 32-bit words and in the last header both
should be 0. There is plenty of space in the __reserved area, but it's
better not to write more than we mean to.
Use an empty wordcopy.c to avoid building the generic one.
It does not seem to be used anywhere.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This consolidates the destructor invocations from _dl_fini and
dlclose. Remove the micro-optimization that avoids
calling _dl_call_fini if there are no destructors (as dlclose is quite
expensive anyway). The debug log message is now printed
unconditionally.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This allows the rest of dynamic loader to check whether the TCB
has been set up (and THREAD_GETMEM and THREAD_SETMEM will work).
Reviewed-by: Siddhesh Poyarekar <siddhesh@gotplt.org>
Since __memcpy_simd is the fastest memcpy on almost all cores, replace
the generic memcpy with it. If SVE is available, an SVE memcpy will be
used by default (including for Neoverse N2).
This patch implements the following evex512 versions of string
functions.  The evex512 versions take up to 30% fewer cycles than evex,
depending on length and alignment.
- strchrnul function using 512 bit vectors.
- strchr function using 512 bit vectors.
- wcschr function using 512 bit vectors.
Code size data:
strchrnul-evex.o 599 bytes
strchrnul-evex512.o 569 bytes (-5%)
strchr-evex.o 639 bytes
strchr-evex512.o 595 bytes (-7%)
wcschr-evex.o 644 bytes
wcschr-evex512.o 607 bytes (-6%)
Placeholder function, not used by any processor at the moment.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
The generic Linux struct_stat misses the conditionals to use
bits/struct_stat_time64_helper.h in the __USE_TIME_BITS64 case for
architectures that use __TIMESIZE == 32 (currently csky and nios2).
Since newer ports should not support 32 bit time_t, the generic
implementation should be used as default.
For arm, hppa, and sh a copy of the default struct_stat is added,
while for csky and nios2 a new one based on the generic is used, along
with conditionals to use bits/struct_stat_time64_helper.h.
The default struct_stat is also replaced with the generic one.
Checked on aarch64-linux-gnu and arm-linux-gnueabihf.
Detecting an overflow edge case depended on signed overflow of a long
long. Replace the additions and the overflow checks by
__builtin_add_overflow().
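For illustration, the builtin reports overflow without evaluating the
undefined signed addition:
```
#include <stdbool.h>

/* Sketch: returns true and stores the sum only when no overflow
   occurred; no signed wraparound is ever evaluated.  */
static bool
checked_add (long long a, long long b, long long *sum)
{
  return !__builtin_add_overflow (a, b, sum);
}
```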
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
The builtin bswap is already used if optimization is enabled for
GCC 4.8+, so glibc symbols will be used in very limited scenarios.
Also, gcc generated code is quite similar to all but the ia64 and i386
htons.
Checked on alpha, i686, and ia64.
The generic implementation on top of __bswap_32 always expands
inline to either bswap or movbe depending on -march=*.
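A sketch of the generic shape (the function name is illustrative):
```
#include <byteswap.h>
#include <stdint.h>

uint32_t
htonl_generic (uint32_t x)
{
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
  return __bswap_32 (x);   /* expands to bswap/movbe with GCC */
#else
  return x;                /* big-endian: already network order */
#endif
}
```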
Signed-off-by: Cristian Rodríguez <crrodriguez@opensuse.org>
Avoid moving code across SET_RESTORE_ROUNDL in order to fix
[BZ #29463].
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
Linux 6.0 adds a constant ADDRB, a termios c_cflag bit, to its
include/uapi/asm-generic/termbits-common.h.
Add it accordingly to glibc's bits/termios-c_cflag.h headers. As
other constants in these headers are generally in octal, I converted
the value to octal to match. As ADDRB isn't in a POSIX-reserved
namespace, I made it conditional on __USE_MISC.
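The addition is expected to look like this (value assumed from Linux's
termbits-common.h, 0x20000000, written in octal):
```
#ifdef __USE_MISC
# define ADDRB 04000000000  /* address bit */
#endif
```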
Tested for x86_64.
Unused at the moment, but evex512 strcmp, strncmp, strcasecmp{l}, and
strncasecmp{l} functions can be added by including strcmp-evex.S with
"x86-evex512-vecs.h" defined.
In addition, save a bit of code size in a few places.
1. tzcnt ... -> bsf ...
2. vpcmp{b|d} $0 ... -> vpcmpeq{b|d}
This saves a touch of code size but has minimal net effect.
Full check passes on x86-64.
commit b412213eee
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date: Tue Oct 18 17:44:07 2022 -0700
x86: Optimize strrchr-evex.S and implement with VMM headers
Added `vpcompress{b|d}` to the page-cross logic, which is an
AVX512-VBMI2 instruction. This is not supported on SKX. Since the
page-cross logic is relatively cold and the benefit is minimal,
revert the page-cross case back to the old logic which is supported
on SKX.
Tested on x86-64.
The ARM preconfigure script tries to detect the capabilities of the
target platform by checking the compiler's predefined architecture
macros. However, if the compiler is tuning for AArch32 on ARMv8/v9 this
step fails:
checking for sysdeps preconfigure fragments... aarch64 alpha arc arm
WARNING: arm/preconfigure: Did not find ARM architecture type; using default
This is because preconfigure.ac doesn't escape the square brackets in
the glob for matching compilers targeting ARMv8. Adding another pair of
brackets to escape the first pair fixes this:
checking for sysdeps preconfigure fragments... aarch64 alpha arc arm
Found compiler is configured for something newer than v7 - using v7
Signed-off-by: Felix Riemann <felix.riemann@sma.de>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The current macro uses the pid as a signed value, which triggers a
compiler warning for process and thread timers. Replace
MAKE_PROCESS_CPUCLOCK with a static inline function that expects the
pid as unsigned. This is similar to what Linux does internally.
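A sketch of the replacement, modeled on the Linux encoding (the name
is assumed):
```
#include <time.h>

/* Encode a PID-based CPU clock: the PID goes in the upper bits
   (inverted), the clock type in the low 3 bits, as Linux does.  */
static inline clockid_t
make_process_cpuclock (unsigned int pid, clockid_t clock)
{
  return ((~pid) << 3) | clock;
}
```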
Checked on x86_64-linux-gnu.
Reviewed-by: Arjun Shankar <arjun@redhat.com>
Optimization is:
1. Cache latest result in "fast path" loop with `vmovdqu` instead of
`kunpckdq`. This helps if there is more than one match.
Code Size Changes:
strrchr-evex.S : +30 bytes (Same number of cache lines)
Net perf changes:
Reported as geometric mean of all improvements / regressions from N=10
runs of the benchtests. Value as New Time / Old Time so < 1.0 is
improvement and 1.0 is regression.
strrchr-evex.S : 0.932 (From cases with higher match frequency)
Full results attached in email.
Full check passes on x86-64.
Optimizations are:
1. Use the fact that lzcnt(0) -> VEC_SIZE for memchr to save a branch
in short string case.
2. Save several instructions in len = [VEC_SIZE, 4 * VEC_SIZE] case.
3. Use more code-size efficient instructions.
- tzcnt ... -> bsf ...
- vpcmpb $0 ... -> vpcmpeq ...
Code Size Changes:
memrchr-evex.S : -29 bytes
Net perf changes:
Reported as geometric mean of all improvements / regressions from N=10
runs of the benchtests. Value as New Time / Old Time so < 1.0 is
improvement and 1.0 is regression.
memrchr-evex.S : 0.949 (Mostly from improvements in small strings)
Full results attached in email.
Full check passes on x86-64.
Optimizations are:
1. Use the fact that bsf(0) leaves the destination unchanged to save a
branch in short string case.
2. Restructure code so that small strings are given the hot path.
- This is a net-zero on the benchmark suite but in general makes
sense as smaller sizes are far more common.
3. Use more code-size efficient instructions.
- tzcnt ... -> bsf ...
- vpcmpb $0 ... -> vpcmpeq ...
4. Align labels less aggressively, especially if it doesn't save fetch
blocks / causes the basic-block to span extra cache-lines.
The optimizations (especially for point 2) make the strnlen and
strlen code essentially incompatible so split strnlen-evex
to a new file.
Code Size Changes:
strlen-evex.S : -23 bytes
strnlen-evex.S : -167 bytes
Net perf changes:
Reported as geometric mean of all improvements / regressions from N=10
runs of the benchtests. Value as New Time / Old Time so < 1.0 is
improvement and 1.0 is regression.
strlen-evex.S : 0.992 (No real change)
strnlen-evex.S : 0.947
Full results attached in email.
Full check passes on x86-64.
Size Optimizations:
1. Condense the hot path for better cache-locality.
- This has the most impact for strchrnul, where the logic for
strings with len <= VEC_SIZE or with a match in the first VEC now
fits entirely in the first cache line.
2. Reuse common targets in first 4x VEC and after the loop.
3. Don't align targets so aggressively if it doesn't change the number
of fetch blocks it will require and put more care in avoiding the
case where targets unnecessarily split cache lines.
4. Align the loop better for DSB/LSD
5. Use more code-size efficient instructions.
- tzcnt ... -> bsf ...
- vpcmpb $0 ... -> vpcmpeq ...
6. Align labels less aggressively, especially if it doesn't save fetch
blocks / causes the basic-block to span extra cache-lines.
Code Size Changes:
strchr-evex.S : -63 bytes
strchrnul-evex.S: -48 bytes
Net perf changes:
Reported as geometric mean of all improvements / regressions from N=10
runs of the benchtests. Value as New Time / Old Time so < 1.0 is
improvement and 1.0 is regression.
strchr-evex.S (Fixed) : 0.971
strchr-evex.S (Rand) : 0.932
strchrnul-evex.S : 0.965
Full results attached in email.
Full check passes on x86-64.
Optimizations are:
1. Use the fact that tzcnt(0) -> VEC_SIZE for memchr to save a branch
in short string case.
2. Restructure code so that small strings are given the hot path.
- This is a net-zero on the benchmark suite but in general makes
sense as smaller sizes are far more common.
3. Use more code-size efficient instructions.
- tzcnt ... -> bsf ...
- vpcmpb $0 ... -> vpcmpeq ...
4. Align labels less aggressively, especially if it doesn't save fetch
blocks / causes the basic-block to span extra cache-lines.
The optimizations (especially for point 2) make the memchr and
rawmemchr code essentially incompatible so split rawmemchr-evex
to a new file.
Code Size Changes:
memchr-evex.S : -107 bytes
rawmemchr-evex.S : -53 bytes
Net perf changes:
Reported as geometric mean of all improvements / regressions from N=10
runs of the benchtests. Value as New Time / Old Time so < 1.0 is
improvement and 1.0 is regression.
memchr-evex.S : 0.928
rawmemchr-evex.S : 0.986 (Fewer targets cross cache lines)
Full results attached in email.
Full check passes on x86-64.
This patch implements the following evex512 versions of string
functions.  The evex512 versions take up to 30% fewer cycles than evex,
depending on length and alignment.
- memchr function using 512 bit vectors.
- rawmemchr function using 512 bit vectors.
- wmemchr function using 512 bit vectors.
Code size data:
memchr-evex.o 762 bytes
memchr-evex512.o 576 bytes (-24%)
rawmemchr-evex.o 461 bytes
rawmemchr-evex512.o 412 bytes (-11%)
wmemchr-evex.o 794 bytes
wmemchr-evex512.o 552 bytes (-30%)
Placeholder function, not used by any processor at the moment.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
In the future, this will result in a compilation failure if the
macros are unexpectedly undefined (due to header inclusion ordering
or header inclusion missing altogether).
Assembler sources are more difficult to convert. In many cases,
they are hand-optimized for the mangling and no-mangling variants,
which is why they are not converted.
sysdeps/s390/s390-32/__longjmp.c and sysdeps/s390/s390-64/__longjmp.c
are special: These are C sources, but most of the implementation is
in assembler, so the PTR_DEMANGLE macro has to be undefined in some
cases, to match the assembler style.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This allows us to define a generic no-op version of PTR_MANGLE and
PTR_DEMANGLE. In the future, we can use PTR_MANGLE and PTR_DEMANGLE
unconditionally in C sources, avoiding an unintended loss of hardening
due to missing include files or unlucky header inclusion ordering.
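The generic fallback can then be as simple as this sketch (header name
per this series):
```
/* pointer_guard.h (generic, sketch): no mangling on architectures
   that do not implement pointer guarding, so C code may use the
   macros unconditionally.  */
#ifndef PTR_MANGLE
# define PTR_MANGLE(var)
# define PTR_DEMANGLE(var)
#endif
```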
In i386 and x86_64, we can avoid a <tls.h> dependency in the C
code by using the computed constant from <tcb-offsets.h>. <sysdep.h>
no longer includes these definitions, so there is no cyclic dependency
anymore when computing the <tcb-offsets.h> constants.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This way, we can define the pointer guard macros without including
<sysdep.h> on x86-64. Other architectures will not have such an
inclusion dependency, and the implied header file inclusion would
create a porting hazard.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This works around a gcc issue where it const-folded inf/inf into nan,
preventing the invalid exception from being signalled.
(x-x)/(x-x) is more robust against optimizations and works for all
out of bounds values including x==nan.
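A sketch of why the idiom works:
```
/* For finite x, (x - x) / (x - x) is 0.0/0.0, which raises
   FE_INVALID and returns a NaN; for infinite x, x - x is already an
   invalid operation.  A quiet NaN simply propagates.  Unlike a
   literal inf/inf, older GCC does not const-fold this away.  */
static double
force_invalid (double x)
{
  return (x - x) / (x - x);
}
```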
The gcc issue https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95115
should be fixed on release branches starting from gcc-10, but it is
better to change the code in case glibc is built with older gcc.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
To avoid duplicating the VMM / GPR / mask insn macros in all incoming
evex512 files, use the macros defined in 'reg-macros.h' and
'{vec}-macros.h'.
This commit does not change libc.so
Tested build on x86-64
1) Copy so that backport will be easier.
2) Make the section only define if there is not a previous definition.
3) Add `VEC_lo` definition for proper reg-width but in the
ymm/zmm0-15 range.
4) Add macros for accessing GPRs based on VEC_SIZE
This is to make it easier to do things like:
```
vpcmpb %VEC(0), %VEC(1), %k0
kmov{d|q} %k0, %{eax|rax}
test %{eax|rax}
```
It adds macros so that any GPR can get the proper width with:
`V{upcase_GPR_name}`
and any mask insn can get the proper width with:
`{upcase_mask_insn_without_postfix}`
This commit does not change libc.so
Tested build on x86-64
Besides the option being gcc specific, this approach is still fragile
and not future proof since we do not know if this will be the only
optimization option gcc will add that transforms loops to memset
(or any libcall).
This patch adds a new header, dl-symbol-redir-ifunc.h, that can be
used to redirect the compiler-generated libcalls to the generic
memset implementation if required.
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
A start.o compiled from start.S with -DPIC and no -DSHARED is used by
both crt1.o and rcrt1.o. So the LoongArch static PIE patch
unintentionally introduced PC-relative addressing for main and
__libc_start_main into crt1.o.
While the latest Binutils (trunk, which will be released as 2.40)
supports the PC-relative relocs against an external function by creating
a PLT entry, the 2.39 release branch doesn't (and won't) support this.
An error is raised:
"PLT stub does not represent and symbol not defined."
So, we need the following changes:
1. Check if ld supports the PC-relative relocs against an external
function. If it's not supported, we deem static PIE unsupported.
2. Change start.S. If static PIE is supported, use PC-relative
addressing for main and __libc_start_main and rely on the linker to
create PLT entries. Otherwise, restore the old behavior (using GOT
to address these functions).
An alternative would be adding a new "static-pie-start.S", and some
custom logic into Makefile to build rcrt1.o with it. And, restore
start.S to the state before static PIE change so crt1.o won't contain
PC-relative relocs against external symbols. But I can't see any
benefit of this alternative, so I'd just keep it simple.
Tested by building glibc with the following configurations:
1. Binutils trunk + GCC trunk. Static PIE enabled. All tests
passed.
2. Binutils 2.39 branch + GCC trunk. Static PIE disabled. Tests
related to ifunc failed (it's a known issue). All other tests
passed.
3. Binutils 2.39 branch + GCC 12 branch, cross compilation with
build-many-glibcs.py from x86_64-linux-gnu. Static PIE disabled.
Build succeeded.
As per other architectures. I have checked on armv8 hardware with
the following configurations:
arm-linux-gnueabihf (gcc built with --with-float=hard --with-cpu=arm926ej-s)
armv5-linux-gnueabihf (-march=armv5te -mfpu=vfpv3)
armv7-linux-gnueabihf (-march=armv7-a -mfpu=vfpv3)
armv7-thumb-linux-gnueabihf (-march=armv7-a -mfpu=vfpv3 -mthumb)
armv7-neon-linux-gnueabihf (-march=armv7-a -mfpu=neon)
armv7-neonhard-linux-gnueabihf (-march=armv7-a -mfpu=neon -mfloat-abi=hard)
Without any regression.
I haven't dug into the code, but since the Linux atomic-machine.h
handles pre-ARMv6 and ARMv6, I expect the compiler might have some
small room to optimize.
The code size also improves in most of the configurations:
* master
text data bss dec hex filename
1727801 9720 37928 1775449 1b1759 arm-linux-gnueabihf/libc.so
1691729 9720 37928 1739377 1a8a71 arm-linux-gnueabihf-armv7-disable-multi-arch/libc.so
1725509 9720 37928 1773157 1b0e65 armv5-linux-gnueabihf/libc.so
1700757 9720 37928 1748405 1aadb5 armv6-linux-gnueabihf/libc.so
1698973 9720 37928 1746621 1aa6bd armv6t2-linux-gnueabihf/libc.so
1695481 9752 37928 1743161 1a9939 armv7-linux-gnueabihf/libc.so
1692917 9744 37928 1740589 1a8f2d armv7-neonhard-linux-gnueabihf/libc.so
1692917 9744 37928 1740589 1a8f2d armv7-neon-linux-gnueabihf/libc.so
1225353 9752 37928 1273033 136cc9 armv7-thumb-linux-gnueabihf/libc.so
* patched
text data bss dec hex filename
1726805 9720 37928 1774453 1b1375 arm-linux-gnueabihf/libc.so
1689321 9720 37928 1736969 1a8109 arm-linux-gnueabihf-armv7-disable-multi-arch/libc.so
1724433 9720 37928 1772081 1b0a31 armv5-linux-gnueabihf/libc.so
1698301 9720 37928 1745949 1aa41d armv6-linux-gnueabihf/libc.so
1696525 9720 37928 1744173 1a9d2d armv6t2-linux-gnueabihf/libc.so
1693009 9752 37928 1740689 1a8f91 armv7-linux-gnueabihf/libc.so
1690493 9744 37928 1738165 1a85b5 armv7-neonhard-linux-gnueabihf/libc.so
1690493 9744 37928 1738165 1a85b5 armv7-neon-linux-gnueabihf/libc.so
1223837 9752 37928 1271517 1366dd armv7-thumb-linux-gnueabihf/libc.so
The idea is to eventually move all architectures to the compiler builtins.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
Removal of legacy hwcaps support from the dynamic loader left
no users of _dl_string_hwcap.
Signed-off-by: Javier Pello <devel@otheo.eu>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The last commit made it so that the value passed for that parameter
was always 0 at its only call site.
Signed-off-by: Javier Pello <devel@otheo.eu>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
This was to test loading of shared libraries from platform
subdirectories, but this functionality is going away in the
following commits.
Signed-off-by: Javier Pello <devel@otheo.eu>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This patch updates the kernel version in the tests tst-mman-consts.py,
tst-mount-consts.py and tst-pidfd-consts.py to 6.0. (There are no new
constants covered by these tests in 6.0 that need any other header
changes.)
Tested with build-many-glibcs.py.
The compiler might transform __stpcpy calls (which are routed to
__builtin_stpcpy as an optimization) to strcpy, and the x86_64 strcpy
multiarch implementation does not build any working symbol due to
ISA_SHOULD_BUILD not being evaluated for IS_IN(rtld).
Checked on x86_64-linux-gnu.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
This addition to the list of source headers in
sysdeps/mach/hurd/bits/errno.h appears in the source tree after
build-many-glibcs.py runs; I'm guessing it results from gnumach commit
c566ad85a2d6728ebc8ec0f461a3b35df300e96e.
Linux 6.0 has no new syscalls. Update the version number in
syscall-names.list to reflect that it is still current for 6.0.
Tested with build-many-glibcs.py.
The AVX2 strrchr and wcsrchr implementation uses the 'blsmsk'
instruction which belongs to the BMI1 CPU feature and the 'shrx'
instruction, which belongs to the BMI2 CPU feature.
Fixes: df7e295d18 ("x86: Optimize {str|wcs}rchr-avx2")
Partially resolves: BZ #29611
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
The AVX2 memrchr implementation uses the 'shlxl' instruction, which
belongs to the BMI2 CPU feature and uses the 'lzcnt' instruction, which
belongs to the LZCNT CPU feature.
Fixes: af5306a735 ("x86: Optimize memrchr-avx2.S")
Partially resolves: BZ #29611
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
The AVX2 memchr, rawmemchr and wmemchr implementations use the 'bzhi'
and 'sarx' instructions, which belong to the BMI2 CPU feature.
Fixes: acfd088a19 ("x86: Optimize memchr-avx2.S")
Partially resolves: BZ #29611
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
The AVX2 wcs(n)cmp implementations use the 'bzhi' instruction, which
belongs to the BMI2 CPU feature.
NB: It also uses the 'tzcnt' BMI1 instruction, but it is executed as
BSF if the CPU doesn't support TZCNT, and produces the same result
for non-zero input.
Partially fixes: b77b06e0e2 ("x86: Optimize strcmp-avx2.S")
Partially resolves: BZ #29611
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>