The generic implementation on top of __bswap_32 always expands
inline to either bswap or movbe, depending on -march=*.
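For illustration (a minimal sketch, not part of the patch; the
function name is made up):
```c
#include <byteswap.h>
#include <stdint.h>

/* With the generic <byteswap.h> this compiles to a single bswap, or
   to movbe when the swap is fused with a load/store and -march=*
   enables MOVBE.  */
uint32_t
swap32 (uint32_t x)
{
  return __bswap_32 (x);
}
```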
Signed-off-by: Cristian Rodríguez <crrodriguez@opensuse.org>
Avoid moving code across SET_RESTORE_ROUNDL in order to fix
[BZ #29463].
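A sketch of the pattern being protected (illustrative only; assumes
the glibc-internal SET_RESTORE_ROUNDL, math_opt_barrier, and
math_force_eval macros):
```c
#include <fenv.h>

long double
op_under_round (long double x, long double y)
{
  long double r;
  SET_RESTORE_ROUNDL (FE_TONEAREST);
  /* The barriers pin the computation inside the region where the
     rounding mode is set, so the compiler cannot move it before the
     mode switch or past the restore.  */
  r = math_opt_barrier (x) + y;
  math_force_eval (r);
  return r;
}
```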
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
Linux 6.0 adds a constant ADDRB, a termios c_cflag bit, to its
include/uapi/asm-generic/termbits-common.h.
Add it accordingly to glibc's bits/termios-c_cflag.h headers. As
other constants in these headers are generally in octal, I converted
the value to octal to match. As ADDRB isn't in a POSIX-reserved
namespace, I made it conditional on __USE_MISC.
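The addition amounts to something like this in each affected header
(Linux's 0x20000000 rendered in octal):
```c
#ifdef __USE_MISC
# define ADDRB 04000000000  /* Address bit.  */
#endif
```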
Tested for x86_64.
Unused at the moment, but evex512 strcmp, strncmp, strcasecmp{l}, and
strncasecmp{l} functions can be added by including strcmp-evex.S with
"x86-evex512-vecs.h" defined.
In addition, save a bit of code size in a few places:
1. tzcnt ... -> bsf ...
2. vpcmp{b|d} $0 ... -> vpcmpeq{b|d}
This saves a touch of code size but has minimal net effect.
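For example, the first substitution drops tzcnt's mandatory F3
prefix; for the nonzero masks these paths operate on, the two
instructions behave identically (a sketch, not the actual libc code):
```c
/* bsf is 0f bc /r; tzcnt is f3 0f bc /r, one byte longer.  Both
   return the index of the lowest set bit of a nonzero mask.  */
static inline unsigned int
first_set (unsigned int mask)
{
  unsigned int idx;
  __asm__ ("bsf %1, %0" : "=r" (idx) : "r" (mask));
  return idx;
}
```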
Full check passes on x86-64.
commit b412213eee
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date: Tue Oct 18 17:44:07 2022 -0700
x86: Optimize strrchr-evex.S and implement with VMM headers
Added `vpcompress{b|d}` to the page-cross logic, which is an
AVX512-VBMI2 instruction. This is not supported on SKX. Since the
page-cross logic is relatively cold and the benefit is minimal,
revert the page-cross case back to the old logic, which is supported
on SKX.
Tested on x86-64.
The ARM preconfigure script tries to detect the capabilities of the
target platform by checking the compiler's predefined architecture
macros. However, if the compiler is tuning for AArch32 on ARMv8/v9,
this step fails:
checking for sysdeps preconfigure fragments... aarch64 alpha arc arm
WARNING: arm/preconfigure: Did not find ARM architecture type; using default
This is because preconfigure.ac doesn't escape the square brackets in
the glob for matching compilers targeting ARMv8. Adding another pair of
brackets to escape the first pair fixes this:
checking for sysdeps preconfigure fragments... aarch64 alpha arc arm
Found compiler is configured for something newer than v7 - using v7
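The underlying autoconf behavior, illustrated (not the verbatim
preconfigure.ac fragment; the variable name is made up): m4 strips
one level of [ ] quoting, so a shell glob in a case pattern needs
doubled brackets to survive into the generated configure script.
```
case "$archcppflag" in
  __ARM_ARCH_[[89]]*)   # emitted into configure as: __ARM_ARCH_[89]*
    machine=armv7
    ;;
esac
```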
Signed-off-by: Felix Riemann <felix.riemann@sma.de>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Remove implicit conversion from enumeration type 'zotypes' to different
type 'nstype'.
Checked on x86_64-linux-gnu.
Reviewed-by: Arjun Shankar <arjun@redhat.com>
The current macros use pid as a signed value, which triggers a compiler
warning for process and thread timers. Replace MAKE_PROCESS_CPUCLOCK
with a static inline function that expects the pid as unsigned; see the
sketch below. This is similar to what Linux does internally.
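A sketch of the replacement's shape, mirroring the Linux encoding
(clock type in the low 3 bits, complemented pid above; exact names
may differ):
```c
#include <time.h>

static inline clockid_t
make_process_cpuclock (unsigned int pid, clockid_t clock)
{
  return (clockid_t) (((~pid) << 3) | (unsigned int) clock);
}
```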
Checked on x86_64-linux-gnu.
Reviewed-by: Arjun Shankar <arjun@redhat.com>
Optimization is:
1. Cache the latest result in the "fast path" loop with `vmovdqu` instead of
`kunpckdq`. This helps if there is more than one match.
Code Size Changes:
strrchr-evex.S : +30 bytes (Same number of cache lines)
Net perf changes:
Reported as the geometric mean of all improvements / regressions from N=10
runs of the benchtests. Values are New Time / Old Time, so < 1.0 is an
improvement and > 1.0 is a regression.
strrchr-evex.S : 0.932 (From cases with higher match frequency)
Full results attached in email.
Full check passes on x86-64.
Optimizations are:
1. Use the fact that lzcnt(0) -> VEC_SIZE for memrchr to save a branch
in the short string case (see the sketch after this list).
2. Save several instructions in len = [VEC_SIZE, 4 * VEC_SIZE] case.
3. Use more code-size efficient instructions.
- tzcnt ... -> bsf ...
- vpcmpb $0 ... -> vpcmpeq ...
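The lzcnt property from point 1, sketched (illustrative, not the
libc code):
```c
/* lzcnt is well defined for a zero source and returns the operand
   width, so a zero 32-bit match mask yields 32 == VEC_SIZE and the
   no-match case needs no separate branch.  */
static inline unsigned int
leading_zeros (unsigned int mask)
{
  unsigned int lz;
  __asm__ ("lzcnt %1, %0" : "=r" (lz) : "r" (mask));
  return lz;
}
```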
Code Size Changes:
memrchr-evex.S : -29 bytes
Net perf changes:
Reported as the geometric mean of all improvements / regressions from N=10
runs of the benchtests. Values are New Time / Old Time, so < 1.0 is an
improvement and > 1.0 is a regression.
memrchr-evex.S : 0.949 (Mostly from improvements in small strings)
Full results attached in email.
Full check passes on x86-64.
Optimizations are:
1. Use the fact that bsf(0) leaves the destination unchanged to save a
branch in the short string case (see the sketch after this list).
2. Restructure code so that small strings are given the hot path.
- This is a net-zero on the benchmark suite but in general makes
sense as smaller sizes are far more common.
3. Use more code-size efficient instructions.
- tzcnt ... -> bsf ...
- vpcmpb $0 ... -> vpcmpeq ...
4. Align labels less aggressively, especially if it doesn't save fetch
blocks or if it would cause the basic block to span extra cache lines.
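The bsf property from point 1, sketched (illustrative; Intel
documents the zero-source result as undefined, but current CPUs
leave the destination unmodified, which is what is relied on here):
```c
static inline unsigned int
first_set_or (unsigned int mask, unsigned int fallback)
{
  /* If mask == 0, bsf writes nothing and the preloaded fallback is
     returned, saving an explicit mask == 0 branch.  */
  unsigned int r = fallback;
  __asm__ ("bsf %1, %0" : "+r" (r) : "r" (mask));
  return r;
}
```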
The optimizations (especially for point 2) make the strnlen and
strlen code essentially incompatible, so split strnlen-evex
into a new file.
Code Size Changes:
strlen-evex.S : -23 bytes
strnlen-evex.S : -167 bytes
Net perf changes:
Reported as the geometric mean of all improvements / regressions from N=10
runs of the benchtests. Values are New Time / Old Time, so < 1.0 is an
improvement and > 1.0 is a regression.
strlen-evex.S : 0.992 (No real change)
strnlen-evex.S : 0.947
Full results attached in email.
Full check passes on x86-64.
Size Optimizations:
1. Condense the hot path for better cache locality.
- This is most impactful for strchrnul, where the logic for strings
with len <= VEC_SIZE or with a match in the first VEC now fits
entirely in the first cache line.
2. Reuse common targets in first 4x VEC and after the loop.
3. Don't align targets so aggressively if it doesn't change the number
of fetch blocks required, and take more care to avoid cases where
targets unnecessarily split cache lines.
4. Align the loop better for the DSB/LSD.
5. Use more code-size efficient instructions.
- tzcnt ... -> bsf ...
- vpcmpb $0 ... -> vpcmpeq ...
6. Align labels less aggressively, especially if it doesn't save fetch
blocks or if it would cause the basic block to span extra cache lines.
Code Size Changes:
strchr-evex.S : -63 bytes
strchrnul-evex.S: -48 bytes
Net perf changes:
Reported as the geometric mean of all improvements / regressions from N=10
runs of the benchtests. Values are New Time / Old Time, so < 1.0 is an
improvement and > 1.0 is a regression.
strchr-evex.S (Fixed) : 0.971
strchr-evex.S (Rand) : 0.932
strchrnul-evex.S : 0.965
Full results attached in email.
Full check passes on x86-64.
Optimizations are:
1. Use the fact that tzcnt(0) -> VEC_SIZE for memchr to save a branch
in the short string case (see the sketch after this list).
2. Restructure code so that small strings are given the hot path.
- This is a net-zero on the benchmark suite but in general makes
sense as smaller sizes are far more common.
3. Use more code-size efficient instructions.
- tzcnt ... -> bsf ...
- vpcmpb $0 ... -> vpcmpeq ...
4. Align labels less aggressively, especially if it doesn't save fetch
blocks or if it would cause the basic block to span extra cache lines.
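The tzcnt property from point 1, sketched (illustrative):
```c
/* tzcnt is well defined for a zero source and returns the operand
   width, so a zero 32-bit match mask yields 32 == VEC_SIZE and the
   no-match case falls through without a branch.  */
static inline unsigned int
trailing_zeros (unsigned int mask)
{
  unsigned int tz;
  __asm__ ("tzcnt %1, %0" : "=r" (tz) : "r" (mask));
  return tz;
}
```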
The optimizations (especially for point 2) make the memchr and
rawmemchr code essentially incompatible, so split rawmemchr-evex
into a new file.
Code Size Changes:
memchr-evex.S : -107 bytes
rawmemchr-evex.S : -53 bytes
Net perf changes:
Reported as the geometric mean of all improvements / regressions from N=10
runs of the benchtests. Values are New Time / Old Time, so < 1.0 is an
improvement and > 1.0 is a regression.
memchr-evex.S : 0.928
rawmemchr-evex.S : 0.986 (Fewer targets cross cache lines)
Full results attached in email.
Full check passes on x86-64.
This patch implements the following evex512 versions of string functions.
The evex512 versions take up to 30% fewer cycles than evex,
depending on length and alignment.
- memchr function using 512 bit vectors.
- rawmemchr function using 512 bit vectors.
- wmemchr function using 512 bit vectors.
Code size data:
memchr-evex.o          762 bytes
memchr-evex512.o       576 bytes (-24%)
rawmemchr-evex.o       461 bytes
rawmemchr-evex512.o    412 bytes (-11%)
wmemchr-evex.o         794 bytes
wmemchr-evex512.o      552 bytes (-30%)
Placeholder function, not used by any processor at the moment.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
In the future, this will result in a compilation failure if the
macros are unexpectedly undefined (due to header inclusion ordering
or header inclusion missing altogether).
Assembler sources are more difficult to convert. In many cases,
they are hand-optimized for the mangling and no-mangling variants,
which is why they are not converted.
sysdeps/s390/s390-32/__longjmp.c and sysdeps/s390/s390-64/__longjmp.c
are special: These are C sources, but most of the implementation is
in assembler, so the PTR_DEMANGLE macro has to be undefined in some
cases, to match the assembler style.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This allows us to define a generic no-op version of PTR_MANGLE and
PTR_DEMANGLE. In the future, we can use PTR_MANGLE and PTR_DEMANGLE
unconditionally in C sources, avoiding an unintended loss of hardening
due to missing include files or unlucky header inclusion ordering.
In i386 and x86_64, we can avoid a <tls.h> dependency in the C
code by using the computed constant from <tcb-offsets.h>. <sysdep.h>
no longer includes these definitions, so there is no cyclic dependency
anymore when computing the <tcb-offsets.h> constants.
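The generic fallback can then be as simple as (illustrative shape,
not the verbatim header):
```c
#ifndef PTR_MANGLE
/* No-op on ports without pointer mangling, so C code can invoke the
   macros unconditionally.  */
# define PTR_MANGLE(var)   ((void) (var))
# define PTR_DEMANGLE(var) ((void) (var))
#endif
```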
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This way, we can define the pointer guard macros without including
<sysdep.h> on x86-64. Other architectures will not have such an
inclusion dependency, and the implied header file inclusion would
create a porting hazard.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This works around a gcc issue where it const-folded inf/inf into nan,
preventing the invalid exception from being signalled.
(x-x)/(x-x) is more robust against optimizations and works for all
out-of-bounds values, including x == nan.
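For illustration (a sketch, not a specific libm function):
```c
/* For finite out-of-domain x this evaluates 0.0/0.0 at run time,
   raising the invalid exception and returning NaN; inf and nan
   inputs also yield NaN, and gcc does not const-fold the division.  */
double
domain_error (double x)
{
  return (x - x) / (x - x);
}
```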
The gcc issue https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95115
should be fixed on release branches starting from gcc-10, but it is
better to change the code in case glibc is built with older gcc.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
To avoid duplicating the VMM / GPR / mask insn macros in all incoming
evex512 files, use the macros defined in 'reg-macros.h' and
'{vec}-macros.h'.
This commit does not change libc.so
Tested build on x86-64
1) Copy the code so that backporting will be easier.
2) Make each section define its macros only if there is no previous
definition.
3) Add a `VEC_lo` definition for the proper register width but in the
ymm/zmm0-15 range.
4) Add macros for accessing GPRs based on VEC_SIZE.
This is to make it easier to do things like:
```
vpcmpb %VEC(0), %VEC(1), %k0
kmov{d|q} %k0, %{eax|rax}
test %{eax|rax}
```
It adds macros such that any GPR can get the proper width with
`V{upcase_GPR_name}`, and any mask insn can get the proper width with
`{upcase_mask_insn_without_postfix}`.
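Illustrative usage (assuming the reg-macros.h names; with
VEC_SIZE == 64 this expands to kmovq and %rax, with VEC_SIZE == 32 to
kmovd and %eax):
```
vpcmpb $0, %VMM(0), %VMM(1), %k0
KMOV   %k0, %VRAX
test   %VRAX, %VRAX
```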
This commit does not change libc.so
Tested build on x86-64
The data in the _ns_debug member must be preserved, otherwise
_dl_debug_initialize enters an infinite loop. To be conservative,
only clear the libc_map member for now, to fix bug 29528.
Fixes commit d0e357ff45
("elf: Call __libc_early_init for reused namespaces (bug 29528)"),
by reverting most of it.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
MAX_FAST_SIZE is 160 at most, so a uint8_t is sufficient. This makes
it harder to use memory corruption (overwriting global_max_fast
with a large value) to fundamentally alter malloc behavior.
Reviewed-by: DJ Delorie <dj@redhat.com>
Besides the option being gcc-specific, this approach is still fragile
and not future-proof, since we do not know if this will be the only
optimization option gcc will add that transforms loops into memset
(or any libcall).
This patch adds a new header, dl-symbol-redir-ifunc.h, that can be
used to redirect the compiler-generated libcalls to the generic
memset implementation if required.
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
A start.o compiled from start.S with -DPIC and no -DSHARED is used by
both crt1.o and rcrt1.o. So the LoongArch static PIE patch
unintentionally introduced PC-relative addressing for main and
__libc_start_main into crt1.o.
While the latest Binutils (trunk, which will be released as 2.40)
supports the PC-relative relocs against an external function by creating
a PLT entry, the 2.39 release branch doesn't (and won't) support this.
An error is raised:
"PLT stub does not represent and symbol not defined."
So, we need the following changes:
1. Check if ld supports the PC-relative relocs against an external
function. If it's not supported, we deem static PIE unsupported.
2. Change start.S. If static PIE is supported, use PC-relative
addressing for main and __libc_start_main and rely on the linker to
create PLT entries. Otherwise, restore the old behavior (using GOT
to address these functions).
An alternative would be adding a new "static-pie-start.S" and some
custom logic to the Makefile to build rcrt1.o with it, and restoring
start.S to the state before the static PIE change so crt1.o won't
contain PC-relative relocs against external symbols. But I can't see
any benefit to this alternative, so I'd rather keep it simple.
Tested by building glibc with the following configurations:
1. Binutils trunk + GCC trunk. Static PIE enabled. All tests
passed.
2. Binutils 2.39 branch + GCC trunk. Static PIE disabled. Tests
related to ifunc failed (it's a known issue). All other tests
passed.
3. Binutils 2.39 branch + GCC 12 branch, cross compilation with
build-many-glibcs.py from x86_64-linux-gnu. Static PIE disabled.
Build succeeded.
As per other architectures. I have checked on armv8 hardware with
the following configurations:
arm-linux-gnueabihf (gcc built with --with-float=hard --with-cpu=arm926ej-s)
armv5-linux-gnueabihf (-march=armv5te -mfpu=vfpv3)
armv7-linux-gnueabihf (-march=armv7-a -mfpu=vfpv3)
armv7-thumb-linux-gnueabihf (-march=armv7-a -mfpu=vfpv3 -mthumb)
armv7-neon-linux-gnueabihf (-march=armv7-a -mfpu=neon)
armv7-neonhard-linux-gnueabihf (-march=armv7-a -mfpu=neon -mfloat-abi=hard)
Without any regression.
I haven't dug into the code, but since the Linux atomic-machine.h handles
pre-ARMv6 and ARMv6, I expect the compiler might have some small room
to optimize.
The code size also improves in most of the configurations:
* master
text data bss dec hex filename
1727801 9720 37928 1775449 1b1759 arm-linux-gnueabihf/libc.so
1691729 9720 37928 1739377 1a8a71 arm-linux-gnueabihf-armv7-disable-multi-arch/libc.so
1725509 9720 37928 1773157 1b0e65 armv5-linux-gnueabihf/libc.so
1700757 9720 37928 1748405 1aadb5 armv6-linux-gnueabihf/libc.so
1698973 9720 37928 1746621 1aa6bd armv6t2-linux-gnueabihf/libc.so
1695481 9752 37928 1743161 1a9939 armv7-linux-gnueabihf/libc.so
1692917 9744 37928 1740589 1a8f2d armv7-neonhard-linux-gnueabihf/libc.so
1692917 9744 37928 1740589 1a8f2d armv7-neon-linux-gnueabihf/libc.so
1225353 9752 37928 1273033 136cc9 armv7-thumb-linux-gnueabihf/libc.so
* patched
text data bss dec hex filename
1726805 9720 37928 1774453 1b1375 arm-linux-gnueabihf/libc.so
1689321 9720 37928 1736969 1a8109 arm-linux-gnueabihf-armv7-disable-multi-arch/libc.so
1724433 9720 37928 1772081 1b0a31 armv5-linux-gnueabihf/libc.so
1698301 9720 37928 1745949 1aa41d armv6-linux-gnueabihf/libc.so
1696525 9720 37928 1744173 1a9d2d armv6t2-linux-gnueabihf/libc.so
1693009 9752 37928 1740689 1a8f91 armv7-linux-gnueabihf/libc.so
1690493 9744 37928 1738165 1a85b5 armv7-neonhard-linux-gnueabihf/libc.so
1690493 9744 37928 1738165 1a85b5 armv7-neon-linux-gnueabihf/libc.so
1223837 9752 37928 1271517 1366dd armv7-thumb-linux-gnueabihf/libc.so
The idea is eventually move all architectures to use compiler builtins.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
For instance, on x86_64 with gcc 12.1.1 and with -fstack-protector
enabled, the empty function still generates a stack-protector code
sequence:
0000000000000000 <_dl_relocate_static_pie>:
0: 48 83 ec 18 sub $0x18,%rsp
4: 64 48 8b 04 25 28 00 mov %fs:0x28,%rax
b: 00 00
d: 48 89 44 24 08 mov %rax,0x8(%rsp)
12: 31 c0 xor %eax,%eax
14: 48 8b 44 24 08 mov 0x8(%rsp),%rax
19: 64 48 2b 04 25 28 00 sub %fs:0x28,%rax
20: 00 00
22: 75 05 jne 29 <_dl_relocate_static_pie+0x29>
24: 48 83 c4 18 add $0x18,%rsp
28: c3 ret
29: e8 00 00 00 00 call 2e <_dl_relocate_static_pie+0x2e>
And since the function is called prior to thread pointer setup, it
triggers an invalid memory access (this is shown by the failure
of elf/tst-tls1-static-non-pie).
Although it might be characterized as a compiler issue or missed
optimization, to be safe also disable the stack protector on the
static-reloc object.
Checked on x86_64-linux-gnu and sparc64-linux-gnu.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
The print_hwcap_1 machinery was useful for the legacy hwcaps
subdirectories, but it is not worth the trouble now that they
are gone.
Signed-off-by: Javier Pello <devel@otheo.eu>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Removal of legacy hwcaps support from the dynamic loader left
no users of _dl_string_hwcap.
Signed-off-by: Javier Pello <devel@otheo.eu>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Add a NEWS entry noting the removal of the legacy hwcaps search
mechanism for shared objects.
Signed-off-by: Javier Pello <devel@otheo.eu>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
They were set to 0 on initialisation and never changed after the
last commit.
Signed-off-by: Javier Pello <devel@otheo.eu>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
The last commit made it so that the value passed for that parameter was
always 0 at its only call site.
Signed-off-by: Javier Pello <devel@otheo.eu>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Remove support for the legacy hwcaps subdirectories from ldconfig.
Signed-off-by: Javier Pello <devel@otheo.eu>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Remove support for the legacy hwcaps subdirectories from the dynamic
loader.
Signed-off-by: Javier Pello <devel@otheo.eu>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
This was to test loading of shared libraries from platform
subdirectories, but this functionality is going away in the
following commits.
Signed-off-by: Javier Pello <devel@otheo.eu>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>