The Linux syscall ABI returns long, so the generic Linux syscall code
should use long for the return value.
This fixes truncation of the return value of the syscall function
when it does not fit into an int.
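For illustration, a minimal sketch of the truncation with a
hypothetical value (this is not glibc code; it assumes a 64-bit long):

#include <stdio.h>

int
main (void)
{
  long full = 0x123456789aL;   /* e.g. a value needing more than 32 bits */
  int truncated = (int) full;  /* what an int-typed return would keep */
  printf ("long: %lx, int: %x\n", full, truncated);
  return 0;
}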
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The assembler is never invoked directly, but always through the CC
wrapper. The binutils version check is done with LD instead.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Previously, each call to getrandom would traverse the file system to
find /dev/urandom, fetch some random data from it, and then throw away
the port. This is quite slow, whereas calls to getrandom are generally
expected to be fast.
Additionally, this means that getrandom cannot work when /dev/urandom
is unavailable, such as inside a chroot that lacks one. User programs
expect calls to getrandom to work inside a chroot if they first call
getrandom outside of the chroot.
In particular, this is known to break the OpenSSH server, and in that
case the issue is exacerbated by the API of arc4random, which prevents
it from properly reporting errors, forcing glibc to abort on failure.
This causes sshd to just die once it tries to generate a random number.
Caching the random server port, in a manner similar to how socket
server ports are cached, both improves the performance and works around
the chroot issue.
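A minimal sketch of the caching idea, with hypothetical names; the
real code mirrors how socket server ports are cached:

#include <fcntl.h>
#include <hurd.h>
#include <mach.h>

/* Hypothetical cache; the real code also needs to be thread-safe and
   to redo the lookup if an RPC on the cached port fails because the
   random server died.  */
static mach_port_t random_server_port = MACH_PORT_NULL;

static mach_port_t
get_random_server (void)
{
  if (random_server_port == MACH_PORT_NULL)
    /* One file-system traversal; later getrandom calls reuse the
       port, even from inside a chroot.  */
    random_server_port = file_name_lookup ("/dev/urandom", O_RDONLY, 0);
  return random_server_port;
}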
Tested on i686-gnu with the following program:
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define THREAD_COUNT 16		/* not specified in the original snippet */

pthread_barrier_t barrier;

void *
worker (void *arg)
{
  pthread_barrier_wait (&barrier);
  uint32_t sum = 0;
  for (int i = 0; i < 10000; i++)
    sum += arc4random ();
  return (void *) (uintptr_t) sum;
}

int
main (void)
{
  pthread_t threads[THREAD_COUNT];
  pthread_barrier_init (&barrier, NULL, THREAD_COUNT);
  for (int i = 0; i < THREAD_COUNT; i++)
    pthread_create (&threads[i], NULL, worker, NULL);
  for (int i = 0; i < THREAD_COUNT; i++)
    {
      void *retval;
      pthread_join (threads[i], &retval);
      printf ("Thread %i: %lu\n", i, (unsigned long) (uintptr_t) retval);
    }
  return 0;
}
In my totally unscientific benchmark, with this patch, this completes
in about 7 seconds, whereas previously it took about 50 seconds. This
program was also used to test that getrandom () doesn't explode if the
random server dies, but instead reopens /dev/urandom anew. I have
also verified that with this patch, OpenSSH can once again accept
connections properly.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20221202135558.23781-1-bugaevc@gmail.com>
This patch cleans up the power4 strncmp optimization for powerpc64,
which is unlikely to be used anywhere.
Tested on ppc64le with and without --disable-multi-arch flag.
Reviewed-by: Paul E. Murphy <murphyp@linux.ibm.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
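For reference, the usual zero-extension idiom, shown here as a hedged
C sketch rather than the patch's exact assembly:

#include <stdint.h>

/* On x86-64, writing a 32-bit register clears bits 63:32, so a single
   32-bit move of the length register to itself sanitizes the value.  */
static inline uint64_t
zero_extend_32 (uint64_t len)
{
  uint64_t out;
  asm ("movl %k1, %k0" : "=r" (out) : "r" (len));
  return out;
}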
This patch fixes strncpy for x32. Tested on x86-64 and x32. On x86-64,
libc.so is the same with and without the fix.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
This patch fixes strncat for x32. Tested on x86-64 and x32. On x86-64,
libc.so is the same with and without the fix.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
Add inline assembler for the scalbn functions. Passes glibc regression
tests. With GCC 13, LoongArch supports __builtin_scalbn{,f} with
-fno-math-errno, but only libm can use -fno-math-errno in glibc, and
scalbn is in libc instead of libm because __printf_fp calls it.
GCC 13 compiles these built-ins instead of the generic
implementation for the logb functions.
Link: https://gcc.gnu.org/r13-3922
Co-Authored-By: Xi Ruoyao <xry111@xry111.site>
This patch uses the corresponding GCC builtin for logbf, logb,
logbl and logbf128 if the USE_FUNCTION_BUILTIN macros are defined to one
in math-use-builtins-function.h.
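A sketch of the selection pattern; the macro spelling here is an
assumption, not the verbatim glibc code:

#include <math.h>

double
logb_sketch (double x)
{
#if USE_LOGB_BUILTIN
  return __builtin_logb (x);	/* expanded inline by GCC 13 */
#else
  return logb (x);		/* stand-in for the generic implementation */
#endif
}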
Co-Authored-By: Xi Ruoyao <xry111@xry111.site>
GCC 13 compiles these built-ins instead of the generic
implementation for the llrint functions.
Link: https://gcc.gnu.org/r13-3920
Co-Authored-By: Xi Ruoyao <xry111@xry111.site>
This patch uses the corresponding GCC builtin for llrintf, llrint,
llrintl and llrintf128 if the USE_FUNCTION_BUILTIN macros are defined to one
in math-use-builtins-function.h.
Co-Authored-By: Xi Ruoyao <xry111@xry111.site>
GCC 13 compiles these built-ins instead of the generic
implementation for the lrint functions.
Link: https://gcc.gnu.org/r13-3920
Co-Authored-By: Xi Ruoyao <xry111@xry111.site>
This patch uses the corresponding GCC builtin for lrintf, lrint,
lrintl and lrintf128 if the USE_FUNCTION_BUILTIN macros are defined to one
in math-use-builtins-function.h.
Co-Authored-By: Xi Ruoyao <xry111@xry111.site>
lld does not implement all the linker optimizations to avoid the GOT
relocation as done by binutils (bfd/elf32-i386.c:elf_i386_convert_load_reloc).
The current 'movl main@GOT(%ebx), %eax' will then create a GOT
relocation when building with lld, which leaves static-pie startup
unable to reach the provided main function.
The change uses a __wrap_main local symbol, which in turn calls main
(similar to what aarch64 and s390x use).
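A C-level sketch of the approach (the actual change is in start.S;
the names here are illustrative): the address of a hidden local
wrapper can be computed PC-relative, so no GOT entry for main is
needed.

extern int main (int argc, char **argv);

static int
__wrap_main (int argc, char **argv)
{
  return main (argc, argv);
}

/* The startup code then passes __wrap_main, instead of main@GOT,
   to __libc_start_main.  */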
Checked on i686-linux-gnu with binutils and lld.
Reviewed-by: Fangrui Song <maskray@google.com>
This patch fixes two problems with audit:
1. The DL_OFFSET_RV_VPCS offset was mixed up with DL_OFFSET_RG_VPCS,
   resulting in the x2 register value being zeroed in the RG structure.
2. We need to preserve the x8 register before the function call, but
   don't have to save its new value and restore it before returning.
   In any case, the final restore used OFFSET_RV instead of the
   OFFSET_RG value, which is wrong (although it doesn't affect anything).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Currently glibc uses in_time_t_range to detect time_t overflow,
and if it occurs falls back to the 64-bit syscall version.
The function name is confusing because internally time_t might be
either 32 bits or 64 bits (depending on __TIMESIZE).
This patch refactors in_time_t_range by replacing it with
in_int32_t_range for the case of checking whether the 64-bit time_t
syscall should be used.
in_time_t_range is now used only to detect overflow of the
syscall return value.
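A sketch of the renamed check, assuming it simply tests whether a
64-bit value round-trips through int32_t:

#include <stdbool.h>
#include <stdint.h>

static inline bool
in_int32_t_range (int64_t t)
{
  int32_t s = t;	/* truncate to 32 bits */
  return s == t;	/* true iff nothing was lost */
}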
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Use __builtin_{fma, fmaf} to implement function {fma, fmaf} instead of
the generic implementation.
* sysdeps/loongarch/fpu/math-use-builtins-fma.h: New file.
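Presumably the new file follows the math-use-builtins convention used
by other architectures, roughly (an assumption, not the verbatim file):

/* Sketch of sysdeps/loongarch/fpu/math-use-builtins-fma.h.  */
#define USE_FMA_BUILTIN 1
#define USE_FMAF_BUILTIN 1
#define USE_FMAL_BUILTIN 0
#define USE_FMAF128_BUILTIN 0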
Old applications pass __IPC_64 as part of the command argument because
old glibc did not check for unknown commands, and passed through the
arguments directly to the kernel, without adding __IPC_64.
Applications need to continue doing that for old glibc compatibility,
so this commit enables this approach in current glibc.
For msgctl and shmctl, if no translation is required, make
direct system calls, as we did before the time64 changes. If
translation is required, mask __IPC_64 from the command argument.
For semctl, the union-in-vararg argument handling means that
translation is needed on all architectures.
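A hedged C sketch of the command handling, using hypothetical names,
the Linux value of __IPC_64, and an architecture with a direct msgctl
syscall:

#include <sys/syscall.h>
#include <unistd.h>

#ifndef __IPC_64
# define __IPC_64 0x100	/* Linux's IPC_64 flag; glibc-internal name */
#endif

/* OR-ing __IPC_64 in is idempotent, so commands from old binaries
   that already contain it keep working on the direct path.  */
static long
msgctl_sketch (int msqid, int cmd, void *buf)
{
#ifdef NO_TRANSLATION_NEEDED	/* hypothetical configuration macro */
  return syscall (SYS_msgctl, msqid, cmd | __IPC_64, buf);
#else
  cmd &= ~__IPC_64;		/* strip it so the command is recognized */
  /* ... translate buf between user and kernel layouts, then ... */
  return syscall (SYS_msgctl, msqid, cmd | __IPC_64, buf);
#endif
}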
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The RISC-V architecture extends the cache information for the level 3
cache in the AUX vector in Linux v6.1-rc1. This patch adds sysconf
support for getting the level 3 cache information.
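For example, an application can then query the new information through
the existing sysconf names:

#include <stdio.h>
#include <unistd.h>

int
main (void)
{
  /* These _SC_LEVEL3_* names are the ones glibc already provides;
     with this patch they report real values on RISC-V (Linux >= 6.1).  */
  printf ("L3 size:  %ld\n", sysconf (_SC_LEVEL3_CACHE_SIZE));
  printf ("L3 line:  %ld\n", sysconf (_SC_LEVEL3_CACHE_LINESIZE));
  printf ("L3 assoc: %ld\n", sysconf (_SC_LEVEL3_CACHE_ASSOC));
  return 0;
}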
Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
Implemented:
wcscat-avx2 (+ 744 bytes)
wcscpy-avx2 (+ 539 bytes)
wcpcpy-avx2 (+ 577 bytes)
wcsncpy-avx2 (+1108 bytes)
wcpncpy-avx2 (+1214 bytes)
wcsncat-avx2 (+1085 bytes)
Performance Changes:
Times are from N = 10 runs of the benchmark suite and are reported
as geometric mean of all ratios of New Implementation / Best Old
Implementation. Best Old Implementation was determined with the
highest ISA implementation.
wcscat-avx2 -> 0.975
wcscpy-avx2 -> 0.591
wcpcpy-avx2 -> 0.698
wcsncpy-avx2 -> 0.730
wcpncpy-avx2 -> 0.711
wcsncat-avx2 -> 0.954
Code Size Changes:
This change increases the size of libc.so by ~5.5kb. For
reference, the patch optimizing the normal strcpy family functions
decreases libc.so by ~5.2kb.
Full check passes on x86-64 and build succeeds for all ISA levels w/
and w/o multiarch.
Implemented:
wcscat-evex (+ 905 bytes)
wcscpy-evex (+ 674 bytes)
wcpcpy-evex (+ 709 bytes)
wcsncpy-evex (+1358 bytes)
wcpncpy-evex (+1467 bytes)
wcsncat-evex (+1213 bytes)
Performance Changes:
Times are from N = 10 runs of the benchmark suite and are reported
as geometric mean of all ratios of New Implementation / Best Old
Implementation. Best Old Implementation was determined with the
highest ISA implementation.
wcscat-evex -> 0.991
wcscpy-evex -> 0.587
wcpcpy-evex -> 0.695
wcsncpy-evex -> 0.719
wcpncpy-evex -> 0.694
wcsncat-evex -> 0.979
Code Size Changes:
This change increases the size of libc.so by ~6.3kb. For
reference, the patch optimizing the normal strcpy family functions
decreases libc.so by ~5.7kb.
Full check passes on x86-64 and build succeeds for all ISA levels w/
and w/o multiarch.
Optimizations are:
1. Use more overlapping stores to avoid branches (see the C sketch
   after this list).
2. Reduce how unrolled the aligning copies are (this is more of a
   code-size save; it's a negative for some sizes in terms of
   perf).
3. For st{r|p}n{cat|cpy}, re-order the branches to minimize the
   number that are taken.
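The C sketch mentioned in item 1, illustrative only since the real
code is vectorized assembly:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Copy n bytes for 8 <= n <= 16 with exactly two loads and two
   stores; the second pair overlaps the first instead of branching on
   the exact size.  */
static void
copy_8_to_16 (char *dst, const char *src, size_t n)
{
  uint64_t head, tail;
  memcpy (&head, src, 8);
  memcpy (&tail, src + n - 8, 8);
  memcpy (dst, &head, 8);
  memcpy (dst + n - 8, &tail, 8);	/* overlaps head when n < 16 */
}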
Performance Changes:
Times are from N = 10 runs of the benchmark suite and are
reported as geometric mean of all ratios of
New Implementation / Old Implementation.
strcat-avx2 -> 0.998
strcpy-avx2 -> 0.937
stpcpy-avx2 -> 0.971
strncpy-avx2 -> 0.793
stpncpy-avx2 -> 0.775
strncat-avx2 -> 0.962
Code Size Changes:
function -> Bytes New / Bytes Old -> Ratio
strcat-avx2 -> 685 / 1639 -> 0.418
strcpy-avx2 -> 560 / 903 -> 0.620
stpcpy-avx2 -> 592 / 939 -> 0.630
strncpy-avx2 -> 1176 / 2390 -> 0.492
stpncpy-avx2 -> 1268 / 2438 -> 0.520
strncat-avx2 -> 1042 / 2563 -> 0.407
Notes:
1. Because of the significant difference between the
implementations they are split into three files.
strcpy-avx2.S -> strcpy, stpcpy, strcat
strncpy-avx2.S -> strncpy
strncat-avx2.S -> strncat
I couldn't find a way to merge them without making the
ifdefs incredibly difficult to follow.
Full check passes on x86-64 and build succeeds for all ISA levels w/
and w/o multiarch.
Optimizations are:
1. Use more overlapping stores to avoid branches.
2. Reduce how unrolled the aligning copies are (this is more of a
   code-size save; it's a negative for some sizes in terms of
   perf).
3. Improve the loop a bit (similar to what we do in strlen with
   2x vpminu + kortest instead of 3x vpminu + kmov + test).
4. For st{r|p}n{cat|cpy} re-order the branches to minimize the
number that are taken.
Performance Changes:
Times are from N = 10 runs of the benchmark suite and are
reported as geometric mean of all ratios of
New Implementation / Old Implementation.
stpcpy-evex -> 0.922
strcat-evex -> 0.985
strcpy-evex -> 0.880
strncpy-evex -> 0.831
stpncpy-evex -> 0.780
strncat-evex -> 0.958
Code Size Changes:
function -> Bytes New / Bytes Old -> Ratio
strcat-evex -> 819 / 1874 -> 0.437
strcpy-evex -> 700 / 1074 -> 0.652
stpcpy-evex -> 735 / 1094 -> 0.672
strncpy-evex -> 1397 / 2611 -> 0.535
stpncpy-evex -> 1489 / 2691 -> 0.553
strncat-evex -> 1184 / 2832 -> 0.418
Notes:
1. Because of the significant difference between the
implementations they are split into three files.
strcpy-evex.S -> strcpy, stpcpy, strcat
strncpy-evex.S -> strncpy
strncat-evex.S -> strncat
I couldn't find a way to merge them without making the
ifdefs incredibly difficult to follow.
2. All implementations can be made evex512 by including
"x86-evex512-vecs.h" at the top.
3. All implementations have an optional define:
`USE_EVEX_MASKED_STORE`
Setting it to one uses evex-masked stores for handling short
strings. This saves code size and branches. It's disabled
for all implementations at the moment as there are some
serious drawbacks to masked stores in certain cases, but
that may be fixed on future architectures.
Full check passes on x86-64 and build succeeds for all ISA levels w/
and w/o multiarch.
Changes to generated code are:
1. In a few places use `vpcmpeqb` instead of `vpcmpneq` to save a
byte of code size.
2. Add a branch for length <= (VEC_SIZE * 6) as opposed to doing
the entire block of [VEC_SIZE * 4 + 1, VEC_SIZE * 8] in a
single basic-block (the space to add the extra branch without
changing code size is bought with the above change).
Change (2) has roughly a 20-25% speedup for sizes in [VEC_SIZE * 4 +
1, VEC_SIZE * 6] and negligible to no cost for [VEC_SIZE * 6 + 1,
VEC_SIZE * 8].
From N=10 runs on Tigerlake:
align1 ,align2 ,length ,result ,New Time ,Cur Time ,New Time / Cur Time
0 ,0 ,129 ,0 ,5.404 ,6.887 ,0.785
0 ,0 ,129 ,1 ,5.308 ,6.826 ,0.778
0 ,0 ,129 ,18446744073709551615 ,5.359 ,6.823 ,0.785
0 ,0 ,161 ,0 ,5.284 ,6.827 ,0.774
0 ,0 ,161 ,1 ,5.317 ,6.745 ,0.788
0 ,0 ,161 ,18446744073709551615 ,5.406 ,6.778 ,0.798
0 ,0 ,193 ,0 ,6.804 ,6.802 ,1.000
0 ,0 ,193 ,1 ,6.950 ,6.754 ,1.029
0 ,0 ,193 ,18446744073709551615 ,6.792 ,6.719 ,1.011
0 ,0 ,225 ,0 ,6.625 ,6.699 ,0.989
0 ,0 ,225 ,1 ,6.776 ,6.735 ,1.003
0 ,0 ,225 ,18446744073709551615 ,6.758 ,6.738 ,0.992
0 ,0 ,256 ,0 ,5.402 ,5.462 ,0.989
0 ,0 ,256 ,1 ,5.364 ,5.483 ,0.978
0 ,0 ,256 ,18446744073709551615 ,5.341 ,5.539 ,0.964
Rewriting with VMM API allows for memcmpeq-evex to be used with
evex512 by including "x86-evex512-vecs.h" at the top.
Complete check passes on x86-64.
The only change to the existing generated code is `tzcnt` -> `bsf` to
save a byte of code size here and there.
Rewriting with VMM API allows for memcmp-evex-movbe to be used with
evex512 by including "x86-evex512-vecs.h" at the top.
Complete check passes on x86-64.
Similar to ppoll, the poll.h header needs to redirect the poll call
to a proper fortified ppoll with 64-bit time_t support.
The implementation is straightforward: just add a check similar to
__poll_chk and call the 64-bit time_t ppoll version. The
debug fortify tests are also extended to cover 64-bit time_t for
the affected ABIs.
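A minimal sketch of what the new fortified entry point might look
like; the name, signature, and conversion details are assumptions:

#define _GNU_SOURCE
#include <poll.h>
#include <stdlib.h>
#include <time.h>

int
poll64_chk_sketch (struct pollfd *fds, nfds_t nfds, int timeout,
                   size_t fdslen)
{
  /* Buffer-size check, as __poll_chk does.  */
  if (fdslen / sizeof (*fds) < nfds)
    abort ();			/* stand-in for __chk_fail */
  /* Convert the millisecond timeout and forward to ppoll; a negative
     timeout means wait indefinitely.  */
  struct timespec ts = { .tv_sec = timeout / 1000,
                         .tv_nsec = (timeout % 1000) * 1000000L };
  return ppoll (fds, nfds, timeout < 0 ? NULL : &ts, NULL);
}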
Unfortunately it requires an additional symbol, which makes backporting
tricky. One possibility is to add a static inline version if the
compiler supports it, and call abort instead of __chk_fail, so the
fortified version will call __poll64 in the end.
Another possibility is to just remove the fortify support for
_TIME_BITS=64.
Checked on i686-linux-gnu.
Changes from v1:
Use the VEC API for registers.
Replace VPCMP with VPCMPEQ.
Restructure and remove one unconditional jump.
Change page cross logic to use sall.
This patch implements the following evex512 versions of string
functions. The evex512 version takes up to 30% fewer cycles than evex,
depending on length and alignment.
- strrchr function using 512 bit vectors.
- wcsrchr function using 512 bit vectors.
Code size data:
strrchr-evex.o    879 bytes
strrchr-evex512.o 601 bytes (-32%)
wcsrchr-evex.o    882 bytes
wcsrchr-evex512.o 572 bytes (-35%)
Placeholder function, not used by any processor at the moment.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
This makes it more likely that the compiler can compute the strlen
argument in _startup_fatal at compile time, which is required to
avoid a dependency on strlen this early during process startup.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
The old exception handling implementation used function interposition
to replace the dynamic loader implementation (no TLS support) with the
libc implementation (TLS support). This results in problems if the
link order between the dynamic loader and libc is reversed (bug 25486).
The new implementation moves the entire implementation of the
exception handling functions back into the dynamic loader, using
THREAD_GETMEM and THREAD_SETMEM for thread-local data support.
This depends on Hurd support for these macros, added in commit
b65a82e4e7 ("hurd: Add THREAD_GET/SETMEM/_NC").
One small obstacle is that the exception handling facilities are used
before the TCB has been set up, so a check is needed whether the TCB
is available. If not, a regular global variable is used to store the
exception handling information.
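A hedged sketch of that fallback, with hypothetical names; the real
code uses THREAD_GETMEM/THREAD_SETMEM once the TCB exists:

#include <stdbool.h>
#include <stddef.h>

struct rtld_catch;			/* exception-handling state */

static struct rtld_catch *global_catch;	/* used before the TCB is set up */
static struct rtld_catch *tls_catch;	/* stand-in for the TLS slot */
static bool tcb_available;		/* hypothetical readiness flag */

static struct rtld_catch *
get_catch (void)
{
  /* Before the TCB exists, THREAD_GETMEM cannot be used, so fall
     back to a plain global variable.  */
  if (!tcb_available)
    return global_catch;
  return tls_catch;	/* really THREAD_GETMEM (THREAD_SELF, rtld_catch) */
}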
Also rename dl-error.c to dl-catch.c, to avoid confusion with the
dlerror function.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>