commit d585ba47fc
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date: Mon Nov 1 00:49:48 2021 -0500
string: Make test-memcpy.c tests bidirectional
Add tests where src/dst are not 4-byte aligned. Since src/dst are
initialized/compared through the uint32_t type, which requires 4-byte
alignment, this can break on some targets.
Fix the issue by introducing a new 4-byte type without an alignment
requirement, `unaligned_uint32_t`, for src/dst.
Another alternative is to rely on memcpy/memcmp for
initializing/testing src/dst. Using memcpy for initializing in memcpy
tests, however, could lead to future bugs.
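A minimal sketch of such a type, assuming GCC-style attributes (on a
typedef the aligned attribute may lower the alignment; the exact spelling
used in test-memcpy.c may differ):

  #include <stdint.h>

  /* Sketch only: a 32-bit type with 1-byte alignment that may alias other
     types, so reads and writes through it are valid at any offset.  */
  typedef uint32_t __attribute__ ((__aligned__ (1), __may_alias__))
    unaligned_uint32_t;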
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
localedef currently blindly trusts the archive header. When passed an
archive file with the wrong endianness, this leads to a segmentation
fault:
$ localedef --big-endian --list-archive /usr/lib/locale/locale-archive
Segmentation fault (core dumped)
When passed non-archive files, assertion failures are reported in the
best case, but sometimes this leads to a segmentation fault:
$ localedef --list-archive /bin/true
localedef: programs/locarchive.c:1643: show_archive_content: Assertion `used < GET (head->namehash_used)' failed.
Aborted (core dumped)
$ localedef --list-archive /usr/lib/locale/C.utf8/LC_COLLATE
Segmentation fault (core dumped)
This patch improves the user experience by checking the magic value,
which is always written but was never checked. It should still be possible
to trigger a segmentation fault with crafted files, but this already
catches many cases.
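A minimal sketch of the idea, assuming the struct locarhead layout and the
AR_MAGIC constant from locale/locarchive.h (the actual check added to
programs/locarchive.c may differ in placement and error handling):

  #include <error.h>
  #include <stdlib.h>
  #include "../locale/locarchive.h"   /* struct locarhead, AR_MAGIC */

  static void
  check_archive_magic (const struct locarhead *head, const char *fname)
  {
    /* The magic value is written by localedef when the archive is
       created, so reject anything that does not carry it.  */
    if (head->magic != AR_MAGIC)
      error (EXIT_FAILURE, 0, "%s is not a locale archive file", fname);
  }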
The function was renamed to __atomic_wide_counter_load_relaxed
in commit 8bd336a00a ("nptl: Extract
<bits/atomic_wide_counter.h> from pthread_cond_common.c").
Current binutils defines __executable_start as the lowest text
address, so using the entry point address as a fallback is no
longer necessary. As a result, overriding <entry.h> is only
necessary if the entry point is not called _start.
The previous approach of defining __ASSEMBLY__ to suppress the
declaration breaks if headers included by <entry.h> are not
compatible with __ASSEMBLY__. This happens with rseq integration
because it is necessary to include kernel headers in more places.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
Programs without dynamic dependencies and without a program
interpreter are now run via execve.
Previously, the dynamic linker either crashed while attempting to
read a non-existing dynamic segment (looking for DT_AUDIT/DT_DEPAUDIT
data), or self-relocation in the static PIE executable crashed
because the outer dynamic linker had already applied RELRO protection.
<dl-execve.h> is needed because execve is not available in the
dynamic loader on Hurd.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
On Ice Lake and Tiger Lake laptops, some test programs time out when there
are 3 "make check -j8" runs in parallel. Add --with-timeoutfactor=NUM to
specify an integer to scale the timeout of test programs, which can be
overridden by the TIMEOUTFACTOR environment variable.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
notl %edi must be used here, as the lower bits correspond to CHAR
comparisons that may be out of range and can thus be 0 without
indicating a mismatch.
This fixes BZ #28646.
Co-Authored-By: H.J. Lu <hjl.tools@gmail.com>
rseq support will use a 32-byte aligned field in struct pthread,
so the whole struct needs to have at least that alignment.
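For illustration only (this is not the actual struct pthread layout), a
32-byte-aligned member raises the alignment requirement of the whole
containing struct:

  #include <assert.h>
  #include <stdalign.h>

  struct example_descr
  {
    long header;
    _Alignas (32) char rseq_area[32];   /* hypothetical 32-byte aligned field */
  };

  /* The whole struct inherits the strictest member alignment.  */
  static_assert (alignof (struct example_descr) >= 32,
                 "descriptor must be at least 32-byte aligned");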
nptl/tst-tls3mod.c uses TCB_ALIGNMENT; therefore, include <descr.h>
to obtain the fallback definition.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
v2 is a complete rewrite of the A64FX memcpy. Performance is improved
by streamlining the code, aligning all large copies and using a single
unrolled loop for all sizes. The code size for memcpy and memmove goes
down from 1796 bytes to 868 bytes. Performance is better in all cases:
bench-memcpy-random is 2.3% faster overall, bench-memcpy-large is ~33%
faster for large sizes, bench-memcpy-walk is 25% faster for small sizes
and 20% for the largest sizes. The geomean of all tests in bench-memcpy
is 5.1% faster, and total time is reduced by 4%.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
Rewrite memcmp to improve performance. On small and medium inputs performance
is 10-20% better. Large inputs use a SIMD loop processing 64 bytes per
iteration, which is 30-50% faster depending on the size.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
The pipe2 syscall was added in Linux 2.6.27, and glibc requires Linux
3.2.0. The patch removes the arch-specific implementations for alpha,
ia64, mips, sh, and sparc, which require a different kernel ABI
than the usual one.
Checked on x86_64-linux-gnu and with a build for the affected ABIs.
Variadic function calls in syscalls.list do not work for all ABIs
(for instance, where the arguments are passed on the stack instead of
in registers) and might have underlying issues depending on the
variadic type (for instance, if a 64-bit argument is used).
Checked on x86_64-linux-gnu.
The LFS prlimit64 requires an arch-specific implementation in
syscalls.list. Instead, add a generic one that handles the
required symbol alias for __RLIM_T_MATCHES_RLIM64_T.
HPPA is the only outlier, as it requires a different default
symbol.
Checked on x86_64-linux-gnu and with build for the affected ABIs.
Passing 64-bit arguments in syscalls.list is tricky: it requires
reimplementing the expected kernel ABI in each architecture. This
is much better represented in C code, where we already have
macros for it (SYSCALL_LL64).
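For instance, a generic C wrapper can pass a 64-bit offset with the
existing macro. This is a sketch, assuming the SYSCALL_CANCEL and
SYSCALL_LL64 macros from the Linux sysdep headers; the real pread64.c
differs in details:

  #include <unistd.h>
  #include <sysdep-cancel.h>   /* SYSCALL_CANCEL, SYSCALL_LL64 */

  ssize_t
  __libc_pread64 (int fd, void *buf, size_t count, off64_t offset)
  {
    /* SYSCALL_LL64 expands to one or two syscall arguments depending on
       whether the ABI passes 64-bit values in a single register.  */
    return SYSCALL_CANCEL (pread64, fd, buf, count, SYSCALL_LL64 (offset));
  }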
Checked on x86_64-linux-gnu.
Add vector sin/sinf and input files to libmvec microbenchmark.
libmvec-sin-inputs:
  90% Normal random distribution
    range: (-DBL_MAX, DBL_MAX)
    mean: 0.0
    sigma: 5.0
  10% uniform random distribution in range (-1000.0, 1000.0)
libmvec-sinf-inputs:
  90% Normal random distribution
    range: (-FLT_MAX, FLT_MAX)
    mean: 0.0f
    sigma: 5.0f
  10% uniform random distribution in range (-1000.0f, 1000.0f)
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
Add vector pow/powf and input files to libmvec microbenchmark.
libmvec-pow-inputs:
  arg1:
    90% Normal random distribution
      range: (0.0, 256.0)
      mean: 0.0
      sigma: 32.0
    10% uniform random distribution in range (0.0, 256.0)
  arg2:
    90% Normal random distribution
      range: (-127.0, 127.0)
      mean: 0.0
      sigma: 16.0
    10% uniform random distribution in range (-127.0, 127.0)
libmvec-powf-inputs:
  arg1:
    90% Normal random distribution
      range: (0.0f, 100.0f)
      mean: 0.0f
      sigma: 16.0f
    10% uniform random distribution in range (0.0f, 100.0f)
  arg2:
    90% Normal random distribution
      range: (-10.0f, 10.0f)
      mean: 0.0f
      sigma: 8.0f
    10% uniform random distribution in range (-10.0f, 10.0f)
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
Add vector log/logf and input files to libmvec microbenchmark.
libmvec-log-inputs:
  70% Normal random distribution
    range: (0.0, DBL_MAX)
    mean: 1.0
    sigma: 50.0
  30% uniform random distribution in range (0.0, DBL_MAX)
libmvec-logf-inputs:
  70% Normal random distribution
    range: (0.0f, FLT_MAX)
    mean: 1.0f
    sigma: 50.0f
  30% uniform random distribution in range (0.0f, FLT_MAX)
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
Add vector exp/expf and input files to libmvec microbenchmark.
libmvec-exp-inputs:
  90% Normal random distribution
    range: (-708.0, 709.0)
    mean: 0.0
    sigma: 16.0
  10% uniform random distribution in range (-500.0, 500.0)
libmvec-expf-inputs:
  90% Normal random distribution
    range: (-87.0f, 88.0f)
    mean: 0.0f
    sigma: 8.0f
  10% uniform random distribution in range (-50.0f, 50.0f)
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
Add vector cos/cosf and input files to libmvec microbenchmark.
libmvec-cos-inputs:
  90% Normal random distribution
    range: (-DBL_MAX, DBL_MAX)
    mean: 0.0
    sigma: 5.0
  10% uniform random distribution in range (-1000.0, 1000.0)
libmvec-cosf-inputs:
  90% Normal random distribution
    range: (-FLT_MAX, FLT_MAX)
    mean: 0.0f
    sigma: 5.0f
  10% uniform random distribution in range (-1000.0f, 1000.0f)
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
Now that Hurd implements both close_range and closefrom (f2c996597d),
we can make close_range() a base ABI and build the default closefrom()
implementation on top of close_range().
The generic closefrom() implementation based on __getdtablesize() is
moved to the generic close_range(). On Linux it will be overridden by
the auto-generated syscall wrapper, while on Hurd it will be a
system-specific implementation.
closefrom() now calls close_range() and __closefrom_fallback().
Since on Hurd close_range() does not fail, __closefrom_fallback() is an
empty static inline function selected by __ASSUME_CLOSE_RANGE.
__ASSUME_CLOSE_RANGE also allows optimizing the Linux
__closefrom_fallback() implementation when --enable-kernel=5.9 or
higher is used.
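A simplified sketch of the resulting call structure (names, signatures,
and error handling differ from the actual io/closefrom.c):

  #include <limits.h>
  #include <unistd.h>

  /* Internal fallback; a no-op stub where __ASSUME_CLOSE_RANGE is defined
     (e.g. Hurd), an fd-walking loop otherwise.  Signature simplified.  */
  extern void __closefrom_fallback (int lowfd);

  void
  closefrom (int lowfd)
  {
    int l = lowfd < 0 ? 0 : lowfd;
    if (close_range (l, UINT_MAX, 0) == 0)
      return;
    __closefrom_fallback (l);
  }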
Finally, the Linux-specific tst-close_range.c is moved to io and
enabled by default. The Linuxisms such as CLOSE_RANGE_UNSHARE are
guarded so it can be built for Hurd (I have not actually tested it).
Checked on x86_64-linux-gnu, i686-linux-gnu, and with an i686-gnu
build.
__libc_signal_restore_set was in the wrong place: It also ran
when setjmp returned the second time (after pthread_exit or
pthread_cancel). This is observable with blocked pending
signals during thread exit.
Fixes commit b3cae39dcb
("nptl: Start new threads with all signals blocked [BZ #25098]").
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The @notoc usage only yields an advantage on ISA 3.1+ machines (power10),
and for ld.bfd only when it also sees pcrel relocations used in the code
(generated if the compiler targets ISA 3.1+). In the bfd case, ISA 3.1+
instructions are used in stubs only if the linker also sees the new
pc-relative relocations (for instance R_PPC64_D34); otherwise it
generates default stubs (ppc64_elf_check_relocs:4700).
This patch also helps with linkers that do not implement this optimization,
since building for an older ISA (such as 3.0 / power9) will also trigger
power10 stub generation if the assembly code uses the NOTOC macro.
Checked on powerpc64le-linux-gnu.
Reviewed-by: Fangrui Song <maskray@google.com>
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
It requires less boilerplate code for newer ports. The _Static_assert
checks from the internal setjmp are moved to their own internal test,
since setjmp.h is included early by multiple headers (to generate
rtld-sizes.sym).
The riscv jmp_buf-macros.h check is also redundant; it is already
done by the riscv configure.ac.
Checked with a build for the affected architectures.
This patch updates the kernel version in the test tst-mman-consts.py
to 5.15. (There are no new MAP_* constants covered by this test in
5.15 that need any other header changes.)
Tested with build-many-glibcs.py.
The change 1e5a5866cb ("Remove malloc hooks [BZ #23328]") has broken
ports that are using GLIBC_2_35, like the new OpenRISC port I am working
on.
The libc_malloc_debug.so library, used to bring in the debug
infrastructure, is currently essentially empty for GLIBC_2_35 ports like
mine, causing mtrace tests to fail:
cat sysdeps/unix/sysv/linux/or1k/shlib-versions
DEFAULT GLIBC_2.35
ld=ld-linux-or1k.so.1
FAIL: posix/bug-glob2-mem
FAIL: posix/bug-regex14-mem
FAIL: posix/bug-regex2-mem
FAIL: posix/bug-regex21-mem
FAIL: posix/bug-regex31-mem
FAIL: posix/bug-regex36-mem
FAIL: malloc/tst-mtrace
The issue seems to be with the ifdefs in malloc/malloc-debug.c. The
ifdefs are currently essentially excluding all symbols for ports > 2.35.
Removing the top level SHLIB_COMPAT ifdef allows things to just work.
Fixes: 1e5a5866cb ("Remove malloc hooks [BZ #23328]")
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
And make it an installed header. This addresses a few aliasing
violations (which do not seem to result in miscompilation due to
the use of atomics), and also enables use of wide counters in other
parts of the library.
The debug output in nptl/tst-cond22 has been adjusted to print
the 32-bit values instead because it avoids a big-endian/little-endian
difference.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Add a python script to generate libmvec microbenchmarks from the input
values for each libmvec function using a skeleton benchmark template.
It creates double and float benchmarks with vector lengths 1, 2, 4, 8,
and 16 for each libmvec function. Vector length 1 corresponds to the
scalar version of the function and is included for comparison of
vector function performance.
Co-authored-by: Haochen Jiang <haochen.jiang@intel.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
Since b05fae4d8e, __minimal malloc code is used during static
startup before PIE self-relocation (_dl_relocate_static_pie).
So it requires the same fix done for other objects by 47618209d0.
Checked on aarch64, x86_64, and i686 with and without static-pie.
1. Use a temporary file to generate Makefile fragments for DSO sorting
tests and use -include on them.
2. Add Makefile fragments to postclean-generated so that a "make clean"
removes the autogenerated fragments and a subsequent "make" regenerates
them.
This partially fixes BZ #28550.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Put all sources for DSO sorting tests in the dso-sort-tests-src directory
and compile test relocatable objects with
$(objpfx)tst-dso-ordering1-dir/tst-dso-ordering1-a.os: $(objpfx)dso-sort-tests-src/tst-dso-ordering1-a.c
$(compile.c) $(OUTPUT_OPTION)
to avoid random $< values from $(before-compile) when compiling test
relocatable objects with
$(objpfx)%$o: $(objpfx)%.c $(before-compile); $$(compile-command.c)
compile-command.c = $(compile.c) $(OUTPUT_OPTION) $(compile-mkdep-flags)
compile.c = $(CC) $< -c $(CFLAGS) $(CPPFLAGS)
during 3 simultaneous "make -j 28" parallel builds on a machine with
112 cores.
This partially fixes BZ #28550.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Update
commit 49302b8fdf
Author: H.J. Lu <hjl.tools@gmail.com>
Date: Thu Nov 11 06:54:01 2021 -0800
Avoid extra load with CAS in __pthread_mutex_clocklock_common [BZ #28537]
Replace boolean CAS with value CAS to avoid the extra load.
and
commit 0b82747dc4
Author: H.J. Lu <hjl.tools@gmail.com>
Date: Thu Nov 11 06:31:51 2021 -0800
Avoid extra load with CAS in __pthread_mutex_lock_full [BZ #28537]
Replace boolean CAS with value CAS to avoid the extra load.
by moving assignment out of the CAS condition.
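The shape of the update, as an illustrative fragment:
atomic_compare_and_exchange_val_acq is glibc's internal value-CAS macro,
and handle_contention is a hypothetical stand-in for the slow path.

  /* Before: the assignment is buried inside the CAS condition.  */
  if ((oldval = atomic_compare_and_exchange_val_acq (&mutex->__data.__lock,
                                                     newval, 0)) != 0)
    handle_contention (oldval);

  /* After: the assignment is moved out of the condition; the value CAS
     still returns the observed value, so no extra load is needed.  */
  oldval = atomic_compare_and_exchange_val_acq (&mutex->__data.__lock,
                                                newval, 0);
  if (oldval != 0)
    handle_contention (oldval);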
Currently, if the temporary file creation fails, the create_tz_file
function returns NULL. The NULL pointer is then passed to setenv, which
causes a SIGSEGV. Rather than failing with a SIGSEGV, print a warning
and exit.
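A minimal sketch of the fix (create_tz_file is the test helper named
above; tz_spec, the message text, and the surrounding code are
illustrative only):

  const char *tz_file = create_tz_file (tz_spec);
  if (tz_file == NULL)
    {
      printf ("unexpected failure: could not create temporary tz file\n");
      exit (1);
    }
  setenv ("TZ", tz_file, 1);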
The CAS instruction is expensive. From the x86 CPU's point of view, getting
a cache line for writing is more expensive than reading. See Appendix
A.2 Spinlock in:
https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xeon-lock-scaling-analysis-paper.pdf
A full compare-and-swap grabs the cache line exclusively and causes
excessive cache line bouncing.
Add LLL_MUTEX_READ_LOCK to do an atomic load and skip the CAS in the
spinlock loop if the compare is likely to fail, reducing cache line
bouncing on contended locks.
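In rough C with GCC builtins (the actual code uses the internal
LLL_MUTEX_* macros and tuning from pthread_mutex_lock.c), the
read-before-CAS spin loop looks like this:

  static int lock;   /* 0 = unlocked, 1 = locked */

  static void
  spin_lock (void)
  {
    for (;;)
      {
        /* Plain atomic read: keeps the cache line in shared state and
           avoids the CAS while the lock is visibly held.  */
        if (__atomic_load_n (&lock, __ATOMIC_RELAXED) == 0)
          {
            int expected = 0;
            if (__atomic_compare_exchange_n (&lock, &expected, 1, 0,
                                             __ATOMIC_ACQUIRE,
                                             __ATOMIC_RELAXED))
              return;
          }
        __builtin_ia32_pause ();   /* x86 spin hint; use a no-op elsewhere */
      }
  }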
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>