The arch13 memmove variant is currently selected by the ifunc selector
if the Miscellaneous-Instruction-Extensions Facility 3 bit is
present, but the function also uses vector instructions.
If vector support is not present, an operation exception is raised.
Therefore this patch also checks for vector support in the ifunc
selector and in ifunc-impl-list.c.
Just to be sure, the configure check now also tests an arch13
vector instruction and an arch13 Miscellaneous-Instruction-Extensions
Facility 3 instruction.
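A minimal sketch of the intended selection logic, with hypothetical
feature-query helpers standing in for the real STFLE/HWCAP checks:

  #include <stddef.h>

  typedef void *(*memmove_fn) (void *, const void *, size_t);

  void *memmove_arch13 (void *, const void *, size_t);
  void *memmove_z13 (void *, const void *, size_t);

  /* Hypothetical queries; glibc derives these from the STFLE
     facility list and HWCAP at startup.  */
  extern int have_arch13_mie3 (void);
  extern int have_vector_support (void);

  /* The arch13 variant now requires BOTH the MIE3 facility bit and
     vector support; the facility bit alone is not enough.  */
  static memmove_fn
  select_memmove (void)
  {
    if (have_arch13_mie3 () && have_vector_support ())
      return memmove_arch13;
    return memmove_z13;
  }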
(cherry picked from commit 7759be2593)
A not so recent kernel change[1] changed how the trampoline
`__kernel_sigtramp_rt64` is used to call signal handlers.
This was exposed by the test misc/tst-sigcontext-get_pc.
Before kernel 5.9, the kernel set LR to the trampoline address and
jumped directly to the signal handler, and at the end the signal
handler, like any other function, would `blr` to the address set. In
other words, the trampoline was executed only at the end of the signal
handler, and the only thing it did was call sigreturn. But since
kernel 5.9 the kernel sets CTR to the signal handler address and calls
into the trampoline code; the trampoline then uses `bctrl` to branch
to the address in CTR, setting LR to the next instruction in the
middle of the trampoline. When the signal handler returns, the rest
of the trampoline executes the same code as before.
Here is the full trampoline code as of kernel 5.11.0-rc5 for
reference:
  V_FUNCTION_BEGIN(__kernel_sigtramp_rt64)
  .Lsigrt_start:
	bctrl	/* call the handler */
	addi	r1, r1, __SIGNAL_FRAMESIZE
	li	r0,__NR_rt_sigreturn
	sc
  .Lsigrt_end:
  V_FUNCTION_END(__kernel_sigtramp_rt64)
This new behavior breaks how `backtrace()` detects the trampoline
frame in order to correctly reconstruct the stack when it is called
from inside a signal handler.
This workaround relies on the fact that the trampoline code is at the
very least two (maybe 3?) instructions in size (as it is in the 32-bit
version, with only `li` and `sc`), so it is safe to check whether the
return address lies in the range __kernel_sigtramp_rt64 .. + 4.
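A sketch of that range test (names illustrative; the real check lives
in the powerpc64 backtrace support):

  #include <stdint.h>

  extern char __kernel_sigtramp_rt64[];   /* VDSO trampoline symbol.  */

  /* Accept any return address within the first 4 bytes of the
     trampoline, not only its first instruction.  */
  static int
  is_sigtramp_address (uintptr_t nip)
  {
    uintptr_t start = (uintptr_t) __kernel_sigtramp_rt64;
    return nip >= start && nip <= start + 4;
  }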
[1] subject: powerpc/64/signal: Balance return predictor stack in signal trampoline
commit: 0138ba5783ae0dcc799ad401a1e8ac8333790df9
url: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=0138ba5783ae0dcc799ad401a1e8ac8333790df9
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
(cherry picked from commit 5ee506ed35)
In commit 745664bd79 a use-after-free
was fixed, but this led to an occasional double-free. This patch
tracks the "live" allocation better.
Tested manually by a third party.
Related: RHBZ 1927877
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit dca565886b)
The conversion loop to the internal encoding does not follow
the interface contract that __GCONV_FULL_OUTPUT is only returned
after the internal wchar_t buffer has been filled completely. This
is enforced by the first of the two asserts in iconv/skeleton.c:
  /* We must run out of output buffer space in this
     rerun.  */
  assert (outbuf == outerr);
  assert (nstatus == __GCONV_FULL_OUTPUT);
This commit solves this issue by queuing a second wide character
which cannot be written immediately in the state variable, like
other converters already do (e.g., BIG5-HKSCS or TSCII).
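A toy model of the queuing approach; the state encoding here is
purely illustrative, not the converters' actual layout:

  #include <stdint.h>
  #include <wchar.h>

  #define PENDING_FLAG 0x80000000u

  /* Emit a pair of wide characters, assuming at least one output
     slot is free.  If only one fits, queue the second in *STATE
     rather than reporting full output with the pair half-written.
     Returns nonzero when the buffer filled up and SECOND is
     pending.  */
  static int
  emit_pair (wchar_t first, wchar_t second,
             wchar_t **outp, wchar_t *outend, uint32_t *state)
  {
    *(*outp)++ = first;
    if (*outp == outend)
      {
        *state = (uint32_t) second | PENDING_FLAG;
        return 1;
      }
    *(*outp)++ = second;
    return 0;
  }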
Reported-by: Tavis Ormandy <taviso@gmail.com>
(cherry picked from commit 7d88c6142c)
Calling an IFUNC function defined in an unrelocated executable also
leads to a segfault. Issue a fatal error message when calling an
IFUNC function defined in the unrelocated executable from a shared
library.
On x86, ifuncmain6pie failed with:
[hjl@gnu-cfl-2 build-i686-linux]$ ./elf/ifuncmain6pie --direct
./elf/ifuncmain6pie: IFUNC symbol 'foo' referenced in '/export/build/gnu/tools-build/glibc-32bit/build-i686-linux/elf/ifuncmod6.so' is defined in the executable and creates an unsatisfiable circular dependency.
[hjl@gnu-cfl-2 build-i686-linux]$ readelf -rW elf/ifuncmod6.so | grep foo
00003ff4 00000706 R_386_GLOB_DAT 0000400c foo_ptr
00003ff8 00000406 R_386_GLOB_DAT 00000000 foo
0000400c 00000401 R_386_32 00000000 foo
[hjl@gnu-cfl-2 build-i686-linux]$
Remove non-JUMP_SLOT relocations against foo in ifuncmod6.so, which
trigger the circular IFUNC dependency, and build ifuncmain6pie with
-Wl,-z,lazy.
(cherry picked from commits 6ea5b57afa
and 7137d682eb)
When copying with "rep movsb", if the distance between source and
destination is N*4GB + [1..63] with N >= 0, performance may be very
slow. This patch updates memmove-vec-unaligned-erms.S for AVX and
AVX512 versions with the distance in RCX:
	cmpl	$63, %ecx
	// Don't use "rep movsb" if ECX <= 63.
	jbe	L(Don't use rep movsb)
	Use "rep movsb"
Benchtests data with bench-memcpy, bench-memcpy-large, bench-memcpy-random
and bench-memcpy-walk on Skylake, Ice Lake and Tiger Lake show that its
performance impact is within noise range as "rep movsb" is only used for
data size >= 4KB.
(cherry picked from commit 3ec5d83d2a)
The byte 0xfe as input to the EUC-KR conversion denotes a user-defined
area and is not allowed. The from_euc_kr function used to skip two bytes
when told to skip over the unknown designation, potentially running over
the buffer end.
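A sketch of the corrected lead-byte test (boundary constants
illustrative; the real logic is in the EUC-KR converter):

  /* 0xfe denotes a user-defined area and is not a valid lead byte,
     so skipping an offending input byte advances by one, never two,
     and cannot run past the buffer end.  */
  static int
  euc_kr_lead_byte_p (unsigned char ch)
  {
    return ch >= 0xa1 && ch <= 0xfd;   /* 0xfe deliberately excluded.  */
  }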
(cherry picked from commit ee7a3144c9)
Previously, the UCS4 conversion routines limited the number of
characters they examined to the minimum of the number of characters in
the input and the number of characters in the output. This is not the
correct behavior when __GCONV_IGNORE_ERRORS is set, as we do not
consume an output character when we skip an invalid code unit.
Instead, track the input and output pointers and terminate the loop
when either reaches its limit.
This resolves assertion failures when resetting the input buffer in a step of
iconv, which assumes that the input will be fully consumed given sufficient
output space.
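An illustrative loop shape after the fix (simplified, not the exact
gconv code): input and output advance independently, so a skipped
code unit consumes input but no output.

  #include <stdint.h>
  #include <string.h>

  static void
  convert_ucs4 (const unsigned char **inp, const unsigned char *inend,
                unsigned char **outp, unsigned char *outend,
                int ignore_errors)
  {
    while (*inp + 4 <= inend && *outp + 4 <= outend)
      {
        uint32_t ch;
        memcpy (&ch, *inp, 4);
        if (ch > 0x7fffffff)            /* Invalid value: error case.  */
          {
            if (!ignore_errors)
              break;                    /* Report the error.  */
            *inp += 4;                  /* Skip input only.  */
            continue;
          }
        memcpy (*outp, &ch, 4);
        *inp += 4;
        *outp += 4;
      }
  }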
(cherry picked from commit 228edd356f)
This new variable allows various subsystems in glibc to run all or
some of their tests with MALLOC_CHECK_=3. This patch adds
infrastructure support for this variable as well as an implementation
in malloc/Makefile to allow running some of the tests with
MALLOC_CHECK_=3.
At present some tests in malloc/ have been excluded from the mcheck
tests either because they're specifically testing MALLOC_CHECK_ or
they are failing in master even without the Memory Tagging patches
that prompted this work. Some tests were reviewed and found to need
specific error points that MALLOC_CHECK_ defeats by terminating
early, but a thorough review of all tests is needed to bring them
into mcheck coverage.
Backported from 4f969166ce.
The IBM1364, IBM1371, IBM1388, IBM1390 and IBM1399 character sets
share converter logic (iconvdata/ibm1364.c) which would reject
redundant shift sequences when processing input in these character
sets. This led to a hang in the iconv program (CVE-2020-27618).
This commit adjusts the converter to ignore redundant shift sequences
and adds test cases for iconv_prog hangs that would be triggered upon
their rejection. This brings the implementation in line with other
converters that also ignore redundant shift sequences (e.g. IBM930
etc., fixed in commit 692de4b396).
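A sketch of the adjusted shift handling (byte values are the usual
SI/SO controls; the surrounding converter logic is omitted):

  enum charset { SB_SET, DB_SET };

  /* Consume a shift byte.  A shift selecting the already-active set
     is now simply ignored instead of being rejected as an error.
     Returns nonzero if CH was a shift byte.  */
  static int
  handle_shift (unsigned char ch, enum charset *cur)
  {
    if (ch == 0x0e)         /* SO: switch to the double-byte set.  */
      {
        *cur = DB_SET;      /* No-op if already active: that is fine.  */
        return 1;
      }
    if (ch == 0x0f)         /* SI: switch to the single-byte set.  */
      {
        *cur = SB_SET;
        return 1;
      }
    return 0;
  }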
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 9a99c68214)
The commit 605f38177d (sh: Split BE/LE abilist) did not take into
consideration the SH4 FPU support.
Checked with a build for sh4-linux-gnu and manually checked that
the implementations at sysdeps/sh/sh4/fpu/ are selected.
John Paul Adrian Glaubitz also confirmed it fixes the build issues
he encountered.
(cherry picked from commit 9ff2674ef8)
The variant PCS support was ineffective because in the common case
linkmap->l_mach.plt == 0 but then the symbol table flags were ignored
and normal lazy binding was used instead of resolving the relocs early.
(This was a misunderstanding about how GOT[1] is setup by the linker.)
In practice this mainly affects SVE calls when the vector length is
more than 128 bits, then the top bits of the argument registers get
clobbered during lazy binding.
Fixes bug 26798.
(cherry picked from commit 558251bd87)
Modify the shareable cache '__x86_shared_cache_size', which is a
factor in computing the non-temporal threshold parameter
'__x86_shared_non_temporal_threshold', to optimize memcpy for AMD Zen
architectures.
In the existing implementation, the shareable cache is computed as 'L3
per thread, L2 per core'. Recomputing this shareable cache as 'L3 per
CCX (Core-Complex)' has brought in performance gains.
As per the large bench variant results, this patch also addresses the
regression problem on AMD Zen architectures.
Backport of commit 59803e81f9 upstream,
with the fix from cb3a749a22 ("x86:
Restore processing of cache size tunables in init_cacheinfo") applied.
Reviewed-by: Premachandra Mallappa <premachandra.mallappa@amd.com>
Co-Authored-by: Florian Weimer <fweimer@redhat.com>
The __x86_shared_non_temporal_threshold determines when memcpy on x86
uses non_temporal stores to avoid pushing other data out of the last
level cache.
This patch proposes to revert the calculation change made by H.J. Lu's
patch of June 2, 2017.
H.J. Lu's patch selected a threshold suitable for a single thread
getting maximum performance. It was tuned using the single threaded
large memcpy micro benchmark on an 8 core processor. That change
raised the threshold from using 3/4 of one thread's share of the
cache to using 3/4 of the entire cache of a multi-threaded system
before switching to non-temporal stores. Multi-threaded systems with
more than a few threads are server-class and typically have many
active threads. If one thread consumes 3/4 of the available cache for
all threads, it will cause other active threads to have data removed
from the cache. Two examples show the range of the effect. John
McCalpin's widely parallel Stream benchmark, which runs in parallel
and fetches data sequentially, saw a 20% slowdown with this patch on
an internal system test of 128 threads. This regression was discovered
when comparing OL8 performance to OL7. An example that compares
normal stores to non-temporal stores may be found at
https://vgatherps.github.io/2018-09-02-nontemporal/. A simple test
shows performance loss of 400 to 500% due to a failure to use
nontemporal stores. These performance losses are most likely to occur
when the system load is heaviest and good performance is critical.
The tunable x86_non_temporal_threshold can be used to override the
default for the knowledgeable user who really wants maximum cache
allocation to a single thread in a multi-threaded system.
The manual entry for the tunable has been expanded to provide
more information about its purpose.
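An illustrative form of the reverted default (the real logic is in
sysdeps/x86/cacheinfo.c and honors the tunable):

  static long int
  non_temporal_threshold (long int shared_cache_per_thread,
                          long int tunable_override)
  {
    if (tunable_override != 0)
      return tunable_override;
    /* Back to 3/4 of one thread's share of the last-level cache,
       rather than 3/4 of the entire cache.  */
    return shared_cache_per_thread * 3 / 4;
  }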
modified: sysdeps/x86/cacheinfo.c
modified: manual/tunables.texi
(cherry picked from commit d3c5702747)
(Conflicts in sysdeps/x86/cacheinfo.c due to missing
rep_movsb_threshold, x86_rep_stosb_threshold tunables.)
Add CPU detection of Neoverse N2 and Neoverse V1, and select __memcpy_simd as
the memcpy/memmove ifunc.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
(cherry picked from commit e11ed9d2b4)
Further optimize integer memcpy. Small cases now include copies up
to 32 bytes. 64-128 byte copies are split into two cases to improve
performance of 64-96 byte copies. Comments have been rewritten.
(cherry picked from commit 7000651327)
On some microarchitectures performance of the backwards memmove improves if
the stores use STR with decreasing addresses. So change the memmove loop
in memcpy_advsimd.S to use 2x STR rather than STP.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
(cherry picked from commit bd394d131c)
Add a new memcpy using 128-bit Q registers - this is faster on modern
cores and reduces codesize. Similar to the generic memcpy, small cases
include copies up to 32 bytes. 64-128 byte copies are split into two
cases to improve performance of 64-96 byte copies. Large copies align
the source rather than the destination.
bench-memcpy-random is ~9% faster than memcpy_falkor on Neoverse N1,
so make this memcpy the default on N1 (on Centriq it is 15% faster than
memcpy_falkor).
Passes GLIBC regression tests.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
(cherry picked from commit 4a733bf375)
Given almost all uses of ENTRY are for string/memory functions,
align ENTRY to a cacheline to simplify things.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 34f0d01d5e)
Commit 91927b7c76 (Rewrite iconv option parsing [BZ #19519]) did not
handle cases where the output codeset for translations (via the `gettext'
family of functions) might have a caller specified encoding suffix such as
TRANSLIT or IGNORE. This led to a regression where translations did not
work when the codeset had a suffix.
This commit fixes the above issue by parsing any suffixes passed to
__dcigettext and adds two new test-cases to intl/tst-codeset.c to
verify correct behaviour. The iconv-internal function __gconv_create_spec
and the static iconv-internal function gconv_destroy_spec are now visible
internally within glibc and used in intl/dcigettext.c.
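For example, a caller can once again request a codeset with a suffix
(the domain name and locale directory here are placeholders):

  #include <libintl.h>
  #include <locale.h>
  #include <stdio.h>

  int
  main (void)
  {
    setlocale (LC_ALL, "");
    bindtextdomain ("hello", "/usr/share/locale");
    /* A codeset suffix such as TRANSLIT is honored again.  */
    bind_textdomain_codeset ("hello", "ASCII//TRANSLIT");
    textdomain ("hello");
    puts (gettext ("Hello, world!"));
    return 0;
  }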
(cherry picked from commit 7d4ec75e11)
This commit replaces string manipulation during `iconv_open' and iconv_prog
option parsing with a structured, flag based conversion specification. In
doing so, it alters the internal `__gconv_open' interface and accordingly
adjusts its uses.
This change fixes several hangs in the iconv program and therefore includes
a new test to exercise iconv_prog options that originally led to these hangs.
It also includes a new regression test for option handling in the iconv
function.
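A minimal example of the kind of conversion specification the new
parsing handles:

  #include <iconv.h>
  #include <stdio.h>

  int
  main (void)
  {
    /* Suffixes are now parsed into a flag-based specification rather
       than by ad hoc string manipulation.  */
    iconv_t cd = iconv_open ("ASCII//TRANSLIT//IGNORE", "UTF-8");
    if (cd == (iconv_t) -1)
      {
        perror ("iconv_open");
        return 1;
      }
    iconv_close (cd);
    return 0;
  }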
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 91927b7c76)
__GLRO loaded the word after the requested variable on big-endian
PowerPC, where LOWORD is 4. This can cause the memset implementation
to go wrong because the masking with the cache line size produces
wrong results, particularly if the loaded value happens to be 1.
The __GLRO macro is not used in any place where loading the lower
32-bit word of a 64-bit value is desired, so the +4 offset is always
wrong.
Fixes commit 18363b4f01
("powerpc: Move cache line size to rtld_global_ro") and bug 26332.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 7650321ce0)
nptl has
  /* Opcodes and data types for communication with the signal handler to
     change user/group IDs.  */
  struct xid_command
  {
    int syscall_no;
    long int id[3];
    volatile int cntr;
    volatile int error;
  };
  /* This must be last, otherwise the current thread might not have
     permissions to send SIGSETXID syscall to the other threads.  */
  result = INTERNAL_SYSCALL_NCS (cmdp->syscall_no, 3,
                                 cmdp->id[0], cmdp->id[1], cmdp->id[2]);
But the second argument of the setgroups syscall is a pointer:
  int setgroups (size_t size, const gid_t *list);
On x32, pointers passed to a syscall must have pointer type so that
they are zero-extended. The kernel XID arguments are unsigned and do
not require sign extension. Change xid_command to
  struct xid_command
  {
    int syscall_no;
    unsigned long int id[3];
    volatile int cntr;
    volatile int error;
  };
so that all arguments are zero-extended. A testcase is added for x32;
without the fix, setgroups returned EFAULT when running as root.
(cherry picked from commit 0ad926f349)
The SELinux API deprecated several symbols in its 3.1 release, including
security_context_t, matchpathcon, avc_init, and sidput, which are used in
makedb and nscd. While the usage of these should eventually be replaced by
newer interfaces, this commit disables GCC warnings due to the use of the
above symbols.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 04726be814)
Unsigned branch instructions could be used for r2 to fix the wrong
behavior when a negative length is passed to memcpy.
This commit fixes the armv7 version.
(cherry picked from commit beea361050)
Unsigned branch instructions could be used for r2 to fix the wrong
behavior when a negative length is passed to memcpy and memmove.
This commit fixes the generic arm implementation of memcpy and memmove.
(cherry picked from commit 79a4fa341b)
strcmp-avx2.S: In the AVX2 strncmp function, strings are compared in
chunks of 4 vector sizes (i.e., 32x4 = 128 bytes for AVX2). After the
first 4-vector-size comparison, the code must check whether it has
already passed the given offset. This patch implements the AVX2
offset check condition for the strncmp function, for the case where
both strings are equal over the first 4 vector sizes.
(cherry picked from commit 75870237ff)
During cleanup, before returning from get*_r functions, the end*ent
calls must not change errno. Otherwise, an ERANGE error from the
underlying implementation can be hidden, causing unexpected lookup
failures. This commit introduces an internal_end*ent_noerror
function which saves and restores errno, and marks the original
internal_end*ent function as warn_unused_result, so that it is used
only in contexts where errors from it can be handled explicitly.
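A sketch of the noerror wrapper (state type and callee illustrative):

  #include <errno.h>

  struct ent_state;                       /* illustrative */
  extern int internal_endent (struct ent_state *);

  /* Preserve errno across cleanup so an ERANGE from the lookup
     itself is not clobbered.  */
  static void
  internal_endent_noerror (struct ent_state *state)
  {
    int saved_errno = errno;
    (void) internal_endent (state);
    errno = saved_errno;
  }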
Reviewed-by: DJ Delorie <dj@redhat.com>
(cherry picked from commit 790b8dda44)
This patch fixes the optimized implementation of strcpy and strnlen
on a big-endian arm64 machine.
The optimized method uses NEON, which can process 128 bits with one
instruction. On a big-endian machine, the bit order should be reversed
for the whole 128-bit double word. But the instruction
  rev64 datav.16b, datav.16b
reverses the 64 bits in each of the two halves rather than reversing
all 128 bits. There is no rev128 instruction to reverse the 128 bits,
but we can fix this by loading the data registers accordingly.
Fixes 0237b61526e7 ("aarch64: Optimized implementation of strcpy") and
2911cb68ed3d ("aarch64: Optimized implementation of strnlen").
Signed-off-by: Lexi Shao <shaolexi@huawei.com>
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
(cherry picked from commit 59b64f9cbb)
When using outline atomics (-moutline-atomics, the default for ARMv8-A
starting with GCC 10), libgcc contains an ELF constructor which calls
__getauxval. This code is built outside of glibc, so none of its
internal PLT avoidance schemes can be applied to it. This change
suppresses the elf/check-localplt failure.
(cherry picked from commit 16536e98e3)
(cherry picked from commit 587a332b6fadc4d9f1035ecaa52ba32ee41cd300)
Since __x86_shared_non_temporal_threshold is defined as
long int __x86_shared_non_temporal_threshold;
and long int is 4 bytes for x32, use RDX_LP to compare against
__x86_shared_non_temporal_threshold in assembly code.
(cherry picked from commit 55c7bcc71b)
Confirmed by CLDR and a native speaker: "abril" is more often used even
if "abrial" is also correct. Both nominative (alt_mon) and genitive (mon)
cases are updated.
Add a C wrapper to pass arguments in
/* Control process execution. */
extern int prctl (int __option, ...) __THROW;
to prctl syscall:
extern int prctl (int, unsigned long int, unsigned long int,
unsigned long int, unsigned long int);
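A sketch of such a wrapper (hypothetical name so as not to clash with
the real prctl; glibc's version uses the internal syscall machinery
instead of syscall(2)):

  #include <stdarg.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int
  my_prctl (int option, ...)
  {
    va_list ap;
    va_start (ap, option);
    /* Fetch all four possible arguments as unsigned long int so that
       x32 zero-extends them before the syscall.  */
    unsigned long int a2 = va_arg (ap, unsigned long int);
    unsigned long int a3 = va_arg (ap, unsigned long int);
    unsigned long int a4 = va_arg (ap, unsigned long int);
    unsigned long int a5 = va_arg (ap, unsigned long int);
    va_end (ap);
    return syscall (SYS_prctl, option, a2, a3, a4, a5);
  }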
(cherry picked from commit ff026950e2)
LOADARGS_N in powerpc/sysdep.h uses argN as local variables. It breaks
when argN is also a function argument. Rename argN to _argN to avoid
conflict.
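A simplified illustration of the clash:

  /* Old macro: the local is named arg1, so expanding it on a caller
     variable also named arg1 makes the new local shadow it and
     initialize from itself (an indeterminate value).  */
  #define LOADARGS_OLD(x) long int arg1 = (long int) (x);
  /* Fixed macro: the underscore prefix avoids the conflict.  */
  #define LOADARGS_NEW(x) long int _arg1 = (long int) (x);

  long int
  f (long int arg1)
  {
    LOADARGS_OLD (arg1);   /* expands to:
                              long int arg1 = (long int) (arg1);  */
    return 0;
  }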
(cherry picked from commit 14f43dd34d)
Since the U marker can only be applied to 2 unsigned long arguments
in syscalls.list files, add a C wrapper for the process_vm_readv and
process_vm_writev syscalls, which have more than 2 unsigned long
arguments.
(cherry picked from commit ad9fd65d71)
Mark unsigned long arguments in mmap, read, recv, recvfrom, send, sendto,
write, ioperm, sendfile64, setxattr, lsetxattr, fsetxattr, getxattr,
lgetxattr, fgetxattr, listxattr, llistxattr and flistxattr with U in
syscalls.list files.
(cherry picked from commit 86f4f2263b)
Add a test to pass 64-bit long arguments to syscall with undefined upper
32 bits on x32.
Tested on i386, x86-64 and x32 as well as with build-many-glibcs.py.
(cherry picked from commit 781dacc4f4)
X32 has 32-bit long and pointer with 64-bit off_t. Since the x32
psABI requires that pointers passed in registers must be
zero-extended to 64 bits, x32 can share many syscall interfaces with
LP64. When an LP64 syscall with long and unsigned long int arguments
is used for x32, these arguments must be properly extended to 64
bits. Otherwise, if the upper 32 bits of the register have an
undefined value, such a syscall will be rejected by the kernel.
For syscalls implemented in assembly code, 'U' is added to the
syscall signature key letters for unsigned long, which is
zero-extended to 64 bits. SYSCALL_ULONG_ARG_1 and SYSCALL_ULONG_ARG_2
are passed
to syscall-template.S for the first and the second unsigned long int
arguments if PSEUDOS_HAVE_ULONG_INDICES is defined. They are used by
x32 to zero-extend 32-bit arguments to 64 bits.
Tested on i386, x86-64 and x32 as well as with build-many-glibcs.py.
(cherry picked from commit 2ad5d0845d)
X32 has 32-bit long and pointer with 64-bit off_t. Since the x32
psABI requires that pointers passed in registers must be
zero-extended to 64 bits, x32 can share many syscall interfaces with
LP64. When an LP64 syscall with long and unsigned long arguments is
used for x32, these arguments must be properly extended to 64 bits.
Otherwise, if the upper 32 bits of the register have an undefined
value, such a syscall will be rejected by the kernel.
Enforce zero-extension for pointers and array system call arguments.
For integer types, extend to int64_t (the full register) using a
regular cast, resulting in zero or sign extension based on the
signedness of the original type.
For
void *mmap(void *addr, size_t length, int prot, int flags,
int fd, off_t offset);
we now generate
0: 41 f7 c1 ff 0f 00 00 test $0xfff,%r9d
7: 75 1f jne 28 <__mmap64+0x28>
9: 48 63 d2 movslq %edx,%rdx
c: 89 f6 mov %esi,%esi
e: 4d 63 c0 movslq %r8d,%r8
11: 4c 63 d1 movslq %ecx,%r10
14: b8 09 00 00 40 mov $0x40000009,%eax
19: 0f 05 syscall
That is:
1. addr is unchanged.
2. length is zero-extended to 64 bits.
3. prot is sign-extended to 64 bits.
4. flags is sign-extended to 64 bits.
5. fd is sign-extended to 64 bits.
6. offset is unchanged.
For int arguments, since the kernel uses only the lower 32 bits and
ignores the upper 32 bits in 64-bit registers, these work correctly.
Tested on x86-64 and x32. There are no code changes on x86-64.
(cherry picked from commit df76ff3a44)