X32 has 32-bit long and pointer types with a 64-bit off_t. Since the x32
psABI requires that pointers passed in registers be zero-extended to
64 bits, x32 can share many syscall interfaces with LP64. When an LP64
syscall with long and unsigned long arguments is used for x32, these
arguments must be properly extended to 64 bits; otherwise, if the upper
32 bits of the register hold an undefined value, the syscall will be
rejected by the kernel.
Enforce zero-extension for pointers and array system call arguments.
For integer types, extend to int64_t (the full register) using a
regular cast, resulting in zero or sign extension based on the
signedness of the original type.
For
void *mmap(void *addr, size_t length, int prot, int flags,
int fd, off_t offset);
we now generate
0: 41 f7 c1 ff 0f 00 00 test $0xfff,%r9d
7: 75 1f jne 28 <__mmap64+0x28>
9: 48 63 d2 movslq %edx,%rdx
c: 89 f6 mov %esi,%esi
e: 4d 63 c0 movslq %r8d,%r8
11: 4c 63 d1 movslq %ecx,%r10
14: b8 09 00 00 40 mov $0x40000009,%eax
19: 0f 05 syscall
That is:
1. addr is unchanged.
2. length is zero-extended to 64 bits.
3. prot is sign-extended to 64 bits.
4. flags is sign-extended to 64 bits.
5. fd is sign-extended to 64 bits.
6. offset is unchanged.
For int arguments, since the kernel uses only the lower 32 bits and ignores
the upper 32 bits of 64-bit registers, these work correctly.
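The same rules can be written out as plain C casts. The helper below is a
minimal illustrative sketch, not part of glibc (the real wrappers use
internal syscall macros); it only shows how each mmap argument is widened
to a full 64-bit register value on x32.

#include <stdint.h>
#include <sys/types.h>

/* Illustrative only: widen each mmap argument to the 64-bit register
   value expected by the shared LP64 syscall interface.  */
static inline void
widen_mmap_args (void *addr, size_t length, int prot, int flags,
                 int fd, off_t offset, uint64_t regs[6])
{
  regs[0] = (uintptr_t) addr;      /* pointer: zero-extended per the psABI.  */
  regs[1] = (int64_t) length;      /* size_t is unsigned: zero-extends.  */
  regs[2] = (int64_t) prot;        /* int: sign-extends.  */
  regs[3] = (int64_t) flags;       /* int: sign-extends.  */
  regs[4] = (int64_t) fd;          /* int: sign-extends.  */
  regs[5] = (int64_t) offset;      /* off_t is 64-bit on x32: unchanged.  */
}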
Tested on x86-64 and x32. There are no code changes on x86-64.
This patch adds the GRND_INSECURE constant from Linux 5.6 to glibc's
sys/random.h and to the documentation. The constant acts as a no-op for
the Hurd implementation (which doesn't check whether the flags are
known), which is semantically fine, while older Linux kernels reject
unknown flags with an EINVAL error.
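For reference, a minimal (illustrative) use of the new flag with getrandom
looks like this; on kernels before 5.6 the call fails with EINVAL, as
noted above.

#include <stdio.h>
#include <sys/types.h>
#include <sys/random.h>

int
main (void)
{
  unsigned char buf[16];
  /* Ask for bytes that need not be cryptographically strong; this never
     blocks, even before the entropy pool is initialized.  */
  ssize_t n = getrandom (buf, sizeof buf, GRND_INSECURE);
  if (n < 0)
    perror ("getrandom (GRND_INSECURE)");
  else
    printf ("got %zd bytes\n", n);
  return 0;
}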
Tested for x86_64.
This patch updates the kernel version in the test tst-mman-consts.py
to 5.6. (There are no new constants covered by this test in 5.6 that
need any other header changes.)
Tested with build-many-glibcs.py.
There are 2 new input values that need to be marked as
xfail-rounding:ibm128-libgcc because they are known to fail due to
libgcc issues with different rounding modes.
The other tests just need an increase in ULP.
Since GCC 6.2 or later is required to build glibc, remove build support
for GCC older than GCC 6.
Tested with GCC 6.4 and GCC 9.3.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Confirmed by CLDR and a native speaker: "abril" is more often used even
if "abrial" is also correct. Both nominative (alt_mon) and genitive (mon)
cases are updated.
This patch provides a new explicit 64-bit function, __mq_timedreceive_time64,
for receiving messages with an absolute timeout.
Moreover, the 32-bit version, __mq_timedreceive, has been refactored to
use __mq_timedreceive_time64 internally.
__mq_timedreceive is now meant to be used on systems still supporting
32-bit time (__TIMESIZE != 64), hence the necessary conversion from
struct timespec to the 64-bit struct __timespec64.
The new mq_timedreceive_time64 syscall, available from Linux 5.1 onwards,
is used when applicable.
As this wrapper function is also used internally in glibc, e.g. to provide
the mq_receive implementation, an explicit check for abs_timeout being NULL
has been added because of the conversion between struct timespec and
struct __timespec64. Before this change the Linux kernel handled this
NULL pointer.
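The wrapper pattern looks roughly like the sketch below (simplified and
illustrative, not the verbatim glibc sources; includes are omitted, and
struct __timespec64 and valid_timespec_to_timespec64 are glibc-internal).

/* Sketch of the 32-bit wrapper: convert the caller's struct timespec to
   the 64-bit representation, preserving a NULL timeout, and defer to the
   explicit 64-bit function.  */
ssize_t
__mq_timedreceive (mqd_t mqdes, char *msg_ptr, size_t msg_len,
                   unsigned int *msg_prio,
                   const struct timespec *abs_timeout)
{
  struct __timespec64 ts64;
  if (abs_timeout != NULL)
    ts64 = valid_timespec_to_timespec64 (*abs_timeout);

  return __mq_timedreceive_time64 (mqdes, msg_ptr, msg_len, msg_prio,
                                   abs_timeout != NULL ? &ts64 : NULL);
}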
Build tests:
- ./src/scripts/build-many-glibcs.py glibcs
Run-time tests:
- Run specific tests on ARM/x86 32bit systems (qemu):
https://github.com/lmajewski/meta-y2038 and run tests:
https://github.com/lmajewski/y2038-tests/commits/master
Linux kernel, headers and minimal kernel version for glibc build test matrix:
- Linux v5.1 (with mq_timedreceive_time64) and glibc built with v5.1 as
minimal kernel version (--enable-kernel="5.1.0")
The __ASSUME_TIME64_SYSCALLS flag defined.
- Linux v5.1 and default minimal kernel version
The __ASSUME_TIME64_SYSCALLS not defined, but kernel supports
mq_timedreceive_time64 syscall.
- Linux v4.19 (no mq_timedreceive_time64 support) with default minimal kernel
version for contemporary glibc (3.2.0)
This kernel doesn't support mq_timedreceive_time64 syscall, so the fallback to
mq_timedreceive is tested.
Above tests were performed with Y2038 redirection applied as well as without
(so the __TIMESIZE != 64 execution path is checked as well).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This patch provides a new explicit 64-bit function, __mq_timedsend_time64,
for sending messages with an absolute timeout.
Moreover, the 32-bit version, __mq_timedsend, has been refactored to
use __mq_timedsend_time64 internally.
__mq_timedsend is now meant to be used on systems still supporting
32-bit time (__TIMESIZE != 64), hence the necessary conversion from
struct timespec to the 64-bit struct __timespec64.
The new mq_timedsend_time64 syscall, available from Linux 5.1 onwards,
is used when applicable.
As this wrapper function is also used internally in glibc, e.g. to provide
the mq_send implementation, an explicit check for abs_timeout being NULL
has been added because of the conversion between struct timespec and
struct __timespec64. Before this change the Linux kernel handled this
NULL pointer.
Build tests:
- ./src/scripts/build-many-glibcs.py glibcs
Run-time tests:
- Run specific tests on ARM/x86 32bit systems (qemu):
https://github.com/lmajewski/meta-y2038 and run tests:
https://github.com/lmajewski/y2038-tests/commits/master
Linux kernel, headers and minimal kernel version for glibc build test matrix:
- Linux v5.1 (with mq_timedsend_time64) and glibc built with v5.1 as a
minimal kernel version (--enable-kernel="5.1.0")
The __ASSUME_TIME64_SYSCALLS flag defined.
- Linux v5.1 and default minimal kernel version
The __ASSUME_TIME64_SYSCALLS not defined, but kernel supports
mq_timedsend_time64 syscall.
- Linux v4.19 (no mq_timedsend_time64 support) with default minimal kernel
version for contemporary glibc (3.2.0)
This kernel doesn't support mq_timedsend_time64 syscall, so the fallback to
mq_timedsend is tested.
Above tests were performed with Y2038 redirection applied as well as without
(so the __TIMESIZE != 64 execution path is checked as well).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The definition of struct __timespec64 has been moved from ./include/time.h
to ./include/struct___timespec64.h.
This change prevents polluting other glibc namespaces when headers are
modified to support 64-bit time on architectures with __WORDSIZE==32.
Now it is possible to include just the definition of this particular
structure when needed.
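For reference, the structure defined there looks roughly like this (a
sketch only; the real header places the padding before or after tv_nsec
depending on byte order):

/* Rough sketch of struct __timespec64 on __WORDSIZE==32 targets: a
   64-bit seconds field plus a 32-bit nanoseconds field and explicit
   padding, so the layout matches a 64-bit struct timespec.  */
struct __timespec64
{
  __time64_t tv_sec;    /* Seconds.  */
  __int32_t tv_nsec;    /* Nanoseconds.  */
  __int32_t __padding;  /* Pad to 64 bits (position depends on endianness).  */
};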
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The code for set_max_fast() stores an "impossibly small value"
instead of zero when the parameter is zero. However, for
small values of the parameter (e.g. 1 or 2) the computation
results in zero being stored anyway.
This patch instead checks whether the parameter is small enough for the
computation to result in zero, so that zero is never stored (see the
sketch after the value list below).
Key values which result in zero being stored:
  x86-64:  1..7  (or other 64-bit)
  i686:    1..11
  armhfp:  1..3  (or other 32-bit)
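A minimal sketch of the new check (illustrative, not the verbatim patch;
IMPOSSIBLY_SMALL stands in for whatever non-zero sentinel malloc already
uses for the zero case):

/* If the request is so small that rounding (s + SIZE_SZ) down to the
   malloc alignment would yield 0, store the non-zero sentinel instead,
   exactly as for a request of 0.  */
#define set_max_fast(s)                                              \
  global_max_fast = (((size_t) (s) <= MALLOC_ALIGN_MASK - SIZE_SZ)   \
                     ? IMPOSSIBLY_SMALL                              \
                     : (((s) + SIZE_SZ) & ~MALLOC_ALIGN_MASK))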
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
We turn off this feature to avoid polluting our shared library with
a specific value. However, static libgcc is not under our control
and has it enabled for its ibm128 routines, which pollutes the
resulting shared libraries.
Attach a post-linking hook to replace this section with one crafted
as hard-float + indeterminate ldbl. This allows IEEE ldbl users to
avoid having to disable the gnu attributes feature which should
protect them from linking ibm ldbl libraries using the gnu attributes
feature.
Currently, this only replaces libc and libm which support both ldbl
formats and rely on application code to explicitly determine which
is to be used.
Strictly speaking, the section could be deleted with little lost value.
However, correctly set attributes could prove useful for some future
change, and so could deliberately absent attributes.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
-mabi=ieeelongdouble triggers libstdc++'s _Float128
support, which then breaks if <algorithm> is included. For now,
explicitly disable _Float128 for such tests.
I have opened up GCC BZ 94080 to track this.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
I have observed a bug in GCC 7.4.0 whereby __mulkc3 calls are
swapped with __multc3 depending on ABI selection. For the
sake of being overly cautious, build all _Float128 files
with ibm128 to work around these compilers. This has been
noted in GCC BZ 84914, and will not be fixed for GCC 7.
Likewise, non-math files built with _Float128 are assumed
to use ibm long double. Explicitly preserve this
assumption.
Finally, add some bootstrapping code to avoid applying
these options until IEEE long double is enabled as they
require GCC 7 and above.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
The test for enabling _Float128 or IEEE 128 long double can be
greatly simplified knowing that there is no ibm128, thus we require
no special cases, and everything is canonical.
This reverts the changes to ldbl-128ibm iscanonical.h from commit
8dbfea3a20 and extends the check
for __NO_LONG_DOUBLE_MATH to include a check for float128 redirects
to long double.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
check_consistency should be disabled for GCC 5 and above since there is
no fixed PIC register in those versions. Check __GNUC_PREREQ (5,0)
instead of OPTIMIZE_FOR_GCC_5, since OPTIMIZE_FOR_GCC_5 is false with
-fno-omit-frame-pointer.
Linux 5.6 has new openat2 and pidfd_getfd syscalls. This patch adds
them to syscall-names.list and regenerates the arch-syscall.h files.
Tested with build-many-glibcs.py.
binutils ld has supported --audit and --depaudit for a long time;
only the support in glibc has been missing.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
All list elements are colon-separated strings, and there is a hard
upper limit for the number of audit modules, so it is possible to
pre-allocate a fixed-size array of strings to which the LD_AUDIT
environment variable and --audit arguments are added.
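The resulting data structure is, in spirit, something like this sketch
(names and the exact bound are illustrative, not the actual rtld sources):

#include <stddef.h>

/* One local, fixed-size list built during startup.  Each entry is a
   colon-separated string taken from the LD_AUDIT environment variable
   or a --audit command-line argument.  The bound of 32 is a placeholder
   for the real audit-module limit.  */
struct audit_list
{
  const char *audit_strings[32];
  size_t length;
};

static void
audit_list_add_string (struct audit_list *list, const char *string)
{
  if (list->length < sizeof list->audit_strings / sizeof list->audit_strings[0])
    list->audit_strings[list->length++] = string;
}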
Also eliminate the global variables for the audit list because
the list is only needed briefly during startup.
There is a slight behavior change: All duplicate LD_AUDIT environment
variables are now processed, not just the last one as before. However,
such environment vectors are invalid anyway.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
The advantage is that the buffer always contains the number
of characters returned from the function, which allows one to use
a sequence like
/* No more audit module output. */
line_length = xgetline (&buffer, &buffer_length, fp);
TEST_COMPARE_BLOB ("", 0, buffer, line_length);
to check for an expected EOF, while also reporting any unexpected
extra data encountered.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
All cancellable syscalls are done by C implementations, so there is
no need for a specialized implementation to optimize register usage.
It fixes BZ #25765.
Checked on x86_64-linux-gnu.
Add the test "tst-safe-linking" to verify that Safe-Linking works
as expected. The test checks these 3 main flows:
* tcache protection
* fastbin protection
* malloc_consolidate() correctness
As there is a random chance of 1/16 that the alignment will remain
correct, the test checks each flow up to 10 times, using different random
values for the pointer corruption. As a result, the chance of a false
failure of a given tested flow is 2**(-40), and thus highly unlikely.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Now that there is a generic __timeval32 and helpers, we can use them
for Alpha instead of the Alpha-specific ones.
Reviewed-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The Linux kernel expects rusage to use a 32-bit time_t, even on archs
with a 64-bit time_t (like RV32). To address this let's convert
rusage to/from 32-bit and 64-bit to ensure the kernel always gets
a 32-bit time_t.
While we are converting these functions let's also convert them to be
the y2038 safe versions. This means there is a *64 function that is
called by a backwards compatible wrapper.
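The conversion amounts to widening the two timeval members on the way out
of the kernel; a rough sketch follows (the struct names here are
assumptions for illustration, not necessarily the actual internal glibc
types):

/* Illustrative: the kernel fills a rusage whose ru_utime/ru_stime use
   32-bit seconds; the 64-bit-time_t wrapper widens them for the caller
   and copies the remaining counters through unchanged.  */
static void
rusage32_to_rusage64 (const struct __rusage32 *r32, struct __rusage64 *r64)
{
  r64->ru_utime.tv_sec  = r32->ru_utime.tv_sec;
  r64->ru_utime.tv_usec = r32->ru_utime.tv_usec;
  r64->ru_stime.tv_sec  = r32->ru_stime.tv_sec;
  r64->ru_stime.tv_usec = r32->ru_stime.tv_usec;
  r64->ru_maxrss = r32->ru_maxrss;
  r64->ru_ixrss  = r32->ru_ixrss;
  /* ... the remaining ru_* counters copy through the same way.  */
}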
Reviewed-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The Linux kernel expects itimerval to use a 32-bit time_t, even on archs
with a 64-bit time_t (like RV32). To address this let's convert
itimerval to/from 32-bit and 64-bit to ensure the kernel always gets
a 32-bit time_t.
While we are converting these functions let's also convert them to be
the y2038 safe versions. This means there is a *64 function that is
called by a backwards compatible wrapper.
Tested-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
On y2038 safe 32-bit systems the Linux kernel expects itimerval
and rusage to use a 32-bit time_t, even though the other time_t's
are 64-bit. There are currently no plans to make 64-bit time_t versions
of these structs.
There are also other occurrences where the time passed to the kernel via
timeval doesn't match the wordsize.
To handle these cases let's define a new macro
__KERNEL_OLD_TIMEVAL_MATCHES_TIMEVAL64. This macro specifies if the
kernel's old_timeval matches the new timeval64. This should be 1 for
64-bit architectures except for Alpha's osf syscalls. The define should
be 0 for 32-bit architectures and Alpha's osf syscalls.
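An illustrative definition following this description (not the verbatim
kernel-features.h) would be:

/* 1 when the kernel's "old" struct timeval already matches the 64-bit
   timeval layout; 0 otherwise.  Per the description above this is 1 on
   64-bit architectures and 0 on 32-bit architectures, with Alpha
   overriding it to 0 for its osf syscalls.  */
#ifndef __KERNEL_OLD_TIMEVAL_MATCHES_TIMEVAL64
# if __WORDSIZE == 64
#  define __KERNEL_OLD_TIMEVAL_MATCHES_TIMEVAL64 1
# else
#  define __KERNEL_OLD_TIMEVAL_MATCHES_TIMEVAL64 0
# endif
#endif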
Reviewed-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The corner cases included were generated using exhaustive search
for all float/binary32 values on x86_64 (comparing to MPFR for
correct rounding to nearest).
For the j0/j1/y0 functions, only cases with ulp error <= 9 were
included.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Alignment checks should be performed on the user's buffer and NOT
on the mchunkptr as was done before. This caused bugs in 32-bit
versions, where 2*sizeof(size_t) != MALLOC_ALIGNMENT.
As the tcache works on users' buffers, it uses the aligned_OK()
check, while the rest work on mchunkptr and therefore check using
misaligned_chunk().
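The two checks are, roughly (this sketch follows malloc/malloc.c naming
but is illustrative, not the verbatim patch):

/* User buffers only need to satisfy MALLOC_ALIGNMENT ...  */
#define aligned_OK(m)  (((unsigned long) (m) & MALLOC_ALIGN_MASK) == 0)

/* ... while a chunk pointer is checked via the user buffer it maps to,
   since on 32-bit targets the chunk header itself need not be
   MALLOC_ALIGNMENT-aligned (2 * SIZE_SZ != MALLOC_ALIGNMENT).  */
#define misaligned_chunk(p)                                            \
  ((uintptr_t) (MALLOC_ALIGNMENT == 2 * SIZE_SZ ? (p) : chunk2mem (p)) \
   & MALLOC_ALIGN_MASK)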
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Removed unneeded '\' characters from the ends of lines and fixed some
indentation issues that were introduced in the original
Safe-Linking patch.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This patch makes build-many-glibcs.py use the current versions of
Linux (5.6) and GMP (6.2.0).
Tested with build-many-glibcs.py (host-libraries, compilers and glibcs
builds).
Adds a POWER9 version of fmaf128 that uses the xsmaddqp
instruction.
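In spirit, the POWER9 variant reduces to a single hardware fused
multiply-add; a hedged sketch (using the generic builtin rather than the
actual glibc source) is:

/* Illustrative only: when compiled for POWER9 with IEEE binary128
   support (-mcpu=power9 -mfloat128), GCC expands this builtin to the
   xsmaddqp instruction.  */
_Float128
fmaf128_power9 (_Float128 x, _Float128 y, _Float128 z)
{
  return __builtin_fmaf128 (x, y, z);
}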
Co-authored-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This addresses an issue that is present mainly on SMP machines running
threaded code. In a typical indirect call or PLT import stub, the
target address is loaded first. Then the global pointer is loaded into
the PIC register in the delay slot of a branch to the target address.
During lazy binding, the target address is a trampoline which transfers
to _dl_runtime_resolve().
_dl_runtime_resolve() uses the relocation offset stored in the global
pointer and the linkage map stored in the trampoline to find the
relocation. Then, the function descriptor is updated.
In a multi-threaded application, it is possible for the function
descriptor to be updated by another thread between the load of the
target address and the load of the global pointer. When this happens,
the relocation offset has been replaced
by the new global pointer. The function pointer has probably been
updated as well but there is no way to find the address of the function
descriptor and to transfer to the target. So, _dl_runtime_resolve()
typically crashes.
HP-UX addressed this problem by adding an extra pc-relative branch to
the trampoline. The descriptor is initially set up to point to the
branch. The branch then transfers to the trampoline. This allowed
the trampoline code to figure out which descriptor was being used
without any modification to user code. I didn't use this approach
as it is more complex and changes function pointer canonicalization.
The order of loading the target address and global pointer in
indirect calls was not consistent with the order used in import stubs.
In particular, $$dyncall and some inline versions of it loaded the
global pointer first. This was inconsistent with the global pointer
being updated first in dl-machine.h. Assuming the accesses are
ordered, we want elf_machine_fixup_plt() to store the global pointer
first and calls to load it last. Then, the global pointer will be
correct when the target function is entered.
However, just to make things more fun, HP added support for
out-of-order execution of accesses in PA 2.0. The accesses used by
calls are weakly ordered. So, it's possible under some circumstances
that a function might be entered with the wrong global pointer.
However, HP uses weakly ordered accesses in 64-bit HP-UX, so I assume
that loading the global pointer in the delay slot of the branch must
work consistently.
The basic fix for the race is a combination of modifying user code to
preserve the address of the function descriptor in register %r22 and
setting the least-significant bit in the relocation offset. The
latter was suggested by Carlos as a way to distinguish relocation
offsets from global pointer values. Conventionally, %r22 is used
as the address of the function descriptor in calls to $$dyncall.
So, it wasn't hard to preserve the address in %r22.
I have updated gcc trunk and gcc-9 branch to not clobber %r22 in
$$dyncall and inline indirect calls. I have also modified the import
stubs in binutils trunk and the 2.33 branch to preserve %r22. This
required making the stubs one instruction longer but we save one
relocation. I also modified binutils to align the .plt section on
an 8-byte boundary. This allows descriptors to be updated atomically
with a floating-point store.
With these changes, _dl_runtime_resolve() can fallback to an alternate
mechanism to find the relocation offset when it has been clobbered.
There's just one additional instruction in the fast path. I tested
the fallback function, _dl_fix_reloc_arg(), by changing the branch to
always use the fallback. Old code still runs as it did before.
Fixes bug 23296.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Similar to the fenvinline.h removal, this kind of optimization is better
implemented by the compiler. Also, newer code avoids setting exceptions
directly (for instance, the code that makes the new logf, log2f and powf
implementations support SVID compatibility).
BZ#94194 [1] is the corresponding GCC bug for adding replacements
for these on x86.
Checked on x86_64-linux-gnu and i686-linux-gnu.
[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94194
Similar to string2.h (18b10de7ce) and string3.h (09a596cc2c), this
patch removes fenvinline.h on all architectures. Currently
only powerpc implements some optimizations. This kind of optimization
is better implemented by the compiler (which handles the architecture
ISA transparently).
Also, the specific optimized powerpc implementation is becoming
convoluted, and these micro-optimizations are hardly widely used, let
alone a hotspot in real-world cases (non-default rounding modes are
used only in specific cases, and exception handling is most likely
done only on error paths). That only x86 implements a similar
optimization (in fenv.h) also indicates that these should not be in libc.
The math/test-fenv tests already cover everything in
math/test-fenvinline, so it is safe to remove the latter.
The powerpc fegetround optimization is moved to the internal
fenv_libc.h.
BZ#94193 [1] is the corresponding GCC bug for adding replacements
for these on powerpc.
Checked on x86_64-linux-gnu and powerpc64le-linux-gnu.
[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94193
Safe-Linking is a security mechanism that protects singly-linked
lists (such as the fastbins and tcache) from being tampered with by
attackers. The mechanism makes use of randomness from ASLR (mmap_base)
and, when combined with chunk alignment integrity checks, protects the
"next" pointers from being hijacked by an attacker.
While Safe-Unlinking protects doubly-linked lists (such as the small
bins), there wasn't any similar protection against attacks on
singly-linked lists. This solution protects against 3 common attacks:
* Partial pointer override: modifies the lower bytes (Little Endian)
* Full pointer override: hijacks the pointer to an attacker's location
* Unaligned chunks: pointing the list to an unaligned address
The design assumes an attacker doesn't know where the heap is located,
and uses the ASLR randomness to "sign" the single-linked pointers. We
mark the pointer as P and the location in which it is stored as L, and
the calculation will be:
* PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
* *L = PROTECT(P)
This way, the random bits from the address L (which start at the bit
in the PAGE_SHIFT position) are merged with the LSBs of the stored
protected pointer. This protection layer prevents an attacker from
modifying the pointer into a controlled value.
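In code, the scheme corresponds to a pair of macros along these lines
(a sketch in the spirit of the patch, not necessarily the exact sources;
PAGE_SHIFT is taken to be 12 here):

/* PROTECT_PTR masks pointer P with the page-granular randomness of the
   location L it is stored at; REVEAL_PTR inverts it, since XORing with
   the same value twice restores the original pointer.  */
#define PROTECT_PTR(pos, ptr) \
  ((__typeof (ptr)) ((((size_t) (pos)) >> 12) ^ ((size_t) (ptr))))
#define REVEAL_PTR(ptr)  PROTECT_PTR (&(ptr), ptr)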
An additional check that the chunks are MALLOC_ALIGNed adds an
important layer:
* Attackers can't point to illegal (unaligned) memory addresses
* Attackers must guess correctly the alignment bits
On standard 32 bit Linux machines, an attack will directly fail 7
out of 8 times, and on 64 bit machines it will fail 15 out of 16
times.
This proposed patch was benchmarked, and its effect on the overall
performance of the heap was negligible and couldn't be distinguished
from the normal variance between test runs on the vanilla version. A
similar protection was added to Chromium's version of TCMalloc
in 2012, and according to their documentation it had an overhead of
less than 2%.
Reviewed-by: DJ Delorie <dj@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>