Bug 26137 reports spurious "inexact" exceptions from strtod, on 32-bit
systems only, for a decimal argument that is exactly 1 + 2^-32. In
fact the same issue also appears for 1 + 2^-64 and 1 + 2^-96 as
arguments to strtof128 on 32-bit systems, and 1 + 2^-64 as an argument
to strtof128 on 64-bit systems. In FE_DOWNWARD or FE_TOWARDZERO mode,
the return value is also incorrect.
The problem is in the multiple-precision division logic used in the
case of dividing by a denominator that occupies at least three GMP
limbs. There was a comment "The division does not work if the upper
limb of the two-limb numerator is greater than the denominator.", but
in fact there were problems for the case of equality (that is, where
the high limbs are equal, offset by some multiple of the GMP limb
size) as well. In such cases, the code used "quot = ~(mp_limb_t) 0;"
(with subsequent correction if that is an overestimate), because
udiv_qrnnd does not support the case of equality, but it's possible
for the shifted numerator to be greater than or equal to the
denominator, in which case that is an underestimate. To avoid that,
this patch changes the ">" condition to ">=", meaning the first
division is done with a zero high word.
The tests added are all 1 + 2^-n for n from 1 to 113 except for those
that were already present in tst-strtod-round-data.
Tested for x86_64 and x86.
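As a rough illustration of the constraint (this is not the glibc code,
just the one-limb analogue with 32-bit limbs emulated in plain C): the
quotient of a two-limb value by a single limb fits in one limb only
when the high part is strictly smaller than the divisor, so an
all-ones estimate such as ~(mp_limb_t) 0 becomes an underestimate once
equality is allowed.

  #include <stdint.h>
  #include <stdio.h>

  /* Divide the two-limb value (hi:lo) by the single limb d, with
     32-bit limbs emulated via 64-bit arithmetic.  */
  static void
  div_2limb_by_1 (uint32_t hi, uint32_t lo, uint32_t d)
  {
    uint64_t n = ((uint64_t) hi << 32) | lo;
    uint64_t q = n / d;
    printf ("hi=%u d=%u -> quotient %llu (%s one limb)\n",
            (unsigned int) hi, (unsigned int) d,
            (unsigned long long) q,
            q <= UINT32_MAX ? "fits in" : "exceeds");
  }

  int
  main (void)
  {
    div_2limb_by_1 (2, 7, 5);   /* hi < d: quotient fits in a limb.  */
    div_2limb_by_1 (5, 7, 5);   /* hi == d: quotient >= 2^32, so an
                                   all-ones one-limb estimate (2^32 - 1
                                   here) is an underestimate.  */
    return 0;
  }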
The time argument is NULL in this case, and attempting to convert it
leads to a null pointer dereference.
This fixes commit d2e3b697da
("y2038: linux: Provide __settimeofday64 implementation").
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
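A minimal sketch of the idea behind the guard, with illustrative names
rather than the actual __settimeofday64 code:

  #include <stddef.h>
  #include <sys/time.h>

  static int
  set_time_guarded (const struct timeval *tv, const struct timezone *tz)
  {
    struct timeval copy;
    const struct timeval *arg = NULL;

    /* The caller may pass a NULL time argument, e.g. to set only the
       timezone, so convert it only when it is present.  */
    if (tv != NULL)
      {
        copy = *tv;   /* the 64-bit to 32-bit conversion happens here
                         inside glibc */
        arg = &copy;
      }
    return settimeofday (arg, tz);
  }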
On other platforms, RAND_MAX (which is the range of rand(3))
may differ from 2^31-1 (which is the range of random(3)).
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This patch updates the kernel version in the test tst-mman-consts.py
to 5.7. (There are no new constants covered by this test in 5.7 that
need any other header changes; there's a new MREMAP_DONTUNMAP, but
this test doesn't yet cover MREMAP_*.)
Tested with build-many-glibcs.py.
1. Add the directories to hold POWER10 files.
2. Add support to select POWER10 libraries based on AT_PLATFORM.
3. Let submachine=power10 be set automatically.
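For illustration, the AT_PLATFORM-based selection in item 2 keys off
the platform string reported in the auxiliary vector; a user-space
sketch of the same query (the real selection happens inside the
dynamic loader, not in user code):

  #include <stdio.h>
  #include <string.h>
  #include <sys/auxv.h>

  int
  main (void)
  {
    const char *platform = (const char *) getauxval (AT_PLATFORM);

    if (platform != NULL && strcmp (platform, "power10") == 0)
      puts ("power10: the POWER10 library directories would be preferred");
    else
      printf ("platform: %s\n", platform != NULL ? platform : "(unknown)");
    return 0;
  }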
* hurd/hurdselect.c: Include <sysdep-cancel.h>.
(_hurd_select): Surround call to __mach_msg with enabling async cancel.
* sysdeps/mach/hurd/accept4.c: Include <sysdep-cancel.h>.
(__libc_accept4): Surround call to __socket_accept with enabling async cancel,
and use HURD_DPORT_USE_CANCEL instead of HURD_DPORT_USE.
* sysdeps/mach/hurd/connect.c: Include <sysdep-cancel.h>.
(__connect): Surround call to __file_name_lookup and __socket_connect
with enabling async cancel, and use HURD_DPORT_USE_CANCEL instead of
HURD_DPORT_USE.
* sysdeps/mach/hurd/fdatasync.c: Include <sysdep-cancel.h>.
(fdatasync): Surround call to __file_sync with enabling async cancel, and use
HURD_DPORT_USE_CANCEL instead of HURD_DPORT_USE.
* sysdeps/mach/hurd/fsync.c: Include <sysdep-cancel.h>.
(fsync): Surround call to __file_sync with enabling async cancel, and use
HURD_DPORT_USE_CANCEL instead of HURD_DPORT_USE.
* sysdeps/mach/hurd/ioctl.c: Include <sysdep-cancel.h>.
(__ioctl): When request is TIOCDRAIN, surround call to send_rpc with enabling
async cancel, and use HURD_DPORT_USE_CANCEL instead of HURD_DPORT_USE.
* sysdeps/mach/hurd/msync.c: Include <sysdep-cancel.h>.
(msync): Surround call to __vm_object_sync with enabling async cancel.
* sysdeps/mach/hurd/sigsuspend.c: Include <sysdep-cancel.h>.
(__sigsuspend): Surround call to __mach_msg with enabling async cancel.
* sysdeps/mach/hurd/sigwait.c: Include <sysdep-cancel.h>.
(__sigwait): Surround wait code with enabling async cancel.
* sysdeps/mach/msync.c: Include <sysdep-cancel.h>.
(msync): Surround call to __vm_msync with enabling async cancel.
* sysdeps/mach/sleep.c: Include <sysdep-cancel.h>.
(__sleep): Surround call to __mach_msg with enabling async cancel.
* sysdeps/mach/usleep.c: Include <sysdep-cancel.h>.
(usleep): Surround call to __vm_msync with enabling async cancel.
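The "enabling async cancel" wording above means temporarily switching
to asynchronous cancellation around the blocking RPC (inside glibc
this is done with the <sysdep-cancel.h> macros); a public-API analogue
of the pattern, with a placeholder for the RPC call:

  #include <pthread.h>

  static void
  blocking_rpc_with_async_cancel (void)
  {
    int oldtype;
    /* Switch to asynchronous cancellation so a cancellation request
       arriving while the RPC blocks interrupts it immediately.  */
    pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, &oldtype);
    /* ... blocking RPC such as __mach_msg would go here ... */
    pthread_setcanceltype (oldtype, NULL);
  }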
and add _nocancel variant.
* sysdeps/mach/hurd/Makefile [io] (sysdep_routines): Add fcntl_nocancel.
* sysdeps/mach/hurd/fcntl.c [NOCANCEL]: Include <not-cancel.h>.
[!NOCANCEL]: Include <sysdep-cancel.h>.
(__libc_fcntl) [!NOCANCEL]: Surround __file_record_lock call with
enabling async cancel, and use HURD_FD_PORT_USE_CANCEL instead of
HURD_FD_PORT_USE.
* sysdeps/mach/hurd/fcntl_nocancel.c: New file, defines
__fcntl_nocancel by including fcntl.c.
* sysdeps/mach/hurd/not-cancel.h (__fcntl64_nocancel): Replace macro
with a declaration of __fcntl_nocancel with hidden proto, and make
__fcntl64_nocancel call __fcntl_nocancel.
and add _nocancel variant.
* sysdeps/mach/hurd/Makefile [io] (sysdep_routines): Add wait4_nocancel.
* sysdeps/mach/hurd/wait4.c: Include <sysdep-cancel.h>.
(__wait4): Surround __proc_wait with enabling async cancel, and use
__USEPORT_CANCEL instead of __USEPORT.
* sysdeps/mach/hurd/wait4_nocancel.c: New file, contains previous
implementation of __wait4.
* sysdeps/mach/hurd/not-cancel.h (__waitpid_nocancel): Replace macro
with a declaration of __wait4_nocancel with hidden proto, and make
__waitpid_nocancel call __wait4_nocancel.
HURD_*PORT_USE links the fd and port with a stack-stored structure, so
on thread cancellation we need to clean this up.
* hurd/fd-cleanup.c: New file.
* hurd/port-cleanup.c (_hurd_port_use_cleanup): New function.
* hurd/Makefile (routines): Add fd-cleanup.
* sysdeps/hurd/include/hurd.h (__USEPORT_CANCEL): New macro.
* sysdeps/hurd/include/hurd/fd.h (_hurd_fd_port_use_data): New
structure.
(_hurd_fd_port_use_cleanup): New prototype.
(HURD_DPORT_USE_CANCEL, HURD_FD_PORT_USE_CANCEL): New macros.
* sysdeps/hurd/include/hurd/port.h (_hurd_port_use_data): New structure.
(_hurd_port_use_cleanup): New prototype.
(HURD_PORT_USE_CANCEL): New macro.
* hurd/hurd/fd.h (HURD_FD_PORT_USE): Also refer to HURD_FD_PORT_USE_CANCEL.
* hurd/hurd.h (__USEPORT): Also refer to __USEPORT_CANCEL.
* hurd/hurd/port.h (HURD_PORT_USE): Also refer to HURD_PORT_USE_CANCEL.
* hurd/fd-read.c (_hurd_fd_read): Call HURD_FD_PORT_USE_CANCEL instead
of HURD_FD_PORT_USE.
* hurd/fd-write.c (_hurd_fd_write): Likewise.
* sysdeps/mach/hurd/send.c (__send): Call HURD_DPORT_USE_CANCEL instead
of HURD_DPORT_USE.
* sysdeps/mach/hurd/sendmsg.c (__libc_sendmsg): Likewise.
* sysdeps/mach/hurd/sendto.c (__sendto): Likewise.
* sysdeps/mach/hurd/recv.c (__recv): Likewise.
* sysdeps/mach/hurd/recvfrom.c (__recvfrom): Likewise.
* sysdeps/mach/hurd/recvmsg.c (__libc_recvmsg): Call __USEPORT_CANCEL
instead of __USEPORT, and HURD_DPORT_USE_CANCEL instead of
HURD_DPORT_USE.
This adds sysdeps/htl/libc-lock.h which augments sysdeps/mach/libc-lock.h with
the htl-aware cleanup handling. Otherwise inclusion of libc-lock.h
without libc-lockP.h would keep only the mach-aware handling.
This also fixes cleanup getting called when the binary is
statically-linked without libpthread.
* sysdeps/htl/libc-lockP.h (__libc_cleanup_region_start,
__libc_cleanup_end, __libc_cleanup_region_end,
__pthread_get_cleanup_stack): Move to...
* sysdeps/htl/libc-lock.h: ... new file.
(__libc_cleanup_region_start): Always set handler and arg.
(__libc_cleanup_end): Always call the cleanup handler.
(__libc_cleanup_push, __libc_cleanup_pop): New macros.
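The new __libc_cleanup_push/__libc_cleanup_pop macros are assumed here
to follow the shape of the public pthread cleanup API; a minimal
public-API analogue of the pattern (illustrative, not glibc-internal
code):

  #include <pthread.h>

  static void
  unlock_handler (void *arg)
  {
    pthread_mutex_unlock (arg);
  }

  static void
  locked_region (pthread_mutex_t *lock)
  {
    pthread_mutex_lock (lock);
    pthread_cleanup_push (unlock_handler, lock);
    /* ... code that may reach a cancellation point ... */
    pthread_cleanup_pop (1);   /* 1: also run the handler on normal exit.  */
  }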
These need only use __libc_ptf_call.
* sysdeps/htl/flockfile.c: Include <libc-lockP.h> instead of
<libc-lock.h>.
* sysdeps/htl/ftrylockfile.c: Include <libc-lockP.h> instead of
<errno.h>, <pthread.h> and <stdio-lock.h>.
* sysdeps/htl/funlockfile.c: Include <libc-lockP.h> instead of
<pthread.h> and <stdio-lock.h>.
Like hurd_thread_cancel does.
* sysdeps/mach/hurd/htl/pt-docancel.c: Include <hurd/signal.h>.
(__pthread_do_cancel): Lock target thread's critical_section_lock and ss
lock around thread mangling.
PF_UNIX was actually never intended to be passed as the protocol
parameter to socket() calls: it is a protocol family, not a protocol.
Linux happened to start accepting it during its 2.0 development, but
it should not have.
OpenBSD kernels accept it as well, but FreeBSD and NetBSD rightfully do not.
GNU/Hurd does not either.
* nptl/tst-cancel4-common.c (do_test): Pass 0 instead of PF_UNIX as
protocol.
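For illustration, the portable call passes 0 as the protocol and lets
the kernel pick; passing PF_UNIX there only works on some kernels by
accident:

  #include <sys/socket.h>
  #include <unistd.h>

  int
  main (void)
  {
    /* Portable: protocol 0 lets the kernel pick the protocol.  */
    int good = socket (AF_UNIX, SOCK_STREAM, 0);
    /* Accepted by Linux and OpenBSD only by accident of history;
       rejected by FreeBSD, NetBSD and GNU/Hurd.  */
    int bad = socket (AF_UNIX, SOCK_STREAM, PF_UNIX);

    if (good >= 0)
      close (good);
    if (bad >= 0)
      close (bad);
    return 0;
  }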
Intel Advanced Matrix Extensions (Intel AMX) is a new programming
paradigm consisting of two components: a set of 2-dimensional registers
(tiles) representing sub-arrays from a larger 2-dimensional memory image,
and accelerators able to operate on tiles. Intel AMX is an extensible
architecture. New accelerators can be added and the existing accelerator
may be enhanced to provide higher performance. The initial features are
AMX-BF16, AMX-TILE and AMX-INT8, which are usable only if the operating
system supports both XTILECFG state and XTILEDATA state.
Add AMX-BF16, AMX-TILE and AMX-INT8 support to HAS_CPU_FEATURE and
CPU_FEATURE_USABLE.
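A sketch of how the new bits could be queried, assuming the
<sys/platform/x86.h> interface of this development cycle and feature
macro names matching the names above (both are assumptions):

  #include <stdio.h>
  #include <sys/platform/x86.h>

  int
  main (void)
  {
    if (CPU_FEATURE_USABLE (AMX_TILE)
        && CPU_FEATURE_USABLE (AMX_INT8)
        && CPU_FEATURE_USABLE (AMX_BF16))
      puts ("Intel AMX tiles and INT8/BF16 accelerators are usable");
    else
      puts ("Intel AMX is not usable here");
    return 0;
  }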
It turned out that a 256b-mvc instruction which depends on the
result of a previous 256b-mvc instruction is counterproductive.
Therefore this patch adjusts the 256b-loop by storing the
first byte with stc and setting the remaining 255b with mvc.
Now the 255b-mvc instruction depends on the stc instruction.
This patch introduces an extra loop without pfd instructions
as it turned out that the pfd instructions are useful
for copies >=64KB but are counterproductive for smaller copies.
A user-provided stack should not be released nor madvised at thread
exit because it is owned by the user. If the memory is shared or
file-based then MADV_DONTNEED can have unwanted effects. With memory
tagging on aarch64 Linux the tags are dropped and thus pointers may be
invalidated.
Tested on aarch64-linux-gnu with MTE, it fixes
FAIL: nptl/tst-stack3
FAIL: nptl/tst-stack3-mem
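A minimal sketch of the rule, with illustrative names rather than the
nptl internals:

  #include <stdbool.h>
  #include <stddef.h>
  #include <sys/mman.h>

  /* Only stacks the library allocated itself are returned to the
     kernel at thread exit; memory handed in via pthread_attr_setstack
     is left untouched, since the user owns it (and MADV_DONTNEED
     would drop MTE tags or discard shared/file-backed contents).  */
  static void
  release_thread_stack (void *stack, size_t size, bool user_provided)
  {
    if (user_provided)
      return;
    (void) madvise (stack, size, MADV_DONTNEED);
  }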
* sysdeps/htl/sem-timedwait.c (struct cancel_ctx): Add cancel_wake
field.
(cancel_hook): When unblocking thread, set cancel_wake field to 1.
(__sem_timedwait_internal): Set cancel_wake field to 0 by default.
On cancellation exit, check whether we hold a token, to be put back.
By aligning its implementation with that of pthread_cond_wait.
* sysdeps/htl/sem-timedwait.c (cancel_ctx): New structure.
(cancel_hook): New function.
(__sem_timedwait_internal): Check for cancellation and register
cancellation hook that wakes the thread up, and check again for
cancellation on exit.
* nptl/tst-cancel13.c, nptl/tst-cancelx13.c: Move to...
* sysdeps/pthread/: ... here.
* nptl/Makefile: Move corresponding references and rules to...
* sysdeps/pthread/Makefile: ... here.
Since __pthread_exit does not return, we do not need to indent the
non-cancelled path.
* sysdeps/htl/pt-cond-timedwait.c (__pthread_cond_timedwait_internal):
Move cancelled path before non-cancelled path, to avoid "else"
indentation.
* nptl/tst-cancel25.c: Move to...
* sysdeps/pthread/tst-cancel25.c: ... here.
(tf2): Do not test for SIGCANCEL when it is not defined.
* nptl/Makefile: Move corresponding reference to...
* sysdeps/pthread/Makefile: ... here.
Linux commit ID ee988c11acf6f9464b7b44e9a091bf6afb3b3a49 reserved 2 new
bits in AT_HWCAP2:
- PPC_FEATURE2_ARCH_3_1 indicates the availability of the POWER ISA
3.1;
- PPC_FEATURE2_MMA indicates the availability of the Matrix-Multiply
Assist facility.
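A sketch of how the new bits can be tested from user code, assuming
the PPC_FEATURE2_* constants are exposed via the powerpc <bits/hwcap.h>
pulled in by <sys/auxv.h>:

  #include <stdio.h>
  #include <sys/auxv.h>

  int
  main (void)
  {
    unsigned long hwcap2 = getauxval (AT_HWCAP2);

    if (hwcap2 & PPC_FEATURE2_ARCH_3_1)
      puts ("POWER ISA 3.1 available");
    if (hwcap2 & PPC_FEATURE2_MMA)
      puts ("Matrix-Multiply Assist facility available");
    return 0;
  }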
Add support for MTE to strncmp. Regression tested with xcheck and benchmarked
with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1.
The existing implementation assumes that any access to the pages in which the
string resides is safe. This assumption is not true when MTE is enabled. This
patch updates the algorithm to ensure that accesses remain within the bounds
of an MTE tag (16-byte chunks) and improves overall performance.
Co-authored-by: Branislav Rankov <branislav.rankov@arm.com>
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
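A conceptual C sketch of the access pattern the MTE-safe routines
follow (the real implementations are AArch64 assembly; this is only an
illustration): block accesses are aligned to the 16-byte tag granule,
so a 16-byte read never spills into a granule the string's allocation
does not own.

  #include <stddef.h>
  #include <stdint.h>

  static size_t
  strlen_granule_safe (const char *s)
  {
    const char *p = s;

    /* Byte-wise until 16-byte alignment; each byte read lies inside
       the allocation.  */
    while (((uintptr_t) p & 15) != 0)
      {
        if (*p == '\0')
          return (size_t) (p - s);
        p++;
      }

    /* One aligned granule at a time; the inner loop stands in for a
       single 16-byte vector load in the real code.  */
    for (;;)
      {
        for (int i = 0; i < 16; i++)
          if (p[i] == '\0')
            return (size_t) (p - s) + (size_t) i;
        p += 16;
      }
  }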
Add support for MTE to strcmp. Regression tested with xcheck and benchmarked
with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1.
The existing implementation assumes that any access to the pages in which the
string resides is safe. This assumption is not true when MTE is enabled. This
patch updates the algorithm to ensure that accesses remain within the bounds
of an MTE tag (16-byte chunks) and improves overall performance.
Co-authored-by: Branislav Rankov <branislav.rankov@arm.com>
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
Add support for MTE to strrchr. Regression tested with xcheck and benchmarked
with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1.
The existing implementation assumes that any access to the pages in which the
string resides is safe. This assumption is not true when MTE is enabled. This
patch updates the algorithm to ensure that accesses remain within the bounds
of an MTE tag (16-byte chunks) and improves overall performance.
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
Add support for MTE to memrchr. Regression tested with xcheck and benchmarked
with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1.
The existing implementation assumes that any access to the pages in which the
string resides is safe. This assumption is not true when MTE is enabled. This
patch updates the algorithm to ensure that accesses remain within the bounds
of an MTE tag (16-byte chunks) and improves overall performance.
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
Add support for MTE to memchr. Regression tested with xcheck and benchmarked
with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1.
The existing implementation assumes that any access to the pages in which the
string resides is safe. This assumption is not true when MTE is enabled. This
patch updates the algorithm to ensure that accesses remain within the bounds
of an MTE tag (16-byte chunks) and improves overall performance.
Co-authored-by: Gabor Kertesz <gabor.kertesz@arm.com>
Add support for MTE to strcpy. Regression tested with xcheck and benchmarked
with glibc's benchtests on the Cortex-A53, Cortex-A72, and Neoverse N1.
The existing implementation assumes that any access to the pages in which the
string resides is safe. This assumption is not true when MTE is enabled. This
patch updates the algorithm to ensure that accesses remain within the bounds
of an MTE tag (16-byte chunks) and improves overall performance.
Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
1. Divide architecture features into the usable features and the preferred
features. The usable features are for correctness and can be exported in
a stable ABI. The preferred features are for performance and only for
glibc internal use.
2. Change struct cpu_features to
     struct cpu_features
     {
       struct cpu_features_basic basic;
       unsigned int *usable_p;
       struct cpuid_registers cpuid[COMMON_CPUID_INDEX_MAX];
       unsigned int usable[USABLE_FEATURE_INDEX_MAX];
       unsigned int preferred[PREFERRED_FEATURE_INDEX_MAX];
       ...
     };
   and initialize usable_p to point to the usable array so that
     struct cpu_features
     {
       struct cpu_features_basic basic;
       unsigned int *usable_p;
       struct cpuid_registers cpuid[COMMON_CPUID_INDEX_MAX];
     };
can be exported via a stable ABI. The cpuid and usable arrays can be
expanded with backward binary compatibility for both .o and .so files.
3. Add COMMON_CPUID_INDEX_7_ECX_1 for AVX512_BF16.
4. Detect ENQCMD, PKS, AVX512_VP2INTERSECT, MD_CLEAR, SERIALIZE, HYBRID,
TSXLDTRK, L1D_FLUSH, CORE_CAPABILITIES and AVX512_BF16.
5. Rename CAPABILITIES to ARCH_CAPABILITIES.
6. Check if AVX512_VP2INTERSECT, AVX512_BF16 and PKU are usable.
7. Update CPU feature detection test.
The -fno-math-errno flag is already added by default, and the minimum
GCC required to build glibc (6.2) makes -ffinite-math-only
superfluous.
Checked on aarch64-linux-gnu.
Checked with a build for riscv64-linux-gnu-rv64imac-lp64 (no
builtin support), riscv64-linux-gnu-rv64imafdc-lp64, and
riscv64-linux-gnu-rv64imafdc-lp64d.
The generic implementation is simplified by removing the
'optimization' for !_IEEE_FP_INEXACT (which handles neither inexact
nor some values correctly).
Checked on alpha-linux-gnu.