Commit 43c29487 tried to fix the vfork aliases in libpthread.so on MIPS
and SPARC, but failed to do it correctly, introducing an ABI change.
This patch does the remaining changes needed to align the MIPS and SPARC
vfork implementations with the other architectures. That way the
alpha version of pt-vfork.S works correctly for MIPS and SPARC. The
changes for alpha were done in 82aab97c.
Changelog:
* sysdeps/unix/sysv/linux/mips/vfork.S (__vfork): Rename into
__libc_vfork.
(__vfork) [IS_IN (libc)]: Remove alias.
(__libc_vfork) [IS_IN (libc)]: Define as an alias.
* sysdeps/unix/sysv/linux/sparc/sparc32/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc64/vfork.S: Likewise.
atomic_compare_and_exchange_bool_rel and
catomic_compare_and_exchange_bool_rel are removed and replaced with the
new C11-like atomic_compare_exchange_weak_release. The concurrent code
in nscd/cache.c has not been reviewed yet, so this patch does not add
detailed comments.
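For reference, a minimal standalone sketch of the C11-style interface,
written against standard <stdatomic.h> rather than glibc's internal
macros (which mirror these semantics):

#include <stdatomic.h>
#include <stdio.h>

int
main (void)
{
  atomic_int lock = 42;
  int expected = 42;

  /* Succeeds only if 'lock' still holds 'expected'; on failure,
     'expected' is updated with the value actually observed.  The weak
     form may fail spuriously, so real code typically retries in a
     loop.  Note that the old atomic_compare_and_exchange_bool_rel
     returned zero on success, so call sites have the opposite sense.  */
  if (atomic_compare_exchange_weak_explicit (&lock, &expected, 0,
                                             memory_order_release,
                                             memory_order_relaxed))
    puts ("exchanged: lock is now 0");
  else
    printf ("not exchanged: lock was %d\n", expected);
  return 0;
}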
* nscd/cache.c (cache_add): Use new C11-like atomic operation instead
of atomic_compare_and_exchange_bool_rel.
* nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_full): Likewise.
* include/atomic.h (atomic_compare_and_exchange_bool_rel,
catomic_compare_and_exchange_bool_rel): Remove.
* sysdeps/aarch64/atomic-machine.h
(atomic_compare_and_exchange_bool_rel): Likewise.
* sysdeps/alpha/atomic-machine.h
(atomic_compare_and_exchange_bool_rel): Likewise.
* sysdeps/arm/atomic-machine.h
(atomic_compare_and_exchange_bool_rel): Likewise.
* sysdeps/mips/atomic-machine.h
(atomic_compare_and_exchange_bool_rel): Likewise.
* sysdeps/tile/atomic-machine.h
(atomic_compare_and_exchange_bool_rel): Likewise.
The x86_64 and i386 versions of scalbl return sNaN for some cases of
sNaN input and are missing "invalid" exceptions for other cases. This
results from overly complicated code that either returns a NaN input,
or discards both inputs when one is NaN and loads a NaN from memory.
This patch fixes this by simplifying the code to add the arguments
when either one is NaN.
Tested for x86_64 and x86.
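The quieting idiom relied on here is the standard IEEE one: adding the
two arguments produces a quiet NaN and raises "invalid" when either
input is a signaling NaN.  A hypothetical C helper showing the shape of
the logic (the actual change is in the i386/x86_64 assembly):

#include <math.h>

static long double
scalbl_nan_path (long double x, long double y)
{
  if (isnan (x) || isnan (y))
    /* Quiet NaN result; raises "invalid" if either NaN is signaling.  */
    return x + y;
  /* ... the normal scalbl computation would follow here ...  */
  return x;
}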
[BZ #20296]
* sysdeps/i386/fpu/e_scalbl.S (__ieee754_scalbl): Add arguments
when either argument is a NaN.
* sysdeps/x86_64/fpu/e_scalbl.S (__ieee754_scalbl): Likewise.
* math/libm-test.inc (scalb_test_data): Add sNaN tests.
This patch adds tests of sNaN inputs to more functions to
libm-test.inc. This covers the remaining real functions except for
scalb, where there's a bug to fix, and hypot, pow, fmin and fmax, where
there are cases where a qNaN input does not result in a qNaN output
and so sNaN support according to TS 18661-1 is more of a new feature.
Tested for x86_64 and x86.
* math/libm-test.inc (snan_value_ld): New macro.
(isgreater_test_data): Add sNaN tests.
(isgreaterequal_test_data): Likewise.
(isless_test_data): Likewise.
(islessequal_test_data): Likewise.
(islessgreater_test_data): Likewise.
(isunordered_test_data): Likewise.
(nextafter_test_data): Likewise.
(nexttoward_test_data): Likewise.
(remainder_test_data): Likewise.
(remquo_test_data): Likewise.
(significand_test_data): Likewise.
* math/gen-libm-test.pl (%beautify): Add snan_value_ld.
getconf can do a runtime check for environment support in cases where
support for an environment is optional (_POSIX_V7_ILP32_OFF32 on
x86_64, for example).  Optional support is indicated by leaving the
_POSIX_V7_ILP32_OFF32 macro undefined, which results in getconf doing
an additional execve of _POSIX_V7_ILP32_OFF32 in the $GETCONF_DIR.
The default bits/environments.h however does not leave any environment
macros undefined, which means that no such additional execve is
needed. gcc trunk catches this as a build failure since it finds that
the code block inside switch(specs[i].num) is not reachable. Avoid
this error by not bothering about the additional exec (and looking in
specific environments) when all environments are defined.
Tested on aarch64.
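A sketch of the idea, using the standard V7 environment macros (the
actual check in posix/getconf.c covers every environment getconf knows
about, not just these):

#include <unistd.h>

#if defined _POSIX_V7_ILP32_OFF32 && defined _POSIX_V7_ILP32_OFFBIG \
    && defined _POSIX_V7_LP64_OFF64 && defined _POSIX_V7_LPBIG_OFFBIG
# define ALL_ENVIRONMENTS_DEFINED 1
#endif

#ifndef ALL_ENVIRONMENTS_DEFINED
/* Only in this case does main need the code path that execve's an
   environment-specific getconf from $GETCONF_DIR.  */
#endif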
* posix/getconf.c: Define ALL_ENVIRONMENTS_DEFINED if all
environment macros are defined.
(main): Avoid execve if ALL_ENVIRONMENTS_DEFINED is defined.
This commit puts all libio vtables in a dedicated, read-only ELF
section, so that they are consecutive in memory. Before any indirect
jump, the vtable pointer is checked against the section boundaries,
and the process is terminated if the vtable pointer does not fall into
the special ELF section.
To enable backwards compatibility, a special flag variable
(_IO_accept_foreign_vtables), protected by the pointer guard, avoids
process termination if libio stream object constructor functions have
been called earlier. Such constructor functions are called by the GCC
2.95 libstdc++ library, and this mechanism ensures compatibility with
old binaries. Existing callers inside glibc of these functions are
adjusted to call the original functions, not the wrappers which enable
vtable compatibility.
The compatibility mechanism is used to enable passing FILE * objects
across a static dlopen boundary, too.
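A minimal standalone sketch of the range-check idea, using a made-up
section name and the __start_/__stop_ symbols that GNU ld provides for
sections whose names are valid C identifiers (glibc's actual section,
symbol and helper names differ, and the real check also consults the
compatibility flag described above):

#include <stdint.h>
#include <stdlib.h>

extern char __start_demo_vtables[];
extern char __stop_demo_vtables[];

struct demo_vtable { void (*op) (void); };

static void op_impl (void) { }

static const struct demo_vtable vt
  __attribute__ ((section ("demo_vtables"), used)) = { op_impl };

static void
check_vtable (const struct demo_vtable *p)
{
  uintptr_t addr = (uintptr_t) p;
  if (addr < (uintptr_t) __start_demo_vtables
      || addr >= (uintptr_t) __stop_demo_vtables)
    abort ();   /* Pointer is outside the vtable section.  */
}

int
main (void)
{
  check_vtable (&vt);   /* Accepted: the object lies inside the section.  */
  return 0;
}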
If the requested size is zero, realloc returns NULL, but the
deallocation is still successful, unless the pointer is also NULL, in
which case realloc behaves as malloc (0).
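A small demonstration of that behaviour:

#include <stdio.h>
#include <stdlib.h>

int
main (void)
{
  void *p = malloc (16);
  void *q = realloc (p, 0);     /* frees p, returns NULL */
  void *r = realloc (NULL, 0);  /* nothing to free: behaves as malloc (0) */

  printf ("realloc (p, 0)    -> %p\n", q);
  printf ("realloc (NULL, 0) -> %p\n", r);

  free (q);   /* q is NULL, so this is a no-op */
  free (r);   /* r must be freed like any malloc (0) result */
  return 0;
}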
__attribute__ ((used)) means that the function has to be
emitted in assembly because it is referenced in ways the
compiler cannot detect (such as asm statements, or some
post-processing on the generated assembly).
The unused attribute needs to come first, otherwise it is
applied to the return type and not the function definition.
The i386 implementations of nearbyint functions, and x86_64
nearbyintl, contain code to mask the "inexact" exception. However,
the fnstenv instruction has the effect of masking all exceptions, so
this masking code has been redundant since fnstenv was added to those
implementations (by commit 846d9a4a3acdb4939ca7bf6aed48f9f6f26911be;
commit 71d1b0166b added the test
math/test-nearbyint-except-2.c that verifies these functions do work
when called with "inexact" traps enabled); this patch removes the
redundant code.
Tested for x86_64 and x86.
* sysdeps/i386/fpu/s_nearbyint.S (__nearbyint): Do not mask
"inexact" exceptions after fnstenv.
* sysdeps/i386/fpu/s_nearbyintf.S (__nearbyintf): Likewise.
* sysdeps/i386/fpu/s_nearbyintl.S (__nearbyintl): Likewise.
* sysdeps/x86_64/fpu/s_nearbyintl.S (__nearbyintl): Likewise.
This file was added to sysdeps/generic/bits in 2012. This appears to
have been an oversight, as the entire sysdeps/generic/bits directory was
moved to the top level in 2005. Accordingly the generic bits/hwcap.h
belongs there too.
* sysdeps/generic/bits/hwcap.h: Moved to ...
* bits/hwcap.h: Here.
Before this change, the while loop in reused_arena which avoids
returning a corrupt arena would never execute its body if the selected
arena were not corrupt. As a result, result == begin after the loop,
and the function returns NULL, triggering fallback to mmap.
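A standalone sketch of the flawed pattern (illustrative types and
names, not the actual malloc/arena.c code):

#include <stdbool.h>
#include <stdio.h>

struct arena { bool corrupt; struct arena *next; };

static struct arena *
pick_arena (struct arena *result)
{
  struct arena *begin = result;

  /* Skip corrupt arenas, stopping if we wrap around the list.  */
  while (result->corrupt)
    {
      result = result->next;
      if (result == begin)
        break;
    }

  /* Intended to mean "every arena is corrupt", but if the very first
     arena is healthy the loop body never runs, so result == begin
     here as well and a usable arena is wrongly rejected.  */
  if (result == begin)
    return NULL;

  return result;
}

int
main (void)
{
  struct arena a = { .corrupt = false };
  a.next = &a;   /* a single, healthy arena */
  puts (pick_arena (&a) ? "arena reused" : "NULL: fallback to mmap");
  return 0;
}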
This patch fixes the p{readv,writev}{64} consolidation implementation
from commits 4e77815 and af5fdf5.  Unlike the pread/pwrite
implementation, the preadv/pwritev implementation does not require
__ALIGNMENT_ARG, because the kernel syscall prototypes take the high
and low parts of the off_t directly where that is needed (whereas for
pread/pwrite the architecture ABI for passing 64-bit values must be
taken into account when passing the arguments).
It also adds some basic tests for preadv/pwritev.
Tested on x86_64, i686, and armhf.
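A sketch of what a LO_HI_LONG-style macro does for a 32-bit ABI (the
real macro added to sysdeps/unix/sysv/linux/sysdep.h also has to cover
64-bit ABIs, where no splitting is needed): it expands a 64-bit offset
into the low and high long arguments that the kernel's preadv/pwritev
prototypes take, low part first.

#include <stdint.h>
#include <stdio.h>

#define LO_HI_LONG(val)                         \
  (long) ((uint64_t) (val) & 0xffffffffu),      \
  (long) (((uint64_t) (val)) >> 32)

int
main (void)
{
  long args[] = { LO_HI_LONG (0x123456789aULL) };
  printf ("low=%#lx high=%#lx\n", args[0], args[1]);
  return 0;
}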
* misc/Makefile (tests): Add tst-preadvwritev and tst-preadvwritev64.
* misc/tst-preadvwritev.c: New file.
* misc/tst-preadvwritev64.c: Likewise.
* sysdeps/unix/sysv/linux/preadv.c (preadv): Remove SYSCALL_LL{64}
usage.
* sysdeps/unix/sysv/linux/preadv64.c (preadv64): Likewise.
* sysdeps/unix/sysv/linux/pwritev.c (pwritev): Likewise.
* sysdeps/unix/sysv/linux/pwritev64.c (pwritev64): Likewise.
* sysdeps/unix/sysv/linux/sysdep.h (LO_HI_LONG): New macro.
This adds an AArch64 rawmemchr: strlen is the fastest way to search
for '\0', so use it for that case; otherwise use memchr with an
infinite size.  This is 3x faster on benchtests for large sizes.
Passes GLIBC tests.
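A C sketch of that dispatch (the actual implementation is the assembly
file sysdeps/aarch64/rawmemchr.S):

#include <string.h>

void *
rawmemchr_sketch (const void *s, int c)
{
  if (c == 0)
    return (void *) ((const char *) s + strlen (s));
  /* No bound is known, so search with an effectively infinite size.  */
  return memchr (s, c, (size_t) -1);
}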
* sysdeps/aarch64/rawmemchr.S (__rawmemchr): New file.
* sysdeps/aarch64/strlen.S (strlen): Rename to __strlen to avoid PLT.
This is an optimized memcpy/memmove for AArch64.  Copies are split into
3 main cases: small copies of up to 16 bytes, medium copies of 17..96 bytes which are
fully unrolled. Large copies of more than 96 bytes align the destination and
use an unrolled loop processing 64 bytes per iteration. In order to share code
with memmove, small and medium copies read all data before writing, allowing
any kind of overlap. All memmoves except for the large backwards case fall
into memcpy for optimal performance. On a random copy test memcpy/memmove are
40% faster on Cortex-A57 and 28% on Cortex-A53.
* sysdeps/aarch64/memcpy.S (memcpy):
Rewrite of optimized memcpy and memmove.
* sysdeps/aarch64/memmove.S (memmove): Remove
memmove code (merged into memcpy.S).
With recent binutils versions the GNU libc fails to build on at least
MIPS and SPARC, with this kind of error:
/home/aurel32/glibc/glibc-build/nptl/libpthread.so:(*IND*+0x0): multiple definition of `vfork@GLIBC_2.0'
/home/aurel32/glibc/glibc-build/nptl/libpthread.so::(.text+0xee50): first defined here
It appears that on these architectures pt-vfork.S includes vfork.S
(through the alpha version of pt-vfork.S) and that the __vfork aliases
are not conditionalized on IS_IN (libc) like on other architectures.
Therefore the aliases are also wrongly included in libpthread.so.
Fix this by properly conditionalizing the aliases like on other
architectures.
Changelog:
* sysdeps/unix/sysv/linux/mips/vfork.S (__vfork): Conditionalize
hidden_def, weak_alias and strong_alias on [IS_IN (libc)].
* sysdeps/unix/sysv/linux/sparc/sparc32/vfork.S: Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc64/vfork.S: Likewise.
TS 18661 adds nextup and nextdown functions alongside nextafter, to
provide equivalent support for float128.  This patch adds nextupl, nextup,
nextupf, nextdownl, nextdown and nextdownf to libm ahead of float128 support.
The nextup functions return the next representable value in the direction of
positive infinity and the nextdown functions return the next representable
value in the direction of negative infinity. These are currently enabled
as GNU extensions.
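Example results, assuming IEEE binary64 double (the functions are
declared in <math.h> under _GNU_SOURCE):

#define _GNU_SOURCE
#include <math.h>
#include <stdio.h>

int
main (void)
{
  printf ("%a\n", nextup (1.0));        /* 0x1.0000000000001p+0 */
  printf ("%a\n", nextdown (1.0));      /* 0x1.fffffffffffffp-1 */
  printf ("%a\n", nextup (0.0));        /* smallest positive subnormal */
  printf ("%a\n", nextdown (INFINITY)); /* DBL_MAX */
  return 0;
}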
fdim suffers from double rounding on i386 because subtracting two
double values can produce an inexact long double value exactly half
way between two double values. This patch fixes this by creating an
i386-specific version of fdim (written in C and based on the generic
version, unlike the previous .S version) which sets the x87 precision control
to double precision for the subtraction and then restores it
afterwards. As noted in the comment added, there are no issues of
double rounding for subnormals (a case that setting precision control
does not address) because subtraction cannot produce an inexact result
in the subnormal range.
Tested for x86_64 and x86.
[BZ #20255]
* sysdeps/i386/fpu/s_fdim.c: New file. Based on math/s_fdim.c.
* math/libm-test.inc (fdim_test_data): Add another test.
Some architectures have their own versions of fdim functions, which
are missing errno setting (bug 6796) and may also return sNaN instead
of qNaN for sNaN input, in the case of the x86 / x86_64 long double
versions (bug 20256).
These versions are not actually doing anything that a compiler
couldn't generate, just straightforward comparisons / arithmetic (and,
in the x86 / x86_64 case, testing for NaNs with fxam, which isn't
actually needed once you use an unordered comparison and let the NaNs
pass through the same subtraction as non-NaN inputs). This patch
removes the x86 / x86_64 / powerpc versions, so that those
architectures use the generic C versions, which correctly handle
setting errno and deal properly with sNaN inputs. This seems better
than dealing with setting errno in lots of .S versions.
The i386 versions also return results with excess range and precision,
which is not appropriate for a function exactly defined by reference
to IEEE operations. For errno setting to work correctly on overflow,
it's necessary to remove excess range with math_narrow_eval, which
this patch duly does in the float and double versions so that the
tests can reliably pass on x86. For float, this avoids any double
rounding issues as the long double precision is more than twice that
of float. For double, double rounding issues will need to be
addressed separately, so this patch does not fully fix bug 20255.
Tested for x86_64, x86 and powerpc.
[BZ #6796]
[BZ #20255]
[BZ #20256]
* math/s_fdim.c: Include <math_private.h>.
(__fdim): Use math_narrow_eval on result.
* math/s_fdimf.c: Include <math_private.h>.
(__fdimf): Use math_narrow_eval on result.
* sysdeps/i386/fpu/s_fdim.S: Remove file.
* sysdeps/i386/fpu/s_fdimf.S: Likewise.
* sysdeps/i386/fpu/s_fdiml.S: Likewise.
* sysdeps/i386/i686/fpu/s_fdim.S: Likewise.
* sysdeps/i386/i686/fpu/s_fdimf.S: Likewise.
* sysdeps/i386/i686/fpu/s_fdiml.S: Likewise.
* sysdeps/powerpc/fpu/s_fdim.c: Likewise.
* sysdeps/powerpc/fpu/s_fdimf.c: Likewise.
* sysdeps/powerpc/powerpc32/fpu/s_fdim.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_fdim.c: Likewise.
* sysdeps/x86_64/fpu/s_fdiml.S: Likewise.
* math/libm-test.inc (fdim_test_data): Expect errno setting on
overflow. Add sNaN tests.
The generic fdim implementations have unnecessarily complicated code,
using fpclassify to determine whether the arguments are NaNs,
subtracting NaNs if so and otherwise subtracting the non-NaN arguments
if not (x <= y), then using fpclassify on the result to see if it is
infinite.
This patch simplifies the code. Instead of handling NaNs separately,
it suffices to use an unordered comparison with islessequal (x, y) to
determine whether to return zero, and otherwise NaNs can go through
the same subtraction as non-NaN arguments; no explicit tests for NaN
are needed at all. Then, isinf instead of fpclassify can be used to
determine whether to set errno (in the normal non-overflow case, only
one classification will need to occur, unlike the three in the
previous code, of which two occurred even if returning zero, because
the result will not be infinite in the normal case).
The resulting logic is essentially the same as that in the powerpc
version, except that the powerpc version is missing errno setting and
uses <= not islessequal, so relying on
<https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58684>, the GCC bug that
unordered comparison instructions are wrongly used on powerpc for
ordered comparisons.
The compiled code for fdim and fdimf on x86_64 is less than half the
size of the previous code.
Tested for x86_64.
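The resulting shape, sketched with standard C names (the real
math/s_fdim.c uses glibc-internal helpers such as math_narrow_eval and
__set_errno on top of this):

#include <errno.h>
#include <math.h>

double
fdim_sketch (double x, double y)
{
  /* Unordered comparison: false for NaN operands, so NaNs simply fall
     through to the subtraction below and need no special-casing.  */
  if (islessequal (x, y))
    return 0.0;

  double r = x - y;
  /* Only overflow needs a classification; finite operands can only
     produce an infinite result by overflowing.  */
  if (isinf (r) && !isinf (x) && !isinf (y))
    errno = ERANGE;

  return r;
}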
* math/s_fdim.c (__fdim): Use islessequal and isinf instead of
fpclassify.
* math/s_fdimf.c (__fdimf): Likewise.
* math/s_fdiml.c (__fdiml): Likewise.
This implementation utilizes vectors to improve performance compared
to the current byte-by-byte implementation for POWER7.  The performance
improvement is up to 4x.  This patch is tested on powerpc64 and
powerpc64le.
The dbl-64 implementation of atan2, passed arguments (sNaN, qNaN),
fails to raise the "invalid" exception. This patch fixes it to add
both arguments, rather than just adding the second argument to itself,
in the case where the second argument is a NaN (which is checked for
before checking for the first argument being a NaN). sNaN tests for
atan2 are added, along with some qNaN tests I noticed were missing but
should have been there by analogy with other tests present.
Tested for x86_64 and x86.
[BZ #20252]
* sysdeps/ieee754/dbl-64/e_atan2.c (__ieee754_atan2): Add both
arguments when second argument is a NaN.
* math/libm-test.inc (atan2_test_data): Add sNaN tests and more
qNaN tests.
Various implementations of frexp functions return sNaN for sNaN
input. This patch fixes them to add such arguments to themselves so
that qNaN is returned.
Tested for x86_64, x86, mips64 and powerpc.
[BZ #20250]
* sysdeps/i386/fpu/s_frexpl.S (__frexpl): Add non-finite input to
itself.
* sysdeps/ieee754/dbl-64/s_frexp.c (__frexp): Add non-finite or
zero input to itself.
* sysdeps/ieee754/dbl-64/wordsize-64/s_frexp.c (__frexp):
Likewise.
* sysdeps/ieee754/flt-32/s_frexpf.c (__frexpf): Likewise.
* sysdeps/ieee754/ldbl-128/s_frexpl.c (__frexpl): Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_frexpl.c (__frexpl): Likewise.
* sysdeps/ieee754/ldbl-96/s_frexpl.c (__frexpl): Likewise.
* math/libm-test.inc (frexp_test_data): Add sNaN tests.
This patch adds cancellation tests for both sendmmsg and recvmmsg
syscalls.  Since the syscalls may be unavailable in some system
configurations (x86_64/i686 on older kernels and non-Linux platforms),
they are added as two independent tests that report as unsupported if
the syscall is not present.
Both new tests reuse the existing tst-cancel4.c code, which has been
moved to common tst-cancel4-common.{c,h} files.
Tested on x86_64 and i686.
* nptl/Makefile (tests): Add tst-cancel4_1 and tst-cancel4_2.
* nptl/tst-cancel4-common.c: New file.
* nptl/tst-cancel4-common.h: Likewise.
* nptl/tst-cancel4.c: Move common definitions to
tst-cancel4-common.{c,h} file.
* nptl/tst-cancel4_1.c: New test.
* nptl/tst-cancel4_2.c: New test.
Currently, printf needs more stack space than what is available with
SIGSTKSZ.  This commit uses the write system call directly instead.
Also use sig_atomic_t for the “pass” variable (for general
correctness), and restore signal handlers to their defaults, to avoid
masking crashes.
This patch removes __ASSUME_FUTEX_LOCK_PI usage and assumes that the
kernel will correctly report whether or not it supports
futex_atomic_cmpxchg_inatomic.
Current PI mutex code already has runtime support: it calls
prio_inherit_missing and returns ENOTSUP if the futex operation it
issues at initialization (a FUTEX_UNLOCK_PI) fails.
Also, current minimum supported kernel (v3.2) will return ENOSYS if
futex_atomic_cmpxchg_inatomic is not supported in the system:
kernel/futex.c:
long do_futex(u32 __user *uaddr, int op, u32 val, ktime_t *timeout,
              u32 __user *uaddr2, u32 val2, u32 val3)
{
        int ret = -ENOSYS, cmd = op & FUTEX_CMD_MASK;
[...]
        case FUTEX_UNLOCK_PI:
                if (futex_cmpxchg_enabled)
                        ret = futex_unlock_pi(uaddr, flags);
[...]
        return ret;
}
The futex_cmpxchg_enabled is initialized by calling cmpxchg_futex_value_locked,
which calls futex_atomic_cmpxchg_inatomic.
For ARM futex_atomic_cmpxchg_inatomic will be either defined (if both
CONFIG_CPU_USE_DOMAINS and CONFIG_SMP are not defined) or use the
default generic implementation that returns ENOSYS.
For m68k it uses the default generic implementation.
For mips, futex_atomic_cmpxchg_inatomic will return ENOSYS if the cpu has
no 'cpu_has_llsc' support (defined by each chip supported inside the kernel).
For sparc, 32-bit kernel will just use default generic implementation,
while 64-bit kernel has support.
Tested on ARM (v3.8 kernel) and x86_64.
* nptl/pthread_mutex_init.c [__ASSUME_FUTEX_LOCK_PI]
(prio_inherit_missing): Remove define.
* sysdeps/unix/sysv/linux/arm/kernel-features.h
(__ASSUME_FUTEX_LOCK_PI): Likewise.
* sysdeps/unix/sysv/linux/kernel-features.h (__ASSUME_FUTEX_LOCK_PI):
Likewise.
* sysdeps/unix/sysv/linux/m68k/kernel-features.h
(__ASSUME_FUTEX_LOCK_PI): Likewise.
* sysdeps/unix/sysv/linux/mips/kernel-features.h
(__ASSUME_FUTEX_LOCK_PI): Likewise.
* sysdeps/unix/sysv/linux/sparc/kernel-features.h
(__ASSUME_FUTEX_LOCK_PI): Likewise.
When get*ent is called without a preceding set*ent, we need
to set the initial iteration position in get*ent.
Reproducer: Add “services: db files” to /etc/nsswitch.conf, then run
“perl -e getservent”. It will segfault before this change, and exit
silently after it.