* sysdeps/unix/sysv/linux/s390/lowlevellock.h: Include
<sysdeps/nptl/lowlevellock.h> and remove macros and
functions that are now defined there.
(SYS_futex): Remove.
(lll_compare_and_swap): Remove.
* sysdeps/s390/bits/atomic.h (atomic_exchange_acq): Define.
This patch fixes bug 15319, missing underflows from atan / atan2 when
the result of atan is very close to its small argument (or that of
atan2 is very close to the ratio of its arguments, which may be an
exact division).
The usual approach of doing an underflowing computation if the
computed result is subnormal is followed. For 32-bit x86, there are
extra complications: the inline __ieee754_atan2 in bits/mathinline.h
needs to be disabled for float and double because other libm functions
using it generally rely on getting proper underflow exceptions from
it, while the out-of-line functions have to remove excess range and
precision from the underflowing result so as to return an exact 0 in
the case where errno should be set for underflow to 0. (The failures
I saw without that are similar to those Carlos reported for other
functions, where I haven't seen a response to
<https://sourceware.org/ml/libc-alpha/2015-01/msg00485.html>
confirming if my diagnosis is correct. Arguably all libm functions
with float and double returns should remove excess range and
precision, but that's a separate matter.)
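As an illustration only, the usual pattern looks roughly like the
following sketch (hypothetical helper and names, not the actual glibc
code):

  #include <float.h>
  #include <math.h>

  /* Force an underflow exception when the computed result is
     subnormal.  */
  static double
  force_underflow_if_subnormal (double ret)
  {
    if (fabs (ret) < DBL_MIN)
      {
        /* RET * RET underflows (and is inexact) whenever RET is a
           nonzero subnormal; the volatile stops the compiler from
           discarding the computation.  */
        volatile double tiny = ret * ret;
        (void) tiny;
      }
    return ret;
  }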
The x86_64 long double case reported in a comment in bug 15319 is not
a bug (it's an argument of LDBL_MIN, and x86_64 is an after-rounding
architecture so the correct IEEE result is not to raise underflow in
the given rounding mode, in addition to treating the result as an
exact LDBL_MIN being within the newly clarified documentation of
accuracy goals). I'm presuming that the fpatan instruction can be
trusted to raise appropriate exceptions when the (long double) result
underflows (after rounding) and so no changes are needed for x86 /
x86_64 long double functions here; empirically this is the case for
the cases covered in the testsuite, on my system.
Tested for x86_64, x86, powerpc and mips64. Only 32-bit x86 needs
ulps updates (for the changes to inlines meaning some functions no
longer get excess precision from their __ieee754_atan2* calls).
[BZ #15319]
* sysdeps/i386/fpu/e_atan2.S (dbl_min): New object.
(MO): New macro.
(__ieee754_atan2): For results with small absolute value, force
underflow exception and remove excess range and precision from
return value.
* sysdeps/i386/fpu/e_atan2f.S (flt_min): New object.
(MO): New macro.
(__ieee754_atan2f): For results with small absolute value, force
underflow exception and remove excess range and precision from
return value.
* sysdeps/i386/fpu/s_atan.S (dbl_min): New object.
(MO): New macro.
(__atan): For results with small absolute value, force underflow
exception and remove excess range and precision from return value.
* sysdeps/i386/fpu/s_atanf.S (flt_min): New object.
(MO): New macro.
(__atanf): For results with small absolute value, force underflow
exception and remove excess range and precision from return value.
* sysdeps/ieee754/dbl-64/e_atan2.c: Include <float.h> and
<math.h>.
(__ieee754_atan2): Force underflow exception for results with
small absolute value.
* sysdeps/ieee754/dbl-64/s_atan.c: Include <float.h> and
<math_private.h>.
(atan): Force underflow exception for results with small absolute
value.
* sysdeps/ieee754/flt-32/s_atanf.c: Include <float.h>.
(__atanf): Force underflow exception for results with small
absolute value.
* sysdeps/ieee754/ldbl-128/s_atanl.c: Include <float.h> and
<math.h>.
(__atanl): Force underflow exception for results with small
absolute value.
* sysdeps/ieee754/ldbl-128ibm/s_atanl.c: Include <float.h>.
(__atanl): Force underflow exception for results with small
absolute value.
* sysdeps/x86/fpu/bits/mathinline.h
[!__SSE2_MATH__ && !__x86_64__ && __LIBC_INTERNAL_MATH_INLINES]
(__ieee754_atan2): Only define inline for long double.
* sysdeps/x86_64/fpu/multiarch/e_atan2.c
[HAVE_FMA4_SUPPORT || HAVE_AVX_SUPPORT]: Include <math.h>.
* math/auto-libm-test-in: Do not mark underflow exceptions as
possibly missing for bug 15319. Add more tests of atan2.
* math/auto-libm-test-out: Regenerated.
* math/libm-test.inc (casin_test_data): Do not mark underflow
exceptions as possibly missing for bug 15319.
(casinh_test_data): Likewise.
* sysdeps/i386/fpu/libm-test-ulps: Update.
posix_spawn (a standard POSIX function) brings in a use of getrlimit64
(not a standard POSIX function). This patch fixes this by using
__getrlimit64 and making getrlimit64 a weak alias.
This is more complicated than some such changes because of files that
define getrlimit64 in their own way using symbol versioning after
including the main sysdeps/unix/sysv/linux/getrlimit64.c with a
getrlimit macro defined. There are various existing patterns for such
cases in glibc; the one I've used here is that a getrlimit64 macro
disables the weak_alias / libc_hidden_weak calls, leaving it to the
including file to define the getrlimit64 name in whatever way is
appropriate.
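As an illustrative sketch (simplified, not the exact glibc source), the
macro-controlled aliasing described above looks like this:

  int
  __getrlimit64 (int resource, struct rlimit64 *rlimits)
  {
    return 0;   /* placeholder for the real system call */
  }
  /* A file that includes this one with a getrlimit64 macro defined
     takes over providing the public name itself.  */
  #ifndef getrlimit64
  weak_alias (__getrlimit64, getrlimit64)
  libc_hidden_weak (getrlimit64)
  #endif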
Tested for x86_64 and x86 that installed stripped shared libraries are
unchanged by this patch.
[BZ #17991]
* include/sys/resource.h (__getrlimit64): Declare. Use
libc_hidden_proto.
* resource/getrlimit64.c (getrlimit64): Rename to __getrlimit64
and define as weak alias of __getrlimit64. Use libc_hidden_weak.
* sysdeps/posix/spawni.c (__spawni): Call __getrlimit64 instead of
getrlimit64.
* sysdeps/unix/sysv/linux/getrlimit64.c (getrlimit64): Rename to
__getrlimit64.
[!getrlimit64] (getrlimit64): Define as weak alias of
__getrlimit64. Use libc_hidden_weak.
* sysdeps/unix/sysv/linux/i386/getrlimit64.c (getrlimit64): Define
using __getrlimit64 not __new_getrlimit64.
(__GI_getrlimit64): Likewise.
* sysdeps/unix/sysv/linux/mips/getrlimit64.c (getrlimit64):
Likewise.
(__GI_getrlimit64): Likewise.
(__old_getrlimit64): Use __getrlimit64 not __new_getrlimit64.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/syscalls.list
(getrlimit): Add __getrlimit64 alias.
* sysdeps/unix/sysv/linux/wordsize-64/syscalls.list (getrlimit):
Likewise.
* conform/Makefile (test-xfail-XOPEN2K/spawn.h/linknamespace):
Remove variable.
(test-xfail-POSIX2008/spawn.h/linknamespace): Likewise.
(test-xfail-XOPEN2K8/spawn.h/linknamespace): Likewise.
ia64 seems to use the same implementation of low-level locks as the
generic Linux lowlevellock.h. The futex syscalls are somewhat
different, but Roland thought it shouldn't matter. Note that the futex
calls are on the slow path always (except for PI mutexes).
Removing the custom low-level lock implementation will make further
refactoring easier, for example adding proper error checking to futex
operations.
Various remquo implementations produce a zero remainder with the wrong
sign (a zero remainder should always have the sign of the first
argument, as specified in IEEE 754) in round-downward mode, resulting
from the sign of 0 - 0. This patch checks for zero results and fixes
their sign accordingly.
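Conceptually the fix is (hypothetical variable names, not the exact
source):

  /* A zero remainder must carry the sign of the original first
     argument, not the sign that falls out of the final subtraction
     (0 - 0 is -0 in round-downward mode).  */
  if (result == 0.0)
    result = copysign (0.0, original_x);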
Tested for x86_64, x86, mips64 and powerpc.
[BZ #17987]
* sysdeps/ieee754/dbl-64/s_remquo.c (__remquo): Ensure sign of
zero result does not depend on the sign resulting from
subtraction.
* sysdeps/ieee754/dbl-64/wordsize-64/s_remquo.c (__remquo):
Likewise.
* sysdeps/ieee754/flt-32/s_remquof.c (__remquof): Likewise.
* sysdeps/ieee754/ldbl-128/s_remquol.c (__remquol): Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_remquol.c (__remquol): Likewise.
* sysdeps/ieee754/ldbl-96/s_remquol.c (__remquol): Likewise.
* math/libm-test.inc (remquo_test_data): Add more tests.
Various remquo implementations, when computing the last three bits of
the quotient, have spurious overflows when 4 times the second argument
to remquo overflows. These overflows can in turn cause bad results in
rounding modes where that overflow results in a finite value. This
patch adds tests to avoid the problem multiplications in cases where
they would overflow, similar to those that control an earlier
multiplication by 8.
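A conceptual sketch of the guard (hypothetical names; the real code
works on the extracted high words):

  double ay = fabs (y);
  /* Only form 4 * ay when the product cannot overflow; when ay is
     above DBL_MAX / 4 (an exact constant), the finite x is necessarily
     below 4 * ay, so the branch can simply be skipped.  */
  if (ay <= DBL_MAX / 4 && x >= 4 * ay)
    {
      x -= 4 * ay;
      cquo += 4;
    }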
Tested for x86_64, x86, mips64 and powerpc.
[BZ #17978]
* sysdeps/ieee754/dbl-64/s_remquo.c (__remquo): Do not form
products 4 * y and 2 * y where those would overflow.
* sysdeps/ieee754/dbl-64/wordsize-64/s_remquo.c (__remquo):
Likewise.
* sysdeps/ieee754/flt-32/s_remquof.c (__remquof): Likewise.
* sysdeps/ieee754/ldbl-128/s_remquol.c (__remquol): Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_remquol.c (__remquol): Likewise.
* sysdeps/ieee754/ldbl-96/s_remquol.c (__remquol): Likewise.
* math/libm-test.inc (remquo_test_data): Add more tests.
I see an error
../sysdeps/mips/memcpy.S:209:68: error: "_ABIO64" is not defined [-Werror=undef]
#if defined(_MIPS_SIM) && ((_MIPS_SIM == _ABIO32) || (_MIPS_SIM == _ABIO64))
^
cc1: some warnings being treated as errors
in MIPS builds. This patch arranges for _ABIO64 to be defined with
the same value as GCC uses when building for O64 (the ABI itself isn't
supported by glibc, but defining the macro seems the simplest way of
avoiding the error in code that may be shared with other C libraries).
* sysdeps/mips/sgidefs.h [!_ABIO64] (_ABIO64): New macro.
I see an error
../sysdeps/mips/strcmp.S:25:7: error: "_COMPILING_NEWLIB" is not defined [-Werror=undef]
#elif _COMPILING_NEWLIB
^
cc1: some warnings being treated as errors
in MIPS builds. (This is with GCC 4.9; it's possible that the DR#412
change in GCC 5 - see
<https://gcc.gnu.org/bugzilla/show_bug.cgi?id=60570> - means that
-Wundef diagnostics no longer occur for #elif conditions where a
previous group's condition was true, just as with other errors there.)
This patch duly adjusts the conditionals to test whether
_COMPILING_NEWLIB is defined.
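In outline the change is:

  /* Before (triggers -Wundef when _COMPILING_NEWLIB is not defined): */
  #elif _COMPILING_NEWLIB
  /* After: */
  #elif defined _COMPILING_NEWLIB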
* sysdeps/mips/memcpy.S [_COMPILING_NEWLIB]: Change condition to
[defined _COMPILING_NEWLIB].
* sysdeps/mips/memset.S [_COMPILING_NEWLIB]: Likewise.
* sysdeps/mips/strcmp.S [_COMPILING_NEWLIB]: Likewise.
I see an error
In file included from ../sysdeps/mips/include/sys/asm.h:20:0,
from ../sysdeps/mips/start.S:39:
../sysdeps/mips/sys/asm.h:421:5: error: "__mips_isa_rev" is not defined [-Werror=undef]
#if __mips_isa_rev < 6
^
cc1: some warnings being treated as errors
in MIPS builds. As sys/asm.h is an installed header, it seems better
to test for !defined __mips_isa_rev here, instead of defining it to 0
as done in sysdeps/unix/mips/sysdep.h, to avoid perturbing any code
outside glibc that tests whether __mips_isa_rev is defined; this patch
does so.
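In outline the change is:

  /* Before (triggers -Wundef when __mips_isa_rev is not defined): */
  #if __mips_isa_rev < 6
  /* After: */
  #if !defined __mips_isa_rev || __mips_isa_rev < 6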
* sysdeps/mips/sys/asm.h [__mips_isa_rev < 6]: Change condition to
[!defined __mips_isa_rev || __mips_isa_rev < 6].
Remove IA64 PAGE_SIZE related macros as PAGE_SIZE is not defined.
Also remove macros that are only used for BFD's trad-core support
which is not relevant for IA64, according to the thread starting
here:
https://sourceware.org/ml/libc-ports/2013-11/msg00028.html
This patch is neither built nor tested but is equivalent to a MIPS
patch for the same fix.
The dbl-64/wordsize-64 remquo implementation follows similar logic to
various other implementations, but where that logic computes some
absolute values, it wrongly uses a previously computed bit-pattern for
the absolute value of the first argument, where actually it needs the
absolute value of the first argument mod 8 times the second. This
patch fixes it to compute the correct absolute value.
The integer quotient result of remquo is only specified mod 8
(including its sign); architecture-specific versions may well vary in
what results they give for higher bits of that result (and indeed bug
17569 gives an example correct result from __builtin_remquo giving 9
for that result, where the particular glibc implementation used in
that bug report would give 1 after this fix). Thus, this patch adapts
the tests of remquo to test that result only mod 8, to allow for such
variation when tests with higher quotient are included.
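A minimal sketch of the mod-8 comparison (hypothetical helper, not the
actual libm-test.inc macro):

  /* Compare quotients only modulo 8; C99 % truncates toward zero, so
     the sign of QUO is preserved.  */
  static int
  quo_mod8 (int quo)
  {
    return quo % 8;
  }
  /* A result passes if quo_mod8 (computed) == quo_mod8 (expected).  */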
Tested for x86_64 and x86.
[BZ #17569]
* sysdeps/ieee754/dbl-64/wordsize-64/s_remquo.c (__remquo):
Compute absolute value of x as modified by fmod, not original
value of x.
* math/libm-test.inc (RUN_TEST_ffI_f1): Rename to
RUN_TEST_ffI_f1_mod8. Check extra return value mod 8.
(RUN_TEST_LOOP_ffI_f1): Rename to RUN_TEST_LOOP_ffI_f1_mod8. Call
RUN_TEST_ffI_f1_mod8.
(remquo_test_data): Add more tests.
Similarly to sqrt in
<https://sourceware.org/ml/libc-alpha/2015-02/msg00353.html>, the
powerpc sqrtf implementation for when _ARCH_PPCSQ is not defined also
relies on a * b + c being contracted into a fused multiply-add.
Although this contraction is not explicitly disabled for e_sqrtf.c, it
still seems appropriate to make the file explicit about its
requirements by using __builtin_fmaf; this patch does so.
Furthermore, it turns out that doing so fixes the observed inaccuracy
and missing exceptions (that is, that without explicit __builtin_fmaf
usage, it was not being compiled as intended).
Tested for powerpc32 (hard float).
[BZ #17967]
* sysdeps/powerpc/fpu/e_sqrtf.c (__slow_ieee754_sqrtf): Use
__builtin_fmaf instead of relying on contraction of a * b + c.
As Adhemerval noted in
<https://sourceware.org/ml/libc-alpha/2015-01/msg00451.html>, the
powerpc sqrt implementation for when _ARCH_PPCSQ is not defined is
inaccurate in some cases.
The problem is that this code relies on fused multiply-add, and relies
on the compiler contracting a * b + c to get a fused operation. But
sysdeps/ieee754/dbl-64/Makefile disables contraction for e_sqrt.c,
because the implementation in that directory relies on *not* having
contracted operations.
While it would be possible to arrange makefiles so that an earlier
sysdeps directory can disable the setting in
sysdeps/ieee754/dbl-64/Makefile, it seems a lot cleaner to make the
dependence on fused operations explicit in the .c file. GCC 4.6
introduced support for __builtin_fma on powerpc and other
architectures with such instructions, so we can rely on that; this
patch duly makes the code use __builtin_fma for all such fused
operations.
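Illustratively (hypothetical names, not the exact expressions in
e_sqrt.c):

  /* Request the fused operation explicitly instead of relying on the
     compiler contracting a * b + c into a single fma.  */
  r = __builtin_fma (a, b, c);   /* instead of: r = a * b + c; */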
Tested for powerpc32 (hard float).
2015-02-12 Joseph Myers <joseph@codesourcery.com>
[BZ #17964]
* sysdeps/powerpc/fpu/e_sqrt.c (__slow_ieee754_sqrt): Use
__builtin_fma instead of relying on contraction of a * b + c.
This patch fixes the remaining part of bug 16560, spurious underflows
from exp2 of arguments close to 0 (when the result is close to 1, so
should not underflow), by just using 1+x instead of a more complicated
calculation when the argument is sufficiently small.
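Conceptually (hypothetical threshold; the real cutoffs differ per
floating-point format):

  /* When |x| is this small, 2^x and 1 + x round to the same value in
     every rounding mode, so returning 1 + x gives the correctly
     rounded result, raises inexact, and cannot spuriously underflow.  */
  if (fabs (x) <= 0x1p-60)
    return 1.0 + x;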
Tested for x86_64, x86 and mips64.
[BZ #16560]
* math/e_exp2l.c [LDBL_MANT_DIG == 106] (LDBL_EPSILON): Undefine
and redefine.
(__ieee754_exp2l): Do not multiply small fractional parts by
M_LN2l.
* sysdeps/i386/fpu/e_exp2l.S (__ieee754_exp2l): Just add 1 to
small argument.
* sysdeps/ieee754/dbl-64/e_exp2.c (__ieee754_exp2): Likewise.
* sysdeps/ieee754/flt-32/e_exp2f.c (__ieee754_exp2f): Likewise.
* sysdeps/x86_64/fpu/e_exp2l.S (__ieee754_exp2l): Likewise.
* math/auto-libm-test-in: Add more tests of exp2.
* math/auto-libm-test-out: Regenerated.
This patch optimizes strncpy for power7 for unaligned source or
destination address. The source or destination address is aligned
to doubleword and data is shifted based on the alignment and
added with the previous loaded data to be written as a doubleword.
For each load, the cmpb instruction is used for a faster null check.
The new optimization shows a 10 to 70% performance improvement for
longer strings, though it does not show a big difference for string
sizes less than 16 due to the additional checks. Hence this new
algorithm is restricted to strings longer than 16.
This patch makes sincos set errno to EDOM when passed an infinity,
similarly to sin and cos.
Tested for x86_64, x86, powerpc and mips64. I don't know if the
architecture-specific implementations for ia64 and m68k might need
corresponding fixes.
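In outline, the change adds a fragment like this to the
infinite-argument path (simplified; __set_errno is the glibc-internal
helper):

  if (isinf (x))
    __set_errno (EDOM);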
2015-02-11 Joseph Myers <joseph@codesourcery.com>
[BZ #15467]
* sysdeps/ieee754/dbl-64/s_sincos.c: Include <errno.h>.
(__sincos): Set errno to EDOM for infinite argument.
* sysdeps/ieee754/flt-32/s_sincosf.c: Include <errno.h>.
(SINCOSF_FUNC): Set errno to EDOM for infinite argument.
* sysdeps/ieee754/ldbl-128/s_sincosl.c: Include <errno.h>.
(__sincosl): Set errno to EDOM for infinite argument.
* sysdeps/ieee754/ldbl-128ibm/s_sincosl.c: Include <errno.h>.
(__sincosl): Set errno to EDOM for infinite argument.
* sysdeps/ieee754/ldbl-96/s_sincosl.c: Include <errno.h>.
(__sincosl): Set errno to EDOM for infinite argument.
* math/libm-test.inc (sincos_test_data): Test errno setting.
As noted in
<https://sourceware.org/ml/libc-alpha/2014-10/msg00369.html>, soft-fp
sysdeps subdirectories (and more generally, subdirectories where
sysdeps/foo/Implies contains foo/bar) are unnecessary and should be
eliminated. This patch does so for MIPS.
Tested for MIPS64 (all three ABIs, soft-float) that installed stripped
shared libraries are unchanged by this patch.
* sysdeps/mips/soft-fp/sfp-machine.h: Move to ....
* sysdeps/mips/mips32/sfp-machine.h: ... here.
* sysdeps/mips/mips64/soft-fp/Makefile: Move to ....
* sysdeps/mips/mips64/Makefile: ... here.
* sysdeps/mips/mips64/soft-fp/e_sqrtl.c: Move to ....
* sysdeps/mips/mips64/e_sqrtl.c: ... here.
* sysdeps/mips/mips64/soft-fp/sfp-machine.h: Move to ....
* sysdeps/mips/mips64/sfp-machine.h: ... here.
* sysdeps/mips/mips32/Implies: Remove mips/soft-fp.
* sysdeps/mips/mips64/n32/Implies: Remove mips/mips64/soft-fp.
* sysdeps/mips/mips64/n64/Implies: Likewise.
The current minimum supported binutils (2.22) has ".machine altivec"
support by default, so there is no need to add a configure check for
such functionality. This patch removes the configure checks for it.
This patch cleans up some multiarch code related to the memmove
optimization. Initial IFUNC support added specialized wordcopy
symbols which turned into local IFUNC calls used by the default
memmove implementation. The patch removes the internal IFUNC for the
wordcopy symbols and uses local branches in the memmove optimization
instead.
This patch cleans up some multiarch code related to the memmove
optimization. Initial IFUNC support added specialized wordcopy
symbols which turned into local IFUNC calls used by the default
memmove implementation.
This change removes them and uses the optimized memmove instead
for supported chips.
This patch simplifies the default bcopy symbol for powerpc64 by just
calling memmove instead of using the default bcopy implementation.
Since the symbol is deprecated, it trades speed for code size.
This reverts part of the previous commit to refactor pthread.h.
The refactoring must be done by having pthread.h include arch
bits headers, not the other way around. Then hppa provides the
arch bits header. For now we synchronize again with pthread.h
and include the entire contents in the hppa copy.
Update all translations.
Update contributions in the manual.
Update installation notes with information about newest working tools.
Reconfigure using exactly autoconf 2.69.
Regenerate INSTALL.
(1) Fix warnings.
This is a bulk update to fix all the warnings that were causing
build failures with -Werror on hppa.
The most egregious problems are in dl-fptr.c which needs to be
entirely rewritten, thus I've used -Wno-error for that.
(2) Fix conformance errors.
The sysdep.c file had __syscall_error and syscall in one file
which caused conformance issues by including syscall when
__syscall_error was linked to. The fix is obviously to split
the file and use syscall.c to implement syscall.
* sysdeps/sparc/sparc32/bits/atomic.h
(__sparc32_atomic_do_unlock24): Put the memory barrier before the
unlock not after it.
(__v9_compare_and_exchange_val_32_acq): Use unions to avoid getting
volatile register usage warnings from the compiler.
memcpy with unaligned 256-bit AVX register loads/stores is slow on
older processors like Sandy Bridge. This patch adds
bit_AVX_Fast_Unaligned_Load
and sets it only when AVX2 is available.
[BZ #17801]
* sysdeps/x86_64/multiarch/init-arch.c (__init_cpu_features):
Set the bit_AVX_Fast_Unaligned_Load bit for AVX2.
* sysdeps/x86_64/multiarch/init-arch.h (bit_AVX_Fast_Unaligned_Load):
New.
(index_AVX_Fast_Unaligned_Load): Likewise.
(HAS_AVX_FAST_UNALIGNED_LOAD): Likewise.
* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Check the
bit_AVX_Fast_Unaligned_Load bit instead of the bit_AVX_Usable bit.
* sysdeps/x86_64/multiarch/memcpy_chk.S (__memcpy_chk): Likewise.
* sysdeps/x86_64/multiarch/mempcpy.S (__mempcpy): Likewise.
* sysdeps/x86_64/multiarch/mempcpy_chk.S (__mempcpy_chk): Likewise.
* sysdeps/x86_64/multiarch/memmove.c (__libc_memmove): Replace
HAS_AVX with HAS_AVX_FAST_UNALIGNED_LOAD.
* sysdeps/x86_64/multiarch/memmove_chk.c (__memmove_chk): Likewise.
This is because of alignment issues in the sem_t support.
tilegx32 does in fact support 64-bit atomics and we will need
to revisit this after the 2.21 freeze.
This patch disables use of 64-bit atomics for MIPS n32 to fix the
problems with unaligned semaphores.
Before 64-bit atomics are used for anything for which such alignment
issues do not arise, and before the addition of any new ILP32 ports
with 64-bit semaphores for which the ABI can be set to have the
greater alignment (AARCH64?), a better approach will need to be
established that allows architectures to declare their 64-bit atomics
availability accurately, without doing so causing inappropriate use of
such atomics on unaligned semaphores.
Tested for MIPS n32 that this fixes the nptl/tst-sem3 failure.
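The change itself, as described in the ChangeLog entry below, amounts
to (simplified excerpt of sysdeps/mips/bits/atomic.h):

  #if _MIPS_SIM == _ABIN32
  # define __HAVE_64B_ATOMICS 0
  #endif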
* sysdeps/mips/bits/atomic.h [_MIPS_SIM == _ABIN32]
(__HAVE_64B_ATOMICS): Define to 0.
This patch fixes a bug introduced by 18f2945ae9, which optimized
setting the FPSCR by only issuing a mtfs instruction if the new flags
differ from the old ones. The issue is a typo: the new flags should be
the new value, not the old one.
It fixes BZ#17885.
Some powerpc64 processors (the e5500 core, for instance) do not provide
the fsqrt instruction; however, the current check used in
math_private.h is __WORDSIZE and _ARCH_PWR4 (ISA 2.02). This patch
changes it to use the compiler macro _ARCH_PPCSQ (which is the same
condition GCC uses to decide whether to generate the fsqrt
instruction).
It fixes BZ#16576.
The glibc memset optimization for POWER8 uses the '.machine power8'
directive, which is only officially supported on binutils 2.24+. This
causes a build failure with older binutils.
Since .machine power8 is only required to correctly assemble the
'mtvsrd' instruction, and that is already handled by the MTVSRD_V1_R4
macro, there is no real need to use it.
The patch replaces power8 with power7 in the .machine directive.
It fixes BZ#17869.
This patch fixes the elf/ifuncmain6pie failure when building with GCC
4.9+. For some reason, the compiler removes the branch-taken code in
resolve_ifunc (sysdeps/powerpc/powerpc64/dl-machine.h) as dead code,
and thus the testcase fails because the ifunc resolver branches to an
invalid memory location. The fix explicitly adds a dependency on the
value based on the odp variable to avoid the compiler optimization.
It fixes BZ#17868.
This patch replaces unsigned long int and 1UL with uint64_t and
(uint64_t) 1 to support ILP32 targets like x32.
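Illustratively (hypothetical field and shift amount, not the exact nptl
source):

  /* Before: wrong on x32, where unsigned long int is only 32 bits.  */
  unsigned long int v = sem->data;
  v += 1UL << 32;
  /* After: the 64-bit width is explicit regardless of the ABI.  */
  uint64_t v = sem->data;
  v += (uint64_t) 1 << 32;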
[BZ #17870]
* nptl/sem_post.c (__new_sem_post): Replace unsigned long int
with uint64_t.
* nptl/sem_waitcommon.c (__sem_wait_cleanup): Replace 1UL with
(uint64_t) 1.
(__new_sem_wait_slow): Replace unsigned long int with uint64_t.
Replace 1UL with (uint64_t) 1.
* sysdeps/nptl/internaltypes.h (new_sem): Replace unsigned long
int with uint64_t.
This patch fixes raciness and cancellation-safety issues in powerpc
__get_clockfreq by dropping the internal static cache and by using
nocancel file operations.
The vDSO failure check is also removed, since the kernel code does not
return an error (it clears the cr0.so bit on function return) and the
static code (which reads the value from /proc) now uses
non-cancellable calls.
The ability to recursively call dlopen is useful for malloc
implementations that wish to load other dynamic modules that
implement reentrant/AS-safe functions to use in their own
implementation.
Given that a user malloc implementation may be called by an ongoing
dlopen to allocate memory, the user malloc implementation interrupts
dlopen, and if it calls dlopen again that's a reentrant call.
This patch fixes the issues with the ld.so.cache mapping
and the _r_debug assertion which prevent this from working
as expected.
See:
https://sourceware.org/ml/libc-alpha/2014-12/msg00446.html
This commit fixes semaphore destruction by either using 64b atomic
operations (where available), or by using two separate fields when only
32b atomic operations are available. In the latter case, we keep a
conservative estimate of whether there are any waiting threads in one
bit of the field that counts the number of available tokens, thus
allowing sem_post to atomically both add a token and determine whether
it needs to call futex_wake.
See:
https://sourceware.org/ml/libc-alpha/2014-12/msg00155.html
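As a conceptual sketch of the 32-bit case (hypothetical names;
atomic_fetch_add and futex_wake stand in for the internal atomic and
futex wrappers):

  /* One bit of the value word conservatively records "there may be
     waiters", so a single atomic add both posts a token and tells
     sem_post whether a futex_wake is needed.  */
  #define NWAITERS_FLAG   1u           /* low bit: maybe-waiters flag */
  #define TOKEN_INCREMENT (1u << 1)    /* token count lives above it */

  unsigned int old = atomic_fetch_add (&sem->value, TOKEN_INCREMENT);
  if (old & NWAITERS_FLAG)
    futex_wake (&sem->value, 1);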
When fixing namespace issues for <fenv.h> functions I missed one call
to fesetenv for powerpc-nofpu. This patch changes this to a call to
__fesetenv.
Tested for powerpc-nofpu; it fixes the previously observed math.h
linknamespace test failures.
[BZ #17748]
* sysdeps/powerpc/nofpu/feholdexcpt.c (__feholdexcept): Call
__fesetenv instead of fesetenv.
commit 050f7298e1 added an extern
declaration for __tls_get_addr that conflicts with the one in s390
dl-tls.h, based on whether __tls_get_addr is defined as a macro. The
rationale seems to be based on the assumption that __tls_get_addr is
exported for every architecture and hence an internal non-plt alias is
needed. This is not true for s390 though, since it exports
__tls_get_offset and not __tls_get_addr. This results in tst-audit9
being stuck in an infinite loop.
This patch fixes this by defining a __tls_get_addr macro to itself so
as to not use the conflicting declaration.
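The fix is the idiom of defining the macro to itself, roughly:

  /* Simplified: the conflicting extern declaration added by commit
     050f7298e1 is guarded on whether __tls_get_addr is defined as a
     macro, so defining the macro to itself hides the declaration
     without changing any use of the name.  */
  #define __tls_get_addr __tls_get_addr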
This patch fixes a performance regression in the POWER7/PPC64 memcmp
port for little endian. The LE code uses the 'ldbrx' instruction to
read memory in byte-reversed form; however, ISA 2.06 only provides the
indexed form, which uses a register value as an additional index
instead of a fixed value encoded in the instruction.
The port strategy for LE uses r0 as the index value and updates the
address on each compare loop iteration. For large compare sizes, this
adds 8 more instructions, plus some more depending on the trailing
size. This patch fixes it by adding pre-calculated indexes to remove
the address updates in the loops and trailing-size handling.
For large sizes it shows a considerable gain, with double the
performance, on par with BE.
This patch adds an optimized POWER8 strncmp. The implementation focuses
on speeding up unaligned cases, following the ideas of the POWER8
strcmp.
The algorithm first checks the initial 16 bytes, then aligns the first
source argument and uses unaligned loads on the second argument only.
Additional checks for page boundaries are done for unaligned cases
(where the source alignments differ).
This patch optimizes the POWER7 trailing check by avoiding byte read
operations and instead using bitwise operations on the doubleword
already read.
This patch adds an optimized POWER8 strcmp using unaligned accesses.
The algorithm first checks the initial 16 bytes, then aligns the first
source argument and uses unaligned loads on the second argument only.
Additional checks for page boundaries are done for unaligned cases.
This patch adds an optimized POWER8 st{r,p}ncpy using unaligned
accesses.
It shows a 10%-80% improvement over the optimized POWER7 one that uses
only aligned accesses, especially on unaligned inputs.
The algorithm first reads and checks 16 bytes (if the inputs do not
cross a 4K page boundary). Then it realigns the source to 16 bytes and
issues a 16-byte read-and-compare loop to speed up null byte checks for
large strings. Also, unlike the POWER7 optimization, the null padding
is done inline in the implementation, using possibly unaligned
accesses, instead of relying on a memset call. A special case is added
for page-crossing reads.
With 3eb38795db (Simplify strncat) the generic algorithm uses
strlen, strnlen, and memcpy. This is faster than the current POWER7
implementation, especially for unaligned strings (where the POWER7 code
uses byte-by-byte operations).
This patch removes the assembly implementation and uses a multiarch
specialization based on the default algorithm calling optimized POWER7
symbols.
This patch adds an optimized POWER8 strcpy using unaligned accesses.
For strings up to 16 bytes the implementation first calculates the
string size, like strlen, and issues a memcpy. For larger strings, the
source is first aligned to 16 bytes and then tested in a loop that
reads 16 bytes and combines the cmpb results for speedup. A special
case is added for page-crossing reads.
It shows a 30%-60% improvement over the optimized POWER7 one that uses
only aligned accesses.
The ldbl-96 implementation of scalblnl (used for x86_64 and ia64) uses
a condition k <= -63 to determine when a standard underflowing result
tiny*__copysignl(tiny,x) should be returned. However, that condition
corresponds to values with exponent -16446 or less, and in the case of
-16446, the correct result for round-to-nearest depends on whether the
value is exactly 0x1p-16446 (half the least subnormal) or more than
that. This patch fixes the bug by changing the condition to k <= -64
and accordingly adjusting the exponent by 64 not 63 when converting to
a normal value.
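Sketched (hypothetical surrounding code; the constants come from the
description above):

  /* Only take the tiny*copysign shortcut for exponents of -16447 and
     below; exponent -16446 must go through the normal scaling path so
     rounding can decide the result.  */
  if (k <= -64)                          /* was: k <= -63 */
    return tiny * __copysignl (tiny, x);
  /* ...otherwise scale via twom64 (0x1p-64L) and adjust the exponent
     by 64, not 63, when converting to a normal value.  */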
Tested for x86_64.
[BZ #17803]
* sysdeps/ieee754/ldbl-96/s_scalblnl.c (twom63): Rename to
twom64. Adjust value to 0x1p-64L.
(__scalblnl): Only return standard underflowing result for K <=
-64 not K <= -63; adjust exponent for underflowing result by 64
not 63.
* math/libm-test.inc (scalbn_test_data): Add more tests.
(scalbln_test_data): Likewise.
The ldbl-96 implementation of scalblnl (used for x86_64 and ia64) is
incorrect for subnormal arguments (this is a separate bug from bug
17803, which is about underflowing results). There are two problems
with the adjustments of subnormal arguments: the "two63" variable
multiplied by is actually 0x1p52L, not 0x1p63L, so it is insufficient
to make values normal; and GET_LDOUBLE_EXP(es,x), used to extract the
new exponent, extracts it into a variable that isn't used, while the
value used for the new exponent is wrongly taken from the high part of
the mantissa before the adjustment (hx). This patch fixes both of
those problems and adds appropriate tests.
Tested for x86_64.
[BZ #17834]
* sysdeps/ieee754/ldbl-96/s_scalblnl.c (two63): Change value to
0x1p63L.
(__scalblnl): Get new exponent of adjusted subnormal value from ES
not HX.
* math/libm-test.inc (scalbn_test_data): Add more tests.
(scalbln_test_data): Likewise.
Linux 3.15 adds vDSO support for clock_gettime, gettimeofday, and time
(commit id 37c975545ec63320789962bf307f000f08fabd48). This patch adds
glibc support to use these symbols when they are available.
Along with the x86 vDSO support, this patch cleans up the x86_64 code
by moving all common code to the common x86 folder. Only init-first.c
differs between the implementations.
The Linux kernel powerpc documentation states that issuing a syscall
inside a transaction is not recommended and may lead to undefined
behavior. It also states that syscalls do not abort the transaction,
nor do they run in transactional state.
To avoid side effects being visible outside transactions, glibc with
lock elision enabled will issue a transaction abort instruction just
before all syscalls if the hardware supports hardware transactions.
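Conceptually (hypothetical capability check; __builtin_tabort is GCC's
powerpc HTM builtin):

  /* Abort any active transaction before entering the kernel so that
     syscall side effects cannot become visible outside a
     transaction.  */
  if (thread_has_htm)
    __builtin_tabort (0);
  /* ...then issue the syscall as usual.  */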
This patch adds support for lock elision using ISA 2.07 hardware
transactional memory for rwlocks. The logic is similar to the
one presented in pthread_mutex lock elision.
This patch adds support for lock elision using ISA 2.07 hardware
transactional memory instructions for pthread_mutex primitives.
Similar to the s390 version, the elision logic defined in
'force-elision.h' is only enabled if ENABLE_LOCK_ELISION is defined.
Also, the lock elision code should be buildable even with a compiler
that does not provide HTM support via builtins.
However, I have noted that the performance is sub-optimal due to
scheduling pressure.
Microblaze apparently has a variable page size (see thread below) and
should not hard-code any page-size related macros.
Also remove macros that are only used for BFD's trad-core support
which is not relevant for microblaze, also according to the thread
starting here:
https://sourceware.org/ml/libc-ports/2013-11/msg00028.html
This patch is neither built nor tested but mirrors a MIPS patch that
fixes the same issue.
Thanks,
Matthew
* sysdeps/unix/sysv/linux/microblaze/sys/user.h
(PAGE_SHIFT, PAGE_SIZE, PAGE_MASK, NBPG, UPAGES): Remove.
(HOST_TEXT_START_ADDR, HOST_STACK_END_ADDR): Remove.
Signed-off-by: David Holsgrove <david.holsgrove@xilinx.com>
2015-01-06 David Holsgrove <david.holsgrove@xilinx.com>
* sysdeps/microblaze/jmpbuf-unwind.h (_jmpbuf_sp): Declare SP as void
pointer and cast to uintptr_t.
Signed-off-by: David Holsgrove <david.holsgrove@xilinx.com>
2015-01-06 David Holsgrove <david.holsgrove@xilinx.com>
* sysdeps/microblaze/nptl/tls.h (TLS_INIT_TP): Use NULL instead
of 0.
Signed-off-by: David Holsgrove <david.holsgrove@xilinx.com>
GCC 5.0 emits a warning when sizeof is used on array function
parameters, and the powerpc internal syscall macros contain a
sizeof-based check that triggers it. More specifically, in the
powerpc64 and powerpc32 sysdep.h:
if (__builtin_classify_type (__arg3) != 5 && sizeof (__arg3) > 8) \
__illegally_sized_syscall_arg3 (); \
And when building sysdeps/unix/sysv/linux/utimensat.c, GCC emits:
error: ‘sizeof’ on array function parameter ‘tsp’ will return size of
‘const struct timespec *’
This patch uses the address of the first struct member instead of the
struct itself in the syscall macro.
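Conceptually the workaround looks like this (hypothetical, simplified
from the real utimensat.c change):

  /* TSP is an array-typed parameter, so sizeof (tsp) inside the
     syscall macro trips the GCC 5 warning; passing a pointer
     expression with the same value avoids it.  */
  const struct timespec *tsp_arg = &tsp[0];
  return INLINE_SYSCALL (utimensat, 4, fd, file, tsp_arg, flags);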
Concluding the fixes for C90 libm functions calling C99 fe* functions,
this patch fixes the case of feupdateenv by making it a weak alias for
__feupdateenv and making the affected code call __feupdateenv.
Tested for x86_64 (testsuite, and that installed stripped shared
libraries are unchanged by the patch). Also tested for ARM
(soft-float) that the math.h linknamespace tests now pass.
[BZ #17748]
* include/fenv.h (__feupdateenv): Use libm_hidden_proto.
* math/feupdateenv.c (__feupdateenv): Use libm_hidden_def.
* sysdeps/aarch64/fpu/feupdateenv.c (feupdateenv): Rename to
__feupdateenv and define as weak alias of __feupdateenv. Use
libm_hidden_weak.
* sysdeps/alpha/fpu/feupdateenv.c (__feupdateenv): Use
libm_hidden_def.
* sysdeps/arm/feupdateenv.c (feupdateenv): Rename to __feupdateenv
and define as weak alias of __feupdateenv. Use libm_hidden_weak.
* sysdeps/hppa/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/i386/fpu/feupdateenv.c (__feupdateenv): Use
libm_hidden_def.
* sysdeps/ia64/fpu/feupdateenv.c (feupdateenv): Rename to
__feupdateenv and define as weak alias of __feupdateenv. Use
libm_hidden_weak.
* sysdeps/m68k/fpu/feupdateenv.c (__feupdateenv): Use
libm_hidden_def.
* sysdeps/mips/fpu/feupdateenv.c (feupdateenv): Rename to
__feupdateenv and define as weak alias of __feupdateenv. Use
libm_hidden_weak.
* sysdeps/powerpc/fpu/feupdateenv.c (__feupdateenv): Use
libm_hidden_def.
* sysdeps/powerpc/nofpu/feupdateenv.c (__feupdateenv): Likewise.
* sysdeps/powerpc/powerpc32/e500/nofpu/feupdateenv.c
(__feupdateenv): Likewise.
* sysdeps/s390/fpu/feupdateenv.c (feupdateenv): Rename to
__feupdateenv and define as weak alias of __feupdateenv. Use
libm_hidden_weak.
* sysdeps/sh/sh4/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/sparc/fpu/feupdateenv.c (__feupdateenv): Use
libm_hidden_def.
* sysdeps/tile/math_private.h (__feupdateenv): New inline
function.
* sysdeps/x86_64/fpu/feupdateenv.c (__feupdateenv): Use
libm_hidden_def.
* sysdeps/generic/math_private.h (default_libc_feupdateenv): Call
__feupdateenv instead of feupdateenv.
(default_libc_feupdateenv_test): Likewise.
(libc_feresetround_ctx): Likewise.
Continuing the fixes for C90 libm functions calling C99 fe* functions,
this patch fixes the case of fesetround by making it a weak alias of
__fesetround and making the affected code call __fesetround. An
existing __fesetround function in fenv_libc.h for powerpc is renamed
to __fesetround_inline.
Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch). Also tested for ARM
(soft-float) that fesetround failures disappear from the linknamespace
test results (feupdateenv remains to be addressed to complete fixing
bug 17748).
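In outline, the renaming pattern applied to each fesetround
implementation is (placeholder body; only the naming and aliasing
matter here):

  int
  __fesetround (int round)
  {
    /* ...set the rounding mode exactly as before...  */
    return 0;   /* placeholder body */
  }
  weak_alias (__fesetround, fesetround)
  libm_hidden_weak (fesetround)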
[BZ #17748]
* include/fenv.h (__fesetround): Declare. Use libm_hidden_proto.
* math/fesetround.c (fesetround): Rename to __fesetround and
define as weak alias of __fesetround. Use libm_hidden_weak.
* sysdeps/aarch64/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/alpha/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/arm/fesetround.c (fesetround): Likewise.
* sysdeps/hppa/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/i386/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/ia64/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/m68k/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/mips/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/powerpc/fpu/fenv_libc.h (__fesetround): Rename to
__fesetround_inline.
* sysdeps/powerpc/fpu/fenv_private.h (libc_fesetround_ppc): Call
__fesetround_inline instead of __fesetround.
* sysdeps/powerpc/fpu/fesetround.c (fesetround): Rename to
__fesetround and define as weak alias of __fesetround. Use
libm_hidden_weak. Call __fesetround_inline instead of
__fesetround.
* sysdeps/powerpc/nofpu/fesetround.c (fesetround): Rename to
__fesetround and define as weak alias of __fesetround. Use
libm_hidden_weak.
* sysdeps/powerpc/powerpc32/e500/nofpu/fesetround.c (fesetround):
Likewise.
* sysdeps/s390/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/sh/sh4/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/sparc/fpu/fesetround.c (fesetround): Likewise.
* sysdeps/tile/math_private.h (__fesetround): New inline function.
* sysdeps/x86_64/fpu/fesetround.c (fesetround): Rename to
__fesetround and define as weak alias of __fesetround. Use
libm_hidden_weak.
* sysdeps/generic/math_private.h (default_libc_fesetround): Call
__fesetround instead of fesetround.
(default_libc_feholdexcept_setround): Likewise.
(libc_feholdsetround_ctx): Likewise.
(libc_feholdsetround_noex_ctx): Likewise.
Continuing the fixes for C90 libm functions calling C99 fe* functions,
this patch fixes the case of fesetenv by making it a weak alias of
__fesetenv and making the affected code (including various copies of
feupdateenv which also gets called from C90 functions) call
__fesetenv.
Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch). Also tested for ARM
(soft-float) that fesetenv failures disappear from the linknamespace
test results (fesetround and feupdateenv remain to be addressed to
complete fixing bug 17748).
[BZ #17748]
* include/fenv.h (__fesetenv): Use libm_hidden_proto.
* math/fesetenv.c (__fesetenv): Use libm_hidden_def.
* sysdeps/aarch64/fpu/fesetenv.c (fesetenv): Rename to __fesetenv
and define as weak alias of __fesetenv. Use libm_hidden_weak.
* sysdeps/alpha/fpu/fesetenv.c (__fesetenv): Use libm_hidden_def.
* sysdeps/arm/fesetenv.c (fesetenv): Rename to __fesetenv and
define as weak alias of __fesetenv. Use libm_hidden_weak.
* sysdeps/hppa/fpu/fesetenv.c (fesetenv): Likewise.
* sysdeps/i386/fpu/fesetenv.c (__fesetenv): Use libm_hidden_def.
* sysdeps/ia64/fpu/fesetenv.c (fesetenv): Rename to __fesetenv and
define as weak alias of __fesetenv. Use libm_hidden_weak.
* sysdeps/m68k/fpu/fesetenv.c (__fesetenv): Use libm_hidden_def.
* sysdeps/mips/fpu/fesetenv.c (fesetenv): Rename to __fesetenv and
define as weak alias of __fesetenv. Use libm_hidden_weak.
* sysdeps/powerpc/fpu/fesetenv.c (__fesetenv): Use
libm_hidden_def.
* sysdeps/powerpc/nofpu/fesetenv.c (__fesetenv): Likewise.
* sysdeps/powerpc/powerpc32/e500/nofpu/fesetenv.c (__fesetenv):
Likewise.
* sysdeps/s390/fpu/fesetenv.c (fesetenv): Rename to __fesetenv and
define as weak alias of __fesetenv. Use libm_hidden_weak.
* sysdeps/sh/sh4/fpu/fesetenv.c (fesetenv): Likewise.
* sysdeps/sparc/fpu/fesetenv.c (__fesetenv): Use libm_hidden_def.
* sysdeps/tile/math_private.h (__fesetenv): New inline function.
* sysdeps/x86_64/fpu/fesetenv.c (fesetenv): Rename to __fesetenv
and define as weak alias of __fesetenv. Use libm_hidden_weak.
* sysdeps/generic/math_private.h (default_libc_fesetenv): Use
__fesetenv instead of fesetenv.
(libc_feresetround_noex_ctx): Likewise.
* sysdeps/alpha/fpu/feupdateenv.c (__feupdateenv): Likewise.
* sysdeps/hppa/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/i386/fpu/feupdateenv.c (__feupdateenv): Likewise.
* sysdeps/ia64/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/m68k/fpu/feupdateenv.c (__feupdateenv): Likewise.
* sysdeps/mips/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/powerpc/nofpu/feupdateenv.c (__feupdateenv): Likewise.
* sysdeps/powerpc/powerpc32/e500/nofpu/feupdateenv.c
(__feupdateenv): Likewise.
* sysdeps/s390/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/sh/sh4/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/sparc/fpu/feupdateenv.c (__feupdateenv): Likewise.
* sysdeps/x86_64/fpu/feupdateenv.c (__feupdateenv): Likewise.
We simplify the allocation strategy there: instead of using a temporary
linked list and then copying the entries to the output array, we keep
them in a resizable array.
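As a conceptual sketch of the resizable-array approach (hypothetical
names):

  /* Grow the output array in place instead of building a temporary
     linked list and copying it at the end.  */
  if (count == capacity)
    {
      size_t new_capacity = 2 * capacity + 4;
      struct entry *tmp = realloc (entries, new_capacity * sizeof *tmp);
      if (tmp == NULL)
        goto fail;          /* the old block stays valid for cleanup */
      entries = tmp;
      capacity = new_capacity;
    }
  entries[count++] = current_entry;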
C99 specifies that CLOCKS_PER_SEC is an expression with the type clock_t.
This patch adds a generic <bits/time2.h> to define CLOCKS_PER_SEC and
provides the Linux/x86-64 version of <bits/time2.h> to support x32.
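Illustratively, the requirement and the sanity check named in the
ChangeLog look like:

  /* CLOCKS_PER_SEC must be an expression of type clock_t...  */
  #define CLOCKS_PER_SEC  ((clock_t) 1000000)
  /* ...and sysdeps/unix/sysv/linux/clock.c can then assert the value
     it relies on (sketch).  */
  _Static_assert (CLOCKS_PER_SEC == 1000000,
                  "CLOCKS_PER_SEC should be 1000000");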
[BZ #17797]
* bits/time.h (CLOCKS_PER_SEC): Changed to ((clock_t) 1000000).
* sysdeps/unix/sysv/linux/bits/time.h (CLOCKS_PER_SEC): Likewise.
* sysdeps/unix/sysv/linux/clock.c (clock): _Static_assert
CLOCKS_PER_SEC == 1000000.
* time/clocktest.c (main): Replace %ld with %jd and cast to
intmax_t.
Continuing the fixes for C90 libm functions calling C99 fe* functions,
this patch fixes the case of feholdexcept by making it a weak alias of
__feholdexcept and making the affected code call __feholdexcept.
Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch). Also tested for ARM
(soft-float) that feholdexcept failures disappear from the
linknamespace test failures (fesetenv, fesetround and feupdateenv
remain to be addressed to complete fixing bug 17748).
[BZ #17748]
* include/fenv.h (__feholdexcept): Declare. Use
libm_hidden_proto.
* math/feholdexcpt.c (feholdexcept): Rename to __feholdexcept and
define as weak alias of __feholdexcept. Use libm_hidden_weak.
* sysdeps/aarch64/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/alpha/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/arm/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/hppa/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/i386/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/ia64/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/m68k/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/mips/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/powerpc/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/powerpc/nofpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/powerpc/powerpc32/e500/nofpu/feholdexcpt.c
(feholdexcept): Likewise.
* sysdeps/s390/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/sh/sh4/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/sparc/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/x86_64/fpu/feholdexcpt.c (feholdexcept): Likewise.
* sysdeps/generic/math_private.h (default_libc_feholdexcept): Use
__feholdexcept instead of feholdexcept.
(default_libc_feholdexcept_setround): Likewise.
sysdeps/unix/sysv/linux/mips/mips64/n64/posix_fadvise.c defines
posix_fadvise64 as a strong alias for posix_fadvise (for
!SHLIB_COMPAT(libc, GLIBC_2_2, GLIBC_2_3_3) - i.e., for static
linking, which is the case when this matters), but it should be a weak
alias. This patch makes it a weak alias.
Tested for MIPS that this fixes the observed linknamespace test
failures.
[BZ #17796]
* sysdeps/unix/sysv/linux/mips/mips64/n64/posix_fadvise.c
[!SHLIB_COMPAT(libc, GLIBC_2_2, GLIBC_2_3_3)] (posix_fadvise64):
Define as weak alias not strong alias.
The tile vDSO vsyscalls were not properly setting the error value.
Conventionally, tile returns the same "non-negative success, negative
errno" value that x86 does (in r0), but it also returns "zero or positive
errno" in r1, which is what the regular syscall code checks. This change
uses that convention for the vDSO calls as well.
Platforms with 64-bit registers where 32-bit values need to have the
high 32 bits set in a particular way need to have an explicit cast
when using the 64-bit sysdeps/ieee754/dbl-64/wordsize-64 version
of llround() as lround(). This includes tilegx32, and likely MIPS.
x32 does not need this, and AArch64 ILP32 will not either. Require
it to be specified in sysdep.h to be explicit.
ARM posix_fadvise calls __posix_fadvise64_l64, to which
posix_fadvise64 is a strong alias, but posix_fadvise is a POSIX
function and posix_fadvise64 isn't. This patch changes it into a weak
alias.
Tested for ARM that this fixes the corresponding linknamespace test
failures.
[BZ #17793]
* sysdeps/unix/sysv/linux/arm/posix_fadvise64.c (posix_fadvise64):
Define as weak alias not strong alias.
On systems using sysdeps/unix/sysv/linux/wordsize-64, posix_fadvise64
and posix_fallocate64 (non-POSIX) are strong aliases for posix_fadvise
and posix_fallocate (POSIX), meaning references to the latter wrongly
bring in definitions of the former. They should be weak aliases; this
patch makes them so.
Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch).
[BZ #17777]
* sysdeps/unix/sysv/linux/wordsize-64/posix_fadvise.c
(posix_fadvise64): Define as weak alias not strong alias.
* sysdeps/unix/sysv/linux/wordsize-64/posix_fallocate.c
(posix_fallocate64): Likewise.
* conform/Makefile (test-xfail-XOPEN2K/fcntl.h/linknamespace):
Remove variable.
(test-xfail-XOPEN2K/mqueue.h/linknamespace): Likewise.
(test-xfail-POSIX2008/fcntl.h/linknamespace): Likewise.
(test-xfail-POSIX2008/mqueue.h/linknamespace): Likewise.
(test-xfail-XOPEN2K8/fcntl.h/linknamespace): Likewise.
(test-xfail-XOPEN2K8/mqueue.h/linknamespace): Likewise.
MIPS supports a variable page size but glibc defines a constant.
This causes at least two glibc tests to fail when the page size
does not match the hard-coded size:
inet/test-ifaddrs
inet/test_ifindex
[BZ #16191]
* NEWS: Mention bug fix.
* sysdeps/unix/sysv/linux/mips/sys/user.h (PAGE_SHIFT): Remove.
(PAGE_SIZE, PAGE_MASK, NBPG, UPAGES): Likewise.
(HOST_TEXT_START_ADDR, HOST_DATA_START_ADDR): Likewise.
(HOST_STACK_END_ADDR): Likewise.
Continuing the fixes for C90 libm functions calling C99 fe* functions,
this patch fixes the case of fegetround by making it a weak alias of
__fegetround and making the affected code call __fegetround.
Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch). Also tested for ARM
(soft-float) that fegetround failures disappear from the linknamespace
test failures (feholdexcept, fesetenv, fesetround and feupdateenv
remain to be addressed before bug 17748 is fully fixed, although this
patch may suffice to fix the failures in some cases, when the libc_fe*
functions are implemented but there is no architecture-specific sqrt
implementation in use so there were failures from fegetround used by
sqrt but no other such failures).
[BZ #17748]
* include/fenv.h (__fegetround): Declare. Use libm_hidden_proto.
* math/fegetround.c (fegetround): Rename to __fegetround and
define as weak alias of __fegetround. Use libm_hidden_weak.
* sysdeps/aarch64/fpu/fegetround.c (fegetround): Likewise.
* sysdeps/alpha/fpu/fegetround.c (fegetround): Likewise.
* sysdeps/arm/fegetround.c (fegetround): Likewise.
* sysdeps/hppa/fpu/fegetround.c (fegetround): Likewise.
* sysdeps/i386/fpu/fegetround.c (fegetround): Likewise.
* sysdeps/ia64/fpu/fegetround.c (fegetround): Likewise.
* sysdeps/m68k/fpu/fegetround.c (fegetround): Likewise.
* sysdeps/mips/fpu/fegetround.c (fegetround): Likewise.
* sysdeps/powerpc/fpu/fegetround.c (fegetround): Likewise.
Undefine after rather than before function definition; use
parentheses around function name in definition.
(__fegetround): Also undefine macro after function definition.
* sysdeps/powerpc/nofpu/fegetround.c (fegetround): Rename to
__fegetround and define as weak alias of __fegetround. Use
libm_hidden_weak. Do not undefine as macro.
* sysdeps/powerpc/powerpc32/e500/nofpu/fegetround.c (fegetround):
Likewise.
* sysdeps/s390/fpu/fegetround.c (fegetround): Rename to
__fegetround and define as weak alias of __fegetround. Use
libm_hidden_weak.
* sysdeps/sh/sh4/fpu/fegetround.c (fegetround): Likewise.
* sysdeps/sparc/fpu/fegetround.c (fegetround): Likewise.
* sysdeps/tile/math_private.h (__fegetround): New inline function.
* sysdeps/x86_64/fpu/fegetround.c (fegetround): Rename to
__fegetround and define as weak alias of __fegetround. Use
libm_hidden_weak.
* sysdeps/ieee754/dbl-64/e_sqrt.c (__ieee754_sqrt): Use
__fegetround instead of fegetround.
sysdeps/unix/sysv/linux/mips/bits/termios.h defines TIOCSER_TEMT
unconditionally, but it's in the user's namespace. This patch
conditions it on __USE_MISC, as on powerpc. I've filed bug 17783 for
the residual inconsistency in conditions on this macro (sparc defines
it for __USE_GNU only).
[BZ #17782]
* sysdeps/unix/sysv/linux/mips/bits/termios.h (TIOCSER_TEMT):
Condition macro definition on [__USE_MISC].
sysdeps/unix/sysv/linux/mips/bits/sigaction.h gives sa_flags type
unsigned int, but POSIX says it should be signed int. This patch
gives it the correct type (the layout is unchanged, so there are no
ABI issues involved).
[BZ #17781]
* sysdeps/unix/sysv/linux/mips/bits/sigaction.h
(struct sigaction): Change type of sa_flags field to int.
sysdeps/unix/sysv/linux/mips/bits/fcntl.h has a structure field called
pad, which is in the user's namespace. This patch changes it to
__glibc_reserved0.
[BZ #17780]
* sysdeps/unix/sysv/linux/mips/bits/fcntl.h (struct flock)
[!__USE_FILE_OFFSET64 && _MIPS_SIM != _ABI64]: Rename pad field to
__glibc_reserved0.
The default required version of binutils has been increased to 2.22
making this check redundant.
ChangeLog:
2015-01-02 Will Newton <will.newton@linaro.org>
* sysdeps/arm/armv7/configure: Removed.
* sysdeps/arm/armv7/configure.ac: Likewise.
Some C90 libm functions call fegetenv via libc_feholdsetround*
functions in math_private.h. This patch makes them call __fegetenv
instead, making fegetenv into a weak alias for __fegetenv as needed.
Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch). Also tested for ARM
(soft-float) that fegetenv failures disappear from the linknamespace
test failures (however, similar fixes will also be needed for
fegetround, feholdexcept, fesetenv, fesetround and feupdateenv before
this set of namespace issues covered by bug 17748 is fully fixed and
those linknamespace tests start passing).
[BZ #17748]
* include/fenv.h (__fegetenv): Use libm_hidden_proto.
* math/fegetenv.c (__fegetenv): Use libm_hidden_def.
* sysdeps/aarch64/fpu/fegetenv.c (fegetenv): Rename to __fegetenv
and define as weak alias of __fegetenv. Use libm_hidden_weak.
* sysdeps/alpha/fpu/fegetenv.c (__fegetenv): Use libm_hidden_def.
* sysdeps/arm/fegetenv.c (fegetenv): Rename to __fegetenv and
define as weak alias of __fegetenv. Use libm_hidden_weak.
* sysdeps/hppa/fpu/fegetenv.c (fegetenv): Likewise.
* sysdeps/i386/fpu/fegetenv.c (__fegetenv): Use libm_hidden_def.
* sysdeps/ia64/fpu/fegetenv.c (fegetenv): Rename to __fegetenv and
define as weak alias of __fegetenv. Use libm_hidden_weak.
* sysdeps/m68k/fpu/fegetenv.c (__fegetenv): Use libm_hidden_def.
* sysdeps/mips/fpu/fegetenv.c (fegetenv): Rename to __fegetenv and
define as weak alias of __fegetenv. Use libm_hidden_weak.
* sysdeps/powerpc/fpu/fegetenv.c (__fegetenv): Use
libm_hidden_def.
* sysdeps/powerpc/nofpu/fegetenv.c (__fegetenv): Likewise.
* sysdeps/powerpc/powerpc32/e500/nofpu/fegetenv.c (__fegetenv):
Likewise.
* sysdeps/s390/fpu/fegetenv.c (fegetenv): Rename to __fegetenv and
define as weak alias of __fegetenv. Use libm_hidden_weak.
* sysdeps/sh/sh4/fpu/fegetenv.c (fegetenv): Likewise.
* sysdeps/sparc/fpu/fegetenv.c (__fegetenv): Use libm_hidden_def.
* sysdeps/tile/math_private.h (__fegetenv): New inline function.
* sysdeps/x86_64/fpu/fegetenv.c (fegetenv): Rename to __fegetenv
and define as weak alias of __fegetenv. Use libm_hidden_weak.
* sysdeps/generic/math_private.h (libc_feholdsetround_ctx): Use
__fegetenv instead of fegetenv.
(libc_feholdsetround_noex_ctx): Likewise.
This patch optimizes strcpy for ppc64/power7 for unaligned source or
destination address. The source or destination address is aligned
to doubleword and data is shifted based on the alignment and
added with the previous loaded data to be written as a doubleword.
For each load, the cmpb instruction is used for a faster null check.
The word aligned optimization is also removed, since the new unaligned
code path shows better results handling word-aligned strings.
More combinations of unaligned inputs are also added to the benchtests
to measure the improvement. The new optimization shows a 2 to 80%
performance improvement for longer strings, though it does not show a
big difference for string sizes less than 16 due to the additional
checks.
The natural fix for some linknamespace test failures, where C90 libm
functions call C99 <fenv.h> functions, is to make fe* into weak
aliases for __fe* and call __fe* from within libm as needed.
To do this, the __fe* names need to be available for that purpose -
that is, they must not be used for something other than aliases of
fe*. On powerpc, however, __fegetround is an inline function in
fenv_libc.h, with no corresponding fegetround inline function;
fegetround has an equivalent macro expansion in bits/fenvinline.h, but
that is disabled if __NO_MATH_INLINES (which is defined for building
libm).
I see no need for that disabling; it's not even clear that
__NO_MATH_INLINES should affect <fenv.h>, and the results of
fegetround are completely defined so there is no semantic effect of
that disabling at all outside glibc. The x86 inline feraiseexcept is
conditioned on __USE_EXTERN_INLINES not __NO_MATH_INLINES (but that's
an inline function rather than a macro).
This patch removes the __NO_MATH_INLINES conditional on that
fegetround macro, so resulting in it being expanded inline inside
glibc. In turn, this means that direct calls to __fegetround from C99
functions in ldbl-128ibm can be changed to calls to fegetround, so
that nofpu fenv_libc.h files don't need to define __fegetround at all
and, by changing ldbl-128ibm files to use <fenv.h> not <fenv_libc.h>,
non-e500 nofpu no longer needs an fenv_libc.h file.
The other macros in fenvinline.h are left conditional on
__NO_MATH_INLINES, although the only case where this should make a
difference is one involving undefined behavior (if the argument to the
function is not a valid exception macro).
The out-of-line definition of fegetround uses __fegetround (the inline
function removed by this patch). So that this continues to work, the
fenvinline.h header is made to define __fegetround, and then to define
fegetround to call __fegetround.
Tested for powerpc32 (hard float) that installed stripped shared
libraries are unchanged by this patch; also tested that powerpc-nofpu
build still works. (This patch does not itself fix any bugs; it
simply cleans things up in preparation for separate bug fixes.)
* sysdeps/powerpc/bits/fenvinline.h (fegetround): Rename macro to
__fegetround and redefine to call __fegetround. Remove condition
on [!__NO_MATH_INLINES].
* sysdeps/powerpc/fpu/fenv_libc.h (__fegetround): Remove inline
function.
* sysdeps/powerpc/nofpu/fenv_libc.h: Remove file.
* sysdeps/powerpc/powerpc32/e500/nofpu/fenv_libc.h (__fegetround):
Remove macro.
* sysdeps/ieee754/ldbl-128ibm/s_llrintl.c: Include <fenv.h>
instead of <fenv_libc.h>.
(__llrintl): Call fegetround instead of __fegetround.
* sysdeps/ieee754/ldbl-128ibm/s_llroundl.c: Include <fenv.h>
instead of <fenv_libc.h>.
* sysdeps/ieee754/ldbl-128ibm/s_lrintl.c: Likewise.
(__lrintl): Call fegetround instead of __fegetround.
* sysdeps/ieee754/ldbl-128ibm/s_lroundl.c: Include <fenv.h>
instead of <fenv_libc.h>.
* sysdeps/ieee754/ldbl-128ibm/s_rintl.c: Likewise.
(__rintl): Call fegetround instead of __fegetround.
PI_STATIC_AND_HIDDEN is always defined for i386. There is no need to
check PI_STATIC_AND_HIDDEN in sysdeps/i386/dl-machine.h.
[BZ #17775]
* sysdeps/i386/dl-machine.h (PI_STATIC_AND_HIDDEN): Removed.
(elf_machine_dynamic) [!PI_STATIC_AND_HIDDEN]: Likewise.
(elf_machine_load_address) [!PI_STATIC_AND_HIDDEN]: Likewise.
Fixed 3 "make check" failures on glibc 32bit built by gcc 5.0 due to EBX
was enabled for allocation:
https://gcc.gnu.org/ml/gcc-patches/2014-10/msg00892.html
Tests elf/tst-tls3, elf/tst-execstack-needed, elf/tst-execstack-prog
were failed because EBX was used as PIC register.
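As a rough sketch of the conditional this adds (only __GNUC_PREREQ
from <features.h> is assumed to be real here; the tls_le_* helpers are
hypothetical stand-ins for the inline-asm sequences in tls-macros.h):

  #include <features.h>

  extern int *tls_le_nonpic (void);  /* hypothetical: no %ebx setup */
  extern int *tls_le_pic (void);     /* hypothetical: treats %ebx as GOT pointer */

  #if __GNUC_PREREQ (5, 0)
  /* GCC 5 may allocate %ebx even in PIC code, so asm that treats %ebx
     as a fixed PIC register is no longer safe; use the non-PIC form.  */
  # define TLS_LE(x) tls_le_nonpic ()
  #else
  # define TLS_LE(x) tls_le_pic ()
  #endif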
* sysdeps/i386/tls-macros.h: Include <features.h>.
(TLS_LE): Use non-PIC version for GCC >= 5.0.
(TLS_IE): Likewise.
(TLS_LD): Likewise.
(TLS_GD): Likewise.
* sysdeps/unix/sysv/linux/i386/sysdep.h (check_consistency): Don't
define for GCC >= 5.0.
Various C90 and UNIX98 libm functions call feraiseexcept, which is not
in those standards. This causes linknamespace test failures - except
on x86 / x86_64, where feraiseexcept is inline (for the relevant
constant arguments) in bits/fenv.h.
This patch fixes this by making those functions call __feraiseexcept
instead. All changes are applied to all architectures rather than
considering the possibility that some might not be needed in some
cases (e.g. x86) as it seems most maintainable to keep architectures
consistent.
Where __feraiseexcept does not exist, it is added, with feraiseexcept
made a weak alias; where it is a strong alias, it is made weak.
libm_hidden_def / libm_hidden_proto are used with __feraiseexcept
(this might in some cases improve code generation for existing calls
to __feraiseexcept in some code on some architectures). Where there
are dummy feraiseexcept macros (on architectures without
floating-point exceptions support, to avoid compile errors from
references to undefined FE_* macros), corresponding dummy
__feraiseexcept macros are added. And on x86, to ensure
__feraiseexcept calls still get inlined, the inline function in
bits/fenv.h is refactored so that most of it can be reused in an
inline __feraiseexcept in a separate include/bits/fenv.h.
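A self-contained sketch of the alias arrangement, using GCC attributes
directly instead of glibc's weak_alias / libm_hidden_* macros (the
my_* names are illustrative only):

  #include <fenv.h>

  int
  __my_feraiseexcept (int excepts)
  {
    /* A real implementation raises the exceptions in hardware; the
       sketch just delegates to the standard function.  */
    return feraiseexcept (excepts);
  }

  /* The public name is only a weak alias, so it can be interposed,
     while C90 libm code calls the __-prefixed name and stays
     namespace-clean.  */
  extern int my_feraiseexcept (int)
    __attribute__ ((weak, alias ("__my_feraiseexcept")));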
Calls are changed in C90/UNIX98 functions, but generally not in
functions missing from those standards. They are also changed in
libc_fe* functions (on the basis that those might be used in any libm
function), and in feupdateenv (on the same basis - may be used, via
default libc_*, in any libm function - of course feupdateenv will need
changing to __feupdateenv in a subsequent patch to make that fully
namespace-clean).
No __feraiseexcept is added corresponding to the feraiseexcept in
powerpc bits/fenvinline.h, because that macro definition is
conditional on !defined __NO_MATH_INLINES, and glibc libm is built
with -D__NO_MATH_INLINES, so changing internal calls to use
__feraiseexcept should make no difference.
Tested for x86_64 (testsuite; the only change in disassembly of
installed shared libraries is a slight code reordering in clog10, of
no apparent significance). Also tested for MIPS, where (in the
configuration tested) it eliminates math.h linknamespace failures for
n32 and n64 (some for o32 remain because of other issues).
[BZ #17723]
* include/fenv.h (__feraiseexcept): Use libm_hidden_proto.
* math/fraiseexcpt.c (__feraiseexcept): Use libm_hidden_def.
* sysdeps/aarch64/fpu/fraiseexcpt.c (feraiseexcept): Rename to
__feraiseexcept and define as weak alias of __feraiseexcept. Use
libm_hidden_weak.
* sysdeps/arm/fraiseexcpt.c (feraiseexcept): Likewise.
* sysdeps/hppa/fpu/fraiseexcpt.c (feraiseexcept): Likewise.
* sysdeps/i386/fpu/fraiseexcpt.c (__feraiseexcept): Use
libm_hidden_def.
* sysdeps/ia64/fpu/fraiseexcpt.c (feraiseexcept): Rename to
__feraiseexcept and define as weak alias of __feraiseexcept. Use
libm_hidden_weak.
* sysdeps/m68k/coldfire/fpu/fraiseexcpt.c (feraiseexcept):
Likewise.
* sysdeps/microblaze/math_private.h (__feraiseexcept): New macro.
* sysdeps/mips/fpu/fraiseexcpt.c (feraiseexcept): Rename to
__feraiseexcept and define as weak alias of __feraiseexcept. Use
libm_hidden_weak.
* sysdeps/powerpc/fpu/fraiseexcpt.c (__feraiseexcept): Use
libm_hidden_def.
* sysdeps/powerpc/nofpu/fraiseexcpt.c (__feraiseexcept): Likewise.
* sysdeps/powerpc/powerpc32/e500/nofpu/fraiseexcpt.c
(__feraiseexcept): Likewise.
* sysdeps/s390/fpu/fraiseexcpt.c (feraiseexcept): Rename to
__feraiseexcept and define as weak alias of __feraiseexcept. Use
libm_hidden_weak.
* sysdeps/sh/sh4/fpu/fraiseexcpt.c (feraiseexcept): Likewise.
* sysdeps/sparc/fpu/fraiseexcpt.c (__feraiseexcept): Use
libm_hidden_def.
* sysdeps/tile/math_private.h (__feraiseexcept): New macro.
* sysdeps/unix/sysv/linux/alpha/fraiseexcpt.S (__feraiseexcept):
Use libm_hidden_def.
* sysdeps/x86_64/fpu/fraiseexcpt.c (__feraiseexcept): Use
libm_hidden_def.
(feraiseexcept): Define as weak not strong alias. Use
libm_hidden_weak.
* sysdeps/x86/fpu/bits/fenv.h (__feraiseexcept_invalid_divbyzero):
New inline function. Factored out of ...
(feraiseexcept): ... here. Use __feraiseexcept_invalid_divbyzero.
* sysdeps/x86/fpu/include/bits/fenv.h: New file.
* math/e_scalb.c (invalid_fn): Call __feraiseexcept instead of
feraiseexcept.
* math/w_acos.c (__acos): Likewise.
* math/w_asin.c (__asin): Likewise.
* math/w_ilogb.c (__ilogb): Likewise.
* math/w_j0.c (y0): Likewise.
* math/w_j1.c (y1): Likewise.
* math/w_jn.c (yn): Likewise.
* math/w_log.c (__log): Likewise.
* math/w_log10.c (__log10): Likewise.
* sysdeps/aarch64/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/aarch64/fpu/math_private.h
(libc_feupdateenv_test_aarch64): Likewise.
* sysdeps/alpha/fpu/feupdateenv.c (__feupdateenv): Likewise.
* sysdeps/arm/fenv_private.h (libc_feupdateenv_test_vfp): Likewise.
* sysdeps/arm/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/ia64/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/m68k/fpu/feupdateenv.c (__feupdateenv): Likewise.
* sysdeps/mips/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/powerpc/fpu/e_sqrt.c (__slow_ieee754_sqrt): Likewise.
* sysdeps/s390/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/sh/sh4/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/sparc/fpu/feupdateenv.c (__feupdateenv): Likewise.
These new memcpy functions are the 32-bit version of the x86_64 SSE2
unaligned memcpy. The average memcpy performance benefit is 18% on
Silvermont; other platforms improved by about 35%. Benchmarked on
Silvermont, Haswell, Ivy Bridge, Sandy Bridge and Westmere;
performance results are attached in
https://sourceware.org/ml/libc-alpha/2014-07/msg00157.html
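For illustration, the selection described in the entries below has
roughly this IFUNC shape (everything here is a stand-in: the real
variants are assembly, and the real check uses bit_Fast_Unaligned_Load
from glibc's cpu-features data):

  #include <stddef.h>

  /* Hypothetical variants standing in for the assembly implementations.  */
  static void *
  copy_sse2_unaligned (void *dst, const void *src, size_t n)
  { return __builtin_memcpy (dst, src, n); }

  static void *
  copy_i686 (void *dst, const void *src, size_t n)
  { return __builtin_memcpy (dst, src, n); }

  /* Hypothetical stand-in for the bit_Fast_Unaligned_Load check.  */
  static int
  has_fast_unaligned_load (void)
  { return 1; }

  /* The dynamic linker calls the resolver once and binds my_memcpy to
     the implementation it returns.  */
  static void *(*resolve_my_memcpy (void)) (void *, const void *, size_t)
  {
    return has_fast_unaligned_load () ? copy_sse2_unaligned : copy_i686;
  }

  void *my_memcpy (void *, const void *, size_t)
    __attribute__ ((ifunc ("resolve_my_memcpy")));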
* sysdeps/i386/i686/multiarch/bcopy-sse2-unaligned.S: New file.
* sysdeps/i386/i686/multiarch/memcpy-sse2-unaligned.S: Likewise.
* sysdeps/i386/i686/multiarch/memmove-sse2-unaligned.S: Likewise.
* sysdeps/i386/i686/multiarch/mempcpy-sse2-unaligned.S: Likewise.
* sysdeps/i386/i686/multiarch/bcopy.S: Select the sse2_unaligned
version if bit_Fast_Unaligned_Load is set.
* sysdeps/i386/i686/multiarch/memcpy.S: Likewise.
* sysdeps/i386/i686/multiarch/memcpy_chk.S: Likewise.
* sysdeps/i386/i686/multiarch/memmove.S: Likewise.
* sysdeps/i386/i686/multiarch/memmove_chk.S: Likewise.
* sysdeps/i386/i686/multiarch/mempcpy.S: Likewise.
* sysdeps/i386/i686/multiarch/mempcpy_chk.S: Likewise.
* sysdeps/i386/i686/multiarch/Makefile (sysdep_routines): Add
bcopy-sse2-unaligned, memcpy-sse2-unaligned,
memmove-sse2-unaligned and mempcpy-sse2-unaligned.
* sysdeps/i386/i686/multiarch/ifunc-impl-list.c (MAX_IFUNC): Set
to 4.
(__libc_ifunc_impl_list): Test __bcopy_sse2_unaligned,
__memmove_chk_sse2_unaligned, __memmove_sse2_unaligned,
__memcpy_chk_sse2_unaligned, __memcpy_sse2_unaligned,
__mempcpy_chk_sse2_unaligned, and __mempcpy_sse2_unaligned.
This patch adds support for generating the spec array in getconf from
the conf.list. The generated code is mostly unchanged; the only
changes are due to the change in layout of the spec and val arrays in
the ELF.
The val array can also be auto-generated from posix-conf-vars.list
once the remaining macros are added to it.
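As a hedged illustration of what the generated specs table might look
like under that guard (the NEED_SPEC_ARRAY macro comes from the
entries below; the exact generated layout is an assumption):

  #define NEED_SPEC_ARRAY 1   /* getconf; confstr/sysconf define it to 0.  */

  #if NEED_SPEC_ARRAY
  static const char *const specs[] =
    {
      "XBS5_ILP32_OFF32", "XBS5_ILP32_OFFBIG",
      "XBS5_LP64_OFF64", "XBS5_LPBIG_OFFBIG",
      "POSIX_V6_ILP32_OFF32", "POSIX_V6_ILP32_OFFBIG",
      "POSIX_V6_LP64_OFF64", "POSIX_V6_LPBIG_OFFBIG",
      "POSIX_V7_ILP32_OFF32", "POSIX_V7_ILP32_OFFBIG",
      "POSIX_V7_LP64_OFF64", "POSIX_V7_LPBIG_OFFBIG",
    };
  static const int nspecs = sizeof specs / sizeof specs[0];
  #endif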
* posix/posix-conf-vars.list (SPEC:XBS5): Add sysconf prefix.
* posix/confstr.c: Define NEED_SPEC_ARRAY to 0.
* posix/posix-envs.def: Likewise.
* sysdeps/posix/sysconf.c: Likewise.
* posix/getconf.c: Define NEED_SPEC_ARRAY to 1.
(specs): Remove array.
* scripts/gen-posix-conf-vars.awk: Support generation of specs
array.
This fixes the remaining -Wundef warnings. Tested on x86_64.
* posix/posix-conf-vars.list: Add _POSIX sysconf namespace.
* sysdeps/posix/sysconf.c: Include posix-conf-vars.h.
(__sysconf): Use CONF_IS_* macros.
These avoid having tile generate real calls to the no-op functions,
which would otherwise cause linknamespace test failures. It might make
sense to factor all of these out into a common header that can be
shared by tile, microblaze, etc., but for now just fix the test
failures, as sketched below.
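A rough sketch of the dummy-macro pattern involved (the exact
expansion is illustrative, not the tile/microblaze source):

  /* On a configuration without floating-point exception support, make
     the internal name a no-op macro so C90 libm code never emits a
     real call to the C99 function.  */
  #ifndef __feraiseexcept
  # define __feraiseexcept(excepts) ((void) (excepts), 0)
  #endif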