Commit Graph

H.J. Lu
8d9c92017d [x86_64] Set DL_RUNTIME_UNALIGNED_VEC_SIZE to 8
Due to GCC bug:

   https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58066

__tls_get_addr may be called with only 8-byte stack alignment.  Although
this bug has been fixed in GCC 4.9.4, 5.3 and 6, we can't assume that the
stack will always be aligned to 16 bytes.  Since SSE-optimized
memory/string functions with aligned SSE register load and store are
used in the dynamic linker, we must set DL_RUNTIME_UNALIGNED_VEC_SIZE
to 8 so that _dl_runtime_resolve_sse will align the stack before
calling _dl_fixup:

Dump of assembler code for function _dl_runtime_resolve_sse:
   0x00007ffff7deea90 <+0>:	push   %rbx
   0x00007ffff7deea91 <+1>:	mov    %rsp,%rbx
   0x00007ffff7deea94 <+4>:	and    $0xfffffffffffffff0,%rsp
                                ^^^^^^^^^^^ Align stack to 16 bytes
   0x00007ffff7deea98 <+8>:	sub    $0x100,%rsp
   0x00007ffff7deea9f <+15>:	mov    %rax,0xc0(%rsp)
   0x00007ffff7deeaa7 <+23>:	mov    %rcx,0xc8(%rsp)
   0x00007ffff7deeaaf <+31>:	mov    %rdx,0xd0(%rsp)
   0x00007ffff7deeab7 <+39>:	mov    %rsi,0xd8(%rsp)
   0x00007ffff7deeabf <+47>:	mov    %rdi,0xe0(%rsp)
   0x00007ffff7deeac7 <+55>:	mov    %r8,0xe8(%rsp)
   0x00007ffff7deeacf <+63>:	mov    %r9,0xf0(%rsp)
   0x00007ffff7deead7 <+71>:	movaps %xmm0,(%rsp)
   0x00007ffff7deeadb <+75>:	movaps %xmm1,0x10(%rsp)
   0x00007ffff7deeae0 <+80>:	movaps %xmm2,0x20(%rsp)
   0x00007ffff7deeae5 <+85>:	movaps %xmm3,0x30(%rsp)
   0x00007ffff7deeaea <+90>:	movaps %xmm4,0x40(%rsp)
   0x00007ffff7deeaef <+95>:	movaps %xmm5,0x50(%rsp)
   0x00007ffff7deeaf4 <+100>:	movaps %xmm6,0x60(%rsp)
   0x00007ffff7deeaf9 <+105>:	movaps %xmm7,0x70(%rsp)
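
The following is a minimal C sketch (not glibc code) of the realignment shown
above: movaps needs 16-byte-aligned addresses, so a stack that may be only
8-byte aligned has to be rounded down to a 16-byte boundary before the XMM
registers are saved.

    #include <stdint.h>
    #include <stdio.h>

    /* Round a stack pointer down to a 16-byte boundary, mirroring the
       "and $0xfffffffffffffff0,%rsp" in _dl_runtime_resolve_sse.  */
    static uintptr_t
    align_down_16 (uintptr_t sp)
    {
      return sp & ~(uintptr_t) 0xf;
    }

    int
    main (void)
    {
      uintptr_t sp = 0x7ffff7dee9c8;  /* hypothetical 8-byte-aligned %rsp */
      printf ("%#lx -> %#lx\n", (unsigned long) sp,
              (unsigned long) align_down_16 (sp));
      return 0;
    }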

	[BZ #19679]
	* sysdeps/x86_64/dl-trampoline.S (DL_RUNIME_UNALIGNED_VEC_SIZE):
	Renamed to ...
	(DL_RUNTIME_UNALIGNED_VEC_SIZE): This.  Set to 8.
	(DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
	(DL_RUNTIME_RESOLVE_REALIGN_STACK): This.  Updated.
	(DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
	(DL_RUNTIME_RESOLVE_REALIGN_STACK): This.
	* sysdeps/x86_64/dl-trampoline.h
	(DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
	(DL_RUNTIME_RESOLVE_REALIGN_STACK): This.
2016-02-19 15:45:09 -08:00
Joseph Myers
7b428e744b Fix ldbl-128ibm nextafterl, nexttowardl sign of zero result (bug 19678).
The ldbl-128ibm implementation of nextafterl / nexttowardl returns -0
in FE_DOWNWARD mode when taking the next value below the least
positive subnormal, when it should return +0.  This patch fixes it to
check explicitly for this case.

Tested for powerpc.
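
A hedged test sketch of the expected behavior, using only standard C
interfaces (not the patched internals): stepping down from the least positive
subnormal must yield +0 even in FE_DOWNWARD mode.

    #include <fenv.h>
    #include <float.h>
    #include <math.h>
    #include <stdio.h>

    int
    main (void)
    {
      fesetround (FE_DOWNWARD);
      long double least = LDBL_TRUE_MIN;        /* least positive subnormal (C11) */
      long double next = nextafterl (least, 0.0L);
      /* Expect 0 here: the result must be +0.0, not -0.0.  */
      printf ("signbit = %d\n", signbit (next));
      return 0;
    }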

	[BZ #19678]
	* sysdeps/ieee754/ldbl-128ibm/s_nextafterl.c (__nextafterl):
	Ensure +0.0 is returned when taking the next value below the least
	positive value.
2016-02-19 17:19:53 +00:00
Florian Weimer
59eda029a8 malloc: Remove NO_THREADS
No functional change.  It was not possible to build without
threading support before.
2016-02-19 17:07:45 +01:00
Joseph Myers
c091488e51 Fix ldbl-128ibm powl overflow handling (bug 19674).
The ldbl-128ibm implementation of powl has some problems in the case
of overflow or underflow, which are mainly visible in non-default
rounding modes.

* When overflow or underflow is detected early, the correct sign of an
  overflowing or underflowing result is not allowed for.  This is
  mostly hidden in the default rounding mode by the errno-setting
  wrappers recomputing the result (except in non-default
  error-handling modes such as -lieee), but visible in other rounding
  modes where a result that is not zero or infinity causes the
  wrappers not to do the recomputation.

* The final scaling is done before the sign is incorporated in the
  result, but should be done afterwards for correct overflowing and
  underflowing results in directed rounding modes.

This patch fixes those problems.  Tested for powerpc.
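
A small illustration of the second point, in double and with standard scalbn
rather than the powl internals (an assumption; compile with -frounding-math):
scaling the magnitude and negating afterwards overflows to the wrong value in
FE_UPWARD, while scaling the already-signed value rounds correctly.

    #include <fenv.h>
    #include <float.h>
    #include <math.h>
    #include <stdio.h>

    int
    main (void)
    {
      fesetround (FE_UPWARD);
      /* Negate after scaling: positive overflow gives +inf, negated to -inf.  */
      double wrong = -scalbn (DBL_MAX, 10);
      /* Scale the signed value: negative overflow rounds up to -DBL_MAX.  */
      double right = scalbn (-DBL_MAX, 10);
      printf ("%g vs %g\n", wrong, right);
      return 0;
    }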

	[BZ #19674]
	* sysdeps/ieee754/ldbl-128ibm/e_powl.c (__ieee754_powl): Include
	sign in overflowing and underflowing results when overflow or
	underflow is detected early.  Include sign in result before rather
	than after scaling.
2016-02-19 01:07:40 +00:00
Joseph Myers
9120a57f48 Fix ldbl-128ibm remainderl, remquol equality tests (bug 19603).
The ldbl-128ibm implementations of remainderl and remquol have logic
resulting in incorrect tests for equality of the absolute values of
the arguments.  Equality is tested based on the integer
representations of the high and low parts, with the sign bit masked
off the high part - but when this changes the sign of the high part,
the sign of the low part needs to be changed as well, and failure to
do this means arguments are wrongly treated as equal when they are
not.

This patch fixes the logic to adjust signs of low parts as needed.
Tested for powerpc.
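
A rough sketch of the pitfall, using an explicit high/low pair of doubles
instead of the real ldbl-128ibm integer representations: if the absolute
value is formed by flipping the sign of the high part, the low part's sign
must be flipped too, or the pair no longer represents |x|.

    #include <stdio.h>

    struct ibm128 { double hi, lo; };  /* value is hi + lo */

    static struct ibm128
    ibm128_fabs (struct ibm128 x)
    {
      if (x.hi < 0.0)
        {
          x.hi = -x.hi;
          x.lo = -x.lo;  /* the adjustment the buggy comparison missed */
        }
      return x;
    }

    int
    main (void)
    {
      struct ibm128 x = { -1.0, 0x1p-60 };      /* -1.0 + 2^-60 */
      struct ibm128 a = ibm128_fabs (x);
      printf ("|x| = %g + %g\n", a.hi, a.lo);   /* 1 + -2^-60, i.e. 1 - 2^-60 */
      return 0;
    }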

	[BZ #19603]
	* sysdeps/ieee754/ldbl-128ibm/e_remainderl.c
	(__ieee754_remainderl): Adjust sign of integer version of low part
	when taking absolute value of high part.
	* sysdeps/ieee754/ldbl-128ibm/s_remquol.c (__remquol): Likewise.
	* math/libm-test.inc (remainder_test_data): Add another test.
	(remquo_test_data): Likewise.
2016-02-19 00:55:46 +00:00
Joseph Myers
0fed79a827 Fix ldbl-128ibm fmodl handling of equal arguments with low part zero (bug 19602).
The ldbl-128ibm implementation of fmodl has logic to detect when the
first argument has absolute value less than or equal to the second.
This logic is only correct for nonzero low parts; if the high parts
are equal and the low parts are zero, then the signs of the low parts
(which have no semantic effect on the value of the long double number)
can result in equal values being wrongly treated as unequal, and an
incorrect result being returned from fmodl.  This patch fixes this by
checking for the case of zero low parts.

Although this does show up in tests from libm-test.inc (both tests of
fmodl, and, indirectly, of remainderl / dreml), the dependence on
non-semantic zero low parts means that test shouldn't be expected to
reproduce it reliably; thus, this patch adds a standalone test that
sets up affected values using unions.

Tested for powerpc.
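
A hedged sketch of the affected case, assuming an ldbl-128ibm target where
long double is a pair of doubles (this is essentially what the new standalone
test does with unions): equal high parts with +0.0 and -0.0 low parts denote
the same value, so fmodl must return 0.

    #include <math.h>
    #include <stdio.h>

    /* Assumes IBM double-double: sizeof (long double) == 2 * sizeof (double).  */
    union ldbl { long double ld; double d[2]; };

    int
    main (void)
    {
      union ldbl x = { .d = { 3.0, 0.0 } };
      union ldbl y = { .d = { 3.0, -0.0 } };
      printf ("fmodl = %Lg\n", fmodl (x.ld, y.ld));  /* expected: 0 */
      return 0;
    }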

	[BZ #19602]
	* sysdeps/ieee754/ldbl-128ibm/e_fmodl.c (__ieee754_fmodl): Handle
	equal high parts and both low parts zero specially.
	* sysdeps/ieee754/ldbl-128ibm/test-fmodl-ldbl-128ibm.c: New test.
	* sysdeps/ieee754/ldbl-128ibm/Makefile [$(subdir) = math] (tests):
	Add test-fmodl-ldbl-128ibm.
2016-02-18 22:54:07 +00:00
Joseph Myers
e2c631384a Fix ldbl-128ibm fmodl handling of subnormal results (bug 19595).
The ldbl-128ibm implementation of fmodl has completely bogus logic for
subnormal results (in this context, that means results for which the
result is in the subnormal range for double, not results with absolute
value below LDBL_MIN), based on code used for ldbl-128 that is correct
in that case but incorrect in the ldbl-128ibm use.  This patch fixes
it to convert the mantissa into the correct form expected by
ldbl_insert_mantissa, removing the other cases of the code that were
incorrect and in one case unreachable for ldbl-128ibm.  A correct
exponent value is then passed to ldbl_insert_mantissa to reflect the
shifted result.

Tested for powerpc.

	[BZ #19595]
	* sysdeps/ieee754/ldbl-128ibm/e_fmodl.c (__ieee754_fmodl): Use
	common logic for all cases of shifting subnormal results.  Do not
	insert sign bit in shifted mantissa.  Always pass -1023 as biased
	exponent to ldbl_insert_mantissa in subnormal case.
2016-02-18 22:42:06 +00:00
Joseph Myers
b9a76339be Fix ldbl-128ibm roundl for non-default rounding modes (bug 19594).
The ldbl-128ibm implementation of roundl is only correct in
round-to-nearest mode (in other modes, there are incorrect results and
overflow exceptions in some cases).  This patch reimplements it along
the lines used for floorl, ceill and truncl, using __round on the high
part, and on the low part if the high part is an integer, and then
adjusting in the cases where this is incorrect.

Tested for powerpc.

	[BZ #19594]
	* sysdeps/ieee754/ldbl-128ibm/s_roundl.c (__roundl): Use __round
	on high and low parts then adjust result and use
	ldbl_canonicalize_int if needed.
2016-02-18 22:24:32 +00:00
Joseph Myers
e2310a27be Fix ldbl-128ibm truncl for non-default rounding modes (bug 19593).
The ldbl-128ibm implementation of truncl is only correct in
round-to-nearest mode (in other modes, there are incorrect results and
overflow exceptions in some cases).  It is also unnecessarily
complicated, rounding both high and low parts to the nearest integer
and then adjusting for the semantics of trunc, when it seems more
natural to take the truncation of the high part (__trunc optimized
inline versions can be used), and the floor or ceiling of the low part
(depending on the sign of the high part) if the high part is an
integer, as was done for floorl and ceill.  This patch makes it use
that simpler approach.

Tested for powerpc.

	[BZ #19593]
	* sysdeps/ieee754/ldbl-128ibm/s_truncl.c (__truncl): Use __trunc
	on high part and __floor or __ceil on low part then use
	ldbl_canonicalize_int if needed.
2016-02-18 21:52:07 +00:00
Joseph Myers
8a9fa0086d Fix ldbl-128ibm ceill for non-default rounding modes (bug 19592).
The ldbl-128ibm implementation of ceill is only correct in
round-to-nearest mode (in other modes, there are incorrect results and
overflow exceptions in some cases).  It is also unnecessarily
complicated, rounding both high and low parts to the nearest integer
and then adjusting for the semantics of ceil, when it seems more
natural to take the ceiling of the high part (__ceil optimized inline
versions can be used), and that of the low part if the high part is an
integer, as was done for floorl.  This patch makes it use that simpler
approach.

Tested for powerpc.

	[BZ #19592]
	* sysdeps/ieee754/ldbl-128ibm/s_ceill.c (__ceill): Use __ceil on
	high and low parts then use ldbl_canonicalize_int if needed.
2016-02-18 21:40:39 +00:00
Joseph Myers
1833769e19 Fix ldbl-128ibm floorl for non-default rounding modes (bug 17899).
The ldbl-128ibm implementation of floorl is only correct in
round-to-nearest mode (in other modes, there are incorrect results and
overflow exceptions in some cases going beyond the incorrect signs of
zero results noted in bug 17899).  It is also unnecessarily
complicated, rounding both high and low parts to the nearest integer
and then adjusting for the semantics of floor, when it seems more
natural to take the floor of the high part (__floor optimized inline
versions can be used), and that of the low part if the high part is an
integer.  This patch makes it use that simpler approach, with a
canonicalization that works in all rounding modes (given that the only
way the result can be noncanonical is if taking the floor of a
negative noninteger low part increased its exponent).

Tested for powerpc, where over a thousand failures are removed from
test-ldouble.out (floorl problems affect many powl tests).
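
A sketch of the approach in plain C on an explicit high/low pair (the real
code works on the packed long double and uses __floor and the new
ldbl_canonicalize_int):

    #include <math.h>
    #include <stdio.h>

    struct ibm128 { double hi, lo; };  /* value is hi + lo, lo much smaller */

    static struct ibm128
    ibm128_floor (struct ibm128 x)
    {
      double hi = floor (x.hi);
      if (hi != x.hi)
        {
          /* High part not an integer: the low part cannot change the result.  */
          x.hi = hi;
          x.lo = 0.0;
        }
      else
        {
          /* High part already integral: the fraction lives in the low part.
             A real implementation then renormalizes (ldbl_canonicalize_int)
             in case flooring a negative low part increased its exponent.  */
          x.lo = floor (x.lo);
        }
      return x;
    }

    int
    main (void)
    {
      struct ibm128 x = { 2.0, -0x1p-60 };  /* a value just below 2 */
      struct ibm128 r = ibm128_floor (x);
      printf ("%g + %g\n", r.hi, r.lo);     /* 2 + -1, i.e. 1 before canonicalization */
      return 0;
    }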

	[BZ #17899]
	* sysdeps/ieee754/ldbl-128ibm/math_ldbl.h (ldbl_canonicalize_int):
	New function.
	* sysdeps/ieee754/ldbl-128ibm/s_floorl.c (__floorl): Use __floor
	on high and low parts then use ldbl_canonicalize_int if needed.
2016-02-18 21:31:10 +00:00
H.J. Lu
16396c41de Add _STRING_INLINE_unaligned and string_private.h
As discussed in

https://sourceware.org/ml/libc-alpha/2015-10/msg00403.html

the setting of _STRING_ARCH_unaligned currently controls the external
GLIBC ABI as well as selecting the use of unaligned accesses within
GLIBC.

Since _STRING_ARCH_unaligned was recently changed for AArch64, this
would potentially break the ABI in GLIBC 2.23, so split the uses and add
_STRING_INLINE_unaligned to select the string ABI. This setting must be
fixed for each target, while _STRING_ARCH_unaligned may be changed from
release to release.  _STRING_ARCH_unaligned is used unconditionally in
glibc.  But <bits/string.h>, which defines _STRING_ARCH_unaligned, isn't
included with -Os.  Since _STRING_ARCH_unaligned is internal to glibc and
may change between glibc releases, it should be made private to glibc.
_STRING_ARCH_unaligned should be defined in the new string_private.h
header file, which is included unconditionally from the internal
<string.h> for the glibc build.
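
A plausible shape for the generic header, shown as a sketch rather than the
exact file added here: the internal knob lives in a private header pulled in
by the internal <string.h>, while the installed bits/string.h only exposes
_STRING_INLINE_unaligned.

    /* sysdeps/generic/string_private.h (sketch).  */

    /* Define to 1 if unaligned memory accesses should be used inside glibc.
       Internal only; may change between glibc releases.  */
    #define _STRING_ARCH_unaligned 0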

	[BZ #19462]
	* bits/string.h (_STRING_ARCH_unaligned): Renamed to ...
	(_STRING_INLINE_unaligned): This.
	* include/string.h: Include <string_private.h>.
	* string/bits/string2.h: Replace _STRING_ARCH_unaligned with
	_STRING_INLINE_unaligned.
	* sysdeps/aarch64/bits/string.h (_STRING_ARCH_unaligned): Removed.
	(_STRING_INLINE_unaligned): New.
	* sysdeps/aarch64/string_private.h: New file.
	* sysdeps/generic/string_private.h: Likewise.
	* sysdeps/m68k/m680x0/m68020/string_private.h: Likewise.
	* sysdeps/s390/string_private.h: Likewise.
	* sysdeps/x86/string_private.h: Likewise.
	* sysdeps/m68k/m680x0/m68020/bits/string.h
	(_STRING_ARCH_unaligned): Renamed to ...
	(_STRING_INLINE_unaligned): This.
	* sysdeps/s390/bits/string.h (_STRING_ARCH_unaligned): Renamed
	to ...
	(_STRING_INLINE_unaligned): This.
	* sysdeps/sparc/bits/string.h (_STRING_ARCH_unaligned): Renamed
	to ...
	(_STRING_INLINE_unaligned): This.
	* sysdeps/x86/bits/string.h (_STRING_ARCH_unaligned): Renamed
	to ...
	(_STRING_INLINE_unaligned): This.
2016-02-18 14:55:29 -02:00
Andrew Senkevich
a5df3210a6 Use PIC relocation in ALIAS_IMPL
Since libmvec_nonshared.a may be linked into shared objects, ALIAS_IMPL
should use PIC relocation.

	[BZ #19590]
	* sysdeps/x86_64/fpu/svml_finite_alias.S (ALIAS_IMPL): Use PIC
	relocation.
2016-02-17 14:23:32 -08:00
Rajalakshmi Srinivasaraghavan
ebf1264f61 powerpc: Regenerate libm-test-ulps 2016-02-04 16:40:54 -02:00
Joseph Myers
5163b4b76f Fix MIPS mmap negative offset handling for consistency (bug 19550).
The handling of negative offsets in MIPS mmap is inconsistent with
other architectures, as shown by failure of the test
posix/tst-mmap-offset for o32 and n32.  The MIPS mmap syscall uses a
signed argument and does a signed arithmetic shift on it, whereas the
glibc semantics expected by that test are for the offset to be
considered as a large positive offset.  This patch makes MIPS
consistent with other architectures as far as possible by using the
mmap2 syscall on o32 (#including the generic implementation), and
making mmap not an alias for mmap64 for n32, with a custom
implementation for n32 that zero-extends the offset argument to 64-bit
before calling the mmap syscall.

Tested for MIPS64 (o32, n32, n64).
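
A self-contained illustration of the difference between sign- and
zero-extending a 32-bit offset; the n32 fix zero-extends before making the
64-bit mmap syscall.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main (void)
    {
      int32_t off = -0x1000;                    /* bit pattern 0xfffff000 */
      int64_t sign_ext = off;                   /* 0xfffffffffffff000: negative */
      uint64_t zero_ext = (uint32_t) off;       /* 0x00000000fffff000: large positive */
      printf ("sign: %" PRIx64 "  zero: %" PRIx64 "\n",
              (uint64_t) sign_ext, zero_ext);
      return 0;
    }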

	[BZ #19550]
	* sysdeps/unix/sysv/linux/mips/mips32/mmap.c: New file.
	* sysdeps/unix/sysv/linux/mips/mips64/mmap64.c: Move to ....
	* sysdeps/unix/sysv/linux/mips/mips64/n64/mmap64.c: ... here.
	* sysdeps/unix/sysv/linux/mips/mips64/n32/mmap.c: New file.
	* sysdeps/unix/sysv/linux/mips/mips64/n32/syscalls.list (mmap64):
	New syscall entry.
	* sysdeps/unix/sysv/linux/mips/mips64/n64/syscalls.list (mmap):
	New syscall entry.
	* sysdeps/unix/sysv/linux/mips/mips64/syscalls.list (mmap): Remove
	syscall entry.
2016-02-01 18:20:21 +00:00
Steve Ellcey
8a71d2e27f Fix MIPS64 memcpy regression.
The MIPS memcpy optimizations at
<https://sourceware.org/ml/libc-alpha/2015-10/msg00597.html>
introduced a bug causing many string function tests to fail with
segfaults for n32 and n64:

FAIL: string/stratcliff
FAIL: string/test-bcopy
FAIL: string/test-memccpy
FAIL: string/test-memcmp
FAIL: string/test-memcpy
FAIL: string/test-memmove
FAIL: string/test-mempcpy
FAIL: string/test-stpncpy
FAIL: string/test-strncmp
FAIL: string/test-strncpy

(Some failures in other directories could also be caused by this bug.)

The problem is that after the check for whether a word of input is
left that can be copied as a word before moving to byte copies, a load
can occur in the branch delay slot, resulting in a segfault if we are
at the end of a page and the following page is unmapped.  I don't see
how this would have passed the tests as reported in the original patch
posting (different kernel configurations affecting the code setting up
unmapped pages, maybe?), since the tests in question don't appear to
have changed recently.

This patch moves a later instruction into the delay slot, as suggested
at <https://sourceware.org/ml/libc-alpha/2016-01/msg00584.html>.

Tested for n32 and n64.

2016-01-28  Steve Ellcey  <sellcey@imgtec.com>
            Joseph Myers  <joseph@codesourcery.com>

	* sysdeps/mips/memcpy.S (MEMCPY_NAME) [USE_DOUBLE]: Avoid word
	load in branch delay slot when less than a word of input left.
2016-01-28 01:52:05 +00:00
Andreas Schwab
4fb66fac3a Remove unused variables
They are flagged by -Wunused-const-variable.
2016-01-27 09:30:16 +01:00
David S. Miller
6ef1cb957e Update localplt.data for 32-bit sparc.
* sysdeps/unix/sysv/linux/sparc/sparc32/localplt.data: Add _Q_cmp.
2016-01-26 16:16:38 -08:00
David S. Miller
82e5836613 Define __sqrtl_finite on sparc 32-bit with correct symbol version.
* sysdeps/sparc/sparc32/Versions (GLIBC_2.23): Add entry for __sqrtl_finite.
	* sysdeps/sparc/sparc32/fpu/e_sqrtl.c (__sqrtl_finite): Define instead using
	versioned_symbol.
	* sysdeps/unix/sysv/linux/sparc/sparc32/libm.abilist: Fix ordering of entries.
2016-01-25 16:07:15 -08:00
David S. Miller
7a18c2a0c1 Adjust sparc 32-bit __sqrtl_finite version tag.
* sysdeps/unix/sysv/linux/sparc/sparc32/libm.abilist: Move
	__sqrtl_finite to GLIBC_2.23
2016-01-25 10:42:17 -08:00
Richard Henderson
89cfb554b8 Update Alpha libm-test-ulps 2016-01-25 10:43:41 -08:00
Paul E. Murphy
9200e581e5 Cleanup ppc bits/ipc.h
Ages ago (commit e9dcb08) the ipc syscalls were inlined, eventually
abstracting away any need for direct __ipc calls.
2016-01-25 10:35:21 -02:00
David S. Miller
c34ae92056 Fix missing __sqrtl_finite symbol in libm on sparc 32-bit.
* sysdeps/sparc/sparc32/fpu/e_sqrtl.c: New file.
	* sysdeps/sparc/sparc32/soft-fp/q_sqrt.c (__ieee754_sqrtl): Remove alias.
	* sysdeps/unix/sysv/linux/sparc/sparc32/libm.abilist: Add __sqrtl_finite.
2016-01-24 21:14:12 -08:00
David S. Miller
a9d460a977 Update sparc ULPS.
* sysdeps/sparc/fpu/libm-test-ulps: Update.
2016-01-24 21:12:58 -08:00
Maciej W. Rozycki
d5f2798a0a MIPS: Set the required Linux kernel version to 4.5.0 for 2008 NaN
Complement the addition of the required kernel support, present upstream
as from commit 2b5e869ecfcb3112f7e1267cb0328f3ff6d49b18 ("MIPS: ELF:
Interpret the NAN2008 file header flag") and released with Linux 4.5-rc1
on Jan 24th, 2016.

	* sysdeps/unix/sysv/linux/mips/configure.ac: Set
	`arch_minimum_kernel' to 4.5.0 if 2008 NaN encoding is used.
	* sysdeps/unix/sysv/linux/mips/configure: Regenerate.
2016-01-25 00:19:27 +00:00
Paul E. Murphy
af8ea0f449 powerpc: Fix macro usage of htm builtins
Some extraneous semicolons were included in a recent patch,
causing a build failure with newer compilers.
2016-01-22 14:13:08 -02:00
Chung-Lin Tang
fba91f1232 Maintenance patch for nios2: update ULPS file and localplt.data changes. 2016-01-21 22:58:03 -08:00
Roland McGrath
a3140836c8 NaCl: Fix unused variable errors in lowlevellock-futex.h macros. 2016-01-20 13:57:14 -08:00
Paul Pluzhnikov
b274130206 2016-01-20 Paul Pluzhnikov <ppluzhnikov@google.com>
[BZ #19490]
* sysdeps/unix/sysv/linux/x86_64/pthread_cond_broadcast.S (pthread_cond_broadcast): Use ENTRY/END
* sysdeps/unix/sysv/linux/x86_64/pthread_cond_signal.S (pthread_cond_signal): Likewise
* sysdeps/x86_64/nptl/pthread_spin_lock.S (pthread_spin_lock): Likewise
* sysdeps/x86_64/nptl/pthread_spin_trylock.S (pthread_spin_trylock): Likewise
* sysdeps/x86_64/nptl/pthread_spin_unlock.S (pthread_spin_unlock): Likewise
2016-01-20 13:39:20 -08:00
Joseph Myers
dcb133b7a4 Fix __finitel libm compat symbol version.
The changes to restrict implementation-namespace symbol aliases such
as __finitel to compat symbols used code for __finitel in libm
analogous to that for __finitel in libc.  However, the versions for
the two symbols are actually different, GLIBC_2.0 in libc and
GLIBC_2.1 in libm.  This patch fixes the handling of the libm compat
symbol.

Tested for mips (o32), where it fixes an ABI test failure.

	* sysdeps/ieee754/dbl-64/s_finite.c
	[NO_LONG_DOUBLE && LDBL_CLASSIFY_COMPAT] (__finitel): Define
	compat symbol at version GLIBC_2_1 and use GLIBC_2_1 in
	SHLIB_COMPAT condition for libm, not GLIBC_2_0.
	* sysdeps/ieee754/dbl-64/wordsize-64/s_finite.c
	[NO_LONG_DOUBLE && LDBL_CLASSIFY_COMPAT] (__finitel): Likewise.
2016-01-20 19:04:43 +00:00
Joseph Myers
00b85374a9 Update localplt.data for powerpc-nofpu.
Testing for powerpc-nofpu showed that localplt.data was out of date.
Two new soft-fp functions showed up in the list: __gtsf2 and
__unordsf2; this patch adds these as optional.  __signbit and
__signbitl no longer appear as local PLT entries; given the move to
__builtin_signbit* for all GCC versions supported for building glibc
(and given the use of the type-generic signbit macro within glibc),
those can safely be removed from the list, which this patch does.

Tested for powerpc-nofpu.

	* sysdeps/unix/sysv/linux/powerpc/powerpc32/nofpu/localplt.data
	(__gtsf2): Add as optional for libc.so.
	(__unordsf2): Likewise.
	(__signbit): Remove for libc.so.
	(__signbitl): Likewise.
2016-01-20 18:19:10 +00:00
Joseph Myers
2e3d0de31f Fix ulps regeneration for *-finite tests.
On running tests after from-scratch ulps regeneration, I found that
some libm tests failed with ulps in excess of those recorded in the
from-scratch regeneration, which should never happen unless those ulps
exceed the limit on ulps that can go in libm-test-ulps files.

Failure: Test: atan2_upward (inf, -inf)
Result:
 is:          2.35619498e+00   0x1.2d97ccp+1
 should be:   2.35619450e+00   0x1.2d97c8p+1
 difference:  4.76837159e-07   0x1.000000p-21
 ulp       :  2.0000
 max.ulp   :  1.0000
Maximal error of `atan2_upward'
 is      : 2 ulp
 accepted: 1 ulp
Failure: Test: carg_upward (-inf + inf i)
Result:
 is:          2.35619498e+00   0x1.2d97ccp+1
 should be:   2.35619450e+00   0x1.2d97c8p+1
 difference:  4.76837159e-07   0x1.000000p-21
 ulp       :  2.0000
 max.ulp   :  1.0000
Maximal error of `carg_upward'
 is      : 2 ulp
 accepted: 1 ulp

The problem comes from the addition of tests for the finite-math-only
versions of libm functions.  Those tests share ulps with the default
function variants.  make regen-ulps runs the default tests before the
finite-math-only tests, concatenating the resulting ulps before
feeding them to gen-libm-test.pl to generate a new libm-test-ulps
file.  But gen-libm-test.pl always takes the last ulps value given for
any (function, type) pair.  So, if the largest ulps for a function
come from non-finite inputs, a from-scratch regeneration loses those
ulps.

This patch fixes gen-libm-test.pl, in the case where there are
multiple ulps values for a (function, type) pair - which can only
happen as part of a regeneration - to take the largest ulps value
rather than the last one.

Tested for ARM / MIPS / powerpc-nofpu.
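
In effect the fix is a max-merge instead of a last-wins overwrite when the
same (function, type) key is parsed more than once; a trivial C rendering of
the idea (the actual change is in the Perl script):

    #include <stdio.h>

    /* Merge a newly parsed ulps value for a key: keep the maximum rather
       than overwriting with the most recently parsed value.  */
    static double
    merge_ulps (double recorded, double parsed)
    {
      return parsed > recorded ? parsed : recorded;
    }

    int
    main (void)
    {
      double ulps = 0.0;
      ulps = merge_ulps (ulps, 2.0);    /* from the default tests */
      ulps = merge_ulps (ulps, 1.0);    /* from the -finite tests, parsed later */
      printf ("%g\n", ulps);            /* 2, not 1 */
      return 0;
    }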

	* math/gen-libm-test.pl (parse_ulps): Do not reduce
	already-recorded ulps.
	* sysdeps/arm/libm-test-ulps: Regenerated.
	* sysdeps/mips/mips32/libm-test-ulps: Likewise.
	* sysdeps/mips/mips64/libm-test-ulps: Likewise.
	* sysdeps/powerpc/nofpu/libm-test-ulps: Likewise.
2016-01-19 21:42:58 +00:00
Andrew Senkevich
df782dc690 Fixed build with assembler w/o AVX-512 support.
* sysdeps/x86_64/multiarch/ifunc-impl-list.c: Fixed build with
    assembler not supporting AVX-512.
2016-01-19 14:34:53 +03:00
Stefan Liebler
415031f734 S390: Regenerate ULPs
I've regenerated ulps from scratch for s390/s390x.
All math testcases are passing afterwards.

ChangeLog:

	* sysdeps/s390/fpu/libm-test-ulps: Regenerated.
2016-01-19 10:02:44 +01:00
Joseph Myers
204a038e57 Regenerate MIPS libm-test-ulps.
* sysdeps/mips/mips32/libm-test-ulps: Regenerated.
	* sysdeps/mips/mips64/libm-test-ulps: Likewise.
2016-01-18 23:32:40 +00:00
Joseph Myers
844c75aa06 Regenerate powerpc-nofpu libm-test-ulps.
* sysdeps/powerpc/nofpu/libm-test-ulps: Regenerated.
2016-01-18 23:02:03 +00:00
Joseph Myers
a99236df89 Regenerate ARM libm-test-ulps.
* sysdeps/arm/libm-test-ulps: Regenerated.
2016-01-18 22:55:47 +00:00
Stefan Liebler
c4d17461e0 S/390: Do not raise inexact exception in lrint/lround. [BZ #19486]
I get some math test-failures on s390 for float/double/ldouble for
various lrint/lround functions like:
lrint (0x1p64): Exception "Inexact" set
lrint (-0x1p64): Exception "Inexact" set
lround (0x1p64): Exception "Inexact" set
lround (-0x1p64): Exception "Inexact" set
...

GCC emits "convert to fixed" instructions for casting floating point
values to integer values. These instructions raise invalid and inexact
exceptions if the floating point value exceeds the integer type ranges.

This patch enables the various FIX_DBL_LONG_CONVERT_OVERFLOW macros in
order to avoid a cast from floating point to integer type and raise the
invalid exception with feraiseexcept.
The ldbl-128 rint/round functions are now using the same logic.
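
A hedged sketch of the pattern in standard C99 (not the glibc-internal FIX_*
macros), assuming a 64-bit long int: check the range first and raise
FE_INVALID explicitly, so an out-of-range value never reaches a conversion
that would also raise a spurious inexact exception.

    #include <fenv.h>
    #include <limits.h>
    #include <math.h>

    static long int
    lrint_sketch (double x)
    {
      double rounded = rint (x);        /* rounds per the current mode */
      if (isnan (rounded) || rounded < -0x1p63 || rounded >= 0x1p63)
        {
          feraiseexcept (FE_INVALID);   /* out of range: invalid only, no inexact */
          return rounded < 0 ? LONG_MIN : LONG_MAX;
        }
      return (long int) rounded;        /* in range: conversion is exact */
    }

    int
    main (void)
    {
      feclearexcept (FE_ALL_EXCEPT);
      lrint_sketch (0x1p64);            /* out of range for a 64-bit long */
      return fetestexcept (FE_INEXACT) != 0;    /* expect 0: no inexact raised */
    }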

ChangeLog:

	[BZ #19486]
	* sysdeps/s390/fix-fp-int-convert-overflow.h: New File.
	* sysdeps/generic/fix-fp-int-convert-overflow.h
	(FIX_LDBL_LONG_CONVERT_OVERFLOW,
	FIX_LDBL_LLONG_CONVERT_OVERFLOW): New define.
	* sysdeps/arm/fix-fp-int-convert-overflow.h: Likewise.
	* sysdeps/mips/mips32/fpu/fix-fp-int-convert-overflow.h:
	Likewise.
	* sysdeps/ieee754/ldbl-128/s_lrintl.c (__lrintl):
	Avoid conversions to long int where inexact exceptions
	could be raised.
	* sysdeps/ieee754/ldbl-128/s_lroundl.c (__lroundl):
	Likewise.
	* sysdeps/ieee754/ldbl-128/s_llrintl.c (__llrintl):
	Avoid conversions to long long int where inexact exceptions
	could be raised.
	* sysdeps/ieee754/ldbl-128/s_llroundl.c (__llroundl):
	Likewise.
2016-01-18 12:48:06 +01:00
Andrew Senkevich
214a44f394 Fixed typos in __memcpy_chk.
* sysdeps/x86_64/multiarch/memcpy_chk.S: Fixed typos.
2016-01-16 14:42:26 +03:00
Mike Frysinger
3f2c97261b sparc: mman.h: fix bad comment insertion
The MCL_ONFAULT define was inserted into the middle of a comment which
breaks the build.
2016-01-16 02:34:15 -05:00
Andrew Senkevich
72276d6e88 Added memcpy/memmove family optimized with AVX512 for KNL hardware.
Added AVX512 implementations of memcpy, mempcpy, memmove, memcpy_chk,
mempcpy_chk, memmove_chk.
It shows an average improvement of more than 30% over the AVX versions on
KNL hardware (performance results are in the thread
<https://sourceware.org/ml/libc-alpha/2016-01/msg00258.html>).

    * sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Added new files.
    * sysdeps/x86_64/multiarch/ifunc-impl-list.c: Added new tests.
    * sysdeps/x86_64/multiarch/memcpy-avx512-no-vzeroupper.S: New file.
    * sysdeps/x86_64/multiarch/mempcpy-avx512-no-vzeroupper.S: Likewise.
    * sysdeps/x86_64/multiarch/memmove-avx512-no-vzeroupper.S: Likewise.
    * sysdeps/x86_64/multiarch/memcpy.S: Added new IFUNC branch.
    * sysdeps/x86_64/multiarch/memcpy_chk.S: Likewise.
    * sysdeps/x86_64/multiarch/memmove.c: Likewise.
    * sysdeps/x86_64/multiarch/memmove_chk.c: Likewise.
    * sysdeps/x86_64/multiarch/mempcpy.S: Likewise.
    * sysdeps/x86_64/multiarch/mempcpy_chk.S: Likewise.
2016-01-16 00:49:45 +03:00
Torvald Riegel
b02840bacd New pthread_barrier algorithm to fulfill barrier destruction requirements.
The previous barrier implementation did not fulfill the POSIX requirements
for when a barrier can be destroyed.  Specifically, it was possible that
threads that haven't noticed yet that their round is complete still access
the barrier's memory, and that those accesses can happen after the barrier
has been legally destroyed.
The new algorithm does not have this issue, and it avoids using a lock
internally.
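
A usage sketch of the requirement being fulfilled: the thread that receives
PTHREAD_BARRIER_SERIAL_THREAD may destroy (and here free) the barrier as soon
as its own wait returns, even though the other released waiters may not have
been rescheduled yet.

    #include <pthread.h>
    #include <stdlib.h>

    #define NTHREADS 4

    static void *
    worker (void *arg)
    {
      pthread_barrier_t *b = arg;
      if (pthread_barrier_wait (b) == PTHREAD_BARRIER_SERIAL_THREAD)
        {
          /* Legal once the round is complete; the implementation must not
             touch the barrier's memory on behalf of the other waiters after
             this point.  */
          pthread_barrier_destroy (b);
          free (b);
        }
      return NULL;
    }

    int
    main (void)
    {
      pthread_t t[NTHREADS];
      pthread_barrier_t *b = malloc (sizeof *b);
      pthread_barrier_init (b, NULL, NTHREADS);
      for (int i = 0; i < NTHREADS; i++)
        pthread_create (&t[i], NULL, worker, b);
      for (int i = 0; i < NTHREADS; i++)
        pthread_join (t[i], NULL);
      return 0;
    }
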
2016-01-15 21:20:34 +01:00
Martin Sebor
ad37480c4b Fix build errors with -DNDEBUG.
[BZ #18755]
        * iconv/skeleton.c (FUNCTION_NAME): Suppress -Wunused-but-set-variable
        warnings.
        * sysdeps/nptl/gai_misc.h (__gai_start_notify_thread): Same.
        (__gai_create_helper_thread): Same.
        * nscd/nscd.c (do_exit): Suppress -Wunused-variable.
        * iconvdata/iso-2022-cn-ext.c (BODY): Initialize local variable
        to suppress -Wmaybe-uninitialized warnings.
2016-01-15 10:44:07 -07:00
H.J. Lu
09245377da Call math_opt_barrier inside if
Since a floating-point operation may trigger floating-point exceptions,
we call math_opt_barrier inside the if statement to prevent code motion.
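
A hedged sketch of the pattern; math_opt_barrier is a glibc-internal macro,
so an illustrative stand-in in the usual optimization-barrier style is used
here.  Keeping the barrier, and the floating-point add that depends on it,
inside the branch stops the compiler from hoisting the operation and raising
its exceptions unconditionally.

    /* Illustrative stand-in for glibc's math_opt_barrier (GCC extensions).  */
    #define opt_barrier(x) \
      ({ __typeof (x) __x = (x); __asm__ ("" : "+m" (__x)); __x; })

    static double
    fma_tail (double result, double adjust, int need_adjust)
    {
      if (need_adjust)
        /* The barrier sits inside the if, so the add (and any exception it
           raises) happens only when this branch is actually taken.  */
        result = opt_barrier (result) + adjust;
      return result;
    }

    int
    main (void)
    {
      return fma_tail (1.0, 0x1p-60, 0) != 1.0;
    }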

	[BZ #19465]
	* sysdeps/ieee754/dbl-64/s_fma.c (__fma): Call math_opt_barrier
	inside if.
	* sysdeps/ieee754/ldbl-128/s_fmal.c (__fmal): Likewise.
	* sysdeps/ieee754/ldbl-96/s_fma.c (__fma): Likewise.
	* sysdeps/ieee754/ldbl-96/s_fmal.c (__fmal): Likewise.
2016-01-15 05:23:20 -08:00
Amit Pawar
d7890e6947 Set index_Fast_Unaligned_Load for Excavator family CPUs
GLIBC benchtest testcases shows SSE2_Unaligned based implementations
are performing faster compare to SSE2 based implementations for
routines: strcmp, strcat, strncat, stpcpy, stpncpy, strcpy, strncpy
and strstr. Flag index_Fast_Unaligned_Load is set for Excavator family
0x15h CPU's. This makes SSE2_Unaligned based implementations as
default for these routines.

	[BZ #19467]
	* sysdeps/x86/cpu-features.c (init_cpu_features): Set
	index_Fast_Unaligned_Load flag for Excavator family CPUs.
2016-01-14 08:14:31 -08:00
Marcin Kościelnicki
a4b5177ca8 Add __private_ss to s390 struct tcbhead.
Preparation for gcc -fsplit-stack support (gcc bug #68191).  The new
field is basically identical to the one on x86.  Its TCB offset needs
to be constant, as it'll be hardcoded in gcc.

ChangeLog:

	* sysdeps/s390/nptl/tls.h (struct tcbhead_t): Add __private_ss field.
2016-01-14 16:48:55 +01:00
Joseph Myers
fb53a27c57 Add new header definitions from Linux 4.4 (plus older ptrace definitions).
This patch adds some new header definitions from Linux 4.4:

* MCL_ONFAULT is added to bits/mman.h / bits/mman-linux.h (this was
  already done for hppa).

* PTRACE_SECCOMP_GET_FILTER is added to sys/ptrace.h.  Along with it,
  the older PTRACE_GETSIGMASK and PTRACE_SETSIGMASK, added in Linux
  3.11 but missed at the time, are also added.

Tested for x86_64 and x86 (testsuite, and that installed stripped
shared libraries are unchanged by the patch).
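
A hedged usage example for one of the new constants; it needs a Linux 4.4 or
newer kernel at run time and headers that define MCL_ONFAULT.

    #include <stdio.h>
    #include <sys/mman.h>

    int
    main (void)
    {
    #ifdef MCL_ONFAULT
      /* Lock current and future mappings, but only as pages are faulted in.  */
      if (mlockall (MCL_CURRENT | MCL_FUTURE | MCL_ONFAULT) != 0)
        perror ("mlockall");
    #else
      puts ("MCL_ONFAULT not defined by these headers");
    #endif
      return 0;
    }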

	* bits/mman-linux.h [!MCL_CURRENT] (MCL_ONFAULT): New macro.
	* sysdeps/unix/sysv/linux/alpha/bits/mman.h (MCL_ONFAULT):
	Likewise.
	* sysdeps/unix/sysv/linux/powerpc/bits/mman.h (MCL_ONFAULT):
	Likewise.
	* sysdeps/unix/sysv/linux/sparc/bits/mman.h (MCL_ONFAULT):
	Likewise.
	* sysdeps/unix/sysv/linux/sys/ptrace.h (PTRACE_GETSIGMASK): New
	enum constant and macro.
	(PTRACE_SETSIGMASK): Likewise.
	(PTRACE_SECCOMP_GET_FILTER): Likewise.
	* sysdeps/unix/sysv/linux/aarch64/sys/ptrace.h
	(PTRACE_GETSIGMASK): Likewise.
	(PTRACE_SETSIGMASK): Likewise.
	(PTRACE_SECCOMP_GET_FILTER): Likewise.
	* sysdeps/unix/sysv/linux/ia64/sys/ptrace.h (PTRACE_GETSIGMASK):
	Likewise.
	(PTRACE_SETSIGMASK): Likewise.
	(PTRACE_SECCOMP_GET_FILTER): Likewise.
	* sysdeps/unix/sysv/linux/powerpc/sys/ptrace.h
	(PTRACE_GETSIGMASK): Likewise.
	(PTRACE_SETSIGMASK): Likewise.
	(PTRACE_SECCOMP_GET_FILTER): Likewise.
	* sysdeps/unix/sysv/linux/s390/sys/ptrace.h (PTRACE_GETSIGMASK):
	Likewise.
	(PTRACE_SETSIGMASK): Likewise.
	(PTRACE_SECCOMP_GET_FILTER): Likewise.
	* sysdeps/unix/sysv/linux/sparc/sys/ptrace.h (PTRACE_GETSIGMASK):
	Likewise.
	(PTRACE_SETSIGMASK): Likewise.
	(PTRACE_SECCOMP_GET_FILTER): Likewise.
	* sysdeps/unix/sysv/linux/tile/sys/ptrace.h (PTRACE_GETSIGMASK):
	Likewise.
	(PTRACE_SETSIGMASK): Likewise.
	(PTRACE_SECCOMP_GET_FILTER): Likewise.
2016-01-12 12:42:55 +00:00
Tulio Magno Quites Machado Filho
42bf1c8971 powerpc: Enforce compiler barriers on hardware transactions
Work around a GCC behavior with hardware transactional memory built-ins.
GCC doesn't treat the PowerPC transactional built-ins as compiler
barriers, moving instructions past the transaction boundaries and
altering their atomicity.
2016-01-08 17:47:33 -02:00
Carlos Eduardo Seo
d2de9ef7ad powerpc: Add hwcap2 bits for POWER9.
Added hwcap2 bit masks for Power ISA 3.0 and VSX IEEE binary float 128-bit
features.
2016-01-08 11:19:40 -02:00
John David Anglin
48025aa9ed hppa: fix dladdr [BZ #19415]
The attached patch fixes dladdr on hppa.

Instead of using the generic version of _dl_lookup_address, we use an
implementation more or less modeled after __canonicalize_funcptr_for_compare()
in gcc.  The function pointer is analyzed and if it points to the
trampoline used to call _dl_runtime_resolve just before the global
offset table, then we call _dl_fixup to resolve the function pointer.
Then, we return the instruction pointer from the first word of the
descriptor.

The change fixes the testcase provided in [BZ #19415], and the Debian
nss package now builds successfully.
2016-01-08 02:19:26 -05:00