Commit Graph

Joseph Myers
ce99c0816b Add fesetexcept: aarch64.
This patch adds an AArch64 version of fesetexcept.  Untested.

	* sysdeps/aarch64/fpu/fesetexcept.c: New file.
2016-08-16 16:16:57 +00:00
Szabolcs Nagy
d637e923f9 [AArch64] Update libm-test-ulps
This partly reverts commit f8238ae3c7, which regenerated the ulps,
so that the max ulps are good for gcc-5, gcc-6 and gcc-trunk as well.

	* sysdeps/aarch64/libm-test-ulps: Updated.
2016-07-21 09:48:45 +01:00
Szabolcs Nagy
f8238ae3c7 [AArch64] Regenerate libm-test-ulps
* sysdeps/aarch64/libm-test-ulps: Regenerated.
2016-07-18 11:42:52 +01:00
Szabolcs Nagy
efbe665c3a [AArch64] Fix libc internal asm profiling code
When glibc is built with --enable-profile, the ENTRY of
asm functions includes CALL_MCOUNT for profiling.  (This
matters for binaries statically linked against libc_p.a.)

CALL_MCOUNT did not save/restore argument registers around
the _mcount call, so it clobbered them.  (It would be enough
to save/restore only the arguments passed to a given asm
function, but that would require too many asm changes, so it
is simpler to always save all argument registers in this
macro; a sketch of the idea follows this entry.)

Float arguments are not saved: _mcount does not clobber the
float registers and currently no asm function takes float
arguments anyway.

	[BZ #18707]
	* sysdeps/aarch64/Makefile (CFLAGS-mcount.c): Add -mgeneral-regs-only.
	* sysdeps/aarch64/sysdep.h (CALL_MCOUNT): Save argument registers.
2016-07-11 09:50:41 +01:00
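
A minimal sketch of the save/restore idea described above (illustrative
only; the actual sysdeps/aarch64/sysdep.h macro differs in frame layout,
CFI annotations and in how the mcount argument registers are set up):

  #define CALL_MCOUNT                       \
          stp     x29, x30, [sp, #-80]!;    \
          stp     x0, x1, [sp, #16];        \
          stp     x2, x3, [sp, #32];        \
          stp     x4, x5, [sp, #48];        \
          stp     x6, x7, [sp, #64];        \
          bl      mcount;                   \
          ldp     x0, x1, [sp, #16];        \
          ldp     x2, x3, [sp, #32];        \
          ldp     x4, x5, [sp, #48];        \
          ldp     x6, x7, [sp, #64];        \
          ldp     x29, x30, [sp], #80;

x8 (the indirect-result register) can be saved the same way; as noted
above, the floating-point argument registers are deliberately left alone.
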
Torvald Riegel
76a0b73e81 Remove atomic_compare_and_exchange_bool_rel.
atomic_compare_and_exchange_bool_rel and
catomic_compare_and_exchange_bool_rel are removed and replaced with the
new C11-like atomic_compare_exchange_weak_release.  The concurrent code
in nscd/cache.c has not been reviewed yet, so this patch does not add
detailed comments.

	* nscd/cache.c (cache_add): Use new C11-like atomic operation instead
	of atomic_compare_and_exchange_bool_rel.
	* nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_full): Likewise.
	* include/atomic.h (atomic_compare_and_exchange_bool_rel,
	catomic_compare_and_exchange_bool_rel): Remove.
	* sysdeps/aarch64/atomic-machine.h
	(atomic_compare_and_exchange_bool_rel): Likewise.
	* sysdeps/alpha/atomic-machine.h
	(atomic_compare_and_exchange_bool_rel): Likewise.
	* sysdeps/arm/atomic-machine.h
	(atomic_compare_and_exchange_bool_rel): Likewise.
	* sysdeps/mips/atomic-machine.h
	(atomic_compare_and_exchange_bool_rel): Likewise.
	* sysdeps/tile/atomic-machine.h
	(atomic_compare_and_exchange_bool_rel): Likewise.
2016-06-24 23:04:40 +03:00
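
The shape of the conversion, on a made-up variable (per glibc's internal
atomic API: the old macro returns zero on success, the C11-like one
returns true on success and updates the expected value on failure):

  /* Old interface (removed by this commit).  */
  if (atomic_compare_and_exchange_bool_rel (&mem, newval, oldval) == 0)
    /* success */;

  /* New C11-like interface; usually wrapped in a loop because the weak
     form may fail spuriously.  */
  __typeof (mem) expected = oldval;
  if (atomic_compare_exchange_weak_release (&mem, &expected, newval))
    /* success */;
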
Wilco Dijkstra
a024b39a4e This patch further tunes memcpy: it avoids one branch for sizes 1-3,
adds a prefetch, and improves small copies that are exact powers of 2.

        * sysdeps/aarch64/memcpy.S (memcpy):
        Further tuning for performance.
2016-06-22 13:24:24 +01:00
Wilco Dijkstra
58ec4fb881 Add a simple rawmemchr implementation. Use strlen for rawmemchr(s, '\0') as it
is the fastest way to search for '\0'.  Otherwise use memchr with an infinite
size.  This is 3x faster on benchtests for large sizes.  Passes GLIBC tests.

	* sysdeps/aarch64/rawmemchr.S (__rawmemchr): New file.
	* sysdeps/aarch64/strlen.S (__strlen): Change to __strlen to avoid PLT.
2016-06-20 17:48:20 +01:00
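
A C sketch of that strategy (the real implementation is the assembly
file sysdeps/aarch64/rawmemchr.S; the function name here is made up):

  #include <string.h>

  static void *
  rawmemchr_sketch (const void *s, int c)
  {
    if ((unsigned char) c == '\0')
      /* Searching for the terminator: strlen is the fastest way.  */
      return (void *) ((const char *) s + strlen (s));
    /* Otherwise search with an effectively infinite size.  */
    return memchr (s, c, (size_t) -1);
  }
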
Wilco Dijkstra
b998e16e71 This is an optimized memcpy/memmove for AArch64. Copies are split into 3 main
cases: small copies of up to 16 bytes, medium copies of 17..96 bytes which are
fully unrolled.  Large copies of more than 96 bytes align the destination and
use an unrolled loop processing 64 bytes per iteration.  In order to share code
with memmove, small and medium copies read all data before writing, allowing
any kind of overlap.  All memmoves except for the large backwards case fall
into memcpy for optimal performance.  On a random copy test memcpy/memmove are
40% faster on Cortex-A57 and 28% on Cortex-A53.

	* sysdeps/aarch64/memcpy.S (memcpy):
	Rewrite of optimized memcpy and memmove.
	* sysdeps/aarch64/memmove.S (memmove): Remove
	memmove code (merged into memcpy.S).
2016-06-20 17:41:33 +01:00
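
A C illustration of the "read everything before writing" rule that lets
memmove share the memcpy code (sketch only; the real code is AArch64
assembly in memcpy.S, and the helper name is made up):

  #include <stdint.h>
  #include <string.h>

  /* Copy 8..16 bytes; correct for any overlap of dst and src because
     both halves are loaded before anything is stored.  */
  static void
  copy_8_to_16 (char *dst, const char *src, size_t n)
  {
    uint64_t head, tail;
    memcpy (&head, src, 8);           /* first 8 bytes  */
    memcpy (&tail, src + n - 8, 8);   /* last 8 bytes   */
    memcpy (dst, &head, 8);
    memcpy (dst + n - 8, &tail, 8);
  }
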
Florian Weimer
aca1daef29 elf: Consolidate machine-agnostic DTV definitions in <dl-dtv.h>
Identical definitions of dtv_t and TLS_DTV_UNALLOCATED were
repeated for all architectures using DTVs.
2016-06-20 14:31:40 +02:00
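
Roughly the definitions being consolidated (quoted from memory of that
era's tree and possibly inexact; the authoritative text is the new
sysdeps/generic/dl-dtv.h):

  typedef union dtv
  {
    size_t counter;
    struct
    {
      void *val;
      bool is_static;
    } pointer;
  } dtv_t;

  #define TLS_DTV_UNALLOCATED ((void *) -1l)
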
Wilco Dijkstra
a8c5a2a952 This is an optimized memset for AArch64. Memset is split into 4 main cases:
small sets of up to 16 bytes, medium of 16..96 bytes which are fully unrolled.
Large memsets of more than 96 bytes align the destination and use an unrolled
loop processing 64 bytes per iteration.  Memsets of zero of more than 256 bytes
use the dc zva instruction, and there are faster versions for the common ZVA
sizes of 64 and 128.  STP of Q registers is used to reduce code size without
loss of
performance.

The speedup on test-memset is 1% on Cortex-A57 and 8% on Cortex-A53.

	* sysdeps/aarch64/memset.S (__memset):
	Rewrite of optimized memset.
2016-05-12 16:44:53 +01:00
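
A sketch of the zeroing fast path described above, assuming a 64-byte
ZVA block size and a destination already aligned to 64 bytes
(illustrative; the real code is assembly in memset.S):

  #include <stddef.h>

  static void
  zero_zva_64 (void *dst, size_t bytes)   /* bytes is a multiple of 64 */
  {
    char *p = dst;
    for (size_t i = 0; i < bytes; i += 64)
      /* DC ZVA zeroes one ZVA block at the given address.  */
      __asm__ __volatile__ ("dc zva, %0" : : "r" (p + i) : "memory");
  }
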
H.J. Lu
16396c41de Add _STRING_INLINE_unaligned and string_private.h
As discussed in

https://sourceware.org/ml/libc-alpha/2015-10/msg00403.html

the setting of _STRING_ARCH_unaligned currently controls the external
GLIBC ABI as well as selecting the use of unaligned accesses within
GLIBC.

Since _STRING_ARCH_unaligned was recently changed for AArch64, this
would potentially break the ABI in GLIBC 2.23, so split the uses and add
_STRING_INLINE_unaligned to select the string ABI. This setting must be
fixed for each target, while _STRING_ARCH_unaligned may be changed from
release to release.  _STRING_ARCH_unaligned is used unconditionally in
glibc.  But <bits/string.h>, which defines _STRING_ARCH_unaligned, isn't
included with -Os.  Since _STRING_ARCH_unaligned is internal to glibc and
may change between glibc releases, it should be made private to glibc.
_STRING_ARCH_unaligned should be defined in the new string_private.h header
file, which is included unconditionally from the internal <string.h> for the
glibc build.

	[BZ #19462]
	* bits/string.h (_STRING_ARCH_unaligned): Renamed to ...
	(_STRING_INLINE_unaligned): This.
	* include/string.h: Include <string_private.h>.
	* string/bits/string2.h: Replace _STRING_ARCH_unaligned with
	_STRING_INLINE_unaligned.
	* sysdeps/aarch64/bits/string.h (_STRING_ARCH_unaligned): Removed.
	(_STRING_INLINE_unaligned): New.
	* sysdeps/aarch64/string_private.h: New file.
	* sysdeps/generic/string_private.h: Likewise.
	* sysdeps/m68k/m680x0/m68020/string_private.h: Likewise.
	* sysdeps/s390/string_private.h: Likewise.
	* sysdeps/x86/string_private.h: Likewise.
	* sysdeps/m68k/m680x0/m68020/bits/string.h
	(_STRING_ARCH_unaligned): Renamed to ...
	(_STRING_INLINE_unaligned): This.
	* sysdeps/s390/bits/string.h (_STRING_ARCH_unaligned): Renamed
	to ...
	(_STRING_INLINE_unaligned): This.
	* sysdeps/sparc/bits/string.h (_STRING_ARCH_unaligned): Renamed
	to ...
	(_STRING_INLINE_unaligned): This.
	* sysdeps/x86/bits/string.h (_STRING_ARCH_unaligned): Renamed
	to ...
	(_STRING_INLINE_unaligned): This.
2016-02-18 14:55:29 -02:00
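
Plausible content of the new per-target header (illustrative; the real
files also carry the usual license header):

  /* sysdeps/aarch64/string_private.h */
  /* AArch64 supports efficient unaligned access.  */
  #define _STRING_ARCH_unaligned 1

  /* sysdeps/generic/string_private.h */
  /* By default, do not assume unaligned accesses are cheap.  */
  #define _STRING_ARCH_unaligned 0
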
Joseph Myers
f7a9f785e5 Update copyright dates with scripts/update-copyrights. 2016-01-04 16:05:18 +00:00
Szabolcs Nagy
c960ded0d5 [AArch64] Regenerate libm-test-ulps
* sysdeps/aarch64/libm-test-ulps: Regenerated.
2015-12-01 12:57:16 +00:00
Wilco Dijkstra
2fee269248 Enable _STRING_ARCH_unaligned on AArch64.
* sysdeps/aarch64/bits/string.h: New file.
        (_STRING_ARCH_unaligned): Define.
2015-11-10 11:15:59 +00:00
Szabolcs Nagy
24ffcbfc24 Regenerate aarch64 libm-test-ulps
* sysdeps/aarch64/libm-test-ulps: Regenerated.
2015-09-24 14:22:31 +01:00
Joseph Myers
de071d199a Move bits/atomic.h to atomic-machine.h (bug 14912).
It was noted in
<https://sourceware.org/ml/libc-alpha/2012-09/msg00305.html> that the
bits/*.h naming scheme should only be used for installed headers.
This patch renames bits/atomic.h to atomic-machine.h to follow that
convention.

This is the only change in this series that needs to change the
filename rather than simply removing a directory level (because both
atomic.h and bits/atomic.h exist at present).

Tested for x86_64 (testsuite, and that installed stripped shared
libraries are unchanged by the patch).

	[BZ #14912]
	* sysdeps/aarch64/bits/atomic.h: Move to ...
	* sysdeps/aarch64/atomic-machine.h: ...here.
	(_AARCH64_BITS_ATOMIC_H): Rename macro to
	_AARCH64_ATOMIC_MACHINE_H.
	* sysdeps/alpha/bits/atomic.h: Move to ...
	* sysdeps/alpha/atomic-machine.h: ...here.
	* sysdeps/arm/bits/atomic.h: Move to ...
	* sysdeps/arm/atomic-machine.h: ...here.  Update comments.
	* bits/atomic.h: Move to ...
	* sysdeps/generic/atomic-machine.h: ...here.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/i386/bits/atomic.h: Move to ...
	* sysdeps/i386/atomic-machine.h: ...here.
	* sysdeps/ia64/bits/atomic.h: Move to ...
	* sysdeps/ia64/atomic-machine.h: ...here.
	* sysdeps/m68k/coldfire/bits/atomic.h: Move to ...
	* sysdeps/m68k/coldfire/atomic-machine.h: ...here.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/m68k/m680x0/m68020/bits/atomic.h: Move to ...
	* sysdeps/m68k/m680x0/m68020/atomic-machine.h: ...here.
	* sysdeps/microblaze/bits/atomic.h: Move to ...
	* sysdeps/microblaze/atomic-machine.h: ...here.
	* sysdeps/mips/bits/atomic.h: Move to ...
	* sysdeps/mips/atomic-machine.h: ...here.
	(_MIPS_BITS_ATOMIC_H): Rename macro to _MIPS_ATOMIC_MACHINE_H.
	* sysdeps/powerpc/bits/atomic.h: Move to ...
	* sysdeps/powerpc/atomic-machine.h: ...here.  Update comments.
	* sysdeps/powerpc/powerpc32/bits/atomic.h: Move to ...
	* sysdeps/powerpc/powerpc32/atomic-machine.h: ...here.  Update
	comments.  Include <atomic-machine.h> instead of <bits/atomic.h>.
	* sysdeps/powerpc/powerpc64/bits/atomic.h: Move to ...
	* sysdeps/powerpc/powerpc64/atomic-machine.h: ...here.  Include
	<atomic-machine.h> instead of <bits/atomic.h>.
	* sysdeps/s390/bits/atomic.h: Move to ...
	* sysdeps/s390/atomic-machine.h: ...here.
	* sysdeps/sparc/sparc32/bits/atomic.h: Move to ...
	* sysdeps/sparc/sparc32/atomic-machine.h: ...here.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/sparc/sparc32/sparcv9/bits/atomic.h: Move to ...
	* sysdeps/sparc/sparc32/sparcv9/atomic-machine.h: ...here.
	* sysdeps/sparc/sparc64/bits/atomic.h: Move to ...
	* sysdeps/sparc/sparc64/atomic-machine.h: ...here.
	* sysdeps/tile/bits/atomic.h: Move to ...
	* sysdeps/tile/atomic-machine.h: ...here.
	* sysdeps/tile/tilegx/bits/atomic.h: Move to ...
	* sysdeps/tile/tilegx/atomic-machine.h: ...here.  Include
	<sysdeps/tile/atomic-machine.h> instead of
	<sysdeps/tile/bits/atomic.h>.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/tile/tilepro/bits/atomic.h: Move to ...
	* sysdeps/tile/tilepro/atomic-machine.h: ...here.  Include
	<sysdeps/tile/atomic-machine.h> instead of
	<sysdeps/tile/bits/atomic.h>.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/unix/sysv/linux/arm/bits/atomic.h: Move to ...
	* sysdeps/unix/sysv/linux/arm/atomic-machine.h: ...here.  Include
	<sysdeps/arm/atomic-machine.h> instead of
	<sysdeps/arm/bits/atomic.h>.
	* sysdeps/unix/sysv/linux/hppa/bits/atomic.h: Move to ...
	* sysdeps/unix/sysv/linux/hppa/atomic-machine.h: ...here.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/unix/sysv/linux/m68k/coldfire/bits/atomic.h: Move to ...
	* sysdeps/unix/sysv/linux/m68k/coldfire/atomic-machine.h: ...here.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/unix/sysv/linux/nios2/bits/atomic.h: Move to ...
	* sysdeps/unix/sysv/linux/nios2/atomic-machine.h: ...here.
	(_NIOS2_BITS_ATOMIC_H): Rename macro to _NIOS2_ATOMIC_MACHINE_H.
	* sysdeps/unix/sysv/linux/sh/bits/atomic.h: Move to ...
	* sysdeps/unix/sysv/linux/sh/atomic-machine.h: ...here.
	* sysdeps/x86_64/bits/atomic.h: Move to ...
	* sysdeps/x86_64/atomic-machine.h: ...here.
	* include/atomic.h: Include <atomic-machine.h> instead of
	<bits/atomic.h>.
2015-09-11 20:00:19 +00:00
Joseph Myers
522e02ab8a Rename bits/linkmap.h to linkmap.h (bug 14912).
It was noted in
<https://sourceware.org/ml/libc-alpha/2012-09/msg00305.html> that the
bits/*.h naming scheme should only be used for installed headers.
This patch renames bits/linkmap.h to plain linkmap.h to follow that
convention.

Tested for x86_64 (testsuite, and that installed stripped shared
libraries are unchanged by the patch).

	[BZ #14912]
	* bits/linkmap.h: Move to ...
	* sysdeps/generic/linkmap.h: ...here.
	* sysdeps/aarch64/bits/linkmap.h: Move to ...
	* sysdeps/aarch64/linkmap.h: ...here.
	* sysdeps/arm/bits/linkmap.h: Move to ...
	* sysdeps/arm/linkmap.h: ...here.
	* sysdeps/hppa/bits/linkmap.h: Move to ...
	* sysdeps/hppa/linkmap.h: ...here.
	* sysdeps/ia64/bits/linkmap.h: Move to ...
	* sysdeps/ia64/linkmap.h: ...here.
	* sysdeps/mips/bits/linkmap.h: Move to ...
	* sysdeps/mips/linkmap.h: ...here.
	* sysdeps/s390/bits/linkmap.h: Move to ...
	* sysdeps/s390/linkmap.h: ...here.
	* sysdeps/sh/bits/linkmap.h: Move to ...
	* sysdeps/sh/linkmap.h: ...here.
	* sysdeps/x86/bits/linkmap.h: Move to ...
	* sysdeps/x86/linkmap.h: ...here.
	* include/link.h: Include <linkmap.h> instead of <bits/linkmap.h>.
2015-09-04 19:44:27 +00:00
Wilco Dijkstra
edbbc86c3a 2015-08-24 Wilco Dijkstra <wdijkstr@arm.com>
* sysdeps/aarch64/bzero.S (__bzero): Remove.
2015-08-24 14:49:46 +01:00
Wilco Dijkstra
f008c71455 2015-08-24 Wilco Dijkstra <wdijkstr@arm.com>
* sysdeps/aarch64/fpu/math_private.h (libc_feholdsetround_aarch64_ctx):
	Unconditionally set __fpcr to avoid an uninitialized warning.
	(libc_feholdsetround_noex_aarch64_ctx): Likewise.
2015-08-24 14:42:28 +01:00
Wilco Dijkstra
7b1c56e483 Improve feenableexcept performance: avoid an unnecessary FPCR read in
case the FPCR does not change.  Also improve the logic of the return value.  (A C
sketch of the resulting fast path follows this entry.)
2015-08-05 16:24:02 +01:00
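
A C sketch of the resulting fast path (simplified; the helper name and
the hard-coded shift of 8 onto the FPCR trap-enable bits are assumptions
of this sketch rather than the actual feenableexcept.c text):

  #include <fenv.h>
  #include <fpu_control.h>

  static int
  feenableexcept_sketch (int excepts)
  {
    fpu_control_t fpcr, fpcr_new, updated;

    _FPU_GETCW (fpcr);
    excepts &= FE_ALL_EXCEPT;
    fpcr_new = fpcr | (excepts << 8);

    if (fpcr_new != fpcr)
      {
        /* Only touch the FPCR when something actually changes; this is
           the access that the common case now skips.  */
        _FPU_SETCW (fpcr_new);
        _FPU_GETCW (updated);
        /* Trapping is optional in AArch64: unsupported enable bits read
           back as zero, so report failure if any were dropped.  */
        if (fpcr_new & ~updated)
          return -1;
      }

    /* Return the previously enabled exceptions.  */
    return (fpcr >> 8) & FE_ALL_EXCEPT;
  }
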
Wilco Dijkstra
3136eb7abd Improve fesetenv performance by avoiding unnecessary FPSR/FPCR reads/writes.
It uses the same logic as the ARM version. The common case removes 1 FPSR
and 1 FPCR read.  For FE_DFL_ENV and FE_NOMASK_ENV an FPCR read is avoided in
case the FPCR does not change.
2015-08-05 16:24:01 +01:00
Szabolcs Nagy
0910702c4d [AArch64][BZ #17711] Fix extern protected data handling
Fixes elf/tst-protected1a and elf/tst-protected1b tests.

Depends on a gcc patch that makes protected visibility data non-local:
https://gcc.gnu.org/ml/gcc-patches/2015-07/msg01871.html
and on a binutils patch so R_*_GLOB_DAT relocs are used for it:
https://sourceware.org/ml/binutils/2015-07/msg00246.html
2015-07-24 09:57:32 +01:00
Wilco Dijkstra
82641e16aa Add AArch64 versions of math_opt_barrier and math_force_eval that avoid going via memory. 2015-07-13 12:48:33 +01:00
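
Roughly the shape such register-only barriers take (illustrative; see
sysdeps/aarch64/fpu/math_private.h for the real definitions).  The "w"
constraint keeps the value in an FP/SIMD register, so no store/reload
through the stack is generated:

  #define math_opt_barrier(x)                                      \
    ({ __typeof (x) __x = (x); __asm ("" : "+w" (__x)); __x; })
  #define math_force_eval(x)                                       \
    ({ __typeof (x) __x = (x); __asm __volatile__ ("" : : "w" (__x)); })
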
Wilco Dijkstra
c435989f52 Optimize the strlen implementation by using a page-cross check and a fast
check for NUL bytes that reverts to a separate loop when a non-ASCII character is
encountered.  The speedup on test-strlen is ~10%, long ASCII strings are processed
~60% faster, and on random tests it is ~80% better.
2015-07-13 12:38:12 +01:00
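
The fast check for NUL bytes is typically the classic branch-free test
on an 8-byte word (C sketch; the actual code is assembly in strlen.S):

  #include <stdint.h>

  static inline int
  has_zero_byte (uint64_t v)
  {
    /* Bytes that were zero get their top bit set by the subtraction and
       survive the ~v mask; any other byte is filtered out.  */
    return ((v - 0x0101010101010101ULL) & ~v & 0x8080808080808080ULL) != 0;
  }
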
Wilco Dijkstra
6471190491 Inline __ieee754_sqrt and __ieee754_sqrtf. Also add external definitions. 2015-07-06 12:52:55 +01:00
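
A sketch of how the inlining can look (the function names are
placeholders; the real definitions live in the AArch64 math_private.h):

  static inline double
  inline_sqrt (double x)
  {
    double r;
    __asm__ ("fsqrt %d0, %d1" : "=w" (r) : "w" (x));
    return r;
  }

  static inline float
  inline_sqrtf (float x)
  {
    float r;
    __asm__ ("fsqrt %s0, %s1" : "=w" (r) : "w" (x));
    return r;
  }
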
Szabolcs Nagy
cfe4368e51 Regenerate aarch64 libm-test-ulps
* sysdeps/aarch64/libm-test-ulps: Regenerated.
2015-07-02 14:58:12 +01:00
Szabolcs Nagy
c71c89e5c7 [AArch64] Fix cfi_adjust_cfa_offset usage in dl-tlsdesc.S
Some of the cfi annotations used an incorrect sign.

	* sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_return_lazy): Fix
	cfi_adjust_cfa_offset argument.
	(_dl_tlsdesc_undefweak, _dl_tlsdesc_dynamic): Likewise.
	(_dl_tlsdesc_resolve_rela, _dl_tlsdesc_resolve_hold): Likewise.
2015-06-17 12:44:53 +01:00
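
The sign convention, illustrated on a made-up push/pop pair (not taken
from dl-tlsdesc.S): the CFA offset grows when the stack pointer moves
down and shrinks when it moves back up.

  stp     x1, x2, [sp, #-16]!
  cfi_adjust_cfa_offset (16)      /* not (-16) */
  ...
  ldp     x1, x2, [sp], #16
  cfi_adjust_cfa_offset (-16)
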
Szabolcs Nagy
08325735c2 [BZ 18034][AArch64] Lazy TLSDESC relocation data race fix
Lazy TLSDESC initialization needs to be synchronized with concurrent TLS
accesses.  The TLS descriptor contains a function pointer (entry) and an
argument that is accessed from the entry function.  With lazy initialization
the first call to the entry function updates the entry and the argument to
their final value.  A final entry function must make sure that it accesses an
initialized argument; this needs synchronization on systems with weak memory
ordering, otherwise the writes of the first call can be observed out of order.

There are at least two issues with the current code:

tlsdesc.c (i386, x86_64, arm, aarch64) uses volatile memory accesses on the
write side (in the initial entry function) instead of C11 atomics.

And on systems with weak memory ordering (arm, aarch64) the read side
synchronization is missing from the final entry functions (dl-tlsdesc.S).

This patch only deals with aarch64.

* Write side:

Volatile accesses were replaced with C11 relaxed atomics, and a release
store was used for the initialization of entry so the read side can
synchronize with it.

* Read side:

The TLS access code generated by the compiler and an entry function look roughly like this:

  ldr x1, [x0]    // load the entry
  blr x1          // call it

entryfunc:
  ldr x0, [x0,#8] // load the arg
  ret

Various alternatives were considered to force the ordering in the entry
function between the two loads:

(1) barrier

entryfunc:
  dmb ishld
  ldr x0, [x0,#8]

(2) address dependency (if the address of the second load depends on the
result of the first one, the ordering is guaranteed):

entryfunc:
  ldr x1,[x0]
  and x1,x1,#8
  orr x1,x1,#8
  ldr x0,[x0,x1]

(3) load-acquire (ARMv8 instruction that is ordered before subsequent
loads and stores)

entryfunc:
  ldar xzr,[x0]
  ldr x0,[x0,#8]

Option (1) is the simplest but slowest (note: this runs at every TLS
access).  Options (2) and (3) do one extra load from [x0] (same-address
loads are ordered, so it happens-after the load at the call site).
Option (2) clobbers x1, which is problematic because existing gcc does
not expect that, so approach (3) was chosen.

A new _dl_tlsdesc_return_lazy entry function was introduced for lazily
relocated static TLS, so non-lazy static TLS can avoid the synchronization
cost.

	[BZ #18034]
	* sysdeps/aarch64/dl-tlsdesc.h (_dl_tlsdesc_return_lazy): Declare.
	* sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_return_lazy): Define.
	(_dl_tlsdesc_undefweak): Guarantee TLSDESC entry and argument load-load
	ordering using ldar.
	(_dl_tlsdesc_dynamic): Likewise.
	(_dl_tlsdesc_return_lazy): Likewise.
	* sysdeps/aarch64/tlsdesc.c (_dl_tlsdesc_resolve_rela_fixup): Use
	relaxed atomics instead of volatile and synchronize with release store.
	(_dl_tlsdesc_resolve_hold_fixup): Use relaxed atomics instead of
	volatile.
	* elf/tlsdeschtab.h (_dl_tlsdesc_resolve_early_return_p): Likewise.
2015-06-17 12:41:01 +01:00
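
A C sketch of the write-side ordering (the field and variable names are
illustrative): the argument is published first, then the entry pointer
is stored with release semantics so the ldar on the read side
synchronizes with it.

  /* In the lazy resolver, once the final values are known.  */
  atomic_store_relaxed (&td->arg, (void *) final_arg);
  atomic_store_release (&td->entry, _dl_tlsdesc_return_lazy);
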
Joseph Myers
1769608794 Use libc_hidden_proto / libc_hidden_def with __strnlen.
Various code in glibc uses __strnlen instead of strnlen for namespace
reasons.  However, __strnlen does not use libc_hidden_proto /
libc_hidden_def (as is normally done for any function defined and
called within the same library, whether or not exported from the
library and whatever namespace it is in), so the compiler does not
know that those calls are to a function within libc.

This patch uses libc_hidden_proto / libc_hidden_def with __strnlen.
On x86_64, it makes no difference to the installed stripped shared
libraries.  On 32-bit x86, it causes __strnlen calls to go to the same
place as strnlen calls (the fallback strnlen implementation), rather
than through a PLT entry for the strnlen IFUNC; I'm not sure of the
logic behind when calls from within libc should use IFUNCs versus when
they should go direct to a particular function implementation, but
clearly it doesn't make sense for strnlen and __strnlen to be handled
differently in this regard.

Tested for x86_64 and x86 (testsuite, and comparison of installed
shared libraries as described above).

	* string/strnlen.c [!STRNLEN] (__strnlen): Use libc_hidden_def.
	* include/string.h (__strnlen): Use libc_hidden_proto.
	* sysdeps/aarch64/strnlen.S (__strnlen): Use libc_hidden_def.
	* sysdeps/i386/i686/multiarch/strnlen-c.c [SHARED]
	(libc_hidden_def): Define __GI___strnlen as well as __GI_strnlen.
	* sysdeps/powerpc/powerpc32/power4/multiarch/strnlen-power7.S
	(libc_hidden_def): Undefine and redefine.
	* sysdeps/powerpc/powerpc32/power4/multiarch/strnlen-ppc32.c
	[SHARED] (libc_hidden_def): Define __GI___strnlen as well as
	__GI_strnlen.
	* sysdeps/powerpc/powerpc32/power7/strnlen.S (__strnlen): Use
	libc_hidden_def.
	* sysdeps/tile/tilegx/strnlen.c (__strnlen): Likewise.
2015-06-02 20:24:25 +00:00
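
The usual shape of the libc_hidden_proto / libc_hidden_def pairing,
sketched here with a simple (non-optimized) body:

  /* include/string.h */
  libc_hidden_proto (__strnlen)

  /* In the implementation file.  */
  size_t
  __strnlen (const char *str, size_t maxlen)
  {
    const char *end = memchr (str, '\0', maxlen);
    return end != NULL ? (size_t) (end - str) : maxlen;
  }
  libc_hidden_def (__strnlen)
  weak_alias (__strnlen, strnlen)
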
Wilco Dijkstra
71bf272d91 2015-06-02 Szabolcs Nagy <szabolcs.nagy@arm.com>
* sysdeps/aarch64/libm-test-ulps: Update.
2015-06-02 10:47:45 +01:00
Szabolcs Nagy
265a9b73ba [AArch64] Fix inline asm clobber list in tls-macros.h 2015-05-13 15:46:24 +01:00
Wilco Dijkstra
eda361c8d9 2015-05-06 Szabolcs Nagy <szabolcs.nagy@arm.com>
* sysdeps/aarch64/libm-test-ulps: Update.
2015-05-06 13:00:15 +00:00
Roland McGrath
ac9e0e5e40 Clean up sysdep-dl-routines variable. 2015-02-06 10:42:08 -08:00
Joseph Myers
8116321f65 Fix libm feupdateenv namespace (bug 17748).
Concluding the fixes for C90 libm functions calling C99 fe* functions,
this patch fixes the case of feupdateenv by making it a weak alias for
__feupdateenv and making the affected code call __feupdateenv.

Tested for x86_64 (testsuite, and that installed stripped shared
libraries are unchanged by the patch).  Also tested for ARM
(soft-float) that the math.h linknamespace tests now pass.

	[BZ #17748]
	* include/fenv.h (__feupdateenv): Use libm_hidden_proto.
	* math/feupdateenv.c (__feupdateenv): Use libm_hidden_def.
	* sysdeps/aarch64/fpu/feupdateenv.c (feupdateenv): Rename to
	__feupdateenv and define as weak alias of __feupdateenv.  Use
	libm_hidden_weak.
	* sysdeps/alpha/fpu/feupdateenv.c (__feupdateenv): Use
	libm_hidden_def.
	* sysdeps/arm/feupdateenv.c (feupdateenv): Rename to __feupdateenv
	and define as weak alias of __feupdateenv.  Use libm_hidden_weak.
	* sysdeps/hppa/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/i386/fpu/feupdateenv.c (__feupdateenv): Use
	libm_hidden_def.
	* sysdeps/ia64/fpu/feupdateenv.c (feupdateenv): Rename to
	__feupdateenv and define as weak alias of __feupdateenv.  Use
	libm_hidden_weak.
	* sysdeps/m68k/fpu/feupdateenv.c (__feupdateenv): Use
	libm_hidden_def.
	* sysdeps/mips/fpu/feupdateenv.c (feupdateenv): Rename to
	__feupdateenv and define as weak alias of __feupdateenv.  Use
	libm_hidden_weak.
	* sysdeps/powerpc/fpu/feupdateenv.c (__feupdateenv): Use
	libm_hidden_def.
	* sysdeps/powerpc/nofpu/feupdateenv.c (__feupdateenv): Likewise.
	* sysdeps/powerpc/powerpc32/e500/nofpu/feupdateenv.c
	(__feupdateenv): Likewise.
	* sysdeps/s390/fpu/feupdateenv.c (feupdateenv): Rename to
	__feupdateenv and define as weak alias of __feupdateenv.  Use
	libm_hidden_weak.
	* sysdeps/sh/sh4/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/sparc/fpu/feupdateenv.c (__feupdateenv): Use
	libm_hidden_def.
	* sysdeps/tile/math_private.h (__feupdateenv): New inline
	function.
	* sysdeps/x86_64/fpu/feupdateenv.c (__feupdateenv): Use
	libm_hidden_def.
	* sysdeps/generic/math_private.h (default_libc_feupdateenv): Call
	__feupdateenv instead of feupdateenv.
	(default_libc_feupdateenv_test): Likewise.
	(libc_feresetround_ctx): Likewise.
2015-01-07 19:01:20 +00:00
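
The renaming pattern used here and in the related fe* entries below,
sketched for one file following the ChangeLog wording (the body stands
in for the unchanged architecture-specific code):

  int
  __feupdateenv (const fenv_t *envp)
  {
    /* ... unchanged architecture-specific body ... */
    return 0;
  }
  weak_alias (__feupdateenv, feupdateenv)
  libm_hidden_weak (feupdateenv)

Files that already defined __feupdateenv directly instead just gain
libm_hidden_def (__feupdateenv), as listed above.
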
Richard Earnshaw
dc400d7b73 AArch64: Optimized implementations of strcpy and stpcpy. 2015-01-07 11:31:10 +00:00
Richard Earnshaw
ec582ca0f3 AArch64 optimized implementation of strrchr. 2015-01-07 11:26:13 +00:00
Joseph Myers
01238691bb Fix libm fesetround namespace (bug 17748).
Continuing the fixes for C90 libm functions calling C99 fe* functions,
this patch fixes the case of fesetround by making it a weak alias of
__fesetround and making the affected code call __fesetround.  An
existing __fesetround function in fenv_libc.h for powerpc is renamed
to __fesetround_inline.

Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch).  Also tested for ARM
(soft-float) that fesetround failures disappear from the linknamespace
test results (feupdateenv remains to be addressed to complete fixing
bug 17748).

	[BZ #17748]
	* include/fenv.h (__fesetround): Declare.  Use libm_hidden_proto.
	* math/fesetround.c (fesetround): Rename to __fesetround and
	define as weak alias of __fesetround.  Use libm_hidden_weak.
	* sysdeps/aarch64/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/alpha/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/arm/fesetround.c (fesetround): Likewise.
	* sysdeps/hppa/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/i386/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/ia64/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/m68k/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/mips/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/powerpc/fpu/fenv_libc.h (__fesetround): Rename to
	__fesetround_inline.
	* sysdeps/powerpc/fpu/fenv_private.h (libc_fesetround_ppc): Call
	__fesetround_inline instead of __fesetround.
	* sysdeps/powerpc/fpu/fesetround.c (fesetround): Rename to
	__fesetround and define as weak alias of __fesetround.  Use
	libm_hidden_weak.  Call __fesetround_inline instead of
	__fesetround.
	* sysdeps/powerpc/nofpu/fesetround.c (fesetround): Rename to
	__fesetround and define as weak alias of __fesetround.  Use
	libm_hidden_weak.
	* sysdeps/powerpc/powerpc32/e500/nofpu/fesetround.c (fesetround):
	Likewise.
	* sysdeps/s390/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/sh/sh4/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/sparc/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/tile/math_private.h (__fesetround): New inline function.
	* sysdeps/x86_64/fpu/fesetround.c (fesetround): Rename to
	__fesetround and define as weak alias of __fesetround.  Use
	libm_hidden_weak.
	* sysdeps/generic/math_private.h (default_libc_fesetround): Call
	__fesetround instead of fesetround.
	(default_libc_feholdexcept_setround): Likewise.
	(libc_feholdsetround_ctx): Likewise.
	(libc_feholdsetround_noex_ctx): Likewise.
2015-01-07 00:41:23 +00:00
Joseph Myers
cd42798aef Fix libm fesetenv namespace (bug 17748).
Continuing the fixes for C90 libm functions calling C99 fe* functions,
this patch fixes the case of fesetenv by making it a weak alias of
__fesetenv and making the affected code (including various copies of
feupdateenv which also gets called from C90 functions) call
__fesetenv.

Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch).  Also tested for ARM
(soft-float) that fesetenv failures disappear from the linknamespace
test results (fesetround and feupdateenv remain to be addressed to
complete fixing bug 17748).

	[BZ #17748]
	* include/fenv.h (__fesetenv): Use libm_hidden_proto.
	* math/fesetenv.c (__fesetenv): Use libm_hidden_def.
	* sysdeps/aarch64/fpu/fesetenv.c (fesetenv): Rename to __fesetenv
	and define as weak alias of __fesetenv.  Use libm_hidden_weak.
	* sysdeps/alpha/fpu/fesetenv.c (__fesetenv): Use libm_hidden_def.
	* sysdeps/arm/fesetenv.c (fesetenv): Rename to __fesetenv and
	define as weak alias of __fesetenv.  Use libm_hidden_weak.
	* sysdeps/hppa/fpu/fesetenv.c (fesetenv): Likewise.
	* sysdeps/i386/fpu/fesetenv.c (__fesetenv): Use libm_hidden_def.
	* sysdeps/ia64/fpu/fesetenv.c (fesetenv): Rename to __fesetenv and
	define as weak alias of __fesetenv.  Use libm_hidden_weak.
	* sysdeps/m68k/fpu/fesetenv.c (__fesetenv): Use libm_hidden_def.
	* sysdeps/mips/fpu/fesetenv.c (fesetenv): Rename to __fesetenv and
	define as weak alias of __fesetenv.  Use libm_hidden_weak.
	* sysdeps/powerpc/fpu/fesetenv.c (__fesetenv): Use
	libm_hidden_def.
	* sysdeps/powerpc/nofpu/fesetenv.c (__fesetenv): Likewise.
	* sysdeps/powerpc/powerpc32/e500/nofpu/fesetenv.c (__fesetenv):
	Likewise.
	* sysdeps/s390/fpu/fesetenv.c (fesetenv): Rename to __fesetenv and
	define as weak alias of __fesetenv.  Use libm_hidden_weak.
	* sysdeps/sh/sh4/fpu/fesetenv.c (fesetenv): Likewise.
	* sysdeps/sparc/fpu/fesetenv.c (__fesetenv): Use libm_hidden_def.
	* sysdeps/tile/math_private.h (__fesetenv): New inline function.
	* sysdeps/x86_64/fpu/fesetenv.c (fesetenv): Rename to __fesetenv
	and define as weak alias of __fesetenv.  Use libm_hidden_weak.
	* sysdeps/generic/math_private.h (default_libc_fesetenv): Use
	__fesetenv instead of fesetenv.
	(libc_feresetround_noex_ctx): Likewise.
	* sysdeps/alpha/fpu/feupdateenv.c (__feupdateenv): Likewise.
	* sysdeps/hppa/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/i386/fpu/feupdateenv.c (__feupdateenv): Likewise.
	* sysdeps/ia64/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/m68k/fpu/feupdateenv.c (__feupdateenv): Likewise.
	* sysdeps/mips/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/powerpc/nofpu/feupdateenv.c (__feupdateenv): Likewise.
	* sysdeps/powerpc/powerpc32/e500/nofpu/feupdateenv.c
	(__feupdateenv): Likewise.
	* sysdeps/s390/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/sh/sh4/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/sparc/fpu/feupdateenv.c (__feupdateenv): Likewise.
	* sysdeps/x86_64/fpu/feupdateenv.c (__feupdateenv): Likewise.
2015-01-06 23:36:20 +00:00
Joseph Myers
ef9faf1385 Fix libm feholdexcept namespace (bug 17748).
Continuing the fixes for C90 libm functions calling C99 fe* functions,
this patch fixes the case of feholdexcept by making it a weak alias of
__feholdexcept and making the affected code call __feholdexcept.

Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch).  Also tested for ARM
(soft-float) that feholdexcept failures disappear from the
linknamespace test failures (fesetenv, fesetround and feupdateenv
remain to be addressed to complete fixing bug 17748).

	[BZ #17748]
	* include/fenv.h (__feholdexcept): Declare.  Use
	libm_hidden_proto.
	* math/feholdexcpt.c (feholdexcept): Rename to __feholdexcept and
	define as weak alias of __feholdexcept.  Use libm_hidden_weak.
	* sysdeps/aarch64/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/alpha/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/arm/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/hppa/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/i386/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/ia64/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/m68k/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/mips/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/powerpc/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/powerpc/nofpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/powerpc/powerpc32/e500/nofpu/feholdexcpt.c
	(feholdexcept): Likewise.
	* sysdeps/s390/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/sh/sh4/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/sparc/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/x86_64/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/generic/math_private.h (default_libc_feholdexcept): Use
	__feholdexcept instead of feholdexcept.
	(default_libc_feholdexcept_setround): Likewise.
2015-01-05 23:06:14 +00:00
Joseph Myers
b93c2205ec Fix libm fegetround namespace (bug 17748).
Continuing the fixes for C90 libm functions calling C99 fe* functions,
this patch fixes the case of fegetround by making it a weak alias of
__fegetround and making the affected code call __fegetround.

Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch).  Also tested for ARM
(soft-float) that fegetround failures disappear from the linknamespace
test failures (feholdexcept, fesetenv, fesetround and feupdateenv
remain to be addressed before bug 17748 is fully fixed, although this
patch may suffice to fix the failures in some cases, when the libc_fe*
functions are implemented but there is no architecture-specific sqrt
implementation in use so there were failures from fegetround used by
sqrt but no other such failures).

	[BZ #17748]
	* include/fenv.h (__fegetround): Declare.  Use libm_hidden_proto.
	* math/fegetround.c (fegetround): Rename to __fegetround and
	define as weak alias of __fegetround.  Use libm_hidden_weak.
	* sysdeps/aarch64/fpu/fegetround.c (fegetround): Likewise.
	* sysdeps/alpha/fpu/fegetround.c (fegetround): Likewise.
	* sysdeps/arm/fegetround.c (fegetround): Likewise.
	* sysdeps/hppa/fpu/fegetround.c (fegetround): Likewise.
	* sysdeps/i386/fpu/fegetround.c (fegetround): Likewise.
	* sysdeps/ia64/fpu/fegetround.c (fegetround): Likewise.
	* sysdeps/m68k/fpu/fegetround.c (fegetround): Likewise.
	* sysdeps/mips/fpu/fegetround.c (fegetround): Likewise.
	* sysdeps/powerpc/fpu/fegetround.c (fegetround): Likewise.
	Undefine after rather than before function definition; use
	parentheses around function name in definition.
	(__fegetround): Also undefine macro after function definition.
	* sysdeps/powerpc/nofpu/fegetround.c (fegetround): Rename to
	__fegetround and define as weak alias of __fegetround.  Use
	libm_hidden_weak.  Do not undefine as macro.
	* sysdeps/powerpc/powerpc32/e500/nofpu/fegetround.c (fegetround):
	Likewise.
	* sysdeps/s390/fpu/fegetround.c (fegetround): Rename to
	__fegetround and define as weak alias of __fegetround.  Use
	libm_hidden_weak.
	* sysdeps/sh/sh4/fpu/fegetround.c (fegetround): Likewise.
	* sysdeps/sparc/fpu/fegetround.c (fegetround): Likewise.
	* sysdeps/tile/math_private.h (__fegetround): New inline function.
	* sysdeps/x86_64/fpu/fegetround.c (fegetround): Rename to
	__fegetround and define as weak alias of __fegetround.  Use
	libm_hidden_weak.
	* sysdeps/ieee754/dbl-64/e_sqrt.c (__ieee754_sqrt): Use
	__fegetround instead of fegetround.
2015-01-02 20:44:42 +00:00
Joseph Myers
b168057aaa Update copyright dates with scripts/update-copyrights. 2015-01-02 16:29:47 +00:00
Joseph Myers
73a268c759 Fix libm fegetenv namespace (bug 17748).
Some C90 libm functions call fegetenv via libc_feholdsetround*
functions in math_private.h.  This patch makes them call __fegetenv
instead, making fegetenv into a weak alias for __fegetenv as needed.

Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch).  Also tested for ARM
(soft-float) that fegetenv failures disappear from the linknamespace
test failures (however, similar fixes will also be needed for
fegetround, feholdexcept, fesetenv, fesetround and feupdateenv before
this set of namespace issues covered by bug 17748 is fully fixed and
those linknamespace tests start passing).

	[BZ #17748]
	* include/fenv.h (__fegetenv): Use libm_hidden_proto.
	* math/fegetenv.c (__fegetenv): Use libm_hidden_def.
	* sysdeps/aarch64/fpu/fegetenv.c (fegetenv): Rename to __fegetenv
	and define as weak alias of __fegetenv.  Use libm_hidden_weak.
	* sysdeps/alpha/fpu/fegetenv.c (__fegetenv): Use libm_hidden_def.
	* sysdeps/arm/fegetenv.c (fegetenv): Rename to __fegetenv and
	define as weak alias of __fegetenv.  Use libm_hidden_weak.
	* sysdeps/hppa/fpu/fegetenv.c (fegetenv): Likewise.
	* sysdeps/i386/fpu/fegetenv.c (__fegetenv): Use libm_hidden_def.
	* sysdeps/ia64/fpu/fegetenv.c (fegetenv): Rename to __fegetenv and
	define as weak alias of __fegetenv.  Use libm_hidden_weak.
	* sysdeps/m68k/fpu/fegetenv.c (__fegetenv): Use libm_hidden_def.
	* sysdeps/mips/fpu/fegetenv.c (fegetenv): Rename to __fegetenv and
	define as weak alias of __fegetenv.  Use libm_hidden_weak.
	* sysdeps/powerpc/fpu/fegetenv.c (__fegetenv): Use
	libm_hidden_def.
	* sysdeps/powerpc/nofpu/fegetenv.c (__fegetenv): Likewise.
	* sysdeps/powerpc/powerpc32/e500/nofpu/fegetenv.c (__fegetenv):
	Likewise.
	* sysdeps/s390/fpu/fegetenv.c (fegetenv): Rename to __fegetenv and
	define as weak alias of __fegetenv.  Use libm_hidden_weak.
	* sysdeps/sh/sh4/fpu/fegetenv.c (fegetenv): Likewise.
	* sysdeps/sparc/fpu/fegetenv.c (__fegetenv): Use libm_hidden_def.
	* sysdeps/tile/math_private.h (__fegetenv): New inline function.
	* sysdeps/x86_64/fpu/fegetenv.c (fegetenv): Rename to __fegetenv
	and define as weak alias of __fegetenv.  Use libm_hidden_weak.
	* sysdeps/generic/math_private.h (libc_feholdsetround_ctx): Use
	__fegetenv instead of fegetenv.
	(libc_feholdsetround_noex_ctx): Likewise.
2014-12-31 22:07:52 +00:00
Joseph Myers
0747f81811 Fix libm feraiseexcept namespace (bug 17723).
Various C90 and UNIX98 libm functions call feraiseexcept, which is not
in those standards.  This causes linknamespace test failures - except
on x86 / x86_64, where feraiseexcept is inline (for the relevant
constant arguments) in bits/fenv.h.

This patch fixes this by making those functions call __feraiseexcept
instead.  All changes are applied to all architectures rather than
considering the possibility that some might not be needed in some
cases (e.g. x86) as it seems most maintainable to keep architectures
consistent.

Where __feraiseexcept does not exist, it is added, with feraiseexcept
made a weak alias; where it is a strong alias, it is made weak.
libm_hidden_def / libm_hidden_proto are used with __feraiseexcept
(this might in some cases improve code generation for existing calls
to __feraiseexcept in some code on some architectures).  Where there
are dummy feraiseexcept macros (on architectures without
floating-point exceptions support, to avoid compile errors from
references to undefined FE_* macros), corresponding dummy
__feraiseexcept macros are added.  And on x86, to ensure
__feraiseexcept calls still get inlined, the inline function in
bits/fenv.h is refactored so that most of it can be reused in an
inline __feraiseexcept in a separate include/bits/fenv.h.

Calls are changed in C90/UNIX98 functions, but generally not in
functions missing from those standards.  They are also changed in
libc_fe* functions (on the basis that those might be used in any libm
function), and in feupdateenv (on the same basis - may be used, via
default libc_*, in any libm function - of course feupdateenv will need
changing to __feupdateenv in a subsequent patch to make that fully
namespace-clean).

No __feraiseexcept is added corresponding to the feraiseexcept in
powerpc bits/fenvinline.h, because that macro definition is
conditional on !defined __NO_MATH_INLINES, and glibc libm is built
with -D__NO_MATH_INLINES, so changing internal calls to use
__feraiseexcept should make no difference.

Tested for x86_64 (testsuite; the only change in disassembly of
installed shared libraries is a slight code reordering in clog10, of
no apparent significance).  Also tested for MIPS, where (in the
configuration tested) it eliminates math.h linknamespace failures for
n32 and n64 (some for o32 remain because of other issues).

	[BZ #17723]
	* include/fenv.h (__feraiseexcept): Use libm_hidden_proto.
	* math/fraiseexcpt.c (__feraiseexcept): Use libm_hidden_def.
	* sysdeps/aarch64/fpu/fraiseexcpt.c (feraiseexcept): Rename to
	__feraiseexcept and define as weak alias of __feraiseexcept.  Use
	libm_hidden_weak.
	* sysdeps/arm/fraiseexcpt.c (feraiseexcept): Likewise.
	* sysdeps/hppa/fpu/fraiseexcpt.c (feraiseexcept): Likewise.
	* sysdeps/i386/fpu/fraiseexcpt.c (__feraiseexcept): Use
	libm_hidden_def.
	* sysdeps/ia64/fpu/fraiseexcpt.c (feraiseexcept): Rename to
	__feraiseexcept and define as weak alias of __feraiseexcept.  Use
	libm_hidden_weak.
	* sysdeps/m68k/coldfire/fpu/fraiseexcpt.c (feraiseexcept):
	Likewise.
	* sysdeps/microblaze/math_private.h (__feraiseexcept): New macro.
	* sysdeps/mips/fpu/fraiseexcpt.c (feraiseexcept): Rename to
	__feraiseexcept and define as weak alias of __feraiseexcept.  Use
	libm_hidden_weak.
	* sysdeps/powerpc/fpu/fraiseexcpt.c (__feraiseexcept): Use
	libm_hidden_def.
	* sysdeps/powerpc/nofpu/fraiseexcpt.c (__feraiseexcept): Likewise.
	* sysdeps/powerpc/powerpc32/e500/nofpu/fraiseexcpt.c
	(__feraiseexcept): Likewise.
	* sysdeps/s390/fpu/fraiseexcpt.c (feraiseexcept): Rename to
	__feraiseexcept and define as weak alias of __feraiseexcept.  Use
	libm_hidden_weak.
	* sysdeps/sh/sh4/fpu/fraiseexcpt.c (feraiseexcept): Likewise.
	* sysdeps/sparc/fpu/fraiseexcpt.c (__feraiseexcept): Use
	libm_hidden_def.
	* sysdeps/tile/math_private.h (__feraiseexcept): New macro.
	* sysdeps/unix/sysv/linux/alpha/fraiseexcpt.S (__feraiseexcept):
	Use libm_hidden_def.
	* sysdeps/x86_64/fpu/fraiseexcpt.c (__feraiseexcept): Use
	libm_hidden_def.
	(feraiseexcept): Define as weak not strong alias.  Use
	libm_hidden_weak.
	* sysdeps/x86/fpu/bits/fenv.h (__feraiseexcept_invalid_divbyzero):
	New inline function.  Factored out of ...
	(feraiseexcept): ... here.  Use __feraiseexcept_invalid_divbyzero.
	* sysdeps/x86/fpu/include/bits/fenv.h: New file.
	* math/e_scalb.c (invalid_fn): Call __feraiseexcept instead of
	feraiseexcept.
	* math/w_acos.c (__acos): Likewise.
	* math/w_asin.c (__asin): Likewise.
	* math/w_ilogb.c (__ilogb): Likewise.
	* math/w_j0.c (y0): Likewise.
	* math/w_j1.c (y1): Likewise.
	* math/w_jn.c (yn): Likewise.
	* math/w_log.c (__log): Likewise.
	* math/w_log10.c (__log10): Likewise.
	* sysdeps/aarch64/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/aarch64/fpu/math_private.h
	(libc_feupdateenv_test_aarch64): Likewise.
	* sysdeps/alpha/fpu/feupdateenv.c (__feupdateenv): Likewise.
	* sysdeps/arm/fenv_private.h (libc_feupdateenv_test_vfp): Likewise.
	* sysdeps/arm/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/ia64/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/m68k/fpu/feupdateenv.c (__feupdateenv): Likewise.
	* sysdeps/mips/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/powerpc/fpu/e_sqrt.c (__slow_ieee754_sqrt): Likewise.
	* sysdeps/s390/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/sh/sh4/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/sparc/fpu/feupdateenv.c (__feupdateenv): Likewise.
2014-12-30 17:08:09 +00:00
Wilco Dijkstra
9b47df5814 Call libc_fetestexcept_aarch64. 2014-12-22 17:14:54 +00:00
Wilco Dijkstra
97be3cacec Call libc_fesetround_aarch64. 2014-12-22 16:57:41 +00:00
Richard Earnshaw
aa76a5c701 [AArch64] Fix strchrnul clobbering v15 2014-12-10 09:54:09 +00:00
Siddhesh Poyarekar
a38484851a Remove IS_IN_rtld
Replace with IS_IN (rtld).  Generated code is unchanged on
x86_64.

        * elf/Makefile (CPPFLAGS-.os): Remove IS_IN_rtld.
        * elf/dl-open.c: Use IS_IN (rtld) instead of IS_IN_rtld.
        * elf/rtld-Rules: Likewise.
        * elf/setup-vdso.h: Likewise.
        * include/assert.h: Likewise.
        * include/bits/stdlib-float.h: Likewise.
        * include/errno.h: Likewise.
        * include/sys/stat.h: Likewise.
        * include/unistd.h: Likewise.
        * sysdeps/aarch64/setjmp.S: Likewise.
        * sysdeps/alpha/setjmp.S: Likewise.
        * sysdeps/arm/__longjmp.S: Likewise.
        * sysdeps/arm/aeabi_unwind_cpp_pr1.c: Likewise.
        * sysdeps/arm/setjmp.S: Likewise.
        * sysdeps/arm/sysdep.h: Likewise.
        * sysdeps/generic/_itoa.h: Likewise.
        * sysdeps/generic/dl-sysdep.h: Likewise.
        * sysdeps/generic/ldsodefs.h: Likewise.
        * sysdeps/i386/dl-tls.h: Likewise.
        * sysdeps/i386/setjmp.S: Likewise.
        * sysdeps/m68k/setjmp.c: Likewise.
        * sysdeps/mach/hurd/dl-execstack.c: Likewise.
        * sysdeps/mach/hurd/opendir.c: Likewise.
        * sysdeps/posix/getcwd.c: Likewise.
        * sysdeps/posix/opendir.c: Likewise.
        * sysdeps/posix/profil.c: Likewise.
        * sysdeps/powerpc/dl-procinfo.h: Likewise.
        * sysdeps/powerpc/powerpc32/fpu/__longjmp-common.S: Likewise.
        * sysdeps/powerpc/powerpc32/fpu/setjmp-common.S: Likewise.
        * sysdeps/powerpc/powerpc32/power4/multiarch/init-arch.h: Likewise.
        * sysdeps/powerpc/powerpc32/setjmp-common.S: Likewise.
        * sysdeps/powerpc/powerpc64/__longjmp-common.S: Likewise.
        * sysdeps/powerpc/powerpc64/setjmp-common.S: Likewise.
        * sysdeps/s390/dl-tls.h: Likewise.
        * sysdeps/s390/s390-32/setjmp.S: Likewise.
        * sysdeps/s390/s390-64/setjmp.S: Likewise.
        * sysdeps/sh/sh3/setjmp.S: Likewise.
        * sysdeps/sh/sh4/setjmp.S: Likewise.
        * sysdeps/unix/alpha/sysdep.h: Likewise.
        * sysdeps/unix/arm/sysdep.S: Likewise.
        * sysdeps/unix/i386/sysdep.S: Likewise.
        * sysdeps/unix/sysv/linux/aarch64/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/getcwd.c: Likewise.
        * sysdeps/unix/sysv/linux/hppa/nptl/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/i386/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/i386/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/ia64/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/ia64/setjmp.S: Likewise.
        * sysdeps/unix/sysv/linux/ia64/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/lowlevellock-futex.h: Likewise.
        * sysdeps/unix/sysv/linux/m68k/bits/m68k-vdso.h: Likewise.
        * sysdeps/unix/sysv/linux/m68k/m68k-helpers.S: Likewise.
        * sysdeps/unix/sysv/linux/microblaze/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/powerpc/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/powerpc/powerpc32/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/powerpc/powerpc64/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/s390/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/s390/s390-32/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/s390/s390-64/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/sh/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/sh/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/sparc/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/sparc/sparc32/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/sparc/sparc64/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/tile/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/tile/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/x86_64/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/x86_64/sysdep.h: Likewise.
        * sysdeps/unix/x86_64/sysdep.S: Likewise.
        * sysdeps/x86_64/setjmp.S: Likewise.
2014-11-24 11:41:48 +05:30
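
The mechanical change applied in each of the files above:

  /* Before */
  #ifdef IS_IN_rtld
    /* rtld-only code */
  #endif

  /* After */
  #if IS_IN (rtld)
    /* rtld-only code */
  #endif
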
Andrew Pinski
6d3db89b12 AArch64: Reformat inline-asm in elf_machine_load_address
This patch reformats the inline-asm in elf_machine_load_address so it is
easier to change only part of the inline-asm.  That is, it uses string
concatenation instead of string continuation.

Also document why this inline-asm works: it depends on the 32-bit
relocation being resolved at link time.

ChangeLog:

2014-11-21  Will Newton  <will.newton@linaro.org>
	    Andrew Pinski  <andrew.pinski@caviumnetworks.com>

	* sysdeps/aarch64/dl-machine.h (elf_machine_load_address):
	Refactor inline-asm.  Also add comment.
2014-11-21 14:45:11 +00:00
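
The formatting change in question, shown on a made-up asm statement
(not the actual dl-machine.h code):

  unsigned long addr;

  /* String continuation: one logical string, awkward to edit.  */
  asm ("adrp %0, _dl_start \n\
        add  %0, %0, :lo12:_dl_start" : "=r" (addr));

  /* String concatenation: each instruction is its own string.  */
  asm ("adrp %0, _dl_start\n"
       "add  %0, %0, :lo12:_dl_start"
       : "=r" (addr));
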
Will Newton
01194ba18d AArch64: Use ELF macros rather than Elf64 throughout
Using the macros for ELF types is required for adding ILP32 support.
In the standard AArch64 configuration this makes no difference to
the types used.

ChangeLog:

2014-11-21  Will Newton  <will.newton@linaro.org>
	    Andrew Pinski  <andrew.pinski@caviumnetworks.com>

	* sysdeps/aarch64/bits/link.h (la_aarch64_gnu_pltenter): Use
	ElfW macro instead of hardcoded Elf64 types.
	(la_aarch64_gnu_pltexit): Likewise.
	* sysdeps/aarch64/dl-machine.h
	(elf_machine_runtime_setup): Use ElfW(Addr).
2014-11-21 14:44:23 +00:00
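
ElfW() expands to the width that matches the configuration, so the same
source serves both LP64 and ILP32, for example:

  ElfW(Addr) load_addr;     /* Elf64_Addr on LP64, Elf32_Addr on ILP32 */
  const ElfW(Rela) *reloc;  /* likewise Elf64_Rela / Elf32_Rela */
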
Will Newton
8c230039a0 AArch64: Update relocations for ILP32
The latest version of the binutils ELF header defines a new set of
dynamic relocations for ILP32 and renames some to make the naming
more uniform.

ChangeLog:

2014-11-21  Will Newton  <will.newton@linaro.org>
	    Andrew Pinski  <andrew.pinski@caviumnetworks.com>

	* elf/elf.h (R_AARCH64_P32_ABS32, R_AARCH64_P32_COPY,
	R_AARCH64_P32_GLOB_DAT, R_AARCH64_P32_JUMP_SLOT,
	R_AARCH64_P32_RELATIVE, R_AARCH64_P32_TLS_DTPMOD,
	R_AARCH64_P32_TLS_DTPREL, R_AARCH64_P32_TLS_TPREL,
	R_AARCH64_P32_TLSDESC, R_AARCH64_P32_IRELATIVE): Define.
	(R_AARCH64_TLS_DTPMOD64): Rename to ...
	(R_AARCH64_TLS_DTPMOD): This.
	(R_AARCH64_TLS_DTPREL64): Rename to ...
	(R_AARCH64_TLS_DTPREL): This.
	(R_AARCH64_TLS_TPREL64): Rename to ...
	(R_AARCH64_TLS_TPREL): This.
	* sysdeps/aarch64/dl-machine.h (elf_machine_type_class): Update
	R_AARCH64_TLS_DTPMOD64, R_AARCH64_TLS_DTPREL64, and
	R_AARCH64_TLS_TPREL64.
	(elf_machine_rela): Likewise.
2014-11-21 14:43:16 +00:00