Commit Graph

394 Commits

Joseph Myers
70e2ba332f Do not include fenv_private.h in math_private.h.
Continuing the clean-up related to the catch-all math_private.h
header, this patch stops math_private.h from including fenv_private.h.
Instead, fenv_private.h is included directly from those users of
math_private.h that also used interfaces from fenv_private.h.  No
attempt is made to remove unused includes of math_private.h, but that
is a natural followup.

(However, since math_private.h sometimes defines optimized versions of
math.h interfaces or __* variants thereof, as well as defining its own
interfaces, I think it might make sense to get all those optimized
versions included from include/math.h, not requiring a separate header
at all, before eliminating unused math_private.h includes - that
avoids a file quietly becoming less-optimized if someone adds a call
to one of those interfaces without restoring a math_private.h include
to that file.)

There is still a pitfall that if code uses plain fe* and __fe*
interfaces, but only includes fenv.h and not fenv_private.h or (before
this patch) math_private.h, it will compile on platforms with
exceptions and rounding modes but not get the optimized versions (and
possibly not compile) on platforms without exception and rounding mode
support, making it easy to break the build for such platforms
accidentally.

I think it would be most natural to move the inlines / macros for fe*
and __fe* in the case of no exceptions and rounding modes into
include/fenv.h, so that all code including fenv.h with _ISOMAC not
defined automatically gets them.  Then fenv_private.h would be purely
the header for the libc_fe*, SET_RESTORE_ROUND etc. internal
interfaces and the risk of breaking the build on other platforms than
the one you tested on because of a missing fenv_private.h include
would be much reduced (and there would be some unused fenv_private.h
includes to remove along with unused math_private.h includes).

Tested for x86_64 and x86, and tested with build-many-glibcs.py that
installed stripped shared libraries are unchanged by this patch.
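
As an illustration of the new pattern, a file that uses the internal
rounding-mode helpers now includes <fenv_private.h> itself instead of
relying on math_private.h to pull it in.  A minimal sketch (function
name hypothetical):

	#include <fenv.h>
	#include <fenv_private.h>  /* no longer reached via math_private.h */

	double
	add_to_nearest (double x, double y)
	{
	  double r;
	  SET_RESTORE_ROUND (FE_TONEAREST);  /* save env, set rounding mode */
	  r = x + y;
	  return r;  /* the saved environment is restored on scope exit */
	}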

	* sysdeps/generic/math_private.h: Do not include <fenv_private.h>.
	* math/fromfp.h: Include <fenv_private.h>.
	* math/math-narrow.h: Likewise.
	* math/s_cexp_template.c: Likewise.
	* math/s_csin_template.c: Likewise.
	* math/s_csinh_template.c: Likewise.
	* math/s_ctan_template.c: Likewise.
	* math/s_ctanh_template.c: Likewise.
	* math/s_iseqsig_template.c: Likewise.
	* math/w_acos_compat.c: Likewise.
	* math/w_acosf_compat.c: Likewise.
	* math/w_acosl_compat.c: Likewise.
	* math/w_asin_compat.c: Likewise.
	* math/w_asinf_compat.c: Likewise.
	* math/w_asinl_compat.c: Likewise.
	* math/w_ilogb_template.c: Likewise.
	* math/w_j0_compat.c: Likewise.
	* math/w_j0f_compat.c: Likewise.
	* math/w_j0l_compat.c: Likewise.
	* math/w_j1_compat.c: Likewise.
	* math/w_j1f_compat.c: Likewise.
	* math/w_j1l_compat.c: Likewise.
	* math/w_jn_compat.c: Likewise.
	* math/w_jnf_compat.c: Likewise.
	* math/w_llogb_template.c: Likewise.
	* math/w_log10_compat.c: Likewise.
	* math/w_log10f_compat.c: Likewise.
	* math/w_log10l_compat.c: Likewise.
	* math/w_log2_compat.c: Likewise.
	* math/w_log2f_compat.c: Likewise.
	* math/w_log2l_compat.c: Likewise.
	* math/w_log_compat.c: Likewise.
	* math/w_logf_compat.c: Likewise.
	* math/w_logl_compat.c: Likewise.
	* sysdeps/aarch64/fpu/feholdexcpt.c: Likewise.
	* sysdeps/aarch64/fpu/fesetround.c: Likewise.
	* sysdeps/aarch64/fpu/fgetexcptflg.c: Likewise.
	* sysdeps/aarch64/fpu/ftestexcept.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_atan2.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_exp.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_exp2.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_gamma_r.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_jn.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_pow.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_remainder.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_sqrt.c: Likewise.
	* sysdeps/ieee754/dbl-64/gamma_product.c: Likewise.
	* sysdeps/ieee754/dbl-64/lgamma_neg.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_atan.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_fma.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_fmaf.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_llrint.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_llround.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_lrint.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_lround.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_nearbyint.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_sin.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_sincos.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_tan.c: Likewise.
	* sysdeps/ieee754/dbl-64/wordsize-64/s_lround.c: Likewise.
	* sysdeps/ieee754/dbl-64/wordsize-64/s_nearbyint.c: Likewise.
	* sysdeps/ieee754/dbl-64/x2y2m1.c: Likewise.
	* sysdeps/ieee754/float128/float128_private.h: Likewise.
	* sysdeps/ieee754/flt-32/e_gammaf_r.c: Likewise.
	* sysdeps/ieee754/flt-32/e_j1f.c: Likewise.
	* sysdeps/ieee754/flt-32/e_jnf.c: Likewise.
	* sysdeps/ieee754/flt-32/lgamma_negf.c: Likewise.
	* sysdeps/ieee754/flt-32/s_llrintf.c: Likewise.
	* sysdeps/ieee754/flt-32/s_llroundf.c: Likewise.
	* sysdeps/ieee754/flt-32/s_lrintf.c: Likewise.
	* sysdeps/ieee754/flt-32/s_lroundf.c: Likewise.
	* sysdeps/ieee754/flt-32/s_nearbyintf.c: Likewise.
	* sysdeps/ieee754/k_standardl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/e_expl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/e_gammal_r.c: Likewise.
	* sysdeps/ieee754/ldbl-128/e_j1l.c: Likewise.
	* sysdeps/ieee754/ldbl-128/e_jnl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/gamma_productl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/lgamma_negl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_fmal.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_llrintl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_llroundl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_lrintl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_lroundl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_nearbyintl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/x2y2m1l.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_expl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_j1l.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_jnl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/lgamma_negl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_fmal.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_llrintl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_llroundl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_lrintl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_lroundl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_rintl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/x2y2m1l.c: Likewise.
	* sysdeps/ieee754/ldbl-96/e_gammal_r.c: Likewise.
	* sysdeps/ieee754/ldbl-96/e_jnl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/gamma_productl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/lgamma_negl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_fma.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_fmal.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_llrintl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_llroundl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_lrintl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_lroundl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/x2y2m1l.c: Likewise.
	* sysdeps/powerpc/fpu/e_sqrt.c: Likewise.
	* sysdeps/powerpc/fpu/e_sqrtf.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_ceil.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_floor.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_nearbyint.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_round.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_roundeven.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_trunc.c: Likewise.
	* sysdeps/riscv/rvd/s_finite.c: Likewise.
	* sysdeps/riscv/rvd/s_fmax.c: Likewise.
	* sysdeps/riscv/rvd/s_fmin.c: Likewise.
	* sysdeps/riscv/rvd/s_fpclassify.c: Likewise.
	* sysdeps/riscv/rvd/s_isinf.c: Likewise.
	* sysdeps/riscv/rvd/s_isnan.c: Likewise.
	* sysdeps/riscv/rvd/s_issignaling.c: Likewise.
	* sysdeps/riscv/rvf/fegetround.c: Likewise.
	* sysdeps/riscv/rvf/feholdexcpt.c: Likewise.
	* sysdeps/riscv/rvf/fesetenv.c: Likewise.
	* sysdeps/riscv/rvf/fesetround.c: Likewise.
	* sysdeps/riscv/rvf/feupdateenv.c: Likewise.
	* sysdeps/riscv/rvf/fgetexcptflg.c: Likewise.
	* sysdeps/riscv/rvf/ftestexcept.c: Likewise.
	* sysdeps/riscv/rvf/s_ceilf.c: Likewise.
	* sysdeps/riscv/rvf/s_finitef.c: Likewise.
	* sysdeps/riscv/rvf/s_floorf.c: Likewise.
	* sysdeps/riscv/rvf/s_fmaxf.c: Likewise.
	* sysdeps/riscv/rvf/s_fminf.c: Likewise.
	* sysdeps/riscv/rvf/s_fpclassifyf.c: Likewise.
	* sysdeps/riscv/rvf/s_isinff.c: Likewise.
	* sysdeps/riscv/rvf/s_isnanf.c: Likewise.
	* sysdeps/riscv/rvf/s_issignalingf.c: Likewise.
	* sysdeps/riscv/rvf/s_nearbyintf.c: Likewise.
	* sysdeps/riscv/rvf/s_roundevenf.c: Likewise.
	* sysdeps/riscv/rvf/s_roundf.c: Likewise.
	* sysdeps/riscv/rvf/s_truncf.c: Likewise.
2018-09-03 21:09:04 +00:00
Paul Pluzhnikov
a6e8926f8d [BZ #20271] Add newlines in __libc_fatal calls. 2018-08-31 18:04:32 -07:00
Joseph Myers
ff6b24501f Split fenv_private.h out of math_private.h more consistently.
On some architectures, the parts of math_private.h relating to the
floating-point environment are in a separate file fenv_private.h
included from math_private.h.  As this is purely an
architecture-specific convention used by several architectures,
however, all such architectures still need their own math_private.h,
even if it has nothing to do beyond #include <fenv_private.h>; and
x86_64 has the peculiarity of including the i386 file directly instead
of having a shared file in sysdeps/x86.

This patch makes the fenv_private.h name an architecture-independent
convention in glibc.  The include of fenv_private.h from
math_private.h becomes architecture-independent (until callers are
updated to include fenv_private.h directly so the include from
math_private.h is no longer needed).  Some architecture math_private.h
headers are removed if no longer needed, or renamed to fenv_private.h
if all they define belongs in that header; architecture fenv_private.h
headers now do require #include_next <fenv_private.h>.  The i386
fenv_private.h file moves to sysdeps/x86/fpu/ to reflect how it is
actually shared with x86_64.  The generic math_private.h gets a new
include of <stdbool.h>, as needed for bool in some prototypes in that
header (previously that was indirectly included via include/fenv.h,
which now only gets included too late in math_private.h, after those
prototypes).

Tested for x86_64 and x86, and tested with build-many-glibcs.py that
installed stripped shared libraries are unchanged by the patch.
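
The resulting convention can be sketched for a hypothetical
architecture header (guard and file names illustrative):

	#ifndef FOO_FENV_PRIVATE_H
	#define FOO_FENV_PRIVATE_H 1

	#include <fenv.h>

	/* Architecture-optimized libc_fe* inlines go here...  */

	/* Pick up the generic definitions of anything not overridden.  */
	#include_next <fenv_private.h>

	#endif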

	* sysdeps/aarch64/fpu/fenv_private.h: New file.  Based on ....
	* sysdeps/aarch64/fpu/math_private.h: ... this file.  All contents
	moved to fenv_private.h except for ...
	(TOINT_INTRINSICS): Kept in math_private.h.
	(roundtoint): Likewise.
	(converttoint): Likewise.
	* sysdeps/arm/fenv_private.h: Change multiple-include guard to
	[ARM_FENV_PRIVATE_H].  Include next <fenv_private.h>.
	* sysdeps/arm/math_private.h: Remove.
	* sysdeps/generic/fenv_private.h: New file.  Contents moved from
	....
	* sysdeps/generic/math_private.h: ... this file.  Include
	<stdbool.h>.  Do not include <fenv.h> or <get-rounding-mode.h>.
	Include <fenv_private.h>.  Remove functions and macros moved to
	fenv_private.h.
	* sysdeps/i386/fpu/math_private.h: Remove.
	* sysdeps/mips/math_private.h: Move to ....
	* sysdeps/mips/fpu/fenv_private.h: ... here.  Change
	multiple-include guard to [MIPS_FENV_PRIVATE_H].  Remove
	[__mips_hard_float] conditional.  Include next <fenv_private.h>.
	* sysdeps/powerpc/fpu/fenv_private.h: Change multiple-include
	guard to [POWERPC_FENV_PRIVATE_H].  Include next <fenv_private.h>.
	* sysdeps/powerpc/fpu/math_private.h: Do not include
	<fenv_private.h>.
	* sysdeps/riscv/rvf/math_private.h: Move to ....
	* sysdeps/riscv/rvf/fenv_private.h: ... here.  Change
	multiple-include guard to [RISCV_FENV_PRIVATE_H].  Include next
	<fenv_private.h>.
	* sysdeps/sparc/fpu/fenv_private.h: Change multiple-include guard
	to [SPARC_FENV_PRIVATE_H].  Include next <fenv_private.h>.
	* sysdeps/sparc/fpu/math_private.h: Remove.
	* sysdeps/i386/fpu/fenv_private.h: Move to ....
	* sysdeps/x86/fpu/fenv_private.h: ... here.  Change
	multiple-include guard to [X86_FENV_PRIVATE_H].  Include next
	<fenv_private.h>.
	* sysdeps/x86_64/fpu/math_private.h: Do not include
	<sysdeps/i386/fpu/fenv_private.h>.
2018-08-28 20:48:49 +00:00
Joseph Myers
895ef79e04 Move EXCEPTION_ENABLE_SUPPORTED out of math-tests.h.
Continuing moving macros out of math-tests.h to smaller headers
following typo-proof conventions instead of using #ifndef, this patch
moves the EXCEPTION_ENABLE_SUPPORTED macro out to its own
math-tests-trap.h header.

Tested with build-many-glibcs.py.
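
A sketch of the typo-proof arrangement (the generic body is assumed
from the pre-patch math-tests.h): the generic header defines the macro
unconditionally, and an architecture overrides it by shadowing the
whole file, so a misspelled macro name fails to compile instead of
silently falling back to a default.

	/* sysdeps/generic/math-tests-trap.h (sketch).  */
	#ifndef MATH_TESTS_TRAP_H
	#define MATH_TESTS_TRAP_H 1

	/* Indicate whether exception traps can be enabled for TYPE.  */
	#define EXCEPTION_ENABLE_SUPPORTED(type) EXCEPTION_TESTS (type)

	#endif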

	* sysdeps/generic/math-tests-trap.h: New file.
	* sysdeps/generic/math-tests.h: Include <math-tests-trap.h>.
	(EXCEPTION_ENABLE_SUPPORTED): Do not define here.
	* sysdeps/aarch64/math-tests.h: Remove file.
	* sysdeps/arm/math-tests.h: Likewise.
	* sysdeps/riscv/math-tests.h: Likewise.
	* sysdeps/aarch64/math-tests-trap.h: New file.
	* sysdeps/arm/math-tests-trap.h: Likewise.
	* sysdeps/riscv/math-tests-trap.h: Likewise.
2018-08-24 19:18:16 +00:00
Siddhesh Poyarekar
436e4d5b96 [aarch64] Add an ASIMD variant of strlen for falkor
This variant of strlen uses vector loads and operations to reduce the
size of the code and also eliminate the non-ascii fallback.  This
works very well for falkor because of its two vector units and
efficient vector ops.  In the best case it reduces latency of cases in
bench-strlen by 48%, with gains throughout the benchmark.
strlen-walk also sees uniform gains in the 5%-15% range.

Overall the routine appears to work better than the stock one for falkor
regardless of the benchmark, length of string or cache state.

The same cannot be said of a53 and a72 though.  a53 performance was
greatly reduced and for a72 it was a bit of a mixed bag, slightly on the
negative side but I reckon it might be fast in some situations.
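
The selector presumably follows the usual aarch64 multiarch pattern; a
minimal sketch, assuming the IS_FALKOR check and the midr variable set
up by INIT_ARCH in init-arch.h:

	/* Sketch of sysdeps/aarch64/multiarch/strlen.c.  */
	extern __typeof (strlen) __strlen_generic attribute_hidden;
	extern __typeof (strlen) __strlen_asimd attribute_hidden;

	libc_ifunc (strlen,
		    (IS_FALKOR (midr) ? __strlen_asimd : __strlen_generic));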

	* sysdeps/aarch64/strlen.S (__strlen): Rename to STRLEN.
	[!STRLEN](STRLEN): Set to __strlen.
	* sysdeps/aarch64/multiarch/strlen.c: New file.
	* sysdeps/aarch64/multiarch/strlen_generic.S: Likewise.
	* sysdeps/aarch64/multiarch/strlen_asimd.S: Likewise.
	* sysdeps/aarch64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Add strlen.
	* sysdeps/aarch64/multiarch/Makefile (sysdep_routines): Add
	strlen_generic and strlen_asimd.

Reviewed-By: szabolcs.nagy@arm.com
CC: pinskia@gmail.com
2018-08-15 23:01:33 +05:30
Wilco Dijkstra
599cf39766 Improve performance of sinf and cosf
The second patch improves performance of sinf and cosf using the same
algorithms and polynomials.  The returned values are identical to sincosf
for the same input.  ULP definitions for AArch64 and x64 are updated.

sinf/cosf throughput gains on Cortex-A72:
* |x| < 0x1p-12 : 1.2x
* |x| < M_PI_4  : 1.8x
* |x| < 2 * M_PI: 1.7x
* |x| < 120.0   : 2.3x
* |x| < Inf     : 3.0x

	* NEWS: Mention sinf, cosf, sincosf.
	* sysdeps/aarch64/libm-test-ulps: Update ULP for sinf, cosf, sincosf.
	* sysdeps/x86_64/fpu/libm-test-ulps: Update ULP for sinf and cosf.
	* sysdeps/x86_64/fpu/multiarch/s_sincosf-fma.c: Add definitions of
	constants rather than including generic sincosf.h.
	* sysdeps/x86_64/fpu/s_sincosf_data.c: Remove.
	* sysdeps/ieee754/flt-32/s_cosf.c (cosf): Rewrite.
	* sysdeps/ieee754/flt-32/s_sincosf.h (reduced_sin): Remove.
	(reduced_cos): Remove.
	(sinf_poly): New function.
	* sysdeps/ieee754/flt-32/s_sinf.c (sinf): Rewrite.
2018-08-14 10:45:59 +01:00
Szabolcs Nagy
43cfdf8f48 Clean up converttoint handling and document the semantics
This patch currently only affects aarch64.

The roundtoint and converttoint internal functions are only called with small
values, so 32 bit result is enough for converttoint and it is a signed int
conversion so the return type is changed to int32_t.

The original idea was to help the compiler keeping the result in uint64_t,
then it's clear that no sign extension is needed and there is no accidental
undefined or implementation defined signed int arithmetics.

But it turns out gcc does a good job with inlining, so changing the type has
no overhead and the semantics of the conversion are less surprising this way.
Since we want to allow the asuint64 (x + 0x1.8p52) style conversion, the top
bits were never usable and the existing code ensures that only the bottom
32 bits of the conversion result are used.

On aarch64 the neon intrinsics (which round ties to even) are changed to
round and lround (which round ties away from zero); this does not affect the
results in a significant way, but is more portable (it relies on round and
lround being inlined, which works with -fno-math-errno).

The TOINT_SHIFT and TOINT_RINT macros were removed; only the separate code
paths for TOINT_INTRINSICS and !TOINT_INTRINSICS are kept.
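
The asuint64 (x + 0x1.8p52) conversion described above can be sketched
in portable C; it is valid only for the small |x| these helpers see,
and only the bottom 32 bits of the bit pattern carry the result:

	#include <stdint.h>
	#include <string.h>

	static inline uint64_t
	asuint64 (double x)
	{
	  uint64_t u;
	  memcpy (&u, &x, sizeof u);	/* bit pattern of x */
	  return u;
	}

	static inline int32_t
	converttoint_sketch (double x)
	{
	  /* Adding 0x1.8p52 leaves the rounded integer (two's complement
	     for negative values) in the low bits of the significand.  */
	  return (int32_t) asuint64 (x + 0x1.8p52);
	}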

	* sysdeps/aarch64/fpu/math_private.h (roundtoint): Use round.
	(converttoint): Use lround.
	* sysdeps/ieee754/flt-32/math_config.h (roundtoint): Declare and
	document the semantics when TOINT_INTRINSICS is set.
	(converttoint): Likewise.
	(TOINT_RINT): Remove.
	(TOINT_SHIFT): Remove.
	* sysdeps/ieee754/flt-32/e_expf.c (__expf): Remove the TOINT_RINT code
	path.
2018-08-10 17:23:16 +01:00
Siddhesh Poyarekar
be64b1946b [aarch64] Fix value of MIN_PAGE_SIZE for testing
MIN_PAGE_SIZE is normally set to 4096 but for testing it can be set to
16 so that it exercises the page crossing code for every misaligned
access.  The value was set to 15, which is obviously wrong, so fixed
as obvious and tested.
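
The value must be a power of two because the string routines mask
addresses with (MIN_PAGE_SIZE - 1) to detect page crossings; the
conditional amounts to:

	#ifdef TEST_PAGE_CROSS
	# define MIN_PAGE_SIZE 16	/* must be a power of two */
	#else
	# define MIN_PAGE_SIZE 4096
	#endif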

	* sysdeps/aarch64/strlen.S [TEST_PAGE_CROSS](MIN_PAGE_SIZE):
	Fix value.
2018-08-08 22:47:17 +05:30
Siddhesh Poyarekar
dce452dc52 Rename the glibc.tune namespace to glibc.cpu
The glibc.tune namespace is vaguely named since it is a 'tunable', so
give it a more specific name that describes what it refers to.  Rename
the tunable namespace to 'cpu' to more accurately reflect what it
encompasses.  Also rename glibc.tune.cpu to glibc.cpu.name since
glibc.cpu.cpu is weird.
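
In practice this only changes the spelling used with the
GLIBC_TUNABLES environment variable: a setting previously written as
GLIBC_TUNABLES=glibc.tune.cpu=generic is now written as
GLIBC_TUNABLES=glibc.cpu.name=generic (value illustrative).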

	* NEWS: Mention the change.
	* elf/dl-tunables.list: Rename tune namespace to cpu.
	* sysdeps/powerpc/dl-tunables.list: Likewise.
	* sysdeps/x86/dl-tunables.list: Likewise.
	* sysdeps/aarch64/dl-tunables.list: Rename tune.cpu to
	cpu.name.
	* elf/dl-hwcaps.c (_dl_important_hwcaps): Adjust.
	* elf/dl-hwcaps.h (GET_HWCAP_MASK): Likewise.
	* manual/README.tunables: Likewise.
	* manual/tunables.texi: Likewise.
	* sysdeps/powerpc/cpu-features.c: Likewise.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.c
	(init_cpu_features): Likewise.
	* sysdeps/x86/cpu-features.c: Likewise.
	* sysdeps/x86/cpu-features.h: Likewise.
	* sysdeps/x86/cpu-tunables.c: Likewise.
	* sysdeps/x86_64/Makefile: Likewise.
	* sysdeps/x86/dl-cet.c: Likewise.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2018-08-02 23:49:19 +05:30
Siddhesh Poyarekar
0aec4c1d18 aarch64,falkor: Use vector registers for memcpy
Vector registers perform better than scalar register pairs for copying
data so prefer them instead.  This results in a time reduction of over
50% (i.e. 2x speed improvement) for some smaller sizes for memcpy-walk.
Larger sizes show improvements of around 1% to 2%.  memcpy-random shows
a very small improvement, in the range of 1-2%.

	* sysdeps/aarch64/multiarch/memcpy_falkor.S (__memcpy_falkor):
	Use vector registers.
2018-06-29 22:45:59 +05:30
Siddhesh Poyarekar
ce76a5cb8d aarch64,falkor: Use vector registers for memmove
Vector registers perform much better for moves compared to pairs of
registers on falkor, so use them instead.  This results in a time
reduction of up to 50% (i.e. 2x improvement) for a lot of the smaller
sizes, i.e. up to 1K in memmove-walk.  Improvements for larger sizes are
smaller, at about 1%-2%.

	* sysdeps/aarch64/multiarch/memmove_falkor.S
	(__memcpy_falkor): Use vector registers.
2018-06-29 22:45:07 +05:30
Hongbo Zhang
fc2ba8037d aarch64: add HXT Phecda core memory operation ifuncs
Phecda is HXT semiconductor's CPU core, this patch adds memory operation
ifuncs for it: sharing the same optimized implementation with Qualcomm's
Falkor core.

2018-06-07  Minfeng Kang <minfeng.kang@hxt-semitech.com>
	    Hongbo Zhang <hongbo.zhang@linaro.org>

	* sysdeps/aarch64/multiarch/memcpy.c (libc_ifunc): reuse
	__memcpy_falkor for phecda core.
	* sysdeps/aarch64/multiarch/memmove.c (libc_ifunc): reuse
	__memmove_falkor for phecda core.
	* sysdeps/aarch64/multiarch/memset.c (libc_ifunc): reuse
	__memset_falkor for phecda core.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.c: add MIDR entry
	for phecda core.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.h (IS_PHECDA): add
	macro to identify phecda core.
2018-06-12 21:29:11 +05:30
H.J. Lu
67c0579669 Mark _init and _fini as hidden [BZ #23145]
_init and _fini are special functions provided by glibc for the linker to
define DT_INIT and DT_FINI in executables and shared libraries.  They
should never be put in dynamic symbol table.  This patch marks them as
hidden to remove them from dynamic symbol table.

Tested with build-many-glibcs.py.

	[BZ #23145]
	* elf/Makefile (tests-special): Add $(objpfx)check-initfini.out.
	($(all-built-dso:=.dynsym)): New target.
	(common-generated): Add $(all-built-dso:$(common-objpfx)%=%.dynsym).
	($(objpfx)check-initfini.out): New target.
	(generated): Add check-initfini.out.
	* scripts/check-initfini.awk: New file.
	* sysdeps/aarch64/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/alpha/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/arm/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/hppa/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/i386/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/ia64/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/m68k/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/microblaze/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/mips/mips32/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/mips/mips64/n32/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/mips/mips64/n64/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/nios2/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/powerpc/powerpc32/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/powerpc/powerpc64/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/s390/s390-32/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/s390/s390-64/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/sh/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/sparc/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/x86_64/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
2018-06-08 10:28:52 -07:00
Joseph Myers
8f145c7712 Remove sysdeps/aarch64/soft-fp directory.
As per <https://sourceware.org/ml/libc-alpha/2014-10/msg00369.html>,
there should not be separate sysdeps/<arch>/soft-fp directories when
those are used by all configurations that use sysdeps/<arch>, and,
more generally, should not be sysdeps/foo/Implies files pointing to a
subdirectory foo/bar.  This patch eliminates the
sysdeps/aarch64/soft-fp directory accordingly, merging its contents
into sysdeps/aarch64.

Tested with build-many-glibcs.py that installed stripped shared
libraries for aarch64 configurations are unchanged by this patch.

	* sysdeps/aarch64/Implies: Remove aarch64/soft-fp.
	* sysdeps/aarch64/Makefile [$(subdir) = math] (CPPFLAGS): Add
	-I../soft-fp.  Moved from ....
	* sysdeps/aarch64/soft-fp/Makefile: ... here.  Remove file.
	* sysdeps/aarch64/soft-fp/e_sqrtl.c: Move to ....
	* sysdeps/aarch64/e_sqrtl.c: ... here.
	* sysdeps/aarch64/soft-fp/sfp-machine.h: Move to ....
	* sysdeps/aarch64/sfp-machine.h: ... here.
2018-05-22 17:23:34 +00:00
Joseph Myers
b4d5b8b021 Do not include math-barriers.h in math_private.h.
This patch continues the math_private.h cleanup by stopping
math_private.h from including math-barriers.h and making the users of
the barrier macros include the latter header directly.  No attempt is
made to remove any math_private.h includes that are now unused, except
in strtod_l.c where that is done to avoid line number changes in
assertions, so that installed stripped shared libraries can be
compared before and after the patch.  (I think the floating-point
environment support in math_private.h should also move out - some
architectures already have fenv_private.h as an architecture-internal
header included from their math_private.h - and after moving that out
might be a better time to identify unused math_private.h includes.)

Tested for x86_64 and x86, and tested with build-many-glibcs.py that
installed stripped shared libraries are unchanged by the patch.

	* sysdeps/generic/math_private.h: Do not include
	<math-barriers.h>.
	* stdlib/strtod_l.c: Include <math-barriers.h> instead of
	<math_private.h>.
	* math/fromfp.h: Include <math-barriers.h>.
	* math/math-narrow.h: Likewise.
	* math/s_nextafter.c: Likewise.
	* math/s_nexttowardf.c: Likewise.
	* sysdeps/aarch64/fpu/s_llrint.c: Likewise.
	* sysdeps/aarch64/fpu/s_llrintf.c: Likewise.
	* sysdeps/aarch64/fpu/s_lrint.c: Likewise.
	* sysdeps/aarch64/fpu/s_lrintf.c: Likewise.
	* sysdeps/i386/fpu/s_nextafterl.c: Likewise.
	* sysdeps/i386/fpu/s_nexttoward.c: Likewise.
	* sysdeps/i386/fpu/s_nexttowardf.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_atan2.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_atanh.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_exp.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_exp2.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_j0.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_sqrt.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_expm1.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_fma.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_fmaf.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_log1p.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_nearbyint.c: Likewise.
	* sysdeps/ieee754/dbl-64/wordsize-64/s_nearbyint.c: Likewise.
	* sysdeps/ieee754/flt-32/e_atanhf.c: Likewise.
	* sysdeps/ieee754/flt-32/e_j0f.c: Likewise.
	* sysdeps/ieee754/flt-32/s_expm1f.c: Likewise.
	* sysdeps/ieee754/flt-32/s_log1pf.c: Likewise.
	* sysdeps/ieee754/flt-32/s_nearbyintf.c: Likewise.
	* sysdeps/ieee754/flt-32/s_nextafterf.c: Likewise.
	* sysdeps/ieee754/k_standardl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/e_asinl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/e_expl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/e_powl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_fmal.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_nearbyintl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_nextafterl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_nexttoward.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_nexttowardf.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_asinl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_fmal.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_nextafterl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_nexttoward.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_nexttowardf.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_rintl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/e_atanhl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/e_j0l.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_fma.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_fmal.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_nexttoward.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_nexttowardf.c: Likewise.
	* sysdeps/ieee754/ldbl-opt/s_nexttowardfd.c: Likewise.
	* sysdeps/m68k/m680x0/fpu/s_nextafterl.c: Likewise.
2018-05-11 15:11:38 +00:00
Siddhesh Poyarekar
db725a458e aarch64,falkor: Ignore prefetcher tagging for smaller copies
For smaller and medium sized copies, the effect of hardware
prefetching are not as dominant as instruction level parallelism.
Hence it makes more sense to load data into multiple registers than to
try and route them to the same prefetch unit.  This is also the case
for the loop exit where we are unable to latch on to the same prefetch
unit anyway so it makes more sense to have data loaded in parallel.

The performance results are a bit mixed with memcpy-random, with
numbers jumping between -1% and +3%, i.e. the numbers don't seem
repeatable.  memcpy-walk sees a 70% improvement (i.e. > 2x) for 128
bytes and that improvement reduces down as the impact of the tail copy
decreases in comparison to the loop.
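
The idea can be sketched in C terms (the real code is assembly): load
the tail through independent temporaries so the loads can issue in
parallel instead of serializing on one register and one prefetch
stream.

	#include <stdint.h>
	#include <string.h>

	static void
	copy_tail32 (unsigned char *dst, const unsigned char *src)
	{
	  uint64_t a, b, c, d;	/* separate registers, independent loads */
	  memcpy (&a, src, 8);       memcpy (&b, src + 8, 8);
	  memcpy (&c, src + 16, 8);  memcpy (&d, src + 24, 8);
	  memcpy (dst, &a, 8);       memcpy (dst + 8, &b, 8);
	  memcpy (dst + 16, &c, 8);  memcpy (dst + 24, &d, 8);
	}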

	* sysdeps/aarch64/multiarch/memcpy_falkor.S (__memcpy_falkor):
	Use multiple registers to copy data in loop tail.
2018-05-11 00:11:52 +05:30
Siddhesh Poyarekar
70c97f8493 aarch64,falkor: Ignore prefetcher hints for memmove tail
The tail of the copy loops are unable to train the falkor hardware
prefetcher because they load from a different base compared to the hot
loop.  In this case avoid serializing the instructions by loading them
into different registers.  Also peel the last iteration of the loop
into the tail (and have them use different registers) since it gives
better performance for medium sizes.

This results in performance improvements of between 3% and 20% over
the current falkor implementation for sizes between 128 bytes and 1K
on the memmove-walk benchmark, thus mostly covering the regressions
seen against the generic memmove.

	* sysdeps/aarch64/multiarch/memmove_falkor.S
	(__memmove_falkor): Use multiple registers to move data in
	loop tail.
2018-05-11 00:08:02 +05:30
Joseph Myers
9ed2e15ff4 Move math_opt_barrier, math_force_eval to separate math-barriers.h.
This patch continues cleaning up math_private.h by moving the
math_opt_barrier and math_force_eval macros to a separate header
math-barriers.h.

At present, those macros are inside a "#ifndef math_opt_barrier" in
math_private.h to allow architectures to override them.  With a separate
math-barriers.h header, no such #ifndef or #include_next is needed;
architectures just have their own alternative version of
math-barriers.h when providing their own optimized versions that avoid
going through memory unnecessarily.  The generic math-barriers.h has a
comment added to document these two macros.

In this patch, math_private.h is made to #include <math-barriers.h>,
so files using these macros do not need updating yet.  That is because
of uses of math_force_eval in math_check_force_underflow and
math_check_force_underflow_nonneg, which are still defined in
math_private.h.  Once those are moved out to a separate header, that
separate header can be made to include <math-barriers.h>, as can the
other files directly using these barrier macros, and then the include
of <math-barriers.h> from math_private.h can be removed.

Tested for x86_64 and x86.  Also tested with build-many-glibcs.py that
installed stripped shared libraries are unchanged by this patch.
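
For reference, the two macros being moved look like this in the
generic header (sketched from the pre-patch math_private.h; the "m"
asm constraints are what force the value through memory):

	#define math_opt_barrier(x)					\
	  ({ __typeof (x) __x = (x); __asm ("" : "+m" (__x)); __x; })
	#define math_force_eval(x)					\
	  ({ __typeof (x) __x = (x);					\
	     __asm __volatile ("" : : "m" (__x)); })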

	* sysdeps/generic/math-barriers.h: New file.
	* sysdeps/generic/math_private.h [!math_opt_barrier]
	(math_opt_barrier): Move to math-barriers.h.
	[!math_opt_barrier] (math_force_eval): Likewise.
	* sysdeps/aarch64/fpu/math-barriers.h: New file.
	* sysdeps/aarch64/fpu/math_private.h (math_opt_barrier): Move to
	math-barriers.h.
	(math_force_eval): Likewise.
	* sysdeps/alpha/fpu/math-barriers.h: New file.
	* sysdeps/alpha/fpu/math_private.h (math_opt_barrier): Move to
	math-barriers.h.
	(math_force_eval): Likewise.
	* sysdeps/x86/fpu/math-barriers.h: New file.
	* sysdeps/i386/fpu/fenv_private.h (math_opt_barrier): Move to
	math-barriers.h.
	(math_force_eval): Likewise.
	* sysdeps/m68k/m680x0/fpu/math_private.h: Move to....
	* sysdeps/m68k/m680x0/fpu/math-barriers.h: ... here.  Adjust
	multiple-include guard for rename.
	* sysdeps/powerpc/fpu/math-barriers.h: New file.
	* sysdeps/powerpc/fpu/math_private.h (math_opt_barrier): Move to
	math-barriers.h.
	(math_force_eval): Likewise.
2018-05-09 19:45:47 +00:00
Maciej W. Rozycki
10a446ddcc elf: Unify symbol address run-time calculation [BZ #19818]
Wrap symbol address run-time calculation into a macro and use it
throughout, replacing inline calculations.

There are a couple of variants, most of them different in a functionally
insignificant way.  Most calculations are right following RESOLVE_MAP,
at which point either the map or the symbol returned can be checked for
validity as the macro sets either both or neither.  In some places both
the symbol and the map has to be checked however.

My initial implementation therefore always checked both, however that
resulted in code larger by as much as 0.3%, as many places know from
elsewhere that no check is needed.  I have decided the size growth was
unacceptable.

Having looked closer I realized that it's the map that is the culprit.
Therefore I have modified LOOKUP_VALUE_ADDRESS to accept an additional
boolean argument telling it to access the map without checking it for
validity.  This in turn has brought quite nice results, with new code
actually being smaller for i686, and MIPS o32, n32 and little-endian n64
targets, unchanged in size for x86-64 and, unusually, marginally larger
for big-endian MIPS n64, as follows:

i686:
   text    data     bss     dec     hex filename
 152255    4052     192  156499   26353 ld-2.27.9000-base.so
 152159    4052     192  156403   262f3 ld-2.27.9000-elf-symbol-value.so
MIPS/o32/el:
   text    data     bss     dec     hex filename
 142906    4396     260  147562   2406a ld-2.27.9000-base.so
 142890    4396     260  147546   2405a ld-2.27.9000-elf-symbol-value.so
MIPS/n32/el:
   text    data     bss     dec     hex filename
 142267    4404     260  146931   23df3 ld-2.27.9000-base.so
 142171    4404     260  146835   23d93 ld-2.27.9000-elf-symbol-value.so
MIPS/n64/el:
   text    data     bss     dec     hex filename
 149835    7376     408  157619   267b3 ld-2.27.9000-base.so
 149787    7376     408  157571   26783 ld-2.27.9000-elf-symbol-value.so
MIPS/o32/eb:
   text    data     bss     dec     hex filename
 142870    4396     260  147526   24046 ld-2.27.9000-base.so
 142854    4396     260  147510   24036 ld-2.27.9000-elf-symbol-value.so
MIPS/n32/eb:
   text    data     bss     dec     hex filename
 142019    4404     260  146683   23cfb ld-2.27.9000-base.so
 141923    4404     260  146587   23c9b ld-2.27.9000-elf-symbol-value.so
MIPS/n64/eb:
   text    data     bss     dec     hex filename
 149763    7376     408  157547   2676b ld-2.27.9000-base.so
 149779    7376     408  157563   2677b ld-2.27.9000-elf-symbol-value.so
x86-64:
   text    data     bss     dec     hex filename
 148462    6452     400  155314   25eb2 ld-2.27.9000-base.so
 148462    6452     400  155314   25eb2 ld-2.27.9000-elf-symbol-value.so
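
The macro described above can be sketched as follows (modeled on the
commit description; see sysdeps/generic/ldsodefs.h for the exact
form):

	/* Run-time address of symbol REF in MAP.  MAP_SET says the caller
	   knows MAP is valid, letting LOOKUP_VALUE_ADDRESS skip its check;
	   SHN_ABS symbols are not relocated by the load address.  */
	#define SYMBOL_ADDRESS(map, ref, map_set)			\
	  ((ref) == NULL ? 0						\
	   : (__glibc_unlikely ((ref)->st_shndx == SHN_ABS) ? 0		\
	      : LOOKUP_VALUE_ADDRESS (map, map_set)) + (ref)->st_value)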

	[BZ #19818]
	* sysdeps/generic/ldsodefs.h (LOOKUP_VALUE_ADDRESS): Add `set'
	parameter.
	(SYMBOL_ADDRESS): New macro.
	[!ELF_FUNCTION_PTR_IS_SPECIAL] (DL_SYMBOL_ADDRESS): Use
	SYMBOL_ADDRESS for symbol address calculation.
	* elf/dl-runtime.c (_dl_fixup): Likewise.
	(_dl_profile_fixup): Likewise.
	* elf/dl-symaddr.c (_dl_symbol_address): Likewise.
	* elf/rtld.c (dl_main): Likewise.
	* sysdeps/aarch64/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/alpha/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/arm/dl-machine.h (elf_machine_rel): Likewise.
	(elf_machine_rela): Likewise.
	* sysdeps/hppa/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/hppa/dl-symaddr.c (_dl_symbol_address): Likewise.
	* sysdeps/i386/dl-machine.h (elf_machine_rel): Likewise.
	(elf_machine_rela): Likewise.
	* sysdeps/ia64/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/m68k/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/microblaze/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/mips/dl-machine.h (ELF_MACHINE_BEFORE_RTLD_RELOC):
	Likewise.
	(elf_machine_reloc): Likewise.
	(elf_machine_got_rel): Likewise.
	* sysdeps/mips/dl-trampoline.c (__dl_runtime_resolve): Likewise.
	* sysdeps/nios2/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/powerpc/powerpc32/dl-machine.h (elf_machine_rela):
	Likewise.
	* sysdeps/powerpc/powerpc64/dl-machine.h (elf_machine_rela):
	Likewise.
	* sysdeps/riscv/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/s390/s390-32/dl-machine.h (elf_machine_rela):
	Likewise.
	* sysdeps/s390/s390-64/dl-machine.h (elf_machine_rela):
	Likewise.
	* sysdeps/sh/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/sparc/sparc32/dl-machine.h (elf_machine_rela):
	Likewise.
	* sysdeps/sparc/sparc64/dl-machine.h (elf_machine_rela):
	Likewise.
	* sysdeps/tile/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/x86_64/dl-machine.h (elf_machine_rela): Likewise.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2018-04-04 23:09:37 +01:00
Wilco Dijkstra
19a8b9a300 [PATCH 1/7] sin/cos slow paths: avoid slow paths for small inputs
This series of patches removes the slow patchs from sin, cos and sincos.
Besides greatly simplifying the implementation, the new version is also much
faster for inputs up to PI (41% faster) and for large inputs needing range
reduction (27% faster).

ULP is ~0.55 with no errors found after testing 1.6 billion inputs across most
of the range with mpsin and mpcos.  The number of incorrectly rounded results
(i.e. ULP > 0.5) is at most ~2750 per million inputs between 0.125 and 0.5,
the average is ~850 per million between 0 and PI.

Tested on AArch64 and x86_64 with no regressions.

The first patch removes the slow paths for the cases where the input is small
and doesn't require range reduction.  Update ULP tables for sin, cos and sincos
on AArch64 and x86_64.
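
The removed pattern can be sketched as follows (helper names
hypothetical): the fast path produces a result with an error bound,
and the old code fell back to a multi-precision path whenever adding
that bound could change the rounded result.

	extern double do_sin (double x, double *cor);	/* fast path + bound */
	extern double slow_sin (double x);		/* mp fallback, removed */

	double
	sin_dispatch (double x)
	{
	  double cor;
	  double res = do_sin (x, &cor);
	  /* Before: return (res == res + 1.02 * cor) ? res : slow_sin (x);
	     After: always accept the ~0.55 ULP fast result.  */
	  return res;
	}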

	* sysdeps/aarch64/libm-test-ulps: Update ULP for sin, cos, sincos.
	* sysdeps/ieee754/dbl-64/s_sin.c (__sin): Remove slow paths for small
	inputs.
	(__cos): Likewise.
	* sysdeps/x86_64/fpu/libm-test-ulps: Update ULP for sin, cos, sincos.
2018-04-03 16:52:16 +01:00
Joseph Myers
ffec7b2740 Use x86_64 backtrace as generic version.
No glibc configuration uses the present debug/backtrace.c, whereas
several #include the x86_64 version.  The x86_64 version is
effectively a generic one (using _Unwind_Backtrace from libgcc, which
works much more reliably than the built-in functions used by
debug/backtrace.c).  This patch moves it to debug/backtrace.c and
removes all the #includes of the x86_64 version from other
architectures which are no longer required.

I do not know whether all the other architecture-specific backtrace
implementations that are based on _Unwind_Backtrace are required, or
whether, where their differences from the generic version do something
useful, suitable hooks could be added to the generic version to reduce
the duplication involved.

Tested with build-many-glibcs.py that installed stripped shared
libraries are unchanged by this patch.

	* sysdeps/x86_64/backtrace.c: Move to ....
	* debug/backtrace.c: ... here.
	* sysdeps/aarch64/backtrace.c: Remove file.
	* sysdeps/alpha/backtrace.c: Likewise.
	* sysdeps/hppa/backtrace.c: Likewise.
	* sysdeps/ia64/backtrace.c: Likewise.
	* sysdeps/mips/backtrace.c: Likewise.
	* sysdeps/nios2/backtrace.c: Likewise.
	* sysdeps/riscv/backtrace.c: Likewise.
	* sysdeps/sh/backtrace.c: Likewise.
	* sysdeps/tile/backtrace.c: Likewise.
2018-03-21 17:25:30 +00:00
Wilco Dijkstra
700593fdd7 Remove all target specific __ieee754_sqrt(f/l) inlines
Remove the now unused target-specific __ieee754_sqrt(f/l) inlines.
Also remove inlines of sqrt which are for really old GCC versions.
Removing these is desirable, under the general principle of leaving
such inlining to the compiler rather than trying to do it in installed
headers, especially when only very old compilers are affected.

Note that removing inlines for __ieee754_sqrt disables inlining in the
sqrt wrapper functions.  Given the sqrt function will typically only be
called for negative arguments, it doesn't matter whether the inlining
happens or not.

	* sysdeps/aarch64/fpu/math_private.h (__ieee754_sqrt): Remove.
	(__ieee754_sqrtf): Remove.
	* sysdeps/alpha/fpu/math_private.h (__ieee754_sqrt): Remove.
	(__ieee754_sqrtf): Remove.
	* sysdeps/generic/math-type-macros.h (M_SQRT): Use sqrt.
	* sysdeps/m68k/m680x0/fpu/mathimpl.h (__ieee754_sqrt): Remove.
	* sysdeps/powerpc/fpu/math_private.h (__ieee754_sqrt): Remove.
	(__ieee754_sqrtf): Remove.
	* sysdeps/s390/fpu/bits/mathinline.h: Remove file.
	* sysdeps/sparc/fpu/bits/mathinline.h (sqrt) Remove.
	(sqrtf): Remove.
	(sqrtl): Remove.
	(__ieee754_sqrt): Remove.
	(__ieee754_sqrtf): Remove.
	(__ieee754_sqrtl): Remove.
	* sysdeps/m68k/m680x0/fpu/mathimpl.h (__ieee754_sqrt): Remove.
	* sysdeps/x86/fpu/math_private.h (__ieee754_sqrt): Remove.
	* sysdeps/x86_64/fpu/math_private.h (__ieee754_sqrt): Remove.
	(__ieee754_sqrtf): Remove.
	(__ieee754_sqrtl): Remove.
2018-03-15 19:21:36 +00:00
Siddhesh Poyarekar
b47c3e7637 aarch64/strncmp: Use lsr instead of mov+lsr
A lsr can do what the mov and lsr did.
2018-03-15 08:06:21 +05:30
Siddhesh Poyarekar
d46f84de74 aarch64/strncmp: Unbreak builds with old binutils
Binutils 2.26.* and older do not support moves with shifted registers,
so use a separate shift instruction instead.
2018-03-14 18:51:05 +05:30
Siddhesh Poyarekar
7108f1f944 aarch64: Improve strncmp for mutually misaligned inputs
The mutually misaligned inputs on aarch64 are compared with a simple
byte copy, which is not very efficient.  Enhance the comparison
similar to strcmp by loading a double-word at a time.  The peak
performance improvement (i.e. 4k maxlen comparisons) due to this on
the strncmp microbenchmark is as follows:

falkor: 3.5x (up to 72% time reduction)
cortex-a73: 3.5x (up to 71% time reduction)
cortex-a53: 3.5x (up to 71% time reduction)

All mutually misaligned inputs from 16 bytes maxlen onwards show
upwards of 15% improvement and there is no measurable effect on the
performance of aligned/mutually aligned inputs.

	* sysdeps/aarch64/strncmp.S (count): New macro.
	(strncmp): Store misaligned length in SRC1 in COUNT.
	(mutual_align): Adjust.
	(misaligned8): Load dword at a time when it is safe.
2018-03-13 23:57:04 +05:30
Samuel Thibault
a5df0318ef hurd: add gscope support
* elf/dl-support.c [!THREAD_GSCOPE_IN_TCB] (_dl_thread_gscope_count):
Define variable.
* sysdeps/generic/ldsodefs.h [!THREAD_GSCOPE_IN_TCB] (struct
rtld_global): Add _dl_thread_gscope_count member.
* sysdeps/mach/hurd/tls.h: Include <atomic.h>.
[!defined __ASSEMBLER__] (THREAD_GSCOPE_GLOBAL, THREAD_GSCOPE_SET_FLAG,
THREAD_GSCOPE_RESET_FLAG, THREAD_GSCOPE_WAIT): Define macros.
* sysdeps/generic/tls.h: Document THREAD_GSCOPE_IN_TCB.
* sysdeps/aarch64/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/alpha/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/arm/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/hppa/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/i386/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/ia64/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/m68k/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/microblaze/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/mips/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/nios2/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/powerpc/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/riscv/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/s390/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/sh/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/sparc/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/tile/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/x86_64/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
2018-03-11 13:06:33 +01:00
Siddhesh Poyarekar
4e54d91863 aarch64: Fix branch target to loop16
I goofed up when changing the loop8 name to loop16 and missed out on
the branch instance.  Fixed and actually build tested this time.

	* sysdeps/aarch64/memcmp.S (more16): Fix branch target loop16.
2018-03-06 23:01:02 +05:30
Siddhesh Poyarekar
30a81dae5b aarch64: Optimized memcmp for medium to large sizes
This improved memcmp provides a fast path for compares up to 16 bytes
and then compares 16 bytes at a time, thus optimizing loads from both
sources.  The glibc memcmp microbenchmark retains performance (with an
error of ~1ns) for smaller compare sizes and reduces up to 31% of
execution time for compares up to 4K on the APM Mustang.  On Qualcomm
Falkor this improves to almost 48%, i.e. it is almost 2x improvement
for sizes of 2K and above.
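
The shape of the main loop in C terms (the real code is assembly):
compare 16 bytes per iteration and fall back to byte-level comparison
only once a mismatch is found.

	#include <stdint.h>
	#include <string.h>

	static int
	differs16 (const unsigned char *s1, const unsigned char *s2)
	{
	  uint64_t a0, a1, b0, b1;
	  memcpy (&a0, s1, 8);  memcpy (&a1, s1 + 8, 8);
	  memcpy (&b0, s2, 8);  memcpy (&b1, s2 + 8, 8);
	  return (a0 != b0) | (a1 != b1);	/* nonzero on any difference */
	}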

	* sysdeps/aarch64/memcmp.S: Widen comparison to 16 bytes at a
	time.
2018-03-06 19:22:40 +05:30
Siddhesh Poyarekar
6ca24c4348 aarch64/strcmp: fix misaligned loop jump target
I accidentally set the loop jump back label as misaligned8 instead of
do_misaligned.  The typo is harmless but it's always nice to not have
to unnecessarily execute those two instructions.

	* sysdeps/aarch64/strcmp.S (do_misaligned): Jump back to
	do_misaligned, not misaligned8.
2018-02-22 23:48:14 +05:30
Steve Ellcey
e9537dddc7 IFUNC for Cavium ThunderX2
* sysdeps/aarch64/multiarch/Makefile (sysdep_routines):
	Add memcpy_thunderx2.
	* sysdeps/aarch64/multiarch/ifunc-impl-list.c (MAX_IFUNC):
	Increment to 4.
	(__libc_ifunc_impl_list): Add __memcpy_thunderx2.
	* sysdeps/aarch64/multiarch/memcpy.c (libc_ifunc): Add IS_THUNDERX2
	and IS_THUNDERX2PA checks.
	* sysdeps/aarch64/multiarch/memcpy_thunderx.S (USE_THUNDERX2):
	Use macro to set name appropriately.
	(memcpy): Use USE_THUNDERX2 macro to modify prefetches.
	* sysdeps/aarch64/multiarch/memcpy_thunderx2.S: New file.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.h (IS_THUNDERX2PA):
	New macro.
	(IS_THUNDERX2): New macro.
2018-02-22 08:38:47 -08:00
Wilco Dijkstra
0c8a67a573 [AArch64] Fix include.
Fix include to use <>.

	* sysdeps/aarch64/fpu/fpu_control.h: Use <> in include.
2018-02-15 12:41:06 +00:00
Wilco Dijkstra
c3d466cba1 Remove slow paths from pow
Remove the slow paths from pow.  Like several other double precision math
functions, pow is exactly rounded.  This is not required from math functions
and causes major overheads as it requires multiple fallbacks using higher
precision arithmetic if a result is close to 0.5ULP.  Ridiculous slowdowns
of up to 100000x have been reported when the highest precision path triggers.

All GLIBC math tests pass on AArch64 and x64 (with ULP of pow set to 1).
The worst case error is ~0.506ULP.  A simple test over a few hundred million
values shows pow is 10% faster on average.  This fixes BZ #13932.

	[BZ #13932]
	* sysdeps/ieee754/dbl-64/uexp.h (err_1): Remove.
	* benchtests/pow-inputs: Update comment for slow path cases.
	* manual/probes.texi (slowpow_p10): Delete removed probe.
	(slowpow_p10): Likewise.
	* math/Makefile: Remove halfulp.c and slowpow.c.
	* sysdeps/aarch64/libm-test-ulps: Set ULP of pow to 1.
	* sysdeps/generic/math_private.h (__exp1): Remove error argument.
	(__halfulp): Remove.
	(__slowpow): Remove.
	* sysdeps/i386/fpu/halfulp.c: Delete file.
	* sysdeps/i386/fpu/slowpow.c: Likewise.
	* sysdeps/ia64/fpu/halfulp.c: Likewise.
	* sysdeps/ia64/fpu/slowpow.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_exp.c (__exp1): Remove error argument,
	improve comments and add error analysis.
	* sysdeps/ieee754/dbl-64/e_pow.c (__ieee754_pow): Add error analysis.
	(power1): Remove function.
	(log1): Remove error argument, add error analysis.
	(my_log2): Remove function.
	* sysdeps/ieee754/dbl-64/halfulp.c: Delete file.
	* sysdeps/ieee754/dbl-64/slowpow.c: Likewise.
	* sysdeps/m68k/m680x0/fpu/halfulp.c: Likewise.
	* sysdeps/m68k/m680x0/fpu/slowpow.c: Likewise.
	* sysdeps/powerpc/power4/fpu/Makefile: Remove CPPFLAGS-slowpow.c.
	* sysdeps/x86_64/fpu/libm-test-ulps: Set ULP of pow to 1.
	* sysdeps/x86_64/fpu/multiarch/Makefile: Remove slowpow-fma.c,
	slowpow-fma4.c, halfulp-fma.c, halfulp-fma4.c.
	* sysdeps/x86_64/fpu/multiarch/e_pow-fma.c (__slowpow): Remove define.
	* sysdeps/x86_64/fpu/multiarch/e_pow-fma4.c (__slowpow): Likewise.
	* sysdeps/x86_64/fpu/multiarch/halfulp-fma.c: Delete file.
	* sysdeps/x86_64/fpu/multiarch/halfulp-fma4.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/slowpow-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/slowpow-fma4.c: Likewise.
2018-02-12 10:47:09 +00:00
Wilco Dijkstra
4f5b921eb9 [AArch64] Fix testsuite error due to fpsr/fscr change
Add features.h include for __GNUC_PREREQ.

	* sysdeps/aarch64/fpu/fpu_control.h: Add features.h to fix build error.
2018-02-10 15:02:51 +00:00
Wilco Dijkstra
3f8d9d58c5 [AArch64] Use builtins for fpcr/fpsr
Since GCC has support for accessing FPSR/FPCR, use them when possible
so that the asm instructions can be removed eventually.  Although GCC 5
supports the builtins, it has an optimization bug, so use them from GCC 6
onwards.
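
The resulting pattern in fpu_control.h is presumably along these lines
(a sketch; the asm fallback mirrors the pre-existing code, and the
features.h include needed for __GNUC_PREREQ is the follow-up fix
above):

	#if __GNUC_PREREQ (6, 0)
	# define _FPU_GETCW(fpcr) ((fpcr) = __builtin_aarch64_get_fpcr ())
	# define _FPU_SETCW(fpcr) __builtin_aarch64_set_fpcr (fpcr)
	#else
	# define _FPU_GETCW(fpcr) __asm__ ("mrs %0, fpcr" : "=r" (fpcr))
	# define _FPU_SETCW(fpcr) __asm__ ("msr fpcr, %0" : : "r" (fpcr))
	#endif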

	* sysdeps/aarch64/fpu/fpu_control.h: Use builtins for accessing
	FPCR/FPSR.
2018-02-09 16:59:23 +00:00
Siddhesh Poyarekar
84c94d2fd9 aarch64: Use the L() macro for labels in memcmp
The L() macro makes the assembly a bit more readable.

	* sysdeps/aarch64/memcmp.S: Use L() macro for labels.
2018-02-02 10:15:21 +05:30
Szabolcs Nagy
3d1d79283e aarch64: fix static pie enabled libc when main is in a shared library
In the static pie enabled libc, crt1.o uses the same position independent
code as rcrt1.o and crt1.o is used instead of Scrt1.o when -no-pie
executables are linked.  When main is not defined in the executable but
in a shared library, crt1.o is currently broken: it assumes main is local.
(glibc has a test for this but I missed it in my previous testing.)

To make both rcrt1.o and crt1.o happy with the same code, a wrapper is
introduced around main: with this crt1.o works with extern main symbol
while rcrt1.o does not depend on GOT relocations. (The change only
affects static pie enabled libc. Further simplification of start.S is
possible in the future by using the same approach for Scrt1.o too.)
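
In C terms the wrapper amounts to the following (the real change is in
start.S assembly):

	/* crt1.o can reference main via a local wrapper: the call to
	   __wrap_main needs no GOT entry, while the extern main may live
	   in the executable or in a shared library.  */
	extern int main (int argc, char **argv, char **envp);

	static int
	__wrap_main (int argc, char **argv, char **envp)
	{
	  return main (argc, argv, envp);
	}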

	* aarch64/start.S (_start): Use __wrap_main.
	(__wrap_main): New local symbol.
2018-01-12 18:10:03 +00:00
Joseph Myers
688903eb3e Update copyright dates with scripts/update-copyrights.
* All files with FSF copyright notices: Update copyright dates
	using scripts/update-copyrights.
	* locale/programs/charmap-kw.h: Regenerated.
	* locale/programs/locfile-kw.h: Likewise.
2018-01-01 00:32:25 +00:00
Szabolcs Nagy
8bfb461e20 aarch64: update libm-test-ulps
* sysdeps/aarch64/libm-test-ulps: Update.
2017-12-20 12:07:10 +00:00
Adhemerval Zanella
4e00196912 aarch64: fix memset with --disable-multi-arch
* sysdeps/aarch64/memset.S (MEMSET): Define.
2017-12-20 12:05:32 +00:00
Szabolcs Nagy
14d886edbd aarch64: fix start code for static pie
There are three flavors of the crt startup code:

1) crt1.o used for non-pie,
2) Scrt1.o used for dynamic linked pie (dynamic linker relocates),
3) rcrt1.o used for static linked pie (self relocation is needed)

In the --enable-static-pie case crt1.o is built with -DPIC and in case
of static linking it interposes _dl_relocate_static_pie in libc to
avoid self relocation.

Scrt1.o is built with -DPIC -DSHARED and it relies on GOT entries that
the static linker cannot relax and thus need relocation before the
start code is executed, so rcrt1.o needs separate implementation.

This implementation does not work for .text > 4G position independent
executables, which is fine since the toolchain does not support
-mcmodel=large with -fPIE.

Tests pass with ld/22269 and ld/22263 binutils bugs fixed.

	* sysdeps/aarch64/start.S (_start): Handle PIC && !SHARED case.
2017-12-18 10:07:07 +00:00
Siddhesh Poyarekar
2bce01ebba aarch64: Improve strcmp unaligned performance
Replace the simple byte-wise compare in the misaligned case with a
dword compare with page boundary checks in place.  For simplicity I've
chosen a 4K page boundary so that we don't have to query the actual
page size on the system.

This results in up to 3x improvement in performance in the unaligned
case on falkor and about 2.5x improvement on mustang as measured using
bench-strcmp.
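
The boundary check itself is cheap; in C terms (the real code is
assembly, with the page size fixed at 4K as described):

	#include <stdint.h>

	/* An 8-byte load from P is safe against faulting past the string
	   only if it does not cross a 4096-byte page boundary.  */
	static inline int
	dword_load_crosses_page (const char *p)
	{
	  return ((uintptr_t) p & 4095) > 4096 - 8;
	}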

	* sysdeps/aarch64/strcmp.S (misaligned8): Compare dword at a
	time whenever possible.
2017-12-13 18:50:27 +05:30
Siddhesh Poyarekar
4c1d801a59 aarch64: Avoid hidden symbols for memcpy/memmove into static binaries
The __GI_* symbol aliases for __memcpy_generic are unnecessary since
they're never used.  Add them only for libc.so to avoid PLT.  Maybe
some time in future we need to evaluate the relative cost of PLT vs
gains from multiarch memcpy implementations and take a call on whether
to drop this completely.

	* sysdeps/aarch64/multiarch/memcpy_generic.S (__GI_memcpy):
	Define only for libc.so.
2017-12-04 21:17:17 +05:30
Joseph Myers
15ff490014 Use libm_alias_float for aarch64.
Continuing the preparation for additional _FloatN / _FloatNx function
aliases, this patch makes aarch64 libm function implementations use
libm_alias_float to define function aliases.

Tested with build-many-glibcs.py for aarch64-linux-gnu that installed
stripped shared libraries are unchanged by the patch.

	* sysdeps/aarch64/fpu/s_ceilf.c: Include <libm-alias-float.h>.
	(ceilf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_floorf.c: Include <libm-alias-float.h>.
	(floorf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_fmaf.c: Include <libm-alias-float.h>.
	(fmaf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_fmaxf.c: Include <libm-alias-float.h>.
	(fmaxf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_fminf.c: Include <libm-alias-float.h>.
	(fminf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_llrintf.c: Include <libm-alias-float.h>.
	(llrintf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_llroundf.c: Include <libm-alias-float.h>.
	(llroundf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_lrintf.c: Include <libm-alias-float.h>.
	(lrintf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_lroundf.c: Include <libm-alias-float.h>.
	(lroundf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_nearbyintf.c: Include
	<libm-alias-float.h>.
	(nearbyintf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_rintf.c: Include <libm-alias-float.h>.
	(rintf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_roundf.c: Include <libm-alias-float.h>.
	(roundf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_truncf.c: Include <libm-alias-float.h>.
	(truncf): Define using libm_alias_float.
2017-11-28 00:55:42 +00:00
Joseph Myers
f07d2ec8c0 Use libm_alias_double for aarch64.
Continuing the preparation for additional _FloatN / _FloatNx function
aliases, this patch makes aarch64 libm function implementations use
libm_alias_double to define function aliases.

Tested with build-many-glibcs.py for aarch64-linux-gnu that installed
stripped shared libraries are unchanged by the patch.

	* sysdeps/aarch64/fpu/s_ceil.c: Include <libm-alias-double.h>.
	(ceil): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_floor.c: Include <libm-alias-double.h>.
	(floor): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_fma.c: Include <libm-alias-double.h>.
	(fma): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_fmax.c: Include <libm-alias-double.h>.
	(fmax): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_fmin.c: Include <libm-alias-double.h>.
	(fmin): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_llrint.c: Include <libm-alias-double.h>.
	(llrint): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_llround.c: Include <libm-alias-double.h>.
	(llround): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_lrint.c: Include <libm-alias-double.h>.
	(lrint): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_lround.c: Include <libm-alias-double.h>.
	(lround): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_nearbyint.c: Include <libm-alias-double.h>.
	(nearbyint): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_rint.c: Include <libm-alias-double.h>.
	(rint): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_round.c: Include <libm-alias-double.h>.
	(round): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_trunc.c: Include <libm-alias-double.h>.
	(trunc): Define using libm_alias_double.
2017-11-27 23:54:32 +00:00
Siddhesh Poyarekar
5a67c4fa01 aarch64: Optimized memset for falkor
The generic memset reads dczid_el0 on every memset.  This has a
significant impact on falkor for a range of sizes because reading
dczid_el0 is slow.

The DZP bit in the dczid_el0 register does not change dynamically, so
it is safe to read once during program startup.  With this patch
dczid_el0 is read once during startup and zva_size is cached.  This is
used to invoke the falkor-specific memset; the generic memset routine
remains unchanged.
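
In C terms the startup read amounts to the following (mask values per
the ARM ARM, matching the DCZID_* macros listed in the ChangeLog entry
below):

	#include <stdint.h>

	#define DCZID_DZP_MASK (1 << 4)	/* DC ZVA prohibited if set */
	#define DCZID_BS_MASK  0xf	/* log2 (words per ZVA block) */

	static uint64_t zva_size;	/* cached once at startup */

	static void
	init_zva_size (void)
	{
	  uint64_t dczid;
	  asm ("mrs %0, dczid_el0" : "=r" (dczid));
	  if ((dczid & DCZID_DZP_MASK) == 0)
	    zva_size = 4 << (dczid & DCZID_BS_MASK);	/* size in bytes */
	}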

The gains due to this are significant for falkor, with run time
reductions as high as 48%.  Here's a sample from the falkor tests:

Function: memset
Variant: walk
                      simple_memset	__memset_falkor	__memset_generic
=====================================================================
length=256, char=0:   139.96 (-698.28%)	   9.07 ( 48.26%)  17.53
length=257, char=0:   140.50 (-699.03%)	   9.53 ( 45.80%)  17.58
length=258, char=0:   140.96 (-703.95%)	   9.58 ( 45.36%)  17.53
length=259, char=0:   141.56 (-705.16%)	   9.53 ( 45.79%)  17.58
length=260, char=0:   142.15 (-710.76%)	   9.57 ( 45.39%)  17.53
length=261, char=0:   142.50 (-710.39%)	   9.53 ( 45.78%)  17.58
length=262, char=0:   142.97 (-715.09%)	   9.57 ( 45.42%)  17.54
length=263, char=0:   143.51 (-716.18%)	   9.53 ( 45.80%)  17.58
length=264, char=0:   143.93 (-720.55%)	   9.58 ( 45.39%)  17.54
length=265, char=0:   144.56 (-722.07%)	   9.53 ( 45.80%)  17.59
length=266, char=0:   144.98 (-726.42%)	   9.58 ( 45.42%)  17.54
length=267, char=0:   145.53 (-727.53%)	   9.53 ( 45.80%)  17.59
length=268, char=0:   146.25 (-731.81%)	   9.53 ( 45.79%)  17.58
length=269, char=0:   146.52 (-735.39%)	   9.53 ( 45.66%)  17.54
length=270, char=0:   146.97 (-735.81%)	   9.53 ( 45.80%)  17.58
length=271, char=0:   147.54 (-741.08%)	   9.58 ( 45.38%)  17.54
length=512, char=0:   268.26 (-1307.85%)  12.06 ( 36.71%)  19.05
length=513, char=0:   268.73 (-1273.89%)  13.56 ( 30.68%)  19.56
length=514, char=0:   269.31 (-1276.89%)  13.56 ( 30.68%)  19.56
length=515, char=0:   269.73 (-1279.05%)  13.56 ( 30.68%)  19.56
length=516, char=0:   270.34 (-1282.24%)  13.56 ( 30.67%)  19.56
length=517, char=0:   270.83 (-1284.71%)  13.56 ( 30.66%)  19.56
length=518, char=0:   271.20 (-1286.54%)  13.56 ( 30.67%)  19.56
length=519, char=0:   271.67 (-1288.67%)  13.65 ( 30.24%)  19.56
length=520, char=0:   272.14 (-1291.04%)  13.65 ( 30.22%)  19.56
length=521, char=0:   272.66 (-1293.69%)  13.65 ( 30.23%)  19.56
length=522, char=0:   273.14 (-1296.13%)  13.65 ( 30.20%)  19.56
length=523, char=0:   273.64 (-1298.75%)  13.65 ( 30.23%)  19.56
length=524, char=0:   274.34 (-1302.16%)  13.66 ( 30.20%)  19.57
length=525, char=0:   274.64 (-1297.78%)  13.56 ( 30.99%)  19.65
length=526, char=0:   275.20 (-1300.04%)  13.56 ( 31.01%)  19.66
length=527, char=0:   275.66 (-1302.86%)  13.56 ( 30.99%)  19.65
length=1024, char=0:  524.46 (-2169.75%)  20.12 ( 12.92%)  23.11
length=1025, char=0:  525.14 (-2124.63%)  21.62 (  8.40%)  23.61
length=1026, char=0:  525.59 (-2125.36%)  21.88 (  7.37%)  23.62
length=1027, char=0:  525.98 (-2127.14%)  21.62 (  8.46%)  23.62
length=1028, char=0:  526.68 (-2131.10%)  21.62 (  8.42%)  23.61
length=1029, char=0:  527.10 (-2131.70%)  21.79 (  7.73%)  23.62
length=1030, char=0:  527.54 (-2118.51%)  21.62 (  9.10%)  23.78
length=1031, char=0:  527.98 (-2136.37%)  21.62 (  8.43%)  23.61
length=1032, char=0:  528.70 (-2139.38%)  21.62 (  8.43%)  23.61
length=1033, char=0:  529.25 (-2124.37%)  21.62 (  9.11%)  23.79
length=1034, char=0:  529.48 (-2142.95%)  21.62 (  8.43%)  23.61
length=1035, char=0:  530.11 (-2145.13%)  21.62 (  8.44%)  23.61
length=1036, char=0:  530.76 (-2147.10%)  21.79 (  7.73%)  23.62
length=1037, char=0:  531.03 (-2149.45%)  21.62 (  8.42%)  23.61
length=1038, char=0:  531.64 (-2151.87%)  21.62 (  8.42%)  23.61
length=1039, char=0:  531.99 (-2151.63%)  21.80 (  7.75%)  23.63

	* sysdeps/aarch64/memset-reg.h: New file.
	* sysdeps/aarch64/memset.S: Use it.
	(__memset): Rename to MEMSET macro.
	[ZVA_MACRO]: Use zva_macro.
	* sysdeps/aarch64/multiarch/Makefile (sysdep_routines):
	Add memset_generic and memset_falkor.
	* sysdeps/aarch64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Add memset ifuncs.
	* sysdeps/aarch64/multiarch/init-arch.h (INIT_ARCH): New
	local variable zva_size.
	* sysdeps/aarch64/multiarch/memset.c: New file.
	* sysdeps/aarch64/multiarch/memset_generic.S: New file.
	* sysdeps/aarch64/multiarch/memset_falkor.S: New file.
	* sysdeps/aarch64/multiarch/rtld-memset.S: New file.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.c
	(DCZID_DZP_MASK): New macro.
	(DCZID_BS_MASK): Likewise.
	(init_cpu_features): Read and set zva_size.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.h
	(struct cpu_features): New member zva_size.
2017-11-20 18:25:04 +05:30
Adhemerval Zanella
58a813bf6e aarch64: Fix f{max,min}{f} build for GCC 4.9 and 5
GCC 4.9 and 5 do not generate a correct f{max,min}nm instruction for
__builtin_{fmax,fmin}{f} without -ffinite-math-only.  It is clearly a
compiler issue, since the instruction handles NaN and Inf correctly
and GCC 6+ does not show the problem.

We could backport a fix to GCC 5, raise the minimum required GCC
version for aarch64 (since the GCC 4.9 branch is now closed [1]),
and/or add a configure check for this issue.  However, I think
-ffinite-math-only is safe for these specific implementations and is
a simpler solution.
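
For reference, the affected implementations are single-builtin
wrappers (converted by the math-builtins patch further down this log),
so e.g. sysdeps/aarch64/fpu/s_fmax.c is essentially:

double
__fmax (double x, double y)
{
  return __builtin_fmax (x, y);
}

With -ffinite-math-only, GCC 4.9 and 5 compile this to the single
fmaxnm instruction, which itself handles NaN and Inf correctly.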

Checked on aarch64-linux-gnu with GCC 5.3.1.

	* sysdeps/aarch64/fpu/Makefile (CFLAGS-s_fmax.c, CFLAGS-s_fmaxf.c,
	CFLAGS-s_fmin.c, CFLAGS-s_fminf.c): New rule: add -ffinite-math-only.

[1] https://gcc.gnu.org/ml/gcc/2016-08/msg00010.html

Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2017-11-17 09:23:07 -02:00
Adhemerval Zanella
06be6368da nptl: Define __PTHREAD_MUTEX_{NUSERS_AFTER_KIND,USE_UNION}
This patch adds two new internal defines to set the internal
pthread_mutex_t layout required by the supported ABIs:

  1. __PTHREAD_MUTEX_NUSERS_AFTER_KIND, which controls whether the
     __nusers field is defined before or after __kind.  The preferred
     value for new ports is 0, which places __nusers before __kind.

  2. __PTHREAD_MUTEX_USE_UNION, which controls whether the internal
     __spins and __list members are placed inside a union for
     linuxthreads compatibility.  The preferred value for new ports is
     0, which defines both fields without a union (see the layout
     sketch below).

It fixes the wrong offset of the __kind value on x86_64-linux-gnu-x32.
Checked with a make check run-built-tests=no on all affected ABIs.
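
A hedged sketch of how the two defines steer the layout (the real
__pthread_mutex_s in sysdeps/nptl/bits/thread-shared-types.h carries
more fields and conditionals):

struct __pthread_mutex_s
{
  int __lock;
  unsigned int __count;
  int __owner;
#if !__PTHREAD_MUTEX_NUSERS_AFTER_KIND
  unsigned int __nusers;        /* Preferred: __nusers before __kind.  */
#endif
  int __kind;
#if __PTHREAD_MUTEX_NUSERS_AFTER_KIND
  unsigned int __nusers;
#endif
#if !__PTHREAD_MUTEX_USE_UNION
  short __spins;
  short __elision;
  __pthread_list_t __list;      /* Preferred: plain fields, no union.  */
#else
  union
  {
    struct { short __espins; short __eelision; } __elision_data;
    __pthread_slist_t __list;   /* linuxthreads-compatible layout.  */
  };
#endif
};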

	[BZ #22298]
	* nptl/allocatestack.c (allocate_stack): Check if
	__PTHREAD_MUTEX_HAVE_PREV is non-zero, instead of whether
	__PTHREAD_MUTEX_HAVE_PREV is defined.
	* nptl/descr.h (pthread): Likewise.
	* nptl/nptl-init.c (__pthread_initialize_minimal_internal):
	Likewise.
	* nptl/pthread_create.c (START_THREAD_DEFN): Likewise.
	* sysdeps/nptl/fork.c (__libc_fork): Likewise.
	* sysdeps/nptl/pthread.h (PTHREAD_MUTEX_INITIALIZER): Likewise.
	* sysdeps/nptl/bits/thread-shared-types.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New
	defines.
	(__pthread_internal_list): Check __PTHREAD_MUTEX_USE_UNION instead
	of __WORDSIZE for internal layout.
	(__pthread_mutex_s): Check __PTHREAD_MUTEX_NUSERS_AFTER_KIND instead
	of __WORDSIZE for the internal __nusers layout, and
	__PTHREAD_MUTEX_USE_UNION instead of __WORDSIZE to decide whether to
	use a union for the __spins and __list fields.
	(__PTHREAD_MUTEX_HAVE_PREV): Define also for __PTHREAD_MUTEX_USE_UNION
	case.
	* sysdeps/aarch64/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New
	defines.
	* sysdeps/alpha/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/arm/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/hppa/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/ia64/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/m68k/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/microblaze/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/mips/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/nios2/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/powerpc/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/s390/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/sh/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/sparc/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/tile/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/x86/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.

Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2017-11-07 09:48:41 -02:00
Adhemerval Zanella
dff91cd45e nptl: Add tests for internal pthread_mutex_t offsets
This patch adds a new build-time test to check the offsets of
user-visible internal fields.  Although currently the only field which
is statically initialized to a non-zero value is
pthread_mutex_t.__data.__kind, the tests also check the offsets of the
__kind, __spins, __elision (if supported), and __list internal
members.  An internal header (pthread-offsets.h) is added to each
major ABI with the reference values.
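
A minimal sketch of such a build-time check, assuming a C11
_Static_assert-based macro (the real ASSERT_PTHREAD_INTERNAL_OFFSET in
nptl/pthreadP.h may be spelled differently):

#include <stddef.h>
#include <pthread.h>

#define ASSERT_PTHREAD_INTERNAL_OFFSET(type, member, offset)	\
  _Static_assert (offsetof (type, member) == (offset),		\
		  "internal offset of " #member " has changed")

/* E.g. in __pthread_mutex_init, using the per-ABI reference value
   from pthread-offsets.h:  */
ASSERT_PTHREAD_INTERNAL_OFFSET (pthread_mutex_t, __data.__kind,
				__PTHREAD_MUTEX_KIND_OFFSET);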

Checked on x86_64-linux-gnu and with a build check for all affected
ABIs (aarch64-linux-gnu, alpha-linux-gnu, arm-linux-gnueabihf,
hppa-linux-gnu, i686-linux-gnu, ia64-linux-gnu, m68k-linux-gnu,
microblaze-linux-gnu, mips64-linux-gnu, mips64-n32-linux-gnu,
mips-linux-gnu, powerpc64le-linux-gnu, powerpc-linux-gnu,
s390-linux-gnu, s390x-linux-gnu, sh4-linux-gnu, sparc64-linux-gnu,
sparcv9-linux-gnu, tilegx-linux-gnu, tilegx-linux-gnu-x32,
tilepro-linux-gnu, x86_64-linux-gnu, and x86_64-linux-x32).

	* nptl/pthreadP.h (ASSERT_PTHREAD_STRING,
	ASSERT_PTHREAD_INTERNAL_OFFSET): New macro.
	* nptl/pthread_mutex_init.c (__pthread_mutex_init): Add build time
	checks for internal pthread_mutex_t offsets.
	* sysdeps/aarch64/nptl/pthread-offsets.h
	(__PTHREAD_MUTEX_NUSERS_OFFSET, __PTHREAD_MUTEX_KIND_OFFSET,
	__PTHREAD_MUTEX_SPINS_OFFSET, __PTHREAD_MUTEX_ELISION_OFFSET,
	__PTHREAD_MUTEX_LIST_OFFSET): New macro.
	* sysdeps/alpha/nptl/pthread-offsets.h: Likewise.
	* sysdeps/arm/nptl/pthread-offsets.h: Likewise.
	* sysdeps/hppa/nptl/pthread-offsets.h: Likewise.
	* sysdeps/i386/nptl/pthread-offsets.h: Likewise.
	* sysdeps/ia64/nptl/pthread-offsets.h: Likewise.
	* sysdeps/m68k/nptl/pthread-offsets.h: Likewise.
	* sysdeps/microblaze/nptl/pthread-offsets.h: Likewise.
	* sysdeps/mips/nptl/pthread-offsets.h: Likewise.
	* sysdeps/nios2/nptl/pthread-offsets.h: Likewise.
	* sysdeps/powerpc/nptl/pthread-offsets.h: Likewise.
	* sysdeps/s390/nptl/pthread-offsets.h: Likewise.
	* sysdeps/sh/nptl/pthread-offsets.h: Likewise.
	* sysdeps/sparc/nptl/pthread-offsets.h: Likewise.
	* sysdeps/tile/nptl/pthread-offsets.h: Likewise.
	* sysdeps/x86_64/nptl/pthread-offsets.h: Likewise.

Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2017-11-07 09:48:28 -02:00
Szabolcs Nagy
659ca26736 aarch64: optimize _dl_tlsdesc_dynamic fast path
Remove some load/store instructions from the dynamic tlsdesc resolver
fast path.  This gives around 20% faster tls access in dlopened shared
libraries (assuming glibc ran out of static tls space).

	* sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_dynamic): Optimize.
2017-11-03 14:50:55 +00:00
Szabolcs Nagy
91c5a366d8 aarch64: Remove barriers from TLS descriptor functions
Remove ldar synchronization and most lazy TLSDESC initialization
related code.

	* sysdeps/aarch64/dl-machine.h (elf_machine_runtime_setup): Remove
	DT_TLSDESC_GOT initialization.
	* sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_return_lazy): Remove.
	(_dl_tlsdesc_resolve_rela): Likewise.
	(_dl_tlsdesc_resolve_hold): Likewise.
	(_dl_tlsdesc_undefweak): Remove ldar.
	(_dl_tlsdesc_dynamic): Likewise.
	* sysdeps/aarch64/dl-tlsdesc.h (_dl_tlsdesc_return_lazy): Remove.
	(_dl_tlsdesc_resolve_rela): Likewise.
	(_dl_tlsdesc_resolve_hold): Likewise.
	* sysdeps/aarch64/tlsdesc.c (_dl_tlsdesc_resolve_rela_fixup): Remove.
	(_dl_tlsdesc_resolve_hold_fixup): Likewise.
	(_dl_tlsdesc_resolve_rela): Likewise.
	(_dl_tlsdesc_resolve_hold): Likewise.
2017-11-03 14:43:32 +00:00
Szabolcs Nagy
b7cf203b5c aarch64: Disable lazy symbol binding of TLSDESC
Always do TLS descriptor initialization at load time during relocation
processing to avoid barriers at every TLS access. In non-dlopened shared
libraries the overhead of tls access vs static global access is > 3x
bigger when lazy initialization is used (_dl_tlsdesc_return_lazy)
compared to bind-now (_dl_tlsdesc_return) so the barriers dominate tls
access performance.

TLSDESC relocs are in DT_JMPREL, which is processed at load time using
elf_machine_lazy_rel, which is only supposed to do lightweight
initialization using the DT_TLSDESC_PLT trampoline (the trampoline code
jumps to the entry point in DT_TLSDESC_GOT, which does the lazy tlsdesc
initialization at runtime).  This patch changes elf_machine_lazy_rel
on aarch64 to do the symbol binding and initialization as if
DF_BIND_NOW were set, replicating the non-lazy code path of
elf/do-rel.h.

The static linker could be changed to emit TLSDESC relocs in DT_REL*,
which are processed non-lazily, but the goal of this patch is to always
guarantee bind-now semantics, even if the binary was produced with an
old linker, so the barriers can be dropped in tls descriptor functions.

After this change the synchronizing ldar instructions can be dropped
as well as the lazy initialization machinery including the DT_TLSDESC_GOT
setup.
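
For reference, a TLS descriptor is a (function, argument) pair in the
GOT, as in sysdeps/aarch64/dl-tlsdesc.h; sketched below with a note on
why bind-now removes the need for the barriers:

#include <stddef.h>

struct tlsdesc
{
  ptrdiff_t (*entry) (struct tlsdesc *);  /* Called on every TLS access.  */
  void *arg;
};

/* Under lazy initialization a thread could observe the final entry
   pointer before the matching arg, so the entry functions had to load
   arg with an ldar (acquire).  With bind-now both fields are written
   during relocation processing, before user code runs, so plain loads
   suffice.  */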

I believe this should be done on all targets, including ones where no
barrier is needed for lazy initialization.  There is very little gain
in optimizing for a large number of symbolic tlsdesc relocations,
which is an extremely uncommon case.  And currently the tlsdesc
entries are only read-only protected with -z now, and some hardenings
against writable JUMPSLOT relocs don't work for TLSDESC, so they are a
security hazard.  (But to fix that the static linker has to be
changed.)

	* sysdeps/aarch64/dl-machine.h (elf_machine_lazy_rel): Do symbol
	binding and initialization non-lazily for R_AARCH64_TLSDESC.
2017-11-03 14:41:35 +00:00
Szabolcs Nagy
be080b6c14 aarch64: Add missing math Makefile for recent commit
Without -fno-math-errno, the builtins just do a call instead of
inlining a single instruction.
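
For example, after the builtins conversion below,
sysdeps/aarch64/fpu/e_sqrt.c is essentially:

double
__ieee754_sqrt (double d)
{
  return __builtin_sqrt (d);
}

With -fno-math-errno GCC inlines this to a single fsqrt instruction;
with errno-setting math semantics it has to emit a call to sqrt
instead, to cover the errno case for negative inputs.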
2017-10-23 15:34:36 +01:00
Michael Collison
5062680c60 aarch64: Implement math acceleration via builtins
This patch converts asm statements into builtins for AArch64.  As an
example for the file sysdeps/aarch64/fpu/s_ceil.c, we convert the
function from

double
__ceil (double x)
{
  double result;
  asm ("frintp\t%d0, %d1" :
       "=w" (result) : "w" (x) );
  return result;
}

into

double
__ceil (double x)
{
  return __builtin_ceil (x);
}

Tested on aarch64-linux-gnu with gcc-4.9.4 and gcc-6.

	* sysdeps/aarch64/fpu/e_sqrt.c (ieee754_sqrt): Replace asm statements
	with __builtin_sqrt.
	* sysdeps/aarch64/fpu/e_sqrtf.c (ieee754_sqrtf): Replace asm statements
	with __builtin_sqrtf.
	* sysdeps/aarch64/fpu/s_ceil.c (__ceil): Replace asm statements
	with __builtin_ceil.
	* sysdeps/aarch64/fpu/s_ceilf.c (__ceilf): Replace asm statements
	with __builtin_ceilf.
	* sysdeps/aarch64/fpu/s_floor.c (__floor): Replace asm statements
	with __builtin_floor.
	* sysdeps/aarch64/fpu/s_floorf.c (__floorf): Replace asm statements
	with __builtin_floorf.
	* sysdeps/aarch64/fpu/s_fma.c (__fma): Replace asm statements
	with __builtin_fma.
	* sysdeps/aarch64/fpu/s_fmaf.c (__fmaf): Replace asm statements
	with __builtin_fmaf.
	* sysdeps/aarch64/fpu/s_fmax.c (__fmax): Replace asm statements
	with __builtin_fmax.
	* sysdeps/aarch64/fpu/s_fmaxf.c (__fmaxf): Replace asm statements
	with __builtin_fmaxf.
	* sysdeps/aarch64/fpu/s_fmin.c (__fmin): Replace asm statements
	with __builtin_fmin.
	* sysdeps/aarch64/fpu/s_fminf.c (__fminf): Replace asm statements
	with __builtin_fminf.
	* sysdeps/aarch64/fpu/s_frint.c: Delete file.
	* sysdeps/aarch64/fpu/s_frintf.c: Delete file.
	* sysdeps/aarch64/fpu/s_llrint.c (__llrint): Replace asm statements
	with builtin_rint and conversion to int.
	* sysdeps/aarch64/fpu/s_llrintf.c (__llrintf): Likewise.
	* sysdeps/aarch64/fpu/s_llround.c (__llround): Replace asm statements
	with builtin_llround.
	* sysdeps/aarch64/fpu/s_llroundf.c (__llroundf): Likewise.
	* sysdeps/aarch64/fpu/s_lrint.c (__lrint): Replace asm statements
	with builtin_rint and conversion to long int.
	* sysdeps/aarch64/fpu/s_lrintf.c (__lrintf): Likewise.
	* sysdeps/aarch64/fpu/s_lround.c (__lround): Replace asm statements
	with builtin_lround.
	* sysdeps/aarch64/fpu/s_lroundf.c (__lroundf): Replace asm statements
	with builtin_lroundf.
	* sysdeps/aarch64/fpu/s_nearbyint.c (__nearbyint): Replace asm
	statements with __builtin_nearbyint.
	* sysdeps/aarch64/fpu/s_nearbyintf.c (__nearbyintf): Replace asm
	statements with __builtin_nearbyintf.
	* sysdeps/aarch64/fpu/s_rint.c (__rint): Replace asm statements
	with __builtin_rint.
	* sysdeps/aarch64/fpu/s_rintf.c (__rintf): Replace asm statements
	with __builtin_rintf.
	* sysdeps/aarch64/fpu/s_round.c (__round): Replace asm statements
	with __builtin_round.
	* sysdeps/aarch64/fpu/s_roundf.c (__roundf): Replace asm statements
	with __builtin_roundf.
	* sysdeps/aarch64/fpu/s_trunc.c (__trunc): Replace asm statements
	with __builtin_trunc.
	* sysdeps/aarch64/fpu/s_truncf.c (__truncf): Replace asm statements
	with __builtin_truncf.
	* sysdeps/aarch64/fpu/Makefile: Build e_sqrt[f].c with -fno-math-errno.
2017-10-23 10:32:56 +01:00
Szabolcs Nagy
a68ba2f3cd [AARCH64] Rewrite elf_machine_load_address using _DYNAMIC symbol
This patch rewrites the aarch64 elf_machine_load_address to use the
special _DYNAMIC symbol instead of _dl_start.

The static address of the _DYNAMIC symbol is stored in the first GOT
entry.
Here is the change which makes this solution work (part of binutils 2.24):
https://sourceware.org/ml/binutils/2013-06/msg00248.html

The i386 and x86_64 targets use the same method to do this.

The original implementation relies on the R_AARCH64_ABS32 relocation
being resolved at link time and on the static address fitting in 32
bits.  However, in LP64 the address is normally 64 bits.

Here is a C version which should be portable in all cases:
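
(A hedged sketch: the actual dl-machine.h additionally marks these
inline functions as unused and hidden.)

static inline ElfW(Addr)
elf_machine_dynamic (void)
{
  /* The first GOT entry holds the link-time address of _DYNAMIC.  */
  extern ElfW(Addr) _GLOBAL_OFFSET_TABLE_[] attribute_hidden;
  return _GLOBAL_OFFSET_TABLE_[0];
}

static inline ElfW(Addr)
elf_machine_load_address (void)
{
  /* Runtime address minus link-time address gives the load offset.  */
  extern ElfW(Dyn) _DYNAMIC[] attribute_hidden;
  return (ElfW(Addr)) &_DYNAMIC - elf_machine_dynamic ();
}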

	* sysdeps/aarch64/dl-machine.h (elf_machine_load_address): Use
	_DYNAMIC symbol to calculate load address.
2017-10-18 17:35:16 +01:00
Siddhesh Poyarekar
dd5bc7f1b3 aarch64: Optimized implementation of memmove for Qualcomm Falkor
This is an optimized memmove implementation for the Qualcomm Falkor
processor core.  Due to the way the falkor memcpy needs to be written,
code cannot easily be shared between memmove and memcpy as it is in
other aarch64 memcpy implementations, which is why this routine is
separate.  The underlying principle is the same as that of memcpy,
where it tries to use registers with the same lower 4 bits for
fetching the same stream, thus optimizing hardware prefetcher
performance.

The memcpy copy loop copies 64 bytes at a time using the same register
pair, since that is the way to train the hardware prefetcher on the
falkor core.  memmove cannot quite do that since it needs to avoid
overlaps, so it does the next best thing: a 32 byte loop with a 32
byte tail (prefetching a loop ahead to account for overlapping
locations), with register pairs that alias so that they hit the same
prefetcher.  Due to this difference in loop size the two currently
have to be separate implementations, but efforts are underway to have
memmove fall back to memcpy whenever it can without simply duplicating
all of the code.

Performance:

The routine fares around 20-25% better than the generic memmove for
most medium to large sizes (i.e. > 128 bytes) in the new walking
memmove benchmark (memmove-walk), with an unexplained regression
between 1K and 2K.  The minor regression is something worth looking
into on our side, but the remaining gains are significant enough that
we would like this included upstream while we look into the cause of
the regression.  Here is a snippet of the numbers as generated from
the microbenchmark by the compare_strings script.  Comparisons are
against __memmove_generic:

Function: memmove
Variant: walk
                                    __memmove_thunderx	__memmove_falkor	__memmove_generic
========================================================================================================================
<snip>
                        length=16384:  12508800.00 (  6.09%)	 11486800.00 ( 13.76%)	 13319600.00
                        length=16400:  13614200.00 ( -0.67%)	 11585000.00 ( 14.33%)	 13523600.00
                        length=16385:  13448400.00 (  0.10%)	 11732700.00 ( 12.84%)	 13461200.00
                        length=16399:  13594100.00 ( -0.22%)	 11859600.00 ( 12.57%)	 13564400.00
                        length=16386:  13211600.00 (  1.13%)	 11503800.00 ( 13.91%)	 13362400.00
                        length=16398:  13218600.00 (  2.12%)	 11573200.00 ( 14.30%)	 13504700.00
                        length=16387:  13510900.00 ( -0.37%)	 11744200.00 ( 12.76%)	 13461300.00
                        length=16397:  13603700.00 ( -0.15%)	 11878200.00 ( 12.55%)	 13583200.00
                        length=16388:  13461700.00 ( -0.13%)	 11558000.00 ( 14.03%)	 13444100.00
                        length=16396:  13517500.00 ( -0.03%)	 11561300.00 ( 14.45%)	 13513900.00
                        length=16389:  13534100.00 (  0.17%)	 11756800.00 ( 13.28%)	 13556900.00
                        length=16395:  13585600.00 (  0.11%)	 11791800.00 ( 13.30%)	 13601200.00
                        length=16390:  13480100.00 ( -0.13%)	 11685500.00 ( 13.20%)	 13462100.00
                        length=16394:  13529900.00 ( -0.23%)	 11549800.00 ( 14.43%)	 13498200.00
                        length=16391:  13595400.00 ( -0.26%)	 11768200.00 ( 13.22%)	 13560600.00
                        length=16393:  13567000.00 (  0.20%)	 11779700.00 ( 13.35%)	 13594700.00
                        length=32768:  71308800.00 ( -6.53%)	 50220800.00 ( 24.98%)	 66939200.00
                        length=32784:  72100800.00 (-11.55%)	 50114100.00 ( 22.47%)	 64636300.00
                        length=32769:  71767000.00 ( -7.10%)	 51238400.00 ( 23.54%)	 67010000.00
                        length=32783:  70113700.00 (-40.95%)	 51129000.00 ( -2.78%)	 49744400.00
                        length=32770:  71367600.00 ( -6.52%)	 50244700.00 ( 25.01%)	 67000900.00
                        length=32782:  64366700.00 (  4.71%)	 50101400.00 ( 25.83%)	 67545600.00
                        length=32771:  71440100.00 ( -6.51%)	 51263900.00 ( 23.57%)	 67074900.00
                        length=32781:  66993000.00 (  0.34%)	 51108300.00 ( 23.97%)	 67220300.00
                        length=32772:  71443900.00 (-60.50%)	 50062100.00 (-12.47%)	 44512600.00
                        length=32780:  71759100.00 ( -6.58%)	 50263200.00 ( 25.35%)	 67328600.00
                        length=32773:  71714900.00 (-33.21%)	 51076600.00 (  5.12%)	 53835400.00
                        length=32779:  71756900.00 ( -6.56%)	 51290800.00 ( 23.83%)	 67337800.00
                        length=32774:  59689300.00 (-34.55%)	 50068400.00 (-12.86%)	 44363300.00
                        length=32778:  71847500.00 (-18.20%)	 50084100.00 ( 17.61%)	 60786500.00
                        length=32775:  71599300.00 ( -6.54%)	 51278200.00 ( 23.70%)	 67204800.00
                        length=32777:  71862900.00 (-60.85%)	 51094000.00 (-14.36%)	 44677900.00
                        length=65536: 282848000.00 ( -6.60%)	199187000.00 ( 24.93%)	265325000.00
                        length=65552: 243285000.00 (-41.61%)	198512000.00 (-15.54%)	171805000.00
                        length=65537: 255415000.00 (-23.47%)	202499000.00 (  2.11%)	206858000.00
                        length=65551: 280122000.00 (-62.95%)	203349000.00 (-18.29%)	171911000.00
                        length=65538: 283676000.00 (-14.46%)	198368000.00 ( 19.96%)	247848000.00
                        length=65550: 275566000.00 (-51.76%)	198494000.00 ( -9.31%)	181581000.00
                        length=65539: 283699000.00 ( -6.58%)	203453000.00 ( 23.57%)	266195000.00
                        length=65549: 286572000.00 ( -6.65%)	202607000.00 ( 24.60%)	268712000.00
                        length=65540: 283710000.00 ( -6.59%)	199161000.00 ( 25.17%)	266160000.00
                        length=65548: 237573000.00 ( 11.48%)	198462000.00 ( 26.06%)	268395000.00
                        length=65541: 284150000.00 ( -6.58%)	203273000.00 ( 23.75%)	266600000.00
                        length=65547: 286250000.00 ( -6.70%)	202594000.00 ( 24.48%)	268263000.00
                        length=65542: 284167000.00 ( -6.60%)	199122000.00 ( 25.31%)	266584000.00
                        length=65546: 285656000.00 ( -6.59%)	198443000.00 ( 25.95%)	268002000.00
                        length=65543: 284600000.00 ( -6.58%)	203247000.00 ( 23.89%)	267030000.00
                        length=65545: 285665000.00 ( -6.40%)	202575000.00 ( 24.55%)	268472000.00
<snip>

	* sysdeps/aarch64/multiarch/Makefile (sysdep_routines): Add
	memmove_falkor.
	* sysdeps/aarch64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Likewise.
	* sysdeps/aarch64/multiarch/memmove.c: Likewise.
	* sysdeps/aarch64/multiarch/memmove_falkor.S: New file.
2017-10-05 22:20:23 +05:30
Szabolcs Nagy
db4f87bad4 aarch64: don't use MIN in dl-machine.h
MIN is used, but param.h may not be included, so expand its
single use inline.
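
That is, with MIN from <sys/param.h> being

#define MIN(a, b) (((a) < (b)) ? (a) : (b))

the single use in elf_machine_rela is written out as the equivalent
conditional expression.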

	* sysdeps/aarch64/dl-machine.h (elf_machine_rela): Expand MIN.
2017-10-04 17:49:38 +01:00
Szabolcs Nagy
b2f03cf3a4 AArch64: update libm-test-ulps
Update for new expf and logf.

	* sysdeps/aarch64/libm-test-ulps: Update.
2017-09-28 15:28:46 +01:00
Szabolcs Nagy
72aa623345 Optimized generic expf and exp2f with wrappers
Based on new expf and exp2f code from
https://github.com/ARM-software/optimized-routines/

with wrapper on aarch64:
expf reciprocal-throughput: 2.3x faster
expf latency: 1.7x faster
without wrapper on aarch64:
expf reciprocal-throughput: 3.3x faster
expf latency: 1.7x faster
without wrapper on aarch64:
exp2f reciprocal-throughput: 2.8x faster
exp2f latency: 1.3x faster
libm.so size on aarch64:
.text size: -152 bytes
.rodata size: -1740 bytes
expf/exp2f worst case nearest rounding error: 0.502 ulp
worst case non-nearest rounding error: 1 ulp

Error checks are inline and errno setting is in separate tail called
functions, but the wrappers are kept in this patch to handle the
_LIB_VERSION==_SVID_ case.  (So e.g. errno is set twice for expf calls
and once for __expf_finite calls on targets where the new code is used.)

Double precision arithmetic is used, which is expected to be faster
on most targets (including soft-float) than single precision, and it
is easier to get good precision results with it.

Const data is kept in a separate translation unit which complicates
maintenance a bit, but is expected to give good code for literal loads
on most targets and allows sharing data across expf, exp2f and powf.
(This data is disabled on i386, m68k and ia64 which have their own
expf, exp2f and powf code.)

Some details may need target specific tweaks:
- best convert and round to int operation in the arg reduction may be
different across targets.
- code was optimized on fma target, optimal polynomial eval may be
different without fma.
- gcc does not always generate good code for fp bit representation
access via unions or it may be inherently slow on some targets.

The libm-test-ulps will need adjustment because:
- The argument reduction ideally uses a round-to-nearest rint, but that
is not efficient on most targets, so the polynomial can get evaluated
on a wider interval in non-nearest rounding modes, making 1 ulp errors
common in that case.
- The polynomial is evaluated such that it may have a 1 ulp error on
negative tiny inputs with upward rounding.
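
A hedged sketch of the aarch64 hooks named in the ChangeLog below; the
bodies are illustrative assumptions, with the round/lround calls
expected to inline to single instructions under -fno-math-errno:

/* sysdeps/aarch64/fpu/math_private.h, sketched.  */
#include <math.h>
#include <stdint.h>

#define TOINT_INTRINSICS 1

/* Round to nearest integral value, result kept in floating point.  */
static inline double_t
roundtoint (double_t x)
{
  return round (x);
}

/* Convert an already-reduced value to int32_t by rounding.  */
static inline int32_t
converttoint (double_t x)
{
  return lround (x);
}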

	* math/Makefile (type-float-routines): Add math_errf and e_exp2f_data.
	* sysdeps/aarch64/fpu/math_private.h (TOINT_INTRINSICS): Define.
	(roundtoint, converttoint): Likewise.
	* sysdeps/ieee754/flt-32/e_expf.c: New implementation.
	* sysdeps/ieee754/flt-32/e_exp2f.c: New implementation.
	* sysdeps/ieee754/flt-32/e_exp2f_data.c: New file.
	* sysdeps/ieee754/flt-32/math_config.h: New file.
	* sysdeps/ieee754/flt-32/math_errf.c: New file.
	* sysdeps/ieee754/flt-32/t_exp2f.h: Remove.
	* sysdeps/i386/fpu/e_exp2f_data.c: New file.
	* sysdeps/i386/fpu/math_errf.c: New file.
	* sysdeps/ia64/fpu/e_exp2f_data.c: New file.
	* sysdeps/ia64/fpu/math_errf.c: New file.
	* sysdeps/m68k/m680x0/fpu/e_exp2f_data.c: New file.
	* sysdeps/m68k/m680x0/fpu/math_errf.c: New file.
2017-09-25 10:44:39 +01:00
Wilco Dijkstra
ca3a382ea3 Enable unwind info in libc-start.c and backtrace.c
Add unwind info to __libc_start_main so that unwinding continues one
extra level to _start.  Similarly add unwind info to backtrace.
Given many targets require this, do this in a general way.

	* csu/Makefile: Add -funwind-tables to libc-start.c.
	* debug/Makefile: Add -funwind-tables to backtrace.c.
	* sysdeps/aarch64/Makefile: Remove CFLAGS-backtrace.c.
	* sysdeps/arm/Makefile: Likewise.
	* sysdeps/i386/Makefile: Likewise.
	* sysdeps/m68k/Makefile: Likewise.
	* sysdeps/mips/Makefile: Likewise.
	* sysdeps/nios2/Makefile: Likewise.
	* sysdeps/sh/Makefile: Likewise.
	* sysdeps/sparc/Makefile: Likewise.
2017-09-19 15:07:58 +01:00
Wang Boshi
6cd380dd36 AArch64: use movz/movk instead of literal pools in start.S
eXecute-Only Memory (XOM) is a protection mechanism against some ROP
attacks.  XOM marks code as executable and unreadable, so accessing
any data in the code section, such as literal pools, causes a fault
under XOM.  The compiler can disable literal pools for C source files,
but not for assembly files, so I use movz/movk instead of literal pools
in start.S for XOM.

I add a MOVL macro with movz/movk instructions, like the movl
pseudo-instruction in armasm, and use the macro instead of literal
pools.

	* sysdeps/aarch64/start.S: Use MOVL instead of literal pools.
	* sysdeps/aarch64/sysdep.h (MOVL): Add MOVL macro.
2017-09-18 18:15:47 +01:00
Joseph Myers
5a80d39d0d Obsolete pow10 functions.
This patch obsoletes the pow10, pow10f and pow10l functions (makes
them into compat symbols, not available for new ports or static
linking).  The exp10 names for these functions are standardized (in TS
18661-4) and were added in the same glibc version (2.1) as pow10 so
source code can change to use them without any loss of portability.
Since pow10 is deliberately not provided for _Float128, only exp10,
this slightly simplifies moving to the new wrapper templates in the
!LIBM_SVID_COMPAT case, by avoiding needing to arrange for pow10,
pow10f and pow10l to be defined by those templates.
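
A hedged sketch of the mechanism in math/w_exp10_compat.c: the old
name remains available only to binaries linked against glibc versions
before 2.27:

#include <shlib-compat.h>

#if SHLIB_COMPAT (libm, GLIBC_2_1, GLIBC_2_27)
strong_alias (__exp10, __pow10)
compat_symbol (libm, __pow10, pow10, GLIBC_2_1);
#endif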

Tested for x86_64, and with build-many-glibcs.py.

	* manual/math.texi (pow10): Do not document.
	(pow10f): Likewise.
	(pow10l): Likewise.
	* math/bits/mathcalls.h [__USE_GNU] (pow10): Do not declare.
	* math/bits/math-finite.h [__USE_GNU] (pow10): Likewise.
	* math/libm-test-exp10.inc (pow10_test): Remove.
	(do_test): Do not call pow10.
	* math/w_exp10_compat.c (pow10): Make into compat symbol.
	[NO_LONG_DOUBLE] (pow10l): Likewise.
	* math/w_exp10f_compat.c (pow10f): Likewise.
	* math/w_exp10l_compat.c (pow10l): Likewise.
	* sysdeps/ia64/fpu/e_exp10.S: Include <shlib-compat.h>.
	(pow10): Make into compat symbol.
	* sysdeps/ia64/fpu/e_exp10f.S: Include <shlib-compat.h>.
	(pow10f): Make into compat symbol.
	* sysdeps/ia64/fpu/e_exp10l.S: Include <shlib-compat.h>.
	(pow10l): Make into compat symbol.
	* sysdeps/ieee754/ldbl-opt/Makefile (libnldbl-calls): Remove
	pow10.
	(CFLAGS-nldbl-pow10.c): Remove variable.
	* sysdeps/ieee754/ldbl-opt/nldbl-pow10.c: Remove file.
	* sysdeps/ieee754/ldbl-opt/w_exp10_compat.c (pow10l): Condition on
	[SHLIB_COMPAT (libm, GLIBC_2_1, GLIBC_2_27)].
	* sysdeps/ieee754/ldbl-opt/w_exp10l_compat.c (compat_symbol):
	Undefine and redefine.
	(pow10l): Make into compat symbol.
	* sysdeps/aarch64/libm-test-ulps: Remove pow10 ulps.
	* sysdeps/alpha/fpu/libm-test-ulps: Likewise.
	* sysdeps/arm/libm-test-ulps: Likewise.
	* sysdeps/hppa/fpu/libm-test-ulps: Likewise.
	* sysdeps/i386/fpu/libm-test-ulps: Likewise.
	* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise.
	* sysdeps/microblaze/libm-test-ulps: Likewise.
	* sysdeps/mips/mips32/libm-test-ulps: Likewise.
	* sysdeps/mips/mips64/libm-test-ulps: Likewise.
	* sysdeps/nios2/libm-test-ulps: Likewise.
	* sysdeps/powerpc/fpu/libm-test-ulps: Likewise.
	* sysdeps/powerpc/nofpu/libm-test-ulps: Likewise.
	* sysdeps/s390/fpu/libm-test-ulps: Likewise.
	* sysdeps/sh/libm-test-ulps: Likewise.
	* sysdeps/sparc/fpu/libm-test-ulps: Likewise.
	* sysdeps/tile/libm-test-ulps: Likewise.
	* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
2017-09-01 21:13:18 +00:00
Steve Ellcey
d9ff799a5b ILP32 math changes
* sysdeps/aarch64/fpu/s_llrint.c (OREG_SIZE): New macro.
	* sysdeps/aarch64/fpu/s_llround.c (OREG_SIZE): Likewise.
	* sysdeps/aarch64/fpu/s_llrintf.c (OREGS, IREGS): Remove.
	(IREG_SIZE, OREG_SIZE): New macros.
	* sysdeps/aarch64/fpu/s_llroundf.c: (OREGS, IREGS): Remove.
	(IREG_SIZE, OREG_SIZE): New macros.
	* sysdeps/aarch64/fpu/s_lrintf.c (IREGS): Remove.
	(IREG_SIZE): New macro.
	* sysdeps/aarch64/fpu/s_lroundf.c (IREGS): Remove.
	(IREG_SIZE): New macro.
	* sysdeps/aarch64/fpu/s_lrint.c (get-rounding-mode.h, stdint.h):
	New includes.
	(IREG_SIZE, OREG_SIZE): Initialize if not already set.
	(OREGS, IREGS): Set based on IREG_SIZE and OREG_SIZE.
	(__CONCATX): Handle exceptions correctly on large values that may
	set FE_INVALID.
	* sysdeps/aarch64/fpu/s_lround.c (IREG_SIZE, OREG_SIZE):
	Initialize if not already set.
	(OREGS, IREGS): Set based on IREG_SIZE and OREG_SIZE.
2017-08-31 13:38:11 -07:00
Steve Ellcey
9eee633b68 Change argument type passed to ifunc resolvers
* sysdeps/aarch64/dl-irel.h: (elf_ifunc_invoke): Change argument type
	in resolver call.
2017-08-31 10:34:55 -07:00
Florian Weimer
17e00cc69e elf: Remove internal_function attribute 2017-08-31 16:59:37 +02:00
Steve Ellcey
5a706f649d aarch64: Use PTR_REG macro to fix ILP32 bug and make code consistent
* sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_dynamic):
	Use PTR_REG macro in cmp instruction.
2017-08-22 16:22:05 -07:00
Wilco Dijkstra
922369032c [AArch64] Optimized memcmp.
This is an optimized memcmp for AArch64.  This is a complete rewrite
using a different algorithm.  The previous version split into cases
where both inputs were aligned, where the inputs were mutually aligned,
and where they were unaligned, using a byte loop for the latter.  The
new version combines all these cases, while small inputs of less than
8 bytes are handled separately.

This allows the main code to be sped up using unaligned loads since
there are now at least 8 bytes to be compared.  After the first 8 bytes,
align the first input.  This ensures each iteration does at most one
unaligned access and mutually aligned inputs behave as aligned.
After the main loop, process the last 8 bytes using unaligned accesses.

This improves performance of (mutually) aligned cases by 25% and
unaligned by >500% (yes >6 times faster) on large inputs.
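
A C sketch of the algorithm described above (illustrative only; the
real routine is hand-written assembly in sysdeps/aarch64/memcmp.S, and
little-endian byte order is assumed here):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Unaligned 64-bit load.  */
static inline uint64_t
load64 (const unsigned char *p)
{
  uint64_t v;
  memcpy (&v, p, sizeof v);
  return v;
}

/* Order two differing chunks by their first differing byte in memory
   order (little endian: byte-reverse, then compare numerically).  */
static inline int
cmp64 (uint64_t a, uint64_t b)
{
  return __builtin_bswap64 (a) < __builtin_bswap64 (b) ? -1 : 1;
}

int
memcmp_sketch (const void *s1, const void *s2, size_t n)
{
  const unsigned char *p1 = s1, *p2 = s2;
  uint64_t a, b;

  /* Small inputs of less than 8 bytes: plain byte loop.  */
  if (n < 8)
    {
      for (size_t i = 0; i < n; i++)
        if (p1[i] != p2[i])
          return p1[i] - p2[i];
      return 0;
    }

  /* First 8 bytes, possibly unaligned.  */
  a = load64 (p1);
  b = load64 (p2);
  if (a != b)
    return cmp64 (a, b);

  /* Align the first input; afterwards each iteration does at most one
     unaligned access, and mutually aligned inputs behave as aligned.  */
  size_t skip = 8 - ((uintptr_t) p1 & 7);
  p1 += skip;
  p2 += skip;
  n -= skip;

  while (n >= 8)
    {
      a = load64 (p1);
      b = load64 (p2);
      if (a != b)
        return cmp64 (a, b);
      p1 += 8;
      p2 += 8;
      n -= 8;
    }

  /* Last 8 bytes, unaligned, overlapping already-equal bytes.  */
  if (n > 0)
    {
      a = load64 (p1 + n - 8);
      b = load64 (p2 + n - 8);
      if (a != b)
        return cmp64 (a, b);
    }
  return 0;
}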

	* sysdeps/aarch64/memcmp.S (memcmp):
	Rewrite of optimized memcmp.
2017-08-10 17:00:38 +01:00
Siddhesh Poyarekar
0e02b5107e memcpy_falkor: Fix code style in comments 2017-08-09 12:57:59 +05:30
Siddhesh Poyarekar
36ada5f681 aarch64: Optimized memcpy for Qualcomm Falkor processor
This is an optimized implementation of the memcpy routine that gives a
significant gain in performance for all sizes of copies on the
Qualcomm Falkor processor.  A detailed rationale of the implementation
is written in a comment in the patch.

This implementation improves time for copies up to 128 bytes by up to
15% and for larger copies by up to 35% in the glibc
microbenchmark. The memcpy-random benchmark sees improvements in all
sizes in the range of 13%-18%.

Here are the full numbers extracted from the glibc microbenchmark
using the commands:

../benchtests/scripts/compare_strings.py benchtests/bench-memcpy.out \
		../benchtests/scripts/benchout_strings.schema.json \
		-base=__memcpy_generic length align1 align2

../benchtests/scripts/compare_strings.py benchtests/bench-memcpy-large.out \
		../benchtests/scripts/benchout_strings.schema.json \
		-base=__memcpy_generic length align1 align2

../benchtests/scripts/compare_strings.py benchtests/bench-memcpy-random.out \
		../benchtests/scripts/benchout_strings.schema.json \
		-base=__memcpy_generic max-size

Function: memcpy
__memcpy_thunderx       __memcpy_falkor __memcpy_generic
Variant: default
================================================================================
length=1,align1=0,align2=0:     33.59 (-115.00%)        15.62 (0.00%)   15.62
length=1,align1=0,align2=0:     16.41 (-10.53%) 14.06 (5.26%)   14.84
length=1,align1=0,align2=0:     14.84 (0.00%)   14.84 (0.00%)   14.84
length=1,align1=0,align2=0:     15.62 (-5.26%)  14.06 (5.26%)   14.84
length=2,align1=0,align2=0:     15.62 (-5.26%)  14.06 (5.26%)   14.84
length=2,align1=1,align2=0:     15.62 (-5.26%)  14.06 (5.26%)   14.84
length=2,align1=0,align2=1:     14.84 (0.00%)   14.06 (5.26%)   14.84
length=2,align1=1,align2=1:     14.84 (-5.56%)  14.06 (0.00%)   14.06
length=4,align1=0,align2=0:     14.06 (0.00%)   14.06 (0.00%)   14.06
length=4,align1=2,align2=0:     14.06 (-5.88%)  14.06 (-5.88%)  13.28
length=4,align1=0,align2=2:     14.06 (0.00%)   14.06 (0.00%)   14.06
length=4,align1=2,align2=2:     14.06 (-5.88%)  14.06 (-5.88%)  13.28
length=8,align1=0,align2=0:     14.84 (-5.56%)  13.28 (5.56%)   14.06
length=8,align1=3,align2=0:     14.06 (0.00%)   13.28 (5.56%)   14.06
length=8,align1=0,align2=3:     13.28 (0.00%)   13.28 (0.00%)   13.28
length=8,align1=3,align2=3:     13.28 (-6.25%)  13.28 (-6.25%)  12.50
length=16,align1=0,align2=0:    13.28 (0.00%)   13.28 (0.00%)   13.28
length=16,align1=4,align2=0:    13.28 (0.00%)   12.50 (5.88%)   13.28
length=16,align1=0,align2=4:    13.28 (0.00%)   13.28 (0.00%)   13.28
length=16,align1=4,align2=4:    13.28 (-6.25%)  12.50 (0.00%)   12.50
length=32,align1=0,align2=0:    14.06 (0.00%)   12.50 (11.11%)  14.06
length=32,align1=5,align2=0:    13.28 (0.00%)   12.50 (5.88%)   13.28
length=32,align1=0,align2=5:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=32,align1=5,align2=5:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=64,align1=0,align2=0:    14.06 (-5.88%)  13.28 (0.00%)   13.28
length=64,align1=6,align2=0:    13.28 (0.00%)   13.28 (0.00%)   13.28
length=64,align1=0,align2=6:    14.06 (5.26%)   14.06 (5.26%)   14.84
length=64,align1=6,align2=6:    14.84 (-11.77%) 14.06 (-5.88%)  13.28
length=128,align1=0,align2=0:   17.19 (-4.76%)  14.84 (9.52%)   16.41
length=128,align1=7,align2=0:   16.41 (4.55%)   15.62 (9.09%)   17.19
length=128,align1=0,align2=7:   16.41 (0.00%)   14.06 (14.29%)  16.41
length=128,align1=7,align2=7:   16.41 (4.55%)   15.62 (9.09%)   17.19
length=256,align1=0,align2=0:   21.88 (-3.70%)  21.09 (0.00%)   21.09
length=256,align1=8,align2=0:   21.09 (-3.85%)  21.09 (-3.85%)  20.31
length=256,align1=0,align2=8:   20.31 (-4.00%)  20.31 (-4.00%)  19.53
length=256,align1=8,align2=8:   21.88 (-7.69%)  20.31 (0.00%)   20.31
length=512,align1=0,align2=0:   28.91 (-2.78%)  28.91 (-2.78%)  28.12
length=512,align1=9,align2=0:   30.47 (-2.63%)  30.47 (-2.63%)  29.69
length=512,align1=0,align2=9:   29.69 (0.00%)   29.69 (0.00%)   29.69
length=512,align1=9,align2=9:   28.12 (-2.86%)  28.12 (-2.86%)  27.34
length=1024,align1=0,align2=0:  44.53 (0.00%)   44.53 (0.00%)   44.53
length=1024,align1=10,align2=0:         50.00 (0.00%)   50.00 (0.00%)   50.00
length=1024,align1=0,align2=10:         49.22 (1.56%)   50.78 (-1.56%)  50.00
length=1024,align1=10,align2=10:        44.53 (-1.79%)  43.75 (0.00%)   43.75
length=2048,align1=0,align2=0:  77.34 (-1.02%)  76.56 (0.00%)   76.56
length=2048,align1=11,align2=0:         89.84 (0.00%)   89.84 (0.00%)   89.84
length=2048,align1=0,align2=11:         89.84 (0.00%)   89.84 (0.00%)   89.84
length=2048,align1=11,align2=11:        75.78 (0.00%)   75.78 (0.00%)   75.78
length=4096,align1=0,align2=0:  141.41 (-0.56%) 140.62 (0.00%)  140.62
length=4096,align1=12,align2=0:         171.09 (-0.46%) 170.31 (0.00%)  170.31
length=4096,align1=0,align2=12:         170.31 (0.00%)  170.31 (0.00%)  170.31
length=4096,align1=12,align2=12:        140.62 (0.00%)  140.62 (0.00%)  140.62
length=8192,align1=0,align2=0:  278.91 (-0.28%) 275.78 (0.84%)  278.12
length=8192,align1=13,align2=0:         338.28 (0.23%)  335.94 (0.92%)  339.06
length=8192,align1=0,align2=13:         338.28 (0.00%)  455.47 (-34.64%)        338.28
length=8192,align1=13,align2=13:        278.12 (-0.28%) 275.78 (0.56%)  277.34
length=16384,align1=0,align2=0:         535.94 (-0.15%) 531.25 (0.73%)  535.16
length=16384,align1=14,align2=0:        659.38 (0.12%)  659.38 (0.12%)  660.16
length=16384,align1=0,align2=14:        659.38 (0.00%)  657.03 (0.36%)  659.38
length=16384,align1=14,align2=14:       535.16 (0.44%)  532.81 (0.87%)  537.50
length=32768,align1=0,align2=0:         1260.94 (10.68%)        1121.88 (20.53%)        1411.72
length=32768,align1=15,align2=0:        1368.75 (10.02%)        1376.56 (9.50%) 1521.09
length=32768,align1=0,align2=15:        1333.59 (10.91%)        1373.44 (8.25%) 1496.88
length=32768,align1=15,align2=15:       1256.25 (13.96%)        1125.78 (22.90%)        1460.16
length=65536,align1=0,align2=0:         2853.91 (30.11%)        2589.06 (36.60%)        4083.59
length=65536,align1=16,align2=0:        2850.00 (30.14%)        2589.84 (36.52%)        4079.69
length=65536,align1=0,align2=16:        2853.12 (30.60%)        2589.84 (37.00%)        4110.94
length=65536,align1=16,align2=16:       2850.78 (30.07%)        2589.06 (36.49%)        4076.56
length=0,align1=0,align2=0:     15.62 (-5.26%)  16.41 (-10.53%) 14.84
length=0,align1=0,align2=0:     14.84 (-5.56%)  14.84 (-5.56%)  14.06
length=0,align1=0,align2=0:     14.84 (0.00%)   14.84 (0.00%)   14.84
length=0,align1=0,align2=0:     16.41 (-16.67%) 14.84 (-5.56%)  14.06
length=1,align1=0,align2=0:     15.62 (4.76%)   15.62 (4.76%)   16.41
length=1,align1=1,align2=0:     15.62 (0.00%)   14.84 (5.00%)   15.62
length=1,align1=0,align2=1:     14.84 (0.00%)   14.84 (0.00%)   14.84
length=1,align1=1,align2=1:     14.84 (0.00%)   14.06 (5.26%)   14.84
length=2,align1=0,align2=0:     14.84 (0.00%)   14.06 (5.26%)   14.84
length=2,align1=2,align2=0:     14.84 (0.00%)   14.06 (5.26%)   14.84
length=2,align1=0,align2=2:     14.84 (-5.56%)  14.06 (0.00%)   14.06
length=2,align1=2,align2=2:     14.84 (0.00%)   14.06 (5.26%)   14.84
length=3,align1=0,align2=0:     14.84 (0.00%)   14.84 (0.00%)   14.84
length=3,align1=3,align2=0:     14.84 (-5.56%)  14.06 (0.00%)   14.06
length=3,align1=0,align2=3:     15.62 (-11.11%) 14.06 (0.00%)   14.06
length=3,align1=3,align2=3:     14.84 (0.00%)   14.06 (5.26%)   14.84
length=4,align1=0,align2=0:     17.97 (-27.78%) 14.06 (0.00%)   14.06
length=4,align1=4,align2=0:     13.28 (5.56%)   14.06 (0.00%)   14.06
length=4,align1=0,align2=4:     14.06 (0.00%)   13.28 (5.56%)   14.06
length=4,align1=4,align2=4:     13.28 (5.56%)   13.28 (5.56%)   14.06
length=5,align1=0,align2=0:     13.28 (5.56%)   13.28 (5.56%)   14.06
length=5,align1=5,align2=0:     14.06 (0.00%)   14.06 (0.00%)   14.06
length=5,align1=0,align2=5:     14.06 (0.00%)   13.28 (5.56%)   14.06
length=5,align1=5,align2=5:     14.06 (-5.88%)  14.06 (-5.88%)  13.28
length=6,align1=0,align2=0:     14.06 (-5.88%)  14.06 (-5.88%)  13.28
length=6,align1=6,align2=0:     14.06 (0.00%)   14.06 (0.00%)   14.06
length=6,align1=0,align2=6:     14.06 (0.00%)   13.28 (5.56%)   14.06
length=6,align1=6,align2=6:     14.06 (0.00%)   13.28 (5.56%)   14.06
length=7,align1=0,align2=0:     14.84 (-11.77%) 14.06 (-5.88%)  13.28
length=7,align1=7,align2=0:     13.28 (0.00%)   14.06 (-5.88%)  13.28
length=7,align1=0,align2=7:     14.06 (0.00%)   14.06 (0.00%)   14.06
length=7,align1=7,align2=7:     14.06 (0.00%)   14.06 (0.00%)   14.06
length=8,align1=0,align2=0:     14.06 (-5.88%)  13.28 (0.00%)   13.28
length=8,align1=8,align2=0:     14.06 (0.00%)   13.28 (5.56%)   14.06
length=8,align1=0,align2=8:     13.28 (0.00%)   13.28 (0.00%)   13.28
length=8,align1=8,align2=8:     14.06 (-5.88%)  13.28 (0.00%)   13.28
length=9,align1=0,align2=0:     13.28 (0.00%)   13.28 (0.00%)   13.28
length=9,align1=9,align2=0:     13.28 (0.00%)   13.28 (0.00%)   13.28
length=9,align1=0,align2=9:     13.28 (0.00%)   14.06 (-5.88%)  13.28
length=9,align1=9,align2=9:     14.06 (-5.88%)  13.28 (0.00%)   13.28
length=10,align1=0,align2=0:    14.06 (0.00%)   13.28 (5.56%)   14.06
length=10,align1=10,align2=0:   14.06 (-5.88%)  14.06 (-5.88%)  13.28
length=10,align1=0,align2=10:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=10,align1=10,align2=10:  14.06 (0.00%)   13.28 (5.56%)   14.06
length=11,align1=0,align2=0:    14.06 (-5.88%)  13.28 (0.00%)   13.28
length=11,align1=11,align2=0:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=11,align1=0,align2=11:   13.28 (0.00%)   13.28 (0.00%)   13.28
length=11,align1=11,align2=11:  13.28 (0.00%)   13.28 (0.00%)   13.28
length=12,align1=0,align2=0:    14.06 (-5.88%)  13.28 (0.00%)   13.28
length=12,align1=12,align2=0:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=12,align1=0,align2=12:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=12,align1=12,align2=12:  14.06 (0.00%)   13.28 (5.56%)   14.06
length=13,align1=0,align2=0:    14.06 (-5.88%)  13.28 (0.00%)   13.28
length=13,align1=13,align2=0:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=13,align1=0,align2=13:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=13,align1=13,align2=13:  13.28 (0.00%)   13.28 (0.00%)   13.28
length=14,align1=0,align2=0:    13.28 (0.00%)   13.28 (0.00%)   13.28
length=14,align1=14,align2=0:   13.28 (5.56%)   13.28 (5.56%)   14.06
length=14,align1=0,align2=14:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=14,align1=14,align2=14:  14.06 (-5.88%)  13.28 (0.00%)   13.28
length=15,align1=0,align2=0:    14.06 (-5.88%)  13.28 (0.00%)   13.28
length=15,align1=15,align2=0:   14.06 (-5.88%)  14.06 (-5.88%)  13.28
length=15,align1=0,align2=15:   13.28 (0.00%)   13.28 (0.00%)   13.28
length=15,align1=15,align2=15:  13.28 (0.00%)   14.06 (-5.88%)  13.28
length=16,align1=0,align2=0:    14.06 (-5.88%)  13.28 (0.00%)   13.28
length=16,align1=16,align2=0:   13.28 (5.56%)   14.06 (0.00%)   14.06
length=16,align1=0,align2=16:   14.84 (-11.77%) 13.28 (0.00%)   13.28
length=16,align1=16,align2=16:  13.28 (-6.25%)  12.50 (0.00%)   12.50
length=17,align1=0,align2=0:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=17,align1=17,align2=0:   14.84 (-11.77%) 12.50 (5.88%)   13.28
length=17,align1=0,align2=17:   14.84 (-5.56%)  12.50 (11.11%)  14.06
length=17,align1=17,align2=17:  14.84 (-11.77%) 12.50 (5.88%)   13.28
length=18,align1=0,align2=0:    14.06 (0.00%)   12.50 (11.11%)  14.06
length=18,align1=18,align2=0:   13.28 (5.56%)   12.50 (11.11%)  14.06
length=18,align1=0,align2=18:   14.06 (-5.88%)  12.50 (5.88%)   13.28
length=18,align1=18,align2=18:  14.06 (0.00%)   12.50 (11.11%)  14.06
length=19,align1=0,align2=0:    14.06 (-5.88%)  13.28 (0.00%)   13.28
length=19,align1=19,align2=0:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=19,align1=0,align2=19:   14.84 (-5.56%)  12.50 (11.11%)  14.06
length=19,align1=19,align2=19:  14.06 (-5.88%)  12.50 (5.88%)   13.28
length=20,align1=0,align2=0:    14.84 (-11.77%) 12.50 (5.88%)   13.28
length=20,align1=20,align2=0:   14.06 (0.00%)   12.50 (11.11%)  14.06
length=20,align1=0,align2=20:   14.06 (-5.88%)  12.50 (5.88%)   13.28
length=20,align1=20,align2=20:  14.06 (0.00%)   13.28 (5.56%)   14.06
length=21,align1=0,align2=0:    14.84 (-5.56%)  12.50 (11.11%)  14.06
length=21,align1=21,align2=0:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=21,align1=0,align2=21:   14.84 (-11.77%) 12.50 (5.88%)   13.28
length=21,align1=21,align2=21:  13.28 (5.56%)   13.28 (5.56%)   14.06
length=22,align1=0,align2=0:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=22,align1=22,align2=0:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=22,align1=0,align2=22:   14.06 (0.00%)   12.50 (11.11%)  14.06
length=22,align1=22,align2=22:  14.06 (0.00%)   12.50 (11.11%)  14.06
length=23,align1=0,align2=0:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=23,align1=23,align2=0:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=23,align1=0,align2=23:   14.06 (-5.88%)  12.50 (5.88%)   13.28
length=23,align1=23,align2=23:  14.06 (-5.88%)  13.28 (0.00%)   13.28
length=24,align1=0,align2=0:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=24,align1=24,align2=0:   14.06 (0.00%)   13.28 (5.56%)   14.06
length=24,align1=0,align2=24:   14.84 (-11.77%) 12.50 (5.88%)   13.28
length=24,align1=24,align2=24:  14.06 (-5.88%)  13.28 (0.00%)   13.28
length=25,align1=0,align2=0:    14.06 (0.00%)   12.50 (11.11%)  14.06
length=25,align1=25,align2=0:   14.06 (0.00%)   13.28 (5.56%)   14.06
length=25,align1=0,align2=25:   14.06 (0.00%)   12.50 (11.11%)  14.06
length=25,align1=25,align2=25:  13.28 (0.00%)   13.28 (0.00%)   13.28
length=26,align1=0,align2=0:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=26,align1=26,align2=0:   14.06 (0.00%)   13.28 (5.56%)   14.06
length=26,align1=0,align2=26:   14.06 (-5.88%)  12.50 (5.88%)   13.28
length=26,align1=26,align2=26:  14.06 (0.00%)   13.28 (5.56%)   14.06
length=27,align1=0,align2=0:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=27,align1=27,align2=0:   14.06 (-5.88%)  12.50 (5.88%)   13.28
length=27,align1=0,align2=27:   14.06 (-5.88%)  12.50 (5.88%)   13.28
length=27,align1=27,align2=27:  14.06 (0.00%)   12.50 (11.11%)  14.06
length=28,align1=0,align2=0:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=28,align1=28,align2=0:   14.06 (0.00%)   12.50 (11.11%)  14.06
length=28,align1=0,align2=28:   14.06 (0.00%)   12.50 (11.11%)  14.06
length=28,align1=28,align2=28:  14.84 (-11.77%) 13.28 (0.00%)   13.28
length=29,align1=0,align2=0:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=29,align1=29,align2=0:   13.28 (0.00%)   12.50 (5.88%)   13.28
length=29,align1=0,align2=29:   14.06 (0.00%)   12.50 (11.11%)  14.06
length=29,align1=29,align2=29:  13.28 (5.56%)   12.50 (11.11%)  14.06
length=30,align1=0,align2=0:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=30,align1=30,align2=0:   13.28 (5.56%)   12.50 (11.11%)  14.06
length=30,align1=0,align2=30:   14.06 (-5.88%)  12.50 (5.88%)   13.28
length=30,align1=30,align2=30:  13.28 (0.00%)   12.50 (5.88%)   13.28
length=31,align1=0,align2=0:    13.28 (0.00%)   12.50 (5.88%)   13.28
length=31,align1=31,align2=0:   14.06 (0.00%)   12.50 (11.11%)  14.06
length=31,align1=0,align2=31:   13.28 (0.00%)   12.50 (5.88%)   13.28
length=31,align1=31,align2=31:  14.06 (0.00%)   12.50 (11.11%)  14.06
length=48,align1=0,align2=0:    14.06 (0.00%)   14.06 (0.00%)   14.06
length=48,align1=3,align2=0:    14.06 (0.00%)   14.06 (0.00%)   14.06
length=48,align1=0,align2=3:    14.06 (-5.88%)  14.06 (-5.88%)  13.28
length=48,align1=3,align2=3:    13.28 (5.56%)   14.06 (0.00%)   14.06
length=80,align1=0,align2=0:    15.62 (-11.11%) 14.84 (-5.56%)  14.06
length=80,align1=5,align2=0:    15.62 (-11.11%) 16.41 (-16.67%) 14.06
length=80,align1=0,align2=5:    14.06 (0.00%)   15.62 (-11.11%) 14.06
length=80,align1=5,align2=5:    15.62 (-5.26%)  17.19 (-15.79%) 14.84
length=96,align1=0,align2=0:    14.06 (0.00%)   14.84 (-5.56%)  14.06
length=96,align1=6,align2=0:    14.84 (-5.56%)  16.41 (-16.67%) 14.06
length=96,align1=0,align2=6:    14.06 (0.00%)   14.84 (-5.56%)  14.06
length=96,align1=6,align2=6:    14.84 (-5.56%)  17.19 (-22.22%) 14.06
length=112,align1=0,align2=0:   17.19 (-4.76%)  14.06 (14.29%)  16.41
length=112,align1=7,align2=0:   17.19 (0.00%)   16.41 (4.55%)   17.19
length=112,align1=0,align2=7:   16.41 (0.00%)   14.84 (9.52%)   16.41
length=112,align1=7,align2=7:   17.19 (0.00%)   17.19 (0.00%)   17.19
length=144,align1=0,align2=0:   17.19 (-10.00%) 17.97 (-15.00%) 15.62
length=144,align1=9,align2=0:   17.19 (-4.76%)  18.75 (-14.29%) 16.41
length=144,align1=0,align2=9:   20.31 (-8.33%)  18.75 (0.00%)   18.75
length=144,align1=9,align2=9:   18.75 (-4.35%)  18.75 (-4.35%)  17.97
length=160,align1=0,align2=0:   18.75 (-4.35%)  17.97 (0.00%)   17.97
length=160,align1=10,align2=0:  18.75 (4.00%)   18.75 (4.00%)   19.53
length=160,align1=0,align2=10:  19.53 (-4.17%)  17.97 (4.17%)   18.75
length=160,align1=10,align2=10:         18.75 (-4.35%)  18.75 (-4.35%)  17.97
length=176,align1=0,align2=0:   18.75 (-4.35%)  17.19 (4.35%)   17.97
length=176,align1=11,align2=0:  19.53 (0.00%)   19.53 (0.00%)   19.53
length=176,align1=0,align2=11:  19.53 (-4.17%)  18.75 (0.00%)   18.75
length=176,align1=11,align2=11:         18.75 (0.00%)   17.97 (4.17%)   18.75
length=192,align1=0,align2=0:   18.75 (0.00%)   17.97 (4.17%)   18.75
length=192,align1=12,align2=0:  21.09 (-8.00%)  18.75 (4.00%)   19.53
length=192,align1=0,align2=12:  18.75 (0.00%)   18.75 (0.00%)   18.75
length=192,align1=12,align2=12:         18.75 (0.00%)   17.97 (4.17%)   18.75
length=208,align1=0,align2=0:   17.97 (0.00%)   20.31 (-13.04%) 17.97
length=208,align1=13,align2=0:  19.53 (7.41%)   21.09 (0.00%)   21.09
length=208,align1=0,align2=13:  23.44 (-11.11%) 21.09 (0.00%)   21.09
length=208,align1=13,align2=13:         21.09 (-3.85%)  21.09 (-3.85%)  20.31
length=224,align1=0,align2=0:   21.09 (-8.00%)  20.31 (-4.00%)  19.53
length=224,align1=14,align2=0:  23.44 (-11.11%) 20.31 (3.70%)   21.09
length=224,align1=0,align2=14:  21.09 (3.57%)   20.31 (7.14%)   21.88
length=224,align1=14,align2=14:         20.31 (0.00%)   19.53 (3.85%)   20.31
length=240,align1=0,align2=0:   20.31 (-4.00%)  19.53 (0.00%)   19.53
length=240,align1=15,align2=0:  22.66 (0.00%)   20.31 (10.34%)  22.66
length=240,align1=0,align2=15:  20.31 (-4.00%)  20.31 (-4.00%)  19.53
length=240,align1=15,align2=15:         21.88 (0.00%)   21.09 (3.57%)   21.88
length=272,align1=0,align2=0:   20.31 (0.00%)   28.12 (-38.46%) 20.31
length=272,align1=17,align2=0:  22.66 (0.00%)   27.34 (-20.69%) 22.66
length=272,align1=0,align2=17:  25.78 (-10.00%) 28.12 (-20.00%) 23.44
length=272,align1=17,align2=17:         22.66 (-3.57%)  27.34 (-25.00%) 21.88
length=288,align1=0,align2=0:   23.44 (-7.14%)  27.34 (-25.00%) 21.88
length=288,align1=18,align2=0:  22.66 (0.00%)   27.34 (-20.69%) 22.66
length=288,align1=0,align2=18:  23.44 (-3.45%)  25.00 (-10.35%) 22.66
length=288,align1=18,align2=18:         22.66 (-3.57%)  21.88 (0.00%)   21.88
length=304,align1=0,align2=0:   21.88 (0.00%)   21.88 (0.00%)   21.88
length=304,align1=19,align2=0:  23.44 (-3.45%)  22.66 (0.00%)   22.66
length=304,align1=0,align2=19:  22.66 (0.00%)   22.66 (0.00%)   22.66
length=304,align1=19,align2=19:         22.66 (-3.57%)  21.88 (0.00%)   21.88
length=320,align1=0,align2=0:   22.66 (-3.57%)  21.88 (0.00%)   21.88
length=320,align1=20,align2=0:  22.66 (0.00%)   22.66 (0.00%)   22.66
length=320,align1=0,align2=20:  22.66 (0.00%)   22.66 (0.00%)   22.66
length=320,align1=20,align2=20:         22.66 (-3.57%)  21.88 (0.00%)   21.88
length=336,align1=0,align2=0:   21.88 (0.00%)   24.22 (-10.71%) 21.88
length=336,align1=21,align2=0:  22.66 (0.00%)   25.00 (-10.35%) 22.66
length=336,align1=0,align2=21:  25.78 (0.00%)   25.00 (3.03%)   25.78
length=336,align1=21,align2=21:         25.00 (0.00%)   23.44 (6.25%)   25.00
length=352,align1=0,align2=0:   24.22 (0.00%)   24.22 (0.00%)   24.22
length=352,align1=22,align2=0:  25.00 (0.00%)   25.00 (0.00%)   25.00
length=352,align1=0,align2=22:  25.00 (-3.23%)  25.00 (-3.23%)  24.22
length=352,align1=22,align2=22:         25.00 (-3.23%)  24.22 (0.00%)   24.22
length=368,align1=0,align2=0:   25.00 (-3.23%)  23.44 (3.23%)   24.22
length=368,align1=23,align2=0:  25.00 (0.00%)   24.22 (3.12%)   25.00
length=368,align1=0,align2=23:  25.00 (-3.23%)  25.00 (-3.23%)  24.22
length=368,align1=23,align2=23:         25.00 (-6.67%)  23.44 (0.00%)   23.44
length=384,align1=0,align2=0:   24.22 (0.00%)   24.22 (0.00%)   24.22
length=384,align1=24,align2=0:  25.00 (0.00%)   24.22 (3.12%)   25.00
length=384,align1=0,align2=24:  25.00 (0.00%)   25.78 (-3.12%)  25.00
length=384,align1=24,align2=24:         24.22 (-3.33%)  23.44 (0.00%)   23.44
length=400,align1=0,align2=0:   25.00 (-3.23%)  26.56 (-9.68%)  24.22
length=400,align1=25,align2=0:  25.78 (-3.12%)  27.34 (-9.38%)  25.00
length=400,align1=0,align2=25:  27.34 (0.00%)   27.34 (0.00%)   27.34
length=400,align1=25,align2=25:         26.56 (0.00%)   25.78 (2.94%)   26.56
length=416,align1=0,align2=0:   26.56 (-3.03%)  25.78 (0.00%)   25.78
length=416,align1=26,align2=0:  28.12 (-2.86%)  27.34 (0.00%)   27.34
length=416,align1=0,align2=26:  27.34 (-2.94%)  28.12 (-5.88%)  26.56
length=416,align1=26,align2=26:         25.78 (0.00%)   26.56 (-3.03%)  25.78
length=432,align1=0,align2=0:   27.34 (-2.94%)  25.78 (2.94%)   26.56
length=432,align1=27,align2=0:  28.12 (-2.86%)  27.34 (0.00%)   27.34
length=432,align1=0,align2=27:  27.34 (0.00%)   28.12 (-2.86%)  27.34
length=432,align1=27,align2=27:         25.78 (0.00%)   25.78 (0.00%)   25.78
length=448,align1=0,align2=0:   26.56 (-3.03%)  25.78 (0.00%)   25.78
length=448,align1=28,align2=0:  27.34 (0.00%)   27.34 (0.00%)   27.34
length=448,align1=0,align2=28:  27.34 (0.00%)   28.12 (-2.86%)  27.34
length=448,align1=28,align2=28:         25.78 (0.00%)   25.78 (0.00%)   25.78
length=464,align1=0,align2=0:   25.78 (0.00%)   28.12 (-9.09%)  25.78
length=464,align1=29,align2=0:  28.12 (-2.86%)  29.69 (-8.57%)  27.34
length=464,align1=0,align2=29:  30.47 (0.00%)   30.47 (0.00%)   30.47
length=464,align1=29,align2=29:         28.12 (0.00%)   27.34 (2.78%)   28.12
length=480,align1=0,align2=0:   29.69 (-5.56%)  28.12 (0.00%)   28.12
length=480,align1=30,align2=0:  31.25 (-2.56%)  29.69 (2.56%)   30.47
length=480,align1=0,align2=30:  29.69 (0.00%)   30.47 (-2.63%)  29.69
length=480,align1=30,align2=30:         28.12 (0.00%)   28.12 (0.00%)   28.12
length=496,align1=0,align2=0:   28.12 (0.00%)   27.34 (2.78%)   28.12
length=496,align1=31,align2=0:  30.47 (-2.63%)  29.69 (0.00%)   29.69
length=496,align1=0,align2=31:  29.69 (0.00%)   30.47 (-2.63%)  29.69
length=496,align1=31,align2=31:         28.12 (-2.86%)  28.12 (-2.86%)  27.34
length=1024,align1=0,align2=0:  44.53 (0.00%)   44.53 (0.00%)   44.53
length=1024,align1=32,align2=0:         44.53 (-1.79%)  44.53 (-1.79%)  43.75
length=1024,align1=0,align2=32:         44.53 (-1.79%)  43.75 (0.00%)   43.75
length=1024,align1=32,align2=32:        43.75 (1.75%)   43.75 (1.75%)   44.53
length=1056,align1=0,align2=0:  46.88 (-1.69%)  46.88 (-1.69%)  46.09
length=1056,align1=33,align2=0:         53.12 (0.00%)   52.34 (1.47%)   53.12
length=1056,align1=0,align2=33:         52.34 (0.00%)   53.12 (-1.49%)  52.34
length=1056,align1=33,align2=33:        46.09 (0.00%)   46.88 (-1.69%)  46.09
length=1088,align1=0,align2=0:  46.88 (-1.69%)  46.09 (0.00%)   46.09
length=1088,align1=34,align2=0:         52.34 (0.00%)   52.34 (0.00%)   52.34
length=1088,align1=0,align2=34:         53.12 (-3.03%)  53.12 (-3.03%)  51.56
length=1088,align1=34,align2=34:        46.09 (0.00%)   46.88 (-1.69%)  46.09
length=1120,align1=0,align2=0:  49.22 (-1.61%)  48.44 (0.00%)   48.44
length=1120,align1=35,align2=0:         54.69 (1.41%)   55.47 (0.00%)   55.47
length=1120,align1=0,align2=35:         57.03 (0.00%)   55.47 (2.74%)   57.03
length=1120,align1=35,align2=35:        48.44 (0.00%)   49.22 (-1.61%)  48.44
length=1152,align1=0,align2=0:  47.66 (1.61%)   48.44 (0.00%)   48.44
length=1152,align1=36,align2=0:         55.47 (-1.43%)  55.47 (-1.43%)  54.69
length=1152,align1=0,align2=36:         58.59 (-1.35%)  55.47 (4.05%)   57.81
length=1152,align1=36,align2=36:        48.44 (0.00%)   49.22 (-1.61%)  48.44
length=1184,align1=0,align2=0:  53.12 (-3.03%)  50.78 (1.52%)   51.56
length=1184,align1=37,align2=0:         61.72 (-2.60%)  57.03 (5.19%)   60.16
length=1184,align1=0,align2=37:         62.50 (-1.27%)  57.03 (7.60%)   61.72
length=1184,align1=37,align2=37:        53.12 (-1.49%)  50.78 (2.99%)   52.34
length=1216,align1=0,align2=0:  53.91 (-4.55%)  50.78 (1.52%)   51.56
length=1216,align1=38,align2=0:         60.94 (0.00%)   57.03 (6.41%)   60.94
length=1216,align1=0,align2=38:         60.16 (0.00%)   57.81 (3.90%)   60.16
length=1216,align1=38,align2=38:        52.34 (-1.52%)  50.00 (3.03%)   51.56
length=1248,align1=0,align2=0:  54.69 (-2.94%)  53.12 (0.00%)   53.12
length=1248,align1=39,align2=0:         64.06 (-1.23%)  60.16 (4.94%)   63.28
length=1248,align1=0,align2=39:         60.94 (-2.63%)  60.16 (-1.32%)  59.38
length=1248,align1=39,align2=39:        53.12 (0.00%)   52.34 (1.47%)   53.12
length=1280,align1=0,align2=0:  52.34 (-1.52%)  52.34 (-1.52%)  51.56
length=1280,align1=40,align2=0:         61.72 (3.66%)   59.38 (7.32%)   64.06
length=1280,align1=0,align2=40:         60.94 (-2.63%)  60.16 (-1.32%)  59.38
length=1280,align1=40,align2=40:        52.34 (-1.52%)  52.34 (-1.52%)  51.56
length=1312,align1=0,align2=0:  54.69 (-1.45%)  55.47 (-2.90%)  53.91
length=1312,align1=41,align2=0:         63.28 (0.00%)   62.50 (1.23%)   63.28
length=1312,align1=0,align2=41:         62.50 (0.00%)   62.50 (0.00%)   62.50
length=1312,align1=41,align2=41:        53.91 (0.00%)   54.69 (-1.45%)  53.91
length=1344,align1=0,align2=0:  54.69 (0.00%)   54.69 (0.00%)   54.69
length=1344,align1=42,align2=0:         62.50 (0.00%)   62.50 (0.00%)   62.50
length=1344,align1=0,align2=42:         62.50 (-1.27%)  62.50 (-1.27%)  61.72
length=1344,align1=42,align2=42:        53.91 (0.00%)   53.91 (0.00%)   53.91
length=1376,align1=0,align2=0:  65.62 (-16.67%) 68.75 (-22.22%) 56.25
length=1376,align1=43,align2=0:         71.88 (-9.52%)  73.44 (-11.90%) 65.62
length=1376,align1=0,align2=43:         72.66 (-12.05%) 74.22 (-14.46%) 64.84
length=1376,align1=43,align2=43:        64.06 (-13.89%) 67.97 (-20.83%) 56.25
length=1408,align1=0,align2=0:  57.03 (-1.39%)  68.75 (-22.22%) 56.25
length=1408,align1=44,align2=0:         65.62 (-1.20%)  73.44 (-13.25%) 64.84
length=1408,align1=0,align2=44:         64.84 (0.00%)   74.22 (-14.46%) 64.84
length=1408,align1=44,align2=44:        56.25 (-1.41%)  68.75 (-23.94%) 55.47
length=1440,align1=0,align2=0:  67.97 (-14.47%) 64.84 (-9.21%)  59.38
length=1440,align1=45,align2=0:         74.22 (-10.47%) 68.75 (-2.33%)  67.19
length=1440,align1=0,align2=45:         72.66 (-6.90%)  69.53 (-2.30%)  67.97
length=1440,align1=45,align2=45:        65.62 (-13.51%) 58.59 (-1.35%)  57.81
length=1472,align1=0,align2=0:  66.41 (-14.86%) 58.59 (-1.35%)  57.81
length=1472,align1=46,align2=0:         73.44 (-9.30%)  67.19 (0.00%)   67.19
length=1472,align1=0,align2=46:         70.31 (-4.65%)  67.97 (-1.16%)  67.19
length=1472,align1=46,align2=46:        57.81 (0.00%)   58.59 (-1.35%)  57.81
length=1504,align1=0,align2=0:  60.94 (0.00%)   60.94 (0.00%)   60.94
length=1504,align1=47,align2=0:         71.09 (-1.11%)  70.31 (0.00%)   70.31
length=1504,align1=0,align2=47:         70.31 (-1.12%)  70.31 (-1.12%)  69.53
length=1504,align1=47,align2=47:        60.94 (-1.30%)  60.16 (0.00%)   60.16
length=1536,align1=0,align2=0:  62.50 (-3.90%)  60.16 (0.00%)   60.16
length=1536,align1=48,align2=0:         60.94 (-1.30%)  60.16 (0.00%)   60.16
length=1536,align1=0,align2=48:         61.72 (-3.95%)  60.16 (-1.32%)  59.38
length=1536,align1=48,align2=48:        60.94 (-1.30%)  60.16 (0.00%)   60.16
length=1568,align1=0,align2=0:  80.47 (-27.16%) 63.28 (0.00%)   63.28
length=1568,align1=49,align2=0:         86.72 (-18.09%) 72.66 (1.06%)   73.44
length=1568,align1=0,align2=49:         74.22 (-3.26%)  74.22 (-3.26%)  71.88
length=1568,align1=49,align2=49:        62.50 (0.00%)   61.72 (1.25%)   62.50
length=1600,align1=0,align2=0:  62.50 (-1.27%)  62.50 (-1.27%)  61.72
length=1600,align1=50,align2=0:         73.44 (0.00%)   71.88 (2.13%)   73.44
length=1600,align1=0,align2=50:         72.66 (0.00%)   73.44 (-1.08%)  72.66
length=1600,align1=50,align2=50:        62.50 (-1.27%)  62.50 (-1.27%)  61.72
length=1632,align1=0,align2=0:  64.84 (0.00%)   64.84 (0.00%)   64.84
length=1632,align1=51,align2=0:         75.78 (0.00%)   75.00 (1.03%)   75.78
length=1632,align1=0,align2=51:         78.91 (0.00%)   75.78 (3.96%)   78.91
length=1632,align1=51,align2=51:        64.84 (-2.47%)  64.84 (-2.47%)  63.28
length=1664,align1=0,align2=0:  64.84 (-1.22%)  64.84 (-1.22%)  64.06
length=1664,align1=52,align2=0:         75.78 (0.00%)   75.00 (1.03%)   75.78
length=1664,align1=0,align2=52:         80.47 (-0.98%)  75.78 (4.90%)   79.69
length=1664,align1=52,align2=52:        64.06 (-1.23%)  65.62 (-3.70%)  63.28
length=1696,align1=0,align2=0:  69.53 (-3.49%)  72.66 (-8.14%)  67.19
length=1696,align1=53,align2=0:         80.47 (-0.98%)  82.03 (-2.94%)  79.69
length=1696,align1=0,align2=53:         80.47 (0.96%)   82.03 (-0.96%)  81.25
length=1696,align1=53,align2=53:        68.75 (-2.33%)  72.66 (-8.14%)  67.19
length=1728,align1=0,align2=0:  67.97 (0.00%)   72.66 (-6.90%)  67.97
length=1728,align1=54,align2=0:         80.47 (-0.98%)  82.81 (-3.92%)  79.69
length=1728,align1=0,align2=54:         78.91 (-1.00%)  82.03 (-5.00%)  78.12
length=1728,align1=54,align2=54:        68.75 (0.00%)   72.66 (-5.68%)  68.75
length=1760,align1=0,align2=0:  77.34 (-12.50%) 68.75 (0.00%)   68.75
length=1760,align1=55,align2=0:         91.41 (-8.33%)  79.69 (5.56%)   84.38
length=1760,align1=0,align2=55:         88.28 (-10.78%) 80.47 (-0.98%)  79.69
length=1760,align1=55,align2=55:        77.34 (-11.24%) 68.75 (1.12%)   69.53
length=1792,align1=0,align2=0:  78.12 (-14.94%) 68.75 (-1.15%)  67.97
length=1792,align1=56,align2=0:         88.28 (-4.63%)  79.69 (5.56%)   84.38
length=1792,align1=0,align2=56:         88.28 (-9.71%)  80.47 (0.00%)   80.47
length=1792,align1=56,align2=56:        77.34 (-11.24%) 68.75 (1.12%)   69.53
length=1824,align1=0,align2=0:  72.66 (7.92%)   70.31 (10.89%)  78.91
length=1824,align1=57,align2=0:         85.94 (5.17%)   82.03 (9.48%)   90.62
length=1824,align1=0,align2=57:         82.03 (3.67%)   82.81 (2.75%)   85.16
length=1824,align1=57,align2=57:        70.31 (-1.12%)  70.31 (-1.12%)  69.53
length=1856,align1=0,align2=0:  70.31 (-1.12%)  70.31 (-1.12%)  69.53
length=1856,align1=58,align2=0:         83.59 (-0.94%)  82.03 (0.94%)   82.81
length=1856,align1=0,align2=58:         178.12 (-115.09%)       82.81 (0.00%)   82.81
length=1856,align1=58,align2=58:        70.31 (-1.12%)  70.31 (-1.12%)  69.53
length=1888,align1=0,align2=0:  73.44 (-1.08%)  78.91 (-8.60%)  72.66
length=1888,align1=59,align2=0:         85.94 (0.00%)   89.84 (-4.55%)  85.94
length=1888,align1=0,align2=59:         84.38 (0.00%)   89.06 (-5.56%)  84.38
length=1888,align1=59,align2=59:        72.66 (-1.09%)  78.12 (-8.70%)  71.88
length=1920,align1=0,align2=0:  72.66 (-1.09%)  78.12 (-8.70%)  71.88
length=1920,align1=60,align2=0:         85.94 (0.00%)   89.84 (-4.55%)  85.94
length=1920,align1=0,align2=60:         85.16 (0.00%)   89.06 (-4.59%)  85.16
length=1920,align1=60,align2=60:        72.66 (-1.09%)  78.91 (-9.78%)  71.88
length=1952,align1=0,align2=0:  75.00 (-1.05%)  75.00 (-1.05%)  74.22
length=1952,align1=61,align2=0:         88.28 (0.00%)   87.50 (0.88%)   88.28
length=1952,align1=0,align2=61:         87.50 (0.00%)   88.28 (-0.89%)  87.50
length=1952,align1=61,align2=61:        74.22 (0.00%)   74.22 (0.00%)   74.22
length=1984,align1=0,align2=0:  75.00 (-1.05%)  73.44 (1.05%)   74.22
length=1984,align1=62,align2=0:         89.06 (-0.89%)  87.50 (0.88%)   88.28
length=1984,align1=0,align2=62:         87.50 (0.00%)   88.28 (-0.89%)  87.50
length=1984,align1=62,align2=62:        74.22 (0.00%)   74.22 (0.00%)   74.22
length=2016,align1=0,align2=0:  77.34 (-1.02%)  76.56 (0.00%)   76.56
length=2016,align1=63,align2=0:         91.41 (-0.86%)  90.62 (0.00%)   90.62
length=2016,align1=0,align2=63:         89.84 (0.00%)   90.62 (-0.87%)  89.84
length=2016,align1=63,align2=63:        77.34 (-1.02%)  76.56 (0.00%)   76.56
length=4096,align1=0,align2=0:  141.41 (-0.56%) 146.88 (-4.44%) 140.62

Function: memcpy
__memcpy_thunderx       __memcpy_falkor __memcpy_generic
Variant: large
================================================================================
length=65543,align1=0,align2=0:         4018.75 (3.09%) 2634.38 (36.47%)        4146.88
length=65551,align1=0,align2=3:         4425.00 (-6.47%)        3134.38 (24.59%)        4156.25
length=65567,align1=3,align2=0:         2909.38 (29.95%)        3134.38 (24.53%)        4153.12
length=65599,align1=3,align2=5:         4415.62 (-6.16%)        3134.38 (24.64%)        4159.38
length=131079,align1=0,align2=0:        5765.62 (30.38%)        5240.62 (36.72%)        8281.25
length=131087,align1=0,align2=3:        8831.25 (-6.56%)        6271.88 (24.32%)        8287.50
length=131103,align1=3,align2=0:        5793.75 (29.05%)        6268.75 (23.23%)        8165.62
length=131135,align1=3,align2=5:        5806.25 (29.97%)        6259.38 (24.50%)        8290.62
length=262151,align1=0,align2=0:        11850.00 (28.91%)       10762.50 (35.43%)       16668.80
length=262159,align1=0,align2=3:        12043.80 (27.72%)       12700.00 (23.78%)       16662.50
length=262175,align1=3,align2=0:        12046.90 (27.90%)       12687.50 (24.07%)       16709.40
length=262207,align1=3,align2=5:        11984.40 (28.08%)       12678.10 (23.91%)       16662.50
length=524295,align1=0,align2=0:        24825.00 (25.00%)       24268.80 (27.34%)       33400.00
length=524303,align1=0,align2=3:        35731.20 (-6.53%)       25678.10 (23.44%)       33540.60
length=524319,align1=3,align2=0:        25893.80 (22.71%)       25725.00 (23.22%)       33503.10
length=524351,align1=3,align2=5:        25887.50 (22.86%)       25690.60 (23.45%)       33559.40
length=1048583,align1=0,align2=0:       50621.90 (0.30%)        50600.00 (0.34%)        50771.90
length=1048591,align1=0,align2=3:       53206.20 (0.54%)        51081.20 (4.51%)        53493.80
length=1048607,align1=3,align2=0:       53221.90 (0.32%)        51975.00 (2.66%)        53393.80
length=1048639,align1=3,align2=5:       53240.60 (0.36%)        51953.10 (2.77%)        53431.20
length=2097159,align1=0,align2=0:       103744.00 (-2.00%)      102447.00 (-1.00%)      102425.00
length=2097167,align1=0,align2=3:       108588.00 (-1.00%)      105159.00 (2.00%)       107606.00
length=2097183,align1=3,align2=0:       107678.00 (0.00%)       105250.00 (2.00%)       108125.00
length=2097215,align1=3,align2=5:       107906.00 (1.00%)       105841.00 (3.00%)       109475.00
length=4194311,align1=0,align2=0:       202994.00 (0.00%)       202500.00 (1.00%)       204809.00
length=4194319,align1=0,align2=3:       213350.00 (0.00%)       205997.00 (3.00%)       213384.00
length=4194335,align1=3,align2=0:       212653.00 (0.00%)       206444.00 (3.00%)       212900.00
length=4194367,align1=3,align2=5:       213044.00 (0.00%)       206084.00 (3.00%)       213847.00
length=8388615,align1=0,align2=0:       401294.00 (0.00%)       401231.00 (0.00%)       401944.00
length=8388623,align1=0,align2=3:       480872.00 (-14.00%)     406444.00 (3.00%)       422900.00
length=8388639,align1=3,align2=0:       422147.00 (0.00%)       407750.00 (3.00%)       422803.00
length=8388671,align1=3,align2=5:       442003.00 (-5.00%)      407125.00 (3.00%)       423509.00
length=16777223,align1=0,align2=0:      799809.00 (0.00%)       800000.00 (0.00%)       801756.00
length=16777231,align1=0,align2=3:      841184.00 (0.00%)       808525.00 (4.00%)       843775.00
length=16777247,align1=3,align2=0:      841166.00 (0.00%)       810147.00 (3.00%)       843147.00
length=16777279,align1=3,align2=5:      972569.00 (-16.00%)     808588.00 (4.00%)       843731.00
length=33554439,align1=0,align2=0:      1842240.00 (-0.01%)     1863590.00 (-1.17%)     1841990.00
length=33554447,align1=0,align2=3:      2103470.00 (-2.74%)     1919460.00 (6.25%)      2047440.00
length=33554463,align1=3,align2=0:      2075690.00 (-1.07%)     1930040.00 (6.02%)      2053720.00
length=33554495,align1=3,align2=5:      2110590.00 (-2.82%)     1924440.00 (6.25%)      2052650.00

Function: memcpy
__memcpy_thunderx       __memcpy_falkor __memcpy_generic
Variant: random
================================================================================
max-size=4096:  44061.90 (5.85%)        38568.20 (17.59%)       46799.90
max-size=8192:  42790.90 (5.27%)        38158.90 (15.52%)       45171.50
max-size=16384:         44912.10 (2.25%)        38710.40 (15.75%)       45945.00
max-size=32768:         43577.90 (1.23%)        37975.10 (13.93%)       44120.00
max-size=65536:         44375.50 (1.04%)        38474.20 (14.20%)       44840.60

	* manual/tunables.texi (Tunable glibc.tune.cpu): Add falkor.
	* sysdeps/aarch64/multiarch/Makefile (sysdep_routines): Add
	memcpy_falkor.
	* sysdeps/aarch64/multiarch/ifunc-impl-list.c (MAX_IFUNC):
	Bump.
	(__libc_ifunc_impl_list): Add __memcpy_falkor.
	* sysdeps/aarch64/multiarch/memcpy.c: Likewise.
	* sysdeps/aarch64/multiarch/memcpy_falkor.S: New file.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.c (cpu_list):
	Add falkor.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.h (IS_FALKOR):
	New macro.
2017-08-09 06:32:17 +05:30
Siddhesh Poyarekar
28cfa3a48e tunables, aarch64: New tunable to override cpu
Add a new tunable (glibc.tune.cpu) to override CPU identification on
aarch64.  This is useful in two cases: first, where it is desirable to
pretend to be another CPU, either for testing purposes or because routines
written for that CPU are beneficial for specific workloads; and second,
where the underlying kernel does not support emulation of MRS to get
the MIDR of the CPU.
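
A minimal sketch of that lookup, using the names from the ChangeLog below
(the table contents and MIDR values here are illustrative assumptions, not
the actual list):

#include <stdint.h>
#include <string.h>

struct cpu_list
{
  const char *name;
  uint64_t midr;
};

static const struct cpu_list cpu_list[] =
{
  {"thunderxt88", 0x430F0A10},  /* illustrative MIDR value */
  {"generic",     0x0},
};

/* Map a glibc.tune.cpu string to a MIDR; UINT64_MAX means "unknown,
   keep the kernel-reported MIDR".  */
static uint64_t
get_midr_from_mcpu (const char *mcpu)
{
  for (size_t i = 0; i < sizeof (cpu_list) / sizeof (cpu_list[0]); i++)
    if (strcmp (mcpu, cpu_list[i].name) == 0)
      return cpu_list[i].midr;
  return UINT64_MAX;
}

At startup the tunable string (e.g. set through the GLIBC_TUNABLES
environment variable) is passed to this lookup, and a recognized name
overrides the MIDR that would otherwise be read via MRS.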

	* elf/dl-tunables.h (tunable_is_name): Move from...
	* elf/dl-tunables.c (is_name): ... here.
	(parse_tunables, __tunables_init): Adjust.
	* manual/tunables.texi: Document glibc.tune.cpu.
	* sysdeps/aarch64/dl-tunables.list: New file.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.c (struct
	cpu_list): New type.
	(cpu_list): New list of CPU names and their MIDR.
	(get_midr_from_mcpu): New function.
	(init_cpu_features): Override MIDR if necessary.
2017-06-30 22:58:39 +05:30
Siddhesh Poyarekar
ab85da1530 aarch64: Call all string function implementations in tests
The string function implementations added so far do not use any
instructions that deviate from standard aarch64, so all routines can run
on all armv8 hardware.  Select all implementations in the benchmarks and
tests.

	* sysdeps/aarch64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Unconditionally select thunderx
	routine for testing.
2017-06-30 22:57:12 +05:30
Szabolcs Nagy
e535139e82 [AArch64] Add more cfi annotations to tlsdesc entry points
Backtrace through _dl_tlsdesc_resolve_rela was broken because the offset
of x30 from cfa was not in the debug info.

Add enough annotation so backtracing from the dynamic linker through
tlsdesc entry points works and the debugger shows registers correctly.
2017-06-21 15:04:37 +01:00
Szabolcs Nagy
e9177fba13 [AArch64] Use hidden __GI__dl_argv in rtld startup code
We rely on the symbol being locally defined, so using an extern symbol
is not correct and the linker may complain about the relocations.
2017-06-21 14:54:11 +01:00
Zack Weinberg
09a596cc2c Remove bits/string.h.
These machine-dependent inline string functions have never been on by
default, and even if they were a good idea at the time they were
introduced, they haven't really been touched in ten to fifteen years
and probably aren't a good idea on current-gen processors.  Current
thinking is that this class of optimization is best left to the
compiler.

	* bits/string.h, string/bits/string.h
	* sysdeps/aarch64/bits/string.h
	* sysdeps/m68k/m680x0/m68020/bits/string.h
	* sysdeps/s390/bits/string.h, sysdeps/sparc/bits/string.h
	* sysdeps/x86/bits/string.h: Delete file.

	* string/string.h: Don't include bits/string.h.
	* string/bits/string3.h: Rename to bits/string_fortified.h.
	No need to undef various symbols that the removed headers
	might have defined as macros.
	* string/Makefile (headers): Remove bits/string.h, change
	bits/string3.h to bits/string_fortified.h.
	* string/string-inlines.c: Update commentary.  Remove definitions
	of various macros that nothing looks at anymore.  Don't directly
	include bits/string.h. Set _STRING_INLINE_unaligned here, based on
	compiler-predefined macros.
	* string/strncat.c: If STRNCAT is not defined, or STRNCAT_PRIMARY
	_is_ defined, provide internal hidden alias __strncat.
	* include/string.h: Declare internal hidden alias __strncat.
	Only forward __stpcpy to __builtin_stpcpy if __NO_STRING_INLINES is
	not defined.
	* include/bits/string3.h: Rename to bits/string_fortified.h,
	update to match above.

	* sysdeps/i386/string-inlines.c: Define compat symbols for
	everything formerly defined by sysdeps/x86/bits/string.h.
	Make existing definitions into compat symbols as well.
	Remove some no-longer-necessary messing around with macros.

	* sysdeps/powerpc/powerpc32/power4/multiarch/mempcpy.c
	* sysdeps/powerpc/powerpc64/multiarch/mempcpy.c
	* sysdeps/powerpc/powerpc64/multiarch/stpcpy.c
	* sysdeps/s390/multiarch/mempcpy.c
	No need to define _HAVE_STRING_ARCH_mempcpy.
	Do define __NO_STRING_INLINES and NO_MEMPCPY_STPCPY_REDIRECT.

	* sysdeps/i386/i686/multiarch/strncat-c.c
	* sysdeps/s390/multiarch/strncat-c.c
	* sysdeps/x86_64/multiarch/strncat-c.c
	Define STRNCAT_PRIMARY.  Don't change definition of libc_hidden_def.
2017-06-20 08:21:24 -04:00
Alan Modra
0572433b5b PowerPC64 ELFv2 PPC64_OPT_LOCALENTRY
ELFv2 functions with localentry:0 are those with a single entry point,
ie. global entry == local entry, that have no requirement on r2 or
r12 and guarantee r2 is unchanged on return.  Such an external
function can be called via the PLT without saving r2 or restoring it
on return, avoiding a common load-hit-store for small functions.

This patch implements the ld.so changes necessary for this
optimization.  ld.so needs to check that an optimized plt call
sequence is in fact calling a function implemented with localentry:0,
and emit a fatal error otherwise.

The elf/testobj6.c change is to stop "error while loading shared
libraries: expected localentry:0 `preload'" when running
elf/preloadtest, which we'd get otherwise.

	* elf/elf.h (PPC64_OPT_LOCALENTRY): Define.
	* sysdeps/alpha/dl-machine.h (elf_machine_fixup_plt): Add
	refsym and sym parameters.  Adjust callers.
	* sysdeps/aarch64/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/arm/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/generic/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/hppa/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/i386/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/ia64/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/m68k/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/microblaze/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/mips/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/nios2/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/powerpc/powerpc32/dl-machine.h (elf_machine_fixup_plt):
	Likewise.
	* sysdeps/s390/s390-32/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/s390/s390-64/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/sh/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/sparc/sparc32/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/sparc/sparc64/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/tile/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/x86_64/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/powerpc/powerpc64/dl-machine.c (_dl_error_localentry): New.
	(_dl_reloc_overflow): Increase buffer size.  Formatting.
	* sysdeps/powerpc/powerpc64/dl-machine.h (ppc64_local_entry_offset):
	Delete reloc param, add refsym and sym.  Check optimized plt
	call stubs for localentry:0 functions.  Adjust callers.
	(elf_machine_fixup_plt, elf_machine_plt_conflict): Add refsym
	and sym parameters.  Adjust callers.
	(_dl_reloc_overflow): Move attribute.
	(_dl_error_localentry): Declare.
	* elf/dl-runtime.c (_dl_fixup): Save original sym.  Pass
	refsym and sym to elf_machine_fixup_plt.
	* elf/testobj6.c (preload): Call printf.
2017-06-14 10:47:25 +09:30
Stefan Liebler
12d2dd7060 Optimize generic spinlock code and use C11 like atomic macros.
This patch optimizes the generic spinlock code.

The type pthread_spinlock_t is a typedef to volatile int on all archs.
Passing a volatile pointer to the atomic macros which are not mapped to the
C11 atomic builtins can lead to extra stores and loads to stack if such
a macro creates a temporary variable by using "__typeof (*(mem)) tmp;".
Thus, those macros which are used by spinlock code - atomic_exchange_acquire,
atomic_load_relaxed, atomic_compare_exchange_weak - have to be adjusted.
According to the comment from Szabolcs Nagy, the type of a cast expression
is unqualified (see http://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_423.htm),
so the temporary can instead be declared as:
__typeof ((__typeof (*(mem))) *(mem)) tmp;
Thus, from the spinlock perspective, the variable tmp is of type int instead
of type volatile int.  This patch adjusts those macros in include/atomic.h.
With this construct GCC >= 5 omits the extra stores and loads.
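
A minimal illustration of the rule (not glibc code):

/* Per C DR 423 the type of a cast expression is unqualified, so tmp
   below has type int, not volatile int, and GCC need not treat it as
   a volatile object.  */
static int
illustrate (volatile int *mem)
{
  __typeof ((__typeof (*(mem))) *(mem)) tmp = *mem;
  return tmp;
}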

The atomic macros are replaced by the C11-like atomic macros, aligning
the code with them.  The pthread_spin_unlock implementation now uses
release memory order instead of sequentially consistent memory order.
The issue with passing volatile int pointers applies to the C11-like
atomic macros just as to the ones used before.

I've added a glibc_likely hint to the first atomic exchange in
pthread_spin_lock in order to return immediately to the caller if the lock is
free.  Without the hint, there is an additional jump if the lock is free.

I've added the atomic_spin_nop macro within the loop of plain reads.
The plain reads are realized by the C11-like atomic_load_relaxed macro.

The new define ATOMIC_EXCHANGE_USES_CAS determines if the first try to acquire
the spinlock in pthread_spin_lock or pthread_spin_trylock is an exchange
or a CAS.  This is defined in atomic-machine.h for all architectures.
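
Putting these pieces together, the new generic lock acquisition looks
roughly like this (a sketch in terms of glibc's internal atomic macros
from include/atomic.h; the actual nptl/pthread_spin_lock.c differs in
details):

int
pthread_spin_lock (pthread_spinlock_t *lock)
{
  int val = 0;

  /* First attempt: a plain exchange, unless the architecture reports
     that an exchange is itself implemented via CAS
     (ATOMIC_EXCHANGE_USES_CAS), in which case start with a CAS.  */
#if ! ATOMIC_EXCHANGE_USES_CAS
  if (__glibc_likely (atomic_exchange_acquire (lock, 1) == 0))
    return 0;
#else
  if (__glibc_likely (atomic_compare_exchange_weak_acquire (lock, &val, 1)))
    return 0;
#endif

  do
    {
      /* Spin with plain (relaxed) loads plus the architecture's spin
         hint, instead of hammering the line with read-modify-writes.  */
      do
        {
          atomic_spin_nop ();
          val = atomic_load_relaxed (lock);
        }
      while (val != 0);
    }
  while (!atomic_compare_exchange_weak_acquire (lock, &val, 1));

  return 0;
}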

The define SPIN_LOCK_READS_BETWEEN_CMPXCHG is now removed.
There is no technical reason for throwing in a CAS every now and then,
and so far we have no evidence that it can improve performance.
If that were the case, we would have to adjust other spin-waiting loops
elsewhere, too!  Using a CAS loop without plain reads is not a good idea
on many targets and wasn't used by any of them.  Thus there is now no option to
do so.

Architectures now use the generic spinlock automatically if they
do not provide their own implementation.  Thus the pthread_spin_lock.c files
in the sysdeps folders are deleted.

ChangeLog:

	* NEWS: Mention new spinlock implementation.
	* include/atomic.h:
	(__atomic_val_bysize): Cast type to omit volatile qualifier.
	(atomic_exchange_acq): Likewise.
	(atomic_load_relaxed): Likewise.
	(ATOMIC_EXCHANGE_USES_CAS): Check definition.
	* nptl/pthread_spin_init.c (pthread_spin_init):
	Use atomic_store_relaxed.
	* nptl/pthread_spin_lock.c (pthread_spin_lock):
	Use C11-like atomic macros.
	* nptl/pthread_spin_trylock.c (pthread_spin_trylock):
	Likewise.
	* nptl/pthread_spin_unlock.c (pthread_spin_unlock):
	Use atomic_store_release.
	* sysdeps/aarch64/nptl/pthread_spin_lock.c: Delete File.
	* sysdeps/arm/nptl/pthread_spin_lock.c: Likewise.
	* sysdeps/hppa/nptl/pthread_spin_lock.c: Likewise.
	* sysdeps/m68k/nptl/pthread_spin_lock.c: Likewise.
	* sysdeps/microblaze/nptl/pthread_spin_lock.c: Likewise.
	* sysdeps/mips/nptl/pthread_spin_lock.c: Likewise.
	* sysdeps/nios2/nptl/pthread_spin_lock.c: Likewise.
	* sysdeps/aarch64/atomic-machine.h (ATOMIC_EXCHANGE_USES_CAS): Define.
	* sysdeps/alpha/atomic-machine.h: Likewise.
	* sysdeps/arm/atomic-machine.h: Likewise.
	* sysdeps/i386/atomic-machine.h: Likewise.
	* sysdeps/ia64/atomic-machine.h: Likewise.
	* sysdeps/m68k/coldfire/atomic-machine.h: Likewise.
	* sysdeps/m68k/m680x0/m68020/atomic-machine.h: Likewise.
	* sysdeps/microblaze/atomic-machine.h: Likewise.
	* sysdeps/mips/atomic-machine.h: Likewise.
	* sysdeps/powerpc/powerpc32/atomic-machine.h: Likewise.
	* sysdeps/powerpc/powerpc64/atomic-machine.h: Likewise.
	* sysdeps/s390/atomic-machine.h: Likewise.
	* sysdeps/sparc/sparc32/atomic-machine.h: Likewise.
	* sysdeps/sparc/sparc32/sparcv9/atomic-machine.h: Likewise.
	* sysdeps/sparc/sparc64/atomic-machine.h: Likewise.
	* sysdeps/tile/tilegx/atomic-machine.h: Likewise.
	* sysdeps/tile/tilepro/atomic-machine.h: Likewise.
	* sysdeps/unix/sysv/linux/hppa/atomic-machine.h: Likewise.
	* sysdeps/unix/sysv/linux/m68k/coldfire/atomic-machine.h: Likewise.
	* sysdeps/unix/sysv/linux/nios2/atomic-machine.h: Likewise.
	* sysdeps/unix/sysv/linux/sh/atomic-machine.h: Likewise.
	* sysdeps/x86_64/atomic-machine.h: Likewise.
2017-06-06 09:41:56 +02:00
Steve Ellcey
6a2c695266 aarch64: Thunderx specific memcpy and memmove
* sysdeps/aarch64/memcpy.S (MEMMOVE, MEMCPY): New macros.
	(memmove): Use MEMMOVE for name.
	(memcpy): Use MEMCPY for name.  Change internal labels
	to external labels.
	* sysdeps/aarch64/multiarch/Makefile: New file.
	* sysdeps/aarch64/multiarch/ifunc-impl-list.c: Likewise.
	* sysdeps/aarch64/multiarch/init-arch.h: Likewise.
	* sysdeps/aarch64/multiarch/memcpy.c: Likewise.
	* sysdeps/aarch64/multiarch/memcpy_generic.S: Likewise.
	* sysdeps/aarch64/multiarch/memcpy_thunderx.S: Likewise.
	* sysdeps/aarch64/multiarch/memmove.c: Likewise.
2017-05-24 16:46:48 -07:00
Adhemerval Zanella
eab380d8ec Move shared pthread definitions to common headers
This patch removes all the replicated pthread definitions across the
architectures and consolidates them in shared headers.  The new
organization is as follows:

  * Architecture-specific definitions (such as pthread type sizes) are
    placed in the new pthreadtypes-arch.h header in the arch-specific path.

  * All shared structure definitions are moved to a common NPTL header
    at sysdeps/nptl/bits/pthreadtypes.h (which now includes the
    arch-specific one for internal definitions).

  * Also, for future C11 thread support, both the mutex and condition
    definitions are placed in a common header at
    sysdeps/nptl/bits/thread-shared-types.h (see the sketch below).
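
A rough sketch of how the pieces fit together (paths are from the
ChangeLog; the exact include order and contents of the real headers
differ):

/* sysdeps/nptl/bits/pthreadtypes.h (sketch): definitions shared by
   all architectures, built on the per-architecture sizes.  */
#include <bits/pthreadtypes-arch.h>    /* __SIZEOF_PTHREAD_MUTEX_T etc. */
#include <bits/thread-shared-types.h>  /* struct __pthread_mutex_s etc. */

typedef union
{
  struct __pthread_mutex_s __data;  /* shared with future C11 types */
  char __size[__SIZEOF_PTHREAD_MUTEX_T];
  long int __align;
} pthread_mutex_t;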

This is also purely a refactoring patch; no functional changes are expected.
Checked with a build for all major ABI (aarch64-linux-gnu, alpha-linux-gnu,
arm-linux-gnueabi, i386-linux-gnu, ia64-linux-gnu,
m68k-linux-gnu, microblaze-linux-gnu, mips{64}-linux-gnu, nios2-linux-gnu,
powerpc{64le}-linux-gnu, s390{x}-linux-gnu, sparc{64}-linux-gnu,
tile{pro,gx}-linux-gnu, and x86_64-linux-gnu).

	* posix/Makefile (headers): Add pthreadtypes-arch.h and
	thread-shared-types.h.
	* sysdeps/aarch64/nptl/bits/pthreadtypes-arch.h: New file: arch
	specific thread definition.
	* sysdeps/alpha/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/arm/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/hppa/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/ia64/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/m68k/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/microblaze/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/mips/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/nios2/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/powerpc/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/s390/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/sh/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/sparc/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/tile/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/x86/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/nptl/bits/thread-shared-types.h: New file: shared
	thread definition between POSIX and C11.
	* sysdeps/aarch64/nptl/bits/pthreadtypes.h.: Remove file.
	* sysdeps/alpha/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/arm/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/hppa/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/m68k/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/microblaze/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/mips/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/nios2/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/ia64/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/powerpc/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/s390/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/sh/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/sparc/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/tile/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/x86/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/nptl/bits/pthreadtypes.h: New file: common thread
	definitions shared across all architectures.
2017-05-09 17:49:17 -03:00
Szabolcs Nagy
b737847f87 [AArch64] Update libm-test-ulps
* sysdeps/aarch64/libm-test-ulps: Update.
2017-03-27 12:02:47 +01:00
Steve Ellcey
d2e4346a30 Add ifunc support for aarch64.
* sysdeps/aarch64/dl-machine.h: Include cpu-features.c.
	(DL_PLATFORM_INIT): New define.
	(dl_platform_init): New function.
	* sysdeps/aarch64/ldsodefs.h: Include cpu-features.h.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.c: New file.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.h: Likewise.
	* sysdeps/unix/sysv/linux/aarch64/dl-procinfo.c: Likewise.
	* sysdeps/unix/sysv/linux/aarch64/libc-start.c: Likewise.
2017-03-15 16:46:26 -07:00
Torvald Riegel
cc25c8b4c1 New pthread rwlock that is more scalable.
This replaces the pthread rwlock with a new implementation that uses a
more scalable algorithm (primarily through not using a critical section
anymore to make state changes).  The fast path for rdlock acquisition and
release is now basically a single atomic read-modify-write or CAS and a few
branches.  See nptl/pthread_rwlock_common.c for details.
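
In spirit, the rdlock fast path is a single fetch-and-add plus a test of
the write-phase bit.  A simplified, self-contained sketch in C11 atomics
(the constant names follow the ChangeLog below; the values and error
handling are illustrative):

#include <stdatomic.h>

#define PTHREAD_RWLOCK_WRPHASE      1  /* a writer holds the lock */
#define PTHREAD_RWLOCK_READER_SHIFT 3  /* reader count sits above the flags */

static int
rdlock_fast_path (_Atomic unsigned int *readers)
{
  /* Register as a reader with one atomic read-modify-write.  */
  unsigned int r = atomic_fetch_add_explicit
    (readers, 1u << PTHREAD_RWLOCK_READER_SHIFT, memory_order_acquire);
  /* Not in a write phase?  Then the read lock is already acquired.  */
  if ((r & PTHREAD_RWLOCK_WRPHASE) == 0)
    return 0;
  return -1;  /* write phase active: fall back to the slow path */
}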

	* nptl/DESIGN-rwlock.txt: Remove.
	* nptl/lowlevelrwlock.sym: Remove.
	* nptl/Makefile: Add new tests.
	* nptl/pthread_rwlock_common.c: New file.  Contains the new rwlock.
	* nptl/pthreadP.h (PTHREAD_RWLOCK_PREFER_READER_P): Remove.
	(PTHREAD_RWLOCK_WRPHASE, PTHREAD_RWLOCK_WRLOCKED,
	PTHREAD_RWLOCK_RWAITING, PTHREAD_RWLOCK_READER_SHIFT,
	PTHREAD_RWLOCK_READER_OVERFLOW, PTHREAD_RWLOCK_WRHANDOVER,
	PTHREAD_RWLOCK_FUTEX_USED): New.
	* nptl/pthread_rwlock_init.c (__pthread_rwlock_init): Adapt to new
	implementation.
	* nptl/pthread_rwlock_rdlock.c (__pthread_rwlock_rdlock_slow): Remove.
	(__pthread_rwlock_rdlock): Adapt.
	* nptl/pthread_rwlock_timedrdlock.c
	(pthread_rwlock_timedrdlock): Adapt.
	* nptl/pthread_rwlock_timedwrlock.c
	(pthread_rwlock_timedwrlock): Adapt.
	* nptl/pthread_rwlock_trywrlock.c (pthread_rwlock_trywrlock): Adapt.
	* nptl/pthread_rwlock_tryrdlock.c (pthread_rwlock_tryrdlock): Adapt.
	* nptl/pthread_rwlock_unlock.c (pthread_rwlock_unlock): Adapt.
	* nptl/pthread_rwlock_wrlock.c (__pthread_rwlock_wrlock_slow): Remove.
	(__pthread_rwlock_wrlock): Adapt.
	* nptl/tst-rwlock10.c: Adapt.
	* nptl/tst-rwlock11.c: Adapt.
	* nptl/tst-rwlock17.c: New file.
	* nptl/tst-rwlock18.c: New file.
	* nptl/tst-rwlock19.c: New file.
	* nptl/tst-rwlock2b.c: New file.
	* nptl/tst-rwlock8.c: Adapt.
	* nptl/tst-rwlock9.c: Adapt.
	* sysdeps/aarch64/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/arm/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/hppa/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/ia64/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/m68k/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/microblaze/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/mips/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/nios2/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/s390/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/sh/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/sparc/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/tile/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/unix/sysv/linux/alpha/bits/pthreadtypes.h
	(pthread_rwlock_t): Adapt.
	* sysdeps/unix/sysv/linux/powerpc/bits/pthreadtypes.h
	(pthread_rwlock_t): Adapt.
	* sysdeps/x86/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* nptl/nptl-printers.py (): Adapt.
	* nptl/nptl_lock_constants.pysym: Adapt.
	* nptl/test-rwlock-printers.py: Adapt.
	* nptl/test-rwlockattr-printers.c: Adapt.
	* nptl/test-rwlockattr-printers.py: Adapt.
2017-01-10 11:50:17 +01:00
Joseph Myers
bfff8b1bec Update copyright dates with scripts/update-copyrights. 2017-01-01 00:14:16 +00:00
Torvald Riegel
ed19993b5b New condvar implementation that provides stronger ordering guarantees.
This is a new implementation for condition variables, required
after http://austingroupbugs.net/view.php?id=609 to fix bug 13165.  In
essence, we need to be stricter about which waiters a signal or broadcast
is required to wake up; this couldn't be solved using the old algorithm.
ISO C++ made a similar clarification, so this also fixes a bug in
current libstdc++, for example.

We can't use the old algorithm anymore because futexes do not guarantee
to wake in FIFO order.  Thus, when we wake, we can't simply let any
waiter grab a signal, but we need to ensure that one of the waiters
happening before the signal is woken up.  This is something the previous
algorithm violated (see bug 13165).

There's another issue specific to condvars: ABA issues on the underlying
futexes.  Unlike mutexes that have just three states, or semaphores that
have no tokens or a limited number of them, the state of a condvar is
the *order* of the waiters.  A waiter on a semaphore can grab a token
whenever one is available; a condvar waiter must only consume a signal
if it is eligible to do so as determined by the relative order of the
waiter and the signal.
Therefore, this new algorithm maintains two groups of waiters: Those
eligible to consume signals (G1), and those that have to wait until
previous waiters have consumed signals (G2).  Once G1 is empty, G2
becomes the new G1.  64b counters are used to avoid ABA issues.
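
As a toy illustration of the two-group scheme (deliberately simplified:
no atomics, futexes, cancellation or mutex interaction, and not glibc's
actual state layout):

#include <stdint.h>
#include <stdbool.h>

struct toy_condvar
{
  uint64_t wseq;     /* position assigned to the next waiter */
  uint64_t g1_end;   /* waiters with pos < g1_end form G1 */
  uint64_t g1_left;  /* G1 waiters that have not yet been woken */
  uint64_t signals;  /* unconsumed signals, all owned by G1 */
};

/* Waiter: take a position, then call toy_try_consume until it
   succeeds.  */
static uint64_t
toy_wait_register (struct toy_condvar *cv)
{
  return cv->wseq++;   /* a single atomic fetch-add in the real code */
}

/* A waiter may consume a signal only if it belongs to G1, i.e. it
   started waiting before G1 was last closed; this is the eligibility
   rule that the old algorithm failed to enforce.  */
static bool
toy_try_consume (struct toy_condvar *cv, uint64_t pos)
{
  if (cv->signals > 0 && pos < cv->g1_end)
    {
      cv->signals--;
      cv->g1_left--;
      return true;
    }
  return false;
}

/* Signaler: once G1 is drained, close off G2 and make it the new G1;
   never post more signals than G1 has waiters left.  */
static void
toy_signal (struct toy_condvar *cv)
{
  if (cv->g1_left == 0 && cv->wseq > cv->g1_end)
    {
      cv->g1_left = cv->wseq - cv->g1_end;
      cv->g1_end = cv->wseq;
    }
  if (cv->signals < cv->g1_left)
    cv->signals++;
}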

This condvar doesn't yet use a requeue optimization (ie, on a broadcast,
waking just one thread and requeueing all others on the futex of the
mutex supplied by the program).  I don't think doing the requeue is
necessarily the right approach (but I haven't done real measurements
yet):
* If a program expects to wake many threads at the same time and make
that scalable, a condvar isn't great anyway because of how it requires
waiters to operate mutually exclusively (due to the mutex usage).  Thus, a
thundering herd problem is a scalability problem with or without the
optimization.  Using something like a semaphore might be more
appropriate in such a case.
* The scalability problem is actually at the mutex side; the condvar
could help (and it tries to with the requeue optimization), but it
should be the mutex who decides how that is done, and whether it is done
at all.
* Forcing all but one waiter into the kernel-side wait queue of the
mutex prevents/avoids the use of lock elision on the mutex.  Thus, it
prevents the only cure against the underlying scalability problem
inherent to condvars.
* If condvars use short critical sections (ie, hold the mutex just to
check a binary flag or such), which they should do ideally, then forcing
all those waiters to proceed serially with kernel-based hand-off (ie,
futex ops in the mutex' contended state, via the futex wait queues) will
be less efficient than just letting a scalable mutex implementation take
care of it.  Our current mutex impl doesn't employ spinning at all, but
if critical sections are short, spinning can be much better.
* Doing the requeue stuff requires all waiters to always drive the mutex
into the contended state.  This leads to each waiter having to call
futex_wake after lock release, even if this wouldn't be necessary.

	[BZ #13165]
	* nptl/pthread_cond_broadcast.c (__pthread_cond_broadcast): Rewrite to
	use new algorithm.
	* nptl/pthread_cond_destroy.c (__pthread_cond_destroy): Likewise.
	* nptl/pthread_cond_init.c (__pthread_cond_init): Likewise.
	* nptl/pthread_cond_signal.c (__pthread_cond_signal): Likewise.
	* nptl/pthread_cond_wait.c (__pthread_cond_wait): Likewise.
	(__pthread_cond_timedwait): Move here from pthread_cond_timedwait.c.
	(__condvar_confirm_wakeup, __condvar_cancel_waiting,
	__condvar_cleanup_waiting, __condvar_dec_grefs,
	__pthread_cond_wait_common): New.
	(__condvar_cleanup): Remove.
	* nptl/pthread_condattr_getclock.c (pthread_condattr_getclock): Adapt.
	* nptl/pthread_condattr_setclock.c (pthread_condattr_setclock):
	Likewise.
	* nptl/pthread_condattr_getpshared.c (pthread_condattr_getpshared):
	Likewise.
	* nptl/pthread_condattr_init.c (pthread_condattr_init): Likewise.
	* nptl/tst-cond1.c: Add comment.
	* nptl/tst-cond20.c (do_test): Adapt.
	* nptl/tst-cond22.c (do_test): Likewise.
	* sysdeps/aarch64/nptl/bits/pthreadtypes.h (pthread_cond_t): Adapt
	structure.
	* sysdeps/arm/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/ia64/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/m68k/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/microblaze/nptl/bits/pthreadtypes.h (pthread_cond_t):
	Likewise.
	* sysdeps/mips/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/nios2/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/s390/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/sh/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/tile/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/unix/sysv/linux/alpha/bits/pthreadtypes.h (pthread_cond_t):
	Likewise.
	* sysdeps/unix/sysv/linux/powerpc/bits/pthreadtypes.h (pthread_cond_t):
	Likewise.
	* sysdeps/x86/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/nptl/internaltypes.h (COND_NWAITERS_SHIFT): Remove.
	(COND_CLOCK_BITS): Adapt.
	* sysdeps/nptl/pthread.h (PTHREAD_COND_INITIALIZER): Adapt.
	* nptl/pthreadP.h (__PTHREAD_COND_CLOCK_MONOTONIC_MASK,
	__PTHREAD_COND_SHARED_MASK): New.
	* nptl/nptl-printers.py (CLOCK_IDS): Remove.
	(ConditionVariablePrinter, ConditionVariableAttributesPrinter): Adapt.
	* nptl/nptl_lock_constants.pysym: Adapt.
	* nptl/test-cond-printers.py: Adapt.
	* sysdeps/unix/sysv/linux/hppa/internaltypes.h (cond_compat_clear,
	cond_compat_check_and_clear): Adapt.
	* sysdeps/unix/sysv/linux/hppa/pthread_cond_timedwait.c: Remove file ...
	* sysdeps/unix/sysv/linux/hppa/pthread_cond_wait.c
	(__pthread_cond_timedwait): ... and move here.
	* nptl/DESIGN-condvar.txt: Remove file.
	* nptl/lowlevelcond.sym: Likewise.
	* nptl/pthread_cond_timedwait.c: Likewise.
	* sysdeps/unix/sysv/linux/i386/i486/pthread_cond_broadcast.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i486/pthread_cond_signal.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i486/pthread_cond_timedwait.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i486/pthread_cond_wait.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i586/pthread_cond_broadcast.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i586/pthread_cond_signal.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i586/pthread_cond_timedwait.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i586/pthread_cond_wait.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i686/pthread_cond_broadcast.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i686/pthread_cond_signal.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i686/pthread_cond_timedwait.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i686/pthread_cond_wait.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/pthread_cond_broadcast.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/pthread_cond_signal.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S: Likewise.
2016-12-31 14:56:47 +01:00
Joseph Myers
0acb8a2a85 Refactor long double information into bits/long-double.h.
Information about whether the ABI of long double is the same as that
of double is split between bits/mathdef.h and bits/wordsize.h.

When the ABIs are the same, bits/mathdef.h defines
__NO_LONG_DOUBLE_MATH.  In addition, in the case where the same glibc
binary supports both -mlong-double-64 and -mlong-double-128,
bits/wordsize.h defines __LONG_DOUBLE_MATH_OPTIONAL, along with
__NO_LONG_DOUBLE_MATH if this particular compilation is with
-mlong-double-64.

As part of the refactoring I proposed in
<https://sourceware.org/ml/libc-alpha/2016-11/msg00745.html>, this
patch puts all that information in a single header,
bits/long-double.h.  It is included from sys/cdefs.h alongside the
include of bits/wordsize.h, so other headers generally do not need to
include bits/long-double.h directly.

Previously, various bits/mathdef.h headers and bits/wordsize.h headers
had this long double information (including implicitly in some
bits/mathdef.h headers through not having the defines present in the
default version).  After the patch, it's all in six bits/long-double.h
headers.  Furthermore, most of those new headers are not
architecture-specific.  Architectures with optional long double all
use the ldbl-opt sysdeps directory, either in the order (ldbl-64-128,
ldbl-opt, ldbl-128) or (ldbl-128ibm, ldbl-opt).  Thus a generic header
for the case where long double = double, and headers in ldbl-128,
ldbl-96 and ldbl-opt, suffices to cover every architecture except for
cases where long double properties vary between different ABIs sharing
a set of installed headers; fortunately all the ldbl-opt cases share a
single compiler-predefined macro __LONG_DOUBLE_128__ that can be used
to tell whether this compilation is -mlong-double-64 or
-mlong-double-128.

The two cases where a set of headers is shared between ABIs with
different long double properties, MIPS (o32 has long double = double,
other ABIs use ldbl-128) and SPARC (32-bit has optional long double,
64-bit has required long double), need their own bits/long-double.h
headers.

As with bits/wordsize.h, multiple-include protection for this header
is generally implicit through the include guards on sys/cdefs.h, and
multiple inclusion is harmless in any case.  There is one subtlety:
the header must not define __LONG_DOUBLE_MATH_OPTIONAL if
__NO_LONG_DOUBLE_MATH was defined before its inclusion, because doing
so breaks how sysdeps/ieee754/ldbl-opt/nldbl-compat.h defines
__NO_LONG_DOUBLE_MATH itself before including system headers.  Subject
to keeping that working, it would be reasonable to move these macros
from defined/undefined #ifdef to always-defined 1/0 #if semantics, but
this patch does not attempt to do so, just rearranges where the macros
are defined.
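
Concretely, the ldbl-opt variant reduces to the compiler predefine,
with the guard described above (a sketch consistent with this
description; the installed header may differ in formatting):

/* bits/long-double.h, ldbl-opt case (sketch).  -mlong-double-128
   predefines __LONG_DOUBLE_128__.  If __NO_LONG_DOUBLE_MATH is
   already defined (as nldbl-compat.h does), define nothing more.  */
#ifndef __NO_LONG_DOUBLE_MATH
# define __LONG_DOUBLE_MATH_OPTIONAL 1
# ifndef __LONG_DOUBLE_128__
#  define __NO_LONG_DOUBLE_MATH 1
# endif
#endif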

After this patch, the only use of bits/mathdef.h is the alpha one for
modifying complex function ABIs for old GCC.  Thus, all versions of
the header other than the default and alpha versions are removed, as
is the include from math.h.

Tested for x86_64 and x86.  Also did compilation-only testing with
build-many-glibcs.py.

	* bits/long-double.h: New file.
	* sysdeps/ieee754/ldbl-128/bits/long-double.h: Likewise.
	* sysdeps/ieee754/ldbl-96/bits/long-double.h: Likewise.
	* sysdeps/ieee754/ldbl-opt/bits/long-double.h: Likewise.
	* sysdeps/mips/bits/long-double.h: Likewise.
	* sysdeps/unix/sysv/linux/sparc/bits/long-double.h: Likewise.
	* math/Makefile (headers): Add bits/long-double.h.
	* misc/sys/cdefs.h: Include <bits/long-double.h>.
	* stdlib/strtold.c: Include <bits/long-double.h> instead of
	<bits/wordsize.h>.
	* bits/mathdef.h [!_COMPLEX_H]: Do not allow inclusion.
	[!__NO_LONG_DOUBLE_MATH]: Remove conditional code.
	* math/math.h: Do not include <bits/mathdef.h>.
	* sysdeps/aarch64/bits/mathdef.h: Remove file.
	* sysdeps/alpha/bits/mathdef.h [!_COMPLEX_H]: Do not allow
	inclusion.
	* sysdeps/ia64/bits/mathdef.h: Remove file.
	* sysdeps/m68k/m680x0/bits/mathdef.h: Likewise.
	* sysdeps/mips/bits/mathdef.h: Likewise.
	* sysdeps/powerpc/bits/mathdef.h: Likewise.
	* sysdeps/s390/bits/mathdef.h: Likewise.
	* sysdeps/sparc/bits/mathdef.h: Likewise.
	* sysdeps/x86/bits/mathdef.h: Likewise.
	* sysdeps/s390/s390-32/bits/wordsize.h
	[!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]: Remove
	conditional code.
	* sysdeps/s390/s390-64/bits/wordsize.h
	[!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]:
	Likewise.
	* sysdeps/unix/sysv/linux/alpha/bits/wordsize.h
	[!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]:
	Likewise.
	* sysdeps/unix/sysv/linux/powerpc/bits/wordsize.h
	[!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]:
	Likewise.
	* sysdeps/unix/sysv/linux/sparc/bits/wordsize.h
	[!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]:
	Likewise.
2016-12-14 18:27:56 +00:00
Florian Weimer
67aae64512 aarch64: Use explicit offsets in _dl_tlsdesc_dynamic
Commit 389d1f1b23 (“Partial ILP32
support for aarch64”) broke dynamic TLS support because a load
offset changed:

 0000000000000030 <_dl_tlsdesc_dynamic>:
   30:  a9bc7bfd        stp     x29, x30, [sp,#-64]!
   34:  910003fd        mov     x29, sp
   38:  a9020be1        stp     x1, x2, [sp,#32]
   3c:  a90313e3        stp     x3, x4, [sp,#48]
   40:  d53bd044        mrs     x4, tpidr_el0
   44:  c8dffc1f        ldar    xzr, [x0]
   48:  f9400401        ldr     x1, [x0,#8]
   4c:  f9400080        ldr     x0, [x4]
   50:  f9400823        ldr     x3, [x1,#16]
   54:  f9400002        ldr     x2, [x0]
   58:  eb02007f        cmp     x3, x2
   5c:  540001a8        b.hi    90 <_dl_tlsdesc_dynamic+0x60>
   60:  f9400022        ldr     x2, [x1]
   64:  8b021000        add     x0, x0, x2, lsl #4
   68:  f9400000        ldr     x0, [x0]
   6c:  b100041f        cmn     x0, #0x1
   70:  54000100        b.eq    90 <_dl_tlsdesc_dynamic+0x60>
-  74:  f9400421        ldr     x1, [x1,#8]
+  74:  f9400821        ldr     x1, [x1,#16]
   78:  8b010000        add     x0, x0, x1
…

This commit introduces explicit struct offsets, generated
from the C headers, fixing the regression.
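
Expressed in C, the fix derives the assembler constants from the C
layout so the asm tracks struct changes automatically (the layout shown
matches the diff above, but the names and details are assumptions):

#include <stddef.h>

/* Sketch only; the real definitions live in the aarch64 sysdeps.  */
typedef struct
{
  unsigned long int ti_module;
  unsigned long int ti_offset;
} tls_index;

struct tlsdesc_dynamic_arg
{
  tls_index tlsinfo;  /* module and offset for the TLS lookup */
  size_t gen_count;   /* DTV generation the descriptor was built for */
};

/* The assembly then loads gen_count via this constant instead of a
   hard-coded #8 or #16.  */
#define TLSDESC_GEN_COUNT offsetof (struct tlsdesc_dynamic_arg, gen_count)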
2016-12-02 16:52:57 +01:00
Joseph Myers
b2491db6c8 Refactor FP_ILOGB* out of bits/mathdef.h.
Continuing the refactoring of bits/mathdef.h, this patch stops it
defining FP_ILOGB0 and FP_ILOGBNAN, moving the required information to
a new header bits/fp-logb.h.

There are only two possible values of each of those macros permitted
by ISO C.  TS 18661-1 adds corresponding macros for llogb, and their
values are required to correspond to those of the ilogb macros in the
obvious way.  Thus two boolean values - for which the same choices are
correct for most architectures - suffice to determine the value of all
these macros, and by defining macros for those boolean values in
bits/fp-logb.h we can then define the public FP_* macros in math.h and
avoid the present duplication of the associated feature test macro
logic.

This patch duly moves to bits/fp-logb.h defining __FP_LOGB0_IS_MIN and
__FP_LOGBNAN_IS_MIN.  Default definitions of those to 0 are correct
for most architectures, while ia64, m68k and x86 get their own
versions of bits/fp-logb.h to reflect their use of values different
from the defaults.
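
Concretely, the default header and the math.h logic it enables look
roughly like this (a sketch; the two candidate values per macro are the
only ones ISO C permits):

/* bits/fp-logb.h, default version (sketch).  */
#define __FP_LOGB0_IS_MIN 0
#define __FP_LOGBNAN_IS_MIN 0

/* In math.h (sketch), under the usual feature-test guards.  */
#if __FP_LOGB0_IS_MIN
# define FP_ILOGB0 (-2147483647 - 1)    /* INT_MIN */
#else
# define FP_ILOGB0 (-2147483647)        /* -INT_MAX */
#endif
#if __FP_LOGBNAN_IS_MIN
# define FP_ILOGBNAN (-2147483647 - 1)  /* INT_MIN */
#else
# define FP_ILOGBNAN 2147483647         /* INT_MAX */
#endif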

The patch renders many copies of bits/mathdef.h trivial (needed only
to avoid the default __NO_LONG_DOUBLE_MATH).  I'll revise
<https://sourceware.org/ml/libc-alpha/2016-11/msg00865.html>
accordingly so that it removes all bits/mathdef.h headers except the
default one and the alpha one, and arranges for the header to be
included only by complex.h as the only remaining use at that point
will be for the alpha ABI issues there.

Tested for x86_64 and x86.  Also did compile-only testing with
build-many-glibcs.py (using glibc sources from before the commit that
introduced many build failures with undefined __GI___sigsetjmp).

	* bits/fp-logb.h: New file.
	* sysdeps/ia64/bits/fp-logb.h: Likewise.
	* sysdeps/m68k/m680x0/bits/fp-logb.h: Likewise.
	* sysdeps/x86/bits/fp-logb.h: Likewise.
	* math/Makefile (headers): Add bits/fp-logb.h.
	* math/math.h: Include <bits/fp-logb.h>.
	[__USE_ISOC99] (FP_ILOGB0): Define based on __FP_LOGB0_IS_MIN.
	[__USE_ISOC99] (FP_ILOGBNAN): Define based on __FP_LOGBNAN_IS_MIN.
	* bits/mathdef.h (FP_ILOGB0): Remove.
	(FP_ILOGBNAN): Likewise.
	* sysdeps/aarch64/bits/mathdef.h (FP_ILOGB0): Likewise.
	(FP_ILOGBNAN): Likewise.
	* sysdeps/alpha/bits/mathdef.h (FP_ILOGB0): Likewise.
	(FP_ILOGBNAN): Likewise.
	* sysdeps/ia64/bits/mathdef.h (FP_ILOGB0): Likewise.
	(FP_ILOGBNAN): Likewise.
	* sysdeps/m68k/m680x0/bits/mathdef.h (FP_ILOGB0): Likewise.
	(FP_ILOGBNAN): Likewise.
	* sysdeps/mips/bits/mathdef.h (FP_ILOGB0): Likewise.
	(FP_ILOGBNAN): Likewise.
	* sysdeps/powerpc/bits/mathdef.h (FP_ILOGB0): Likewise.
	(FP_ILOGBNAN): Likewise.
	* sysdeps/s390/bits/mathdef.h (FP_ILOGB0): Likewise.
	(FP_ILOGBNAN): Likewise.
	* sysdeps/sparc/bits/mathdef.h (FP_ILOGB0): Likewise.
	(FP_ILOGBNAN): Likewise.
	* sysdeps/x86/bits/mathdef.h (FP_ILOGB0): Likewise.
	(FP_ILOGBNAN): Likewise.
2016-12-01 02:56:55 +00:00
Joseph Myers
f11e220d2d Refactor FP_FAST_* into bits/fp-fast.h.
Continuing the refactoring of bits/mathdef.h, this patch moves the
FP_FAST_* definitions into a new bits/fp-fast.h header.  Currently
this is only for FP_FAST_FMA*, but in future it would be the
appropriate place for the FP_FAST_* macros from TS 18661-1 as well.

The generic bits/mathdef.h header defines these macros based on
whether the compiler defines __FP_FAST_*.  Most architecture-specific
headers, however, fail to do so, meaning that if the architecture (or
some particular processors) does in fact have fused operations, and
GCC knows to use them inline, the FP_FAST_* macros will still not be
defined.

By refactoring, this patch causes the generic version (based on
__FP_FAST_*) to be used in more cases, and so the macro definitions to
be more accurate.  Architectures that already defined some or all of
these macros other than based on the predefines have their own
versions of fp-fast.h, which are arranged so they define FP_FAST_* if
either the architecture-specific conditions are true or __FP_FAST_*
are defined.
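
The generic header then reduces to forwarding the compiler's predefines
(a sketch consistent with the description above):

/* bits/fp-fast.h, generic version (sketch): define FP_FAST_* iff the
   compiler asserts the corresponding fma is as fast as a multiply
   plus an add.  */
#ifdef __FP_FAST_FMA
# define FP_FAST_FMA 1
#endif
#ifdef __FP_FAST_FMAF
# define FP_FAST_FMAF 1
#endif
#ifdef __FP_FAST_FMAL
# define FP_FAST_FMAL 1
#endif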

After this refactoring, various bits/mathdef.h headers for
architectures with long double = double are semantically identical to
the generic version.  The patch removes those headers that are
redundant.  (In fact two of the four removed were already redundant
before this patch because they did use __FP_FAST_*.)

Tested for x86_64 and x86, and compilation-only with
build-many-glibcs.py.

	* bits/fp-fast.h: New file.
	* sysdeps/aarch64/bits/fp-fast.h: Likewise.
	* sysdeps/powerpc/bits/fp-fast.h: Likewise.
	* math/Makefile (headers): Add bits/fp-fast.h.
	* math/math.h: Include <bits/fp-fast.h>.
	* bits/mathdef.h (FP_FAST_FMA): Remove.
	(FP_FAST_FMAF): Likewise.
	(FP_FAST_FMAL): Likewise.
	* sysdeps/aarch64/bits/mathdef.h (FP_FAST_FMA): Likewise.
	(FP_FAST_FMAF): Likewise.
	* sysdeps/powerpc/bits/mathdef.h (FP_FAST_FMA): Likewise.
	(FP_FAST_FMAF): Likewise.
	* sysdeps/x86/bits/mathdef.h (FP_FAST_FMA): Likewise.
	(FP_FAST_FMAF): Likewise.
	(FP_FAST_FMAL): Likewise.
	* sysdeps/arm/bits/mathdef.h: Remove file.
	* sysdeps/hppa/fpu/bits/mathdef.h: Likewise.
	* sysdeps/sh/sh4/bits/mathdef.h: Likewise.
	* sysdeps/tile/bits/mathdef.h: Likewise.
2016-11-29 01:45:00 +00:00
Steve Ellcey
389d1f1b23 Partial ILP32 support for aarch64.
* sysdeps/aarch64/crti.S: Add include of sysdep.h.
	(call_weak_fn): Use PTR_REG to get correct reg name in ILP32.
	* sysdeps/aarch64/dl-irel.h: Add include of sysdep.h.
	(elf_irela): Use AARCH64_R macro to get correct relocation in ILP32.
	* sysdeps/aarch64/dl-machine.h: Add include of sysdep.h.
	(elf_machine_load_address, RTLD_START, RTLD_START_1, RTLD_START,
	elf_machine_type_class, ELF_MACHINE_JMP_SLOT, elf_machine_rela,
	elf_machine_lazy_rel): Add ifdef's for ILP32 support.
	* sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_return,
	_dl_tlsdesc_return_lazy, _dl_tlsdesc_dynamic,
	_dl_tlsdesc_resolve_hold): Extend pointers in ILP32, use PTR_REG
	to get correct reg name for ILP32.
	* sysdeps/aarch64/dl-trampoline.S (ip01): New Macro.
	(RELA_SIZE): New Macro.
	(_dl_runtime_resolve, _dl_runtime_profile): Use new macros and PTR_REG
	to support ILP32.
	* sysdeps/aarch64/jmpbuf-unwind.h (_JMPBUF_CFA_UNWINDS_ADJ): Add
	cast for ILP32 mode.
	* sysdeps/aarch64/memcmp.S (memcmp): Extend arg pointers for ILP32 mode.
	* sysdeps/aarch64/memcpy.S (memmove, memcpy): Ditto.
	* sysdeps/aarch64/memset.S (__memset): Ditto.
	* sysdeps/aarch64/strchr.S (strchr): Ditto.
	* sysdeps/aarch64/strchrnul.S (__strchrnul): Ditto.
	* sysdeps/aarch64/strcmp.S (strcmp): Ditto.
	* sysdeps/aarch64/strcpy.S (strcpy): Ditto.
	* sysdeps/aarch64/strlen.S (__strlen): Ditto.
	* sysdeps/aarch64/strncmp.S (strncmp): Ditto.
	* sysdeps/aarch64/strnlen.S (strnlen): Ditto.
	* sysdeps/aarch64/strrchr.S (strrchr): Ditto.
	* sysdeps/unix/sysv/linux/aarch64/clone.S: Ditto.
	* sysdeps/unix/sysv/linux/aarch64/setcontext.S (__setcontext): Ditto.
	* sysdeps/unix/sysv/linux/aarch64/swapcontext.S (__swapcontext): Ditto.
	* sysdeps/aarch64/__longjmp.S (__longjmp): Extend pointers in ILP32,
	change PTR_MANGLE call to use register numbers instead of names.
	* sysdeps/unix/sysv/linux/aarch64/getcontext.S (__getcontext): Ditto.
	* sysdeps/aarch64/setjmp.S (__sigsetjmp): Extend arg pointers for
	ILP32 mode, change PTR_MANGLE calls to use register numbers.
	* sysdeps/aarch64/start.S (_start): Ditto.
	* sysdeps/aarch64/nptl/bits/pthreadtypes.h
	(__PTHREAD_RWLOCK_INT_FLAGS_SHARED): New define.
	(__SIZEOF_PTHREAD_ATTR_T, __SIZEOF_PTHREAD_MUTEX_T,
	__SIZEOF_PTHREAD_MUTEXATTR_T, __SIZEOF_PTHREAD_COND_T,
	__SIZEOF_PTHREAD_COND_COMPAT_T, __SIZEOF_PTHREAD_CONDATTR_T,
	__SIZEOF_PTHREAD_RWLOCK_T, __SIZEOF_PTHREAD_RWLOCKATTR_T,
	__SIZEOF_PTHREAD_BARRIER_T, __SIZEOF_PTHREAD_BARRIERATTR_T):
	Make defined values dependent on __ILP32__.
	* sysdeps/aarch64/nptl/bits/semaphore.h (__SIZEOF_SEM_T): Change define.
	(sem_t): Change __align type.
	* sysdeps/aarch64/sysdep.h (AARCH64_R, PTR_REG, PTR_LOG_SIZE, DELOUSE,
	PTR_SIZE): New Macros (see the sketch after this list).
	(LDST_PCREL, LDST_GLOBAL) Update to use PTR_REG.
	* sysdeps/unix/sysv/linux/aarch64/bits/fcntl.h (O_LARGEFILE):
	Set when in ILP32 mode.
	(F_GETLK64, F_SETLK64, F_SETLKW64): Only set in LP64 mode.
	* sysdeps/unix/sysv/linux/aarch64/dl-cache.h (DL_CACHE_DEFAULT_ID):
	Set elf flags for ILP32.
	(add_system_dir): Set ILP32 library directories.
	* sysdeps/unix/sysv/linux/aarch64/init-first.c
	(_libc_vdso_platform_setup): Set minimum kernel version for ILP32.
	* sysdeps/unix/sysv/linux/aarch64/ldconfig.h
	(SYSDEP_KNOWN_INTERPRETER_NAMES): Add ILP32 names.
	* sysdeps/unix/sysv/linux/aarch64/sigcontextinfo.h (GET_PC, SET_PC):
	New Macros.
	* sysdeps/unix/sysv/linux/aarch64/sysdep.h: Handle ILP32 pointers.
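
As a rough illustration of the mechanism (a hypothetical sketch of the new
sysdeps/aarch64/sysdep.h macros, not their literal definitions):

  #ifdef __ILP32__
  # define AARCH64_R(NAME)  R_AARCH64_P32_ ## NAME
  # define PTR_REG(n)       w##n            /* pointers use 32-bit W regs */
  # define PTR_LOG_SIZE     2               /* log2 (sizeof (void *)) */
  # define DELOUSE(n)       mov w##n, w##n  /* zero-extend a pointer arg */
  #else
  # define AARCH64_R(NAME)  R_AARCH64_ ## NAME
  # define PTR_REG(n)       x##n
  # define PTR_LOG_SIZE     3
  # define DELOUSE(n)
  #endif
  #define PTR_SIZE          (1 << PTR_LOG_SIZE)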
2016-11-28 09:01:23 -08:00
Adhemerval Zanella
c579f48edb Remove cached PID/TID in clone
This patch removes the PID cache and its usage from current GLIBC code.  The
cache is mainly a performance optimization to avoid the getpid syscall,
but it adds some issues:

  - The exposed clone syscall will try to set pid/tid to make the new
    thread somewhat compatible with current GLIBC assumptions.  This causes
    a set of issues with new workloads and use cases (such as BZ#17214 and
    [1]), as well as with new internal usage of clone to optimize other
    algorithms (such as clone plus CLONE_VM for posix_spawn, BZ#19957).

  - The caching complexity also introduced some bugs in the past [2] [3] and
    requires extra effort from each port to handle the requirements (for
    both the clone and vfork implementations).

  - The performance gain from caching is mainly on getpid and some specific
    code paths.  The getpid performance leverage is questionable [4], both
    as to whether getpid is really a hotspot and as to the getpid
    implementation itself (if it were indeed a justifiable hotspot, a vDSO
    symbol would allow a much simpler solution).

    The other usage is mainly on unusual code paths, such as pthread
    cancellation signal handling.

For thread creation (stack allocation) the code simplification in fact
adds a performance gain, since there is no longer any need to traverse the
stack cache and invalidate each element's pid.

Other thread usages now require a direct getpid syscall, such as the
cancellation/setxid signal, thread cancellation, the thread failure path (at
create_thread), and thread signaling (pthread_kill and pthread_sigqueue).
However, these are hardly hotspots, and I think adding a syscall there is
justifiable.
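
For illustration, on those paths the change amounts to issuing the syscall
directly instead of reading a cached field from the thread descriptor (a
minimal portable sketch, not the internal macro actually used):

  #include <sys/syscall.h>
  #include <unistd.h>

  /* Sketch: obtain the PID with a direct syscall; no cache involved.  */
  static pid_t
  get_pid_direct (void)
  {
    return (pid_t) syscall (SYS_getpid);
  }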

It also simplifies the arch-specific clone and vfork implementations.
Reviewing each fork implementation also revealed some discrepancies that
this patch solves:

  - microblaze clone/vfork does not set/reset the pid/tid field.
  - hppa uses the default vfork implementation, which falls back to fork.
    Since vfork is deprecated, I do not think we should bother with it.

The patch also removes the TID caching in clone.  My understanding of this
semantic is that it tries to provide some pthread usability after a user
program issues clone directly (as done by thread creation with
CLONE_PARENT_SETTID and the pthread tid member).  However, as stated before
in multiple discussion threads, GLIBC provides the clone syscall without
further supporting all of these semantics.

I ran a full make check on x86_64, x32, i686, armhf, aarch64, and powerpc64le.
For sparc32, sparc64, and mips I ran the basic fork and vfork tests from the
posix/ folder (on a qemu system).  Further testing is still required on
alpha, hppa, ia64, m68k, nios2, s390, sh, and tile (I excluded microblaze
because it already implements the semantics this patch establishes for
clone/vfork).

[1] https://codereview.chromium.org/800183004/
[2] https://sourceware.org/ml/libc-alpha/2006-07/msg00123.html
[3] https://sourceware.org/bugzilla/show_bug.cgi?id=15368
[4] http://yarchive.net/comp/linux/getpid_caching.html

	* sysdeps/nptl/fork.c (__libc_fork): Remove pid cache setting.
	* nptl/allocatestack.c (allocate_stack): Likewise.
	(__reclaim_stacks): Likewise.
	(setxid_signal_thread): Obtain pid through syscall.
	* nptl/nptl-init.c (sigcancel_handler): Likewise.
	(sighandle_setxid): Likewise.
	* nptl/pthread_cancel.c (pthread_cancel): Likewise.
	* sysdeps/unix/sysv/linux/pthread_kill.c (__pthread_kill): Likewise.
	* sysdeps/unix/sysv/linux/pthread_sigqueue.c (pthread_sigqueue):
	Likewise.
	* sysdeps/unix/sysv/linux/createthread.c (create_thread): Likewise.
	* sysdeps/unix/sysv/linux/getpid.c: Remove file.
	* nptl/descr.h (struct pthread): Change comment about pid value.
	* nptl/pthread_getattr_np.c (pthread_getattr_np): Remove thread
	pid assert.
	* sysdeps/unix/sysv/linux/pthread-pids.h (__pthread_initialize_pids):
	Do not set pid value.
	* nptl_db/td_ta_thr_iter.c (iterate_thread_list): Remove thread
	pid cache check.
	* nptl_db/td_thr_validate.c (td_thr_validate): Likewise.
	* sysdeps/aarch64/nptl/tcb-offsets.sym: Remove pid offset.
	* sysdeps/alpha/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/arm/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/hppa/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/i386/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/ia64/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/m68k/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/microblaze/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/mips/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/nios2/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/powerpc/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/s390/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/sh/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/sparc/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/tile/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/x86_64/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/unix/sysv/linux/aarch64/clone.S: Remove pid and tid caching.
	* sysdeps/unix/sysv/linux/alpha/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/arm/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/hppa/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/ia64/clone2.S: Likewise.
	* sysdeps/unix/sysv/linux/mips/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/nios2/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-32/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/sh/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc32/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/tile/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/aarch64/vfork.S: Remove pid set and reset.
	* sysdeps/unix/sysv/linux/alpha/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/arm/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/ia64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/m68k/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/m68k/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/mips/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/nios2/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-32/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/sh/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc32/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/tile/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/tst-clone2.c (f): Remove direct pthread
	struct access.
	(clone_test): Remove function.
	(do_test): Rewrite to take in consideration pid is not cached anymore.
2016-11-24 19:38:51 -02:00
Joseph Myers
93eb85ceb2 Refactor float_t, double_t information into bits/flt-eval-method.h.
At present, definitions of float_t and double_t are split among many
bits/mathdef.h headers.

For all but three architectures, these types are float and double.
Furthermore, if you assume __FLT_EVAL_METHOD__ to be defined, that
provides a more generic way of determining the correct values of these
typedefs.  Defining these typedefs more generally based on
__FLT_EVAL_METHOD__ was previously proposed by Paul Eggert in
<https://sourceware.org/ml/libc-alpha/2012-02/msg00002.html>.

This patch refactors things in the way I proposed in
<https://sourceware.org/ml/libc-alpha/2016-11/msg00745.html>.  A new
header bits/flt-eval-method.h defines a single macro,
__GLIBC_FLT_EVAL_METHOD, which is then used by math.h to define
float_t and double_t.  The default is based on __FLT_EVAL_METHOD__
(although actually a default to 0 would have the same effect for
current ports, because ports where values other than 0 or 16 are
possible all have their own headers).

To avoid changing the existing semantics in any case, including for
compilers not defining __FLT_EVAL_METHOD__, architecture-specific
files are then added for m68k, s390, x86 which replicate the existing
semantics.  At least with __FLT_EVAL_METHOD__ values possible with
GCC, there should be no change to the choices of float_t and double_t
for any supported configuration.
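
A sketch of the resulting scheme (assuming the generic header simply
forwards the compiler's macro with a fallback; the installed headers may
differ in detail):

  /* bits/flt-eval-method.h (generic version, sketch).  */
  #ifdef __FLT_EVAL_METHOD__
  # if __FLT_EVAL_METHOD__ == -1
  #  define __GLIBC_FLT_EVAL_METHOD 2
  # else
  #  define __GLIBC_FLT_EVAL_METHOD __FLT_EVAL_METHOD__
  # endif
  #else
  # define __GLIBC_FLT_EVAL_METHOD 0
  #endif

  /* math.h (sketch of the consumer).  */
  #if __GLIBC_FLT_EVAL_METHOD == 0 || __GLIBC_FLT_EVAL_METHOD == 16
  typedef float float_t;
  typedef double double_t;
  #elif __GLIBC_FLT_EVAL_METHOD == 1
  typedef double float_t;
  typedef double double_t;
  #elif __GLIBC_FLT_EVAL_METHOD == 2
  typedef long double float_t;
  typedef long double double_t;
  #endif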

Architecture maintainer notes:

* m68k: sysdeps/m68k/m680x0/bits/flt-eval-method.h always defines
  __GLIBC_FLT_EVAL_METHOD to 2 to replicate the existing logic.  But
  actually GCC defines __FLT_EVAL_METHOD__ to 0 if TARGET_68040.  It
  might make sense to make the header prefer to base things on
  __FLT_EVAL_METHOD__ if defined, like the x86 version, and so make
  the choices of these types more accurate (with a NEWS entry as for
  the other changes to these types on particular architectures).

* s390: sysdeps/s390/bits/flt-eval-method.h always defines
  __GLIBC_FLT_EVAL_METHOD to 1 to replicate the existing logic.  As
  previously discussed, it might make sense in coordination with GCC
  to eliminate the historic mistake, avoid excess precision in the
  -fexcess-precision=standard case and make the typedefs match (with a
  NEWS entry, again).

Tested for x86-64 and x86.  Also did compilation-only testing with
build-many-glibcs.py.

	* bits/flt-eval-method.h: New file.
	* sysdeps/m68k/m680x0/bits/flt-eval-method.h: Likewise.
	* sysdeps/s390/bits/flt-eval-method.h: Likewise.
	* sysdeps/x86/bits/flt-eval-method.h: Likewise.
	* math/Makefile (headers): Add bits/flt-eval-method.h.
	* math/math.h: Include <bits/flt-eval-method.h>.
	[__USE_ISOC99] (float_t): Define based on __GLIBC_FLT_EVAL_METHOD.
	[__USE_ISOC99] (double_t): Likewise.
	* bits/mathdef.h (float_t): Remove.
	(double_t): Likewise.
	* sysdeps/aarch64/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/alpha/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/arm/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/hppa/fpu/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/ia64/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/m68k/m680x0/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/mips/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/powerpc/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/s390/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/sh/sh4/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/sparc/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/tile/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/x86/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
2016-11-24 18:44:50 +00:00
Siddhesh Poyarekar
8f3a4687ad Regenerate ULPs for aarch64
* sysdeps/aarch64/libm-test-ulps: Regenerated.
2016-11-10 16:52:35 +05:30
Florian Weimer
c74940f2a7 nptl: Document the reason why __kind in pthread_mutex_t is part of the ABI 2016-11-07 20:24:32 +01:00
Joseph Myers
799131036e Do not hardcode platform names in manual/libm-err-tab.pl (bug 14139).
manual/libm-err-tab.pl hardcodes a list of names for particular
platforms (mapping from sysdeps directory name to friendly name for
the manual).  This goes against the principle of keeping information
about individual platforms in their corresponding sysdeps directory,
and the list is also very out-of-date regarding supported platforms
and their corresponding sysdeps directories.

This patch fixes this by adding a libm-test-ulps-name file alongside
each libm-test-ulps file.  The script then gets the friendly name from
that file, which is required to exist, so it no longer needs to allow
for the mapping being missing.
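
Each libm-test-ulps-name file consists of nothing but the friendly platform
name on a single line, so the aarch64 file would presumably contain just:

  AArch64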

Tested for x86_64.

	[BZ #14139]
	* manual/libm-err-tab.pl (%pplatforms): Initialize to empty.
	(find_files): Obtain platform name from libm-test-ulps-name and
	store in %pplatforms.
	(canonicalize_platform): Remove.
	(print_platforms): Use $pplatforms directly.
	(by_platforms): Do not allow for platforms missing from
	%pplatforms.
	* sysdeps/aarch64/libm-test-ulps-name: New file.
	* sysdeps/alpha/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/arm/libm-test-ulps-name: Likewise.
	* sysdeps/generic/libm-test-ulps-name: Likewise.
	* sysdeps/hppa/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/i386/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps-name: Likewise.
	* sysdeps/ia64/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/m68k/coldfire/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/m68k/m680x0/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/microblaze/libm-test-ulps-name: Likewise.
	* sysdeps/mips/mips32/libm-test-ulps-name: Likewise.
	* sysdeps/mips/mips64/libm-test-ulps-name: Likewise.
	* sysdeps/nios2/libm-test-ulps-name: Likewise.
	* sysdeps/powerpc/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/powerpc/nofpu/libm-test-ulps-name: Likewise.
	* sysdeps/s390/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/sh/libm-test-ulps-name: Likewise.
	* sysdeps/sparc/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/tile/libm-test-ulps-name: Likewise.
	* sysdeps/x86_64/fpu/libm-test-ulps-name: Likewise.
2016-11-04 16:49:06 +00:00
Steve Ellcey
d060cd002d Define wordsize.h macros everywhere
* bits/wordsize.h: Add documentation.
	* sysdeps/aarch64/bits/wordsize.h: New file.
	* sysdeps/generic/stdint.h (PTRDIFF_MIN, PTRDIFF_MAX): Update
	definitions.
	(SIZE_MAX): Change ifdef to if in __WORDSIZE32_SIZE_ULONG check.
	* sysdeps/gnu/bits/utmp.h (__WORDSIZE_TIME64_COMPAT32): Check
	with #if instead of #ifdef.
	* sysdeps/gnu/bits/utmpx.h (__WORDSIZE_TIME64_COMPAT32): Ditto.
	* sysdeps/mips/bits/wordsize.h (__WORDSIZE32_SIZE_ULONG,
	__WORDSIZE32_PTRDIFF_LONG, __WORDSIZE_TIME64_COMPAT32):
	Add or change defines.
	* sysdeps/powerpc/powerpc32/bits/wordsize.h: Likewise.
	* sysdeps/powerpc/powerpc64/bits/wordsize.h: Likewise.
	* sysdeps/s390/s390-32/bits/wordsize.h: Likewise.
	* sysdeps/s390/s390-64/bits/wordsize.h: Likewise.
	* sysdeps/sparc/sparc32/bits/wordsize.h: Likewise.
	* sysdeps/sparc/sparc64/bits/wordsize.h: Likewise.
	* sysdeps/tile/tilegx/bits/wordsize.h: Likewise.
	* sysdeps/tile/tilepro/bits/wordsize.h: Likewise.
	* sysdeps/unix/sysv/linux/alpha/bits/wordsize.h: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/bits/wordsize.h: Likewise.
	* sysdeps/unix/sysv/linux/sparc/bits/wordsize.h: Likewise.
	* sysdeps/wordsize-32/bits/wordsize.h: Likewise.
	* sysdeps/wordsize-64/bits/wordsize.h: Likewise.
	* sysdeps/x86/bits/wordsize.h: Likewise.
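
By way of illustration only (hypothetical values, not the committed header),
such a per-architecture bits/wordsize.h spells out every macro the generic
code now tests with #if:

  #ifdef __LP64__
  # define __WORDSIZE 64
  #else
  # define __WORDSIZE 32
  # define __WORDSIZE32_SIZE_ULONG    1  /* size_t is unsigned long */
  # define __WORDSIZE32_PTRDIFF_LONG  1  /* ptrdiff_t is long */
  #endif
  #define __WORDSIZE_TIME64_COMPAT32  0  /* no 32-bit time_t compat */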
2016-11-04 09:37:44 -07:00
Wilco Dijkstra
95e431cc73 An optimized memchr was missing for AArch64. This version is similar to
strchr and is significantly faster than the C version.

2016-11-04  Wilco Dijkstra  <wdijkstr@arm.com>
	    Kevin Petit  <kevin.petit@arm.com>

	* sysdeps/aarch64/memchr.S (__memchr): New file.
2016-11-04 14:37:10 +00:00
Joseph Myers
1396c647a9 Add femode_t functions: aarch64.
This patch adds AArch64 versions of fegetmode and fesetmode.
Untested.

	* sysdeps/aarch64/fpu/fegetmode.c: New file.
	* sysdeps/aarch64/fpu/fesetmode.c: Likewise.
2016-09-07 16:41:20 +00:00
Joseph Myers
ec94343f59 Add femode_t functions.
TS 18661-1 defines a type femode_t to represent the set of dynamic
floating-point control modes (such as the rounding mode and trap
enablement modes), and functions fegetmode and fesetmode to manipulate
those modes (without affecting other state such as the raised
exception flags) and a corresponding macro FE_DFL_MODE.

This patch series implements those interfaces for glibc.  This first
patch adds the architecture-independent pieces, the x86 and x86_64
implementations, and the <bits/fenv.h> and ABI baseline updates for
all architectures so glibc keeps building and passing the ABI tests on
all architectures.  Subsequent patches add the fegetmode and fesetmode
implementations for other architectures.

femode_t is generally an integer type - the same type as fenv_t, or as
the single element of fenv_t where fenv_t is a structure containing a
single integer (or the single relevant element, where it has elements
for both status and control registers) - except where architecture
properties or consistency with the fenv_t implementation indicate
otherwise.  FE_DFL_MODE follows FE_DFL_ENV in whether it's a magic
pointer value (-1 cast to const femode_t *), a value that can be
distinguished from valid pointers by its high bits but otherwise
contains a representation of the desired register contents, or a
pointer to a constant variable (the powerpc case; __fe_dfl_mode is
added as an exported constant object, an alias to __fe_dfl_env).

Note that where architectures (that share a register between control
and status bits) gain definitions of new floating-point control or
status bits in future, the implementations of fesetmode for those
architectures may need updating (depending on whether the new bits are
control or status bits and what the implementation does with
previously unknown bits), just like existing implementations of
<fenv.h> functions that take care not to touch reserved bits may need
updating when the set of reserved bits changes.  (As any new bits are
outside the scope of ISO C, that's just a quality-of-implementation
issue for supporting them, not a conformance issue.)

As with fenv_t, femode_t should properly include any software DFP
rounding mode (and for both fenv_t and femode_t I'd consider that
fragment of DFP support appropriate for inclusion in glibc even in the
absence of the rest of libdfp; hardware DFP rounding modes should
already be included if the definitions of which bits are status /
control bits are correct).

Tested for x86_64, x86, mips64 (hard float, and soft float to test the
fallback version), arm (hard float) and powerpc (hard float, soft
float and e500).  Other architecture versions are untested.
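
As a usage sketch of the new interfaces (a hypothetical caller, not code
from this patch):

  #include <fenv.h>

  /* Run FN with round-upward in effect, then restore the previous
     dynamic control modes.  Exception flags raised by FN stay raised,
     which is the point of fesetmode as opposed to fesetenv.  */
  static void
  call_with_upward_rounding (void (*fn) (void))
  {
    femode_t saved;
    fegetmode (&saved);
    fesetround (FE_UPWARD);
    fn ();
    fesetmode (&saved);
  }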

	* math/fegetmode.c: New file.
	* math/fesetmode.c: Likewise.
	* sysdeps/i386/fpu/fegetmode.c: Likewise.
	* sysdeps/i386/fpu/fesetmode.c: Likewise.
	* sysdeps/x86_64/fpu/fegetmode.c: Likewise.
	* sysdeps/x86_64/fpu/fesetmode.c: Likewise.
	* math/fenv.h: Update comment on inclusion of <bits/fenv.h>.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (fegetmode): New function
	declaration.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (fesetmode): Likewise.
	* bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (femode_t): New
	typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/aarch64/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/alpha/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/arm/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/hppa/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/ia64/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/m68k/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/microblaze/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/mips/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/nios2/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/powerpc/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (__fe_dfl_mode): New variable
	declaration.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/s390/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/sh/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/sparc/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/tile/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/x86/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* manual/arith.texi (FE_DFL_MODE): Document macro.
	(fegetmode): Document function.
	(fesetmode): Likewise.
	* math/Versions (fegetmode): New libm symbol at version
	GLIBC_2.25.
	(fesetmode): Likewise.
	* math/Makefile (libm-support): Add fegetmode and fesetmode.
	(tests): Add test-femode and test-femode-traps.
	* math/test-femode-traps.c: New file.
	* math/test-femode.c: Likewise.
	* sysdeps/powerpc/fpu/fenv_const.c (__fe_dfl_mode): Declare as
	alias for __fe_dfl_env.
	* sysdeps/powerpc/nofpu/fenv_const.c (__fe_dfl_mode): Likewise.
	* sysdeps/powerpc/powerpc32/e500/nofpu/fenv_const.c
	(__fe_dfl_mode): Likewise.
	* sysdeps/powerpc/Versions (__fe_dfl_mode): New libm symbol at
	version GLIBC_2.25.
	* sysdeps/nacl/libm.abilist: Update.
	* sysdeps/unix/sysv/linux/aarch64/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/alpha/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/arm/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/hppa/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/i386/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/ia64/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/m68k/coldfire/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/m68k/m680x0/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/microblaze/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/mips/mips32/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/mips/mips64/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/nios2/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/fpu/libm.abilist:
	Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/nofpu/libm.abilist:
	Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc64/libm-le.abilist:
	Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc64/libm.abilist:
	Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-32/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-64/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/sh/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc32/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc64/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/tile/tilegx/tilegx32/libm.abilist:
	Likewise.
	* sysdeps/unix/sysv/linux/tile/tilegx/tilegx64/libm.abilist:
	Likewise.
	* sysdeps/unix/sysv/linux/tile/tilepro/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/64/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/x32/libm.abilist: Likewise.
2016-09-07 16:40:09 +00:00
Paul E. Murphy
847c9161c7 Make common fmax implementation generic.
Also update aarch64 to ensure the correct s_fmin.c is included.
The include order favors including the generated copy.
2016-09-01 09:31:05 -05:00
Joseph Myers
ce99c0816b Add fesetexcept: aarch64.
This patch adds an AArch64 version of fesetexcept.  Untested.

	* sysdeps/aarch64/fpu/fesetexcept.c: New file.
2016-08-16 16:16:57 +00:00
Szabolcs Nagy
d637e923f9 [AArch64] Update libm-test-ulps
This partly reverts commit f8238ae3c7
that regenerated the ulps, to make the max ulps good for gcc-5,
gcc-6 and gcc-trunk as well.

	* sysdeps/aarch64/libm-test-ulps: Updated.
2016-07-21 09:48:45 +01:00
Szabolcs Nagy
f8238ae3c7 [AArch64] Regenerate libm-test-ulps
* sysdeps/aarch64/libm-test-ulps: Regenerated.
2016-07-18 11:42:52 +01:00
Szabolcs Nagy
efbe665c3a [AArch64] Fix libc internal asm profiling code
When glibc is built with --enable-profile, the ENTRY of
asm functions includes CALL_MCOUNT for profiling.
(This matters for binaries statically linked against libc_p.a.)

CALL_MCOUNT did not save/restore argument registers
around the _mcount call so it clobbered them.
(It would be enough to save/restore only the arguments passed
to a given asm function, but that would require too many asm
changes, so it is simpler to always save all argument
registers in this macro.)

Float args are not saved: mcount does not clobber the
float regs, and currently no asm function takes float
arguments anyway.

	[BZ #18707]
	* sysdeps/aarch64/Makefile (CFLAGS-mcount.c): Add -mgeneral-regs-only.
	* sysdeps/aarch64/sysdep.h (CALL_MCOUNT): Save argument registers.
2016-07-11 09:50:41 +01:00
Torvald Riegel
76a0b73e81 Remove atomic_compare_and_exchange_bool_rel.
atomic_compare_and_exchange_bool_rel and
catomic_compare_and_exchange_bool_rel are removed and replaced with the
new C11-like atomic_compare_exchange_weak_release.  The concurrent code
in nscd/cache.c has not been reviewed yet, so this patch does not add
detailed comments.
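
The replacement pattern is roughly the following (a sketch using glibc's
C11-like atomic names; MEM, NEWVAL and the loop body are placeholders):

  /* The removed interface returned nonzero on *failure*; the C11-like
     one returns true on success and refreshes EXPECTED on failure.  */
  int expected = atomic_load_relaxed (mem);
  while (!atomic_compare_exchange_weak_release (mem, &expected, newval))
    {
      /* EXPECTED now holds the current value of *MEM; recompute
         NEWVAL if necessary and retry.  */
    }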

	* nscd/cache.c (cache_add): Use new C11-like atomic operation instead
	of atomic_compare_and_exchange_bool_rel.
	* nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_full): Likewise.
	* include/atomic.h (atomic_compare_and_exchange_bool_rel,
	catomic_compare_and_exchange_bool_rel): Remove.
	* sysdeps/aarch64/atomic-machine.h
	(atomic_compare_and_exchange_bool_rel): Likewise.
	* sysdeps/alpha/atomic-machine.h
	(atomic_compare_and_exchange_bool_rel): Likewise.
	* sysdeps/arm/atomic-machine.h
	(atomic_compare_and_exchange_bool_rel): Likewise.
	* sysdeps/mips/atomic-machine.h
	(atomic_compare_and_exchange_bool_rel): Likewise.
	* sysdeps/tile/atomic-machine.h
	(atomic_compare_and_exchange_bool_rel): Likewise.
2016-06-24 23:04:40 +03:00
Wilco Dijkstra
a024b39a4e This patch further tunes memcpy: avoid one branch for sizes 1-3,
add a prefetch, and improve small copies that are exact powers of 2.

        * sysdeps/aarch64/memcpy.S (memcpy):
        Further tuning for performance.
2016-06-22 13:24:24 +01:00
Wilco Dijkstra
58ec4fb881 Add a simple rawmemchr implementation. Use strlen for rawmemchr(s, '\0') as it
is the fastest way to search for '\0'.  Otherwise use memchr with an infinite
size.  This is 3x faster on benchtests for large sizes.  Passes GLIBC tests.
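
A plain C sketch of the described approach (the committed version is
AArch64 assembly):

  #include <string.h>

  void *
  rawmemchr_sketch (const void *s, int c)
  {
    /* Searching for the terminating nul is exactly strlen.  */
    if ((unsigned char) c == '\0')
      return (char *) s + strlen (s);
    /* Otherwise rawmemchr is memchr with an unbounded size.  */
    return memchr (s, c, (size_t) -1);
  }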

	* sysdeps/aarch64/rawmemchr.S (__rawmemchr): New file.
	* sysdeps/aarch64/strlen.S (__strlen): Change to __strlen to avoid PLT.
2016-06-20 17:48:20 +01:00
Wilco Dijkstra
b998e16e71 This is an optimized memcpy/memmove for AArch64. Copies are split into 3 main
cases: small copies of up to 16 bytes, medium copies of 17..96 bytes which are
fully unrolled.  Large copies of more than 96 bytes align the destination and
use an unrolled loop processing 64 bytes per iteration.  In order to share code
with memmove, small and medium copies read all data before writing, allowing
any kind of overlap.  All memmoves except for the large backwards case fall
into memcpy for optimal performance.  On a random copy test memcpy/memmove are
40% faster on Cortex-A57 and 28% on Cortex-A53.
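
To illustrate the overlap trick used for the small and medium cases (a C
sketch of the idea, not the assembly):

  #include <stdint.h>
  #include <string.h>

  /* Copy 8..16 bytes: load head and tail *before* storing anything,
     so the copy is correct for any overlap of SRC and DST.  */
  static void
  copy_8_to_16 (char *dst, const char *src, size_t n)
  {
    uint64_t head, tail;
    memcpy (&head, src, 8);
    memcpy (&tail, src + n - 8, 8);   /* may overlap HEAD's bytes */
    memcpy (dst, &head, 8);
    memcpy (dst + n - 8, &tail, 8);
  }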

	* sysdeps/aarch64/memcpy.S (memcpy):
	Rewrite of optimized memcpy and memmove.
	* sysdeps/aarch64/memmove.S (memmove): Remove
	memmove code (merged into memcpy.S).
2016-06-20 17:41:33 +01:00
Florian Weimer
aca1daef29 elf: Consolidate machine-agnostic DTV definitions in <dl-dtv.h>
Identical definitions of dtv_t and TLS_DTV_UNALLOCATED were
repeated for all architectures using DTVs.
2016-06-20 14:31:40 +02:00
Wilco Dijkstra
a8c5a2a952 This is an optimized memset for AArch64. Memset is split into 4 main cases:
small sets of up to 16 bytes, medium of 16..96 bytes which are fully unrolled.
Large memsets of more than 96 bytes align the destination and use an unrolled
loop processing 64 bytes per iteration.  Memsets of zero of more than 256 use
the dc zva instruction, and there are faster versions for the common ZVA sizes
64 or 128.  STP of Q registers is used to reduce codesize without loss of
performance.

The speedup on test-memset is 1% on Cortex-A57 and 8% on Cortex-A53.
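
For reference, zeroing with the data cache zero instruction looks roughly
like this (assuming a 64-byte ZVA block; real code reads dczid_el0 and
aligns the pointer first):

  /* Zero one ZVA-block-aligned 64-byte block.  */
  static inline void
  zva_zero_block (void *p)
  {
    __asm__ volatile ("dc zva, %0" : : "r" (p) : "memory");
  }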

	* sysdeps/aarch64/memset.S (__memset):
	Rewrite of optimized memset.
2016-05-12 16:44:53 +01:00
H.J. Lu
16396c41de Add _STRING_INLINE_unaligned and string_private.h
As discussed in

https://sourceware.org/ml/libc-alpha/2015-10/msg00403.html

the setting of _STRING_ARCH_unaligned currently controls the external
GLIBC ABI as well as selecting the use of unaligned accesses within
GLIBC.

Since _STRING_ARCH_unaligned was recently changed for AArch64, this
would potentially break the ABI in GLIBC 2.23, so split the uses and add
_STRING_INLINE_unaligned to select the string ABI. This setting must be
fixed for each target, while _STRING_ARCH_unaligned may be changed from
release to release.  _STRING_ARCH_unaligned is used unconditionally in
glibc.  But <bits/string.h>, which defines _STRING_ARCH_unaligned, isn't
included with -Os.  Since _STRING_ARCH_unaligned is internal to glibc and
may change between glibc releases, it should be made private to glibc.
_STRING_ARCH_unaligned should be defined in the new string_private.h header
file, which is included unconditionally from the internal <string.h> for the
glibc build.
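
A sketch of what the new private header arrangement amounts to
(illustrative, not the verbatim files):

  /* sysdeps/generic/string_private.h: no unaligned accesses assumed.  */
  #define _STRING_ARCH_unaligned 0

  /* sysdeps/x86/string_private.h: unaligned accesses are cheap.  */
  #define _STRING_ARCH_unaligned 1

The internal <string.h> then includes <string_private.h> unconditionally,
so the macro is always visible inside glibc regardless of optimization
flags.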

	[BZ #19462]
	* bits/string.h (_STRING_ARCH_unaligned): Renamed to ...
	(_STRING_INLINE_unaligned): This.
	* include/string.h: Include <string_private.h>.
	* string/bits/string2.h: Replace _STRING_ARCH_unaligned with
	_STRING_INLINE_unaligned.
	* sysdeps/aarch64/bits/string.h (_STRING_ARCH_unaligned): Removed.
	(_STRING_INLINE_unaligned): New.
	* sysdeps/aarch64/string_private.h: New file.
	* sysdeps/generic/string_private.h: Likewise.
	* sysdeps/m68k/m680x0/m68020/string_private.h: Likewise.
	* sysdeps/s390/string_private.h: Likewise.
	* sysdeps/x86/string_private.h: Likewise.
	* sysdeps/m68k/m680x0/m68020/bits/string.h
	(_STRING_ARCH_unaligned): Renamed to ...
	(_STRING_INLINE_unaligned): This.
	* sysdeps/s390/bits/string.h (_STRING_ARCH_unaligned): Renamed
	to ...
	(_STRING_INLINE_unaligned): This.
	* sysdeps/sparc/bits/string.h (_STRING_ARCH_unaligned): Renamed
	to ...
	(_STRING_INLINE_unaligned): This.
	* sysdeps/x86/bits/string.h (_STRING_ARCH_unaligned): Renamed
	to ...
	(_STRING_INLINE_unaligned): This.
2016-02-18 14:55:29 -02:00
Joseph Myers
f7a9f785e5 Update copyright dates with scripts/update-copyrights. 2016-01-04 16:05:18 +00:00
Szabolcs Nagy
c960ded0d5 [AArch64] Regenerate libm-test-ulps
* sysdeps/aarch64/libm-test-ulps: Regenerated.
2015-12-01 12:57:16 +00:00
Wilco Dijkstra
2fee269248 Enable _STRING_ARCH_unaligned on AArch64.
* sysdeps/aarch64/bits/string.h: New file.
        (_STRING_ARCH_unaligned): Define.
2015-11-10 11:15:59 +00:00
Szabolcs Nagy
24ffcbfc24 Regenerate aarch64 libm-test-ulps
* sysdeps/aarch64/libm-test-ulps: Regenerated.
2015-09-24 14:22:31 +01:00
Joseph Myers
de071d199a Move bits/atomic.h to atomic-machine.h (bug 14912).
It was noted in
<https://sourceware.org/ml/libc-alpha/2012-09/msg00305.html> that the
bits/*.h naming scheme should only be used for installed headers.
This patch renames bits/atomic.h to atomic-machine.h to follow that
convention.

This is the only change in this series that needs to change the
filename rather than simply removing a directory level (because both
atomic.h and bits/atomic.h exist at present).

Tested for x86_64 (testsuite, and that installed stripped shared
libraries are unchanged by the patch).

	[BZ #14912]
	* sysdeps/aarch64/bits/atomic.h: Move to ...
	* sysdeps/aarch64/atomic-machine.h: ...here.
	(_AARCH64_BITS_ATOMIC_H): Rename macro to
	_AARCH64_ATOMIC_MACHINE_H.
	* sysdeps/alpha/bits/atomic.h: Move to ...
	* sysdeps/alpha/atomic-machine.h: ...here.
	* sysdeps/arm/bits/atomic.h: Move to ...
	* sysdeps/arm/atomic-machine.h: ...here.  Update comments.
	* bits/atomic.h: Move to ...
	* sysdeps/generic/atomic-machine.h: ...here.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/i386/bits/atomic.h: Move to ...
	* sysdeps/i386/atomic-machine.h: ...here.
	* sysdeps/ia64/bits/atomic.h: Move to ...
	* sysdeps/ia64/atomic-machine.h: ...here.
	* sysdeps/m68k/coldfire/bits/atomic.h: Move to ...
	* sysdeps/m68k/coldfire/atomic-machine.h: ...here.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/m68k/m680x0/m68020/bits/atomic.h: Move to ...
	* sysdeps/m68k/m680x0/m68020/atomic-machine.h: ...here.
	* sysdeps/microblaze/bits/atomic.h: Move to ...
	* sysdeps/microblaze/atomic-machine.h: ...here.
	* sysdeps/mips/bits/atomic.h: Move to ...
	* sysdeps/mips/atomic-machine.h: ...here.
	(_MIPS_BITS_ATOMIC_H): Rename macro to _MIPS_ATOMIC_MACHINE_H.
	* sysdeps/powerpc/bits/atomic.h: Move to ...
	* sysdeps/powerpc/atomic-machine.h: ...here.  Update comments.
	* sysdeps/powerpc/powerpc32/bits/atomic.h: Move to ...
	* sysdeps/powerpc/powerpc32/atomic-machine.h: ...here.  Update
	comments.  Include <atomic-machine.h> instead of <bits/atomic.h>.
	* sysdeps/powerpc/powerpc64/bits/atomic.h: Move to ...
	* sysdeps/powerpc/powerpc64/atomic-machine.h: ...here.  Include
	<atomic-machine.h> instead of <bits/atomic.h>.
	* sysdeps/s390/bits/atomic.h: Move to ...
	* sysdeps/s390/atomic-machine.h: ...here.
	* sysdeps/sparc/sparc32/bits/atomic.h: Move to ...
	* sysdeps/sparc/sparc32/atomic-machine.h: ...here.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/sparc/sparc32/sparcv9/bits/atomic.h: Move to ...
	* sysdeps/sparc/sparc32/sparcv9/atomic-machine.h: ...here.
	* sysdeps/sparc/sparc64/bits/atomic.h: Move to ...
	* sysdeps/sparc/sparc64/atomic-machine.h: ...here.
	* sysdeps/tile/bits/atomic.h: Move to ...
	* sysdeps/tile/atomic-machine.h: ...here.
	* sysdeps/tile/tilegx/bits/atomic.h: Move to ...
	* sysdeps/tile/tilegx/atomic-machine.h: ...here.  Include
	<sysdeps/tile/atomic-machine.h> instead of
	<sysdeps/tile/bits/atomic.h>.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/tile/tilepro/bits/atomic.h: Move to ...
	* sysdeps/tile/tilepro/atomic-machine.h: ...here.  Include
	<sysdeps/tile/atomic-machine.h> instead of
	<sysdeps/tile/bits/atomic.h>.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/unix/sysv/linux/arm/bits/atomic.h: Move to ...
	* sysdeps/unix/sysv/linux/arm/atomic-machine.h: ...here.  Include
	<sysdeps/arm/atomic-machine.h> instead of
	<sysdeps/arm/bits/atomic.h>.
	* sysdeps/unix/sysv/linux/hppa/bits/atomic.h: Move to ...
	* sysdeps/unix/sysv/linux/hppa/atomic-machine.h: ...here.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/unix/sysv/linux/m68k/coldfire/bits/atomic.h: Move to ...
	* sysdeps/unix/sysv/linux/m68k/coldfire/atomic-machine.h: ...here.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/unix/sysv/linux/nios2/bits/atomic.h: Move to ...
	* sysdeps/unix/sysv/linux/nios2/atomic-machine.h: ...here.
	(_NIOS2_BITS_ATOMIC_H): Rename macro to _NIOS2_ATOMIC_MACHINE_H.
	* sysdeps/unix/sysv/linux/sh/bits/atomic.h: Move to ...
	* sysdeps/unix/sysv/linux/sh/atomic-machine.h: ...here.
	* sysdeps/x86_64/bits/atomic.h: Move to ...
	* sysdeps/x86_64/atomic-machine.h: ...here.
	* include/atomic.h: Include <atomic-machine.h> instead of
	<bits/atomic.h>.
2015-09-11 20:00:19 +00:00
Joseph Myers
522e02ab8a Rename bits/linkmap.h to linkmap.h (bug 14912).
It was noted in
<https://sourceware.org/ml/libc-alpha/2012-09/msg00305.html> that the
bits/*.h naming scheme should only be used for installed headers.
This patch renames bits/linkmap.h to plain linkmap.h to follow that
convention.

Tested for x86_64 (testsuite, and that installed stripped shared
libraries are unchanged by the patch).

	[BZ #14912]
	* bits/linkmap.h: Move to ...
	* sysdeps/generic/linkmap.h: ...here.
	* sysdeps/aarch64/bits/linkmap.h: Move to ...
	* sysdeps/aarch64/linkmap.h: ...here.
	* sysdeps/arm/bits/linkmap.h: Move to ...
	* sysdeps/arm/linkmap.h: ...here.
	* sysdeps/hppa/bits/linkmap.h: Move to ...
	* sysdeps/hppa/linkmap.h: ...here.
	* sysdeps/ia64/bits/linkmap.h: Move to ...
	* sysdeps/ia64/linkmap.h: ...here.
	* sysdeps/mips/bits/linkmap.h: Move to ...
	* sysdeps/mips/linkmap.h: ...here.
	* sysdeps/s390/bits/linkmap.h: Move to ...
	* sysdeps/s390/linkmap.h: ...here.
	* sysdeps/sh/bits/linkmap.h: Move to ...
	* sysdeps/sh/linkmap.h: ...here.
	* sysdeps/x86/bits/linkmap.h: Move to ...
	* sysdeps/x86/linkmap.h: ...here.
	* include/link.h: Include <linkmap.h> instead of <bits/linkmap.h>.
2015-09-04 19:44:27 +00:00
Wilco Dijkstra
edbbc86c3a 2015-08-24 Wilco Dijkstra <wdijkstr@arm.com>
* sysdeps/aarch64/bzero.S (__bzero): Remove.
2015-08-24 14:49:46 +01:00
Wilco Dijkstra
f008c71455 2015-08-24 Wilco Dijkstra <wdijkstr@arm.com>
* sysdeps/aarch64/fpu/math_private.h (libc_feholdsetround_aarch64_ctx):
	Unconditionally set __fpcr to avoid an uninitialized warning.
	(libc_feholdsetround_noex_aarch64_ctx): Likewise.
2015-08-24 14:42:28 +01:00
Wilco Dijkstra
7b1c56e483 Improve feenableexcept performance - avoid an unnecessary FPCR read in case
the FPCR does not change. Also improve the logic of the return value.
2015-08-05 16:24:02 +01:00
Wilco Dijkstra
3136eb7abd Improve fesetenv performance by avoiding unnecessary FPSR/FPCR reads/writes.
It uses the same logic as the ARM version. The common case removes 1 FPSR
and 1 FPCR read. For FE_DFL_ENV and FE_NOMASK_ENV an FPCR read is avoided in
case the FPCR does not change.
2015-08-05 16:24:01 +01:00
Szabolcs Nagy
0910702c4d [AArch64][BZ #17711] Fix extern protected data handling
Fixes elf/tst-protected1a and elf/tst-protected1b tests.

Depends on a gcc patch that makes protected visibility data non-local:
https://gcc.gnu.org/ml/gcc-patches/2015-07/msg01871.html
and on a binutils patch so R_*_GLOB_DAT relocs are used for it:
https://sourceware.org/ml/binutils/2015-07/msg00246.html
2015-07-24 09:57:32 +01:00
Wilco Dijkstra
82641e16aa Add AArch64 versions of math_opt_barrier and math_force_eval that avoid going via memory. 2015-07-13 12:48:33 +01:00
Wilco Dijkstra
c435989f52 Optimize the strlen implementation by using a page cross check and a fast check
for nul bytes, which reverts to a separate loop when a non-ASCII char is encountered.
Speedup on test-strlen is ~10%, long ASCII strings are processed ~60% faster,
and on random tests it is ~80% better.
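
The nul-byte check referred to is presumably a variant of the classic carry
trick; in C it would read:

  #include <stdint.h>

  /* Cheap test: nonzero if some byte of X is zero, but it can also
     fire for bytes >= 0x80.  It is exact for ASCII data, which is why
     non-ASCII input falls back to a slower, exact loop.  */
  static inline uint64_t
  maybe_zero_byte (uint64_t x)
  {
    return (x - 0x0101010101010101ULL) & 0x8080808080808080ULL;
  }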
2015-07-13 12:38:12 +01:00
Wilco Dijkstra
6471190491 Inline __ieee754_sqrt and __ieee754_sqrtf. Also add external definitions. 2015-07-06 12:52:55 +01:00
Szabolcs Nagy
cfe4368e51 Regenerate aarch64 libm-test-ulps
* sysdeps/aarch64/libm-test-ulps: Regenerated.
2015-07-02 14:58:12 +01:00
Szabolcs Nagy
c71c89e5c7 [AArch64] Fix cfi_adjust_cfa_offset usage in dl-tlsdesc.S
Some of the cfi annotations used an incorrect sign.

	* sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_return_lazy): Fix
	cfi_adjust_cfa_offset argument.
	(_dl_tlsdesc_undefweak, _dl_tlsdesc_dynamic): Likewise.
	(_dl_tlsdesc_resolve_rela, _dl_tlsdesc_resolve_hold): Likewise.
2015-06-17 12:44:53 +01:00
Szabolcs Nagy
08325735c2 [BZ 18034][AArch64] Lazy TLSDESC relocation data race fix
Lazy TLSDESC initialization needs to be synchronized with concurrent TLS
accesses.  The TLS descriptor contains a function pointer (entry) and an
argument that is accessed from the entry function.  With lazy initialization
the first call to the entry function updates the entry and the argument to
their final value.  A final entry function must make sure that it accesses an
initialized argument, this needs synchronization on systems with weak memory
ordering otherwise the writes of the first call can be observed out of order.

There are at least two issues with the current code:

tlsdesc.c (i386, x86_64, arm, aarch64) uses volatile memory accesses on the
write side (in the initial entry function) instead of C11 atomics.

And on systems with weak memory ordering (arm, aarch64) the read side
synchronization is missing from the final entry functions (dl-tlsdesc.S).

This patch only deals with aarch64.

* Write side:

Volatile accesses were replaced with C11 relaxed atomics, and a release
store was used for the initialization of entry so the read side can
synchronize with it.
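
In C terms the write side now does roughly the following (field names as in
glibc's struct tlsdesc; a sketch, not the literal code):

  /* Publish the argument first, then release-store the entry pointer,
     so a reader that observes the final entry function also observes
     an initialized argument.  */
  atomic_store_relaxed (&td->arg, (void *) value);
  atomic_store_release (&td->entry, _dl_tlsdesc_return_lazy);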

* Read side:

The TLS access generated by the compiler and an entry function's code are roughly

  ldr x1, [x0]    // load the entry
  blr x1          // call it

entryfunc:
  ldr x0, [x0,#8] // load the arg
  ret

Various alternatives were considered to force the ordering in the entry
function between the two loads:

(1) barrier

entryfunc:
  dmb ishld
  ldr x0, [x0,#8]

(2) address dependency (if the address of the second load depends on the
result of the first one, the ordering is guaranteed):

entryfunc:
  ldr x1,[x0]
  and x1,x1,#8
  orr x1,x1,#8
  ldr x0,[x0,x1]

(3) load-acquire (ARMv8 instruction that is ordered before subsequent
loads and stores)

entryfunc:
  ldar xzr,[x0]
  ldr x0,[x0,#8]

Option (1) is the simplest but slowest (note: this runs at every TLS
access); options (2) and (3) do one extra load from [x0] (same-address
loads are ordered, so it happens-after the load at the call site);
option (2) clobbers x1, which is problematic because existing gcc does
not expect that, so approach (3) was chosen.

A new _dl_tlsdesc_return_lazy entry function was introduced for lazily
relocated static TLS, so non-lazy static TLS can avoid the synchronization
cost.

	[BZ #18034]
	* sysdeps/aarch64/dl-tlsdesc.h (_dl_tlsdesc_return_lazy): Declare.
	* sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_return_lazy): Define.
	(_dl_tlsdesc_undefweak): Guarantee TLSDESC entry and argument load-load
	ordering using ldar.
	(_dl_tlsdesc_dynamic): Likewise.
	(_dl_tlsdesc_return_lazy): Likewise.
	* sysdeps/aarch64/tlsdesc.c (_dl_tlsdesc_resolve_rela_fixup): Use
	relaxed atomics instead of volatile and synchronize with release store.
	(_dl_tlsdesc_resolve_hold_fixup): Use relaxed atomics instead of
	volatile.
	* elf/tlsdeschtab.h (_dl_tlsdesc_resolve_early_return_p): Likewise.
2015-06-17 12:41:01 +01:00
Joseph Myers
1769608794 Use libc_hidden_proto / libc_hidden_def with __strnlen.
Various code in glibc uses __strnlen instead of strnlen for namespace
reasons.  However, __strnlen does not use libc_hidden_proto /
libc_hidden_def (as is normally done for any function defined and
called within the same library, whether or not exported from the
library and whatever namespace it is in), so the compiler does not
know that those calls are to a function within libc.

This patch uses libc_hidden_proto / libc_hidden_def with __strnlen.
On x86_64, it makes no difference to the installed stripped shared
libraries.  On 32-bit x86, it causes __strnlen calls to go to the same
place as strnlen calls (the fallback strnlen implementation), rather
than through a PLT entry for the strnlen IFUNC; I'm not sure of the
logic behind when calls from within libc should use IFUNCs versus when
they should go direct to a particular function implementation, but
clearly it doesn't make sense for strnlen and __strnlen to be handled
differently in this regard.

Tested for x86_64 and x86 (testsuite, and comparison of installed
shared libraries as described above).
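
The pattern in outline (a sketch of the mechanism rather than the diff):

  /* include/string.h: declare that __strnlen has a hidden
     in-library definition.  */
  libc_hidden_proto (__strnlen)

  /* string/strnlen.c: attach the hidden alias at the definition.  */
  #include <string.h>

  size_t
  __strnlen (const char *str, size_t maxlen)
  {
    const char *end = memchr (str, '\0', maxlen);
    return end != NULL ? (size_t) (end - str) : maxlen;
  }
  libc_hidden_def (__strnlen)

With both in place, internal callers bind directly to the local definition
instead of going through the PLT.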

	* string/strnlen.c [!STRNLEN] (__strnlen): Use libc_hidden_def.
	* include/string.h (__strnlen): Use libc_hidden_proto.
	* sysdeps/aarch64/strnlen.S (__strnlen): Use libc_hidden_def.
	* sysdeps/i386/i686/multiarch/strnlen-c.c [SHARED]
	(libc_hidden_def): Define __GI___strnlen as well as __GI_strnlen.
	* sysdeps/powerpc/powerpc32/power4/multiarch/strnlen-power7.S
	(libc_hidden_def): Undefine and redefine.
	* sysdeps/powerpc/powerpc32/power4/multiarch/strnlen-ppc32.c
	[SHARED] (libc_hidden_def): Define __GI___strnlen as well as
	__GI_strnlen.
	* sysdeps/powerpc/powerpc32/power7/strnlen.S (__strnlen): Use
	libc_hidden_def.
	* sysdeps/tile/tilegx/strnlen.c (__strnlen): Likewise.
2015-06-02 20:24:25 +00:00
Wilco Dijkstra
71bf272d91 2015-06-02 Szabolcs Nagy <szabolcs.nagy@arm.com>
* sysdeps/aarch64/libm-test-ulps: Update.
2015-06-02 10:47:45 +01:00
Szabolcs Nagy
265a9b73ba [AArch64] Fix inline asm clobber list in tls-macros.h 2015-05-13 15:46:24 +01:00
Wilco Dijkstra
eda361c8d9 2015-05-06 Szabolcs Nagy <szabolcs.nagy@arm.com>
* sysdeps/aarch64/libm-test-ulps: Update.
2015-05-06 13:00:15 +00:00
Roland McGrath
ac9e0e5e40 Clean up sysdep-dl-routines variable. 2015-02-06 10:42:08 -08:00
Joseph Myers
8116321f65 Fix libm feupdateenv namespace (bug 17748).
Concluding the fixes for C90 libm functions calling C99 fe* functions,
this patch fixes the case of feupdateenv by making it a weak alias for
__feupdateenv and making the affected code call __feupdateenv.

Tested for x86_64 (testsuite, and that installed stripped shared
libraries are unchanged by the patch).  Also tested for ARM
(soft-float) that the math.h linknamespace tests now pass.
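
The renaming follows one pattern throughout the series (sketched below; the
entries underneath record which libm_hidden_* marking each file uses):

  int
  __feupdateenv (const fenv_t *envp)
  {
    /* ... architecture-specific update ... */
    return 0;
  }
  weak_alias (__feupdateenv, feupdateenv)

C90 callers inside libm then reference __feupdateenv, which is outside the
user's namespace, while applications keep calling the weak feupdateenv.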

	[BZ #17748]
	* include/fenv.h (__feupdateenv): Use libm_hidden_proto.
	* math/feupdateenv.c (__feupdateenv): Use libm_hidden_def.
	* sysdeps/aarch64/fpu/feupdateenv.c (feupdateenv): Rename to
	__feupdateenv and define as weak alias of __feupdateenv.  Use
	libm_hidden_weak.
	* sysdeps/alpha/fpu/feupdateenv.c (__feupdateenv): Use
	libm_hidden_def.
	* sysdeps/arm/feupdateenv.c (feupdateenv): Rename to __feupdateenv
	and define as weak alias of __feupdateenv.  Use libm_hidden_weak.
	* sysdeps/hppa/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/i386/fpu/feupdateenv.c (__feupdateenv): Use
	libm_hidden_def.
	* sysdeps/ia64/fpu/feupdateenv.c (feupdateenv): Rename to
	__feupdateenv and define as weak alias of __feupdateenv.  Use
	libm_hidden_weak.
	* sysdeps/m68k/fpu/feupdateenv.c (__feupdateenv): Use
	libm_hidden_def.
	* sysdeps/mips/fpu/feupdateenv.c (feupdateenv): Rename to
	__feupdateenv and define as weak alias of __feupdateenv.  Use
	libm_hidden_weak.
	* sysdeps/powerpc/fpu/feupdateenv.c (__feupdateenv): Use
	libm_hidden_def.
	* sysdeps/powerpc/nofpu/feupdateenv.c (__feupdateenv): Likewise.
	* sysdeps/powerpc/powerpc32/e500/nofpu/feupdateenv.c
	(__feupdateenv): Likewise.
	* sysdeps/s390/fpu/feupdateenv.c (feupdateenv): Rename to
	__feupdateenv and define as weak alias of __feupdateenv.  Use
	libm_hidden_weak.
	* sysdeps/sh/sh4/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/sparc/fpu/feupdateenv.c (__feupdateenv): Use
	libm_hidden_def.
	* sysdeps/tile/math_private.h (__feupdateenv): New inline
	function.
	* sysdeps/x86_64/fpu/feupdateenv.c (__feupdateenv): Use
	libm_hidden_def.
	* sysdeps/generic/math_private.h (default_libc_feupdateenv): Call
	__feupdateenv instead of feupdateenv.
	(default_libc_feupdateenv_test): Likewise.
	(libc_feresetround_ctx): Likewise.
2015-01-07 19:01:20 +00:00
Richard Earnshaw
dc400d7b73 AArch64: Optimized implementations of strcpy and stpcpy. 2015-01-07 11:31:10 +00:00
Richard Earnshaw
ec582ca0f3 AArch64 optimized implementation of strrchr. 2015-01-07 11:26:13 +00:00
Joseph Myers
01238691bb Fix libm fesetround namespace (bug 17748).
Continuing the fixes for C90 libm functions calling C99 fe* functions,
this patch fixes the case of fesetround by making it a weak alias of
__fesetround and making the affected code call __fesetround.  An
existing __fesetround function in fenv_libc.h for powerpc is renamed
to __fesetround_inline.

Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch).  Also tested for ARM
(soft-float) that fesetround failures disappear from the linknamespace
test results (feupdateenv remains to be addressed to complete fixing
bug 17748).

	[BZ #17748]
	* include/fenv.h (__fesetround): Declare.  Use libm_hidden_proto.
	* math/fesetround.c (fesetround): Rename to __fesetround and
	define as weak alias of __fesetround.  Use libm_hidden_weak.
	* sysdeps/aarch64/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/alpha/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/arm/fesetround.c (fesetround): Likewise.
	* sysdeps/hppa/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/i386/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/ia64/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/m68k/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/mips/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/powerpc/fpu/fenv_libc.h (__fesetround): Rename to
	__fesetround_inline.
	* sysdeps/powerpc/fpu/fenv_private.h (libc_fesetround_ppc): Call
	__fesetround_inline instead of __fesetround.
	* sysdeps/powerpc/fpu/fesetround.c (fesetround): Rename to
	__fesetround and define as weak alias of __fesetround.  Use
	libm_hidden_weak.  Call __fesetround_inline instead of
	__fesetround.
	* sysdeps/powerpc/nofpu/fesetround.c (fesetround): Rename to
	__fesetround and define as weak alias of __fesetround.  Use
	libm_hidden_weak.
	* sysdeps/powerpc/powerpc32/e500/nofpu/fesetround.c (fesetround):
	Likewise.
	* sysdeps/s390/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/sh/sh4/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/sparc/fpu/fesetround.c (fesetround): Likewise.
	* sysdeps/tile/math_private.h (__fesetround): New inline function.
	* sysdeps/x86_64/fpu/fesetround.c (fesetround): Rename to
	__fesetround and define as weak alias of __fesetround.  Use
	libm_hidden_weak.
	* sysdeps/generic/math_private.h (default_libc_fesetround): Call
	__fesetround instead of fesetround.
	(default_libc_feholdexcept_setround): Likewise.
	(libc_feholdsetround_ctx): Likewise.
	(libc_feholdsetround_noex_ctx): Likewise.
2015-01-07 00:41:23 +00:00
Joseph Myers
cd42798aef Fix libm fesetenv namespace (bug 17748).
Continuing the fixes for C90 libm functions calling C99 fe* functions,
this patch fixes the case of fesetenv by making it a weak alias of
__fesetenv and making the affected code (including various copies of
feupdateenv which also gets called from C90 functions) call
__fesetenv.

Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch).  Also tested for ARM
(soft-float) that fesetenv failures disappear from the linknamespace
test results (fesetround and feupdateenv remain to be addressed to
complete fixing bug 17748).

	[BZ #17748]
	* include/fenv.h (__fesetenv): Use libm_hidden_proto.
	* math/fesetenv.c (__fesetenv): Use libm_hidden_def.
	* sysdeps/aarch64/fpu/fesetenv.c (fesetenv): Rename to __fesetenv
	and define as weak alias of __fesetenv.  Use libm_hidden_weak.
	* sysdeps/alpha/fpu/fesetenv.c (__fesetenv): Use libm_hidden_def.
	* sysdeps/arm/fesetenv.c (fesetenv): Rename to __fesetenv and
	define as weak alias of __fesetenv.  Use libm_hidden_weak.
	* sysdeps/hppa/fpu/fesetenv.c (fesetenv): Likewise.
	* sysdeps/i386/fpu/fesetenv.c (__fesetenv): Use libm_hidden_def.
	* sysdeps/ia64/fpu/fesetenv.c (fesetenv): Rename to __fesetenv and
	define as weak alias of __fesetenv.  Use libm_hidden_weak.
	* sysdeps/m68k/fpu/fesetenv.c (__fesetenv): Use libm_hidden_def.
	* sysdeps/mips/fpu/fesetenv.c (fesetenv): Rename to __fesetenv and
	define as weak alias of __fesetenv.  Use libm_hidden_weak.
	* sysdeps/powerpc/fpu/fesetenv.c (__fesetenv): Use
	libm_hidden_def.
	* sysdeps/powerpc/nofpu/fesetenv.c (__fesetenv): Likewise.
	* sysdeps/powerpc/powerpc32/e500/nofpu/fesetenv.c (__fesetenv):
	Likewise.
	* sysdeps/s390/fpu/fesetenv.c (fesetenv): Rename to __fesetenv and
	define as weak alias of __fesetenv.  Use libm_hidden_weak.
	* sysdeps/sh/sh4/fpu/fesetenv.c (fesetenv): Likewise.
	* sysdeps/sparc/fpu/fesetenv.c (__fesetenv): Use libm_hidden_def.
	* sysdeps/tile/math_private.h (__fesetenv): New inline function.
	* sysdeps/x86_64/fpu/fesetenv.c (fesetenv): Rename to __fesetenv
	and define as weak alias of __fesetenv.  Use libm_hidden_weak.
	* sysdeps/generic/math_private.h (default_libc_fesetenv): Use
	__fesetenv instead of fesetenv.
	(libc_feresetround_noex_ctx): Likewise.
	* sysdeps/alpha/fpu/feupdateenv.c (__feupdateenv): Likewise.
	* sysdeps/hppa/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/i386/fpu/feupdateenv.c (__feupdateenv): Likewise.
	* sysdeps/ia64/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/m68k/fpu/feupdateenv.c (__feupdateenv): Likewise.
	* sysdeps/mips/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/powerpc/nofpu/feupdateenv.c (__feupdateenv): Likewise.
	* sysdeps/powerpc/powerpc32/e500/nofpu/feupdateenv.c
	(__feupdateenv): Likewise.
	* sysdeps/s390/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/sh/sh4/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/sparc/fpu/feupdateenv.c (__feupdateenv): Likewise.
	* sysdeps/x86_64/fpu/feupdateenv.c (__feupdateenv): Likewise.
2015-01-06 23:36:20 +00:00
Joseph Myers
ef9faf1385 Fix libm feholdexcept namespace (bug 17748).
Continuing the fixes for C90 libm functions calling C99 fe* functions,
this patch fixes the case of feholdexcept by making it a weak alias of
__feholdexcept and making the affected code call __feholdexcept.

Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch).  Also tested for ARM
(soft-float) that feholdexcept failures disappear from the
linknamespace test failures (fesetenv, fesetround and feupdateenv
remain to be addressed to complete fixing bug 17748).

	[BZ #17748]
	* include/fenv.h (__feholdexcept): Declare.  Use
	libm_hidden_proto.
	* math/feholdexcpt.c (feholdexcept): Rename to __feholdexcept and
	define as weak alias of __feholdexcept.  Use libm_hidden_weak.
	* sysdeps/aarch64/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/alpha/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/arm/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/hppa/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/i386/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/ia64/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/m68k/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/mips/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/powerpc/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/powerpc/nofpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/powerpc/powerpc32/e500/nofpu/feholdexcpt.c
	(feholdexcept): Likewise.
	* sysdeps/s390/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/sh/sh4/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/sparc/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/x86_64/fpu/feholdexcpt.c (feholdexcept): Likewise.
	* sysdeps/generic/math_private.h (default_libc_feholdexcept): Use
	__feholdexcept instead of feholdexcept.
	(default_libc_feholdexcept_setround): Likewise.
2015-01-05 23:06:14 +00:00
Joseph Myers
b93c2205ec Fix libm fegetround namespace (bug 17748).
Continuing the fixes for C90 libm functions calling C99 fe* functions,
this patch fixes the case of fegetround by making it a weak alias of
__fegetround and making the affected code call __fegetround.

Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch).  Also tested for ARM
(soft-float) that fegetround failures disappear from the linknamespace
test failures (feholdexcept, fesetenv, fesetround and feupdateenv
remain to be addressed before bug 17748 is fully fixed, although this
patch may suffice to fix the failures in some cases, when the libc_fe*
functions are implemented but there is no architecture-specific sqrt
implementation in use so there were failures from fegetround used by
sqrt but no other such failures).

	[BZ #17748]
	* include/fenv.h (__fegetround): Declare.  Use libm_hidden_proto.
	* math/fegetround.c (fegetround): Rename to __fegetround and
	define as weak alias of __fegetround.  Use libm_hidden_weak.
	* sysdeps/aarch64/fpu/fegetround.c (fegetround): Likewise.
	* sysdeps/alpha/fpu/fegetround.c (fegetround): Likewise.
	* sysdeps/arm/fegetround.c (fegetround): Likewise.
	* sysdeps/hppa/fpu/fegetround.c (fegetround): Likewise.
	* sysdeps/i386/fpu/fegetround.c (fegetround): Likewise.
	* sysdeps/ia64/fpu/fegetround.c (fegetround): Likewise.
	* sysdeps/m68k/fpu/fegetround.c (fegetround): Likewise.
	* sysdeps/mips/fpu/fegetround.c (fegetround): Likewise.
	* sysdeps/powerpc/fpu/fegetround.c (fegetround): Likewise.
	Undefine after rather than before function definition; use
	parentheses around function name in definition.
	(__fegetround): Also undefine macro after function definition.
	* sysdeps/powerpc/nofpu/fegetround.c (fegetround): Rename to
	__fegetround and define as weak alias of __fegetround.  Use
	libm_hidden_weak.  Do not undefine as macro.
	* sysdeps/powerpc/powerpc32/e500/nofpu/fegetround.c (fegetround):
	Likewise.
	* sysdeps/s390/fpu/fegetround.c (fegetround): Rename to
	__fegetround and define as weak alias of __fegetround.  Use
	libm_hidden_weak.
	* sysdeps/sh/sh4/fpu/fegetround.c (fegetround): Likewise.
	* sysdeps/sparc/fpu/fegetround.c (fegetround): Likewise.
	* sysdeps/tile/math_private.h (__fegetround): New inline function.
	* sysdeps/x86_64/fpu/fegetround.c (fegetround): Rename to
	__fegetround and define as weak alias of __fegetround.  Use
	libm_hidden_weak.
	* sysdeps/ieee754/dbl-64/e_sqrt.c (__ieee754_sqrt): Use
	__fegetround instead of fegetround.
2015-01-02 20:44:42 +00:00
Joseph Myers
b168057aaa Update copyright dates with scripts/update-copyrights. 2015-01-02 16:29:47 +00:00
Joseph Myers
73a268c759 Fix libm fegetenv namespace (bug 17748).
Some C90 libm functions call fegetenv via libc_feholdsetround*
functions in math_private.h.  This patch makes them call __fegetenv
instead, making fegetenv into a weak alias for __fegetenv as needed.
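
The declaration side, as a sketch of the include/fenv.h change:
intra-libm calls to the internal name then bind to a hidden alias and
bypass the PLT:

  extern int __fegetenv (fenv_t *__envp);
  libm_hidden_proto (__fegetenv)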

Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch).  Also tested for ARM
(soft-float) that fegetenv failures disappear from the linknamespace
test failures (however, similar fixes will also be needed for
fegetround, feholdexcept, fesetenv, fesetround and feupdateenv before
this set of namespace issues covered by bug 17748 is fully fixed and
those linknamespace tests start passing).

	[BZ #17748]
	* include/fenv.h (__fegetenv): Use libm_hidden_proto.
	* math/fegetenv.c (__fegetenv): Use libm_hidden_def.
	* sysdeps/aarch64/fpu/fegetenv.c (fegetenv): Rename to __fegetenv
	and define as weak alias of __fegetenv.  Use libm_hidden_weak.
	* sysdeps/alpha/fpu/fegetenv.c (__fegetenv): Use libm_hidden_def.
	* sysdeps/arm/fegetenv.c (fegetenv): Rename to __fegetenv and
	define as weak alias of __fegetenv.  Use libm_hidden_weak.
	* sysdeps/hppa/fpu/fegetenv.c (fegetenv): Likewise.
	* sysdeps/i386/fpu/fegetenv.c (__fegetenv): Use libm_hidden_def.
	* sysdeps/ia64/fpu/fegetenv.c (fegetenv): Rename to __fegetenv and
	define as weak alias of __fegetenv.  Use libm_hidden_weak.
	* sysdeps/m68k/fpu/fegetenv.c (__fegetenv): Use libm_hidden_def.
	* sysdeps/mips/fpu/fegetenv.c (fegetenv): Rename to __fegetenv and
	define as weak alias of __fegetenv.  Use libm_hidden_weak.
	* sysdeps/powerpc/fpu/fegetenv.c (__fegetenv): Use
	libm_hidden_def.
	* sysdeps/powerpc/nofpu/fegetenv.c (__fegetenv): Likewise.
	* sysdeps/powerpc/powerpc32/e500/nofpu/fegetenv.c (__fegetenv):
	Likewise.
	* sysdeps/s390/fpu/fegetenv.c (fegetenv): Rename to __fegetenv and
	define as weak alias of __fegetenv.  Use libm_hidden_weak.
	* sysdeps/sh/sh4/fpu/fegetenv.c (fegetenv): Likewise.
	* sysdeps/sparc/fpu/fegetenv.c (__fegetenv): Use libm_hidden_def.
	* sysdeps/tile/math_private.h (__fegetenv): New inline function.
	* sysdeps/x86_64/fpu/fegetenv.c (fegetenv): Rename to __fegetenv
	and define as weak alias of __fegetenv.  Use libm_hidden_weak.
	* sysdeps/generic/math_private.h (libc_feholdsetround_ctx): Use
	__fegetenv instead of fegetenv.
	(libc_feholdsetround_noex_ctx): Likewise.
2014-12-31 22:07:52 +00:00
Joseph Myers
0747f81811 Fix libm feraiseexcept namespace (bug 17723).
Various C90 and UNIX98 libm functions call feraiseexcept, which is not
in those standards.  This causes linknamespace test failures - except
on x86 / x86_64, where feraiseexcept is inline (for the relevant
constant arguments) in bits/fenv.h.

This patch fixes this by making those functions call __feraiseexcept
instead.  All changes are applied to all architectures rather than
considering the possibility that some might not be needed in some
cases (e.g. x86), as it seems most maintainable to keep architectures
consistent.

Where __feraiseexcept does not exist, it is added, with feraiseexcept
made a weak alias; where it is a strong alias, it is made weak.
libm_hidden_def / libm_hidden_proto are used with __feraiseexcept
(this might in some cases improve code generation for existing calls
to __feraiseexcept on some architectures).  Where there
are dummy feraiseexcept macros (on architectures without
floating-point exceptions support, to avoid compile errors from
references to undefined FE_* macros), corresponding dummy
__feraiseexcept macros are added.  And on x86, to ensure
__feraiseexcept calls still get inlined, the inline function in
bits/fenv.h is refactored so that most of it can be reused in an
inline __feraiseexcept in a separate include/bits/fenv.h.
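
The idea behind the x86 inline is that FE_INVALID and FE_DIVBYZERO can
be raised by simply performing the corresponding arithmetic, with no
function call.  A self-contained sketch (the helper name here is
illustrative, not the exact glibc code):

  #include <fenv.h>

  static inline int
  raise_invalid_divbyzero (int excepts)
  {
    if (excepts & FE_INVALID)
      {
        volatile float f = 0.0f;
        f /= f;                     /* 0.0 / 0.0 raises FE_INVALID.  */
      }
    if (excepts & FE_DIVBYZERO)
      {
        volatile float f = 1.0f, g = 0.0f;
        f /= g;                     /* 1.0 / 0.0 raises FE_DIVBYZERO.  */
      }
    /* Return the exceptions still left to raise by other means.  */
    return excepts & ~(FE_INVALID | FE_DIVBYZERO);
  }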

Calls are changed in C90/UNIX98 functions, but generally not in
functions missing from those standards.  They are also changed in
libc_fe* functions (on the basis that those might be used in any libm
function), and in feupdateenv (on the same basis: it may be used, via
the default libc_* implementations, in any libm function; feupdateenv
itself will of course need changing to __feupdateenv in a subsequent
patch to become fully namespace-clean).

No __feraiseexcept is added corresponding to the feraiseexcept in
powerpc bits/fenvinline.h, because that macro definition is
conditional on !defined __NO_MATH_INLINES, and glibc libm is built
with -D__NO_MATH_INLINES, so changing internal calls to use
__feraiseexcept should make no difference.

Tested for x86_64 (testsuite; the only change in disassembly of
installed shared libraries is a slight code reordering in clog10, of
no apparent significance).  Also tested for MIPS, where (in the
configuration tested) it eliminates math.h linknamespace failures for
n32 and n64 (some for o32 remain because of other issues).

	[BZ #17723]
	* include/fenv.h (__feraiseexcept): Use libm_hidden_proto.
	* math/fraiseexcpt.c (__feraiseexcept): Use libm_hidden_def.
	* sysdeps/aarch64/fpu/fraiseexcpt.c (feraiseexcept): Rename to
	__feraiseexcept and define as weak alias of __feraiseexcept.  Use
	libm_hidden_weak.
	* sysdeps/arm/fraiseexcpt.c (feraiseexcept): Likewise.
	* sysdeps/hppa/fpu/fraiseexcpt.c (feraiseexcept): Likewise.
	* sysdeps/i386/fpu/fraiseexcpt.c (__feraiseexcept): Use
	libm_hidden_def.
	* sysdeps/ia64/fpu/fraiseexcpt.c (feraiseexcept): Rename to
	__feraiseexcept and define as weak alias of __feraiseexcept.  Use
	libm_hidden_weak.
	* sysdeps/m68k/coldfire/fpu/fraiseexcpt.c (feraiseexcept):
	Likewise.
	* sysdeps/microblaze/math_private.h (__feraiseexcept): New macro.
	* sysdeps/mips/fpu/fraiseexcpt.c (feraiseexcept): Rename to
	__feraiseexcept and define as weak alias of __feraiseexcept.  Use
	libm_hidden_weak.
	* sysdeps/powerpc/fpu/fraiseexcpt.c (__feraiseexcept): Use
	libm_hidden_def.
	* sysdeps/powerpc/nofpu/fraiseexcpt.c (__feraiseexcept): Likewise.
	* sysdeps/powerpc/powerpc32/e500/nofpu/fraiseexcpt.c
	(__feraiseexcept): Likewise.
	* sysdeps/s390/fpu/fraiseexcpt.c (feraiseexcept): Rename to
	__feraiseexcept and define as weak alias of __feraiseexcept.  Use
	libm_hidden_weak.
	* sysdeps/sh/sh4/fpu/fraiseexcpt.c (feraiseexcept): Likewise.
	* sysdeps/sparc/fpu/fraiseexcpt.c (__feraiseexcept): Use
	libm_hidden_def.
	* sysdeps/tile/math_private.h (__feraiseexcept): New macro.
	* sysdeps/unix/sysv/linux/alpha/fraiseexcpt.S (__feraiseexcept):
	Use libm_hidden_def.
	* sysdeps/x86_64/fpu/fraiseexcpt.c (__feraiseexcept): Use
	libm_hidden_def.
	(feraiseexcept): Define as weak not strong alias.  Use
	libm_hidden_weak.
	* sysdeps/x86/fpu/bits/fenv.h (__feraiseexcept_invalid_divbyzero):
	New inline function.  Factored out of ...
	(feraiseexcept): ... here.  Use __feraiseexcept_invalid_divbyzero.
	* sysdeps/x86/fpu/include/bits/fenv.h: New file.
	* math/e_scalb.c (invalid_fn): Call __feraiseexcept instead of
	feraiseexcept.
	* math/w_acos.c (__acos): Likewise.
	* math/w_asin.c (__asin): Likewise.
	* math/w_ilogb.c (__ilogb): Likewise.
	* math/w_j0.c (y0): Likewise.
	* math/w_j1.c (y1): Likewise.
	* math/w_jn.c (yn): Likewise.
	* math/w_log.c (__log): Likewise.
	* math/w_log10.c (__log10): Likewise.
	* sysdeps/aarch64/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/aarch64/fpu/math_private.h
	(libc_feupdateenv_test_aarch64): Likewise.
	* sysdeps/alpha/fpu/feupdateenv.c (__feupdateenv): Likewise.
	* sysdeps/arm/fenv_private.h (libc_feupdateenv_test_vfp): Likewise.
	* sysdeps/arm/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/ia64/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/m68k/fpu/feupdateenv.c (__feupdateenv): Likewise.
	* sysdeps/mips/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/powerpc/fpu/e_sqrt.c (__slow_ieee754_sqrt): Likewise.
	* sysdeps/s390/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/sh/sh4/fpu/feupdateenv.c (feupdateenv): Likewise.
	* sysdeps/sparc/fpu/feupdateenv.c (__feupdateenv): Likewise.
2014-12-30 17:08:09 +00:00
Wilco Dijkstra
9b47df5814 Call libc_fetestexcept_aarch64. 2014-12-22 17:14:54 +00:00
Wilco Dijkstra
97be3cacec Call libc_fesetround_aarch64. 2014-12-22 16:57:41 +00:00
Richard Earnshaw
aa76a5c701 [AArch64] Fix strchrnul clobbering v15 2014-12-10 09:54:09 +00:00
Siddhesh Poyarekar
a38484851a Remove IS_IN_rtld
Replace with IS_IN (rtld).  Generated code is unchanged on
x86_64.
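
The change, shown on a hypothetical guard (the macro being guarded is
illustrative only):

  /* Before: */
  #ifdef IS_IN_rtld
  # define EXAMPLE_RTLD_ONLY 1
  #endif

  /* After: */
  #if IS_IN (rtld)
  # define EXAMPLE_RTLD_ONLY 1
  #endif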

        * elf/Makefile (CPPFLAGS-.os): Remove IS_IN_rtld.
        * elf/dl-open.c: Use IS_IN (rtld) instead of IS_IN_rtld.
        * elf/rtld-Rules: Likewise.
        * elf/setup-vdso.h: Likewise.
        * include/assert.h: Likewise.
        * include/bits/stdlib-float.h: Likewise.
        * include/errno.h: Likewise.
        * include/sys/stat.h: Likewise.
        * include/unistd.h: Likewise.
        * sysdeps/aarch64/setjmp.S: Likewise.
        * sysdeps/alpha/setjmp.S: Likewise.
        * sysdeps/arm/__longjmp.S: Likewise.
        * sysdeps/arm/aeabi_unwind_cpp_pr1.c: Likewise.
        * sysdeps/arm/setjmp.S: Likewise.
        * sysdeps/arm/sysdep.h: Likewise.
        * sysdeps/generic/_itoa.h: Likewise.
        * sysdeps/generic/dl-sysdep.h: Likewise.
        * sysdeps/generic/ldsodefs.h: Likewise.
        * sysdeps/i386/dl-tls.h: Likewise.
        * sysdeps/i386/setjmp.S: Likewise.
        * sysdeps/m68k/setjmp.c: Likewise.
        * sysdeps/mach/hurd/dl-execstack.c: Likewise.
        * sysdeps/mach/hurd/opendir.c: Likewise.
        * sysdeps/posix/getcwd.c: Likewise.
        * sysdeps/posix/opendir.c: Likewise.
        * sysdeps/posix/profil.c: Likewise.
        * sysdeps/powerpc/dl-procinfo.h: Likewise.
        * sysdeps/powerpc/powerpc32/fpu/__longjmp-common.S: Likewise.
        * sysdeps/powerpc/powerpc32/fpu/setjmp-common.S: Likewise.
        * sysdeps/powerpc/powerpc32/power4/multiarch/init-arch.h: Likewise.
        * sysdeps/powerpc/powerpc32/setjmp-common.S: Likewise.
        * sysdeps/powerpc/powerpc64/__longjmp-common.S: Likewise.
        * sysdeps/powerpc/powerpc64/setjmp-common.S: Likewise.
        * sysdeps/s390/dl-tls.h: Likewise.
        * sysdeps/s390/s390-32/setjmp.S: Likewise.
        * sysdeps/s390/s390-64/setjmp.S: Likewise.
        * sysdeps/sh/sh3/setjmp.S: Likewise.
        * sysdeps/sh/sh4/setjmp.S: Likewise.
        * sysdeps/unix/alpha/sysdep.h: Likewise.
        * sysdeps/unix/arm/sysdep.S: Likewise.
        * sysdeps/unix/i386/sysdep.S: Likewise.
        * sysdeps/unix/sysv/linux/aarch64/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/getcwd.c: Likewise.
        * sysdeps/unix/sysv/linux/hppa/nptl/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/i386/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/i386/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/ia64/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/ia64/setjmp.S: Likewise.
        * sysdeps/unix/sysv/linux/ia64/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/lowlevellock-futex.h: Likewise.
        * sysdeps/unix/sysv/linux/m68k/bits/m68k-vdso.h: Likewise.
        * sysdeps/unix/sysv/linux/m68k/m68k-helpers.S: Likewise.
        * sysdeps/unix/sysv/linux/microblaze/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/powerpc/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/powerpc/powerpc32/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/powerpc/powerpc64/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/s390/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/s390/s390-32/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/s390/s390-64/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/sh/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/sh/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/sparc/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/sparc/sparc32/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/sparc/sparc64/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/tile/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/tile/sysdep.h: Likewise.
        * sysdeps/unix/sysv/linux/x86_64/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/x86_64/sysdep.h: Likewise.
        * sysdeps/unix/x86_64/sysdep.S: Likewise.
        * sysdeps/x86_64/setjmp.S: Likewise.
2014-11-24 11:41:48 +05:30
Andrew Pinski
6d3db89b12 AArch64: Reformat inline-asm in elf_machine_load_address
This patch reformats the inline-asm in elf_machine_load_address so it is
easier to change only part of the inline-asm.  That is, it uses string
concatenation instead of string continuation.

Also document why this inline-asm works - it depends on the 32-bit
relocation being resolved at link time.
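
The two styles, sketched on trivial illustrative instructions rather
than the real load-address sequence:

  static inline long
  example (void)
  {
    long x;
    /* String continuation: the whole template is one logical line.  */
    asm ("mov %0, #1\n\
  add %0, %0, #1" : "=r" (x));
    /* String concatenation: each instruction can be edited alone.  */
    asm ("mov %0, #1\n"
         "add %0, %0, #1"
         : "=r" (x));
    return x;
  }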

ChangeLog:

2014-11-21  Will Newton  <will.newton@linaro.org>
	    Andrew Pinski  <andrew.pinski@caviumnetworks.com>

	* sysdeps/aarch64/dl-machine.h (elf_machine_load_address):
	Refactor inline-asm.  Also add comment.
2014-11-21 14:45:11 +00:00
Will Newton
01194ba18d AArch64: Use ELF macros rather than Elf64 throughout
Using the macros for ELF types is required for adding ILP32 support.
In the standard AArch64 configuration this makes no difference to
the types used.
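
ElfW selects the naturally-sized ELF types for the configuration;
roughly, from <link.h>:

  #define ElfW(type)     _ElfW (Elf, __ELF_NATIVE_CLASS, type)
  #define _ElfW(e,w,t)   _ElfW_1 (e, w, _##t)
  #define _ElfW_1(e,w,t) e##w##t
  /* So ElfW(Addr) is Elf64_Addr when __ELF_NATIVE_CLASS is 64 (LP64)
     and Elf32_Addr when it is 32 (ILP32).  */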

ChangeLog:

2014-11-21  Will Newton  <will.newton@linaro.org>
	    Andrew Pinski  <andrew.pinski@caviumnetworks.com>

	* sysdeps/aarch64/bits/link.h (la_aarch64_gnu_pltenter): Use
	ElfW macro instead of hardcoded Elf64 types.
	(la_aarch64_gnu_pltexit): Likewise.
	* sysdeps/aarch64/dl-machine.h
	(elf_machine_runtime_setup): Use ElfW(Addr).
2014-11-21 14:44:23 +00:00
Will Newton
8c230039a0 AArch64: Update relocations for ILP32
The latest version of the binutils ELF header defines a new set of
dynamic relocations for ILP32 and renames some to make the naming
more uniform.

ChangeLog:

2014-11-21  Will Newton  <will.newton@linaro.org>
	    Andrew Pinski  <andrew.pinski@caviumnetworks.com>

	* elf/elf.h (R_AARCH64_P32_ABS32, R_AARCH64_P32_COPY,
	R_AARCH64_P32_GLOB_DAT, R_AARCH64_P32_JUMP_SLOT,
	R_AARCH64_P32_RELATIVE, R_AARCH64_P32_TLS_DTPMOD,
	R_AARCH64_P32_TLS_DTPREL, R_AARCH64_P32_TLS_TPREL,
	R_AARCH64_P32_TLSDESC, R_AARCH64_P32_IRELATIVE): Define.
	(R_AARCH64_TLS_DTPMOD64): Rename to ...
	(R_AARCH64_TLS_DTPMOD): This.
	(R_AARCH64_TLS_DTPREL64): Rename to ...
	(R_AARCH64_TLS_DTPREL): This.
	(R_AARCH64_TLS_TPREL64): Rename to ...
	(R_AARCH64_TLS_TPREL): This.
	* sysdeps/aarch64/dl-machine.h (elf_machine_type_class): Update
	R_AARCH64_TLS_DTPMOD64, R_AARCH64_TLS_DTPREL64, and
	R_AARCH64_TLS_TPREL64.
	(elf_machine_rela): Likewise.
2014-11-21 14:43:16 +00:00
Torvald Riegel
1ea339b697 Add arch-specific configuration for C11 atomics support.
This sets __HAVE_64B_ATOMICS where 64-bit atomics are provided.  It also sets
USE_ATOMIC_COMPILER_BUILTINS to true if the existing atomic ops use the
__atomic* builtins (aarch64, mips partially) or if this has been
tested (x86_64); otherwise, this is set to false so that C11 atomics will
be based on the existing atomic operations.
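
As an illustrative atomic-machine.h fragment (the values are examples,
not any particular port's):

  /* 64-bit atomic operations are supported.  */
  #define __HAVE_64B_ATOMICS 1
  /* Atomic operations are implemented via the __atomic_* builtins.  */
  #define USE_ATOMIC_COMPILER_BUILTINS 1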
2014-11-20 11:57:38 +01:00
Renlin Li
80085defb8 [AArch64] End frame record chain correctly. 2014-11-11 15:02:02 +00:00
Richard Earnshaw
be9d4ccc7f [AArch64] Add optimized strchrnul.
Here is an optimized implementation of __strchrnul.  The
simplification that we don't have to track precisely why the loop
terminates (match or end-of-string) means we have to do less work in
both setup and the core inner loop.  That means this should never be
slower than strchr.

As with strchr, the use of LD1 means we do not need different versions
for big-/little-endian.
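
For reference, the C semantics the optimized version implements:

  char *
  __strchrnul (const char *s, int c)
  {
    /* Return a pointer to the first occurrence of C in S, or to the
       terminating null byte if C does not occur.  */
    while (*s != '\0' && *s != (char) c)
      ++s;
    return (char *) s;
  }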
2014-11-05 13:51:56 +00:00
Joseph Myers
c5684fdb2b Don't use INTDEF/INTUSE with _dl_init (bug 14132).
Continuing the removal of the obsolete INTDEF / INTUSE mechanism, this
patch eliminates its use for _dl_init.  Since _dl_init was already
declared with hidden visibility, creating a second hidden alias for it
was completely pointless, so this patch replaces all uses of
_dl_init_internal with plain _dl_init instead of using hidden_proto /
hidden_def (which are only needed when you want a hidden alias for a
non-hidden symbol; it's quite possible there are cases where they are
used but don't need to be because the symbol in question is not part
of the public ABI and is only used within a single library, so using
attribute_hidden instead would suffice).
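
Schematically: hidden_proto / hidden_def create a hidden __GI_* alias
alongside an exported symbol, whereas a symbol private to one library
only needs a plain hidden declaration, along the lines of:

  extern void _dl_init (struct link_map *, int, char **, char **)
    attribute_hidden;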

Tested for x86_64 that installed stripped shared libraries are
unchanged by the patch.

	[BZ #14132]
	* elf/dl-init.c (_dl_init): Don't use INTDEF.
	* sysdeps/aarch64/dl-machine.h (RTLD_START): Use _dl_init instead
	of _dl_init_internal.
	* sysdeps/alpha/dl-machine.h (RTLD_START): Likewise.
	* sysdeps/arm/dl-machine.h (RTLD_START): Likewise.
	* sysdeps/hppa/dl-machine.h (RTLD_START): Likewise.
	* sysdeps/i386/dl-machine.h (RTLD_START): Likewise.
	* sysdeps/ia64/dl-machine.h (RTLD_START): Likewise.
	* sysdeps/m68k/dl-machine.h (RTLD_START): Likewise.
	* sysdeps/microblaze/dl-machine.h (RTLD_START): Likewise.
	* sysdeps/mips/dl-machine.h (RTLD_START): Likewise.
	* sysdeps/powerpc/powerpc32/dl-start.S (_start): Likewise.
	* sysdeps/s390/s390-32/dl-machine.h (RTLD_START): Likewise.
	* sysdeps/s390/s390-64/dl-machine.h (RTLD_START): Likewise.
	* sysdeps/sh/dl-machine.h (RTLD_START): Likewise.
	* sysdeps/sparc/sparc32/dl-machine.h (RTLD_START): Likewise.
	* sysdeps/sparc/sparc64/dl-machine.h (RTLD_START): Likewise.
	* sysdeps/tile/dl-start.S (_start): Likewise.
	* sysdeps/x86_64/dl-machine.h (RTLD_START): Likewise.
	* sysdeps/x86_64/x32/dl-machine.h (RTLD_START): Likewise.
2014-11-04 23:26:39 +00:00
Wilco Dijkstra
6a9ad2faee Call libc_fetestexcept_aarch64 from math_private.h rather than duplicating functionality. 2014-10-24 13:23:12 +00:00
Wilco Dijkstra
1c8810ed95 Call libc_feholdexcept_aarch64 from math_private.h rather than duplicating functionality. 2014-10-24 13:21:27 +00:00
Wilco Dijkstra
8b1af712d1 Call get_rounding_mode rather than duplicating functionality. 2014-10-24 13:19:24 +00:00
Wilco Dijkstra
a7b00c1101 Cleanup feenableexcept to use the same logic as the ARM version. No functional changes. 2014-10-24 13:07:17 +00:00
Wilco Dijkstra
3a84f1a651 Cleanup fedisableexcept to use the same logic as the ARM version. No functional changes. 2014-10-24 13:06:04 +00:00
Wilco Dijkstra
ea9a7c8b06 Cleanup feclearexcept to use the same logic as the ARM version. No functional changes. 2014-10-24 13:03:11 +00:00
Wilco Dijkstra
e226de3372 Cleanup fesetexceptflag to use the same logic as the ARM version. No functional changes. 2014-10-24 13:03:09 +00:00
Wilco Dijkstra
6e3d8ed360 Remove an unused include. 2014-10-24 13:03:08 +00:00
Wilco Dijkstra
eb04247d5d Remove spaces. 2014-10-24 12:53:19 +00:00
H.J. Lu
f4a58f0d35 Require autoconf 2.69
	* aclocal.m4: Require autoconf 2.69.
	* configure: Regenerated.
	* sysdeps/aarch64/configure: Likewise.
	* sysdeps/alpha/configure: Likewise.
	* sysdeps/arm/armv7/configure: Likewise.
	* sysdeps/arm/configure: Likewise.
	* sysdeps/ia64/configure: Likewise.
	* sysdeps/mach/configure: Likewise.
	* sysdeps/mips/configure: Likewise.
	* sysdeps/s390/configure: Likewise.
	* sysdeps/unix/sysv/linux/mips/configure: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc64/configure: Likewise.

	* sysdeps/alpha/configure.ac: Avoid empty lines at the end of
	file.
	* sysdeps/ia64/configure.ac: Likewise.
2014-09-29 07:53:36 -07:00
Siddhesh Poyarekar
eb72478a28 Remove unnecessary uses of NOT_IN_libc
If an IS_IN_* macro is defined, then NOT_IN_libc is always defined,
except obviously for IS_IN_libc.  There's no need to check for both.
Verified on x86_64 and i686 that the generated code is unchanged.
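
Shown on a hypothetical guard (the guarded definition is illustrative
only):

  /* Before: the NOT_IN_libc test is redundant.  */
  #if defined NOT_IN_libc && defined IS_IN_libpthread
  # define EXAMPLE_PTHREAD_ONLY 1
  #endif

  /* After: */
  #ifdef IS_IN_libpthread
  # define EXAMPLE_PTHREAD_ONLY 1
  #endif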

       * include/libc-symbols.h: Remove unnecessary check for
       NOT_IN_libc.
       * nptl/pthreadP.h: Likewise.
       * sysdeps/aarch64/setjmp.S: Likewise.
       * sysdeps/alpha/setjmp.S: Likewise.
       * sysdeps/arm/sysdep.h: Likewise.
       * sysdeps/i386/setjmp.S: Likewise.
       * sysdeps/m68k/setjmp.c: Likewise.
       * sysdeps/posix/getcwd.c: Likewise.
       * sysdeps/powerpc/powerpc32/setjmp-common.S: Likewise.
       * sysdeps/powerpc/powerpc64/setjmp-common.S: Likewise.
       * sysdeps/s390/s390-32/setjmp.S: Likewise.
       * sysdeps/s390/s390-64/setjmp.S: Likewise.
       * sysdeps/sh/sh3/setjmp.S: Likewise.
       * sysdeps/sh/sh4/setjmp.S: Likewise.
       * sysdeps/unix/alpha/sysdep.h: Likewise.
       * sysdeps/unix/sysv/linux/aarch64/sysdep.h: Likewise.
       * sysdeps/unix/sysv/linux/i386/sysdep.h: Likewise.
       * sysdeps/unix/sysv/linux/ia64/setjmp.S: Likewise.
       * sysdeps/unix/sysv/linux/ia64/sysdep.h: Likewise.
       * sysdeps/unix/sysv/linux/powerpc/powerpc32/sysdep.h: Likewise.
       * sysdeps/unix/sysv/linux/powerpc/powerpc64/sysdep.h: Likewise.
       * sysdeps/unix/sysv/linux/s390/s390-32/sysdep.h: Likewise.
       * sysdeps/unix/sysv/linux/s390/s390-64/sysdep.h: Likewise.
       * sysdeps/unix/sysv/linux/sh/sysdep.h: Likewise.
       * sysdeps/unix/sysv/linux/sparc/sparc32/sysdep.h: Likewise.
       * sysdeps/unix/sysv/linux/sparc/sparc64/sysdep.h: Likewise.
       * sysdeps/unix/sysv/linux/tile/sysdep.h: Likewise.
       * sysdeps/unix/sysv/linux/x86_64/sysdep.h: Likewise.
       * sysdeps/x86_64/setjmp.S: Likewise.
2014-08-21 10:26:46 +05:30
Wilco Dijkstra
656b84c2ef This patch adds a new function, libc_feholdsetround_noex_aarch64_ctx, enabling
further optimization. libc_feholdsetround_aarch64_ctx now only needs to
read the FPCR in the typical case, avoiding a redundant FPSR read.
Performance results show a good improvement (5-10% on sin()) on cores with
expensive FPCR/FPSR instructions.
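
A sketch of the idea (helper name and structure are illustrative, not
the exact glibc internals; FPCR.RMode occupies bits [23:22] on
AArch64):

  static inline void
  hold_setround_noex (unsigned long *saved_fpcr, unsigned long round)
  {
    unsigned long fpcr;
    /* Read FPCR only; FPSR is not touched on this path.  */
    asm volatile ("mrs %0, fpcr" : "=r" (fpcr));
    *saved_fpcr = fpcr;
    /* Write FPCR back only if the rounding mode actually changes.  */
    if ((fpcr & 0xc00000UL) != round)
      asm volatile ("msr fpcr, %0"
                    : : "r" ((fpcr & ~0xc00000UL) | round));
  }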
2014-08-07 16:29:55 +00:00
Marcus Shawcroft
33ef2f0c76 Revert "aarch64: Add hp-timing.h"
This reverts commit 4052993954.

Conflicts:
	sysdeps/aarch64/hp-timing.h
2014-07-22 12:09:44 +01:00
Joseph Myers
29c4f53e2a Move architecture shlib-versions files to Linux-specific directories.
Various architectures have files such as sysdeps/<arch>/shlib-versions
whose contents are in fact entirely Linux-specific, relating only to
the symbol / shared library versions for the port to Linux on that
architecture, when any future port to a different OS on that
architecture would use the symbol version of the glibc release it goes
in, as standard for new ports.

This patch moves such files under sysdeps/unix/sysv/linux/, merging in
the contents of sysdeps/<arch>/nptl/shlib-versions in the process.
The only bits not moved are those relating to libgcc_s versions, which
don't appear OS-specific in the same way that glibc's symbol versions
do.  The patch deliberately does not change the regular expressions
given for matching configurations in each file; some match only Linux
although not Linux-specific, or match other OSes although
Linux-specific.  This is with a view to at least the following further
cleanups:

* Move architecture-specific content from the toplevel shlib-versions
  and nptl/shlib-versions into sysdeps shlib-versions files, so
  eliminating another difference between ex-ports and non-ex-ports
  architectures.

* Likewise, for OS-specific content in shlib-versions files.

* At that point, the first field in shlib-versions files (the regular
  expression matching a configuration triplet) should be redundant, so
  eliminate that field and leave shlib-versions selection working
  purely on a sysdeps basis (with limited use of %ifdef in
  shlib-versions files when needed) rather than having its own
  separate mechanism to select what configuration information is
  relevant.

* Move the build of gnu/lib-names.h to a similar mechanism to that
  used for gnu/stubs.h (each library build installing a version of the
  header specifically for that build), so we can eliminate the
  duplication of soname information in the makefiles and get it purely
  from shlib-versions files again.

There may be other cleanups possible as well (in particular, I'm not
sure that all cases where the same "Earliest symbol set" information
is repeated for many different libraries actually need to
repeat it rather than specifying it just once for DEFAULT for the
given configuration, and separately specifying any non-default choices
of soname).

Tested x86_64 that the installed shared libraries are unchanged by
this patch.

	* sysdeps/aarch64/shlib-versions: Move to ...
	* sysdeps/unix/sysv/linux/aarch64/shlib-versions: ... here.
	* sysdeps/alpha/shlib-versions: Move to ...
	* sysdeps/unix/sysv/linux/alpha/shlib-versions: ... here.
	* sysdeps/arm/shlib-versions: Move to ...
	* sysdeps/unix/sysv/linux/arm/shlib-versions: ... here.
	* sysdeps/hppa/shlib-versions: Move all contents except for
	libgcc_s entry to ...
	* sysdeps/unix/sysv/linux/hppa/shlib-versions: ... here.  Merge in
	entry from ...
	* sysdeps/hppa/nptl/shlib-versions: ... here.  Remove file.
	* sysdeps/ia64/shlib-versions: Move to ...
	* sysdeps/unix/sysv/linux/ia64/shlib-versions: ... here.  Merge in
	entry from ...
	* sysdeps/ia64/nptl/shlib-versions: ... here.  Remove file.
	* sysdeps/m68k/coldfire/shlib-versions: Move to ...
	* sysdeps/unix/sysv/linux/m68k/coldfire/shlib-versions: ... here.
	* sysdeps/microblaze/shlib-versions: Move to ...
	* sysdeps/unix/sysv/linux/microblaze/shlib-versions: ... here.
	* sysdeps/mips/shlib-versions: Move to ...
	* sysdeps/unix/sysv/linux/mips/shlib-versions: ... here.  Merge in
	entry from ...
	* sysdeps/mips/nptl/shlib-versions: ... here.  Remove file.
	* sysdeps/tile/shlib-versions: Move to ...
	* sysdeps/unix/sysv/linux/tile/shlib-versions: ... here.
	* sysdeps/unix/sysv/linux/x86_64/64/shlib-versions: Merge in entry
	from ...
	* sysdeps/x86_64/64/shlib-versions: ... here.  Remove file.
	* sysdeps/unix/sysv/linux/x86_64/x32/shlib-versions: Merge in
	entry from ...
	* sysdeps/x86_64/x32/shlib-versions: ... here.  Remove file.
2014-07-17 14:31:12 +00:00
Richard Henderson
a75b89b776 aarch64: Update libm-test-ulps 2014-07-11 10:57:48 -07:00
Will Newton
82374e65d7 Fix -Wundef warnings for SHARED
The definition of SHARED is tested with #ifdef pretty much everywhere
apart from these few places. The tlsdesc.c code seems to be copy and
pasted to a few architectures and there is one instance in the hppa
startup code.
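
The fix in each place, schematically:

  /* Before: evaluates SHARED, so -Wundef warns when it is undefined
     (i.e. in static builds).  */
  #if SHARED
  /* ... shared-only code ...  */
  #endif

  /* After: tests whether SHARED is defined.  */
  #ifdef SHARED
  /* ... shared-only code ...  */
  #endif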

ChangeLog:

2014-07-09  Will Newton  <will.newton@linaro.org>

	* sysdeps/aarch64/tlsdesc.c (_dl_unmap): Test SHARED with #ifdef.
	* sysdeps/arm/tlsdesc.c (_dl_unmap): Likewise.
	* sysdeps/i386/tlsdesc.c (_dl_unmap): Likewise.
	* sysdeps/x86_64/tlsdesc.c (_dl_unmap): Likewise.
	* sysdeps/hppa/start.S (_start): Likewise.
2014-07-09 09:26:07 +01:00
Richard Henderson
05502548e9 Always provide HP_SMALL_TIMING_AVAIL 2014-07-03 08:38:36 -07:00
Richard Henderson
4052993954 aarch64: Add hp-timing.h 2014-07-03 08:38:34 -07:00
Joseph Myers
cb403c34c6 Remove relro configure test.
This patch removes the configure test for working -z relro.

The use of -z relro in Makeconfig became unconditional with

commit 2e6ab1df44c412bb9d30b26a4d8a679150a7e375
Author: Ulrich Drepper <drepper@redhat.com>
Date:   Sat Oct 28 06:44:04 2006 +0000

    Remove conditional code which now is unnecessary.

(commit reference from git://repo.or.cz/glibc/history), so since then
the configure test has not controlled anything about how glibc is
built - simply about whether configure succeeds and allows a build to
be attempted.  The test for whether the option did something useful
(as opposed to whether it exists - which we can certainly just assume
by now) was originally added in
<https://sourceware.org/ml/libc-hacker/2004-09/msg00069.html> to
disable the option in a case when it did nothing useful on ia64 (as a
result of something deliberate in the linker on ia64).  Since 2006
that disabling has been of no effect, and given that the current test
does not set libc_relro_required for ia64, it does nothing whatever
useful for the original motivating case.  Also at around the same time
in 2006 the test was made to give an error for missing or broken -z
relro support on various architectures.

So effectively all the test does now is verify that, on certain
architectures, the linker has not been changed deliberately to make
the option ineffective.  I see no apparent reason why such a change
should be expected, or why the build should be stopped if it were to
be made (any more than we disallow building on ia64); I think we can
trust binutils patch review to point out the consequences of any
change to the COMMONPAGESIZE setting.  The only thing that might now
make sense would be disabling the use of -z relro on an
architecture-specific basis if there were an architecture-specific
reason to do so; it would be for the ia64 maintainer to decide whether
that applies to ia64 at present, but I think it could be done through
sysdeps Makefiles - no special configure tests needed.

Tested for x86_64 that this patch makes no change to the installed
shared libraries.

Together with
<https://sourceware.org/ml/libc-alpha/2014-06/msg00788.html> (pending
review) this substantially eliminates architecture-specific cases from
architecture-independent configure.ac files.  There remains an i386
case in sysdeps/mach/hurd/configure.ac that should properly move to
the i386 subdirectory.  (There are also OS-specific cases outside
OS-specific directories; in principle I think those should also
move.)

	* configure.ac (libc_commonpagesize): Remove variable.
	(libc_relro_required): Likewise.
	(libc_cv_z_relro): Remove configure test.
	* configure: Regenerated.
	* sysdeps/aarch64/preconfigure (libc_commonpagesize): Do not set
	variable.
	(libc_relro_required): Likewise.
	* sysdeps/alpha/preconfigure (libc_commonpagesize): Likewise.
	(libc_relro_required): Likewise.
	* sysdeps/arm/preconfigure.ac (libc_commonpagesize): Likewise.
	(libc_relro_required): Likewise.
	* sysdeps/arm/preconfigure: Regenerated.
	* sysdeps/ia64/preconfigure: Remove file.
	* sysdeps/tile/preconfigure (libc_commonpagesize): Do not set
	variable.
	(libc_relro_required): Likewise.
2014-06-27 16:51:22 +00:00
Siddhesh Poyarekar
4cf5b6d0d7 Fix Wundef warning for ELF_MACHINE_NO_RELA
This patch defines ELF_MACHINE_NO_RELA on all architectures.  Tested
only on x86_64 to verify that the generated code before and after is
identical except for two instructions that pass the current line
number in dl-machine.h to __assert_fail.
2014-06-26 22:30:40 +05:30
Roland McGrath
6ad2df0bda AArch64: Consolidate nptl/ subdirectories under linux/... 2014-06-26 09:29:24 -07:00
Richard Earnshaw
f940b96522 [AArch64] Add optimized strchr.
Implementation of strchr for AArch64.  Speedups taken from micro-bench
show the improvements relative to the standard C code.

The use of LD1 means we have identical code for both big- and
little-endian systems.
2014-06-19 11:03:59 +01:00
Roland McGrath
9503784a07 AArch64: Define TLS_DEFINE_INIT_TP 2014-06-11 12:23:01 -07:00
Marcus Shawcroft
ccc3991113 [AArch64] Regenerate libm-test-ulps 2014-06-03 12:45:10 +00:00
Wilco
693096cc7b [AArch64] Switch from FE_TOWARDZERO to _FPU_FPCR_RM_MASK 2014-06-03 12:44:50 +00:00
Wilco
0b4366bc9b [AArch64] Cleanup declarations in math_private.h. 2014-06-03 12:44:49 +00:00
Wilco
a88dadbed5 [AArch64] Remove ISB after FPCR write. 2014-06-02 12:44:21 +01:00
Wilco
c95b301101 [AArch64] Rewrite feupdateenv (BZ 17009). 2014-06-02 12:36:34 +01:00
Andreas Schwab
774f928582 Remove second argument from TLS_INIT_TP macro 2014-05-27 14:48:46 +02:00
Kyle McMartin
75f11331f9 [AARCH64] correct alignment of TLS_TCB_ALIGN (BZ #16796)
This fixes a variety of testsuite failures for me:
tststatic.out Error 1
tststatic2.out Error 1
tst-tls9-static.out Error 1
tst-audit8.out Error 127
tst-audit9.out Error 127
tst-audit1.out Error 127
and also has the added benefit of making LD_AUDIT/sotruss work on
AArch64.

Otherwise, we bail out early in _dl_try_allocate_static_tls as the
alignment requirement of the PT_TLS section in libc is 16.
2014-05-26 12:37:19 +05:30
Roland McGrath
e0db65176f Clean up __exit_thread. 2014-05-13 09:49:20 -07:00
Ian Bolton
e5e0d9a4f6 [AArch64] Suppress unnecessary FPSR and FPCR writes. 2014-04-24 07:15:33 +01:00
Venkataramanan Kumar
140cc7abf7 aarch64: Add setjmp and longjmp SystemTap probes
Add setjmp, longjmp and longjmp_target SystemTap probes.

ChangeLog:

2014-04-22  Will Newton  <will.newton@linaro.org>
	    Venkataramanan Kumar  <venkataramanan.kumar@linaro.org>

	* sysdeps/aarch64/__longjmp.S: Include stap-probe.h.
	(__longjmp): Add longjmp and longjmp_target SystemTap
	probes.
	* sysdeps/aarch64/setjmp.S: Include stap-probe.h.
	(__sigsetjmp): Add setjmp SystemTap probe.
2014-04-22 11:13:16 +01:00
Ian Bolton
39e6cd8d64 Add fenv test support for AArch64. 2014-04-17 16:58:35 +01:00
Ian Bolton
bc93ab2946 [AArch64] Define HAVE_RM_CTX and related hooks. 2014-04-17 08:30:07 +01:00
Ian Bolton
7e0b6763e9 [AArch64] Provide initial implementation of math_private.h. 2014-04-17 08:28:41 +01:00
Marcus Shawcroft
a9ea2e0cbe [AArch64] Regenerate libm-test-ulps. 2014-04-16 23:08:51 +01:00
Roland McGrath
fcccd51286 Factor mmap/munmap of PT_LOAD segments out of _dl_map_object_from_fd et al. 2014-04-03 10:47:14 -07:00
Roland McGrath
498a22333b Compile with -Wundef. 2014-03-14 11:32:51 -07:00
Marcus Shawcroft
302949e294 [PATCH] [AArch64] Optional trapping exceptions support.
Trapping exceptions in AArch64 are optional.  The relevant exception
control bits in FPCR are defined as RES0, hence the absence of support
can be detected by reading back the FPCR and comparing with the
desired value.
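
A sketch of the detection (the helper is illustrative, not the exact
glibc code; the << 8 follows the architectural layout of the FPCR
trap-enable bits relative to the exception flags):

  #include <fenv.h>

  static int
  enable_trapping (int excepts)
  {
    unsigned long fpcr, requested;
    asm volatile ("mrs %0, fpcr" : "=r" (fpcr));
    requested = fpcr | (((unsigned long) excepts & FE_ALL_EXCEPT) << 8);
    asm volatile ("msr fpcr, %0" : : "r" (requested));
    /* Unimplemented (RES0) enable bits read back as zero.  */
    asm volatile ("mrs %0, fpcr" : "=r" (fpcr));
    return fpcr == requested ? 0 : -1;
  }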
2014-03-07 14:05:20 +00:00
Joseph Myers
e6b6a85705 Don't include individual test ulps in libm-test-ulps.
As recently discussed
<https://sourceware.org/ml/libc-alpha/2014-02/msg00670.html>, it
doesn't seem particularly useful for libm-test-ulps files to contain
huge amounts of data on ulps for individual tests; just the global
maximum observed ulps for each function, together with the
verification of exceptions, errno and special results such as
infinities and NaNs for each test, suffices to verify that a
function's behavior on the given test inputs is within the expected
accuracy.  Removing this data reduces source tree churn caused by
updates to these files when libm tests are added, and reduces the
frequency with which testsuite additions actually need libm-test-ulps
changes at all.

Accordingly, this patch removes that data, so that individual tests
get checked against the global bounds for the given function and only
generate an error if those are exceeded.  Tested x86_64 (including
verifying that if an ulps value is artificially reduced, the tests do
indeed fail as they should and "make regen-ulps" generates the
expected changes).

	* math/libm-test.inc (struct ulp_data): Don't refer to ulps for
	individual tests in comment.
	(libm-test-ulps.h): Don't refer to test_ulps in #include comment.
	(prev_max_error): New variable.
	(prev_real_max_error): Likewise.
	(prev_imag_max_error): Likewise.
	(compare_ulp_data): Don't refer to test names in comment.
	(find_test_ulps): Remove function.
	(find_function_ulps): Likewise.
	(find_complex_function_ulps): Likewise.
	(init_max_error): Take function name as argument.  Look up ulps
	for that function.
	(print_ulps): Remove function.
	(print_max_error): Use prev_max_error instead of calling
	find_function_ulps.
	(print_complex_max_error): Use prev_real_max_error and
	prev_imag_max_error instead of calling find_complex_function_ulps.
	(check_float_internal): Take max_ulp parameter instead of calling
	find_test_ulps.  Don't call print_ulps.
	(check_float): Update call to check_float_internal.
	(check_complex): Update calls to check_float_internal.
	(START): Pass argument to init_max_error.
	* math/gen-libm-test.pl (%results): Don't include "kind"
	information.
	(parse_ulps): Don't handle ulps of individual tests.
	(print_ulps_file): Likewise.
	(output_ulps): Likewise.
	* math/README.libm-test: Update.
	* manual/libm-err-tab.pl (parse_ulps): Don't handle ulps of
	individual tests.
	* sysdeps/aarch64/libm-test-ulps: Remove individual test ulps.
	* sysdeps/alpha/fpu/libm-test-ulps: Likewise.
	* sysdeps/arm/libm-test-ulps: Likewise.
	* sysdeps/i386/fpu/libm-test-ulps: Likewise.
	* sysdeps/ia64/fpu/libm-test-ulps: Likewise.
	* sysdeps/m68k/coldfire/fpu/libm-test-ulps: Likewise.
	* sysdeps/m68k/m680x0/fpu/libm-test-ulps: Likewise.
	* sysdeps/microblaze/libm-test-ulps: Likewise.
	* sysdeps/mips/mips32/libm-test-ulps: Likewise.
	* sysdeps/mips/mips64/libm-test-ulps: Likewise.
	* sysdeps/powerpc/fpu/libm-test-ulps: Likewise.
	* sysdeps/powerpc/nofpu/libm-test-ulps: Likewise.
	* sysdeps/s390/fpu/libm-test-ulps: Likewise.
	* sysdeps/sh/libm-test-ulps: Likewise.
	* sysdeps/sparc/fpu/libm-test-ulps: Likewise.
	* sysdeps/tile/libm-test-ulps: Likewise.
	* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.

	* sysdeps/hppa/fpu/libm-test-ulps: Remove individual test ulps.
2014-03-05 15:02:38 +00:00
Joseph Myers
ace614b8a5 soft-fp: support after-rounding tininess detection.
IEEE 754-2008 defines two ways in which tiny results can be detected,
"before rounding" (based on the infinite-precision result) and "after
rounding" (based on the result when rounded to normal precision as if
the exponent range were unbounded).  All binary operations on an
architecture must use the same choice of how tininess is detected.

soft-fp has so far implemented only before-rounding tininess
detection.  This patch adds support for after-rounding tininess
detection.  A new macro _FP_TININESS_AFTER_ROUNDING is added that
sfp-machine.h must define (soft-fp is meant to be self-contained so
the existing tininess.h files aren't used here, though the information
going in sfp-machine.h has been taken from them).  The soft-fp macros
dealing with raising underflow exceptions then handle the cases where
the choice matters specially, rounding a copy of the input to the
appropriate precision to see if a value that's tiny before rounding
isn't tiny after rounding.
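
A self-contained illustration of the distinction (link with -lm): the
exact product below is tiny before rounding but rounds up to the
smallest normal, so an after-rounding architecture such as x86_64
raises no underflow while a before-rounding architecture does:

  #include <fenv.h>
  #include <stdio.h>

  int
  main (void)
  {
    volatile double a = 0x7ffffffp-538;  /* (2^27 - 1) * 2^-538 */
    volatile double b = 0x8000001p-538;  /* (2^27 + 1) * 2^-538 */
    feclearexcept (FE_ALL_EXCEPT);
    /* Exact product: (2^54 - 1) * 2^-1076, just below 0x1p-1022;
       it rounds (to nearest) to exactly 0x1p-1022.  */
    volatile double r = a * b;
    printf ("r = %a\nunderflow %sraised\n", r,
            fetestexcept (FE_UNDERFLOW) ? "" : "not ");
    return 0;
  }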

Tested for mips64 using GCC trunk (which now uses soft-fp on MIPS, so
supporting exceptions and rounding modes for long double where not
previously supported - this is the immediate motivation for doing this
patch now) together with (a) a patch to sysdeps/mips/math-tests.h to
enable exceptions / rounding modes tests for long double for GCC 4.9
and later, and (b) corresponding changes applied to libgcc's soft-fp
and sfp-machine.h files.  In the libgcc context this is also tested on
x86_64 (also an after-rounding architecture) with testcases for
__float128 that I intend to add to the GCC testsuite when updating
soft-fp there.

(To be clear: this patch does not fix any glibc bugs that were
user-visible in past releases, since after-rounding architectures
didn't use soft-fp in any affected case with support for
floating-point exceptions - so there is no corresponding Bugzilla bug.
Rather, it works together with the GCC changes to use soft-fp on MIPS
to allow previously absent long double functionality to work properly,
and allows soft-fp to be used in glibc on after-rounding architectures
in cases where it couldn't previously be used.)

	* soft-fp/op-common.h (_FP_DECL): Mark exponent as possibly
	unused.
	(_FP_PACK_SEMIRAW): Determine tininess based on rounding shifted
	value if _FP_TININESS_AFTER_ROUNDING and unrounded value is in
	subnormal range.
	(_FP_PACK_CANONICAL): Determine tininess based on rounding to
	normal precision if _FP_TININESS_AFTER_ROUNDING and unrounded
	value has largest subnormal exponent.
	* soft-fp/soft-fp.h [FP_NO_EXCEPTIONS]
	(_FP_TININESS_AFTER_ROUNDING): Undefine and redefine to 0.
	* sysdeps/aarch64/soft-fp/sfp-machine.h
	(_FP_TININESS_AFTER_ROUNDING): New macro.
	* sysdeps/alpha/soft-fp/sfp-machine.h
	(_FP_TININESS_AFTER_ROUNDING): Likewise.
	* sysdeps/arm/soft-fp/sfp-machine.h (_FP_TININESS_AFTER_ROUNDING):
	Likewise.
	* sysdeps/mips/mips64/soft-fp/sfp-machine.h
	(_FP_TININESS_AFTER_ROUNDING): Likewise.
	* sysdeps/mips/soft-fp/sfp-machine.h
	(_FP_TININESS_AFTER_ROUNDING): Likewise.
	* sysdeps/powerpc/soft-fp/sfp-machine.h
	(_FP_TININESS_AFTER_ROUNDING): Likewise.
	* sysdeps/sh/soft-fp/sfp-machine.h (_FP_TININESS_AFTER_ROUNDING):
	Likewise.
	* sysdeps/sparc/sparc32/soft-fp/sfp-machine.h
	(_FP_TININESS_AFTER_ROUNDING): Likewise.
	* sysdeps/sparc/sparc64/soft-fp/sfp-machine.h
	(_FP_TININESS_AFTER_ROUNDING): Likewise.
	* sysdeps/tile/sfp-machine.h (_FP_TININESS_AFTER_ROUNDING):
	Likewise.
2014-02-12 18:27:12 +00:00
Marcus Shawcroft
75eff3fe90 Relocate AArch64 from ports to libc.
This patch moves the AArch64 port to the main sysdeps hierarchy.  The
move is essentially:

  git mv ports/sysdeps/aarch64 sysdeps/aarch64
  git mv ports/sysdeps/unix/sysv/linux/aarch64 sysdeps/unix/sysv/linux/aarch64

The README is updated and I've updated ChangeLog.aarch64 along the
lines of the ARM move.  The AArch64 build has been tested to confirm
that there were no changes in objdump -dr output or the shared
objects.
2014-02-11 11:36:00 +00:00