Commit Graph

188 Commits

Author SHA1 Message Date
Szabolcs Nagy
43cfdf8f48 Clean up converttoint handling and document the semantics
This patch currently only affects aarch64.

The roundtoint and converttoint internal functions are only called with small
values, so a 32-bit result is enough for converttoint, and since it is a
signed int conversion the return type is changed to int32_t.

The original idea was to help the compiler keep the result in uint64_t;
then it is clear that no sign extension is needed and there is no accidental
undefined or implementation-defined signed int arithmetic.

But it turns out gcc does a good job with inlining, so changing the type has
no overhead and the semantics of the conversion are less surprising this way.
Since we want to allow the asuint64 (x + 0x1.8p52) style conversion, the top
bits were never usable and the existing code ensures that only the bottom
32 bits of the conversion result are used.

On aarch64 the neon intrinsics (which round ties to even) are changed to
round and lround (which round ties away from zero); this does not affect the
results in a significant way, but is more portable (it relies on round and
lround being inlined, which works with -fno-math-errno).

The TOINT_SHIFT and TOINT_RINT macros were removed; only separate code
paths for TOINT_INTRINSICS and !TOINT_INTRINSICS are kept.
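
As a rough sketch (not a verbatim copy of the patch), the aarch64
TOINT_INTRINSICS definitions after this change look roughly like the
following, assuming -fno-math-errno so the calls inline to single
instructions:

#include <math.h>
#include <stdint.h>

/* Round x to the nearest integer, ties away from zero.  */
static inline double
roundtoint (double x)
{
  return round (x);
}

/* Convert x to a signed 32-bit integer; only called with values that
   fit, so the narrowing from long is harmless.  */
static inline int32_t
converttoint (double x)
{
  return (int32_t) lround (x);
}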

	* sysdeps/aarch64/fpu/math_private.h (roundtoint): Use round.
	(converttoint): Use lround.
	* sysdeps/ieee754/flt-32/math_config.h (roundtoint): Declare and
	document the semantics when TOINT_INTRINSICS is set.
	(converttoint): Likewise.
	(TOINT_RINT): Remove.
	(TOINT_SHIFT): Remove.
	* sysdeps/ieee754/flt-32/e_expf.c (__expf): Remove the TOINT_RINT code
	path.
2018-08-10 17:23:16 +01:00
Siddhesh Poyarekar
be64b1946b [aarch64] Fix value of MIN_PAGE_SIZE for testing
MIN_PAGE_SIZE is normally set to 4096, but for testing it can be set to
16 so that it exercises the page-crossing code for every misaligned
access.  The value was set to 15, which is obviously wrong since it must
be a power of two, so it is fixed as obvious and tested.
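
An illustrative sketch in C (not the actual strlen.S code) of why the value
must be a power of two: the page-cross check masks the address with
MIN_PAGE_SIZE - 1, which only forms a valid mask for powers of two:

#include <stdint.h>

#define MIN_PAGE_SIZE 16   /* test value; normally 4096 */

/* Does an 8-byte load starting at addr cross a MIN_PAGE_SIZE boundary?  */
static inline int
crosses_page (uintptr_t addr)
{
  return (addr & (MIN_PAGE_SIZE - 1)) > MIN_PAGE_SIZE - 8;
}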

	* sysdeps/aarch64/strlen.S [TEST_PAGE_CROSS](MIN_PAGE_SIZE):
	Fix value.
2018-08-08 22:47:17 +05:30
Siddhesh Poyarekar
dce452dc52 Rename the glibc.tune namespace to glibc.cpu
The glibc.tune namespace is vaguely named since it is a 'tunable', so
give it a more specific name that describes what it refers to.  Rename
the tunable namespace to 'cpu' to more accurately reflect what it
encompasses.  Also rename glibc.tune.cpu to glibc.cpu.name since
glibc.cpu.cpu is weird.
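
For example (a usage sketch, not part of the patch; the CPU name value is
illustrative), a setting that used to be spelled glibc.tune.cpu is now given
through the GLIBC_TUNABLES environment variable as:

GLIBC_TUNABLES=glibc.cpu.name=falkor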

	* NEWS: Mention the change.
	* elf/dl-tunables.list: Rename tune namespace to cpu.
	* sysdeps/powerpc/dl-tunables.list: Likewise.
	* sysdeps/x86/dl-tunables.list: Likewise.
	* sysdeps/aarch64/dl-tunables.list: Rename tune.cpu to
	cpu.name.
	* elf/dl-hwcaps.c (_dl_important_hwcaps): Adjust.
	* elf/dl-hwcaps.h (GET_HWCAP_MASK): Likewise.
	* manual/README.tunables: Likewise.
	* manual/tunables.texi: Likewise.
	* sysdeps/powerpc/cpu-features.c: Likewise.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.c
	(init_cpu_features): Likewise.
	* sysdeps/x86/cpu-features.c: Likewise.
	* sysdeps/x86/cpu-features.h: Likewise.
	* sysdeps/x86/cpu-tunables.c: Likewise.
	* sysdeps/x86_64/Makefile: Likewise.
	* sysdeps/x86/dl-cet.c: Likewise.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2018-08-02 23:49:19 +05:30
Siddhesh Poyarekar
0aec4c1d18 aarch64,falkor: Use vector registers for memcpy
Vector registers perform better than scalar register pairs for copying
data so prefer them instead.  This results in a time reduction of over
50% (i.e. 2x speed improvement) for some smaller sizes for memcpy-walk.
Larger sizes show improvements of around 1% to 2%.  memcpy-random shows
a very small improvement, in the range of 1-2%.

	* sysdeps/aarch64/multiarch/memcpy_falkor.S (__memcpy_falkor):
	Use vector registers.
2018-06-29 22:45:59 +05:30
Siddhesh Poyarekar
ce76a5cb8d aarch64,falkor: Use vector registers for memmove
Vector registers perform much better for moves compared to pairs of
registers on falkor, so use them instead.  This results in a time
reduction of up to 50% (i.e. 2x improvement) for a lot of the smaller
sizes, i.e. up to 1K in memmove-walk.  Improvements for larger sizes are
smaller, at about 1%-2%.

	* sysdeps/aarch64/multiarch/memmove_falkor.S
	(__memcpy_falkor): Use vector registers.
2018-06-29 22:45:07 +05:30
Hongbo Zhang
fc2ba8037d aarch64: add HXT Phecda core memory operation ifuncs
Phecda is HXT Semiconductor's CPU core; this patch adds memory operation
ifuncs for it, sharing the same optimized implementations as Qualcomm's
Falkor core.
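
A rough sketch of the resulting ifunc selection in
sysdeps/aarch64/multiarch/memcpy.c (simplified; the thunderx branch and the
surrounding glibc-internal boilerplate such as libc_ifunc, midr and
attribute_hidden are elided or assumed):

extern __typeof (memcpy) __memcpy_generic attribute_hidden;
extern __typeof (memcpy) __memcpy_falkor attribute_hidden;

libc_ifunc (__libc_memcpy,
	    (IS_FALKOR (midr) || IS_PHECDA (midr)
	     ? __memcpy_falkor
	     : __memcpy_generic));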

2018-06-07  Minfeng Kang <minfeng.kang@hxt-semitech.com>
	    Hongbo Zhang <hongbo.zhang@linaro.org>

	* sysdeps/aarch64/multiarch/memcpy.c (libc_ifunc): reuse
	__memcpy_falkor for phecda core.
	* sysdeps/aarch64/multiarch/memmove.c (libc_ifunc): reuse
	__memmove_falkor for phecda core.
	* sysdeps/aarch64/multiarch/memset.c (libc_ifunc): reuse
	__memset_falkor for phecda core.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.c: add MIDR entry
	for phecda core.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.h (IS_PHECDA): add
	macro to identify phecda core.
2018-06-12 21:29:11 +05:30
H.J. Lu
67c0579669 Mark _init and _fini as hidden [BZ #23145]
_init and _fini are special functions provided by glibc for the linker to
define DT_INIT and DT_FINI in executables and shared libraries.  They
should never be put in the dynamic symbol table.  This patch marks them as
hidden to remove them from the dynamic symbol table.

Tested with build-many-glibcs.py.

	[BZ #23145]
	* elf/Makefile (tests-special): Add $(objpfx)check-initfini.out.
	($(all-built-dso:=.dynsym)): New target.
	(common-generated): Add $(all-built-dso:$(common-objpfx)%=%.dynsym).
	($(objpfx)check-initfini.out): New target.
	(generated): Add check-initfini.out.
	* scripts/check-initfini.awk: New file.
	* sysdeps/aarch64/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/alpha/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/arm/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/hppa/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/i386/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/ia64/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/m68k/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/microblaze/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/mips/mips32/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/mips/mips64/n32/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/mips/mips64/n64/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/nios2/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/powerpc/powerpc32/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/powerpc/powerpc64/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/s390/s390-32/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/s390/s390-64/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/sh/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/sparc/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/x86_64/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
2018-06-08 10:28:52 -07:00
Joseph Myers
8f145c7712 Remove sysdeps/aarch64/soft-fp directory.
As per <https://sourceware.org/ml/libc-alpha/2014-10/msg00369.html>,
there should not be separate sysdeps/<arch>/soft-fp directories when
those are used by all configurations that use sysdeps/<arch>, and,
more generally, there should not be sysdeps/foo/Implies files pointing to a
subdirectory foo/bar.  This patch eliminates the
sysdeps/aarch64/soft-fp directory accordingly, merging its contents
into sysdeps/aarch64.

Tested with build-many-glibcs.py that installed stripped shared
libraries for aarch64 configurations are unchanged by this patch.

	* sysdeps/aarch64/Implies: Remove aarch64/soft-fp.
	* sysdeps/aarch64/Makefile [$(subdir) = math] (CPPFLAGS): Add
	-I../soft-fp.  Moved from ....
	* sysdeps/aarch64/soft-fp/Makefile: ... here.  Remove file.
	* sysdeps/aarch64/soft-fp/e_sqrtl.c: Move to ....
	* sysdeps/aarch64/e_sqrtl.c: ... here.
	* sysdeps/aarch64/soft-fp/sfp-machine.h: Move to ....
	* sysdeps/aarch64/sfp-machine.h: ... here.
2018-05-22 17:23:34 +00:00
Joseph Myers
b4d5b8b021 Do not include math-barriers.h in math_private.h.
This patch continues the math_private.h cleanup by stopping
math_private.h from including math-barriers.h and making the users of
the barrier macros include the latter header directly.  No attempt is
made to remove any math_private.h includes that are now unused, except
in strtod_l.c where that is done to avoid line number changes in
assertions, so that installed stripped shared libraries can be
compared before and after the patch.  (I think the floating-point
environment support in math_private.h should also move out - some
architectures already have fenv_private.h as an architecture-internal
header included from their math_private.h - and after moving that out
might be a better time to identify unused math_private.h includes.)

Tested for x86_64 and x86, and tested with build-many-glibcs.py that
installed stripped shared libraries are unchanged by the patch.

	* sysdeps/generic/math_private.h: Do not include
	<math-barriers.h>.
	* stdlib/strtod_l.c: Include <math-barriers.h> instead of
	<math_private.h>.
	* math/fromfp.h: Include <math-barriers.h>.
	* math/math-narrow.h: Likewise.
	* math/s_nextafter.c: Likewise.
	* math/s_nexttowardf.c: Likewise.
	* sysdeps/aarch64/fpu/s_llrint.c: Likewise.
	* sysdeps/aarch64/fpu/s_llrintf.c: Likewise.
	* sysdeps/aarch64/fpu/s_lrint.c: Likewise.
	* sysdeps/aarch64/fpu/s_lrintf.c: Likewise.
	* sysdeps/i386/fpu/s_nextafterl.c: Likewise.
	* sysdeps/i386/fpu/s_nexttoward.c: Likewise.
	* sysdeps/i386/fpu/s_nexttowardf.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_atan2.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_atanh.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_exp.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_exp2.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_j0.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_sqrt.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_expm1.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_fma.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_fmaf.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_log1p.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_nearbyint.c: Likewise.
	* sysdeps/ieee754/dbl-64/wordsize-64/s_nearbyint.c: Likewise.
	* sysdeps/ieee754/flt-32/e_atanhf.c: Likewise.
	* sysdeps/ieee754/flt-32/e_j0f.c: Likewise.
	* sysdeps/ieee754/flt-32/s_expm1f.c: Likewise.
	* sysdeps/ieee754/flt-32/s_log1pf.c: Likewise.
	* sysdeps/ieee754/flt-32/s_nearbyintf.c: Likewise.
	* sysdeps/ieee754/flt-32/s_nextafterf.c: Likewise.
	* sysdeps/ieee754/k_standardl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/e_asinl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/e_expl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/e_powl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_fmal.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_nearbyintl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_nextafterl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_nexttoward.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_nexttowardf.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_asinl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_fmal.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_nextafterl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_nexttoward.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_nexttowardf.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_rintl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/e_atanhl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/e_j0l.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_fma.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_fmal.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_nexttoward.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_nexttowardf.c: Likewise.
	* sysdeps/ieee754/ldbl-opt/s_nexttowardfd.c: Likewise.
	* sysdeps/m68k/m680x0/fpu/s_nextafterl.c: Likewise.
2018-05-11 15:11:38 +00:00
Siddhesh Poyarekar
db725a458e aarch64,falkor: Ignore prefetcher tagging for smaller copies
For smaller and medium sized copies, the effect of hardware
prefetching are not as dominant as instruction level parallelism.
Hence it makes more sense to load data into multiple registers than to
try and route them to the same prefetch unit.  This is also the case
for the loop exit where we are unable to latch on to the same prefetch
unit anyway so it makes more sense to have data loaded in parallel.

The performance results are a bit mixed with memcpy-random, with
numbers jumping between -1% and +3%, i.e. the numbers don't seem
repeatable.  memcpy-walk sees a 70% improvement (i.e. > 2x) for 128
bytes and that improvement reduces down as the impact of the tail copy
decreases in comparison to the loop.

	* sysdeps/aarch64/multiarch/memcpy_falkor.S (__memcpy_falkor):
	Use multiple registers to copy data in loop tail.
2018-05-11 00:11:52 +05:30
Siddhesh Poyarekar
70c97f8493 aarch64,falkor: Ignore prefetcher hints for memmove tail
The tail of the copy loops are unable to train the falkor hardware
prefetcher because they load from a different base compared to the hot
loop.  In this case avoid serializing the instructions by loading them
into different registers.  Also peel the last iteration of the loop
into the tail (and have them use different registers) since it gives
better performance for medium sizes.

This results in performance improvements of between 3% and 20% over
the current falkor implementation for sizes between 128 bytes and 1K
on the memmove-walk benchmark, thus mostly covering the regressions
seen against the generic memmove.

	* sysdeps/aarch64/multiarch/memmove_falkor.S
	(__memmove_falkor): Use multiple registers to move data in
	loop tail.
2018-05-11 00:08:02 +05:30
Joseph Myers
9ed2e15ff4 Move math_opt_barrier, math_force_eval to separate math-barriers.h.
This patch continues cleaning up math_private.h by moving the
math_opt_barrier and math_force_eval macros to a separate header
math-barriers.h.

At present, those macros are inside a "#ifndef math_opt_barrier" in
math_private.h to allow architectures to override them.  With a
separate math-barriers.h header, no such #ifndef or #include_next is
needed; architectures just have their own alternative version of
math-barriers.h when providing their own optimized versions that avoid
going through memory unnecessarily.  The generic math-barriers.h has a
comment added to document these two macros.

In this patch, math_private.h is made to #include <math-barriers.h>,
so files using these macros do not need updating yet.  That is because
of uses of math_force_eval in math_check_force_underflow and
math_check_force_underflow_nonneg, which are still defined in
math_private.h.  Once those are moved out to a separate header, that
separate header can be made to include <math-barriers.h>, as can the
other files directly using these barrier macros, and then the include
of <math-barriers.h> from math_private.h can be removed.

Tested for x86_64 and x86.  Also tested with build-many-glibcs.py that
installed stripped shared libraries are unchanged by this patch.
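
For reference, the generic definitions that move into math-barriers.h are
essentially the following (a sketch; the real header carries fuller
comments):

/* math_opt_barrier evaluates and returns its argument while hiding it
   from the optimizer; math_force_eval forces its argument to be
   evaluated for side effects (e.g. raising exceptions).  */
#define math_opt_barrier(x) \
  ({ __typeof (x) __x = (x); __asm ("" : "+m" (__x)); __x; })
#define math_force_eval(x) \
  ({ __typeof (x) __x = (x); __asm __volatile ("" : : "m" (__x)); })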

	* sysdeps/generic/math-barriers.h: New file.
	* sysdeps/generic/math_private.h [!math_opt_barrier]
	(math_opt_barrier): Move to math-barriers.h.
	[!math_opt_barrier] (math_force_eval): Likewise.
	* sysdeps/aarch64/fpu/math-barriers.h: New file.
	* sysdeps/aarch64/fpu/math_private.h (math_opt_barrier): Move to
	math-barriers.h.
	(math_force_eval): Likewise.
	* sysdeps/alpha/fpu/math-barriers.h: New file.
	* sysdeps/alpha/fpu/math_private.h (math_opt_barrier): Move to
	math-barriers.h.
	(math_force_eval): Likewise.
	* sysdeps/x86/fpu/math-barriers.h: New file.
	* sysdeps/i386/fpu/fenv_private.h (math_opt_barrier): Move to
	math-barriers.h.
	(math_force_eval): Likewise.
	* sysdeps/m68k/m680x0/fpu/math_private.h: Move to....
	* sysdeps/m68k/m680x0/fpu/math-barriers.h: ... here.  Adjust
	multiple-include guard for rename.
	* sysdeps/powerpc/fpu/math-barriers.h: New file.
	* sysdeps/powerpc/fpu/math_private.h (math_opt_barrier): Move to
	math-barriers.h.
	(math_force_eval): Likewise.
2018-05-09 19:45:47 +00:00
Maciej W. Rozycki
10a446ddcc elf: Unify symbol address run-time calculation [BZ #19818]
Wrap symbol address run-time calculation into a macro and use it
throughout, replacing inline calculations.

There are a couple of variants, most of them different in a functionally
insignificant way.  Most calculations are right following RESOLVE_MAP,
at which point either the map or the symbol returned can be checked for
validity as the macro sets either both or neither.  In some places both
the symbol and the map have to be checked, however.

My initial implementation therefore always checked both, however that
resulted in code larger by as much as 0.3%, as many places know from
elsewhere that no check is needed.  I have decided the size growth was
unacceptable.

Having looked closer I realized that it's the map that is the culprit.
Therefore I have modified LOOKUP_VALUE_ADDRESS to accept an additional
boolean argument telling it to access the map without checking it for
validity.  This in turn has brought quite nice results, with new code
actually being smaller for i686, and MIPS o32, n32 and little-endian n64
targets, unchanged in size for x86-64 and, unusually, marginally larger
for big-endian MIPS n64, as follows:

i686:
   text    data     bss     dec     hex filename
 152255    4052     192  156499   26353 ld-2.27.9000-base.so
 152159    4052     192  156403   262f3 ld-2.27.9000-elf-symbol-value.so
MIPS/o32/el:
   text    data     bss     dec     hex filename
 142906    4396     260  147562   2406a ld-2.27.9000-base.so
 142890    4396     260  147546   2405a ld-2.27.9000-elf-symbol-value.so
MIPS/n32/el:
   text    data     bss     dec     hex filename
 142267    4404     260  146931   23df3 ld-2.27.9000-base.so
 142171    4404     260  146835   23d93 ld-2.27.9000-elf-symbol-value.so
MIPS/n64/el:
   text    data     bss     dec     hex filename
 149835    7376     408  157619   267b3 ld-2.27.9000-base.so
 149787    7376     408  157571   26783 ld-2.27.9000-elf-symbol-value.so
MIPS/o32/eb:
   text    data     bss     dec     hex filename
 142870    4396     260  147526   24046 ld-2.27.9000-base.so
 142854    4396     260  147510   24036 ld-2.27.9000-elf-symbol-value.so
MIPS/n32/eb:
   text    data     bss     dec     hex filename
 142019    4404     260  146683   23cfb ld-2.27.9000-base.so
 141923    4404     260  146587   23c9b ld-2.27.9000-elf-symbol-value.so
MIPS/n64/eb:
   text    data     bss     dec     hex filename
 149763    7376     408  157547   2676b ld-2.27.9000-base.so
 149779    7376     408  157563   2677b ld-2.27.9000-elf-symbol-value.so
x86-64:
   text    data     bss     dec     hex filename
 148462    6452     400  155314   25eb2 ld-2.27.9000-base.so
 148462    6452     400  155314   25eb2 ld-2.27.9000-elf-symbol-value.so
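
The resulting macros in sysdeps/generic/ldsodefs.h look roughly like this
(a sketch based on the description above; the `set' argument lets callers
skip the NULL-map check when the map is known to be valid):

#define LOOKUP_VALUE_ADDRESS(map, set) \
  ((set) || (map) ? (map)->l_addr : 0)

#define SYMBOL_ADDRESS(map, ref, map_set) \
  ((ref) == NULL ? 0 \
   : ((ref)->st_shndx == SHN_ABS ? 0 \
      : LOOKUP_VALUE_ADDRESS (map, map_set)) + (ref)->st_value)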

	[BZ #19818]
	* sysdeps/generic/ldsodefs.h (LOOKUP_VALUE_ADDRESS): Add `set'
	parameter.
	(SYMBOL_ADDRESS): New macro.
	[!ELF_FUNCTION_PTR_IS_SPECIAL] (DL_SYMBOL_ADDRESS): Use
	SYMBOL_ADDRESS for symbol address calculation.
	* elf/dl-runtime.c (_dl_fixup): Likewise.
	(_dl_profile_fixup): Likewise.
	* elf/dl-symaddr.c (_dl_symbol_address): Likewise.
	* elf/rtld.c (dl_main): Likewise.
	* sysdeps/aarch64/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/alpha/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/arm/dl-machine.h (elf_machine_rel): Likewise.
	(elf_machine_rela): Likewise.
	* sysdeps/hppa/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/hppa/dl-symaddr.c (_dl_symbol_address): Likewise.
	* sysdeps/i386/dl-machine.h (elf_machine_rel): Likewise.
	(elf_machine_rela): Likewise.
	* sysdeps/ia64/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/m68k/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/microblaze/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/mips/dl-machine.h (ELF_MACHINE_BEFORE_RTLD_RELOC):
	Likewise.
	(elf_machine_reloc): Likewise.
	(elf_machine_got_rel): Likewise.
	* sysdeps/mips/dl-trampoline.c (__dl_runtime_resolve): Likewise.
	* sysdeps/nios2/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/powerpc/powerpc32/dl-machine.h (elf_machine_rela):
	Likewise.
	* sysdeps/powerpc/powerpc64/dl-machine.h (elf_machine_rela):
	Likewise.
	* sysdeps/riscv/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/s390/s390-32/dl-machine.h (elf_machine_rela):
	Likewise.
	* sysdeps/s390/s390-64/dl-machine.h (elf_machine_rela):
	Likewise.
	* sysdeps/sh/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/sparc/sparc32/dl-machine.h (elf_machine_rela):
	Likewise.
	* sysdeps/sparc/sparc64/dl-machine.h (elf_machine_rela):
	Likewise.
	* sysdeps/tile/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/x86_64/dl-machine.h (elf_machine_rela): Likewise.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2018-04-04 23:09:37 +01:00
Wilco Dijkstra
19a8b9a300 [PATCH 1/7] sin/cos slow paths: avoid slow paths for small inputs
This series of patches removes the slow paths from sin, cos and sincos.
Besides greatly simplifying the implementation, the new version is also much
faster for inputs up to PI (41% faster) and for large inputs needing range
reduction (27% faster).

ULP is ~0.55 with no errors found after testing 1.6 billion inputs across most
of the range with mpsin and mpcos.  The number of incorrectly rounded results
(i.e. ULP > 0.5) is at most ~2750 per million inputs between 0.125 and 0.5,
and the average is ~850 per million between 0 and PI.

Tested on AArch64 and x86_64 with no regressions.

The first patch removes the slow paths for the cases where the input is small
and doesn't require range reduction.  Update ULP tables for sin, cos and sincos
on AArch64 and x86_64.

	* sysdeps/aarch64/libm-test-ulps: Update ULP for sin, cos, sincos.
	* sysdeps/ieee754/dbl-64/s_sin.c (__sin): Remove slow paths for small
	inputs.
	(__cos): Likewise.
	* sysdeps/x86_64/fpu/libm-test-ulps: Update ULP for sin, cos, sincos.
2018-04-03 16:52:16 +01:00
Joseph Myers
ffec7b2740 Use x86_64 backtrace as generic version.
No glibc configuration uses the present debug/backtrace.c, whereas
several #include the x86_64 version.  The x86_64 version is
effectively a generic one (using _Unwind_Backtrace from libgcc, which
works much more reliably than the built-in functions used by
debug/backtrace.c).  This patch moves it to debug/backtrace.c and
removes all the #includes of the x86_64 version from other
architectures which are no longer required.

I do not know whether all the other architecture-specific backtrace
implementations that are based on _Unwind_Backtrace are required, or
whether, where their differences from the generic version do something
useful, suitable hooks could be added to the generic version to reduce
the duplication involved.

Tested with build-many-glibcs.py that installed stripped shared
libraries are unchanged by this patch.
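
A minimal, self-contained sketch of the _Unwind_Backtrace-based approach
(the function name backtrace_sketch is illustrative; the real
debug/backtrace.c additionally handles loading libgcc dynamically and
skips its own frame):

#include <unwind.h>

struct trace_arg
{
  void **array;
  int cnt;
  int size;
};

static _Unwind_Reason_Code
backtrace_helper (struct _Unwind_Context *ctx, void *a)
{
  struct trace_arg *arg = a;

  /* Record the instruction pointer of this frame and stop when the
     caller-provided buffer is full.  */
  arg->array[arg->cnt++] = (void *) _Unwind_GetIP (ctx);
  return arg->cnt == arg->size ? _URC_END_OF_STACK : _URC_NO_REASON;
}

int
backtrace_sketch (void **array, int size)
{
  struct trace_arg arg = { array, 0, size };

  if (size >= 1)
    _Unwind_Backtrace (backtrace_helper, &arg);
  return arg.cnt;
}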

	* sysdeps/x86_64/backtrace.c: Move to ....
	* debug/backtrace.c: ... here.
	* sysdeps/aarch64/backtrace.c: Remove file.
	* sysdeps/alpha/backtrace.c: Likewise.
	* sysdeps/hppa/backtrace.c: Likewise.
	* sysdeps/ia64/backtrace.c: Likewise.
	* sysdeps/mips/backtrace.c: Likewise.
	* sysdeps/nios2/backtrace.c: Likewise.
	* sysdeps/riscv/backtrace.c: Likewise.
	* sysdeps/sh/backtrace.c: Likewise.
	* sysdeps/tile/backtrace.c: Likewise.
2018-03-21 17:25:30 +00:00
Wilco Dijkstra
700593fdd7 Remove all target specific __ieee754_sqrt(f/l) inlines
Remove the now unused target-specific __ieee754_sqrt(f/l) inlines.
Also remove inlines of sqrt which are for really old GCC versions.
Removing these is desirable, under the general principle of leaving
such inlining to the compiler rather than trying to do it in installed
headers, especially when only very old compilers are affected.

Note that removing inlines for __ieee754_sqrt disables inlining in the
sqrt wrapper functions.  Given the sqrt function will typically only be
called for negative arguments, it doesn't matter whether the inlining
happens or not.

	* sysdeps/aarch64/fpu/math_private.h (__ieee754_sqrt): Remove.
	(__ieee754_sqrtf): Remove.
	* sysdeps/alpha/fpu/math_private.h (__ieee754_sqrt): Remove.
	(__ieee754_sqrtf): Remove.
	* sysdeps/generic/math-type-macros.h (M_SQRT): Use sqrt.
	* sysdeps/m68k/m680x0/fpu/mathimpl.h (__ieee754_sqrt): Remove.
	* sysdeps/powerpc/fpu/math_private.h (__ieee754_sqrt): Remove.
	(__ieee754_sqrtf): Remove.
	* sysdeps/s390/fpu/bits/mathinline.h: Remove file.
	* sysdeps/sparc/fpu/bits/mathinline.h (sqrt): Remove.
	(sqrtf): Remove.
	(sqrtl): Remove.
	(__ieee754_sqrt): Remove.
	(__ieee754_sqrtf): Remove.
	(__ieee754_sqrtl): Remove.
	* sysdeps/m68k/m680x0/fpu/mathimpl.h (__ieee754_sqrt): Remove.
	* sysdeps/x86/fpu/math_private.h (__ieee754_sqrt): Remove.
	* sysdeps/x86_64/fpu/math_private.h (__ieee754_sqrt): Remove.
	(__ieee754_sqrtf): Remove.
	(__ieee754_sqrtl): Remove.
2018-03-15 19:21:36 +00:00
Siddhesh Poyarekar
b47c3e7637 aarch64/strncmp: Use lsr instead of mov+lsr
A single lsr can do what the mov and lsr did.
2018-03-15 08:06:21 +05:30
Siddhesh Poyarekar
d46f84de74 aarch64/strncmp: Unbreak builds with old binutils
Binutils 2.26.* and older do not support moves with shifted registers,
so use a separate shift instruction instead.
2018-03-14 18:51:05 +05:30
Siddhesh Poyarekar
7108f1f944 aarch64: Improve strncmp for mutually misaligned inputs
The mutually misaligned inputs on aarch64 are compared with a simple
byte copy, which is not very efficient.  Enhance the comparison
similar to strcmp by loading a double-word at a time.  The peak
performance improvement (i.e. 4k maxlen comparisons) due to this on
the strncmp microbenchmark is as follows:

falkor: 3.5x (up to 72% time reduction)
cortex-a73: 3.5x (up to 71% time reduction)
cortex-a53: 3.5x (up to 71% time reduction)

All mutually misaligned inputs from 16 bytes maxlen onwards show
upwards of 15% improvement and there is no measurable effect on the
performance of aligned/mutually aligned inputs.

	* sysdeps/aarch64/strncmp.S (count): New macro.
	(strncmp): Store misaligned length in SRC1 in COUNT.
	(mutual_align): Adjust.
	(misaligned8): Load dword at a time when it is safe.
2018-03-13 23:57:04 +05:30
Samuel Thibault
a5df0318ef hurd: add gscope support
* elf/dl-support.c [!THREAD_GSCOPE_IN_TCB] (_dl_thread_gscope_count):
Define variable.
* sysdeps/generic/ldsodefs.h [!THREAD_GSCOPE_IN_TCB] (struct
rtld_global): Add _dl_thread_gscope_count member.
* sysdeps/mach/hurd/tls.h: Include <atomic.h>.
[!defined __ASSEMBLER__] (THREAD_GSCOPE_GLOBAL, THREAD_GSCOPE_SET_FLAG,
THREAD_GSCOPE_RESET_FLAG, THREAD_GSCOPE_WAIT): Define macros.
* sysdeps/generic/tls.h: Document THREAD_GSCOPE_IN_TCB.
* sysdeps/aarch64/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/alpha/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/arm/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/hppa/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/i386/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/ia64/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/m68k/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/microblaze/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/mips/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/nios2/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/powerpc/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/riscv/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/s390/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/sh/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/sparc/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/tile/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/x86_64/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
2018-03-11 13:06:33 +01:00
Siddhesh Poyarekar
4e54d91863 aarch64: Fix branch target to loop16
I goofed up when changing the loop8 name to loop16 and missed one branch
instance.  Fixed and actually build-tested this time.

	* sysdeps/aarch64/memcmp.S (more16): Fix branch target loop16.
2018-03-06 23:01:02 +05:30
Siddhesh Poyarekar
30a81dae5b aarch64: Optimized memcmp for medium to large sizes
This improved memcmp provides a fast path for compares up to 16 bytes
and then compares 16 bytes at a time, thus optimizing loads from both
sources.  The glibc memcmp microbenchmark retains performance (with an
error of ~1ns) for smaller compare sizes and reduces execution time by up
to 31% for compares up to 4K on the APM Mustang.  On Qualcomm Falkor the
reduction is almost 48%, i.e. almost a 2x improvement
for sizes of 2K and above.

	* sysdeps/aarch64/memcmp.S: Widen comparison to 16 bytes at a
	time.
2018-03-06 19:22:40 +05:30
Siddhesh Poyarekar
6ca24c4348 aarch64/strcmp: fix misaligned loop jump target
I accidentally set the loop jump back label as misaligned8 instead of
do_misaligned.  The typo is harmless but it's always nice to not have
to unnecessarily execute those two instructions.

	* sysdeps/aarch64/strcmp.S (do_misaligned): Jump back to
	do_misaligned, not misaligned8.
2018-02-22 23:48:14 +05:30
Steve Ellcey
e9537dddc7 IFUNC for Cavium ThunderX2
* sysdeps/aarch64/multiarch/Makefile (sysdep_routines):
	Add memcpy_thunderx2.
	* sysdeps/aarch64/multiarch/ifunc-impl-list.c (MAX_IFUNC):
	Increment to 4.
	(__libc_ifunc_impl_list): Add __memcpy_thunderx2.
	* sysdeps/aarch64/multiarch/memcpy.c (libc_ifunc): Add IS_THUNDERX2
	and IS_THUNDERX2PA checks.
	* sysdeps/aarch64/multiarch/memcpy_thunderx.S (USE_THUNDERX2):
	Use macro to set name appropriately.
	(memcpy): Use USE_THUNDERX2 macro to modify prefetches.
	* sysdeps/aarch64/multiarch/memcpy_thunderx2.S: New file.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.h (IS_THUNDERX2PA):
	New macro.
	(IS_THUNDERX2): New macro.
2018-02-22 08:38:47 -08:00
Wilco Dijkstra
0c8a67a573 [AArch64] Fix include.
Fix include to use <>.

	* sysdeps/aarch64/fpu/fpu_control.h: Use <> in include.
2018-02-15 12:41:06 +00:00
Wilco Dijkstra
c3d466cba1 Remove slow paths from pow
Remove the slow paths from pow.  Like several other double precision math
functions, pow is exactly rounded.  This is not required of math functions
and causes major overheads as it requires multiple fallbacks using higher
precision arithmetic if a result is close to 0.5ULP.  Ridiculous slowdowns
of up to 100000x have been reported when the highest precision path triggers.

All GLIBC math tests pass on AArch64 and x64 (with ULP of pow set to 1).
The worst case error is ~0.506ULP.  A simple test over a few hundred million
values shows pow is 10% faster on average.  This fixes BZ #13932.

	[BZ #13932]
	* sysdeps/ieee754/dbl-64/uexp.h (err_1): Remove.
	* benchtests/pow-inputs: Update comment for slow path cases.
	* manual/probes.texi (slowpow_p10): Delete removed probe.
	(slowpow_p32): Likewise.
	* math/Makefile: Remove halfulp.c and slowpow.c.
	* sysdeps/aarch64/libm-test-ulps: Set ULP of pow to 1.
	* sysdeps/generic/math_private.h (__exp1): Remove error argument.
	(__halfulp): Remove.
	(__slowpow): Remove.
	* sysdeps/i386/fpu/halfulp.c: Delete file.
	* sysdeps/i386/fpu/slowpow.c: Likewise.
	* sysdeps/ia64/fpu/halfulp.c: Likewise.
	* sysdeps/ia64/fpu/slowpow.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_exp.c (__exp1): Remove error argument,
	improve comments and add error analysis.
	* sysdeps/ieee754/dbl-64/e_pow.c (__ieee754_pow): Add error analysis.
	(power1): Remove function.
	(log1): Remove error argument, add error analysis.
	(my_log2): Remove function.
	* sysdeps/ieee754/dbl-64/halfulp.c: Delete file.
	* sysdeps/ieee754/dbl-64/slowpow.c: Likewise.
	* sysdeps/m68k/m680x0/fpu/halfulp.c: Likewise.
	* sysdeps/m68k/m680x0/fpu/slowpow.c: Likewise.
	* sysdeps/powerpc/power4/fpu/Makefile: Remove CPPFLAGS-slowpow.c.
	* sysdeps/x86_64/fpu/libm-test-ulps: Set ULP of pow to 1.
	* sysdeps/x86_64/fpu/multiarch/Makefile: Remove slowpow-fma.c,
	slowpow-fma4.c, halfulp-fma.c, halfulp-fma4.c.
	* sysdeps/x86_64/fpu/multiarch/e_pow-fma.c (__slowpow): Remove define.
	* sysdeps/x86_64/fpu/multiarch/e_pow-fma4.c (__slowpow): Likewise.
	* sysdeps/x86_64/fpu/multiarch/halfulp-fma.c: Delete file.
	* sysdeps/x86_64/fpu/multiarch/halfulp-fma4.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/slowpow-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/slowpow-fma4.c: Likewise.
2018-02-12 10:47:09 +00:00
Wilco Dijkstra
4f5b921eb9 [AArch64] Fix testsuite error due to fpsr/fscr change
Add features.h include for __GNUC_PREREQ.

	* sysdeps/aarch64/fpu/fpu_control.h: Add features.h to fix build error.
2018-02-10 15:02:51 +00:00
Wilco Dijkstra
3f8d9d58c5 [AArch64] Use builtins for fpcr/fpsr
Since GCC has support for accessing FPSR/FPCR, use them when possible
so that the asm instructions can be removed eventually.  Although GCC 5
supports the builtins, it has an optimization bug, so use them from GCC 6
onwards.
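
A sketch of the resulting conditional definitions (the builtin names
__builtin_aarch64_get_fpcr/__builtin_aarch64_set_fpcr are GCC's AArch64
builtins; the exact glibc macro bodies may differ slightly):

#if __GNUC_PREREQ (6, 0)
# define _FPU_GETCW(fpcr) (fpcr = __builtin_aarch64_get_fpcr ())
# define _FPU_SETCW(fpcr) __builtin_aarch64_set_fpcr (fpcr)
#else
# define _FPU_GETCW(fpcr) \
  __asm__ __volatile__ ("mrs %0, fpcr" : "=r" (fpcr))
# define _FPU_SETCW(fpcr) \
  __asm__ __volatile__ ("msr fpcr, %0" : : "r" (fpcr))
#endif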

	* sysdeps/aarch64/fpu/fpu_control.h: Use builtins for accessing
	FPCR/FPSR.
2018-02-09 16:59:23 +00:00
Siddhesh Poyarekar
84c94d2fd9 aarch64: Use the L() macro for labels in memcmp
The L() macro makes the assembly a bit more readable.
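
For context, L() is a small preprocessor helper from the AArch64 sysdep.h
that turns a name into an assembler-local label (a sketch of the assumed
definition):

/* Local labels start with .L and are not emitted into the symbol table.  */
#define L(name) .L##name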

	* sysdeps/aarch64/memcmp.S: Use L() macro for labels.
2018-02-02 10:15:21 +05:30
Szabolcs Nagy
3d1d79283e aarch64: fix static pie enabled libc when main is in a shared library
In the static pie enabled libc, crt1.o uses the same position independent
code as rcrt1.o and crt1.o is used instead of Scrt1.o when -no-pie
executables are linked.  When main is not defined in the executable but
in a shared library, crt1.o is currently broken: it assumes main is local.
(glibc has a test for this but I missed it in my previous testing.)

To make both rcrt1.o and crt1.o happy with the same code, a wrapper is
introduced around main: with this crt1.o works with extern main symbol
while rcrt1.o does not depend on GOT relocations. (The change only
affects static pie enabled libc. Further simplification of start.S is
possible in the future by using the same approach for Scrt1.o too.)

	* sysdeps/aarch64/start.S (_start): Use __wrap_main.
	(__wrap_main): New local symbol.
2018-01-12 18:10:03 +00:00
Joseph Myers
688903eb3e Update copyright dates with scripts/update-copyrights.
* All files with FSF copyright notices: Update copyright dates
	using scripts/update-copyrights.
	* locale/programs/charmap-kw.h: Regenerated.
	* locale/programs/locfile-kw.h: Likewise.
2018-01-01 00:32:25 +00:00
Szabolcs Nagy
8bfb461e20 aarch64: update libm-test-ulps
* sysdeps/aarch64/libm-test-ulps: Update.
2017-12-20 12:07:10 +00:00
Adhemerval Zanella
4e00196912 aarch64: fix memset with --disable-multi-arch
* sysdeps/aarch64/memset.S (MEMSET): Define.
2017-12-20 12:05:32 +00:00
Szabolcs Nagy
14d886edbd aarch64: fix start code for static pie
There are three flavors of the crt startup code:

1) crt1.o used for non-pie,
2) Scrt1.o used for dynamic linked pie (dynamic linker relocates),
3) rcrt1.o used for static linked pie (self relocation is needed)

In the --enable-static-pie case crt1.o is built with -DPIC and in case
of static linking it interposes _dl_relocate_static_pie in libc to
avoid self relocation.

Scrt1.o is built with -DPIC -DSHARED and it relies on GOT entries that
the static linker cannot relax and thus need relocation before the
start code is executed, so rcrt1.o needs separate implementation.

This implementation does not work for .text > 4G position independent
executables, which is fine since the toolchain does not support
-mcmodel=large with -fPIE.

Tests pass with ld/22269 and ld/22263 binutils bugs fixed.

	* sysdeps/aarch64/start.S (_start): Handle PIC && !SHARED case.
2017-12-18 10:07:07 +00:00
Siddhesh Poyarekar
2bce01ebba aarch64: Improve strcmp unaligned performance
Replace the simple byte-wise compare in the misaligned case with a
dword compare with page boundary checks in place.  For simplicity I've
chosen a 4K page boundary so that we don't have to query the actual
page size on the system.

This results in up to 3x improvement in performance in the unaligned
case on falkor and about 2.5x improvement on mustang as measured using
bench-strcmp.

	* sysdeps/aarch64/strcmp.S (misaligned8): Compare dword at a
	time whenever possible.
2017-12-13 18:50:27 +05:30
Siddhesh Poyarekar
4c1d801a59 aarch64: Avoid hidden symbols for memcpy/memmove into static binaries
The __GI_* symbol aliases for __memcpy_generic are unnecessary in static
binaries since they're never used there.  Add them only for libc.so,
where they avoid the PLT.  Maybe
some time in future we need to evaluate the relative cost of PLT vs
gains from multiarch memcpy implementations and take a call on whether
to drop this completely.

	* sysdeps/aarch64/multiarch/memcpy_generic.S (__GI_memcpy):
	Define only for libc.so.
2017-12-04 21:17:17 +05:30
Joseph Myers
15ff490014 Use libm_alias_float for aarch64.
Continuing the preparation for additional _FloatN / _FloatNx function
aliases, this patch makes aarch64 libm function implementations use
libm_alias_float to define function aliases.

Tested with build-many-glibcs.py for aarch64-linux-gnu that installed
stripped shared libraries are unchanged by the patch.
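
The per-file pattern is roughly the following (a sketch using s_ceilf.c as
the example; libm_alias_float expands to the usual strong/weak aliases plus
any _FloatN names):

#include <math.h>
#include <libm-alias-float.h>

float
__ceilf (float x)
{
  return __builtin_ceilf (x);
}
libm_alias_float (__ceil, ceil)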

	* sysdeps/aarch64/fpu/s_ceilf.c: Include <libm-alias-float.h>.
	(ceilf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_floorf.c: Include <libm-alias-float.h>.
	(floorf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_fmaf.c: Include <libm-alias-float.h>.
	(fmaf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_fmaxf.c: Include <libm-alias-float.h>.
	(fmaxf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_fminf.c: Include <libm-alias-float.h>.
	(fminf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_llrintf.c: Include <libm-alias-float.h>.
	(llrintf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_llroundf.c: Include <libm-alias-float.h>.
	(llroundf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_lrintf.c: Include <libm-alias-float.h>.
	(lrintf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_lroundf.c: Include <libm-alias-float.h>.
	(lroundf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_nearbyintf.c: Include
	<libm-alias-float.h>.
	(nearbyintf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_rintf.c: Include <libm-alias-float.h>.
	(rintf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_roundf.c: Include <libm-alias-float.h>.
	(roundf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_truncf.c: Include <libm-alias-float.h>.
	(truncf): Define using libm_alias_float.
2017-11-28 00:55:42 +00:00
Joseph Myers
f07d2ec8c0 Use libm_alias_double for aarch64.
Continuing the preparation for additional _FloatN / _FloatNx function
aliases, this patch makes aarch64 libm function implementations use
libm_alias_double to define function aliases.

Tested with build-many-glibcs.py for aarch64-linux-gnu that installed
stripped shared libraries are unchanged by the patch.

	* sysdeps/aarch64/fpu/s_ceil.c: Include <libm-alias-double.h>.
	(ceil): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_floor.c: Include <libm-alias-double.h>.
	(floor): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_fma.c: Include <libm-alias-double.h>.
	(fma): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_fmax.c: Include <libm-alias-double.h>.
	(fmax): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_fmin.c: Include <libm-alias-double.h>.
	(fmin): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_llrint.c: Include <libm-alias-double.h>.
	(llrint): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_llround.c: Include <libm-alias-double.h>.
	(llround): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_lrint.c: Include <libm-alias-double.h>.
	(lrint): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_lround.c: Include <libm-alias-double.h>.
	(lround): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_nearbyint.c: Include <libm-alias-double.h>.
	(nearbyint): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_rint.c: Include <libm-alias-double.h>.
	(rint): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_round.c: Include <libm-alias-double.h>.
	(round): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_trunc.c: Include <libm-alias-double.h>.
	(trunc): Define using libm_alias_double.
2017-11-27 23:54:32 +00:00
Siddhesh Poyarekar
5a67c4fa01 aarch64: Optimized memset for falkor
The generic memset reads dczid_el0 on every memset.  This has a
significant impact on falkor for a range of sizes because reading
dczid_el0 is slow.

The DZP bit in the dczid_el0 register does not change dynamically, so
it is safe to read once during program startup.  With this patch
dczid_el0 is read once during startup and zva_size is cached.  This is
used to invoke the falkor-specific memset; the generic memset routine
remains unchanged.
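
A sketch of the startup-time read (simplified from the cpu-features.c
change; DCZID_DZP_MASK selects the DZP bit and DCZID_BS_MASK the low-order
block-size bits):

/* Read dczid_el0 once and cache the DC ZVA block size, or leave
   zva_size as 0 if ZVA is prohibited (DZP set).  */
unsigned int dczid;
asm volatile ("mrs %0, dczid_el0" : "=r" (dczid));
if ((dczid & DCZID_DZP_MASK) == 0)
  cpu_features->zva_size = 4 << (dczid & DCZID_BS_MASK);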

The gains due to this are significant for falkor, with run time
reductions as high as 48%.  Here's a sample from the falkor tests:

Function: memset
Variant: walk
                      simple_memset	__memset_falkor	__memset_generic
=====================================================================
length=256, char=0:   139.96 (-698.28%)	   9.07 ( 48.26%)  17.53
length=257, char=0:   140.50 (-699.03%)	   9.53 ( 45.80%)  17.58
length=258, char=0:   140.96 (-703.95%)	   9.58 ( 45.36%)  17.53
length=259, char=0:   141.56 (-705.16%)	   9.53 ( 45.79%)  17.58
length=260, char=0:   142.15 (-710.76%)	   9.57 ( 45.39%)  17.53
length=261, char=0:   142.50 (-710.39%)	   9.53 ( 45.78%)  17.58
length=262, char=0:   142.97 (-715.09%)	   9.57 ( 45.42%)  17.54
length=263, char=0:   143.51 (-716.18%)	   9.53 ( 45.80%)  17.58
length=264, char=0:   143.93 (-720.55%)	   9.58 ( 45.39%)  17.54
length=265, char=0:   144.56 (-722.07%)	   9.53 ( 45.80%)  17.59
length=266, char=0:   144.98 (-726.42%)	   9.58 ( 45.42%)  17.54
length=267, char=0:   145.53 (-727.53%)	   9.53 ( 45.80%)  17.59
length=268, char=0:   146.25 (-731.81%)	   9.53 ( 45.79%)  17.58
length=269, char=0:   146.52 (-735.39%)	   9.53 ( 45.66%)  17.54
length=270, char=0:   146.97 (-735.81%)	   9.53 ( 45.80%)  17.58
length=271, char=0:   147.54 (-741.08%)	   9.58 ( 45.38%)  17.54
length=512, char=0:   268.26 (-1307.85%)  12.06 ( 36.71%)  19.05
length=513, char=0:   268.73 (-1273.89%)  13.56 ( 30.68%)  19.56
length=514, char=0:   269.31 (-1276.89%)  13.56 ( 30.68%)  19.56
length=515, char=0:   269.73 (-1279.05%)  13.56 ( 30.68%)  19.56
length=516, char=0:   270.34 (-1282.24%)  13.56 ( 30.67%)  19.56
length=517, char=0:   270.83 (-1284.71%)  13.56 ( 30.66%)  19.56
length=518, char=0:   271.20 (-1286.54%)  13.56 ( 30.67%)  19.56
length=519, char=0:   271.67 (-1288.67%)  13.65 ( 30.24%)  19.56
length=520, char=0:   272.14 (-1291.04%)  13.65 ( 30.22%)  19.56
length=521, char=0:   272.66 (-1293.69%)  13.65 ( 30.23%)  19.56
length=522, char=0:   273.14 (-1296.13%)  13.65 ( 30.20%)  19.56
length=523, char=0:   273.64 (-1298.75%)  13.65 ( 30.23%)  19.56
length=524, char=0:   274.34 (-1302.16%)  13.66 ( 30.20%)  19.57
length=525, char=0:   274.64 (-1297.78%)  13.56 ( 30.99%)  19.65
length=526, char=0:   275.20 (-1300.04%)  13.56 ( 31.01%)  19.66
length=527, char=0:   275.66 (-1302.86%)  13.56 ( 30.99%)  19.65
length=1024, char=0:  524.46 (-2169.75%)  20.12 ( 12.92%)  23.11
length=1025, char=0:  525.14 (-2124.63%)  21.62 (  8.40%)  23.61
length=1026, char=0:  525.59 (-2125.36%)  21.88 (  7.37%)  23.62
length=1027, char=0:  525.98 (-2127.14%)  21.62 (  8.46%)  23.62
length=1028, char=0:  526.68 (-2131.10%)  21.62 (  8.42%)  23.61
length=1029, char=0:  527.10 (-2131.70%)  21.79 (  7.73%)  23.62
length=1030, char=0:  527.54 (-2118.51%)  21.62 (  9.10%)  23.78
length=1031, char=0:  527.98 (-2136.37%)  21.62 (  8.43%)  23.61
length=1032, char=0:  528.70 (-2139.38%)  21.62 (  8.43%)  23.61
length=1033, char=0:  529.25 (-2124.37%)  21.62 (  9.11%)  23.79
length=1034, char=0:  529.48 (-2142.95%)  21.62 (  8.43%)  23.61
length=1035, char=0:  530.11 (-2145.13%)  21.62 (  8.44%)  23.61
length=1036, char=0:  530.76 (-2147.10%)  21.79 (  7.73%)  23.62
length=1037, char=0:  531.03 (-2149.45%)  21.62 (  8.42%)  23.61
length=1038, char=0:  531.64 (-2151.87%)  21.62 (  8.42%)  23.61
length=1039, char=0:  531.99 (-2151.63%)  21.80 (  7.75%)  23.63

	* sysdeps/aarch64/memset-reg.h: New file.
	* sysdeps/aarch64/memset.S: Use it.
	(__memset): Rename to MEMSET macro.
	[ZVA_MACRO]: Use zva_macro.
	* sysdeps/aarch64/multiarch/Makefile (sysdep_routines):
	Add memset_generic and memset_falkor.
	* sysdeps/aarch64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Add memset ifuncs.
	* sysdeps/aarch64/multiarch/init-arch.h (INIT_ARCH): New
	local variable zva_size.
	* sysdeps/aarch64/multiarch/memset.c: New file.
	* sysdeps/aarch64/multiarch/memset_generic.S: New file.
	* sysdeps/aarch64/multiarch/memset_falkor.S: New file.
	* sysdeps/aarch64/multiarch/rtld-memset.S: New file.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.c
	(DCZID_DZP_MASK): New macro.
	(DCZID_BS_MASK): Likewise.
	(init_cpu_features): Read and set zva_size.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.h
	(struct cpu_features): New member zva_size.
2017-11-20 18:25:04 +05:30
Adhemerval Zanella
58a813bf6e aarch64: Fix f{max,min}{f} build for GCC 4.9 and 5
GCC 4.9 and 5 do not generate a correct f{max,min}nm instruction for
__builtin_{fmax,fmin}{f} without -ffinite-math-only.  It is clearly a
compiler issue, since the instruction can handle NaN and Inf correctly
and GCC 6+ does not show this issue.

We can backport a fix to GCC 5, raise the minimum required GCC version
for aarch64 (since the GCC 4.9 branch is now closed [1]) and/or add a
configure check for this issue.  However, I think -ffinite-math-only
should be safe for these specific implementations and it is a simpler
solution.

Checked on aarch64-linux-gnu with GCC 5.3.1.

	* sysdeps/aarch64/fpu/Makefile (CFLAGS-s_fmax.c, CFLAGS-s_fmaxf.c,
	CFLAGS-s_fmin.c, CFLAGS-s_fminf.c): New rule: add -ffinite-math-only.

[1] https://gcc.gnu.org/ml/gcc/2016-08/msg00010.html

Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2017-11-17 09:23:07 -02:00
Adhemerval Zanella
06be6368da nptl: Define __PTHREAD_MUTEX_{NUSERS_AFTER_KIND,USE_UNION}
This patch adds two new internal defines to set the internal
pthread_mutex_t layout required by the supported ABIs:

  1. __PTHREAD_MUTEX_NUSERS_AFTER_KIND, which controls whether the
     __nusers field is defined before or after __kind.  The preferred
     value for new ports is 0, which places __nusers before __kind.

  2. __PTHREAD_MUTEX_USE_UNION, which controls whether the internal
     __spins and __list members are placed inside a union for
     linuxthreads compatibility.  The preferred value for new ports is
     0, which does not use a union to define both fields.

It fixes the wrong offset of the __kind field on x86_64-linux-gnu-x32.
Checked with a make check run-built-tests=no on all affected ABIs.
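
For a new 64-bit port the per-architecture header then carries the
preferred values, roughly as follows (a sketch of a pthreadtypes-arch.h
excerpt):

/* Preferred layout for new ports: __nusers before __kind and no
   linuxthreads-compatibility union for __spins/__list.  */
#define __PTHREAD_MUTEX_NUSERS_AFTER_KIND  0
#define __PTHREAD_MUTEX_USE_UNION          0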

	[BZ #22298]
	* nptl/allocatestack.c (allocate_stack): Check if
	__PTHREAD_MUTEX_HAVE_PREV is non-zero, instead if
	__PTHREAD_MUTEX_HAVE_PREV is defined.
	* nptl/descr.h (pthread): Likewise.
	* nptl/nptl-init.c (__pthread_initialize_minimal_internal):
	Likewise.
	* nptl/pthread_create.c (START_THREAD_DEFN): Likewise.
	* sysdeps/nptl/fork.c (__libc_fork): Likewise.
	* sysdeps/nptl/pthread.h (PTHREAD_MUTEX_INITIALIZER): Likewise.
	* sysdeps/nptl/bits/thread-shared-types.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New
	defines.
	(__pthread_internal_list): Check __PTHREAD_MUTEX_USE_UNION instead
	of __WORDSIZE for internal layout.
	(__pthread_mutex_s): Check __PTHREAD_MUTEX_NUSERS_AFTER_KIND instead
	of __WORDSIZE for internal __nusers layout and __PTHREAD_MUTEX_USE_UNION
	instead of __WORDSIZE whether to use an union for __spins and __list
	fields.
	(__PTHREAD_MUTEX_HAVE_PREV): Define also for __PTHREAD_MUTEX_USE_UNION
	case.
	* sysdeps/aarch64/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New
	defines.
	* sysdeps/alpha/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/arm/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/hppa/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/ia64/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/m68k/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/microblaze/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/mips/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/nios2/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/powerpc/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/s390/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/sh/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/sparc/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/tile/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/x86/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.

Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2017-11-07 09:48:41 -02:00
Adhemerval Zanella
dff91cd45e nptl: Add tests for internal pthread_mutex_t offsets
This patch adds a new build test to check the offsets of internal
fields of user-visible types.  Although currently the only field which
is statically initialized to a non-zero value is
pthread_mutex_t.__data.__kind, the tests also check the offsets of the
__kind, __spins, __elision (if supported), and __list internal members.
An internal header (pthread-offsets.h) is added to each major ABI with
the reference values.

Checked on x86_64-linux-gnu and with a build check for all affected
ABIs (aarch64-linux-gnu, alpha-linux-gnu, arm-linux-gnueabihf,
hppa-linux-gnu, i686-linux-gnu, ia64-linux-gnu, m68k-linux-gnu,
microblaze-linux-gnu, mips64-linux-gnu, mips64-n32-linux-gnu,
mips-linux-gnu, powerpc64le-linux-gnu, powerpc-linux-gnu,
s390-linux-gnu, s390x-linux-gnu, sh4-linux-gnu, sparc64-linux-gnu,
sparcv9-linux-gnu, tilegx-linux-gnu, tilegx-linux-gnu-x32,
tilepro-linux-gnu, x86_64-linux-gnu, and x86_64-linux-x32).
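
The build-time check is along these lines (a sketch, assuming the internal
pthread-offsets.h reference macros are visible; the real
ASSERT_PTHREAD_INTERNAL_OFFSET macro also stringizes the expected value for
the error message):

#include <stddef.h>
#include <pthread.h>

_Static_assert (offsetof (pthread_mutex_t, __data.__kind)
		== __PTHREAD_MUTEX_KIND_OFFSET,
		"offset of __kind field of pthread_mutex_t "
		"does not match pthread-offsets.h");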

	* nptl/pthreadP.h (ASSERT_PTHREAD_STRING,
	ASSERT_PTHREAD_INTERNAL_OFFSET): New macro.
	* nptl/pthread_mutex_init.c (__pthread_mutex_init): Add build time
	checks for internal pthread_mutex_t offsets.
	* sysdeps/aarch64/nptl/pthread-offsets.h
	(__PTHREAD_MUTEX_NUSERS_OFFSET, __PTHREAD_MUTEX_KIND_OFFSET,
	__PTHREAD_MUTEX_SPINS_OFFSET, __PTHREAD_MUTEX_ELISION_OFFSET,
	__PTHREAD_MUTEX_LIST_OFFSET): New macro.
	* sysdeps/alpha/nptl/pthread-offsets.h: Likewise.
	* sysdeps/arm/nptl/pthread-offsets.h: Likewise.
	* sysdeps/hppa/nptl/pthread-offsets.h: Likewise.
	* sysdeps/i386/nptl/pthread-offsets.h: Likewise.
	* sysdeps/ia64/nptl/pthread-offsets.h: Likewise.
	* sysdeps/m68k/nptl/pthread-offsets.h: Likewise.
	* sysdeps/microblaze/nptl/pthread-offsets.h: Likewise.
	* sysdeps/mips/nptl/pthread-offsets.h: Likewise.
	* sysdeps/nios2/nptl/pthread-offsets.h: Likewise.
	* sysdeps/powerpc/nptl/pthread-offsets.h: Likewise.
	* sysdeps/s390/nptl/pthread-offsets.h: Likewise.
	* sysdeps/sh/nptl/pthread-offsets.h: Likewise.
	* sysdeps/sparc/nptl/pthread-offsets.h: Likewise.
	* sysdeps/tile/nptl/pthread-offsets.h: Likewise.
	* sysdeps/x86_64/nptl/pthread-offsets.h: Likewise.

Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2017-11-07 09:48:28 -02:00
Szabolcs Nagy
659ca26736 aarch64: optimize _dl_tlsdesc_dynamic fast path
Remove some load/store instructions from the dynamic tlsdesc resolver
fast path.  This gives around 20% faster tls access in dlopened shared
libraries (assuming glibc ran out of static tls space).

	* sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_dynamic): Optimize.
2017-11-03 14:50:55 +00:00
Szabolcs Nagy
91c5a366d8 aarch64: Remove barriers from TLS descriptor functions
Remove ldar synchronization and most lazy TLSDESC initialization
related code.

	* sysdeps/aarch64/dl-machine.h (elf_machine_runtime_setup): Remove
	DT_TLSDESC_GOT initialization.
	* sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_return_lazy): Remove.
	(_dl_tlsdesc_resolve_rela): Likewise.
	(_dl_tlsdesc_resolve_hold): Likewise.
	(_dl_tlsdesc_undefweak): Remove ldar.
	(_dl_tlsdesc_dynamic): Likewise.
	* sysdeps/aarch64/dl-tlsdesc.h (_dl_tlsdesc_return_lazy): Remove.
	(_dl_tlsdesc_resolve_rela): Likewise.
	(_dl_tlsdesc_resolve_hold): Likewise.
	* sysdeps/aarch64/tlsdesc.c (_dl_tlsdesc_resolve_rela_fixup): Remove.
	(_dl_tlsdesc_resolve_hold_fixup): Likewise.
	(_dl_tlsdesc_resolve_rela): Likewise.
	(_dl_tlsdesc_resolve_hold): Likewise.
2017-11-03 14:43:32 +00:00
Szabolcs Nagy
b7cf203b5c aarch64: Disable lazy symbol binding of TLSDESC
Always do TLS descriptor initialization at load time during relocation
processing to avoid barriers at every TLS access. In non-dlopened shared
libraries the overhead of tls access vs static global access is > 3x
bigger when lazy initialization is used (_dl_tlsdesc_return_lazy)
compared to bind-now (_dl_tlsdesc_return) so the barriers dominate tls
access performance.

TLSDESC relocs are in DT_JMPREL which are processed at load time using
elf_machine_lazy_rel which is only supposed to do lightweight
initialization using the DT_TLSDESC_PLT trampoline (the trampoline code
jumps to the entry point in DT_TLSDESC_GOT which does the lazy tlsdesc
initialization at runtime).  This patch changes elf_machine_lazy_rel
in aarch64 to do the symbol binding and initialization as if DF_BIND_NOW
was set, so the non-lazy code path of elf/do-rel.h was replicated.

The static linker could be changed to emit TLSDESC relocs in DT_REL*,
which are processed non-lazily, but the goal of this patch is to always
guarantee bind-now semantics, even if the binary was produced with an
old linker, so the barriers can be dropped in tls descriptor functions.

After this change the synchronizing ldar instructions can be dropped
as well as the lazy initialization machinery including the DT_TLSDESC_GOT
setup.

I believe this should be done on all targets, including ones where no
barrier is needed for lazy initialization.  There is very little gain in
optimizing for large number of symbolic tlsdesc relocations which is an
extremely uncommon case.  And currently the tlsdesc entries are only
readonly protected with -z now and some hardenings against writable
JUMPSLOT relocs don't work for TLSDESC so they are a security hazard.
(But to fix that the static linker has to be changed.)

	* sysdeps/aarch64/dl-machine.h (elf_machine_lazy_rel): Do symbol
	binding and initialization non-lazily for R_AARCH64_TLSDESC.
2017-11-03 14:41:35 +00:00
Szabolcs Nagy
be080b6c14 aarch64: Add missing math Makefile for recent commit
Without -fno-math-errno, the builtins just do a call instead of
inlining a single instruction.
2017-10-23 15:34:36 +01:00
Michael Collison
5062680c60 aarch64: Implement math acceleration via builtins
This patch converts asm statements into builtins for AArch64.  As an
example for the file sysdeps/aarch64/fpu/s_ceil.c, we convert the
function from

double
__ceil (double x)
{
  double result;
  asm ("frintp\t%d0, %d1" :
       "=w" (result) : "w" (x) );
  return result;
}

into

double
__ceil (double x)
{
  return __builtin_ceil (x);
}

Tested on aarch64-linux-gnu with gcc-4.9.4 and gcc-6.

	* sysdeps/aarch64/fpu/e_sqrt.c (ieee754_sqrt): Replace asm statements
	with __builtin_sqrt.
	* sysdeps/aarch64/fpu/e_sqrtf.c (ieee754_sqrtf): Replace asm statements
	with __builtin_sqrtf.
	* sysdeps/aarch64/fpu/s_ceil.c (__ceil): Replace asm statements
	with __builtin_ceil.
	* sysdeps/aarch64/fpu/s_ceilf.c (__ceilf): Replace asm statements
	with __builtin_ceilf.
	* sysdeps/aarch64/fpu/s_floor.c (__floor): Replace asm statements
	with __builtin_floor.
	* sysdeps/aarch64/fpu/s_floorf.c (__floorf): Replace asm statements
	with __builtin_floorf.
	* sysdeps/aarch64/fpu/s_fma.c (__fma): Replace asm statements
	with __builtin_fma.
	* sysdeps/aarch64/fpu/s_fmaf.c (__fmaf): Replace asm statements
	with __builtin_fmaf.
	* sysdeps/aarch64/fpu/s_fmax.c (__fmax): Replace asm statements
	with __builtin_fmax.
	* sysdeps/aarch64/fpu/s_fmaxf.c (__fmaxf): Replace asm statements
	with __builtin_fmaxf.
	* sysdeps/aarch64/fpu/s_fmin.c (__fmin): Replace asm statements
	with __builtin_fmin.
	* sysdeps/aarch64/fpu/s_fminf.c (__fminf): Replace asm statements
	with __builtin_fminf.
	* sysdeps/aarch64/fpu/s_frint.c: Delete file.
	* sysdeps/aarch64/fpu/s_frintf.c: Delete file.
	* sysdeps/aarch64/fpu/s_llrint.c (__llrint): Replace asm statements
	with __builtin_rint and conversion to long long int.
	* sysdeps/aarch64/fpu/s_llrintf.c (__llrintf): Likewise.
	* sysdeps/aarch64/fpu/s_llround.c (__llround): Replace asm statements
	with __builtin_llround.
	* sysdeps/aarch64/fpu/s_llroundf.c (__llroundf): Likewise.
	* sysdeps/aarch64/fpu/s_lrint.c (__lrint): Replace asm statements
	with __builtin_rint and conversion to long int.
	* sysdeps/aarch64/fpu/s_lrintf.c (__lrintf): Likewise.
	* sysdeps/aarch64/fpu/s_lround.c (__lround): Replace asm statements
	with __builtin_lround.
	* sysdeps/aarch64/fpu/s_lroundf.c (__lroundf): Replace asm statements
	with __builtin_lroundf.
	* sysdeps/aarch64/fpu/s_nearbyint.c (__nearbyint): Replace asm
	statements with __builtin_nearbyint.
	* sysdeps/aarch64/fpu/s_nearbyintf.c (__nearbyintf): Replace asm
	statements with __builtin_nearbyintf.
	* sysdeps/aarch64/fpu/s_rint.c (__rint): Replace asm statements
	with __builtin_rint.
	* sysdeps/aarch64/fpu/s_rintf.c (__rintf): Replace asm statements
	with __builtin_rintf.
	* sysdeps/aarch64/fpu/s_round.c (__round): Replace asm statements
	with __builtin_round.
	* sysdeps/aarch64/fpu/s_roundf.c (__roundf): Replace asm statements
	with __builtin_roundf.
	* sysdeps/aarch64/fpu/s_trunc.c (__trunc): Replace asm statements
	with __builtin_trunc.
	* sysdeps/aarch64/fpu/s_truncf.c (__truncf): Replace asm statements
	with __builtin_truncf.
	* sysdeps/aarch64/fpu/Makefile: Build e_sqrt[f].c with -fno-math-errno.
2017-10-23 10:32:56 +01:00
Szabolcs Nagy
a68ba2f3cd [AARCH64] Rewrite elf_machine_load_address using _DYNAMIC symbol
This patch rewrites the aarch64 elf_machine_load_address to use the special
_DYNAMIC symbol instead of _dl_start.

The static (link-time) address of the _DYNAMIC symbol is stored in the
first GOT entry.  Here is the binutils change that makes this approach
work (part of binutils 2.24):
https://sourceware.org/ml/binutils/2013-06/msg00248.html

The i386 and x86_64 targets already use the same method.

The original implementation relied on a trick: the R_AARCH64_ABS32
relocation is resolved at link time, which only works while the static
address fits in 32 bits.  In LP64, however, addresses are normally
64 bits wide.

The replacement is a plain C version, which should be portable in all
cases.
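
A sketch of the approach (illustrative; the committed code may differ in
detail):

/* GOT[0] holds the link-time address of _DYNAMIC, while &_DYNAMIC
   evaluates to its run-time address, so the difference is the load
   address of the object.  */
static inline ElfW(Addr) __attribute__ ((unused))
elf_machine_load_address (void)
{
  extern ElfW(Dyn) _DYNAMIC[] attribute_hidden;
  extern const ElfW(Addr) _GLOBAL_OFFSET_TABLE_[] attribute_hidden;

  return (ElfW(Addr)) &_DYNAMIC - _GLOBAL_OFFSET_TABLE_[0];
}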

	* sysdeps/aarch64/dl-machine.h (elf_machine_load_address): Use
	_DYNAMIC symbol to calculate load address.
2017-10-18 17:35:16 +01:00
Siddhesh Poyarekar
dd5bc7f1b3 aarch64: Optimized implementation of memmove for Qualcomm Falkor
This is an optimized memmove implementation for the Qualcomm Falkor
processor core.  Because of the way the falkor memcpy has to be written,
code cannot easily be shared between memmove and memcpy as it is in the
other aarch64 memcpy implementations, which is why this routine is
separate.  The underlying principle is the same as in memcpy: use
registers whose numbers share the same lower 4 bits to fetch the same
stream, which optimizes hardware prefetcher performance.

The memcpy copy loop copies 64 bytes at a time using the same register
pair, since that is what trains the hardware prefetcher on the falkor
core.  memmove cannot quite do that because it needs to avoid overlaps,
so it does the next best thing: a 32-byte loop with a 32-byte tail
(prefetching a loop iteration ahead to account for overlapping
locations), using register pairs that alias so that they hit the same
prefetcher.  Because of this difference in loop size the two currently
have to be separate implementations, but efforts are underway to let
memmove fall back to memcpy whenever it can without simply duplicating
all of the code.
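
As a purely conceptual C sketch of the direction choice and the 32-byte
blocking (blocked_memmove is illustrative only; the real routine is
hand-written assembly and additionally pins each stream to registers
whose numbers share the low 4 bits):

#include <string.h>
#include <stdint.h>

void *
blocked_memmove (void *dstv, const void *srcv, size_t n)
{
  unsigned char *d = dstv;
  const unsigned char *s = srcv;
  unsigned char block[32];

  if ((uintptr_t) d - (uintptr_t) s >= n)
    {
      /* Destination is below the source, or the regions are disjoint:
         walk forward, one 32-byte block at a time.  */
      while (n >= 32)
        {
          memcpy (block, s, 32);   /* load the whole block first ...  */
          memcpy (d, block, 32);   /* ... then store it               */
          s += 32; d += 32; n -= 32;
        }
      memcpy (block, s, n);
      memcpy (d, block, n);
    }
  else
    {
      /* Destination overlaps the source from above: walk backward.  */
      while (n >= 32)
        {
          n -= 32;
          memcpy (block, s + n, 32);
          memcpy (d + n, block, 32);
        }
      memcpy (block, s, n);
      memcpy (d, block, n);
    }
  return dstv;
}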

Performance:

The routine fares around 20-25% better than the generic memmove for
most medium to large sizes (i.e. > 128 bytes) in the new walking
memmove benchmark (memmove-walk), with an unexplained regression
between 1K and 2K.  The minor regression is worth looking into on our
side, but the remaining gains are significant enough that we would
like this included upstream while we investigate the cause of the
regression.  Here is a snippet of the numbers as generated from the
microbenchmark by the compare_strings script.  Comparisons are against
__memmove_generic:

Function: memmove
Variant: walk
                                    __memmove_thunderx	__memmove_falkor	__memmove_generic
========================================================================================================================
<snip>
                        length=16384:  12508800.00 (  6.09%)	 11486800.00 ( 13.76%)	 13319600.00
                        length=16400:  13614200.00 ( -0.67%)	 11585000.00 ( 14.33%)	 13523600.00
                        length=16385:  13448400.00 (  0.10%)	 11732700.00 ( 12.84%)	 13461200.00
                        length=16399:  13594100.00 ( -0.22%)	 11859600.00 ( 12.57%)	 13564400.00
                        length=16386:  13211600.00 (  1.13%)	 11503800.00 ( 13.91%)	 13362400.00
                        length=16398:  13218600.00 (  2.12%)	 11573200.00 ( 14.30%)	 13504700.00
                        length=16387:  13510900.00 ( -0.37%)	 11744200.00 ( 12.76%)	 13461300.00
                        length=16397:  13603700.00 ( -0.15%)	 11878200.00 ( 12.55%)	 13583200.00
                        length=16388:  13461700.00 ( -0.13%)	 11558000.00 ( 14.03%)	 13444100.00
                        length=16396:  13517500.00 ( -0.03%)	 11561300.00 ( 14.45%)	 13513900.00
                        length=16389:  13534100.00 (  0.17%)	 11756800.00 ( 13.28%)	 13556900.00
                        length=16395:  13585600.00 (  0.11%)	 11791800.00 ( 13.30%)	 13601200.00
                        length=16390:  13480100.00 ( -0.13%)	 11685500.00 ( 13.20%)	 13462100.00
                        length=16394:  13529900.00 ( -0.23%)	 11549800.00 ( 14.43%)	 13498200.00
                        length=16391:  13595400.00 ( -0.26%)	 11768200.00 ( 13.22%)	 13560600.00
                        length=16393:  13567000.00 (  0.20%)	 11779700.00 ( 13.35%)	 13594700.00
                        length=32768:  71308800.00 ( -6.53%)	 50220800.00 ( 24.98%)	 66939200.00
                        length=32784:  72100800.00 (-11.55%)	 50114100.00 ( 22.47%)	 64636300.00
                        length=32769:  71767000.00 ( -7.10%)	 51238400.00 ( 23.54%)	 67010000.00
                        length=32783:  70113700.00 (-40.95%)	 51129000.00 ( -2.78%)	 49744400.00
                        length=32770:  71367600.00 ( -6.52%)	 50244700.00 ( 25.01%)	 67000900.00
                        length=32782:  64366700.00 (  4.71%)	 50101400.00 ( 25.83%)	 67545600.00
                        length=32771:  71440100.00 ( -6.51%)	 51263900.00 ( 23.57%)	 67074900.00
                        length=32781:  66993000.00 (  0.34%)	 51108300.00 ( 23.97%)	 67220300.00
                        length=32772:  71443900.00 (-60.50%)	 50062100.00 (-12.47%)	 44512600.00
                        length=32780:  71759100.00 ( -6.58%)	 50263200.00 ( 25.35%)	 67328600.00
                        length=32773:  71714900.00 (-33.21%)	 51076600.00 (  5.12%)	 53835400.00
                        length=32779:  71756900.00 ( -6.56%)	 51290800.00 ( 23.83%)	 67337800.00
                        length=32774:  59689300.00 (-34.55%)	 50068400.00 (-12.86%)	 44363300.00
                        length=32778:  71847500.00 (-18.20%)	 50084100.00 ( 17.61%)	 60786500.00
                        length=32775:  71599300.00 ( -6.54%)	 51278200.00 ( 23.70%)	 67204800.00
                        length=32777:  71862900.00 (-60.85%)	 51094000.00 (-14.36%)	 44677900.00
                        length=65536: 282848000.00 ( -6.60%)	199187000.00 ( 24.93%)	265325000.00
                        length=65552: 243285000.00 (-41.61%)	198512000.00 (-15.54%)	171805000.00
                        length=65537: 255415000.00 (-23.47%)	202499000.00 (  2.11%)	206858000.00
                        length=65551: 280122000.00 (-62.95%)	203349000.00 (-18.29%)	171911000.00
                        length=65538: 283676000.00 (-14.46%)	198368000.00 ( 19.96%)	247848000.00
                        length=65550: 275566000.00 (-51.76%)	198494000.00 ( -9.31%)	181581000.00
                        length=65539: 283699000.00 ( -6.58%)	203453000.00 ( 23.57%)	266195000.00
                        length=65549: 286572000.00 ( -6.65%)	202607000.00 ( 24.60%)	268712000.00
                        length=65540: 283710000.00 ( -6.59%)	199161000.00 ( 25.17%)	266160000.00
                        length=65548: 237573000.00 ( 11.48%)	198462000.00 ( 26.06%)	268395000.00
                        length=65541: 284150000.00 ( -6.58%)	203273000.00 ( 23.75%)	266600000.00
                        length=65547: 286250000.00 ( -6.70%)	202594000.00 ( 24.48%)	268263000.00
                        length=65542: 284167000.00 ( -6.60%)	199122000.00 ( 25.31%)	266584000.00
                        length=65546: 285656000.00 ( -6.59%)	198443000.00 ( 25.95%)	268002000.00
                        length=65543: 284600000.00 ( -6.58%)	203247000.00 ( 23.89%)	267030000.00
                        length=65545: 285665000.00 ( -6.40%)	202575000.00 ( 24.55%)	268472000.00
<snip>

	* sysdeps/aarch64/multiarch/Makefile (sysdep_routines): Add
	memmove_falkor.
	* sysdeps/aarch64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Likewise.
	* sysdeps/aarch64/multiarch/memmove.c: Likewise.
	* sysdeps/aarch64/multiarch/memmove_falkor.S: New file.
2017-10-05 22:20:23 +05:30
Szabolcs Nagy
db4f87bad4 aarch64: don't use MIN in dl-machine.h
MIN is used, but param.h may not be included, so expand its
single use inline.
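
For illustration, the expansion is just the usual conditional (the
operand names here are made up, not the actual ones in dl-machine.h):

/* was: size_t len = MIN (size1, size2); */
size_t len = size1 < size2 ? size1 : size2;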

	* sysdeps/aarch64/dl-machine.h (elf_machine_rela): Expand MIN.
2017-10-04 17:49:38 +01:00