Commit Graph

351 Commits

Adhemerval Zanella
ea04f02131 aarch64: Remove fpu Makefile
The -fno-math-errno is already added by default, and the minimum GCC
required to build glibc (6.2) makes the -ffinite-math-only
superfluous.

Checked on aarch64-linux-gnu.
2020-06-22 11:09:50 -03:00
Adhemerval Zanella
271afad8f4 aarch64: Use math-use-builtins for ceil{f}
The define is already set in math-use-builtins-ceil.h; the patch
just removes the implementations (they were missed in c9feb1be93).

Checked on aarch64-linux-gnu.
2020-06-22 11:09:49 -03:00
Adhemerval Zanella
e80501a5c9 math: Decompose math-use-builtins.h
Each symbol's definition is moved to a separate file, and the scheme
covers all symbol type definitions (float, double, long double,
and float128).

It allows setting support for architectures without the boilerplate
of copying default values.
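
A minimal sketch of the resulting layout, using ceil as the example
symbol (the defaults shown are illustrative):

/* sysdeps/generic/math-use-builtins-ceil.h: builtins off by default.  */
#define USE_CEIL_BUILTIN 0
#define USE_CEILF_BUILTIN 0
#define USE_CEILL_BUILTIN 0
#define USE_CEILF128_BUILTIN 0

/* An architecture that wants the builtin overrides only this header,
   e.g. sysdeps/aarch64/fpu/math-use-builtins-ceil.h:  */
#define USE_CEIL_BUILTIN 1
#define USE_CEILF_BUILTIN 1

math-use-builtins.h then simply includes the per-symbol headers
instead of carrying all the defaults itself.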

Checked with a build on the affected ABIs.
2020-06-22 11:09:45 -03:00
Andrea Corallo
a365ac45b7 aarch64: MTE compatible strlen
Introduce an Arm MTE compatible strlen implementation.

The existing implementation assumes that any access to the pages in
which the string resides is safe.  This assumption is not true when
MTE is enabled.  This patch updates the algorithm to ensure that
accesses remain within the bounds of an MTE tag (16-byte chunks) and
improves overall performance on modern cores. On cores with a less
efficient Advanced SIMD implementation, such as Cortex-A53, it can
be slower.
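
A scalar C sketch of the access pattern (illustrative only; the real
routine scans whole granules with Advanced SIMD):

#include <stddef.h>
#include <stdint.h>

size_t
mte_safe_strlen (const char *s)
{
  /* Align down to the 16-byte MTE granule containing s; every
     subsequent load stays within one aligned granule.  */
  const char *p = (const char *) ((uintptr_t) s & ~(uintptr_t) 15);
  size_t i = (size_t) (s - p);          /* bytes before the string */
  for (;;)
    {
      for (; i < 16; i++)               /* scan within one granule */
        if (p[i] == '\0')
          return (size_t) (p + i - s);
      p += 16;                          /* step a whole granule */
      i = 0;
    }
}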

Benchmarked on Cortex-A72, Cortex-A53, Neoverse N1.

Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
2020-06-09 09:21:11 +01:00
Andrea Corallo
49beaaec1b aarch64: MTE compatible strchr
Introduce an Arm MTE compatible strchr implementation.

The existing implementation assumes that any access to the pages in
which the string resides is safe.  This assumption is not true when
MTE is enabled.  This patch updates the algorithm to ensure that
accesses remain within the bounds of an MTE tag (16-byte chunks) and
improves overall performance.

Benchmarked on Cortex-A72, Cortex-A53, Neoverse N1.

Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
2020-06-09 09:20:27 +01:00
Andrea Corallo
f7de454f20 aarch64: MTE compatible strchrnul
Introduce an Arm MTE compatible strchrnul implementation.

The existing implementation assumes that any access to the pages in
which the string resides is safe.  This assumption is not true when
MTE is enabled.  This patch updates the algorithm to ensure that
accesses remain within the bounds of an MTE tag (16-byte chunks) and
improves overall performance.

Benchmarked on Cortex-A72, Cortex-A53, Neoverse N1.

Co-authored-by: Wilco Dijkstra <wilco.dijkstra@arm.com>
2020-06-09 09:20:27 +01:00
Krzysztof Koch
d1f75e9644 AArch64: Merge Falkor memcpy and memmove implementations
Falkor's memcpy and memmove share some implementation details,
therefore, the two routines are moved to a single source file
for code reuse.

The two routines now share code for small and medium copies
(up to and including 128 bytes). Large copies in memcpy do not
handle overlap correctly, consequently, the loops for
moving/copying more than 128 bytes stay separate for memcpy
and memmove.

To increase code reuse a number of small modifications were made:

1. The old implementation of memcpy copied the first 16 bytes as
   soon as the size of the data was determined to be greater than 32
   bytes.  For the memcpy code to also work when copying small/medium
   overlapping data, the first load and store were moved to the large
   copy case.
2. The medium memcpy case no longer assumes that 16 bytes were already
   copied and uses 8 registers to copy up to 128 bytes.
3. Small case for memmove was enlarged to that of memcpy, which is
   less than or equal to 32 bytes.
4. Medium case for memmove was enlarged to that of memcpy, which is
   less than or equal to 128 bytes.

Other changes include:

1. Improve alignment of existing loop bodies.
2. 'Delouse' the memmove and memcpy input arguments: make sure the
   upper 32 bits of the input registers are zeroed if unused.
3. Do one more iteration in memmove loops and reduce the number of
   copies made from the start/end of the buffer, depending on
   the direction of the memmove loop.

Benchmarking:

Looking at the results from bench-memcpy-random.out, we can see that
now memmove_falkor is about 5% faster than memcpy_falkor_old, while
memmove_falkor_old was more than 15% slower. The memcpy implementation
remained largely unmodified, so there is no significant performance
change.

The reason for such a significant memmove performance gain is the
increase of the upper bound on the small copy case to 32 bytes and
the increase of the upper bound on the medium copy case to 128 bytes.
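
A C sketch of the overlap-safe technique that lets the small/medium
paths be shared (helper name hypothetical; shown for a 17-32 byte
copy):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Load both ends of the source before storing anything, so the same
   sequence is also correct when src and dst overlap (memmove).  */
static void
copy_17_32 (char *dst, const char *src, size_t n)
{
  uint64_t a, b, c, d;
  memcpy (&a, src, 8);
  memcpy (&b, src + 8, 8);
  memcpy (&c, src + n - 16, 8);
  memcpy (&d, src + n - 8, 8);
  memcpy (dst, &a, 8);
  memcpy (dst + 8, &b, 8);
  memcpy (dst + n - 16, &c, 8);
  memcpy (dst + n - 8, &d, 8);
}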

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2020-06-08 14:13:05 +01:00
Vineet Gupta
c9feb1be93 aarch/fpu: use generic builtins based math functions
Introduce the sysdep header math-use-builtins.h to replace the aarch64
implementations with the corresponding generic ones.

 - newly introduced generic sqrt{,f}, fma{,f}
 - existing floor{,f}, nearbyint{,f}, rint{,f}, round{,f}, trunc{,f}
 - Note that generic copysign was already enabled (via the generic
   math-use-builtins.h); it now comes through the sysdep header

Tested with build-many-glibcs for aarch64-linux-gnu

This is a non-functional change, and the aarch64 libm before/after
was byte-invariant, as compared below:

| cd /SCRATCH/vgupta/gnu/install-glibc-A-baseline
| for i in `find . -name libm-2.31.9000.so`; do
|   echo $i; diff $i /SCRATCH/vgupta/gnu/install-glibc-C-reduce-scope/$i ;
|   echo $?;
| done

| ./aarch64-linux-gnu/lib64/libm-2.31.9000.so
| 0
| ./arm-linux-gnueabi/lib/libm-2.31.9000.so
| 0
| ./x86_64-linux-gnu/lib64/libm-2.31.9000.so
| 0
| ./arm-linux-gnueabihf/lib/libm-2.31.9000.so
| 0
| ./riscv64-linux-gnu-rv64imac-lp64/lib64/lp64/libm-2.31.9000.so
| 0
| ./riscv64-linux-gnu-rv64imafdc-lp64/lib64/lp64/libm-2.31.9000.so
| 0
| ./powerpc-linux-gnu/lib/libm-2.31.9000.so
| 0
| ./microblaze-linux-gnu/lib/libm-2.31.9000.so
| 0
| ./nios2-linux-gnu/lib/libm-2.31.9000.so
| 0
| ./hppa-linux-gnu/lib/libm-2.31.9000.so
| 0
| ./s390x-linux-gnu/lib64/libm-2.31.9000.so
| 0

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2020-06-03 10:23:33 -07:00
Lexi Shao
59b64f9cbb aarch64: fix strcpy and strnlen for big-endian [BZ #25824]
This patch fixes the optimized implementations of strcpy and strnlen
on a big-endian arm64 machine.

The optimized method uses NEON, which can process 128 bits with one
instruction. On a big-endian machine, the byte order should be reversed
across the whole 128-bit double word. But the instruction
	rev64	datav.16b, datav.16b
reverses the bytes within each 64-bit half rather than reversing all
128 bits. There is no rev128 instruction to reverse the 128 bits, but
we can fix this by loading the data registers accordingly.
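
For illustration, a full 128-bit byte reverse could be synthesized as

	rev64	datav.16b, datav.16b
	ext	datav.16b, datav.16b, datav.16b, #8

(the ext swaps the two 64-bit halves), but the patch avoids the extra
instruction by swapping the registers at load time instead.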

Fixes 0237b61526e7("aarch64: Optimized implementation of strcpy") and
2911cb68ed3d("aarch64: Optimized implementation of strnlen").

Signed-off-by: Lexi Shao <shaolexi@huawei.com>
Reviewed-by: Szabolcs Nagy  <szabolcs.nagy@arm.com>
2020-05-15 12:15:56 +01:00
Adhemerval Zanella
6a0474c769 Update aarch64 libm-test-ulps 2020-04-08 13:52:44 -03:00
Adhemerval Zanella
1c15464ca0 math: Remove inline math tests
With mathinline removal there is no need to keep building and testing
inline math tests.

The gen-libm-tests.py support to generate ULP_I_* is removed and all
libm-test-ulps files are updated to longer have the
i{float,double,ldouble} entries.  The support for no-test-inline is
also removed from both gen-auto-libm-tests and the
auto-libm-test-out-* were regenerated.

Checked on x86_64-linux-gnu and i686-linux-gnu.
2020-03-19 11:45:44 -03:00
Wilco Dijkstra
7000651327 [AArch64] Improve integer memcpy
Further optimize integer memcpy.  Small cases now include copies up
to 32 bytes.  64-128 byte copies are split into two cases to improve
performance of 64-96 byte copies.  Comments have been rewritten.
2020-03-11 17:15:25 +00:00
Florian Weimer
f4349837d9 Introduce <elf-initfini.h> and ELF_INITFINI for all architectures
This supersedes the init_array sysdeps directory.  It allows us to
check for ELF_INITFINI in both C and assembler code, and skip DT_INIT
and DT_FINI processing completely on newer architectures.

A new header file is needed because <dl-machine.h> is incompatible
with assembler code.  <sysdep.h> is compatible with assembler code,
but it cannot be included in all assembler files because on some
architectures, it redefines register names, and some assembler files
conflict with that.

<elf-initfini.h> is replicated for legacy architectures which need
DT_INIT/DT_FINI support.  New architectures follow the generic default
and disable it.
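
A sketch of the intended use (values illustrative):

/* <elf-initfini.h> on a legacy architecture: */
#define ELF_INITFINI 1

/* Consumers, in C or assembler: */
#include <elf-initfini.h>
#if ELF_INITFINI
  /* ... process DT_INIT / DT_FINI ... */
#endif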
2020-02-18 15:12:25 +01:00
Andreas Schwab
4970c9e0b5 nptl: add missing pthread-offsets.h
All architectures using their own definition of struct
__pthread_rwlock_arch_t need to provide their own pthread-offsets.h.
2020-02-10 17:01:21 +01:00
Wilco Dijkstra
220622dde5 Add libm_alias_finite for _finite symbols
This patch adds a new macro, libm_alias_finite, to define all _finite
symbols.  It sets each _finite symbol as a compat symbol based on its
first version (obtained from the definition in the build-generated
first-versions.h).
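
A sketch of a call site (using __ieee754_exp as the example): in the
file defining __ieee754_exp, writing

  libm_alias_finite (__ieee754_exp, __exp_finite)

emits __exp_finite as a compat symbol tied to the version in which it
first appeared.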

The <fn>f128_finite symbols were introduced in GLIBC 2.26 and so need
special treatment in code that is shared between long double and float128.
It is done by adding a list, similar to the internal symbol
redefinition one, in sysdeps/ieee754/float128/float128_private.h.

Alpha also needs some tricky changes to ensure we still emit 2 compat
symbols for sqrt(f).

Passes build-many-glibcs.

Co-authored-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2020-01-03 10:02:04 -03:00
Joseph Myers
d614a75396 Update copyright dates with scripts/update-copyrights. 2020-01-01 00:14:33 +00:00
Xuelei Zhang
863d775c48 aarch64: add default memcpy version for kunpeng920
Checked on aarch64-linux-gnu.
2019-12-27 11:59:37 -03:00
Xuelei Zhang
10df95cdaf aarch64: ifunc rename for kunpeng
Rename the ifunc for kunpeng to kunpeng920, and modify the
corresponding function files, including the IS_KUNPENG920 check.

Checked on aarch64-linux-gnu.
2019-12-27 11:59:51 -03:00
Xuelei Zhang
64297d49b3 aarch64: Modify error-shown comments for strcpy
Checked on aarch64-linux-gnu.
2019-12-27 11:59:37 -03:00
Xuelei Zhang
525de033a9 aarch64: Optimized memset for Kunpeng processor.
Due to the branch prediction issue of the Kunpeng processor, we found
that memset_generic has poor performance for medium-size settings, so
we reconstructed the logic and unrolled the loop in set_long by 4
times to solve the problem; even sets below 1K in size benefit.

Another change is that DC ZVA seems not to help when setting zero, so
we discarded it and used set_long to set zero instead. Fewer branches
and predictions also give the zero case a slight improvement.
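
A scalar C sketch of the 4x unrolling idea in set_long (illustrative;
the real routine is aarch64 assembly using much wider stores):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

static void
set_long (uint8_t *dst, uint8_t c, size_t n)
{
  uint64_t v = 0x0101010101010101ULL * c;   /* splat c into a word */
  /* Unrolled by 4: one branch per 32 bytes instead of per 8.  */
  while (n >= 32)
    {
      memcpy (dst, &v, 8);
      memcpy (dst + 8, &v, 8);
      memcpy (dst + 16, &v, 8);
      memcpy (dst + 24, &v, 8);
      dst += 32;
      n -= 32;
    }
  while (n--)
    *dst++ = c;
}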

Checked on aarch64-linux-gnu.

Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2019-12-19 16:31:04 -03:00
Xuelei Zhang
c2150769d0 aarch64: Optimized strlen for strlen_asimd
Optimize the strlen implementation by using vector operations and
loop unrolling in the main loop. Compared to __strlen_generic, it
reduces the latency of cases in bench-strlen by 7%~18% when the length
of src is greater than 128 bytes, with gains throughout the benchmark.

Checked on aarch64-linux-gnu.

Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2019-12-19 16:31:04 -03:00
Xuelei Zhang
a7611806d5 aarch64: Optimized implementation of memrchr
Considering the excellent performance of memchr.S in glibc 2.30, the
same algorithm is used to find the character. Compared to memrchr.c,
this memrchr.S method achieves an average performance improvement of
58% based on benchtest and its extension cases.

Checked on aarch64-linux-gnu.

Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2019-12-19 16:31:04 -03:00
Xuelei Zhang
2911cb68ed aarch64: Optimized implementation of strnlen
Optimize the strnlen implementation by using vector operations and
loop unrolling in the main loop. Compared to aarch64/strnlen.S, it
reduces the latency of cases in bench-strnlen by 11%~24% when the
length of src is greater than 64 bytes, with gains throughout the
benchmark.

Checked on aarch64-linux-gnu.

Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2019-12-19 16:31:04 -03:00
Xuelei Zhang
0237b61526 aarch64: Optimized implementation of strcpy
Optimize the strcpy implementation by using vector loads and
operations in the main loop. Compared to aarch64/strcpy.S, it reduces
the latency of cases in bench-strlen by 5%~18% when the length of src
is greater than 64 bytes, with gains throughout the benchmark.

Checked on aarch64-linux-gnu.

Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2019-12-19 16:31:04 -03:00
Xuelei Zhang
233efd433d aarch64: Optimized implementation of memcmp
The loop body is expanded from a 16-byte comparison to a 64-byte
comparison, and the usage of ldp is switched from the post-index
mode to the base-plus-offset mode. As a result, memcmp is around 18%
faster for sizes above 128 bytes overall.

Checked on aarch64-linux-gnu.

Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
2019-12-19 16:31:04 -03:00
Florian Weimer
4db71d2f98 elf: Do not run IFUNC resolvers for LD_DEBUG=unused [BZ #24214]
This commit adds missing skip_ifunc checks to aarch64, arm, i386,
sparc, and x86_64.  A new test case ensures that IRELATIVE IFUNC
resolvers do not run in various diagnostic modes of the dynamic
loader.
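
A sketch of the check added at each affected site (simplified; the
real code lives in the per-architecture dl-machine.h):

#include <elf.h>

/* Run the IRELATIVE resolver only when the loader is not in a
   diagnostic mode that sets skip_ifunc.  */
static Elf64_Addr
maybe_resolve_irelative (Elf64_Addr value, int skip_ifunc)
{
  typedef Elf64_Addr (*resolver_t) (void);
  if (!skip_ifunc)
    value = ((resolver_t) value) ();
  return value;
}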

Reviewed-By: Szabolcs Nagy <szabolcs.nagy@arm.com>
2019-12-02 14:55:22 +01:00
Adhemerval Zanella
7ddac7f265 nptl: Add default pthread-offsets.h
This patch adds a default pthread-offsets.h based on the default
thread definitions from struct_mutex.h and struct_rwlock.h.
The idea is to simplify the inclusion of new ports.

Checked with a build on affected abis.

Change-Id: I7785a9581e651feb80d1413b9e03b5ac0452668a
2019-11-26 13:53:36 +00:00
Adhemerval Zanella
7df8af43ad nptl: Add struct_rwlock.h
This patch adds a new generic __pthread_rwlock_arch_t definition meant
to be used by new ports.  Its layout mimics the current usage on some
64-bit ports and it allows some ports to use the generic definition.
The arch __pthread_rwlock_arch_t definition is moved from
pthreadtypes-arch.h to another arch-specific header (struct_rwlock.h).

Also the static initialization macro for pthread_rwlock_t is set to
use an arch-defined macro (__PTHREAD_RWLOCK_INITIALIZER), which
simplifies its implementation.

The default pthread_rwlock_t layout differs from current ports with:

  1. The internal layout is the same for 32-bit and 64-bit ports.

  2. The internal flag is an unsigned short, so it should not require
     additional padding for word-boundary alignment (if that is the
     case for the ABI).

Checked with a build on affected abis.

Change-Id: I776a6a986c23199929d28a3dcd30272db21cd1d0
2019-11-26 13:53:36 +00:00
Adhemerval Zanella
1c3f9acf1f nptl: Add struct_mutex.h
The current way of defining the common mutex definition for POSIX and
C11 in pthreadtypes-arch.h (added by commit 06be6368da) is
not really the best option for newer ports.  It requires defining some
misleading flags that should always be defined as 0
(__PTHREAD_COMPAT_PADDING_MID and __PTHREAD_COMPAT_PADDING_END), it
exposes options used solely for linuxthreads compat mode
(__PTHREAD_MUTEX_USE_UNION and __PTHREAD_MUTEX_NUSERS_AFTER_KIND), and
it requires newer ports to explicitly define them (adding more
boilerplate code).

This patch adds a new default __pthread_mutex_s definition meant to
be used by newer ports.  Its layout mimics the current usage on both
32-bit and 64-bit ports, and it allows most ports to use the generic
definition.  Only ports that use some arch-specific definition (such
as hardware lock elision or linuxthreads compat) require specific
headers.

For 32-bit ports, the generic definitions mimic the other 32-bit ports
by using a union to define the fields used by adaptive and robust
mutexes (thus not allowing both usages at the same time) and by using
a single linked list for robust mutexes.  Both decisions seem to
follow what recent ports have done and make the resulting
pthread_mutex_t/mtx_t object smaller.

Also the static initialization macro for pthread_mutex_t is set to use
a macro __PTHREAD_MUTEX_INITIALIZER, which the architecture can
redefine in its struct_mutex.h if it requires additional fields to be
initialized.

Checked with a build on affected abis.

Change-Id: I30a22c3e3497805fd6e52994c5925897cffcfe13
2019-11-26 13:53:36 +00:00
Adhemerval Zanella
0377a7fde6 nptl: Remove rwlock elision definitions
The new rwlock implementation added by cc25c8b4c1 (2.25) removed
support for lock elision.  This patch removes the remaining unused
arch-specific definitions.

Checked with a build against all affected ABIs.

Change-Id: I5dec8af50e3cd56d7351c52ceff4aa3771b53cd6
2019-11-26 13:53:36 +00:00
Adhemerval Zanella
48dbce60cf nptl: Add tests for internal pthread_rwlock_t offsets
This patch adds new build tests to check the offsets of internal
fields in the internal pthread_rwlock_t definition.  Although the
'__data.__flags' field layout should be preserved due to static
initializers, the patch also adds tests for the futexes that may be
used in shared memory (although using different libc versions in such
a scenario is not really supported).
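
A sketch of such a build-time check (the expected offset is recorded
per ABI; 48 below is purely illustrative, and a mismatch failing to
compile is exactly what the test is for):

#include <pthread.h>
#include <stddef.h>

_Static_assert (offsetof (pthread_rwlock_t, __data.__flags) == 48,
                "__data.__flags takes part in static initialization");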

Checked with a build against all affected ABIs.

Change-Id: Iccc103d557de13d17e4a3f59a0cad2f4a640c148
2019-11-26 13:53:36 +00:00
Adhemerval Zanella
71d260c107 nptl: Cleanup mutex internal offset tests
The offsets of pthread_mutex_t __data.__nusers, __data.__spins,
__data.__elision, and __data.__list are not required to be constant
across releases.  Only __data.__kind is used for static
initializers.

This patch also adds an additional size check for __data.__kind.

Checked with a build against affected ABIs.

Change-Id: I7a4e48cc91b4c4ada57e9a5d1b151fb702bfaa9f
2019-11-26 13:53:36 +00:00
Krzysztof Koch
b9f145df85 aarch64: Increase small and medium cases for __memcpy_generic
Increase the upper bound on medium cases from 96 to 128 bytes.
Now, up to 128 bytes are copied unrolled.

Increase the upper bound on small cases from 16 to 32 bytes so that
copies of 17-32 bytes are not impacted by the larger medium case.

Benchmarking:
The attached figures show relative timing difference with respect
to 'memcpy_generic', which is the existing implementation.
'memcpy_med_128' denotes the version of memcpy_generic with
only the medium case enlarged. The 'memcpy_med_128_small_32' numbers
are for the version of memcpy_generic submitted in this patch, which
has both medium and small cases enlarged. The figures were generated
using the script from:
https://www.sourceware.org/ml/libc-alpha/2019-10/msg00563.html

Depending on the platform, the performance improvement in the
bench-memcpy-random.c benchmark ranges from 6% to 20% between
the original and final versions of memcpy.S.

Tested against GLIBC testsuite and randomized tests.
2019-11-12 17:08:18 +00:00
Alistair Francis
aa706e13f4 Split up endian.h to minimize exposure of BYTE_ORDER.
With only two exceptions (sys/types.h and sys/param.h, both of which
historically might have defined BYTE_ORDER) the public headers that
include <endian.h> only want to be able to test __BYTE_ORDER against
__*_ENDIAN.

This patch creates a new bits/endian.h that can be included by any
header that wants to be able to test __BYTE_ORDER and/or
__FLOAT_WORD_ORDER against the __*_ENDIAN constants, or needs
__LONG_LONG_PAIR.  It only defines macros in the implementation
namespace.
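
A sketch of what a consuming header can now do without pulling in all
of endian.h (struct and member names illustrative):

#include <bits/endian.h>

struct example_hi_lo
{
#if __BYTE_ORDER == __BIG_ENDIAN
  unsigned int hi, lo;
#else
  unsigned int lo, hi;
#endif
};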

The existing bits/endian.h (which could not be included independently
of endian.h, and only defines __BYTE_ORDER and maybe __FLOAT_WORD_ORDER)
is renamed to bits/endianness.h.  I also took the opportunity to
canonicalize the form of this header, which we are stuck with having
one copy of per architecture.  Since they are so short, this means git
doesn’t understand that they were renamed from existing headers, sigh.

endian.h itself is a nonstandard header and its only remaining use
from a standard header is guarded by __USE_MISC, so I dropped the
__USE_MISC conditionals from around all of the public-namespace things
it defines.  (This means, an application that requests strict library
conformance but includes endian.h will still see the definition of
BYTE_ORDER.)

A few changes to specific bits/endian(ness).h variants deserve
mention:

 - sysdeps/unix/sysv/linux/ia64/bits/endian.h is moved to
   sysdeps/ia64/bits/endianness.h.  If I remember correctly, ia64 did
   have selectable endianness, but we have assembly code in
   sysdeps/ia64 that assumes it’s little-endian, so there is no reason
   to treat the ia64 endianness.h as linux-specific.

 - The C-SKY port does not fully support big-endian mode, the compile
   will error out if __CSKYBE__ is defined.

 - The PowerPC port had extra logic in its bits/endian.h to detect a
   broken compiler, which strikes me as unnecessary, so I removed it.

 - The only files that defined __FLOAT_WORD_ORDER always defined it to
   the same value as __BYTE_ORDER, so I removed those definitions.
   The SH bits/endian(ness).h had comments inconsistent with the
   actual setting of __FLOAT_WORD_ORDER, which I also removed.

 - I *removed* copyright boilerplate from the few bits/endian(ness).h
   headers that had it; these files record a single fact in a fashion
   dictated by an external spec, so I do not think they are copyrightable.

As long as I was changing every copy of ieee754.h in the tree, I
noticed that only the MIPS variant includes float.h, because it uses
LDBL_MANT_DIG to decide among three different versions of
ieee854_long_double.  This patch makes it not include float.h when
GCC’s intrinsic __LDBL_MANT_DIG__ is available.

	* string/endian.h: Unconditionally define LITTLE_ENDIAN,
	BIG_ENDIAN, PDP_ENDIAN, and BYTE_ORDER.	 Condition byteswapping
	macros only on !__ASSEMBLER__.	Move the definitions of
	__BIG_ENDIAN, __LITTLE_ENDIAN, __PDP_ENDIAN, __FLOAT_WORD_ORDER,
	and __LONG_LONG_PAIR to...
	* string/bits/endian.h: ...this new file, which includes
	the renamed header bits/endianness.h for the definition of
	__BYTE_ORDER and possibly __FLOAT_WORD_ORDER.

	* string/Makefile: Install bits/endianness.h.
	* include/bits/endian.h: New wrapper.

	* bits/endian.h: Rename to bits/endianness.h.
	Add multiple-include guard.  Rewrite the comment explaining what
	the machine-specific variants of this file should do.

	* sysdeps/unix/sysv/linux/ia64/bits/endian.h:
	Move to sysdeps/ia64.

	* sysdeps/aarch64/bits/endian.h
	* sysdeps/alpha/bits/endian.h
	* sysdeps/arm/bits/endian.h
	* sysdeps/csky/bits/endian.h
	* sysdeps/hppa/bits/endian.h
	* sysdeps/ia64/bits/endian.h
	* sysdeps/m68k/bits/endian.h
	* sysdeps/microblaze/bits/endian.h
	* sysdeps/mips/bits/endian.h
	* sysdeps/nios2/bits/endian.h
	* sysdeps/powerpc/bits/endian.h
	* sysdeps/riscv/bits/endian.h
	* sysdeps/s390/bits/endian.h
	* sysdeps/sh/bits/endian.h
	* sysdeps/sparc/bits/endian.h
	* sysdeps/x86/bits/endian.h:
	Rename to endianness.h; canonicalize form of file; remove
	redundant definitions of __FLOAT_WORD_ORDER.

	* sysdeps/powerpc/bits/endianness.h: Remove logic to check for
	broken compilers.

	* ctype/ctype.h
	* sysdeps/aarch64/nptl/bits/pthreadtypes-arch.h
	* sysdeps/arm/nptl/bits/pthreadtypes-arch.h
	* sysdeps/csky/nptl/bits/pthreadtypes-arch.h
	* sysdeps/ia64/ieee754.h
	* sysdeps/ieee754/ieee754.h
	* sysdeps/ieee754/ldbl-128/ieee754.h
	* sysdeps/ieee754/ldbl-128ibm/ieee754.h
	* sysdeps/m68k/nptl/bits/pthreadtypes-arch.h
	* sysdeps/microblaze/nptl/bits/pthreadtypes-arch.h
	* sysdeps/mips/ieee754/ieee754.h
	* sysdeps/mips/nptl/bits/pthreadtypes-arch.h
	* sysdeps/nios2/nptl/bits/pthreadtypes-arch.h
	* sysdeps/nptl/pthread.h
	* sysdeps/riscv/nptl/bits/pthreadtypes-arch.h
	* sysdeps/sh/nptl/bits/pthreadtypes-arch.h
	* sysdeps/sparc/sparc32/ieee754.h
	* sysdeps/unix/sysv/linux/generic/bits/stat.h
	* sysdeps/unix/sysv/linux/generic/bits/statfs.h
	* sysdeps/unix/sysv/linux/sys/acct.h
	* wctype/bits/wctype-wchar.h:
	Include bits/endian.h, not endian.h.

	* sysdeps/unix/sysv/linux/hppa/pthread.h: Don’t include endian.h.

	* sysdeps/mips/ieee754/ieee754.h: Use __LDBL_MANT_DIG__
	in ifdefs, instead of LDBL_MANT_DIG.  Only include float.h
	when __LDBL_MANT_DIG__ is not predefined, in which case
	define __LDBL_MANT_DIG__ to equal LDBL_MANT_DIG.
2019-10-01 14:54:46 -07:00
Paul Eggert
5a82c74822 Prefer https to http for gnu.org and fsf.org URLs
Also, change sources.redhat.com to sourceware.org.
This patch was automatically generated by running the following shell
script, which uses GNU sed, and which avoids modifying files imported
from upstream:

sed -ri '
  s,(http|ftp)(://(.*\.)?(gnu|fsf|sourceware)\.org($|[^.]|\.[^a-z])),https\2,g
  s,(http|ftp)(://(.*\.)?)sources\.redhat\.com($|[^.]|\.[^a-z]),https\2sourceware.org\4,g
' \
  $(find $(git ls-files) -prune -type f \
      ! -name '*.po' \
      ! -name 'ChangeLog*' \
      ! -path COPYING ! -path COPYING.LIB \
      ! -path manual/fdl-1.3.texi ! -path manual/lgpl-2.1.texi \
      ! -path manual/texinfo.tex ! -path scripts/config.guess \
      ! -path scripts/config.sub ! -path scripts/install-sh \
      ! -path scripts/mkinstalldirs ! -path scripts/move-if-change \
      ! -path INSTALL ! -path  locale/programs/charmap-kw.h \
      ! -path po/libc.pot ! -path sysdeps/gnu/errlist.c \
      ! '(' -name configure \
            -execdir test -f configure.ac -o -f configure.in ';' ')' \
      ! '(' -name preconfigure \
            -execdir test -f preconfigure.ac ';' ')' \
      -print)

and then by running 'make dist-prepare' to regenerate files built
from the altered files, and then executing the following to cleanup:

  chmod a+x sysdeps/unix/sysv/linux/riscv/configure
  # Omit irrelevant whitespace and comment-only changes,
  # perhaps from a slightly-different Autoconf version.
  git checkout -f \
    sysdeps/csky/configure \
    sysdeps/hppa/configure \
    sysdeps/riscv/configure \
    sysdeps/unix/sysv/linux/csky/configure
  # Omit changes that caused a pre-commit check to fail like this:
  # remote: *** error: sysdeps/powerpc/powerpc64/ppc-mcount.S: trailing lines
  git checkout -f \
    sysdeps/powerpc/powerpc64/ppc-mcount.S \
    sysdeps/unix/sysv/linux/s390/s390-64/syscall.S
  # Omit change that caused a pre-commit check to fail like this:
  # remote: *** error: sysdeps/sparc/sparc64/multiarch/memcpy-ultra3.S: last line does not end in newline
  git checkout -f sysdeps/sparc/sparc64/multiarch/memcpy-ultra3.S
2019-09-07 02:43:31 -07:00
Feng Xue
b68fabfbbc aarch64: Disable using DC ZVA in emag memset
* sysdeps/aarch64/multiarch/memset_base64.S (DC_ZVA_THRESHOLD):
    Disable DC ZVA code if this macro is defined as zero.
    * sysdeps/aarch64/multiarch/memset_emag.S (DC_ZVA_THRESHOLD):
    Change to zero to disable using DC ZVA.
2019-08-14 10:58:21 +08:00
Joseph Myers
0175c9e9be Declare most TS 18661-1 interfaces for C2X.
C2X adds the interfaces from TS 18661-1, and all except a handful in
Annex F are unconditionally visible in C2X rather than only visible
when __STDC_WANT_IEC_60559_BFP_EXT__ is defined.  This patch updates
glibc headers accordingly: most uses of __GLIBC_USE
(IEC_60559_BFP_EXT) are changed to a new __GLIBC_USE
(IEC_60559_BFP_EXT_C2X).  (Regarding totalorder and totalordermag, the
type-generic macros in tgmath.h will go away when the functions are
changed to take pointer arguments.)
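
A sketch of the resulting header pattern (the declaration is
illustrative):

/* Before: visible only with __STDC_WANT_IEC_60559_BFP_EXT__.  */
#if __GLIBC_USE (IEC_60559_BFP_EXT)
extern double example_ts18661_fn (double);
#endif

/* After: also visible by default in C2X.  */
#if __GLIBC_USE (IEC_60559_BFP_EXT_C2X)
extern double example_ts18661_fn (double);
#endif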

	* bits/libc-header-start.h (__GLIBC_USE_IEC_60559_BFP_EXT): Update
	comment.
	(__GLIBC_USE_IEC_60559_BFP_EXT_C2X): New macro.
	* bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Change to
	[__GLIBC_USE (IEC_60559_BFP_EXT_C2X)].
	* include/limits.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
	* math/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
	* math/math.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
	* stdlib/bits/stdlib-ldbl.h [__GLIBC_USE (IEC_60559_BFP_EXT)]:
	Likewise.
	* stdlib/stdint.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
	* stdlib/stdlib.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise.
	* sysdeps/aarch64/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]:
	Likewise.
	* sysdeps/alpha/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]:
	Likewise.
	* sysdeps/arm/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]:
	Likewise.
	* sysdeps/csky/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]:
	Likewise.
	* sysdeps/hppa/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]:
	Likewise.
	* sysdeps/ia64/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]:
	Likewise.
	* sysdeps/m68k/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]:
	Likewise.
	* sysdeps/microblaze/bits/fenv.h [__GLIBC_USE
	(IEC_60559_BFP_EXT)]: Likewise.
	* sysdeps/mips/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]:
	Likewise.
	* sysdeps/nios2/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]:
	Likewise.
	* sysdeps/powerpc/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]:
	Likewise.
	* sysdeps/riscv/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]:
	Likewise.
	* sysdeps/s390/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]:
	Likewise.
	* sysdeps/sh/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]:
	Likewise.
	* sysdeps/sparc/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]:
	Likewise.
	* sysdeps/x86/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]:
	Likewise.
	* math/bits/mathcalls.h [__GLIBC_USE (IEC_60559_BFP_EXT)]:
	Likewise, except for totalorder, totalordermag, getpayload,
	setpayload and setpayloadsig.
	* math/tgmath.h [__GLIBC_USE (IEC_60559_BFP_EXT)]: Likewise,
	except for totalorder and totalordermag.
2019-08-13 11:28:51 +00:00
Szabolcs Nagy
30ba037546 aarch64: simplify the DT_AARCH64_VARIANT_PCS handling code
Remove unnecessary variant_pcs field: the dynamic tag can be checked
directly.

	* sysdeps/aarch64/dl-machine.h (elf_machine_runtime_setup): Remove the
	DT_AARCH64_VARIANT_PCS check.
	(elf_machine_lazy_rel): Use l_info[DT_AARCH64 (VARIANT_PCS)].
	* sysdeps/aarch64/linkmap.h (struct link_map_machine): Remove
	variant_pcs.
2019-07-10 15:28:00 +01:00
Szabolcs Nagy
2b8a3c86e7 aarch64: new ifunc resolver ABI
Passing a second argument to the ifunc resolver allows accessing
AT_HWCAP2 values from the resolver. AArch64 will start using AT_HWCAP2
on Linux because, for ILP32 to remain compatible with the LP64 ABI, no
more than 32 hwcap flag bits can be held in AT_HWCAP, which is already
used up.

Currently the relocation ordering logic does not guarantee that ifunc
resolvers can call libc APIs or access libc objects, so only the
resolver arguments and runtime-environment-dependent instructions can
be used to do the dispatch (this affects ifunc resolvers outside of
libc).

Since the ifunc resolver is target-specific and only supposed to be
called by the dynamic linker, the call ABI can be changed in a
backward-compatible way:

The old call ABI passed hwcap as uint64_t; the new ABI sets the
_IFUNC_ARG_HWCAP flag in the hwcap and passes a second argument
that is a pointer to an extendible struct. A resolver has to check
the _IFUNC_ARG_HWCAP flag before accessing the second argument.

The new sys/ifunc.h installed header has the definitions for the
new ABI, everything is in the implementation reserved namespace.

An alternative approach is to try to support extern calls from ifunc
resolvers such as getauxval, but that seems non-trivial
https://sourceware.org/ml/libc-alpha/2017-01/msg00468.html
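
A sketch of a resolver written against the new ABI (the hwcap2 bit
tested is a placeholder):

#include <stdint.h>
#include <sys/ifunc.h>

#define HWCAP2_SOMEFEATURE (1UL << 0)   /* placeholder AT_HWCAP2 bit */

typedef int impl_fn (const char *);
extern impl_fn impl_default, impl_feature;

static impl_fn *
resolve_example (uint64_t hwcap, const __ifunc_arg_t *arg)
{
  uint64_t hwcap2 = 0;
  if (hwcap & _IFUNC_ARG_HWCAP)         /* second argument is valid */
    hwcap2 = arg->_hwcap2;
  return (hwcap2 & HWCAP2_SOMEFEATURE) ? impl_feature : impl_default;
}

int example (const char *) __attribute__ ((ifunc ("resolve_example")));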

	* sysdeps/aarch64/Makefile: Install sys/ifunc.h and add tests.
	* sysdeps/aarch64/dl-irel.h (elf_ifunc_invoke): Update to new ABI.
	* sysdeps/aarch64/sys/ifunc.h: New file.
	* sysdeps/aarch64/tst-ifunc-arg-1.c: New file.
	* sysdeps/aarch64/tst-ifunc-arg-2.c: New file.
2019-07-04 11:13:32 +01:00
Szabolcs Nagy
82bc69c012 aarch64: handle STO_AARCH64_VARIANT_PCS
Avoid lazy binding of symbols that may follow a variant PCS with different
register usage convention from the base PCS.

Currently the lazy binding entry code does not preserve all the registers
required for AdvSIMD and SVE vector calls.  Saving and restoring all
registers unconditionally may break existing binaries, even if they never
use vector calls, because of the larger stack requirement for lazy
resolution, which can be significant on an SVE system.

The solution is to mark all symbols in the symbol table that may follow
a variant PCS so the dynamic linker can handle them specially.  In this
patch such symbols are always resolved at load time, not lazily.

So currently LD_AUDIT for variant PCS symbols is not supported; for
that, the _dl_runtime_profile entry would need to be changed, e.g. to
unconditionally save/restore all registers (but pass down the argument
and return-value registers to the pltenter/pltexit callbacks according
to the base PCS).

This patch also removes a __builtin_expect from the modified code because
the branch prediction hint did not seem useful.
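
A sketch of the load-time decision (the flag value is from the
AArch64 ELF ABI):

#include <elf.h>

#ifndef STO_AARCH64_VARIANT_PCS
# define STO_AARCH64_VARIANT_PCS 0x80
#endif

/* Variant-PCS symbols may rely on registers the lazy-resolution
   trampoline does not preserve, so bind them eagerly.  */
static int
needs_eager_binding (const Elf64_Sym *sym)
{
  return (sym->st_other & STO_AARCH64_VARIANT_PCS) != 0;
}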

	* sysdeps/aarch64/dl-dtprocnum.h: New file.
	* sysdeps/aarch64/dl-machine.h (DT_AARCH64): Define.
	(elf_machine_runtime_setup): Handle DT_AARCH64_VARIANT_PCS.
	(elf_machine_lazy_rel): Check STO_AARCH64_VARIANT_PCS and bind such
	symbols at load time.
	* sysdeps/aarch64/linkmap.h (struct link_map_machine): Add variant_pcs.
2019-06-13 09:45:00 +01:00
Anton Youdkevitch
32e902a94e aarch64: thunderx2 memmove performance improvements
The performance improvement is about 20%-30% for
larger cases and about 1%-5% for smaller cases.

Used SIMD load/store instead of GPR for large
overlapping forward moves.

Reused existing memcpy implementation for smaller
or overlapping backward moves.

Fixed the existing memcpy implementation to allow it
to deal with the overlapping case.

Simplified loop tails in the memcpy implementation -
use branchless overlapping sequence of fixed length
load/stores instead of branching depending on the
size.

Converted str instructions to stp as a cleanup/optimization.

Added __memmove_thunderx2 to the list of the
available implementations.
2019-05-03 11:01:34 -07:00
Anton Youdkevitch
94e358f6d4 aarch64: thunderx2 memcpy implementation cleanup and streamlining
Here is the updated patch for improving the long unaligned
code path (the one using the "ext" instruction).
1. The always-taken conditional branch at the beginning is
removed.

2. Epilogue code is placed after the end of the loop to
reduce the number of branches.

3. The redundant "mov" instructions inside the loop are
gone due to the changed order of the registers in the "ext"
instructions inside the loop; the prologue has an additional
"ext" instruction.

4. Updating the count in the prologue was hoisted out, as
it is the same update for each prologue.

5. Invariant code of the loop epilogue was hoisted out.

6. As the current size of the ext chunk is exactly 16
instructions long, a "nop" was added at the beginning
of the code sequence so that the loop entry for all the
chunks is aligned.

	* sysdeps/aarch64/multiarch/memcpy_thunderx2.S: Cleanup branching
	and remove redundant code.
2019-04-05 13:59:54 -07:00
Joseph Myers
a04549c194 Break more lines before not after operators.
This patch makes further coding style fixes where code was breaking
lines after an operator, contrary to the GNU Coding Standards.  As
with the previous patch, it is limited to files following a reasonable
approximation to GNU style already, and is not exhaustive; more such
issues remain to be fixed.

Tested for x86_64, and with build-many-glibcs.py.
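
For example, the preferred shape is:

/* GNU style: break before the operator, not after it.  */
int
example (int a, int b)
{
  return (a
          + b);
}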

	* dirent/dirent.h [!_DIRENT_HAVE_D_NAMLEN
	&& _DIRENT_HAVE_D_RECLEN] (_D_ALLOC_NAMLEN): Break lines before
	rather than after operators.
	* elf/cache.c (print_cache): Likewise.
	* gshadow/fgetsgent_r.c (__fgetsgent_r): Likewise.
	* htl/pt-getattr.c (__pthread_getattr_np): Likewise.
	* hurd/hurdinit.c (_hurd_setproc): Likewise.
	* hurd/hurdkill.c (_hurd_sig_post): Likewise.
	* hurd/hurdlookup.c (__file_name_lookup_under): Likewise.
	* hurd/hurdsig.c (_hurd_internal_post_signal): Likewise.
	(reauth_proc): Likewise.
	* hurd/lookup-at.c (__file_name_lookup_at): Likewise.
	(__file_name_split_at): Likewise.
	(__directory_name_split_at): Likewise.
	* hurd/lookup-retry.c (__hurd_file_name_lookup_retry): Likewise.
	* hurd/port2fd.c (_hurd_port2fd): Likewise.
	* iconv/gconv_dl.c (do_print): Likewise.
	* inet/netinet/in.h (struct sockaddr_in): Likewise.
	* libio/wstrops.c (_IO_wstr_seekoff): Likewise.
	* locale/setlocale.c (new_composite_name): Likewise.
	* malloc/memusagestat.c (main): Likewise.
	* misc/fstab.c (fstab_convert): Likewise.
	* nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_usercnt):
	Likewise.
	* nss/nss_compat/compat-grp.c (getgrent_next_nss): Likewise.
	(getgrent_next_file): Likewise.
	(internal_getgrnam_r): Likewise.
	(internal_getgrgid_r): Likewise.
	* nss/nss_compat/compat-initgroups.c (getgrent_next_nss):
	Likewise.
	(internal_getgrent_r): Likewise.
	* nss/nss_compat/compat-pwd.c (getpwent_next_nss_netgr): Likewise.
	(getpwent_next_nss): Likewise.
	(getpwent_next_file): Likewise.
	(internal_getpwnam_r): Likewise.
	(internal_getpwuid_r): Likewise.
	* nss/nss_compat/compat-spwd.c (getspent_next_nss_netgr):
	Likewise.
	(getspent_next_nss): Likewise.
	(internal_getspnam_r): Likewise.
	* pwd/fgetpwent_r.c (__fgetpwent_r): Likewise.
	* shadow/fgetspent_r.c (__fgetspent_r): Likewise.
	* string/strchr.c (STRCHR): Likewise.
	* string/strchrnul.c (STRCHRNUL): Likewise.
	* sysdeps/aarch64/fpu/fpu_control.h (_FPU_FPCR_IEEE): Likewise.
	* sysdeps/aarch64/sfp-machine.h (_FP_CHOOSENAN): Likewise.
	* sysdeps/csky/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/generic/memcopy.h (PAGE_COPY_FWD_MAYBE): Likewise.
	* sysdeps/generic/symbol-hacks.h (__stack_chk_fail_local):
	Likewise.
	* sysdeps/gnu/netinet/ip_icmp.h (ICMP_INFOTYPE): Likewise.
	* sysdeps/gnu/updwtmp.c (TRANSFORM_UTMP_FILE_NAME): Likewise.
	* sysdeps/gnu/utmp_file.c (TRANSFORM_UTMP_FILE_NAME): Likewise.
	* sysdeps/hppa/jmpbuf-unwind.h (_JMPBUF_UNWINDS): Likewise.
	* sysdeps/mach/hurd/bits/stat.h (S_ISPARE): Likewise.
	* sysdeps/mach/hurd/dl-sysdep.c (_dl_sysdep_start): Likewise.
	(open_file): Likewise.
	* sysdeps/mach/hurd/htl/pt-mutexattr-setprotocol.c
	(pthread_mutexattr_setprotocol): Likewise.
	* sysdeps/mach/hurd/ioctl.c (__ioctl): Likewise.
	* sysdeps/mach/hurd/mmap.c (__mmap): Likewise.
	* sysdeps/mach/hurd/ptrace.c (ptrace): Likewise.
	* sysdeps/mach/hurd/spawni.c (__spawni): Likewise.
	* sysdeps/microblaze/dl-machine.h (elf_machine_type_class):
	Likewise.
	(elf_machine_rela): Likewise.
	* sysdeps/mips/mips32/sfp-machine.h (_FP_CHOOSENAN): Likewise.
	* sysdeps/mips/mips64/sfp-machine.h (_FP_CHOOSENAN): Likewise.
	* sysdeps/mips/sys/asm.h (multiple #if conditionals): Likewise.
	* sysdeps/posix/rename.c (rename): Likewise.
	* sysdeps/powerpc/novmx-sigjmp.c (__novmx__sigjmp_save): Likewise.
	* sysdeps/powerpc/sigjmp.c (__vmx__sigjmp_save): Likewise.
	* sysdeps/s390/fpu/fenv_libc.h (FPC_VALID_MASK): Likewise.
	* sysdeps/s390/utf8-utf16-z9.c (gconv_end): Likewise.
	* sysdeps/unix/grantpt.c (grantpt): Likewise.
	* sysdeps/unix/sysv/linux/a.out.h (N_TXTOFF): Likewise.
	* sysdeps/unix/sysv/linux/updwtmp.c (TRANSFORM_UTMP_FILE_NAME):
	Likewise.
	* sysdeps/unix/sysv/linux/utmp_file.c (TRANSFORM_UTMP_FILE_NAME):
	Likewise.
	* sysdeps/x86/cpu-features.c (get_common_indices): Likewise.
	* time/tzfile.c (__tzfile_compute): Likewise.
2019-02-25 13:19:19 +00:00
Feng Xue
83d1cc42d8 aarch64: Optimized memchr specific to AmpereComputing emag
This version uses general-register-based memory instructions to load
data, because vector-register-based ones are slightly slower on emag.

Character matching is performed on a 16-byte (both in size and
alignment) memory block in parallel in each iteration.
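
A C sketch of the general-register match step (illustrative; the real
code applies it to a 16-byte block via two 64-bit words):

#include <stdint.h>

/* Returns a mask with 0x80 set in each byte of WORD equal to C,
   zero if there is no match.  */
static inline uint64_t
match_bytes (uint64_t word, uint8_t c)
{
  uint64_t x = word ^ (0x0101010101010101ULL * c);  /* zero on match */
  return (x - 0x0101010101010101ULL) & ~x & 0x8080808080808080ULL;
}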

    * sysdeps/aarch64/memchr.S (__memchr): Rename to MEMCHR.
    [!MEMCHR](MEMCHR): Set to __memchr.
    * sysdeps/aarch64/multiarch/Makefile (sysdep_routines):
    Add memchr_generic and memchr_nosimd.
    * sysdeps/aarch64/multiarch/ifunc-impl-list.c
    (__libc_ifunc_impl_list): Add memchr ifuncs.
    * sysdeps/aarch64/multiarch/memchr.c: New file.
    * sysdeps/aarch64/multiarch/memchr_generic.S: Likewise.
    * sysdeps/aarch64/multiarch/memchr_nosimd.S: Likewise.
2019-02-01 08:14:21 -05:00
Feng Xue
c7d3890ff5 aarch64: Optimized memset specific to AmpereComputing emag
This version uses general-register-based memory stores instead of
vector-register-based ones, as the former are faster than the latter
on emag.

The fact that the DC ZVA size on emag is 64 bytes is used by the
IFUNC dispatch to select this memset, so that the cost of a runtime
check on the DC ZVA size can be saved.

    * sysdeps/aarch64/multiarch/Makefile (sysdep_routines):
    Add memset_emag.
    * sysdeps/aarch64/multiarch/ifunc-impl-list.c
    (__libc_ifunc_impl_list): Add __memset_emag to memset ifunc.
    * sysdeps/aarch64/multiarch/memset.c (libc_ifunc):
    Add IS_EMAG check for ifunc dispatch.
    * sysdeps/aarch64/multiarch/memset_base64.S: New file.
    * sysdeps/aarch64/multiarch/memset_emag.S: New file.
2019-02-01 07:59:18 -05:00
Wilco Dijkstra
02f440c1ef [AArch64] Add ifunc support for Ares
Add Ares to the midr_el0 list and support ifunc dispatch.  Since Ares
supports 2 128-bit loads/stores, use Neon registers for memcpy by
selecting __memcpy_falkor by default (we should rename this to
__memcpy_simd or similar).

	* manual/tunables.texi (glibc.cpu.name): Add ares tunable.
	* sysdeps/aarch64/multiarch/memcpy.c (__libc_memcpy): Use
	__memcpy_falkor for ares.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.h (IS_ARES):
	Add new define.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.c (cpu_list):
	Add ares cpu.
2019-01-09 10:35:34 +00:00
Joseph Myers
04277e02d7 Update copyright dates with scripts/update-copyrights.
* All files with FSF copyright notices: Update copyright dates
	using scripts/update-copyrights.
	* locale/programs/charmap-kw.h: Regenerated.
	* locale/programs/locfile-kw.h: Likewise.
2019-01-01 00:11:28 +00:00
Wilco Dijkstra
5770c0ad1e [AArch64] Adjust writeback in non-zero memset
This fixes an inefficiency in the non-zero memset.  Delaying the
writeback until the end of the loop is slightly faster on some cores;
this shows a ~5% performance gain on Cortex-A53 when doing large
non-zero memsets.

	* sysdeps/aarch64/memset.S (MEMSET): Improve non-zero memset loop.
2018-11-20 12:37:00 +00:00
Steve Ellcey
f0da0bcf8b Remove extra space at end of line. 2018-10-16 11:02:03 -07:00
Anton Youdkevitch
75c1aee500 aarch64: optimized memcpy implementation for thunderx2
Since aligned loads and stores are a huge performance
advantage, the implementation always tries to do aligned
accesses. Besides the cases where the src and dst addresses
are aligned (or unaligned evenly), there are cases where
they are unevenly unaligned. For such cases (if the length
is big enough) the ext instruction is used to merge-and-shift
two memory chunks loaded from two adjacent aligned
locations, and then the adjusted chunk gets stored to an
aligned address.
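
A C rendition of the ext merge-and-shift step (illustrative):

#include <stdint.h>

/* Equivalent of `ext vout.16b, vlo.16b, vhi.16b, #k`: one 16-byte
   chunk built from bytes k..15 of the lower aligned load and bytes
   0..k-1 of the upper one.  */
static void
ext_merge (uint8_t out[16], const uint8_t lo[16],
           const uint8_t hi[16], unsigned k)
{
  for (unsigned i = 0; i < 16; i++)
    out[i] = (i + k < 16) ? lo[i + k] : hi[i + k - 16];
}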

Performance gain against the current T2 implementation:
     memcpy-large: 65K-32M: +40% - +10%
     memcpy-walk:  128-32M: +20% - +2%
2018-10-16 11:00:27 -07:00
Joseph Myers
c52944e8cc Remove unnecessary math_private.h includes.
After my changes to move various macros, inlines and other content
from math_private.h to more specific headers, many files including
math_private.h no longer need to do so.  Furthermore, since the
optimized inlines of various functions have been moved to
include/fenv.h or replaced by use of function names GCC inlines
automatically, a missing math_private.h include where one is
appropriate will reliably cause a build failure rather than possibly
causing code to be less well optimized while still building
successfully.  Thus, this patch removes includes of math_private.h
that are now unnecessary.  In the case of two RISC-V files, the
include is replaced by one of stdbool.h because the files in question
were relying on math_private.h to get a definition of bool.

Tested for x86_64 and x86, and with build-many-glibcs.py.

	* math/fromfp.h: Do not include <math_private.h>.
	* math/s_cacosh_template.c: Likewise.
	* math/s_casin_template.c: Likewise.
	* math/s_casinh_template.c: Likewise.
	* math/s_ccos_template.c: Likewise.
	* math/s_cproj_template.c: Likewise.
	* math/s_fdim_template.c: Likewise.
	* math/s_fmaxmag_template.c: Likewise.
	* math/s_fminmag_template.c: Likewise.
	* math/s_iseqsig_template.c: Likewise.
	* math/s_ldexp_template.c: Likewise.
	* math/s_nextdown_template.c: Likewise.
	* math/w_log1p_template.c: Likewise.
	* math/w_scalbln_template.c: Likewise.
	* sysdeps/aarch64/fpu/feholdexcpt.c: Likewise.
	* sysdeps/aarch64/fpu/fesetround.c: Likewise.
	* sysdeps/aarch64/fpu/fgetexcptflg.c: Likewise.
	* sysdeps/aarch64/fpu/ftestexcept.c: Likewise.
	* sysdeps/aarch64/fpu/s_llrint.c: Likewise.
	* sysdeps/aarch64/fpu/s_llrintf.c: Likewise.
	* sysdeps/aarch64/fpu/s_lrint.c: Likewise.
	* sysdeps/aarch64/fpu/s_lrintf.c: Likewise.
	* sysdeps/i386/fpu/s_atanl.c: Likewise.
	* sysdeps/i386/fpu/s_f32xaddf64.c: Likewise.
	* sysdeps/i386/fpu/s_f32xsubf64.c: Likewise.
	* sysdeps/i386/fpu/s_fdim.c: Likewise.
	* sysdeps/i386/fpu/s_logbl.c: Likewise.
	* sysdeps/i386/fpu/s_rintl.c: Likewise.
	* sysdeps/i386/fpu/s_significandl.c: Likewise.
	* sysdeps/ia64/fpu/s_matherrf.c: Likewise.
	* sysdeps/ia64/fpu/s_matherrl.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_atan.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_cbrt.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_fma.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_fmaf.c: Likewise.
	* sysdeps/ieee754/flt-32/s_cbrtf.c: Likewise.
	* sysdeps/ieee754/k_standardf.c: Likewise.
	* sysdeps/ieee754/k_standardl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_copysignl.c: Likewise.
	* sysdeps/ieee754/ldbl-64-128/s_finitel.c: Likewise.
	* sysdeps/ieee754/ldbl-64-128/s_fpclassifyl.c: Likewise.
	* sysdeps/ieee754/ldbl-64-128/s_isinfl.c: Likewise.
	* sysdeps/ieee754/ldbl-64-128/s_isnanl.c: Likewise.
	* sysdeps/ieee754/ldbl-64-128/s_signbitl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_cbrtl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_fma.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_fmal.c: Likewise.
	* sysdeps/ieee754/s_signgam.c: Likewise.
	* sysdeps/powerpc/power5+/fpu/s_modf.c: Likewise.
	* sysdeps/powerpc/power5+/fpu/s_modff.c: Likewise.
	* sysdeps/powerpc/power7/fpu/s_logbf.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_ceil.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_floor.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_nearbyint.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_round.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_roundeven.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_trunc.c: Likewise.
	* sysdeps/riscv/rvd/s_finite.c: Likewise.
	* sysdeps/riscv/rvd/s_fmax.c: Likewise.
	* sysdeps/riscv/rvd/s_fmin.c: Likewise.
	* sysdeps/riscv/rvd/s_fpclassify.c: Likewise.
	* sysdeps/riscv/rvd/s_isinf.c: Likewise.
	* sysdeps/riscv/rvd/s_isnan.c: Likewise.
	* sysdeps/riscv/rvd/s_issignaling.c: Likewise.
	* sysdeps/riscv/rvf/fegetround.c: Likewise.
	* sysdeps/riscv/rvf/feholdexcpt.c: Likewise.
	* sysdeps/riscv/rvf/fesetenv.c: Likewise.
	* sysdeps/riscv/rvf/fesetround.c: Likewise.
	* sysdeps/riscv/rvf/feupdateenv.c: Likewise.
	* sysdeps/riscv/rvf/fgetexcptflg.c: Likewise.
	* sysdeps/riscv/rvf/ftestexcept.c: Likewise.
	* sysdeps/riscv/rvf/s_ceilf.c: Likewise.
	* sysdeps/riscv/rvf/s_finitef.c: Likewise.
	* sysdeps/riscv/rvf/s_floorf.c: Likewise.
	* sysdeps/riscv/rvf/s_fmaxf.c: Likewise.
	* sysdeps/riscv/rvf/s_fminf.c: Likewise.
	* sysdeps/riscv/rvf/s_fpclassifyf.c: Likewise.
	* sysdeps/riscv/rvf/s_isinff.c: Likewise.
	* sysdeps/riscv/rvf/s_isnanf.c: Likewise.
	* sysdeps/riscv/rvf/s_issignalingf.c: Likewise.
	* sysdeps/riscv/rvf/s_nearbyintf.c: Likewise.
	* sysdeps/riscv/rvf/s_roundevenf.c: Likewise.
	* sysdeps/riscv/rvf/s_roundf.c: Likewise.
	* sysdeps/riscv/rvf/s_truncf.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_rint.c: Include <stdbool.h> instead of
	<math_private.h>.
	* sysdeps/riscv/rvf/s_rintf.c: Likewise.
2018-09-28 21:53:33 +00:00
Joseph Myers
9755bc4686 Use round functions not __round functions in glibc libm.
Continuing the move to use, within libm, public names for libm
functions that can be inlined as built-in functions on many
architectures, this patch moves calls to __round functions to call the
corresponding round names instead, with asm redirection to __round
when the calls are not inlined.

An additional complication arises in
sysdeps/ieee754/ldbl-128ibm/e_expl.c, where a call to roundl, with the
result converted to int, gets converted by the compiler to a call to
lroundl in the case of 32-bit long, resulting in localplt test
failures.  It's logically correct to let the compiler make such an
optimization; an appropriate asm redirection of lroundl to __lroundl
is thus added to that file (it's not needed anywhere else).

Tested for x86_64, and with build-many-glibcs.py.
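
A simplified sketch of the redirection mechanism (the real
MATH_REDIRECT macro in include/math.h generates these per type):

/* If the compiler does not inline round as a builtin, the emitted
   out-of-line call is renamed at the asm level so it binds to the
   internal __round, avoiding a PLT entry for the public name.  */
#ifndef NO_MATH_REDIRECT
extern double round (double) __asm__ ("__round");
extern float roundf (float) __asm__ ("__roundf");
#endif

/* Files that define round itself set NO_MATH_REDIRECT before header
   inclusion so their definitions are not renamed.  */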

	* include/math.h [!_ISOMAC && !(__FINITE_MATH_ONLY__ &&
	__FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (round): Redirect
	using MATH_REDIRECT.
	* sysdeps/aarch64/fpu/s_round.c: Define NO_MATH_REDIRECT before
	header inclusion.
	* sysdeps/aarch64/fpu/s_roundf.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_round.c: Likewise.
	* sysdeps/ieee754/dbl-64/wordsize-64/s_round.c: Likewise.
	* sysdeps/ieee754/float128/s_roundf128.c: Likewise.
	* sysdeps/ieee754/flt-32/s_roundf.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_roundl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_roundl.c: Likewise.
	* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_round.c: Likewise.
	* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_roundf.c: Likewise.
	* sysdeps/powerpc/powerpc64/fpu/multiarch/s_round.c: Likewise.
	* sysdeps/powerpc/powerpc64/fpu/multiarch/s_roundf.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_round.c: Likewise.
	* sysdeps/riscv/rvf/s_roundf.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_roundl.c: Likewise.
	(round): Redirect to __round.
	(__roundl): Call round instead of __round.
	* sysdeps/powerpc/fpu/math_private.h [_ARCH_PWR5X] (__round):
	Remove macro.
	[_ARCH_PWR5X] (__roundf): Likewise.
	* sysdeps/ieee754/dbl-64/e_gamma_r.c (gamma_positive): Use round
	functions instead of __round variants.
	* sysdeps/ieee754/flt-32/e_gammaf_r.c (gammaf_positive): Likewise.
	* sysdeps/ieee754/ldbl-128/e_gammal_r.c (gammal_positive):
	Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c (gammal_positive):
	Likewise.
	* sysdeps/ieee754/ldbl-96/e_gammal_r.c (gammal_positive):
	Likewise.
	* sysdeps/x86/fpu/powl_helper.c (__powl_helper): Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_expl.c (lroundl): Redirect to
	__lroundl.
	(__ieee754_expl): Call roundl instead of __roundl.
2018-09-27 12:35:23 +00:00
Joseph Myers
7abf97bed9 Use trunc functions not __trunc functions in glibc libm.
Continuing the move to use, within libm, public names for libm
functions that can be inlined as built-in functions on many
architectures, this patch moves calls to __trunc functions to call the
corresponding trunc names instead, with asm redirection to __trunc
when the calls are not inlined.

Tested for x86_64, and with build-many-glibcs.py.

	* include/math.h [!_ISOMAC && !(__FINITE_MATH_ONLY__ &&
	__FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (trunc): Redirect
	using MATH_REDIRECT.
	* sysdeps/aarch64/fpu/s_trunc.c: Define NO_MATH_REDIRECT before
	header inclusion.
	* sysdeps/aarch64/fpu/s_truncf.c: Likewise.
	* sysdeps/ieee754/dbl-64/wordsize-64/s_trunc.c: Likewise.
	* sysdeps/ieee754/float128/s_truncf128.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_trunc.c: Likewise.
	* sysdeps/ieee754/flt-32/s_truncf.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_truncl.c: Likewise.
	* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_trunc.c: Likewise.
	* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_truncf.c: Likewise.
	* sysdeps/powerpc/powerpc64/fpu/multiarch/s_trunc.c: Likewise.
	* sysdeps/powerpc/powerpc64/fpu/multiarch/s_truncf.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_trunc.c: Likewise.
	* sysdeps/riscv/rvf/s_truncf.c: Likewise.
	* sysdeps/sparc/sparc64/fpu/multiarch/s_trunc.c: Likewise.
	* sysdeps/sparc/sparc64/fpu/multiarch/s_truncf.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_trunc.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_truncf.c: Likewise.
	* sysdeps/m68k/m680x0/fpu/s_trunc_template.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_truncl.c: Likewise.
	(ceil): Redirect to __ceil.
	(floor): Redirect to __floor.
	(trunc): Redirect to __trunc.
	(__truncl): Call trunc instead of __trunc.
	* sysdeps/powerpc/fpu/math_private.h [_ARCH_PWR5X] (__trunc):
	Remove macro.
	[_ARCH_PWR5X] (__truncf): Likewise.
	* sysdeps/ieee754/dbl-64/e_gamma_r.c (__ieee754_gamma_r): Use
	trunc functions instead of __trunc variants.
	* sysdeps/ieee754/flt-32/e_gammaf_r.c (__ieee754_gammaf_r):
	Likewise.
	* sysdeps/ieee754/ldbl-128/e_gammal_r.c (__ieee754_gammal_r):
	Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c (__ieee754_gammal_r):
	Likewise.
	* sysdeps/ieee754/ldbl-96/e_gammal_r.c (__ieee754_gammal_r):
	Likewise.
2018-09-20 21:11:10 +00:00
Joseph Myers
71223ef909 Use ceil functions not __ceil functions in glibc libm.
Continuing the move to use, within libm, public names for libm
functions that can be inlined as built-in functions on many
architectures, this patch moves calls to __ceil functions to call the
corresponding ceil names instead, with asm redirection to __ceil when
the calls are not inlined.

Tested for x86_64, and with build-many-glibcs.py.

	* include/math.h [!_ISOMAC && !(__FINITE_MATH_ONLY__ &&
	__FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (ceil): Redirect
	using MATH_REDIRECT.
	* sysdeps/aarch64/fpu/s_ceil.c: Define NO_MATH_REDIRECT before
	header inclusion.
	* sysdeps/aarch64/fpu/s_ceilf.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_ceil.c: Likewise.
	* sysdeps/ieee754/dbl-64/wordsize-64/s_ceil.c: Likewise.
	* sysdeps/ieee754/float128/s_ceilf128.c: Likewise.
	* sysdeps/ieee754/flt-32/s_ceilf.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_ceill.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_ceill.c: Likewise.
	* sysdeps/m68k/m680x0/fpu/s_ceil_template.c: Likewise.
	* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_ceil.c: Likewise.
	* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_ceilf.c: Likewise.
	* sysdeps/powerpc/powerpc64/fpu/multiarch/s_ceil.c: Likewise.
	* sysdeps/powerpc/powerpc64/fpu/multiarch/s_ceilf.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_ceil.c: Likewise.
	* sysdeps/riscv/rvf/s_ceilf.c: Likewise.
	* sysdeps/sparc/sparc64/fpu/multiarch/s_ceil.c: Likewise.
	* sysdeps/sparc/sparc64/fpu/multiarch/s_ceilf.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_ceil.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_ceilf.c: Likewise.
	* sysdeps/powerpc/fpu/math_private.h [_ARCH_PWR5X] (__ceil):
	Remove macro.
	* sysdeps/ieee754/dbl-64/e_gamma_r.c (gamma_positive): Use ceil
	functions instead of __ceil variants.
	* sysdeps/ieee754/flt-32/e_gammaf_r.c (gammaf_positive): Likewise.
	* sysdeps/ieee754/ldbl-128/e_gammal_r.c (gammal_positive):
	Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c (gammal_positive):
	Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_truncl.c (__truncl): Likewise.
	* sysdeps/ieee754/ldbl-96/e_gammal_r.c (gammal_positive):
	Likewise.
	* sysdeps/powerpc/power5+/fpu/s_modf.c (__modf): Likewise.
	* sysdeps/powerpc/power5+/fpu/s_modff.c (__modff): Likewise.
2018-09-17 20:42:06 +00:00
Joseph Myers
f29b6f17e4 Use rint functions not __rint functions in glibc libm.
Continuing the move to use, within libm, public names for libm
functions that can be inlined as built-in functions on many
architectures, this patch moves calls to __rint functions to call the
corresponding rint names instead, with asm redirection to __rint when
the calls are not inlined.  The x86_64 math_private.h is removed as no
longer useful after this patch.

This patch is relative to a tree with my floor patch
<https://sourceware.org/ml/libc-alpha/2018-09/msg00148.html> applied,
and much the same considerations arise regarding possibly replacing an
IFUNC call with a direct inline expansion.

Tested for x86_64, and with build-many-glibcs.py.

	* include/math.h [!_ISOMAC && !(__FINITE_MATH_ONLY__ &&
	__FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (rint): Redirect
	using MATH_REDIRECT.
	* sysdeps/aarch64/fpu/s_rint.c: Define NO_MATH_REDIRECT before
	header inclusion.
	* sysdeps/aarch64/fpu/s_rintf.c: Likewise.
	* sysdeps/alpha/fpu/s_rint.c: Likewise.
	* sysdeps/alpha/fpu/s_rintf.c: Likewise.
	* sysdeps/i386/fpu/s_rintl.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_rint.c: Likewise.
	* sysdeps/ieee754/dbl-64/wordsize-64/s_rint.c: Likewise.
	* sysdeps/ieee754/float128/s_rintf128.c: Likewise.
	* sysdeps/ieee754/flt-32/s_rintf.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_rintl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_rintl.c: Likewise.
	* sysdeps/m68k/coldfire/fpu/s_rint.c: Likewise.
	* sysdeps/m68k/coldfire/fpu/s_rintf.c: Likewise.
	* sysdeps/m68k/m680x0/fpu/s_rint.c: Likewise.
	* sysdeps/m68k/m680x0/fpu/s_rintf.c: Likewise.
	* sysdeps/m68k/m680x0/fpu/s_rintl.c: Likewise.
	* sysdeps/powerpc/fpu/s_rint.c: Likewise.
	* sysdeps/powerpc/fpu/s_rintf.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_rint.c: Likewise.
	* sysdeps/riscv/rvf/s_rintf.c: Likewise.
	* sysdeps/sparc/sparc32/sparcv9/fpu/multiarch/s_rint.c: Likewise.
	* sysdeps/sparc/sparc32/sparcv9/fpu/multiarch/s_rintf.c: Likewise.
	* sysdeps/sparc/sparc64/fpu/multiarch/s_rint.c: Likewise.
	* sysdeps/sparc/sparc64/fpu/multiarch/s_rintf.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_rint.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_rintf.c: Likewise.
	* sysdeps/x86_64/fpu/math_private.h: Remove file.
	* math/e_scalb.c (invalid_fn): Use rint functions instead of
	__rint variants.
	* math/e_scalbf.c (invalid_fn): Likewise.
	* math/e_scalbl.c (invalid_fn): Likewise.
	* sysdeps/ieee754/dbl-64/e_gamma_r.c (__ieee754_gamma_r):
	Likewise.
	* sysdeps/ieee754/flt-32/e_gammaf_r.c (__ieee754_gammaf_r):
	Likewise.
	* sysdeps/ieee754/k_standard.c (__kernel_standard): Likewise.
	* sysdeps/ieee754/k_standardl.c (__kernel_standard_l): Likewise.
	* sysdeps/ieee754/ldbl-128/e_gammal_r.c (__ieee754_gammal_r):
	Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c (__ieee754_gammal_r):
	Likewise.
	* sysdeps/ieee754/ldbl-96/e_gammal_r.c (__ieee754_gammal_r):
	Likewise.
	* sysdeps/powerpc/powerpc32/fpu/s_llrint.c (__llrint): Likewise.
	* sysdeps/powerpc/powerpc32/fpu/s_llrintf.c (__llrintf): Likewise.
2018-09-14 13:10:39 +00:00
Joseph Myers
e44acb2063 Use floor functions not __floor functions in glibc libm.
Similar to the changes that were made to call sqrt functions directly
in glibc, instead of __ieee754_sqrt variants, so that the compiler
could inline them automatically without needing special inline
definitions in lots of math_private.h headers, this patch makes libm
code call floor functions directly instead of __floor variants,
removing the inlines / macros for x86_64 (SSE4.1) and powerpc
(POWER5).

The redirection used to ensure that __ieee754_sqrt does still get
called when the compiler doesn't inline a built-in function expansion
is refactored so it can be applied to other functions; the refactoring
is arranged so it's not limited to unary functions either (it would be
reasonable to use this mechanism for copysign - removing the inline in
math_private_calls.h but also eliminating unnecessary local PLT entry
use in the cases (powerpc soft-float and e500v1, for IBM long double)
where copysign calls don't get inlined).

The point of this change is that more architectures can get floor
calls inlined where they weren't previously (AArch64, for example),
without needing special inline definitions in their math_private.h,
and existing such definitions in math_private.h headers can be
removed.

Note that it's possible that in some cases an inline may be used where
an IFUNC call was previously used - this is the case on x86_64, for
example.  I think the direct calls to floor are still appropriate; if
there's any significant performance cost from inline SSE2 floor
instead of an IFUNC call ending up with SSE4.1 floor, that indicates
that either the function should be doing something else that's faster
than using floor at all, or it should itself have IFUNC variants, or
that the compiler choice of inlining for generic tuning should change
to allow for the possibility that, by not inlining, an SSE4.1 IFUNC
might be called at runtime - but not that glibc should avoid calling
floor internally.  (After all, all the same considerations would apply
to any user program calling floor, where it might either be inlined or
left as an out-of-line call allowing for a possible IFUNC.)

Tested for x86_64, and with build-many-glibcs.py.
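
A minimal sketch of what a MATH_REDIRECT-style macro can look like (the
macro name is real, per the ChangeLog below; the exact expansion here is
an assumption for illustration):

    /* Hypothetical expansion: declare each floating type's public name
       with asm redirection to the __-prefixed internal entry point.  */
    #define MY_MATH_REDIRECT(FUNC, PREFIX)                           \
      extern float FUNC ## f (float) __asm__ (PREFIX #FUNC "f");    \
      extern double FUNC (double) __asm__ (PREFIX #FUNC);
    MY_MATH_REDIRECT (floor, "__")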

	* include/math.h [!_ISOMAC && !(__FINITE_MATH_ONLY__ &&
	__FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (MATH_REDIRECT):
	New macro.
	[!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0)
	&& !NO_MATH_REDIRECT] (MATH_REDIRECT_LDBL): Likewise.
	[!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0)
	&& !NO_MATH_REDIRECT] (MATH_REDIRECT_F128): Likewise.
	[!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0)
	&& !NO_MATH_REDIRECT] (MATH_REDIRECT_UNARY_ARGS): Likewise.
	[!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0)
	&& !NO_MATH_REDIRECT] (sqrt): Redirect using MATH_REDIRECT.
	[!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0)
	&& !NO_MATH_REDIRECT] (floor): Likewise.
	* sysdeps/aarch64/fpu/s_floor.c: Define NO_MATH_REDIRECT before
	header inclusion.
	* sysdeps/aarch64/fpu/s_floorf.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_floor.c: Likewise.
	* sysdeps/ieee754/dbl-64/wordsize-64/s_floor.c: Likewise.
	* sysdeps/ieee754/float128/s_floorf128.c: Likewise.
	* sysdeps/ieee754/flt-32/s_floorf.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_floorl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_floorl.c: Likewise.
	* sysdeps/m68k/m680x0/fpu/s_floor_template.c: Likewise.
	* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_floor.c: Likewise.
	* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_floorf.c: Likewise.
	* sysdeps/powerpc/powerpc64/fpu/multiarch/s_floor.c: Likewise.
	* sysdeps/powerpc/powerpc64/fpu/multiarch/s_floorf.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_floor.c: Likewise.
	* sysdeps/riscv/rvf/s_floorf.c: Likewise.
	* sysdeps/sparc/sparc64/fpu/multiarch/s_floor.c: Likewise.
	* sysdeps/sparc/sparc64/fpu/multiarch/s_floorf.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_floor.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_floorf.c: Likewise.
	* sysdeps/powerpc/fpu/math_private.h [_ARCH_PWR5X] (__floor):
	Remove macro.
	[_ARCH_PWR5X] (__floorf): Likewise.
	* sysdeps/x86_64/fpu/math_private.h [__SSE4_1__] (__floor): Remove
	inline function.
	[__SSE4_1__] (__floorf): Likewise.
	* math/w_lgamma_main.c (LGFUNC (__lgamma)): Use floor functions
	instead of __floor variants.
	* math/w_lgamma_r_compat.c (__lgamma_r): Likewise.
	* math/w_lgammaf_main.c (LGFUNC (__lgammaf)): Likewise.
	* math/w_lgammaf_r_compat.c (__lgammaf_r): Likewise.
	* math/w_lgammal_main.c (LGFUNC (__lgammal)): Likewise.
	* math/w_lgammal_r_compat.c (__lgammal_r): Likewise.
	* math/w_tgamma_compat.c (__tgamma): Likewise.
	* math/w_tgamma_template.c (M_DECL_FUNC (__tgamma)): Likewise.
	* math/w_tgammaf_compat.c (__tgammaf): Likewise.
	* math/w_tgammal_compat.c (__tgammal): Likewise.
	* sysdeps/ieee754/dbl-64/e_lgamma_r.c (sin_pi): Likewise.
	* sysdeps/ieee754/dbl-64/k_rem_pio2.c (__kernel_rem_pio2):
	Likewise.
	* sysdeps/ieee754/dbl-64/lgamma_neg.c (__lgamma_neg): Likewise.
	* sysdeps/ieee754/flt-32/e_lgammaf_r.c (sin_pif): Likewise.
	* sysdeps/ieee754/flt-32/lgamma_negf.c (__lgamma_negf): Likewise.
	* sysdeps/ieee754/ldbl-128/e_lgammal_r.c (__ieee754_lgammal_r):
	Likewise.
	* sysdeps/ieee754/ldbl-128/e_powl.c (__ieee754_powl): Likewise.
	* sysdeps/ieee754/ldbl-128/lgamma_negl.c (__lgamma_negl):
	Likewise.
	* sysdeps/ieee754/ldbl-128/s_expm1l.c (__expm1l): Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_lgammal_r.c (__ieee754_lgammal_r):
	Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_powl.c (__ieee754_powl): Likewise.
	* sysdeps/ieee754/ldbl-128ibm/lgamma_negl.c (__lgamma_negl):
	Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_expm1l.c (__expm1l): Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_truncl.c (__truncl): Likewise.
	* sysdeps/ieee754/ldbl-96/e_lgammal_r.c (sin_pi): Likewise.
	* sysdeps/ieee754/ldbl-96/lgamma_negl.c (__lgamma_negl): Likewise.
	* sysdeps/powerpc/power5+/fpu/s_modf.c (__modf): Likewise.
	* sysdeps/powerpc/power5+/fpu/s_modff.c (__modff): Likewise.
2018-09-14 13:09:01 +00:00
Szabolcs Nagy
e70c176825 Add new exp and exp2 implementations
Optimized exp and exp2 implementations using a lookup table for
fractional powers of 2.  There are several variants, see e_exp_data.c,
they can be selected by modifying math_config.h allowing different
tradeoffs.

The default selection should be acceptable as generic libm code.
Worst case error is 0.509 ULP for exp and 0.507 ULP for exp2, on
aarch64 the rodata size is 2160 bytes, shared between exp and exp2.
On aarch64 .text + .rodata size decreased by 24912 bytes.

The non-nearest rounding error is less than 1 ULP even on targets
without efficient round implementation (although the error rate is
higher in that case).  Targets with single instruction, rounding mode
independent, to nearest integer rounding and conversion can use them
by setting TOINT_INTRINSICS and adding the necessary code to their
math_private.h.

The __exp1 code uses the same algorithm, so the error bound of pow
increased a bit.

New double precision error handling code was added following the
style of the single precision error handling code.

Improvements on Cortex-A72 compared to current glibc master:
exp throughput: 1.61x in [-9.9 9.9]
exp latency: 1.53x in [-9.9 9.9]
exp throughput: 1.13x in [0.5 1]
exp latency: 1.30x in [0.5 1]
exp2 throughput: 2.03x in [-9.9 9.9]
exp2 latency: 1.64x in [-9.9 9.9]

For small (< 1) inputs the current exp code uses a separate algorithm
so the speed up there is less.

Was tested on
aarch64-linux-gnu (TOINT_INTRINSICS, fma contraction) and
arm-linux-gnueabihf (!TOINT_INTRINSICS, no fma contraction) and
x86_64-linux-gnu (!TOINT_INTRINSICS, no fma contraction) and
powerpc64le-linux-gnu (!TOINT_INTRINSICS, fma contraction) targets,
only non-nearest rounding ulp errors increase and they are within
acceptable bounds (ulp updates are in separate patches).
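
A toy version of the table-driven scheme (a sketch only: the table size,
polynomial and scaling below are illustrative assumptions, not the
e_exp_data.c layout):

    #include <math.h>
    #include <stdint.h>

    #define TBITS 7
    #define TSIZE (1 << TBITS)
    static double tab[TSIZE];   /* hypothetical table of 2^(i/TSIZE) */

    static void init_tab (void)  /* call once before exp2_sketch */
    {
      for (int i = 0; i < TSIZE; i++)
        tab[i] = exp2 ((double) i / TSIZE);
    }

    /* exp2(x) = 2^(k/TSIZE) * 2^r with x = k/TSIZE + r, |r| small.  */
    static double exp2_sketch (double x)
    {
      double kd = nearbyint (x * TSIZE);
      double r = x - kd / TSIZE;
      int64_t ki = (int64_t) kd;
      double y = r * 0x1.62e42fefa39efp-1;          /* r * ln2 */
      double p = 1 + y * (1 + y / 2 * (1 + y / 3)); /* ~ 2^r */
      return ldexp (tab[ki & (TSIZE - 1)] * p, (int) (ki >> TBITS));
    }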

	* NEWS: Mention exp and exp2 improvements.
	* math/Makefile (libm-support): Remove t_exp.
	(type-double-routines): Add math_err and e_exp_data.
	* sysdeps/aarch64/libm-test-ulps: Update.
	* sysdeps/arm/libm-test-ulps: Update.
	* sysdeps/i386/fpu/e_exp_data.c: New file.
	* sysdeps/i386/fpu/math_err.c: New file.
	* sysdeps/i386/fpu/t_exp.c: Remove.
	* sysdeps/ia64/fpu/e_exp_data.c: New file.
	* sysdeps/ia64/fpu/math_err.c: New file.
	* sysdeps/ia64/fpu/t_exp.c: Remove.
	* sysdeps/ieee754/dbl-64/e_exp.c: Rewrite.
	* sysdeps/ieee754/dbl-64/e_exp2.c: Rewrite.
	* sysdeps/ieee754/dbl-64/e_exp_data.c: New file.
	* sysdeps/ieee754/dbl-64/e_pow.c (__ieee754_pow): Update error bound.
	* sysdeps/ieee754/dbl-64/eexp.tbl: Remove.
	* sysdeps/ieee754/dbl-64/math_config.h: New file.
	* sysdeps/ieee754/dbl-64/math_err.c: New file.
	* sysdeps/ieee754/dbl-64/t_exp.c: Remove.
	* sysdeps/ieee754/dbl-64/t_exp2.h: Remove.
	* sysdeps/ieee754/dbl-64/uexp.h: Remove.
	* sysdeps/ieee754/dbl-64/uexp.tbl: Remove.
	* sysdeps/m68k/m680x0/fpu/e_exp_data.c: New file.
	* sysdeps/m68k/m680x0/fpu/math_err.c: New file.
	* sysdeps/m68k/m680x0/fpu/t_exp.c: Remove.
	* sysdeps/powerpc/fpu/libm-test-ulps: Update.
	* sysdeps/x86_64/fpu/libm-test-ulps: Update.
2018-09-05 16:22:00 +01:00
Joseph Myers
70e2ba332f Do not include fenv_private.h in math_private.h.
Continuing the clean-up related to the catch-all math_private.h
header, this patch stops math_private.h from including fenv_private.h.
Instead, fenv_private.h is included directly from those users of
math_private.h that also used interfaces from fenv_private.h.  No
attempt is made to remove unused includes of math_private.h, but that
is a natural followup.

(However, since math_private.h sometimes defines optimized versions of
math.h interfaces or __* variants thereof, as well as defining its own
interfaces, I think it might make sense to get all those optimized
versions included from include/math.h, not requiring a separate header
at all, before eliminating unused math_private.h includes - that
avoids a file quietly becoming less-optimized if someone adds a call
to one of those interfaces without restoring a math_private.h include
to that file.)

There is still a pitfall that if code uses plain fe* and __fe*
interfaces, but only includes fenv.h and not fenv_private.h or (before
this patch) math_private.h, it will compile on platforms with
exceptions and rounding modes but not get the optimized versions (and
possibly not compile) on platforms without exception and rounding mode
support, making it easy to break the build for such platforms
accidentally.

I think it would be most natural to move the inlines / macros for fe*
and __fe* in the case of no exceptions and rounding modes into
include/fenv.h, so that all code including fenv.h with _ISOMAC not
defined automatically gets them.  Then fenv_private.h would be purely
the header for the libc_fe*, SET_RESTORE_ROUND etc. internal
interfaces and the risk of breaking the build on other platforms than
the one you tested on because of a missing fenv_private.h include
would be much reduced (and there would be some unused fenv_private.h
includes to remove along with unused math_private.h includes).

Tested for x86_64 and x86, and tested with build-many-glibcs.py that
installed stripped shared libraries are unchanged by this patch.

	* sysdeps/generic/math_private.h: Do not include <fenv_private.h>.
	* math/fromfp.h: Include <fenv_private.h>.
	* math/math-narrow.h: Likewise.
	* math/s_cexp_template.c: Likewise.
	* math/s_csin_template.c: Likewise.
	* math/s_csinh_template.c: Likewise.
	* math/s_ctan_template.c: Likewise.
	* math/s_ctanh_template.c: Likewise.
	* math/s_iseqsig_template.c: Likewise.
	* math/w_acos_compat.c: Likewise.
	* math/w_acosf_compat.c: Likewise.
	* math/w_acosl_compat.c: Likewise.
	* math/w_asin_compat.c: Likewise.
	* math/w_asinf_compat.c: Likewise.
	* math/w_asinl_compat.c: Likewise.
	* math/w_ilogb_template.c: Likewise.
	* math/w_j0_compat.c: Likewise.
	* math/w_j0f_compat.c: Likewise.
	* math/w_j0l_compat.c: Likewise.
	* math/w_j1_compat.c: Likewise.
	* math/w_j1f_compat.c: Likewise.
	* math/w_j1l_compat.c: Likewise.
	* math/w_jn_compat.c: Likewise.
	* math/w_jnf_compat.c: Likewise.
	* math/w_llogb_template.c: Likewise.
	* math/w_log10_compat.c: Likewise.
	* math/w_log10f_compat.c: Likewise.
	* math/w_log10l_compat.c: Likewise.
	* math/w_log2_compat.c: Likewise.
	* math/w_log2f_compat.c: Likewise.
	* math/w_log2l_compat.c: Likewise.
	* math/w_log_compat.c: Likewise.
	* math/w_logf_compat.c: Likewise.
	* math/w_logl_compat.c: Likewise.
	* sysdeps/aarch64/fpu/feholdexcpt.c: Likewise.
	* sysdeps/aarch64/fpu/fesetround.c: Likewise.
	* sysdeps/aarch64/fpu/fgetexcptflg.c: Likewise.
	* sysdeps/aarch64/fpu/ftestexcept.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_atan2.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_exp.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_exp2.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_gamma_r.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_jn.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_pow.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_remainder.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_sqrt.c: Likewise.
	* sysdeps/ieee754/dbl-64/gamma_product.c: Likewise.
	* sysdeps/ieee754/dbl-64/lgamma_neg.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_atan.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_fma.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_fmaf.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_llrint.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_llround.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_lrint.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_lround.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_nearbyint.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_sin.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_sincos.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_tan.c: Likewise.
	* sysdeps/ieee754/dbl-64/wordsize-64/s_lround.c: Likewise.
	* sysdeps/ieee754/dbl-64/wordsize-64/s_nearbyint.c: Likewise.
	* sysdeps/ieee754/dbl-64/x2y2m1.c: Likewise.
	* sysdeps/ieee754/float128/float128_private.h: Likewise.
	* sysdeps/ieee754/flt-32/e_gammaf_r.c: Likewise.
	* sysdeps/ieee754/flt-32/e_j1f.c: Likewise.
	* sysdeps/ieee754/flt-32/e_jnf.c: Likewise.
	* sysdeps/ieee754/flt-32/lgamma_negf.c: Likewise.
	* sysdeps/ieee754/flt-32/s_llrintf.c: Likewise.
	* sysdeps/ieee754/flt-32/s_llroundf.c: Likewise.
	* sysdeps/ieee754/flt-32/s_lrintf.c: Likewise.
	* sysdeps/ieee754/flt-32/s_lroundf.c: Likewise.
	* sysdeps/ieee754/flt-32/s_nearbyintf.c: Likewise.
	* sysdeps/ieee754/k_standardl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/e_expl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/e_gammal_r.c: Likewise.
	* sysdeps/ieee754/ldbl-128/e_j1l.c: Likewise.
	* sysdeps/ieee754/ldbl-128/e_jnl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/gamma_productl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/lgamma_negl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_fmal.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_llrintl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_llroundl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_lrintl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_lroundl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_nearbyintl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/x2y2m1l.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_expl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_j1l.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_jnl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/lgamma_negl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_fmal.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_llrintl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_llroundl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_lrintl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_lroundl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_rintl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/x2y2m1l.c: Likewise.
	* sysdeps/ieee754/ldbl-96/e_gammal_r.c: Likewise.
	* sysdeps/ieee754/ldbl-96/e_jnl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/gamma_productl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/lgamma_negl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_fma.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_fmal.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_llrintl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_llroundl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_lrintl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_lroundl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/x2y2m1l.c: Likewise.
	* sysdeps/powerpc/fpu/e_sqrt.c: Likewise.
	* sysdeps/powerpc/fpu/e_sqrtf.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_ceil.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_floor.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_nearbyint.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_round.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_roundeven.c: Likewise.
	* sysdeps/riscv/rv64/rvd/s_trunc.c: Likewise.
	* sysdeps/riscv/rvd/s_finite.c: Likewise.
	* sysdeps/riscv/rvd/s_fmax.c: Likewise.
	* sysdeps/riscv/rvd/s_fmin.c: Likewise.
	* sysdeps/riscv/rvd/s_fpclassify.c: Likewise.
	* sysdeps/riscv/rvd/s_isinf.c: Likewise.
	* sysdeps/riscv/rvd/s_isnan.c: Likewise.
	* sysdeps/riscv/rvd/s_issignaling.c: Likewise.
	* sysdeps/riscv/rvf/fegetround.c: Likewise.
	* sysdeps/riscv/rvf/feholdexcpt.c: Likewise.
	* sysdeps/riscv/rvf/fesetenv.c: Likewise.
	* sysdeps/riscv/rvf/fesetround.c: Likewise.
	* sysdeps/riscv/rvf/feupdateenv.c: Likewise.
	* sysdeps/riscv/rvf/fgetexcptflg.c: Likewise.
	* sysdeps/riscv/rvf/ftestexcept.c: Likewise.
	* sysdeps/riscv/rvf/s_ceilf.c: Likewise.
	* sysdeps/riscv/rvf/s_finitef.c: Likewise.
	* sysdeps/riscv/rvf/s_floorf.c: Likewise.
	* sysdeps/riscv/rvf/s_fmaxf.c: Likewise.
	* sysdeps/riscv/rvf/s_fminf.c: Likewise.
	* sysdeps/riscv/rvf/s_fpclassifyf.c: Likewise.
	* sysdeps/riscv/rvf/s_isinff.c: Likewise.
	* sysdeps/riscv/rvf/s_isnanf.c: Likewise.
	* sysdeps/riscv/rvf/s_issignalingf.c: Likewise.
	* sysdeps/riscv/rvf/s_nearbyintf.c: Likewise.
	* sysdeps/riscv/rvf/s_roundevenf.c: Likewise.
	* sysdeps/riscv/rvf/s_roundf.c: Likewise.
	* sysdeps/riscv/rvf/s_truncf.c: Likewise.
2018-09-03 21:09:04 +00:00
Paul Pluzhnikov
a6e8926f8d [BZ #20271] Add newlines in __libc_fatal calls. 2018-08-31 18:04:32 -07:00
Joseph Myers
ff6b24501f Split fenv_private.h out of math_private.h more consistently.
On some architectures, the parts of math_private.h relating to the
floating-point environment are in a separate file fenv_private.h
included from math_private.h.  As this is purely an
architecture-specific convention used by several architectures,
however, all such architectures still need their own math_private.h,
even if it has nothing to do beyond #include <fenv_private.h>; x86_64
additionally has the peculiarity of including the i386 file directly
instead of having a shared file in sysdeps/x86.

This patch makes the fenv_private.h name an architecture-independent
convention in glibc.  The include of fenv_private.h from
math_private.h becomes architecture-independent (until callers are
updated to include fenv_private.h directly so the include from
math_private.h is no longer needed).  Some architecture math_private.h
headers are removed if no longer needed, or renamed to fenv_private.h
if all they define belongs in that header; architecture fenv_private.h
headers now do require #include_next <fenv_private.h>.  The i386
fenv_private.h file moves to sysdeps/x86/fpu/ to reflect how it is
actually shared with x86_64.  The generic math_private.h gets a new
include of <stdbool.h>, as needed for bool in some prototypes in that
header (previously that was indirectly included via include/fenv.h,
which now only gets included too late in math_private.h, after those
prototypes).

Tested for x86_64 and x86, and tested with build-many-glibcs.py that
installed stripped shared libraries are unchanged by the patch.

	* sysdeps/aarch64/fpu/fenv_private.h: New file.  Based on ....
	* sysdeps/aarch64/fpu/math_private.h: ... this file.  All contents
	moved to fenv_private.h except for ...
	(TOINT_INTRINSICS): Kept in math_private.h.
	(roundtoint): Likewise.
	(converttoint): Likewise.
	* sysdeps/arm/fenv_private.h: Change multiple-include guard to
	[ARM_FENV_PRIVATE_H].  Include next <fenv_private.h>.
	* sysdeps/arm/math_private.h: Remove.
	* sysdeps/generic/fenv_private.h: New file.  Contents moved from
	....
	* sysdeps/generic/math_private.h: ... this file.  Include
	<stdbool.h>.  Do not include <fenv.h> or <get-rounding-mode.h>.
	Include <fenv_private.h>.  Remove functions and macros moved to
	fenv_private.h.
	* sysdeps/i386/fpu/math_private.h: Remove.
	* sysdeps/mips/math_private.h: Move to ....
	* sysdeps/mips/fpu/fenv_private.h: ... here.  Change
	multiple-include guard to [MIPS_FENV_PRIVATE_H].  Remove
	[__mips_hard_float] conditional.  Include next <fenv_private.h>.
	* sysdeps/powerpc/fpu/fenv_private.h: Change multiple-include
	guard to [POWERPC_FENV_PRIVATE_H].  Include next <fenv_private.h>.
	* sysdeps/powerpc/fpu/math_private.h: Do not include
	<fenv_private.h>.
	* sysdeps/riscv/rvf/math_private.h: Move to ....
	* sysdeps/riscv/rvf/fenv_private.h: ... here.  Change
	multiple-include guard to [RISCV_FENV_PRIVATE_H].  Include next
	<fenv_private.h>.
	* sysdeps/sparc/fpu/fenv_private.h: Change multiple-include guard
	to [SPARC_FENV_PRIVATE_H].  Include next <fenv_private.h>.
	* sysdeps/sparc/fpu/math_private.h: Remove.
	* sysdeps/i386/fpu/fenv_private.h: Move to ....
	* sysdeps/x86/fpu/fenv_private.h: ... here.  Change
	multiple-include guard to [X86_FENV_PRIVATE_H].  Include next
	<fenv_private.h>.
	* sysdeps/x86_64/fpu/math_private.h: Do not include
	<sysdeps/i386/fpu/fenv_private.h>.
2018-08-28 20:48:49 +00:00
Joseph Myers
895ef79e04 Move EXCEPTION_ENABLE_SUPPORTED out of math-tests.h.
Continuing moving macros out of math-tests.h to smaller headers
following typo-proof conventions instead of using #ifndef, this patch
moves the EXCEPTION_ENABLE_SUPPORTED macro out to its own
math-tests-trap.h header.

Tested with build-many-glibcs.py.

	* sysdeps/generic/math-tests-trap.h: New file.
	* sysdeps/generic/math-tests.h: Include <math-tests-trap.h>.
	(EXCEPTION_ENABLE_SUPPORTED): Do not define here.
	* sysdeps/aarch64/math-tests.h: Remove file.
	* sysdeps/arm/math-tests.h: Likewise.
	* sysdeps/riscv/math-tests.h: Likewise.
	* sysdeps/aarch64/math-tests-trap.h: New file.
	* sysdeps/arm/math-tests-trap.h: Likewise.
	* sysdeps/riscv/math-tests-trap.h: Likewise.
2018-08-24 19:18:16 +00:00
Siddhesh Poyarekar
436e4d5b96 [aarch64] Add an ASIMD variant of strlen for falkor
This variant of strlen uses vector loads and operations to reduce the
size of the code and also eliminate the non-ascii fallback.  This
works very well for falkor because of its two vector units and
efficient vector ops.  In the best case it reduces latency of cases in
bench-strlen by 48%, with gains throughout the benchmark.
strlen-walk also sees uniform gains in the 5%-15% range.

Overall the routine appears to work better than the stock one for falkor
regardless of the benchmark, length of string or cache state.

The same cannot be said of a53 and a72 though.  a53 performance was
greatly reduced, and for a72 it was a bit of a mixed bag, slightly on the
negative side, though I reckon it might be fast in some situations.

	* sysdeps/aarch64/strlen.S (__strlen): Rename to STRLEN.
	[!STRLEN](STRLEN): Set to __strlen.
	* sysdeps/aarch64/multiarch/strlen.c: New file.
	* sysdeps/aarch64/multiarch/strlen_generic.S: Likewise.
	* sysdeps/aarch64/multiarch/strlen_asimd.S: Likewise.
	* sysdeps/aarch64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Add strlen.
	* sysdeps/aarch64/multiarch/Makefile (sysdep_routines): Add
	strlen_generic and strlen_asimd.

Reviewed-By: szabolcs.nagy@arm.com
CC: pinskia@gmail.com
2018-08-15 23:01:33 +05:30
Wilco Dijkstra
599cf39766 Improve performance of sinf and cosf
The second patch improves performance of sinf and cosf using the same
algorithms and polynomials.  The returned values are identical to sincosf
for the same input.  ULP definitions for AArch64 and x64 are updated.

sinf/cosf throughput gains on Cortex-A72:
* |x| < 0x1p-12 : 1.2x
* |x| < M_PI_4  : 1.8x
* |x| < 2 * M_PI: 1.7x
* |x| < 120.0   : 2.3x
* |x| < Inf     : 3.0x

	* NEWS: Mention sinf, cosf, sincosf.
	* sysdeps/aarch64/libm-test-ulps: Update ULP for sinf, cosf, sincosf.
	* sysdeps/x86_64/fpu/libm-test-ulps: Update ULP for sinf and cosf.
	* sysdeps/x86_64/fpu/multiarch/s_sincosf-fma.c: Add definitions of
	constants rather than including generic sincosf.h.
	* sysdeps/x86_64/fpu/s_sincosf_data.c: Remove.
	* sysdeps/ieee754/flt-32/s_cosf.c (cosf): Rewrite.
	* sysdeps/ieee754/flt-32/s_sincosf.h (reduced_sin): Remove.
	(reduced_cos): Remove.
	(sinf_poly): New function.
	* sysdeps/ieee754/flt-32/s_sinf.c (sinf): Rewrite.
2018-08-14 10:45:59 +01:00
Szabolcs Nagy
43cfdf8f48 Clean up converttoint handling and document the semantics
This patch currently only affects aarch64.

The roundtoint and converttoint internal functions are only called with small
values, so 32 bit result is enough for converttoint and it is a signed int
conversion so the return type is changed to int32_t.

The original idea was to help the compiler keep the result in uint64_t:
then it's clear that no sign extension is needed and there is no accidental
undefined or implementation-defined signed int arithmetic.

But it turns out gcc does a good job with inlining, so changing the type
has no overhead and the semantics of the conversion are less surprising
this way.
Since we want to allow the asuint64 (x + 0x1.8p52) style conversion, the top
bits were never usable and the existing code ensures that only the bottom
32 bits of the conversion result are used.

On aarch64 the neon intrinsics (which round ties to even) are changed to
round and lround (which round ties away from zero); this does not affect
the results in a significant way, but is more portable (it relies on round
and lround being inlined, which works with -fno-math-errno).

The TOINT_SHIFT and TOINT_RINT macros were removed; only the separate code
paths for TOINT_INTRINSICS and !TOINT_INTRINSICS are kept.
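
In C terms, the aarch64 TOINT_INTRINSICS pair after this change reduces
to the following (a sketch matching the description above; return types
follow the new int32_t convention):

    #include <math.h>
    #include <stdint.h>

    /* Round ties away from zero; relies on the compiler inlining
       round/lround, which works with -fno-math-errno.  */
    static inline double roundtoint (double x) { return round (x); }
    static inline int32_t converttoint (double x)
    { return (int32_t) lround (x); }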

	* sysdeps/aarch64/fpu/math_private.h (roundtoint): Use round.
	(converttoint): Use lround.
	* sysdeps/ieee754/flt-32/math_config.h (roundtoint): Declare and
	document the semantics when TOINT_INTRINSICS is set.
	(converttoint): Likewise.
	(TOINT_RINT): Remove.
	(TOINT_SHIFT): Remove.
	* sysdeps/ieee754/flt-32/e_expf.c (__expf): Remove the TOINT_RINT code
	path.
2018-08-10 17:23:16 +01:00
Siddhesh Poyarekar
be64b1946b [aarch64] Fix value of MIN_PAGE_SIZE for testing
MIN_PAGE_SIZE is normally set to 4096 but for testing it can be set to
16 so that it exercises the page crossing code for every misaligned
access.  The value was set to 15, which is obviously wrong, so fixed
as obvious and tested.

	* sysdeps/aarch64/strlen.S [TEST_PAGE_CROSS](MIN_PAGE_SIZE):
	Fix value.
2018-08-08 22:47:17 +05:30
Siddhesh Poyarekar
dce452dc52 Rename the glibc.tune namespace to glibc.cpu
The glibc.tune namespace is vaguely named since it is a 'tunable', so
give it a more specific name that describes what it refers to.  Rename
the tunable namespace to 'cpu' to more accurately reflect what it
encompasses.  Also rename glibc.tune.cpu to glibc.cpu.name since
glibc.cpu.cpu is weird.
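
For example, after this rename a CPU override is spelled with the new
namespace (assuming a CPU name the target glibc recognizes):

    GLIBC_TUNABLES=glibc.cpu.name=falkor ./app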

	* NEWS: Mention the change.
	* elf/dl-tunables.list: Rename tune namespace to cpu.
	* sysdeps/powerpc/dl-tunables.list: Likewise.
	* sysdeps/x86/dl-tunables.list: Likewise.
	* sysdeps/aarch64/dl-tunables.list: Rename tune.cpu to
	cpu.name.
	* elf/dl-hwcaps.c (_dl_important_hwcaps): Adjust.
	* elf/dl-hwcaps.h (GET_HWCAP_MASK): Likewise.
	* manual/README.tunables: Likewise.
	* manual/tunables.texi: Likewise.
	* sysdeps/powerpc/cpu-features.c: Likewise.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.c
	(init_cpu_features): Likewise.
	* sysdeps/x86/cpu-features.c: Likewise.
	* sysdeps/x86/cpu-features.h: Likewise.
	* sysdeps/x86/cpu-tunables.c: Likewise.
	* sysdeps/x86_64/Makefile: Likewise.
	* sysdeps/x86/dl-cet.c: Likewise.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2018-08-02 23:49:19 +05:30
Siddhesh Poyarekar
0aec4c1d18 aarch64,falkor: Use vector registers for memcpy
Vector registers perform better than scalar register pairs for copying
data, so prefer them instead.  This results in a time reduction of over
50% (i.e. 2x speed improvement) for some smaller sizes for memcpy-walk.
Larger sizes show improvements of around 1% to 2%.  memcpy-random shows
a very small improvement, in the range of 1-2%.

	* sysdeps/aarch64/multiarch/memcpy_falkor.S (__memcpy_falkor):
	Use vector registers.
2018-06-29 22:45:59 +05:30
Siddhesh Poyarekar
ce76a5cb8d aarch64,falkor: Use vector registers for memmove
Vector registers perform much better for moves compared to pairs of
registers on falkor, so use them instead.  This results in a time
reduction of up to 50% (i.e. 2x improvement) for a lot of the smaller
sizes, i.e. up to 1K in memmove-walk.  Improvements for larger sizes are
smaller, at about 1%-2%.

	* sysdeps/aarch64/multiarch/memmove_falkor.S
	(__memcpy_falkor): Use vector registers.
2018-06-29 22:45:07 +05:30
Hongbo Zhang
fc2ba8037d aarch64: add HXT Phecda core memory operation ifuncs
Phecda is HXT Semiconductor's CPU core; this patch adds memory operation
ifuncs for it, sharing the same optimized implementations as Qualcomm's
Falkor core.

2018-06-07  Minfeng Kang <minfeng.kang@hxt-semitech.com>
	    Hongbo Zhang <hongbo.zhang@linaro.org>

	* sysdeps/aarch64/multiarch/memcpy.c (libc_ifunc): reuse
	__memcpy_falkor for phecda core.
	* sysdeps/aarch64/multiarch/memmove.c (libc_ifunc): reuse
	__memmove_falkor for phecda core.
	* sysdeps/aarch64/multiarch/memset.c (libc_ifunc): reuse
	__memset_falkor for phecda core.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.c: add MIDR entry
	for phecda core.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.h (IS_PHECDA): add
	macro to identify phecda core.
2018-06-12 21:29:11 +05:30
H.J. Lu
67c0579669 Mark _init and _fini as hidden [BZ #23145]
_init and _fini are special functions provided by glibc for linker to
define DT_INIT and DT_FINI in executable and shared library.  They
should never be put in dynamic symbol table.  This patch marks them as
hidden to remove them from dynamic symbol table.

Tested with build-many-glibcs.py.
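
The C-level equivalent of the change (the real edits mark the symbols in
each crti.S; this sketch only illustrates why hidden visibility keeps a
symbol out of the dynamic symbol table):

    /* A hidden symbol can still be called within the object, but it
       is not exported, so it cannot land in .dynsym.  */
    __attribute__ ((visibility ("hidden")))
    void my_init (void)
    {
      /* ... one-time setup ... */
    }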

	[BZ #23145]
	* elf/Makefile (tests-special): Add $(objpfx)check-initfini.out.
	($(all-built-dso:=.dynsym)): New target.
	(common-generated): Add $(all-built-dso:$(common-objpfx)%=%.dynsym).
	($(objpfx)check-initfini.out): New target.
	(generated): Add check-initfini.out.
	* scripts/check-initfini.awk: New file.
	* sysdeps/aarch64/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/alpha/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/arm/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/hppa/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/i386/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/ia64/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/m68k/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/microblaze/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/mips/mips32/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/mips/mips64/n32/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/mips/mips64/n64/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/nios2/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/powerpc/powerpc32/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/powerpc/powerpc64/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/s390/s390-32/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/s390/s390-64/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/sh/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/sparc/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
	* sysdeps/x86_64/crti.S (_init): Mark as hidden.
	(_fini): Likewise.
2018-06-08 10:28:52 -07:00
Joseph Myers
8f145c7712 Remove sysdeps/aarch64/soft-fp directory.
As per <https://sourceware.org/ml/libc-alpha/2014-10/msg00369.html>,
there should not be separate sysdeps/<arch>/soft-fp directories when
those are used by all configurations that use sysdeps/<arch>, and,
more generally, should not be sysdeps/foo/Implies files pointing to a
subdirectory foo/bar.  This patch eliminates the
sysdeps/aarch64/soft-fp directory accordingly, merging its contents
into sysdeps/aarch64.

Tested with build-many-glibcs.py that installed stripped shared
libraries for aarch64 configurations are unchanged by this patch.

	* sysdeps/aarch64/Implies: Remove aarch64/soft-fp.
	* sysdeps/aarch64/Makefile [$(subdir) = math] (CPPFLAGS): Add
	-I../soft-fp.  Moved from ....
	* sysdeps/aarch64/soft-fp/Makefile: ... here.  Remove file.
	* sysdeps/aarch64/soft-fp/e_sqrtl.c: Move to ....
	* sysdeps/aarch64/e_sqrtl.c: ... here.
	* sysdeps/aarch64/soft-fp/sfp-machine.h: Move to ....
	* sysdeps/aarch64/sfp-machine.h: ... here.
2018-05-22 17:23:34 +00:00
Joseph Myers
b4d5b8b021 Do not include math-barriers.h in math_private.h.
This patch continues the math_private.h cleanup by stopping
math_private.h from including math-barriers.h and making the users of
the barrier macros include the latter header directly.  No attempt is
made to remove any math_private.h includes that are now unused, except
in strtod_l.c where that is done to avoid line number changes in
assertions, so that installed stripped shared libraries can be
compared before and after the patch.  (I think the floating-point
environment support in math_private.h should also move out - some
architectures already have fenv_private.h as an architecture-internal
header included from their math_private.h - and after moving that out
might be a better time to identify unused math_private.h includes.)

Tested for x86_64 and x86, and tested with build-many-glibcs.py that
installed stripped shared libraries are unchanged by the patch.

	* sysdeps/generic/math_private.h: Do not include
	<math-barriers.h>.
	* stdlib/strtod_l.c: Include <math-barriers.h> instead of
	<math_private.h>.
	* math/fromfp.h: Include <math-barriers.h>.
	* math/math-narrow.h: Likewise.
	* math/s_nextafter.c: Likewise.
	* math/s_nexttowardf.c: Likewise.
	* sysdeps/aarch64/fpu/s_llrint.c: Likewise.
	* sysdeps/aarch64/fpu/s_llrintf.c: Likewise.
	* sysdeps/aarch64/fpu/s_lrint.c: Likewise.
	* sysdeps/aarch64/fpu/s_lrintf.c: Likewise.
	* sysdeps/i386/fpu/s_nextafterl.c: Likewise.
	* sysdeps/i386/fpu/s_nexttoward.c: Likewise.
	* sysdeps/i386/fpu/s_nexttowardf.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_atan2.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_atanh.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_exp.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_exp2.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_j0.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_sqrt.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_expm1.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_fma.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_fmaf.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_log1p.c: Likewise.
	* sysdeps/ieee754/dbl-64/s_nearbyint.c: Likewise.
	* sysdeps/ieee754/dbl-64/wordsize-64/s_nearbyint.c: Likewise.
	* sysdeps/ieee754/flt-32/e_atanhf.c: Likewise.
	* sysdeps/ieee754/flt-32/e_j0f.c: Likewise.
	* sysdeps/ieee754/flt-32/s_expm1f.c: Likewise.
	* sysdeps/ieee754/flt-32/s_log1pf.c: Likewise.
	* sysdeps/ieee754/flt-32/s_nearbyintf.c: Likewise.
	* sysdeps/ieee754/flt-32/s_nextafterf.c: Likewise.
	* sysdeps/ieee754/k_standardl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/e_asinl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/e_expl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/e_powl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_fmal.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_nearbyintl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_nextafterl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_nexttoward.c: Likewise.
	* sysdeps/ieee754/ldbl-128/s_nexttowardf.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_asinl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_fmal.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_nextafterl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_nexttoward.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_nexttowardf.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_rintl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/e_atanhl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/e_j0l.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_fma.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_fmal.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_nexttoward.c: Likewise.
	* sysdeps/ieee754/ldbl-96/s_nexttowardf.c: Likewise.
	* sysdeps/ieee754/ldbl-opt/s_nexttowardfd.c: Likewise.
	* sysdeps/m68k/m680x0/fpu/s_nextafterl.c: Likewise.
2018-05-11 15:11:38 +00:00
Siddhesh Poyarekar
db725a458e aarch64,falkor: Ignore prefetcher tagging for smaller copies
For smaller and medium sized copies, the effects of hardware
prefetching are not as dominant as instruction level parallelism.
Hence it makes more sense to load data into multiple registers than to
try and route them to the same prefetch unit.  This is also the case
for the loop exit where we are unable to latch on to the same prefetch
unit anyway so it makes more sense to have data loaded in parallel.

The performance results are a bit mixed with memcpy-random, with
numbers jumping between -1% and +3%, i.e. the numbers don't seem
repeatable.  memcpy-walk sees a 70% improvement (i.e. > 2x) for 128
bytes and that improvement reduces down as the impact of the tail copy
decreases in comparison to the loop.

	* sysdeps/aarch64/multiarch/memcpy_falkor.S (__memcpy_falkor):
	Use multiple registers to copy data in loop tail.
2018-05-11 00:11:52 +05:30
Siddhesh Poyarekar
70c97f8493 aarch64,falkor: Ignore prefetcher hints for memmove tail
The tail of the copy loops are unable to train the falkor hardware
prefetcher because they load from a different base compared to the hot
loop.  In this case avoid serializing the instructions by loading them
into different registers.  Also peel the last iteration of the loop
into the tail (and have them use different registers) since it gives
better performance for medium sizes.

This results in performance improvements of between 3% and 20% over
the current falkor implementation for sizes between 128 bytes and 1K
on the memmove-walk benchmark, thus mostly covering the regressions
seen against the generic memmove.

	* sysdeps/aarch64/multiarch/memmove_falkor.S
	(__memmove_falkor): Use multiple registers to move data in
	loop tail.
2018-05-11 00:08:02 +05:30
Joseph Myers
9ed2e15ff4 Move math_opt_barrier, math_force_eval to separate math-barriers.h.
This patch continues cleaning up math_private.h by moving the
math_opt_barrier and math_force_eval macros to a separate header
math-barriers.h.

At present, those macros are inside a "#ifndef math_opt_barrier" in
math_private.h to allow architectures to override them.  With the move to
a separate math-barriers.h header, no such #ifndef or #include_next is
needed; architectures just have their own alternative version of
math-barriers.h when providing their own optimized versions that avoid
going through memory unnecessarily.  The generic math-barriers.h has a
comment added to document these two macros.

In this patch, math_private.h is made to #include <math-barriers.h>,
so files using these macros do not need updating yet.  That is because
of uses of math_force_eval in math_check_force_underflow and
math_check_force_underflow_nonneg, which are still defined in
math_private.h.  Once those are moved out to a separate header, that
separate header can be made to include <math-barriers.h>, as can the
other files directly using these barrier macros, and then the include
of <math-barriers.h> from math_private.h can be removed.

Tested for x86_64 and x86.  Also tested with build-many-glibcs.py that
installed stripped shared libraries are unchanged by this patch.
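
For reference, the generic forms of the two macros being moved look
roughly like this (a sketch; the empty asm with a memory operand denies
the optimizer knowledge of the value, so it cannot move or fold the
computation):

    /* Evaluate X but hide the result from constant folding.  */
    #define math_opt_barrier(x)                                      \
      ({ __typeof (x) __x = (x); __asm ("" : "+m" (__x)); __x; })
    /* Force X to be evaluated, e.g. so an exception is raised.  */
    #define math_force_eval(x)                                       \
      ({ __typeof (x) __x = (x); __asm __volatile ("" : : "m" (__x)); })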

	* sysdeps/generic/math-barriers.h: New file.
	* sysdeps/generic/math_private.h [!math_opt_barrier]
	(math_opt_barrier): Move to math-barriers.h.
	[!math_opt_barrier] (math_force_eval): Likewise.
	* sysdeps/aarch64/fpu/math-barriers.h: New file.
	* sysdeps/aarch64/fpu/math_private.h (math_opt_barrier): Move to
	math-barriers.h.
	(math_force_eval): Likewise.
	* sysdeps/alpha/fpu/math-barriers.h: New file.
	* sysdeps/alpha/fpu/math_private.h (math_opt_barrier): Move to
	math-barriers.h.
	(math_force_eval): Likewise.
	* sysdeps/x86/fpu/math-barriers.h: New file.
	* sysdeps/i386/fpu/fenv_private.h (math_opt_barrier): Move to
	math-barriers.h.
	(math_force_eval): Likewise.
	* sysdeps/m68k/m680x0/fpu/math_private.h: Move to....
	* sysdeps/m68k/m680x0/fpu/math-barriers.h: ... here.  Adjust
	multiple-include guard for rename.
	* sysdeps/powerpc/fpu/math-barriers.h: New file.
	* sysdeps/powerpc/fpu/math_private.h (math_opt_barrier): Move to
	math-barriers.h.
	(math_force_eval): Likewise.
2018-05-09 19:45:47 +00:00
Maciej W. Rozycki
10a446ddcc elf: Unify symbol address run-time calculation [BZ #19818]
Wrap symbol address run-time calculation into a macro and use it
throughout, replacing inline calculations.

There are a couple of variants, most of them different in a functionally
insignificant way.  Most calculations are right following RESOLVE_MAP,
at which point either the map or the symbol returned can be checked for
validity as the macro sets either both or neither.  In some places both
the symbol and the map have to be checked, however.

My initial implementation therefore always checked both; however, that
resulted in code larger by as much as 0.3%, as many places know from
elsewhere that no check is needed.  I have decided the size growth was
unacceptable.

Having looked closer I realized that it's the map that is the culprit.
Therefore I have modified LOOKUP_VALUE_ADDRESS to accept an additional
boolean argument telling it to access the map without checking it for
validity.  This in turn has brought quite nice results, with new code
actually being smaller for i686, and MIPS o32, n32 and little-endian n64
targets, unchanged in size for x86-64 and, unusually, marginally larger
for big-endian MIPS n64, as follows:

i686:
   text    data     bss     dec     hex filename
 152255    4052     192  156499   26353 ld-2.27.9000-base.so
 152159    4052     192  156403   262f3 ld-2.27.9000-elf-symbol-value.so
MIPS/o32/el:
   text    data     bss     dec     hex filename
 142906    4396     260  147562   2406a ld-2.27.9000-base.so
 142890    4396     260  147546   2405a ld-2.27.9000-elf-symbol-value.so
MIPS/n32/el:
   text    data     bss     dec     hex filename
 142267    4404     260  146931   23df3 ld-2.27.9000-base.so
 142171    4404     260  146835   23d93 ld-2.27.9000-elf-symbol-value.so
MIPS/n64/el:
   text    data     bss     dec     hex filename
 149835    7376     408  157619   267b3 ld-2.27.9000-base.so
 149787    7376     408  157571   26783 ld-2.27.9000-elf-symbol-value.so
MIPS/o32/eb:
   text    data     bss     dec     hex filename
 142870    4396     260  147526   24046 ld-2.27.9000-base.so
 142854    4396     260  147510   24036 ld-2.27.9000-elf-symbol-value.so
MIPS/n32/eb:
   text    data     bss     dec     hex filename
 142019    4404     260  146683   23cfb ld-2.27.9000-base.so
 141923    4404     260  146587   23c9b ld-2.27.9000-elf-symbol-value.so
MIPS/n64/eb:
   text    data     bss     dec     hex filename
 149763    7376     408  157547   2676b ld-2.27.9000-base.so
 149779    7376     408  157563   2677b ld-2.27.9000-elf-symbol-value.so
x86-64:
   text    data     bss     dec     hex filename
 148462    6452     400  155314   25eb2 ld-2.27.9000-base.so
 148462    6452     400  155314   25eb2 ld-2.27.9000-elf-symbol-value.so
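
A sketch of the resulting macro (names from the ChangeLog below; the
exact expansion is an assumption, written for the ldsodefs.h context
where LOOKUP_VALUE_ADDRESS and SHN_ABS are available):

    /* Compute the run-time address of symbol REF in MAP; MAP_SET says
       the map is known valid, letting LOOKUP_VALUE_ADDRESS skip its
       check.  Absolute symbols use st_value as-is.  */
    #define SYMBOL_ADDRESS(map, ref, map_set)                        \
      ((ref) == NULL ? 0                                             \
       : (__glibc_unlikely ((ref)->st_shndx == SHN_ABS) ? 0          \
          : LOOKUP_VALUE_ADDRESS (map, map_set)) + (ref)->st_value)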

	[BZ #19818]
	* sysdeps/generic/ldsodefs.h (LOOKUP_VALUE_ADDRESS): Add `set'
	parameter.
	(SYMBOL_ADDRESS): New macro.
	[!ELF_FUNCTION_PTR_IS_SPECIAL] (DL_SYMBOL_ADDRESS): Use
	SYMBOL_ADDRESS for symbol address calculation.
	* elf/dl-runtime.c (_dl_fixup): Likewise.
	(_dl_profile_fixup): Likewise.
	* elf/dl-symaddr.c (_dl_symbol_address): Likewise.
	* elf/rtld.c (dl_main): Likewise.
	* sysdeps/aarch64/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/alpha/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/arm/dl-machine.h (elf_machine_rel): Likewise.
	(elf_machine_rela): Likewise.
	* sysdeps/hppa/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/hppa/dl-symaddr.c (_dl_symbol_address): Likewise.
	* sysdeps/i386/dl-machine.h (elf_machine_rel): Likewise.
	(elf_machine_rela): Likewise.
	* sysdeps/ia64/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/m68k/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/microblaze/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/mips/dl-machine.h (ELF_MACHINE_BEFORE_RTLD_RELOC):
	Likewise.
	(elf_machine_reloc): Likewise.
	(elf_machine_got_rel): Likewise.
	* sysdeps/mips/dl-trampoline.c (__dl_runtime_resolve): Likewise.
	* sysdeps/nios2/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/powerpc/powerpc32/dl-machine.h (elf_machine_rela):
	Likewise.
	* sysdeps/powerpc/powerpc64/dl-machine.h (elf_machine_rela):
	Likewise.
	* sysdeps/riscv/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/s390/s390-32/dl-machine.h (elf_machine_rela):
	Likewise.
	* sysdeps/s390/s390-64/dl-machine.h (elf_machine_rela):
	Likewise.
	* sysdeps/sh/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/sparc/sparc32/dl-machine.h (elf_machine_rela):
	Likewise.
	* sysdeps/sparc/sparc64/dl-machine.h (elf_machine_rela):
	Likewise.
	* sysdeps/tile/dl-machine.h (elf_machine_rela): Likewise.
	* sysdeps/x86_64/dl-machine.h (elf_machine_rela): Likewise.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2018-04-04 23:09:37 +01:00
Wilco Dijkstra
19a8b9a300 [PATCH 1/7] sin/cos slow paths: avoid slow paths for small inputs
This series of patches removes the slow paths from sin, cos and sincos.
Besides greatly simplifying the implementation, the new version is also much
faster for inputs up to PI (41% faster) and for large inputs needing range
reduction (27% faster).

ULP is ~0.55 with no errors found after testing 1.6 billion inputs across most
of the range with mpsin and mpcos.  The number of incorrectly rounded results
(ie. ULP >0.5) is at most ~2750 per million inputs between 0.125 and 0.5,
the average is ~850 per million between 0 and PI.

Tested on AArch64 and x86_64 with no regressions.

The first patch removes the slow paths for the cases where the input is small
and doesn't require range reduction.  Update ULP tables for sin, cos and sincos
on AArch64 and x86_64.

	* sysdeps/aarch64/libm-test-ulps: Update ULP for sin, cos, sincos.
	* sysdeps/ieee754/dbl-64/s_sin.c (__sin): Remove slow paths for small
	inputs.
	(__cos): Likewise.
	* sysdeps/x86_64/fpu/libm-test-ulps: Update ULP for sin, cos, sincos.
2018-04-03 16:52:16 +01:00
Joseph Myers
ffec7b2740 Use x86_64 backtrace as generic version.
No glibc configuration uses the present debug/backtrace.c, whereas
several #include the x86_64 version.  The x86_64 version is
effectively a generic one (using _Unwind_Backtrace from libgcc, which
works much more reliably than the built-in functions used by
debug/backtrace.c).  This patch moves it to debug/backtrace.c and
removes all the #includes of the x86_64 version from other
architectures which are no longer required.

I do not know whether all the other architecture-specific backtrace
implementations that are based on _Unwind_Backtrace are required, or
whether, where their differences from the generic version do something
useful, suitable hooks could be added to the generic version to reduce
the duplication involved.

Tested with build-many-glibcs.py that installed stripped shared
libraries are unchanged by this patch.
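
The _Unwind_Backtrace approach looks roughly like this (a self-contained
sketch, not the exact debug/backtrace.c code; error handling and lazy
libgcc loading are omitted):

    #include <unwind.h>

    struct trace_arg { void **array; int cnt, size; };

    static _Unwind_Reason_Code
    backtrace_helper (struct _Unwind_Context *ctx, void *a)
    {
      struct trace_arg *arg = a;
      if (arg->cnt >= 0)            /* cnt starts at -1: skip own frame */
        arg->array[arg->cnt] = (void *) _Unwind_GetIP (ctx);
      if (++arg->cnt == arg->size)
        return _URC_END_OF_STACK;
      return _URC_NO_REASON;
    }

    int backtrace_sketch (void **array, int size)
    {
      struct trace_arg arg = { array, -1, size };
      if (size >= 1)
        _Unwind_Backtrace (backtrace_helper, &arg);
      return arg.cnt > 0 ? arg.cnt : 0;
    }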

	* sysdeps/x86_64/backtrace.c: Move to ....
	* debug/backtrace.c: ... here.
	* sysdeps/aarch64/backtrace.c: Remove file.
	* sysdeps/alpha/backtrace.c: Likewise.
	* sysdeps/hppa/backtrace.c: Likewise.
	* sysdeps/ia64/backtrace.c: Likewise.
	* sysdeps/mips/backtrace.c: Likewise.
	* sysdeps/nios2/backtrace.c: Likewise.
	* sysdeps/riscv/backtrace.c: Likewise.
	* sysdeps/sh/backtrace.c: Likewise.
	* sysdeps/tile/backtrace.c: Likewise.
2018-03-21 17:25:30 +00:00
Wilco Dijkstra
700593fdd7 Remove all target specific __ieee754_sqrt(f/l) inlines
Remove the now unused target specific__ieee754_sqrt(f/l) inlines.
Also remove inlines of sqrt which are for really old GCC versions.
Removing these is desirable, under the general principle of leaving
such inlining to the compiler rather than trying to do it in installed
headers, especially when only very old compilers are affected.

Note that removing inlines for __ieee754_sqrt disables inlining in the
sqrt wrapper functions.  Given the sqrt function will typically only be
called for negative arguments, it doesn't matter whether the inlining
happens or not.

	* sysdeps/aarch64/fpu/math_private.h (__ieee754_sqrt): Remove.
	(__ieee754_sqrtf): Remove.
	* sysdeps/alpha/fpu/math_private.h (__ieee754_sqrt): Remove.
	(__ieee754_sqrtf): Remove.
	* sysdeps/generic/math-type-macros.h (M_SQRT): Use sqrt.
	* sysdeps/m68k/m680x0/fpu/mathimpl.h (__ieee754_sqrt): Remove.
	* sysdeps/powerpc/fpu/math_private.h (__ieee754_sqrt): Remove.
	(__ieee754_sqrtf): Remove.
	* sysdeps/s390/fpu/bits/mathinline.h: Remove file.
	* sysdeps/sparc/fpu/bits/mathinline.h (sqrt): Remove.
	(sqrtf): Remove.
	(sqrtl): Remove.
	(__ieee754_sqrt): Remove.
	(__ieee754_sqrtf): Remove.
	(__ieee754_sqrtl): Remove.
	* sysdeps/m68k/m680x0/fpu/mathimpl.h (__ieee754_sqrt): Remove.
	* sysdeps/x86/fpu/math_private.h (__ieee754_sqrt): Remove.
	* sysdeps/x86_64/fpu/math_private.h (__ieee754_sqrt): Remove.
	(__ieee754_sqrtf): Remove.
	(__ieee754_sqrtl): Remove.
2018-03-15 19:21:36 +00:00
Siddhesh Poyarekar
b47c3e7637 aarch64/strncmp: Use lsr instead of mov+lsr
A lsr can do what the mov and lsr did.
2018-03-15 08:06:21 +05:30
Siddhesh Poyarekar
d46f84de74 aarch64/strncmp: Unbreak builds with old binutils
Binutils 2.26.* and older do not support moves with shifted registers,
so use a separate shift instruction instead.
2018-03-14 18:51:05 +05:30
Siddhesh Poyarekar
7108f1f944 aarch64: Improve strncmp for mutually misaligned inputs
The mutually misaligned inputs on aarch64 are compared with a simple
byte copy, which is not very efficient.  Enhance the comparison
similar to strcmp by loading a double-word at a time.  The peak
performance improvement (i.e. 4k maxlen comparisons) due to this on
the strncmp microbenchmark is as follows:

falkor: 3.5x (up to 72% time reduction)
cortex-a73: 3.5x (up to 71% time reduction)
cortex-a53: 3.5x (up to 71% time reduction)

All mutually misaligned inputs from 16 bytes maxlen onwards show
upwards of 15% improvement and there is no measurable effect on the
performance of aligned/mutually aligned inputs.

	* sysdeps/aarch64/strncmp.S (count): New macro.
	(strncmp): Store misaligned length in SRC1 in COUNT.
	(mutual_align): Adjust.
	(misaligned8): Load dword at a time when it is safe.
2018-03-13 23:57:04 +05:30
Samuel Thibault
a5df0318ef hurd: add gscope support
* elf/dl-support.c [!THREAD_GSCOPE_IN_TCB] (_dl_thread_gscope_count):
Define variable.
* sysdeps/generic/ldsodefs.h [!THREAD_GSCOPE_IN_TCB] (struct
rtld_global): Add _dl_thread_gscope_count member.
* sysdeps/mach/hurd/tls.h: Include <atomic.h>.
[!defined __ASSEMBLER__] (THREAD_GSCOPE_GLOBAL, THREAD_GSCOPE_SET_FLAG,
THREAD_GSCOPE_RESET_FLAG, THREAD_GSCOPE_WAIT): Define macros.
* sysdeps/generic/tls.h: Document THREAD_GSCOPE_IN_TCB.
* sysdeps/aarch64/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/alpha/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/arm/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/hppa/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/i386/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/ia64/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/m68k/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/microblaze/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/mips/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/nios2/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/powerpc/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/riscv/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/s390/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/sh/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/sparc/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/tile/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
* sysdeps/x86_64/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1.
2018-03-11 13:06:33 +01:00
Siddhesh Poyarekar
4e54d91863 aarch64: Fix branch target to loop16
I goofed up when changing the loop8 name to loop16 and missed out the
branch instance.  Fixed and actually build-tested this time.

	* sysdeps/aarch64/memcmp.S (more16): Fix branch target loop16.
2018-03-06 23:01:02 +05:30
Siddhesh Poyarekar
30a81dae5b aarch64: Optimized memcmp for medium to large sizes
This improved memcmp provides a fast path for compares up to 16 bytes
and then compares 16 bytes at a time, thus optimizing loads from both
sources.  The glibc memcmp microbenchmark retains performance (with an
error of ~1ns) for smaller compare sizes and reduces up to 31% of
execution time for compares up to 4K on the APM Mustang.  On Qualcomm
Falkor this improves to almost 48%, i.e. it is almost 2x improvement
for sizes of 2K and above.

	* sysdeps/aarch64/memcmp.S: Widen comparison to 16 bytes at a
	time.
2018-03-06 19:22:40 +05:30
Siddhesh Poyarekar
6ca24c4348 aarch64/strcmp: fix misaligned loop jump target
I accidentally set the loop jump back label as misaligned8 instead of
do_misaligned.  The typo is harmless but it's always nice to not have
to unnecessarily execute those two instructions.

	* sysdeps/aarch64/strcmp.S (do_misaligned): Jump back to
	do_misaligned, not misaligned8.
2018-02-22 23:48:14 +05:30
Steve Ellcey
e9537dddc7 IFUNC for Cavium ThunderX2
* sysdeps/aarch64/multiarch/Makefile (sysdep_routines):
	Add memcpy_thunderx2.
	* sysdeps/aarch64/multiarch/ifunc-impl-list.c (MAX_IFUNC):
	Increment to 4.
	(__libc_ifunc_impl_list): Add __memcpy_thunderx2.
	* sysdeps/aarch64/multiarch/memcpy.c (libc_ifunc): Add IS_THUNDERX2
	and IS_THUNDERX2PA checks.
	* sysdeps/aarch64/multiarch/memcpy_thunderx.S (USE_THUNDERX2):
	Use macro to set name appropriately.
	(memcpy): Use USE_THUNDERX2 macro to modify prefetches.
	* sysdeps/aarch64/multiarch/memcpy_thunderx2.S: New file.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.h (IS_THUNDERX2PA):
	New macro.
	(IS_THUNDERX2): New macro.
2018-02-22 08:38:47 -08:00
Wilco Dijkstra
0c8a67a573 [AArch64] Fix include.
Fix include to use <>.

	* sysdeps/aarch64/fpu/fpu_control.h: Use <> in include.
2018-02-15 12:41:06 +00:00
Wilco Dijkstra
c3d466cba1 Remove slow paths from pow
Remove the slow paths from pow.  Like several other double precision math
functions, pow is exactly rounded.  This is not required of math functions
and causes major overheads as it requires multiple fallbacks using higher
precision arithmetic if a result is close to 0.5ULP.  Ridiculous slowdowns
of up to 100000x have been reported when the highest precision path triggers.

All GLIBC math tests pass on AArch64 and x64 (with ULP of pow set to 1).
The worst case error is ~0.506ULP.  A simple test over a few hundred million
values shows pow is 10% faster on average.  This fixes BZ #13932.

	[BZ #13932]
	* sysdeps/ieee754/dbl-64/uexp.h (err_1): Remove.
	* benchtests/pow-inputs: Update comment for slow path cases.
	* manual/probes.texi (slowpow_p10): Delete removed probe.
	(slowpow_p10): Likewise.
	* math/Makefile: Remove halfulp.c and slowpow.c.
	* sysdeps/aarch64/libm-test-ulps: Set ULP of pow to 1.
	* sysdeps/generic/math_private.h (__exp1): Remove error argument.
	(__halfulp): Remove.
	(__slowpow): Remove.
	* sysdeps/i386/fpu/halfulp.c: Delete file.
	* sysdeps/i386/fpu/slowpow.c: Likewise.
	* sysdeps/ia64/fpu/halfulp.c: Likewise.
	* sysdeps/ia64/fpu/slowpow.c: Likewise.
	* sysdeps/ieee754/dbl-64/e_exp.c (__exp1): Remove error argument,
	improve comments and add error analysis.
	* sysdeps/ieee754/dbl-64/e_pow.c (__ieee754_pow): Add error analysis.
	(power1): Remove function.
	(log1): Remove error argument, add error analysis.
	(my_log2): Remove function.
	* sysdeps/ieee754/dbl-64/halfulp.c: Delete file.
	* sysdeps/ieee754/dbl-64/slowpow.c: Likewise.
	* sysdeps/m68k/m680x0/fpu/halfulp.c: Likewise.
	* sysdeps/m68k/m680x0/fpu/slowpow.c: Likewise.
	* sysdeps/powerpc/power4/fpu/Makefile: Remove CPPFLAGS-slowpow.c.
	* sysdeps/x86_64/fpu/libm-test-ulps: Set ULP of pow to 1.
	* sysdeps/x86_64/fpu/multiarch/Makefile: Remove slowpow-fma.c,
	slowpow-fma4.c, halfulp-fma.c, halfulp-fma4.c.
	* sysdeps/x86_64/fpu/multiarch/e_pow-fma.c (__slowpow): Remove define.
	* sysdeps/x86_64/fpu/multiarch/e_pow-fma4.c (__slowpow): Likewise.
	* sysdeps/x86_64/fpu/multiarch/halfulp-fma.c: Delete file.
	* sysdeps/x86_64/fpu/multiarch/halfulp-fma4.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/slowpow-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/slowpow-fma4.c: Likewise.
2018-02-12 10:47:09 +00:00
Wilco Dijkstra
4f5b921eb9 [AArch64] Fix testsuite error due to fpsr/fscr change
Add features.h include for __GNUC_PREREQ.

	* sysdeps/aarch64/fpu/fpu_control.h: Add features.h to fix build error.
2018-02-10 15:02:51 +00:00
Wilco Dijkstra
3f8d9d58c5 [AArch64] Use builtins for fpcr/fpsr
Since GCC has support for accessing FPSR/FPCR, use them when possible
so that the asm instructions can be removed eventually.  Although GCC 5
supports the builtins, it has an optimization bug, so use them from GCC 6
onwards.
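
GCC's builtins for these registers are __builtin_aarch64_get_fpcr,
__builtin_aarch64_set_fpcr and the fpsr equivalents; a sketch of the
version-guarded macros (the exact glibc definitions may differ):

#include <features.h>   /* __GNUC_PREREQ */

#if __GNUC_PREREQ (6, 0)
# define _FPU_GETCW(cw) ((cw) = __builtin_aarch64_get_fpcr ())
# define _FPU_SETCW(cw) __builtin_aarch64_set_fpcr (cw)
#else
# define _FPU_GETCW(cw) __asm__ __volatile__ ("mrs %0, fpcr" : "=r" (cw))
# define _FPU_SETCW(cw) __asm__ __volatile__ ("msr fpcr, %0" : : "r" (cw))
#endif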

	* sysdeps/aarch64/fpu/fpu_control.h: Use builtins for accessing
	FPCR/FPSR.
2018-02-09 16:59:23 +00:00
Siddhesh Poyarekar
84c94d2fd9 aarch64: Use the L() macro for labels in memcmp
The L() macro makes the assembly a bit more readable.

	* sysdeps/aarch64/memcmp.S: Use L() macro for labels.
2018-02-02 10:15:21 +05:30
Szabolcs Nagy
3d1d79283e aarch64: fix static pie enabled libc when main is in a shared library
In the static pie enabled libc, crt1.o uses the same position independent
code as rcrt1.o and crt1.o is used instead of Scrt1.o when -no-pie
executables are linked.  When main is not defined in the executable but
in a shared library, crt1.o is currently broken: it assumes main is local.
(glibc has a test for this but I missed it in my previous testing.)

To make both rcrt1.o and crt1.o happy with the same code, a wrapper is
introduced around main: with this crt1.o works with extern main symbol
while rcrt1.o does not depend on GOT relocations. (The change only
affects static pie enabled libc. Further simplification of start.S is
possible in the future by using the same approach for Scrt1.o too.)
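
In C terms the wrapper amounts to the following (a hypothetical analogy;
the actual change is AArch64 assembly in start.S):

extern int main (int argc, char **argv, char **envp);

/* __wrap_main is a local symbol, so rcrt1.o can reach it without GOT
   relocations, while the call to the extern main still works when main
   is defined in a shared library.  */
static int
__wrap_main (int argc, char **argv, char **envp)
{
  return main (argc, argv, envp);
}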

	* aarch64/start.S (_start): Use __wrap_main.
	(__wrap_main): New local symbol.
2018-01-12 18:10:03 +00:00
Joseph Myers
688903eb3e Update copyright dates with scripts/update-copyrights.
* All files with FSF copyright notices: Update copyright dates
	using scripts/update-copyrights.
	* locale/programs/charmap-kw.h: Regenerated.
	* locale/programs/locfile-kw.h: Likewise.
2018-01-01 00:32:25 +00:00
Szabolcs Nagy
8bfb461e20 aarch64: update libm-test-ulps
* sysdeps/aarch64/libm-test-ulps: Update.
2017-12-20 12:07:10 +00:00
Adhemerval Zanella
4e00196912 aarch64: fix memset with --disable-multi-arch
* sysdeps/aarch64/memset.S (MEMSET): Define.
2017-12-20 12:05:32 +00:00
Szabolcs Nagy
14d886edbd aarch64: fix start code for static pie
There are three flavors of the crt startup code:

1) crt1.o used for non-pie,
2) Scrt1.o used for dynamic linked pie (dynamic linker relocates),
3) rcrt1.o used for static linked pie (self relocation is needed)

In the --enable-static-pie case crt1.o is built with -DPIC and in case
of static linking it interposes _dl_relocate_static_pie in libc to
avoid self relocation.

Scrt1.o is built with -DPIC -DSHARED and it relies on GOT entries that
the static linker cannot relax and that thus need relocation before the
start code is executed, so rcrt1.o needs a separate implementation.

This implementation does not work for .text > 4G position independent
executables, which is fine since the toolchain does not support
-mcmodel=large with -fPIE.

Tests pass with ld/22269 and ld/22263 binutils bugs fixed.

	* sysdeps/aarch64/start.S (_start): Handle PIC && !SHARED case.
2017-12-18 10:07:07 +00:00
Siddhesh Poyarekar
2bce01ebba aarch64: Improve strcmp unaligned performance
Replace the simple byte-wise compare in the misaligned case with a
dword compare with page boundary checks in place.  For simplicity I've
chosen a 4K page boundary so that we don't have to query the actual
page size on the system (see the sketch below).

This results in up to 3x improvement in performance in the unaligned
case on falkor and about 2.5x improvement on mustang as measured using
bench-strcmp.
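
A hedged C sketch of the guard (4096 is the conservative page size the
message describes; the real check is done in assembly):

#include <stdint.h>

#define PAGE_SIZE 4096

/* Return nonzero if an 8-byte load starting at P could cross into the
   next page; in that case fall back to byte-wise comparison.  */
static inline int
load_may_cross_page (const char *p)
{
  return ((uintptr_t) p & (PAGE_SIZE - 1)) > PAGE_SIZE - 8;
}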

	* sysdeps/aarch64/strcmp.S (misaligned8): Compare dword at a
	time whenever possible.
2017-12-13 18:50:27 +05:30
Siddhesh Poyarekar
4c1d801a59 aarch64: Avoid hidden symbols for memcpy/memmove into static binaries
The __GI_* symbol aliases for __memcpy_generic are unnecessary in
static binaries since they are never used there.  Add them only for
libc.so, where they avoid the PLT.  Maybe some time in the future we
need to evaluate the relative cost of the PLT vs the gains from
multiarch memcpy implementations and decide whether to drop this
completely.

	* sysdeps/aarch64/multiarch/memcpy_generic.S (__GI_memcpy):
	Define only for libc.so.
2017-12-04 21:17:17 +05:30
Joseph Myers
15ff490014 Use libm_alias_float for aarch64.
Continuing the preparation for additional _FloatN / _FloatNx function
aliases, this patch makes aarch64 libm function implementations use
libm_alias_float to define function aliases.

Tested with build-many-glibcs.py for aarch64-linux-gnu that installed
stripped shared libraries are unchanged by the patch.

	* sysdeps/aarch64/fpu/s_ceilf.c: Include <libm-alias-float.h>.
	(ceilf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_floorf.c: Include <libm-alias-float.h>.
	(floorf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_fmaf.c: Include <libm-alias-float.h>.
	(fmaf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_fmaxf.c: Include <libm-alias-float.h>.
	(fmaxf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_fminf.c: Include <libm-alias-float.h>.
	(fminf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_llrintf.c: Include <libm-alias-float.h>.
	(llrintf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_llroundf.c: Include <libm-alias-float.h>.
	(llroundf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_lrintf.c: Include <libm-alias-float.h>.
	(lrintf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_lroundf.c: Include <libm-alias-float.h>.
	(lroundf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_nearbyintf.c: Include
	<libm-alias-float.h>.
	(nearbyintf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_rintf.c: Include <libm-alias-float.h>.
	(rintf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_roundf.c: Include <libm-alias-float.h>.
	(roundf): Define using libm_alias_float.
	* sysdeps/aarch64/fpu/s_truncf.c: Include <libm-alias-float.h>.
	(truncf): Define using libm_alias_float.
2017-11-28 00:55:42 +00:00
Joseph Myers
f07d2ec8c0 Use libm_alias_double for aarch64.
Continuing the preparation for additional _FloatN / _FloatNx function
aliases, this patch makes aarch64 libm function implementations use
libm_alias_double to define function aliases.

Tested with build-many-glibcs.py for aarch64-linux-gnu that installed
stripped shared libraries are unchanged by the patch.

	* sysdeps/aarch64/fpu/s_ceil.c: Include <libm-alias-double.h>.
	(ceil): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_floor.c: Include <libm-alias-double.h>.
	(floor): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_fma.c: Include <libm-alias-double.h>.
	(fma): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_fmax.c: Include <libm-alias-double.h>.
	(fmax): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_fmin.c: Include <libm-alias-double.h>.
	(fmin): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_llrint.c: Include <libm-alias-double.h>.
	(llrint): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_llround.c: Include <libm-alias-double.h>.
	(llround): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_lrint.c: Include <libm-alias-double.h>.
	(lrint): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_lround.c: Include <libm-alias-double.h>.
	(lround): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_nearbyint.c: Include <libm-alias-double.h>.
	(nearbyint): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_rint.c: Include <libm-alias-double.h>.
	(rint): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_round.c: Include <libm-alias-double.h>.
	(round): Define using libm_alias_double.
	* sysdeps/aarch64/fpu/s_trunc.c: Include <libm-alias-double.h>.
	(trunc): Define using libm_alias_double.
2017-11-27 23:54:32 +00:00
Siddhesh Poyarekar
5a67c4fa01 aarch64: Optimized memset for falkor
The generic memset reads dczid_el0 on every memset.  This has a
significant impact on falkor for a range of sizes because reading
dczid_el0 is slow.

The DZP bit in the dczid_el0 register does not change dynamically, so
it is safe to read once during program startup.  With this patch
dczid_el0 is read once during startup and zva_size is cached.  This is
used to invoke the falkor-specific memset; the generic memset routine
remains unchanged.
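
A sketch of the startup caching, using the mask names from the
ChangeLog below (illustrative, not the exact patch):

#define DCZID_DZP_MASK (1 << 4)
#define DCZID_BS_MASK 0xf

static unsigned int zva_size;   /* cached once at startup */

static void
init_zva_size (void)
{
  unsigned long int dczid;
  __asm__ __volatile__ ("mrs %0, dczid_el0" : "=r" (dczid));
  /* The DZP bit set means DC ZVA is prohibited; leave zva_size at 0.
     Otherwise BS encodes log2 of the zeroing block size in words.  */
  if ((dczid & DCZID_DZP_MASK) == 0)
    zva_size = 4 << (dczid & DCZID_BS_MASK);
}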

The gains due to this are significant for falkor, with run time
reductions as high as 48%.  Here's a sample from the falkor tests:

Function: memset
Variant: walk
                      simple_memset	__memset_falkor	__memset_generic
=====================================================================
length=256, char=0:   139.96 (-698.28%)	   9.07 ( 48.26%)  17.53
length=257, char=0:   140.50 (-699.03%)	   9.53 ( 45.80%)  17.58
length=258, char=0:   140.96 (-703.95%)	   9.58 ( 45.36%)  17.53
length=259, char=0:   141.56 (-705.16%)	   9.53 ( 45.79%)  17.58
length=260, char=0:   142.15 (-710.76%)	   9.57 ( 45.39%)  17.53
length=261, char=0:   142.50 (-710.39%)	   9.53 ( 45.78%)  17.58
length=262, char=0:   142.97 (-715.09%)	   9.57 ( 45.42%)  17.54
length=263, char=0:   143.51 (-716.18%)	   9.53 ( 45.80%)  17.58
length=264, char=0:   143.93 (-720.55%)	   9.58 ( 45.39%)  17.54
length=265, char=0:   144.56 (-722.07%)	   9.53 ( 45.80%)  17.59
length=266, char=0:   144.98 (-726.42%)	   9.58 ( 45.42%)  17.54
length=267, char=0:   145.53 (-727.53%)	   9.53 ( 45.80%)  17.59
length=268, char=0:   146.25 (-731.81%)	   9.53 ( 45.79%)  17.58
length=269, char=0:   146.52 (-735.39%)	   9.53 ( 45.66%)  17.54
length=270, char=0:   146.97 (-735.81%)	   9.53 ( 45.80%)  17.58
length=271, char=0:   147.54 (-741.08%)	   9.58 ( 45.38%)  17.54
length=512, char=0:   268.26 (-1307.85%)  12.06 ( 36.71%)  19.05
length=513, char=0:   268.73 (-1273.89%)  13.56 ( 30.68%)  19.56
length=514, char=0:   269.31 (-1276.89%)  13.56 ( 30.68%)  19.56
length=515, char=0:   269.73 (-1279.05%)  13.56 ( 30.68%)  19.56
length=516, char=0:   270.34 (-1282.24%)  13.56 ( 30.67%)  19.56
length=517, char=0:   270.83 (-1284.71%)  13.56 ( 30.66%)  19.56
length=518, char=0:   271.20 (-1286.54%)  13.56 ( 30.67%)  19.56
length=519, char=0:   271.67 (-1288.67%)  13.65 ( 30.24%)  19.56
length=520, char=0:   272.14 (-1291.04%)  13.65 ( 30.22%)  19.56
length=521, char=0:   272.66 (-1293.69%)  13.65 ( 30.23%)  19.56
length=522, char=0:   273.14 (-1296.13%)  13.65 ( 30.20%)  19.56
length=523, char=0:   273.64 (-1298.75%)  13.65 ( 30.23%)  19.56
length=524, char=0:   274.34 (-1302.16%)  13.66 ( 30.20%)  19.57
length=525, char=0:   274.64 (-1297.78%)  13.56 ( 30.99%)  19.65
length=526, char=0:   275.20 (-1300.04%)  13.56 ( 31.01%)  19.66
length=527, char=0:   275.66 (-1302.86%)  13.56 ( 30.99%)  19.65
length=1024, char=0:  524.46 (-2169.75%)  20.12 ( 12.92%)  23.11
length=1025, char=0:  525.14 (-2124.63%)  21.62 (  8.40%)  23.61
length=1026, char=0:  525.59 (-2125.36%)  21.88 (  7.37%)  23.62
length=1027, char=0:  525.98 (-2127.14%)  21.62 (  8.46%)  23.62
length=1028, char=0:  526.68 (-2131.10%)  21.62 (  8.42%)  23.61
length=1029, char=0:  527.10 (-2131.70%)  21.79 (  7.73%)  23.62
length=1030, char=0:  527.54 (-2118.51%)  21.62 (  9.10%)  23.78
length=1031, char=0:  527.98 (-2136.37%)  21.62 (  8.43%)  23.61
length=1032, char=0:  528.70 (-2139.38%)  21.62 (  8.43%)  23.61
length=1033, char=0:  529.25 (-2124.37%)  21.62 (  9.11%)  23.79
length=1034, char=0:  529.48 (-2142.95%)  21.62 (  8.43%)  23.61
length=1035, char=0:  530.11 (-2145.13%)  21.62 (  8.44%)  23.61
length=1036, char=0:  530.76 (-2147.10%)  21.79 (  7.73%)  23.62
length=1037, char=0:  531.03 (-2149.45%)  21.62 (  8.42%)  23.61
length=1038, char=0:  531.64 (-2151.87%)  21.62 (  8.42%)  23.61
length=1039, char=0:  531.99 (-2151.63%)  21.80 (  7.75%)  23.63

	* sysdeps/aarch64/memset-reg.h: New file.
	* sysdeps/aarch64/memset.S: Use it.
	(__memset): Rename to MEMSET macro.
	[ZVA_MACRO]: Use zva_macro.
	* sysdeps/aarch64/multiarch/Makefile (sysdep_routines):
	Add memset_generic and memset_falkor.
	* sysdeps/aarch64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Add memset ifuncs.
	* sysdeps/aarch64/multiarch/init-arch.h (INIT_ARCH): New
	local variable zva_size.
	* sysdeps/aarch64/multiarch/memset.c: New file.
	* sysdeps/aarch64/multiarch/memset_generic.S: New file.
	* sysdeps/aarch64/multiarch/memset_falkor.S: New file.
	* sysdeps/aarch64/multiarch/rtld-memset.S: New file.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.c
	(DCZID_DZP_MASK): New macro.
	(DCZID_BS_MASK): Likewise.
	(init_cpu_features): Read and set zva_size.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.h
	(struct cpu_features): New member zva_size.
2017-11-20 18:25:04 +05:30
Adhemerval Zanella
58a813bf6e aarch64: Fix f{max,min}{f} build for GCC 4.9 and 5
GCC 4.9 and 5 do not generate a correct f{max,min}nm instruction for
__builtin_{fmax,fmin}{f} without -ffinite-math-only.  It is clearly a
compiler issue, since the instruction handles NaN and Inf correctly and
GCC 6+ does not show the problem.

We can backport a fix to GCC 5, raise the minimum required GCC version
for aarch64 (since the GCC 4.9 branch is now closed [1]) and/or add a
configure check for this issue.  However, I think -ffinite-math-only
should be safe for these specific implementations and it is a simpler
solution.

Checked on aarch64-linux-gnu with GCC 5.3.1.

	* sysdeps/aarch64/fpu/Makefile (CFLAGS-s_fmax.c, CFLAGS-s_fmaxf.c,
	CFLAGS-s_fmin.c, CFLAGS-s_fminf.c): New rule: add -ffinite-math-only.

[1] https://gcc.gnu.org/ml/gcc/2016-08/msg00010.html

Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2017-11-17 09:23:07 -02:00
Adhemerval Zanella
06be6368da nptl: Define __PTHREAD_MUTEX_{NUSERS_AFTER_KIND,USE_UNION}
This patch adds two new internal defines to set the internal
pthread_mutex_t layout required by the supported ABIs:

  1. __PTHREAD_MUTEX_NUSERS_AFTER_KIND, which controls whether the
     __nusers field is defined before or after __kind.  The preferred
     value for new ports is 0, which places __nusers before __kind.

  2. __PTHREAD_MUTEX_USE_UNION, which controls whether the internal
     __spins and __list members are placed inside a union for
     linuxthreads compatibility.  The preferred value for new ports is
     0, which defines both fields without a union (see the layout
     sketch below).

It fixes the wrong offset of the __kind value on x86_64-linux-gnu-x32.
Checked with a make check run-built-tests=no on all affected ABIs.
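
A simplified sketch of how the two defines select the layout (field
list trimmed; the real struct lives in bits/thread-shared-types.h):

typedef struct __pthread_internal_list
{
  struct __pthread_internal_list *__prev, *__next;
} __pthread_list_t;

typedef struct __pthread_internal_slist
{
  struct __pthread_internal_slist *__next;
} __pthread_slist_t;

struct __pthread_mutex_s
{
  int __lock;
  unsigned int __count;
  int __owner;
#if !__PTHREAD_MUTEX_NUSERS_AFTER_KIND
  unsigned int __nusers;
#endif
  int __kind;
#if __PTHREAD_MUTEX_NUSERS_AFTER_KIND
  unsigned int __nusers;
#endif
#if __PTHREAD_MUTEX_USE_UNION
  union  /* linuxthreads-compatible layout */
  {
    struct { short __espins; short __eelision; } __elision_data;
    __pthread_slist_t __list;
  };
#else
  short __spins;
  short __elision;
  __pthread_list_t __list;
#endif
};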

	[BZ #22298]
	* nptl/allocatestack.c (allocate_stack): Check if
	__PTHREAD_MUTEX_HAVE_PREV is non-zero, instead if
	__PTHREAD_MUTEX_HAVE_PREV is defined.
	* nptl/descr.h (pthread): Likewise.
	* nptl/nptl-init.c (__pthread_initialize_minimal_internal):
	Likewise.
	* nptl/pthread_create.c (START_THREAD_DEFN): Likewise.
	* sysdeps/nptl/fork.c (__libc_fork): Likewise.
	* sysdeps/nptl/pthread.h (PTHREAD_MUTEX_INITIALIZER): Likewise.
	* sysdeps/nptl/bits/thread-shared-types.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New
	defines.
	(__pthread_internal_list): Check __PTHREAD_MUTEX_USE_UNION instead
	of __WORDSIZE for internal layout.
	(__pthread_mutex_s): Check __PTHREAD_MUTEX_NUSERS_AFTER_KIND instead
	of __WORDSIZE for internal __nusers layout and __PTHREAD_MUTEX_USE_UNION
	instead of __WORDSIZE whether to use an union for __spins and __list
	fields.
	(__PTHREAD_MUTEX_HAVE_PREV): Define also for __PTHREAD_MUTEX_USE_UNION
	case.
	* sysdeps/aarch64/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New
	defines.
	* sysdeps/alpha/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/arm/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/hppa/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/ia64/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/m68k/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/microblaze/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/mips/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/nios2/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/powerpc/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/s390/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/sh/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/sparc/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/tile/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/x86/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.

Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2017-11-07 09:48:41 -02:00
Adhemerval Zanella
dff91cd45e nptl: Add tests for internal pthread_mutex_t offsets
This patch adds a new build-time test to check the offsets of
user-visible internal fields.  Although currently the only field which
is statically initialized to a non-zero value is
pthread_mutex_t.__data.__kind, the tests also check the offsets of the
internal __kind, __spins, __elision (if supported), and __list members.
An internal header (pthread-offsets.h) is added to each major ABI with
the reference values.

Checked on x86_64-linux-gnu and with a build check for all affected
ABIs (aarch64-linux-gnu, alpha-linux-gnu, arm-linux-gnueabihf,
hppa-linux-gnu, i686-linux-gnu, ia64-linux-gnu, m68k-linux-gnu,
microblaze-linux-gnu, mips64-linux-gnu, mips64-n32-linux-gnu,
mips-linux-gnu, powerpc64le-linux-gnu, powerpc-linux-gnu,
s390-linux-gnu, s390x-linux-gnu, sh4-linux-gnu, sparc64-linux-gnu,
sparcv9-linux-gnu, tilegx-linux-gnu, tilegx-linux-gnu-x32,
tilepro-linux-gnu, x86_64-linux-gnu, and x86_64-linux-x32).
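
A minimal sketch of such a build-time check, using C11 _Static_assert
(an assumption about the mechanism, and the offset value below is
purely illustrative):

#include <stddef.h>
#include <pthread.h>

#define __PTHREAD_MUTEX_KIND_OFFSET 16  /* hypothetical reference value */

_Static_assert (offsetof (pthread_mutex_t, __data.__kind)
                == __PTHREAD_MUTEX_KIND_OFFSET,
                "pthread_mutex_t.__data.__kind offset is part of the ABI");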

	* nptl/pthreadP.h (ASSERT_PTHREAD_STRING,
	ASSERT_PTHREAD_INTERNAL_OFFSET): New macro.
	* nptl/pthread_mutex_init.c (__pthread_mutex_init): Add build time
	checks for internal pthread_mutex_t offsets.
	* sysdeps/aarch64/nptl/pthread-offsets.h
	(__PTHREAD_MUTEX_NUSERS_OFFSET, __PTHREAD_MUTEX_KIND_OFFSET,
	__PTHREAD_MUTEX_SPINS_OFFSET, __PTHREAD_MUTEX_ELISION_OFFSET,
	__PTHREAD_MUTEX_LIST_OFFSET): New macro.
	* sysdeps/alpha/nptl/pthread-offsets.h: Likewise.
	* sysdeps/arm/nptl/pthread-offsets.h: Likewise.
	* sysdeps/hppa/nptl/pthread-offsets.h: Likewise.
	* sysdeps/i386/nptl/pthread-offsets.h: Likewise.
	* sysdeps/ia64/nptl/pthread-offsets.h: Likewise.
	* sysdeps/m68k/nptl/pthread-offsets.h: Likewise.
	* sysdeps/microblaze/nptl/pthread-offsets.h: Likewise.
	* sysdeps/mips/nptl/pthread-offsets.h: Likewise.
	* sysdeps/nios2/nptl/pthread-offsets.h: Likewise.
	* sysdeps/powerpc/nptl/pthread-offsets.h: Likewise.
	* sysdeps/s390/nptl/pthread-offsets.h: Likewise.
	* sysdeps/sh/nptl/pthread-offsets.h: Likewise.
	* sysdeps/sparc/nptl/pthread-offsets.h: Likewise.
	* sysdeps/tile/nptl/pthread-offsets.h: Likewise.
	* sysdeps/x86_64/nptl/pthread-offsets.h: Likewise.

Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2017-11-07 09:48:28 -02:00
Szabolcs Nagy
659ca26736 aarch64: optimize _dl_tlsdesc_dynamic fast path
Remove some load/store instructions from the dynamic tlsdesc resolver
fast path.  This gives around 20% faster tls access in dlopened shared
libraries (assuming glibc ran out of static tls space).

	* sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_dynamic): Optimize.
2017-11-03 14:50:55 +00:00
Szabolcs Nagy
91c5a366d8 aarch64: Remove barriers from TLS descriptor functions
Remove ldar synchronization and most lazy TLSDESC initialization
related code.

	* sysdeps/aarch64/dl-machine.h (elf_machine_runtime_setup): Remove
	DT_TLSDESC_GOT initialization.
	* sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_return_lazy): Remove.
	(_dl_tlsdesc_resolve_rela): Likewise.
	(_dl_tlsdesc_resolve_hold): Likewise.
	(_dl_tlsdesc_undefweak): Remove ldar.
	(_dl_tlsdesc_dynamic): Likewise.
	* sysdeps/aarch64/dl-tlsdesc.h (_dl_tlsdesc_return_lazy): Remove.
	(_dl_tlsdesc_resolve_rela): Likewise.
	(_dl_tlsdesc_resolve_hold): Likewise.
	* sysdeps/aarch64/tlsdesc.c (_dl_tlsdesc_resolve_rela_fixup): Remove.
	(_dl_tlsdesc_resolve_hold_fixup): Likewise.
	(_dl_tlsdesc_resolve_rela): Likewise.
	(_dl_tlsdesc_resolve_hold): Likewise.
2017-11-03 14:43:32 +00:00
Szabolcs Nagy
b7cf203b5c aarch64: Disable lazy symbol binding of TLSDESC
Always do TLS descriptor initialization at load time during relocation
processing to avoid barriers at every TLS access. In non-dlopened shared
libraries the overhead of tls access vs static global access is > 3x
bigger when lazy initialization is used (_dl_tlsdesc_return_lazy)
compared to bind-now (_dl_tlsdesc_return), so the barriers dominate tls
access performance.

TLSDESC relocs are in DT_JMPREL which are processed at load time using
elf_machine_lazy_rel which is only supposed to do lightweight
initialization using the DT_TLSDESC_PLT trampoline (the trampoline code
jumps to the entry point in DT_TLSDESC_GOT which does the lazy tlsdesc
initialization at runtime).  This patch changes elf_machine_lazy_rel
in aarch64 to do the symbol binding and initialization as if DF_BIND_NOW
was set, so the non-lazy code path of elf/do-rel.h was replicated.

The static linker could be changed to emit TLSDESC relocs in DT_REL*,
which are processed non-lazily, but the goal of this patch is to always
guarantee bind-now semantics, even if the binary was produced with an
old linker, so the barriers can be dropped in tls descriptor functions.

After this change the synchronizing ldar instructions can be dropped
as well as the lazy initialization machinery including the DT_TLSDESC_GOT
setup.

I believe this should be done on all targets, including ones where no
barrier is needed for lazy initialization.  There is very little gain in
optimizing for large number of symbolic tlsdesc relocations which is an
extremely uncommon case.  And currently the tlsdesc entries are only
read-only protected with -z now, and some hardenings against writable
JUMPSLOT relocs don't work for TLSDESC, so they are a security hazard.
(But to fix that the static linker has to be changed.)
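
For reference, a TLS descriptor is conceptually a (function, argument)
pair in the GOT; a simplified sketch of the shape (the real definition
is in sysdeps/aarch64/dl-tlsdesc.h):

#include <stddef.h>

struct tlsdesc
{
  ptrdiff_t (*entry) (struct tlsdesc *);  /* called on every TLS access */
  void *arg;                              /* data the entry point needs */
};

With bind-now semantics both fields are fully written during relocation
processing, before any code can call entry, so the acquire barriers that
paired with lazy concurrent initialization become unnecessary.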

	* sysdeps/aarch64/dl-machine.h (elf_machine_lazy_rel): Do symbol
	binding and initialization non-lazily for R_AARCH64_TLSDESC.
2017-11-03 14:41:35 +00:00
Szabolcs Nagy
be080b6c14 aarch64: Add missing math Makefile for recent commit
Without -fno-math-errno, the builtins just do a call instead of
inlining a single instruction.
2017-10-23 15:34:36 +01:00
Michael Collison
5062680c60 aarch64: Implement math acceleration via builtins
This patch converts asm statements into builtins for AArch64.  As an
example for the file sysdeps/aarch64/fpu/s_ceil.c, we convert the
function from

double
__ceil (double x)
{
  double result;
  asm ("frintp\t%d0, %d1" :
       "=w" (result) : "w" (x) );
  return result;
}

into

double
__ceil (double x)
{
  return __builtin_ceil (x);
}

Tested on aarch64-linux-gnu with gcc-4.9.4 and gcc-6.

	* sysdeps/aarch64/fpu/e_sqrt.c (ieee754_sqrt): Replace asm statements
	with __builtin_sqrt.
	* sysdeps/aarch64/fpu/e_sqrtf.c (ieee754_sqrtf): Replace asm statements
	with __builtin_sqrtf.
	* sysdeps/aarch64/fpu/s_ceil.c (__ceil): Replace asm statements
	with __builtin_ceil.
	* sysdeps/aarch64/fpu/s_ceilf.c (__ceilf): Replace asm statements
	with __builtin_ceilf.
	* sysdeps/aarch64/fpu/s_floor.c (__floor): Replace asm statements
	with __builtin_floor.
	* sysdeps/aarch64/fpu/s_floorf.c (__floorf): Replace asm statements
	with __builtin_floorf.
	* sysdeps/aarch64/fpu/s_fma.c (__fma): Replace asm statements
	with __builtin_fma.
	* sysdeps/aarch64/fpu/s_fmaf.c (__fmaf): Replace asm statements
	with __builtin_fmaf.
	* sysdeps/aarch64/fpu/s_fmax.c (__fmax): Replace asm statements
	with __builtin_fmax.
	* sysdeps/aarch64/fpu/s_fmaxf.c (__fmaxf): Replace asm statements
	with __builtin_fmaxf.
	* sysdeps/aarch64/fpu/s_fmin.c (__fmin): Replace asm statements
	with __builtin_fmin.
	* sysdeps/aarch64/fpu/s_fminf.c (__fminf): Replace asm statements
	with __builtin_fminf.
	* sysdeps/aarch64/fpu/s_frint.c: Delete file.
	* sysdeps/aarch64/fpu/s_frintf.c: Delete file.
	* sysdeps/aarch64/fpu/s_llrint.c (__llrint): Replace asm statements
	with builtin_rint and conversion to int.
	* sysdeps/aarch64/fpu/s_llrintf.c (__llrintf): Likewise.
	* sysdeps/aarch64/fpu/s_llround.c (__llround): Replace asm statements
	with builtin_llround.
	* sysdeps/aarch64/fpu/s_llroundf.c (__llroundf): Likewise.
	* sysdeps/aarch64/fpu/s_lrint.c (__lrint): Replace asm statements
	with builtin_rint and conversion to long int.
	* sysdeps/aarch64/fpu/s_lrintf.c (__lrintf): Likewise.
	* sysdeps/aarch64/fpu/s_lround.c (__lround): Replace asm statements
	with builtin_lround.
	* sysdeps/aarch64/fpu/s_lroundf.c (__lroundf): Replace asm statements
	with builtin_lroundf.
	* sysdeps/aarch64/fpu/s_nearbyint.c (__nearbyint): Replace asm
	statements with __builtin_nearbyint.
	* sysdeps/aarch64/fpu/s_nearbyintf.c (__nearbyintf): Replace asm
	statements with __builtin_nearbyintf.
	* sysdeps/aarch64/fpu/s_rint.c (__rint): Replace asm statements
	with __builtin_rint.
	* sysdeps/aarch64/fpu/s_rintf.c (__rintf): Replace asm statements
	with __builtin_rintf.
	* sysdeps/aarch64/fpu/s_round.c (__round): Replace asm statements
	with __builtin_round.
	* sysdeps/aarch64/fpu/s_roundf.c (__roundf): Replace asm statements
	with __builtin_roundf.
	* sysdeps/aarch64/fpu/s_trunc.c (__trunc): Replace asm statements
	with __builtin_trunc.
	* sysdeps/aarch64/fpu/s_truncf.c (__truncf): Replace asm statements
	with __builtin_truncf.
	* sysdeps/aarch64/fpu/Makefile: Build e_sqrt[f].c with -fno-math-errno.
2017-10-23 10:32:56 +01:00
Szabolcs Nagy
a68ba2f3cd [AARCH64] Rewrite elf_machine_load_address using _DYNAMIC symbol
This patch rewrites aarch64 elf_machine_load_address to use the special
_DYNAMIC symbol instead of _dl_start.

The static address of _DYNAMIC symbol is stored in the first GOT entry.
Here is the change which makes this solution work (part of binutils 2.24):
https://sourceware.org/ml/binutils/2013-06/msg00248.html

The i386 and x86_64 targets use the same method as well.

The original implementation relies on the R_AARCH64_ABS32 relocation
being resolved at link time and on the static address fitting in 32
bits.  However, in LP64 the address is normally 64 bits.

Here is the C version, which should be portable in all cases.
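
A sketch of that approach (my reconstruction from the description
above; ElfW, attribute_hidden and _DYNAMIC come from glibc internal
headers, and the actual patch may differ in details):

static inline ElfW(Addr)
elf_machine_load_address (void)
{
  /* GOT[0] holds the link-time (static) address of _DYNAMIC, while
     &_DYNAMIC evaluates to its run-time address; the difference is
     the load offset.  */
  extern const ElfW(Addr) _GLOBAL_OFFSET_TABLE_[] attribute_hidden;
  return (ElfW(Addr)) &_DYNAMIC - _GLOBAL_OFFSET_TABLE_[0];
}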

	* sysdeps/aarch64/dl-machine.h (elf_machine_load_address): Use
	_DYNAMIC symbol to calculate load address.
2017-10-18 17:35:16 +01:00
Siddhesh Poyarekar
dd5bc7f1b3 aarch64: Optimized implementation of memmove for Qualcomm Falkor
This is an optimized memmove implementation for the Qualcomm Falkor
processor core.  Due to the way the falkor memcpy needs to be written,
code cannot easily be shared between memmove and memcpy as in other
aarch64 memcpy implementations, which is why this routine is separate.
The underlying principle is the same as in memcpy: try to use registers
with the same lower 4 bits for fetching the same stream, thus
optimizing hardware prefetcher performance.

The memcpy copy loop copies 64 bytes at a time using the same register
pair since that's the way to train the hardware prefetcher on the
falkor core.  memmove cannot quite do that since it needs to avoid
overlaps, so it does the next best thing: a 32-byte loop with a 32-byte
tail (prefetching a loop ahead to account for overlapping locations),
using register pairs that alias so that they hit the same prefetcher.
Due to this difference in loop size they currently have to be separate
implementations, but efforts are ongoing to get memmove to fall back to
memcpy whenever it can without simply duplicating all of the code.
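
A hedged C sketch of just the overlap dispatch (the register-aliasing
trick exists only at the assembly level; forward_copy/backward_copy are
hypothetical helpers):

#include <stddef.h>
#include <stdint.h>

extern void *forward_copy (void *, const void *, size_t);   /* hypothetical */
extern void *backward_copy (void *, const void *, size_t);  /* hypothetical */

void *
memmove_sketch (void *dst, const void *src, size_t n)
{
  /* Unsigned wraparound makes this true both when dst is below src and
     when it starts at or past src + n, i.e. whenever a forward copy
     cannot overwrite not-yet-read source bytes.  */
  if ((uintptr_t) dst - (uintptr_t) src >= n)
    return forward_copy (dst, src, n);
  return backward_copy (dst, src, n);
}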

Performance:

The routine fares around 20-25% better than the generic memmove for
most medium to large sizes (i.e. > 128 bytes) for the new walking
memmove benchmark (memmove-walk) with an unexplained regression
between 1K and 2K.  The minor regression is something worth looking
into for us, but the remaining gains are significant enough that we
would like this included upstream while we look into the cause of the
regression.  Here is a snippet of the numbers as generated from the
microbenchmark by the compare_strings script.  Comparisons are against
__memmove_generic:

Function: memmove
Variant: walk
                                    __memmove_thunderx	__memmove_falkor	__memmove_generic
========================================================================================================================
<snip>
                        length=16384:  12508800.00 (  6.09%)	 11486800.00 ( 13.76%)	 13319600.00
                        length=16400:  13614200.00 ( -0.67%)	 11585000.00 ( 14.33%)	 13523600.00
                        length=16385:  13448400.00 (  0.10%)	 11732700.00 ( 12.84%)	 13461200.00
                        length=16399:  13594100.00 ( -0.22%)	 11859600.00 ( 12.57%)	 13564400.00
                        length=16386:  13211600.00 (  1.13%)	 11503800.00 ( 13.91%)	 13362400.00
                        length=16398:  13218600.00 (  2.12%)	 11573200.00 ( 14.30%)	 13504700.00
                        length=16387:  13510900.00 ( -0.37%)	 11744200.00 ( 12.76%)	 13461300.00
                        length=16397:  13603700.00 ( -0.15%)	 11878200.00 ( 12.55%)	 13583200.00
                        length=16388:  13461700.00 ( -0.13%)	 11558000.00 ( 14.03%)	 13444100.00
                        length=16396:  13517500.00 ( -0.03%)	 11561300.00 ( 14.45%)	 13513900.00
                        length=16389:  13534100.00 (  0.17%)	 11756800.00 ( 13.28%)	 13556900.00
                        length=16395:  13585600.00 (  0.11%)	 11791800.00 ( 13.30%)	 13601200.00
                        length=16390:  13480100.00 ( -0.13%)	 11685500.00 ( 13.20%)	 13462100.00
                        length=16394:  13529900.00 ( -0.23%)	 11549800.00 ( 14.43%)	 13498200.00
                        length=16391:  13595400.00 ( -0.26%)	 11768200.00 ( 13.22%)	 13560600.00
                        length=16393:  13567000.00 (  0.20%)	 11779700.00 ( 13.35%)	 13594700.00
                        length=32768:  71308800.00 ( -6.53%)	 50220800.00 ( 24.98%)	 66939200.00
                        length=32784:  72100800.00 (-11.55%)	 50114100.00 ( 22.47%)	 64636300.00
                        length=32769:  71767000.00 ( -7.10%)	 51238400.00 ( 23.54%)	 67010000.00
                        length=32783:  70113700.00 (-40.95%)	 51129000.00 ( -2.78%)	 49744400.00
                        length=32770:  71367600.00 ( -6.52%)	 50244700.00 ( 25.01%)	 67000900.00
                        length=32782:  64366700.00 (  4.71%)	 50101400.00 ( 25.83%)	 67545600.00
                        length=32771:  71440100.00 ( -6.51%)	 51263900.00 ( 23.57%)	 67074900.00
                        length=32781:  66993000.00 (  0.34%)	 51108300.00 ( 23.97%)	 67220300.00
                        length=32772:  71443900.00 (-60.50%)	 50062100.00 (-12.47%)	 44512600.00
                        length=32780:  71759100.00 ( -6.58%)	 50263200.00 ( 25.35%)	 67328600.00
                        length=32773:  71714900.00 (-33.21%)	 51076600.00 (  5.12%)	 53835400.00
                        length=32779:  71756900.00 ( -6.56%)	 51290800.00 ( 23.83%)	 67337800.00
                        length=32774:  59689300.00 (-34.55%)	 50068400.00 (-12.86%)	 44363300.00
                        length=32778:  71847500.00 (-18.20%)	 50084100.00 ( 17.61%)	 60786500.00
                        length=32775:  71599300.00 ( -6.54%)	 51278200.00 ( 23.70%)	 67204800.00
                        length=32777:  71862900.00 (-60.85%)	 51094000.00 (-14.36%)	 44677900.00
                        length=65536: 282848000.00 ( -6.60%)	199187000.00 ( 24.93%)	265325000.00
                        length=65552: 243285000.00 (-41.61%)	198512000.00 (-15.54%)	171805000.00
                        length=65537: 255415000.00 (-23.47%)	202499000.00 (  2.11%)	206858000.00
                        length=65551: 280122000.00 (-62.95%)	203349000.00 (-18.29%)	171911000.00
                        length=65538: 283676000.00 (-14.46%)	198368000.00 ( 19.96%)	247848000.00
                        length=65550: 275566000.00 (-51.76%)	198494000.00 ( -9.31%)	181581000.00
                        length=65539: 283699000.00 ( -6.58%)	203453000.00 ( 23.57%)	266195000.00
                        length=65549: 286572000.00 ( -6.65%)	202607000.00 ( 24.60%)	268712000.00
                        length=65540: 283710000.00 ( -6.59%)	199161000.00 ( 25.17%)	266160000.00
                        length=65548: 237573000.00 ( 11.48%)	198462000.00 ( 26.06%)	268395000.00
                        length=65541: 284150000.00 ( -6.58%)	203273000.00 ( 23.75%)	266600000.00
                        length=65547: 286250000.00 ( -6.70%)	202594000.00 ( 24.48%)	268263000.00
                        length=65542: 284167000.00 ( -6.60%)	199122000.00 ( 25.31%)	266584000.00
                        length=65546: 285656000.00 ( -6.59%)	198443000.00 ( 25.95%)	268002000.00
                        length=65543: 284600000.00 ( -6.58%)	203247000.00 ( 23.89%)	267030000.00
                        length=65545: 285665000.00 ( -6.40%)	202575000.00 ( 24.55%)	268472000.00
<snip>

	* sysdeps/aarch64/multiarch/Makefile (sysdep_routines): Add
	memmove_falkor.
	* sysdeps/aarch64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Likewise.
	* sysdeps/aarch64/multiarch/memmove.c: Likewise.
	* sysdeps/aarch64/multiarch/memmove_falkor.S: New file.
2017-10-05 22:20:23 +05:30
Szabolcs Nagy
db4f87bad4 aarch64: don't use MIN in dl-machine.h
MIN is used, but param.h may not be included, so expand its
single use inline.

	* sysdeps/aarch64/dl-machine.h (elf_machine_rela): Expand MIN.
2017-10-04 17:49:38 +01:00
Szabolcs Nagy
b2f03cf3a4 AArch64: update libm-test-ulps
Update for new expf and logf.

	* sysdeps/aarch64/libm-test-ulps: Update.
2017-09-28 15:28:46 +01:00
Szabolcs Nagy
72aa623345 Optimized generic expf and exp2f with wrappers
Based on new expf and exp2f code from
https://github.com/ARM-software/optimized-routines/

with wrapper on aarch64:
expf reciprocal-throughput: 2.3x faster
expf latency: 1.7x faster
without wrapper on aarch64:
expf reciprocal-throughput: 3.3x faster
expf latency: 1.7x faster
without wrapper on aarch64:
exp2f reciprocal-throughput: 2.8x faster
exp2f latency: 1.3x faster
libm.so size on aarch64:
.text size: -152 bytes
.rodata size: -1740 bytes
expf/exp2f worst case nearest rounding error: 0.502 ulp
worst case non-nearest rounding error: 1 ulp

Error checks are inline and errno setting is in separate tail-called
functions, but the wrappers are kept in this patch to handle the
_LIB_VERSION==_SVID_ case.  (So e.g. errno is set twice for expf calls
and once for __expf_finite calls on targets where the new code is used.)

Double precision arithmetic is used, which is expected to be faster on
most targets (including soft-float) than single precision, and it is
easier to get good precision results with it.

Const data is kept in a separate translation unit which complicates
maintenance a bit, but is expected to give good code for literal loads
on most targets and allows sharing data across expf, exp2f and powf.
(This data is disabled on i386, m68k and ia64 which have their own
expf, exp2f and powf code.)

Some details may need target-specific tweaks:
- the best convert-and-round-to-int operation in the arg reduction may
differ across targets.
- the code was optimized on an fma target; the optimal polynomial
evaluation may be different without fma.
- gcc does not always generate good code for fp bit representation
access via unions, or it may be inherently slow on some targets.

The libm-test-ulps will need adjustment because:
- The argument reduction ideally uses nearest-rounded rint, but that is
not efficient on most targets, so the polynomial can get evaluated on a
wider interval in non-nearest rounding modes, making 1 ulp errors common
in that case.
- The polynomial is evaluated such that it may have 1 ulp error on
negative tiny inputs with upward rounding.
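
On aarch64 the TOINT_INTRINSICS hooks named in the ChangeLog below can
map directly to builtins that GCC inlines to single instructions
(frinta/fcvtas); a plausible sketch, not necessarily the exact patch:

#include <math.h>    /* double_t */
#include <stdint.h>

#define TOINT_INTRINSICS 1

static inline double_t
roundtoint (double_t x)
{
  return __builtin_round (x);   /* frinta */
}

static inline int32_t
converttoint (double_t x)
{
  return __builtin_lround (x);  /* fcvtas */
}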

	* math/Makefile (type-float-routines): Add math_errf and e_exp2f_data.
	* sysdeps/aarch64/fpu/math_private.h (TOINT_INTRINSICS): Define.
	(roundtoint, converttoint): Likewise.
	* sysdeps/ieee754/flt-32/e_expf.c: New implementation.
	* sysdeps/ieee754/flt-32/e_exp2f.c: New implementation.
	* sysdeps/ieee754/flt-32/e_exp2f_data.c: New file.
	* sysdeps/ieee754/flt-32/math_config.h: New file.
	* sysdeps/ieee754/flt-32/math_errf.c: New file.
	* sysdeps/ieee754/flt-32/t_exp2f.h: Remove.
	* sysdeps/i386/fpu/e_exp2f_data.c: New file.
	* sysdeps/i386/fpu/math_errf.c: New file.
	* sysdeps/ia64/fpu/e_exp2f_data.c: New file.
	* sysdeps/ia64/fpu/math_errf.c: New file.
	* sysdeps/m68k/m680x0/fpu/e_exp2f_data.c: New file.
	* sysdeps/m68k/m680x0/fpu/math_errf.c: New file.
2017-09-25 10:44:39 +01:00
Wilco Dijkstra
ca3a382ea3 Enable unwind info in libc-start.c and backtrace.c
Add unwind info to __libc_start_main so that unwinding continues one
extra level to _start.  Similarly add unwind info to backtrace.
Given many targets require this, do this in a general way.

	* csu/Makefile: Add -funwind-tables to libc-start.c.
	* debug/Makefile: Add -funwind-tables to backtrace.c.
	* sysdeps/aarch64/Makefile: Remove CFLAGS-backtrace.c.
	* sysdeps/arm/Makefile: Likewise.
	* sysdeps/i386/Makefile: Likewise.
	* sysdeps/m68k/Makefile: Likewise.
	* sysdeps/mips/Makefile: Likewise.
	* sysdeps/nios2/Makefile: Likewise.
	* sysdeps/sh/Makefile: Likewise.
	* sysdeps/sparc/Makefile: Likewise.
2017-09-19 15:07:58 +01:00
Wang Boshi
6cd380dd36 AArch64: use movz/movk instead of literal pools in start.S
eXecute-Only Memory (XOM) is a protection mechanism against some ROP
attacks.  XOM marks code as executable and unreadable, so any access to
data in the code section, such as literal pools, faults under XOM.  The
compiler can disable literal pools for C source files, but not for
assembly files, so I use movz/movk instead of literal pools in start.S
for XOM.

I add a MOVL macro built from movz/movk instructions, like the movl
pseudo-instruction in armasm, and use the macro instead of literal
pools.

	* sysdeps/aarch64/start.S: Use MOVL instead of literal pools.
	* sysdeps/aarch64/sysdep.h (MOVL): Add MOVL macro.
2017-09-18 18:15:47 +01:00
Joseph Myers
5a80d39d0d Obsolete pow10 functions.
This patch obsoletes the pow10, pow10f and pow10l functions (makes
them into compat symbols, not available for new ports or static
linking).  The exp10 names for these functions are standardized (in TS
18661-4) and were added in the same glibc version (2.1) as pow10 so
source code can change to use them without any loss of portability.
Since pow10 is deliberately not provided for _Float128, only exp10,
this slightly simplifies moving to the new wrapper templates in the
!LIBM_SVID_COMPAT case, by avoiding needing to arrange for pow10,
pow10f and pow10l to be defined by those templates.

Tested for x86_64, and with build-many-glibcs.py.

	* manual/math.texi (pow10): Do not document.
	(pow10f): Likewise.
	(pow10l): Likewise.
	* math/bits/mathcalls.h [__USE_GNU] (pow10): Do not declare.
	* math/bits/math-finite.h [__USE_GNU] (pow10): Likewise.
	* math/libm-test-exp10.inc (pow10_test): Remove.
	(do_test): Do not call pow10.
	* math/w_exp10_compat.c (pow10): Make into compat symbol.
	[NO_LONG_DOUBLE] (pow10l): Likewise.
	* math/w_exp10f_compat.c (pow10f): Likewise.
	* math/w_exp10l_compat.c (pow10l): Likewise.
	* sysdeps/ia64/fpu/e_exp10.S: Include <shlib-compat.h>.
	(pow10): Make into compat symbol.
	* sysdeps/ia64/fpu/e_exp10f.S: Include <shlib-compat.h>.
	(pow10f): Make into compat symbol.
	* sysdeps/ia64/fpu/e_exp10l.S: Include <shlib-compat.h>.
	(pow10l): Make into compat symbol.
	* sysdeps/ieee754/ldbl-opt/Makefile (libnldbl-calls): Remove
	pow10.
	(CFLAGS-nldbl-pow10.c): Remove variable.
	* sysdeps/ieee754/ldbl-opt/nldbl-pow10.c: Remove file.
	* sysdeps/ieee754/ldbl-opt/w_exp10_compat.c (pow10l): Condition on
	[SHLIB_COMPAT (libm, GLIBC_2_1, GLIBC_2_27)].
	* sysdeps/ieee754/ldbl-opt/w_exp10l_compat.c (compat_symbol):
	Undefine and redefine.
	(pow10l): Make into compat symbol.
	* sysdeps/aarch64/libm-test-ulps: Remove pow10 ulps.
	* sysdeps/alpha/fpu/libm-test-ulps: Likewise.
	* sysdeps/arm/libm-test-ulps: Likewise.
	* sysdeps/hppa/fpu/libm-test-ulps: Likewise.
	* sysdeps/i386/fpu/libm-test-ulps: Likewise.
	* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise.
	* sysdeps/microblaze/libm-test-ulps: Likewise.
	* sysdeps/mips/mips32/libm-test-ulps: Likewise.
	* sysdeps/mips/mips64/libm-test-ulps: Likewise.
	* sysdeps/nios2/libm-test-ulps: Likewise.
	* sysdeps/powerpc/fpu/libm-test-ulps: Likewise.
	* sysdeps/powerpc/nofpu/libm-test-ulps: Likewise.
	* sysdeps/s390/fpu/libm-test-ulps: Likewise.
	* sysdeps/sh/libm-test-ulps: Likewise.
	* sysdeps/sparc/fpu/libm-test-ulps: Likewise.
	* sysdeps/tile/libm-test-ulps: Likewise.
	* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
2017-09-01 21:13:18 +00:00
Steve Ellcey
d9ff799a5b ILP32 math changes
* sysdeps/aarch64/fpu/s_llrint.c (OREG_SIZE): New macro.
	* sysdeps/aarch64/fpu/s_llround.c (OREG_SIZE): Likewise.
	* sysdeps/aarch64/fpu/s_llrintf.c (OREGS, IREGS): Remove.
	(IREG_SIZE, OREG_SIZE): New macros.
	* sysdeps/aarch64/fpu/s_llroundf.c: (OREGS, IREGS): Remove.
	(IREG_SIZE, OREG_SIZE): New macros.
	* sysdeps/aarch64/fpu/s_lrintf.c (IREGS): Remove.
	(IREG_SIZE): New macro.
	* sysdeps/aarch64/fpu/s_lroundf.c (IREGS): Remove.
	(IREG_SIZE): New macro.
	* sysdeps/aarch64/fpu/s_lrint.c (get-rounding-mode.h, stdint.h):
	New includes.
	(IREG_SIZE, OREG_SIZE): Initialize if not already set.
	(OREGS, IREGS): Set based on IREG_SIZE and OREG_SIZE.
	(__CONCATX): Handle exceptions correctly on large values that may
	set FE_INVALID.
	* sysdeps/aarch64/fpu/s_lround.c (IREG_SIZE, OREG_SIZE):
	Initialize if not already set.
	(OREGS, IREGS): Set based on IREG_SIZE and OREG_SIZE.
2017-08-31 13:38:11 -07:00
Steve Ellcey
9eee633b68 Change argument type passed to ifunc resolvers
* sysdeps/aarch64/dl-irel.h (elf_ifunc_invoke): Change argument type
	in resolver call.
2017-08-31 10:34:55 -07:00
Florian Weimer
17e00cc69e elf: Remove internal_function attribute 2017-08-31 16:59:37 +02:00
Steve Ellcey
5a706f649d aarch64: Use PTR_REG macro to fix ILP32 bug and make code consistent
* sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_dynamic):
	Use PTR_REG macro in cmp instruction.
2017-08-22 16:22:05 -07:00
Wilco Dijkstra
922369032c [AArch64] Optimized memcmp.
This is an optimized memcmp for AArch64.  This is a complete rewrite
using a different algorithm.  The previous version split into cases
where both inputs were aligned, where the inputs were mutually aligned,
and an unaligned case handled with a byte loop.  The new version
combines all these cases,
while small inputs of less than 8 bytes are handled separately.

This allows the main code to be sped up using unaligned loads since
there are now at least 8 bytes to be compared.  After the first 8 bytes,
align the first input.  This ensures each iteration does at most one
unaligned access and mutually aligned inputs behave as aligned.
After the main loop, process the last 8 bytes using unaligned accesses.

This improves performance of (mutually) aligned cases by 25% and
unaligned by >500% (yes >6 times faster) on large inputs.
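
A hedged C sketch of the algorithm (little-endian assumed for the bswap
ordering trick; the alignment of the first input after its first 8
bytes is omitted for brevity):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

static inline uint64_t
load64 (const unsigned char *p)
{
  uint64_t v;
  memcpy (&v, p, sizeof v);  /* unaligned 8-byte load */
  return v;
}

int
memcmp_sketch (const void *s1, const void *s2, size_t n)
{
  const unsigned char *p1 = s1, *p2 = s2;

  if (n < 8)  /* small inputs handled separately */
    {
      for (size_t i = 0; i < n; i++)
        if (p1[i] != p2[i])
          return p1[i] - p2[i];
      return 0;
    }

  for (size_t i = 0; i < n; i += 8)
    {
      if (i + 8 > n)
        i = n - 8;  /* final chunk overlaps, loaded unaligned */
      uint64_t a = load64 (p1 + i), b = load64 (p2 + i);
      if (a != b)
        /* On little-endian, byte-swapping makes the lexicographically
           first differing byte the most significant one.  */
        return __builtin_bswap64 (a) > __builtin_bswap64 (b) ? 1 : -1;
    }
  return 0;
}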

	* sysdeps/aarch64/memcmp.S (memcmp):
	Rewrite of optimized memcmp.
2017-08-10 17:00:38 +01:00
Siddhesh Poyarekar
0e02b5107e memcpy_falkor: Fix code style in comments 2017-08-09 12:57:59 +05:30
Siddhesh Poyarekar
36ada5f681 aarch64: Optimized memcpy for Qualcomm Falkor processor
This is an optimized implementation of the memcpy routine that gives a
significant gain in performance for all sizes of copies on the
Qualcomm Falkor processor.  A detailed rationale of the implementation
is written in a comment in the patch.

This implementation improves time for copies up to 128 bytes by up to
15% and for larger copies by up to 35% in the glibc
microbenchmark. The memcpy-random benchmark sees improvements in all
sizes in the range of 13%-18%.

Here are the full numbers extracted from the glibc microbenchmark
using the commands:

../benchtests/scripts/compare_strings.py benchtests/bench-memcpy.out \
		../benchtests/scripts/benchout_strings.schema.json \
		-base=__memcpy_generic length align1 align2

../benchtests/scripts/compare_strings.py benchtests/bench-memcpy-large.out \
		../benchtests/scripts/benchout_strings.schema.json \
		-base=__memcpy_generic length align1 align2

../benchtests/scripts/compare_strings.py benchtests/bench-memcpy-random.out \
		../benchtests/scripts/benchout_strings.schema.json \
		-base=__memcpy_generic max-size

Function: memcpy
Variant: default
                __memcpy_thunderx       __memcpy_falkor __memcpy_generic
================================================================================
length=1,align1=0,align2=0:     33.59 (-115.00%)        15.62 (0.00%)   15.62
length=1,align1=0,align2=0:     16.41 (-10.53%) 14.06 (5.26%)   14.84
length=1,align1=0,align2=0:     14.84 (0.00%)   14.84 (0.00%)   14.84
length=1,align1=0,align2=0:     15.62 (-5.26%)  14.06 (5.26%)   14.84
length=2,align1=0,align2=0:     15.62 (-5.26%)  14.06 (5.26%)   14.84
length=2,align1=1,align2=0:     15.62 (-5.26%)  14.06 (5.26%)   14.84
length=2,align1=0,align2=1:     14.84 (0.00%)   14.06 (5.26%)   14.84
length=2,align1=1,align2=1:     14.84 (-5.56%)  14.06 (0.00%)   14.06
length=4,align1=0,align2=0:     14.06 (0.00%)   14.06 (0.00%)   14.06
length=4,align1=2,align2=0:     14.06 (-5.88%)  14.06 (-5.88%)  13.28
length=4,align1=0,align2=2:     14.06 (0.00%)   14.06 (0.00%)   14.06
length=4,align1=2,align2=2:     14.06 (-5.88%)  14.06 (-5.88%)  13.28
length=8,align1=0,align2=0:     14.84 (-5.56%)  13.28 (5.56%)   14.06
length=8,align1=3,align2=0:     14.06 (0.00%)   13.28 (5.56%)   14.06
length=8,align1=0,align2=3:     13.28 (0.00%)   13.28 (0.00%)   13.28
length=8,align1=3,align2=3:     13.28 (-6.25%)  13.28 (-6.25%)  12.50
length=16,align1=0,align2=0:    13.28 (0.00%)   13.28 (0.00%)   13.28
length=16,align1=4,align2=0:    13.28 (0.00%)   12.50 (5.88%)   13.28
length=16,align1=0,align2=4:    13.28 (0.00%)   13.28 (0.00%)   13.28
length=16,align1=4,align2=4:    13.28 (-6.25%)  12.50 (0.00%)   12.50
length=32,align1=0,align2=0:    14.06 (0.00%)   12.50 (11.11%)  14.06
length=32,align1=5,align2=0:    13.28 (0.00%)   12.50 (5.88%)   13.28
length=32,align1=0,align2=5:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=32,align1=5,align2=5:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=64,align1=0,align2=0:    14.06 (-5.88%)  13.28 (0.00%)   13.28
length=64,align1=6,align2=0:    13.28 (0.00%)   13.28 (0.00%)   13.28
length=64,align1=0,align2=6:    14.06 (5.26%)   14.06 (5.26%)   14.84
length=64,align1=6,align2=6:    14.84 (-11.77%) 14.06 (-5.88%)  13.28
length=128,align1=0,align2=0:   17.19 (-4.76%)  14.84 (9.52%)   16.41
length=128,align1=7,align2=0:   16.41 (4.55%)   15.62 (9.09%)   17.19
length=128,align1=0,align2=7:   16.41 (0.00%)   14.06 (14.29%)  16.41
length=128,align1=7,align2=7:   16.41 (4.55%)   15.62 (9.09%)   17.19
length=256,align1=0,align2=0:   21.88 (-3.70%)  21.09 (0.00%)   21.09
length=256,align1=8,align2=0:   21.09 (-3.85%)  21.09 (-3.85%)  20.31
length=256,align1=0,align2=8:   20.31 (-4.00%)  20.31 (-4.00%)  19.53
length=256,align1=8,align2=8:   21.88 (-7.69%)  20.31 (0.00%)   20.31
length=512,align1=0,align2=0:   28.91 (-2.78%)  28.91 (-2.78%)  28.12
length=512,align1=9,align2=0:   30.47 (-2.63%)  30.47 (-2.63%)  29.69
length=512,align1=0,align2=9:   29.69 (0.00%)   29.69 (0.00%)   29.69
length=512,align1=9,align2=9:   28.12 (-2.86%)  28.12 (-2.86%)  27.34
length=1024,align1=0,align2=0:  44.53 (0.00%)   44.53 (0.00%)   44.53
length=1024,align1=10,align2=0:         50.00 (0.00%)   50.00 (0.00%)   50.00
length=1024,align1=0,align2=10:         49.22 (1.56%)   50.78 (-1.56%)  50.00
length=1024,align1=10,align2=10:        44.53 (-1.79%)  43.75 (0.00%)   43.75
length=2048,align1=0,align2=0:  77.34 (-1.02%)  76.56 (0.00%)   76.56
length=2048,align1=11,align2=0:         89.84 (0.00%)   89.84 (0.00%)   89.84
length=2048,align1=0,align2=11:         89.84 (0.00%)   89.84 (0.00%)   89.84
length=2048,align1=11,align2=11:        75.78 (0.00%)   75.78 (0.00%)   75.78
length=4096,align1=0,align2=0:  141.41 (-0.56%) 140.62 (0.00%)  140.62
length=4096,align1=12,align2=0:         171.09 (-0.46%) 170.31 (0.00%)  170.31
length=4096,align1=0,align2=12:         170.31 (0.00%)  170.31 (0.00%)  170.31
length=4096,align1=12,align2=12:        140.62 (0.00%)  140.62 (0.00%)  140.62
length=8192,align1=0,align2=0:  278.91 (-0.28%) 275.78 (0.84%)  278.12
length=8192,align1=13,align2=0:         338.28 (0.23%)  335.94 (0.92%)  339.06
length=8192,align1=0,align2=13:         338.28 (0.00%)  455.47 (-34.64%)        338.28
length=8192,align1=13,align2=13:        278.12 (-0.28%) 275.78 (0.56%)  277.34
length=16384,align1=0,align2=0:         535.94 (-0.15%) 531.25 (0.73%)  535.16
length=16384,align1=14,align2=0:        659.38 (0.12%)  659.38 (0.12%)  660.16
length=16384,align1=0,align2=14:        659.38 (0.00%)  657.03 (0.36%)  659.38
length=16384,align1=14,align2=14:       535.16 (0.44%)  532.81 (0.87%)  537.50
length=32768,align1=0,align2=0:         1260.94 (10.68%)        1121.88 (20.53%)        1411.72
length=32768,align1=15,align2=0:        1368.75 (10.02%)        1376.56 (9.50%) 1521.09
length=32768,align1=0,align2=15:        1333.59 (10.91%)        1373.44 (8.25%) 1496.88
length=32768,align1=15,align2=15:       1256.25 (13.96%)        1125.78 (22.90%)        1460.16
length=65536,align1=0,align2=0:         2853.91 (30.11%)        2589.06 (36.60%)        4083.59
length=65536,align1=16,align2=0:        2850.00 (30.14%)        2589.84 (36.52%)        4079.69
length=65536,align1=0,align2=16:        2853.12 (30.60%)        2589.84 (37.00%)        4110.94
length=65536,align1=16,align2=16:       2850.78 (30.07%)        2589.06 (36.49%)        4076.56
length=0,align1=0,align2=0:     15.62 (-5.26%)  16.41 (-10.53%) 14.84
length=0,align1=0,align2=0:     14.84 (-5.56%)  14.84 (-5.56%)  14.06
length=0,align1=0,align2=0:     14.84 (0.00%)   14.84 (0.00%)   14.84
length=0,align1=0,align2=0:     16.41 (-16.67%) 14.84 (-5.56%)  14.06
length=1,align1=0,align2=0:     15.62 (4.76%)   15.62 (4.76%)   16.41
length=1,align1=1,align2=0:     15.62 (0.00%)   14.84 (5.00%)   15.62
length=1,align1=0,align2=1:     14.84 (0.00%)   14.84 (0.00%)   14.84
length=1,align1=1,align2=1:     14.84 (0.00%)   14.06 (5.26%)   14.84
length=2,align1=0,align2=0:     14.84 (0.00%)   14.06 (5.26%)   14.84
length=2,align1=2,align2=0:     14.84 (0.00%)   14.06 (5.26%)   14.84
length=2,align1=0,align2=2:     14.84 (-5.56%)  14.06 (0.00%)   14.06
length=2,align1=2,align2=2:     14.84 (0.00%)   14.06 (5.26%)   14.84
length=3,align1=0,align2=0:     14.84 (0.00%)   14.84 (0.00%)   14.84
length=3,align1=3,align2=0:     14.84 (-5.56%)  14.06 (0.00%)   14.06
length=3,align1=0,align2=3:     15.62 (-11.11%) 14.06 (0.00%)   14.06
length=3,align1=3,align2=3:     14.84 (0.00%)   14.06 (5.26%)   14.84
length=4,align1=0,align2=0:     17.97 (-27.78%) 14.06 (0.00%)   14.06
length=4,align1=4,align2=0:     13.28 (5.56%)   14.06 (0.00%)   14.06
length=4,align1=0,align2=4:     14.06 (0.00%)   13.28 (5.56%)   14.06
length=4,align1=4,align2=4:     13.28 (5.56%)   13.28 (5.56%)   14.06
length=5,align1=0,align2=0:     13.28 (5.56%)   13.28 (5.56%)   14.06
length=5,align1=5,align2=0:     14.06 (0.00%)   14.06 (0.00%)   14.06
length=5,align1=0,align2=5:     14.06 (0.00%)   13.28 (5.56%)   14.06
length=5,align1=5,align2=5:     14.06 (-5.88%)  14.06 (-5.88%)  13.28
length=6,align1=0,align2=0:     14.06 (-5.88%)  14.06 (-5.88%)  13.28
length=6,align1=6,align2=0:     14.06 (0.00%)   14.06 (0.00%)   14.06
length=6,align1=0,align2=6:     14.06 (0.00%)   13.28 (5.56%)   14.06
length=6,align1=6,align2=6:     14.06 (0.00%)   13.28 (5.56%)   14.06
length=7,align1=0,align2=0:     14.84 (-11.77%) 14.06 (-5.88%)  13.28
length=7,align1=7,align2=0:     13.28 (0.00%)   14.06 (-5.88%)  13.28
length=7,align1=0,align2=7:     14.06 (0.00%)   14.06 (0.00%)   14.06
length=7,align1=7,align2=7:     14.06 (0.00%)   14.06 (0.00%)   14.06
length=8,align1=0,align2=0:     14.06 (-5.88%)  13.28 (0.00%)   13.28
length=8,align1=8,align2=0:     14.06 (0.00%)   13.28 (5.56%)   14.06
length=8,align1=0,align2=8:     13.28 (0.00%)   13.28 (0.00%)   13.28
length=8,align1=8,align2=8:     14.06 (-5.88%)  13.28 (0.00%)   13.28
length=9,align1=0,align2=0:     13.28 (0.00%)   13.28 (0.00%)   13.28
length=9,align1=9,align2=0:     13.28 (0.00%)   13.28 (0.00%)   13.28
length=9,align1=0,align2=9:     13.28 (0.00%)   14.06 (-5.88%)  13.28
length=9,align1=9,align2=9:     14.06 (-5.88%)  13.28 (0.00%)   13.28
length=10,align1=0,align2=0:    14.06 (0.00%)   13.28 (5.56%)   14.06
length=10,align1=10,align2=0:   14.06 (-5.88%)  14.06 (-5.88%)  13.28
length=10,align1=0,align2=10:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=10,align1=10,align2=10:  14.06 (0.00%)   13.28 (5.56%)   14.06
length=11,align1=0,align2=0:    14.06 (-5.88%)  13.28 (0.00%)   13.28
length=11,align1=11,align2=0:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=11,align1=0,align2=11:   13.28 (0.00%)   13.28 (0.00%)   13.28
length=11,align1=11,align2=11:  13.28 (0.00%)   13.28 (0.00%)   13.28
length=12,align1=0,align2=0:    14.06 (-5.88%)  13.28 (0.00%)   13.28
length=12,align1=12,align2=0:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=12,align1=0,align2=12:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=12,align1=12,align2=12:  14.06 (0.00%)   13.28 (5.56%)   14.06
length=13,align1=0,align2=0:    14.06 (-5.88%)  13.28 (0.00%)   13.28
length=13,align1=13,align2=0:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=13,align1=0,align2=13:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=13,align1=13,align2=13:  13.28 (0.00%)   13.28 (0.00%)   13.28
length=14,align1=0,align2=0:    13.28 (0.00%)   13.28 (0.00%)   13.28
length=14,align1=14,align2=0:   13.28 (5.56%)   13.28 (5.56%)   14.06
length=14,align1=0,align2=14:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=14,align1=14,align2=14:  14.06 (-5.88%)  13.28 (0.00%)   13.28
length=15,align1=0,align2=0:    14.06 (-5.88%)  13.28 (0.00%)   13.28
length=15,align1=15,align2=0:   14.06 (-5.88%)  14.06 (-5.88%)  13.28
length=15,align1=0,align2=15:   13.28 (0.00%)   13.28 (0.00%)   13.28
length=15,align1=15,align2=15:  13.28 (0.00%)   14.06 (-5.88%)  13.28
length=16,align1=0,align2=0:    14.06 (-5.88%)  13.28 (0.00%)   13.28
length=16,align1=16,align2=0:   13.28 (5.56%)   14.06 (0.00%)   14.06
length=16,align1=0,align2=16:   14.84 (-11.77%) 13.28 (0.00%)   13.28
length=16,align1=16,align2=16:  13.28 (-6.25%)  12.50 (0.00%)   12.50
length=17,align1=0,align2=0:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=17,align1=17,align2=0:   14.84 (-11.77%) 12.50 (5.88%)   13.28
length=17,align1=0,align2=17:   14.84 (-5.56%)  12.50 (11.11%)  14.06
length=17,align1=17,align2=17:  14.84 (-11.77%) 12.50 (5.88%)   13.28
length=18,align1=0,align2=0:    14.06 (0.00%)   12.50 (11.11%)  14.06
length=18,align1=18,align2=0:   13.28 (5.56%)   12.50 (11.11%)  14.06
length=18,align1=0,align2=18:   14.06 (-5.88%)  12.50 (5.88%)   13.28
length=18,align1=18,align2=18:  14.06 (0.00%)   12.50 (11.11%)  14.06
length=19,align1=0,align2=0:    14.06 (-5.88%)  13.28 (0.00%)   13.28
length=19,align1=19,align2=0:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=19,align1=0,align2=19:   14.84 (-5.56%)  12.50 (11.11%)  14.06
length=19,align1=19,align2=19:  14.06 (-5.88%)  12.50 (5.88%)   13.28
length=20,align1=0,align2=0:    14.84 (-11.77%) 12.50 (5.88%)   13.28
length=20,align1=20,align2=0:   14.06 (0.00%)   12.50 (11.11%)  14.06
length=20,align1=0,align2=20:   14.06 (-5.88%)  12.50 (5.88%)   13.28
length=20,align1=20,align2=20:  14.06 (0.00%)   13.28 (5.56%)   14.06
length=21,align1=0,align2=0:    14.84 (-5.56%)  12.50 (11.11%)  14.06
length=21,align1=21,align2=0:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=21,align1=0,align2=21:   14.84 (-11.77%) 12.50 (5.88%)   13.28
length=21,align1=21,align2=21:  13.28 (5.56%)   13.28 (5.56%)   14.06
length=22,align1=0,align2=0:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=22,align1=22,align2=0:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=22,align1=0,align2=22:   14.06 (0.00%)   12.50 (11.11%)  14.06
length=22,align1=22,align2=22:  14.06 (0.00%)   12.50 (11.11%)  14.06
length=23,align1=0,align2=0:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=23,align1=23,align2=0:   14.06 (-5.88%)  13.28 (0.00%)   13.28
length=23,align1=0,align2=23:   14.06 (-5.88%)  12.50 (5.88%)   13.28
length=23,align1=23,align2=23:  14.06 (-5.88%)  13.28 (0.00%)   13.28
length=24,align1=0,align2=0:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=24,align1=24,align2=0:   14.06 (0.00%)   13.28 (5.56%)   14.06
length=24,align1=0,align2=24:   14.84 (-11.77%) 12.50 (5.88%)   13.28
length=24,align1=24,align2=24:  14.06 (-5.88%)  13.28 (0.00%)   13.28
length=25,align1=0,align2=0:    14.06 (0.00%)   12.50 (11.11%)  14.06
length=25,align1=25,align2=0:   14.06 (0.00%)   13.28 (5.56%)   14.06
length=25,align1=0,align2=25:   14.06 (0.00%)   12.50 (11.11%)  14.06
length=25,align1=25,align2=25:  13.28 (0.00%)   13.28 (0.00%)   13.28
length=26,align1=0,align2=0:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=26,align1=26,align2=0:   14.06 (0.00%)   13.28 (5.56%)   14.06
length=26,align1=0,align2=26:   14.06 (-5.88%)  12.50 (5.88%)   13.28
length=26,align1=26,align2=26:  14.06 (0.00%)   13.28 (5.56%)   14.06
length=27,align1=0,align2=0:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=27,align1=27,align2=0:   14.06 (-5.88%)  12.50 (5.88%)   13.28
length=27,align1=0,align2=27:   14.06 (-5.88%)  12.50 (5.88%)   13.28
length=27,align1=27,align2=27:  14.06 (0.00%)   12.50 (11.11%)  14.06
length=28,align1=0,align2=0:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=28,align1=28,align2=0:   14.06 (0.00%)   12.50 (11.11%)  14.06
length=28,align1=0,align2=28:   14.06 (0.00%)   12.50 (11.11%)  14.06
length=28,align1=28,align2=28:  14.84 (-11.77%) 13.28 (0.00%)   13.28
length=29,align1=0,align2=0:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=29,align1=29,align2=0:   13.28 (0.00%)   12.50 (5.88%)   13.28
length=29,align1=0,align2=29:   14.06 (0.00%)   12.50 (11.11%)  14.06
length=29,align1=29,align2=29:  13.28 (5.56%)   12.50 (11.11%)  14.06
length=30,align1=0,align2=0:    14.06 (-5.88%)  12.50 (5.88%)   13.28
length=30,align1=30,align2=0:   13.28 (5.56%)   12.50 (11.11%)  14.06
length=30,align1=0,align2=30:   14.06 (-5.88%)  12.50 (5.88%)   13.28
length=30,align1=30,align2=30:  13.28 (0.00%)   12.50 (5.88%)   13.28
length=31,align1=0,align2=0:    13.28 (0.00%)   12.50 (5.88%)   13.28
length=31,align1=31,align2=0:   14.06 (0.00%)   12.50 (11.11%)  14.06
length=31,align1=0,align2=31:   13.28 (0.00%)   12.50 (5.88%)   13.28
length=31,align1=31,align2=31:  14.06 (0.00%)   12.50 (11.11%)  14.06
length=48,align1=0,align2=0:    14.06 (0.00%)   14.06 (0.00%)   14.06
length=48,align1=3,align2=0:    14.06 (0.00%)   14.06 (0.00%)   14.06
length=48,align1=0,align2=3:    14.06 (-5.88%)  14.06 (-5.88%)  13.28
length=48,align1=3,align2=3:    13.28 (5.56%)   14.06 (0.00%)   14.06
length=80,align1=0,align2=0:    15.62 (-11.11%) 14.84 (-5.56%)  14.06
length=80,align1=5,align2=0:    15.62 (-11.11%) 16.41 (-16.67%) 14.06
length=80,align1=0,align2=5:    14.06 (0.00%)   15.62 (-11.11%) 14.06
length=80,align1=5,align2=5:    15.62 (-5.26%)  17.19 (-15.79%) 14.84
length=96,align1=0,align2=0:    14.06 (0.00%)   14.84 (-5.56%)  14.06
length=96,align1=6,align2=0:    14.84 (-5.56%)  16.41 (-16.67%) 14.06
length=96,align1=0,align2=6:    14.06 (0.00%)   14.84 (-5.56%)  14.06
length=96,align1=6,align2=6:    14.84 (-5.56%)  17.19 (-22.22%) 14.06
length=112,align1=0,align2=0:   17.19 (-4.76%)  14.06 (14.29%)  16.41
length=112,align1=7,align2=0:   17.19 (0.00%)   16.41 (4.55%)   17.19
length=112,align1=0,align2=7:   16.41 (0.00%)   14.84 (9.52%)   16.41
length=112,align1=7,align2=7:   17.19 (0.00%)   17.19 (0.00%)   17.19
length=144,align1=0,align2=0:   17.19 (-10.00%) 17.97 (-15.00%) 15.62
length=144,align1=9,align2=0:   17.19 (-4.76%)  18.75 (-14.29%) 16.41
length=144,align1=0,align2=9:   20.31 (-8.33%)  18.75 (0.00%)   18.75
length=144,align1=9,align2=9:   18.75 (-4.35%)  18.75 (-4.35%)  17.97
length=160,align1=0,align2=0:   18.75 (-4.35%)  17.97 (0.00%)   17.97
length=160,align1=10,align2=0:  18.75 (4.00%)   18.75 (4.00%)   19.53
length=160,align1=0,align2=10:  19.53 (-4.17%)  17.97 (4.17%)   18.75
length=160,align1=10,align2=10:         18.75 (-4.35%)  18.75 (-4.35%)  17.97
length=176,align1=0,align2=0:   18.75 (-4.35%)  17.19 (4.35%)   17.97
length=176,align1=11,align2=0:  19.53 (0.00%)   19.53 (0.00%)   19.53
length=176,align1=0,align2=11:  19.53 (-4.17%)  18.75 (0.00%)   18.75
length=176,align1=11,align2=11:         18.75 (0.00%)   17.97 (4.17%)   18.75
length=192,align1=0,align2=0:   18.75 (0.00%)   17.97 (4.17%)   18.75
length=192,align1=12,align2=0:  21.09 (-8.00%)  18.75 (4.00%)   19.53
length=192,align1=0,align2=12:  18.75 (0.00%)   18.75 (0.00%)   18.75
length=192,align1=12,align2=12:         18.75 (0.00%)   17.97 (4.17%)   18.75
length=208,align1=0,align2=0:   17.97 (0.00%)   20.31 (-13.04%) 17.97
length=208,align1=13,align2=0:  19.53 (7.41%)   21.09 (0.00%)   21.09
length=208,align1=0,align2=13:  23.44 (-11.11%) 21.09 (0.00%)   21.09
length=208,align1=13,align2=13:         21.09 (-3.85%)  21.09 (-3.85%)  20.31
length=224,align1=0,align2=0:   21.09 (-8.00%)  20.31 (-4.00%)  19.53
length=224,align1=14,align2=0:  23.44 (-11.11%) 20.31 (3.70%)   21.09
length=224,align1=0,align2=14:  21.09 (3.57%)   20.31 (7.14%)   21.88
length=224,align1=14,align2=14:         20.31 (0.00%)   19.53 (3.85%)   20.31
length=240,align1=0,align2=0:   20.31 (-4.00%)  19.53 (0.00%)   19.53
length=240,align1=15,align2=0:  22.66 (0.00%)   20.31 (10.34%)  22.66
length=240,align1=0,align2=15:  20.31 (-4.00%)  20.31 (-4.00%)  19.53
length=240,align1=15,align2=15:         21.88 (0.00%)   21.09 (3.57%)   21.88
length=272,align1=0,align2=0:   20.31 (0.00%)   28.12 (-38.46%) 20.31
length=272,align1=17,align2=0:  22.66 (0.00%)   27.34 (-20.69%) 22.66
length=272,align1=0,align2=17:  25.78 (-10.00%) 28.12 (-20.00%) 23.44
length=272,align1=17,align2=17:         22.66 (-3.57%)  27.34 (-25.00%) 21.88
length=288,align1=0,align2=0:   23.44 (-7.14%)  27.34 (-25.00%) 21.88
length=288,align1=18,align2=0:  22.66 (0.00%)   27.34 (-20.69%) 22.66
length=288,align1=0,align2=18:  23.44 (-3.45%)  25.00 (-10.35%) 22.66
length=288,align1=18,align2=18:         22.66 (-3.57%)  21.88 (0.00%)   21.88
length=304,align1=0,align2=0:   21.88 (0.00%)   21.88 (0.00%)   21.88
length=304,align1=19,align2=0:  23.44 (-3.45%)  22.66 (0.00%)   22.66
length=304,align1=0,align2=19:  22.66 (0.00%)   22.66 (0.00%)   22.66
length=304,align1=19,align2=19:         22.66 (-3.57%)  21.88 (0.00%)   21.88
length=320,align1=0,align2=0:   22.66 (-3.57%)  21.88 (0.00%)   21.88
length=320,align1=20,align2=0:  22.66 (0.00%)   22.66 (0.00%)   22.66
length=320,align1=0,align2=20:  22.66 (0.00%)   22.66 (0.00%)   22.66
length=320,align1=20,align2=20:         22.66 (-3.57%)  21.88 (0.00%)   21.88
length=336,align1=0,align2=0:   21.88 (0.00%)   24.22 (-10.71%) 21.88
length=336,align1=21,align2=0:  22.66 (0.00%)   25.00 (-10.35%) 22.66
length=336,align1=0,align2=21:  25.78 (0.00%)   25.00 (3.03%)   25.78
length=336,align1=21,align2=21:         25.00 (0.00%)   23.44 (6.25%)   25.00
length=352,align1=0,align2=0:   24.22 (0.00%)   24.22 (0.00%)   24.22
length=352,align1=22,align2=0:  25.00 (0.00%)   25.00 (0.00%)   25.00
length=352,align1=0,align2=22:  25.00 (-3.23%)  25.00 (-3.23%)  24.22
length=352,align1=22,align2=22:         25.00 (-3.23%)  24.22 (0.00%)   24.22
length=368,align1=0,align2=0:   25.00 (-3.23%)  23.44 (3.23%)   24.22
length=368,align1=23,align2=0:  25.00 (0.00%)   24.22 (3.12%)   25.00
length=368,align1=0,align2=23:  25.00 (-3.23%)  25.00 (-3.23%)  24.22
length=368,align1=23,align2=23:         25.00 (-6.67%)  23.44 (0.00%)   23.44
length=384,align1=0,align2=0:   24.22 (0.00%)   24.22 (0.00%)   24.22
length=384,align1=24,align2=0:  25.00 (0.00%)   24.22 (3.12%)   25.00
length=384,align1=0,align2=24:  25.00 (0.00%)   25.78 (-3.12%)  25.00
length=384,align1=24,align2=24:         24.22 (-3.33%)  23.44 (0.00%)   23.44
length=400,align1=0,align2=0:   25.00 (-3.23%)  26.56 (-9.68%)  24.22
length=400,align1=25,align2=0:  25.78 (-3.12%)  27.34 (-9.38%)  25.00
length=400,align1=0,align2=25:  27.34 (0.00%)   27.34 (0.00%)   27.34
length=400,align1=25,align2=25:         26.56 (0.00%)   25.78 (2.94%)   26.56
length=416,align1=0,align2=0:   26.56 (-3.03%)  25.78 (0.00%)   25.78
length=416,align1=26,align2=0:  28.12 (-2.86%)  27.34 (0.00%)   27.34
length=416,align1=0,align2=26:  27.34 (-2.94%)  28.12 (-5.88%)  26.56
length=416,align1=26,align2=26:         25.78 (0.00%)   26.56 (-3.03%)  25.78
length=432,align1=0,align2=0:   27.34 (-2.94%)  25.78 (2.94%)   26.56
length=432,align1=27,align2=0:  28.12 (-2.86%)  27.34 (0.00%)   27.34
length=432,align1=0,align2=27:  27.34 (0.00%)   28.12 (-2.86%)  27.34
length=432,align1=27,align2=27:         25.78 (0.00%)   25.78 (0.00%)   25.78
length=448,align1=0,align2=0:   26.56 (-3.03%)  25.78 (0.00%)   25.78
length=448,align1=28,align2=0:  27.34 (0.00%)   27.34 (0.00%)   27.34
length=448,align1=0,align2=28:  27.34 (0.00%)   28.12 (-2.86%)  27.34
length=448,align1=28,align2=28:         25.78 (0.00%)   25.78 (0.00%)   25.78
length=464,align1=0,align2=0:   25.78 (0.00%)   28.12 (-9.09%)  25.78
length=464,align1=29,align2=0:  28.12 (-2.86%)  29.69 (-8.57%)  27.34
length=464,align1=0,align2=29:  30.47 (0.00%)   30.47 (0.00%)   30.47
length=464,align1=29,align2=29:         28.12 (0.00%)   27.34 (2.78%)   28.12
length=480,align1=0,align2=0:   29.69 (-5.56%)  28.12 (0.00%)   28.12
length=480,align1=30,align2=0:  31.25 (-2.56%)  29.69 (2.56%)   30.47
length=480,align1=0,align2=30:  29.69 (0.00%)   30.47 (-2.63%)  29.69
length=480,align1=30,align2=30:         28.12 (0.00%)   28.12 (0.00%)   28.12
length=496,align1=0,align2=0:   28.12 (0.00%)   27.34 (2.78%)   28.12
length=496,align1=31,align2=0:  30.47 (-2.63%)  29.69 (0.00%)   29.69
length=496,align1=0,align2=31:  29.69 (0.00%)   30.47 (-2.63%)  29.69
length=496,align1=31,align2=31:         28.12 (-2.86%)  28.12 (-2.86%)  27.34
length=1024,align1=0,align2=0:  44.53 (0.00%)   44.53 (0.00%)   44.53
length=1024,align1=32,align2=0:         44.53 (-1.79%)  44.53 (-1.79%)  43.75
length=1024,align1=0,align2=32:         44.53 (-1.79%)  43.75 (0.00%)   43.75
length=1024,align1=32,align2=32:        43.75 (1.75%)   43.75 (1.75%)   44.53
length=1056,align1=0,align2=0:  46.88 (-1.69%)  46.88 (-1.69%)  46.09
length=1056,align1=33,align2=0:         53.12 (0.00%)   52.34 (1.47%)   53.12
length=1056,align1=0,align2=33:         52.34 (0.00%)   53.12 (-1.49%)  52.34
length=1056,align1=33,align2=33:        46.09 (0.00%)   46.88 (-1.69%)  46.09
length=1088,align1=0,align2=0:  46.88 (-1.69%)  46.09 (0.00%)   46.09
length=1088,align1=34,align2=0:         52.34 (0.00%)   52.34 (0.00%)   52.34
length=1088,align1=0,align2=34:         53.12 (-3.03%)  53.12 (-3.03%)  51.56
length=1088,align1=34,align2=34:        46.09 (0.00%)   46.88 (-1.69%)  46.09
length=1120,align1=0,align2=0:  49.22 (-1.61%)  48.44 (0.00%)   48.44
length=1120,align1=35,align2=0:         54.69 (1.41%)   55.47 (0.00%)   55.47
length=1120,align1=0,align2=35:         57.03 (0.00%)   55.47 (2.74%)   57.03
length=1120,align1=35,align2=35:        48.44 (0.00%)   49.22 (-1.61%)  48.44
length=1152,align1=0,align2=0:  47.66 (1.61%)   48.44 (0.00%)   48.44
length=1152,align1=36,align2=0:         55.47 (-1.43%)  55.47 (-1.43%)  54.69
length=1152,align1=0,align2=36:         58.59 (-1.35%)  55.47 (4.05%)   57.81
length=1152,align1=36,align2=36:        48.44 (0.00%)   49.22 (-1.61%)  48.44
length=1184,align1=0,align2=0:  53.12 (-3.03%)  50.78 (1.52%)   51.56
length=1184,align1=37,align2=0:         61.72 (-2.60%)  57.03 (5.19%)   60.16
length=1184,align1=0,align2=37:         62.50 (-1.27%)  57.03 (7.60%)   61.72
length=1184,align1=37,align2=37:        53.12 (-1.49%)  50.78 (2.99%)   52.34
length=1216,align1=0,align2=0:  53.91 (-4.55%)  50.78 (1.52%)   51.56
length=1216,align1=38,align2=0:         60.94 (0.00%)   57.03 (6.41%)   60.94
length=1216,align1=0,align2=38:         60.16 (0.00%)   57.81 (3.90%)   60.16
length=1216,align1=38,align2=38:        52.34 (-1.52%)  50.00 (3.03%)   51.56
length=1248,align1=0,align2=0:  54.69 (-2.94%)  53.12 (0.00%)   53.12
length=1248,align1=39,align2=0:         64.06 (-1.23%)  60.16 (4.94%)   63.28
length=1248,align1=0,align2=39:         60.94 (-2.63%)  60.16 (-1.32%)  59.38
length=1248,align1=39,align2=39:        53.12 (0.00%)   52.34 (1.47%)   53.12
length=1280,align1=0,align2=0:  52.34 (-1.52%)  52.34 (-1.52%)  51.56
length=1280,align1=40,align2=0:         61.72 (3.66%)   59.38 (7.32%)   64.06
length=1280,align1=0,align2=40:         60.94 (-2.63%)  60.16 (-1.32%)  59.38
length=1280,align1=40,align2=40:        52.34 (-1.52%)  52.34 (-1.52%)  51.56
length=1312,align1=0,align2=0:  54.69 (-1.45%)  55.47 (-2.90%)  53.91
length=1312,align1=41,align2=0:         63.28 (0.00%)   62.50 (1.23%)   63.28
length=1312,align1=0,align2=41:         62.50 (0.00%)   62.50 (0.00%)   62.50
length=1312,align1=41,align2=41:        53.91 (0.00%)   54.69 (-1.45%)  53.91
length=1344,align1=0,align2=0:  54.69 (0.00%)   54.69 (0.00%)   54.69
length=1344,align1=42,align2=0:         62.50 (0.00%)   62.50 (0.00%)   62.50
length=1344,align1=0,align2=42:         62.50 (-1.27%)  62.50 (-1.27%)  61.72
length=1344,align1=42,align2=42:        53.91 (0.00%)   53.91 (0.00%)   53.91
length=1376,align1=0,align2=0:  65.62 (-16.67%) 68.75 (-22.22%) 56.25
length=1376,align1=43,align2=0:         71.88 (-9.52%)  73.44 (-11.90%) 65.62
length=1376,align1=0,align2=43:         72.66 (-12.05%) 74.22 (-14.46%) 64.84
length=1376,align1=43,align2=43:        64.06 (-13.89%) 67.97 (-20.83%) 56.25
length=1408,align1=0,align2=0:  57.03 (-1.39%)  68.75 (-22.22%) 56.25
length=1408,align1=44,align2=0:         65.62 (-1.20%)  73.44 (-13.25%) 64.84
length=1408,align1=0,align2=44:         64.84 (0.00%)   74.22 (-14.46%) 64.84
length=1408,align1=44,align2=44:        56.25 (-1.41%)  68.75 (-23.94%) 55.47
length=1440,align1=0,align2=0:  67.97 (-14.47%) 64.84 (-9.21%)  59.38
length=1440,align1=45,align2=0:         74.22 (-10.47%) 68.75 (-2.33%)  67.19
length=1440,align1=0,align2=45:         72.66 (-6.90%)  69.53 (-2.30%)  67.97
length=1440,align1=45,align2=45:        65.62 (-13.51%) 58.59 (-1.35%)  57.81
length=1472,align1=0,align2=0:  66.41 (-14.86%) 58.59 (-1.35%)  57.81
length=1472,align1=46,align2=0:         73.44 (-9.30%)  67.19 (0.00%)   67.19
length=1472,align1=0,align2=46:         70.31 (-4.65%)  67.97 (-1.16%)  67.19
length=1472,align1=46,align2=46:        57.81 (0.00%)   58.59 (-1.35%)  57.81
length=1504,align1=0,align2=0:  60.94 (0.00%)   60.94 (0.00%)   60.94
length=1504,align1=47,align2=0:         71.09 (-1.11%)  70.31 (0.00%)   70.31
length=1504,align1=0,align2=47:         70.31 (-1.12%)  70.31 (-1.12%)  69.53
length=1504,align1=47,align2=47:        60.94 (-1.30%)  60.16 (0.00%)   60.16
length=1536,align1=0,align2=0:  62.50 (-3.90%)  60.16 (0.00%)   60.16
length=1536,align1=48,align2=0:         60.94 (-1.30%)  60.16 (0.00%)   60.16
length=1536,align1=0,align2=48:         61.72 (-3.95%)  60.16 (-1.32%)  59.38
length=1536,align1=48,align2=48:        60.94 (-1.30%)  60.16 (0.00%)   60.16
length=1568,align1=0,align2=0:  80.47 (-27.16%) 63.28 (0.00%)   63.28
length=1568,align1=49,align2=0:         86.72 (-18.09%) 72.66 (1.06%)   73.44
length=1568,align1=0,align2=49:         74.22 (-3.26%)  74.22 (-3.26%)  71.88
length=1568,align1=49,align2=49:        62.50 (0.00%)   61.72 (1.25%)   62.50
length=1600,align1=0,align2=0:  62.50 (-1.27%)  62.50 (-1.27%)  61.72
length=1600,align1=50,align2=0:         73.44 (0.00%)   71.88 (2.13%)   73.44
length=1600,align1=0,align2=50:         72.66 (0.00%)   73.44 (-1.08%)  72.66
length=1600,align1=50,align2=50:        62.50 (-1.27%)  62.50 (-1.27%)  61.72
length=1632,align1=0,align2=0:  64.84 (0.00%)   64.84 (0.00%)   64.84
length=1632,align1=51,align2=0:         75.78 (0.00%)   75.00 (1.03%)   75.78
length=1632,align1=0,align2=51:         78.91 (0.00%)   75.78 (3.96%)   78.91
length=1632,align1=51,align2=51:        64.84 (-2.47%)  64.84 (-2.47%)  63.28
length=1664,align1=0,align2=0:  64.84 (-1.22%)  64.84 (-1.22%)  64.06
length=1664,align1=52,align2=0:         75.78 (0.00%)   75.00 (1.03%)   75.78
length=1664,align1=0,align2=52:         80.47 (-0.98%)  75.78 (4.90%)   79.69
length=1664,align1=52,align2=52:        64.06 (-1.23%)  65.62 (-3.70%)  63.28
length=1696,align1=0,align2=0:  69.53 (-3.49%)  72.66 (-8.14%)  67.19
length=1696,align1=53,align2=0:         80.47 (-0.98%)  82.03 (-2.94%)  79.69
length=1696,align1=0,align2=53:         80.47 (0.96%)   82.03 (-0.96%)  81.25
length=1696,align1=53,align2=53:        68.75 (-2.33%)  72.66 (-8.14%)  67.19
length=1728,align1=0,align2=0:  67.97 (0.00%)   72.66 (-6.90%)  67.97
length=1728,align1=54,align2=0:         80.47 (-0.98%)  82.81 (-3.92%)  79.69
length=1728,align1=0,align2=54:         78.91 (-1.00%)  82.03 (-5.00%)  78.12
length=1728,align1=54,align2=54:        68.75 (0.00%)   72.66 (-5.68%)  68.75
length=1760,align1=0,align2=0:  77.34 (-12.50%) 68.75 (0.00%)   68.75
length=1760,align1=55,align2=0:         91.41 (-8.33%)  79.69 (5.56%)   84.38
length=1760,align1=0,align2=55:         88.28 (-10.78%) 80.47 (-0.98%)  79.69
length=1760,align1=55,align2=55:        77.34 (-11.24%) 68.75 (1.12%)   69.53
length=1792,align1=0,align2=0:  78.12 (-14.94%) 68.75 (-1.15%)  67.97
length=1792,align1=56,align2=0:         88.28 (-4.63%)  79.69 (5.56%)   84.38
length=1792,align1=0,align2=56:         88.28 (-9.71%)  80.47 (0.00%)   80.47
length=1792,align1=56,align2=56:        77.34 (-11.24%) 68.75 (1.12%)   69.53
length=1824,align1=0,align2=0:  72.66 (7.92%)   70.31 (10.89%)  78.91
length=1824,align1=57,align2=0:         85.94 (5.17%)   82.03 (9.48%)   90.62
length=1824,align1=0,align2=57:         82.03 (3.67%)   82.81 (2.75%)   85.16
length=1824,align1=57,align2=57:        70.31 (-1.12%)  70.31 (-1.12%)  69.53
length=1856,align1=0,align2=0:  70.31 (-1.12%)  70.31 (-1.12%)  69.53
length=1856,align1=58,align2=0:         83.59 (-0.94%)  82.03 (0.94%)   82.81
length=1856,align1=0,align2=58:         178.12 (-115.09%)       82.81 (0.00%)   82.81
length=1856,align1=58,align2=58:        70.31 (-1.12%)  70.31 (-1.12%)  69.53
length=1888,align1=0,align2=0:  73.44 (-1.08%)  78.91 (-8.60%)  72.66
length=1888,align1=59,align2=0:         85.94 (0.00%)   89.84 (-4.55%)  85.94
length=1888,align1=0,align2=59:         84.38 (0.00%)   89.06 (-5.56%)  84.38
length=1888,align1=59,align2=59:        72.66 (-1.09%)  78.12 (-8.70%)  71.88
length=1920,align1=0,align2=0:  72.66 (-1.09%)  78.12 (-8.70%)  71.88
length=1920,align1=60,align2=0:         85.94 (0.00%)   89.84 (-4.55%)  85.94
length=1920,align1=0,align2=60:         85.16 (0.00%)   89.06 (-4.59%)  85.16
length=1920,align1=60,align2=60:        72.66 (-1.09%)  78.91 (-9.78%)  71.88
length=1952,align1=0,align2=0:  75.00 (-1.05%)  75.00 (-1.05%)  74.22
length=1952,align1=61,align2=0:         88.28 (0.00%)   87.50 (0.88%)   88.28
length=1952,align1=0,align2=61:         87.50 (0.00%)   88.28 (-0.89%)  87.50
length=1952,align1=61,align2=61:        74.22 (0.00%)   74.22 (0.00%)   74.22
length=1984,align1=0,align2=0:  75.00 (-1.05%)  73.44 (1.05%)   74.22
length=1984,align1=62,align2=0:         89.06 (-0.89%)  87.50 (0.88%)   88.28
length=1984,align1=0,align2=62:         87.50 (0.00%)   88.28 (-0.89%)  87.50
length=1984,align1=62,align2=62:        74.22 (0.00%)   74.22 (0.00%)   74.22
length=2016,align1=0,align2=0:  77.34 (-1.02%)  76.56 (0.00%)   76.56
length=2016,align1=63,align2=0:         91.41 (-0.86%)  90.62 (0.00%)   90.62
length=2016,align1=0,align2=63:         89.84 (0.00%)   90.62 (-0.87%)  89.84
length=2016,align1=63,align2=63:        77.34 (-1.02%)  76.56 (0.00%)   76.56
length=4096,align1=0,align2=0:  141.41 (-0.56%) 146.88 (-4.44%) 140.62

Function: memcpy
__memcpy_thunderx       __memcpy_falkor __memcpy_generic
Variant: large
================================================================================
length=65543,align1=0,align2=0:         4018.75 (3.09%) 2634.38 (36.47%)        4146.88
length=65551,align1=0,align2=3:         4425.00 (-6.47%)        3134.38 (24.59%)        4156.25
length=65567,align1=3,align2=0:         2909.38 (29.95%)        3134.38 (24.53%)        4153.12
length=65599,align1=3,align2=5:         4415.62 (-6.16%)        3134.38 (24.64%)        4159.38
length=131079,align1=0,align2=0:        5765.62 (30.38%)        5240.62 (36.72%)        8281.25
length=131087,align1=0,align2=3:        8831.25 (-6.56%)        6271.88 (24.32%)        8287.50
length=131103,align1=3,align2=0:        5793.75 (29.05%)        6268.75 (23.23%)        8165.62
length=131135,align1=3,align2=5:        5806.25 (29.97%)        6259.38 (24.50%)        8290.62
length=262151,align1=0,align2=0:        11850.00 (28.91%)       10762.50 (35.43%)       16668.80
length=262159,align1=0,align2=3:        12043.80 (27.72%)       12700.00 (23.78%)       16662.50
length=262175,align1=3,align2=0:        12046.90 (27.90%)       12687.50 (24.07%)       16709.40
length=262207,align1=3,align2=5:        11984.40 (28.08%)       12678.10 (23.91%)       16662.50
length=524295,align1=0,align2=0:        24825.00 (25.00%)       24268.80 (27.34%)       33400.00
length=524303,align1=0,align2=3:        35731.20 (-6.53%)       25678.10 (23.44%)       33540.60
length=524319,align1=3,align2=0:        25893.80 (22.71%)       25725.00 (23.22%)       33503.10
length=524351,align1=3,align2=5:        25887.50 (22.86%)       25690.60 (23.45%)       33559.40
length=1048583,align1=0,align2=0:       50621.90 (0.30%)        50600.00 (0.34%)        50771.90
length=1048591,align1=0,align2=3:       53206.20 (0.54%)        51081.20 (4.51%)        53493.80
length=1048607,align1=3,align2=0:       53221.90 (0.32%)        51975.00 (2.66%)        53393.80
length=1048639,align1=3,align2=5:       53240.60 (0.36%)        51953.10 (2.77%)        53431.20
length=2097159,align1=0,align2=0:       103744.00 (-2.00%)      102447.00 (-1.00%)      102425.00
length=2097167,align1=0,align2=3:       108588.00 (-1.00%)      105159.00 (2.00%)       107606.00
length=2097183,align1=3,align2=0:       107678.00 (0.00%)       105250.00 (2.00%)       108125.00
length=2097215,align1=3,align2=5:       107906.00 (1.00%)       105841.00 (3.00%)       109475.00
length=4194311,align1=0,align2=0:       202994.00 (0.00%)       202500.00 (1.00%)       204809.00
length=4194319,align1=0,align2=3:       213350.00 (0.00%)       205997.00 (3.00%)       213384.00
length=4194335,align1=3,align2=0:       212653.00 (0.00%)       206444.00 (3.00%)       212900.00
length=4194367,align1=3,align2=5:       213044.00 (0.00%)       206084.00 (3.00%)       213847.00
length=8388615,align1=0,align2=0:       401294.00 (0.00%)       401231.00 (0.00%)       401944.00
length=8388623,align1=0,align2=3:       480872.00 (-14.00%)     406444.00 (3.00%)       422900.00
length=8388639,align1=3,align2=0:       422147.00 (0.00%)       407750.00 (3.00%)       422803.00
length=8388671,align1=3,align2=5:       442003.00 (-5.00%)      407125.00 (3.00%)       423509.00
length=16777223,align1=0,align2=0:      799809.00 (0.00%)       800000.00 (0.00%)       801756.00
length=16777231,align1=0,align2=3:      841184.00 (0.00%)       808525.00 (4.00%)       843775.00
length=16777247,align1=3,align2=0:      841166.00 (0.00%)       810147.00 (3.00%)       843147.00
length=16777279,align1=3,align2=5:      972569.00 (-16.00%)     808588.00 (4.00%)       843731.00
length=33554439,align1=0,align2=0:      1842240.00 (-0.01%)     1863590.00 (-1.17%)     1841990.00
length=33554447,align1=0,align2=3:      2103470.00 (-2.74%)     1919460.00 (6.25%)      2047440.00
length=33554463,align1=3,align2=0:      2075690.00 (-1.07%)     1930040.00 (6.02%)      2053720.00
length=33554495,align1=3,align2=5:      2110590.00 (-2.82%)     1924440.00 (6.25%)      2052650.00

Function: memcpy
__memcpy_thunderx       __memcpy_falkor __memcpy_generic
Variant: random
================================================================================
max-size=4096:  44061.90 (5.85%)        38568.20 (17.59%)       46799.90
max-size=8192:  42790.90 (5.27%)        38158.90 (15.52%)       45171.50
max-size=16384:         44912.10 (2.25%)        38710.40 (15.75%)       45945.00
max-size=32768:         43577.90 (1.23%)        37975.10 (13.93%)       44120.00
max-size=65536:         44375.50 (1.04%)        38474.20 (14.20%)       44840.60

	* manual/tunables.texi (Tunable glibc.tune.cpu): Add falkor.
	* sysdeps/aarch64/multiarch/Makefile (sysdep_routines): Add
	memcpy_falkor.
	* sysdeps/aarch64/multiarch/ifunc-impl-list.c (MAX_IFUNC):
	Bump.
	(__libc_ifunc_impl_list): Add __memcpy_falkor.
	* sysdeps/aarch64/multiarch/memcpy.c: Likewise.
	* sysdeps/aarch64/multiarch/memcpy_falkor.S: New file.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.c (cpu_list):
	Add falkor.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.h (IS_FALKOR):
	New macro.
2017-08-09 06:32:17 +05:30
Siddhesh Poyarekar
28cfa3a48e tunables, aarch64: New tunable to override cpu
Add a new tunable (glibc.tune.cpu) to override CPU identification on
aarch64.  This is useful in two cases: first, when it is desirable to
pretend to be another CPU, either for testing or because routines
written for that CPU are beneficial for specific workloads; and second,
when the underlying kernel does not support emulation of MRS to get
the MIDR of the CPU.
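
A minimal sketch of the name-to-MIDR lookup this describes (the table
entry, sentinel value and exact signature are illustrative assumptions,
not the actual code):

  #include <stdint.h>
  #include <string.h>

  struct cpu_list
  {
    const char *name;
    uint64_t midr;
  };

  /* Illustrative entry only; the real table lives in
     sysdeps/unix/sysv/linux/aarch64/cpu-features.c.  */
  static const struct cpu_list cpu_list[] =
  {
    { "generic", 0x0 },
  };

  static uint64_t
  get_midr_from_mcpu (const char *mcpu)
  {
    for (size_t i = 0; i < sizeof (cpu_list) / sizeof (cpu_list[0]); i++)
      if (strcmp (mcpu, cpu_list[i].name) == 0)
        return cpu_list[i].midr;
    /* Unknown name: return a sentinel so init_cpu_features keeps the
       MIDR read from the hardware.  */
    return UINT64_MAX;
  }

With this in place, the override is selected through the tunables
environment variable, e.g. GLIBC_TUNABLES=glibc.tune.cpu=<name>.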

	* elf/dl-tunables.h (tunable_is_name): Move from...
	* elf/dl-tunables.c (is_name): ... here.
	(parse_tunables, __tunables_init): Adjust.
	* manual/tunables.texi: Document glibc.tune.cpu.
	* sysdeps/aarch64/dl-tunables.list: New file.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.c (struct
	cpu_list): New type.
	(cpu_list): New list of CPU names and their MIDR.
	(get_midr_from_mcpu): New function.
	(init_cpu_features): Override MIDR if necessary.
2017-06-30 22:58:39 +05:30
Siddhesh Poyarekar
ab85da1530 aarch64: Call all string function implementations in tests
The string function implementations added so far do not use any
instructions that deviate from standard aarch64, so all routines can
run on all armv8 hardware.  Select all implementations in the
benchmarks and tests.
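
The change this describes amounts to dropping the MIDR condition in the
implementation list, roughly as sketched below (macro usage modeled on
sysdeps/aarch64/multiarch/ifunc-impl-list.c; treat it as illustrative):

  /* Within __libc_ifunc_impl_list; "array", "i" and "midr" come from
     the surrounding function.  Before: listed only on ThunderX.  */
  IFUNC_IMPL (i, name, memcpy,
              IFUNC_IMPL_ADD (array, i, memcpy, IS_THUNDERX (midr),
                              __memcpy_thunderx)
              IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_generic))

  /* After: always listed, so tests and benchmarks exercise it
     everywhere.  */
  IFUNC_IMPL (i, name, memcpy,
              IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_thunderx)
              IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_generic))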

	* sysdeps/aarch64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Unconditionally select thunderx
	routine for testing.
2017-06-30 22:57:12 +05:30
Szabolcs Nagy
e535139e82 [AArch64] Add more cfi annotations to tlsdesc entry points
Backtracing through _dl_tlsdesc_resolve_rela was broken because the offset
of x30 from the CFA was not in the debug info.

Add enough annotation so backtracing from the dynamic linker through
tlsdesc entry points works and the debugger shows registers correctly.
2017-06-21 15:04:37 +01:00
Szabolcs Nagy
e9177fba13 [AArch64] Use hidden __GI__dl_argv in rtld startup code
We rely on the symbol being locally defined, so using an extern symbol
is not correct and the linker may complain about the relocations.
2017-06-21 14:54:11 +01:00
Zack Weinberg
09a596cc2c Remove bits/string.h.
These machine-dependent inline string functions have never been on by
default, and even if they were a good idea at the time they were
introduced, they haven't really been touched in ten to fifteen years
and probably aren't a good idea on current-gen processors.  Current
thinking is that this class of optimization is best left to the
compiler.

	* bits/string.h, string/bits/string.h
	* sysdeps/aarch64/bits/string.h
	* sysdeps/m68k/m680x0/m68020/bits/string.h
	* sysdeps/s390/bits/string.h, sysdeps/sparc/bits/string.h
	* sysdeps/x86/bits/string.h: Delete file.

	* string/string.h: Don't include bits/string.h.
	* string/bits/string3.h: Rename to bits/string_fortified.h.
	No need to undef various symbols that the removed headers
	might have defined as macros.
	* string/Makefile (headers): Remove bits/string.h, change
	bits/string3.h to bits/string_fortified.h.
	* string/string-inlines.c: Update commentary.  Remove definitions
	of various macros that nothing looks at anymore.  Don't directly
	include bits/string.h. Set _STRING_INLINE_unaligned here, based on
	compiler-predefined macros.
	* string/strncat.c: If STRNCAT is not defined, or STRNCAT_PRIMARY
	_is_ defined, provide internal hidden alias __strncat.
	* include/string.h: Declare internal hidden alias __strncat.
	Only forward __stpcpy to __builtin_stpcpy if __NO_STRING_INLINES is
	not defined.
	* include/bits/string3.h: Rename to bits/string_fortified.h,
	update to match above.

	* sysdeps/i386/string-inlines.c: Define compat symbols for
	everything formerly defined by sysdeps/x86/bits/string.h.
	Make existing definitions into compat symbols as well.
	Remove some no-longer-necessary messing around with macros.

	* sysdeps/powerpc/powerpc32/power4/multiarch/mempcpy.c
	* sysdeps/powerpc/powerpc64/multiarch/mempcpy.c
	* sysdeps/powerpc/powerpc64/multiarch/stpcpy.c
	* sysdeps/s390/multiarch/mempcpy.c
	No need to define _HAVE_STRING_ARCH_mempcpy.
	Do define __NO_STRING_INLINES and NO_MEMPCPY_STPCPY_REDIRECT.

	* sysdeps/i386/i686/multiarch/strncat-c.c
	* sysdeps/s390/multiarch/strncat-c.c
	* sysdeps/x86_64/multiarch/strncat-c.c
	Define STRNCAT_PRIMARY.  Don't change definition of libc_hidden_def.
2017-06-20 08:21:24 -04:00
Alan Modra
0572433b5b PowerPC64 ELFv2 PPC64_OPT_LOCALENTRY
ELFv2 functions with localentry:0 are those with a single entry point,
ie. global entry == local entry, that have no requirement on r2 or
r12 and guarantee r2 is unchanged on return.  Such an external
function can be called via the PLT without saving r2 or restoring it
on return, avoiding a common load-hit-store for small functions.

This patch implements the ld.so changes necessary for this
optimization.  ld.so needs to check that an optimized plt call
sequence is in fact calling a function implemented with localentry:0,
and emit a fatal error otherwise.

The elf/testobj6.c change is to stop "error while loading shared
libraries: expected localentry:0 `preload'" when running
elf/preloadtest, which we'd get otherwise.
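
A sketch of the check this adds, using the elf.h accessor for the
localentry bits in st_other (the error function's argument list is an
assumption here; error handling is simplified):

  /* Inside the PowerPC64 fixup path: an optimized call stub must bind
     to a localentry:0 function, i.e. one whose local entry offset
     encoded in st_other is zero.  */
  if (PPC64_LOCAL_ENTRY_OFFSET (sym->st_other) != 0)
    _dl_error_localentry (map, refsym);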

	* elf/elf.h (PPC64_OPT_LOCALENTRY): Define.
	* sysdeps/alpha/dl-machine.h (elf_machine_fixup_plt): Add
	refsym and sym parameters.  Adjust callers.
	* sysdeps/aarch64/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/arm/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/generic/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/hppa/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/i386/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/ia64/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/m68k/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/microblaze/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/mips/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/nios2/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/powerpc/powerpc32/dl-machine.h (elf_machine_fixup_plt):
	Likewise.
	* sysdeps/s390/s390-32/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/s390/s390-64/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/sh/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/sparc/sparc32/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/sparc/sparc64/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/tile/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/x86_64/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/powerpc/powerpc64/dl-machine.c (_dl_error_localentry): New.
	(_dl_reloc_overflow): Increase buffer size.  Formatting.
	* sysdeps/powerpc/powerpc64/dl-machine.h (ppc64_local_entry_offset):
	Delete reloc param, add refsym and sym.  Check optimized plt
	call stubs for localentry:0 functions.  Adjust callers.
	(elf_machine_fixup_plt, elf_machine_plt_conflict): Add refsym
	and sym parameters.  Adjust callers.
	(_dl_reloc_overflow): Move attribute.
	(_dl_error_localentry): Declare.
	* elf/dl-runtime.c (_dl_fixup): Save original sym.  Pass
	refsym and sym to elf_machine_fixup_plt.
	* elf/testobj6.c (preload): Call printf.
2017-06-14 10:47:25 +09:30
Stefan Liebler
12d2dd7060 Optimize generic spinlock code and use C11 like atomic macros.
This patch optimizes the generic spinlock code.

The type pthread_spinlock_t is a typedef to volatile int on all archs.
Passing a volatile pointer to the atomic macros that are not mapped to the
C11 atomic builtins can lead to extra stores and loads to the stack if such
a macro creates a temporary variable by using "__typeof (*(mem)) tmp;".
Thus, those macros which are used by spinlock code - atomic_exchange_acquire,
atomic_load_relaxed, atomic_compare_exchange_weak - have to be adjusted.
According to the comment from Szabolcs Nagy, the type of a cast expression is
unqualified (see http://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_423.htm):
__typeof ((__typeof (*(mem))) *(mem)) tmp;
Thus, from the spinlock perspective, the variable tmp is of type int instead
of type volatile int.  This patch adjusts those macros in include/atomic.h.
With this construct GCC >= 5 omits the extra stores and loads.

The atomic macros are replaced by the C11-like atomic macros and thus
the code is aligned with them.  The pthread_spin_unlock implementation is now
using release memory order instead of sequentially consistent memory order.
The issue with passed volatile int pointers applies to the C11 like atomic
macros as well as the ones used before.

I've added a glibc_likely hint to the first atomic exchange in
pthread_spin_lock in order to return immediately to the caller if the lock is
free.  Without the hint, there is an additional jump if the lock is free.

I've added the atomic_spin_nop macro within the loop of plain reads.
The plain reads are also realized by the C11-like atomic_load_relaxed macro.

The new define ATOMIC_EXCHANGE_USES_CAS determines if the first try to acquire
the spinlock in pthread_spin_lock or pthread_spin_trylock is an exchange
or a CAS.  This is defined in atomic-machine.h for all architectures.

The define SPIN_LOCK_READS_BETWEEN_CMPXCHG is now removed.
There is no technical reason for throwing in a CAS every now and then,
and so far we have no evidence that it can improve performance.
If that were the case, we would have to adjust other spin-waiting loops
elsewhere, too!  Using a CAS loop without plain reads is not a good idea
on many targets and wasn't used by any.  Thus there is now no option to
do so.

Architectures are now using the generic spinlock automatically if they
do not provide their own implementation.  Thus the pthread_spin_lock.c files
in the sysdeps folders are deleted.
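
A condensed sketch of the acquire path described above, assuming the
exchange variant (ATOMIC_EXCHANGE_USES_CAS == 0) and glibc's internal
atomic macros; the real code is in nptl/pthread_spin_lock.c:

  int
  pthread_spin_lock (pthread_spinlock_t *lock)
  {
    /* First try: a single exchange, expected to succeed when the lock
       is free, so the caller returns immediately in that case.  */
    if (__glibc_likely (atomic_exchange_acquire (lock, 1) == 0))
      return 0;

    do
      {
        /* Spin with plain (relaxed MO) reads until the lock looks
           free, issuing a CPU hint to reduce power and contention.  */
        do
          atomic_spin_nop ();
        while (atomic_load_relaxed (lock) != 0);
      }
    while (atomic_exchange_acquire (lock, 1) != 0);

    return 0;
  }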

ChangeLog:

	* NEWS: Mention new spinlock implementation.
	* include/atomic.h:
	(__atomic_val_bysize): Cast type to omit volatile qualifier.
	(atomic_exchange_acq): Likewise.
	(atomic_load_relaxed): Likewise.
	(ATOMIC_EXCHANGE_USES_CAS): Check definition.
	* nptl/pthread_spin_init.c (pthread_spin_init):
	Use atomic_store_relaxed.
	* nptl/pthread_spin_lock.c (pthread_spin_lock):
	Use C11-like atomic macros.
	* nptl/pthread_spin_trylock.c (pthread_spin_trylock):
	Likewise.
	* nptl/pthread_spin_unlock.c (pthread_spin_unlock):
	Use atomic_store_release.
	* sysdeps/aarch64/nptl/pthread_spin_lock.c: Delete File.
	* sysdeps/arm/nptl/pthread_spin_lock.c: Likewise.
	* sysdeps/hppa/nptl/pthread_spin_lock.c: Likewise.
	* sysdeps/m68k/nptl/pthread_spin_lock.c: Likewise.
	* sysdeps/microblaze/nptl/pthread_spin_lock.c: Likewise.
	* sysdeps/mips/nptl/pthread_spin_lock.c: Likewise.
	* sysdeps/nios2/nptl/pthread_spin_lock.c: Likewise.
	* sysdeps/aarch64/atomic-machine.h (ATOMIC_EXCHANGE_USES_CAS): Define.
	* sysdeps/alpha/atomic-machine.h: Likewise.
	* sysdeps/arm/atomic-machine.h: Likewise.
	* sysdeps/i386/atomic-machine.h: Likewise.
	* sysdeps/ia64/atomic-machine.h: Likewise.
	* sysdeps/m68k/coldfire/atomic-machine.h: Likewise.
	* sysdeps/m68k/m680x0/m68020/atomic-machine.h: Likewise.
	* sysdeps/microblaze/atomic-machine.h: Likewise.
	* sysdeps/mips/atomic-machine.h: Likewise.
	* sysdeps/powerpc/powerpc32/atomic-machine.h: Likewise.
	* sysdeps/powerpc/powerpc64/atomic-machine.h: Likewise.
	* sysdeps/s390/atomic-machine.h: Likewise.
	* sysdeps/sparc/sparc32/atomic-machine.h: Likewise.
	* sysdeps/sparc/sparc32/sparcv9/atomic-machine.h: Likewise.
	* sysdeps/sparc/sparc64/atomic-machine.h: Likewise.
	* sysdeps/tile/tilegx/atomic-machine.h: Likewise.
	* sysdeps/tile/tilepro/atomic-machine.h: Likewise.
	* sysdeps/unix/sysv/linux/hppa/atomic-machine.h: Likewise.
	* sysdeps/unix/sysv/linux/m68k/coldfire/atomic-machine.h: Likewise.
	* sysdeps/unix/sysv/linux/nios2/atomic-machine.h: Likewise.
	* sysdeps/unix/sysv/linux/sh/atomic-machine.h: Likewise.
	* sysdeps/x86_64/atomic-machine.h: Likewise.
2017-06-06 09:41:56 +02:00
Steve Ellcey
6a2c695266 aarch64: Thunderx specific memcpy and memmove
* sysdeps/aarch64/memcpy.S (MEMMOVE, MEMCPY): New macros.
	(memmove): Use MEMMOVE for name.
	(memcpy): Use MEMCPY for name.  Change internal labels
	to external labels.
	* sysdeps/aarch64/multiarch/Makefile: New file.
	* sysdeps/aarch64/multiarch/ifunc-impl-list.c: Likewise.
	* sysdeps/aarch64/multiarch/init-arch.h: Likewise.
	* sysdeps/aarch64/multiarch/memcpy.c: Likewise.
	* sysdeps/aarch64/multiarch/memcpy_generic.S: Likewise.
	* sysdeps/aarch64/multiarch/memcpy_thunderx.S: Likewise.
	* sysdeps/aarch64/multiarch/memmove.c: Likewise.
2017-05-24 16:46:48 -07:00
Adhemerval Zanella
eab380d8ec Move shared pthread definitions to common headers
This patch removes all the replicated pthread definitions across the
architectures and consolidates them into shared headers.  The new
organization is as follows:

  * Architecture-specific definitions (such as pthread type sizes) are
    placed in the new pthreadtypes-arch.h header in the arch-specific path.

  * All shared structure definitions are moved to a common NPTL header
    at sysdeps/nptl/bits/pthreadtypes.h (which now includes the
    arch-specific one for internal definitions).

  * Also, for future C11 thread support, both mutex and condition
    definitions are placed in a common header at
    sysdeps/nptl/bits/thread-shared-types.h.

It is purely a refactoring patch; no functional changes are expected.
Checked with a build for all major ABIs (aarch64-linux-gnu, alpha-linux-gnu,
arm-linux-gnueabi, i386-linux-gnu, ia64-linux-gnu,
m68k-linux-gnu, microblaze-linux-gnu, mips{64}-linux-gnu, nios2-linux-gnu,
powerpc{64le}-linux-gnu, s390{x}-linux-gnu, sparc{64}-linux-gnu,
tile{pro,gx}-linux-gnu, and x86_64-linux-gnu).
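
In outline, the new include chain looks roughly like this (paths from
the ChangeLog below; the exact contents are illustrative):

  /* sysdeps/nptl/bits/pthreadtypes.h - shared by all architectures.  */
  #include <bits/pthreadtypes-arch.h>    /* Per-arch sizes/alignment.  */
  #include <bits/thread-shared-types.h>  /* Mutex/cond types shared
                                            with future C11 threads.  */

  /* ... followed by the common definitions of pthread_t,
     pthread_attr_t, pthread_rwlock_t and friends, built on the
     constants above.  */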

	* posix/Makefile (headers): Add pthreadtypes-arch.h and
	thread-shared-types.h.
	* sysdeps/aarch64/nptl/bits/pthreadtypes-arch.h: New file: arch
	specific thread definition.
	* sysdeps/alpha/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/arm/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/hppa/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/ia64/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/m68k/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/microblaze/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/mips/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/nios2/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/powerpc/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/s390/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/sh/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/sparc/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/tile/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/x86/nptl/bits/pthreadtypes-arch.h: Likewise.
	* sysdeps/nptl/bits/thread-shared-types.h: New file: shared
	thread definition between POSIX and C11.
	* sysdeps/aarch64/nptl/bits/pthreadtypes.h.: Remove file.
	* sysdeps/alpha/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/arm/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/hppa/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/m68k/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/microblaze/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/mips/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/nios2/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/ia64/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/powerpc/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/s390/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/sh/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/sparc/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/tile/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/x86/nptl/bits/pthreadtypes.h: Likewise.
	* sysdeps/nptl/bits/pthreadtypes.h: New file: common thread
	definitions shared across all architectures.
2017-05-09 17:49:17 -03:00
Szabolcs Nagy
b737847f87 [AArch64] Update libm-test-ulps
* sysdeps/aarch64/libm-test-ulps: Update.
2017-03-27 12:02:47 +01:00
Steve Ellcey
d2e4346a30 Add ifunc support for aarch64.
* sysdeps/aarch64/dl-machine.h: Include cpu-features.c.
	(DL_PLATFORM_INIT): New define.
	(dl_platform_init): New function.
	* sysdeps/aarch64/ldsodefs.h: Include cpu-features.h.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.c: New file.
	* sysdeps/unix/sysv/linux/aarch64/cpu-features.h: Likewise.
	* sysdeps/unix/sysv/linux/aarch64/dl-procinfo.c: Likewise.
	* sysdeps/unix/sysv/linux/aarch64/libc-start.c: Likewise.
2017-03-15 16:46:26 -07:00
Torvald Riegel
cc25c8b4c1 New pthread rwlock that is more scalable.
This replaces the pthread rwlock with a new implementation that uses a
more scalable algorithm (primarily through not using a critical section
anymore to make state changes).  The fast path for rdlock acquisition and
release is now basically a single atomic read-modify-write or CAS and a few
branches.  See nptl/pthread_rwlock_common.c for details.
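
A much-simplified sketch of such a rdlock fast path (constant names
follow the ChangeLog below; the slow path helper is hypothetical, and
write phases and futex handling are omitted):

  static int
  rdlock_fast_path (unsigned int *readers)
  {
    /* Register as a reader with a single atomic read-modify-write.  */
    unsigned int r
      = atomic_fetch_add_acquire (readers,
                                  1 << PTHREAD_RWLOCK_READER_SHIFT);
    /* If no write phase is in progress, the read lock is held.  */
    if ((r & PTHREAD_RWLOCK_WRPHASE) == 0)
      return 0;
    /* Otherwise fall back to the slow path (hypothetical helper).  */
    return rdlock_slow_path (readers, r);
  }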

	* nptl/DESIGN-rwlock.txt: Remove.
	* nptl/lowlevelrwlock.sym: Remove.
	* nptl/Makefile: Add new tests.
	* nptl/pthread_rwlock_common.c: New file.  Contains the new rwlock.
	* nptl/pthreadP.h (PTHREAD_RWLOCK_PREFER_READER_P): Remove.
	(PTHREAD_RWLOCK_WRPHASE, PTHREAD_RWLOCK_WRLOCKED,
	PTHREAD_RWLOCK_RWAITING, PTHREAD_RWLOCK_READER_SHIFT,
	PTHREAD_RWLOCK_READER_OVERFLOW, PTHREAD_RWLOCK_WRHANDOVER,
	PTHREAD_RWLOCK_FUTEX_USED): New.
	* nptl/pthread_rwlock_init.c (__pthread_rwlock_init): Adapt to new
	implementation.
	* nptl/pthread_rwlock_rdlock.c (__pthread_rwlock_rdlock_slow): Remove.
	(__pthread_rwlock_rdlock): Adapt.
	* nptl/pthread_rwlock_timedrdlock.c
	(pthread_rwlock_timedrdlock): Adapt.
	* nptl/pthread_rwlock_timedwrlock.c
	(pthread_rwlock_timedwrlock): Adapt.
	* nptl/pthread_rwlock_trywrlock.c (pthread_rwlock_trywrlock): Adapt.
	* nptl/pthread_rwlock_tryrdlock.c (pthread_rwlock_tryrdlock): Adapt.
	* nptl/pthread_rwlock_unlock.c (pthread_rwlock_unlock): Adapt.
	* nptl/pthread_rwlock_wrlock.c (__pthread_rwlock_wrlock_slow): Remove.
	(__pthread_rwlock_wrlock): Adapt.
	* nptl/tst-rwlock10.c: Adapt.
	* nptl/tst-rwlock11.c: Adapt.
	* nptl/tst-rwlock17.c: New file.
	* nptl/tst-rwlock18.c: New file.
	* nptl/tst-rwlock19.c: New file.
	* nptl/tst-rwlock2b.c: New file.
	* nptl/tst-rwlock8.c: Adapt.
	* nptl/tst-rwlock9.c: Adapt.
	* sysdeps/aarch64/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/arm/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/hppa/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/ia64/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/m68k/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/microblaze/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/mips/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/nios2/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/s390/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/sh/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/sparc/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/tile/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* sysdeps/unix/sysv/linux/alpha/bits/pthreadtypes.h
	(pthread_rwlock_t): Adapt.
	* sysdeps/unix/sysv/linux/powerpc/bits/pthreadtypes.h
	(pthread_rwlock_t): Adapt.
	* sysdeps/x86/bits/pthreadtypes.h (pthread_rwlock_t): Adapt.
	* nptl/nptl-printers.py (): Adapt.
	* nptl/nptl_lock_constants.pysym: Adapt.
	* nptl/test-rwlock-printers.py: Adapt.
	* nptl/test-rwlockattr-printers.c: Adapt.
	* nptl/test-rwlockattr-printers.py: Adapt.
2017-01-10 11:50:17 +01:00
Joseph Myers
bfff8b1bec Update copyright dates with scripts/update-copyrights. 2017-01-01 00:14:16 +00:00
Torvald Riegel
ed19993b5b New condvar implementation that provides stronger ordering guarantees.
This is a new implementation for condition variables, required
after http://austingroupbugs.net/view.php?id=609 to fix bug 13165.  In
essence, we need to be stricter about which waiters a signal or broadcast
is required to wake up; this couldn't be solved using the old algorithm.
ISO C++ made a similar clarification, so this also fixes a bug in
current libstdc++, for example.

We can't use the old algorithm anymore because futexes do not guarantee
to wake in FIFO order.  Thus, when we wake, we can't simply let any
waiter grab a signal, but we need to ensure that one of the waiters
happening before the signal is woken up.  This is something the previous
algorithm violated (see bug 13165).

There's another issue specific to condvars: ABA issues on the underlying
futexes.  Unlike mutexes that have just three states, or semaphores that
have no tokens or a limited number of them, the state of a condvar is
the *order* of the waiters.  A waiter on a semaphore can grab a token
whenever one is available; a condvar waiter must only consume a signal
if it is eligible to do so as determined by the relative order of the
waiter and the signal.
Therefore, this new algorithm maintains two groups of waiters: Those
eligible to consume signals (G1), and those that have to wait until
previous waiters have consumed signals (G2).  Once G1 is empty, G2
becomes the new G1.  64b counters are used to avoid ABA issues.

This condvar doesn't yet use a requeue optimization (ie, on a broadcast,
waking just one thread and requeueing all others on the futex of the
mutex supplied by the program).  I don't think doing the requeue is
necessarily the right approach (but I haven't done real measurements
yet):
* If a program expects to wake many threads at the same time and make
that scalable, a condvar isn't great anyway because of how it requires
waiters to operate mutually exclusively (due to the mutex usage).  Thus, a
thundering herd problem is a scalability problem with or without the
optimization.  Using something like a semaphore might be more
appropriate in such a case.
* The scalability problem is actually at the mutex side; the condvar
could help (and it tries to with the requeue optimization), but it
should be the mutex that decides how that is done, and whether it is done
at all.
* Forcing all but one waiter into the kernel-side wait queue of the
mutex prevents/avoids the use of lock elision on the mutex.  Thus, it
prevents the only cure against the underlying scalability problem
inherent to condvars.
* If condvars use short critical sections (ie, hold the mutex just to
check a binary flag or such), which they should do ideally, then forcing
all those waiters to proceed serially with kernel-based hand-off (ie,
futex ops in the mutex' contended state, via the futex wait queues) will
be less efficient than just letting a scalable mutex implementation take
care of it.  Our current mutex impl doesn't employ spinning at all, but
if critical sections are short, spinning can be much better.
* Doing the requeue stuff requires all waiters to always drive the mutex
into the contended state.  This leads to each waiter having to call
futex_wake after lock release, even if this wouldn't be necessary.
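
A toy illustration of the two-group bookkeeping described above (field
names loosely follow the new pthread_cond_t, but this is a sketch, not
the real layout):

  #include <stdint.h>

  struct condvar_sketch
  {
    uint64_t wseq;        /* Position handed to the next arriving
                             waiter; 64b, so reuse of positions does
                             not cause ABA problems.  */
    uint64_t g1_start;    /* Position at which the current G1 begins;
                             waiters before it have been served.  */
    unsigned int g_signals[2];  /* Signals available in each group;
                                   signalers only post to the current
                                   G1.  Once G1 drains, G2 becomes the
                                   new G1.  */
  };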

	[BZ #13165]
	* nptl/pthread_cond_broadcast.c (__pthread_cond_broadcast): Rewrite to
	use new algorithm.
	* nptl/pthread_cond_destroy.c (__pthread_cond_destroy): Likewise.
	* nptl/pthread_cond_init.c (__pthread_cond_init): Likewise.
	* nptl/pthread_cond_signal.c (__pthread_cond_signal): Likewise.
	* nptl/pthread_cond_wait.c (__pthread_cond_wait): Likewise.
	(__pthread_cond_timedwait): Move here from pthread_cond_timedwait.c.
	(__condvar_confirm_wakeup, __condvar_cancel_waiting,
	__condvar_cleanup_waiting, __condvar_dec_grefs,
	__pthread_cond_wait_common): New.
	(__condvar_cleanup): Remove.
	* nptl/pthread_condattr_getclock.c (pthread_condattr_getclock): Adapt.
	* nptl/pthread_condattr_setclock.c (pthread_condattr_setclock):
	Likewise.
	* nptl/pthread_condattr_getpshared.c (pthread_condattr_getpshared):
	Likewise.
	* nptl/pthread_condattr_init.c (pthread_condattr_init): Likewise.
	* nptl/tst-cond1.c: Add comment.
	* nptl/tst-cond20.c (do_test): Adapt.
	* nptl/tst-cond22.c (do_test): Likewise.
	* sysdeps/aarch64/nptl/bits/pthreadtypes.h (pthread_cond_t): Adapt
	structure.
	* sysdeps/arm/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/ia64/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/m68k/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/microblaze/nptl/bits/pthreadtypes.h (pthread_cond_t):
	Likewise.
	* sysdeps/mips/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/nios2/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/s390/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/sh/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/tile/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/unix/sysv/linux/alpha/bits/pthreadtypes.h (pthread_cond_t):
	Likewise.
	* sysdeps/unix/sysv/linux/powerpc/bits/pthreadtypes.h (pthread_cond_t):
	Likewise.
	* sysdeps/x86/bits/pthreadtypes.h (pthread_cond_t): Likewise.
	* sysdeps/nptl/internaltypes.h (COND_NWAITERS_SHIFT): Remove.
	(COND_CLOCK_BITS): Adapt.
	* sysdeps/nptl/pthread.h (PTHREAD_COND_INITIALIZER): Adapt.
	* nptl/pthreadP.h (__PTHREAD_COND_CLOCK_MONOTONIC_MASK,
	__PTHREAD_COND_SHARED_MASK): New.
	* nptl/nptl-printers.py (CLOCK_IDS): Remove.
	(ConditionVariablePrinter, ConditionVariableAttributesPrinter): Adapt.
	* nptl/nptl_lock_constants.pysym: Adapt.
	* nptl/test-cond-printers.py: Adapt.
	* sysdeps/unix/sysv/linux/hppa/internaltypes.h (cond_compat_clear,
	cond_compat_check_and_clear): Adapt.
	* sysdeps/unix/sysv/linux/hppa/pthread_cond_timedwait.c: Remove file ...
	* sysdeps/unix/sysv/linux/hppa/pthread_cond_wait.c
	(__pthread_cond_timedwait): ... and move here.
	* nptl/DESIGN-condvar.txt: Remove file.
	* nptl/lowlevelcond.sym: Likewise.
	* nptl/pthread_cond_timedwait.c: Likewise.
	* sysdeps/unix/sysv/linux/i386/i486/pthread_cond_broadcast.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i486/pthread_cond_signal.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i486/pthread_cond_timedwait.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i486/pthread_cond_wait.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i586/pthread_cond_broadcast.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i586/pthread_cond_signal.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i586/pthread_cond_timedwait.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i586/pthread_cond_wait.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i686/pthread_cond_broadcast.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i686/pthread_cond_signal.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i686/pthread_cond_timedwait.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/i686/pthread_cond_wait.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/pthread_cond_broadcast.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/pthread_cond_signal.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S: Likewise.
2016-12-31 14:56:47 +01:00
Joseph Myers
0acb8a2a85 Refactor long double information into bits/long-double.h.
Information about whether the ABI of long double is the same as that
of double is split between bits/mathdef.h and bits/wordsize.h.

When the ABIs are the same, bits/mathdef.h defines
__NO_LONG_DOUBLE_MATH.  In addition, in the case where the same glibc
binary supports both -mlong-double-64 and -mlong-double-128,
bits/wordsize.h defines __LONG_DOUBLE_MATH_OPTIONAL, along with
__NO_LONG_DOUBLE_MATH if this particular compilation is with
-mlong-double-64.

As part of the refactoring I proposed in
<https://sourceware.org/ml/libc-alpha/2016-11/msg00745.html>, this
patch puts all that information in a single header,
bits/long-double.h.  It is included from sys/cdefs.h alongside the
include of bits/wordsize.h, so other headers generally do not need to
include bits/long-double.h directly.

Previously, various bits/mathdef.h headers and bits/wordsize.h headers
had this long double information (including implicitly in some
bits/mathdef.h headers through not having the defines present in the
default version).  After the patch, it's all in six bits/long-double.h
headers.  Furthermore, most of those new headers are not
architecture-specific.  Architectures with optional long double all
use the ldbl-opt sysdeps directory, either in the order (ldbl-64-128,
ldbl-opt, ldbl-128) or (ldbl-128ibm, ldbl-opt).  Thus a generic header
for the case where long double = double, and headers in ldbl-128,
ldbl-96 and ldbl-opt, suffices to cover every architecture except for
cases where long double properties vary between different ABIs sharing
a set of installed headers; fortunately all the ldbl-opt cases share a
single compiler-predefined macro __LONG_DOUBLE_128__ that can be used
to tell whether this compilation is -mlong-double-64 or
-mlong-double-128.

The two cases where a set of headers is shared between ABIs with
different long double properties, MIPS (o32 has long double = double,
other ABIs use ldbl-128) and SPARC (32-bit has optional long double,
64-bit has required long double), need their own bits/long-double.h
headers.

As with bits/wordsize.h, multiple-include protection for this header
is generally implicit through the include guards on sys/cdefs.h, and
multiple inclusion is harmless in any case.  There is one subtlety:
the header must not define __LONG_DOUBLE_MATH_OPTIONAL if
__NO_LONG_DOUBLE_MATH was defined before its inclusion, because doing
so breaks how sysdeps/ieee754/ldbl-opt/nldbl-compat.h defines
__NO_LONG_DOUBLE_MATH itself before including system headers.  Subject
to keeping that working, it would be reasonable to move these macros
from defined/undefined #ifdef to always-defined 1/0 #if semantics, but
this patch does not attempt to do so, just rearranges where the macros
are defined.
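
For the ldbl-opt case, the logic just described comes down to something
like the following sketch (the actual header may differ in detail):

  /* sysdeps/ieee754/ldbl-opt/bits/long-double.h (sketch).  */
  #ifndef __NO_LONG_DOUBLE_MATH
  /* Both -mlong-double-64 and -mlong-double-128 are supported.  */
  # define __LONG_DOUBLE_MATH_OPTIONAL 1
  # ifndef __LONG_DOUBLE_128__
  /* This particular compilation uses -mlong-double-64.  */
  #  define __NO_LONG_DOUBLE_MATH 1
  # endif
  #endif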

After this patch, the only use of bits/mathdef.h is the alpha one for
modifying complex function ABIs for old GCC.  Thus, all versions of
the header other than the default and alpha versions are removed, as
is the include from math.h.

Tested for x86_64 and x86.  Also did compilation-only testing with
build-many-glibcs.py.

	* bits/long-double.h: New file.
	* sysdeps/ieee754/ldbl-128/bits/long-double.h: Likewise.
	* sysdeps/ieee754/ldbl-96/bits/long-double.h: Likewise.
	* sysdeps/ieee754/ldbl-opt/bits/long-double.h: Likewise.
	* sysdeps/mips/bits/long-double.h: Likewise.
	* sysdeps/unix/sysv/linux/sparc/bits/long-double.h: Likewise.
	* math/Makefile (headers): Add bits/long-double.h.
	* misc/sys/cdefs.h: Include <bits/long-double.h>.
	* stdlib/strtold.c: Include <bits/long-double.h> instead of
	<bits/wordsize.h>.
	* bits/mathdef.h [!_COMPLEX_H]: Do not allow inclusion.
	[!__NO_LONG_DOUBLE_MATH]: Remove conditional code.
	* math/math.h: Do not include <bits/mathdef.h>.
	* sysdeps/aarch64/bits/mathdef.h: Remove file.
	* sysdeps/alpha/bits/mathdef.h [!_COMPLEX_H]: Do not allow
	inclusion.
	* sysdeps/ia64/bits/mathdef.h: Remove file.
	* sysdeps/m68k/m680x0/bits/mathdef.h: Likewise.
	* sysdeps/mips/bits/mathdef.h: Likewise.
	* sysdeps/powerpc/bits/mathdef.h: Likewise.
	* sysdeps/s390/bits/mathdef.h: Likewise.
	* sysdeps/sparc/bits/mathdef.h: Likewise.
	* sysdeps/x86/bits/mathdef.h: Likewise.
	* sysdeps/s390/s390-32/bits/wordsize.h
	[!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]: Remove
	conditional code.
	* sysdeps/s390/s390-64/bits/wordsize.h
	[!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]:
	Likewise.
	* sysdeps/unix/sysv/linux/alpha/bits/wordsize.h
	[!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]:
	Likewise.
	* sysdeps/unix/sysv/linux/powerpc/bits/wordsize.h
	[!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]:
	Likewise.
	* sysdeps/unix/sysv/linux/sparc/bits/wordsize.h
	[!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]:
	Likewise.
2016-12-14 18:27:56 +00:00
Florian Weimer
67aae64512 aarch64: Use explicit offsets in _dl_tlsdesc_dynamic
Commit 389d1f1b23 (“Partial ILP32
support for aarch64”) broke dynamic TLS support because a load
offset changed:

 0000000000000030 <_dl_tlsdesc_dynamic>:
   30:  a9bc7bfd        stp     x29, x30, [sp,#-64]!
   34:  910003fd        mov     x29, sp
   38:  a9020be1        stp     x1, x2, [sp,#32]
   3c:  a90313e3        stp     x3, x4, [sp,#48]
   40:  d53bd044        mrs     x4, tpidr_el0
   44:  c8dffc1f        ldar    xzr, [x0]
   48:  f9400401        ldr     x1, [x0,#8]
   4c:  f9400080        ldr     x0, [x4]
   50:  f9400823        ldr     x3, [x1,#16]
   54:  f9400002        ldr     x2, [x0]
   58:  eb02007f        cmp     x3, x2
   5c:  540001a8        b.hi    90 <_dl_tlsdesc_dynamic+0x60>
   60:  f9400022        ldr     x2, [x1]
   64:  8b021000        add     x0, x0, x2, lsl #4
   68:  f9400000        ldr     x0, [x0]
   6c:  b100041f        cmn     x0, #0x1
   70:  54000100        b.eq    90 <_dl_tlsdesc_dynamic+0x60>
-  74:  f9400421        ldr     x1, [x1,#8]
+  74:  f9400821        ldr     x1, [x1,#16]
   78:  8b010000        add     x0, x0, x1
…

This commit introduces explicit struct offsets, generated
from the C headers, fixing the regression.
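
The offsets are produced by a new .sym file run through glibc's
gen-as-const machinery, along these lines (a sketch; the entry names are
illustrative of the actual file):

  #include <stddef.h>
  #include <sysdep.h>
  #include <tls.h>
  #include <dl-tlsdesc.h>

  --

  -- Offsets used by dl-tlsdesc.S, computed from the C structs so the
  -- assembly can no longer fall out of sync with the headers.
  TLSDESC_ARG        offsetof (struct tlsdesc, arg)
  TLSDESC_GEN_COUNT  offsetof (struct tlsdesc_dynamic_arg, gen_count)
  TLSDESC_MODID      offsetof (struct tlsdesc_dynamic_arg, tlsinfo.ti_module)
  TLSDESC_MODOFF     offsetof (struct tlsdesc_dynamic_arg, tlsinfo.ti_offset)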
2016-12-02 16:52:57 +01:00
Joseph Myers
b2491db6c8 Refactor FP_ILOGB* out of bits/mathdef.h.
Continuing the refactoring of bits/mathdef.h, this patch stops it
defining FP_ILOGB0 and FP_ILOGBNAN, moving the required information to
a new header bits/fp-logb.h.

There are only two possible values of each of those macros permitted
by ISO C.  TS 18661-1 adds corresponding macros for llogb, and their
values are required to correspond to those of the ilogb macros in the
obvious way.  Thus two boolean values - for which the same choices are
correct for most architectures - suffice to determine the value of all
these macros, and by defining macros for those boolean values in
bits/fp-logb.h we can then define the public FP_* macros in math.h and
avoid the present duplication of the associated feature test macro
logic.

This patch duly moves to bits/fp-logb.h defining __FP_LOGB0_IS_MIN and
__FP_LOGBNAN_IS_MIN.  Default definitions of those to 0 are correct
for most architectures, while ia64, m68k and x86 get their own
versions of bits/fp-logb.h to reflect their use of values different
from the defaults.
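
Concretely, math.h can then derive the public macros from those two
booleans along these lines (a sketch of the resulting logic):

  /* bits/fp-logb.h, generic version: neither value is INT_MIN.  */
  #define __FP_LOGB0_IS_MIN	0
  #define __FP_LOGBNAN_IS_MIN	0

  /* math.h, under the usual feature test macros: */
  #if __FP_LOGB0_IS_MIN
  # define FP_ILOGB0	(-2147483647 - 1)	/* INT_MIN.  */
  #else
  # define FP_ILOGB0	(-2147483647)		/* -INT_MAX.  */
  #endif
  #if __FP_LOGBNAN_IS_MIN
  # define FP_ILOGBNAN	(-2147483647 - 1)
  #else
  # define FP_ILOGBNAN	2147483647
  #endif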

The patch renders many copies of bits/mathdef.h trivial (needed only
to avoid the default __NO_LONG_DOUBLE_MATH).  I'll revise
<https://sourceware.org/ml/libc-alpha/2016-11/msg00865.html>
accordingly so that it removes all bits/mathdef.h headers except the
default one and the alpha one, and arranges for the header to be
included only by complex.h as the only remaining use at that point
will be for the alpha ABI issues there.

Tested for x86_64 and x86.  Also did compile-only testing with
build-many-glibcs.py (using glibc sources from before the commit that
introduced many build failures with undefined __GI___sigsetjmp).

	* bits/fp-logb.h: New file.
	* sysdeps/ia64/bits/fp-logb.h: Likewise.
	* sysdeps/m68k/m680x0/bits/fp-logb.h: Likewise.
	* sysdeps/x86/bits/fp-logb.h: Likewise.
	* math/Makefile (headers): Add bits/fp-logb.h.
	* math/math.h: Include <bits/fp-logb.h>.
	[__USE_ISOC99] (FP_ILOGB0): Define based on __FP_LOGB0_IS_MIN.
	[__USE_ISOC99] (FP_ILOGBNAN): Define based on __FP_LOGBNAN_IS_MIN.
	* bits/mathdef.h (FP_ILOGB0): Remove.
	(FP_ILOGBNAN): Likewise.
	* sysdeps/aarch64/bits/mathdef.h (FP_ILOGB0): Likewise.
	(FP_ILOGBNAN): Likewise.
	* sysdeps/alpha/bits/mathdef.h (FP_ILOGB0): Likewise.
	(FP_ILOGBNAN): Likewise.
	* sysdeps/ia64/bits/mathdef.h (FP_ILOGB0): Likewise.
	(FP_ILOGBNAN): Likewise.
	* sysdeps/m68k/m680x0/bits/mathdef.h (FP_ILOGB0): Likewise.
	(FP_ILOGBNAN): Likewise.
	* sysdeps/mips/bits/mathdef.h (FP_ILOGB0): Likewise.
	(FP_ILOGBNAN): Likewise.
	* sysdeps/powerpc/bits/mathdef.h (FP_ILOGB0): Likewise.
	(FP_ILOGBNAN): Likewise.
	* sysdeps/s390/bits/mathdef.h (FP_ILOGB0): Likewise.
	(FP_ILOGBNAN): Likewise.
	* sysdeps/sparc/bits/mathdef.h (FP_ILOGB0): Likewise.
	(FP_ILOGBNAN): Likewise.
	* sysdeps/x86/bits/mathdef.h (FP_ILOGB0): Likewise.
	(FP_ILOGBNAN): Likewise.
2016-12-01 02:56:55 +00:00
Joseph Myers
f11e220d2d Refactor FP_FAST_* into bits/fp-fast.h.
Continuing the refactoring of bits/mathdef.h, this patch moves the
FP_FAST_* definitions into a new bits/fp-fast.h header.  Currently
this is only for FP_FAST_FMA*, but in future it would be the
appropriate place for the FP_FAST_* macros from TS 18661-1 as well.

The generic bits/mathdef.h header defines these macros based on
whether the compiler defines __FP_FAST_*.  Most architecture-specific
headers, however, fail to do so, meaning that if the architecture (or
some particular processors) does in fact have fused operations, and
GCC knows to use them inline, the FP_FAST_* macros will still not be
defined.

By refactoring, this patch causes the generic version (based on
__FP_FAST_*) to be used in more cases, and so the macro definitions to
be more accurate.  Architectures that already defined some or all of
these macros other than based on the predefines have their own
versions of fp-fast.h, which are arranged so they define FP_FAST_* if
either the architecture-specific conditions are true or __FP_FAST_*
are defined.
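
The generic header therefore reduces to mapping the compiler predefines
onto the public names, roughly:

  /* Sketch of the generic bits/fp-fast.h: define FP_FAST_* exactly when
     the compiler advertises fused operations via __FP_FAST_*.  */
  #ifdef __USE_ISOC99
  # ifdef __FP_FAST_FMA
  #  define FP_FAST_FMA 1
  # endif
  # ifdef __FP_FAST_FMAF
  #  define FP_FAST_FMAF 1
  # endif
  # ifdef __FP_FAST_FMAL
  #  define FP_FAST_FMAL 1
  # endif
  #endif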

After this refactoring, various bits/mathdef.h headers for
architectures with long double = double are semantically identical to
the generic version.  The patch removes those headers that are
redundant.  (In fact two of the four removed were already redundant
before this patch because they did use __FP_FAST_*.)

Tested for x86_64 and x86, and compilation-only with
build-many-glibcs.py.

	* bits/fp-fast.h: New file.
	* sysdeps/aarch64/bits/fp-fast.h: Likewise.
	* sysdeps/powerpc/bits/fp-fast.h: Likewise.
	* math/Makefile (headers): Add bits/fp-fast.h.
	* math/math.h: Include <bits/fp-fast.h>.
	* bits/mathdef.h (FP_FAST_FMA): Remove.
	(FP_FAST_FMAF): Likewise.
	(FP_FAST_FMAL): Likewise.
	* sysdeps/aarch64/bits/mathdef.h (FP_FAST_FMA): Likewise.
	(FP_FAST_FMAF): Likewise.
	* sysdeps/powerpc/bits/mathdef.h (FP_FAST_FMA): Likewise.
	(FP_FAST_FMAF): Likewise.
	* sysdeps/x86/bits/mathdef.h (FP_FAST_FMA): Likewise.
	(FP_FAST_FMAF): Likewise.
	(FP_FAST_FMAL): Likewise.
	* sysdeps/arm/bits/mathdef.h: Remove file.
	* sysdeps/hppa/fpu/bits/mathdef.h: Likewise.
	* sysdeps/sh/sh4/bits/mathdef.h: Likewise.
	* sysdeps/tile/bits/mathdef.h: Likewise.
2016-11-29 01:45:00 +00:00
Steve Ellcey
389d1f1b23 Partial ILP32 support for aarch64.
* sysdeps/aarch64/crti.S: Add include of sysdep.h.
	(call_weak_fn): Use PTR_REG to get correct reg name in ILP32.
	* sysdeps/aarch64/dl-irel.h: Add include of sysdep.h.
	(elf_irela): Use AARCH64_R macro to get correct relocation in ILP32.
	* sysdeps/aarch64/dl-machine.h: Add include of sysdep.h.
	(elf_machine_load_address, RTLD_START, RTLD_START_1, RTLD_START,
	elf_machine_type_class, ELF_MACHINE_JMP_SLOT, elf_machine_rela,
	elf_machine_lazy_rel): Add ifdef's for ILP32 support.
	* sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_return,
	_dl_tlsdesc_return_lazy, _dl_tlsdesc_dynamic,
	_dl_tlsdesc_resolve_hold): Extend pointers in ILP32, use PTR_REG
	to get correct reg name for ILP32.
	* sysdeps/aarch64/dl-trampoline.S (ip01): New Macro.
	(RELA_SIZE): New Macro.
	(_dl_runtime_resolve, _dl_runtime_profile): Use new macros and PTR_REG
	to support ILP32.
	* sysdeps/aarch64/jmpbuf-unwind.h (_JMPBUF_CFA_UNWINDS_ADJ): Add
	cast for ILP32 mode.
	* sysdeps/aarch64/memcmp.S (memcmp): Extend arg pointers for ILP32 mode.
	* sysdeps/aarch64/memcpy.S (memmove, memcpy): Ditto.
	* sysdeps/aarch64/memset.S (__memset): Ditto.
	* sysdeps/aarch64/strchr.S (strchr): Ditto.
	* sysdeps/aarch64/strchrnul.S (__strchrnul): Ditto.
	* sysdeps/aarch64/strcmp.S (strcmp): Ditto.
	* sysdeps/aarch64/strcpy.S (strcpy): Ditto.
	* sysdeps/aarch64/strlen.S (__strlen): Ditto.
	* sysdeps/aarch64/strncmp.S (strncmp): Ditto.
	* sysdeps/aarch64/strnlen.S (strnlen): Ditto.
	* sysdeps/aarch64/strrchr.S (strrchr): Ditto.
	* sysdeps/unix/sysv/linux/aarch64/clone.S: Ditto.
	* sysdeps/unix/sysv/linux/aarch64/setcontext.S (__setcontext): Ditto.
	* sysdeps/unix/sysv/linux/aarch64/swapcontext.S (__swapcontext): Ditto.
	* sysdeps/aarch64/__longjmp.S (__longjmp): Extend pointers in ILP32,
	change PTR_MANGLE call to use register numbers instead of names.
	* sysdeps/unix/sysv/linux/aarch64/getcontext.S (__getcontext): Ditto.
	* sysdeps/aarch64/setjmp.S (__sigsetjmp): Extend arg pointers for
	ILP32 mode, change PTR_MANGLE calls to use register numbers.
	* sysdeps/aarch64/start.S (_start): Ditto.
	* sysdeps/aarch64/nptl/bits/pthreadtypes.h
	(__PTHREAD_RWLOCK_INT_FLAGS_SHARED): New define.
	(__SIZEOF_PTHREAD_ATTR_T, __SIZEOF_PTHREAD_MUTEX_T,
	__SIZEOF_PTHREAD_MUTEXATTR_T, __SIZEOF_PTHREAD_COND_T,
	__SIZEOF_PTHREAD_COND_COMPAT_T, __SIZEOF_PTHREAD_CONDATTR_T,
	__SIZEOF_PTHREAD_RWLOCK_T, __SIZEOF_PTHREAD_RWLOCKATTR_T,
	__SIZEOF_PTHREAD_BARRIER_T, __SIZEOF_PTHREAD_BARRIERATTR_T):
	Make defined values dependent on __ILP32__.
	* sysdeps/aarch64/nptl/bits/semaphore.h (__SIZEOF_SEM_T): Change define.
	(sem_t): Change __align type.
	* sysdeps/aarch64/sysdep.h (AARCH64_R, PTR_REG, PTR_LOG_SIZE, DELOUSE,
	PTR_SIZE): New Macros.
	(LDST_PCREL, LDST_GLOBAL): Update to use PTR_REG.
	* sysdeps/unix/sysv/linux/aarch64/bits/fcntl.h (O_LARGEFILE):
	Set when in ILP32 mode.
	(F_GETLK64, F_SETLK64, F_SETLKW64): Only set in LP64 mode.
	* sysdeps/unix/sysv/linux/aarch64/dl-cache.h (DL_CACHE_DEFAULT_ID):
	Set elf flags for ILP32.
	(add_system_dir): Set ILP32 library directories.
	* sysdeps/unix/sysv/linux/aarch64/init-first.c
	(_libc_vdso_platform_setup): Set minimum kernel version for ILP32.
	* sysdeps/unix/sysv/linux/aarch64/ldconfig.h
	(SYSDEP_KNOWN_INTERPRETER_NAMES): Add ILP32 names.
	* sysdeps/unix/sysv/linux/aarch64/sigcontextinfo.h (GET_PC, SET_PC):
	New Macros.
	* sysdeps/unix/sysv/linux/aarch64/sysdep.h: Handle ILP32 pointers.
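
At its core, the pointer handling added to sysdeps/aarch64/sysdep.h is a
small macro set, approximately as follows (a sketch based on the commit,
not the verbatim header):

  /* ILP32 pointer macros: pick the right relocation prefix, register
     width and pointer size, and zero-extend incoming pointer args.  */
  #ifdef __LP64__
  # define AARCH64_R(NAME)	R_AARCH64_ ## NAME
  # define PTR_REG(n)		x##n		/* 64-bit register name.  */
  # define PTR_LOG_SIZE		3
  # define DELOUSE(n)		/* No extension needed in LP64.  */
  #else
  # define AARCH64_R(NAME)	R_AARCH64_P32_ ## NAME
  # define PTR_REG(n)		w##n		/* 32-bit register name.  */
  # define PTR_LOG_SIZE		2
  # define DELOUSE(n)		mov w##n, w##n	/* Zero-extend pointer arg.  */
  #endif

  #define PTR_SIZE	(1 << PTR_LOG_SIZE)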
2016-11-28 09:01:23 -08:00
Adhemerval Zanella
c579f48edb Remove cached PID/TID in clone
This patch removes the PID cache and its usage in current GLIBC code.  The
cache is mainly a performance optimization to avoid the getpid syscall;
however, it adds some issues:

  - The exposed clone syscall will try to set pid/tid to make the new
    thread somewhat compatible with current GLIBC assumptions.  This causes
    a set of issues with new workloads and use cases (such as BZ#17214 and
    [1]), as well as with new internal uses of clone to optimize other
    algorithms (such as clone plus CLONE_VM for posix_spawn, BZ#19957).

  - The caching complexity also added some bugs in the past [2] [3] and
    requires extra effort from each port to handle such requirements (for
    both the clone and vfork implementations).

  - The caching performance gain is mainly on getpid and some specific
    code paths.  The getpid performance benefit is questionable [4], both
    as to whether getpid is really a hotspot and as to the getpid
    implementation itself (if it were indeed a justifiable hotspot, a vDSO
    symbol would allow a much simpler solution).

    Other usage is mainly in unusual code paths, such as pthread
    cancellation signal handling.

For thread creation (on stack allocation) the code simplification in fact
adds some performance gain, since there is no longer a need to traverse the
stack cache and invalidate each element's pid.

Other thread usages will require a direct getpid syscall, such as the
cancellation/setxid signal, thread cancellation, the thread failure path
(at create_thread), and thread signaling (pthread_kill and
pthread_sigqueue).  However, these are hardly usual hotspots, and I think
adding a syscall there is justifiable.
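
As a sketch of what those paths now look like (illustrative, not a
verbatim diff of any one file):

  /* Before: read the PID cached in the thread descriptor.  */
  pid_t pid = THREAD_GETMEM (THREAD_SELF, pid);

  /* After: obtain it directly from the kernel.  */
  pid_t pid = __getpid ();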

It also simplifies both the clone and vfork arch-specific implementations.
Reviewing each fork implementation also turned up some discrepancies that
this patch solves:

  - microblaze clone/vfork does not set/reset the pid/tid field
  - hppa uses the default vfork implementation, which falls back to fork.
    Since vfork is deprecated I do not think we should bother with it.

The patch also removes the TID caching in clone.  My understanding of that
semantic is that it tried to provide some pthread usability after a user
program issues clone directly (as done by thread creation with
CLONE_PARENT_SETTID and the pthread tid member).  However, as stated before
in multiple discussion threads, GLIBC provides the clone syscall without
supporting all of these semantics any further.

I ran a full make check on x86_64, x32, i686, armhf, aarch64, and powerpc64le.
For sparc32, sparc64, and mips I ran the basic fork and vfork tests from the
posix/ folder (on a qemu system).  So it would require further testing on
alpha, hppa, ia64, m68k, nios2, s390, sh, and tile (I excluded microblaze
because it already implements the patch semantics regarding clone/vfork).

[1] https://codereview.chromium.org/800183004/
[2] https://sourceware.org/ml/libc-alpha/2006-07/msg00123.html
[3] https://sourceware.org/bugzilla/show_bug.cgi?id=15368
[4] http://yarchive.net/comp/linux/getpid_caching.html

	* sysdeps/nptl/fork.c (__libc_fork): Remove pid cache setting.
	* nptl/allocatestack.c (allocate_stack): Likewise.
	(__reclaim_stacks): Likewise.
	(setxid_signal_thread): Obtain pid through syscall.
	* nptl/nptl-init.c (sigcancel_handler): Likewise.
	(sighandle_setxid): Likewise.
	* nptl/pthread_cancel.c (pthread_cancel): Likewise.
	* sysdeps/unix/sysv/linux/pthread_kill.c (__pthread_kill): Likewise.
	* sysdeps/unix/sysv/linux/pthread_sigqueue.c (pthread_sigqueue):
	Likewise.
	* sysdeps/unix/sysv/linux/createthread.c (create_thread): Likewise.
	* sysdeps/unix/sysv/linux/getpid.c: Remove file.
	* nptl/descr.h (struct pthread): Change comment about pid value.
	* nptl/pthread_getattr_np.c (pthread_getattr_np): Remove thread
	pid assert.
	* sysdeps/unix/sysv/linux/pthread-pids.h (__pthread_initialize_pids):
	Do not set pid value.
	* nptl_db/td_ta_thr_iter.c (iterate_thread_list): Remove thread
	pid cache check.
	* nptl_db/td_thr_validate.c (td_thr_validate): Likewise.
	* sysdeps/aarch64/nptl/tcb-offsets.sym: Remove pid offset.
	* sysdeps/alpha/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/arm/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/hppa/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/i386/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/ia64/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/m68k/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/microblaze/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/mips/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/nios2/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/powerpc/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/s390/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/sh/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/sparc/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/tile/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/x86_64/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/unix/sysv/linux/aarch64/clone.S: Remove pid and tid caching.
	* sysdeps/unix/sysv/linux/alpha/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/arm/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/hppa/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/ia64/clone2.S: Likewise.
	* sysdeps/unix/sysv/linux/mips/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/nios2/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-32/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/sh/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc32/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/tile/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/aarch64/vfork.S: Remove pid set and reset.
	* sysdeps/unix/sysv/linux/alpha/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/arm/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/ia64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/m68k/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/m68k/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/mips/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/nios2/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-32/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/sh/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc32/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/tile/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/tst-clone2.c (f): Remove direct pthread
	struct access.
	(clone_test): Remove function.
	(do_test): Rewrite to take into account that the pid is no longer
	cached.
2016-11-24 19:38:51 -02:00
Joseph Myers
93eb85ceb2 Refactor float_t, double_t information into bits/flt-eval-method.h.
At present, definitions of float_t and double_t are split among many
bits/mathdef.h headers.

For all but three architectures, these types are float and double.
Furthermore, if you assume __FLT_EVAL_METHOD__ to be defined, that
provides a more generic way of determining the correct values of these
typedefs.  Defining these typedefs more generally based on
__FLT_EVAL_METHOD__ was previously proposed by Paul Eggert in
<https://sourceware.org/ml/libc-alpha/2012-02/msg00002.html>.

This patch refactors things in the way I proposed in
<https://sourceware.org/ml/libc-alpha/2016-11/msg00745.html>.  A new
header bits/flt-eval-method.h defines a single macro,
__GLIBC_FLT_EVAL_METHOD, which is then used by math.h to define
float_t and double_t.  The default is based on __FLT_EVAL_METHOD__
(although actually a default of 0 would have the same effect for
current ports, because ports where values other than 0 or 16 are
possible all have their own headers).
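
Concretely, the generic header and its use in math.h look roughly like
this (a sketch of the resulting logic):

  /* Generic bits/flt-eval-method.h.  */
  #ifdef __FLT_EVAL_METHOD__
  # if __FLT_EVAL_METHOD__ == -1
  #  define __GLIBC_FLT_EVAL_METHOD	2
  # else
  #  define __GLIBC_FLT_EVAL_METHOD	__FLT_EVAL_METHOD__
  # endif
  #else
  # define __GLIBC_FLT_EVAL_METHOD	0
  #endif

  /* math.h, under the usual feature test macros: */
  #if __GLIBC_FLT_EVAL_METHOD == 0 || __GLIBC_FLT_EVAL_METHOD == 16
  typedef float float_t;
  typedef double double_t;
  #elif __GLIBC_FLT_EVAL_METHOD == 1
  typedef double float_t;
  typedef double double_t;
  #elif __GLIBC_FLT_EVAL_METHOD == 2
  typedef long double float_t;
  typedef long double double_t;
  #endif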

To avoid changing the existing semantics in any case, including for
compilers not defining __FLT_EVAL_METHOD__, architecture-specific
files are then added for m68k, s390, x86 which replicate the existing
semantics.  At least with __FLT_EVAL_METHOD__ values possible with
GCC, there should be no change to the choices of float_t and double_t
for any supported configuration.

Architecture maintainer notes:

* m68k: sysdeps/m68k/m680x0/bits/flt-eval-method.h always defines
  __GLIBC_FLT_EVAL_METHOD to 2 to replicate the existing logic.  But
  actually GCC defines __FLT_EVAL_METHOD__ to 0 if TARGET_68040.  It
  might make sense to make the header prefer to base things on
  __FLT_EVAL_METHOD__ if defined, like the x86 version, and so make
  the choices of these types more accurate (with a NEWS entry as for
  the other changes to these types on particular architectures).

* s390: sysdeps/s390/bits/flt-eval-method.h always defines
  __GLIBC_FLT_EVAL_METHOD to 1 to replicate the existing logic.  As
  previously discussed, it might make sense in coordination with GCC
  to eliminate the historic mistake, avoid excess precision in the
  -fexcess-precision=standard case and make the typedefs match (with a
  NEWS entry, again).

Tested for x86-64 and x86.  Also did compilation-only testing with
build-many-glibcs.py.

	* bits/flt-eval-method.h: New file.
	* sysdeps/m68k/m680x0/bits/flt-eval-method.h: Likewise.
	* sysdeps/s390/bits/flt-eval-method.h: Likewise.
	* sysdeps/x86/bits/flt-eval-method.h: Likewise.
	* math/Makefile (headers): Add bits/flt-eval-method.h.
	* math/math.h: Include <bits/flt-eval-method.h>.
	[__USE_ISOC99] (float_t): Define based on __GLIBC_FLT_EVAL_METHOD.
	[__USE_ISOC99] (double_t): Likewise.
	* bits/mathdef.h (float_t): Remove.
	(double_t): Likewise.
	* sysdeps/aarch64/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/alpha/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/arm/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/hppa/fpu/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/ia64/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/m68k/m680x0/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/mips/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/powerpc/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/s390/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/sh/sh4/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/sparc/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/tile/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
	* sysdeps/x86/bits/mathdef.h (float_t): Likewise.
	(double_t): Likewise.
2016-11-24 18:44:50 +00:00
Siddhesh Poyarekar
8f3a4687ad Regenerate ULPs for aarch64
* sysdeps/aarch64/libm-test-ulps: Regenerated.
2016-11-10 16:52:35 +05:30
Florian Weimer
c74940f2a7 nptl: Document the reason why __kind in pthread_mutex_t is part of the ABI 2016-11-07 20:24:32 +01:00
Joseph Myers
799131036e Do not hardcode platform names in manual/libm-err-tab.pl (bug 14139).
manual/libm-err-tab.pl hardcodes a list of names for particular
platforms (mapping from sysdeps directory name to friendly name for
the manual).  This goes against the principle of keeping information
about individual platforms in their corresponding sysdeps directory,
and the list is also very out-of-date regarding supported platforms
and their corresponding sysdeps directories.

This patch fixes this by adding a libm-test-ulps-name file alongside
each libm-test-ulps file.  The script then gets the friendly name from
that file, which is required to exist, so it no longer needs to allow
for the mapping being missing.
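
Each libm-test-ulps-name file simply holds the friendly platform name on
a single line; for example, sysdeps/aarch64/libm-test-ulps-name contains
just:

  AArch64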

Tested for x86_64.

	[BZ #14139]
	* manual/libm-err-tab.pl (%pplatforms): Initialize to empty.
	(find_files): Obtain platform name from libm-test-ulps-name and
	store in %pplatforms.
	(canonicalize_platform): Remove.
	(print_platforms): Use $pplatforms directly.
	(by_platforms): Do not allow for platforms missing from
	%pplatforms.
	* sysdeps/aarch64/libm-test-ulps-name: New file.
	* sysdeps/alpha/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/arm/libm-test-ulps-name: Likewise.
	* sysdeps/generic/libm-test-ulps-name: Likewise.
	* sysdeps/hppa/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/i386/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps-name: Likewise.
	* sysdeps/ia64/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/m68k/coldfire/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/m68k/m680x0/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/microblaze/libm-test-ulps-name: Likewise.
	* sysdeps/mips/mips32/libm-test-ulps-name: Likewise.
	* sysdeps/mips/mips64/libm-test-ulps-name: Likewise.
	* sysdeps/nios2/libm-test-ulps-name: Likewise.
	* sysdeps/powerpc/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/powerpc/nofpu/libm-test-ulps-name: Likewise.
	* sysdeps/s390/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/sh/libm-test-ulps-name: Likewise.
	* sysdeps/sparc/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/tile/libm-test-ulps-name: Likewise.
	* sysdeps/x86_64/fpu/libm-test-ulps-name: Likewise.
2016-11-04 16:49:06 +00:00
Steve Ellcey
d060cd002d Define wordsize.h macros everywhere
* bits/wordsize.h: Add documentation.
	* sysdeps/aarch64/bits/wordsize.h: New file.
	* sysdeps/generic/stdint.h (PTRDIFF_MIN, PTRDIFF_MAX): Update
	definitions.
	(SIZE_MAX): Change ifdef to if in __WORDSIZE32_SIZE_ULONG check.
	* sysdeps/gnu/bits/utmp.h (__WORDSIZE_TIME64_COMPAT32): Check
	with #if instead of #ifdef.
	* sysdeps/gnu/bits/utmpx.h (__WORDSIZE_TIME64_COMPAT32): Ditto.
	* sysdeps/mips/bits/wordsize.h (__WORDSIZE32_SIZE_ULONG,
	__WORDSIZE32_PTRDIFF_LONG, __WORDSIZE_TIME64_COMPAT32):
	Add or change defines.
	* sysdeps/powerpc/powerpc32/bits/wordsize.h: Likewise.
	* sysdeps/powerpc/powerpc64/bits/wordsize.h: Likewise.
	* sysdeps/s390/s390-32/bits/wordsize.h: Likewise.
	* sysdeps/s390/s390-64/bits/wordsize.h: Likewise.
	* sysdeps/sparc/sparc32/bits/wordsize.h: Likewise.
	* sysdeps/sparc/sparc64/bits/wordsize.h: Likewise.
	* sysdeps/tile/tilegx/bits/wordsize.h: Likewise.
	* sysdeps/tile/tilepro/bits/wordsize.h: Likewise.
	* sysdeps/unix/sysv/linux/alpha/bits/wordsize.h: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/bits/wordsize.h: Likewise.
	* sysdeps/unix/sysv/linux/sparc/bits/wordsize.h: Likewise.
	* sysdeps/wordsize-32/bits/wordsize.h: Likewise.
	* sysdeps/wordsize-64/bits/wordsize.h: Likewise.
	* sysdeps/x86/bits/wordsize.h: Likewise.
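
The shape of the new aarch64 header gives the idea (a sketch; the exact
values of the __WORDSIZE32_* macros are per-architecture ABI decisions):

  /* Sketch of sysdeps/aarch64/bits/wordsize.h.  */
  #ifdef __LP64__
  # define __WORDSIZE			64
  #else
  # define __WORDSIZE			32
  # define __WORDSIZE32_SIZE_ULONG	1	/* size_t is unsigned long.  */
  # define __WORDSIZE32_PTRDIFF_LONG	1	/* ptrdiff_t is long.  */
  #endif

  #define __WORDSIZE_TIME64_COMPAT32	0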
2016-11-04 09:37:44 -07:00