Mirror of https://sourceware.org/git/glibc.git
Synced 2024-11-30 08:40:07 +00:00 (310 commits)

Anton Youdkevitch | 94e358f6d4
aarch64: thunderx2 memcpy implementation cleanup and streamlining
This is the updated patch improving the long unaligned code path (the one using the "ext" instruction).
1. The always-taken conditional branch at the beginning is removed.
2. Epilogue code is placed after the end of the loop to reduce the number of branches.
3. The redundant "mov" instructions inside the loop are gone thanks to a changed register order in the "ext" instructions inside the loop; the prologue gains one additional "ext" instruction.
4. The count update was hoisted out of the prologues, as it is the same for each of them.
5. Invariant code was hoisted out of the loop epilogue.
6. Since each ext chunk is currently exactly 16 instructions long, a "nop" was added at the beginning of the code sequence so that the loop entry of every chunk is aligned.
* sysdeps/aarch64/multiarch/memcpy_thunderx2.S: Cleanup branching and remove redundant code.

Joseph Myers | a04549c194
Break more lines before not after operators.
This patch makes further coding style fixes where code was breaking lines after an operator, contrary to the GNU Coding Standards. As with the previous patch, it is limited to files following a reasonable approximation to GNU style already, and is not exhaustive; more such issues remain to be fixed. Tested for x86_64, and with build-many-glibcs.py. * dirent/dirent.h [!_DIRENT_HAVE_D_NAMLEN && _DIRENT_HAVE_D_RECLEN] (_D_ALLOC_NAMLEN): Break lines before rather than after operators. * elf/cache.c (print_cache): Likewise. * gshadow/fgetsgent_r.c (__fgetsgent_r): Likewise. * htl/pt-getattr.c (__pthread_getattr_np): Likewise. * hurd/hurdinit.c (_hurd_setproc): Likewise. * hurd/hurdkill.c (_hurd_sig_post): Likewise. * hurd/hurdlookup.c (__file_name_lookup_under): Likewise. * hurd/hurdsig.c (_hurd_internal_post_signal): Likewise. (reauth_proc): Likewise. * hurd/lookup-at.c (__file_name_lookup_at): Likewise. (__file_name_split_at): Likewise. (__directory_name_split_at): Likewise. * hurd/lookup-retry.c (__hurd_file_name_lookup_retry): Likewise. * hurd/port2fd.c (_hurd_port2fd): Likewise. * iconv/gconv_dl.c (do_print): Likewise. * inet/netinet/in.h (struct sockaddr_in): Likewise. * libio/wstrops.c (_IO_wstr_seekoff): Likewise. * locale/setlocale.c (new_composite_name): Likewise. * malloc/memusagestat.c (main): Likewise. * misc/fstab.c (fstab_convert): Likewise. * nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_usercnt): Likewise. * nss/nss_compat/compat-grp.c (getgrent_next_nss): Likewise. (getgrent_next_file): Likewise. (internal_getgrnam_r): Likewise. (internal_getgrgid_r): Likewise. * nss/nss_compat/compat-initgroups.c (getgrent_next_nss): Likewise. (internal_getgrent_r): Likewise. * nss/nss_compat/compat-pwd.c (getpwent_next_nss_netgr): Likewise. (getpwent_next_nss): Likewise. (getpwent_next_file): Likewise. (internal_getpwnam_r): Likewise. (internal_getpwuid_r): Likewise. * nss/nss_compat/compat-spwd.c (getspent_next_nss_netgr): Likewise. (getspent_next_nss): Likewise. (internal_getspnam_r): Likewise. * pwd/fgetpwent_r.c (__fgetpwent_r): Likewise. * shadow/fgetspent_r.c (__fgetspent_r): Likewise. * string/strchr.c (STRCHR): Likewise. * string/strchrnul.c (STRCHRNUL): Likewise. * sysdeps/aarch64/fpu/fpu_control.h (_FPU_FPCR_IEEE): Likewise. * sysdeps/aarch64/sfp-machine.h (_FP_CHOOSENAN): Likewise. * sysdeps/csky/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/generic/memcopy.h (PAGE_COPY_FWD_MAYBE): Likewise. * sysdeps/generic/symbol-hacks.h (__stack_chk_fail_local): Likewise. * sysdeps/gnu/netinet/ip_icmp.h (ICMP_INFOTYPE): Likewise. * sysdeps/gnu/updwtmp.c (TRANSFORM_UTMP_FILE_NAME): Likewise. * sysdeps/gnu/utmp_file.c (TRANSFORM_UTMP_FILE_NAME): Likewise. * sysdeps/hppa/jmpbuf-unwind.h (_JMPBUF_UNWINDS): Likewise. * sysdeps/mach/hurd/bits/stat.h (S_ISPARE): Likewise. * sysdeps/mach/hurd/dl-sysdep.c (_dl_sysdep_start): Likewise. (open_file): Likewise. * sysdeps/mach/hurd/htl/pt-mutexattr-setprotocol.c (pthread_mutexattr_setprotocol): Likewise. * sysdeps/mach/hurd/ioctl.c (__ioctl): Likewise. * sysdeps/mach/hurd/mmap.c (__mmap): Likewise. * sysdeps/mach/hurd/ptrace.c (ptrace): Likewise. * sysdeps/mach/hurd/spawni.c (__spawni): Likewise. * sysdeps/microblaze/dl-machine.h (elf_machine_type_class): Likewise. (elf_machine_rela): Likewise. * sysdeps/mips/mips32/sfp-machine.h (_FP_CHOOSENAN): Likewise. * sysdeps/mips/mips64/sfp-machine.h (_FP_CHOOSENAN): Likewise. * sysdeps/mips/sys/asm.h (multiple #if conditionals): Likewise. * sysdeps/posix/rename.c (rename): Likewise. 
* sysdeps/powerpc/novmx-sigjmp.c (__novmx__sigjmp_save): Likewise. * sysdeps/powerpc/sigjmp.c (__vmx__sigjmp_save): Likewise. * sysdeps/s390/fpu/fenv_libc.h (FPC_VALID_MASK): Likewise. * sysdeps/s390/utf8-utf16-z9.c (gconv_end): Likewise. * sysdeps/unix/grantpt.c (grantpt): Likewise. * sysdeps/unix/sysv/linux/a.out.h (N_TXTOFF): Likewise. * sysdeps/unix/sysv/linux/updwtmp.c (TRANSFORM_UTMP_FILE_NAME): Likewise. * sysdeps/unix/sysv/linux/utmp_file.c (TRANSFORM_UTMP_FILE_NAME): Likewise. * sysdeps/x86/cpu-features.c (get_common_indices): Likewise. * time/tzfile.c (__tzfile_compute): Likewise. |
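For illustration (names made up, not taken from the patch), the rule in miniature, following GNU style:

```c
/* The GNU Coding Standards want the operator at the start of the
   continuation line when an expression has to be split.  */
int
style_example (int first_operand, int second_operand)
{
  /* Discouraged: line broken after the operator.  */
  int discouraged = (first_operand +
                     second_operand);

  /* Preferred, and what this patch converts code to: line broken
     before the operator.  */
  int preferred = (first_operand
                   + second_operand);

  return discouraged == preferred;  /* Always true; style only.  */
}
```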

Feng Xue | 83d1cc42d8
aarch64: Optimized memchr specific to AmpereComputing emag
This version uses general-register memory instructions to load data, because vector-register loads are slightly slower on emag. Character matching is performed in parallel on a 16-byte memory block (16-byte in both size and alignment) each iteration.
* sysdeps/aarch64/memchr.S (__memchr): Rename to MEMCHR. [!MEMCHR](MEMCHR): Set to __memchr.
* sysdeps/aarch64/multiarch/Makefile (sysdep_routines): Add memchr_generic and memchr_nosimd.
* sysdeps/aarch64/multiarch/ifunc-impl-list.c (__libc_ifunc_impl_list): Add memchr ifuncs.
* sysdeps/aarch64/multiarch/memchr.c: New file.
* sysdeps/aarch64/multiarch/memchr_generic.S: Likewise.
* sysdeps/aarch64/multiarch/memchr_nosimd.S: Likewise.
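A rough portable C sketch of the technique, not the actual assembly: the real routine works on aligned 16-byte blocks via paired 64-bit loads, while this sketch scans one 64-bit word per step and skips the alignment handling the real code needs.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Word-at-a-time byte matching using only general registers.  */
static const char *
find_byte (const char *s, int c, size_t n)
{
  uint64_t rep = 0x0101010101010101ULL * (unsigned char) c;
  size_t i = 0;

  for (; i + 8 <= n; i += 8)
    {
      uint64_t w;
      memcpy (&w, s + i, 8);           /* One general-register load.  */
      uint64_t x = w ^ rep;            /* Matching bytes become zero.  */
      /* Classic has-zero-byte test: sets the high bit of every byte
         of x that was zero.  */
      uint64_t hit = (x - 0x0101010101010101ULL) & ~x
                     & 0x8080808080808080ULL;
      if (hit != 0)
        for (size_t j = i; j < i + 8; j++)   /* Locate the exact byte.  */
          if ((unsigned char) s[j] == (unsigned char) c)
            return s + j;
    }
  for (; i < n; i++)                   /* Tail smaller than a word.  */
    if ((unsigned char) s[i] == (unsigned char) c)
      return s + i;
  return NULL;
}
```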

Feng Xue | c7d3890ff5
aarch64: Optimized memset specific to AmpereComputing emag
This version uses general-register memory stores instead of vector-register stores, since the former are faster on emag. The fact that the DC ZVA size on emag is 64 bytes is used by the IFUNC dispatch to select this memset, so the cost of a runtime check on the DC ZVA size is saved.
* sysdeps/aarch64/multiarch/Makefile (sysdep_routines): Add memset_emag.
* sysdeps/aarch64/multiarch/ifunc-impl-list.c (__libc_ifunc_impl_list): Add __memset_emag to memset ifunc.
* sysdeps/aarch64/multiarch/memset.c (libc_ifunc): Add IS_EMAG check for ifunc dispatch.
* sysdeps/aarch64/multiarch/memset_base64.S: New file.
* sysdeps/aarch64/multiarch/memset_emag.S: New file.
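A hedged, AArch64-only sketch of the DC ZVA trade-off being described; the helper names are illustrative, and real code must also handle the unaligned head and tail with ordinary stores.

```c
#include <stddef.h>
#include <stdint.h>

/* The generic path must read the zeroing block size from DCZID_EL0
   at run time; a CPU-specific memset such as __memset_emag can
   hard-code 64 bytes because the IFUNC already proved it runs on
   emag.  */
static inline size_t
dc_zva_size (void)
{
  uint64_t dczid;
  __asm__ ("mrs %0, dczid_el0" : "=r" (dczid));
  /* Bits 0-3 hold log2 of the size in 4-byte words; bit 4 set means
     DC ZVA is prohibited.  */
  if (dczid & 16)
    return 0;
  return 4u << (dczid & 15);
}

/* Zero [p, p+len) with DC ZVA.  Assumes p is already aligned to the
   ZVA block size and len is a multiple of it.  */
static void
zero_with_dc_zva (char *p, size_t len, size_t zva)
{
  for (char *end = p + len; p < end; p += zva)
    __asm__ volatile ("dc zva, %0" : : "r" (p) : "memory");
}
```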

Wilco Dijkstra | 02f440c1ef
[AArch64] Add ifunc support for Ares
Add Ares to the midr_el0 list and support ifunc dispatch. Since Ares supports two 128-bit loads/stores, use Neon registers for memcpy by selecting __memcpy_falkor by default (we should rename this to __memcpy_simd or similar).
* manual/tunables.texi (glibc.cpu.name): Add ares tunable.
* sysdeps/aarch64/multiarch/memcpy.c (__libc_memcpy): Use __memcpy_falkor for ares.
* sysdeps/unix/sysv/linux/aarch64/cpu-features.h (IS_ARES): Add new define.
* sysdeps/unix/sysv/linux/aarch64/cpu-features.c (cpu_list): Add ares cpu.
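A sketch of how MIDR-based IFUNC dispatch works in principle. All function names here are hypothetical; glibc actually reads MIDR through its cached cpu_features (the mrs below is trapped and emulated by Linux only when HWCAP_CPUID is available).

```c
#include <stddef.h>
#include <stdint.h>

typedef void *(*memcpy_fn) (void *, const void *, size_t);

/* Two stand-in implementations so the sketch links.  */
static void *
generic_memcpy (void *d, const void *s, size_t n)
{
  char *dp = d; const char *sp = s;
  while (n--) *dp++ = *sp++;
  return d;
}
static void *
simd_memcpy (void *d, const void *s, size_t n)
{
  return generic_memcpy (d, s, n);   /* Would use Neon registers.  */
}

static int
midr_is_ares (void)
{
  uint64_t midr;
  __asm__ ("mrs %0, midr_el1" : "=r" (midr));
  /* Implementer (bits 31:24) 0x41 is Arm; part number (bits 15:4)
     0xd0c is Neoverse N1, the production name of Ares.  */
  return ((midr >> 24) & 0xff) == 0x41 && ((midr >> 4) & 0xfff) == 0xd0c;
}

/* The resolver runs once at relocation time; the returned variant is
   what the symbol binds to from then on.  */
static memcpy_fn
memcpy_resolver (void)
{
  return midr_is_ares () ? simd_memcpy : generic_memcpy;
}

void *my_memcpy (void *, const void *, size_t)
     __attribute__ ((ifunc ("memcpy_resolver")));
```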

Joseph Myers | 04277e02d7
Update copyright dates with scripts/update-copyrights.
* All files with FSF copyright notices: Update copyright dates using scripts/update-copyrights. * locale/programs/charmap-kw.h: Regenerated. * locale/programs/locfile-kw.h: Likewise. |

Wilco Dijkstra | 5770c0ad1e
[AArch64] Adjust writeback in non-zero memset
This fixes an inefficiency in the non-zero memset. Delaying the writeback until the end of the loop is slightly faster on some cores; this shows a ~5% performance gain on Cortex-A53 when doing large non-zero memsets.
* sysdeps/aarch64/memset.S (MEMSET): Improve non-zero memset loop.

Steve Ellcey | f0da0bcf8b
Remove extra space at end of line.

Anton Youdkevitch | 75c1aee500
aarch64: optimized memcpy implementation for thunderx2
Since aligned loads and stores are a huge performance advantage, the implementation always tries to do aligned accesses. Besides the cases where the src and dst addresses are aligned, or unaligned by the same amount, there are cases where src and dst are unaligned relative to each other. For those (if the length is big enough), the ext instruction is used to merge-and-shift two memory chunks loaded from two adjacent aligned locations, and the adjusted chunk is then stored to an aligned address. Performance gain against the current T2 implementation:
memcpy-large: 65K-32M: +40% to +10%
memcpy-walk: 128-32M: +20% to +2%
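The merge-and-shift idea maps to plain C as follows. This is a sketch using 64-bit general registers instead of the 128-bit vector registers and "ext" instruction the real code uses; note it may read up to one word past the logical source end, which real code bounds carefully.

```c
#include <stddef.h>
#include <stdint.h>

/* Combine two adjacent aligned words into the word that starts
   "offset" bytes into the first one.  Little-endian, 0 < offset < 8
   (offset 0 needs no merging and would shift by 64, which is
   undefined).  */
static inline uint64_t
merge_shift (uint64_t lo, uint64_t hi, unsigned offset)
{
  return (lo >> (8 * offset)) | (hi << (8 * (8 - offset)));
}

/* Copy n words to an aligned dst from the misaligned address
   src_aligned + offset, reusing the previous high word as the next
   low word, the way the assembly rotates registers to avoid the
   redundant "mov" instructions mentioned in the cleanup commit.  */
static void
copy_misaligned (uint64_t *dst, const uint64_t *src_aligned,
                 unsigned offset, size_t n)
{
  uint64_t lo = src_aligned[0];
  for (size_t i = 0; i < n; i++)
    {
      uint64_t hi = src_aligned[i + 1];
      dst[i] = merge_shift (lo, hi, offset);
      lo = hi;              /* Old high word becomes the new low word.  */
    }
}
```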

Joseph Myers | c52944e8cc
Remove unnecessary math_private.h includes.
After my changes to move various macros, inlines and other content from math_private.h to more specific headers, many files including math_private.h no longer need to do so. Furthermore, since the optimized inlines of various functions have been moved to include/fenv.h or replaced by use of function names GCC inlines automatically, a missing math_private.h include where one is appropriate will reliably cause a build failure rather than possibly causing code to be less well optimized while still building successfully. Thus, this patch removes includes of math_private.h that are now unnecessary. In the case of two RISC-V files, the include is replaced by one of stdbool.h because the files in question were relying on math_private.h to get a definition of bool. Tested for x86_64 and x86, and with build-many-glibcs.py. * math/fromfp.h: Do not include <math_private.h>. * math/s_cacosh_template.c: Likewise. * math/s_casin_template.c: Likewise. * math/s_casinh_template.c: Likewise. * math/s_ccos_template.c: Likewise. * math/s_cproj_template.c: Likewise. * math/s_fdim_template.c: Likewise. * math/s_fmaxmag_template.c: Likewise. * math/s_fminmag_template.c: Likewise. * math/s_iseqsig_template.c: Likewise. * math/s_ldexp_template.c: Likewise. * math/s_nextdown_template.c: Likewise. * math/w_log1p_template.c: Likewise. * math/w_scalbln_template.c: Likewise. * sysdeps/aarch64/fpu/feholdexcpt.c: Likewise. * sysdeps/aarch64/fpu/fesetround.c: Likewise. * sysdeps/aarch64/fpu/fgetexcptflg.c: Likewise. * sysdeps/aarch64/fpu/ftestexcept.c: Likewise. * sysdeps/aarch64/fpu/s_llrint.c: Likewise. * sysdeps/aarch64/fpu/s_llrintf.c: Likewise. * sysdeps/aarch64/fpu/s_lrint.c: Likewise. * sysdeps/aarch64/fpu/s_lrintf.c: Likewise. * sysdeps/i386/fpu/s_atanl.c: Likewise. * sysdeps/i386/fpu/s_f32xaddf64.c: Likewise. * sysdeps/i386/fpu/s_f32xsubf64.c: Likewise. * sysdeps/i386/fpu/s_fdim.c: Likewise. * sysdeps/i386/fpu/s_logbl.c: Likewise. * sysdeps/i386/fpu/s_rintl.c: Likewise. * sysdeps/i386/fpu/s_significandl.c: Likewise. * sysdeps/ia64/fpu/s_matherrf.c: Likewise. * sysdeps/ia64/fpu/s_matherrl.c: Likewise. * sysdeps/ieee754/dbl-64/s_atan.c: Likewise. * sysdeps/ieee754/dbl-64/s_cbrt.c: Likewise. * sysdeps/ieee754/dbl-64/s_fma.c: Likewise. * sysdeps/ieee754/dbl-64/s_fmaf.c: Likewise. * sysdeps/ieee754/flt-32/s_cbrtf.c: Likewise. * sysdeps/ieee754/k_standardf.c: Likewise. * sysdeps/ieee754/k_standardl.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_copysignl.c: Likewise. * sysdeps/ieee754/ldbl-64-128/s_finitel.c: Likewise. * sysdeps/ieee754/ldbl-64-128/s_fpclassifyl.c: Likewise. * sysdeps/ieee754/ldbl-64-128/s_isinfl.c: Likewise. * sysdeps/ieee754/ldbl-64-128/s_isnanl.c: Likewise. * sysdeps/ieee754/ldbl-64-128/s_signbitl.c: Likewise. * sysdeps/ieee754/ldbl-96/s_cbrtl.c: Likewise. * sysdeps/ieee754/ldbl-96/s_fma.c: Likewise. * sysdeps/ieee754/ldbl-96/s_fmal.c: Likewise. * sysdeps/ieee754/s_signgam.c: Likewise. * sysdeps/powerpc/power5+/fpu/s_modf.c: Likewise. * sysdeps/powerpc/power5+/fpu/s_modff.c: Likewise. * sysdeps/powerpc/power7/fpu/s_logbf.c: Likewise. * sysdeps/riscv/rv64/rvd/s_ceil.c: Likewise. * sysdeps/riscv/rv64/rvd/s_floor.c: Likewise. * sysdeps/riscv/rv64/rvd/s_nearbyint.c: Likewise. * sysdeps/riscv/rv64/rvd/s_round.c: Likewise. * sysdeps/riscv/rv64/rvd/s_roundeven.c: Likewise. * sysdeps/riscv/rv64/rvd/s_trunc.c: Likewise. * sysdeps/riscv/rvd/s_finite.c: Likewise. * sysdeps/riscv/rvd/s_fmax.c: Likewise. * sysdeps/riscv/rvd/s_fmin.c: Likewise. * sysdeps/riscv/rvd/s_fpclassify.c: Likewise. 
* sysdeps/riscv/rvd/s_isinf.c: Likewise. * sysdeps/riscv/rvd/s_isnan.c: Likewise. * sysdeps/riscv/rvd/s_issignaling.c: Likewise. * sysdeps/riscv/rvf/fegetround.c: Likewise. * sysdeps/riscv/rvf/feholdexcpt.c: Likewise. * sysdeps/riscv/rvf/fesetenv.c: Likewise. * sysdeps/riscv/rvf/fesetround.c: Likewise. * sysdeps/riscv/rvf/feupdateenv.c: Likewise. * sysdeps/riscv/rvf/fgetexcptflg.c: Likewise. * sysdeps/riscv/rvf/ftestexcept.c: Likewise. * sysdeps/riscv/rvf/s_ceilf.c: Likewise. * sysdeps/riscv/rvf/s_finitef.c: Likewise. * sysdeps/riscv/rvf/s_floorf.c: Likewise. * sysdeps/riscv/rvf/s_fmaxf.c: Likewise. * sysdeps/riscv/rvf/s_fminf.c: Likewise. * sysdeps/riscv/rvf/s_fpclassifyf.c: Likewise. * sysdeps/riscv/rvf/s_isinff.c: Likewise. * sysdeps/riscv/rvf/s_isnanf.c: Likewise. * sysdeps/riscv/rvf/s_issignalingf.c: Likewise. * sysdeps/riscv/rvf/s_nearbyintf.c: Likewise. * sysdeps/riscv/rvf/s_roundevenf.c: Likewise. * sysdeps/riscv/rvf/s_roundf.c: Likewise. * sysdeps/riscv/rvf/s_truncf.c: Likewise. * sysdeps/riscv/rv64/rvd/s_rint.c: Include <stdbool.h> instead of <math_private.h>. * sysdeps/riscv/rvf/s_rintf.c: Likewise. |

Joseph Myers | 9755bc4686
Use round functions not __round functions in glibc libm.
Continuing the move to use, within libm, public names for libm functions that can be inlined as built-in functions on many architectures, this patch moves calls to __round functions to call the corresponding round names instead, with asm redirection to __round when the calls are not inlined. An additional complication arises in sysdeps/ieee754/ldbl-128ibm/e_expl.c, where a call to roundl, with the result converted to int, gets converted by the compiler to call lroundl in the case of 32-bit long, so resulting in localplt test failures. It's logically correct to let the compiler make such an optimization; an appropriate asm redirection of lroundl to __lroundl is thus added to that file (it's not needed anywhere else). Tested for x86_64, and with build-many-glibcs.py. * include/math.h [!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (round): Redirect using MATH_REDIRECT. * sysdeps/aarch64/fpu/s_round.c: Define NO_MATH_REDIRECT before header inclusion. * sysdeps/aarch64/fpu/s_roundf.c: Likewise. * sysdeps/ieee754/dbl-64/s_round.c: Likewise. * sysdeps/ieee754/dbl-64/wordsize-64/s_round.c: Likewise. * sysdeps/ieee754/float128/s_roundf128.c: Likewise. * sysdeps/ieee754/flt-32/s_roundf.c: Likewise. * sysdeps/ieee754/ldbl-128/s_roundl.c: Likewise. * sysdeps/ieee754/ldbl-96/s_roundl.c: Likewise. * sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_round.c: Likewise. * sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_roundf.c: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_round.c: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_roundf.c: Likewise. * sysdeps/riscv/rv64/rvd/s_round.c: Likewise. * sysdeps/riscv/rvf/s_roundf.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_roundl.c: Likewise. (round): Redirect to __round. (__roundl): Call round instead of __round. * sysdeps/powerpc/fpu/math_private.h [_ARCH_PWR5X] (__round): Remove macro. [_ARCH_PWR5X] (__roundf): Likewise. * sysdeps/ieee754/dbl-64/e_gamma_r.c (gamma_positive): Use round functions instead of __round variants. * sysdeps/ieee754/flt-32/e_gammaf_r.c (gammaf_positive): Likewise. * sysdeps/ieee754/ldbl-128/e_gammal_r.c (gammal_positive): Likewise. * sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c (gammal_positive): Likewise. * sysdeps/ieee754/ldbl-96/e_gammal_r.c (gammal_positive): Likewise. * sysdeps/x86/fpu/powl_helper.c (__powl_helper): Likewise. * sysdeps/ieee754/ldbl-128ibm/e_expl.c (lroundl): Redirect to __lroundl. (__ieee754_expl): Call roundl instead of __roundl. |
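The redirection mechanism itself is small. A sketch of the idea, simplified from the MATH_REDIRECT macro that generates such declarations for each type:

```c
/* Inside libm the public name is declared with an asm alias, so any
   call that GCC does not inline as a built-in assembles directly to
   a call to __round.  Files that define round itself define
   NO_MATH_REDIRECT first so the definition is not redirected.  */
#ifndef NO_MATH_REDIRECT
extern double round (double) __asm__ ("__round");
#endif

double
use_round (double x)
{
  /* Inlined to a single instruction (e.g. AArch64 frinta) where the
     target allows; otherwise this becomes a call to __round.  */
  return round (x);
}
```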

Joseph Myers | 7abf97bed9
Use trunc functions not __trunc functions in glibc libm.
Continuing the move to use, within libm, public names for libm functions that can be inlined as built-in functions on many architectures, this patch moves calls to __trunc functions to call the corresponding trunc names instead, with asm redirection to __trunc when the calls are not inlined. Tested for x86_64, and with build-many-glibcs.py. * include/math.h [!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (trunc): Redirect using MATH_REDIRECT. * sysdeps/aarch64/fpu/s_trunc.c: Define NO_MATH_REDIRECT before header inclusion. * sysdeps/aarch64/fpu/s_truncf.c: Likewise. * sysdeps/ieee754/dbl-64/wordsize-64/s_trunc.c: Likewise. * sysdeps/ieee754/float128/s_truncf128.c: Likewise. * sysdeps/ieee754/dbl-64/s_trunc.c: Likewise. * sysdeps/ieee754/flt-32/s_truncf.c: Likewise. * sysdeps/ieee754/ldbl-128/s_truncl.c: Likewise. * sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_trunc.c: Likewise. * sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_truncf.c: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_trunc.c: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_truncf.c: Likewise. * sysdeps/riscv/rv64/rvd/s_trunc.c: Likewise. * sysdeps/riscv/rvf/s_truncf.c: Likewise. * sysdeps/sparc/sparc64/fpu/multiarch/s_trunc.c: Likewise. * sysdeps/sparc/sparc64/fpu/multiarch/s_truncf.c: Likewise. * sysdeps/x86_64/fpu/multiarch/s_trunc.c: Likewise. * sysdeps/x86_64/fpu/multiarch/s_truncf.c: Likewise. * sysdeps/m68k/m680x0/fpu/s_trunc_template.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_truncl.c: Likewise. (ceil): Redirect to __ceil. (floor): Redirect to __floor. (trunc): Redirect to __trunc. (__truncl): Call trunc instead of __trunc. * sysdeps/powerpc/fpu/math_private.h [_ARCH_PWR5X] (__trunc): Remove macro. [_ARCH_PWR5X] (__truncf): Likewise. * sysdeps/ieee754/dbl-64/e_gamma_r.c (__ieee754_gamma_r): Use trunc functions instead of __trunc variants. * sysdeps/ieee754/flt-32/e_gammaf_r.c (__ieee754_gammaf_r): Likewise. * sysdeps/ieee754/ldbl-128/e_gammal_r.c (__ieee754_gammal_r): Likewise. * sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c (__ieee754_gammal_r): Likewise. * sysdeps/ieee754/ldbl-96/e_gammal_r.c (__ieee754_gammal_r): Likewise. |

Joseph Myers | 71223ef909
Use ceil functions not __ceil functions in glibc libm.
Continuing the move to use, within libm, public names for libm functions that can be inlined as built-in functions on many architectures, this patch moves calls to __ceil functions to call the corresponding ceil names instead, with asm redirection to __ceil when the calls are not inlined. Tested for x86_64, and with build-many-glibcs.py. * include/math.h [!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (ceil): Redirect using MATH_REDIRECT. * sysdeps/aarch64/fpu/s_ceil.c: Define NO_MATH_REDIRECT before header inclusion. * sysdeps/aarch64/fpu/s_ceilf.c: Likewise. * sysdeps/ieee754/dbl-64/s_ceil.c: Likewise. * sysdeps/ieee754/dbl-64/wordsize-64/s_ceil.c: Likewise. * sysdeps/ieee754/float128/s_ceilf128.c: Likewise. * sysdeps/ieee754/flt-32/s_ceilf.c: Likewise. * sysdeps/ieee754/ldbl-128/s_ceill.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_ceill.c: Likewise. * sysdeps/m68k/m680x0/fpu/s_ceil_template.c: Likewise. * sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_ceil.c: Likewise. * sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_ceilf.c: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_ceil.c: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_ceilf.c: Likewise. * sysdeps/riscv/rv64/rvd/s_ceil.c: Likewise. * sysdeps/riscv/rvf/s_ceilf.c: Likewise. * sysdeps/sparc/sparc64/fpu/multiarch/s_ceil.c: Likewise. * sysdeps/sparc/sparc64/fpu/multiarch/s_ceilf.c: Likewise. * sysdeps/x86_64/fpu/multiarch/s_ceil.c: Likewise. * sysdeps/x86_64/fpu/multiarch/s_ceilf.c: Likewise. * sysdeps/powerpc/fpu/math_private.h [_ARCH_PWR5X] (__ceil): Remove macro. * sysdeps/ieee754/dbl-64/e_gamma_r.c (gamma_positive): Use ceil functions instead of __ceil variants. * sysdeps/ieee754/flt-32/e_gammaf_r.c (gammaf_positive): Likewise. * sysdeps/ieee754/ldbl-128/e_gammal_r.c (gammal_positive): Likewise. * sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c (gammal_positive): Likewise. * sysdeps/ieee754/ldbl-128ibm/s_truncl.c (__truncl): Likewise. * sysdeps/ieee754/ldbl-96/e_gammal_r.c (gammal_positive): Likewise. * sysdeps/powerpc/power5+/fpu/s_modf.c (__modf): Likewise. * sysdeps/powerpc/power5+/fpu/s_modff.c (__modff): Likewise. |

Joseph Myers | f29b6f17e4
Use rint functions not __rint functions in glibc libm.
Continuing the move to use, within libm, public names for libm functions that can be inlined as built-in functions on many architectures, this patch moves calls to __rint functions to call the corresponding rint names instead, with asm redirection to __rint when the calls are not inlined. The x86_64 math_private.h is removed as no longer useful after this patch. This patch is relative to a tree with my floor patch <https://sourceware.org/ml/libc-alpha/2018-09/msg00148.html> applied, and much the same considerations arise regarding possibly replacing an IFUNC call with a direct inline expansion. Tested for x86_64, and with build-many-glibcs.py. * include/math.h [!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (rint): Redirect using MATH_REDIRECT. * sysdeps/aarch64/fpu/s_rint.c: Define NO_MATH_REDIRECT before header inclusion. * sysdeps/aarch64/fpu/s_rintf.c: Likewise. * sysdeps/alpha/fpu/s_rint.c: Likewise. * sysdeps/alpha/fpu/s_rintf.c: Likewise. * sysdeps/i386/fpu/s_rintl.c: Likewise. * sysdeps/ieee754/dbl-64/s_rint.c: Likewise. * sysdeps/ieee754/dbl-64/wordsize-64/s_rint.c: Likewise. * sysdeps/ieee754/float128/s_rintf128.c: Likewise. * sysdeps/ieee754/flt-32/s_rintf.c: Likewise. * sysdeps/ieee754/ldbl-128/s_rintl.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_rintl.c: Likewise. * sysdeps/m68k/coldfire/fpu/s_rint.c: Likewise. * sysdeps/m68k/coldfire/fpu/s_rintf.c: Likewise. * sysdeps/m68k/m680x0/fpu/s_rint.c: Likewise. * sysdeps/m68k/m680x0/fpu/s_rintf.c: Likewise. * sysdeps/m68k/m680x0/fpu/s_rintl.c: Likewise. * sysdeps/powerpc/fpu/s_rint.c: Likewise. * sysdeps/powerpc/fpu/s_rintf.c: Likewise. * sysdeps/riscv/rv64/rvd/s_rint.c: Likewise. * sysdeps/riscv/rvf/s_rintf.c: Likewise. * sysdeps/sparc/sparc32/sparcv9/fpu/multiarch/s_rint.c: Likewise. * sysdeps/sparc/sparc32/sparcv9/fpu/multiarch/s_rintf.c: Likewise. * sysdeps/sparc/sparc64/fpu/multiarch/s_rint.c: Likewise. * sysdeps/sparc/sparc64/fpu/multiarch/s_rintf.c: Likewise. * sysdeps/x86_64/fpu/multiarch/s_rint.c: Likewise. * sysdeps/x86_64/fpu/multiarch/s_rintf.c: Likewise. * sysdeps/x86_64/fpu/math_private.h: Remove file. * math/e_scalb.c (invalid_fn): Use rint functions instead of __rint variants. * math/e_scalbf.c (invalid_fn): Likewise. * math/e_scalbl.c (invalid_fn): Likewise. * sysdeps/ieee754/dbl-64/e_gamma_r.c (__ieee754_gamma_r): Likewise. * sysdeps/ieee754/flt-32/e_gammaf_r.c (__ieee754_gammaf_r): Likewise. * sysdeps/ieee754/k_standard.c (__kernel_standard): Likewise. * sysdeps/ieee754/k_standardl.c (__kernel_standard_l): Likewise. * sysdeps/ieee754/ldbl-128/e_gammal_r.c (__ieee754_gammal_r): Likewise. * sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c (__ieee754_gammal_r): Likewise. * sysdeps/ieee754/ldbl-96/e_gammal_r.c (__ieee754_gammal_r): Likewise. * sysdeps/powerpc/powerpc32/fpu/s_llrint.c (__llrint): Likewise. * sysdeps/powerpc/powerpc32/fpu/s_llrintf.c (__llrintf): Likewise. |

Joseph Myers | e44acb2063
Use floor functions not __floor functions in glibc libm.
Similar to the changes that were made to call sqrt functions directly in glibc, instead of __ieee754_sqrt variants, so that the compiler could inline them automatically without needing special inline definitions in lots of math_private.h headers, this patch makes libm code call floor functions directly instead of __floor variants, removing the inlines / macros for x86_64 (SSE4.1) and powerpc (POWER5). The redirection used to ensure that __ieee754_sqrt does still get called when the compiler doesn't inline a built-in function expansion is refactored so it can be applied to other functions; the refactoring is arranged so it's not limited to unary functions either (it would be reasonable to use this mechanism for copysign - removing the inline in math_private_calls.h but also eliminating unnecessary local PLT entry use in the cases (powerpc soft-float and e500v1, for IBM long double) where copysign calls don't get inlined). The point of this change is that more architectures can get floor calls inlined where they weren't previously (AArch64, for example), without needing special inline definitions in their math_private.h, and existing such definitions in math_private.h headers can be removed. Note that it's possible that in some cases an inline may be used where an IFUNC call was previously used - this is the case on x86_64, for example. I think the direct calls to floor are still appropriate; if there's any significant performance cost from inline SSE2 floor instead of an IFUNC call ending up with SSE4.1 floor, that indicates that either the function should be doing something else that's faster than using floor at all, or it should itself have IFUNC variants, or that the compiler choice of inlining for generic tuning should change to allow for the possibility that, by not inlining, an SSE4.1 IFUNC might be called at runtime - but not that glibc should avoid calling floor internally. (After all, all the same considerations would apply to any user program calling floor, where it might either be inlined or left as an out-of-line call allowing for a possible IFUNC.) Tested for x86_64, and with build-many-glibcs.py. * include/math.h [!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (MATH_REDIRECT): New macro. [!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (MATH_REDIRECT_LDBL): Likewise. [!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (MATH_REDIRECT_F128): Likewise. [!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (MATH_REDIRECT_UNARY_ARGS): Likewise. [!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (sqrt): Redirect using MATH_REDIRECT. [!_ISOMAC && !(__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0) && !NO_MATH_REDIRECT] (floor): Likewise. * sysdeps/aarch64/fpu/s_floor.c: Define NO_MATH_REDIRECT before header inclusion. * sysdeps/aarch64/fpu/s_floorf.c: Likewise. * sysdeps/ieee754/dbl-64/s_floor.c: Likewise. * sysdeps/ieee754/dbl-64/wordsize-64/s_floor.c: Likewise. * sysdeps/ieee754/float128/s_floorf128.c: Likewise. * sysdeps/ieee754/flt-32/s_floorf.c: Likewise. * sysdeps/ieee754/ldbl-128/s_floorl.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_floorl.c: Likewise. * sysdeps/m68k/m680x0/fpu/s_floor_template.c: Likewise. * sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_floor.c: Likewise. * sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_floorf.c: Likewise. 
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_floor.c: Likewise. * sysdeps/powerpc/powerpc64/fpu/multiarch/s_floorf.c: Likewise. * sysdeps/riscv/rv64/rvd/s_floor.c: Likewise. * sysdeps/riscv/rvf/s_floorf.c: Likewise. * sysdeps/sparc/sparc64/fpu/multiarch/s_floor.c: Likewise. * sysdeps/sparc/sparc64/fpu/multiarch/s_floorf.c: Likewise. * sysdeps/x86_64/fpu/multiarch/s_floor.c: Likewise. * sysdeps/x86_64/fpu/multiarch/s_floorf.c: Likewise. * sysdeps/powerpc/fpu/math_private.h [_ARCH_PWR5X] (__floor): Remove macro. [_ARCH_PWR5X] (__floorf): Likewise. * sysdeps/x86_64/fpu/math_private.h [__SSE4_1__] (__floor): Remove inline function. [__SSE4_1__] (__floorf): Likewise. * math/w_lgamma_main.c (LGFUNC (__lgamma)): Use floor functions instead of __floor variants. * math/w_lgamma_r_compat.c (__lgamma_r): Likewise. * math/w_lgammaf_main.c (LGFUNC (__lgammaf)): Likewise. * math/w_lgammaf_r_compat.c (__lgammaf_r): Likewise. * math/w_lgammal_main.c (LGFUNC (__lgammal)): Likewise. * math/w_lgammal_r_compat.c (__lgammal_r): Likewise. * math/w_tgamma_compat.c (__tgamma): Likewise. * math/w_tgamma_template.c (M_DECL_FUNC (__tgamma)): Likewise. * math/w_tgammaf_compat.c (__tgammaf): Likewise. * math/w_tgammal_compat.c (__tgammal): Likewise. * sysdeps/ieee754/dbl-64/e_lgamma_r.c (sin_pi): Likewise. * sysdeps/ieee754/dbl-64/k_rem_pio2.c (__kernel_rem_pio2): Likewise. * sysdeps/ieee754/dbl-64/lgamma_neg.c (__lgamma_neg): Likewise. * sysdeps/ieee754/flt-32/e_lgammaf_r.c (sin_pif): Likewise. * sysdeps/ieee754/flt-32/lgamma_negf.c (__lgamma_negf): Likewise. * sysdeps/ieee754/ldbl-128/e_lgammal_r.c (__ieee754_lgammal_r): Likewise. * sysdeps/ieee754/ldbl-128/e_powl.c (__ieee754_powl): Likewise. * sysdeps/ieee754/ldbl-128/lgamma_negl.c (__lgamma_negl): Likewise. * sysdeps/ieee754/ldbl-128/s_expm1l.c (__expm1l): Likewise. * sysdeps/ieee754/ldbl-128ibm/e_lgammal_r.c (__ieee754_lgammal_r): Likewise. * sysdeps/ieee754/ldbl-128ibm/e_powl.c (__ieee754_powl): Likewise. * sysdeps/ieee754/ldbl-128ibm/lgamma_negl.c (__lgamma_negl): Likewise. * sysdeps/ieee754/ldbl-128ibm/s_expm1l.c (__expm1l): Likewise. * sysdeps/ieee754/ldbl-128ibm/s_truncl.c (__truncl): Likewise. * sysdeps/ieee754/ldbl-96/e_lgammal_r.c (sin_pi): Likewise. * sysdeps/ieee754/ldbl-96/lgamma_negl.c (__lgamma_negl): Likewise. * sysdeps/powerpc/power5+/fpu/s_modf.c (__modf): Likewise. * sysdeps/powerpc/power5+/fpu/s_modff.c (__modff): Likewise. |

Szabolcs Nagy | e70c176825
Add new exp and exp2 implementations
Optimized exp and exp2 implementations using a lookup table for fractional powers of 2. There are several variants (see e_exp_data.c); they can be selected by modifying math_config.h, allowing different tradeoffs. The default selection should be acceptable as generic libm code. Worst-case error is 0.509 ULP for exp and 0.507 ULP for exp2; on aarch64 the rodata size is 2160 bytes, shared between exp and exp2. On aarch64, .text + .rodata size decreased by 24912 bytes. The non-nearest rounding error is less than 1 ULP even on targets without an efficient round implementation (although the error rate is higher in that case). Targets with a single-instruction, rounding-mode-independent, to-nearest integer rounding and conversion can use it by setting TOINT_INTRINSICS and adding the necessary code to their math_private.h. The __exp1 code uses the same algorithm, so the error bound of pow increased a bit. New double-precision error handling code was added, following the style of the single-precision error handling code. Improvements on Cortex-A72 compared to current glibc master:
exp throughput: 1.61x in [-9.9 9.9]
exp latency: 1.53x in [-9.9 9.9]
exp throughput: 1.13x in [0.5 1]
exp latency: 1.30x in [0.5 1]
exp2 throughput: 2.03x in [-9.9 9.9]
exp2 latency: 1.64x in [-9.9 9.9]
For small (< 1) inputs the current exp code uses a separate algorithm, so the speedup there is smaller. Tested on aarch64-linux-gnu (TOINT_INTRINSICS, fma contraction), arm-linux-gnueabihf (!TOINT_INTRINSICS, no fma contraction), x86_64-linux-gnu (!TOINT_INTRINSICS, no fma contraction) and powerpc64le-linux-gnu (!TOINT_INTRINSICS, fma contraction) targets; only non-nearest rounding ulp errors increase, and they are within acceptable bounds (ulp updates are in separate patches).
* NEWS: Mention exp and exp2 improvements.
* math/Makefile (libm-support): Remove t_exp. (type-double-routines): Add math_err and e_exp_data.
* sysdeps/aarch64/libm-test-ulps: Update.
* sysdeps/arm/libm-test-ulps: Update.
* sysdeps/i386/fpu/e_exp_data.c: New file.
* sysdeps/i386/fpu/math_err.c: New file.
* sysdeps/i386/fpu/t_exp.c: Remove.
* sysdeps/ia64/fpu/e_exp_data.c: New file.
* sysdeps/ia64/fpu/math_err.c: New file.
* sysdeps/ia64/fpu/t_exp.c: Remove.
* sysdeps/ieee754/dbl-64/e_exp.c: Rewrite.
* sysdeps/ieee754/dbl-64/e_exp2.c: Rewrite.
* sysdeps/ieee754/dbl-64/e_exp_data.c: New file.
* sysdeps/ieee754/dbl-64/e_pow.c (__ieee754_pow): Update error bound.
* sysdeps/ieee754/dbl-64/eexp.tbl: Remove.
* sysdeps/ieee754/dbl-64/math_config.h: New file.
* sysdeps/ieee754/dbl-64/math_err.c: New file.
* sysdeps/ieee754/dbl-64/t_exp.c: Remove.
* sysdeps/ieee754/dbl-64/t_exp2.h: Remove.
* sysdeps/ieee754/dbl-64/uexp.h: Remove.
* sysdeps/ieee754/dbl-64/uexp.tbl: Remove.
* sysdeps/m68k/m680x0/fpu/e_exp_data.c: New file.
* sysdeps/m68k/m680x0/fpu/math_err.c: New file.
* sysdeps/m68k/m680x0/fpu/t_exp.c: Remove.
* sysdeps/powerpc/fpu/libm-test-ulps: Update.
* sysdeps/x86_64/fpu/libm-test-ulps: Update.
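To make the structure concrete, here is a hedged sketch of the table-based reduction. N, the Taylor polynomial, and the table build (which leans on libm's own exp) are illustrative stand-ins, not glibc's tuned minimax values.

```c
#include <math.h>
#include <stdint.h>

/* Write x = (ki/N)*ln2 + r with |r| <= ln2/(2N) ~ 0.0027, so
   exp(x) = 2^(e) * tab[idx] * exp(r) where ki = e*N + idx.  */
#define N 128
static double tab[N];

/* Call once before exp_sketch.  */
static void
init_tab (void)
{
  for (int i = 0; i < N; i++)
    tab[i] = exp (i * (M_LN2 / N));   /* 2^(i/N), for illustration.  */
}

static double
exp_sketch (double x)       /* Assumes x in the normal exp domain.  */
{
  double k = round (x * (N / M_LN2));  /* Nearest multiple of ln2/N.  */
  double r = x - k * (M_LN2 / N);
  long ki = (long) k;

  /* Short Taylor polynomial is plenty at this |r| for a sketch.  */
  double poly = 1 + r + r * r / 2 + r * r * r / 6;

  long e = ki / N, idx = ki % N;
  if (idx < 0)                        /* Keep the index in [0, N).  */
    {
      idx += N;
      e -= 1;
    }
  return scalbn (tab[idx] * poly, (int) e);   /* Apply 2^e.  */
}
```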

Joseph Myers | 70e2ba332f
Do not include fenv_private.h in math_private.h.
Continuing the clean-up related to the catch-all math_private.h header, this patch stops math_private.h from including fenv_private.h. Instead, fenv_private.h is included directly from those users of math_private.h that also used interfaces from fenv_private.h. No attempt is made to remove unused includes of math_private.h, but that is a natural followup. (However, since math_private.h sometimes defines optimized versions of math.h interfaces or __* variants thereof, as well as defining its own interfaces, I think it might make sense to get all those optimized versions included from include/math.h, not requiring a separate header at all, before eliminating unused math_private.h includes - that avoids a file quietly becoming less-optimized if someone adds a call to one of those interfaces without restoring a math_private.h include to that file.) There is still a pitfall that if code uses plain fe* and __fe* interfaces, but only includes fenv.h and not fenv_private.h or (before this patch) math_private.h, it will compile on platforms with exceptions and rounding modes but not get the optimized versions (and possibly not compile) on platforms without exception and rounding mode support, so making it easy to break the build for such platforms accidentally. I think it would be most natural to move the inlines / macros for fe* and __fe* in the case of no exceptions and rounding modes into include/fenv.h, so that all code including fenv.h with _ISOMAC not defined automatically gets them. Then fenv_private.h would be purely the header for the libc_fe*, SET_RESTORE_ROUND etc. internal interfaces and the risk of breaking the build on other platforms than the one you tested on because of a missing fenv_private.h include would be much reduced (and there would be some unused fenv_private.h includes to remove along with unused math_private.h includes). Tested for x86_64 and x86, and tested with build-many-glibcs.py that installed stripped shared libraries are unchanged by this patch. * sysdeps/generic/math_private.h: Do not include <fenv_private.h>. * math/fromfp.h: Include <fenv_private.h>. * math/math-narrow.h: Likewise. * math/s_cexp_template.c: Likewise. * math/s_csin_template.c: Likewise. * math/s_csinh_template.c: Likewise. * math/s_ctan_template.c: Likewise. * math/s_ctanh_template.c: Likewise. * math/s_iseqsig_template.c: Likewise. * math/w_acos_compat.c: Likewise. * math/w_acosf_compat.c: Likewise. * math/w_acosl_compat.c: Likewise. * math/w_asin_compat.c: Likewise. * math/w_asinf_compat.c: Likewise. * math/w_asinl_compat.c: Likewise. * math/w_ilogb_template.c: Likewise. * math/w_j0_compat.c: Likewise. * math/w_j0f_compat.c: Likewise. * math/w_j0l_compat.c: Likewise. * math/w_j1_compat.c: Likewise. * math/w_j1f_compat.c: Likewise. * math/w_j1l_compat.c: Likewise. * math/w_jn_compat.c: Likewise. * math/w_jnf_compat.c: Likewise. * math/w_llogb_template.c: Likewise. * math/w_log10_compat.c: Likewise. * math/w_log10f_compat.c: Likewise. * math/w_log10l_compat.c: Likewise. * math/w_log2_compat.c: Likewise. * math/w_log2f_compat.c: Likewise. * math/w_log2l_compat.c: Likewise. * math/w_log_compat.c: Likewise. * math/w_logf_compat.c: Likewise. * math/w_logl_compat.c: Likewise. * sysdeps/aarch64/fpu/feholdexcpt.c: Likewise. * sysdeps/aarch64/fpu/fesetround.c: Likewise. * sysdeps/aarch64/fpu/fgetexcptflg.c: Likewise. * sysdeps/aarch64/fpu/ftestexcept.c: Likewise. * sysdeps/ieee754/dbl-64/e_atan2.c: Likewise. * sysdeps/ieee754/dbl-64/e_exp.c: Likewise. * sysdeps/ieee754/dbl-64/e_exp2.c: Likewise. 
* sysdeps/ieee754/dbl-64/e_gamma_r.c: Likewise. * sysdeps/ieee754/dbl-64/e_jn.c: Likewise. * sysdeps/ieee754/dbl-64/e_pow.c: Likewise. * sysdeps/ieee754/dbl-64/e_remainder.c: Likewise. * sysdeps/ieee754/dbl-64/e_sqrt.c: Likewise. * sysdeps/ieee754/dbl-64/gamma_product.c: Likewise. * sysdeps/ieee754/dbl-64/lgamma_neg.c: Likewise. * sysdeps/ieee754/dbl-64/s_atan.c: Likewise. * sysdeps/ieee754/dbl-64/s_fma.c: Likewise. * sysdeps/ieee754/dbl-64/s_fmaf.c: Likewise. * sysdeps/ieee754/dbl-64/s_llrint.c: Likewise. * sysdeps/ieee754/dbl-64/s_llround.c: Likewise. * sysdeps/ieee754/dbl-64/s_lrint.c: Likewise. * sysdeps/ieee754/dbl-64/s_lround.c: Likewise. * sysdeps/ieee754/dbl-64/s_nearbyint.c: Likewise. * sysdeps/ieee754/dbl-64/s_sin.c: Likewise. * sysdeps/ieee754/dbl-64/s_sincos.c: Likewise. * sysdeps/ieee754/dbl-64/s_tan.c: Likewise. * sysdeps/ieee754/dbl-64/wordsize-64/s_lround.c: Likewise. * sysdeps/ieee754/dbl-64/wordsize-64/s_nearbyint.c: Likewise. * sysdeps/ieee754/dbl-64/x2y2m1.c: Likewise. * sysdeps/ieee754/float128/float128_private.h: Likewise. * sysdeps/ieee754/flt-32/e_gammaf_r.c: Likewise. * sysdeps/ieee754/flt-32/e_j1f.c: Likewise. * sysdeps/ieee754/flt-32/e_jnf.c: Likewise. * sysdeps/ieee754/flt-32/lgamma_negf.c: Likewise. * sysdeps/ieee754/flt-32/s_llrintf.c: Likewise. * sysdeps/ieee754/flt-32/s_llroundf.c: Likewise. * sysdeps/ieee754/flt-32/s_lrintf.c: Likewise. * sysdeps/ieee754/flt-32/s_lroundf.c: Likewise. * sysdeps/ieee754/flt-32/s_nearbyintf.c: Likewise. * sysdeps/ieee754/k_standardl.c: Likewise. * sysdeps/ieee754/ldbl-128/e_expl.c: Likewise. * sysdeps/ieee754/ldbl-128/e_gammal_r.c: Likewise. * sysdeps/ieee754/ldbl-128/e_j1l.c: Likewise. * sysdeps/ieee754/ldbl-128/e_jnl.c: Likewise. * sysdeps/ieee754/ldbl-128/gamma_productl.c: Likewise. * sysdeps/ieee754/ldbl-128/lgamma_negl.c: Likewise. * sysdeps/ieee754/ldbl-128/s_fmal.c: Likewise. * sysdeps/ieee754/ldbl-128/s_llrintl.c: Likewise. * sysdeps/ieee754/ldbl-128/s_llroundl.c: Likewise. * sysdeps/ieee754/ldbl-128/s_lrintl.c: Likewise. * sysdeps/ieee754/ldbl-128/s_lroundl.c: Likewise. * sysdeps/ieee754/ldbl-128/s_nearbyintl.c: Likewise. * sysdeps/ieee754/ldbl-128/x2y2m1l.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/e_expl.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/e_gammal_r.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/e_j1l.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/e_jnl.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/lgamma_negl.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_fmal.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_llrintl.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_llroundl.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_lrintl.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_lroundl.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_rintl.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/x2y2m1l.c: Likewise. * sysdeps/ieee754/ldbl-96/e_gammal_r.c: Likewise. * sysdeps/ieee754/ldbl-96/e_jnl.c: Likewise. * sysdeps/ieee754/ldbl-96/gamma_productl.c: Likewise. * sysdeps/ieee754/ldbl-96/lgamma_negl.c: Likewise. * sysdeps/ieee754/ldbl-96/s_fma.c: Likewise. * sysdeps/ieee754/ldbl-96/s_fmal.c: Likewise. * sysdeps/ieee754/ldbl-96/s_llrintl.c: Likewise. * sysdeps/ieee754/ldbl-96/s_llroundl.c: Likewise. * sysdeps/ieee754/ldbl-96/s_lrintl.c: Likewise. * sysdeps/ieee754/ldbl-96/s_lroundl.c: Likewise. * sysdeps/ieee754/ldbl-96/x2y2m1l.c: Likewise. * sysdeps/powerpc/fpu/e_sqrt.c: Likewise. * sysdeps/powerpc/fpu/e_sqrtf.c: Likewise. * sysdeps/riscv/rv64/rvd/s_ceil.c: Likewise. * sysdeps/riscv/rv64/rvd/s_floor.c: Likewise. 
* sysdeps/riscv/rv64/rvd/s_nearbyint.c: Likewise. * sysdeps/riscv/rv64/rvd/s_round.c: Likewise. * sysdeps/riscv/rv64/rvd/s_roundeven.c: Likewise. * sysdeps/riscv/rv64/rvd/s_trunc.c: Likewise. * sysdeps/riscv/rvd/s_finite.c: Likewise. * sysdeps/riscv/rvd/s_fmax.c: Likewise. * sysdeps/riscv/rvd/s_fmin.c: Likewise. * sysdeps/riscv/rvd/s_fpclassify.c: Likewise. * sysdeps/riscv/rvd/s_isinf.c: Likewise. * sysdeps/riscv/rvd/s_isnan.c: Likewise. * sysdeps/riscv/rvd/s_issignaling.c: Likewise. * sysdeps/riscv/rvf/fegetround.c: Likewise. * sysdeps/riscv/rvf/feholdexcpt.c: Likewise. * sysdeps/riscv/rvf/fesetenv.c: Likewise. * sysdeps/riscv/rvf/fesetround.c: Likewise. * sysdeps/riscv/rvf/feupdateenv.c: Likewise. * sysdeps/riscv/rvf/fgetexcptflg.c: Likewise. * sysdeps/riscv/rvf/ftestexcept.c: Likewise. * sysdeps/riscv/rvf/s_ceilf.c: Likewise. * sysdeps/riscv/rvf/s_finitef.c: Likewise. * sysdeps/riscv/rvf/s_floorf.c: Likewise. * sysdeps/riscv/rvf/s_fmaxf.c: Likewise. * sysdeps/riscv/rvf/s_fminf.c: Likewise. * sysdeps/riscv/rvf/s_fpclassifyf.c: Likewise. * sysdeps/riscv/rvf/s_isinff.c: Likewise. * sysdeps/riscv/rvf/s_isnanf.c: Likewise. * sysdeps/riscv/rvf/s_issignalingf.c: Likewise. * sysdeps/riscv/rvf/s_nearbyintf.c: Likewise. * sysdeps/riscv/rvf/s_roundevenf.c: Likewise. * sysdeps/riscv/rvf/s_roundf.c: Likewise. * sysdeps/riscv/rvf/s_truncf.c: Likewise. |

Paul Pluzhnikov | a6e8926f8d
[BZ #20271] Add newlines in __libc_fatal calls.

Joseph Myers | ff6b24501f
Split fenv_private.h out of math_private.h more consistently.
On some architectures, the parts of math_private.h relating to the floating-point environment are in a separate file fenv_private.h included from math_private.h. As this is purely an architecture-specific convention used by several architectures, however, all such architectures still need their own math_private.h, even if it has nothing to do beyond #include <fenv_private.h> and peculiarity of including the i386 file directly instead of having a shared file in sysdeps/x86. This patch makes the fenv_private.h name an architecture-independent convention in glibc. The include of fenv_private.h from math_private.h becomes architecture-independent (until callers are updated to include fenv_private.h directly so the include from math_private.h is no longer needed). Some architecture math_private.h headers are removed if no longer needed, or renamed to fenv_private.h if all they define belongs in that header; architecture fenv_private.h headers now do require #include_next <fenv_private.h>. The i386 fenv_private.h file moves to sysdeps/x86/fpu/ to reflect how it is actually shared with x86_64. The generic math_private.h gets a new include of <stdbool.h>, as needed for bool in some prototypes in that header (previously that was indirectly included via include/fenv.h, which now only gets included too late in math_private.h, after those prototypes). Tested for x86_64 and x86, and tested with build-many-glibcs.py that installed stripped shared libraries are unchanged by the patch. * sysdeps/aarch64/fpu/fenv_private.h: New file. Based on .... * sysdeps/aarch64/fpu/math_private.h: ... this file. All contents moved to fenv_private.h except for ... (TOINT_INTRINSICS): Kept in math_private.h. (roundtoint): Likewise. (converttoint): Likewise. * sysdeps/arm/fenv_private.h: Change multiple-include guard to [ARM_FENV_PRIVATE_H]. Include next <fenv_private.h>. * sysdeps/arm/math_private.h: Remove. * sysdeps/generic/fenv_private.h: New file. Contents moved from .... * sysdeps/generic/math_private.h: ... this file. Include <stdbool.h>. Do not include <fenv.h> or <get-rounding-mode.h>. Include <fenv_private.h>. Remove functions and macros moved to fenv_private.h. * sysdeps/i386/fpu/math_private.h: Remove. * sysdeps/mips/math_private.h: Move to .... * sysdeps/mips/fpu/fenv_private.h: ... here. Change multiple-include guard to [MIPS_FENV_PRIVATE_H]. Remove [__mips_hard_float] conditional. Include next <fenv_private.h>. * sysdeps/powerpc/fpu/fenv_private.h: Change multiple-include guard to [POWERPC_FENV_PRIVATE_H]. Include next <fenv_private.h>. * sysdeps/powerpc/fpu/math_private.h: Do not include <fenv_private.h>. * sysdeps/riscv/rvf/math_private.h: Move to .... * sysdeps/riscv/rvf/fenv_private.h: ... here. Change multiple-include guard to [RISCV_FENV_PRIVATE_H]. Include next <fenv_private.h>. * sysdeps/sparc/fpu/fenv_private.h: Change multiple-include guard to [SPARC_FENV_PRIVATE_H]. Include next <fenv_private.h>. * sysdeps/sparc/fpu/math_private.h: Remove. * sysdeps/i386/fpu/fenv_private.h: Move to .... * sysdeps/x86/fpu/fenv_private.h: ... here. Change multiple-include guard to [X86_FENV_PRIVATE_H]. Include next <fenv_private.h>. * sysdeps/x86_64/fpu/math_private.h: Do not include <sysdeps/i386/fpu/fenv_private.h>. |
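The convention looks roughly like this in an architecture's fenv_private.h (guard name and placement illustrative):

```c
/* Provide the architecture's optimized pieces, then chain to the
   generic header with #include_next, as this patch arranges.  */
#ifndef ARM_FENV_PRIVATE_H
#define ARM_FENV_PRIVATE_H 1

/* ... architecture-specific libc_fe* inlines and macros here ... */

#include_next <fenv_private.h>

#endif /* ARM_FENV_PRIVATE_H */
```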

Joseph Myers | 895ef79e04
Move EXCEPTION_ENABLE_SUPPORTED out of math-tests.h.
Continuing moving macros out of math-tests.h to smaller headers following typo-proof conventions instead of using #ifndef, this patch moves the EXCEPTION_ENABLE_SUPPORTED macro out to its own math-tests-trap.h header. Tested with build-many-glibcs.py. * sysdeps/generic/math-tests-trap.h: New file. * sysdeps/generic/math-tests.h: Include <math-tests-trap.h>. (EXCEPTION_ENABLE_SUPPORTED): Do not define here. * sysdeps/aarch64/math-tests.h: Remove file. * sysdeps/arm/math-tests.h: Likewise. * sysdeps/riscv/math-tests.h: Likewise. * sysdeps/aarch64/math-tests-trap.h: New file. * sysdeps/arm/math-tests-trap.h: Likewise. * sysdeps/riscv/math-tests-trap.h: Likewise. |

Siddhesh Poyarekar | 436e4d5b96
[aarch64] Add an ASIMD variant of strlen for falkor
This variant of strlen uses vector loads and operations to reduce the size of the code and also eliminate the non-ascii fallback. This works very well for falkor because of its two vector units and efficient vector ops. In the best case it reduces latency of cases in bench-strlen by 48%, with gains throughout the benchmark. strlen-walk also sees uniform gains in the 5%-15% range. Overall the routine appears to work better than the stock one for falkor regardless of the benchmark, length of string or cache state. The same cannot be said of a53 and a72 though. a53 performance was greatly reduced and for a72 it was a bit of a mixed bag, slightly on the negative side but I reckon it might be fast in some situations. * sysdeps/aarch64/strlen.S (__strlen): Rename to STRLEN. [!STRLEN](STRLEN): Set to __strlen. * sysdeps/aarch64/multiarch/strlen.c: New file. * sysdeps/aarch64/multiarch/strlen_generic.S: Likewise. * sysdeps/aarch64/multiarch/strlen_asimd.S: Likewise. * sysdeps/aarch64/multiarch/ifunc-impl-list.c (__libc_ifunc_impl_list): Add strlen. * sysdeps/aarch64/multiarch/Makefile (sysdep_routines): Add strlen_generic and strlen_asimd. Reviewed-By: szabolcs.nagy@arm.com CC: pinskia@gmail.com |
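A simplified intrinsics sketch of the vector approach (illustrative only; the real strlen_asimd is hand-written assembly with more careful first-block handling):

```c
#include <arm_neon.h>
#include <stddef.h>
#include <stdint.h>

/* Compare 16 bytes at a time against zero and scan the first block
   that contains a NUL.  Rounding p down to 16-byte alignment keeps
   every wide load inside pages the string touches, at the cost of
   possibly reading a few bytes before the string start in the first
   block.  */
static size_t
strlen_asimd_sketch (const char *s)
{
  const char *p = (const char *) ((uintptr_t) s & ~(uintptr_t) 15);
  for (;;)
    {
      uint8x16_t v = vld1q_u8 ((const uint8_t *) p);
      uint8x16_t eq = vceqq_u8 (v, vdupq_n_u8 (0));  /* 0xff where NUL.  */
      if (vmaxvq_u8 (eq) != 0)
        for (int i = 0; i < 16; i++)
          /* Ignore any NUL that lies before the string start.  */
          if (p[i] == '\0' && p + i >= s)
            return (size_t) (p + i - s);
      p += 16;
    }
}
```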

Wilco Dijkstra | 599cf39766
Improve performance of sinf and cosf
The second patch improves the performance of sinf and cosf using the same algorithms and polynomials. The returned values are identical to sincosf for the same input. ULP definitions for AArch64 and x64 are updated. sinf/cosf throughput gains on Cortex-A72:
* |x| < 0x1p-12 : 1.2x
* |x| < M_PI_4 : 1.8x
* |x| < 2 * M_PI: 1.7x
* |x| < 120.0 : 2.3x
* |x| < Inf : 3.0x
* NEWS: Mention sinf, cosf, sincosf.
* sysdeps/aarch64/libm-test-ulps: Update ULP for sinf, cosf, sincosf.
* sysdeps/x86_64/fpu/libm-test-ulps: Update ULP for sinf and cosf.
* sysdeps/x86_64/fpu/multiarch/s_sincosf-fma.c: Add definitions of constants rather than including generic sincosf.h.
* sysdeps/x86_64/fpu/s_sincosf_data.c: Remove.
* sysdeps/ieee754/flt-32/s_cosf.c (cosf): Rewrite.
* sysdeps/ieee754/flt-32/s_sincosf.h (reduced_sin): Remove. (reduced_cos): Remove. (sinf_poly): New function.
* sysdeps/ieee754/flt-32/s_sinf.c (sinf): Rewrite.

Szabolcs Nagy | 43cfdf8f48
Clean up converttoint handling and document the semantics
This patch currently only affects aarch64. The roundtoint and converttoint internal functions are only called with small values, so a 32-bit result is enough for converttoint, and since it is a signed int conversion the return type is changed to int32_t. The original idea was to help the compiler keep the result in uint64_t; then it is clear that no sign extension is needed and there is no accidental undefined or implementation-defined signed int arithmetic. But it turns out gcc does a good job with inlining, so changing the type has no overhead, and the semantics of the conversion are less surprising this way. Since we want to allow the asuint64 (x + 0x1.8p52) style conversion, the top bits were never usable, and the existing code ensures that only the bottom 32 bits of the conversion result are used. On aarch64 the neon intrinsics (which round ties to even) are changed to round and lround (which round ties away from zero); this does not affect the results in a significant way, but is more portable (it relies on round and lround being inlined, which works with -fno-math-errno). The TOINT_SHIFT and TOINT_RINT macros were removed; only separate code paths for TOINT_INTRINSICS and !TOINT_INTRINSICS are kept.
* sysdeps/aarch64/fpu/math_private.h (roundtoint): Use round. (converttoint): Use lround.
* sysdeps/ieee754/flt-32/math_config.h (roundtoint): Declare and document the semantics when TOINT_INTRINSICS is set. (converttoint): Likewise. (TOINT_RINT): Remove. (TOINT_SHIFT): Remove.
* sysdeps/ieee754/flt-32/e_expf.c (__expf): Remove the TOINT_RINT code path.
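Both conversion styles mentioned above can be sketched in C. The my_* and *_sketch names are illustrative; the 0x1.8p52 trick assumes round-to-nearest mode and small |x|, as the text notes.

```c
#include <math.h>
#include <stdint.h>
#include <string.h>

/* Portable path: round and lround do the whole job and inline well
   with -fno-math-errno.  Targets with TOINT_INTRINSICS replace these
   with a single round-and-convert instruction.  */
static inline double  my_roundtoint   (double x) { return round (x); }
static inline int32_t my_converttoint (double x)
{
  return (int32_t) lround (x);
}

static inline uint64_t
asuint64 (double x)
{
  uint64_t u;
  memcpy (&u, &x, 8);
  return u;
}

/* The asuint64 (x + 0x1.8p52) style the semantics must keep allowing:
   adding 2^52 + 2^51 forces rounding at the unit place (ties to even
   here, unlike lround), leaving round(x) in the low mantissa bits.
   Only the bottom 32 bits are meaningful, which is exactly why
   converttoint only promises those.  */
static inline int32_t
converttoint_shift (double x)
{
  return (int32_t) asuint64 (x + 0x1.8p52);
}
```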

Siddhesh Poyarekar | be64b1946b
[aarch64] Fix value of MIN_PAGE_SIZE for testing
MIN_PAGE_SIZE is normally set to 4096, but for testing it can be set to 16 so that it exercises the page-crossing code for every misaligned access. The value was set to 15, which is obviously wrong, so it is fixed as obvious and tested.
* sysdeps/aarch64/strlen.S [TEST_PAGE_CROSS](MIN_PAGE_SIZE): Fix value.

Siddhesh Poyarekar | dce452dc52
Rename the glibc.tune namespace to glibc.cpu
The glibc.tune namespace is vaguely named since it is a 'tunable', so give it a more specific name that describes what it refers to. Rename the tunable namespace to 'cpu' to more accurately reflect what it encompasses. Also rename glibc.tune.cpu to glibc.cpu.name since glibc.cpu.cpu is weird. * NEWS: Mention the change. * elf/dl-tunables.list: Rename tune namespace to cpu. * sysdeps/powerpc/dl-tunables.list: Likewise. * sysdeps/x86/dl-tunables.list: Likewise. * sysdeps/aarch64/dl-tunables.list: Rename tune.cpu to cpu.name. * elf/dl-hwcaps.c (_dl_important_hwcaps): Adjust. * elf/dl-hwcaps.h (GET_HWCAP_MASK): Likewise. * manual/README.tunables: Likewise. * manual/tunables.texi: Likewise. * sysdeps/powerpc/cpu-features.c: Likewise. * sysdeps/unix/sysv/linux/aarch64/cpu-features.c (init_cpu_features): Likewise. * sysdeps/x86/cpu-features.c: Likewise. * sysdeps/x86/cpu-features.h: Likewise. * sysdeps/x86/cpu-tunables.c: Likewise. * sysdeps/x86_64/Makefile: Likewise. * sysdeps/x86/dl-cet.c: Likewise. Reviewed-by: Carlos O'Donell <carlos@redhat.com> |

Siddhesh Poyarekar | 0aec4c1d18
aarch64,falkor: Use vector registers for memcpy
Vector registers perform better than scalar register pairs for copying data, so prefer them instead. This results in a time reduction of over 50% (i.e. a 2x speed improvement) for some smaller sizes in memcpy-walk. Larger sizes show improvements of around 1% to 2%. memcpy-random shows a very small improvement, in the range of 1-2%.
* sysdeps/aarch64/multiarch/memcpy_falkor.S (__memcpy_falkor): Use vector registers.

Siddhesh Poyarekar | ce76a5cb8d
aarch64,falkor: Use vector registers for memmove
Vector registers perform much better for moves compared to pairs of registers on falkor, so use them instead. This results in a time reduction of up to 50% (i.e. 2x improvement) for a lot of the smaller sizes, i.e. up to 1K in memmove-walk. Improvements for larger sizes are smaller, at about 1%-2%. * sysdeps/aarch64/multiarch/memmove_falkor.S (__memcpy_falkor): Use vector registers. |

Hongbo Zhang | fc2ba8037d
aarch64: add HXT Phecda core memory operation ifuncs
Phecda is HXT semiconductor's CPU core, this patch adds memory operation ifuncs for it: sharing the same optimized implementation with Qualcomm's Falkor core. 2018-06-07 Minfeng Kang <minfeng.kang@hxt-semitech.com> Hongbo Zhang <hongbo.zhang@linaro.org> * sysdeps/aarch64/multiarch/memcpy.c (libc_ifunc): reuse __memcpy_falkor for phecda core. * sysdeps/aarch64/multiarch/memmove.c (libc_ifunc): reuse __memmove_falkor for phecda core. * sysdeps/aarch64/multiarch/memset.c (libc_ifunc): reuse __memset_falkor for phecda core. * sysdeps/unix/sysv/linux/aarch64/cpu-features.c: add MIDR entry for phecda core. * sysdeps/unix/sysv/linux/aarch64/cpu-features.h (IS_PHECDA): add macro to identify phecda core. |

H.J. Lu | 67c0579669
Mark _init and _fini as hidden [BZ #23145]
_init and _fini are special functions provided by glibc for linker to define DT_INIT and DT_FINI in executable and shared library. They should never be put in dynamic symbol table. This patch marks them as hidden to remove them from dynamic symbol table. Tested with build-many-glibcs.py. [BZ #23145] * elf/Makefile (tests-special): Add $(objpfx)check-initfini.out. ($(all-built-dso:=.dynsym): New target. (common-generated): Add $(all-built-dso:$(common-objpfx)%=%.dynsym). ($(objpfx)check-initfini.out): New target. (generated): Add check-initfini.out. * scripts/check-initfini.awk: New file. * sysdeps/aarch64/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/alpha/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/arm/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/hppa/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/i386/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/ia64/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/m68k/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/microblaze/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/mips/mips32/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/mips/mips64/n32/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/mips/mips64/n64/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/nios2/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/powerpc/powerpc32/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/powerpc/powerpc64/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/s390/s390-32/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/s390/s390-64/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/sh/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/sparc/crti.S (_init): Mark as hidden. (_fini): Likewise. * sysdeps/x86_64/crti.S (_init): Mark as hidden. (_fini): Likewise. |

Joseph Myers | 8f145c7712
Remove sysdeps/aarch64/soft-fp directory.
As per <https://sourceware.org/ml/libc-alpha/2014-10/msg00369.html>, there should not be separate sysdeps/<arch>/soft-fp directories when those are used by all configurations that use sysdeps/<arch>, and, more generally, should not be sysdeps/foo/Implies files pointing to a subdirectory foo/bar. This patch eliminates the sysdeps/aarch64/soft-fp directory accordingly, merging its contents into sysdeps/aarch64. Tested with build-many-glibcs.py that installed stripped shared libraries for aarch64 configurations are unchanged by this patch. * sysdeps/aarch64/Implies: Remove aarch64/soft-fp. * sysdeps/aarch64/Makefile [$(subdir) = math] (CPPFLAGS): Add -I../soft-fp. Moved from .... * sysdeps/aarch64/soft-fp/Makefile: ... here. Remove file. * sysdeps/aarch64/soft-fp/e_sqrtl.c: Move to .... * sysdeps/aarch64/e_sqrtl.c: ... here. * sysdeps/aarch64/soft-fp/sfp-machine.h: Move to .... * sysdeps/aarch64/sfp-machine.h: ... here. |

Joseph Myers | b4d5b8b021
Do not include math-barriers.h in math_private.h.
This patch continues the math_private.h cleanup by stopping math_private.h from including math-barriers.h and making the users of the barrier macros include the latter header directly. No attempt is made to remove any math_private.h includes that are now unused, except in strtod_l.c where that is done to avoid line number changes in assertions, so that installed stripped shared libraries can be compared before and after the patch. (I think the floating-point environment support in math_private.h should also move out - some architectures already have fenv_private.h as an architecture-internal header included from their math_private.h - and after moving that out might be a better time to identify unused math_private.h includes.) Tested for x86_64 and x86, and tested with build-many-glibcs.py that installed stripped shared libraries are unchanged by the patch. * sysdeps/generic/math_private.h: Do not include <math-barriers.h>. * stdlib/strtod_l.c: Include <math-barriers.h> instead of <math_private.h>. * math/fromfp.h: Include <math-barriers.h>. * math/math-narrow.h: Likewise. * math/s_nextafter.c: Likewise. * math/s_nexttowardf.c: Likewise. * sysdeps/aarch64/fpu/s_llrint.c: Likewise. * sysdeps/aarch64/fpu/s_llrintf.c: Likewise. * sysdeps/aarch64/fpu/s_lrint.c: Likewise. * sysdeps/aarch64/fpu/s_lrintf.c: Likewise. * sysdeps/i386/fpu/s_nextafterl.c: Likewise. * sysdeps/i386/fpu/s_nexttoward.c: Likewise. * sysdeps/i386/fpu/s_nexttowardf.c: Likewise. * sysdeps/ieee754/dbl-64/e_atan2.c: Likewise. * sysdeps/ieee754/dbl-64/e_atanh.c: Likewise. * sysdeps/ieee754/dbl-64/e_exp.c: Likewise. * sysdeps/ieee754/dbl-64/e_exp2.c: Likewise. * sysdeps/ieee754/dbl-64/e_j0.c: Likewise. * sysdeps/ieee754/dbl-64/e_sqrt.c: Likewise. * sysdeps/ieee754/dbl-64/s_expm1.c: Likewise. * sysdeps/ieee754/dbl-64/s_fma.c: Likewise. * sysdeps/ieee754/dbl-64/s_fmaf.c: Likewise. * sysdeps/ieee754/dbl-64/s_log1p.c: Likewise. * sysdeps/ieee754/dbl-64/s_nearbyint.c: Likewise. * sysdeps/ieee754/dbl-64/wordsize-64/s_nearbyint.c: Likewise. * sysdeps/ieee754/flt-32/e_atanhf.c: Likewise. * sysdeps/ieee754/flt-32/e_j0f.c: Likewise. * sysdeps/ieee754/flt-32/s_expm1f.c: Likewise. * sysdeps/ieee754/flt-32/s_log1pf.c: Likewise. * sysdeps/ieee754/flt-32/s_nearbyintf.c: Likewise. * sysdeps/ieee754/flt-32/s_nextafterf.c: Likewise. * sysdeps/ieee754/k_standardl.c: Likewise. * sysdeps/ieee754/ldbl-128/e_asinl.c: Likewise. * sysdeps/ieee754/ldbl-128/e_expl.c: Likewise. * sysdeps/ieee754/ldbl-128/e_powl.c: Likewise. * sysdeps/ieee754/ldbl-128/s_fmal.c: Likewise. * sysdeps/ieee754/ldbl-128/s_nearbyintl.c: Likewise. * sysdeps/ieee754/ldbl-128/s_nextafterl.c: Likewise. * sysdeps/ieee754/ldbl-128/s_nexttoward.c: Likewise. * sysdeps/ieee754/ldbl-128/s_nexttowardf.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/e_asinl.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_fmal.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_nextafterl.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_nexttoward.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_nexttowardf.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_rintl.c: Likewise. * sysdeps/ieee754/ldbl-96/e_atanhl.c: Likewise. * sysdeps/ieee754/ldbl-96/e_j0l.c: Likewise. * sysdeps/ieee754/ldbl-96/s_fma.c: Likewise. * sysdeps/ieee754/ldbl-96/s_fmal.c: Likewise. * sysdeps/ieee754/ldbl-96/s_nexttoward.c: Likewise. * sysdeps/ieee754/ldbl-96/s_nexttowardf.c: Likewise. * sysdeps/ieee754/ldbl-opt/s_nexttowardfd.c: Likewise. * sysdeps/m68k/m680x0/fpu/s_nextafterl.c: Likewise. |
||
Siddhesh Poyarekar
|
db725a458e |
aarch64,falkor: Ignore prefetcher tagging for smaller copies
For small and medium sized copies, the effect of hardware prefetching is not as dominant as instruction-level parallelism. Hence it makes more sense to load data into multiple registers than to try to route them to the same prefetch unit. This is also the case for the loop exit, where we are unable to latch on to the same prefetch unit anyway, so it makes more sense to have data loaded in parallel. The performance results are a bit mixed with memcpy-random, with numbers jumping between -1% and +3%, i.e. the numbers don't seem repeatable. memcpy-walk sees a 70% improvement (i.e. > 2x) for 128 bytes, and that improvement shrinks as the impact of the tail copy decreases in comparison to the loop. * sysdeps/aarch64/multiarch/memcpy_falkor.S (__memcpy_falkor): Use multiple registers to copy data in loop tail. |
||
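A hypothetical C model of the loop-tail idea above (illustrative only; the actual change is in assembly, where register names are chosen explicitly):

```c
#include <stdint.h>
#include <string.h>

/* Eight independent 8-byte loads into separate temporaries, so no load in
   the tail waits on a register that a previous load is still writing.  */
static void
copy_tail_64 (unsigned char *dst, const unsigned char *src)
{
  uint64_t t0, t1, t2, t3, t4, t5, t6, t7;
  memcpy (&t0, src +  0, 8);  memcpy (&t1, src +  8, 8);
  memcpy (&t2, src + 16, 8);  memcpy (&t3, src + 24, 8);
  memcpy (&t4, src + 32, 8);  memcpy (&t5, src + 40, 8);
  memcpy (&t6, src + 48, 8);  memcpy (&t7, src + 56, 8);
  memcpy (dst +  0, &t0, 8);  memcpy (dst +  8, &t1, 8);
  memcpy (dst + 16, &t2, 8);  memcpy (dst + 24, &t3, 8);
  memcpy (dst + 32, &t4, 8);  memcpy (dst + 40, &t5, 8);
  memcpy (dst + 48, &t6, 8);  memcpy (dst + 56, &t7, 8);
}
```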
Siddhesh Poyarekar
|
70c97f8493 |
aarch64,falkor: Ignore prefetcher hints for memmove tail
The tails of the copy loops are unable to train the falkor hardware prefetcher because they load from a different base compared to the hot loop. In this case, avoid serializing the instructions by loading into different registers. Also peel the last iteration of the loop into the tail (and have it use different registers), since that gives better performance for medium sizes. This results in performance improvements of between 3% and 20% over the current falkor implementation for sizes between 128 bytes and 1K on the memmove-walk benchmark, thus mostly covering the regressions seen against the generic memmove. * sysdeps/aarch64/multiarch/memmove_falkor.S (__memmove_falkor): Use multiple registers to move data in loop tail. |
||
Joseph Myers
|
9ed2e15ff4 |
Move math_opt_barrier, math_force_eval to separate math-barriers.h.
This patch continues cleaning up math_private.h by moving the math_opt_barrier and math_force_eval macros to a separate header math-barriers.h. At present, those macros are inside a "#ifndef math_opt_barrier" in math_private.h to allow architectures to override them. With a separate math-barriers.h header, no such #ifndef or #include_next is needed; architectures just have their own alternative version of math-barriers.h when providing their own optimized versions that avoid going through memory unnecessarily. The generic math-barriers.h has a comment added to document these two macros. In this patch, math_private.h is made to #include <math-barriers.h>, so files using these macros do not need updating yet. That is because of uses of math_force_eval in math_check_force_underflow and math_check_force_underflow_nonneg, which are still defined in math_private.h. Once those are moved out to a separate header, that separate header can be made to include <math-barriers.h>, as can the other files directly using these barrier macros, and then the include of <math-barriers.h> from math_private.h can be removed. Tested for x86_64 and x86. Also tested with build-many-glibcs.py that installed stripped shared libraries are unchanged by this patch. * sysdeps/generic/math_private.h: Do not include <math-barriers.h>. * stdlib/strtod_l.c: Include <math-barriers.h> instead of <math_private.h>. * math/fromfp.h: Include <math-barriers.h>. * math/math-narrow.h: Likewise. * math/s_nextafter.c: Likewise. * math/s_nexttowardf.c: Likewise. * sysdeps/aarch64/fpu/s_llrint.c: Likewise. * sysdeps/aarch64/fpu/s_llrintf.c: Likewise. * sysdeps/aarch64/fpu/s_lrint.c: Likewise. * sysdeps/aarch64/fpu/s_lrintf.c: Likewise. * sysdeps/i386/fpu/s_nextafterl.c: Likewise. * sysdeps/i386/fpu/s_nexttoward.c: Likewise. * sysdeps/i386/fpu/s_nexttowardf.c: Likewise. * sysdeps/ieee754/dbl-64/e_atan2.c: Likewise. * sysdeps/ieee754/dbl-64/e_atanh.c: Likewise. * sysdeps/ieee754/dbl-64/e_exp.c: Likewise. * sysdeps/ieee754/dbl-64/e_exp2.c: Likewise. * sysdeps/ieee754/dbl-64/e_j0.c: Likewise. * sysdeps/ieee754/dbl-64/e_sqrt.c: Likewise. * sysdeps/ieee754/dbl-64/s_expm1.c: Likewise. * sysdeps/ieee754/dbl-64/s_fma.c: Likewise. * sysdeps/ieee754/dbl-64/s_fmaf.c: Likewise. * sysdeps/ieee754/dbl-64/s_log1p.c: Likewise. * sysdeps/ieee754/dbl-64/s_nearbyint.c: Likewise. * sysdeps/ieee754/dbl-64/wordsize-64/s_nearbyint.c: Likewise. * sysdeps/ieee754/flt-32/e_atanhf.c: Likewise. * sysdeps/ieee754/flt-32/e_j0f.c: Likewise. * sysdeps/ieee754/flt-32/s_expm1f.c: Likewise. * sysdeps/ieee754/flt-32/s_log1pf.c: Likewise. * sysdeps/ieee754/flt-32/s_nearbyintf.c: Likewise. * sysdeps/ieee754/flt-32/s_nextafterf.c: Likewise. * sysdeps/ieee754/k_standardl.c: Likewise. * sysdeps/ieee754/ldbl-128/e_asinl.c: Likewise. * sysdeps/ieee754/ldbl-128/e_expl.c: Likewise. * sysdeps/ieee754/ldbl-128/e_powl.c: Likewise. * sysdeps/ieee754/ldbl-128/s_fmal.c: Likewise. * sysdeps/ieee754/ldbl-128/s_nearbyintl.c: Likewise. * sysdeps/ieee754/ldbl-128/s_nextafterl.c: Likewise. * sysdeps/ieee754/ldbl-128/s_nexttoward.c: Likewise. * sysdeps/ieee754/ldbl-128/s_nexttowardf.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/e_asinl.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_fmal.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_nextafterl.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_nexttoward.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_nexttowardf.c: Likewise. * sysdeps/ieee754/ldbl-128ibm/s_rintl.c: Likewise. * sysdeps/ieee754/ldbl-96/e_atanhl.c: Likewise. * sysdeps/ieee754/ldbl-96/e_j0l.c: Likewise. * sysdeps/ieee754/ldbl-96/s_fma.c: Likewise. * sysdeps/ieee754/ldbl-96/s_fmal.c: Likewise. * sysdeps/ieee754/ldbl-96/s_nexttoward.c: Likewise. * sysdeps/ieee754/ldbl-96/s_nexttowardf.c: Likewise. * sysdeps/ieee754/ldbl-opt/s_nexttowardfd.c: Likewise. * sysdeps/m68k/m680x0/fpu/s_nextafterl.c: Likewise. |
||
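For reference, the two barrier macros being discussed are essentially the following (shape of the generic math-barriers.h; architectures override them with register constraints when spilling to memory is unnecessary):

```c
/* Force X to be evaluated and treated as live in memory, defeating
   constant folding and code motion across the barrier.  */
#define math_opt_barrier(x)                                          \
  ({ __typeof (x) __x = (x); __asm ("" : "+m" (__x)); __x; })
#define math_force_eval(x)                                           \
  ({ __typeof (x) __x = (x); __asm __volatile ("" : : "m" (__x)); })
```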
Maciej W. Rozycki
|
10a446ddcc |
elf: Unify symbol address run-time calculation [BZ #19818]
Wrap symbol address run-time calculation into a macro and use it throughout, replacing inline calculations. There are a couple of variants, most of them different in a functionally insignificant way. Most calculations are right following RESOLVE_MAP, at which point either the map or the symbol returned can be checked for validity as the macro sets either both or neither. In some places both the symbol and the map have to be checked however. My initial implementation therefore always checked both, however that resulted in code larger by as much as 0.3%, as many places know from elsewhere that no check is needed. I have decided the size growth was unacceptable. Having looked closer I realized that it's the map that is the culprit. Therefore I have modified LOOKUP_VALUE_ADDRESS to accept an additional boolean argument telling it to access the map without checking it for validity. This in turn has brought quite nice results, with new code actually being smaller for i686, and MIPS o32, n32 and little-endian n64 targets, unchanged in size for x86-64 and, unusually, marginally larger for big-endian MIPS n64, as follows:

i686:
   text    data     bss     dec     hex filename
 152255    4052     192  156499   26353 ld-2.27.9000-base.so
 152159    4052     192  156403   262f3 ld-2.27.9000-elf-symbol-value.so

MIPS/o32/el:
   text    data     bss     dec     hex filename
 142906    4396     260  147562   2406a ld-2.27.9000-base.so
 142890    4396     260  147546   2405a ld-2.27.9000-elf-symbol-value.so

MIPS/n32/el:
   text    data     bss     dec     hex filename
 142267    4404     260  146931   23df3 ld-2.27.9000-base.so
 142171    4404     260  146835   23d93 ld-2.27.9000-elf-symbol-value.so

MIPS/n64/el:
   text    data     bss     dec     hex filename
 149835    7376     408  157619   267b3 ld-2.27.9000-base.so
 149787    7376     408  157571   26783 ld-2.27.9000-elf-symbol-value.so

MIPS/o32/eb:
   text    data     bss     dec     hex filename
 142870    4396     260  147526   24046 ld-2.27.9000-base.so
 142854    4396     260  147510   24036 ld-2.27.9000-elf-symbol-value.so

MIPS/n32/eb:
   text    data     bss     dec     hex filename
 142019    4404     260  146683   23cfb ld-2.27.9000-base.so
 141923    4404     260  146587   23c9b ld-2.27.9000-elf-symbol-value.so

MIPS/n64/eb:
   text    data     bss     dec     hex filename
 149763    7376     408  157547   2676b ld-2.27.9000-base.so
 149779    7376     408  157563   2677b ld-2.27.9000-elf-symbol-value.so

x86-64:
   text    data     bss     dec     hex filename
 148462    6452     400  155314   25eb2 ld-2.27.9000-base.so
 148462    6452     400  155314   25eb2 ld-2.27.9000-elf-symbol-value.so

[BZ #19818] * sysdeps/generic/ldsodefs.h (LOOKUP_VALUE_ADDRESS): Add `set' parameter. (SYMBOL_ADDRESS): New macro. [!ELF_FUNCTION_PTR_IS_SPECIAL] (DL_SYMBOL_ADDRESS): Use SYMBOL_ADDRESS for symbol address calculation. * elf/dl-runtime.c (_dl_fixup): Likewise. (_dl_profile_fixup): Likewise. * elf/dl-symaddr.c (_dl_symbol_address): Likewise. * elf/rtld.c (dl_main): Likewise. * sysdeps/aarch64/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/alpha/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/arm/dl-machine.h (elf_machine_rel): Likewise. (elf_machine_rela): Likewise. * sysdeps/hppa/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/hppa/dl-symaddr.c (_dl_symbol_address): Likewise. * sysdeps/i386/dl-machine.h (elf_machine_rel): Likewise. (elf_machine_rela): Likewise. * sysdeps/ia64/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/m68k/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/microblaze/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/mips/dl-machine.h (ELF_MACHINE_BEFORE_RTLD_RELOC): Likewise. (elf_machine_reloc): Likewise. (elf_machine_got_rel): Likewise. * sysdeps/mips/dl-trampoline.c (__dl_runtime_resolve): Likewise. * sysdeps/nios2/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/powerpc/powerpc32/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/powerpc/powerpc64/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/riscv/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/s390/s390-32/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/s390/s390-64/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/sh/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/sparc/sparc32/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/sparc/sparc64/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/tile/dl-machine.h (elf_machine_rela): Likewise. * sysdeps/x86_64/dl-machine.h (elf_machine_rela): Likewise. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> |
||
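The new macro is roughly the following (sketch of the ldsodefs.h addition; the map_set flag is the boolean LOOKUP_VALUE_ADDRESS argument discussed above):

```c
/* Compute the run-time address of symbol REF in MAP: absolute symbols
   (SHN_ABS) get no load bias added; MAP_SET says the map is known valid,
   so it may be dereferenced without a NULL check.  */
#define SYMBOL_ADDRESS(map, ref, map_set)                       \
  ((ref) == NULL ? 0                                            \
   : (__glibc_unlikely ((ref)->st_shndx == SHN_ABS) ? 0         \
      : LOOKUP_VALUE_ADDRESS (map, map_set)) + (ref)->st_value)
```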
Wilco Dijkstra
|
19a8b9a300 |
[PATCH 1/7] sin/cos slow paths: avoid slow paths for small inputs
This series of patches removes the slow paths from sin, cos and sincos. Besides greatly simplifying the implementation, the new version is also much faster for inputs up to PI (41% faster) and for large inputs needing range reduction (27% faster). ULP is ~0.55 with no errors found after testing 1.6 billion inputs across most of the range with mpsin and mpcos. The number of incorrectly rounded results (i.e. ULP > 0.5) is at most ~2750 per million inputs between 0.125 and 0.5; the average is ~850 per million between 0 and PI. Tested on AArch64 and x86_64 with no regressions. The first patch removes the slow paths for the cases where the input is small and doesn't require range reduction. Update ULP tables for sin, cos and sincos on AArch64 and x86_64. * sysdeps/aarch64/libm-test-ulps: Update ULP for sin, cos, sincos. * sysdeps/ieee754/dbl-64/s_sin.c (__sin): Remove slow paths for small inputs. (__cos): Likewise. * sysdeps/x86_64/fpu/libm-test-ulps: Update ULP for sin, cos, sincos. |
||
Joseph Myers
|
ffec7b2740 |
Use x86_64 backtrace as generic version.
No glibc configuration uses the present debug/backtrace.c, whereas several #include the x86_64 version. The x86_64 version is effectively a generic one (using _Unwind_Backtrace from libgcc, which works much more reliably than the built-in functions used by debug/backtrace.c). This patch moves it to debug/backtrace.c and removes all the #includes of the x86_64 version from other architectures which are no longer required. I do not know whether all the other architecture-specific backtrace implementations that are based on _Unwind_Backtrace are required, or whether, where their differences from the generic version do something useful, suitable hooks could be added to the generic version to reduce the duplication involved. Tested with build-many-glibcs.py that installed stripped shared libraries are unchanged by this patch. * sysdeps/x86_64/backtrace.c: Move to .... * debug/backtrace.c: ... here. * sysdeps/aarch64/backtrace.c: Remove file. * sysdeps/alpha/backtrace.c: Likewise. * sysdeps/hppa/backtrace.c: Likewise. * sysdeps/ia64/backtrace.c: Likewise. * sysdeps/mips/backtrace.c: Likewise. * sysdeps/nios2/backtrace.c: Likewise. * sysdeps/riscv/backtrace.c: Likewise. * sysdeps/sh/backtrace.c: Likewise. * sysdeps/tile/backtrace.c: Likewise. |
||
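A simplified sketch of the _Unwind_Backtrace approach described above (the real debug/backtrace.c additionally skips its own frame and loads libgcc lazily; my_backtrace is an illustrative name):

```c
#include <unwind.h>

struct trace_arg
{
  void **array;
  int cnt, size;
};

/* Called by the libgcc unwinder once per stack frame.  */
static _Unwind_Reason_Code
backtrace_helper (struct _Unwind_Context *ctx, void *a)
{
  struct trace_arg *arg = a;
  if (arg->cnt < arg->size)
    arg->array[arg->cnt++] = (void *) _Unwind_GetIP (ctx);
  return arg->cnt < arg->size ? _URC_NO_REASON : _URC_END_OF_STACK;
}

int
my_backtrace (void **array, int size)
{
  struct trace_arg arg = { array, 0, size };
  if (size >= 1)
    _Unwind_Backtrace (backtrace_helper, &arg);
  return arg.cnt;
}
```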
Wilco Dijkstra
|
700593fdd7 |
Remove all target specific __ieee754_sqrt(f/l) inlines
Remove the now unused target specific __ieee754_sqrt(f/l) inlines. Also remove inlines of sqrt which are for really old GCC versions. Removing these is desirable, under the general principle of leaving such inlining to the compiler rather than trying to do it in installed headers, especially when only very old compilers are affected. Note that removing inlines for __ieee754_sqrt disables inlining in the sqrt wrapper functions. Given the sqrt function will typically only be called for negative arguments, it doesn't matter whether the inlining happens or not. * sysdeps/aarch64/fpu/math_private.h (__ieee754_sqrt): Remove. (__ieee754_sqrtf): Remove. * sysdeps/alpha/fpu/math_private.h (__ieee754_sqrt): Remove. (__ieee754_sqrtf): Remove. * sysdeps/generic/math-type-macros.h (M_SQRT): Use sqrt. * sysdeps/m68k/m680x0/fpu/mathimpl.h (__ieee754_sqrt): Remove. * sysdeps/powerpc/fpu/math_private.h (__ieee754_sqrt): Remove. (__ieee754_sqrtf): Remove. * sysdeps/s390/fpu/bits/mathinline.h: Remove file. * sysdeps/sparc/fpu/bits/mathinline.h (sqrt): Remove. (sqrtf): Remove. (sqrtl): Remove. (__ieee754_sqrt): Remove. (__ieee754_sqrtf): Remove. (__ieee754_sqrtl): Remove. * sysdeps/x86/fpu/math_private.h (__ieee754_sqrt): Remove. * sysdeps/x86_64/fpu/math_private.h (__ieee754_sqrt): Remove. (__ieee754_sqrtf): Remove. (__ieee754_sqrtl): Remove. |
||
Siddhesh Poyarekar
|
b47c3e7637 |
aarch64/strncmp: Use lsr instead of mov+lsr
A single lsr can do what the mov and lsr pair did. |
||
Siddhesh Poyarekar
|
d46f84de74 |
aarch64/strncmp: Unbreak builds with old binutils
Binutils 2.26.* and older do not support moves with shifted registers, so use a separate shift instruction instead. |
||
Siddhesh Poyarekar
|
7108f1f944 |
aarch64: Improve strncmp for mutually misaligned inputs
The mutually misaligned inputs on aarch64 are compared with a simple byte-at-a-time loop, which is not very efficient. Enhance the comparison, similarly to strcmp, by loading a double-word at a time. The peak performance improvement (i.e. 4k maxlen comparisons) due to this on the strncmp microbenchmark is as follows: falkor: 3.5x (up to 72% time reduction) cortex-a73: 3.5x (up to 71% time reduction) cortex-a53: 3.5x (up to 71% time reduction) All mutually misaligned inputs from 16 bytes maxlen onwards show upwards of 15% improvement, and there is no measurable effect on the performance of aligned/mutually aligned inputs. * sysdeps/aarch64/strncmp.S (count): New macro. (strncmp): Store misaligned length in SRC1 in COUNT. (mutual_align): Adjust. (misaligned8): Load dword at a time when it is safe. |
||
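The core building block of a dword-at-a-time string compare is a zero-byte detector; a sketch of the standard bit trick (not the literal assembly used here):

```c
#include <stdint.h>

/* Nonzero iff some byte of X is zero: subtracting 1 from each byte can
   only borrow into the high bit of a byte whose value was zero.  */
static inline uint64_t
has_zero_byte (uint64_t x)
{
  return (x - 0x0101010101010101ULL) & ~x & 0x8080808080808080ULL;
}
```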
Samuel Thibault
|
a5df0318ef |
hurd: add gscope support
* elf/dl-support.c [!THREAD_GSCOPE_IN_TCB] (_dl_thread_gscope_count): Define variable. * sysdeps/generic/ldsodefs.h [!THREAD_GSCOPE_IN_TCB] (struct rtld_global): Add _dl_thread_gscope_count member. * sysdeps/mach/hurd/tls.h: Include <atomic.h>. [!defined __ASSEMBLER__] (THREAD_GSCOPE_GLOBAL, THREAD_GSCOPE_SET_FLAG, THREAD_GSCOPE_RESET_FLAG, THREAD_GSCOPE_WAIT): Define macros. * sysdeps/generic/tls.h: Document THREAD_GSCOPE_IN_TCB. * sysdeps/aarch64/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/alpha/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/arm/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/hppa/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/i386/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/ia64/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/m68k/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/microblaze/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/mips/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/nios2/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/powerpc/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/riscv/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/s390/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/sh/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/sparc/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/tile/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. * sysdeps/x86_64/nptl/tls.h: Define THREAD_GSCOPE_IN_TCB to 1. |
||
Siddhesh Poyarekar
|
4e54d91863 |
aarch64: Fix branch target to loop16
I goofed up when changing the loop8 name to loop16 and missed the branch instance. Fixed, and actually build-tested this time. * sysdeps/aarch64/memcmp.S (more16): Fix branch target loop16. |
||
Siddhesh Poyarekar
|
30a81dae5b |
aarch64: Optimized memcmp for medium to large sizes
This improved memcmp provides a fast path for compares up to 16 bytes and then compares 16 bytes at a time, thus optimizing loads from both sources. The glibc memcmp microbenchmark retains performance (within ~1ns) for smaller compare sizes and cuts execution time by up to 31% for compares up to 4K on the APM Mustang. On Qualcomm Falkor this improves to almost 48%, i.e. an almost 2x improvement for sizes of 2K and above. * sysdeps/aarch64/memcmp.S: Widen comparison to 16 bytes at a time. |
||
Siddhesh Poyarekar
|
6ca24c4348 |
aarch64/strcmp: fix misaligned loop jump target
I accidentally set the loop jump back label as misaligned8 instead of do_misaligned. The typo is harmless but it's always nice to not have to unnecessarily execute those two instructions. * sysdeps/aarch64/strcmp.S (do_misaligned): Jump back to do_misaligned, not misaligned8. |
||
Steve Ellcey
|
e9537dddc7 |
IFUNC for Cavium ThunderX2
* sysdeps/aarch64/multiarch/Makefile (sysdep_routines): Add memcpy_thunderx2. * sysdeps/aarch64/multiarch/ifunc-impl-list.c (MAX_IFUNC): Increment to 4. (__libc_ifunc_impl_list): Add __memcpy_thunderx2. * sysdeps/aarch64/multiarch/memcpy.c (libc_ifunc): Add IS_THUNDERX2 and IS_THUNDERX2PA checks. * sysdeps/aarch64/multiarch/memcpy_thunderx.S (USE_THUNDERX2): Use macro to set name appropriately. (memcpy): Use USE_THUNDERX2 macro to modify prefetches. * sysdeps/aarch64/multiarch/memcpy_thunderx2.S: New file. * sysdeps/unix/sysv/linux/aarch64/cpu-features.h (IS_THUNDERX2PA): New macro. (IS_THUNDERX2): New macro. |
||
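A plain-C sketch of the dispatch this entry adds a case to (the real code uses glibc's libc_ifunc machinery and MIDR-based IS_THUNDERX2/IS_THUNDERX2PA checks; cpu_is_thunderx2 and my_memcpy below are hypothetical stand-ins):

```c
#include <stddef.h>

typedef void *memcpy_fn (void *, const void *, size_t);

extern memcpy_fn __memcpy_generic, __memcpy_thunderx, __memcpy_thunderx2;

/* Stand-ins for the MIDR implementer/part-number checks.  */
static int cpu_is_thunderx2 (void) { return 0; }
static int cpu_is_thunderx (void) { return 0; }

/* IFUNC resolver: runs once at relocation time and returns the
   implementation the symbol should bind to.  */
static memcpy_fn *
memcpy_resolver (void)
{
  if (cpu_is_thunderx2 ())
    return __memcpy_thunderx2;
  if (cpu_is_thunderx ())
    return __memcpy_thunderx;
  return __memcpy_generic;
}

void *my_memcpy (void *, const void *, size_t)
  __attribute__ ((ifunc ("memcpy_resolver")));
```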
Wilco Dijkstra
|
0c8a67a573 |
[AArch64] Fix include.
Fix include to use <>. * sysdeps/aarch64/fpu/fpu_control.h: Use <> in include. |
||
Wilco Dijkstra
|
c3d466cba1 |
Remove slow paths from pow
Remove the slow paths from pow. Like several other double precision math functions, pow is exactly rounded. This is not required of math functions and causes major overheads, as it requires multiple fallbacks using higher precision arithmetic if a result is close to 0.5 ULP. Ridiculous slowdowns of up to 100000x have been reported when the highest precision path triggers. All GLIBC math tests pass on AArch64 and x64 (with ULP of pow set to 1). The worst case error is ~0.506 ULP. A simple test over a few hundred million values shows pow is 10% faster on average. This fixes BZ #13932. [BZ #13932] * sysdeps/ieee754/dbl-64/uexp.h (err_1): Remove. * benchtests/pow-inputs: Update comment for slow path cases. * manual/probes.texi (slowpow_p10): Delete removed probe. (slowpow_p10): Likewise. * math/Makefile: Remove halfulp.c and slowpow.c. * sysdeps/aarch64/libm-test-ulps: Set ULP of pow to 1. * sysdeps/generic/math_private.h (__exp1): Remove error argument. (__halfulp): Remove. (__slowpow): Remove. * sysdeps/i386/fpu/halfulp.c: Delete file. * sysdeps/i386/fpu/slowpow.c: Likewise. * sysdeps/ia64/fpu/halfulp.c: Likewise. * sysdeps/ia64/fpu/slowpow.c: Likewise. * sysdeps/ieee754/dbl-64/e_exp.c (__exp1): Remove error argument, improve comments and add error analysis. * sysdeps/ieee754/dbl-64/e_pow.c (__ieee754_pow): Add error analysis. (power1): Remove function. (log1): Remove error argument, add error analysis. (my_log2): Remove function. * sysdeps/ieee754/dbl-64/halfulp.c: Delete file. * sysdeps/ieee754/dbl-64/slowpow.c: Likewise. * sysdeps/m68k/m680x0/fpu/halfulp.c: Likewise. * sysdeps/m68k/m680x0/fpu/slowpow.c: Likewise. * sysdeps/powerpc/power4/fpu/Makefile: Remove CPPFLAGS-slowpow.c. * sysdeps/x86_64/fpu/libm-test-ulps: Set ULP of pow to 1. * sysdeps/x86_64/fpu/multiarch/Makefile: Remove slowpow-fma.c, slowpow-fma4.c, halfulp-fma.c, halfulp-fma4.c. * sysdeps/x86_64/fpu/multiarch/e_pow-fma.c (__slowpow): Remove define. * sysdeps/x86_64/fpu/multiarch/e_pow-fma4.c (__slowpow): Likewise. * sysdeps/x86_64/fpu/multiarch/halfulp-fma.c: Delete file. * sysdeps/x86_64/fpu/multiarch/halfulp-fma4.c: Likewise. * sysdeps/x86_64/fpu/multiarch/slowpow-fma.c: Likewise. * sysdeps/x86_64/fpu/multiarch/slowpow-fma4.c: Likewise. |
||
Wilco Dijkstra
|
4f5b921eb9 |
[AArch64] Fix testsuite error due to fpsr/fpcr change
Add features.h include for __GNUC_PREREQ. * sysdeps/aarch64/fpu/fpu_control.h: Add features.h to fix build error. |
||
Wilco Dijkstra
|
3f8d9d58c5 |
[AArch64] Use builtins for fpcr/fpsr
Since GCC has support for accessing FPSR/FPCR, use them when possible so that the asm instructions can be removed eventually. Although GCC 5 supports the builtins, it has an optimization bug, so use them from GCC 6 onwards. * sysdeps/aarch64/fpu/fpu_control.h: Use builtins for accessing FPCR/FPSR. |
||
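A sketch of the guarded switch to the builtins, close to what the fpu_control.h change does (the GCC 6 cutoff is the optimization bug mentioned above):

```c
#include <features.h>

#if __GNUC_PREREQ (6, 0)
# define _FPU_GETCW(fpcr) ((fpcr) = __builtin_aarch64_get_fpcr ())
# define _FPU_SETCW(fpcr) __builtin_aarch64_set_fpcr (fpcr)
#else
/* Fall back to the explicit system-register moves.  */
# define _FPU_GETCW(fpcr) \
  __asm__ __volatile__ ("mrs %0, fpcr" : "=r" (fpcr))
# define _FPU_SETCW(fpcr) \
  __asm__ __volatile__ ("msr fpcr, %0" : : "r" (fpcr))
#endif
```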
Siddhesh Poyarekar
|
84c94d2fd9 |
aarch64: Use the L() macro for labels in memcmp
The L() macro makes the assembly a bit more readable. * sysdeps/aarch64/memcmp.S: Use L() macro for labels. |
||
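The macro itself is tiny: it builds a .L-prefixed assembler-local label, so labels like L(loop16) stay out of the symbol table:

```c
#define L(label) .L ## label
```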
Szabolcs Nagy
|
3d1d79283e |
aarch64: fix static pie enabled libc when main is in a shared library
In the static pie enabled libc, crt1.o uses the same position independent code as rcrt1.o, and crt1.o is used instead of Scrt1.o when -no-pie executables are linked. When main is not defined in the executable but in a shared library, crt1.o is currently broken: it assumes main is local. (glibc has a test for this, but I missed it in my previous testing.) To make both rcrt1.o and crt1.o happy with the same code, a wrapper is introduced around main: with this, crt1.o works with an extern main symbol, while rcrt1.o does not depend on GOT relocations. (The change only affects the static pie enabled libc. Further simplification of start.S is possible in the future by using the same approach for Scrt1.o too.) * sysdeps/aarch64/start.S (_start): Use __wrap_main. (__wrap_main): New local symbol. |
||
Joseph Myers
|
688903eb3e |
Update copyright dates with scripts/update-copyrights.
* All files with FSF copyright notices: Update copyright dates using scripts/update-copyrights. * locale/programs/charmap-kw.h: Regenerated. * locale/programs/locfile-kw.h: Likewise. |
||
Szabolcs Nagy
|
8bfb461e20 |
aarch64: update libm-test-ulps
* sysdeps/aarch64/libm-test-ulps: Update. |
||
Adhemerval Zanella
|
4e00196912 |
aarch64: fix memset with --disable-multi-arch
* sysdeps/aarch64/memset.S (MEMSET): Define. |
||
Szabolcs Nagy
|
14d886edbd |
aarch64: fix start code for static pie
There are three flavors of the crt startup code: 1) crt1.o used for non-pie, 2) Scrt1.o used for dynamic linked pie (dynamic linker relocates), 3) rcrt1.o used for static linked pie (self relocation is needed) In the --enable-static-pie case crt1.o is built with -DPIC and in case of static linking it interposes _dl_relocate_static_pie in libc to avoid self relocation. Scrt1.o is built with -DPIC -DSHARED and it relies on GOT entries that the static linker cannot relax and thus need relocation before the start code is executed, so rcrt1.o needs separate implementation. This implementation does not work for .text > 4G position independent executables, which is fine since the toolchain does not support -mcmodel=large with -fPIE. Tests pass with ld/22269 and ld/22263 binutils bugs fixed. * sysdeps/aarch64/start.S (_start): Handle PIC && !SHARED case. |
||
Siddhesh Poyarekar
|
2bce01ebba |
aarch64: Improve strcmp unaligned performance
Replace the simple byte-wise compare in the misaligned case with a dword compare with page boundary checks in place. For simplicity I've chosen a 4K page boundary so that we don't have to query the actual page size on the system. This results in up to 3x improvement in performance in the unaligned case on falkor and about 2.5x improvement on mustang as measured using bench-strcmp. * sysdeps/aarch64/strcmp.S (misaligned8): Compare dword at a time whenever possible. |
||
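A sketch of the 4 KiB page-boundary guard described above (illustrative C; the real check is in assembly):

```c
#include <stdint.h>

/* An 8-byte load from P cannot fault on unmapped memory following the
   string if it stays within P's 4 KiB page; the fixed 4 KiB guess avoids
   querying the real page size.  */
static inline int
dword_load_crosses_page (const char *p)
{
  return ((uintptr_t) p & 4095) > 4096 - 8;
}
```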
Siddhesh Poyarekar
|
4c1d801a59 |
aarch64: Avoid hidden symbols for memcpy/memmove into static binaries
The __GI_* symbol aliases for __memcpy_generic are unnecessary in static binaries since they are never used there. Add them only for libc.so, where they avoid going through the PLT. Maybe some time in the future we need to evaluate the relative cost of the PLT vs the gains from multiarch memcpy implementations and take a call on whether to drop this completely. * sysdeps/aarch64/multiarch/memcpy_generic.S (__GI_memcpy): Define only for libc.so. |
||
Joseph Myers
|
15ff490014 |
Use libm_alias_float for aarch64.
Continuing the preparation for additional _FloatN / _FloatNx function aliases, this patch makes aarch64 libm function implementations use libm_alias_float to define function aliases. Tested with build-many-glibcs.py for aarch64-linux-gnu that installed stripped shared libraries are unchanged by the patch. * sysdeps/aarch64/fpu/s_ceilf.c: Include <libm-alias-float.h>. (ceilf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_floorf.c: Include <libm-alias-float.h>. (floorf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_fmaf.c: Include <libm-alias-float.h>. (fmaf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_fmaxf.c: Include <libm-alias-float.h>. (fmaxf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_fminf.c: Include <libm-alias-float.h>. (fminf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_llrintf.c: Include <libm-alias-float.h>. (llrintf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_llroundf.c: Include <libm-alias-float.h>. (llroundf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_lrintf.c: Include <libm-alias-float.h>. (lrintf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_lroundf.c: Include <libm-alias-float.h>. (lroundf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_nearbyintf.c: Include <libm-alias-float.h>. (nearbyintf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_rintf.c: Include <libm-alias-float.h>. (rintf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_roundf.c: Include <libm-alias-float.h>. (roundf): Define using libm_alias_float. * sysdeps/aarch64/fpu/s_truncf.c: Include <libm-alias-float.h>. (truncf): Define using libm_alias_float. |
||
Joseph Myers
|
f07d2ec8c0 |
Use libm_alias_double for aarch64.
Continuing the preparation for additional _FloatN / _FloatNx function aliases, this patch makes aarch64 libm function implementations use libm_alias_double to define function aliases. Tested with build-many-glibcs.py for aarch64-linux-gnu that installed stripped shared libraries are unchanged by the patch. * sysdeps/aarch64/fpu/s_ceil.c: Include <libm-alias-double.h>. (ceil): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_floor.c: Include <libm-alias-double.h>. (floor): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_fma.c: Include <libm-alias-double.h>. (fma): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_fmax.c: Include <libm-alias-double.h>. (fmax): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_fmin.c: Include <libm-alias-double.h>. (fmin): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_llrint.c: Include <libm-alias-double.h>. (llrint): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_llround.c: Include <libm-alias-double.h>. (llround): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_lrint.c: Include <libm-alias-double.h>. (lrint): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_lround.c: Include <libm-alias-double.h>. (lround): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_nearbyint.c: Include <libm-alias-double.h>. (nearbyint): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_rint.c: Include <libm-alias-double.h>. (rint): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_round.c: Include <libm-alias-double.h>. (round): Define using libm_alias_double. * sysdeps/aarch64/fpu/s_trunc.c: Include <libm-alias-double.h>. (trunc): Define using libm_alias_double. |
||
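Usage shape of the new alias macros, taking s_trunc.c as the pattern (libm-alias-double.h is a glibc-internal header, so this only builds inside the glibc tree):

```c
#include <math.h>
#include <libm-alias-double.h>

double
__trunc (double x)
{
  return __builtin_trunc (x);
}
/* Emits the public alias (and, later, any _FloatN aliases).  */
libm_alias_double (__trunc, trunc)
```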
Siddhesh Poyarekar
|
5a67c4fa01 |
aarch64: Optimized memset for falkor
The generic memset reads dczid_el0 on every memset. This has a significant impact on falkor for a range of sizes because reading dczid_el0 is slow. The DZP bit in the dczid_el0 register does not change dynamically, so it is safe to read once during program startup. With this patch dczid_el0 is read once during startup and zva_size is cached. This is used to invoke the falkor-specific memset; the generic memset routine remains unchanged. The gains due to this are significant for falkor, with run time reductions as high as 48%. Here's a sample from the falkor tests:

Function: memset
Variant: walk
                     simple_memset        __memset_falkor  __memset_generic
=====================================================================
length=256, char=0: 139.96 (-698.28%) 9.07 ( 48.26%) 17.53
length=257, char=0: 140.50 (-699.03%) 9.53 ( 45.80%) 17.58
length=258, char=0: 140.96 (-703.95%) 9.58 ( 45.36%) 17.53
length=259, char=0: 141.56 (-705.16%) 9.53 ( 45.79%) 17.58
length=260, char=0: 142.15 (-710.76%) 9.57 ( 45.39%) 17.53
length=261, char=0: 142.50 (-710.39%) 9.53 ( 45.78%) 17.58
length=262, char=0: 142.97 (-715.09%) 9.57 ( 45.42%) 17.54
length=263, char=0: 143.51 (-716.18%) 9.53 ( 45.80%) 17.58
length=264, char=0: 143.93 (-720.55%) 9.58 ( 45.39%) 17.54
length=265, char=0: 144.56 (-722.07%) 9.53 ( 45.80%) 17.59
length=266, char=0: 144.98 (-726.42%) 9.58 ( 45.42%) 17.54
length=267, char=0: 145.53 (-727.53%) 9.53 ( 45.80%) 17.59
length=268, char=0: 146.25 (-731.81%) 9.53 ( 45.79%) 17.58
length=269, char=0: 146.52 (-735.39%) 9.53 ( 45.66%) 17.54
length=270, char=0: 146.97 (-735.81%) 9.53 ( 45.80%) 17.58
length=271, char=0: 147.54 (-741.08%) 9.58 ( 45.38%) 17.54
length=512, char=0: 268.26 (-1307.85%) 12.06 ( 36.71%) 19.05
length=513, char=0: 268.73 (-1273.89%) 13.56 ( 30.68%) 19.56
length=514, char=0: 269.31 (-1276.89%) 13.56 ( 30.68%) 19.56
length=515, char=0: 269.73 (-1279.05%) 13.56 ( 30.68%) 19.56
length=516, char=0: 270.34 (-1282.24%) 13.56 ( 30.67%) 19.56
length=517, char=0: 270.83 (-1284.71%) 13.56 ( 30.66%) 19.56
length=518, char=0: 271.20 (-1286.54%) 13.56 ( 30.67%) 19.56
length=519, char=0: 271.67 (-1288.67%) 13.65 ( 30.24%) 19.56
length=520, char=0: 272.14 (-1291.04%) 13.65 ( 30.22%) 19.56
length=521, char=0: 272.66 (-1293.69%) 13.65 ( 30.23%) 19.56
length=522, char=0: 273.14 (-1296.13%) 13.65 ( 30.20%) 19.56
length=523, char=0: 273.64 (-1298.75%) 13.65 ( 30.23%) 19.56
length=524, char=0: 274.34 (-1302.16%) 13.66 ( 30.20%) 19.57
length=525, char=0: 274.64 (-1297.78%) 13.56 ( 30.99%) 19.65
length=526, char=0: 275.20 (-1300.04%) 13.56 ( 31.01%) 19.66
length=527, char=0: 275.66 (-1302.86%) 13.56 ( 30.99%) 19.65
length=1024, char=0: 524.46 (-2169.75%) 20.12 ( 12.92%) 23.11
length=1025, char=0: 525.14 (-2124.63%) 21.62 ( 8.40%) 23.61
length=1026, char=0: 525.59 (-2125.36%) 21.88 ( 7.37%) 23.62
length=1027, char=0: 525.98 (-2127.14%) 21.62 ( 8.46%) 23.62
length=1028, char=0: 526.68 (-2131.10%) 21.62 ( 8.42%) 23.61
length=1029, char=0: 527.10 (-2131.70%) 21.79 ( 7.73%) 23.62
length=1030, char=0: 527.54 (-2118.51%) 21.62 ( 9.10%) 23.78
length=1031, char=0: 527.98 (-2136.37%) 21.62 ( 8.43%) 23.61
length=1032, char=0: 528.70 (-2139.38%) 21.62 ( 8.43%) 23.61
length=1033, char=0: 529.25 (-2124.37%) 21.62 ( 9.11%) 23.79
length=1034, char=0: 529.48 (-2142.95%) 21.62 ( 8.43%) 23.61
length=1035, char=0: 530.11 (-2145.13%) 21.62 ( 8.44%) 23.61
length=1036, char=0: 530.76 (-2147.10%) 21.79 ( 7.73%) 23.62
length=1037, char=0: 531.03 (-2149.45%) 21.62 ( 8.42%) 23.61
length=1038, char=0: 531.64 (-2151.87%) 21.62 ( 8.42%) 23.61
length=1039, char=0: 531.99 (-2151.63%) 21.80 ( 7.75%) 23.63

* sysdeps/aarch64/memset-reg.h: New file. * sysdeps/aarch64/memset.S: Use it. (__memset): Rename to MEMSET macro. [ZVA_MACRO]: Use zva_macro. * sysdeps/aarch64/multiarch/Makefile (sysdep_routines): Add memset_generic and memset_falkor. * sysdeps/aarch64/multiarch/ifunc-impl-list.c (__libc_ifunc_impl_list): Add memset ifuncs. * sysdeps/aarch64/multiarch/init-arch.h (INIT_ARCH): New local variable zva_size. * sysdeps/aarch64/multiarch/memset.c: New file. * sysdeps/aarch64/multiarch/memset_generic.S: New file. * sysdeps/aarch64/multiarch/memset_falkor.S: New file. * sysdeps/aarch64/multiarch/rtld-memset.S: New file. * sysdeps/unix/sysv/linux/aarch64/cpu-features.c (DCZID_DZP_MASK): New macro. (DCZID_BS_MASK): Likewise. (init_cpu_features): Read and set zva_size. * sysdeps/unix/sysv/linux/aarch64/cpu-features.h (struct cpu_features): New member zva_size. |
||
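A sketch of the one-time dczid_el0 read (the real code lives in init_cpu_features; the masks follow the commit's DCZID_DZP_MASK/DCZID_BS_MASK):

```c
#include <stdint.h>

#define DCZID_DZP_MASK (1 << 4)  /* DC ZVA prohibited when set.  */
#define DCZID_BS_MASK  0xf       /* log2 of the block size in words.  */

static uint64_t zva_size;        /* Cached once at startup, in bytes.  */

static void
init_zva_size (void)
{
  uint64_t dczid;
  __asm__ ("mrs %0, dczid_el0" : "=r" (dczid));
  if ((dczid & DCZID_DZP_MASK) == 0)
    zva_size = 4ULL << (dczid & DCZID_BS_MASK);
}
```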
Adhemerval Zanella
|
58a813bf6e |
aarch64: Fix f{max,min}{f} build for GCC 4.9 and 5
GCC 4.9 and 5 do not generate a correct f{max,min}nm instruction for __builtin_{fmax,fmin}{f} without -ffinite-math-only. It is clearly a compiler issue, since the instruction can handle NaN and Inf correctly, and GCC 6+ does not show this issue. We could backport a fix to GCC 5, raise the minimum required GCC version for aarch64 (since the GCC 4.9 branch is now closed [1]), and/or add a configure check for this issue. However, I think -ffinite-math-only should be safe for these specific implementations and it is a simpler solution. Checked on aarch64-linux-gnu with GCC 5.3.1. * sysdeps/aarch64/fpu/Makefile (CFLAGS-s_fmax.c, CFLAGS-s_fmaxf.c, CFLAGS-s_fmin.c, CFLAGS-s_fminf.c): New rule: add -ffinite-math-only. [1] https://gcc.gnu.org/ml/gcc/2016-08/msg00010.html Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com> |
||
Adhemerval Zanella
|
06be6368da |
nptl: Define __PTHREAD_MUTEX_{NUSERS_AFTER_KIND,USE_UNION}
This patch adds two new internal defines to set the internal pthread_mutex_t layout required by the supported ABIs: 1. __PTHREAD_MUTEX_NUSERS_AFTER_KIND, which controls whether the __nusers field is defined before or after __kind. The preferred value is 0 for new ports, and it places __nusers before __kind. 2. __PTHREAD_MUTEX_USE_UNION, which controls whether the internal __spins and __list members are placed inside a union for linuxthreads compatibility. The preferred value is 0 for new ports, and it defines both fields without a union. This fixes the wrong offset of the __kind value on x86_64-linux-gnu-x32. Checked with a make check run-built-tests=no on all affected ABIs. [BZ #22298] * nptl/allocatestack.c (allocate_stack): Check if __PTHREAD_MUTEX_HAVE_PREV is non-zero, instead of whether __PTHREAD_MUTEX_HAVE_PREV is defined. * nptl/descr.h (pthread): Likewise. * nptl/nptl-init.c (__pthread_initialize_minimal_internal): Likewise. * nptl/pthread_create.c (START_THREAD_DEFN): Likewise. * sysdeps/nptl/fork.c (__libc_fork): Likewise. * sysdeps/nptl/pthread.h (PTHREAD_MUTEX_INITIALIZER): Likewise. * sysdeps/nptl/bits/thread-shared-types.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New defines. (__pthread_internal_list): Check __PTHREAD_MUTEX_USE_UNION instead of __WORDSIZE for internal layout. (__pthread_mutex_s): Check __PTHREAD_MUTEX_NUSERS_AFTER_KIND instead of __WORDSIZE for internal __nusers layout and __PTHREAD_MUTEX_USE_UNION instead of __WORDSIZE whether to use a union for __spins and __list fields. (__PTHREAD_MUTEX_HAVE_PREV): Define also for __PTHREAD_MUTEX_USE_UNION case. * sysdeps/aarch64/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New defines. * sysdeps/alpha/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/arm/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/hppa/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/ia64/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/m68k/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/microblaze/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/mips/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/nios2/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/powerpc/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/s390/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/sh/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/sparc/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/tile/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. * sysdeps/x86/nptl/bits/pthreadtypes-arch.h (__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): Likewise. Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> |
||
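A condensed sketch of how the first knob steers the layout (shape of the thread-shared-types.h change; fields past __kind elided):

```c
struct __pthread_mutex_s
{
  int __lock;
  unsigned int __count;
  int __owner;
#if !__PTHREAD_MUTEX_NUSERS_AFTER_KIND
  unsigned int __nusers;         /* Preferred layout for new ports.  */
#endif
  int __kind;
#if __PTHREAD_MUTEX_NUSERS_AFTER_KIND
  unsigned int __nusers;
#endif
  /* __spins and __list follow; they sit inside a union only when
     __PTHREAD_MUTEX_USE_UNION is 1.  */
};
```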
Adhemerval Zanella
|
dff91cd45e |
nptl: Add tests for internal pthread_mutex_t offsets
This patch adds a new build test to check the offsets of user-visible internal fields. Although currently the only field which is statically initialized to a non-zero value is pthread_mutex_t.__data.__kind, the tests also check the offsets of the __kind, __spins, __elision (if supported), and __list internal members. An internal header (pthread-offsets.h) is added to each major ABI with the reference values. Checked on x86_64-linux-gnu and with a build check for all affected ABIs (aarch64-linux-gnu, alpha-linux-gnu, arm-linux-gnueabihf, hppa-linux-gnu, i686-linux-gnu, ia64-linux-gnu, m68k-linux-gnu, microblaze-linux-gnu, mips64-linux-gnu, mips64-n32-linux-gnu, mips-linux-gnu, powerpc64le-linux-gnu, powerpc-linux-gnu, s390-linux-gnu, s390x-linux-gnu, sh4-linux-gnu, sparc64-linux-gnu, sparcv9-linux-gnu, tilegx-linux-gnu, tilegx-linux-gnu-x32, tilepro-linux-gnu, x86_64-linux-gnu, and x86_64-linux-x32). * nptl/pthreadP.h (ASSERT_PTHREAD_STRING, ASSERT_PTHREAD_INTERNAL_OFFSET): New macros. * nptl/pthread_mutex_init.c (__pthread_mutex_init): Add build-time checks for internal pthread_mutex_t offsets. * sysdeps/aarch64/nptl/pthread-offsets.h (__PTHREAD_MUTEX_NUSERS_OFFSET, __PTHREAD_MUTEX_KIND_OFFSET, __PTHREAD_MUTEX_SPINS_OFFSET, __PTHREAD_MUTEX_ELISION_OFFSET, __PTHREAD_MUTEX_LIST_OFFSET): New macros. * sysdeps/alpha/nptl/pthread-offsets.h: Likewise. * sysdeps/arm/nptl/pthread-offsets.h: Likewise. * sysdeps/hppa/nptl/pthread-offsets.h: Likewise. * sysdeps/i386/nptl/pthread-offsets.h: Likewise. * sysdeps/ia64/nptl/pthread-offsets.h: Likewise. * sysdeps/m68k/nptl/pthread-offsets.h: Likewise. * sysdeps/microblaze/nptl/pthread-offsets.h: Likewise. * sysdeps/mips/nptl/pthread-offsets.h: Likewise. * sysdeps/nios2/nptl/pthread-offsets.h: Likewise. * sysdeps/powerpc/nptl/pthread-offsets.h: Likewise. * sysdeps/s390/nptl/pthread-offsets.h: Likewise. * sysdeps/sh/nptl/pthread-offsets.h: Likewise. * sysdeps/sparc/nptl/pthread-offsets.h: Likewise. * sysdeps/tile/nptl/pthread-offsets.h: Likewise. * sysdeps/x86_64/nptl/pthread-offsets.h: Likewise. Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> |
||
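The build-time check boils down to a static assertion like this sketch (16 is the x86_64 value of the __kind offset; the real reference values live in the per-ABI pthread-offsets.h):

```c
#include <stddef.h>
#include <pthread.h>

_Static_assert (offsetof (pthread_mutex_t, __data.__kind) == 16,
                "__kind must stay at its ABI-mandated offset");
```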
Szabolcs Nagy
|
659ca26736 |
aarch64: optimize _dl_tlsdesc_dynamic fast path
Remove some load/store instructions from the dynamic tlsdesc resolver fast path. This gives around 20% faster tls access in dlopened shared libraries (assuming glibc ran out of static tls space). * sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_dynamic): Optimize. |
||
Szabolcs Nagy
|
91c5a366d8 |
aarch64: Remove barriers from TLS descriptor functions
Remove ldar synchronization and most lazy TLSDESC initialization related code. * sysdeps/aarch64/dl-machine.h (elf_machine_runtime_setup): Remove DT_TLSDESC_GOT initialization. * sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_return_lazy): Remove. (_dl_tlsdesc_resolve_rela): Likewise. (_dl_tlsdesc_resolve_hold): Likewise. (_dl_tlsdesc_undefweak): Remove ldar. (_dl_tlsdesc_dynamic): Likewise. * sysdeps/aarch64/dl-tlsdesc.h (_dl_tlsdesc_return_lazy): Remove. (_dl_tlsdesc_resolve_rela): Likewise. (_dl_tlsdesc_resolve_hold): Likewise. * sysdeps/aarch64/tlsdesc.c (_dl_tlsdesc_resolve_rela_fixup): Remove. (_dl_tlsdesc_resolve_hold_fixup): Likewise. (_dl_tlsdesc_resolve_rela): Likewise. (_dl_tlsdesc_resolve_hold): Likewise. |
||
Szabolcs Nagy
|
b7cf203b5c |
aarch64: Disable lazy symbol binding of TLSDESC
Always do TLS descriptor initialization at load time during relocation processing to avoid barriers at every TLS access. In non-dlopened shared libraries the overhead of tls access vs static global access is > 3x bigger when lazy initialization is used (_dl_tlsdesc_return_lazy) compared to bind-now (_dl_tlsdesc_return), so the barriers dominate tls access performance. TLSDESC relocs are in DT_JMPREL, which is processed at load time using elf_machine_lazy_rel, which is only supposed to do lightweight initialization using the DT_TLSDESC_PLT trampoline (the trampoline code jumps to the entry point in DT_TLSDESC_GOT which does the lazy tlsdesc initialization at runtime). This patch changes elf_machine_lazy_rel in aarch64 to do the symbol binding and initialization as if DF_BIND_NOW was set, so the non-lazy code path of elf/do-rel.h is replicated. The static linker could be changed to emit TLSDESC relocs in DT_REL*, which are processed non-lazily, but the goal of this patch is to always guarantee bind-now semantics, even if the binary was produced with an old linker, so the barriers can be dropped in tls descriptor functions. After this change the synchronizing ldar instructions can be dropped, as well as the lazy initialization machinery including the DT_TLSDESC_GOT setup. I believe this should be done on all targets, including ones where no barrier is needed for lazy initialization. There is very little gain in optimizing for a large number of symbolic tlsdesc relocations, which is an extremely uncommon case. And currently the tlsdesc entries are only read-only protected with -z now, and some hardenings against writable JUMPSLOT relocs don't work for TLSDESC, so they are a security hazard. (But to fix that, the static linker has to be changed.) * sysdeps/aarch64/dl-machine.h (elf_machine_lazy_rel): Do symbol binding and initialization non-lazily for R_AARCH64_TLSDESC. |
||
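For context, a TLS descriptor is a two-word GOT object (shape of the aarch64 dl-tlsdesc.h): the access sequence calls through entry, so once entry and arg are fully written during load-time relocation, no ldar barrier is needed on the read side:

```c
#include <stddef.h>

struct tlsdesc
{
  ptrdiff_t (*entry) (struct tlsdesc *);  /* Called by the TLS access.  */
  void *arg;                              /* Argument for ENTRY.  */
};
```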
Szabolcs Nagy
|
be080b6c14 |
aarch64: Add missing math Makefile for recent commit
Without -fno-math-errno, the builtins just do a call instead of inlining a single instruction. |
||
Michael Collison
|
5062680c60 |
aarch64: Implement math acceleration via builtins
This patch converts asm statements into builtins for AArch64. As an example for the file sysdeps/aarch64/fpu/s_ceil.c, we convert the function from

  double
  __ceil (double x)
  {
    double result;
    asm ("frintp\t%d0, %d1" : "=w" (result) : "w" (x));
    return result;
  }

into

  double
  __ceil (double x)
  {
    return __builtin_ceil (x);
  }

Tested on aarch64-linux-gnu with gcc-4.9.4 and gcc-6. * sysdeps/aarch64/fpu/e_sqrt.c (__ieee754_sqrt): Replace asm statements with __builtin_sqrt. * sysdeps/aarch64/fpu/e_sqrtf.c (__ieee754_sqrtf): Replace asm statements with __builtin_sqrtf. * sysdeps/aarch64/fpu/s_ceil.c (__ceil): Replace asm statements with __builtin_ceil. * sysdeps/aarch64/fpu/s_ceilf.c (__ceilf): Replace asm statements with __builtin_ceilf. * sysdeps/aarch64/fpu/s_floor.c (__floor): Replace asm statements with __builtin_floor. * sysdeps/aarch64/fpu/s_floorf.c (__floorf): Replace asm statements with __builtin_floorf. * sysdeps/aarch64/fpu/s_fma.c (__fma): Replace asm statements with __builtin_fma. * sysdeps/aarch64/fpu/s_fmaf.c (__fmaf): Replace asm statements with __builtin_fmaf. * sysdeps/aarch64/fpu/s_fmax.c (__fmax): Replace asm statements with __builtin_fmax. * sysdeps/aarch64/fpu/s_fmaxf.c (__fmaxf): Replace asm statements with __builtin_fmaxf. * sysdeps/aarch64/fpu/s_fmin.c (__fmin): Replace asm statements with __builtin_fmin. * sysdeps/aarch64/fpu/s_fminf.c (__fminf): Replace asm statements with __builtin_fminf. * sysdeps/aarch64/fpu/s_frint.c: Delete file. * sysdeps/aarch64/fpu/s_frintf.c: Delete file. * sysdeps/aarch64/fpu/s_llrint.c (__llrint): Replace asm statements with __builtin_rint and conversion to int. * sysdeps/aarch64/fpu/s_llrintf.c (__llrintf): Likewise. * sysdeps/aarch64/fpu/s_llround.c (__llround): Replace asm statements with __builtin_llround. * sysdeps/aarch64/fpu/s_llroundf.c (__llroundf): Likewise. * sysdeps/aarch64/fpu/s_lrint.c (__lrint): Replace asm statements with __builtin_rint and conversion to long int. * sysdeps/aarch64/fpu/s_lrintf.c (__lrintf): Likewise. * sysdeps/aarch64/fpu/s_lround.c (__lround): Replace asm statements with __builtin_lround. * sysdeps/aarch64/fpu/s_lroundf.c (__lroundf): Replace asm statements with __builtin_lroundf. * sysdeps/aarch64/fpu/s_nearbyint.c (__nearbyint): Replace asm statements with __builtin_nearbyint. * sysdeps/aarch64/fpu/s_nearbyintf.c (__nearbyintf): Replace asm statements with __builtin_nearbyintf. * sysdeps/aarch64/fpu/s_rint.c (__rint): Replace asm statements with __builtin_rint. * sysdeps/aarch64/fpu/s_rintf.c (__rintf): Replace asm statements with __builtin_rintf. * sysdeps/aarch64/fpu/s_round.c (__round): Replace asm statements with __builtin_round. * sysdeps/aarch64/fpu/s_roundf.c (__roundf): Replace asm statements with __builtin_roundf. * sysdeps/aarch64/fpu/s_trunc.c (__trunc): Replace asm statements with __builtin_trunc. * sysdeps/aarch64/fpu/s_truncf.c (__truncf): Replace asm statements with __builtin_truncf. * sysdeps/aarch64/fpu/Makefile: Build e_sqrt[f].c with -fno-math-errno. |
||
Szabolcs Nagy
|
a68ba2f3cd |
[AARCH64] Rewrite elf_machine_load_address using _DYNAMIC symbol
This patch rewrites the aarch64 elf_machine_load_address to use the special _DYNAMIC symbol instead of _dl_start. The static address of the _DYNAMIC symbol is stored in the first GOT entry. Here is the change which makes this solution work (part of binutils 2.24): https://sourceware.org/ml/binutils/2013-06/msg00248.html The i386 and x86_64 targets use the same method as well. The original implementation relies on the R_AARCH64_ABS32 relocation being resolved at link time and the static address fitting in 32 bits. However, in LP64 the address is normally defined to be 64 bit. Here is the C version, which should be portable in all cases. * sysdeps/aarch64/dl-machine.h (elf_machine_load_address): Use _DYNAMIC symbol to calculate load address. |
||
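A C sketch of the new calculation (glibc-internal context assumed: ElfW comes from the dynamic linker's headers, and elf_machine_dynamic is the companion dl-machine.h helper that returns GOT[0], i.e. the link-time address of _DYNAMIC):

```c
static inline ElfW(Addr)
elf_machine_load_address (void)
{
  extern ElfW(Dyn) _DYNAMIC[];
  /* Run-time address minus link-time address gives the load bias.  */
  return (ElfW(Addr)) &_DYNAMIC - elf_machine_dynamic ();
}
```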
Siddhesh Poyarekar
|
dd5bc7f1b3 |
aarch64: Optimized implementation of memmove for Qualcomm Falkor
This is an optimized memmove implementation for the Qualcomm Falkor processor core. Due to the way the falkor memcpy needs to be written, code cannot easily be shared between memmove and memcpy as in the case of other aarch64 memcpy implementations, which is why this routine is separate. The underlying principle is the same as that of memcpy, where it tries to use registers with the same lower 4 bits for fetching the same stream, thus optimizing hardware prefetcher performance. The memcpy copy loop copies 64 bytes at a time using the same register pair, since that's the way to train the hardware prefetcher on the falkor core. memmove cannot quite do that since it needs to avoid overlaps, so it does the next best thing, i.e. a 32 byte loop with a 32 byte end (prefetching a loop ahead to account for overlapping locations) with register pairs that alias so that they hit the same prefetcher. Due to this difference in loop size, they currently have to be separate implementations, but efforts are underway to get memmove to fall back into memcpy whenever it can without simply duplicating all of the code. Performance: The routine fares around 20-25% better than the generic memmove for most medium to large sizes (i.e. > 128 bytes) on the new walking memmove benchmark (memmove-walk), with an unexplained regression between 1K and 2K. The minor regression is something worth looking into for us, but the remaining gains are significant enough that we would like this included upstream while we look into the cause for the regression. Here is a snippet of the numbers as generated from the microbenchmark by the compare_strings script. Comparisons are against __memmove_generic:

Function: memmove
Variant: walk
                __memmove_thunderx        __memmove_falkor     __memmove_generic
========================================================================================================================
<snip>
length=16384: 12508800.00 ( 6.09%) 11486800.00 ( 13.76%) 13319600.00
length=16400: 13614200.00 ( -0.67%) 11585000.00 ( 14.33%) 13523600.00
length=16385: 13448400.00 ( 0.10%) 11732700.00 ( 12.84%) 13461200.00
length=16399: 13594100.00 ( -0.22%) 11859600.00 ( 12.57%) 13564400.00
length=16386: 13211600.00 ( 1.13%) 11503800.00 ( 13.91%) 13362400.00
length=16398: 13218600.00 ( 2.12%) 11573200.00 ( 14.30%) 13504700.00
length=16387: 13510900.00 ( -0.37%) 11744200.00 ( 12.76%) 13461300.00
length=16397: 13603700.00 ( -0.15%) 11878200.00 ( 12.55%) 13583200.00
length=16388: 13461700.00 ( -0.13%) 11558000.00 ( 14.03%) 13444100.00
length=16396: 13517500.00 ( -0.03%) 11561300.00 ( 14.45%) 13513900.00
length=16389: 13534100.00 ( 0.17%) 11756800.00 ( 13.28%) 13556900.00
length=16395: 13585600.00 ( 0.11%) 11791800.00 ( 13.30%) 13601200.00
length=16390: 13480100.00 ( -0.13%) 11685500.00 ( 13.20%) 13462100.00
length=16394: 13529900.00 ( -0.23%) 11549800.00 ( 14.43%) 13498200.00
length=16391: 13595400.00 ( -0.26%) 11768200.00 ( 13.22%) 13560600.00
length=16393: 13567000.00 ( 0.20%) 11779700.00 ( 13.35%) 13594700.00
length=32768: 71308800.00 ( -6.53%) 50220800.00 ( 24.98%) 66939200.00
length=32784: 72100800.00 (-11.55%) 50114100.00 ( 22.47%) 64636300.00
length=32769: 71767000.00 ( -7.10%) 51238400.00 ( 23.54%) 67010000.00
length=32783: 70113700.00 (-40.95%) 51129000.00 ( -2.78%) 49744400.00
length=32770: 71367600.00 ( -6.52%) 50244700.00 ( 25.01%) 67000900.00
length=32782: 64366700.00 ( 4.71%) 50101400.00 ( 25.83%) 67545600.00
length=32771: 71440100.00 ( -6.51%) 51263900.00 ( 23.57%) 67074900.00
length=32781: 66993000.00 ( 0.34%) 51108300.00 ( 23.97%) 67220300.00
length=32772: 71443900.00 (-60.50%) 50062100.00 (-12.47%) 44512600.00
length=32780: 71759100.00 ( -6.58%) 50263200.00 ( 25.35%) 67328600.00
length=32773: 71714900.00 (-33.21%) 51076600.00 ( 5.12%) 53835400.00
length=32779: 71756900.00 ( -6.56%) 51290800.00 ( 23.83%) 67337800.00
length=32774: 59689300.00 (-34.55%) 50068400.00 (-12.86%) 44363300.00
length=32778: 71847500.00 (-18.20%) 50084100.00 ( 17.61%) 60786500.00
length=32775: 71599300.00 ( -6.54%) 51278200.00 ( 23.70%) 67204800.00
length=32777: 71862900.00 (-60.85%) 51094000.00 (-14.36%) 44677900.00
length=65536: 282848000.00 ( -6.60%) 199187000.00 ( 24.93%) 265325000.00
length=65552: 243285000.00 (-41.61%) 198512000.00 (-15.54%) 171805000.00
length=65537: 255415000.00 (-23.47%) 202499000.00 ( 2.11%) 206858000.00
length=65551: 280122000.00 (-62.95%) 203349000.00 (-18.29%) 171911000.00
length=65538: 283676000.00 (-14.46%) 198368000.00 ( 19.96%) 247848000.00
length=65550: 275566000.00 (-51.76%) 198494000.00 ( -9.31%) 181581000.00
length=65539: 283699000.00 ( -6.58%) 203453000.00 ( 23.57%) 266195000.00
length=65549: 286572000.00 ( -6.65%) 202607000.00 ( 24.60%) 268712000.00
length=65540: 283710000.00 ( -6.59%) 199161000.00 ( 25.17%) 266160000.00
length=65548: 237573000.00 ( 11.48%) 198462000.00 ( 26.06%) 268395000.00
length=65541: 284150000.00 ( -6.58%) 203273000.00 ( 23.75%) 266600000.00
length=65547: 286250000.00 ( -6.70%) 202594000.00 ( 24.48%) 268263000.00
length=65542: 284167000.00 ( -6.60%) 199122000.00 ( 25.31%) 266584000.00
length=65546: 285656000.00 ( -6.59%) 198443000.00 ( 25.95%) 268002000.00
length=65543: 284600000.00 ( -6.58%) 203247000.00 ( 23.89%) 267030000.00
length=65545: 285665000.00 ( -6.40%) 202575000.00 ( 24.55%) 268472000.00
<snip>

* sysdeps/aarch64/multiarch/Makefile (sysdep_routines): Add memmove_falkor. * sysdeps/aarch64/multiarch/ifunc-impl-list.c (__libc_ifunc_impl_list): Likewise. * sysdeps/aarch64/multiarch/memmove.c: Likewise. * sysdeps/aarch64/multiarch/memmove_falkor.S: New file. |
||
Szabolcs Nagy
|
db4f87bad4 |
aarch64: don't use MIN in dl-machine.h
MIN is used, but param.h may not be included, so expand its single use inline. * sysdeps/aarch64/dl-machine.h (elf_machine_rela): Expand MIN. |
||
Szabolcs Nagy
|
b2f03cf3a4 |
AArch64: update libm-test-ulps
Update for new expf and logf. * sysdeps/aarch64/libm-test-ulps: Update. |
||
Szabolcs Nagy
|
72aa623345 |
Optimized generic expf and exp2f with wrappers
Based on new expf and exp2f code from https://github.com/ARM-software/optimized-routines/

with wrapper on aarch64:
  expf reciprocal-throughput: 2.3x faster
  expf latency: 1.7x faster
without wrapper on aarch64:
  expf reciprocal-throughput: 3.3x faster
  expf latency: 1.7x faster
without wrapper on aarch64:
  exp2f reciprocal-throughput: 2.8x faster
  exp2f latency: 1.3x faster
libm.so size on aarch64:
  .text size: -152 bytes
  .rodata size: -1740 bytes
expf/exp2f worst case nearest rounding error: 0.502 ulp
worst case non-nearest rounding error: 1 ulp

Error checks are inline and errno setting is in separate tail-called functions, but the wrappers are kept in this patch to handle the _LIB_VERSION==_SVID_ case. (So e.g. errno is set twice for expf calls and once for __expf_finite calls on targets where the new code is used.) Double precision arithmetic is used, which is expected to be faster on most targets (including soft-float) than using single precision, and it is easier to get good precision results with it. Const data is kept in a separate translation unit, which complicates maintenance a bit, but is expected to give good code for literal loads on most targets and allows sharing data across expf, exp2f and powf. (This data is disabled on i386, m68k and ia64, which have their own expf, exp2f and powf code.) Some details may need target-specific tweaks:
- best convert and round to int operation in the arg reduction may be different across targets.
- code was optimized on fma targets; optimal polynomial eval may be different without fma.
- gcc does not always generate good code for fp bit representation access via unions, or it may be inherently slow on some targets.
The libm-test-ulps will need adjustment because:
- The argument reduction ideally uses nearest rounded rint, but that is not efficient on most targets, so the polynomial can get evaluated on a wider interval in non-nearest rounding mode, making 1 ulp errors common in that case.
- The polynomial is evaluated such that it may have 1 ulp error on negative tiny inputs with upward rounding.
* math/Makefile (type-float-routines): Add math_errf and e_exp2f_data. * sysdeps/aarch64/fpu/math_private.h (TOINT_INTRINSICS): Define. (roundtoint, converttoint): Likewise. * sysdeps/ieee754/flt-32/e_expf.c: New implementation. * sysdeps/ieee754/flt-32/e_exp2f.c: New implementation. * sysdeps/ieee754/flt-32/e_exp2f_data.c: New file. * sysdeps/ieee754/flt-32/math_config.h: New file. * sysdeps/ieee754/flt-32/math_errf.c: New file. * sysdeps/ieee754/flt-32/t_exp2f.h: Remove. * sysdeps/i386/fpu/e_exp2f_data.c: New file. * sysdeps/i386/fpu/math_errf.c: New file. * sysdeps/ia64/fpu/e_exp2f_data.c: New file. * sysdeps/ia64/fpu/math_errf.c: New file. * sysdeps/m68k/m680x0/fpu/e_exp2f_data.c: New file. * sysdeps/m68k/m680x0/fpu/math_errf.c: New file. |
||
Wilco Dijkstra
|
ca3a382ea3 |
Enable unwind info in libc-start.c and backtrace.c
Add unwind info to __libc_start_main so that unwinding continues one extra level to _start. Similarly add unwind info to backtrace. Given many targets require this, do this in a general way. * csu/Makefile: Add -funwind-tables to libc-start.c. * debug/Makefile: Add -funwind-tables to backtrace.c. * sysdeps/aarch64/Makefile: Remove CFLAGS-backtrace.c. * sysdeps/arm/Makefile: Likewise. * sysdeps/i386/Makefile: Likewise. * sysdeps/m68k/Makefile: Likewise. * sysdeps/mips/Makefile: Likewise. * sysdeps/nios2/Makefile: Likewise. * sysdeps/sh/Makefile: Likewise. * sysdeps/sparc/Makefile: Likewise. |
||
Wang Boshi
|
6cd380dd36 |
AArch64: use movz/movk instead of literal pools in start.S
eXecute-Only Memory (XOM) is a protection mechanism against some ROP attacks. XOM marks the code as executable and unreadable, so any access to data, such as literal pools, in the code section causes a fault under XOM. The compiler can disable literal pools for C source files, but not for assembly files, so I use movz/movk instead of literal pools in start.S for XOM. I add a MOVL macro built on movz/movk instructions, like the movl pseudo-instruction in armasm, and use the macro instead of literal pools. * sysdeps/aarch64/start.S: Use MOVL instead of literal pools. * sysdeps/aarch64/sysdep.h (MOVL): Add MOVL macro. |
||
Joseph Myers
|
5a80d39d0d |
Obsolete pow10 functions.
This patch obsoletes the pow10, pow10f and pow10l functions (makes them into compat symbols, not available for new ports or static linking). The exp10 names for these functions are standardized (in TS 18661-4) and were added in the same glibc version (2.1) as pow10, so source code can change to use them without any loss of portability. Since pow10 is deliberately not provided for _Float128, only exp10, this slightly simplifies moving to the new wrapper templates in the !LIBM_SVID_COMPAT case, by avoiding needing to arrange for pow10, pow10f and pow10l to be defined by those templates. Tested for x86_64, and with build-many-glibcs.py. * manual/math.texi (pow10): Do not document. (pow10f): Likewise. (pow10l): Likewise. * math/bits/mathcalls.h [__USE_GNU] (pow10): Do not declare. * math/bits/math-finite.h [__USE_GNU] (pow10): Likewise. * math/libm-test-exp10.inc (pow10_test): Remove. (do_test): Do not call pow10. * math/w_exp10_compat.c (pow10): Make into compat symbol. [NO_LONG_DOUBLE] (pow10l): Likewise. * math/w_exp10f_compat.c (pow10f): Likewise. * math/w_exp10l_compat.c (pow10l): Likewise. * sysdeps/ia64/fpu/e_exp10.S: Include <shlib-compat.h>. (pow10): Make into compat symbol. * sysdeps/ia64/fpu/e_exp10f.S: Include <shlib-compat.h>. (pow10f): Make into compat symbol. * sysdeps/ia64/fpu/e_exp10l.S: Include <shlib-compat.h>. (pow10l): Make into compat symbol. * sysdeps/ieee754/ldbl-opt/Makefile (libnldbl-calls): Remove pow10. (CFLAGS-nldbl-pow10.c): Remove variable. * sysdeps/ieee754/ldbl-opt/nldbl-pow10.c: Remove file. * sysdeps/ieee754/ldbl-opt/w_exp10_compat.c (pow10l): Condition on [SHLIB_COMPAT (libm, GLIBC_2_1, GLIBC_2_27)]. * sysdeps/ieee754/ldbl-opt/w_exp10l_compat.c (compat_symbol): Undefine and redefine. (pow10l): Make into compat symbol. * sysdeps/aarch64/libm-test-ulps: Remove pow10 ulps. * sysdeps/alpha/fpu/libm-test-ulps: Likewise. * sysdeps/arm/libm-test-ulps: Likewise. * sysdeps/hppa/fpu/libm-test-ulps: Likewise. * sysdeps/i386/fpu/libm-test-ulps: Likewise. * sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise. * sysdeps/microblaze/libm-test-ulps: Likewise. * sysdeps/mips/mips32/libm-test-ulps: Likewise. * sysdeps/mips/mips64/libm-test-ulps: Likewise. * sysdeps/nios2/libm-test-ulps: Likewise. * sysdeps/powerpc/fpu/libm-test-ulps: Likewise. * sysdeps/powerpc/nofpu/libm-test-ulps: Likewise. * sysdeps/s390/fpu/libm-test-ulps: Likewise. * sysdeps/sh/libm-test-ulps: Likewise. * sysdeps/sparc/fpu/libm-test-ulps: Likewise. * sysdeps/tile/libm-test-ulps: Likewise. * sysdeps/x86_64/fpu/libm-test-ulps: Likewise. |
||
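Porting is mechanical: replace each pow10 spelling with the exp10 one. A minimal before/after using only documented GNU API (at this point in history exp10 still needs _GNU_SOURCE):

```c
#define _GNU_SOURCE
#include <math.h>
#include <stdio.h>

int
main (void)
{
  /* Old spelling: pow10 (3.0) -- now a compat symbol only.  */
  printf ("%g\n", exp10 (3.0));    /* 1000 */
  printf ("%g\n", exp10f (2.0f));  /* 100  */
  printf ("%Lg\n", exp10l (1.0L)); /* 10   */
  return 0;
}
```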
Steve Ellcey
|
d9ff799a5b |
ILP32 math changes
* sysdeps/aarch64/fpu/s_llrint.c (OREG_SIZE): New macro. * sysdeps/aarch64/fpu/s_llround.c (OREG_SIZE): Likewise. * sysdeps/aarch64/fpu/s_llrintf.c (OREGS, IREGS): Remove. (IREG_SIZE, OREG_SIZE): New macros. * sysdeps/aarch64/fpu/s_llroundf.c (OREGS, IREGS): Remove. (IREG_SIZE, OREG_SIZE): New macros. * sysdeps/aarch64/fpu/s_lrintf.c (IREGS): Remove. (IREG_SIZE): New macro. * sysdeps/aarch64/fpu/s_lroundf.c (IREGS): Remove. (IREG_SIZE): New macro. * sysdeps/aarch64/fpu/s_lrint.c (get-rounding-mode.h, stdint.h): New includes. (IREG_SIZE, OREG_SIZE): Initialize if not already set. (OREGS, IREGS): Set based on IREG_SIZE and OREG_SIZE. (__CONCATX): Handle exceptions correctly on large values that may set FE_INVALID. * sysdeps/aarch64/fpu/s_lround.c (IREG_SIZE, OREG_SIZE): Initialize if not already set. (OREGS, IREGS): Set based on IREG_SIZE and OREG_SIZE. |
||
Steve Ellcey
|
9eee633b68 |
Change argument type passed to ifunc resolvers
* sysdeps/aarch64/dl-irel.h (elf_ifunc_invoke): Change argument type in resolver call. |
||
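As a hedged sketch of what a resolver on this path looks like: the dynamic linker invokes it with a hardware-capability word and installs the returned function pointer. Only the overall shape (a GCC ifunc attribute plus a resolver) is standard; the function names and the capability bit below are hypothetical:

```c
#include <stddef.h>
#include <stdint.h>

typedef void *(*memcpy_fn) (void *, const void *, size_t);

/* Two hypothetical implementations.  */
static void *
my_memcpy_generic (void *d, const void *s, size_t n)
{
  unsigned char *p = d;
  const unsigned char *q = s;
  while (n-- > 0)
    *p++ = *q++;
  return d;
}

static void *
my_memcpy_fast (void *d, const void *s, size_t n)
{
  return my_memcpy_generic (d, s, n);  /* stand-in for a tuned version */
}

/* The resolver runs once, at relocation time, with a capability word
   passed by the loader; bit 3 here is purely illustrative.  */
static memcpy_fn
resolve_memcpy (uint64_t hwcap)
{
  return (hwcap & (UINT64_C (1) << 3)) ? my_memcpy_fast : my_memcpy_generic;
}

void *my_memcpy (void *, const void *, size_t)
     __attribute__ ((ifunc ("resolve_memcpy")));
```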
Florian Weimer
|
17e00cc69e | elf: Remove internal_function attribute | ||
Steve Ellcey
|
5a706f649d |
aarch64: Use PTR_REG macro to fix ILP32 bug and make code consistent
* sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_dynamic): Use PTR_REG macro in cmp instruction. |
||
Wilco Dijkstra
|
922369032c |
[AArch64] Optimized memcmp.
This is an optimized memcmp for AArch64. This is a complete rewrite using a different algorithm. The previous version split into cases where both inputs were aligned, where the inputs were mutually aligned, and an unaligned case handled with a byte loop. The new version combines all these cases, while small inputs of less than 8 bytes are handled separately. This allows the main code to be sped up using unaligned loads since there are now at least 8 bytes to be compared. After the first 8 bytes, align the first input. This ensures each iteration does at most one unaligned access and mutually aligned inputs behave as aligned. After the main loop, process the last 8 bytes using unaligned accesses. This improves performance of (mutually) aligned cases by 25% and unaligned by >500% (more than 6 times faster) on large inputs. * sysdeps/aarch64/memcmp.S (memcmp): Rewrite of optimized memcmp. |
||
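A hedged C rendering of that strategy (the real routine is AArch64 assembly): small inputs byte by byte, then 8-byte unaligned loads, with the tail handled by a final, possibly overlapping 8-byte load. The memcpy-based load is the portable spelling of an unaligned access:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

static uint64_t
load64 (const unsigned char *p)
{
  uint64_t v;
  memcpy (&v, p, sizeof v);     /* compiles to a single unaligned load */
  return v;
}

/* Find the first differing byte at or after index i; callers only use
   this when a differing byte is known to exist.  */
static int
bytediff (const unsigned char *p, const unsigned char *q, size_t i)
{
  while (p[i] == q[i])
    i++;
  return p[i] < q[i] ? -1 : 1;
}

int
memcmp_sketch (const void *a, const void *b, size_t n)
{
  const unsigned char *p = a, *q = b;
  if (n < 8)                          /* small inputs handled separately */
    {
      for (size_t i = 0; i < n; i++)
        if (p[i] != q[i])
          return p[i] < q[i] ? -1 : 1;
      return 0;
    }
  for (size_t i = 0; i + 8 <= n; i += 8)
    if (load64 (p + i) != load64 (q + i))
      return bytediff (p, q, i);
  if (load64 (p + n - 8) != load64 (q + n - 8))
    return bytediff (p, q, n - 8);    /* overlapping tail load */
  return 0;
}
```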
Siddhesh Poyarekar
|
0e02b5107e | memcpy_falkor: Fix code style in comments | ||
Siddhesh Poyarekar
|
36ada5f681 |
aarch64: Optimized memcpy for Qualcomm Falkor processor
This is an optimized implementation of the memcpy routine that gives a significant gain in performance for all sizes of copies on the Qualcomm Falkor processor. A detailed rationale of the implementation is written in a comment in the patch. This implementation improves time for copies up to 128 bytes by up to 15% and for larger copies by up to 35% in the glibc microbenchmark. The memcpy-random benchmark sees improvements in all sizes in the range of 13%-18%. Here are the full numbers extracted from the glibc microbenchmark using the commands: ../benchtests/scripts/compare_strings.py benchtests/bench-memcpy.out \ ../benchtests/scripts/benchout_strings.schema.json \ -base=__memcpy_generic length align1 align2 ../benchtests/scripts/compare_strings.py benchtests/bench-memcpy-large.out \ ../benchtests/scripts/benchout_strings.schema.json \ -base=__memcpy_generic length align1 align2 ../benchtests/scripts/compare_strings.py benchtests/bench-memcpy-random.out \ ../benchtests/scripts/benchout_strings.schema.json \ -base=__memcpy_generic max-size Function: memcpy __memcpy_thunderx __memcpy_falkor __memcpy_generic Variant: default ================================================================================ length=1,align1=0,align2=0: 33.59 (-115.00%) 15.62 (0.00%) 15.62 length=1,align1=0,align2=0: 16.41 (-10.53%) 14.06 (5.26%) 14.84 length=1,align1=0,align2=0: 14.84 (0.00%) 14.84 (0.00%) 14.84 length=1,align1=0,align2=0: 15.62 (-5.26%) 14.06 (5.26%) 14.84 length=2,align1=0,align2=0: 15.62 (-5.26%) 14.06 (5.26%) 14.84 length=2,align1=1,align2=0: 15.62 (-5.26%) 14.06 (5.26%) 14.84 length=2,align1=0,align2=1: 14.84 (0.00%) 14.06 (5.26%) 14.84 length=2,align1=1,align2=1: 14.84 (-5.56%) 14.06 (0.00%) 14.06 length=4,align1=0,align2=0: 14.06 (0.00%) 14.06 (0.00%) 14.06 length=4,align1=2,align2=0: 14.06 (-5.88%) 14.06 (-5.88%) 13.28 length=4,align1=0,align2=2: 14.06 (0.00%) 14.06 (0.00%) 14.06 length=4,align1=2,align2=2: 14.06 (-5.88%) 14.06 (-5.88%) 13.28 length=8,align1=0,align2=0: 14.84 (-5.56%) 13.28 (5.56%) 14.06 length=8,align1=3,align2=0: 14.06 (0.00%) 13.28 (5.56%) 14.06 length=8,align1=0,align2=3: 13.28 (0.00%) 13.28 (0.00%) 13.28 length=8,align1=3,align2=3: 13.28 (-6.25%) 13.28 (-6.25%) 12.50 length=16,align1=0,align2=0: 13.28 (0.00%) 13.28 (0.00%) 13.28 length=16,align1=4,align2=0: 13.28 (0.00%) 12.50 (5.88%) 13.28 length=16,align1=0,align2=4: 13.28 (0.00%) 13.28 (0.00%) 13.28 length=16,align1=4,align2=4: 13.28 (-6.25%) 12.50 (0.00%) 12.50 length=32,align1=0,align2=0: 14.06 (0.00%) 12.50 (11.11%) 14.06 length=32,align1=5,align2=0: 13.28 (0.00%) 12.50 (5.88%) 13.28 length=32,align1=0,align2=5: 14.06 (-5.88%) 12.50 (5.88%) 13.28 length=32,align1=5,align2=5: 14.06 (-5.88%) 12.50 (5.88%) 13.28 length=64,align1=0,align2=0: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=64,align1=6,align2=0: 13.28 (0.00%) 13.28 (0.00%) 13.28 length=64,align1=0,align2=6: 14.06 (5.26%) 14.06 (5.26%) 14.84 length=64,align1=6,align2=6: 14.84 (-11.77%) 14.06 (-5.88%) 13.28 length=128,align1=0,align2=0: 17.19 (-4.76%) 14.84 (9.52%) 16.41 length=128,align1=7,align2=0: 16.41 (4.55%) 15.62 (9.09%) 17.19 length=128,align1=0,align2=7: 16.41 (0.00%) 14.06 (14.29%) 16.41 length=128,align1=7,align2=7: 16.41 (4.55%) 15.62 (9.09%) 17.19 length=256,align1=0,align2=0: 21.88 (-3.70%) 21.09 (0.00%) 21.09 length=256,align1=8,align2=0: 21.09 (-3.85%) 21.09 (-3.85%) 20.31 length=256,align1=0,align2=8: 20.31 (-4.00%) 20.31 (-4.00%) 19.53 length=256,align1=8,align2=8: 21.88 (-7.69%) 20.31 (0.00%) 20.31 length=512,align1=0,align2=0: 28.91 
(-2.78%) 28.91 (-2.78%) 28.12 length=512,align1=9,align2=0: 30.47 (-2.63%) 30.47 (-2.63%) 29.69 length=512,align1=0,align2=9: 29.69 (0.00%) 29.69 (0.00%) 29.69 length=512,align1=9,align2=9: 28.12 (-2.86%) 28.12 (-2.86%) 27.34 length=1024,align1=0,align2=0: 44.53 (0.00%) 44.53 (0.00%) 44.53 length=1024,align1=10,align2=0: 50.00 (0.00%) 50.00 (0.00%) 50.00 length=1024,align1=0,align2=10: 49.22 (1.56%) 50.78 (-1.56%) 50.00 length=1024,align1=10,align2=10: 44.53 (-1.79%) 43.75 (0.00%) 43.75 length=2048,align1=0,align2=0: 77.34 (-1.02%) 76.56 (0.00%) 76.56 length=2048,align1=11,align2=0: 89.84 (0.00%) 89.84 (0.00%) 89.84 length=2048,align1=0,align2=11: 89.84 (0.00%) 89.84 (0.00%) 89.84 length=2048,align1=11,align2=11: 75.78 (0.00%) 75.78 (0.00%) 75.78 length=4096,align1=0,align2=0: 141.41 (-0.56%) 140.62 (0.00%) 140.62 length=4096,align1=12,align2=0: 171.09 (-0.46%) 170.31 (0.00%) 170.31 length=4096,align1=0,align2=12: 170.31 (0.00%) 170.31 (0.00%) 170.31 length=4096,align1=12,align2=12: 140.62 (0.00%) 140.62 (0.00%) 140.62 length=8192,align1=0,align2=0: 278.91 (-0.28%) 275.78 (0.84%) 278.12 length=8192,align1=13,align2=0: 338.28 (0.23%) 335.94 (0.92%) 339.06 length=8192,align1=0,align2=13: 338.28 (0.00%) 455.47 (-34.64%) 338.28 length=8192,align1=13,align2=13: 278.12 (-0.28%) 275.78 (0.56%) 277.34 length=16384,align1=0,align2=0: 535.94 (-0.15%) 531.25 (0.73%) 535.16 length=16384,align1=14,align2=0: 659.38 (0.12%) 659.38 (0.12%) 660.16 length=16384,align1=0,align2=14: 659.38 (0.00%) 657.03 (0.36%) 659.38 length=16384,align1=14,align2=14: 535.16 (0.44%) 532.81 (0.87%) 537.50 length=32768,align1=0,align2=0: 1260.94 (10.68%) 1121.88 (20.53%) 1411.72 length=32768,align1=15,align2=0: 1368.75 (10.02%) 1376.56 (9.50%) 1521.09 length=32768,align1=0,align2=15: 1333.59 (10.91%) 1373.44 (8.25%) 1496.88 length=32768,align1=15,align2=15: 1256.25 (13.96%) 1125.78 (22.90%) 1460.16 length=65536,align1=0,align2=0: 2853.91 (30.11%) 2589.06 (36.60%) 4083.59 length=65536,align1=16,align2=0: 2850.00 (30.14%) 2589.84 (36.52%) 4079.69 length=65536,align1=0,align2=16: 2853.12 (30.60%) 2589.84 (37.00%) 4110.94 length=65536,align1=16,align2=16: 2850.78 (30.07%) 2589.06 (36.49%) 4076.56 length=0,align1=0,align2=0: 15.62 (-5.26%) 16.41 (-10.53%) 14.84 length=0,align1=0,align2=0: 14.84 (-5.56%) 14.84 (-5.56%) 14.06 length=0,align1=0,align2=0: 14.84 (0.00%) 14.84 (0.00%) 14.84 length=0,align1=0,align2=0: 16.41 (-16.67%) 14.84 (-5.56%) 14.06 length=1,align1=0,align2=0: 15.62 (4.76%) 15.62 (4.76%) 16.41 length=1,align1=1,align2=0: 15.62 (0.00%) 14.84 (5.00%) 15.62 length=1,align1=0,align2=1: 14.84 (0.00%) 14.84 (0.00%) 14.84 length=1,align1=1,align2=1: 14.84 (0.00%) 14.06 (5.26%) 14.84 length=2,align1=0,align2=0: 14.84 (0.00%) 14.06 (5.26%) 14.84 length=2,align1=2,align2=0: 14.84 (0.00%) 14.06 (5.26%) 14.84 length=2,align1=0,align2=2: 14.84 (-5.56%) 14.06 (0.00%) 14.06 length=2,align1=2,align2=2: 14.84 (0.00%) 14.06 (5.26%) 14.84 length=3,align1=0,align2=0: 14.84 (0.00%) 14.84 (0.00%) 14.84 length=3,align1=3,align2=0: 14.84 (-5.56%) 14.06 (0.00%) 14.06 length=3,align1=0,align2=3: 15.62 (-11.11%) 14.06 (0.00%) 14.06 length=3,align1=3,align2=3: 14.84 (0.00%) 14.06 (5.26%) 14.84 length=4,align1=0,align2=0: 17.97 (-27.78%) 14.06 (0.00%) 14.06 length=4,align1=4,align2=0: 13.28 (5.56%) 14.06 (0.00%) 14.06 length=4,align1=0,align2=4: 14.06 (0.00%) 13.28 (5.56%) 14.06 length=4,align1=4,align2=4: 13.28 (5.56%) 13.28 (5.56%) 14.06 length=5,align1=0,align2=0: 13.28 (5.56%) 13.28 (5.56%) 14.06 length=5,align1=5,align2=0: 14.06 (0.00%) 
14.06 (0.00%) 14.06 length=5,align1=0,align2=5: 14.06 (0.00%) 13.28 (5.56%) 14.06 length=5,align1=5,align2=5: 14.06 (-5.88%) 14.06 (-5.88%) 13.28 length=6,align1=0,align2=0: 14.06 (-5.88%) 14.06 (-5.88%) 13.28 length=6,align1=6,align2=0: 14.06 (0.00%) 14.06 (0.00%) 14.06 length=6,align1=0,align2=6: 14.06 (0.00%) 13.28 (5.56%) 14.06 length=6,align1=6,align2=6: 14.06 (0.00%) 13.28 (5.56%) 14.06 length=7,align1=0,align2=0: 14.84 (-11.77%) 14.06 (-5.88%) 13.28 length=7,align1=7,align2=0: 13.28 (0.00%) 14.06 (-5.88%) 13.28 length=7,align1=0,align2=7: 14.06 (0.00%) 14.06 (0.00%) 14.06 length=7,align1=7,align2=7: 14.06 (0.00%) 14.06 (0.00%) 14.06 length=8,align1=0,align2=0: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=8,align1=8,align2=0: 14.06 (0.00%) 13.28 (5.56%) 14.06 length=8,align1=0,align2=8: 13.28 (0.00%) 13.28 (0.00%) 13.28 length=8,align1=8,align2=8: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=9,align1=0,align2=0: 13.28 (0.00%) 13.28 (0.00%) 13.28 length=9,align1=9,align2=0: 13.28 (0.00%) 13.28 (0.00%) 13.28 length=9,align1=0,align2=9: 13.28 (0.00%) 14.06 (-5.88%) 13.28 length=9,align1=9,align2=9: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=10,align1=0,align2=0: 14.06 (0.00%) 13.28 (5.56%) 14.06 length=10,align1=10,align2=0: 14.06 (-5.88%) 14.06 (-5.88%) 13.28 length=10,align1=0,align2=10: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=10,align1=10,align2=10: 14.06 (0.00%) 13.28 (5.56%) 14.06 length=11,align1=0,align2=0: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=11,align1=11,align2=0: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=11,align1=0,align2=11: 13.28 (0.00%) 13.28 (0.00%) 13.28 length=11,align1=11,align2=11: 13.28 (0.00%) 13.28 (0.00%) 13.28 length=12,align1=0,align2=0: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=12,align1=12,align2=0: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=12,align1=0,align2=12: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=12,align1=12,align2=12: 14.06 (0.00%) 13.28 (5.56%) 14.06 length=13,align1=0,align2=0: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=13,align1=13,align2=0: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=13,align1=0,align2=13: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=13,align1=13,align2=13: 13.28 (0.00%) 13.28 (0.00%) 13.28 length=14,align1=0,align2=0: 13.28 (0.00%) 13.28 (0.00%) 13.28 length=14,align1=14,align2=0: 13.28 (5.56%) 13.28 (5.56%) 14.06 length=14,align1=0,align2=14: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=14,align1=14,align2=14: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=15,align1=0,align2=0: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=15,align1=15,align2=0: 14.06 (-5.88%) 14.06 (-5.88%) 13.28 length=15,align1=0,align2=15: 13.28 (0.00%) 13.28 (0.00%) 13.28 length=15,align1=15,align2=15: 13.28 (0.00%) 14.06 (-5.88%) 13.28 length=16,align1=0,align2=0: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=16,align1=16,align2=0: 13.28 (5.56%) 14.06 (0.00%) 14.06 length=16,align1=0,align2=16: 14.84 (-11.77%) 13.28 (0.00%) 13.28 length=16,align1=16,align2=16: 13.28 (-6.25%) 12.50 (0.00%) 12.50 length=17,align1=0,align2=0: 14.06 (-5.88%) 12.50 (5.88%) 13.28 length=17,align1=17,align2=0: 14.84 (-11.77%) 12.50 (5.88%) 13.28 length=17,align1=0,align2=17: 14.84 (-5.56%) 12.50 (11.11%) 14.06 length=17,align1=17,align2=17: 14.84 (-11.77%) 12.50 (5.88%) 13.28 length=18,align1=0,align2=0: 14.06 (0.00%) 12.50 (11.11%) 14.06 length=18,align1=18,align2=0: 13.28 (5.56%) 12.50 (11.11%) 14.06 length=18,align1=0,align2=18: 14.06 (-5.88%) 12.50 (5.88%) 13.28 length=18,align1=18,align2=18: 14.06 (0.00%) 12.50 (11.11%) 14.06 length=19,align1=0,align2=0: 14.06 (-5.88%) 13.28 (0.00%) 13.28 
length=19,align1=19,align2=0: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=19,align1=0,align2=19: 14.84 (-5.56%) 12.50 (11.11%) 14.06 length=19,align1=19,align2=19: 14.06 (-5.88%) 12.50 (5.88%) 13.28 length=20,align1=0,align2=0: 14.84 (-11.77%) 12.50 (5.88%) 13.28 length=20,align1=20,align2=0: 14.06 (0.00%) 12.50 (11.11%) 14.06 length=20,align1=0,align2=20: 14.06 (-5.88%) 12.50 (5.88%) 13.28 length=20,align1=20,align2=20: 14.06 (0.00%) 13.28 (5.56%) 14.06 length=21,align1=0,align2=0: 14.84 (-5.56%) 12.50 (11.11%) 14.06 length=21,align1=21,align2=0: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=21,align1=0,align2=21: 14.84 (-11.77%) 12.50 (5.88%) 13.28 length=21,align1=21,align2=21: 13.28 (5.56%) 13.28 (5.56%) 14.06 length=22,align1=0,align2=0: 14.06 (-5.88%) 12.50 (5.88%) 13.28 length=22,align1=22,align2=0: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=22,align1=0,align2=22: 14.06 (0.00%) 12.50 (11.11%) 14.06 length=22,align1=22,align2=22: 14.06 (0.00%) 12.50 (11.11%) 14.06 length=23,align1=0,align2=0: 14.06 (-5.88%) 12.50 (5.88%) 13.28 length=23,align1=23,align2=0: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=23,align1=0,align2=23: 14.06 (-5.88%) 12.50 (5.88%) 13.28 length=23,align1=23,align2=23: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=24,align1=0,align2=0: 14.06 (-5.88%) 12.50 (5.88%) 13.28 length=24,align1=24,align2=0: 14.06 (0.00%) 13.28 (5.56%) 14.06 length=24,align1=0,align2=24: 14.84 (-11.77%) 12.50 (5.88%) 13.28 length=24,align1=24,align2=24: 14.06 (-5.88%) 13.28 (0.00%) 13.28 length=25,align1=0,align2=0: 14.06 (0.00%) 12.50 (11.11%) 14.06 length=25,align1=25,align2=0: 14.06 (0.00%) 13.28 (5.56%) 14.06 length=25,align1=0,align2=25: 14.06 (0.00%) 12.50 (11.11%) 14.06 length=25,align1=25,align2=25: 13.28 (0.00%) 13.28 (0.00%) 13.28 length=26,align1=0,align2=0: 14.06 (-5.88%) 12.50 (5.88%) 13.28 length=26,align1=26,align2=0: 14.06 (0.00%) 13.28 (5.56%) 14.06 length=26,align1=0,align2=26: 14.06 (-5.88%) 12.50 (5.88%) 13.28 length=26,align1=26,align2=26: 14.06 (0.00%) 13.28 (5.56%) 14.06 length=27,align1=0,align2=0: 14.06 (-5.88%) 12.50 (5.88%) 13.28 length=27,align1=27,align2=0: 14.06 (-5.88%) 12.50 (5.88%) 13.28 length=27,align1=0,align2=27: 14.06 (-5.88%) 12.50 (5.88%) 13.28 length=27,align1=27,align2=27: 14.06 (0.00%) 12.50 (11.11%) 14.06 length=28,align1=0,align2=0: 14.06 (-5.88%) 12.50 (5.88%) 13.28 length=28,align1=28,align2=0: 14.06 (0.00%) 12.50 (11.11%) 14.06 length=28,align1=0,align2=28: 14.06 (0.00%) 12.50 (11.11%) 14.06 length=28,align1=28,align2=28: 14.84 (-11.77%) 13.28 (0.00%) 13.28 length=29,align1=0,align2=0: 14.06 (-5.88%) 12.50 (5.88%) 13.28 length=29,align1=29,align2=0: 13.28 (0.00%) 12.50 (5.88%) 13.28 length=29,align1=0,align2=29: 14.06 (0.00%) 12.50 (11.11%) 14.06 length=29,align1=29,align2=29: 13.28 (5.56%) 12.50 (11.11%) 14.06 length=30,align1=0,align2=0: 14.06 (-5.88%) 12.50 (5.88%) 13.28 length=30,align1=30,align2=0: 13.28 (5.56%) 12.50 (11.11%) 14.06 length=30,align1=0,align2=30: 14.06 (-5.88%) 12.50 (5.88%) 13.28 length=30,align1=30,align2=30: 13.28 (0.00%) 12.50 (5.88%) 13.28 length=31,align1=0,align2=0: 13.28 (0.00%) 12.50 (5.88%) 13.28 length=31,align1=31,align2=0: 14.06 (0.00%) 12.50 (11.11%) 14.06 length=31,align1=0,align2=31: 13.28 (0.00%) 12.50 (5.88%) 13.28 length=31,align1=31,align2=31: 14.06 (0.00%) 12.50 (11.11%) 14.06 length=48,align1=0,align2=0: 14.06 (0.00%) 14.06 (0.00%) 14.06 length=48,align1=3,align2=0: 14.06 (0.00%) 14.06 (0.00%) 14.06 length=48,align1=0,align2=3: 14.06 (-5.88%) 14.06 (-5.88%) 13.28 length=48,align1=3,align2=3: 13.28 (5.56%) 14.06 
(0.00%) 14.06 length=80,align1=0,align2=0: 15.62 (-11.11%) 14.84 (-5.56%) 14.06 length=80,align1=5,align2=0: 15.62 (-11.11%) 16.41 (-16.67%) 14.06 length=80,align1=0,align2=5: 14.06 (0.00%) 15.62 (-11.11%) 14.06 length=80,align1=5,align2=5: 15.62 (-5.26%) 17.19 (-15.79%) 14.84 length=96,align1=0,align2=0: 14.06 (0.00%) 14.84 (-5.56%) 14.06 length=96,align1=6,align2=0: 14.84 (-5.56%) 16.41 (-16.67%) 14.06 length=96,align1=0,align2=6: 14.06 (0.00%) 14.84 (-5.56%) 14.06 length=96,align1=6,align2=6: 14.84 (-5.56%) 17.19 (-22.22%) 14.06 length=112,align1=0,align2=0: 17.19 (-4.76%) 14.06 (14.29%) 16.41 length=112,align1=7,align2=0: 17.19 (0.00%) 16.41 (4.55%) 17.19 length=112,align1=0,align2=7: 16.41 (0.00%) 14.84 (9.52%) 16.41 length=112,align1=7,align2=7: 17.19 (0.00%) 17.19 (0.00%) 17.19 length=144,align1=0,align2=0: 17.19 (-10.00%) 17.97 (-15.00%) 15.62 length=144,align1=9,align2=0: 17.19 (-4.76%) 18.75 (-14.29%) 16.41 length=144,align1=0,align2=9: 20.31 (-8.33%) 18.75 (0.00%) 18.75 length=144,align1=9,align2=9: 18.75 (-4.35%) 18.75 (-4.35%) 17.97 length=160,align1=0,align2=0: 18.75 (-4.35%) 17.97 (0.00%) 17.97 length=160,align1=10,align2=0: 18.75 (4.00%) 18.75 (4.00%) 19.53 length=160,align1=0,align2=10: 19.53 (-4.17%) 17.97 (4.17%) 18.75 length=160,align1=10,align2=10: 18.75 (-4.35%) 18.75 (-4.35%) 17.97 length=176,align1=0,align2=0: 18.75 (-4.35%) 17.19 (4.35%) 17.97 length=176,align1=11,align2=0: 19.53 (0.00%) 19.53 (0.00%) 19.53 length=176,align1=0,align2=11: 19.53 (-4.17%) 18.75 (0.00%) 18.75 length=176,align1=11,align2=11: 18.75 (0.00%) 17.97 (4.17%) 18.75 length=192,align1=0,align2=0: 18.75 (0.00%) 17.97 (4.17%) 18.75 length=192,align1=12,align2=0: 21.09 (-8.00%) 18.75 (4.00%) 19.53 length=192,align1=0,align2=12: 18.75 (0.00%) 18.75 (0.00%) 18.75 length=192,align1=12,align2=12: 18.75 (0.00%) 17.97 (4.17%) 18.75 length=208,align1=0,align2=0: 17.97 (0.00%) 20.31 (-13.04%) 17.97 length=208,align1=13,align2=0: 19.53 (7.41%) 21.09 (0.00%) 21.09 length=208,align1=0,align2=13: 23.44 (-11.11%) 21.09 (0.00%) 21.09 length=208,align1=13,align2=13: 21.09 (-3.85%) 21.09 (-3.85%) 20.31 length=224,align1=0,align2=0: 21.09 (-8.00%) 20.31 (-4.00%) 19.53 length=224,align1=14,align2=0: 23.44 (-11.11%) 20.31 (3.70%) 21.09 length=224,align1=0,align2=14: 21.09 (3.57%) 20.31 (7.14%) 21.88 length=224,align1=14,align2=14: 20.31 (0.00%) 19.53 (3.85%) 20.31 length=240,align1=0,align2=0: 20.31 (-4.00%) 19.53 (0.00%) 19.53 length=240,align1=15,align2=0: 22.66 (0.00%) 20.31 (10.34%) 22.66 length=240,align1=0,align2=15: 20.31 (-4.00%) 20.31 (-4.00%) 19.53 length=240,align1=15,align2=15: 21.88 (0.00%) 21.09 (3.57%) 21.88 length=272,align1=0,align2=0: 20.31 (0.00%) 28.12 (-38.46%) 20.31 length=272,align1=17,align2=0: 22.66 (0.00%) 27.34 (-20.69%) 22.66 length=272,align1=0,align2=17: 25.78 (-10.00%) 28.12 (-20.00%) 23.44 length=272,align1=17,align2=17: 22.66 (-3.57%) 27.34 (-25.00%) 21.88 length=288,align1=0,align2=0: 23.44 (-7.14%) 27.34 (-25.00%) 21.88 length=288,align1=18,align2=0: 22.66 (0.00%) 27.34 (-20.69%) 22.66 length=288,align1=0,align2=18: 23.44 (-3.45%) 25.00 (-10.35%) 22.66 length=288,align1=18,align2=18: 22.66 (-3.57%) 21.88 (0.00%) 21.88 length=304,align1=0,align2=0: 21.88 (0.00%) 21.88 (0.00%) 21.88 length=304,align1=19,align2=0: 23.44 (-3.45%) 22.66 (0.00%) 22.66 length=304,align1=0,align2=19: 22.66 (0.00%) 22.66 (0.00%) 22.66 length=304,align1=19,align2=19: 22.66 (-3.57%) 21.88 (0.00%) 21.88 length=320,align1=0,align2=0: 22.66 (-3.57%) 21.88 (0.00%) 21.88 length=320,align1=20,align2=0: 22.66 (0.00%) 
22.66 (0.00%) 22.66 length=320,align1=0,align2=20: 22.66 (0.00%) 22.66 (0.00%) 22.66 length=320,align1=20,align2=20: 22.66 (-3.57%) 21.88 (0.00%) 21.88 length=336,align1=0,align2=0: 21.88 (0.00%) 24.22 (-10.71%) 21.88 length=336,align1=21,align2=0: 22.66 (0.00%) 25.00 (-10.35%) 22.66 length=336,align1=0,align2=21: 25.78 (0.00%) 25.00 (3.03%) 25.78 length=336,align1=21,align2=21: 25.00 (0.00%) 23.44 (6.25%) 25.00 length=352,align1=0,align2=0: 24.22 (0.00%) 24.22 (0.00%) 24.22 length=352,align1=22,align2=0: 25.00 (0.00%) 25.00 (0.00%) 25.00 length=352,align1=0,align2=22: 25.00 (-3.23%) 25.00 (-3.23%) 24.22 length=352,align1=22,align2=22: 25.00 (-3.23%) 24.22 (0.00%) 24.22 length=368,align1=0,align2=0: 25.00 (-3.23%) 23.44 (3.23%) 24.22 length=368,align1=23,align2=0: 25.00 (0.00%) 24.22 (3.12%) 25.00 length=368,align1=0,align2=23: 25.00 (-3.23%) 25.00 (-3.23%) 24.22 length=368,align1=23,align2=23: 25.00 (-6.67%) 23.44 (0.00%) 23.44 length=384,align1=0,align2=0: 24.22 (0.00%) 24.22 (0.00%) 24.22 length=384,align1=24,align2=0: 25.00 (0.00%) 24.22 (3.12%) 25.00 length=384,align1=0,align2=24: 25.00 (0.00%) 25.78 (-3.12%) 25.00 length=384,align1=24,align2=24: 24.22 (-3.33%) 23.44 (0.00%) 23.44 length=400,align1=0,align2=0: 25.00 (-3.23%) 26.56 (-9.68%) 24.22 length=400,align1=25,align2=0: 25.78 (-3.12%) 27.34 (-9.38%) 25.00 length=400,align1=0,align2=25: 27.34 (0.00%) 27.34 (0.00%) 27.34 length=400,align1=25,align2=25: 26.56 (0.00%) 25.78 (2.94%) 26.56 length=416,align1=0,align2=0: 26.56 (-3.03%) 25.78 (0.00%) 25.78 length=416,align1=26,align2=0: 28.12 (-2.86%) 27.34 (0.00%) 27.34 length=416,align1=0,align2=26: 27.34 (-2.94%) 28.12 (-5.88%) 26.56 length=416,align1=26,align2=26: 25.78 (0.00%) 26.56 (-3.03%) 25.78 length=432,align1=0,align2=0: 27.34 (-2.94%) 25.78 (2.94%) 26.56 length=432,align1=27,align2=0: 28.12 (-2.86%) 27.34 (0.00%) 27.34 length=432,align1=0,align2=27: 27.34 (0.00%) 28.12 (-2.86%) 27.34 length=432,align1=27,align2=27: 25.78 (0.00%) 25.78 (0.00%) 25.78 length=448,align1=0,align2=0: 26.56 (-3.03%) 25.78 (0.00%) 25.78 length=448,align1=28,align2=0: 27.34 (0.00%) 27.34 (0.00%) 27.34 length=448,align1=0,align2=28: 27.34 (0.00%) 28.12 (-2.86%) 27.34 length=448,align1=28,align2=28: 25.78 (0.00%) 25.78 (0.00%) 25.78 length=464,align1=0,align2=0: 25.78 (0.00%) 28.12 (-9.09%) 25.78 length=464,align1=29,align2=0: 28.12 (-2.86%) 29.69 (-8.57%) 27.34 length=464,align1=0,align2=29: 30.47 (0.00%) 30.47 (0.00%) 30.47 length=464,align1=29,align2=29: 28.12 (0.00%) 27.34 (2.78%) 28.12 length=480,align1=0,align2=0: 29.69 (-5.56%) 28.12 (0.00%) 28.12 length=480,align1=30,align2=0: 31.25 (-2.56%) 29.69 (2.56%) 30.47 length=480,align1=0,align2=30: 29.69 (0.00%) 30.47 (-2.63%) 29.69 length=480,align1=30,align2=30: 28.12 (0.00%) 28.12 (0.00%) 28.12 length=496,align1=0,align2=0: 28.12 (0.00%) 27.34 (2.78%) 28.12 length=496,align1=31,align2=0: 30.47 (-2.63%) 29.69 (0.00%) 29.69 length=496,align1=0,align2=31: 29.69 (0.00%) 30.47 (-2.63%) 29.69 length=496,align1=31,align2=31: 28.12 (-2.86%) 28.12 (-2.86%) 27.34 length=1024,align1=0,align2=0: 44.53 (0.00%) 44.53 (0.00%) 44.53 length=1024,align1=32,align2=0: 44.53 (-1.79%) 44.53 (-1.79%) 43.75 length=1024,align1=0,align2=32: 44.53 (-1.79%) 43.75 (0.00%) 43.75 length=1024,align1=32,align2=32: 43.75 (1.75%) 43.75 (1.75%) 44.53 length=1056,align1=0,align2=0: 46.88 (-1.69%) 46.88 (-1.69%) 46.09 length=1056,align1=33,align2=0: 53.12 (0.00%) 52.34 (1.47%) 53.12 length=1056,align1=0,align2=33: 52.34 (0.00%) 53.12 (-1.49%) 52.34 length=1056,align1=33,align2=33: 46.09 
(0.00%) 46.88 (-1.69%) 46.09 length=1088,align1=0,align2=0: 46.88 (-1.69%) 46.09 (0.00%) 46.09 length=1088,align1=34,align2=0: 52.34 (0.00%) 52.34 (0.00%) 52.34 length=1088,align1=0,align2=34: 53.12 (-3.03%) 53.12 (-3.03%) 51.56 length=1088,align1=34,align2=34: 46.09 (0.00%) 46.88 (-1.69%) 46.09 length=1120,align1=0,align2=0: 49.22 (-1.61%) 48.44 (0.00%) 48.44 length=1120,align1=35,align2=0: 54.69 (1.41%) 55.47 (0.00%) 55.47 length=1120,align1=0,align2=35: 57.03 (0.00%) 55.47 (2.74%) 57.03 length=1120,align1=35,align2=35: 48.44 (0.00%) 49.22 (-1.61%) 48.44 length=1152,align1=0,align2=0: 47.66 (1.61%) 48.44 (0.00%) 48.44 length=1152,align1=36,align2=0: 55.47 (-1.43%) 55.47 (-1.43%) 54.69 length=1152,align1=0,align2=36: 58.59 (-1.35%) 55.47 (4.05%) 57.81 length=1152,align1=36,align2=36: 48.44 (0.00%) 49.22 (-1.61%) 48.44 length=1184,align1=0,align2=0: 53.12 (-3.03%) 50.78 (1.52%) 51.56 length=1184,align1=37,align2=0: 61.72 (-2.60%) 57.03 (5.19%) 60.16 length=1184,align1=0,align2=37: 62.50 (-1.27%) 57.03 (7.60%) 61.72 length=1184,align1=37,align2=37: 53.12 (-1.49%) 50.78 (2.99%) 52.34 length=1216,align1=0,align2=0: 53.91 (-4.55%) 50.78 (1.52%) 51.56 length=1216,align1=38,align2=0: 60.94 (0.00%) 57.03 (6.41%) 60.94 length=1216,align1=0,align2=38: 60.16 (0.00%) 57.81 (3.90%) 60.16 length=1216,align1=38,align2=38: 52.34 (-1.52%) 50.00 (3.03%) 51.56 length=1248,align1=0,align2=0: 54.69 (-2.94%) 53.12 (0.00%) 53.12 length=1248,align1=39,align2=0: 64.06 (-1.23%) 60.16 (4.94%) 63.28 length=1248,align1=0,align2=39: 60.94 (-2.63%) 60.16 (-1.32%) 59.38 length=1248,align1=39,align2=39: 53.12 (0.00%) 52.34 (1.47%) 53.12 length=1280,align1=0,align2=0: 52.34 (-1.52%) 52.34 (-1.52%) 51.56 length=1280,align1=40,align2=0: 61.72 (3.66%) 59.38 (7.32%) 64.06 length=1280,align1=0,align2=40: 60.94 (-2.63%) 60.16 (-1.32%) 59.38 length=1280,align1=40,align2=40: 52.34 (-1.52%) 52.34 (-1.52%) 51.56 length=1312,align1=0,align2=0: 54.69 (-1.45%) 55.47 (-2.90%) 53.91 length=1312,align1=41,align2=0: 63.28 (0.00%) 62.50 (1.23%) 63.28 length=1312,align1=0,align2=41: 62.50 (0.00%) 62.50 (0.00%) 62.50 length=1312,align1=41,align2=41: 53.91 (0.00%) 54.69 (-1.45%) 53.91 length=1344,align1=0,align2=0: 54.69 (0.00%) 54.69 (0.00%) 54.69 length=1344,align1=42,align2=0: 62.50 (0.00%) 62.50 (0.00%) 62.50 length=1344,align1=0,align2=42: 62.50 (-1.27%) 62.50 (-1.27%) 61.72 length=1344,align1=42,align2=42: 53.91 (0.00%) 53.91 (0.00%) 53.91 length=1376,align1=0,align2=0: 65.62 (-16.67%) 68.75 (-22.22%) 56.25 length=1376,align1=43,align2=0: 71.88 (-9.52%) 73.44 (-11.90%) 65.62 length=1376,align1=0,align2=43: 72.66 (-12.05%) 74.22 (-14.46%) 64.84 length=1376,align1=43,align2=43: 64.06 (-13.89%) 67.97 (-20.83%) 56.25 length=1408,align1=0,align2=0: 57.03 (-1.39%) 68.75 (-22.22%) 56.25 length=1408,align1=44,align2=0: 65.62 (-1.20%) 73.44 (-13.25%) 64.84 length=1408,align1=0,align2=44: 64.84 (0.00%) 74.22 (-14.46%) 64.84 length=1408,align1=44,align2=44: 56.25 (-1.41%) 68.75 (-23.94%) 55.47 length=1440,align1=0,align2=0: 67.97 (-14.47%) 64.84 (-9.21%) 59.38 length=1440,align1=45,align2=0: 74.22 (-10.47%) 68.75 (-2.33%) 67.19 length=1440,align1=0,align2=45: 72.66 (-6.90%) 69.53 (-2.30%) 67.97 length=1440,align1=45,align2=45: 65.62 (-13.51%) 58.59 (-1.35%) 57.81 length=1472,align1=0,align2=0: 66.41 (-14.86%) 58.59 (-1.35%) 57.81 length=1472,align1=46,align2=0: 73.44 (-9.30%) 67.19 (0.00%) 67.19 length=1472,align1=0,align2=46: 70.31 (-4.65%) 67.97 (-1.16%) 67.19 length=1472,align1=46,align2=46: 57.81 (0.00%) 58.59 (-1.35%) 57.81 
length=1504,align1=0,align2=0: 60.94 (0.00%) 60.94 (0.00%) 60.94 length=1504,align1=47,align2=0: 71.09 (-1.11%) 70.31 (0.00%) 70.31 length=1504,align1=0,align2=47: 70.31 (-1.12%) 70.31 (-1.12%) 69.53 length=1504,align1=47,align2=47: 60.94 (-1.30%) 60.16 (0.00%) 60.16 length=1536,align1=0,align2=0: 62.50 (-3.90%) 60.16 (0.00%) 60.16 length=1536,align1=48,align2=0: 60.94 (-1.30%) 60.16 (0.00%) 60.16 length=1536,align1=0,align2=48: 61.72 (-3.95%) 60.16 (-1.32%) 59.38 length=1536,align1=48,align2=48: 60.94 (-1.30%) 60.16 (0.00%) 60.16 length=1568,align1=0,align2=0: 80.47 (-27.16%) 63.28 (0.00%) 63.28 length=1568,align1=49,align2=0: 86.72 (-18.09%) 72.66 (1.06%) 73.44 length=1568,align1=0,align2=49: 74.22 (-3.26%) 74.22 (-3.26%) 71.88 length=1568,align1=49,align2=49: 62.50 (0.00%) 61.72 (1.25%) 62.50 length=1600,align1=0,align2=0: 62.50 (-1.27%) 62.50 (-1.27%) 61.72 length=1600,align1=50,align2=0: 73.44 (0.00%) 71.88 (2.13%) 73.44 length=1600,align1=0,align2=50: 72.66 (0.00%) 73.44 (-1.08%) 72.66 length=1600,align1=50,align2=50: 62.50 (-1.27%) 62.50 (-1.27%) 61.72 length=1632,align1=0,align2=0: 64.84 (0.00%) 64.84 (0.00%) 64.84 length=1632,align1=51,align2=0: 75.78 (0.00%) 75.00 (1.03%) 75.78 length=1632,align1=0,align2=51: 78.91 (0.00%) 75.78 (3.96%) 78.91 length=1632,align1=51,align2=51: 64.84 (-2.47%) 64.84 (-2.47%) 63.28 length=1664,align1=0,align2=0: 64.84 (-1.22%) 64.84 (-1.22%) 64.06 length=1664,align1=52,align2=0: 75.78 (0.00%) 75.00 (1.03%) 75.78 length=1664,align1=0,align2=52: 80.47 (-0.98%) 75.78 (4.90%) 79.69 length=1664,align1=52,align2=52: 64.06 (-1.23%) 65.62 (-3.70%) 63.28 length=1696,align1=0,align2=0: 69.53 (-3.49%) 72.66 (-8.14%) 67.19 length=1696,align1=53,align2=0: 80.47 (-0.98%) 82.03 (-2.94%) 79.69 length=1696,align1=0,align2=53: 80.47 (0.96%) 82.03 (-0.96%) 81.25 length=1696,align1=53,align2=53: 68.75 (-2.33%) 72.66 (-8.14%) 67.19 length=1728,align1=0,align2=0: 67.97 (0.00%) 72.66 (-6.90%) 67.97 length=1728,align1=54,align2=0: 80.47 (-0.98%) 82.81 (-3.92%) 79.69 length=1728,align1=0,align2=54: 78.91 (-1.00%) 82.03 (-5.00%) 78.12 length=1728,align1=54,align2=54: 68.75 (0.00%) 72.66 (-5.68%) 68.75 length=1760,align1=0,align2=0: 77.34 (-12.50%) 68.75 (0.00%) 68.75 length=1760,align1=55,align2=0: 91.41 (-8.33%) 79.69 (5.56%) 84.38 length=1760,align1=0,align2=55: 88.28 (-10.78%) 80.47 (-0.98%) 79.69 length=1760,align1=55,align2=55: 77.34 (-11.24%) 68.75 (1.12%) 69.53 length=1792,align1=0,align2=0: 78.12 (-14.94%) 68.75 (-1.15%) 67.97 length=1792,align1=56,align2=0: 88.28 (-4.63%) 79.69 (5.56%) 84.38 length=1792,align1=0,align2=56: 88.28 (-9.71%) 80.47 (0.00%) 80.47 length=1792,align1=56,align2=56: 77.34 (-11.24%) 68.75 (1.12%) 69.53 length=1824,align1=0,align2=0: 72.66 (7.92%) 70.31 (10.89%) 78.91 length=1824,align1=57,align2=0: 85.94 (5.17%) 82.03 (9.48%) 90.62 length=1824,align1=0,align2=57: 82.03 (3.67%) 82.81 (2.75%) 85.16 length=1824,align1=57,align2=57: 70.31 (-1.12%) 70.31 (-1.12%) 69.53 length=1856,align1=0,align2=0: 70.31 (-1.12%) 70.31 (-1.12%) 69.53 length=1856,align1=58,align2=0: 83.59 (-0.94%) 82.03 (0.94%) 82.81 length=1856,align1=0,align2=58: 178.12 (-115.09%) 82.81 (0.00%) 82.81 length=1856,align1=58,align2=58: 70.31 (-1.12%) 70.31 (-1.12%) 69.53 length=1888,align1=0,align2=0: 73.44 (-1.08%) 78.91 (-8.60%) 72.66 length=1888,align1=59,align2=0: 85.94 (0.00%) 89.84 (-4.55%) 85.94 length=1888,align1=0,align2=59: 84.38 (0.00%) 89.06 (-5.56%) 84.38 length=1888,align1=59,align2=59: 72.66 (-1.09%) 78.12 (-8.70%) 71.88 length=1920,align1=0,align2=0: 72.66 (-1.09%) 
78.12 (-8.70%) 71.88 length=1920,align1=60,align2=0: 85.94 (0.00%) 89.84 (-4.55%) 85.94 length=1920,align1=0,align2=60: 85.16 (0.00%) 89.06 (-4.59%) 85.16 length=1920,align1=60,align2=60: 72.66 (-1.09%) 78.91 (-9.78%) 71.88 length=1952,align1=0,align2=0: 75.00 (-1.05%) 75.00 (-1.05%) 74.22 length=1952,align1=61,align2=0: 88.28 (0.00%) 87.50 (0.88%) 88.28 length=1952,align1=0,align2=61: 87.50 (0.00%) 88.28 (-0.89%) 87.50 length=1952,align1=61,align2=61: 74.22 (0.00%) 74.22 (0.00%) 74.22 length=1984,align1=0,align2=0: 75.00 (-1.05%) 73.44 (1.05%) 74.22 length=1984,align1=62,align2=0: 89.06 (-0.89%) 87.50 (0.88%) 88.28 length=1984,align1=0,align2=62: 87.50 (0.00%) 88.28 (-0.89%) 87.50 length=1984,align1=62,align2=62: 74.22 (0.00%) 74.22 (0.00%) 74.22 length=2016,align1=0,align2=0: 77.34 (-1.02%) 76.56 (0.00%) 76.56 length=2016,align1=63,align2=0: 91.41 (-0.86%) 90.62 (0.00%) 90.62 length=2016,align1=0,align2=63: 89.84 (0.00%) 90.62 (-0.87%) 89.84 length=2016,align1=63,align2=63: 77.34 (-1.02%) 76.56 (0.00%) 76.56 length=4096,align1=0,align2=0: 141.41 (-0.56%) 146.88 (-4.44%) 140.62 Function: memcpy __memcpy_thunderx __memcpy_falkor __memcpy_generic Variant: large ================================================================================ length=65543,align1=0,align2=0: 4018.75 (3.09%) 2634.38 (36.47%) 4146.88 length=65551,align1=0,align2=3: 4425.00 (-6.47%) 3134.38 (24.59%) 4156.25 length=65567,align1=3,align2=0: 2909.38 (29.95%) 3134.38 (24.53%) 4153.12 length=65599,align1=3,align2=5: 4415.62 (-6.16%) 3134.38 (24.64%) 4159.38 length=131079,align1=0,align2=0: 5765.62 (30.38%) 5240.62 (36.72%) 8281.25 length=131087,align1=0,align2=3: 8831.25 (-6.56%) 6271.88 (24.32%) 8287.50 length=131103,align1=3,align2=0: 5793.75 (29.05%) 6268.75 (23.23%) 8165.62 length=131135,align1=3,align2=5: 5806.25 (29.97%) 6259.38 (24.50%) 8290.62 length=262151,align1=0,align2=0: 11850.00 (28.91%) 10762.50 (35.43%) 16668.80 length=262159,align1=0,align2=3: 12043.80 (27.72%) 12700.00 (23.78%) 16662.50 length=262175,align1=3,align2=0: 12046.90 (27.90%) 12687.50 (24.07%) 16709.40 length=262207,align1=3,align2=5: 11984.40 (28.08%) 12678.10 (23.91%) 16662.50 length=524295,align1=0,align2=0: 24825.00 (25.00%) 24268.80 (27.34%) 33400.00 length=524303,align1=0,align2=3: 35731.20 (-6.53%) 25678.10 (23.44%) 33540.60 length=524319,align1=3,align2=0: 25893.80 (22.71%) 25725.00 (23.22%) 33503.10 length=524351,align1=3,align2=5: 25887.50 (22.86%) 25690.60 (23.45%) 33559.40 length=1048583,align1=0,align2=0: 50621.90 (0.30%) 50600.00 (0.34%) 50771.90 length=1048591,align1=0,align2=3: 53206.20 (0.54%) 51081.20 (4.51%) 53493.80 length=1048607,align1=3,align2=0: 53221.90 (0.32%) 51975.00 (2.66%) 53393.80 length=1048639,align1=3,align2=5: 53240.60 (0.36%) 51953.10 (2.77%) 53431.20 length=2097159,align1=0,align2=0: 103744.00 (-2.00%) 102447.00 (-1.00%) 102425.00 length=2097167,align1=0,align2=3: 108588.00 (-1.00%) 105159.00 (2.00%) 107606.00 length=2097183,align1=3,align2=0: 107678.00 (0.00%) 105250.00 (2.00%) 108125.00 length=2097215,align1=3,align2=5: 107906.00 (1.00%) 105841.00 (3.00%) 109475.00 length=4194311,align1=0,align2=0: 202994.00 (0.00%) 202500.00 (1.00%) 204809.00 length=4194319,align1=0,align2=3: 213350.00 (0.00%) 205997.00 (3.00%) 213384.00 length=4194335,align1=3,align2=0: 212653.00 (0.00%) 206444.00 (3.00%) 212900.00 length=4194367,align1=3,align2=5: 213044.00 (0.00%) 206084.00 (3.00%) 213847.00 length=8388615,align1=0,align2=0: 401294.00 (0.00%) 401231.00 (0.00%) 401944.00 length=8388623,align1=0,align2=3: 480872.00 
(-14.00%) 406444.00 (3.00%) 422900.00 length=8388639,align1=3,align2=0: 422147.00 (0.00%) 407750.00 (3.00%) 422803.00 length=8388671,align1=3,align2=5: 442003.00 (-5.00%) 407125.00 (3.00%) 423509.00 length=16777223,align1=0,align2=0: 799809.00 (0.00%) 800000.00 (0.00%) 801756.00 length=16777231,align1=0,align2=3: 841184.00 (0.00%) 808525.00 (4.00%) 843775.00 length=16777247,align1=3,align2=0: 841166.00 (0.00%) 810147.00 (3.00%) 843147.00 length=16777279,align1=3,align2=5: 972569.00 (-16.00%) 808588.00 (4.00%) 843731.00 length=33554439,align1=0,align2=0: 1842240.00 (-0.01%) 1863590.00 (-1.17%) 1841990.00 length=33554447,align1=0,align2=3: 2103470.00 (-2.74%) 1919460.00 (6.25%) 2047440.00 length=33554463,align1=3,align2=0: 2075690.00 (-1.07%) 1930040.00 (6.02%) 2053720.00 length=33554495,align1=3,align2=5: 2110590.00 (-2.82%) 1924440.00 (6.25%) 2052650.00 Function: memcpy __memcpy_thunderx __memcpy_falkor __memcpy_generic Variant: random ================================================================================ max-size=4096: 44061.90 (5.85%) 38568.20 (17.59%) 46799.90 max-size=8192: 42790.90 (5.27%) 38158.90 (15.52%) 45171.50 max-size=16384: 44912.10 (2.25%) 38710.40 (15.75%) 45945.00 max-size=32768: 43577.90 (1.23%) 37975.10 (13.93%) 44120.00 max-size=65536: 44375.50 (1.04%) 38474.20 (14.20%) 44840.60 * manual/tunables.texi (Tunable glibc.tune.cpu): Add falkor. * sysdeps/aarch64/multiarch/Makefile (sysdep_routines): Add memcpy_falkor. * sysdeps/aarch64/multiarch/ifunc-impl-list.c (MAX_IFUNC): Bump. (__libc_ifunc_impl_list): Add __memcpy_falkor. * sysdeps/aarch64/multiarch/memcpy.c: Likewise. * sysdeps/aarch64/multiarch/memcpy_falkor.S: New file. * sysdeps/unix/sysv/linux/aarch64/cpu-features.c (cpu_list): Add falkor. * sysdeps/unix/sysv/linux/aarch64/cpu-features.h (IS_FALKOR): New macro. |
||
Siddhesh Poyarekar
|
28cfa3a48e |
tunables, aarch64: New tunable to override cpu
Add a new tunable (glibc.tune.cpu) to override CPU identification on aarch64. This is useful in two cases: first, where it is desirable to pretend to be another CPU, either for testing or because routines written for that CPU benefit specific workloads; and second, where the underlying kernel does not support emulation of MRS to read the MIDR of the CPU. * elf/dl-tunables.h (tunable_is_name): Move from... * elf/dl-tunables.c (is_name): ... here. (parse_tunables, __tunables_init): Adjust. * manual/tunables.texi: Document glibc.tune.cpu. * sysdeps/aarch64/dl-tunables.list: New file. * sysdeps/unix/sysv/linux/aarch64/cpu-features.c (struct cpu_list): New type. (cpu_list): New list of CPU names and their MIDR. (get_midr_from_mcpu): New function. (init_cpu_features): Override MIDR if necessary. |
||
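A hedged sketch of the lookup the ChangeLog names (cpu_list, get_midr_from_mcpu): map the tunable's string value to a MIDR, keeping the hardware-reported value when the name is unknown. The table entries and MIDR constant below are illustrative, not copied from the source; at run time the override is requested through the tunables environment interface under the name glibc.tune.cpu:

```c
#include <stdint.h>
#include <string.h>

struct cpu_list
{
  const char *name;
  uint64_t midr;
};

/* Illustrative entries only; the real table lives in
   sysdeps/unix/sysv/linux/aarch64/cpu-features.c.  */
static const struct cpu_list cpu_list[] =
{
  { "thunderxt88", 0x430F0A10 },   /* hypothetical MIDR value */
  { "generic",     0x0 },
};

static uint64_t
get_midr_from_mcpu (const char *mcpu)
{
  for (size_t i = 0; i < sizeof cpu_list / sizeof cpu_list[0]; i++)
    if (strcmp (mcpu, cpu_list[i].name) == 0)
      return cpu_list[i].midr;
  return UINT64_MAX;   /* unknown name: keep the CPU's own MIDR */
}
```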
Siddhesh Poyarekar
|
ab85da1530 |
aarch64: Call all string function implementations in tests
The string function implementations added so far do not use any instructions that deviate from standard aarch64, so all of the routines can run on all armv8 hardware. Select all implementations in the benchmarks and tests. * sysdeps/aarch64/multiarch/ifunc-impl-list.c (__libc_ifunc_impl_list): Unconditionally select thunderx routine for testing. |
||
Szabolcs Nagy
|
e535139e82 |
[AArch64] Add more cfi annotations to tlsdesc entry points
Backtrace through _dl_tlsdesc_resolve_rela was broken because the offset of x30 from cfa was not in the debug info. Add enough annotation so backtracing from the dynamic linker through tlsdesc entry points works and the debugger shows registers correctly. |
||
Szabolcs Nagy
|
e9177fba13 |
[AArch64] Use hidden __GI__dl_argv in rtld startup code
We rely on the symbol being locally defined, so using an extern symbol is not correct and the linker may complain about the relocations. |
||
Zack Weinberg
|
09a596cc2c |
Remove bits/string.h.
These machine-dependent inline string functions have never been on by default, and even if they were a good idea at the time they were introduced, they haven't really been touched in ten to fifteen years and probably aren't a good idea on current-gen processors. Current thinking is that this class of optimization is best left to the compiler. * bits/string.h, string/bits/string.h * sysdeps/aarch64/bits/string.h * sysdeps/m68k/m680x0/m68020/bits/string.h * sysdeps/s390/bits/string.h, sysdeps/sparc/bits/string.h * sysdeps/x86/bits/string.h: Delete file. * string/string.h: Don't include bits/string.h. * string/bits/string3.h: Rename to bits/string_fortified.h. No need to undef various symbols that the removed headers might have defined as macros. * string/Makefile (headers): Remove bits/string.h, change bits/string3.h to bits/string_fortified.h. * string/string-inlines.c: Update commentary. Remove definitions of various macros that nothing looks at anymore. Don't directly include bits/string.h. Set _STRING_INLINE_unaligned here, based on compiler-predefined macros. * string/strncat.c: If STRNCAT is not defined, or STRNCAT_PRIMARY _is_ defined, provide internal hidden alias __strncat. * include/string.h: Declare internal hidden alias __strncat. Only forward __stpcpy to __builtin_stpcpy if __NO_STRING_INLINES is not defined. * include/bits/string3.h: Rename to bits/string_fortified.h, update to match above. * sysdeps/i386/string-inlines.c: Define compat symbols for everything formerly defined by sysdeps/x86/bits/string.h. Make existing definitions into compat symbols as well. Remove some no-longer-necessary messing around with macros. * sysdeps/powerpc/powerpc32/power4/multiarch/mempcpy.c * sysdeps/powerpc/powerpc64/multiarch/mempcpy.c * sysdeps/powerpc/powerpc64/multiarch/stpcpy.c * sysdeps/s390/multiarch/mempcpy.c No need to define _HAVE_STRING_ARCH_mempcpy. Do define __NO_STRING_INLINES and NO_MEMPCPY_STPCPY_REDIRECT. * sysdeps/i386/i686/multiarch/strncat-c.c * sysdeps/s390/multiarch/strncat-c.c * sysdeps/x86_64/multiarch/strncat-c.c Define STRNCAT_PRIMARY. Don't change definition of libc_hidden_def. |
||
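The "left to the compiler" point is easy to demonstrate: a modern GCC expands simple, known-size string operations inline through its builtins, with no help from header inlines. This is a general compiler property, not something this patch adds:

```c
#include <string.h>

char buf[16];

/* At -O1 and above, GCC folds this strcpy of a constant string into a
   few inline stores via __builtin_strcpy; no libc call and no
   machine-specific bits/string.h inline is involved.  */
void
set_greeting (void)
{
  strcpy (buf, "hi");
}
```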
Alan Modra
|
0572433b5b |
PowerPC64 ELFv2 PPC64_OPT_LOCALENTRY
ELFv2 functions with localentry:0 are those with a single entry point, ie. global entry == local entry, that have no requirement on r2 or r12 and guarantee r2 is unchanged on return. Such an external function can be called via the PLT without saving r2 or restoring it on return, avoiding a common load-hit-store for small functions. This patch implements the ld.so changes necessary for this optimization. ld.so needs to check that an optimized plt call sequence is in fact calling a function implemented with localentry:0, and emit a fatal error otherwise. The elf/testobj6.c change is to stop "error while loading shared libraries: expected localentry:0 `preload'" when running elf/preloadtest, which we'd get otherwise. * elf/elf.h (PPC64_OPT_LOCALENTRY): Define. * sysdeps/alpha/dl-machine.h (elf_machine_fixup_plt): Add refsym and sym parameters. Adjust callers. * sysdeps/aarch64/dl-machine.h (elf_machine_fixup_plt): Likewise. * sysdeps/arm/dl-machine.h (elf_machine_fixup_plt): Likewise. * sysdeps/generic/dl-machine.h (elf_machine_fixup_plt): Likewise. * sysdeps/hppa/dl-machine.h (elf_machine_fixup_plt): Likewise. * sysdeps/i386/dl-machine.h (elf_machine_fixup_plt): Likewise. * sysdeps/ia64/dl-machine.h (elf_machine_fixup_plt): Likewise. * sysdeps/m68k/dl-machine.h (elf_machine_fixup_plt): Likewise. * sysdeps/microblaze/dl-machine.h (elf_machine_fixup_plt): Likewise. * sysdeps/mips/dl-machine.h (elf_machine_fixup_plt): Likewise. * sysdeps/nios2/dl-machine.h (elf_machine_fixup_plt): Likewise. * sysdeps/powerpc/powerpc32/dl-machine.h (elf_machine_fixup_plt): Likewise. * sysdeps/s390/s390-32/dl-machine.h (elf_machine_fixup_plt): Likewise. * sysdeps/s390/s390-64/dl-machine.h (elf_machine_fixup_plt): Likewise. * sysdeps/sh/dl-machine.h (elf_machine_fixup_plt): Likewise. * sysdeps/sparc/sparc32/dl-machine.h (elf_machine_fixup_plt): Likewise. * sysdeps/sparc/sparc64/dl-machine.h (elf_machine_fixup_plt): Likewise. * sysdeps/tile/dl-machine.h (elf_machine_fixup_plt): Likewise. * sysdeps/x86_64/dl-machine.h (elf_machine_fixup_plt): Likewise. * sysdeps/powerpc/powerpc64/dl-machine.c (_dl_error_localentry): New. (_dl_reloc_overflow): Increase buffer size. Formatting. * sysdeps/powerpc/powerpc64/dl-machine.h (ppc64_local_entry_offset): Delete reloc param, add refsym and sym. Check optimized plt call stubs for localentry:0 functions. Adjust callers. (elf_machine_fixup_plt, elf_machine_plt_conflict): Add refsym and sym parameters. Adjust callers. (_dl_reloc_overflow): Move attribute. (_dl_error_localentry): Declare. * elf/dl-runtime.c (_dl_fixup): Save original sym. Pass refsym and sym to elf_machine_fixup_plt. * elf/testobj6.c (preload): Call printf. |
||
Stefan Liebler
|
12d2dd7060 |
Optimize generic spinlock code and use C11-like atomic macros.
This patch optimizes the generic spinlock code. The type pthread_spinlock_t is a typedef to volatile int on all archs. Passing a volatile pointer to the atomic macros which are not mapped to the C11 atomic builtins can lead to extra stores and loads to the stack if such a macro creates a temporary variable by using "__typeof (*(mem)) tmp;". Thus, those macros which are used by spinlock code - atomic_exchange_acquire, atomic_load_relaxed, atomic_compare_exchange_weak - have to be adjusted. According to the comment from Szabolcs Nagy, the type of a cast expression is unqualified (see http://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_423.htm): __typeof ((__typeof (*(mem))) *(mem)) tmp; Thus from the spinlock perspective the variable tmp is of type int instead of type volatile int. This patch adjusts those macros in include/atomic.h. With this construct GCC >= 5 omits the extra stores and loads. The atomic macros are replaced by the C11-like atomic macros and thus the code is aligned with them. The pthread_spin_unlock implementation is now using release memory order instead of sequentially consistent memory order. The issue with passed volatile int pointers applies to the C11-like atomic macros as well as the ones used before. I've added a __glibc_likely hint to the first atomic exchange in pthread_spin_lock in order to return immediately to the caller if the lock is free. Without the hint, there is an additional jump if the lock is free. I've added the atomic_spin_nop macro within the loop of plain reads. The plain reads are also realized by the C11-like atomic_load_relaxed macro. The new define ATOMIC_EXCHANGE_USES_CAS determines if the first try to acquire the spinlock in pthread_spin_lock or pthread_spin_trylock is an exchange or a CAS. This is defined in atomic-machine.h for all architectures. The define SPIN_LOCK_READS_BETWEEN_CMPXCHG is now removed. There is no technical reason for throwing in a CAS every now and then, and so far we have no evidence that it can improve performance. If that were the case, we would have to adjust other spin-waiting loops elsewhere, too! Using a CAS loop without plain reads is not a good idea on many targets and wasn't used by any of them. Thus there is now no option to do so. Architectures now use the generic spinlock automatically if they do not provide their own implementation. Thus the pthread_spin_lock.c files in the sysdeps folder are deleted. ChangeLog: * NEWS: Mention new spinlock implementation. * include/atomic.h (__atomic_val_bysize): Cast type to omit volatile qualifier. (atomic_exchange_acq): Likewise. (atomic_load_relaxed): Likewise. (ATOMIC_EXCHANGE_USES_CAS): Check definition. * nptl/pthread_spin_init.c (pthread_spin_init): Use atomic_store_relaxed. * nptl/pthread_spin_lock.c (pthread_spin_lock): Use C11-like atomic macros. * nptl/pthread_spin_trylock.c (pthread_spin_trylock): Likewise. * nptl/pthread_spin_unlock.c (pthread_spin_unlock): Use atomic_store_release. * sysdeps/aarch64/nptl/pthread_spin_lock.c: Delete file. * sysdeps/arm/nptl/pthread_spin_lock.c: Likewise. * sysdeps/hppa/nptl/pthread_spin_lock.c: Likewise. * sysdeps/m68k/nptl/pthread_spin_lock.c: Likewise. * sysdeps/microblaze/nptl/pthread_spin_lock.c: Likewise. * sysdeps/mips/nptl/pthread_spin_lock.c: Likewise. * sysdeps/nios2/nptl/pthread_spin_lock.c: Likewise. * sysdeps/aarch64/atomic-machine.h (ATOMIC_EXCHANGE_USES_CAS): Define. * sysdeps/alpha/atomic-machine.h: Likewise. * sysdeps/arm/atomic-machine.h: Likewise. * sysdeps/i386/atomic-machine.h: Likewise. * sysdeps/ia64/atomic-machine.h: Likewise.
* sysdeps/m68k/coldfire/atomic-machine.h: Likewise. * sysdeps/m68k/m680x0/m68020/atomic-machine.h: Likewise. * sysdeps/microblaze/atomic-machine.h: Likewise. * sysdeps/mips/atomic-machine.h: Likewise. * sysdeps/powerpc/powerpc32/atomic-machine.h: Likewise. * sysdeps/powerpc/powerpc64/atomic-machine.h: Likewise. * sysdeps/s390/atomic-machine.h: Likewise. * sysdeps/sparc/sparc32/atomic-machine.h: Likewise. * sysdeps/sparc/sparc32/sparcv9/atomic-machine.h: Likewise. * sysdeps/sparc/sparc64/atomic-machine.h: Likewise. * sysdeps/tile/tilegx/atomic-machine.h: Likewise. * sysdeps/tile/tilepro/atomic-machine.h: Likewise. * sysdeps/unix/sysv/linux/hppa/atomic-machine.h: Likewise. * sysdeps/unix/sysv/linux/m68k/coldfire/atomic-machine.h: Likewise. * sysdeps/unix/sysv/linux/nios2/atomic-machine.h: Likewise. * sysdeps/unix/sysv/linux/sh/atomic-machine.h: Likewise. * sysdeps/x86_64/atomic-machine.h: Likewise. |
||
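In portable C11 terms, the lock/unlock shape this commit describes looks roughly like the sketch below. glibc's real code uses its own atomic_* macro layer, the __glibc_likely hint, and atomic_spin_nop, which are only paraphrased here:

```c
#include <stdatomic.h>

typedef atomic_int spinlock_t;

void
spin_lock (spinlock_t *lock)
{
  /* First try the exchange directly: on an uncontended lock it succeeds
     at once (ATOMIC_EXCHANGE_USES_CAS decides per arch whether this
     first try is an exchange or a CAS).  */
  if (atomic_exchange_explicit (lock, 1, memory_order_acquire) == 0)
    return;
  for (;;)
    {
      /* Spin with plain relaxed loads to avoid hammering the cache
         line with read-modify-writes; retry only when it looks free.  */
      while (atomic_load_explicit (lock, memory_order_relaxed) != 0)
        ;  /* an arch pause/yield hint (atomic_spin_nop) goes here */
      if (atomic_exchange_explicit (lock, 1, memory_order_acquire) == 0)
        return;
    }
}

void
spin_unlock (spinlock_t *lock)
{
  /* Release ordering, not seq-cst, as the commit describes.  */
  atomic_store_explicit (lock, 0, memory_order_release);
}
```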
Steve Ellcey
|
6a2c695266 |
aarch64: Thunderx specific memcpy and memmove
* sysdeps/aarch64/memcpy.S (MEMMOVE, MEMCPY): New macros. (memmove): Use MEMMOVE for name. (memcpy): Use MEMCPY for name. Change internal labels to external labels. * sysdeps/aarch64/multiarch/Makefile: New file. * sysdeps/aarch64/multiarch/ifunc-impl-list.c: Likewise. * sysdeps/aarch64/multiarch/init-arch.h: Likewise. * sysdeps/aarch64/multiarch/memcpy.c: Likewise. * sysdeps/aarch64/multiarch/memcpy_generic.S: Likewise. * sysdeps/aarch64/multiarch/memcpy_thunderx.S: Likewise. * sysdeps/aarch64/multiarch/memmove.c: Likewise. |
||
Adhemerval Zanella
|
eab380d8ec |
Move shared pthread definitions to common headers
This patch removes all the replicated pthread definitions across the architectures and consolidates them in shared headers. The new organization is as follows: * Architecture-specific definitions (such as pthread type sizes) are placed in the new pthreadtypes-arch.h header in the arch-specific path. * All shared structure definitions are moved to a common NPTL header at sysdeps/nptl/bits/pthreadtypes.h (which now includes the arch-specific one for internal definitions). * Also, for future C11 thread support, both mutex and condition definitions are placed in a common header at sysdeps/nptl/bits/thread-shared-types.h. It is a refactoring patch with no expected functional changes. Checked with a build for all major ABI (aarch64-linux-gnu, alpha-linux-gnu, arm-linux-gnueabi, i386-linux-gnu, ia64-linux-gnu, m68k-linux-gnu, microblaze-linux-gnu, mips{64}-linux-gnu, nios2-linux-gnu, powerpc{64le}-linux-gnu, s390{x}-linux-gnu, sparc{64}-linux-gnu, tile{pro,gx}-linux-gnu, and x86_64-linux-gnu). * posix/Makefile (headers): Add pthreadtypes-arch.h and thread-shared-types.h. * sysdeps/aarch64/nptl/bits/pthreadtypes-arch.h: New file: arch specific thread definition. * sysdeps/alpha/nptl/bits/pthreadtypes-arch.h: Likewise. * sysdeps/arm/nptl/bits/pthreadtypes-arch.h: Likewise. * sysdeps/hppa/nptl/bits/pthreadtypes-arch.h: Likewise. * sysdeps/ia64/nptl/bits/pthreadtypes-arch.h: Likewise. * sysdeps/m68k/nptl/bits/pthreadtypes-arch.h: Likewise. * sysdeps/microblaze/nptl/bits/pthreadtypes-arch.h: Likewise. * sysdeps/mips/nptl/bits/pthreadtypes-arch.h: Likewise. * sysdeps/nios2/nptl/bits/pthreadtypes-arch.h: Likewise. * sysdeps/powerpc/nptl/bits/pthreadtypes-arch.h: Likewise. * sysdeps/s390/nptl/bits/pthreadtypes-arch.h: Likewise. * sysdeps/sh/nptl/bits/pthreadtypes-arch.h: Likewise. * sysdeps/sparc/nptl/bits/pthreadtypes-arch.h: Likewise. * sysdeps/tile/nptl/bits/pthreadtypes-arch.h: Likewise. * sysdeps/x86/nptl/bits/pthreadtypes-arch.h: Likewise. * sysdeps/nptl/bits/thread-shared-types.h: New file: shared thread definition between POSIX and C11. * sysdeps/aarch64/nptl/bits/pthreadtypes.h: Remove file. * sysdeps/alpha/nptl/bits/pthreadtypes.h: Likewise. * sysdeps/arm/nptl/bits/pthreadtypes.h: Likewise. * sysdeps/hppa/nptl/bits/pthreadtypes.h: Likewise. * sysdeps/m68k/nptl/bits/pthreadtypes.h: Likewise. * sysdeps/microblaze/nptl/bits/pthreadtypes.h: Likewise. * sysdeps/mips/nptl/bits/pthreadtypes.h: Likewise. * sysdeps/nios2/nptl/bits/pthreadtypes.h: Likewise. * sysdeps/ia64/nptl/bits/pthreadtypes.h: Likewise. * sysdeps/powerpc/nptl/bits/pthreadtypes.h: Likewise. * sysdeps/s390/nptl/bits/pthreadtypes.h: Likewise. * sysdeps/sh/nptl/bits/pthreadtypes.h: Likewise. * sysdeps/sparc/nptl/bits/pthreadtypes.h: Likewise. * sysdeps/tile/nptl/bits/pthreadtypes.h: Likewise. * sysdeps/x86/nptl/bits/pthreadtypes.h: Likewise. * sysdeps/nptl/bits/pthreadtypes.h: New file: common thread definitions shared across all architectures. |
||
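The public types keep the familiar union shape; what moves is where each piece is defined. A self-contained sketch of that layering, with an illustrative size and simplified fields standing in for the real headers' contents:

```c
/* bits/pthreadtypes-arch.h would provide the per-arch constant...  */
#define __SIZEOF_PTHREAD_MUTEX_T 40      /* illustrative value */

/* ...bits/thread-shared-types.h would provide the shared internal
   structure (heavily simplified here)...  */
struct __pthread_mutex_s
{
  int __lock;
  unsigned int __count;
  /* ... */
};

/* ...and the common bits/pthreadtypes.h combines them, so the public
   layout is shared while the size stays per-arch.  */
typedef union
{
  struct __pthread_mutex_s __data;
  char __size[__SIZEOF_PTHREAD_MUTEX_T];
  long int __align;
} pthread_mutex_t;
```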
Szabolcs Nagy
|
b737847f87 |
[AArch64] Update libm-test-ulps
* sysdeps/aarch64/libm-test-ulps: Update. |
||
Steve Ellcey
|
d2e4346a30 |
Add ifunc support for aarch64.
* sysdeps/aarch64/dl-machine.h: Include cpu-features.c. (DL_PLATFORM_INIT): New define. (dl_platform_init): New function. * sysdeps/aarch64/ldsodefs.h: Include cpu-features.h. * sysdeps/unix/sysv/linux/aarch64/cpu-features.c: New file. * sysdeps/unix/sysv/linux/aarch64/cpu-features.h: Likewise. * sysdeps/unix/sysv/linux/aarch64/dl-procinfo.c: Likewise. * sysdeps/unix/sysv/linux/aarch64/libc-start.c: Likewise. |
||
Torvald Riegel
|
cc25c8b4c1 |
New pthread rwlock that is more scalable.
This replaces the pthread rwlock with a new implementation that uses a more scalable algorithm (primarily through not using a critical section anymore to make state changes). The fast path for rdlock acquisition and release is now basically a single atomic read-modify-write or CAS and a few branches. See nptl/pthread_rwlock_common.c for details. * nptl/DESIGN-rwlock.txt: Remove. * nptl/lowlevelrwlock.sym: Remove. * nptl/Makefile: Add new tests. * nptl/pthread_rwlock_common.c: New file. Contains the new rwlock. * nptl/pthreadP.h (PTHREAD_RWLOCK_PREFER_READER_P): Remove. (PTHREAD_RWLOCK_WRPHASE, PTHREAD_RWLOCK_WRLOCKED, PTHREAD_RWLOCK_RWAITING, PTHREAD_RWLOCK_READER_SHIFT, PTHREAD_RWLOCK_READER_OVERFLOW, PTHREAD_RWLOCK_WRHANDOVER, PTHREAD_RWLOCK_FUTEX_USED): New. * nptl/pthread_rwlock_init.c (__pthread_rwlock_init): Adapt to new implementation. * nptl/pthread_rwlock_rdlock.c (__pthread_rwlock_rdlock_slow): Remove. (__pthread_rwlock_rdlock): Adapt. * nptl/pthread_rwlock_timedrdlock.c (pthread_rwlock_timedrdlock): Adapt. * nptl/pthread_rwlock_timedwrlock.c (pthread_rwlock_timedwrlock): Adapt. * nptl/pthread_rwlock_trywrlock.c (pthread_rwlock_trywrlock): Adapt. * nptl/pthread_rwlock_tryrdlock.c (pthread_rwlock_tryrdlock): Adapt. * nptl/pthread_rwlock_unlock.c (pthread_rwlock_unlock): Adapt. * nptl/pthread_rwlock_wrlock.c (__pthread_rwlock_wrlock_slow): Remove. (__pthread_rwlock_wrlock): Adapt. * nptl/tst-rwlock10.c: Adapt. * nptl/tst-rwlock11.c: Adapt. * nptl/tst-rwlock17.c: New file. * nptl/tst-rwlock18.c: New file. * nptl/tst-rwlock19.c: New file. * nptl/tst-rwlock2b.c: New file. * nptl/tst-rwlock8.c: Adapt. * nptl/tst-rwlock9.c: Adapt. * sysdeps/aarch64/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt. * sysdeps/arm/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt. * sysdeps/hppa/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt. * sysdeps/ia64/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt. * sysdeps/m68k/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt. * sysdeps/microblaze/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt. * sysdeps/mips/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt. * sysdeps/nios2/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt. * sysdeps/s390/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt. * sysdeps/sh/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt. * sysdeps/sparc/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt. * sysdeps/tile/nptl/bits/pthreadtypes.h (pthread_rwlock_t): Adapt. * sysdeps/unix/sysv/linux/alpha/bits/pthreadtypes.h (pthread_rwlock_t): Adapt. * sysdeps/unix/sysv/linux/powerpc/bits/pthreadtypes.h (pthread_rwlock_t): Adapt. * sysdeps/x86/bits/pthreadtypes.h (pthread_rwlock_t): Adapt. * nptl/nptl-printers.py (): Adapt. * nptl/nptl_lock_constants.pysym: Adapt. * nptl/test-rwlock-printers.py: Adapt. * nptl/test-rwlockattr-printers.c: Adapt. * nptl/test-rwlockattr-printers.py: Adapt. |
||
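To make "a single atomic read-modify-write" concrete, here is a hedged reader fast path in the same spirit. The bit layout is illustrative, only the try/unlock pair is shown, and the real algorithm (read/write phases, futex waiting, handoff) is in nptl/pthread_rwlock_common.c:

```c
#include <stdatomic.h>
#include <stdbool.h>

#define WRLOCKED 1u   /* illustrative: low bit set while a writer holds it */
#define READER   2u   /* illustrative: reader count lives above that bit   */

static atomic_uint rw_state;

bool
try_rdlock (void)
{
  /* One fetch-add both registers this reader and observes the writer
     bit; back out if a writer was active.  */
  unsigned int s = atomic_fetch_add_explicit (&rw_state, READER,
                                              memory_order_acquire);
  if (s & WRLOCKED)
    {
      atomic_fetch_sub_explicit (&rw_state, READER, memory_order_relaxed);
      return false;
    }
  return true;
}

void
rd_unlock (void)
{
  atomic_fetch_sub_explicit (&rw_state, READER, memory_order_release);
}
```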
Joseph Myers
|
bfff8b1bec | Update copyright dates with scripts/update-copyrights. | ||
Torvald Riegel
|
ed19993b5b |
New condvar implementation that provides stronger ordering guarantees.
This is a new implementation for condition variables, required after http://austingroupbugs.net/view.php?id=609 to fix bug 13165. In essence, we need to be stricter in which waiters a signal or broadcast is required to wake up; this couldn't be solved using the old algorithm. ISO C++ made a similar clarification, so this also fixes a bug in current libstdc++, for example. We can't use the old algorithm anymore because futexes do not guarantee to wake in FIFO order. Thus, when we wake, we can't simply let any waiter grab a signal, but we need to ensure that one of the waiters that started waiting before the signal is woken up. This is something the previous algorithm violated (see bug 13165). There's another issue specific to condvars: ABA issues on the underlying futexes. Unlike mutexes that have just three states, or semaphores that have no tokens or a limited number of them, the state of a condvar is the *order* of the waiters. A waiter on a semaphore can grab a token whenever one is available; a condvar waiter must only consume a signal if it is eligible to do so as determined by the relative order of the waiter and the signal. Therefore, this new algorithm maintains two groups of waiters: those eligible to consume signals (G1), and those that have to wait until previous waiters have consumed signals (G2). Once G1 is empty, G2 becomes the new G1. 64-bit counters are used to avoid ABA issues. This condvar doesn't yet use a requeue optimization (ie, on a broadcast, waking just one thread and requeueing all others on the futex of the mutex supplied by the program). I don't think doing the requeue is necessarily the right approach (but I haven't done real measurements yet): * If a program expects to wake many threads at the same time and make that scalable, a condvar isn't great anyway because of how it requires waiters to operate mutually exclusively (due to the mutex usage). Thus, a thundering herd problem is a scalability problem with or without the optimization. Using something like a semaphore might be more appropriate in such a case. * The scalability problem is actually at the mutex side; the condvar could help (and it tries to with the requeue optimization), but it should be the mutex that decides how that is done, and whether it is done at all. * Forcing all but one waiter into the kernel-side wait queue of the mutex prevents/avoids the use of lock elision on the mutex. Thus, it prevents the only cure against the underlying scalability problem inherent to condvars. * If condvars use short critical sections (ie, hold the mutex just to check a binary flag or such), which they should do ideally, then forcing all those waiters to proceed serially with kernel-based hand-off (ie, futex ops in the mutex's contended state, via the futex wait queues) will be less efficient than just letting a scalable mutex implementation take care of it. Our current mutex impl doesn't employ spinning at all, but if critical sections are short, spinning can be much better. * Doing the requeue stuff requires all waiters to always drive the mutex into the contended state. This leads to each waiter having to call futex_wake after lock release, even if this wouldn't be necessary. [BZ #13165] * nptl/pthread_cond_broadcast.c (__pthread_cond_broadcast): Rewrite to use new algorithm. * nptl/pthread_cond_destroy.c (__pthread_cond_destroy): Likewise. * nptl/pthread_cond_init.c (__pthread_cond_init): Likewise. * nptl/pthread_cond_signal.c (__pthread_cond_signal): Likewise. * nptl/pthread_cond_wait.c (__pthread_cond_wait): Likewise.
(__pthread_cond_timedwait): Move here from pthread_cond_timedwait.c. (__condvar_confirm_wakeup, __condvar_cancel_waiting, __condvar_cleanup_waiting, __condvar_dec_grefs, __pthread_cond_wait_common): New. (__condvar_cleanup): Remove. * nptl/pthread_condattr_getclock.c (pthread_condattr_getclock): Adapt. * nptl/pthread_condattr_setclock.c (pthread_condattr_setclock): Likewise. * nptl/pthread_condattr_getpshared.c (pthread_condattr_getpshared): Likewise. * nptl/pthread_condattr_init.c (pthread_condattr_init): Likewise. * nptl/tst-cond1.c: Add comment. * nptl/tst-cond20.c (do_test): Adapt. * nptl/tst-cond22.c (do_test): Likewise. * sysdeps/aarch64/nptl/bits/pthreadtypes.h (pthread_cond_t): Adapt structure. * sysdeps/arm/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise. * sysdeps/ia64/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise. * sysdeps/m68k/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise. * sysdeps/microblaze/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise. * sysdeps/mips/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise. * sysdeps/nios2/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise. * sysdeps/s390/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise. * sysdeps/sh/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise. * sysdeps/tile/nptl/bits/pthreadtypes.h (pthread_cond_t): Likewise. * sysdeps/unix/sysv/linux/alpha/bits/pthreadtypes.h (pthread_cond_t): Likewise. * sysdeps/unix/sysv/linux/powerpc/bits/pthreadtypes.h (pthread_cond_t): Likewise. * sysdeps/x86/bits/pthreadtypes.h (pthread_cond_t): Likewise. * sysdeps/nptl/internaltypes.h (COND_NWAITERS_SHIFT): Remove. (COND_CLOCK_BITS): Adapt. * sysdeps/nptl/pthread.h (PTHREAD_COND_INITIALIZER): Adapt. * nptl/pthreadP.h (__PTHREAD_COND_CLOCK_MONOTONIC_MASK, __PTHREAD_COND_SHARED_MASK): New. * nptl/nptl-printers.py (CLOCK_IDS): Remove. (ConditionVariablePrinter, ConditionVariableAttributesPrinter): Adapt. * nptl/nptl_lock_constants.pysym: Adapt. * nptl/test-cond-printers.py: Adapt. * sysdeps/unix/sysv/linux/hppa/internaltypes.h (cond_compat_clear, cond_compat_check_and_clear): Adapt. * sysdeps/unix/sysv/linux/hppa/pthread_cond_timedwait.c: Remove file ... * sysdeps/unix/sysv/linux/hppa/pthread_cond_wait.c (__pthread_cond_timedwait): ... and move here. * nptl/DESIGN-condvar.txt: Remove file. * nptl/lowlevelcond.sym: Likewise. * nptl/pthread_cond_timedwait.c: Likewise. * sysdeps/unix/sysv/linux/i386/i486/pthread_cond_broadcast.S: Likewise. * sysdeps/unix/sysv/linux/i386/i486/pthread_cond_signal.S: Likewise. * sysdeps/unix/sysv/linux/i386/i486/pthread_cond_timedwait.S: Likewise. * sysdeps/unix/sysv/linux/i386/i486/pthread_cond_wait.S: Likewise. * sysdeps/unix/sysv/linux/i386/i586/pthread_cond_broadcast.S: Likewise. * sysdeps/unix/sysv/linux/i386/i586/pthread_cond_signal.S: Likewise. * sysdeps/unix/sysv/linux/i386/i586/pthread_cond_timedwait.S: Likewise. * sysdeps/unix/sysv/linux/i386/i586/pthread_cond_wait.S: Likewise. * sysdeps/unix/sysv/linux/i386/i686/pthread_cond_broadcast.S: Likewise. * sysdeps/unix/sysv/linux/i386/i686/pthread_cond_signal.S: Likewise. * sysdeps/unix/sysv/linux/i386/i686/pthread_cond_timedwait.S: Likewise. * sysdeps/unix/sysv/linux/i386/i686/pthread_cond_wait.S: Likewise. * sysdeps/unix/sysv/linux/x86_64/pthread_cond_broadcast.S: Likewise. * sysdeps/unix/sysv/linux/x86_64/pthread_cond_signal.S: Likewise. * sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S: Likewise. * sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S: Likewise. |
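A rough, purely illustrative model of the G1/G2 bookkeeping described above (field and function names here are hypothetical, not the actual nptl internals): waiters take 64-bit positions, and only positions before the current group boundary may consume signals.

    #include <stdint.h>

    /* Illustrative model only, not the glibc implementation.  */
    struct cv_sketch
    {
      uint64_t wseq;     /* next position handed to an arriving waiter */
      uint64_t g2_start; /* waiters at or after this position are in G2 */
    };

    /* A waiter at MY_POS may consume a signal only while it is in G1;
       G2 waiters block until G1 is drained and the groups switch.  The
       64-bit positions never wrap in practice, which avoids ABA.  */
    static int
    eligible_for_signal (const struct cv_sketch *cv, uint64_t my_pos)
    {
      return my_pos < cv->g2_start;
    }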
||
Joseph Myers
|
0acb8a2a85 |
Refactor long double information into bits/long-double.h.
Information about whether the ABI of long double is the same as that of double is split between bits/mathdef.h and bits/wordsize.h. When the ABIs are the same, bits/mathdef.h defines __NO_LONG_DOUBLE_MATH. In addition, in the case where the same glibc binary supports both -mlong-double-64 and -mlong-double-128, bits/wordsize.h defines __LONG_DOUBLE_MATH_OPTIONAL, along with __NO_LONG_DOUBLE_MATH if this particular compilation is with -mlong-double-64. As part of the refactoring I proposed in <https://sourceware.org/ml/libc-alpha/2016-11/msg00745.html>, this patch puts all that information in a single header, bits/long-double.h. It is included from sys/cdefs.h alongside the include of bits/wordsize.h, so other headers generally do not need to include bits/long-double.h directly. Previously, various bits/mathdef.h headers and bits/wordsize.h headers had this long double information (including implicitly in some bits/mathdef.h headers through not having the defines present in the default version). After the patch, it's all in six bits/long-double.h headers. Furthermore, most of those new headers are not architecture-specific. Architectures with optional long double all use the ldbl-opt sysdeps directory, either in the order (ldbl-64-128, ldbl-opt, ldbl-128) or (ldbl-128ibm, ldbl-opt). Thus a generic header for the case where long double = double, and headers in ldbl-128, ldbl-96 and ldbl-opt, suffices to cover every architecture except for cases where long double properties vary between different ABIs sharing a set of installed headers; fortunately all the ldbl-opt cases share a single compiler-predefined macro __LONG_DOUBLE_128__ that can be used to tell whether this compilation is -mlong-double-64 or -mlong-double-128. The two cases where a set of headers is shared between ABIs with different long double properties, MIPS (o32 has long double = double, other ABIs use ldbl-128) and SPARC (32-bit has optional long double, 64-bit has required long double), need their own bits/long-double.h headers. As with bits/wordsize.h, multiple-include protection for this header is generally implicit through the include guards on sys/cdefs.h, and multiple inclusion is harmless in any case. There is one subtlety: the header must not define __LONG_DOUBLE_MATH_OPTIONAL if __NO_LONG_DOUBLE_MATH was defined before its inclusion, because doing so breaks how sysdeps/ieee754/ldbl-opt/nldbl-compat.h defines __NO_LONG_DOUBLE_MATH itself before including system headers. Subject to keeping that working, it would be reasonable to move these macros from defined/undefined #ifdef to always-defined 1/0 #if semantics, but this patch does not attempt to do so, just rearranges where the macros are defined. After this patch, the only use of bits/mathdef.h is the alpha one for modifying complex function ABIs for old GCC. Thus, all versions of the header other than the default and alpha versions are removed, as is the include from math.h. Tested for x86_64 and x86. Also did compilation-only testing with build-many-glibcs.py. * bits/long-double.h: New file. * sysdeps/ieee754/ldbl-128/bits/long-double.h: Likewise. * sysdeps/ieee754/ldbl-96/bits/long-double.h: Likewise. * sysdeps/ieee754/ldbl-opt/bits/long-double.h: Likewise. * sysdeps/mips/bits/long-double.h: Likewise. * sysdeps/unix/sysv/linux/sparc/bits/long-double.h: Likewise. * math/Makefile (headers): Add bits/long-double.h. * misc/sys/cdefs.h: Include <bits/long-double.h>. * stdlib/strtold.c: Include <bits/long-double.h> instead of <bits/wordsize.h>. 
* bits/mathdef.h [!_COMPLEX_H]: Do not allow inclusion. [!__NO_LONG_DOUBLE_MATH]: Remove conditional code. * math/math.h: Do not include <bits/mathdef.h>. * sysdeps/aarch64/bits/mathdef.h: Remove file. * sysdeps/alpha/bits/mathdef.h [!_COMPLEX_H]: Do not allow inclusion. * sysdeps/ia64/bits/mathdef.h: Remove file. * sysdeps/m68k/m680x0/bits/mathdef.h: Likewise. * sysdeps/mips/bits/mathdef.h: Likewise. * sysdeps/powerpc/bits/mathdef.h: Likewise. * sysdeps/s390/bits/mathdef.h: Likewise. * sysdeps/sparc/bits/mathdef.h: Likewise. * sysdeps/x86/bits/mathdef.h: Likewise. * sysdeps/s390/s390-32/bits/wordsize.h [!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]: Remove conditional code. * sysdeps/s390/s390-64/bits/wordsize.h [!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]: Likewise. * sysdeps/unix/sysv/linux/alpha/bits/wordsize.h [!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]: Likewise. * sysdeps/unix/sysv/linux/powerpc/bits/wordsize.h [!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]: Likewise. * sysdeps/unix/sysv/linux/sparc/bits/wordsize.h [!__NO_LONG_DOUBLE_MATH && !__LONG_DOUBLE_MATH_OPTIONAL]: Likewise. |
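A sketch of the ldbl-opt flavor of the new header, reconstructed from the description above (not quoted from the tree): it must leave a pre-set __NO_LONG_DOUBLE_MATH alone, for the sake of nldbl-compat.h, and otherwise keys off the compiler's __LONG_DOUBLE_128__.

    /* Sketch of an ldbl-opt style bits/long-double.h.  */
    #ifndef __NO_LONG_DOUBLE_MATH    /* respect a pre-existing definition */
    # define __LONG_DOUBLE_MATH_OPTIONAL 1
    # ifndef __LONG_DOUBLE_128__     /* this compilation is -mlong-double-64 */
    #  define __NO_LONG_DOUBLE_MATH 1
    # endif
    #endif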
||
Florian Weimer
|
67aae64512 |
aarch64: Use explicit offsets in _dl_tlsdesc_dynamic
Commit
|
||
Joseph Myers
|
b2491db6c8 |
Refactor FP_ILOGB* out of bits/mathdef.h.
Continuing the refactoring of bits/mathdef.h, this patch stops it defining FP_ILOGB0 and FP_ILOGBNAN, moving the required information to a new header bits/fp-logb.h. There are only two possible values of each of those macros permitted by ISO C. TS 18661-1 adds corresponding macros for llogb, and their values are required to correspond to those of the ilogb macros in the obvious way. Thus two boolean values - for which the same choices are correct for most architectures - suffice to determine the value of all these macros, and by defining macros for those boolean values in bits/fp-logb.h we can then define the public FP_* macros in math.h and avoid the present duplication of the associated feature test macro logic. This patch duly moves to bits/fp-logb.h defining __FP_LOGB0_IS_MIN and __FP_LOGBNAN_IS_MIN. Default definitions of those to 0 are correct for most architectures, while ia64, m68k and x86 get their own versions of bits/fp-logb.h to reflect their use of values different from the defaults. The patch renders many copies of bits/mathdef.h trivial (needed only to avoid the default __NO_LONG_DOUBLE_MATH). I'll revise <https://sourceware.org/ml/libc-alpha/2016-11/msg00865.html> accordingly so that it removes all bits/mathdef.h headers except the default one and the alpha one, and arranges for the header to be included only by complex.h as the only remaining use at that point will be for the alpha ABI issues there. Tested for x86_64 and x86. Also did compile-only testing with build-many-glibcs.py (using glibc sources from before the commit that introduced many build failures with undefined __GI___sigsetjmp). * bits/fp-logb.h: New file. * sysdeps/ia64/bits/fp-logb.h: Likewise. * sysdeps/m68k/m680x0/bits/fp-logb.h: Likewise. * sysdeps/x86/bits/fp-logb.h: Likewise. * math/Makefile (headers): Add bits/fp-logb.h. * math/math.h: Include <bits/fp-logb.h>. [__USE_ISOC99] (FP_ILOGB0): Define based on __FP_LOGB0_IS_MIN. [__USE_ISOC99] (FP_ILOGBNAN): Define based on __FP_LOGBNAN_IS_MIN. * bits/mathdef.h (FP_ILOGB0): Remove. (FP_ILOGBNAN): Likewise. * sysdeps/aarch64/bits/mathdef.h (FP_ILOGB0): Likewise. (FP_ILOGBNAN): Likewise. * sysdeps/alpha/bits/mathdef.h (FP_ILOGB0): Likewise. (FP_ILOGBNAN): Likewise. * sysdeps/ia64/bits/mathdef.h (FP_ILOGB0): Likewise. (FP_ILOGBNAN): Likewise. * sysdeps/m68k/m680x0/bits/mathdef.h (FP_ILOGB0): Likewise. (FP_ILOGBNAN): Likewise. * sysdeps/mips/bits/mathdef.h (FP_ILOGB0): Likewise. (FP_ILOGBNAN): Likewise. * sysdeps/powerpc/bits/mathdef.h (FP_ILOGB0): Likewise. (FP_ILOGBNAN): Likewise. * sysdeps/s390/bits/mathdef.h (FP_ILOGB0): Likewise. (FP_ILOGBNAN): Likewise. * sysdeps/sparc/bits/mathdef.h (FP_ILOGB0): Likewise. (FP_ILOGBNAN): Likewise. * sysdeps/x86/bits/mathdef.h (FP_ILOGB0): Likewise. (FP_ILOGBNAN): Likewise. |
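In effect the split described above amounts to the following (a sketch based on the commit text; the real headers may be formatted differently). ISO C permits FP_ILOGB0 to be INT_MIN or -INT_MAX, and FP_ILOGBNAN to be INT_MIN or INT_MAX, hence the two booleans.

    /* bits/fp-logb.h, default version: */
    #define __FP_LOGB0_IS_MIN   0
    #define __FP_LOGBNAN_IS_MIN 0

    /* math.h then derives the public macros, roughly: */
    #if __FP_LOGB0_IS_MIN
    # define FP_ILOGB0   (-2147483647 - 1)   /* INT_MIN */
    #else
    # define FP_ILOGB0   (-2147483647)       /* -INT_MAX */
    #endif
    #if __FP_LOGBNAN_IS_MIN
    # define FP_ILOGBNAN (-2147483647 - 1)   /* INT_MIN */
    #else
    # define FP_ILOGBNAN 2147483647          /* INT_MAX */
    #endif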
||
Joseph Myers
|
f11e220d2d |
Refactor FP_FAST_* into bits/fp-fast.h.
Continuing the refactoring of bits/mathdef.h, this patch moves the FP_FAST_* definitions into a new bits/fp-fast.h header. Currently this is only for FP_FAST_FMA*, but in future it would be the appropriate place for the FP_FAST_* macros from TS 18661-1 as well. The generic bits/mathdef.h header defines these macros based on whether the compiler defines __FP_FAST_*. Most architecture-specific headers, however, fail to do so, meaning that if the architecture (or some particular processors) does in fact have fused operations, and GCC knows to use them inline, the FP_FAST_* macros will still not be defined. By refactoring, this patch causes the generic version (based on __FP_FAST_*) to be used in more cases, and so the macro definitions to be more accurate. Architectures that already defined some or all of these macros other than based on the predefines have their own versions of fp-fast.h, which are arranged so they define FP_FAST_* if either the architecture-specific conditions are true or __FP_FAST_* are defined. After this refactoring, various bits/mathdef.h headers for architectures with long double = double are semantically identical to the generic version. The patch removes those headers that are redundant. (In fact two of the four removed were already redundant before this patch because they did use __FP_FAST_*.) Tested for x86_64 and x86, and compilation-only with build-many-glibcs.py. * bits/fp-fast.h: New file. * sysdeps/aarch64/bits/fp-fast.h: Likewise. * sysdeps/powerpc/bits/fp-fast.h: Likewise. * math/Makefile (headers): Add bits/fp-fast.h. * math/math.h: Include <bits/fp-fast.h>. * bits/mathdef.h (FP_FAST_FMA): Remove. (FP_FAST_FMAF): Likewise. (FP_FAST_FMAL): Likewise. * sysdeps/aarch64/bits/mathdef.h (FP_FAST_FMA): Likewise. (FP_FAST_FMAF): Likewise. * sysdeps/powerpc/bits/mathdef.h (FP_FAST_FMA): Likewise. (FP_FAST_FMAF): Likewise. * sysdeps/x86/bits/mathdef.h (FP_FAST_FMA): Likewise. (FP_FAST_FMAF): Likewise. (FP_FAST_FMAL): Likewise. * sysdeps/arm/bits/mathdef.h: Remove file. * sysdeps/hppa/fpu/bits/mathdef.h: Likewise. * sysdeps/sh/sh4/bits/mathdef.h: Likewise. * sysdeps/tile/bits/mathdef.h: Likewise. |
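The generic logic described above, which trusts the compiler's predefines, amounts to something like this sketch:

    /* bits/fp-fast.h, generic version: define FP_FAST_* whenever the
       compiler advertises fused operations via __FP_FAST_*.  */
    #ifdef __FP_FAST_FMA
    # define FP_FAST_FMA 1
    #endif
    #ifdef __FP_FAST_FMAF
    # define FP_FAST_FMAF 1
    #endif
    #ifdef __FP_FAST_FMAL
    # define FP_FAST_FMAL 1
    #endif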
||
Steve Ellcey
|
389d1f1b23 |
Partial ILP32 support for aarch64.
* sysdeps/aarch64/crti.S: Add include of sysdep.h. (call_weak_fn): Use PTR_REG to get correct reg name in ILP32. * sysdeps/aarch64/dl-irel.h: Add include of sysdep.h. (elf_irela): Use AARCH64_R macro to get correct relocation in ILP32. * sysdeps/aarch64/dl-machine.h: Add include of sysdep.h. (elf_machine_load_address, RTLD_START, RTLD_START_1, RTLD_START, elf_machine_type_class, ELF_MACHINE_JMP_SLOT, elf_machine_rela, elf_machine_lazy_rel): Add ifdef's for ILP32 support. * sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_return, _dl_tlsdesc_return_lazy, _dl_tlsdesc_dynamic, _dl_tlsdesc_resolve_hold): Extend pointers in ILP32, use PTR_REG to get correct reg name for ILP32. * sysdeps/aarch64/dl-trampoline.S (ip01): New Macro. (RELA_SIZE): New Macro. (_dl_runtime_resolve, _dl_runtime_profile): Use new macros and PTR_REG to support ILP32. * sysdeps/aarch64/jmpbuf-unwind.h (_JMPBUF_CFA_UNWINDS_ADJ): Add cast for ILP32 mode. * sysdeps/aarch64/memcmp.S (memcmp): Extend arg pointers for ILP32 mode. * sysdeps/aarch64/memcpy.S (memmove, memcpy): Ditto. * sysdeps/aarch64/memset.S (__memset): Ditto. * sysdeps/aarch64/strchr.S (strchr): Ditto. * sysdeps/aarch64/strchrnul.S (__strchrnul): Ditto. * sysdeps/aarch64/strcmp.S (strcmp): Ditto. * sysdeps/aarch64/strcpy.S (strcpy): Ditto. * sysdeps/aarch64/strlen.S (__strlen): Ditto. * sysdeps/aarch64/strncmp.S (strncmp): Ditto. * sysdeps/aarch64/strnlen.S (strnlen): Ditto. * sysdeps/aarch64/strrchr.S (strrchr): Ditto. * sysdeps/unix/sysv/linux/aarch64/clone.S: Ditto. * sysdeps/unix/sysv/linux/aarch64/setcontext.S (__setcontext): Ditto. * sysdeps/unix/sysv/linux/aarch64/swapcontext.S (__swapcontext): Ditto. * sysdeps/aarch64/__longjmp.S (__longjmp): Extend pointers in ILP32, change PTR_MANGLE call to use register numbers instead of names. * sysdeps/unix/sysv/linux/aarch64/getcontext.S (__getcontext): Ditto. * sysdeps/aarch64/setjmp.S (__sigsetjmp): Extend arg pointers for ILP32 mode, change PTR_MANGLE calls to use register numbers. * sysdeps/aarch64/start.S (_start): Ditto. * sysdeps/aarch64/nptl/bits/pthreadtypes.h (__PTHREAD_RWLOCK_INT_FLAGS_SHARED): New define. (__SIZEOF_PTHREAD_ATTR_T, __SIZEOF_PTHREAD_MUTEX_T, __SIZEOF_PTHREAD_MUTEXATTR_T, __SIZEOF_PTHREAD_COND_T, __SIZEOF_PTHREAD_COND_COMPAT_T, __SIZEOF_PTHREAD_CONDATTR_T, __SIZEOF_PTHREAD_RWLOCK_T, __SIZEOF_PTHREAD_RWLOCKATTR_T, __SIZEOF_PTHREAD_BARRIER_T, __SIZEOF_PTHREAD_BARRIERATTR_T): Make defined values dependent on __ILP32__. * sysdeps/aarch64/nptl/bits/semaphore.h (__SIZEOF_SEM_T): Change define. (sem_t): Change __align type. * sysdeps/aarch64/sysdep.h (AARCH64_R, PTR_REG, PTR_LOG_SIZE, DELOUSE, PTR_SIZE): New Macros. (LDST_PCREL, LDST_GLOBAL) Update to use PTR_REG. * sysdeps/unix/sysv/linux/aarch64/bits/fcntl.h (O_LARGEFILE): Set when in ILP32 mode. (F_GETLK64, F_SETLK64, F_SETLKW64): Only set in LP64 mode. * sysdeps/unix/sysv/linux/aarch64/dl-cache.h (DL_CACHE_DEFAULT_ID): Set elf flags for ILP32. (add_system_dir): Set ILP32 library directories. * sysdeps/unix/sysv/linux/aarch64/init-first.c (_libc_vdso_platform_setup): Set minimum kernel version for ILP32. * sysdeps/unix/sysv/linux/aarch64/ldconfig.h (SYSDEP_KNOWN_INTERPRETER_NAMES): Add ILP32 names. * sysdeps/unix/sysv/linux/aarch64/sigcontextinfo.h (GET_PC, SET_PC): New Macros. * sysdeps/unix/sysv/linux/aarch64/sysdep.h: Handle ILP32 pointers. |
||
Adhemerval Zanella
|
c579f48edb |
Remove cached PID/TID in clone
This patch removes the PID cache and its usage from current GLIBC code. The cache is mainly used as a performance optimization to avoid the syscall; however, it adds some issues: - The exposed clone syscall will try to set pid/tid to make the new thread somewhat compatible with current GLIBC assumptions. This causes a set of issues with new workloads and use cases (such as BZ#17214 and [1]) as well as with new internal uses of clone to optimize other algorithms (such as clone plus CLONE_VM for posix_spawn, BZ#19957). - The caching complexity also added some bugs in the past [2] [3] and requires extra effort from each port to handle such requirements (for both the clone and vfork implementations). - The caching performance gain is mainly on getpid and some specific code paths. The getpid performance leverage is questionable [4], both for the idea of getpid being a hotspot and for the getpid implementation itself (if it is indeed a justifiable hotspot, a vDSO symbol would allow a much simpler solution). Other usage is mainly on unusual code paths, such as pthread cancellation signal handling. For thread creation (in stack allocation) the code simplification in fact adds some performance gain, since there is no need to traverse the stack cache and invalidate each element's pid. Other thread usages will require a direct getpid syscall, such as the cancellation/setxid signal, thread cancellation, the thread failure path (at create_thread), and thread signals (pthread_kill and pthread_sigqueue). However, these are hardly usual hotspots, and I think adding a syscall there is justifiable. It also simplifies both the clone and vfork arch-specific implementations. Reviewing each fork implementation also revealed some discrepancies that this patch solves: - microblaze clone/vfork does not set/reset the pid/tid field; - hppa uses the default vfork implementation that falls back to fork. Since vfork is deprecated, I do not think we should bother with it. The patch also removes the TID caching in clone. My understanding of that semantic is that it tried to provide some pthread usability after a user program issues clone directly (as done by thread creation with CLONE_PARENT_SETTID and the pthread tid member). However, as stated before in multiple discussion threads, GLIBC provides the clone syscall without further supporting all of these semantics. I ran a full make check on x86_64, x32, i686, armhf, aarch64, and powerpc64le. For sparc32, sparc64, and mips I ran the basic fork and vfork tests from the posix/ folder (on a qemu system). So it would require further testing on alpha, hppa, ia64, m68k, nios2, s390, sh, and tile (I excluded microblaze because it already implements the patch's semantics regarding clone/vfork). [1] https://codereview.chromium.org/800183004/ [2] https://sourceware.org/ml/libc-alpha/2006-07/msg00123.html [3] https://sourceware.org/bugzilla/show_bug.cgi?id=15368 [4] http://yarchive.net/comp/linux/getpid_caching.html * sysdeps/nptl/fork.c (__libc_fork): Remove pid cache setting. * nptl/allocatestack.c (allocate_stack): Likewise. (__reclaim_stacks): Likewise. (setxid_signal_thread): Obtain pid through syscall. * nptl/nptl-init.c (sigcancel_handler): Likewise. (sighandle_setxid): Likewise. * nptl/pthread_cancel.c (pthread_cancel): Likewise. * sysdeps/unix/sysv/linux/pthread_kill.c (__pthread_kill): Likewise. * sysdeps/unix/sysv/linux/pthread_sigqueue.c (pthread_sigqueue): Likewise. * sysdeps/unix/sysv/linux/createthread.c (create_thread): Likewise. * sysdeps/unix/sysv/linux/getpid.c: Remove file. 
* nptl/descr.h (struct pthread): Change comment about pid value. * nptl/pthread_getattr_np.c (pthread_getattr_np): Remove thread pid assert. * sysdeps/unix/sysv/linux/pthread-pids.h (__pthread_initialize_pids): Do not set pid value. * nptl_db/td_ta_thr_iter.c (iterate_thread_list): Remove thread pid cache check. * nptl_db/td_thr_validate.c (td_thr_validate): Likewise. * sysdeps/aarch64/nptl/tcb-offsets.sym: Remove pid offset. * sysdeps/alpha/nptl/tcb-offsets.sym: Likewise. * sysdeps/arm/nptl/tcb-offsets.sym: Likewise. * sysdeps/hppa/nptl/tcb-offsets.sym: Likewise. * sysdeps/i386/nptl/tcb-offsets.sym: Likewise. * sysdeps/ia64/nptl/tcb-offsets.sym: Likewise. * sysdeps/m68k/nptl/tcb-offsets.sym: Likewise. * sysdeps/microblaze/nptl/tcb-offsets.sym: Likewise. * sysdeps/mips/nptl/tcb-offsets.sym: Likewise. * sysdeps/nios2/nptl/tcb-offsets.sym: Likewise. * sysdeps/powerpc/nptl/tcb-offsets.sym: Likewise. * sysdeps/s390/nptl/tcb-offsets.sym: Likewise. * sysdeps/sh/nptl/tcb-offsets.sym: Likewise. * sysdeps/sparc/nptl/tcb-offsets.sym: Likewise. * sysdeps/tile/nptl/tcb-offsets.sym: Likewise. * sysdeps/x86_64/nptl/tcb-offsets.sym: Likewise. * sysdeps/unix/sysv/linux/aarch64/clone.S: Remove pid and tid caching. * sysdeps/unix/sysv/linux/alpha/clone.S: Likewise. * sysdeps/unix/sysv/linux/arm/clone.S: Likewise. * sysdeps/unix/sysv/linux/hppa/clone.S: Likewise. * sysdeps/unix/sysv/linux/i386/clone.S: Likewise. * sysdeps/unix/sysv/linux/ia64/clone2.S: Likewise. * sysdeps/unix/sysv/linux/mips/clone.S: Likewise. * sysdeps/unix/sysv/linux/nios2/clone.S: Likewise. * sysdeps/unix/sysv/linux/powerpc/powerpc32/clone.S: Likewise. * sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S: Likewise. * sysdeps/unix/sysv/linux/s390/s390-32/clone.S: Likewise. * sysdeps/unix/sysv/linux/s390/s390-64/clone.S: Likewise. * sysdeps/unix/sysv/linux/sh/clone.S: Likewise. * sysdeps/unix/sysv/linux/sparc/sparc32/clone.S: Likewise. * sysdeps/unix/sysv/linux/sparc/sparc64/clone.S: Likewise. * sysdeps/unix/sysv/linux/tile/clone.S: Likewise. * sysdeps/unix/sysv/linux/x86_64/clone.S: Likewise. * sysdeps/unix/sysv/linux/aarch64/vfork.S: Remove pid set and reset. * sysdeps/unix/sysv/linux/alpha/vfork.S: Likewise. * sysdeps/unix/sysv/linux/arm/vfork.S: Likewise. * sysdeps/unix/sysv/linux/i386/vfork.S: Likewise. * sysdeps/unix/sysv/linux/ia64/vfork.S: Likewise. * sysdeps/unix/sysv/linux/m68k/clone.S: Likewise. * sysdeps/unix/sysv/linux/m68k/vfork.S: Likewise. * sysdeps/unix/sysv/linux/mips/vfork.S: Likewise. * sysdeps/unix/sysv/linux/nios2/vfork.S: Likewise. * sysdeps/unix/sysv/linux/powerpc/powerpc32/vfork.S: Likewise. * sysdeps/unix/sysv/linux/powerpc/powerpc64/vfork.S: Likewise. * sysdeps/unix/sysv/linux/s390/s390-32/vfork.S: Likewise. * sysdeps/unix/sysv/linux/s390/s390-64/vfork.S: Likewise. * sysdeps/unix/sysv/linux/sh/vfork.S: Likewise. * sysdeps/unix/sysv/linux/sparc/sparc32/vfork.S: Likewise. * sysdeps/unix/sysv/linux/sparc/sparc64/vfork.S: Likewise. * sysdeps/unix/sysv/linux/tile/vfork.S: Likewise. * sysdeps/unix/sysv/linux/x86_64/vfork.S: Likewise. * sysdeps/unix/sysv/linux/tst-clone2.c (f): Remove direct pthread struct access. (clone_test): Remove function. (do_test): Rewrite to take into consideration that the pid is not cached anymore. |
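For illustration, the affected paths that used to read the cached value now do the equivalent of the following (a simplified sketch; the actual code uses glibc's internal syscall macros rather than the public syscall wrapper):

    #include <sys/syscall.h>
    #include <unistd.h>

    static pid_t
    current_pid (void)
    {
      /* Before (sketch): return THREAD_GETMEM (THREAD_SELF, pid);
         After: ask the kernel directly.  */
      return syscall (SYS_getpid);
    }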
||
Joseph Myers
|
93eb85ceb2 |
Refactor float_t, double_t information into bits/flt-eval-method.h.
At present, definitions of float_t and double_t are split among many bits/mathdef.h headers. For all but three architectures, these types are float and double. Furthermore, if you assume __FLT_EVAL_METHOD__ to be defined, that provides a more generic way of determining the correct values of these typedefs. Defining these typedefs more generally based on __FLT_EVAL_METHOD__ was previously proposed by Paul Eggert in <https://sourceware.org/ml/libc-alpha/2012-02/msg00002.html>. This patch refactors things in the way I proposed in <https://sourceware.org/ml/libc-alpha/2016-11/msg00745.html>. A new header bits/flt-eval-method.h defines a single macro, __GLIBC_FLT_EVAL_METHOD, which is then used by math.h to define float_t and double_t. The default is based on __FLT_EVAL_METHOD__ (although actually a default to 0 would have the same effect for current ports, because ports where values other than 0 or 16 are possible all have their own headers). To avoid changing the existing semantics in any case, including for compilers not defining __FLT_EVAL_METHOD__, architecture-specific files are then added for m68k, s390, x86 which replicate the existing semantics. At least with __FLT_EVAL_METHOD__ values possible with GCC, there should be no change to the choices of float_t and double_t for any supported configuration. Architecture maintainer notes: * m68k: sysdeps/m68k/m680x0/bits/flt-eval-method.h always defines __GLIBC_FLT_EVAL_METHOD to 2 to replicate the existing logic. But actually GCC defines __FLT_EVAL_METHOD__ to 0 if TARGET_68040. It might make sense to make the header prefer to base things on __FLT_EVAL_METHOD__ if defined, like the x86 version, and so make the choices of these types more accurate (with a NEWS entry as for the other changes to these types on particular architectures). * s390: sysdeps/s390/bits/flt-eval-method.h always defines __GLIBC_FLT_EVAL_METHOD to 1 to replicate the existing logic. As previously discussed, it might make sense in coordination with GCC to eliminate the historic mistake, avoid excess precision in the -fexcess-precision=standard case and make the typedefs match (with a NEWS entry, again). Tested for x86-64 and x86. Also did compilation-only testing with build-many-glibcs.py. * bits/flt-eval-method.h: New file. * sysdeps/m68k/m680x0/bits/flt-eval-method.h: Likewise. * sysdeps/s390/bits/flt-eval-method.h: Likewise. * sysdeps/x86/bits/flt-eval-method.h: Likewise. * math/Makefile (headers): Add bits/flt-eval-method.h. * math/math.h: Include <bits/flt-eval-method.h>. [__USE_ISOC99] (float_t): Define based on __GLIBC_FLT_EVAL_METHOD. [__USE_ISOC99] (double_t): Likewise. * bits/mathdef.h (float_t): Remove. (double_t): Likewise. * sysdeps/aarch64/bits/mathdef.h (float_t): Likewise. (double_t): Likewise. * sysdeps/alpha/bits/mathdef.h (float_t): Likewise. (double_t): Likewise. * sysdeps/arm/bits/mathdef.h (float_t): Likewise. (double_t): Likewise. * sysdeps/hppa/fpu/bits/mathdef.h (float_t): Likewise. (double_t): Likewise. * sysdeps/ia64/bits/mathdef.h (float_t): Likewise. (double_t): Likewise. * sysdeps/m68k/m680x0/bits/mathdef.h (float_t): Likewise. (double_t): Likewise. * sysdeps/mips/bits/mathdef.h (float_t): Likewise. (double_t): Likewise. * sysdeps/powerpc/bits/mathdef.h (float_t): Likewise. (double_t): Likewise. * sysdeps/s390/bits/mathdef.h (float_t): Likewise. (double_t): Likewise. * sysdeps/sh/sh4/bits/mathdef.h (float_t): Likewise. (double_t): Likewise. * sysdeps/sparc/bits/mathdef.h (float_t): Likewise. (double_t): Likewise. 
* sysdeps/tile/bits/mathdef.h (float_t): Likewise. (double_t): Likewise. * sysdeps/x86/bits/mathdef.h (float_t): Likewise. (double_t): Likewise. |
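The resulting math.h logic is roughly as follows (a sketch: values 0 and 16 map to float/double, 1 to double/double, 2 to long double/long double):

    #if __GLIBC_FLT_EVAL_METHOD == 0 || __GLIBC_FLT_EVAL_METHOD == 16
    typedef float  float_t;
    typedef double double_t;
    #elif __GLIBC_FLT_EVAL_METHOD == 1
    typedef double float_t;
    typedef double double_t;
    #elif __GLIBC_FLT_EVAL_METHOD == 2
    typedef long double float_t;
    typedef long double double_t;
    #endif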
||
Siddhesh Poyarekar
|
8f3a4687ad |
Regenerate ULPs for aarch64
* sysdeps/aarch64/libm-test-ulps: Regenerated. |
||
Florian Weimer
|
c74940f2a7 | nptl: Document the reason why __kind in pthread_mutex_t is part of the ABI | ||
Joseph Myers
|
799131036e |
Do not hardcode platform names in manual/libm-err-tab.pl (bug 14139).
manual/libm-err-tab.pl hardcodes a list of names for particular platforms (mapping from sysdeps directory name to friendly name for the manual). This goes against the principle of keeping information about individual platforms in their corresponding sysdeps directory, and the list is also very out-of-date regarding supported platforms and their corresponding sysdeps directories. This patch fixes this by adding a libm-test-ulps-name file alongside each libm-test-ulps file. The script then gets the friendly name from that file, which is required to exist, so it no longer needs to allow for the mapping being missing. Tested for x86_64. [BZ #14139] * manual/libm-err-tab.pl (%pplatforms): Initialize to empty. (find_files): Obtain platform name from libm-test-ulps-name and store in %pplatforms. (canonicalize_platform): Remove. (print_platforms): Use $pplatforms directly. (by_platforms): Do not allow for platforms missing from %pplatforms. * sysdeps/aarch64/libm-test-ulps-name: New file. * sysdeps/alpha/fpu/libm-test-ulps-name: Likewise. * sysdeps/arm/libm-test-ulps-name: Likewise. * sysdeps/generic/libm-test-ulps-name: Likewise. * sysdeps/hppa/fpu/libm-test-ulps-name: Likewise. * sysdeps/i386/fpu/libm-test-ulps-name: Likewise. * sysdeps/i386/i686/fpu/multiarch/libm-test-ulps-name: Likewise. * sysdeps/ia64/fpu/libm-test-ulps-name: Likewise. * sysdeps/m68k/coldfire/fpu/libm-test-ulps-name: Likewise. * sysdeps/m68k/m680x0/fpu/libm-test-ulps-name: Likewise. * sysdeps/microblaze/libm-test-ulps-name: Likewise. * sysdeps/mips/mips32/libm-test-ulps-name: Likewise. * sysdeps/mips/mips64/libm-test-ulps-name: Likewise. * sysdeps/nios2/libm-test-ulps-name: Likewise. * sysdeps/powerpc/fpu/libm-test-ulps-name: Likewise. * sysdeps/powerpc/nofpu/libm-test-ulps-name: Likewise. * sysdeps/s390/fpu/libm-test-ulps-name: Likewise. * sysdeps/sh/libm-test-ulps-name: Likewise. * sysdeps/sparc/fpu/libm-test-ulps-name: Likewise. * sysdeps/tile/libm-test-ulps-name: Likewise. * sysdeps/x86_64/fpu/libm-test-ulps-name: Likewise. |
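(For example, the new sysdeps/aarch64/libm-test-ulps-name presumably contains just the friendly platform name "AArch64" that the script previously hardcoded.)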
||
Steve Ellcey
|
d060cd002d |
Define wordsize.h macros everywhere
* bits/wordsize.h: Add documentation. * sysdeps/aarch64/bits/wordsize.h: New file. * sysdeps/generic/stdint.h (PTRDIFF_MIN, PTRDIFF_MAX): Update definitions. (SIZE_MAX): Change ifdef to if in __WORDSIZE32_SIZE_ULONG check. * sysdeps/gnu/bits/utmp.h (__WORDSIZE_TIME64_COMPAT32): Check with #if instead of #ifdef. * sysdeps/gnu/bits/utmpx.h (__WORDSIZE_TIME64_COMPAT32): Ditto. * sysdeps/mips/bits/wordsize.h (__WORDSIZE32_SIZE_ULONG, __WORDSIZE32_PTRDIFF_LONG, __WORDSIZE_TIME64_COMPAT32): Add or change defines. * sysdeps/powerpc/powerpc32/bits/wordsize.h: Likewise. * sysdeps/powerpc/powerpc64/bits/wordsize.h: Likewise. * sysdeps/s390/s390-32/bits/wordsize.h: Likewise. * sysdeps/s390/s390-64/bits/wordsize.h: Likewise. * sysdeps/sparc/sparc32/bits/wordsize.h: Likewise. * sysdeps/sparc/sparc64/bits/wordsize.h: Likewise. * sysdeps/tile/tilegx/bits/wordsize.h: Likewise. * sysdeps/tile/tilepro/bits/wordsize.h: Likewise. * sysdeps/unix/sysv/linux/alpha/bits/wordsize.h: Likewise. * sysdeps/unix/sysv/linux/powerpc/bits/wordsize.h: Likewise. * sysdeps/unix/sysv/linux/sparc/bits/wordsize.h: Likewise. * sysdeps/wordsize-32/bits/wordsize.h: Likewise. * sysdeps/wordsize-64/bits/wordsize.h: Likewise. * sysdeps/x86/bits/wordsize.h: Likewise. |
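A sketch of what such a header now provides on every port (the values below are illustrative placeholders, not any particular port's settings):

    #define __WORDSIZE 32                 /* width of long and pointers */
    /* On 32-bit ports: size_t is unsigned long (1) or unsigned int (0).  */
    #define __WORDSIZE32_SIZE_ULONG 0
    /* On 32-bit ports: ptrdiff_t is long (1) or int (0).  */
    #define __WORDSIZE32_PTRDIFF_LONG 0
    /* Whether utmp/utmpx records keep a 32-bit-time-compatible layout.  */
    #define __WORDSIZE_TIME64_COMPAT32 0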
||
Wilco Dijkstra
|
95e431cc73 |
An optimized memchr was missing for AArch64. This version is similar to
strchr and is significantly faster than the C version. 2016-11-04 Wilco Dijkstra <wdijkstr@arm.com> Kevin Petit <kevin.petit@arm.com> * sysdeps/aarch64/memchr.S (__memchr): New file. |
||
Joseph Myers
|
1396c647a9 |
Add femode_t functions: aarch64.
This patch adds AArch64 versions of fegetmode and fesetmode. Untested. * sysdeps/aarch64/fpu/fegetmode.c: New file. * sysdeps/aarch64/fpu/fesetmode.c: Likewise. |
||
Joseph Myers
|
ec94343f59 |
Add femode_t functions.
TS 18661-1 defines a type femode_t to represent the set of dynamic floating-point control modes (such as the rounding mode and trap enablement modes), and functions fegetmode and fesetmode to manipulate those modes (without affecting other state such as the raised exception flags) and a corresponding macro FE_DFL_MODE. This patch series implements those interfaces for glibc. This first patch adds the architecture-independent pieces, the x86 and x86_64 implementations, and the <bits/fenv.h> and ABI baseline updates for all architectures so glibc keeps building and passing the ABI tests on all architectures. Subsequent patches add the fegetmode and fesetmode implementations for other architectures. femode_t is generally an integer type - the same type as fenv_t, or as the single element of fenv_t where fenv_t is a structure containing a single integer (or the single relevant element, where it has elements for both status and control registers) - except where architecture properties or consistency with the fenv_t implementation indicate otherwise. FE_DFL_MODE follows FE_DFL_ENV in whether it's a magic pointer value (-1 cast to const femode_t *), a value that can be distinguished from valid pointers by its high bits but otherwise contains a representation of the desired register contents, or a pointer to a constant variable (the powerpc case; __fe_dfl_mode is added as an exported constant object, an alias to __fe_dfl_env). Note that where architectures (that share a register between control and status bits) gain definitions of new floating-point control or status bits in future, the implementations of fesetmode for those architectures may need updating (depending on whether the new bits are control or status bits and what the implementation does with previously unknown bits), just like existing implementations of <fenv.h> functions that take care not to touch reserved bits may need updating when the set of reserved bits changes. (As any new bits are outside the scope of ISO C, that's just a quality-of-implementation issue for supporting them, not a conformance issue.) As with fenv_t, femode_t should properly include any software DFP rounding mode (and for both fenv_t and femode_t I'd consider that fragment of DFP support appropriate for inclusion in glibc even in the absence of the rest of libdfp; hardware DFP rounding modes should already be included if the definitions of which bits are status / control bits are correct). Tested for x86_64, x86, mips64 (hard float, and soft float to test the fallback version), arm (hard float) and powerpc (hard float, soft float and e500). Other architecture versions are untested. * math/fegetmode.c: New file. * math/fesetmode.c: Likewise. * sysdeps/i386/fpu/fegetmode.c: Likewise. * sysdeps/i386/fpu/fesetmode.c: Likewise. * sysdeps/x86_64/fpu/fegetmode.c: Likewise. * sysdeps/x86_64/fpu/fesetmode.c: Likewise. * math/fenv.h: Update comment on inclusion of <bits/fenv.h>. [__GLIBC_USE (IEC_60559_BFP_EXT)] (fegetmode): New function declaration. [__GLIBC_USE (IEC_60559_BFP_EXT)] (fesetmode): Likewise. * bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (femode_t): New typedef. [__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro. * sysdeps/aarch64/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (femode_t): New typedef. [__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro. * sysdeps/alpha/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (femode_t): New typedef. [__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro. 
* sysdeps/arm/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (femode_t): New typedef. [__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro. * sysdeps/hppa/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (femode_t): New typedef. [__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro. * sysdeps/ia64/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (femode_t): New typedef. [__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro. * sysdeps/m68k/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (femode_t): New typedef. [__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro. * sysdeps/microblaze/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (femode_t): New typedef. [__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro. * sysdeps/mips/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (femode_t): New typedef. [__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro. * sysdeps/nios2/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (femode_t): New typedef. [__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro. * sysdeps/powerpc/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (femode_t): New typedef. [__GLIBC_USE (IEC_60559_BFP_EXT)] (__fe_dfl_mode): New variable declaration. [__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro. * sysdeps/s390/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (femode_t): New typedef. [__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro. * sysdeps/sh/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (femode_t): New typedef. [__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro. * sysdeps/sparc/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (femode_t): New typedef. [__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro. * sysdeps/tile/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (femode_t): New typedef. [__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro. * sysdeps/x86/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (femode_t): New typedef. [__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro. * manual/arith.texi (FE_DFL_MODE): Document macro. (fegetmode): Document function. (fesetmode): Likewise. * math/Versions (fegetmode): New libm symbol at version GLIBC_2.25. (fesetmode): Likewise. * math/Makefile (libm-support): Add fegetmode and fesetmode. (tests): Add test-femode and test-femode-traps. * math/test-femode-traps.c: New file. * math/test-femode.c: Likewise. * sysdeps/powerpc/fpu/fenv_const.c (__fe_dfl_mode): Declare as alias for __fe_dfl_env. * sysdeps/powerpc/nofpu/fenv_const.c (__fe_dfl_mode): Likewise. * sysdeps/powerpc/powerpc32/e500/nofpu/fenv_const.c (__fe_dfl_mode): Likewise. * sysdeps/powerpc/Versions (__fe_dfl_mode): New libm symbol at version GLIBC_2.25. * sysdeps/nacl/libm.abilist: Update. * sysdeps/unix/sysv/linux/aarch64/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/alpha/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/arm/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/hppa/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/i386/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/ia64/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/m68k/coldfire/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/m68k/m680x0/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/microblaze/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/mips/mips32/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/mips/mips64/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/nios2/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/powerpc/powerpc32/fpu/libm.abilist: Likewise. 
* sysdeps/unix/sysv/linux/powerpc/powerpc32/nofpu/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/powerpc/powerpc64/libm-le.abilist: Likewise. * sysdeps/unix/sysv/linux/powerpc/powerpc64/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/s390/s390-32/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/s390/s390-64/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/sh/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/sparc/sparc32/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/sparc/sparc64/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/tile/tilegx/tilegx32/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/tile/tilegx/tilegx64/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/tile/tilepro/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/x86_64/64/libm.abilist: Likewise. * sysdeps/unix/sysv/linux/x86_64/x32/libm.abilist: Likewise. |
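A minimal usage sketch of the new interfaces: save the dynamic control modes, change one, and restore the saved modes without disturbing the raised exception flags.

    #include <fenv.h>

    void
    with_upward_rounding (void)
    {
      femode_t saved;
      fegetmode (&saved);       /* capture rounding mode and trap enables */
      fesetround (FE_UPWARD);   /* change a control mode */
      /* ... computation ... */
      fesetmode (&saved);       /* restore modes; raised flags untouched */
      /* fesetmode (FE_DFL_MODE) would instead install the default modes.  */
    }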
||
Paul E. Murphy
|
847c9161c7 |
Make common fmax implementation generic.
Also update aarch64 to ensure the correct s_fmin.c is included. The include order favors including the generated copy. |
||
Joseph Myers
|
ce99c0816b |
Add fesetexcept: aarch64.
This patch adds an AArch64 version of fesetexcept. Untested. * sysdeps/aarch64/fpu/fesetexcept.c: New file. |
||
Szabolcs Nagy
|
d637e923f9 |
[AArch64] Update libm-test-ulps
This partly reverts commit
|
||
Szabolcs Nagy
|
f8238ae3c7 |
[AArch64] Regenerate libm-test-ulps
* sysdeps/aarch64/libm-test-ulps: Regenerated. |
||
Szabolcs Nagy
|
efbe665c3a |
[AArch64] Fix libc internal asm profiling code
When glibc is built with --enable-profile, the ENTRY of asm functions includes CALL_MCOUNT for profiling. (This matters for binaries statically linked against libc_p.a.) CALL_MCOUNT did not save/restore argument registers around the _mcount call, so it clobbered them. (It would be enough to save/restore only the arguments passed to a given asm function, but that would require too many asm changes, so it is simpler to always save all argument registers in this macro.) Float args are not saved: mcount does not clobber the float regs and currently no asm function takes float arguments anyway. [BZ #18707] * sysdeps/aarch64/Makefile (CFLAGS-mcount.c): Add -mgeneral-regs-only. * sysdeps/aarch64/sysdep.h (CALL_MCOUNT): Save argument registers. |
||
Torvald Riegel
|
76a0b73e81 |
Remove atomic_compare_and_exchange_bool_rel.
atomic_compare_and_exchange_bool_rel and catomic_compare_and_exchange_bool_rel are removed and replaced with the new C11-like atomic_compare_exchange_weak_release. The concurrent code in nscd/cache.c has not been reviewed yet, so this patch does not add detailed comments. * nscd/cache.c (cache_add): Use new C11-like atomic operation instead of atomic_compare_and_exchange_bool_rel. * nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_full): Likewise. * include/atomic.h (atomic_compare_and_exchange_bool_rel, catomic_compare_and_exchange_bool_rel): Remove. * sysdeps/aarch64/atomic-machine.h (atomic_compare_and_exchange_bool_rel): Likewise. * sysdeps/alpha/atomic-machine.h (atomic_compare_and_exchange_bool_rel): Likewise. * sysdeps/arm/atomic-machine.h (atomic_compare_and_exchange_bool_rel): Likewise. * sysdeps/mips/atomic-machine.h (atomic_compare_and_exchange_bool_rel): Likewise. * sysdeps/tile/atomic-machine.h (atomic_compare_and_exchange_bool_rel): Likewise. |
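An illustrative before/after for the replacement (a sketch, not a verbatim glibc hunk; note the inverted success convention and that the C11-like form updates the expected value on failure):

    /* Before: the old macro returned nonzero if the exchange FAILED.  */
    if (atomic_compare_and_exchange_bool_rel (&mem, newval, oldval))
      { /* ... CAS failed, retry ... */ }

    /* After: C11-like, returns true on success; on failure EXPECTED is
       updated with the value actually observed in MEM.  */
    int expected = oldval;
    if (!atomic_compare_exchange_weak_release (&mem, &expected, newval))
      { /* ... CAS failed, retry, possibly reusing EXPECTED ... */ }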
||
Wilco Dijkstra
|
a024b39a4e |
This patch further tunes memcpy: it avoids one branch for sizes 1-3, adds a prefetch, and improves small copies that are exact powers of 2. * sysdeps/aarch64/memcpy.S (memcpy): Further tuning for performance. |
||
Wilco Dijkstra
|
58ec4fb881 |
Add a simple rawmemchr implementation. Use strlen for rawmemchr(s, '\0') as it
is the fastest way to search for '\0'. Otherwise use memchr with an infinite size. This is 3x faster on benchtests for large sizes. Passes GLIBC tests. * sysdeps/aarch64/rawmemchr.S (__rawmemchr): New file. * sysdeps/aarch64/strlen.S (__strlen): Change to __strlen to avoid PLT. |
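The strategy reads naturally in C (a sketch; the shipped code is the assembly file named above):

    #include <string.h>

    void *
    rawmemchr_sketch (const void *s, int c)
    {
      if (c != '\0')
        return memchr (s, c, (size_t) -1);  /* "infinite" size */
      return (char *) s + strlen (s);       /* fastest '\0' search */
    }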
||
Wilco Dijkstra
|
b998e16e71 |
This is an optimized memcpy/memmove for AArch64. Copies are split into 3 main cases: small copies of up to 16 bytes; medium copies of 17..96 bytes, which are fully unrolled; and large copies of more than 96 bytes, which align the destination and use an unrolled loop processing 64 bytes per iteration. In order to share code with memmove, small and medium copies read all data before writing, allowing any kind of overlap. All memmoves except for the large backwards case fall into memcpy for optimal performance. On a random copy test memcpy/memmove are 40% faster on Cortex-A57 and 28% faster on Cortex-A53. * sysdeps/aarch64/memcpy.S (memcpy): Rewrite of optimized memcpy and memmove. * sysdeps/aarch64/memmove.S (memmove): Remove memmove code (merged into memcpy.S). |
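The dispatch described above, in rough C-like form (the helper names are made up for illustration; the real code is assembly):

    if (count <= 16)
      copy_small (dst, src, count);            /* read all, then write */
    else if (count <= 96)
      copy_medium_unrolled (dst, src, count);  /* fully unrolled */
    else
      copy_large (dst, src, count);            /* align dst, 64 B/iter */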
||
Florian Weimer
|
aca1daef29 |
elf: Consolidate machine-agnostic DTV definitions in <dl-dtv.h>
Identical definitions of dtv_t and TLS_DTV_UNALLOCATED were repeated for all architectures using DTVs. |
||
Wilco Dijkstra
|
a8c5a2a952 |
This is an optimized memset for AArch64. Memset is split into 4 main cases: small sets of up to 16 bytes; medium sets of 16..96 bytes, which are fully unrolled; large memsets of more than 96 bytes, which align the destination and use an unrolled loop processing 64 bytes per iteration; and zero memsets of more than 256 bytes, which use the dc zva instruction, with faster versions for the common ZVA sizes 64 and 128. STP of Q registers is used to reduce code size without loss of performance. The speedup on test-memset is 1% on Cortex-A57 and 8% on Cortex-A53. * sysdeps/aarch64/memset.S (__memset): Rewrite of optimized memset. |
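The zero-path decision, roughly (helper names made up for illustration; dc zva is the AArch64 instruction that zeroes a whole cache block at once):

    if (value == 0 && n > 256)
      zero_with_dc_zva (dst, n);          /* fast paths for ZVA 64/128 */
    else if (n > 96)
      set_large_unrolled (dst, value, n); /* align dst, 64 B/iter */
    else
      set_small_or_medium (dst, value, n);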
||
H.J. Lu
|
16396c41de |
Add _STRING_INLINE_unaligned and string_private.h
As discussed in https://sourceware.org/ml/libc-alpha/2015-10/msg00403.html the setting of _STRING_ARCH_unaligned currently controls the external GLIBC ABI as well as selecting the use of unaligned accesses within GLIBC. Since _STRING_ARCH_unaligned was recently changed for AArch64, this would potentially break the ABI in GLIBC 2.23, so split the uses and add _STRING_INLINE_unaligned to select the string ABI. This setting must be fixed for each target, while _STRING_ARCH_unaligned may be changed from release to release. _STRING_ARCH_unaligned is used unconditionally in glibc. But <bits/string.h>, which defines _STRING_ARCH_unaligned, isn't included with -Os. Since _STRING_ARCH_unaligned is internal to glibc and may change between glibc releases, it should be made private to glibc. _STRING_ARCH_unaligned should be defined in the new string_private.h header file, which is included unconditionally from the internal <string.h> for the glibc build. [BZ #19462] * bits/string.h (_STRING_ARCH_unaligned): Renamed to ... (_STRING_INLINE_unaligned): This. * include/string.h: Include <string_private.h>. * string/bits/string2.h: Replace _STRING_ARCH_unaligned with _STRING_INLINE_unaligned. * sysdeps/aarch64/bits/string.h (_STRING_ARCH_unaligned): Removed. (_STRING_INLINE_unaligned): New. * sysdeps/aarch64/string_private.h: New file. * sysdeps/generic/string_private.h: Likewise. * sysdeps/m68k/m680x0/m68020/string_private.h: Likewise. * sysdeps/s390/string_private.h: Likewise. * sysdeps/x86/string_private.h: Likewise. * sysdeps/m68k/m680x0/m68020/bits/string.h (_STRING_ARCH_unaligned): Renamed to ... (_STRING_INLINE_unaligned): This. * sysdeps/s390/bits/string.h (_STRING_ARCH_unaligned): Renamed to ... (_STRING_INLINE_unaligned): This. * sysdeps/sparc/bits/string.h (_STRING_ARCH_unaligned): Renamed to ... (_STRING_INLINE_unaligned): This. * sysdeps/x86/bits/string.h (_STRING_ARCH_unaligned): Renamed to ... (_STRING_INLINE_unaligned): This. |
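The new private header is tiny; a sketch of the generic and aarch64 variants as described above (exact contents assumed, not quoted):

    /* sysdeps/generic/string_private.h: unaligned accesses not assumed.  */
    #define _STRING_ARCH_unaligned 0

    /* sysdeps/aarch64/string_private.h: unaligned accesses are cheap.  */
    #define _STRING_ARCH_unaligned 1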
||
Joseph Myers
|
f7a9f785e5 | Update copyright dates with scripts/update-copyrights. | ||
Szabolcs Nagy
|
c960ded0d5 |
[AArch64] Regenerate libm-test-ulps
* sysdeps/aarch64/libm-test-ulps: Regenerated. |
||
Wilco Dijkstra
|
2fee269248 |
Enable _STRING_ARCH_unaligned on AArch64.
* sysdeps/aarch64/bits/string.h: New file. (_STRING_ARCH_unaligned): Define. |
||
Szabolcs Nagy
|
24ffcbfc24 |
Regenerate aarch64 libm-test-ulps
* sysdeps/aarch64/libm-test-ulps: Regenerated. |
||
Joseph Myers
|
de071d199a |
Move bits/atomic.h to atomic-machine.h (bug 14912).
It was noted in <https://sourceware.org/ml/libc-alpha/2012-09/msg00305.html> that the bits/*.h naming scheme should only be used for installed headers. This patch renames bits/atomic.h to atomic-machine.h to follow that convention. This is the only change in this series that needs to change the filename rather than simply removing a directory level (because both atomic.h and bits/atomic.h exist at present). Tested for x86_64 (testsuite, and that installed stripped shared libraries are unchanged by the patch). [BZ #14912] * sysdeps/aarch64/bits/atomic.h: Move to ... * sysdeps/aarch64/atomic-machine.h: ...here. (_AARCH64_BITS_ATOMIC_H): Rename macro to _AARCH64_ATOMIC_MACHINE_H. * sysdeps/alpha/bits/atomic.h: Move to ... * sysdeps/alpha/atomic-machine.h: ...here. * sysdeps/arm/bits/atomic.h: Move to ... * sysdeps/arm/atomic-machine.h: ...here. Update comments. * bits/atomic.h: Move to ... * sysdeps/generic/atomic-machine.h: ...here. (_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H. * sysdeps/i386/bits/atomic.h: Move to ... * sysdeps/i386/atomic-machine.h: ...here. * sysdeps/ia64/bits/atomic.h: Move to ... * sysdeps/ia64/atomic-machine.h: ...here. * sysdeps/m68k/coldfire/bits/atomic.h: Move to ... * sysdeps/m68k/coldfire/atomic-machine.h: ...here. (_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H. * sysdeps/m68k/m680x0/m68020/bits/atomic.h: Move to ... * sysdeps/m68k/m680x0/m68020/atomic-machine.h: ...here. * sysdeps/microblaze/bits/atomic.h: Move to ... * sysdeps/microblaze/atomic-machine.h: ...here. * sysdeps/mips/bits/atomic.h: Move to ... * sysdeps/mips/atomic-machine.h: ...here. (_MIPS_BITS_ATOMIC_H): Rename macro to _MIPS_ATOMIC_MACHINE_H. * sysdeps/powerpc/bits/atomic.h: Move to ... * sysdeps/powerpc/atomic-machine.h: ...here. Update comments. * sysdeps/powerpc/powerpc32/bits/atomic.h: Move to ... * sysdeps/powerpc/powerpc32/atomic-machine.h: ...here. Update comments. Include <atomic-machine.h> instead of <bits/atomic.h>. * sysdeps/powerpc/powerpc64/bits/atomic.h: Move to ... * sysdeps/powerpc/powerpc64/atomic-machine.h: ...here. Include <atomic-machine.h> instead of <bits/atomic.h>. * sysdeps/s390/bits/atomic.h: Move to ... * sysdeps/s390/atomic-machine.h: ...here. * sysdeps/sparc/sparc32/bits/atomic.h: Move to ... * sysdeps/sparc/sparc32/atomic-machine.h: ...here. (_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H. * sysdeps/sparc/sparc32/sparcv9/bits/atomic.h: Move to ... * sysdeps/sparc/sparc32/sparcv9/atomic-machine.h: ...here. * sysdeps/sparc/sparc64/bits/atomic.h: Move to ... * sysdeps/sparc/sparc64/atomic-machine.h: ...here. * sysdeps/tile/bits/atomic.h: Move to ... * sysdeps/tile/atomic-machine.h: ...here. * sysdeps/tile/tilegx/bits/atomic.h: Move to ... * sysdeps/tile/tilegx/atomic-machine.h: ...here. Include <sysdeps/tile/atomic-machine.h> instead of <sysdeps/tile/bits/atomic.h>. (_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H. * sysdeps/tile/tilepro/bits/atomic.h: Move to ... * sysdeps/tile/tilepro/atomic-machine.h: ...here. Include <sysdeps/tile/atomic-machine.h> instead of <sysdeps/tile/bits/atomic.h>. (_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H. * sysdeps/unix/sysv/linux/arm/bits/atomic.h: Move to ... * sysdeps/unix/sysv/linux/arm/atomic-machine.h: ...here. Include <sysdeps/arm/atomic-machine.h> instead of <sysdeps/arm/bits/atomic.h>. * sysdeps/unix/sysv/linux/hppa/bits/atomic.h: Move to ... * sysdeps/unix/sysv/linux/hppa/atomic-machine.h: ...here. (_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H. 
* sysdeps/unix/sysv/linux/m68k/coldfire/bits/atomic.h: Move to ... * sysdeps/unix/sysv/linux/m68k/coldfire/atomic-machine.h: ...here. (_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H. * sysdeps/unix/sysv/linux/nios2/bits/atomic.h: Move to ... * sysdeps/unix/sysv/linux/nios2/atomic-machine.h: ...here. (_NIOS2_BITS_ATOMIC_H): Rename macro to _NIOS2_ATOMIC_MACHINE_H. * sysdeps/unix/sysv/linux/sh/bits/atomic.h: Move to ... * sysdeps/unix/sysv/linux/sh/atomic-machine.h: ...here. * sysdeps/x86_64/bits/atomic.h: Move to ... * sysdeps/x86_64/atomic-machine.h: ...here. * include/atomic.h: Include <atomic-machine.h> instead of <bits/atomic.h>. |
||
Joseph Myers
|
522e02ab8a |
Rename bits/linkmap.h to linkmap.h (bug 14912).
It was noted in <https://sourceware.org/ml/libc-alpha/2012-09/msg00305.html> that the bits/*.h naming scheme should only be used for installed headers. This patch renames bits/linkmap.h to plain linkmap.h to follow that convention. Tested for x86_64 (testsuite, and that installed stripped shared libraries are unchanged by the patch). [BZ #14912] * bits/linkmap.h: Move to ... * sysdeps/generic/linkmap.h: ...here. * sysdeps/aarch64/bits/linkmap.h: Move to ... * sysdeps/aarch64/linkmap.h: ...here. * sysdeps/arm/bits/linkmap.h: Move to ... * sysdeps/arm/linkmap.h: ...here. * sysdeps/hppa/bits/linkmap.h: Move to ... * sysdeps/hppa/linkmap.h: ...here. * sysdeps/ia64/bits/linkmap.h: Move to ... * sysdeps/ia64/linkmap.h: ...here. * sysdeps/mips/bits/linkmap.h: Move to ... * sysdeps/mips/linkmap.h: ...here. * sysdeps/s390/bits/linkmap.h: Move to ... * sysdeps/s390/linkmap.h: ...here. * sysdeps/sh/bits/linkmap.h: Move to ... * sysdeps/sh/linkmap.h: ...here. * sysdeps/x86/bits/linkmap.h: Move to ... * sysdeps/x86/linkmap.h: ...here. * include/link.h: Include <linkmap.h> instead of <bits/linkmap.h>. |
||
Wilco Dijkstra
|
edbbc86c3a |
2015-08-24 Wilco Dijkstra <wdijkstr@arm.com>
* sysdeps/aarch64/bzero.S (__bzero): Remove. |
||
Wilco Dijkstra
|
f008c71455 |
2015-08-24 Wilco Dijkstra <wdijkstr@arm.com>
* sysdeps/aarch64/fpu/math_private.h (libc_feholdsetround_aarch64_ctx): Unconditionally set __fpcr to avoid uninitialized warning. (libc_feholdsetround_noex_aarch64_ctx): Likewise. |
||
Wilco Dijkstra
|
7b1c56e483 |
Improve feenableexcept performance - avoid an unnecessary FPCR read in case
the FPCR does not change. Also improve the logic of the return value. |
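The shape of the optimization, sketched in C (_FPU_GETCW/_FPU_SETCW are glibc's FPCR accessors; fpcr_trap_bits is a hypothetical helper standing in for the real bit manipulation):

    fpu_control_t fpcr, new_fpcr, updated;
    _FPU_GETCW (fpcr);
    new_fpcr = fpcr | fpcr_trap_bits (excepts);
    if (new_fpcr != fpcr)
      {
        _FPU_SETCW (new_fpcr);
        /* Trapping exceptions are optional in the architecture, so
           re-read to learn which bits actually stuck.  When nothing
           changed, both the write and this extra read are skipped.  */
        _FPU_GETCW (updated);
      }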
||
Wilco Dijkstra
|
3136eb7abd |
Improve fesetenv performance by avoiding unnecessary FPSR/FPCR reads/writes.
It uses the same logic as the ARM version. The common case removes 1 FPSR and 1 FPCR read. For FE_DFL_ENV and FE_NOMASK_ENV an FPCR read is avoided in case the FPCR does not change. |
||
Szabolcs Nagy
|
0910702c4d |
[AArch64][BZ #17711] Fix extern protected data handling
Fixes elf/tst-protected1a and elf/tst-protected1b tests. Depends on a gcc patch that makes protected visibility data non-local: https://gcc.gnu.org/ml/gcc-patches/2015-07/msg01871.html and on a binutils patch so R_*_GLOB_DAT relocs are used for it: https://sourceware.org/ml/binutils/2015-07/msg00246.html |
||
Wilco Dijkstra
|
82641e16aa | Add AArch64 versions of math_opt_barrier and math_force_eval that avoid going via memory. | ||
Wilco Dijkstra
|
c435989f52 |
Optimize the strlen implementation by using a page cross check and a fast check for nul bytes, which reverts to a separate loop when a non-ASCII char is encountered. The speedup on test-strlen is ~10%; long ASCII strings are processed ~60% faster, and on random tests it is ~80% better. |
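The "fast check for nul bytes" is presumably a variant of the standard word-at-a-time test, shown here in C for illustration (the commit's actual sequence is assembly): the expression has a high bit set exactly in those bytes of X that are zero.

    /* Nonzero iff the 8-byte word X contains a zero byte.  */
    #define NUL_TEST(x) (((x) - 0x0101010101010101ULL) \
                         & ~(x) & 0x8080808080808080ULL)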
||
Wilco Dijkstra
|
6471190491 | Inline __ieee754_sqrt and __ieee754_sqrtf. Also add external definitions. | ||
Szabolcs Nagy
|
cfe4368e51 |
Regenerate aarch64 libm-test-ulps
* sysdeps/aarch64/libm-test-ulps: Regenerated. |
||
Szabolcs Nagy
|
c71c89e5c7 |
[AArch64] Fix cfi_adjust_cfa_offset usage in dl-tlsdesc.S
Some of the cfi annotations used an incorrect sign. * sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_return_lazy): Fix cfi_adjust_cfa_offset argument. (_dl_tlsdesc_undefweak, _dl_tlsdesc_dynamic): Likewise. (_dl_tlsdesc_resolve_rela, _dl_tlsdesc_resolve_hold): Likewise. |
||
Szabolcs Nagy
|
08325735c2 |
[BZ 18034][AArch64] Lazy TLSDESC relocation data race fix
Lazy TLSDESC initialization needs to be synchronized with concurrent TLS accesses. The TLS descriptor contains a function pointer (entry) and an argument that is accessed from the entry function. With lazy initialization the first call to the entry function updates the entry and the argument to their final values. A final entry function must make sure that it accesses an initialized argument; this needs synchronization on systems with weak memory ordering, otherwise the writes of the first call can be observed out of order. There are at least two issues with the current code: tlsdesc.c (i386, x86_64, arm, aarch64) uses volatile memory accesses on the write side (in the initial entry function) instead of C11 atomics. And on systems with weak memory ordering (arm, aarch64) the read side synchronization is missing from the final entry functions (dl-tlsdesc.S). This patch only deals with aarch64. * Write side: Volatile accesses were replaced with C11 relaxed atomics, and a release store was used for the initialization of entry so the read side can synchronize with it. * Read side: The TLS access generated by the compiler and an entry function are roughly

  ldr x1, [x0]     // load the entry
  blr x1           // call it

entryfunc:
  ldr x0, [x0,#8]  // load the arg
  ret

Various alternatives were considered to force the ordering in the entry function between the two loads:

(1) barrier

entryfunc:
  dmb ishld
  ldr x0, [x0,#8]

(2) address dependency (if the address of the second load depends on the result of the first one, the ordering is guaranteed):

entryfunc:
  ldr x1, [x0]
  and x1, x1, #8
  orr x1, x1, #8
  ldr x0, [x0,x1]

(3) load-acquire (an ARMv8 instruction that is ordered before subsequent loads and stores)

entryfunc:
  ldar xzr, [x0]
  ldr x0, [x0,#8]

Option (1) is the simplest but slowest (note: this runs at every TLS access); options (2) and (3) do one extra load from [x0] (same-address loads are ordered, so it happens after the load at the call site); option (2) clobbers x1, which is problematic because existing gcc does not expect that, so approach (3) was chosen. A new _dl_tlsdesc_return_lazy entry function was introduced for lazily relocated static TLS, so non-lazy static TLS can avoid the synchronization cost. [BZ #18034] * sysdeps/aarch64/dl-tlsdesc.h (_dl_tlsdesc_return_lazy): Declare. * sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_return_lazy): Define. (_dl_tlsdesc_undefweak): Guarantee TLSDESC entry and argument load-load ordering using ldar. (_dl_tlsdesc_dynamic): Likewise. (_dl_tlsdesc_return_lazy): Likewise. * sysdeps/aarch64/tlsdesc.c (_dl_tlsdesc_resolve_rela_fixup): Use relaxed atomics instead of volatile and synchronize with release store. (_dl_tlsdesc_resolve_hold_fixup): Use relaxed atomics instead of volatile. * elf/tlsdeschtab.h (_dl_tlsdesc_resolve_early_return_p): Likewise. |
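The write-side change reads roughly like this in C11-atomics form (a sketch using glibc's atomic wrappers; td stands for the descriptor being initialized):

    /* Publish the argument first, then release-store the final entry
       function.  A reader that observes the new entry (via the ldar on
       the read side) therefore also observes an initialized argument.  */
    atomic_store_relaxed (&td->arg, (void *) arg);
    atomic_store_release (&td->entry, final_entry);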
||
Joseph Myers
|
1769608794 |
Use libc_hidden_proto / libc_hidden_def with __strnlen.
Various code in glibc uses __strnlen instead of strnlen for namespace reasons. However, __strnlen does not use libc_hidden_proto / libc_hidden_def (as is normally done for any function defined and called within the same library, whether or not exported from the library and whatever namespace it is in), so the compiler does not know that those calls are to a function within libc.

This patch uses libc_hidden_proto / libc_hidden_def with __strnlen. On x86_64, it makes no difference to the installed stripped shared libraries. On 32-bit x86, it causes __strnlen calls to go to the same place as strnlen calls (the fallback strnlen implementation), rather than through a PLT entry for the strnlen IFUNC; I'm not sure of the logic behind when calls from within libc should use IFUNCs versus when they should go direct to a particular function implementation, but clearly it doesn't make sense for strnlen and __strnlen to be handled differently in this regard.

Tested for x86_64 and x86 (testsuite, and comparison of installed shared libraries as described above).

* string/strnlen.c [!STRNLEN] (__strnlen): Use libc_hidden_def.
* include/string.h (__strnlen): Use libc_hidden_proto.
* sysdeps/aarch64/strnlen.S (__strnlen): Use libc_hidden_def.
* sysdeps/i386/i686/multiarch/strnlen-c.c [SHARED] (libc_hidden_def): Define __GI___strnlen as well as __GI_strnlen.
* sysdeps/powerpc/powerpc32/power4/multiarch/strnlen-power7.S (libc_hidden_def): Undefine and redefine.
* sysdeps/powerpc/powerpc32/power4/multiarch/strnlen-ppc32.c [SHARED] (libc_hidden_def): Define __GI___strnlen as well as __GI_strnlen.
* sysdeps/powerpc/powerpc32/power7/strnlen.S (__strnlen): Use libc_hidden_def.
* sysdeps/tile/tilegx/strnlen.c (__strnlen): Likewise. |
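The mechanics, roughly: libc_hidden_proto attaches a hidden alias (__GI___strnlen) to the declaration and libc_hidden_def emits it at the definition, so intra-libc calls bind directly instead of going through the PLT. A sketch of the pattern (the macros are glibc-internal, from include/libc-symbols.h; the function body is a simple illustrative definition, not glibc's generic word-at-a-time one):

  #include <string.h>

  /* In the internal header (include/string.h): */
  extern size_t __strnlen (const char *__string, size_t __maxlen);
  libc_hidden_proto (__strnlen)

  /* At the definition (string/strnlen.c): */
  size_t
  __strnlen (const char *str, size_t maxlen)
  {
    /* Illustrative body: length of STR, but at most MAXLEN.  */
    const char *end = memchr (str, '\0', maxlen);
    return end != NULL ? (size_t) (end - str) : maxlen;
  }
  libc_hidden_def (__strnlen)        /* emits the __GI___strnlen alias */
  weak_alias (__strnlen, strnlen)    /* the exported name */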
||
Wilco Dijkstra
|
71bf272d91 |
2015-06-02 Szabolcs Nagy <szabolcs.nagy@arm.com>
* sysdeps/aarch64/libm-test-ulps: Update. |
||
Szabolcs Nagy
|
265a9b73ba | [AArch64] Fix inline asm clobber list in tls-macros.h | ||
Wilco Dijkstra
|
eda361c8d9 |
2015-05-06 Szabolcs Nagy <szabolcs.nagy@arm.com>
* sysdeps/aarch64/libm-test-ulps: Update. |
||
Roland McGrath
|
ac9e0e5e40 | Clean up sysdep-dl-routines variable. | ||
Joseph Myers
|
8116321f65 |
Fix libm feupdateenv namespace (bug 17748).
Concluding the fixes for C90 libm functions calling C99 fe* functions, this patch fixes the case of feupdateenv by making it a weak alias for __feupdateenv and making the affected code call __feupdateenv.

Tested for x86_64 (testsuite, and that installed stripped shared libraries are unchanged by the patch). Also tested for ARM (soft-float) that the math.h linknamespace tests now pass.

[BZ #17748]
* include/fenv.h (__feupdateenv): Use libm_hidden_proto.
* math/feupdateenv.c (__feupdateenv): Use libm_hidden_def.
* sysdeps/aarch64/fpu/feupdateenv.c (feupdateenv): Rename to __feupdateenv and define as weak alias of __feupdateenv. Use libm_hidden_weak.
* sysdeps/alpha/fpu/feupdateenv.c (__feupdateenv): Use libm_hidden_def.
* sysdeps/arm/feupdateenv.c (feupdateenv): Rename to __feupdateenv and define as weak alias of __feupdateenv. Use libm_hidden_weak.
* sysdeps/hppa/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/i386/fpu/feupdateenv.c (__feupdateenv): Use libm_hidden_def.
* sysdeps/ia64/fpu/feupdateenv.c (feupdateenv): Rename to __feupdateenv and define as weak alias of __feupdateenv. Use libm_hidden_weak.
* sysdeps/m68k/fpu/feupdateenv.c (__feupdateenv): Use libm_hidden_def.
* sysdeps/mips/fpu/feupdateenv.c (feupdateenv): Rename to __feupdateenv and define as weak alias of __feupdateenv. Use libm_hidden_weak.
* sysdeps/powerpc/fpu/feupdateenv.c (__feupdateenv): Use libm_hidden_def.
* sysdeps/powerpc/nofpu/feupdateenv.c (__feupdateenv): Likewise.
* sysdeps/powerpc/powerpc32/e500/nofpu/feupdateenv.c (__feupdateenv): Likewise.
* sysdeps/s390/fpu/feupdateenv.c (feupdateenv): Rename to __feupdateenv and define as weak alias of __feupdateenv. Use libm_hidden_weak.
* sysdeps/sh/sh4/fpu/feupdateenv.c (feupdateenv): Likewise.
* sysdeps/sparc/fpu/feupdateenv.c (__feupdateenv): Use libm_hidden_def.
* sysdeps/tile/math_private.h (__feupdateenv): New inline function.
* sysdeps/x86_64/fpu/feupdateenv.c (__feupdateenv): Use libm_hidden_def.
* sysdeps/generic/math_private.h (default_libc_feupdateenv): Call __feupdateenv instead of feupdateenv.
(default_libc_feupdateenv_test): Likewise.
(libc_feresetround_ctx): Likewise. |
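The rename-and-alias shape applied per target looks roughly like this (a sketch only: the body expresses feupdateenv's C99 semantics in portable calls; real implementations are target-specific, and inside libm the helper calls would likewise use the __-prefixed names; weak_alias and libm_hidden_def are glibc-internal macros):

  #include <fenv.h>

  /* Define the implementation under the reserved name, then export
     the standard name as a weak alias.  feupdateenv semantics:
     install *ENVP, then re-raise whatever exceptions were pending
     before the call.  */
  int
  __feupdateenv (const fenv_t *envp)
  {
    int pending = fetestexcept (FE_ALL_EXCEPT);
    if (fesetenv (envp) != 0)
      return 1;
    return feraiseexcept (pending);
  }
  libm_hidden_def (__feupdateenv)          /* hidden alias for intra-libm calls */
  weak_alias (__feupdateenv, feupdateenv)  /* standard, overridable name */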
||
Richard Earnshaw
|
dc400d7b73 | AArch64: Optimized implementations of strcpy and stpcpy. | ||
Richard Earnshaw
|
ec582ca0f3 | AArch64 optimized implementation of strrchr. | ||
Joseph Myers
|
01238691bb |
Fix libm fesetround namespace (bug 17748).
Continuing the fixes for C90 libm functions calling C99 fe* functions, this patch fixes the case of fesetround by making it a weak alias of __fesetround and making the affected code call __fesetround. An existing __fesetround function in fenv_libc.h for powerpc is renamed to __fesetround_inline. Tested for x86_64 (testsuite, and that disassembly of installed shared libraries is unchanged by the patch). Also tested for ARM (soft-float) that fesetround failures disappear from the linknamespace test results (feupdateenv remains to be addressed to complete fixing bug 17748). [BZ #17748] * include/fenv.h (__fesetround): Declare. Use libm_hidden_proto. * math/fesetround.c (fesetround): Rename to __fesetround and define as weak alias of __fesetround. Use libm_hidden_weak. * sysdeps/aarch64/fpu/fesetround.c (fesetround): Likewise. * sysdeps/alpha/fpu/fesetround.c (fesetround): Likewise. * sysdeps/arm/fesetround.c (fesetround): Likewise. * sysdeps/hppa/fpu/fesetround.c (fesetround): Likewise. * sysdeps/i386/fpu/fesetround.c (fesetround): Likewise. * sysdeps/ia64/fpu/fesetround.c (fesetround): Likewise. * sysdeps/m68k/fpu/fesetround.c (fesetround): Likewise. * sysdeps/mips/fpu/fesetround.c (fesetround): Likewise. * sysdeps/powerpc/fpu/fenv_libc.h (__fesetround): Rename to __fesetround_inline. * sysdeps/powerpc/fpu/fenv_private.h (libc_fesetround_ppc): Call __fesetround_inline instead of __fesetround. * sysdeps/powerpc/fpu/fesetround.c (fesetround): Rename to __fesetround and define as weak alias of __fesetround. Use libm_hidden_weak. Call __fesetround_inline instead of __fesetround. * sysdeps/powerpc/nofpu/fesetround.c (fesetround): Rename to __fesetround and define as weak alias of __fesetround. Use libm_hidden_weak. * sysdeps/powerpc/powerpc32/e500/nofpu/fesetround.c (fesetround): Likewise. * sysdeps/s390/fpu/fesetround.c (fesetround): Likewise. * sysdeps/sh/sh4/fpu/fesetround.c (fesetround): Likewise. * sysdeps/sparc/fpu/fesetround.c (fesetround): Likewise. * sysdeps/tile/math_private.h (__fesetround): New inline function. * sysdeps/x86_64/fpu/fesetround.c (fesetround): Rename to __fesetround and define as weak alias of __fesetround. Use libm_hidden_weak. * sysdeps/generic/math_private.h (default_libc_fesetround): Call __fesetround instead of fesetround. (default_libc_feholdexcept_setround): Likewise. (libc_feholdsetround_ctx): Likewise. (libc_feholdsetround_noex_ctx): Likewise. |