Commit Graph

1267 Commits

Author SHA1 Message Date
H.J. Lu
57a72fa350 x86-64: Add FMA multiarch functions to libm
This patch adds multiarch functions optimized with -mfma -mavx2 to libm.
e_pow-fma.c is compiled with $(config-cflags-nofma) due to PR 19003.
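
For context, the new *-fma.c files follow the usual glibc wrapper pattern:
redefine the entry-point name, define SECTION, and re-include the generic
source so the whole implementation is rebuilt with the -mfma -mavx2 flags
from the Makefile.  A rough, illustrative sketch (not copied from the tree):

	/* e_exp-fma.c, roughly: rebuild the generic e_exp.c as an FMA/AVX2
	   variant under a different internal symbol name.  */
	#define __ieee754_exp __ieee754_exp_fma
	#define SECTION __attribute__ ((section (".text.fma")))
	#include <sysdeps/ieee754/dbl-64/e_exp.c>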

	* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines):
	Add e_exp-fma, e_log-fma, e_pow-fma, s_atan-fma, e_asin-fma,
	e_atan2-fma, s_sin-fma, s_tan-fma, mplog-fma, mpa-fma,
	slowexp-fma, slowpow-fma, sincos32-fma, doasin-fma, dosincos-fma,
	halfulp-fma, mpexp-fma, mpatan2-fma, mpatan-fma, mpsqrt-fma,
	and mptan-fma.
	(CFLAGS-doasin-fma.c): New.
	(CFLAGS-dosincos-fma.c): Likewise.
	(CFLAGS-e_asin-fma.c): Likewise.
	(CFLAGS-e_atan2-fma.c): Likewise.
	(CFLAGS-e_exp-fma.c): Likewise.
	(CFLAGS-e_log-fma.c): Likewise.
	(CFLAGS-e_pow-fma.c): Likewise.
	(CFLAGS-halfulp-fma.c): Likewise.
	(CFLAGS-mpa-fma.c): Likewise.
	(CFLAGS-mpatan-fma.c): Likewise.
	(CFLAGS-mpatan2-fma.c): Likewise.
	(CFLAGS-mpexp-fma.c): Likewise.
	(CFLAGS-mplog-fma.c): Likewise.
	(CFLAGS-mpsqrt-fma.c): Likewise.
	(CFLAGS-mptan-fma.c): Likewise.
	(CFLAGS-s_atan-fma.c): Likewise.
	(CFLAGS-sincos32-fma.c): Likewise.
	(CFLAGS-slowexp-fma.c): Likewise.
	(CFLAGS-slowpow-fma.c): Likewise.
	(CFLAGS-s_sin-fma.c): Likewise.
	(CFLAGS-s_tan-fma.c): Likewise.
	* sysdeps/x86_64/fpu/multiarch/doasin-fma.c: New file.
	* sysdeps/x86_64/fpu/multiarch/dosincos-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/e_asin-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/e_atan2-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/e_exp-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/e_log-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/e_pow-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/halfulp-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/ifunc-avx-fma4.h: Likewise.
	* sysdeps/x86_64/fpu/multiarch/ifunc-fma4.h: Likewise.
	* sysdeps/x86_64/fpu/multiarch/mpa-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/mpatan-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/mpatan2-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/mpexp-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/mplog-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/mpsqrt-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/mptan-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_atan-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_sin-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_tan-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/sincos32-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/slowexp-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/slowpow-fma.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/e_asin.c: Rewrite.
	* sysdeps/x86_64/fpu/multiarch/e_atan2.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/e_exp.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/e_log.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/e_pow.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_atan.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_sin.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_tan.c: Likewise.
2017-08-07 08:20:56 -07:00
H.J. Lu
8537e0f6cf x86-64: Implement libmathvec IFUNC selectors in C
* sysdeps/x86_64/fpu/multiarch/Makefile (libmvec-sysdep_routines):
	Add svml_d_cos2_core-sse2, svml_d_cos4_core-sse,
	svml_d_cos8_core-avx2, svml_d_exp2_core-sse2,
	svml_d_exp4_core-sse, svml_d_exp8_core-avx2,
	svml_d_log2_core-sse2, svml_d_log4_core-sse,
	svml_d_log8_core-avx2, svml_d_pow2_core-sse2,
	svml_d_pow4_core-sse, svml_d_pow8_core-avx2,
	svml_d_sin2_core-sse2, svml_d_sin4_core-sse,
	svml_d_sin8_core-avx2, svml_d_sincos2_core-sse2,
	svml_d_sincos4_core-sse, svml_d_sincos8_core-avx2,
	svml_s_cosf16_core-avx2, svml_s_cosf4_core-sse2,
	svml_s_cosf8_core-sse, svml_s_expf16_core-avx2,
	svml_s_expf4_core-sse2, svml_s_expf8_core-sse,
	svml_s_logf16_core-avx2, svml_s_logf4_core-sse2,
	svml_s_logf8_core-sse, svml_s_powf16_core-avx2,
	svml_s_powf4_core-sse2, svml_s_powf8_core-sse,
	svml_s_sincosf16_core-avx2, svml_s_sincosf4_core-sse2,
	svml_s_sincosf8_core-sse, svml_s_sinf16_core-avx2,
	svml_s_sinf4_core-sse2 and svml_s_sinf8_core-sse.
	* sysdeps/x86_64/fpu/multiarch/ifunc-mathvec-avx2.h: New file.
	* sysdeps/x86_64/fpu/multiarch/ifunc-mathvec-avx512.h: Likewise.
	* sysdeps/x86_64/fpu/multiarch/ifunc-mathvec-sse4_1.h: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_cos2_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_cos4_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_cos8_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_exp2_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_exp4_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_log2_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_log4_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_log8_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_pow2_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_pow4_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sin2_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sin4_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sin8_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_cosf16_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_cosf4_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_cosf8_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_expf4_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_expf8_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_powf4_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_powf8_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sinf16_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sinf4_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sinf8_core.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_cos2_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_d_cos2_core-sse2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVbN2v_cos): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_d_cos4_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_d_cos4_core-sse.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVdN4v_cos): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_d_cos8_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_d_cos8_core-avx2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVeN8v_cos): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_d_exp2_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_d_exp2_core-sse2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVbN2v_exp): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_d_exp4_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_d_exp4_core-sse.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVdN4v_exp): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core-avx2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVeN8v_exp): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_d_log2_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_d_log2_core-sse2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVbN2v_log): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_d_log4_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_d_log4_core-sse.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVdN4v_log): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_d_log8_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_d_log8_core-avx2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVeN8v_log): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_d_pow2_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_d_pow2_core-sse2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVbN2vv_pow): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_d_pow4_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_d_pow4_core-sse.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVdN4vv_pow): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core-avx2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVeN8vv_pow): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sin2_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_d_sin2_core-sse2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVbN2v_sin): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sin4_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_d_sin4_core-sse.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVdN4v_sin): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sin8_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_d_sin8_core-avx2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVeN8v_sin): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core-sse2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVbN2vvv_sincos): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core-sse.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVdN4vvv_sincos): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core-avx2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVeN8vvv_sincos): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_s_cosf16_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_s_cosf16_core-avx2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVeN16v_cosf): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_s_cosf4_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_s_cosf4_core-sse2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVbN4v_cosf): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_s_cosf8_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_s_cosf8_core-sse.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVdN8v_cosf): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core-avx2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVeN16v_expf): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_s_expf4_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_s_expf4_core-sse2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVbN4v_expf): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_s_expf8_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_s_expf8_core-sse.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVdN8v_expf): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core-avx2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVeN16v_logf): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core-sse2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVbN4v_logf): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core-sse.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVdN8v_logf): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core-avx2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVeN16vv_powf): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_s_powf4_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_s_powf4_core-sse2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVbN4vv_powf): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_s_powf8_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_s_powf8_core-sse.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVdN8vv_powf): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core-avx2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVeN16vvv_sincosf): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core-sse2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVbN4vvv_sincosf): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core-sse.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVdN8vvv_sincosf): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sinf16_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_s_sinf16_core-avx2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVeN16v_sinf): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sinf4_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_s_sinf4_core-sse2.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVbN4v_sinf): Removed.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sinf8_core.S:  Renamed to
	...
	* sysdeps/x86_64/fpu/multiarch/svml_s_sinf8_core-sse.S: This.
	Don't include <sysdep.h> nor <init-arch.h>.
	(_ZGVdN8v_sinf): Removed.
2017-08-04 13:03:58 -07:00
H.J. Lu
10a87ca476 x86-64: Implement libm IFUNC selectors in C
* sysdeps/x86_64/fpu/multiarch/Makefile (libm-sysdep_routines):
	Add s_ceil-sse4_1, s_ceilf-sse4_1, s_floor-sse4_1,
	s_floorf-sse4_1, s_nearbyint-sse4_1, s_nearbyintf-sse4_1,
	s_rint-sse4_1 and s_rintf-sse4_1.
	* sysdeps/x86_64/fpu/multiarch/ifunc-sse4_1.h: New file.
	* sysdeps/x86_64/fpu/multiarch/s_ceil.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_ceilf.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_floor.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_floorf.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_nearbyint.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_nearbyintf.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_rint.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_rintf.c: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_ceil.S: Renamed to ...
	* sysdeps/x86_64/fpu/multiarch/s_ceil-sse4_1.S: This.  Don't
	include <machine/asm.h> nor <init-arch.h>.  Include <sysdep.h>.
	(__ceil): Removed.
	* sysdeps/x86_64/fpu/multiarch/s_ceilf.S: Renamed to ...
	* sysdeps/x86_64/fpu/multiarch/s_ceilf-sse4_1.S: This.  Don't
	include <machine/asm.h> nor <init-arch.h>.  Include <sysdep.h>.
	(__ceilf): Removed.
	* sysdeps/x86_64/fpu/multiarch/s_floor.S: Renamed to ...
	* sysdeps/x86_64/fpu/multiarch/s_floor-sse4_1.S: This.  Don't
	include <machine/asm.h> nor <init-arch.h>.  Include <sysdep.h>.
	(__floor): Removed.
	* sysdeps/x86_64/fpu/multiarch/s_floorf.S: Renamed to ...
	* sysdeps/x86_64/fpu/multiarch/s_floorf-sse4_1.S: This.  Don't
	include <machine/asm.h> nor <init-arch.h>.  Include <sysdep.h>.
	(__floorf): Removed.
	* sysdeps/x86_64/fpu/multiarch/s_nearbyint.S: Renamed to ...
	* sysdeps/x86_64/fpu/multiarch/s_nearbyint-sse4_1.S: This.  Don't
	include <machine/asm.h> nor <init-arch.h>.  Include <sysdep.h>.
	(__nearbyint): Removed.
	* sysdeps/x86_64/fpu/multiarch/s_nearbyintf.S: Renamed to ...
	* sysdeps/x86_64/fpu/multiarch/s_nearbyintf-sse4_1.S: This.  Don't
	include <machine/asm.h> nor <init-arch.h>.  Include <sysdep.h>.
	(__nearbyintf): Removed.
	* sysdeps/x86_64/fpu/multiarch/s_rint.S: Renamed to ...
	* sysdeps/x86_64/fpu/multiarch/s_rint-sse4_1.S: This.  Don't
	include <machine/asm.h> nor <init-arch.h>.  Include <sysdep.h>.
	(__rint): Removed.
	* sysdeps/x86_64/fpu/multiarch/s_rintf.S: Renamed to ...
	* sysdeps/x86_64/fpu/multiarch/s_rintf-sse4_1.S: This.  Don't
	include <machine/asm.h> nor <init-arch.h>.  Include <sysdep.h>.
	(__rintf): Removed.
2017-08-04 13:02:13 -07:00
H.J. Lu
fc11ff8d0a x86-64: Use IFUNC memcpy and mempcpy in libc.a
Since apply_irel is called before memcpy and mempcpy are called, we
can use IFUNC memcpy and mempcpy in libc.a.

	* sysdeps/x86_64/memmove.S (MEMCPY_SYMBOL): Don't check SHARED.
	(MEMPCPY_SYMBOL): Likewise.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Test memcpy and mempcpy in libc.a.
	* sysdeps/x86_64/multiarch/memcpy-ssse3-back.S: Also include
	in libc.a.
	* sysdeps/x86_64/multiarch/memcpy-ssse3.S: Likewise.
	* sysdeps/x86_64/multiarch/memmove-avx512-no-vzeroupper.S:
	Likewise.
	* sysdeps/x86_64/multiarch/memcpy.c: Also include in libc.a.
	(__hidden_ver1): Don't use in libc.a.
	* sysdeps/x86_64/multiarch/memmove-sse2-unaligned-erms.S
	(__mempcpy): Don't create a weak alias in libc.a.
	* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S: Support
	libc.a.
	* sysdeps/x86_64/multiarch/mempcpy.c: Also include in libc.a.
	(__hidden_ver1): Don't use in libc.a.
2017-08-04 12:27:18 -07:00
H.J. Lu
19f1a11e7e Check linker support for INSERT in linker script
Since gold doesn't support INSERT in linker script:

https://sourceware.org/bugzilla/show_bug.cgi?id=21676

tst-split-dynreloc fails to link with gold.  Check if linker supports
INSERT in linker script before using it.

	* config.make.in (have-insert): New.
	* configure.ac (libc_cv_insert): New.  Set to yes if linker
	supports INSERT in linker script.
	(AC_SUBST(libc_cv_insert)): New.
	* configure: Regenerated.
	* sysdeps/x86_64/Makefile (tests): Add tst-split-dynreloc only
	if $(have-insert) == yes.
2017-08-04 12:17:30 -07:00
H.J. Lu
c8a0e6ec03 x86: Remove __memset_zero_constant_len_parameter [BZ #21790]
__memset_zero_constant_len_parameter should have been removed by

commit 61062f5630
Author: Ulrich Drepper <drepper@redhat.com>
Date:   Tue Mar 1 00:35:23 2005 +0000

    2005-02-24  Roland McGrath  <roland@redhat.com>

            * debug/Versions (libc: GLIBC_2.4): Remove
            __memset_zero_constant_len_parameter.
            * sysdeps/generic/memset_chk.c: Remove alias and warning.
            * misc/sys/cdefs.h (__warndecl): New macro.
            * debug/warning-nop.c: New file.
            * string/bits/string3.h (memset): Call __warn_memset_zero_len with no
            arguments, instead of calling __memset_zero_constant_len_parameter.
            Use __warndecl for __warn_memset_zero_len.
            * debug/Makefile (routines): Add $(static-only-routines).
            (static-only-routines): New variable.

This patch removes the last remaining pieces of it.  Tested on i586,
i686 and x86-64.

	[BZ #21790]
	* sysdeps/i386/i586/memset.S
	(__memset_zero_constant_len_parameter): Removed.
	* sysdeps/i386/i686/memset.S
	(__memset_zero_constant_len_parameter): Likewise.
	* sysdeps/i386/i686/multiarch/memset_chk.S
	(__memset_zero_constant_len_parameter): Likewise.
	* sysdeps/x86_64/memset.S (__memset_zero_constant_len_parameter):
	Likewise.
2017-08-04 10:56:51 -07:00
H.J. Lu
5b736bc9b5 x86-64: Check PIC instead of SHARED in start.S
Since start.o may be compiled as PIC, we should check PIC instead of
SHARED.

	* sysdeps/x86_64/start.S (_start): Check PIC instead of SHARED.
2017-08-02 10:27:34 -07:00
H.J. Lu
7a499756ab x86-64: Test memmove_chk and memset_chk only in libc.so [BZ #21741]
Since there are no multiarch versions of memmove_chk and memset_chk,
test multiarch versions of memmove_chk and memset_chk only in libc.so.

	[BZ #21741]
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Test memmove_chk and memset_chk only
	in libc.so.
2017-07-10 04:44:38 -07:00
H.J. Lu
58d021c836 x86-64: Update comments in IFUNC selectors
* sysdeps/x86_64/multiarch/memcmp.c: Update comments.
	* sysdeps/x86_64/multiarch/memmove.c: Likewise.
	* sysdeps/x86_64/multiarch/memrchr.c: Likewise.
	* sysdeps/x86_64/multiarch/memset.c: Likewise.
	* sysdeps/x86_64/multiarch/rawmemchr.c: Likewise.
	* sysdeps/x86_64/multiarch/strchrnul.c: Likewise.
	* sysdeps/x86_64/multiarch/strlen.c: Likewise.
	* sysdeps/x86_64/multiarch/strnlen.c: Likewise.
	* sysdeps/x86_64/multiarch/wcschr.c: Likewise.
	* sysdeps/x86_64/multiarch/wcscpy.c: Likewise.
	* sysdeps/x86_64/multiarch/wcslen.c: Likewise.
	* sysdeps/x86_64/multiarch/wcsnlen.c: Likewise.
	* sysdeps/x86_64/multiarch/wmemchr.c: Likewise.
	* sysdeps/x86_64/multiarch/wmemcmp.c: Likewise.
	* sysdeps/x86_64/multiarch/wmemset.c: Likewise.
	* sysdeps/x86_64/multiarch/wmemset_chk.c: Likewise.
2017-07-09 11:43:20 -07:00
H.J. Lu
4df54c89bb x86-64: Update comments in ifunc-impl-list.c
All x86-64 IFUNC selectors are written in C now.  Update comments to
reflect it.

	* sysdeps/x86_64/multiarch/ifunc-impl-list.c: Update comments.
2017-07-09 11:38:37 -07:00
H.J. Lu
031e519c95 x86-64: Align the stack in __tls_get_addr [BZ #21609]
This change forces realignment of the stack pointer in __tls_get_addr, so
that binaries compiled by GCCs older than GCC 4.9:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58066

continue to work even if vector instructions are used in glibc which
require the ABI stack realignment.

__tls_get_addr_slow is added to handle the slow paths in the default
implementation of __tls_get_addr in elf/dl-tls.c.  The new __tls_get_addr
calls __tls_get_addr_slow after realigning the stack.  Internal calls
within ld.so go directly to the default implementation of __tls_get_addr
because they do not need stack realignment.
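
A conceptual, hypothetical-name sketch of the idea (the real wrapper is the
new hand-written tls_get_addr.S, not C; force_align_arg_pointer merely
expresses the same realignment here):

	#include <stddef.h>

	typedef struct { size_t ti_module; size_t ti_offset; } tls_index;

	static void *
	tls_get_addr_slow (tls_index *ti)
	{
	  /* Stand-in for the default __tls_get_addr path in elf/dl-tls.c.  */
	  (void) ti;
	  return 0;
	}

	/* GCC emits a prologue that realigns the stack pointer to 16 bytes,
	   so callers built by pre-4.9 GCC (which may enter without the full
	   16-byte alignment) stay safe once vector code is reached.  */
	__attribute__ ((force_align_arg_pointer))
	void *
	tls_get_addr_aligned (tls_index *ti)
	{
	  return tls_get_addr_slow (ti);
	}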

	[BZ #21609]
	* sysdeps/x86_64/Makefile (sysdep-dl-routines): Add tls_get_addr.
	(gen-as-const-headers): Add rtld-offsets.sym.
	* sysdeps/x86_64/dl-tls.c: New file.
	* sysdeps/x86_64/rtld-offsets.sym: Likewise.
	* sysdeps/x86_64/tls_get_addr.S: Likewise.
	* sysdeps/x86_64/dl-tls.h: Add multiple inclusion guards.
	* sysdeps/x86_64/tlsdesc.sym (TI_MODULE_OFFSET): New.
	(TI_OFFSET_OFFSET): Likewise.
2017-07-06 04:43:20 -07:00
Joseph Myers
073e8fa773 Require binutils 2.25 or later to build glibc.
This patch implements a requirement of binutils >= 2.25 (up from 2.22)
to build glibc.  Tests for 2.24 or later on x86_64 and s390 are
removed.  It was already the case, as indicated by buildbot results,
that 2.24 was too old for building tests for 32-bit x86 (produced
internal linker errors linking elf/tst-gnu2-tls1mod.so).  I don't know
if any configure tests for binutils features are obsolete given the
increased version requirement.

Tested for x86_64.

	* configure.ac (AS): Require binutils 2.25 or later.
	(LD): Likewise.
	* configure: Regenerated.
	* sysdeps/s390/configure.ac (AS): Remove version check.
	* sysdeps/s390/configure: Regenerated.
	* sysdeps/x86_64/configure.ac (AS): Remove version check.
	* sysdeps/x86_64/configure: Regenerated.
	* manual/install.texi (Tools for Compilation): Document
	requirement for binutils 2.25 or later.
	* INSTALL: Regenerated.
2017-06-28 11:31:50 +00:00
H.J. Lu
e94c310357 x86-64: Optimize memcmp-avx2-movbe.S for short difference
Check the first 32 bytes before checking size when size >= 32 bytes
to avoid an unnecessary branch if the difference is in the first 32 bytes.
Replace vpmovmskb/subl/jnz with vptest/jnc.

On Haswell, the new version is as fast as the previous one.  On Skylake,
the new version is a little bit faster.

	* sysdeps/x86_64/multiarch/memcmp-avx2-movbe.S (MEMCMP): Check
	the first 32 bytes before checking size when size >= 32 bytes.
	Replace vpmovmskb/subl/jnz with vptest/jnc.
2017-06-27 07:55:00 -07:00
Joseph Myers
c86ed71d63 Add float128 support for x86_64, x86.
This patch enables float128 support for x86_64 and x86.  All GCC
versions that can build glibc provide the required support, but since
GCC 6 and before don't provide __builtin_nanq / __builtin_nansq, sNaN
tests and some tests of NaN payloads need to be disabled with such
compilers (this does not affect the generated glibc binaries at all,
just the tests).  bits/floatn.h declares float128 support to be
available for GCC versions that provide the required libgcc support
(4.3 for x86_64, 4.4 for i386 GNU/Linux, 4.5 for i386 GNU/Hurd);
compilation-only support was present some time before then, but not
really useful without the libgcc functions.

fenv_private.h needed updating to avoid trying to put _Float128 values
in registers.  I make no assertion of optimality of the
math_opt_barrier / math_force_eval definitions for this case; they are
simply intended to be sufficient to work correctly.

Tested for x86_64 and x86, with GCC 7 and GCC 6.  (Testing for x32 was
compilation tests only with build-many-glibcs.py to verify the ABI
baseline updates.  I have not done any testing for Hurd, although the
float128 support is enabled there as for GNU/Linux.)
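
A tiny usage sketch of the new support (assumes GCC 7, as used for testing
above, and the TS 18661-3 feature-test macro; sqrtf128 and strfromf128 are
interfaces exposed by this and related float128 patches):

	#define __STDC_WANT_IEC_60559_TYPES_EXT__ 1
	#include <math.h>
	#include <stdio.h>
	#include <stdlib.h>

	int
	main (void)
	{
	  _Float128 x = sqrtf128 (2.0f128);	/* 113-bit significand */
	  char buf[64];
	  strfromf128 (buf, sizeof buf, "%.33g", x);
	  puts (buf);
	  return 0;
	}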

	* sysdeps/i386/Implies: Add ieee754/float128.
	* sysdeps/x86_64/Implies: Likewise.
	* sysdeps/x86/bits/floatn.h: New file.
	* sysdeps/x86/float128-abi.h: Likewise.
	* manual/math.texi (Mathematics): Document support for _Float128
	on x86_64 and x86.
	* sysdeps/i386/fpu/fenv_private.h: Include <bits/floatn.h>.
	(math_opt_barrier): Do not put _Float128 values in floating-point
	registers.
	(math_force_eval): Likewise.
	[__x86_64__] (SET_RESTORE_ROUNDF128): New macro.
	* sysdeps/x86/fpu/Makefile [$(subdir) = math] (CPPFLAGS): Append
	to Makefile variable.
	* sysdeps/x86/fpu/e_sqrtf128.c: New file.
	* sysdeps/x86/fpu/sfp-machine.h: Likewise.  Based on libgcc.
	* sysdeps/x86/math-tests.h: New file.
	* math/libm-test-support.h (XFAIL_FLOAT128_PAYLOAD): New macro.
	* math/libm-test-getpayload.inc (getpayload_test_data): Use
	XFAIL_FLOAT128_PAYLOAD.
	* math/libm-test-setpayload.inc (setpayload_test_data): Likewise.
	* math/libm-test-totalorder.inc (totalorder_test_data): Likewise.
	* math/libm-test-totalordermag.inc (totalordermag_test_data):
	Likewise.
	* sysdeps/unix/sysv/linux/i386/libc.abilist: Update.
	* sysdeps/unix/sysv/linux/i386/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/64/libc.abilist: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/64/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/x32/libc.abilist: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/x32/libm.abilist: Likewise.
	* sysdeps/i386/fpu/libm-test-ulps: Likewise.
	* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise.
	* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
2017-06-26 22:02:24 +00:00
H.J. Lu
049816c3be x86-64: Optimize L(between_2_3) in memcmp-avx2-movbe.S
Turn

	movzbl	-1(%rdi, %rdx), %edi
	movzbl	-1(%rsi, %rdx), %esi
	orl	%edi, %eax
	orl	%esi, %ecx

into

	movb	-1(%rdi, %rdx), %al
	movb	-1(%rsi, %rdx), %cl

	* sysdeps/x86_64/multiarch/memcmp-avx2-movbe.S (between_2_3):
	Replace movzbl and orl with movb.
2017-06-23 12:46:12 -07:00
Florian Weimer
bc0382ae90 x86-64: Fix comment typo in memcmp-avx2-movbe.S 2017-06-23 19:00:58 +02:00
Florian Weimer
3ec7c02cc3 x86-64: memcmp-avx2-movbe.S needs saturating subtraction [BZ #21662]
This code:

L(between_2_3):
	/* Load as big endian with overlapping loads and bswap to avoid
	   branches.  */
	movzwl	-2(%rdi, %rdx), %eax
	movzwl	-2(%rsi, %rdx), %ecx
	shll	$16, %eax
	shll	$16, %ecx
	movzwl	(%rdi), %edi
	movzwl	(%rsi), %esi
	orl	%edi, %eax
	orl	%esi, %ecx
	bswap	%eax
	bswap	%ecx
	subl	%ecx, %eax
	ret

needs a saturating subtract because the full register is used.
With this commit, only the lower 24 bits of the register are used,
so a regular subtraction suffices.
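
A small self-contained illustration of the sign problem (plain C, not the
assembly in question): with all 32 bits significant the subtraction can
wrap, while 24-bit values always produce a correctly signed difference.

	#include <stdint.h>
	#include <stdio.h>

	int
	main (void)
	{
	  /* Full 32-bit values: a > b, yet the wrapped difference is negative.  */
	  uint32_t a = 0xffffffffu, b = 0x00000001u;
	  printf ("%d\n", (int32_t) (a - b));	/* -2: wrong sign for memcmp */

	  /* Only 24 significant bits (as in the fixed code): the difference
	     always fits in a signed 32-bit int, so a plain subtraction works.  */
	  uint32_t a24 = 0x00ffffffu, b24 = 0x00000001u;
	  printf ("%d\n", (int32_t) (a24 - b24));	/* positive: correct */
	  return 0;
	}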

The test case change adds coverage for these kinds of bugs.
2017-06-23 17:24:40 +02:00
H.J. Lu
11ffcacb64 x86-64: Implement strcmp family IFUNC selectors in C
Implement strcmp family IFUNC selectors in C.

All internal calls within libc.so can use IFUNC on x86-64 since unlike
x86, x86-64 supports PC-relative addressing to access the GOT entry so
that it can call via PLT without using an extra register.  For libc.a,
we can't use IFUNC for functions which are called before IFUNC has been
initialized.  Using IFUNC internally reduces the icache footprint since
libc.so and other code in the process use the same implementations.
This patch uses IFUNC for strcmp family functions within libc.
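
A minimal sketch of what "IFUNC selector in C" means here, written with the
GCC ifunc attribute and __builtin_cpu_supports instead of glibc's internal
init-arch.h helpers; all names below are illustrative stand-ins:

	#include <string.h>

	static int my_strcmp_sse2 (const char *a, const char *b)
	{ return strcmp (a, b); }
	static int my_strcmp_avx2 (const char *a, const char *b)
	{ return strcmp (a, b); }

	/* The resolver runs once, during relocation, and the pointer it
	   returns is what every later call to my_strcmp is bound to.  */
	static int (*resolve_my_strcmp (void)) (const char *, const char *)
	{
	  __builtin_cpu_init ();
	  if (__builtin_cpu_supports ("avx2"))
	    return my_strcmp_avx2;
	  return my_strcmp_sse2;
	}

	int my_strcmp (const char *, const char *)
	    __attribute__ ((ifunc ("resolve_my_strcmp")));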

	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	strcmp-sse2, strcmp-sse4_2, strncmp-sse2, strncmp-sse4_2,
	strcasecmp_l-sse2, strcasecmp_l-sse4_2, strcasecmp_l-avx,
	strncase_l-sse2, strncase_l-sse4_2 and strncase_l-avx.
	* sysdeps/x86_64/multiarch/ifunc-strcasecmp.h: New file.
	* sysdeps/x86_64/multiarch/strcasecmp.c: Likewise.
	* sysdeps/x86_64/multiarch/strcasecmp_l-avx.S: Likewise.
	* sysdeps/x86_64/multiarch/strcasecmp_l-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/strcasecmp_l-sse4_2.S: Likewise.
	* sysdeps/x86_64/multiarch/strcasecmp_l.c: Likewise.
	* sysdeps/x86_64/multiarch/strcmp-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/strcmp-sse4_2.S: Likewise.
	* sysdeps/x86_64/multiarch/strcmp.c: Likewise.
	* sysdeps/x86_64/multiarch/strncase.c: Likewise.
	* sysdeps/x86_64/multiarch/strncase_l-avx.S : Likewise.
	* sysdeps/x86_64/multiarch/strncase_l-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/strncase_l-sse4_2.S: Likewise.
	* sysdeps/x86_64/multiarch/strncase_l.c: Likewise.
	* sysdeps/x86_64/multiarch/strncmp-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/strncmp-sse4_2.S: Likewise.
	* sysdeps/x86_64/multiarch/strncmp.c: Likewise.
	* sysdeps/x86_64/multiarch/strcasecmp_l.S: Removed.
	* sysdeps/x86_64/multiarch/strcmp.S: Likewise.
	* sysdeps/x86_64/multiarch/strncase_l.S: Likewise.
	* sysdeps/x86_64/multiarch/strncmp.S: Likewise.
	* sysdeps/x86_64/multiarch/strcmp-sse42.S: Include <sysdep.h>.
	(STRCMP_SSE42): New.  Defined to __strcmp_sse42 if not defined.
	[USE_AS_STRCASECMP_L || USE_AS_STRNCASECMP_L]: Include
	"locale-defines.h".
	(UPDATE_STRNCMP_COUNTER): New.
	(SECTION): Likewise.
	(GLABEL): Likewise.
	(LABEL): Likewise.
	* sysdeps/x86_64/multiarch/strncmp-ssse3.S: Rewrite and enable
	for libc.a.
2017-06-21 12:11:06 -07:00
Zack Weinberg
af85385f31 Use locale_t, not __locale_t, throughout glibc
<locale.h> is specified to define locale_t in POSIX.1-2008, and so are
all of the headers that define functions that take locale_t arguments.
Under _GNU_SOURCE, the additional headers that define such functions
have also always defined locale_t.  Therefore, there is no need to use
__locale_t in public function prototypes, nor in any internal code.
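
A small example of what user-facing code looks like with the public
POSIX.1-2008 name (sketch; error handling for newlocale omitted):

	#include <locale.h>
	#include <strings.h>

	int
	compare_in_c_locale (const char *a, const char *b)
	{
	  locale_t c_loc = newlocale (LC_ALL_MASK, "C", (locale_t) 0);
	  int r = strcasecmp_l (a, b, c_loc);
	  freelocale (c_loc);
	  return r;
	}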

	* ctype/ctype-c99_l.c, ctype/ctype.h, ctype/ctype_l.c
	* include/monetary.h, include/stdlib.h, include/time.h
	* include/wchar.h, locale/duplocale.c, locale/freelocale.c
	* locale/global-locale.c, locale/langinfo.h, locale/locale.h
	* locale/localeinfo.h, locale/newlocale.c
	* locale/nl_langinfo_l.c, locale/uselocale.c
	* localedata/bug-usesetlocale.c, localedata/tst-xlocale2.c
	* stdio-common/vfscanf.c, stdlib/monetary.h, stdlib/stdlib.h
	* stdlib/strfmon_l.c, stdlib/strtod_l.c, stdlib/strtof_l.c
	* stdlib/strtol.c, stdlib/strtol_l.c, stdlib/strtold_l.c
	* stdlib/strtoll_l.c, stdlib/strtoul_l.c, stdlib/strtoull_l.c
	* string/strcasecmp.c, string/strcoll_l.c, string/string.h
	* string/strings.h, string/strncase.c, string/strxfrm_l.c
	* sysdeps/ieee754/float128/strtof128_l.c
	* sysdeps/ieee754/float128/wcstof128.c
	* sysdeps/ieee754/float128/wcstof128_l.c
	* sysdeps/ieee754/ldbl-128ibm/strtold_l.c
	* sysdeps/ieee754/ldbl-64-128/strtold_l.c
	* sysdeps/ieee754/ldbl-opt/nldbl-compat.c
	* sysdeps/ieee754/ldbl-opt/nldbl-strfmon_l.c
	* sysdeps/ieee754/ldbl-opt/nldbl-strtold_l.c
	* sysdeps/ieee754/ldbl-opt/nldbl-wcstold_l.c
	* sysdeps/powerpc/powerpc32/power7/strcasecmp.S
	* sysdeps/powerpc/powerpc64/power7/strcasecmp.S
	* sysdeps/x86_64/strcasecmp_l-nonascii.c
	* sysdeps/x86_64/strncase_l-nonascii.c, time/strftime_l.c
	* time/strptime_l.c, time/time.h, wcsmbs/mbsrtowcs_l.c
	* wcsmbs/wchar.h, wcsmbs/wcscasecmp.c, wcsmbs/wcsncase.c
	* wcsmbs/wcstod.c, wcsmbs/wcstod_l.c, wcsmbs/wcstof.c
	* wcsmbs/wcstof_l.c, wcsmbs/wcstol_l.c, wcsmbs/wcstold.c
	* wcsmbs/wcstold_l.c, wcsmbs/wcstoll_l.c, wcsmbs/wcstoul_l.c
	* wcsmbs/wcstoull_l.c, wctype/iswctype_l.c
	* wctype/towctrans_l.c, wctype/wcfuncs_l.c
	* wctype/wctrans_l.c, wctype/wctype.h, wctype/wctype_l.c:
	Change all uses of __locale_t to locale_t.
2017-06-20 20:30:06 -04:00
Zack Weinberg
c0b23001a8 Fix fallout from bits/string.h removal.
Remove one more string inline that was defined directly in string.h;
in the absence of the rest of the inlines, it broke the build.

Like other ifunc shims for these functions,
x86_64/multiarch/{mem,st}pcpy.c need to define __NO_STRING_INLINES and
NO_MEMPCPY_STPCPY_REDIRECT.
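
The pattern in question, as a short sketch (the two macros are the ones
named above; the wrapper files define them before pulling in string.h):

	/* Hypothetical ifunc-shim source file.  */
	#define NO_MEMPCPY_STPCPY_REDIRECT
	#define __NO_STRING_INLINES
	#include <string.h>
	/* ... the mempcpy/stpcpy IFUNC definitions follow here ...  */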

	* string/string.h (__mempcpy_inline): Delete.
	* sysdeps/x86_64/multiarch/mempcpy.c
	* sysdeps/x86_64/multiarch/stpcpy.c:
	Define NO_MEMPCPY_STPCPY_REDIRECT and __NO_STRING_INLINES
	before including string.h.
2017-06-20 09:39:08 -04:00
Zack Weinberg
09a596cc2c Remove bits/string.h.
These machine-dependent inline string functions have never been on by
default, and even if they were a good idea at the time they were
introduced, they haven't really been touched in ten to fifteen years
and probably aren't a good idea on current-gen processors.  Current
thinking is that this class of optimization is best left to the
compiler.

	* bits/string.h, string/bits/string.h
	* sysdeps/aarch64/bits/string.h
	* sysdeps/m68k/m680x0/m68020/bits/string.h
	* sysdeps/s390/bits/string.h, sysdeps/sparc/bits/string.h
	* sysdeps/x86/bits/string.h: Delete file.

	* string/string.h: Don't include bits/string.h.
	* string/bits/string3.h: Rename to bits/string_fortified.h.
	No need to undef various symbols that the removed headers
	might have defined as macros.
	* string/Makefile (headers): Remove bits/string.h, change
	bits/string3.h to bits/string_fortified.h.
	* string/string-inlines.c: Update commentary.  Remove definitions
	of various macros that nothing looks at anymore.  Don't directly
	include bits/string.h. Set _STRING_INLINE_unaligned here, based on
	compiler-predefined macros.
	* string/strncat.c: If STRNCAT is not defined, or STRNCAT_PRIMARY
	_is_ defined, provide internal hidden alias __strncat.
	* include/string.h: Declare internal hidden alias __strncat.
	Only forward __stpcpy to __builtin_stpcpy if __NO_STRING_INLINES is
	not defined.
	* include/bits/string3.h: Rename to bits/string_fortified.h,
	update to match above.

	* sysdeps/i386/string-inlines.c: Define compat symbols for
	everything formerly defined by sysdeps/x86/bits/string.h.
	Make existing definitions into compat symbols as well.
	Remove some no-longer-necessary messing around with macros.

	* sysdeps/powerpc/powerpc32/power4/multiarch/mempcpy.c
	* sysdeps/powerpc/powerpc64/multiarch/mempcpy.c
	* sysdeps/powerpc/powerpc64/multiarch/stpcpy.c
	* sysdeps/s390/multiarch/mempcpy.c
	No need to define _HAVE_STRING_ARCH_mempcpy.
	Do define __NO_STRING_INLINES and NO_MEMPCPY_STPCPY_REDIRECT.

	* sysdeps/i386/i686/multiarch/strncat-c.c
	* sysdeps/s390/multiarch/strncat-c.c
	* sysdeps/x86_64/multiarch/strncat-c.c
	Define STRNCAT_PRIMARY.  Don't change definition of libc_hidden_def.
2017-06-20 08:21:24 -04:00
Siddhesh Poyarekar
629ebc873a Fix typo when undefining weak_alias
The macro directive #undef was miswritten as #undefine.

	* sysdeps/x86_64/multiarch/rawmemchr-sse2.S: Fix typo.
2017-06-19 14:56:40 +05:30
H.J. Lu
70fe2eb794 x86-64: Implement strcspn/strpbrk/strspn IFUNC selectors in C
Implement strcspn/strpbrk/strspn IFUNC selectors in C

All internal calls within libc.so can use IFUNC on x86-64 since unlike
x86, x86-64 supports PC-relative addressing to access the GOT entry so
that it can call via PLT without using an extra register.  For libc.a,
we can't use IFUNC for functions which are called before IFUNC has been
initialized.  Using IFUNC internally reduces the icache footprint since
libc.so and other code in the process use the same implementations.
This patch uses IFUNC for strcspn/strpbrk/strspn functions within libc.

	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	strcspn-sse2, strpbrk-sse2 and strspn-sse2.
	* sysdeps/x86_64/strcspn.S (STRPBRK_P): Removed.
	Check USE_AS_STRPBRK instead of STRPBRK_P.
	* sysdeps/x86_64/strpbrk.S (USE_AS_STRPBRK): New.
	* sysdeps/x86_64/multiarch/ifunc-sse4_2.h: New file.
	* sysdeps/x86_64/multiarch/strcspn-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/strcspn.c: Likewise.
	* sysdeps/x86_64/multiarch/strpbrk-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/strpbrk.c: Likewise.
	* sysdeps/x86_64/multiarch/strspn-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/strspn.c: Likewise.
	* sysdeps/x86_64/multiarch/strcspn.S: Removed.
	* sysdeps/x86_64/multiarch/strpbrk.S: Likewise.
	* sysdeps/x86_64/multiarch/strspn.S: Likewise.
	* sysdeps/x86_64/multiarch/strpbrk-c.c: Remove "#ifdef SHARED"
	and "#endif".
2017-06-15 08:59:05 -07:00
H.J. Lu
9f4254b8bd x86-64: Implement wcscpy IFUNC selector in C
* sysdeps/x86_64/multiarch/wcscpy.S: Removed.
	* sysdeps/x86_64/multiarch/wcscpy.c: New file.
2017-06-15 08:57:52 -07:00
H.J. Lu
9ed0aa15d3 x86-64: Implement strcat family IFUNC selectors in C
Implement strcat family IFUNC selectors in C.

All internal calls within libc.so can use IFUNC on x86-64 since unlike
x86, x86-64 supports PC-relative addressing to access the GOT entry so
that it can call via PLT without using an extra register.  For libc.a,
we can't use IFUNC for functions which are called before IFUNC has been
initialized.  Using IFUNC internally reduces the icache footprint since
libc.so and other code in the process use the same implementations.
This patch uses IFUNC for strcat family functions within libc.

	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	strcat-sse2.
	* sysdeps/x86_64/multiarch/strcat-sse2.S: New file.
	* sysdeps/x86_64/multiarch/strcat.c: Likewise.
	* sysdeps/x86_64/multiarch/strncat.c: Likewise.
	* sysdeps/x86_64/multiarch/strcat.S: Removed.
	* sysdeps/x86_64/multiarch/strncat.S: Likewise.
2017-06-15 08:56:59 -07:00
H.J. Lu
b91a52d0d7 x86-64: Implement memcmp family IFUNC selectors in C
Implement memcmp family IFUNC selectors in C.

All internal calls within libc.so can use IFUNC on x86-64 since unlike
x86, x86-64 supports PC-relative addressing to access the GOT entry so
that it can call via PLT without using an extra register.  For libc.a,
we can't use IFUNC for functions which are called before IFUNC has been
initialized.  Using IFUNC internally reduces the icache footprint since
libc.so and other code in the process use the same implementations.
This patch uses IFUNC for memcmp family functions within libc.

	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	memcmp-sse2.
	* sysdeps/x86_64/multiarch/ifunc-memcmp.h: New file.
	* sysdeps/x86_64/multiarch/memcmp-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/memcmp.c: Likewise.
	* sysdeps/x86_64/multiarch/wmemcmp.c: Likewise.
	* sysdeps/x86_64/multiarch/memcmp.S: Removed.
	* sysdeps/x86_64/multiarch/wmemcmp.S: Likewise.
2017-06-15 08:49:57 -07:00
H.J. Lu
93e46f8773 x86-64: Implement memset family IFUNC selectors in C
Implement memset family IFUNC selectors in C.

All internal calls within libc.so can use IFUNC on x86-64 since unlike
x86, x86-64 supports PC-relative addressing to access the GOT entry so
that it can call via PLT without using an extra register.  For libc.a,
we can't use IFUNC for functions which are called before IFUNC has been
initialized.  Using IFUNC internally reduces the icache footprint since
libc.so and other code in the process use the same implementations.
This patch uses IFUNC for memset functions within libc.

2017-06-07  H.J. Lu  <hongjiu.lu@intel.com>
	    Erich Elsen  <eriche@google.com>

	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	memset-sse2-unaligned-erms, and memset_chk-nonshared.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Add test for __memset_chk_erms.
	Update comments.
	* sysdeps/x86_64/multiarch/ifunc-memset.h: New file.
	* sysdeps/x86_64/multiarch/memset-sse2-unaligned-erms.S: Likewise.
	* sysdeps/x86_64/multiarch/memset.c: Likewise.
	* sysdeps/x86_64/multiarch/memset_chk-nonshared.S: Likewise.
	* sysdeps/x86_64/multiarch/memset_chk.c: Likewise.
	* sysdeps/x86_64/multiarch/memset.S: Removed.
	* sysdeps/x86_64/multiarch/memset_chk.S: Likewise.
	* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
	(__memset_chk_erms): New function.
2017-06-15 08:33:35 -07:00
H.J. Lu
5c3e322d3b x86-64: Implement memmove family IFUNC selectors in C
Implement memmove family IFUNC selectors in C.

All internal calls within libc.so can use IFUNC on x86-64 since unlike
x86, x86-64 supports PC-relative addressing to access the GOT entry so
that it can call via PLT without using an extra register.  For libc.a,
we can't use IFUNC for functions which are called before IFUNC has been
initialized.  Using IFUNC internally reduces the icache footprint since
libc.so and other code in the process use the same implementations.
This patch uses IFUNC for memmove family functions within libc.

	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	memmove-sse2-unaligned-erms, memcpy_chk-nonshared,
	mempcpy_chk-nonshared and memmove_chk-nonshared.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Add tests for __memmove_chk_erms,
	__memcpy_chk_erms and __mempcpy_chk_erms.  Update comments.
	* sysdeps/x86_64/multiarch/ifunc-memmove.h: New file.
	* sysdeps/x86_64/multiarch/memcpy.c: Likewise.
	* sysdeps/x86_64/multiarch/memcpy_chk-nonshared.S: Likewise.
	* sysdeps/x86_64/multiarch/memcpy_chk.c: Likewise.
	* sysdeps/x86_64/multiarch/memmove-sse2-unaligned-erms.S: Likewise.
	* sysdeps/x86_64/multiarch/memmove.c: Likewise.
	* sysdeps/x86_64/multiarch/memmove_chk-nonshared.S: Likewise.
	* sysdeps/x86_64/multiarch/memmove_chk.c: Likewise.
	* sysdeps/x86_64/multiarch/mempcpy.c: Likewise.
	* sysdeps/x86_64/multiarch/mempcpy_chk-nonshared.S: Likewise.
	* sysdeps/x86_64/multiarch/mempcpy_chk.c: Likewise.
	* sysdeps/x86_64/multiarch/memcpy.S: Removed.
	* sysdeps/x86_64/multiarch/memcpy_chk.S: Likewise.
	* sysdeps/x86_64/multiarch/memmove.S: Likewise.
	* sysdeps/x86_64/multiarch/memmove_chk.S: Likewise.
	* sysdeps/x86_64/multiarch/mempcpy.S: Likewise.
	* sysdeps/x86_64/multiarch/mempcpy_chk.S: Likewise.
	* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
	(__mempcpy_chk_erms): New function.
	(__memmove_chk_erms): Likewise.
	(__memcpy_chk_erms): New alias.
2017-06-14 12:11:10 -07:00
Zack Weinberg
fd860eaaa8 Remove __need macros from errno.h (__need_Emath, __need_error_t).
This is fairly complicated, not because the users of __need_Emath and
__need_error_t have complicated requirements, but because the core
changes had a lot of fallout.

__need_error_t exists for gnulib compatibility in argz.h and argp.h.
error_t itself is a Hurdism, an enum containing all the E-constants,
so you can do 'p (error_t) errno' in gdb and get a symbolic value.
argz.h and argp.h use it for function return values, and they want to
fall back to 'int' when that's not available.  There is no reason why
these nonstandard headers cannot just go ahead and include all of
errno.h; so we do that.

__need_Emath is defined only by .S files; what they _really_ need is
for errno.h to avoid declaring anything other than the E-constants
(e.g. 'extern int __errno_location(void);' is a syntax error in
assembly language). This is replaced with a check for __ASSEMBLER__ in
errno.h, plus a carefully documented requirement for bits/errno.h not
to define anything other than macros.  That in turn has the
consequence that bits/errno.h must not define errno - fortunately, all
live ports use the same definition of errno, so I've moved it to
errno.h.  The Hurd bits/errno.h must also take care not to define
error_t when __ASSEMBLER__ is defined, which involves repeating all of
the definitions twice, but it's a generated file so that's okay.
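
The resulting shape of errno.h, as a rough sketch (not the actual file):

	#ifndef _ERRNO_H
	#define _ERRNO_H 1

	#include <bits/errno.h>	/* only E* macros: safe for .S files too */

	#ifndef __ASSEMBLER__
	extern int *__errno_location (void);
	#define errno (*__errno_location ())
	#endif

	#endif /* errno.h */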

	* stdlib/errno.h: Remove __need_Emath and __need_error_t logic.
	Reorganize file.  Declare errno here.  When __ASSEMBLER__ is
	defined, don't declare anything other than the E-constants.

	* include/errno.h: Change conditional for exposing internal
	declarations to (not _ISOMAC and not __ASSEMBLER__).
	* bits/errno.h: Remove logic for __need_Emath.  Document
	requirements for a port-specific bits/errno.h.

	* sysdeps/unix/sysv/linux/bits/errno.h
	* sysdeps/unix/sysv/linux/alpha/bits/errno.h
	* sysdeps/unix/sysv/linux/hppa/bits/errno.h
	* sysdeps/unix/sysv/linux/mips/bits/errno.h
	* sysdeps/unix/sysv/linux/sparc/bits/errno.h:
	Add multiple-include guard and check against improper inclusion.
	Remove __need_Emath logic.  Don't declare errno here.  Ensure all
	constants are defined as simple integer literals.  Consistent
	formatting.
	* sysdeps/mach/hurd/errnos.awk: Likewise.  Only define error_t and
	enum __error_t_codes if __ASSEMBLER__ is not defined.
	* sysdeps/mach/hurd/bits/errno.h: Regenerate.

	* argp/argp.h, string/argz.h: Don't define __need_error_t before
	including errno.h.
	* sysdeps/i386/i686/fpu/multiarch/s_cosf-sse2.S
	* sysdeps/i386/i686/fpu/multiarch/s_sincosf-sse2.S
	* sysdeps/i386/i686/fpu/multiarch/s_sinf-sse2.S
	* sysdeps/x86_64/fpu/s_cosf.S
	* sysdeps/x86_64/fpu/s_sincosf.S
	* sysdeps/x86_64/fpu/s_sinf.S:
	Just include errno.h; don't define __need_Emath or include
	bits/errno.h directly.
2017-06-14 08:14:34 -04:00
Alan Modra
0572433b5b PowerPC64 ELFv2 PPC64_OPT_LOCALENTRY
ELFv2 functions with localentry:0 are those with a single entry point,
i.e. global entry == local entry, that have no requirement on r2 or
r12 and guarantee r2 is unchanged on return.  Such an external
function can be called via the PLT without saving r2 or restoring it
on return, avoiding a common load-hit-store for small functions.

This patch implements the ld.so changes necessary for this
optimization.  ld.so needs to check that an optimized plt call
sequence is in fact calling a function implemented with localentry:0,
and emit a fatal error otherwise.

The elf/testobj6.c change is to stop "error while loading shared
libraries: expected localentry:0 `preload'" when running
elf/preloadtest, which we'd get otherwise.

	* elf/elf.h (PPC64_OPT_LOCALENTRY): Define.
	* sysdeps/alpha/dl-machine.h (elf_machine_fixup_plt): Add
	refsym and sym parameters.  Adjust callers.
	* sysdeps/aarch64/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/arm/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/generic/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/hppa/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/i386/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/ia64/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/m68k/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/microblaze/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/mips/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/nios2/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/powerpc/powerpc32/dl-machine.h (elf_machine_fixup_plt):
	Likewise.
	* sysdeps/s390/s390-32/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/s390/s390-64/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/sh/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/sparc/sparc32/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/sparc/sparc64/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/tile/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/x86_64/dl-machine.h (elf_machine_fixup_plt): Likewise.
	* sysdeps/powerpc/powerpc64/dl-machine.c (_dl_error_localentry): New.
	(_dl_reloc_overflow): Increase buffer size.  Formatting.
	* sysdeps/powerpc/powerpc64/dl-machine.h (ppc64_local_entry_offset):
	Delete reloc param, add refsym and sym.  Check optimized plt
	call stubs for localentry:0 functions.  Adjust callers.
	(elf_machine_fixup_plt, elf_machine_plt_conflict): Add refsym
	and sym parameters.  Adjust callers.
	(_dl_reloc_overflow): Move attribute.
	(_dl_error_localentry): Declare.
	* elf/dl-runtime.c (_dl_fixup): Save original sym.  Pass
	refsym and sym to elf_machine_fixup_plt.
	* elf/testobj6.c (preload): Call printf.
2017-06-14 10:47:25 +09:30
H.J. Lu
5a103908c0 x86-64: Implement strcpy family IFUNC selectors in C
Implement strcpy family IFUNC selectors in C.

All internal calls within libc.so can use IFUNC on x86-64 since unlike
x86, x86-64 supports PC-relative addressing to access the GOT entry so
that it can call via PLT without using an extra register.  For libc.a,
we can't use IFUNC for functions which are called before IFUNC has been
initialized.  Using IFUNC internally reduces the icache footprint since
libc.so and other code in the process use the same implementations.
This patch uses IFUNC for strcpy family functions within libc.

	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	strcpy-sse2 and stpcpy-sse2.
	* sysdeps/x86_64/multiarch/ifunc-unaligned-ssse3.h: New file.
	* sysdeps/x86_64/multiarch/stpcpy-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/stpcpy.c: Likewise.
	* sysdeps/x86_64/multiarch/stpncpy.c: Likewise.
	* sysdeps/x86_64/multiarch/strcpy-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/strcpy.c: Likewise.
	* sysdeps/x86_64/multiarch/strncpy.c: Likewise.
	* sysdeps/x86_64/multiarch/stpcpy.S: Removed.
	* sysdeps/x86_64/multiarch/stpncpy.S: Likewise.
	* sysdeps/x86_64/multiarch/strcpy.S: Likewise.
	* sysdeps/x86_64/multiarch/strncpy.S: Likewise.
	* sysdeps/x86_64/multiarch/stpncpy-c.c (weak_alias): New.
	(libc_hidden_def): Always defined as empty.
	* sysdeps/x86_64/multiarch/strncpy-c.c (libc_hidden_builtin_def):
	Always defined as empty.
2017-06-12 09:06:09 -07:00
H.J. Lu
6b6710e55b x86-64: Correct comments in ifunc-impl-list.c
* sysdeps/x86_64/multiarch/ifunc-impl-list.c: Correct comments.
2017-06-09 05:53:45 -07:00
H.J. Lu
d2538b9156 x86-64: Optimize strrchr/wcsrchr with AVX2
Optimize strrchr/wcsrchr with AVX2 to check 32 bytes with vector
instructions.  It is as fast as SSE2 version for small data sizes
and up to 1X faster for large data sizes on Haswell.  Select AVX2
version on AVX2 machines where vzeroupper is preferred and AVX
unaligned load is fast.

	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	strrchr-sse2, strrchr-avx2, wcsrchr-sse2 and wcsrchr-avx2.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Add tests for __strrchr_avx2,
	__strrchr_sse2, __wcsrchr_avx2 and __wcsrchr_sse2.
	* sysdeps/x86_64/multiarch/strrchr-avx2.S: New file.
	* sysdeps/x86_64/multiarch/strrchr-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/strrchr.c: Likewise.
	* sysdeps/x86_64/multiarch/wcsrchr-avx2.S: Likewise.
	* sysdeps/x86_64/multiarch/wcsrchr-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/wcsrchr.c: Likewise.
2017-06-09 05:45:52 -07:00
H.J. Lu
5ac7aa1d7c x86-64: Optimize memrchr with AVX2
Optimize memrchr with AVX2 to search 32 bytes with a single vector
compare instruction.  It is as fast as SSE2 memrchr for small data
sizes and up to 1X faster for large data sizes on Haswell.  Select
AVX2 memrchr on AVX2 machines where vzeroupper is preferred and AVX
unaligned load is fast.

	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	memrchr-sse2 and memrchr-avx2.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Add tests for __memrchr_avx2 and
	__memrchr_sse2.
	* sysdeps/x86_64/multiarch/memrchr-avx2.S: New file.
	* sysdeps/x86_64/multiarch/memrchr-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/memrchr.c: Likewise.
2017-06-09 05:44:41 -07:00
H.J. Lu
8fe57365bf x86-64: Optimize strchr/strchrnul/wcschr with AVX2
Optimize strchr/strchrnul/wcschr with AVX2 to search 32 bytes with vector
instructions.  It is as fast as SSE2 versions for size <= 16 bytes and up
to 1X faster for size > 16 bytes on Haswell.  Select AVX2 version on
AVX2 machines where vzeroupper is preferred and AVX unaligned load is fast.

NB: It uses TZCNT instead of BSF since TZCNT produces the same result
as BSF for non-zero input.  TZCNT is faster than BSF and is executed
as BSF if machine doesn't support TZCNT.

	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	strchr-sse2, strchrnul-sse2, strchr-avx2, strchrnul-avx2,
	wcschr-sse2 and wcschr-avx2.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Add tests for __strchr_avx2,
	__strchrnul_avx2, __strchrnul_sse2, __wcschr_avx2 and
	__wcschr_sse2.
	* sysdeps/x86_64/multiarch/strchr-avx2.S: New file.
	* sysdeps/x86_64/multiarch/strchr-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/strchr.c: Likewise.
	* sysdeps/x86_64/multiarch/strchrnul-avx2.S: Likewise.
	* sysdeps/x86_64/multiarch/strchrnul-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/strchrnul.c: Likewise.
	* sysdeps/x86_64/multiarch/wcschr-avx2.S: Likewise.
	* sysdeps/x86_64/multiarch/wcschr-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/wcschr.c: Likewise.
	* sysdeps/x86_64/multiarch/strchr.S: Removed.
2017-06-09 05:42:29 -07:00
H.J. Lu
dc485ceb2a x86-64: Optimize strlen/strnlen/wcslen/wcsnlen with AVX2
Optimize strlen/strnlen/wcslen/wcsnlen with AVX2 to check 32 bytes with
a single vector compare instruction.  It is as fast as SSE2 versions for
size <= 16 bytes and up to 1X faster for size > 16 bytes on Haswell.
Select AVX2 version on AVX2 machines where vzeroupper is preferred and
AVX unaligned load is fast.

NB: It uses TZCNT instead of BSF since TZCNT produces the same result
as BSF for non-zero input.  TZCNT is faster than BSF and is executed
as BSF if machine doesn't support TZCNT.

	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	strlen-sse2, strnlen-sse2, strlen-avx2, strnlen-avx2,
	wcslen-sse2, wcslen-avx2 and wcsnlen-avx2.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Add tests for __strlen_avx2,
	__strlen_sse2, __strnlen_avx2, __strnlen_sse2, __wcslen_avx2,
	__wcslen_sse2 and __wcsnlen_avx2.
	* sysdeps/x86_64/multiarch/strlen-avx2.S: New file.
	* sysdeps/x86_64/multiarch/strlen-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/strlen.c: Likewise.
	* sysdeps/x86_64/multiarch/strnlen-avx2.S: Likewise.
	* sysdeps/x86_64/multiarch/strnlen-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/strnlen.c: Likewise.
	* sysdeps/x86_64/multiarch/wcslen-avx2.S: Likewise.
	* sysdeps/x86_64/multiarch/wcslen-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/wcslen.c: Likewise.
	* sysdeps/x86_64/multiarch/wcsnlen-avx2.S: Likewise.
	* sysdeps/x86_64/multiarch/wcsnlen.c (OPTIMIZE (avx2)): New.
	(IFUNC_SELECTOR): Return OPTIMIZE (avx2) on AVX2 machines where
	vzeroupper is preferred and AVX unaligned load is fast.
2017-06-09 05:18:18 -07:00
H.J. Lu
2f5d20ac99 x86-64: Optimize memchr/rawmemchr/wmemchr with SSE2/AVX2
SSE2 memchr is extended to support wmemchr.  AVX2 memchr/rawmemchr/wmemchr
are added to search 32 bytes with a single vector compare instruction.
AVX2 memchr/rawmemchr/wmemchr are as fast as SSE2 memchr/rawmemchr/wmemchr
for small sizes and up to 1.5X faster for larger sizes on Haswell and
Skylake.  Select AVX2 memchr/rawmemchr/wmemchr on AVX2 machines where
vzeroupper is preferred and AVX unaligned load is fast.

NB: It uses TZCNT instead of BSF since TZCNT produces the same result
as BSF for non-zero input.  TZCNT is faster than BSF and is executed
as BSF if machine doesn't support TZCNT.
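
The single-source arrangement described above, in sketch form (macro names
are the ones from the ChangeLog entry below; the real file is assembly):

	#ifdef USE_AS_WMEMCHR
	# define MEMCHR	wmemchr
	# define PCMPEQ	pcmpeqd		/* compare 4-byte wchar_t elements */
	#else
	# define MEMCHR	memchr
	# define PCMPEQ	pcmpeqb		/* compare bytes */
	#endif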

	* sysdeps/x86_64/memchr.S (MEMCHR): New.  Defined depending on
	whether USE_AS_WMEMCHR is defined.
	(PCMPEQ): Likewise.
	(memchr): Renamed to ...
	(MEMCHR): This.  Support wmemchr if USE_AS_WMEMCHR is defined.
	Replace pcmpeqb with PCMPEQ.
	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	memchr-sse2, rawmemchr-sse2, memchr-avx2, rawmemchr-avx2,
	wmemchr-sse4_1, wmemchr-avx2 and wmemchr-c.
	* sysdeps/x86_64/multiarch/ifunc-avx2.h: New file.
	* sysdeps/x86_64/multiarch/memchr-avx2.S: Likewise.
	* sysdeps/x86_64/multiarch/memchr-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/memchr.c: Likewise.
	* sysdeps/x86_64/multiarch/rawmemchr-avx2.S: Likewise.
	* sysdeps/x86_64/multiarch/rawmemchr-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/rawmemchr.c: Likewise.
	* sysdeps/x86_64/multiarch/wmemchr-avx2.S: Likewise.
	* sysdeps/x86_64/multiarch/wmemchr-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/wmemchr.c: Likewise.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Test __memchr_avx2, __memchr_sse2,
	__rawmemchr_avx2, __rawmemchr_sse2, __wmemchr_avx2 and
	__wmemchr_sse2.
2017-06-09 05:13:31 -07:00
H.J. Lu
5e1122827a x86-64: Rename wmemset.h to ifunc-wmemset.h
No code changes.

	* sysdeps/x86_64/multiarch/wmemset.c: Include ifunc-wmemset.h
	instead of wmemset.h.
	* sysdeps/x86_64/multiarch/wmemset_chk.c: Likewise.
	* sysdeps/x86_64/multiarch/wmemset.h: Renamed to ...
	* sysdeps/x86_64/multiarch/ifunc-wmemset.h: This.
2017-06-07 14:48:34 -07:00
H.J. Lu
2e87c7d158 x86-64: Fold ifunc-sse4_1.h into wcsnlen.c
Since ifunc-sse4_1.h is included only by wcsnlen.c, we can fold it
into wcsnlen.c.  No code changes in wcsnlen.o.

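For context, such an ifunc-*.h header holds a small C selector of roughly this
shape (a hedged sketch of the pattern, not the verbatim glibc header; the
REDIRECT_NAME, OPTIMIZE and IFUNC_SELECTOR macros come from the multiarch
framework):

	#include <init-arch.h>

	extern __typeof (REDIRECT_NAME) OPTIMIZE (sse2) attribute_hidden;
	extern __typeof (REDIRECT_NAME) OPTIMIZE (sse4_1) attribute_hidden;

	static inline void *
	IFUNC_SELECTOR (void)
	{
	  const struct cpu_features *cpu_features = __get_cpu_features ();

	  if (CPU_FEATURES_CPU_P (cpu_features, SSE4_1))
	    return OPTIMIZE (sse4_1);

	  return OPTIMIZE (sse2);
	}
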
2017-06-07  H.J. Lu  <hongjiu.lu@intel.com>

	* sysdeps/x86_64/multiarch/ifunc-sse4_1.h: Removed and folded
	into ...
	* sysdeps/x86_64/multiarch/wcsnlen.c: Here.  Don't include
	ifunc-sse4_1.h.
2017-06-07 09:04:40 -07:00
H.J. Lu
d4cc385c6e x86-64: Move wcsnlen.S to multiarch/wcsnlen-sse4_1.S
Since wcsnlen.S uses pminud, which is part of SSE4.1, move wcsnlen.S
to multiarch/wcsnlen-sse4_1.S.

	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	wcsnlen-sse4_1 and wcsnlen-c.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Test __wcsnlen_sse4_1 and
	__wcsnlen_sse2.
	* sysdeps/x86_64/multiarch/ifunc-sse4_1.h: New file.
	* sysdeps/x86_64/multiarch/wcsnlen-c.c: Likewise.
	* sysdeps/x86_64/multiarch/wcsnlen-sse4_1.S: Likewise.
	* sysdeps/x86_64/multiarch/wcsnlen.c: Likewise.
	* sysdeps/x86_64/wcsnlen.S: Removed.
2017-06-06 06:12:32 -07:00
Stefan Liebler
12d2dd7060 Optimize generic spinlock code and use C11 like atomic macros.
This patch optimizes the generic spinlock code.

The type pthread_spinlock_t is a typedef to volatile int on all archs.
Passing a volatile pointer to the atomic macros which are not mapped to the
C11 atomic builtins can lead to extra stores and loads to the stack if such
a macro creates a temporary variable by using "__typeof (*(mem)) tmp;".
Thus, those macros which are used by the spinlock code - atomic_exchange_acquire,
atomic_load_relaxed, atomic_compare_exchange_weak - have to be adjusted.
According to the comment from Szabolcs Nagy, the type of a cast expression is
unqualified (see http://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_423.htm):
__typeof ((__typeof (*(mem))) *(mem)) tmp;
Thus, from the spinlock's perspective, the variable tmp is of type int instead
of type volatile int.  This patch adjusts those macros in include/atomic.h.
With this construct GCC >= 5 omits the extra stores and loads.

The atomic macros are replaced by the C11-like atomic macros, and the code
is thus aligned with them.  The pthread_spin_unlock implementation now uses
release memory order instead of sequentially consistent memory order.
The issue with passing volatile int pointers applies to the C11-like atomic
macros as well as to the ones used before.

I've added a __glibc_likely hint to the first atomic exchange in
pthread_spin_lock in order to return immediately to the caller if the lock is
free.  Without the hint, there is an additional jump if the lock is free.

I've added the atomic_spin_nop macro within the loop of plain reads.
The plain reads are also realized by the C11-like atomic_load_relaxed macro.

The new define ATOMIC_EXCHANGE_USES_CAS determines if the first try to acquire
the spinlock in pthread_spin_lock or pthread_spin_trylock is an exchange
or a CAS.  This is defined in atomic-machine.h for all architectures.

The define SPIN_LOCK_READS_BETWEEN_CMPXCHG is now removed.
There is no technical reason for throwing in a CAS every now and then,
and so far we have no evidence that it can improve performance.
If that were the case, we would have to adjust other spin-waiting loops
elsewhere, too!  Using a CAS loop without plain reads is not a good idea
on many targets and wasn't used by any of them.  Thus there is now no
option to do so.

Architectures now use the generic spinlock automatically if they do not
provide their own implementation.  Thus the pthread_spin_lock.c files in
the sysdeps folders are deleted.

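A self-contained C11 sketch of the locking strategy described above, using
standard <stdatomic.h> names (the glibc code operates on pthread_spinlock_t
and uses its internal atomic_* macros, __glibc_likely and atomic_spin_nop
instead):

	#include <stdatomic.h>

	typedef atomic_int spinlock_t;

	static void
	spin_lock (spinlock_t *lock)
	{
	  /* Try an exchange first: when the lock is free (the likely case) it
	     succeeds immediately, and on most targets it is cheaper than a CAS.  */
	  if (atomic_exchange_explicit (lock, 1, memory_order_acquire) == 0)
	    return;

	  do
	    {
	      /* Spin with relaxed loads so the waiter does not keep issuing
	         atomic read-modify-write operations on the contended line.  */
	      while (atomic_load_explicit (lock, memory_order_relaxed) != 0)
	        {
	          /* glibc inserts atomic_spin_nop here (the PAUSE hint on x86).  */
	        }
	    }
	  while (atomic_exchange_explicit (lock, 1, memory_order_acquire) != 0);
	}

	static void
	spin_unlock (spinlock_t *lock)
	{
	  /* Release ordering is sufficient here; sequential consistency is not
	     needed for a correct unlock.  */
	  atomic_store_explicit (lock, 0, memory_order_release);
	}
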
ChangeLog:

	* NEWS: Mention new spinlock implementation.
	* include/atomic.h:
	(__atomic_val_bysize): Cast type to omit volatile qualifier.
	(atomic_exchange_acq): Likewise.
	(atomic_load_relaxed): Likewise.
	(ATOMIC_EXCHANGE_USES_CAS): Check definition.
	* nptl/pthread_spin_init.c (pthread_spin_init):
	Use atomic_store_relaxed.
	* nptl/pthread_spin_lock.c (pthread_spin_lock):
	Use C11-like atomic macros.
	* nptl/pthread_spin_trylock.c (pthread_spin_trylock):
	Likewise.
	* nptl/pthread_spin_unlock.c (pthread_spin_unlock):
	Use atomic_store_release.
	* sysdeps/aarch64/nptl/pthread_spin_lock.c: Delete File.
	* sysdeps/arm/nptl/pthread_spin_lock.c: Likewise.
	* sysdeps/hppa/nptl/pthread_spin_lock.c: Likewise.
	* sysdeps/m68k/nptl/pthread_spin_lock.c: Likewise.
	* sysdeps/microblaze/nptl/pthread_spin_lock.c: Likewise.
	* sysdeps/mips/nptl/pthread_spin_lock.c: Likewise.
	* sysdeps/nios2/nptl/pthread_spin_lock.c: Likewise.
	* sysdeps/aarch64/atomic-machine.h (ATOMIC_EXCHANGE_USES_CAS): Define.
	* sysdeps/alpha/atomic-machine.h: Likewise.
	* sysdeps/arm/atomic-machine.h: Likewise.
	* sysdeps/i386/atomic-machine.h: Likewise.
	* sysdeps/ia64/atomic-machine.h: Likewise.
	* sysdeps/m68k/coldfire/atomic-machine.h: Likewise.
	* sysdeps/m68k/m680x0/m68020/atomic-machine.h: Likewise.
	* sysdeps/microblaze/atomic-machine.h: Likewise.
	* sysdeps/mips/atomic-machine.h: Likewise.
	* sysdeps/powerpc/powerpc32/atomic-machine.h: Likewise.
	* sysdeps/powerpc/powerpc64/atomic-machine.h: Likewise.
	* sysdeps/s390/atomic-machine.h: Likewise.
	* sysdeps/sparc/sparc32/atomic-machine.h: Likewise.
	* sysdeps/sparc/sparc32/sparcv9/atomic-machine.h: Likewise.
	* sysdeps/sparc/sparc64/atomic-machine.h: Likewise.
	* sysdeps/tile/tilegx/atomic-machine.h: Likewise.
	* sysdeps/tile/tilepro/atomic-machine.h: Likewise.
	* sysdeps/unix/sysv/linux/hppa/atomic-machine.h: Likewise.
	* sysdeps/unix/sysv/linux/m68k/coldfire/atomic-machine.h: Likewise.
	* sysdeps/unix/sysv/linux/nios2/atomic-machine.h: Likewise.
	* sysdeps/unix/sysv/linux/sh/atomic-machine.h: Likewise.
	* sysdeps/x86_64/atomic-machine.h: Likewise.
2017-06-06 09:41:56 +02:00
H.J. Lu
935971ba6b x86-64: Optimize memcmp/wmemcmp with AVX2 and MOVBE
Optimize x86-64 memcmp/wmemcmp with AVX2.  It uses vector compare as
much as possible.  It is as fast as SSE4 memcmp for size <= 16 bytes
and up to 2X faster for size > 16 bytes on Haswell and Skylake.  Select
AVX2 memcmp/wmemcmp on AVX2 machines where vzeroupper is preferred and
AVX unaligned load is fast.

NB: It uses TZCNT instead of BSF since TZCNT produces the same result
as BSF for non-zero input.  TZCNT is faster than BSF and is executed
as BSF if the machine doesn't support TZCNT.

Key features:

1. For size from 2 to 7 bytes, load as big endian with movbe and bswap
   to avoid branches (see the C sketch after this list).
2. Use overlapping compare to avoid branch.
3. Use vector compare when size >= 4 bytes for memcmp or size >= 8
   bytes for wmemcmp.
4. If size is 8 * VEC_SIZE or less, unroll the loop.
5. Compare 4 * VEC_SIZE at a time with the aligned first memory area.
6. Use 2 vector compares when size is 2 * VEC_SIZE or less.
7. Use 4 vector compares when size is 4 * VEC_SIZE or less.
8. Use 8 vector compares when size is 8 * VEC_SIZE or less.

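An illustrative C sketch of key features 1 and 2 for the 4-to-7-byte case (the
real code uses movbe/bswap on registers; the function and variable names here
are made up):

	#include <stdint.h>
	#include <string.h>

	/* Compare 4..7 bytes with no size-dependent branch: two overlapping
	   4-byte loads per buffer, byte-swapped so that the lowest-addressed
	   byte becomes the most significant, i.e. integer comparison order
	   matches memcmp order.  Only the sign of the result is meaningful.  */
	static int
	memcmp_4_to_7 (const void *a, const void *b, size_t n)
	{
	  uint32_t a_head, b_head, a_tail, b_tail;
	  memcpy (&a_head, a, 4);
	  memcpy (&b_head, b, 4);
	  memcpy (&a_tail, (const char *) a + n - 4, 4);  /* overlaps the head  */
	  memcpy (&b_tail, (const char *) b + n - 4, 4);
	  uint64_t va = ((uint64_t) __builtin_bswap32 (a_head) << 32)
	                | __builtin_bswap32 (a_tail);
	  uint64_t vb = ((uint64_t) __builtin_bswap32 (b_head) << 32)
	                | __builtin_bswap32 (b_tail);
	  return (va > vb) - (va < vb);
	}
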
	* sysdeps/x86/cpu-features.h (index_cpu_MOVBE): New.
	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	memcmp-avx2 and wmemcmp-avx2.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Test __memcmp_avx2 and __wmemcmp_avx2.
	* sysdeps/x86_64/multiarch/memcmp-avx2.S: New file.
	* sysdeps/x86_64/multiarch/wmemcmp-avx2.S: Likewise.
	* sysdeps/x86_64/multiarch/memcmp.S: Use __memcmp_avx2 on AVX
	2 machines if AVX unaligned load is fast and vzeroupper is
	preferred.
	* sysdeps/x86_64/multiarch/wmemcmp.S: Use __wmemcmp_avx2 on AVX
	2 machines if AVX unaligned load is fast and vzeroupper is
	preferred.
2017-06-05 12:52:55 -07:00
H.J. Lu
ef9c4cb6c7 x86-64: Optimize wmemset with SSE2/AVX2/AVX512
The difference between memset and wmemset is byte vs int.  Add stubs
to SSE2/AVX2/AVX512 memset for wmemset with updated constant and size:

SSE2 wmemset:
	shl    $0x2,%rdx
	movd   %esi,%xmm0
	mov    %rdi,%rax
	pshufd $0x0,%xmm0,%xmm0
	jmp	entry_from_wmemset

SSE2 memset:
	movd   %esi,%xmm0
	mov    %rdi,%rax
	punpcklbw %xmm0,%xmm0
	punpcklwd %xmm0,%xmm0
	pshufd $0x0,%xmm0,%xmm0
entry_from_wmemset:

Since the ERMS versions of wmemset require "rep stosl" instead of
"rep stosb", only the vector store stubs of SSE2/AVX2/AVX512 wmemset
are added.  The SSE2 wmemset is about 3X faster and the AVX2 wmemset
is about 6X faster on Haswell.

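For reference, the C-level behaviour that these stubs feed into memset's
shared vector-store code is simply the following (reference loop only, not
the optimized code):

	#include <stddef.h>
	#include <wchar.h>

	/* wmemset is memset with a wchar_t-wide pattern and a count in wchar_t
	   units: the stub broadcasts the int once (pshufd) and scales the count
	   (shl $0x2), then falls into the shared store path of memset.  */
	wchar_t *
	wmemset_reference (wchar_t *s, wchar_t c, size_t n)
	{
	  for (size_t i = 0; i < n; i++)
	    s[i] = c;
	  return s;
	}
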
	* include/wchar.h (__wmemset_chk): New.
	* sysdeps/x86_64/memset.S (VDUP_TO_VEC0_AND_SET_RETURN): Renamed
	to MEMSET_VDUP_TO_VEC0_AND_SET_RETURN.
	(WMEMSET_VDUP_TO_VEC0_AND_SET_RETURN): New.
	(WMEMSET_CHK_SYMBOL): Likewise.
	(WMEMSET_SYMBOL): Likewise.
	(__wmemset): Add hidden definition.
	(wmemset): Add weak hidden definition.
	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	wmemset_chk-nonshared.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Add __wmemset_sse2_unaligned,
	__wmemset_avx2_unaligned, __wmemset_avx512_unaligned,
	__wmemset_chk_sse2_unaligned, __wmemset_chk_avx2_unaligned
	and __wmemset_chk_avx512_unaligned.
	* sysdeps/x86_64/multiarch/memset-avx2-unaligned-erms.S
	(VDUP_TO_VEC0_AND_SET_RETURN): Renamed to ...
	(MEMSET_VDUP_TO_VEC0_AND_SET_RETURN): This.
	(WMEMSET_VDUP_TO_VEC0_AND_SET_RETURN): New.
	(WMEMSET_SYMBOL): Likewise.
	* sysdeps/x86_64/multiarch/memset-avx512-unaligned-erms.S
	(VDUP_TO_VEC0_AND_SET_RETURN): Renamed to ...
	(MEMSET_VDUP_TO_VEC0_AND_SET_RETURN): This.
	(WMEMSET_VDUP_TO_VEC0_AND_SET_RETURN): New.
	(WMEMSET_SYMBOL): Likewise.
	* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S: Updated.
	(WMEMSET_CHK_SYMBOL): New.
	(WMEMSET_CHK_SYMBOL (__wmemset_chk, unaligned)): Likewise.
	(WMEMSET_SYMBOL (__wmemset, unaligned)): Likewise.
	* sysdeps/x86_64/multiarch/memset.S (WMEMSET_SYMBOL): New.
	(libc_hidden_builtin_def): Also define __GI_wmemset and
	__GI___wmemset.
	(weak_alias): New.
	* sysdeps/x86_64/multiarch/wmemset.c: New file.
	* sysdeps/x86_64/multiarch/wmemset.h: Likewise.
	* sysdeps/x86_64/multiarch/wmemset_chk-nonshared.S: Likewise.
	* sysdeps/x86_64/multiarch/wmemset_chk.c: Likewise.
	* sysdeps/x86_64/wmemset.c: Likewise.
	* sysdeps/x86_64/wmemset_chk.c: Likewise.
2017-06-05 11:09:59 -07:00
H.J. Lu
30cb625a21 x86-64: Update strlen.S to support wcslen/wcsnlen
The difference between strlen and wcslen is byte vs int.  We can
replace pminub and pcmpeqb with pminud and pcmpeqd to turn strlen
into wcslen.

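An intrinsics sketch of that substitution (illustrative only; the real code is
assembly and also folds several vectors together with PMINU before the final
test):

	#include <emmintrin.h>  /* SSE2 */

	/* strlen-style test: does this 16-byte block contain a zero byte?  */
	static int
	has_zero_byte (__m128i v)
	{
	  return _mm_movemask_epi8 (_mm_cmpeq_epi8 (v, _mm_setzero_si128 ())) != 0;
	}

	/* wcslen-style test: does it contain a zero 4-byte wchar_t?  The only
	   change is the element width of the compare (pcmpeqb -> pcmpeqd).  */
	static int
	has_zero_wchar (__m128i v)
	{
	  return _mm_movemask_epi8 (_mm_cmpeq_epi32 (v, _mm_setzero_si128 ())) != 0;
	}
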
	* sysdeps/x86_64/strlen.S (PMINU): New.
	(PCMPEQ): Likewise.
	(SHIFT_RETURN): Likewise.
	(FIND_ZERO): Replace pcmpeqb with PCMPEQ.
	(strlen): Add SHIFT_RETURN before ret.  Replace pcmpeqb and
	pminub with PCMPEQ and PMINU.
	* sysdeps/x86_64/wcsnlen.S: New file.
2017-06-05 07:58:23 -07:00
H.J. Lu
7395928b95 x86_64: Remove redundant REX bytes from memrchr.S
By the x86-64 specification, writes to 32-bit destination registers are
zero-extended to 64 bits.  There is no need to use 64-bit registers when
only the lower 32 bits are non-zero.  Also, 2 instructions in:

	mov	%rdi, %rcx
	and	$15, %rcx
	jz	L(length_less16_offset0)

	mov	%rdi, %rcx		<<< redundant
	and	$15, %rcx		<<< redundant

are redundant.

	* sysdeps/x86_64/memrchr.S (__memrchr): Use 32-bit registers for
	the lower 32 bits.  Remove redundant instructions.
2017-06-05 07:41:26 -07:00
H.J. Lu
4f26ef1b67 x86_64: Remove redundant REX bytes from memchr.S
By the x86-64 specification, writes to 32-bit destination registers are
zero-extended to 64 bits.  There is no need to use 64-bit registers when
only the lower 32 bits are non-zero.

	* sysdeps/x86_64/memchr.S (MEMCHR): Use 32-bit registers for
	the lower 32 bits.
2017-05-30 12:39:14 -07:00
H.J. Lu
1f655beb08 x86_64: Remove L(return_null) from rawmemchr.S
L(return_null) is unused.

	* sysdeps/x86_64/rawmemchr.S (L(return_null)): Removed.
2017-05-20 06:13:38 -07:00
H.J. Lu
402bf06952 x86: Optimize SSE2 memchr overflow calculation
SSE2 memchr computes "edx + ecx - 16" where ecx is less than 16.  Use
"edx - (16 - ecx)", instead of saturated math, to avoid possible addition
overflow.  This replaces

	add	%ecx, %edx
	sbb	%eax, %eax
	or	%eax, %edx
	sub	$16, %edx

with

	neg	%ecx
	add	$16, %ecx
	sub	%ecx, %edx

It is the same for x86_64, except with rcx/rdx instead of ecx/edx.

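In C terms the change is a reordering of the same arithmetic (a sketch of the
idea, with len standing for edx/rdx and align for ecx/rcx):

	#include <stdint.h>

	/* align is the misalignment within the 16-byte vector, so 0 <= align < 16.
	   The old code formed len + align first, which can carry out of the
	   register for lengths near the maximum and therefore needed the sbb/or
	   saturation fixup; subtracting the complement yields the same adjusted
	   length without ever forming that sum.  */
	static uint32_t
	adjusted_length (uint32_t len, uint32_t align)
	{
	  return len - (16 - align);   /* == len + align - 16  */
	}
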
	* sysdeps/i386/i686/multiarch/memchr-sse2.S (MEMCHR): Compute
	"edx + ecx - 16" as "edx - (16 - ecx)" to avoid possible addition
	overflow.
	* sysdeps/x86_64/memchr.S (memchr): Likewise.
2017-05-19 10:48:45 -07:00
H.J. Lu
a7fbedff76 Correct comments in x86_64/multiarch/memcmp.S
* sysdeps/x86_64/multiarch/memcmp.S (__GI_memcmp): Correct
	comments.
2017-05-18 14:02:02 -07:00
Zack Weinberg
7c3018f9e4 Suppress internal declarations for most of the testsuite.
This patch adds a new build module called 'testsuite'.
IS_IN (testsuite) implies _ISOMAC, as do IS_IN_build and __cplusplus
(which means several ad-hoc tests for __cplusplus can go away).
libc-symbols.h now suppresses almost all of *itself* when _ISOMAC is
defined; in particular, _ISOMAC mode does not get config.h
automatically anymore.

There are still quite a few tests that need to see internal gunk of
one variety or another.  For them, we now have 'tests-internal' and
'test-internal-extras'; files in this category will still be compiled
with MODULE_NAME=nonlib, and everything proceeds as it always has.
The bulk of this patch is moving tests from 'tests' to
'tests-internal'.  There is also 'tests-static-internal', which has
the same effect on files in 'tests-static', and 'modules-names-tests',
which has the *inverse* effect on files in 'modules-names' (it's
inverted because most of the things in modules-names are *not* tests).
For both of these, the file must appear in *both* the new variable and
the old one.

There is also now a special case for when libc-symbols.h is included
without MODULE_NAME being defined at all.  (This happens during the
creation of libc-modules.h, and also when preprocessing Versions
files.)  When this happens, IS_IN is set to be always false and
_ISOMAC is *not* defined, which was the status quo, but now it's
explicit.

The remaining changes to C source files in this patch seemed likely to
cause problems in the absence of the main change.  They should be
relatively self-explanatory.  In a few cases I duplicated a definition
from an internal header rather than move the test to tests-internal;
this was a judgement call each time and I'm happy to change those
however reviewers feel is more appropriate.

	* Makerules: New subdir configuration variables 'tests-internal'
	and 'test-internal-extras'.  Test files in these categories will
	still be compiled with MODULE_NAME=nonlib.  Test files in the
	existing categories (tests, xtests, test-srcs, test-extras) are
	now compiled with MODULE_NAME=testsuite.
	New subdir configuration variable 'modules-names-tests'.  Files
	which are in both 'modules-names' and 'modules-names-tests' will
	be compiled with MODULE_NAME=testsuite instead of
	MODULE_NAME=extramodules.
	(gen-as-const-headers): Move to tests-internal.
	(do-tests-clean, common-mostlyclean): Support tests-internal.
	* Makeconfig (built-modules): Add testsuite.
	* Makefile: Change libof-check-installed-headers-c and
	libof-check-installed-headers-cxx to 'testsuite'.
	* Rules: Likewise.  Support tests-internal.
	* benchtests/strcoll-inputs/filelist#en_US.UTF-8:
	Remove extra-modules.mk.

	* config.h.in: Don't check for __OPTIMIZE__ or __FAST_MATH__ here.
	* include/libc-symbols.h: Move definitions of _GNU_SOURCE,
	PASTE_NAME, PASTE_NAME1, IN_MODULE, IS_IN, and IS_IN_LIB to the
	very top of the file and rationalize their order.
	If MODULE_NAME is not defined at all, define IS_IN to always be
	false, and don't define _ISOMAC.
	If any of IS_IN (testsuite), IS_IN_build, or __cplusplus are
	true, define _ISOMAC and suppress everything else in this file,
	starting with the inclusion of config.h.
	Do check for inappropriate definitions of __OPTIMIZE__ and
	__FAST_MATH__ here, but only if _ISOMAC is not defined.
        Correct some out-of-date commentary.

	* include/math.h: If _ISOMAC is defined, undefine NO_LONG_DOUBLE
	and _Mlong_double_ before including math.h.
	* include/string.h: If _ISOMAC is defined, don't expose
	_STRING_ARCH_unaligned. Move a comment to a more appropriate
	location.

	* include/errno.h, include/stdio.h, include/stdlib.h, include/string.h
	* include/time.h, include/unistd.h, include/wchar.h: No need to
	check __cplusplus nor use __BEGIN_DECLS/__END_DECLS.

	* misc/sys/cdefs.h (__NTHNL): New macro.
	* sysdeps/m68k/m680x0/fpu/bits/mathinline.h
	(__m81_defun): Use __NTHNL to avoid errors with GCC 6.

	* elf/tst-env-setuid-tunables.c: Include config.h with _LIBC
	defined, for HAVE_TUNABLES.
	* inet/tst-checks-posix.c: No need to define _ISOMAC.
	* intl/tst-gettext2.c: Provide own definition of N_.
	* math/test-signgam-finite-c99.c: No need to define _ISOMAC.
	* math/test-signgam-main.c: No need to define _ISOMAC.
	* stdlib/tst-strtod.c: Convert to test-driver. Split locale_test to...
	* stdlib/tst-strtod1i.c: ...this new file.
	* stdlib/tst-strtod5.c: Convert to test-driver and add copyright notice.
        Split tests of __strtod_internal to...
	* stdlib/tst-strtod5i.c: ...this new file.
	* string/test-string.h: Include stdint.h. Duplicate definition of
	inhibit_loop_to_libcall here (from libc-symbols.h).
	* string/test-strstr.c: Provide dummy definition of
	libc_hidden_builtin_def when including strstr.c.
	* sysdeps/ia64/fpu/libm-symbols.h: Suppress entire file in _ISOMAC
	mode; no need to test __STRICT_ANSI__ nor __cplusplus as well.
	* sysdeps/x86_64/fpu/math-tests-arch.h: Include cpu-features.h.
	Don't include init-arch.h.
	* sysdeps/x86_64/multiarch/test-multiarch.h: Include cpu-features.h.
	Don't include init-arch.h.

	* elf/Makefile: Move tst-ptrguard1-static, tst-stackguard1-static,
	tst-tls1-static, tst-tls2-static, tst-tls3-static, loadtest,
	unload, unload2, circleload1, neededtest, neededtest2,
	neededtest3, neededtest4, tst-tls1, tst-tls2, tst-tls3,
	tst-tls6, tst-tls7, tst-tls8, tst-dlmopen2, tst-ptrguard1,
	tst-stackguard1, tst-_dl_addr_inside_object, and all of the
	ifunc tests to tests-internal.
	Don't add $(modules-names) to test-extras.
	* inet/Makefile: Move tst-inet6_scopeid_pton to tests-internal.
	Add tst-deadline to tests-static-internal.
	* malloc/Makefile: Move tst-mallocstate and tst-scratch_buffer to
	tests-internal.
	* misc/Makefile: Move tst-atomic and tst-atomic-long to tests-internal.
	* nptl/Makefile: Move tst-typesizes, tst-rwlock19, tst-sem11,
	tst-sem12, tst-sem13, tst-barrier5, tst-signal7, tst-tls3,
	tst-tls3-malloc, tst-tls5, tst-stackguard1, tst-sem11-static,
	tst-sem12-static, and tst-stackguard1-static to tests-internal.
        Link tests-internal with libpthread also.
	Don't add $(modules-names) to test-extras.
	* nss/Makefile: Move tst-field to tests-internal.
	* posix/Makefile: Move bug-regex5, bug-regex20, bug-regex33,
	tst-rfc3484, tst-rfc3484-2, and tst-rfc3484-3 to tests-internal.
	* stdlib/Makefile: Move tst-strtod1i, tst-strtod3, tst-strtod4,
	tst-strtod5i, tst-tls-atexit, and tst-tls-atexit-nodelete to
	tests-internal.
        * sunrpc/Makefile: Move tst-svc_register to tests-internal.
	* sysdeps/powerpc/Makefile: Move test-get_hwcap and
	test-get_hwcap-static to tests-internal.
	* sysdeps/unix/sysv/linux/Makefile: Move tst-setgetname to
	tests-internal.
	* sysdeps/x86_64/fpu/Makefile: Add all libmvec test modules to
	modules-names-tests.
2017-05-11 19:27:59 -04:00
H.J. Lu
1432d38ea0 x86: Set dl_platform and dl_hwcap from CPU features [BZ #21391]
dl_platform and dl_hwcap are set from AT_PLATFORM and AT_HWCAP very
early during startup.  They are used by the dynamic linker to determine
the platform and to build an array of hardware capability names, which are
added to the search path when loading shared objects.  dl_platform and
dl_hwcap are unused on x86-64.  On i386, the i386, i486, i586 and i686
platforms were supported and only the SSE2 capability was used.

On x86, usage of AT_PLATFORM and AT_HWCAP to determine platform and
processor capabilities is obsolete since all information is available
in dl_x86_cpu_features.  This patch sets dl_platform and dl_hwcap from
dl_x86_cpu_features in the dynamic linker.  On i386, the available platforms
are changed to i586 and i686 since i386 has been deprecated.  On x86-64,
the available platforms are haswell, which is for Haswell class processors
with BMI1, BMI2, LZCNT, MOVBE, POPCNT, AVX2 and FMA, and xeon_phi, which
is for Xeon Phi class processors with AVX512F, AVX512CD, AVX512ER and
AVX512PF.  A capability, avx512_1, is also added to x86-64 for AVX512
ISAs: AVX512F, AVX512CD, AVX512BW, AVX512DQ and AVX512VL.

	[BZ #21391]
	* sysdeps/i386/dl-machine.h (dl_platform_init) [IS_IN (rtld)]:
	Only call init_cpu_features.
	[!IS_IN (rtld)]: Only set GLRO(dl_platform) to NULL if needed.
	* sysdeps/x86_64/dl-machine.h (dl_platform_init): Likewise.
	* sysdeps/i386/dl-procinfo.h: Removed.
	* sysdeps/unix/sysv/linux/i386/dl-procinfo.h: Don't include
	<sysdeps/i386/dl-procinfo.h> nor <ldsodefs.h>.  Include
	<sysdeps/x86/dl-procinfo.h>.
	(_dl_procinfo): Replace _DL_HWCAP_COUNT with 32.
	* sysdeps/unix/sysv/linux/x86_64/dl-procinfo.h [!IS_IN (ldconfig)]:
	Include <sysdeps/x86/dl-procinfo.h> instead of
	 <sysdeps/generic/dl-procinfo.h>.
	* sysdeps/x86/cpu-features.c: Include <dl-hwcap.h>.
	(init_cpu_features): Set dl_platform, dl_hwcap and dl_hwcap_mask.
	* sysdeps/x86/cpu-features.h (bit_cpu_LZCNT): New.
	(bit_cpu_MOVBE): Likewise.
	(bit_cpu_BMI1): Likewise.
	(bit_cpu_BMI2): Likewise.
	(index_cpu_BMI1): Likewise.
	(index_cpu_BMI2): Likewise.
	(index_cpu_LZCNT): Likewise.
	(index_cpu_MOVBE): Likewise.
	(index_cpu_POPCNT): Likewise.
	(reg_BMI1): Likewise.
	(reg_BMI2): Likewise.
	(reg_LZCNT): Likewise.
	(reg_MOVBE): Likewise.
	(reg_POPCNT): Likewise.
	* sysdeps/x86/dl-hwcap.h: New file.
	* sysdeps/x86/dl-procinfo.h: Likewise.
	* sysdeps/x86/dl-procinfo.c (_dl_x86_hwcap_flags): New.
	(_dl_x86_platforms): Likewise.
2017-05-03 13:44:35 -07:00
H.J. Lu
4cb334c4d6 x86: Use AVX2 memcpy/memset on Skylake server [BZ #21396]
On Skylake server, AVX512 load/store instructions in memcpy/memset may
lead to lower CPU turbo frequency in certain situations.  Use of AVX2
in memcpy/memset has been observed to have improved overall performance
in many workloads due to the higher frequency.

Since AVX512ER is unique to Xeon Phi, this patch sets Prefer_No_AVX512
if AVX512ER isn't available so that AVX2 versions of memcpy/memset are
used on Skylake server.

	[BZ #21396]
	* sysdeps/x86/cpu-features.c (init_cpu_features): Set
	Prefer_No_AVX512 if AVX512ER isn't available.
	* sysdeps/x86/cpu-features.h (bit_arch_Prefer_No_AVX512): New.
	(index_arch_Prefer_No_AVX512): Likewise.
	* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Don't use
	AVX512 version if Prefer_No_AVX512 is set.
	* sysdeps/x86_64/multiarch/memcpy_chk.S (__memcpy_chk):
	Likewise.
	* sysdeps/x86_64/multiarch/memmove.S (__libc_memmove): Likewise.
	* sysdeps/x86_64/multiarch/memmove_chk.S (__memmove_chk):
	Likewise.
	* sysdeps/x86_64/multiarch/mempcpy.S (__mempcpy): Likewise.
	* sysdeps/x86_64/multiarch/mempcpy_chk.S (__mempcpy_chk):
	Likewise.
	* sysdeps/x86_64/multiarch/memset.S (memset): Likewise.
	* sysdeps/x86_64/multiarch/memset_chk.S (__memset_chk):
	Likewise.
2017-04-18 14:01:45 -07:00
H.J. Lu
fda19e0438 Add sysdeps/x86/dl-procinfo.c
Add sysdeps/x86/dl-procinfo.c for x86 version of processor capability
information to reduce duplication between i386 and x86_64 dl-procinfo.c.

	* sysdeps/i386/dl-procinfo.c: Include
	<sysdeps/x86/dl-procinfo.c>.
	* sysdeps/x86_64/dl-procinfo.c: Likewise.
	* sysdeps/x86/dl-procinfo.c: New file.
2017-04-10 12:01:45 -07:00
Adhemerval Zanella
a358c80530 Remove CALL_THREAD_FCT macro
This patch removes the CALL_THREAD_FCT macro usage and its definition for
x86.  For 32 bits its only use is to force 16-byte stack alignment,
however the stack is already explicitly aligned in the clone syscall.  For
64 bits and x32 it is just a function call and there is no need to
code it with inline assembly.

Checked on i686-linux-gnu, x86_64-linux-gnu, and x86_64-linux-gnu-x32.

	* nptl/pthread_create.c (START_THREAD_DEFN): Remove
	CALL_THREAD_FCT macro usage.
	* sysdeps/i386/nptl/tls.h (CALL_THREAD_FCT): Remove definition.
	* sysdeps/x86_64/nptl/tls.h (CALL_THREAD_FCT): Likewise.
	* sysdeps/x86_64/32/nptl/tls.h: Remove file.
2017-04-04 18:03:35 -03:00
H.J. Lu
c15f8eb50c x86-64: Improve branch predication in _dl_runtime_resolve_avx512_opt [BZ #21258]
On Skylake server, _dl_runtime_resolve_avx512_opt is used to preserve
the first 8 vector registers.  The code layout is

  if only %xmm0 - %xmm7 registers are used
     preserve %xmm0 - %xmm7 registers
  if only %ymm0 - %ymm7 registers are used
     preserve %ymm0 - %ymm7 registers
  preserve %zmm0 - %zmm7 registers

Branch prediction always executes the fallthrough code path, so the
%zmm0 - %zmm7 registers are preserved speculatively even though only the
%xmm0 - %xmm7 registers are used.  This leads to lower CPU frequency on
Skylake server.  This patch changes the fallthrough code path to preserve
%xmm0 - %xmm7 registers instead:

  if the whole %zmm0 - %zmm7 registers are used
    preserve %zmm0 - %zmm7 registers
  if only %ymm0 - %ymm7 registers are used
     preserve %ymm0 - %ymm7 registers
  preserve %xmm0 - %xmm7 registers

Tested on Skylake server.

	[BZ #21258]
	* sysdeps/x86_64/dl-trampoline.S (_dl_runtime_resolve_opt):
	Define only if _dl_runtime_resolve is defined to
	_dl_runtime_resolve_sse_vex.
	* sysdeps/x86_64/dl-trampoline.h (_dl_runtime_resolve_opt):
	Fallthrough to _dl_runtime_resolve_sse_vex.
2017-03-21 11:00:12 -07:00
Mike Frysinger
fbe355fbd1 x86_64: fix static build of __mempcpy_chk for compilers defaulting to PIC/PIE
When glibc is compiled with gcc 6.2 that has been configured
to default to PIC/PIE, the static version of __mempcpy_chk is not built,
as the test is done on PIC instead of SHARED.  Fix the test to check for
SHARED, like it is done for similar functions like __memcpy_chk.

2017-03-12  Mike Frysinger  <vapier@gentoo.org>

	* sysdeps/x86_64/mempcpy_chk.S (__mempcpy_chk): Check for SHARED
	instead of PIC.
2017-03-15 16:10:05 -07:00
Florian Weimer
2d6ab5df3b Document and fix --enable-bind-now [BZ #21015] 2017-03-02 14:44:28 +01:00
Zack Weinberg
9090848d06 Narrowing the visibility of libc-internal.h even further.
posix/wordexp-test.c used libc-internal.h for PTR_ALIGN_DOWN; similar
to what was done with libc-diag.h, I have split the definitions of
cast_to_integer, ALIGN_UP, ALIGN_DOWN, PTR_ALIGN_UP, and PTR_ALIGN_DOWN
to a new header, libc-pointer-arith.h.

It then occurred to me that the remaining declarations in libc-internal.h
are mostly to do with early initialization, and probably most of the
files including it, even in the core code, don't need it anymore.  Indeed,
only 19 files actually need what remains of libc-internal.h.  23 others
need libc-diag.h instead, and 12 need libc-pointer-arith.h instead.
No file needs more than one of them, and 16 don't need any of them!

So, with this patch, libc-internal.h stops including libc-diag.h as
well as losing the pointer arithmetic macros, and all including files
are adjusted.

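For reference, the pointer-arithmetic helpers being split out are small macros
along these lines (a sketch of the idea, not necessarily the exact glibc text;
size must be a power of two):

	#include <stdint.h>

	#define ALIGN_DOWN(base, size)  ((base) & -((__typeof__ (base)) (size)))
	#define ALIGN_UP(base, size)    ALIGN_DOWN ((base) + (size) - 1, (size))
	#define PTR_ALIGN_DOWN(base, size) \
	  ((__typeof__ (base)) ALIGN_DOWN ((uintptr_t) (base), (size)))
	#define PTR_ALIGN_UP(base, size) \
	  ((__typeof__ (base)) ALIGN_UP ((uintptr_t) (base), (size)))
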
        * include/libc-pointer-arith.h: New file.  Define
	cast_to_integer, ALIGN_UP, ALIGN_DOWN, PTR_ALIGN_UP, and
        PTR_ALIGN_DOWN here.
        * include/libc-internal.h: Definitions of above macros
	moved from here.  Don't include libc-diag.h anymore either.
	* posix/wordexp-test.c: Include stdint.h and libc-pointer-arith.h.
        Don't include libc-internal.h.

	* debug/pcprofile.c, elf/dl-tunables.c, elf/soinit.c, io/openat.c
	* io/openat64.c, misc/ptrace.c, nptl/pthread_clock_gettime.c
	* nptl/pthread_clock_settime.c, nptl/pthread_cond_common.c
	* string/strcoll_l.c, sysdeps/nacl/brk.c
	* sysdeps/unix/clock_settime.c
	* sysdeps/unix/sysv/linux/i386/get_clockfreq.c
	* sysdeps/unix/sysv/linux/ia64/get_clockfreq.c
	* sysdeps/unix/sysv/linux/powerpc/get_clockfreq.c
	* sysdeps/unix/sysv/linux/sparc/sparc64/get_clockfreq.c:
	Don't include libc-internal.h.

	* elf/get-dynamic-info.h, iconv/loop.c
	* iconvdata/iso-2022-cn-ext.c, locale/weight.h, locale/weightwc.h
	* misc/reboot.c, nis/nis_table.c, nptl_db/thread_dbP.h
	* nscd/connections.c, resolv/res_send.c, soft-fp/fmadf4.c
	* soft-fp/fmasf4.c, soft-fp/fmatf4.c, stdio-common/vfscanf.c
	* sysdeps/ieee754/dbl-64/e_lgamma_r.c
	* sysdeps/ieee754/dbl-64/k_rem_pio2.c
	* sysdeps/ieee754/flt-32/e_lgammaf_r.c
	* sysdeps/ieee754/flt-32/k_rem_pio2f.c
	* sysdeps/ieee754/ldbl-128/k_tanl.c
	* sysdeps/ieee754/ldbl-128ibm/k_tanl.c
	* sysdeps/ieee754/ldbl-96/e_lgammal_r.c
	* sysdeps/ieee754/ldbl-96/k_tanl.c, sysdeps/nptl/futex-internal.h:
	Include libc-diag.h instead of libc-internal.h.

        * elf/dl-load.c, elf/dl-reloc.c, locale/programs/locarchive.c
        * nptl/nptl-init.c, string/strcspn.c, string/strspn.c
	* malloc/malloc.c, sysdeps/i386/nptl/tls.h
	* sysdeps/nacl/dl-map-segments.h, sysdeps/x86_64/atomic-machine.h
	* sysdeps/unix/sysv/linux/spawni.c
        * sysdeps/x86_64/nptl/tls.h:
        Include libc-pointer-arith.h instead of libc-internal.h.

	* elf/get-dynamic-info.h, sysdeps/nacl/dl-map-segments.h
	* sysdeps/x86_64/atomic-machine.h:
        Add multiple include guard.
2017-03-01 20:33:46 -05:00
Zack Weinberg
963394a22b Allow direct use of math_ldbl.h in testsuite.
A few 'long double'-related tests include math_private.h just for
their variety of math_ldbl.h, which contains macros for assembling and
disassembling the binary representation of 'long double'.  math_ldbl.h
insists on being included from math_private.h, but if we relax this
restriction (and fix some portability sloppiness) we can use it
directly and not have to expose all of math_private.h to the testsuite.

	* sysdeps/generic/math_private.h: Use __BIG_ENDIAN and
	__LITTLE_ENDIAN, not BIG_ENDIAN and LITTLE_ENDIAN.

	* sysdeps/generic/math_ldbl.h
	* sysdeps/ia64/fpu/math_ldbl.h
	* sysdeps/ieee754/ldbl-128/math_ldbl.h
	* sysdeps/ieee754/ldbl-128ibm/math_ldbl.h
	* sysdeps/ieee754/ldbl-96/math_ldbl.h
	* sysdeps/powerpc/fpu/math_ldbl.h
	* sysdeps/x86_64/fpu/math_ldbl.h:
	Allow direct inclusion.  Use uintNN_t instead of u_intNN_t.
	Use __BIG_ENDIAN and __LITTLE_ENDIAN, not BIG_ENDIAN and
	LITTLE_ENDIAN.  Include endian.h and/or stdint.h if necessary.
	Add copyright notices.

	* sysdeps/ieee754/ldbl-128ibm/math_ldbl.h (ldbl_canonicalize_int):
	Don't use EXTRACT_WORDS64.

	* sysdeps/ieee754/ldbl-96/test-canonical-ldbl-96.c
	* sysdeps/ieee754/ldbl-96/test-totalorderl-ldbl-96.c
	* sysdeps/ieee754/ldbl-128ibm/test-canonical-ldbl-128ibm.c
	* sysdeps/ieee754/ldbl-128ibm/test-totalorderl-ldbl-128ibm.c:
	Include math_ldbl.h, not math_private.h.
2017-02-25 10:40:48 -05:00
Joseph Myers
92061bb033 Run libm tests separately for each function.
At present, libm tests for each function get built into a single
executable (for each floating point type, for each of normal / inline
/ finite-math-only functions, plus vector variants) and run together,
resulting in a single PASS or FAIL (for each of those nine variants
plus vector variants).  Building this executable involves reading
over 50 MB of libm-test-*.c sources.

This patch arranges for tests of each function to be run separately
from the makefiles instead.  There are 121 functions being tested for
each (type, variant pair) (actually 126, but run as 121 from the
Makefile because each of the pairs (exp10, pow10), (isfinite, finite),
(lgamma, gamma), (remainder, drem), (scalbn, ldexp), shares a table of
test results and so is run together), so 1089 separate tests run from
the Makefile, plus 48 vector tests on x86_64 (six functions for eight
vector variants).  Each test only involves a libm-test-<func>.c file
of no more than about 4 MB, rather than all such files taking about 50
MB.  With tests run separately, test summaries will indicate which
functions actually have problems (of course, those problems may just
be out-of-date libm-test-ulps files if the file hasn't been updated
for the architecture in question recently).

All the .c files for the 1089+48 tests are generated automatically
from the Makefiles.  Various checked-in boilerplate .c files are
removed as no longer needed.  CFLAGS definitions for the different
kinds of tests are generated using makefile iterators to apply
target-specific variable settings.  libm-have-vector-test.h is no
longer needed; the list of functions to test for each vector type is
now in the sysdeps Makefile.

This should reduce the amount of boilerplate needed for float128
testing support; test-float128.h will still be needed, but not various
.c files or Makefile CFLAGS definitions.  The logic for creating
dependencies on libm-test-support-*.o files should also render
<https://sourceware.org/ml/libc-alpha/2017-02/msg00279.html>
unnecessary.

Tested for x86_64 and x86.

	* math/Makefile (libm-tests-generated): Remove variable.
	(libm-tests-base-normal): New variable.
	(libm-tests-base-finite): Likewise.
	(libm-tests-base-inline): Likewise.
	(libm-tests-base): Likewise.
	(libm-tests-normal): Likewise.
	(libm-tests-finite): Likewise.
	(libm-tests-inline): Likewise.
	(libm-tests-vector): Likewise.
	(libm-tests): Define in terms of these new variables.
	(libm-tests-for-type): New variable.
	(libm-tests.o): Move definition.
	(tests): Move addition of $(libm-tests).
	(generated): Update for new and removed libm test files.
	($(objpfx)libm-test.c): Remove target.
	($(objpfx)libm-have-vector-test.h): Likewise.
	(CFLAGS-test-double-vlen2.c): Remove variable.
	(CFLAGS-test-double-vlen4.c): Likewise.
	(CFLAGS-test-double-vlen8.c): Likewise.
	(CFLAGS-test-float-vlen4.c): Likewise.
	(CFLAGS-test-float-vlen8.c): Likewise.
	(CFLAGS-test-float-vlen16.c): Likewise.
	(CFLAGS-test-float.c): Likewise.
	(CFLAGS-test-float-finite.c): Likewise.
	(CFLAGS-libm-test-support-float.c): Likewise.
	(CFLAGS-test-double.c): Likewise.
	(CFLAGS-test-double-finite.c): Likewise.
	(CFLAGS-libm-test-support-double.c): Likewise.
	(CFLAGS-test-ldouble.c): Likewise.
	(CFLAGS-test-ldouble-finite.c): Likewise.
	(CFLAGS-libm-test-support-ldouble.c): Likewise.
	(libm-test-inline-cflags): New variable.
	(CFLAGS-test-ifloat.c): Remove variable.
	(CFLAGS-test-idouble.c): Likewise.
	(CFLAGS-test-ildouble.c): Likewise.
	($(addprefix $(objpfx), $(libm-tests.o))): Move target and update
	dependencies.
	($(foreach t,$(libm-tests-normal),$(objpfx)$(t).c)): New rule.
	($(foreach t,$(libm-tests-finite),$(objpfx)$(t).c)): Likewise.
	($(foreach t,$(libm-tests-inline),$(objpfx)$(t).c)): Likewise.
	($(foreach t,$(libm-tests-vector),$(objpfx)$(t).c)): Likewise.
	($(foreach t,$(types),$(objpfx)libm-test-support-$(t).c)):
	Likewise.
	(dependencies on libm-test-support-*.o): Remove.
	($(foreach f,$(libm-test-funcs-all),$(objpfx)$(o)-$(f).o)): New
	rules using iterators.
	($(addprefix $(objpfx),$(call libm-tests-for-type,$(o)))):
	Likewise.
	($(objpfx)libm-test-support-$(o).o): Likewise.
	($(addprefix $(objpfx),$(filter-out $(tests-static)
	$(libm-vec-tests),$(tests)))): Filter out $(libm-tests-vector)
	instead.
	($(addprefix $(objpfx), $(libm-vec-tests))): Use iterator to
	define rule instead.
	* math/README.libm-test: Update.
	* math/libm-test-acos.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-acosh.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-asin.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-asinh.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-atan.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-atan2.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-atanh.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-cabs.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-cacos.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-cacosh.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-canonicalize.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-carg.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-casin.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-casinh.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-catan.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-catanh.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-cbrt.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-ccos.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-ccosh.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-ceil.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-cexp.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-cimag.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-clog.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-clog10.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-conj.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-copysign.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-cos.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-cosh.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-cpow.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-cproj.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-creal.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-csin.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-csinh.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-csqrt.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-ctan.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-ctanh.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-erf.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-erfc.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-exp.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-exp10.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-exp2.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-expm1.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-fabs.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-fdim.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-floor.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-fma.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-fmax.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-fmaxmag.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-fmin.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-fminmag.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-fmod.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-fpclassify.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-frexp.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-fromfp.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-fromfpx.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-getpayload.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-hypot.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-ilogb.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-iscanonical.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-iseqsig.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-isfinite.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-isgreater.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-isgreaterequal.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-isinf.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-isless.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-islessequal.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-islessgreater.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-isnan.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-isnormal.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-issignaling.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-issubnormal.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-isunordered.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-iszero.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-j0.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-j1.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-jn.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-lgamma.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-llogb.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-llrint.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-llround.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-log.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-log10.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-log1p.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-log2.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-logb.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-lrint.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-lround.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-modf.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-nearbyint.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-nextafter.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-nextdown.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-nexttoward.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-nextup.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-pow.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-remainder.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-remquo.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-rint.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-round.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-roundeven.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-scalb.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-scalbln.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-scalbn.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-setpayload.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-setpayloadsig.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-signbit.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-significand.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-sin.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-sincos.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-sinh.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-sqrt.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-tan.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-tanh.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-tgamma.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-totalorder.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-totalordermag.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-trunc.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-ufromfp.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-ufromfpx.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-y0.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-y1.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-yn.inc: Include libm-test-driver.c.
	(do_test): New function.
	* math/libm-test-driver.c: Do not include libm-have-vector-test.h.
	(HAVE_VECTOR): Remove macro.
	(START): Do not call HAVE_VECTOR.
	* math/test-double-vlen2.h (FUNC_TEST): Remove macro.
	* math/test-double-vlen4.h (FUNC_TEST): Remove macro.
	* math/test-double-vlen8.h (FUNC_TEST): Remove macro.
	* math/test-float-vlen16.h (FUNC_TEST): Remove macro.
	* math/test-float-vlen4.h (FUNC_TEST): Remove macro.
	* math/test-float-vlen8.h (FUNC_TEST): Remove macro.
	* math/test-math-vector.h (FUNC_TEST): New macro.
	(WRAPPER_DECL): Rename to WRAPPER_DECL_f.
	* sysdeps/x86_64/fpu/Makefile (double-vlen2-funcs): New variable.
	(double-vlen4-funcs): Likewise.
	(double-vlen4-avx2-funcs): Likewise.
	(double-vlen8-funcs): Likewise.
	(float-vlen4-funcs): Likewise.
	(float-vlen8-funcs): Likewise.
	(float-vlen8-avx2-funcs): Likewise.
	(float-vlen16-funcs): Likewise.
	(CFLAGS-test-double-vlen4-avx2.c): Remove variable.
	(CFLAGS-test-float-vlen8-avx2.c): Likewise.
	* sysdeps/x86_64/fpu/test-double-vlen4.h (TEST_VECTOR_cos): Remove
	macro.
	(TEST_VECTOR_sin): Likewise.
	(TEST_VECTOR_sincos): Likewise.
	(TEST_VECTOR_log): Likewise.
	(TEST_VECTOR_exp): Likewise.
	(TEST_VECTOR_pow): Likewise.
	* sysdeps/x86_64/fpu/test-double-vlen8.h (TEST_VECTOR_cos):
	Likewise.
	(TEST_VECTOR_sin): Likewise.
	(TEST_VECTOR_sincos): Likewise.
	(TEST_VECTOR_log): Likewise.
	(TEST_VECTOR_exp): Likewise.
	(TEST_VECTOR_pow): Likewise.
	* sysdeps/x86_64/fpu/test-float-vlen16.h (TEST_VECTOR_cosf):
	Likewise.
	(TEST_VECTOR_sinf): Likewise.
	(TEST_VECTOR_sincosf): Likewise.
	(TEST_VECTOR_logf): Likewise.
	(TEST_VECTOR_expf): Likewise.
	(TEST_VECTOR_powf): Likewise.
	* sysdeps/x86_64/fpu/test-float-vlen8.h (TEST_VECTOR_cosf):
	Likewise.
	(TEST_VECTOR_sinf): Likewise.
	(TEST_VECTOR_sincosf): Likewise.
	(TEST_VECTOR_logf): Likewise.
	(TEST_VECTOR_expf): Likewise.
	(TEST_VECTOR_powf): Likewise.
	* math/gen-libm-have-vector-test.sh: Remove file.
	* math/libm-test.inc: Likewise.
	* math/libm-test-support-double.c: Likewise.
	* math/libm-test-support-float.c: Likewise.
	* math/libm-test-support-ldouble.c: Likewise.
	* math/test-double-finite.c: Likewise.
	* math/test-double.c: Likewise.
	* math/test-float-finite.c: Likewise.
	* math/test-float.c: Likewise.
	* math/test-idouble.c: Likewise.
	* math/test-ifloat.c: Likewise.
	* math/test-ildouble.c: Likewise.
	* math/test-ldouble-finite.c: Likewise.
	* math/test-ldouble.c: Likewise.
	* sysdeps/x86_64/fpu/test-double-vlen2.c: Likewise.
	* sysdeps/x86_64/fpu/test-double-vlen2.h: Likewise.
	* sysdeps/x86_64/fpu/test-double-vlen4-avx2.c: Likewise.
	* sysdeps/x86_64/fpu/test-double-vlen4.c: Likewise.
	* sysdeps/x86_64/fpu/test-double-vlen8.c: Likewise.
	* sysdeps/x86_64/fpu/test-float-vlen16.c: Likewise.
	* sysdeps/x86_64/fpu/test-float-vlen4.c: Likewise.
	* sysdeps/x86_64/fpu/test-float-vlen4.h: Likewise.
	* sysdeps/x86_64/fpu/test-float-vlen8-avx2.c: Likewise.
	* sysdeps/x86_64/fpu/test-float-vlen8.c: Likewise.
2017-02-24 00:52:49 +00:00
Joseph Myers
2c51dfd05d Move tests of catan, catanh to auto-libm-test-*.
This patch moves tests of catan and catanh with finite inputs (other
than the divide-by-zero cases producing an exact infinity) to using
the auto-libm-test machinery.  Each of auto-libm-test-out-catan and
auto-libm-test-out-catanh takes about three seconds to generate on my
system (so in fact it wasn't necessary after all to defer the move to
auto-libm-test-* until the output files were split up by function).

Tested for x86_64 and x86 and ulps updated accordingly.

	* math/auto-libm-test-in: Add tests of catan and catanh.
	* math/auto-libm-test-out-catan: New generated file.
	* math/auto-libm-test-out-catanh: Likewise.
	* math/libm-test-catan.inc (catan_test_data): Use AUTO_TESTS_c_c.
	Move tests with finite inputs, except divide-by-zero cases, to
	auto-libm-test-in.
	* math/libm-test-catanh.inc (catanh_test_data): Likewise.
	* math/Makefile (libm-test-funcs-auto): Add catan and catanh.
	(libm-test-funcs-noauto): Remove catan and catanh.
	* sysdeps/i386/fpu/libm-test-ulps: Update.
	* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise.
	* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
2017-02-17 18:42:37 +00:00
Joseph Myers
fa2a3dd7a3 Move tests of casin, casinh to auto-libm-test-*.
This patch moves tests of casin and casinh with finite inputs to using
the auto-libm-test machinery.  Each of auto-libm-test-out-casin and
auto-libm-test-out-casinh takes about 38 minutes to generate on my
system because of MPC slowness on special cases that appear in the
tests (with MPC 1.0.3; I don't know to what extent current MPC master
might speed it up).

Tested for x86_64 and x86 and ulps updated accordingly.

	* math/auto-libm-test-in: Add tests of casin and casinh.
	* math/auto-libm-test-out-casin: New generated file.
	* math/auto-libm-test-out-casinh: Likewise.
	* math/libm-test-casin.inc (casin_test_data): Use AUTO_TESTS_c_c.
	Move tests with finite inputs to auto-libm-test-in.
	* math/libm-test-casinh.inc (casinh_test_data): Likewise.
	* math/Makefile (libm-test-funcs-auto): Add casin and casinh.
	(libm-test-funcs-noauto): Remove casin and casinh.
	* sysdeps/i386/fpu/libm-test-ulps: Update.
	* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise.
	* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
2017-02-17 18:14:02 +00:00
Joseph Myers
6b8303a383 Move tests of cacos, cacosh to auto-libm-test-*.
This patch moves tests of cacos and cacosh with finite inputs to using
the auto-libm-test machinery.  Each of auto-libm-test-out-cacos and
auto-libm-test-out-cacosh takes about 80 minutes to generate on my
system because of MPC slowness on special cases that appear in the
tests (with MPC 1.0.3; I don't know to what extent current MPC master
might speed it up).

Tested for x86_64 and x86 and ulps updated accordingly.

	* math/auto-libm-test-in: Add tests of cacos and cacosh.
	* math/auto-libm-test-out-cacos: New generated file.
	* math/auto-libm-test-out-cacosh: Likewise.
	* math/libm-test-cacos.inc (cacos_test_data): Use AUTO_TESTS_c_c.
	Move tests with finite inputs to auto-libm-test-in.
	* math/libm-test-cacosh.inc (cacosh_test_data): Likewise.
	* math/Makefile (libm-test-funcs-auto): Add cacos and cacosh.
	(libm-test-funcs-noauto): Remove cacos and cacosh.
	* sysdeps/i386/fpu/libm-test-ulps: Update.
	* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise.
	* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
2017-02-17 17:44:23 +00:00
Joseph Myers
f7a51347a4 Revert header inclusion changes that break math/ testing on x86_64.
Revert:
	2017-02-16  Zack Weinberg  <zackw@panix.com>

	* sysdeps/x86_64/fpu/math-tests-arch.h: Include cpu-features.h.
	Don't include init-arch.h.
	* sysdeps/x86_64/multiarch/test-multiarch.h: Include cpu-features.h.
	Don't include init-arch.h.
2017-02-17 17:08:17 +00:00
Zack Weinberg
ceaa98897c Add missing header files throughout the testsuite.
* crypt/md5.h: Test _LIBC with #if defined, not #if.
	* dirent/opendir-tst1.c: Include sys/stat.h.
	* dirent/tst-fdopendir.c: Include sys/stat.h.
	* dirent/tst-fdopendir2.c: Include stdlib.h.
	* dirent/tst-scandir.c: Include stdbool.h.
	* elf/tst-auditmod1.c: Include link.h and stddef.h.
	* elf/tst-tls15.c: Include stdlib.h.
	* elf/tst-tls16.c: Include stdlib.h.
	* elf/tst-tls17.c: Include stdlib.h.
	* elf/tst-tls18.c: Include stdlib.h.
	* iconv/tst-iconv6.c: Include endian.h.
	* iconvdata/bug-iconv11.c: Include limits.h.
	* io/test-utime.c: Include stdint.h.
	* io/tst-faccessat.c: Include sys/stat.h.
	* io/tst-fchmodat.c: Include sys/stat.h.
	* io/tst-fchownat.c: Include sys/stat.h.
	* io/tst-fstatat.c: Include sys/stat.h.
	* io/tst-futimesat.c: Include sys/stat.h.
	* io/tst-linkat.c: Include sys/stat.h.
	* io/tst-mkdirat.c: Include sys/stat.h and stdbool.h.
	* io/tst-mkfifoat.c: Include sys/stat.h and stdbool.h.
	* io/tst-mknodat.c: Include sys/stat.h and stdbool.h.
	* io/tst-openat.c: Include stdbool.h.
	* io/tst-readlinkat.c: Include sys/stat.h.
	* io/tst-renameat.c: Include sys/stat.h.
	* io/tst-symlinkat.c: Include sys/stat.h.
	* io/tst-unlinkat.c: Include stdbool.h.
	* libio/bug-memstream1.c: Include stdlib.h.
	* libio/bug-wmemstream1.c: Include stdlib.h.
	* libio/tst-fwrite-error.c: Include stdlib.h.
	* libio/tst-memstream1.c: Include stdlib.h.
	* libio/tst-memstream2.c: Include stdlib.h.
	* libio/tst-memstream3.c: Include stdlib.h.
	* malloc/tst-interpose-aux.c: Include stdint.h.
	* misc/tst-preadvwritev-common.c: Include sys/stat.h.
	* nptl/tst-basic7.c: Include limits.h.
	* nptl/tst-cancel25.c: Include pthread.h, not pthreadP.h.
	* nptl/tst-cancel4.c: Include stddef.h, limits.h, and sys/stat.h.
	* nptl/tst-cancel4_1.c: Include stddef.h.
	* nptl/tst-cancel4_2.c: Include stddef.h.
	* nptl/tst-cond16.c: Include limits.h.
	Use sysconf(_SC_PAGESIZE) instead of __getpagesize.
	* nptl/tst-cond18.c: Include limits.h.
	Use sysconf(_SC_PAGESIZE) instead of __getpagesize.
	* nptl/tst-cond4.c: Include stdint.h.
	* nptl/tst-cond6.c: Include stdint.h.
	* nptl/tst-stack2.c: Include limits.h.
	* nptl/tst-stackguard1.c: Include stddef.h.
	* nptl/tst-tls4.c: Include stdint.h. Don't include tls.h.
	* nptl/tst-tls4moda.c: Include stddef.h.
	Don't include stdio.h, unistd.h, or tls.h.
	* nptl/tst-tls4modb.c: Include stddef.h.
	Don't include stdio.h, unistd.h, or tls.h.
	* nptl/tst-tls5.h: Include stddef.h. Don't include stdlib.h or tls.h.
	* posix/tst-getaddrinfo2.c: Include stdio.h.
	* posix/tst-getaddrinfo5.c: Include stdio.h.
	* posix/tst-pathconf.c: Include sys/stat.h.
	* posix/tst-posix_fadvise-common.c: Include stdint.h.
	* posix/tst-preadwrite-common.c: Include sys/stat.h.
	* posix/tst-regex.c: Include stdint.h.
	Don't include spawn.h or spawn_int.h.
	* posix/tst-regexloc.c: Don't include spawn.h or spawn_int.h.
	* posix/tst-vfork3.c: Include sys/stat.h.
	* resolv/tst-bug18665-tcp.c: Include stdlib.h.
	* resolv/tst-res_hconf_reorder.c: Include stdlib.h.
	* resolv/tst-resolv-search.c: Include stdlib.h.
	* stdio-common/tst-fmemopen2.c: Include stdint.h.
	* stdio-common/tst-vfprintf-width-prec.c: Include stdlib.h.
	* stdlib/test-canon.c: Include sys/stat.h.
	* stdlib/tst-tls-atexit.c: Include stdbool.h.
	* string/test-memchr.c: Include stdint.h.
	* string/tst-cmp.c: Include stdint.h.
	* sysdeps/pthread/tst-timer.c: Include stdint.h.
	* sysdeps/unix/sysv/linux/tst-sync_file_range.c: Include stdint.h.
	* sysdeps/wordsize-64/tst-writev.c: Include limits.h and stdint.h.
	* sysdeps/x86_64/fpu/math-tests-arch.h: Include cpu-features.h.
	Don't include init-arch.h.
	* sysdeps/x86_64/multiarch/test-multiarch.h: Include cpu-features.h.
	Don't include init-arch.h.
	* sysdeps/x86_64/tst-auditmod10b.c: Include link.h and stddef.h.
	* sysdeps/x86_64/tst-auditmod3b.c: Include link.h and stddef.h.
	* sysdeps/x86_64/tst-auditmod4b.c: Include link.h and stddef.h.
	* sysdeps/x86_64/tst-auditmod5b.c: Include link.h and stddef.h.
	* sysdeps/x86_64/tst-auditmod6b.c: Include link.h and stddef.h.
	* sysdeps/x86_64/tst-auditmod6c.c: Include link.h and stddef.h.
	* sysdeps/x86_64/tst-auditmod7b.c: Include link.h and stddef.h.
	* time/clocktest.c: Include stdint.h.
	* time/tst-posixtz.c: Include stdint.h.
	* timezone/tst-timezone.c: Include stdint.h.
2017-02-16 17:33:18 -05:00
Joseph Myers
10303eb74b Move most libmvec test contents from .c to .h files.
The libmvec tests put substantive, architecture-specific contents in
.c files such as test-double-vlen4.c, which causes issues for generating
such files automatically when splitting up tests by function.

This patch moves all the substantive contents to .h files, so the .c
files only include the .h file and then libm-test.c.  This allows for
automatic generation of per-function .c files in future.  The .h files
in turn #include or #include_next the architecture-independent file
and add the architecture-specific definitions to that.  (Splitting by
function should in fact allow the TEST_VECTOR_* macros to be replaced
by sysdeps makefile information on which functions to test in each
case, removing the need for gen-libm-have-vector-test.sh as well as
removing the need for some of the architecture-specific headers.)
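
After the change the per-architecture .c files end up roughly of this
shape (a sketch; the exact file contents may differ slightly):

  /* test-double-vlen4.c: all substantive definitions now live in the
     corresponding .h file; the .c file only stitches things together.  */
  #include "test-double-vlen4.h"
  #include "libm-test.c"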

Tested for x86_64.

	* sysdeps/x86_64/fpu/test-double-vlen2.c: Move most contents to,
	and include ...
	* sysdeps/x86_64/fpu/test-double-vlen2.h: ... here.  New file.
	* sysdeps/x86_64/fpu/test-double-vlen4-avx2.c: Move most contents
	to, and include ...
	* sysdeps/x86_64/fpu/test-double-vlen4-avx2.h: ... here.  New
	file.
	* sysdeps/x86_64/fpu/test-double-vlen4.c: Move most contents to,
	and include ...
	* sysdeps/x86_64/fpu/test-double-vlen4.h: ... here.  New file.
	* sysdeps/x86_64/fpu/test-double-vlen8.c: Move most contents to,
	and include ...
	* sysdeps/x86_64/fpu/test-double-vlen8.h: ... here.  New file.
	* sysdeps/x86_64/fpu/test-float-vlen16.c: Move most contents to,
	and include ...
	* sysdeps/x86_64/fpu/test-float-vlen16.h: ... here.  New file.
	* sysdeps/x86_64/fpu/test-float-vlen4.c: Move most contents to,
	and include ...
	* sysdeps/x86_64/fpu/test-float-vlen4.h: ... here.  New file.
	* sysdeps/x86_64/fpu/test-float-vlen8-avx2.c: Move most contents
	to, and include ...
	* sysdeps/x86_64/fpu/test-float-vlen8-avx2.h: ... here.  New file.
	* sysdeps/x86_64/fpu/test-float-vlen8.c: Move most contents to,
	and include ...
	* sysdeps/x86_64/fpu/test-float-vlen8.h: ... here.  New file.
2017-02-15 01:13:15 +00:00
H.J. Lu
3403a17fea x86-64: Verify that _dl_runtime_resolve preserves vector registers
On x86-64, _dl_runtime_resolve must preserve the first 8 vector
registers.  Add 3 _dl_runtime_resolve tests to verify that SSE,
AVX and AVX512 registers are preserved.

	* sysdeps/x86_64/Makefile (tests): Add tst-sse, tst-avx and
	tst-avx512.
	(test-extras): Add tst-avx-aux and tst-avx512-aux.
	(extra-test-objs): Add tst-avx-aux.o and tst-avx512-aux.o.
	(modules-names): Add tst-ssemod, tst-avxmod and tst-avx512mod.
	($(objpfx)tst-sse): New rule.
	($(objpfx)tst-avx): Likewise.
	($(objpfx)tst-avx512): Likewise.
	(CFLAGS-tst-avx-aux.c): New.
	(CFLAGS-tst-avxmod.c): Likewise.
	(CFLAGS-tst-avx512-aux.c): Likewise.
	(CFLAGS-tst-avx512mod.c): Likewise.
	* sysdeps/x86_64/tst-avx-aux.c: New file.
	* sysdeps/x86_64/tst-avx.c: Likewise.
	* sysdeps/x86_64/tst-avx512-aux.c: Likewise.
	* sysdeps/x86_64/tst-avx512.c: Likewise.
	* sysdeps/x86_64/tst-avx512mod.c: Likewise.
	* sysdeps/x86_64/tst-avxmod.c: Likewise.
	* sysdeps/x86_64/tst-sse.c: Likewise.
	* sysdeps/x86_64/tst-ssemod.c: Likewise.
2017-02-09 12:19:58 -08:00
Adhemerval Zanella
f2d7f23a30 Remove i686, x86_64, and powerpc strtok implementations
Based on comments on a previous attempt to address BZ#16640 [1],
the idea is not to support invalid use of strtok (the original
bug report proposal).  This led to a new optimized strtok
implementation [2].

The idea of this patch is to fix BZ#16640 by aligning all the
implementations to the same contract.  However, with the newer strtok
code it is better to remove the old assembly implementations instead
of fixing them.

For x86 it is a gain in all cases, since the new implementation can
potentially use the sse2/sse42 implementations of strspn and strcspn.
This shows better performance on both i686 and x86_64 using
the string benchtests.

On powerpc64 the gains are mixed, where only for larger inputs
or keys some gains are shown (based on the benchtests it seems that
there are gains for keys larger than 10 and inputs larger
than 32).  I would prefer to remove the optimized implementation,
first for code simplicity and second because some more gain
could be obtained by using better optimized strcspn/strspn
code (as for x86).  However, if the powerpc arch maintainers prefer,
I can send a v2 with the assembly code adjusted instead.

Checked on x86_64-linux-gnu, i686-linux-gnu, and powerpc64le-linux-gnu.
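
For reference, the strspn/strcspn-based approach of the generic C
strtok_r looks roughly like this (a simplified sketch, not the exact
glibc code):

  #include <string.h>

  char *
  strtok_r_sketch (char *s, const char *delim, char **save_ptr)
  {
    char *end;

    if (s == NULL)
      s = *save_ptr;

    /* Skip leading delimiters.  */
    s += strspn (s, delim);
    if (*s == '\0')
      {
        *save_ptr = s;
        return NULL;
      }

    /* Find the end of the token.  */
    end = s + strcspn (s, delim);
    if (*end == '\0')
      {
        *save_ptr = end;
        return s;
      }

    /* Terminate the token and make *SAVE_PTR point past it.  */
    *end = '\0';
    *save_ptr = end + 1;
    return s;
  }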

	[BZ #16640]
	* sysdeps/i386/i686/strtok.S: Remove file.
	* sysdeps/i386/i686/strtok_r.S: Likewise.
	* sysdeps/i386/strtok.S: Likewise.
	* sysdeps/i386/strtok_r.S: Likewise.
	* sysdeps/powerpc/powerpc64/strtok.S: Likewise.
	* sysdeps/powerpc/powerpc64/strtok_r.S: Likewise.
	* sysdeps/x86_64/strtok.S: Likewise.
	* sysdeps/x86_64/strtok_r.S: Likewise.

[1] https://sourceware.org/ml/libc-alpha/2016-10/msg00411.html
[2] https://sourceware.org/ml/libc-alpha/2016-12/msg00461.html
2017-02-06 10:24:17 -02:00
H.J. Lu
6fab532b47 Allow IFUNC relocation against unrelocated shared library
IFUNC relocation against a definition in an unrelocated shared library
will lead to a segfault when the IFUNC function is called.  This
patch allows such IFUNC relocations with a warning.  This isn't
a real fix for

https://sourceware.org/bugzilla/show_bug.cgi?id=21041

It simply allows the program to load.  The program will segfault
when longjmp is called.

	* sysdeps/i386/dl-machine.h (elf_machine_rel): Replace
	_dl_fatal_printf with _dl_error_printf for IFUNC relocation
	against unrelocated shared library.
	* sysdeps/x86_64/dl-machine.h (elf_machine_rela): Likewise.
2017-02-02 13:14:59 -08:00
H.J. Lu
02b78ff749 Add VZEROUPPER to memset-vec-unaligned-erms.S [BZ #21081]
Since memset-vec-unaligned-erms.S has VDUP_TO_VEC0_AND_SET_RETURN at
function entry, memset optimized for AVX2 and AVX512 will always use
ymm/zmm registers.  VZEROUPPER should be placed before ret in

L(stosb):
        movq    %rdx, %rcx
        movzbl  %sil, %eax
        movq    %rdi, %rdx
        rep stosb
        movq    %rdx, %rax
        ret

since it can be reached from

L(stosb_more_2x_vec):
        cmpq    $REP_STOSB_THRESHOLD, %rdx
        ja      L(stosb)

	[BZ #21081]
	* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
	(L(stosb)): Add VZEROUPPER before ret.
2017-01-30 10:59:31 -08:00
Adhemerval Zanella
8dad72997a Fix x86 strncat optimized implementation for large sizes
Similar to BZ#19387, BZ#21014, and BZ#20971, both x86 sse2 strncat
optimized assembly implementations do not handle the size overflow
correctly.

The x86_64 one is in fact an issue with strcpy-sse2-unaligned, but
it is also triggered by the strncat optimized implementation.

This patch uses a strategy similar to the one used in 3daef2c8ee, where
saturated math is used for the overflow case.

Checked on x86_64-linux-gnu and i686-linux-gnu.  It fixes BZ #19390.
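
A sketch of the kind of case the added test exercises (illustrative
only, not the actual test-strncat.c code): a bound of SIZE_MAX must not
make the implementation wrap its internal pointer arithmetic.

  #include <assert.h>
  #include <stdint.h>
  #include <string.h>

  int
  main (void)
  {
    char dst[32] = "foo";
    strncat (dst, "bar", SIZE_MAX);      /* must behave like strcat here */
    assert (strcmp (dst, "foobar") == 0);
    return 0;
  }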

	[BZ #19390]
	* string/test-strncat.c (test_main): Add tests with SIZE_MAX as
	maximum string size.
	* sysdeps/i386/i686/multiarch/strcat-sse2.S (STRCAT): Avoid overflow
	in pointer addition.
	* sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S (STRCPY):
	Likewise.
2017-01-03 14:24:53 -02:00
Joseph Myers
bfff8b1bec Update copyright dates with scripts/update-copyrights. 2017-01-01 00:14:16 +00:00
Adhemerval Zanella
3daef2c8ee Fix x86_64 memchr for large input sizes
For input pointers whose value modulo 64 is in the range [49,63], the
current optimized memchr for x86_64 performs a pointer addition that
might overflow when there is no search character in the rest of the
64-byte block:

* sysdeps/x86_64/memchr.S

    77          .p2align 4
    78  L(unaligned_no_match):
    79          add     %rcx, %rdx

Add (uintptr_t)s % 16 to n in %rdx.

    80          sub     $16, %rdx
    81          jbe     L(return_null)

This patch fixes it by adding saturated math that clamps the pointer
value to a maximum (UINTPTR_MAX) if the addition overflows.

Checked on x86_64-linux-gnu and powerpc64-linux-gnu.
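
In C terms the saturated arithmetic looks like this (the real change is
in the assembly; this is only an illustration):

  #include <stdint.h>

  static inline uintptr_t
  add_saturate (uintptr_t base, uintptr_t len)
  {
    uintptr_t sum = base + len;
    return sum < base ? UINTPTR_MAX : sum;   /* clamp on overflow */
  }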

	[BZ #19387]
	* sysdeps/x86_64/memchr.S (memchr): Avoid overflow in pointer
	addition.
	* string/test-memchr.c (do_test): Remove alignment limitation.
	(test_main): Add tests that trigger BZ #19387.
2016-12-27 10:50:41 -02:00
Nick Alcock
de6591238b Do not stack-protect ifunc resolvers [BZ #7065]
When dynamically linking, ifunc resolvers are called before TLS is
initialized, so they cannot be safely stack-protected.

We avoid disabling stack-protection on large numbers of files by
using __attribute__ ((__optimize__ ("-fno-stack-protector")))
to turn it off just for the resolvers themselves.  (We provide
the attribute even when statically linking, because we will later
use it elsewhere too.)
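
A minimal sketch of how a resolver can carry the attribute (function
names and the capability check below are hypothetical):

  typedef void (*foo_fn) (void);

  extern void foo_generic (void);
  extern void foo_fast (void);
  extern int cpu_has_fast_path (void);   /* hypothetical check */

  __attribute__ ((__optimize__ ("-fno-stack-protector")))
  foo_fn
  resolve_foo (void)
  {
    /* Runs during relocation, before TLS (and thus the stack-guard
       canary location) is usable, so it must not be instrumented.  */
    return cpu_has_fast_path () ? foo_fast : foo_generic;
  }

  void foo (void) __attribute__ ((ifunc ("resolve_foo")));
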
2016-12-26 10:08:41 +01:00
Nick Alcock
fcd942370f x86_64: tst-quad1pie, tst-quad2pie: compile with -fPIE [BZ #7065]
With stack protection enabled, these files have external symbol
references for the first time, so the fact that they are not compiled
with -fPIE and are then linked into a -pie binary starts to hurt.
2016-12-21 12:04:12 +01:00
Joseph Myers
0a2546cdaa Fix x86, x86_64 fmax, fmin sNaN handling, add tests (bug 20947).
Various fmax and fmin function implementations mishandle sNaN
arguments:

(a) When both arguments are NaNs, the return value should be a qNaN,
but sometimes it is an sNaN if at least one argument is an sNaN.

(b) Under TS 18661-1 semantics, if either argument is an sNaN then the
result should be a qNaN (whereas if one argument is a qNaN and the
other is not a NaN, the result should be the non-NaN argument).
Various implementations treat sNaNs like qNaNs here.

This patch fixes the x86 and x86_64 versions (ignoring float and
double for 32-bit x86 given the inability to reliably avoid the sNaN
turning into a qNaN before it gets to the called function).  Tests of
sNaN inputs to these functions are added.

Note on architecture versions I haven't changed for this issue:
AArch64 already gets this right (it uses a hardware instruction with
the correct semantics for both quiet and signaling NaNs) and does not
need changes.  It's possible Alpha, IA64, SPARC might need changes
(this would be shown by the testsuite if so).

Tested for x86_64 and x86 (both i686 and i586 builds, to cover the
different x86 implementations).
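
A conceptual C version of the fixed semantics (the actual change is in
the .S implementations; issignaling is the TS 18661-1 classification
macro exposed by glibc under _GNU_SOURCE):

  #define _GNU_SOURCE
  #include <math.h>

  double
  fmax_ts18661 (double x, double y)
  {
    if (issignaling (x) || issignaling (y))
      return x + y;               /* qNaN result, raises "invalid" */
    if (isnan (x))
      return y;                   /* qNaN operand: return the other arg */
    if (isnan (y))
      return x;
    return x >= y ? x : y;
  }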

	[BZ #20947]
	* sysdeps/i386/fpu/s_fmaxl.S (__fmaxl): Add the arguments when
	either is a signaling NaN.
	* sysdeps/i386/fpu/s_fminl.S (__fminl): Likewise.  Make code
	follow fmaxl more closely.
	* sysdeps/i386/i686/fpu/s_fmaxl.S (__fmaxl): Add the arguments
	when either is a signaling NaN.
	* sysdeps/i386/i686/fpu/s_fminl.S (__fminl): Likewise.
	* sysdeps/x86_64/fpu/s_fmax.S (__fmax): Likewise.
	* sysdeps/x86_64/fpu/s_fmaxf.S (__fmaxf): Likewise.
	* sysdeps/x86_64/fpu/s_fmaxl.S (__fmaxl): Likewise.
	* sysdeps/x86_64/fpu/s_fmin.S (__fmin): Likewise.
	* sysdeps/x86_64/fpu/s_fminf.S (__fminf): Likewise.
	* sysdeps/x86_64/fpu/s_fminl.S (__fminl): Likewise.
	* math/libm-test.inc (fmax_test_data): Add tests of sNaN inputs.
	(fmin_test_data): Likewise.
2016-12-15 23:52:18 +00:00
Joseph Myers
a91fd168a0 Fix x86_64/x86 powl handling of sNaN arguments (bug 20916).
The x86_64/x86 powl implementations mishandle sNaN arguments, both by
returning sNaN in some cases (instead of doing arithmetic on the
arguments to produce the result when NaN arguments result in NaN
results) and by treating sNaN the same as qNaN for arguments (1, sNaN)
and (sNaN, 0), contrary to TS 18661-1 which requires those cases to
return qNaN instead of 1.

This patch makes the x86_64/x86 powl implementations follow TS 18661-1
semantics for sNaN arguments; sNaN tests are also added for pow.
Given the problems with testing float and double sNaN arguments on
32-bit x86 (sNaN tests disabled because the compiler may convert
unnecessarily to a qNaN when passing arguments), no changes are made
to the powf and pow implementations there.

Tested for x86_64 and x86.

	[BZ #20916]
	* sysdeps/i386/fpu/e_powl.S (__ieee754_powl): Do not return 1 for
	arguments (sNaN, 0) or (1, sNaN).  Do arithmetic on NaN arguments
	to compute result.
	* sysdeps/x86_64/fpu/e_powl.S (__ieee754_powl): Likewise.
	* math/libm-test.inc (pow_test_data): Add tests of sNaN arguments.
2016-12-06 00:33:19 +00:00
Florian Weimer
b04beebf07 ld.so: Remove __libc_memalign
It is no longer needed since commit 6c444ad6e9
(elf: Do not use memalign for TCB/TLS blocks allocation [BZ #17730]).
Applications do not link against ld.so and will use the definition in
libc.so, so there is no ABI impact.
2016-11-30 16:23:58 +01:00
Florian Weimer
9e78f6f6e7 Implement _dl_catch_error, _dl_signal_error in libc.so [BZ #16628]
This change moves the main implementation of _dl_catch_error,
_dl_signal_error to libc.so, where TLS variables can be used
directly.  This removes a writable function pointer from the
rtld_global variable.

For use during initial relocation, minimal implementations of these
functions are provided in ld.so.  These are eventually interposed
by the libc.so implementations.  This is implemented by compiling
elf/dl-error-skeleton.c twice, via elf/dl-error.c and
elf/dl-error-minimal.c.

As a side effect of this change, the static version of dl-error.c
no longer includes support for the
_dl_signal_cerror/_dl_receive_error mechanism because it is only
used in ld.so.
2016-11-30 15:59:57 +01:00
H.J. Lu
c9070e6305 X86_64: Don't use PLT nor GOT in static archives [BZ #20750]
There is no need to use the PLT or GOT in static archives to branch to a
function, regardless of whether the static archive is compiled with PIC
or not.  When static archives are used to create a dynamic executable,
the PLT/GOT may be used.  The resulting executable still works correctly.
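
Conceptually the change amounts to switching the guard of the JUMPTARGET
macro (a sketch; the exact macro body in sysdep.h is an assumption here):

  /* Branch through the PLT only when building the shared library, not
     merely because the objects happen to be compiled as PIC.  */
  #ifdef SHARED
  # define JUMPTARGET(name) name##@PLT
  #else
  # define JUMPTARGET(name) name
  #endif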

	[BZ #20750]
	* sysdeps/x86_64/sysdep.h (JUMPTARGET): Check SHARED instead
	of PIC.
2016-11-28 09:45:07 -08:00
Adhemerval Zanella
c579f48edb Remove cached PID/TID in clone
This patch removes the PID cache and its usage in current GLIBC code.  The
cache is mainly a performance optimization to avoid the syscall,
however it adds some issues:

  - The exposed clone syscall will try to set pid/tid to make the new
    thread somewhat compatible with current GLIBC assumptions.  This causes
    a set of issues with new workloads and use cases (such as BZ#17214 and
    [1]), as well as for new internal usage of clone to optimize other
    algorithms (such as clone plus CLONE_VM for posix_spawn, BZ#19957).

  - The caching complexity has also led to some bugs in the past [2] [3] and
    requires more effort from each port to handle such requirements (for
    both the clone and vfork implementations).

  - The caching performance gain is mainly on getpid and some specific
    code paths.  The getpid performance leverage is questionable [4],
    both regarding the idea of getpid being a hotspot and regarding the
    getpid implementation itself (if it were indeed a justifiable hotspot,
    a vDSO symbol could lead to a much simpler solution).

    Other usage is mainly in unusual code paths, such as pthread
    cancellation signal handling.

For thread creation (in stack allocation) the code simplification in fact
adds some performance gain, since there is no longer a need to traverse the
stack cache and invalidate each element's pid.

Other thread usages will require a direct getpid syscall, such as the
cancellation/setxid signal, thread cancellation, the thread failure path (at
create_thread), and thread signals (pthread_kill and pthread_sigqueue).
However these are hardly usual hotspots and I think adding a syscall is
justifiable.
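
A sketch of the direction of the change (the helper name below is
hypothetical): the remaining internal users ask the kernel directly
instead of reading a cached field from the thread descriptor.

  #include <sys/syscall.h>
  #include <sys/types.h>
  #include <unistd.h>

  static pid_t
  current_pid (void)
  {
    return (pid_t) syscall (SYS_getpid);   /* no cached value to keep in sync */
  }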

It also simplifies both the clone and vfork arch-specific implementations.
Reviewing each fork implementation also showed some discrepancies that
this patch solves:

  - microblaze clone/vfork does not set/reset the pid/tid field
  - hppa uses the default vfork implementation that falls back to fork.
    Since vfork is deprecated I do not think we should bother with it.

The patch also removes the TID caching in clone.  My understanding of
that semantic is that it tries to provide some pthread usability after a
user program issues clone directly (as done by thread creation with
CLONE_PARENT_SETTID and the pthread tid member).  However, as stated
before in multiple discussion threads, GLIBC provides the clone syscall
without further supporting all of these semantics.

I ran a full make check on x86_64, x32, i686, armhf, aarch64, and powerpc64le.
For sparc32, sparc64, and mips I ran the basic fork and vfork tests from the
posix/ folder (on a qemu system).  So it would require further testing
on alpha, hppa, ia64, m68k, nios2, s390, sh, and tile (I excluded microblaze
because it already implements the patch semantics regarding clone/vfork).

[1] https://codereview.chromium.org/800183004/
[2] https://sourceware.org/ml/libc-alpha/2006-07/msg00123.html
[3] https://sourceware.org/bugzilla/show_bug.cgi?id=15368
[4] http://yarchive.net/comp/linux/getpid_caching.html

	* sysdeps/nptl/fork.c (__libc_fork): Remove pid cache setting.
	* nptl/allocatestack.c (allocate_stack): Likewise.
	(__reclaim_stacks): Likewise.
	(setxid_signal_thread): Obtain pid through syscall.
	* nptl/nptl-init.c (sigcancel_handler): Likewise.
	(sighandle_setxid): Likewise.
	* nptl/pthread_cancel.c (pthread_cancel): Likewise.
	* sysdeps/unix/sysv/linux/pthread_kill.c (__pthread_kill): Likewise.
	* sysdeps/unix/sysv/linux/pthread_sigqueue.c (pthread_sigqueue):
	Likewise.
	* sysdeps/unix/sysv/linux/createthread.c (create_thread): Likewise.
	* sysdeps/unix/sysv/linux/getpid.c: Remove file.
	* nptl/descr.h (struct pthread): Change comment about pid value.
	* nptl/pthread_getattr_np.c (pthread_getattr_np): Remove thread
	pid assert.
	* sysdeps/unix/sysv/linux/pthread-pids.h (__pthread_initialize_pids):
	Do not set pid value.
	* nptl_db/td_ta_thr_iter.c (iterate_thread_list): Remove thread
	pid cache check.
	* nptl_db/td_thr_validate.c (td_thr_validate): Likewise.
	* sysdeps/aarch64/nptl/tcb-offsets.sym: Remove pid offset.
	* sysdeps/alpha/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/arm/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/hppa/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/i386/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/ia64/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/m68k/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/microblaze/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/mips/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/nios2/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/powerpc/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/s390/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/sh/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/sparc/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/tile/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/x86_64/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/unix/sysv/linux/aarch64/clone.S: Remove pid and tid caching.
	* sysdeps/unix/sysv/linux/alpha/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/arm/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/hppa/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/ia64/clone2.S: Likewise.
	* sysdeps/unix/sysv/linux/mips/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/nios2/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-32/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/sh/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc32/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/tile/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/aarch64/vfork.S: Remove pid set and reset.
	* sysdeps/unix/sysv/linux/alpha/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/arm/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/ia64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/m68k/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/m68k/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/mips/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/nios2/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-32/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/sh/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc32/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/tile/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/tst-clone2.c (f): Remove direct pthread
	struct access.
	(clone_test): Remove function.
	(do_test): Rewrite to take in consideration pid is not cached anymore.
2016-11-24 19:38:51 -02:00
Aurelien Jarno
380ec16d62 x86_64: fix static build of __memcpy_chk for compilers defaulting to PIC/PIE
When glibc is compiled with gcc 6.2 that has been configured to default
to PIC/PIE, the static version of __memcpy_chk is not built, as the test
is done on PIC instead of SHARED.  Fix the test to check for SHARED, as
is done for similar functions like memmove_chk.

Changelog:
	* sysdeps/x86_64/memcpy_chk.S (__memcpy_chk): Check for SHARED
	instead of PIC.
2016-11-24 16:56:26 +01:00
Joseph Myers
799131036e Do not hardcode platform names in manual/libm-err-tab.pl (bug 14139).
manual/libm-err-tab.pl hardcodes a list of names for particular
platforms (mapping from sysdeps directory name to friendly name for
the manual).  This goes against the principle of keeping information
about individual platforms in their corresponding sysdeps directory,
and the list is also very out-of-date regarding supported platforms
and their corresponding sysdeps directories.

This patch fixes this by adding a libm-test-ulps-name file alongside
each libm-test-ulps file.  The script then gets the friendly name from
that file, which is required to exist, so it no longer needs to allow
for the mapping being missing.

Tested for x86_64.

	[BZ #14139]
	* manual/libm-err-tab.pl (%pplatforms): Initialize to empty.
	(find_files): Obtain platform name from libm-test-ulps-name and
	store in %pplatforms.
	(canonicalize_platform): Remove.
	(print_platforms): Use $pplatforms directly.
	(by_platforms): Do not allow for platforms missing from
	%pplatforms.
	* sysdeps/aarch64/libm-test-ulps-name: New file.
	* sysdeps/alpha/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/arm/libm-test-ulps-name: Likewise.
	* sysdeps/generic/libm-test-ulps-name: Likewise.
	* sysdeps/hppa/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/i386/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps-name: Likewise.
	* sysdeps/ia64/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/m68k/coldfire/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/m68k/m680x0/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/microblaze/libm-test-ulps-name: Likewise.
	* sysdeps/mips/mips32/libm-test-ulps-name: Likewise.
	* sysdeps/mips/mips64/libm-test-ulps-name: Likewise.
	* sysdeps/nios2/libm-test-ulps-name: Likewise.
	* sysdeps/powerpc/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/powerpc/nofpu/libm-test-ulps-name: Likewise.
	* sysdeps/s390/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/sh/libm-test-ulps-name: Likewise.
	* sysdeps/sparc/fpu/libm-test-ulps-name: Likewise.
	* sysdeps/tile/libm-test-ulps-name: Likewise.
	* sysdeps/x86_64/fpu/libm-test-ulps-name: Likewise.
2016-11-04 16:49:06 +00:00
H.J. Lu
0e6d3adc60 Check IFUNC definition in unrelocated shared library [BZ #20019]
Calling an IFUNC function defined in unrelocated shared library may
lead to segfault.  This patch issues an error message to request
relinking the shared library if it references IFUNC function defined
in the unrelocated shared library.

	[BZ #20019]
	* sysdeps/i386/dl-machine.h (elf_machine_rel): Check IFUNC
	definition in unrelocated shared library.
	* sysdeps/x86_64/dl-machine.h (elf_machine_rela): Likewise.
2016-10-28 09:12:15 -07:00
Joseph Myers
f280fa6d17 Use __builtin_fma more in dbl-64 code.
sysdeps/ieee754/dbl-64/dla.h can use a macro DLA_FMS for more
efficient double-width operations when fused multiply-subtract is
supported.  However, this macro is only defined for x86_64,
conditional on architecture-specific __FMA4__.  This patch makes the
code use __builtin_fma conditional on __FP_FAST_FMA, as used elsewhere
in glibc.
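
The new definition is conceptually along these lines (a sketch; the
exact macro body is an assumption):

  /* Fused multiply-subtract: x * y - z with a single rounding, available
     whenever the compiler advertises fast FMA for double.  */
  #ifdef __FP_FAST_FMA
  # define DLA_FMS(x, y, z) __builtin_fma ((x), (y), -(z))
  #endif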

Tested for x86_64, x86 and powerpc.  On powerpc (where this is causing
fused operations to be used where they weren't previously) I see an
increase from 1ulp to 2ulp in the imaginary part of clog10:

testing double (without inline functions)
Failure: Test: Imaginary part of: clog10 (0x1.7a858p+0 - 0x6.d940dp-4 i)
Result:
 is:         -1.2237865208199886e-01  -0x1.f5435146bb61ap-4
 should be:  -1.2237865208199888e-01  -0x1.f5435146bb61cp-4
 difference:  2.7755575615628914e-17   0x1.0000000000000p-55
 ulp       :  2.0000
 max.ulp   :  1.0000
Maximal error of real part of: clog10
 is      : 3 ulp
 accepted: 3 ulp
Maximal error of imaginary part of: clog10
 is      : 2 ulp
 accepted: 1 ulp

This is actually resulting from atan2 becoming *more* accurate (atan2
(-0x6.d940dp-4, 0x1.7a858p+0) should ideally be -0x1.208cd6e841554p-2
but was -0x1.208cd6e841555p-2 from a powerpc libm built before this
change, and is -0x1.208cd6e841554p-2 from a powerpc libm built after
this change).  Since these functions are not expected to be correctly
rounding by glibc's accuracy goals, neither result is a problem, but
this does imply that some of this code, although designed to be
correctly rounding, is not in fact correctly rounding (possibly
because of GCC creating fused operations where the code does not
expect it, something we've only disabled for specific functions where
it was found to cause large errors).  (Of course as previously
discussed I think we should remove the slow cases where an error
analysis shows this wouldn't increase the errors much above 0.5ulp;
it's only functions such as cratan2 that are expected to be correctly
rounding, not atan2.)

	* sysdeps/ieee754/dbl-64/dla.h [__FP_FAST_FMA] (DLA_FMS): Define
	macro to use __builtin_fma.
	* sysdeps/x86_64/fpu/dla.h: Remove file.
2016-09-30 15:49:51 +00:00
Joseph Myers
ec94343f59 Add femode_t functions.
TS 18661-1 defines a type femode_t to represent the set of dynamic
floating-point control modes (such as the rounding mode and trap
enablement modes), and functions fegetmode and fesetmode to manipulate
those modes (without affecting other state such as the raised
exception flags) and a corresponding macro FE_DFL_MODE.

This patch series implements those interfaces for glibc.  This first
patch adds the architecture-independent pieces, the x86 and x86_64
implementations, and the <bits/fenv.h> and ABI baseline updates for
all architectures so glibc keeps building and passing the ABI tests on
all architectures.  Subsequent patches add the fegetmode and fesetmode
implementations for other architectures.

femode_t is generally an integer type - the same type as fenv_t, or as
the single element of fenv_t where fenv_t is a structure containing a
single integer (or the single relevant element, where it has elements
for both status and control registers) - except where architecture
properties or consistency with the fenv_t implementation indicate
otherwise.  FE_DFL_MODE follows FE_DFL_ENV in whether it's a magic
pointer value (-1 cast to const femode_t *), a value that can be
distinguished from valid pointers by its high bits but otherwise
contains a representation of the desired register contents, or a
pointer to a constant variable (the powerpc case; __fe_dfl_mode is
added as an exported constant object, an alias to __fe_dfl_env).

Note that where architectures (that share a register between control
and status bits) gain definitions of new floating-point control or
status bits in future, the implementations of fesetmode for those
architectures may need updating (depending on whether the new bits are
control or status bits and what the implementation does with
previously unknown bits), just like existing implementations of
<fenv.h> functions that take care not to touch reserved bits may need
updating when the set of reserved bits changes.  (As any new bits are
outside the scope of ISO C, that's just a quality-of-implementation
issue for supporting them, not a conformance issue.)

As with fenv_t, femode_t should properly include any software DFP
rounding mode (and for both fenv_t and femode_t I'd consider that
fragment of DFP support appropriate for inclusion in glibc even in the
absence of the rest of libdfp; hardware DFP rounding modes should
already be included if the definitions of which bits are status /
control bits are correct).

Tested for x86_64, x86, mips64 (hard float, and soft float to test the
fallback version), arm (hard float) and powerpc (hard float, soft
float and e500).  Other architecture versions are untested.
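
A usage sketch of the new interfaces (illustrative; FE_UPWARD is used
here only as an example mode change):

  #include <fenv.h>

  /* Run FN with upward rounding, then restore the caller's control
     modes without touching any exception flags FN may have raised.  */
  void
  with_upward_rounding (void (*fn) (void))
  {
    femode_t saved;
    fegetmode (&saved);
    fesetround (FE_UPWARD);
    fn ();
    fesetmode (&saved);
  }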

	* math/fegetmode.c: New file.
	* math/fesetmode.c: Likewise.
	* sysdeps/i386/fpu/fegetmode.c: Likewise.
	* sysdeps/i386/fpu/fesetmode.c: Likewise.
	* sysdeps/x86_64/fpu/fegetmode.c: Likewise.
	* sysdeps/x86_64/fpu/fesetmode.c: Likewise.
	* math/fenv.h: Update comment on inclusion of <bits/fenv.h>.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (fegetmode): New function
	declaration.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (fesetmode): Likewise.
	* bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)] (femode_t): New
	typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/aarch64/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/alpha/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/arm/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/hppa/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/ia64/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/m68k/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/microblaze/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/mips/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/nios2/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/powerpc/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (__fe_dfl_mode): New variable
	declaration.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/s390/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/sh/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/sparc/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/tile/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* sysdeps/x86/fpu/bits/fenv.h [__GLIBC_USE (IEC_60559_BFP_EXT)]
	(femode_t): New typedef.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (FE_DFL_MODE): New macro.
	* manual/arith.texi (FE_DFL_MODE): Document macro.
	(fegetmode): Document function.
	(fesetmode): Likewise.
	* math/Versions (fegetmode): New libm symbol at version
	GLIBC_2.25.
	(fesetmode): Likewise.
	* math/Makefile (libm-support): Add fegetmode and fesetmode.
	(tests): Add test-femode and test-femode-traps.
	* math/test-femode-traps.c: New file.
	* math/test-femode.c: Likewise.
	* sysdeps/powerpc/fpu/fenv_const.c (__fe_dfl_mode): Declare as
	alias for __fe_dfl_env.
	* sysdeps/powerpc/nofpu/fenv_const.c (__fe_dfl_mode): Likewise.
	* sysdeps/powerpc/powerpc32/e500/nofpu/fenv_const.c
	(__fe_dfl_mode): Likewise.
	* sysdeps/powerpc/Versions (__fe_dfl_mode): New libm symbol at
	version GLIBC_2.25.
	* sysdeps/nacl/libm.abilist: Update.
	* sysdeps/unix/sysv/linux/aarch64/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/alpha/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/arm/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/hppa/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/i386/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/ia64/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/m68k/coldfire/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/m68k/m680x0/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/microblaze/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/mips/mips32/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/mips/mips64/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/nios2/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/fpu/libm.abilist:
	Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/nofpu/libm.abilist:
	Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc64/libm-le.abilist:
	Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc64/libm.abilist:
	Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-32/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-64/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/sh/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc32/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc64/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/tile/tilegx/tilegx32/libm.abilist:
	Likewise.
	* sysdeps/unix/sysv/linux/tile/tilegx/tilegx64/libm.abilist:
	Likewise.
	* sysdeps/unix/sysv/linux/tile/tilepro/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/64/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/x32/libm.abilist: Likewise.
2016-09-07 16:40:09 +00:00
H.J. Lu
fb0f7a6755 X86-64: Add _dl_runtime_resolve_avx[512]_{opt|slow} [BZ #20508]
There is a transition penalty when SSE instructions are mixed with 256-bit
AVX or 512-bit AVX512 load instructions.  Since _dl_runtime_resolve_avx
and _dl_runtime_profile_avx512 save/restore 256-bit YMM/512-bit ZMM
registers, there is a transition penalty when SSE instructions are used
with lazy binding on AVX and AVX512 processors.

To avoid SSE transition penalty, if only the lower 128 bits of the first
8 vector registers are non-zero, we can preserve %xmm0 - %xmm7 registers
with the zero upper bits.

For AVX and AVX512 processors which support XGETBV with ECX == 1, we can
use XGETBV with ECX == 1 to check if the upper 128 bits of YMM registers
or the upper 256 bits of ZMM registers are zero.  We can restore only the
non-zero portion of vector registers with AVX/AVX512 load instructions
which will zero-extend upper bits of vector registers.

This patch adds _dl_runtime_resolve_sse_vex which saves and restores
XMM registers with 128-bit AVX store/load instructions.  It is used to
preserve YMM/ZMM registers when only the lower 128 bits are non-zero.
_dl_runtime_resolve_avx_opt and _dl_runtime_resolve_avx512_opt are added
and used on AVX/AVX512 processors supporting XGETBV with ECX == 1 so
that we store and load only the non-zero portion of vector registers.
This avoids SSE transition penalty caused by _dl_runtime_resolve_avx and
_dl_runtime_profile_avx512 when only the lower 128 bits of vector
registers are used.

_dl_runtime_resolve_avx_slow is added and used for AVX processors which
don't support XGETBV with ECX == 1.  Since there is no SSE transition
penalty on AVX512 processors which don't support XGETBV with ECX == 1,
_dl_runtime_resolve_avx512_slow isn't provided.

	[BZ #20495]
	[BZ #20508]
	* sysdeps/x86/cpu-features.c (init_cpu_features): For Intel
	processors, set Use_dl_runtime_resolve_slow and set
	Use_dl_runtime_resolve_opt if XGETBV supports ECX == 1.
	* sysdeps/x86/cpu-features.h (bit_arch_Use_dl_runtime_resolve_opt):
	New.
	(bit_arch_Use_dl_runtime_resolve_slow): Likewise.
	(index_arch_Use_dl_runtime_resolve_opt): Likewise.
	(index_arch_Use_dl_runtime_resolve_slow): Likewise.
	* sysdeps/x86_64/dl-machine.h (elf_machine_runtime_setup): Use
	_dl_runtime_resolve_avx512_opt and _dl_runtime_resolve_avx_opt
	if Use_dl_runtime_resolve_opt is set.  Use
	_dl_runtime_resolve_slow if Use_dl_runtime_resolve_slow is set.
	* sysdeps/x86_64/dl-trampoline.S: Include <cpu-features.h>.
	(_dl_runtime_resolve_opt): New.  Defined for AVX and AVX512.
	(_dl_runtime_resolve): Add one for _dl_runtime_resolve_sse_vex.
	* sysdeps/x86_64/dl-trampoline.h (_dl_runtime_resolve_avx_slow):
	New.
	(_dl_runtime_resolve_opt): Likewise.
	(_dl_runtime_profile): Define only if _dl_runtime_profile is
	defined.
2016-09-06 08:51:07 -07:00
Paul E. Murphy
2bad840e9d Remove unneeded stubs for k_rem_pio2l.
This is only used for the float and double variants.

Instead, just add it to the type specific list of files,
and remove all stubs, and remove the declaration from
math_private.h.

I verified x86_64, i486, ia64, m68k, and ppc64 build.
2016-09-01 09:31:06 -05:00
H.J. Lu
0ac8ee53e8 X86-64: Correct CFA in _dl_runtime_resolve
When stack is re-aligned in _dl_runtime_resolve, there is no need to
adjust CFA when allocating register save area on stack.

	* sysdeps/x86_64/dl-trampoline.h (_dl_runtime_resolve): Don't
	adjust CFA when allocating register save area on re-aligned
	stack.
2016-08-26 08:57:54 -07:00
Joseph Myers
5146356f5a Add fesetexcept.
TS 18661-1 defines an fesetexcept function for setting floating-point
exception flags without the side-effect of causing enabled traps to be
taken.

This patch series implements this function for glibc.  The present
patch adds the fallback stub implementation, x86 and x86_64
implementations, documentation, tests and ABI baseline updates.  The
remaining patches, some of them untested, add implementations for
other architectures.  The implementations generally follow those of
the fesetexceptflag function.

As for fesetexceptflag, the approach taken for architectures where
setting flags causes enabled traps to be taken is to set the flags
(and potentially cause traps) rather than refusing to set the flags
and returning an error.  Since ISO C and TS 18661 provide no way to
enable traps, this is formally in accordance with the standards.

The NEWS entry should be considered a placeholder, since this patch
series is intended to be followed by further such series adding other
TS 18661-1 features, so that the NEWS entry would end up looking more
like

* New <fenv.h> features from TS 18661-1:2014 are added to libm: the
  fesetexcept, fetestexceptflag, fegetmode and fesetmode functions,
  the femode_t type and the FE_DFL_MODE macro.

with hopefully more such entries for other features, rather than
having an entry for a single function in the end.

I believe we have consensus for adding TS 18661-1 interfaces as per
<https://sourceware.org/ml/libc-alpha/2016-06/msg00421.html>.

Tested for x86_64, x86, mips64 (hard float, and soft float to test the
fallback version), arm (hard float) and powerpc (hard float, soft
float and e500).
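
A usage sketch (illustrative): unlike feraiseexcept, fesetexcept only
sets the flag, without the side effect of causing an enabled trap to be
taken (on architectures where that is possible, as described above).

  #include <fenv.h>

  /* Record that a computation was inexact without triggering a trap.  */
  int
  mark_inexact (void)
  {
    return fesetexcept (FE_INEXACT);   /* 0 on success */
  }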

	* math/fesetexcept.c: New file.
	* sysdeps/i386/fpu/fesetexcept.c: Likewise.
	* sysdeps/x86_64/fpu/fesetexcept.c: Likewise.
	* math/fenv.h: Define
	__GLIBC_INTERNAL_STARTING_HEADER_IMPLEMENTATION and include
	<bits/libc-header-start.h> instead of including <features.h>.
	[__GLIBC_USE (IEC_60559_BFP_EXT)] (fesetexcept): New function
	declaration.
	* manual/arith.texi (fesetexcept): Document function.
	* math/Versions (fesetexcept): New libm symbol at version
	GLIBC_2.25.
	* math/Makefile (libm-support): Add fesetexcept.
	(tests): Add test-fesetexcept and test-fesetexcept-traps.
	* math/test-fesetexcept.c: New file.
	* math/test-fesetexcept-traps.c: Likewise.
	* sysdeps/nacl/libm.abilist: Update.
	* sysdeps/unix/sysv/linux/aarch64/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/alpha/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/arm/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/hppa/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/i386/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/ia64/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/m68k/coldfire/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/m68k/m680x0/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/microblaze/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/mips/mips32/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/mips/mips64/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/nios2/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/fpu/libm.abilist:
	Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/nofpu/libm.abilist:
	Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc64/libm-le.abilist:
	Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc64/libm.abilist:
	Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-32/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-64/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/sh/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc32/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc64/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/tile/tilegx/tilegx32/libm.abilist:
	Likewise.
	* sysdeps/unix/sysv/linux/tile/tilegx/tilegx64/libm.abilist:
	Likewise.
	* sysdeps/unix/sysv/linux/tile/tilepro/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/64/libm.abilist: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/x32/libm.abilist: Likewise.
2016-08-16 16:16:10 +00:00
Andrew Senkevich
533f9bebf9 x86_64: Call finite scalar versions in vectorized log, pow, exp (bz #20033).
Vector math functions require -ffast-math, which sets -ffinite-math-only,
so the finite scalar versions need to be called (they are called from the
vector functions in some cases).

Since the finite version of pow() returns qNaN instead of 1.0 for several
inputs, those inputs are excluded from the tests of the vector math functions.

    [BZ #20033]
    * sysdeps/x86_64/fpu/multiarch/svml_d_exp2_core_sse4.S: Call
    finite version.
    * sysdeps/x86_64/fpu/multiarch/svml_d_exp4_core_avx2.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core_avx512.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_log2_core_sse4.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_log4_core_avx2.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_log8_core_avx512.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_pow2_core_sse4.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_pow4_core_avx2.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core_avx512.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core_avx512.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_expf4_core_sse4.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_expf8_core_avx2.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core_avx512.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core_sse4.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core_avx2.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core_avx512.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_powf4_core_sse4.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_powf8_core_avx2.S: Likewise.
    * sysdeps/x86_64/fpu/svml_d_exp2_core.S: Likewise.
    * sysdeps/x86_64/fpu/svml_d_log2_core.S: Likewise.
    * sysdeps/x86_64/fpu/svml_d_pow2_core.S: Likewise.
    * sysdeps/x86_64/fpu/svml_s_expf4_core.S: Likewise.
    * sysdeps/x86_64/fpu/svml_s_logf4_core.S: Likewise.
    * sysdeps/x86_64/fpu/svml_s_powf4_core.S: Likewise.
    * math/libm-test.inc (pow_test_data): Exclude tests for qNaN
    in power zero.
2016-08-02 16:35:25 +03:00
H.J. Lu
fe0cf86148 Don't compile do_test with -mavx/-mavx2/-mavx512
Don't compile do_test with -mavx, -mavx2 nor -mavx512 since they won't run
on non-AVX machines.

	[BZ #20384]
	* sysdeps/x86_64/fpu/Makefile (extra-test-objs): Add
	test-double-libmvec-sincos-avx-main.o,
	test-double-libmvec-sincos-avx2-main.o,
	test-double-libmvec-sincos-main.o,
	test-float-libmvec-sincosf-avx-main.o,
	test-float-libmvec-sincosf-avx2-main.o and
	test-float-libmvec-sincosf-main.o.
	test-float-libmvec-sincosf-avx512-main.o.
	($(objpfx)test-double-libmvec-sincos): Also link with
	$(objpfx)test-double-libmvec-sincos-main.o.
	($(objpfx)test-double-libmvec-sincos-avx): Also link with
	$(objpfx)test-double-libmvec-sincos-avx-main.o.
	($(objpfx)test-double-libmvec-sincos-avx2): Also link with
	$(objpfx)test-double-libmvec-sincos-avx2-main.o.
	($(objpfx)test-float-libmvec-sincosf): Also link with
	$(objpfx)test-float-libmvec-sincosf-main.o.
	($(objpfx)test-float-libmvec-sincosf-avx): Also link with
	$(objpfx)test-float-libmvec-sincosf-avx2-main.o.
	[$(config-cflags-avx512) == yes] (extra-test-objs): Add
	test-double-libmvec-sincos-avx512-main.o and
	($(objpfx)test-double-libmvec-sincos-avx512): Also link with
	$(objpfx)test-double-libmvec-sincos-avx512-main.o.
	($(objpfx)test-float-libmvec-sincosf-avx512): Also link with
	$(objpfx)test-float-libmvec-sincosf-avx512-main.o.
	(CFLAGS-test-double-libmvec-sincos.c): Removed.
	(CFLAGS-test-float-libmvec-sincosf.c): Likewise.
	(CFLAGS-test-double-libmvec-sincos-main.c): New.
	(CFLAGS-test-double-libmvec-sincos-avx-main.c): Likewise.
	(CFLAGS-test-double-libmvec-sincos-avx2-main.c): Likewise.
	(CFLAGS-test-float-libmvec-sincosf-main.c): Likewise.
	(CFLAGS-test-float-libmvec-sincosf-avx-main.c): Likewise.
	(CFLAGS-test-float-libmvec-sincosf-avx2-main.c): Likewise.
	(CFLAGS-test-float-libmvec-sincosf-avx512-main.c): Likewise.
	(CFLAGS-test-double-libmvec-sincos-avx.c): Set to -DREQUIRE_AVX.
	(CFLAGS-test-float-libmvec-sincosf-avx.c ): Likewise.
	(CFLAGS-test-double-libmvec-sincos-avx2.c): Set to
	-DREQUIRE_AVX2.
	(CFLAGS-test-float-libmvec-sincosf-avx2.c ): Likewise.
	(CFLAGS-test-double-libmvec-sincos-avx512.c): Set to
	-DREQUIRE_AVX512F.
	(CFLAGS-test-float-libmvec-sincosf-avx512.c): Likewise.
	* sysdeps/x86_64/fpu/test-double-libmvec-sincos.c: Rewritten.
	* sysdeps/x86_64/fpu/test-float-libmvec-sincosf.c: Likewise.
	* sysdeps/x86_64/fpu/test-double-libmvec-sincos-avx-main.c: New
	file.
	* sysdeps/x86_64/fpu/test-double-libmvec-sincos-avx2-main.c:
	Likewise.
	* sysdeps/x86_64/fpu/test-double-libmvec-sincos-avx512-main.c:
	Likewise.
	* sysdeps/x86_64/fpu/test-double-libmvec-sincos-main.c:
	Likewise.
	* sysdeps/x86_64/fpu/test-float-libmvec-sincosf-avx-main.c:
	Likewise.
	* sysdeps/x86_64/fpu/test-float-libmvec-sincosf-avx2-main.c:
	Likewise.
	* sysdeps/x86_64/fpu/test-float-libmvec-sincosf-avx512-main.c:
	Likewise.
	* sysdeps/x86_64/fpu/test-float-libmvec-sincosf-main.c:
	Likewise.
2016-07-27 11:53:15 -07:00
H.J. Lu
61655555aa x86-64: Properly align stack in _dl_tlsdesc_dynamic [BZ #20309]
Since _dl_tlsdesc_dynamic is called via the PLT, we need to add 8 bytes
for the push in the PLT entry to align the stack.

	[BZ #20309]
	* configure.ac (have-mtls-dialect-gnu2): Set to yes if
	-mtls-dialect=gnu2 works.
	* configure: Regenerated.
	* elf/Makefile [have-mtls-dialect-gnu2 = yes]
	(tests): Add tst-gnu2-tls1.
	(modules-names): Add tst-gnu2-tls1mod.
	($(objpfx)tst-gnu2-tls1): New.
	(tst-gnu2-tls1mod.so-no-z-defs): Likewise.
	(CFLAGS-tst-gnu2-tls1mod.c): Likewise.
	* elf/tst-gnu2-tls1.c: New file.
	* elf/tst-gnu2-tls1mod.c: Likewise.
	* sysdeps/x86_64/dl-tlsdesc.S (_dl_tlsdesc_dynamic): Add 8
	bytes for push in the PLT entry to align the stack.
2016-07-12 06:30:08 -07:00
H.J. Lu
f43cb35c9b Require binutils 2.24 to build x86-64 glibc [BZ #20139]
If the assembler doesn't support AVX512DQ, _dl_runtime_resolve_avx is used
to save the first 8 vector registers, which only saves the lower 256
bits of each vector register, for lazy binding.  When it is called on an
AVX512 platform, the upper 256 bits of the ZMM registers are clobbered.
Parameters passed in ZMM registers will be wrong when the function is
called the first time.  This patch requires binutils 2.24, whose assembler
can store and load ZMM registers, to build x86-64 glibc.  Since the mathvec
library needs assembler support for AVX512DQ, we disable mathvec if the
assembler doesn't support AVX512DQ.

	[BZ #20139]
	* config.h.in (HAVE_AVX512_ASM_SUPPORT): Renamed to ...
	(HAVE_AVX512DQ_ASM_SUPPORT): This.
	* sysdeps/x86_64/configure.ac: Require assembler from binutils
	2.24 or above.
	(HAVE_AVX512_ASM_SUPPORT): Removed.
	(HAVE_AVX512DQ_ASM_SUPPORT): New.
	* sysdeps/x86_64/configure: Regenerated.
	* sysdeps/x86_64/dl-trampoline.S: Make HAVE_AVX512_ASM_SUPPORT
	check unconditional.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c: Likewise.
	* sysdeps/x86_64/multiarch/memcpy.S: Likewise.
	* sysdeps/x86_64/multiarch/memcpy_chk.S: Likewise.
	* sysdeps/x86_64/multiarch/memmove-avx512-no-vzeroupper.S:
	Likewise.
	* sysdeps/x86_64/multiarch/memmove-avx512-unaligned-erms.S:
	Likewise.
	* sysdeps/x86_64/multiarch/memmove.S: Likewise.
	* sysdeps/x86_64/multiarch/memmove_chk.S: Likewise.
	* sysdeps/x86_64/multiarch/mempcpy.S: Likewise.
	* sysdeps/x86_64/multiarch/mempcpy_chk.S: Likewise.
	* sysdeps/x86_64/multiarch/memset-avx512-no-vzeroupper.S:
	Likewise.
	* sysdeps/x86_64/multiarch/memset-avx512-unaligned-erms.S:
	Likewise.
	* sysdeps/x86_64/multiarch/memset.S: Likewise.
	* sysdeps/x86_64/multiarch/memset_chk.S: Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_cos8_core_avx512.S: Check
	HAVE_AVX512DQ_ASM_SUPPORT instead of HAVE_AVX512_ASM_SUPPORT.
	* sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core_avx512.S:
	Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_log8_core_avx512.S:
	Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core_avx512.S:
	Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sin8_core_avx512.S:
	Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core_avx512.:
	Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_cosf16_core_avx512.S:
	Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core_avx512.S:
	Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core_avx512.S:
	Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core_avx512.S:
	Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core_avx51:
	Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sinf16_core_avx512.S:
	Likewise.
2016-07-01 06:03:05 -07:00
Andrew Senkevich
ee2196bb67 Fixed the wrong vector sincos/sincosf ABI to make it compatible with
the current vector function declaration "#pragma omp declare simd notinbranch",
according to which vector sincos should take vectors of pointers for its second
and third parameters.  It is fixed by implementing the exported variant as a
wrapper around the version that takes the second and third parameters as
pointers.
    [BZ #20024]
    * sysdeps/x86/fpu/test-math-vector-sincos.h: New.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core_sse4.S: Fixed ABI
    of this implementation of vector function.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core_avx2.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core_avx512.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core_avx512.S:
    Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core_sse4.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core_avx2.S: Likewise.
    * sysdeps/x86_64/fpu/svml_d_sincos2_core.S: Likewise.
    * sysdeps/x86_64/fpu/svml_d_sincos4_core.S: Likewise.
    * sysdeps/x86_64/fpu/svml_d_sincos4_core_avx.S: Likewise.
    * sysdeps/x86_64/fpu/svml_d_sincos8_core.S: Likewise.
    * sysdeps/x86_64/fpu/svml_s_sincosf16_core.S: Likewise.
    * sysdeps/x86_64/fpu/svml_s_sincosf4_core.S: Likewise.
    * sysdeps/x86_64/fpu/svml_s_sincosf8_core.S: Likewise.
    * sysdeps/x86_64/fpu/svml_s_sincosf8_core_avx.S: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen2-wrappers.c: Use another wrapper
    for testing vector sincos with fixed ABI.
    * sysdeps/x86_64/fpu/test-double-vlen4-avx2-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen4-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen8-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen16-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8-avx2-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-libmvec-sincos-avx.c: New test.
    * sysdeps/x86_64/fpu/test-double-libmvec-sincos-avx2.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-libmvec-sincos-avx512.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-libmvec-sincos.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-sincosf-avx.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-sincosf-avx2.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-sincosf-avx512.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-sincosf.c: Likewise.
    * sysdeps/x86_64/fpu/Makefile: Added new tests.
2016-07-01 14:15:38 +03:00
H.J. Lu
13efa86ece Check Prefer_ERMS in memmove/memcpy/mempcpy/memset
Although the Enhanced REP MOVSB/STOSB (ERMS) implementations of memmove,
memcpy, mempcpy and memset aren't used by the current processors, this
patch adds Prefer_ERMS check in memmove, memcpy, mempcpy and memset so
that they can be used in the future.

	* sysdeps/x86/cpu-features.h (bit_arch_Prefer_ERMS): New.
	(index_arch_Prefer_ERMS): Likewise.
	* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Return
	__memcpy_erms for Prefer_ERMS.
	* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
	(__memmove_erms): Enabled for libc.a.
	* sysdeps/x86_64/multiarch/memmove.S (__libc_memmove): Return
	__memmove_erms for Prefer_ERMS.
	* sysdeps/x86_64/multiarch/mempcpy.S (__mempcpy): Return
	__mempcpy_erms for Prefer_ERMS.
	* sysdeps/x86_64/multiarch/memset.S (memset): Return
	__memset_erms for Prefer_ERMS.
2016-06-30 07:58:11 -07:00
Joseph Myers
30dcf959d2 Avoid "inexact" exceptions in i386/x86_64 trunc functions (bug 15479).
As discussed in
<https://sourceware.org/ml/libc-alpha/2016-05/msg00577.html>, TS
18661-1 disallows ceil, floor, round and trunc functions from raising
the "inexact" exception, in accordance with general IEEE 754 semantics
for when that exception is raised.  Fixing this for x87 floating point
is more complicated than for the other versions of these functions,
because they use the frndint instruction that raises "inexact" and
this can only be avoided by saving and restoring the whole
floating-point environment.

As I noted in
<https://sourceware.org/ml/libc-alpha/2016-06/msg00128.html>, I have
now implemented a GCC option -fno-fp-int-builtin-inexact for GCC 7,
such that GCC will inline these functions on x86, without caring about
"inexact", when the default -ffp-int-builtin-inexact is in effect.
This allows users to get optimized code depending on the options they
pass to the compiler, while making the out-of-line functions follow TS
18661-1 semantics and avoid "inexact".

This patch duly fixes the out-of-line trunc function implementations
to avoid "inexact", in the same way as the nearbyint implementations.

I do not know how the performance of implementations such as these
based on saving the environment and changing the rounding mode
temporarily compares to that of the C versions or SSE 4.1 versions (of
course, for 32-bit x86 SSE implementations still need to get the
return value in an x87 register); it's entirely possible other
implementations could be faster in some cases.

Tested for x86_64 and x86.
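
A C-level analogue of the technique used in the .S files (a sketch only;
the x87 code operates on the raw environment directly):

  #include <fenv.h>
  #include <math.h>

  double
  trunc_no_inexact (double x)
  {
    fenv_t env;
    feholdexcept (&env);           /* save environment, clear flags */
    fesetround (FE_TOWARDZERO);
    double r = rint (x);           /* may raise a spurious "inexact" */
    fesetenv (&env);               /* restore modes/flags, discarding it */
    /* The real fix also merges any "invalid" raised for signaling NaNs
       back in before restoring, as noted in the ChangeLog below.  */
    return r;
  }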

	[BZ #15479]
	* sysdeps/i386/fpu/s_trunc.S (__trunc): Save and restore
	floating-point environment rather than just control word.
	* sysdeps/i386/fpu/s_truncf.S (__truncf): Likewise.
	* sysdeps/i386/fpu/s_truncl.S (__truncl): Save and restore
	floating-point environment, with "invalid" exceptions merged in,
	rather than just control word.
	* sysdeps/x86_64/fpu/s_truncl.S (__truncl): Likewise.
	* math/libm-test.inc (trunc_test_data): Do not allow spurious
	"inexact" exceptions.
2016-06-27 17:26:52 +00:00
Joseph Myers
623629de06 Avoid "inexact" exceptions in i386/x86_64 floor functions (bug 15479).
As discussed in
<https://sourceware.org/ml/libc-alpha/2016-05/msg00577.html>, TS
18661-1 disallows ceil, floor, round and trunc functions from raising
the "inexact" exception, in accordance with general IEEE 754 semantics
for when that exception is raised.  Fixing this for x87 floating point
is more complicated than for the other versions of these functions,
because they use the frndint instruction that raises "inexact" and
this can only be avoided by saving and restoring the whole
floating-point environment.

As I noted in
<https://sourceware.org/ml/libc-alpha/2016-06/msg00128.html>, I have
now implemented a GCC option -fno-fp-int-builtin-inexact for GCC 7,
such that GCC will inline these functions on x86, without caring about
"inexact", when the default -ffp-int-builtin-inexact is in effect.
This allows users to get optimized code depending on the options they
pass to the compiler, while making the out-of-line functions follow TS
18661-1 semantics and avoid "inexact".

This patch duly fixes the out-of-line floor function implementations
to avoid "inexact", in the same way as the nearbyint implementations.

I do not know how the performance of implementations such as these
based on saving the environment and changing the rounding mode
temporarily compares to that of the C versions or SSE 4.1 versions (of
course, for 32-bit x86 SSE implementations still need to get the
return value in an x87 register); it's entirely possible other
implementations could be faster in some cases.

Tested for x86_64 and x86.

	[BZ #15479]
	* sysdeps/i386/fpu/s_floor.S (__floor): Save and restore
	floating-point environment rather than just control word.
	* sysdeps/i386/fpu/s_floorf.S (__floorf): Likewise.
	* sysdeps/i386/fpu/s_floorl.S (__floorl): Save and restore
	floating-point environment, with "invalid" exceptions merged in,
	rather than just control word.
	* sysdeps/x86_64/fpu/s_floorl.S (__floorl): Likewise.
	* math/libm-test.inc (floor_test_data): Do not allow spurious
	"inexact" exceptions.
2016-06-27 17:25:47 +00:00
Joseph Myers
26b0bf9600 Avoid "inexact" exceptions in i386/x86_64 ceil functions (bug 15479).
As discussed in
<https://sourceware.org/ml/libc-alpha/2016-05/msg00577.html>, TS
18661-1 disallows ceil, floor, round and trunc functions from raising
the "inexact" exception, in accordance with general IEEE 754 semantics
for when that exception is raised.  Fixing this for x87 floating point
is more complicated than for the other versions of these functions,
because they use the frndint instruction that raises "inexact" and
this can only be avoided by saving and restoring the whole
floating-point environment.

As I noted in
<https://sourceware.org/ml/libc-alpha/2016-06/msg00128.html>, I have
now implemented a GCC option -fno-fp-int-builtin-inexact for GCC 7,
such that GCC will inline these functions on x86, without caring about
"inexact", when the default -ffp-int-builtin-inexact is in effect.
This allows users to get optimized code depending on the options they
pass to the compiler, while making the out-of-line functions follow TS
18661-1 semantics and avoid "inexact".

This patch duly fixes the out-of-line ceil function implementations to
avoid "inexact", in the same way as the nearbyint implementations.

I do not know how the performance of implementations such as these
based on saving the environment and changing the rounding mode
temporarily compares to that of the C versions or SSE 4.1 versions (of
course, for 32-bit x86 SSE implementations still need to get the
return value in an x87 register); it's entirely possible other
implementations could be faster in some cases.

Tested for x86_64 and x86.

	[BZ #15479]
	* sysdeps/i386/fpu/s_ceil.S (__ceil): Save and restore
	floating-point environment rather than just control word.
	* sysdeps/i386/fpu/s_ceilf.S (__ceilf): Likewise.
	* sysdeps/i386/fpu/s_ceill.S (__ceill): Save and restore
	floating-point environment, with "invalid" exceptions merged in,
	rather than just control word.
	* sysdeps/x86_64/fpu/s_ceill.S (__ceill): Likewise.
	* math/libm-test.inc (ceil_test_data): Do not allow spurious
	"inexact" exceptions.
2016-06-27 17:24:30 +00:00
Joseph Myers
40244be372 Fix i386/x86_64 scalbl with sNaN input (bug 20296).
The x86_64 and i386 versions of scalbl return sNaN for some cases of
sNaN input and are missing "invalid" exceptions for other cases.  This
results from overly complicated code that either returns a NaN input,
or discards both inputs when one is NaN and loads a NaN from memory.
This patch fixes this by simplifying the code to add the arguments
when either one is NaN.

Tested for x86_64 and x86.
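
In C terms the NaN path of the fix looks like this (a sketch only; the real
change is in the .S files and scalbl_nan_path is a hypothetical helper with
the normal computation elided):

#include <math.h>

static long double
scalbl_nan_path (long double x, long double y)
{
  if (isnan (x) || isnan (y))
    return x + y;   /* quiets an sNaN and raises "invalid" */
  /* ... the normal scalbl computation would go here ... */
  return x;
}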

	[BZ #20296]
	* sysdeps/i386/fpu/e_scalbl.S (__ieee754_scalbl): Add arguments
	when either argument is a NaN.
	* sysdeps/x86_64/fpu/e_scalbl.S (__ieee754_scalbl): Likewise.
	* math/libm-test.inc (scalb_test_data): Add sNaN tests.
2016-06-23 22:17:41 +00:00
Joseph Myers
4e9bf327ad Simplify x86 nearbyint functions.
The i386 implementations of nearbyint functions, and x86_64
nearbyintl, contain code to mask the "inexact" exception.  However,
the fnstenv instruction has the effect of masking all exceptions, so
this masking code has been redundant since fnstenv was added to those
implementations (by commit 846d9a4a3acdb4939ca7bf6aed48f9f6f26911be;
commit 71d1b0166b added the test
math/test-nearbyint-except-2.c that verifies these functions do work
when called with "inexact" traps enabled); this patch removes the
redundant code.

Tested for x86_64 and x86.

	* sysdeps/i386/fpu/s_nearbyint.S (__nearbyint): Do not mask
	"inexact" exceptions after fnstenv.
	* sysdeps/i386/fpu/s_nearbyintf.S (__nearbyintf): Likewise.
	* sysdeps/i386/fpu/s_nearbyintl.S (__nearbyintl): Likewise.
	* sysdeps/x86_64/fpu/s_nearbyintl.S (__nearbyintl): Likewise.
2016-06-22 15:40:30 +00:00
Andrew Senkevich
df2258c6cb Added tests to ensure linkage through libmvec *_finite aliases which are
defined in libmvec_nonshared.a (bug 19654).

    [BZ #19654]
    * sysdeps/x86_64/fpu/Makefile: Added new tests.
    * sysdeps/x86_64/fpu/test-double-libmvec-alias-avx-main.c: New.
    * sysdeps/x86_64/fpu/test-double-libmvec-alias-avx-mod.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-libmvec-alias-avx.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-libmvec-alias-avx2-main.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-libmvec-alias-avx2-mod.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-libmvec-alias-avx2.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-libmvec-alias-avx512-main.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-libmvec-alias-avx512-mod.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-libmvec-alias-avx512.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-libmvec-alias-main.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-libmvec-alias-mod.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-libmvec-alias.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-alias-avx-main.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-alias-avx-mod.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-alias-avx.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-alias-avx2-main.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-alias-avx2-mod.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-alias-avx2.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-alias-avx512-main.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-alias-avx512-mod.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-alias-avx512.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-alias-main.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-alias-mod.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-alias.c: Likewise.
    * sysdeps/x86_64/fpu/test-libmvec-alias-mod.c: Likewise.
2016-06-20 21:15:50 +03:00
Florian Weimer
aca1daef29 elf: Consolidate machine-agnostic DTV definitions in <dl-dtv.h>
Identical definitions of dtv_t and TLS_DTV_UNALLOCATED were
repeated for all architectures using DTVs.
2016-06-20 14:31:40 +02:00
Joseph Myers
f4015c8a86 Use generic fdim on more architectures (bug 6796, bug 20255, bug 20256).
Some architectures have their own versions of fdim functions, which
are missing errno setting (bug 6796) and may also return sNaN instead
of qNaN for sNaN input, in the case of the x86 / x86_64 long double
versions (bug 20256).

These versions are not actually doing anything that a compiler
couldn't generate, just straightforward comparisons / arithmetic (and,
in the x86 / x86_64 case, testing for NaNs with fxam, which isn't
actually needed once you use an unordered comparison and let the NaNs
pass through the same subtraction as non-NaN inputs).  This patch
removes the x86 / x86_64 / powerpc versions, so that those
architectures use the generic C versions, which correctly handle
setting errno and deal properly with sNaN inputs.  This seems better
than dealing with setting errno in lots of .S versions.

The i386 versions also return results with excess range and precision,
which is not appropriate for a function exactly defined by reference
to IEEE operations.  For errno setting to work correctly on overflow,
it's necessary to remove excess range with math_narrow_eval, which
this patch duly does in the float and double versions so that the
tests can reliably pass on x86.  For float, this avoids any double
rounding issues as the long double precision is more than twice that
of float.  For double, double rounding issues will need to be
addressed separately, so this patch does not fully fix bug 20255.

Tested for x86_64, x86 and powerpc.
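
A rough C sketch of the generic approach (not the exact glibc source;
fdim_sketch is a hypothetical name and math_narrow_eval is omitted):

#include <errno.h>
#include <math.h>

double
fdim_sketch (double x, double y)
{
  if (islessequal (x, y))       /* false for unordered (NaN) operands */
    return 0.0;
  double r = x - y;             /* NaN inputs fall through and are quieted */
  if (isinf (r) && !isinf (x) && !isinf (y))
    errno = ERANGE;             /* overflow */
  return r;
}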

	[BZ #6796]
	[BZ #20255]
	[BZ #20256]
	* math/s_fdim.c: Include <math_private.h>.
	(__fdim): Use math_narrow_eval on result.
	* math/s_fdimf.c: Include <math_private.h>.
	(__fdimf): Use math_narrow_eval on result.
	* sysdeps/i386/fpu/s_fdim.S: Remove file.
	* sysdeps/i386/fpu/s_fdimf.S: Likewise.
	* sysdeps/i386/fpu/s_fdiml.S: Likewise.
	* sysdeps/i386/i686/fpu/s_fdim.S: Likewise.
	* sysdeps/i386/i686/fpu/s_fdimf.S: Likewise.
	* sysdeps/i386/i686/fpu/s_fdiml.S: Likewise.
	* sysdeps/powerpc/fpu/s_fdim.c: Likewise.
	* sysdeps/powerpc/fpu/s_fdimf.c: Likewise.
	* sysdeps/powerpc/powerpc32/fpu/s_fdim.c: Likewise.
	* sysdeps/powerpc/powerpc64/fpu/s_fdim.c: Likewise.
	* sysdeps/x86_64/fpu/s_fdiml.S: Likewise.
	* math/libm-test.inc (fdim_test_data): Expect errno setting on
	overflow.  Add sNaN tests.
2016-06-14 16:04:19 +00:00
Joseph Myers
f00faa4a43 Fix i386/x86_64 log2l (sNaN) (bug 20235).
The i386/x86_64 versions of log2l return sNaN for sNaN input.  This
patch fixes them to add NaN inputs to themselves so that qNaN is
returned in this case.

Tested for x86_64 and x86.

	[BZ #20235]
	* sysdeps/i386/fpu/e_log2l.S (__ieee754_log2l): Add NaN input to
	itself.
	* sysdeps/x86_64/fpu/e_log2l.S (__ieee754_log2l): Likewise.
	* math/libm-test.inc (log2_test_data): Add sNaN tests.
2016-06-09 18:04:30 +00:00
H.J. Lu
ac187dc4ab Always indirect branch to __libc_start_main via GOT
Since __libc_start_main in libc.so is called very early, lazy binding
isn't relevant.  Always call __libc_start_main with indirect branch via
GOT to avoid extra branch to PLT slot.  In case of static executable,
ld in binutils 2.26 or above can convert indirect branch into direct
branch:

0000000000400a80 <_start>:
  400a80:       31 ed                   xor    %ebp,%ebp
  400a82:       49 89 d1                mov    %rdx,%r9
  400a85:       5e                      pop    %rsi
  400a86:       48 89 e2                mov    %rsp,%rdx
  400a89:       48 83 e4 f0             and    $0xfffffffffffffff0,%rsp
  400a8d:       50                      push   %rax
  400a8e:       54                      push   %rsp
  400a8f:       49 c7 c0 20 1b 40 00    mov    $0x401b20,%r8
  400a96:       48 c7 c1 90 1a 40 00    mov    $0x401a90,%rcx
  400a9d:       48 c7 c7 c0 03 40 00    mov    $0x4003c0,%rdi
  400aa4:       67 e8 96 09 00 00       addr32 callq 401440 <__libc_start_main>
  400aaa:       f4                      hlt

	* sysdeps/x86_64/start.S (_start): Always indirect branch to
	__libc_start_main via GOT.
2016-06-09 04:43:31 -07:00
H.J. Lu
75437079e4 X86-64: Add dummy memcopy.h and wordcopy.c
Since x86-64 no longer uses the generic memory copy functions, add dummy
memcopy.h and wordcopy.c to reduce code size.  It reduces the size of
libc.so by
about 1 KB.

	* sysdeps/x86_64/memcopy.h: New file.
	* sysdeps/x86_64/wordcopy.c: Likewise.
2016-06-09 04:38:34 -07:00
Joseph Myers
8c010e2f71 Fix i386/x86_64 log1pl (sNaN) (bug 20229).
The i386/x86_64 versions of log1pl return sNaN for sNaN input.  This
patch fixes them to add a NaN input to itself so that qNaN is returned
in this case.

Tested for x86_64 and x86.

	[BZ #20229]
	* sysdeps/i386/fpu/s_log1pl.S (__log1pl): Add NaN input to itself.
	* sysdeps/x86_64/fpu/s_log1pl.S (__log1pl): Likewise.
	* math/libm-test.inc (log1p_test_data): Add sNaN tests.
2016-06-08 23:11:42 +00:00
Joseph Myers
09096b3615 Fix i386/x86_64 log10l (sNaN) (bug 20228).
The i386/x86_64 versions of log10l return sNaN for sNaN input.  This
patch fixes them to add a NaN input to itself so that qNaN is returned
in this case.

Tested for x86_64 and x86.

	[BZ #20228]
	* sysdeps/i386/fpu/e_log10l.S (__ieee754_log10l): Add NaN input to
	itself.
	* sysdeps/x86_64/fpu/e_log10l.S (__ieee754_log10l): Likewise.
	* math/libm-test.inc (log10_test_data): Add sNaN tests.
2016-06-08 22:59:18 +00:00
Joseph Myers
df179d8808 Fix i386/x86_64 logl (sNaN) (bug 20227).
The i386/x86_64 versions of logl return sNaN for sNaN input.  This
patch fixes them to add a NaN input to itself so that qNaN is returned
in this case.

Tested for x86_64 and x86 (including a build for i586 to cover the
non-i686 logl version).

	[BZ #20227]
	* sysdeps/i386/fpu/e_logl.S (__ieee754_logl): Add NaN input to
	itself.
	* sysdeps/i386/i686/fpu/e_logl.S (__ieee754_logl): Likewise.
	* sysdeps/x86_64/fpu/e_logl.S (__ieee754_logl): Likewise.
	* math/libm-test.inc (log_test_data): Add sNaN tests.
2016-06-08 22:24:06 +00:00
Joseph Myers
9bd3ef8e19 Fix i386/x86_64 expl, exp10l, expm1l for sNaN input (bug 20226).
The i386 and x86_64 implementations of expl, exp10l and expm1l (code
shared between the functions) return sNaN for sNaN input.  This patch
fixes them to add NaN inputs to themselves so that qNaN is returned in
this case.

Tested for x86_64 and x86.

	[BZ #20226]
	* sysdeps/i386/fpu/e_expl.S (IEEE754_EXPL): Add NaN argument to
	itself.
	* sysdeps/x86_64/fpu/e_expl.S (IEEE754_EXPL): Likewise.
	* math/libm-test.inc (exp_test_data): Add sNaN tests.
	(exp10_test_data): Likewise.
	(expm1_test_data): Likewise.
2016-06-08 21:55:06 +00:00
H.J. Lu
c867597bff X86-64: Remove previous default/SSE2/AVX2 memcpy/memmove
Since the new SSE2/AVX2 memcpy/memmove are faster than the previous ones,
we can remove the previous SSE2/AVX2 memcpy/memmove and replace them with
the new ones.

No change in IFUNC selection if SSE2 and AVX2 memcpy/memmove weren't used
before.  If SSE2 or AVX2 memcpy/memmove were used, the new SSE2 or AVX2
memcpy/memmove optimized with Enhanced REP MOVSB will be used for
processors with ERMS.  The new AVX512 memcpy/memmove will be used for
processors with AVX512 which prefer vzeroupper.

Since the new SSE2 memcpy/memmove are faster than the previous default
memcpy/memmove used in libc.a and ld.so, we also remove the previous
default memcpy/memmove and make the new ones the default memcpy/memmove,
except that non-temporal stores aren't used in ld.so.

Together, it reduces the size of libc.so by about 6 KB and the size of
ld.so by about 2 KB.
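
For reference, the Enhanced REP MOVSB idea reduces to a single string
instruction; a minimal C wrapper with GCC inline assembly (a sketch only,
with rep_movsb_copy a hypothetical name; the glibc code also handles small
sizes, overlap and the non-temporal path):

#include <stddef.h>

static void *
rep_movsb_copy (void *dst, const void *src, size_t n)
{
  void *ret = dst;
  /* With ERMS, "rep movsb" is competitive with vector loops for
     medium-sized forward copies.  */
  __asm__ volatile ("rep movsb"
                    : "+D" (dst), "+S" (src), "+c" (n)
                    :
                    : "memory");
  return ret;
}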

	[BZ #19776]
	* sysdeps/x86_64/memcpy.S: Make it dummy.
	* sysdeps/x86_64/mempcpy.S: Likewise.
	* sysdeps/x86_64/memmove.S: New file.
	* sysdeps/x86_64/memmove_chk.S: Likewise.
	* sysdeps/x86_64/multiarch/memmove.S: Likewise.
	* sysdeps/x86_64/multiarch/memmove_chk.S: Likewise.
	* sysdeps/x86_64/memmove.c: Removed.
	* sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S: Likewise.
	* sysdeps/x86_64/multiarch/memcpy-sse2-unaligned.S: Likewise.
	* sysdeps/x86_64/multiarch/memmove-avx-unaligned.S: Likewise.
	* sysdeps/x86_64/multiarch/memmove-sse2-unaligned-erms.S:
	Likewise.
	* sysdeps/x86_64/multiarch/memmove.c: Likewise.
	* sysdeps/x86_64/multiarch/memmove_chk.c: Likewise.
	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Remove
	memcpy-sse2-unaligned, memmove-avx-unaligned,
	memcpy-avx-unaligned and memmove-sse2-unaligned-erms.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Replace
	__memmove_chk_avx512_unaligned_2 with
	__memmove_chk_avx512_unaligned.  Remove
	__memmove_chk_avx_unaligned_2.  Replace
	__memmove_chk_sse2_unaligned_2 with
	__memmove_chk_sse2_unaligned.  Remove __memmove_chk_sse2 and
	__memmove_avx_unaligned_2.  Replace __memmove_avx512_unaligned_2
	with __memmove_avx512_unaligned.  Replace
	__memmove_sse2_unaligned_2 with __memmove_sse2_unaligned.
	Remove __memmove_sse2.  Replace __memcpy_chk_avx512_unaligned_2
	with __memcpy_chk_avx512_unaligned.  Remove
	__memcpy_chk_avx_unaligned_2.  Replace
	__memcpy_chk_sse2_unaligned_2 with __memcpy_chk_sse2_unaligned.
	Remove __memcpy_chk_sse2.  Remove __memcpy_avx_unaligned_2.
	Replace __memcpy_avx512_unaligned_2 with
	__memcpy_avx512_unaligned.  Remove __memcpy_sse2_unaligned_2
	and __memcpy_sse2.  Replace __mempcpy_chk_avx512_unaligned_2
	with __mempcpy_chk_avx512_unaligned.  Remove
	__mempcpy_chk_avx_unaligned_2.  Replace
	__mempcpy_chk_sse2_unaligned_2 with
	__mempcpy_chk_sse2_unaligned.  Remove __mempcpy_chk_sse2.
	Replace __mempcpy_avx512_unaligned_2 with
	__mempcpy_avx512_unaligned.  Remove __mempcpy_avx_unaligned_2.
	Replace __mempcpy_sse2_unaligned_2 with
	__mempcpy_sse2_unaligned.  Remove __mempcpy_sse2.
	* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Support
	__memcpy_avx512_unaligned_erms and __memcpy_avx512_unaligned.
	Use __memcpy_avx_unaligned_erms and __memcpy_sse2_unaligned_erms
	if processor has ERMS.  Default to __memcpy_sse2_unaligned.
	(ENTRY): Removed.
	(END): Likewise.
	(ENTRY_CHK): Likewise.
	(libc_hidden_builtin_def): Likewise.
	Don't include ../memcpy.S.
	* sysdeps/x86_64/multiarch/memcpy_chk.S (__memcpy_chk): Support
	__memcpy_chk_avx512_unaligned_erms and
	__memcpy_chk_avx512_unaligned.  Use
	__memcpy_chk_avx_unaligned_erms and
	__memcpy_chk_sse2_unaligned_erms if processor has ERMS.
	Default to __memcpy_chk_sse2_unaligned.
	* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
	Change function suffix from unaligned_2 to unaligned.
	* sysdeps/x86_64/multiarch/mempcpy.S (__mempcpy): Support
	__mempcpy_avx512_unaligned_erms and __mempcpy_avx512_unaligned.
	Use __mempcpy_avx_unaligned_erms and __mempcpy_sse2_unaligned_erms
	if processor has ERMS.  Default to __mempcpy_sse2_unaligned.
	(ENTRY): Removed.
	(END): Likewise.
	(ENTRY_CHK): Likewise.
	(libc_hidden_builtin_def): Likewise.
	Don't include ../mempcpy.S.
	(mempcpy): New.  Add a weak alias.
	* sysdeps/x86_64/multiarch/mempcpy_chk.S (__mempcpy_chk): Support
	__mempcpy_chk_avx512_unaligned_erms and
	__mempcpy_chk_avx512_unaligned.  Use
	__mempcpy_chk_avx_unaligned_erms and
	__mempcpy_chk_sse2_unaligned_erms if processor has ERMS.
	Default to __mempcpy_chk_sse2_unaligned.
2016-06-08 13:58:08 -07:00
H.J. Lu
5e8c5bb1ac X86-64: Remove the previous SSE2/AVX2 memsets
Since the new SSE2/AVX2 memsets are faster than the previous ones, we
can remove the previous SSE2/AVX2 memsets and replace them with the
new ones.  This reduces the size of libc.so by about 900 bytes.

No change in IFUNC selection if SSE2 and AVX2 memsets weren't used
before.  If SSE2 or AVX2 memset was used, the new SSE2 or AVX2 memset
optimized with Enhanced REP STOSB will be used for processors with
ERMS.  The new AVX512 memset will be used for processors with AVX512
which prefer vzeroupper.

	[BZ #19881]
	* sysdeps/x86_64/multiarch/memset-sse2-unaligned-erms.S: Folded
	into ...
	* sysdeps/x86_64/memset.S: This.
	(__bzero): Removed.
	(__memset_tail): Likewise.
	(__memset_chk): Likewise.
	(memset): Likewise.
	(MEMSET_CHK_SYMBOL): New. Define only if MEMSET_SYMBOL isn't
	defined.
	(MEMSET_SYMBOL): Define only if MEMSET_SYMBOL isn't defined.
	* sysdeps/x86_64/multiarch/memset-avx2.S: Removed.
	(__memset_zero_constant_len_parameter): Check SHARED instead of
	PIC.
	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Remove
	memset-avx2 and memset-sse2-unaligned-erms.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Remove __memset_chk_sse2,
	__memset_chk_avx2, __memset_sse2 and __memset_avx2_unaligned.
	* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
	(__bzero): Enabled.
	* sysdeps/x86_64/multiarch/memset.S (memset): Replace
	__memset_sse2 and __memset_avx2 with __memset_sse2_unaligned
	and __memset_avx2_unaligned.  Use __memset_sse2_unaligned_erms
	or __memset_avx2_unaligned_erms if processor has ERMS.  Support
	__memset_avx512_unaligned_erms and __memset_avx512_unaligned.
	(memset): Removed.
	(__memset_chk): Likewise.
	(MEMSET_SYMBOL): New.
	(libc_hidden_builtin_def): Replace __memset_sse2 with
	__memset_sse2_unaligned.
	* sysdeps/x86_64/multiarch/memset_chk.S (__memset_chk): Replace
	__memset_chk_sse2 and __memset_chk_avx2 with
	__memset_chk_sse2_unaligned and __memset_chk_avx2_unaligned_erms.
	Use __memset_chk_sse2_unaligned_erms or
	__memset_chk_avx2_unaligned_erms if processor has ERMS.  Support
	__memset_chk_avx512_unaligned_erms and
	__memset_chk_avx512_unaligned.
2016-06-08 13:56:14 -07:00
H.J. Lu
3f61232ab3 Fix a typo in comments in memmove-vec-unaligned-erms.S
* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S: Fix
	a typo in comments.
2016-06-06 16:03:21 -07:00
Joseph Myers
5ff81530dd Do not raise "inexact" from x86_64 SSE4.1 ceil, floor (bug 15479).
Continuing fixes for ceil and floor functions not to raise the
"inexact" exception, this patch fixes the x86_64 SSE4.1 versions.  The
roundss / roundsd instructions take an immediate operand that
determines the rounding mode and whether to raise "inexact"; this just
needs bit 3 set to disable "inexact", which this patch does.

Remark: we don't have an SSE4.1 version of trunc / truncf (using this
instruction with operand 11); I'd expect one to make sense, but of
course it should be benchmarked against the existing C code.  I'll
file a bug in Bugzilla for the lack of such a version.

Tested for x86_64.
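
The same effect is visible from C through the SSE4.1 intrinsics, where bit 3
of the rounding immediate is _MM_FROUND_NO_EXC (illustrative only;
ceil_sse41_quiet is a hypothetical name, compile with -msse4.1):

#include <smmintrin.h>

static double
ceil_sse41_quiet (double x)
{
  __m128d v = _mm_set_sd (x);
  /* _MM_FROUND_NO_EXC (0x8) is the "suppress exceptions" bit.  */
  v = _mm_round_sd (v, v, _MM_FROUND_TO_POS_INF | _MM_FROUND_NO_EXC);
  return _mm_cvtsd_f64 (v);
}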

	[BZ #15479]
	* sysdeps/x86_64/fpu/multiarch/s_ceil.S (__ceil_sse41): Set bit 3
	of immediate operand to rounding instruction.
	* sysdeps/x86_64/fpu/multiarch/s_ceilf.S (__ceilf_sse41):
	Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_floor.S (__floor_sse41):
	Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_floorf.S (__floorf_sse41):
	Likewise.
2016-05-24 21:11:18 +00:00
H.J. Lu
6901def689 Avoid an extra branch to PLT for -z now
When --enable-bind-now is used to configure the glibc build, we can avoid
an extra branch to the PLT entry by using an indirect branch via the GOT
slot instead, which is similar to the first instruction in the PLT
entry.  Changes in the shared library sizes in text sections:

Shared library    Before (bytes)   After (bytes)
libm.so             1060813          1060797
libmvec.so           160881           160805
libpthread.so         94992            94984
librt.so              25064            25048

	* config.h.in (BIND_NOW): New.
	* configure.ac (BIND_NOW): New.  Defined for --enable-bind-now.
	* configure: Regenerated.
	* sysdeps/x86_64/sysdep.h (JUMPTARGET)[BIND_NOW]: Defined to
	indirect branch via the GOT slot.
2016-05-24 08:44:23 -07:00
H.J. Lu
eb2c88c7c8 Remove alignments on jump targets in memset
X86-64 memset-vec-unaligned-erms.S aligns many jump targets, which
increases code size but does not necessarily improve performance.  The
memset benchtest data comparing aligned and unaligned jump targets on
various Intel and AMD processors

https://sourceware.org/bugzilla/attachment.cgi?id=9277

shows that aligning jump targets isn't necessary.

	[BZ #20115]
	* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S (__memset):
	Remove alignments on jump targets.
2016-05-19 08:49:55 -07:00
H.J. Lu
4facca0b0e Call init_cpu_features only if SHARED is defined
In a static executable, since init_cpu_features is called early from
__libc_start_main, there is no need to call it again in dl_platform_init.

	[BZ #20072]
	* sysdeps/i386/dl-machine.h (dl_platform_init): Call
	init_cpu_features only if SHARED is defined.
	* sysdeps/x86_64/dl-machine.h (dl_platform_init): Likewise.
2016-05-13 08:29:33 -07:00
H.J. Lu
2a1f15b1a9 Remove x86 ifunc-defines.sym and rtld-global-offsets.sym
Merge x86 ifunc-defines.sym with x86 cpu-features-offsets.sym.  Remove
x86 ifunc-defines.sym and rtld-global-offsets.sym.  No code changes on
i686 and x86-64.

	* sysdeps/i386/i686/multiarch/Makefile (gen-as-const-headers):
	Remove ifunc-defines.sym.
	* sysdeps/x86_64/multiarch/Makefile (gen-as-const-headers):
	Likewise.
	* sysdeps/i386/i686/multiarch/ifunc-defines.sym: Removed.
	* sysdeps/x86/rtld-global-offsets.sym: Likewise.
	* sysdeps/x86_64/multiarch/ifunc-defines.sym: Likewise.
	* sysdeps/x86/Makefile (gen-as-const-headers): Remove
	rtld-global-offsets.sym.
	* sysdeps/x86_64/multiarch/ifunc-defines.sym: Merged with ...
	* sysdeps/x86/cpu-features-offsets.sym: This.
	* sysdeps/x86/cpu-features.h: Include <cpu-features-offsets.h>
	instead of <ifunc-defines.h> and <rtld-global-offsets.h>.
2016-05-11 05:51:39 -07:00
H.J. Lu
a9558b49b3 Move sysdeps/x86_64/cacheinfo.c to sysdeps/x86
Move sysdeps/x86_64/cacheinfo.c to sysdeps/x86.  No code changes on x86
and x86_64.

	* sysdeps/i386/cacheinfo.c: Include <sysdeps/x86/cacheinfo.c>
	instead of <sysdeps/x86_64/cacheinfo.c>.
	* sysdeps/x86_64/cacheinfo.c: Moved to ...
	* sysdeps/x86/cacheinfo.c: Here.
2016-05-08 08:49:18 -07:00
Andreas Schwab
b4bcb3aec6 Register extra test objects
This makes sure that the extra test objects are compiled with the correct
MODULE_NAME and dependencies are tracked.
2016-04-13 17:07:13 +02:00
H.J. Lu
a057f5f8cd X86-64: Use non-temporal store in memcpy on large data
The large memcpy micro benchmark in glibc shows that there is a
regression with large data on Haswell machines.  Non-temporal stores in
memcpy can improve performance significantly for large data.  This
patch adds a threshold for using non-temporal stores, set to 6 times
the shared cache size.  When the size is above the threshold,
non-temporal stores are used, except when destination and source
overlap, since the destination may be in cache when the source is
loaded.

For sizes below 8 vector register widths, we load all data into registers
and store them together.  Only forward and backward loops, which move 4
vector registers at a time, are used to support overlapping addresses.
For the forward loop, we load the last 4 vector register widths of data
and the first vector register width of data into vector registers before
the loop and store them after the loop.  For the backward loop, we load
the first 4 vector register widths of data and the last vector register
width of data into vector registers before the loop and store them after
the loop.
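
A much-simplified C sketch of the streaming-store idea (illustrative only:
stream_copy_aligned is a hypothetical helper that assumes a 16-byte-aligned
destination, a multiple-of-16 size and no overlap):

#include <assert.h>
#include <emmintrin.h>
#include <stddef.h>
#include <stdint.h>

static void
stream_copy_aligned (void *dst, const void *src, size_t n)
{
  assert ((uintptr_t) dst % 16 == 0 && n % 16 == 0);
  for (size_t i = 0; i < n; i += 16)
    {
      __m128i v = _mm_loadu_si128 ((const __m128i *) ((const char *) src + i));
      /* movntdq: bypass the cache so a huge copy does not evict it.  */
      _mm_stream_si128 ((__m128i *) ((char *) dst + i), v);
    }
  _mm_sfence ();   /* order the non-temporal stores */
}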

	[BZ #19928]
	* sysdeps/x86_64/cacheinfo.c (__x86_shared_non_temporal_threshold):
	New.
	(init_cacheinfo): Set __x86_shared_non_temporal_threshold to 6
	times of shared cache size.
	* sysdeps/x86_64/multiarch/memmove-avx-unaligned-erms.S
	(VMOVNT): New.
	* sysdeps/x86_64/multiarch/memmove-avx512-unaligned-erms.S
	(VMOVNT): Likewise.
	* sysdeps/x86_64/multiarch/memmove-sse2-unaligned-erms.S
	(VMOVNT): Likewise.
	(VMOVU): Changed to movups for smaller code sizes.
	(VMOVA): Changed to movaps for smaller code sizes.
	* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S: Update
	comments.
	(PREFETCH): New.
	(PREFETCH_SIZE): Likewise.
	(PREFETCHED_LOAD_SIZE): Likewise.
	(PREFETCH_ONE_SET): Likewise.
	Rewrite to use forward and backward loops, which move 4 vector
	registers at a time, to support overlapping addresses and use
	non temporal store if size is above the threshold and there is
	no overlap between destination and source.
2016-04-12 08:10:47 -07:00
Mike Frysinger
b2d4456b33 configure: fix test == usage
POSIX defines the = operator, but not ==.  Fix the few places where we
incorrectly used ==.
2016-04-09 20:05:13 -04:00
H.J. Lu
a7d1c51482 X86-64: Prepare memmove-vec-unaligned-erms.S
Prepare memmove-vec-unaligned-erms.S to make the SSE2 version the
default memcpy, mempcpy and memmove.

	* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
	(MEMCPY_SYMBOL): New.
	(MEMPCPY_SYMBOL): Likewise.
	(MEMMOVE_CHK_SYMBOL): Likewise.
	Replace MEMMOVE_SYMBOL with MEMMOVE_CHK_SYMBOL on __mempcpy_chk
	symbols.  Replace MEMMOVE_SYMBOL with MEMPCPY_SYMBOL on
	__mempcpy symbols.  Provide alias for __memcpy_chk in libc.a.
	Provide alias for memcpy in libc.a and ld.so.
2016-04-06 10:19:16 -07:00
H.J. Lu
4af1bb06c5 X86-64: Prepare memset-vec-unaligned-erms.S
Prepare memset-vec-unaligned-erms.S to make the SSE2 version the
default memset.

	* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
	(MEMSET_CHK_SYMBOL): New.  Define if not defined.
	(__bzero): Check VEC_SIZE == 16 instead of USE_MULTIARCH.
	Disabled for now.
	Replace MEMSET_SYMBOL with MEMSET_CHK_SYMBOL on __memset_chk
	symbols.  Properly check USE_MULTIARCH on __memset symbols.
2016-04-06 09:10:35 -07:00
H.J. Lu
ec0cac9a1f Force 32-bit displacement in memset-vec-unaligned-erms.S
* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S: Force
	32-bit displacement to avoid long nop between instructions.
2016-04-05 05:21:19 -07:00
H.J. Lu
696ac77484 Add a comment in memset-sse2-unaligned-erms.S
* sysdeps/x86_64/multiarch/memset-sse2-unaligned-erms.S: Add
	a comment on VMOVU and VMOVA.
2016-04-05 05:19:18 -07:00
H.J. Lu
5cd7af016d Don't put SSE2/AVX/AVX512 memmove/memset in ld.so
Since memmove and memset in ld.so don't use IFUNC, don't put SSE2, AVX
and AVX512 memmove and memset in ld.so.

	* sysdeps/x86_64/multiarch/memmove-avx-unaligned-erms.S: Skip
	if not in libc.
	* sysdeps/x86_64/multiarch/memmove-avx512-unaligned-erms.S:
	Likewise.
	* sysdeps/x86_64/multiarch/memset-avx2-unaligned-erms.S:
	Likewise.
	* sysdeps/x86_64/multiarch/memset-avx512-unaligned-erms.S:
	Likewise.
2016-04-03 14:35:38 -07:00
H.J. Lu
ea2785e96f Fix memmove-vec-unaligned-erms.S
__mempcpy_erms and __memmove_erms can't be placed between __memmove_chk
and __memmove because that breaks __memmove_chk.

Don't check source == destination first since it is less common.

	* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:
	(__mempcpy_erms, __memmove_erms): Moved before __mempcpy_chk
	with unaligned_erms.
	(__memmove_erms): Skip if source == destination.
	(__memmove_unaligned_erms): Don't check source == destination
	first.
2016-04-03 12:38:25 -07:00
H.J. Lu
830566307f Add x86-64 memset with unaligned store and rep stosb
Implement x86-64 memset with unaligned store and rep stosb.  Support
16-byte, 32-byte and 64-byte vector register sizes.  A single file
provides 2 implementations of memset, one with rep stosb and the other
without rep stosb.  They share the same code when the size is between 2
times the vector register size and REP_STOSB_THRESHOLD, which defaults
to 2KB.

Key features:

1. Use overlapping stores to avoid branches (see the sketch after this
list).
2. For sizes <= 4 times the vector register size, fully unroll the loop.
3. For sizes > 4 times the vector register size, store 4 times the vector
register size at a time.
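
A minimal C illustration of feature 1, the overlapping-store trick
(set_8_to_16 is a hypothetical helper that only handles lengths from 8 to
16 bytes):

#include <stdint.h>
#include <string.h>

static void
set_8_to_16 (unsigned char *p, unsigned char c, size_t n)
{
  /* Two 8-byte stores cover any length in [8, 16] without a branch
     chain; for n < 16 the stores simply overlap.  */
  uint64_t v = 0x0101010101010101ULL * c;
  memcpy (p, &v, 8);
  memcpy (p + n - 8, &v, 8);
}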

	[BZ #19881]
	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	memset-sse2-unaligned-erms, memset-avx2-unaligned-erms and
	memset-avx512-unaligned-erms.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Test __memset_chk_sse2_unaligned,
	__memset_chk_sse2_unaligned_erms, __memset_chk_avx2_unaligned,
	__memset_chk_avx2_unaligned_erms, __memset_chk_avx512_unaligned,
	__memset_chk_avx512_unaligned_erms, __memset_sse2_unaligned,
	__memset_sse2_unaligned_erms, __memset_erms,
	__memset_avx2_unaligned, __memset_avx2_unaligned_erms,
	__memset_avx512_unaligned_erms and __memset_avx512_unaligned.
	* sysdeps/x86_64/multiarch/memset-avx2-unaligned-erms.S: New
	file.
	* sysdeps/x86_64/multiarch/memset-avx512-unaligned-erms.S:
	Likewise.
	* sysdeps/x86_64/multiarch/memset-sse2-unaligned-erms.S:
	Likewise.
	* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S:
	Likewise.
2016-03-31 10:06:07 -07:00
H.J. Lu
88b57b8ed4 Add x86-64 memmove with unaligned load/store and rep movsb
Implement x86-64 memmove with unaligned load/store and rep movsb.
Support 16-byte, 32-byte and 64-byte vector register sizes.  When the
size is <= 8 times the vector register size, there is no check for
address overlap between source and destination.  Since the overhead of
the overlap check is small when the size is > 8 times the vector register
size, memcpy is an alias of memmove.

A single file provides 2 implementations of memmove, one with rep movsb
and the other without rep movsb.  They share the same code when the size
is between 2 times the vector register size and REP_MOVSB_THRESHOLD,
which is 2KB for the 16-byte vector register size and scaled up for
larger vector register sizes.

Key features:

1. Use overlapping loads and stores to avoid branches.
2. For sizes <= 8 times the vector register size, load all source data
into registers and store it together.
3. If there is no address overlap between source and destination, copy
from both ends, 4 times the vector register size at a time.
4. If the address of the destination > the address of the source, copy
backward 8 times the vector register size at a time.
5. Otherwise, copy forward 8 times the vector register size at a time
(a byte-wise sketch of the direction choice follows this list).
6. Use rep movsb only for forward copies.  Avoid slow backward rep movsb
by falling back to copying backward 8 times the vector register size at
a time.
7. Skip when the address of the destination == the address of the source.
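
A byte-wise C sketch of the direction choice in items 4, 5 and 7
(illustrative only; move_bytes is a hypothetical helper, and the real code
moves whole vector registers and has the rep movsb path):

#include <stddef.h>

static void
move_bytes (unsigned char *dst, const unsigned char *src, size_t n)
{
  if (dst == src || n == 0)
    return;                            /* item 7: nothing to do */
  if (dst < src || dst >= src + n)
    for (size_t i = 0; i < n; i++)     /* no harmful overlap: copy forward */
      dst[i] = src[i];
  else
    for (size_t i = n; i-- > 0; )      /* dst inside [src, src + n): copy backward */
      dst[i] = src[i];
}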

	[BZ #19776]
	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	memmove-sse2-unaligned-erms, memmove-avx-unaligned-erms and
	memmove-avx512-unaligned-erms.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Test
	__memmove_chk_avx512_unaligned_2,
	__memmove_chk_avx512_unaligned_erms,
	__memmove_chk_avx_unaligned_2, __memmove_chk_avx_unaligned_erms,
	__memmove_chk_sse2_unaligned_2,
	__memmove_chk_sse2_unaligned_erms, __memmove_avx_unaligned_2,
	__memmove_avx_unaligned_erms, __memmove_avx512_unaligned_2,
	__memmove_avx512_unaligned_erms, __memmove_erms,
	__memmove_sse2_unaligned_2, __memmove_sse2_unaligned_erms,
	__memcpy_chk_avx512_unaligned_2,
	__memcpy_chk_avx512_unaligned_erms,
	__memcpy_chk_avx_unaligned_2, __memcpy_chk_avx_unaligned_erms,
	__memcpy_chk_sse2_unaligned_2, __memcpy_chk_sse2_unaligned_erms,
	__memcpy_avx_unaligned_2, __memcpy_avx_unaligned_erms,
	__memcpy_avx512_unaligned_2, __memcpy_avx512_unaligned_erms,
	__memcpy_sse2_unaligned_2, __memcpy_sse2_unaligned_erms,
	__memcpy_erms, __mempcpy_chk_avx512_unaligned_2,
	__mempcpy_chk_avx512_unaligned_erms,
	__mempcpy_chk_avx_unaligned_2, __mempcpy_chk_avx_unaligned_erms,
	__mempcpy_chk_sse2_unaligned_2, __mempcpy_chk_sse2_unaligned_erms,
	__mempcpy_avx512_unaligned_2, __mempcpy_avx512_unaligned_erms,
	__mempcpy_avx_unaligned_2, __mempcpy_avx_unaligned_erms,
	__mempcpy_sse2_unaligned_2, __mempcpy_sse2_unaligned_erms and
	__mempcpy_erms.
	* sysdeps/x86_64/multiarch/memmove-avx-unaligned-erms.S: New
	file.
	* sysdeps/x86_64/multiarch/memmove-avx512-unaligned-erms.S:
	Likewise.
	* sysdeps/x86_64/multiarch/memmove-sse2-unaligned-erms.S:
	Likewise.
	* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:
	Likewise.
2016-03-31 10:04:40 -07:00
H.J. Lu
064f01b10b Make __memcpy_avx512_no_vzeroupper an alias
Since x86-64 memcpy-avx512-no-vzeroupper.S implements memmove, make
__memcpy_avx512_no_vzeroupper an alias of __memmove_avx512_no_vzeroupper
to reduce code size of libc.so.

	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Remove
	memcpy-avx512-no-vzeroupper.
	* sysdeps/x86_64/multiarch/memcpy-avx512-no-vzeroupper.S: Renamed
	to ...
	* sysdeps/x86_64/multiarch/memmove-avx512-no-vzeroupper.S: This.
	(MEMCPY): Don't define.
	(MEMCPY_CHK): Likewise.
	(MEMPCPY): Likewise.
	(MEMPCPY_CHK): Likewise.
	(MEMPCPY_CHK): Renamed to ...
	(__mempcpy_chk_avx512_no_vzeroupper): This.
	(MEMPCPY_CHK): Renamed to ...
	(__mempcpy_chk_avx512_no_vzeroupper): This.
	(MEMCPY_CHK): Renamed to ...
	(__memmove_chk_avx512_no_vzeroupper): This.
	(MEMCPY): Renamed to ...
	(__memmove_avx512_no_vzeroupper): This.
	(__memcpy_avx512_no_vzeroupper): New alias.
	(__memcpy_chk_avx512_no_vzeroupper): Likewise.
2016-03-28 13:16:22 -07:00
H.J. Lu
c365e615f7 Implement x86-64 multiarch mempcpy in memcpy
Implement x86-64 multiarch mempcpy in memcpy to share most of the code.
It reduces the code size of libc.so.

	[BZ #18858]
	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Remove
	mempcpy-ssse3, mempcpy-ssse3-back, mempcpy-avx-unaligned
	and mempcpy-avx512-no-vzeroupper.
	* sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S (MEMPCPY_CHK):
	New.
	(MEMPCPY): Likewise.
	* sysdeps/x86_64/multiarch/memcpy-avx512-no-vzeroupper.S
	(MEMPCPY_CHK): New.
	(MEMPCPY): Likewise.
	* sysdeps/x86_64/multiarch/memcpy-ssse3-back.S (MEMPCPY_CHK): New.
	(MEMPCPY): Likewise.
	* sysdeps/x86_64/multiarch/memcpy-ssse3.S (MEMPCPY_CHK): New.
	(MEMPCPY): Likewise.
	* sysdeps/x86_64/multiarch/mempcpy-avx-unaligned.S: Removed.
	* sysdeps/x86_64/multiarch/mempcpy-avx512-no-vzeroupper.S:
	Likewise.
	* sysdeps/x86_64/multiarch/mempcpy-ssse3-back.S: Likewise.
	* sysdeps/x86_64/multiarch/mempcpy-ssse3.S: Likewise.
2016-03-28 13:13:51 -07:00
H.J. Lu
e41b395523 [x86] Add a feature bit: Fast_Unaligned_Copy
On AMD processors, memcpy optimized with unaligned SSE loads is
slower than memcpy optimized with aligned SSSE3 loads, while other
string functions are faster with unaligned SSE loads.  A feature bit,
Fast_Unaligned_Copy, is added to select memcpy optimized with
unaligned SSE loads.

	[BZ #19583]
	* sysdeps/x86/cpu-features.c (init_cpu_features): Set
	Fast_Unaligned_Copy with Fast_Unaligned_Load for Intel
	processors.  Set Fast_Copy_Backward for AMD Excavator
	processors.
	* sysdeps/x86/cpu-features.h (bit_arch_Fast_Unaligned_Copy):
	New.
	(index_arch_Fast_Unaligned_Copy): Likewise.
	* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Check
	Fast_Unaligned_Copy instead of Fast_Unaligned_Load.
2016-03-28 04:40:03 -07:00
Florian Weimer
f327f5b47b tst-audit10: Fix compilation on compilers without bit_AVX512F [BZ #19860]
[BZ #19860]
	* sysdeps/x86_64/tst-audit10.c (avx512_enabled): Always return
	zero if the compiler does not provide the AVX512F bit.
2016-03-25 11:11:42 +01:00
Joseph Myers
c898991d8b Fix x86_64 / x86 powl inaccuracy for integer exponents (bug 19848).
Bug 19848 reports cases where powl on x86 / x86_64 has error
accumulation, for small integer exponents, larger than permitted by
glibc's accuracy goals, at least in some rounding modes.  This patch
further restricts the exponent range for which the
small-integer-exponent logic is used to limit the possible error
accumulation.

Tested for x86_64 and x86 and ulps updated accordingly.

	[BZ #19848]
	* sysdeps/i386/fpu/e_powl.S (p3): Rename to p2 and change value
	from 8 to 4.
	(__ieee754_powl): Compare integer exponent against 4 not 8.
	* sysdeps/x86_64/fpu/e_powl.S (p3): Rename to p2 and change value
	from 8 to 4.
	(__ieee754_powl): Compare integer exponent against 4 not 8.
	* math/auto-libm-test-in: Add more tests of pow.
	* math/auto-libm-test-out: Regenerated.
	* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Update.
	* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
2016-03-24 01:32:52 +00:00
H.J. Lu
3c9a4cd16c Don't set %rcx twice before "rep movsb"
* sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S (MEMCPY):
	Don't set %rcx twice before "rep movsb".
2016-03-22 08:36:16 -07:00
H.J. Lu
86ed888255 Use JUMPTARGET in x86-64 mathvec
When the PLT may be used, JUMPTARGET should be used instead of calling
the function directly.

	* sysdeps/x86_64/fpu/multiarch/svml_d_cos2_core_sse4.S
	(_ZGVbN2v_cos_sse4): Use JUMPTARGET to call cos.
	* sysdeps/x86_64/fpu/multiarch/svml_d_cos4_core_avx2.S
	(_ZGVdN4v_cos_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_cos8_core_avx512.S
	(_ZGVdN4v_cos): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_exp2_core_sse4.S
	(_ZGVbN2v_exp_sse4): Use JUMPTARGET to call exp.
	* sysdeps/x86_64/fpu/multiarch/svml_d_exp4_core_avx2.S
	(_ZGVdN4v_exp_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core_avx512.S
	(_ZGVdN4v_exp): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_log2_core_sse4.S
	(_ZGVbN2v_log_sse4): Use JUMPTARGET to call log.
	* sysdeps/x86_64/fpu/multiarch/svml_d_log4_core_avx2.S
	(_ZGVdN4v_log_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_log8_core_avx512.S
	(_ZGVdN4v_log): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_pow2_core_sse4.S
	(_ZGVbN2vv_pow_sse4): Use JUMPTARGET to call pow.
	* sysdeps/x86_64/fpu/multiarch/svml_d_pow4_core_avx2.S
	(_ZGVdN4vv_pow_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core_avx512.S
	(_ZGVdN4vv_pow): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sin2_core_sse4.S
	(_ZGVbN2v_sin_sse4): Use JUMPTARGET to call sin.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sin4_core_avx2.S
	(_ZGVdN4v_sin_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sin8_core_avx512.S
	(_ZGVdN4v_sin): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core_sse4.S
	(_ZGVbN2vvv_sincos_sse4): Use JUMPTARGET to call sin and cos.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core_avx2.S
	(_ZGVdN4vvv_sincos_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core_avx512.S
	(_ZGVdN4vvv_sincos): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_cosf16_core_avx512.S
	(_ZGVdN8v_cosf): Use JUMPTARGET to call cosf.
	* sysdeps/x86_64/fpu/multiarch/svml_s_cosf4_core_sse4.S
	(_ZGVbN4v_cosf_sse4): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_cosf8_core_avx2.S
	(_ZGVdN8v_cosf_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core_avx512.S
	(_ZGVdN8v_expf): Use JUMPTARGET to call expf.
	* sysdeps/x86_64/fpu/multiarch/svml_s_expf4_core_sse4.S
	(_ZGVbN4v_expf_sse4): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_expf8_core_avx2.S
	(_ZGVdN8v_expf_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core_avx512.S
	(_ZGVdN8v_logf): Use JUMPTARGET to call logf.
	* sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core_sse4.S
	(_ZGVbN4v_logf_sse4): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core_avx2.S
	(_ZGVdN8v_logf_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core_avx512.S
	(_ZGVdN8vv_powf): Use JUMPTARGET to call powf.
	* sysdeps/x86_64/fpu/multiarch/svml_s_powf4_core_sse4.S
	(_ZGVbN4vv_powf_sse4): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_powf8_core_avx2.S
	(_ZGVdN8vv_powf_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core_avx512.S
	(_ZGVdN8vv_powf): Use JUMPTARGET to call sinf and cosf.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core_sse4.S
	(_ZGVbN4vvv_sincosf_sse4): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core_avx2.S
	(_ZGVdN8vvv_sincosf_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sinf16_core_avx512.S
	(_ZGVdN8v_sinf): Use JUMPTARGET to call sinf.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sinf4_core_sse4.S
	(_ZGVbN4v_sinf_sse4): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sinf8_core_avx2.S
	(_ZGVdN8v_sinf_avx2): Likewise.
	* sysdeps/x86_64/fpu/svml_d_wrapper_impl.h (WRAPPER_IMPL_SSE2):
	Use JUMPTARGET to call callee.
	(WRAPPER_IMPL_SSE2_ff): Likewise.
	(WRAPPER_IMPL_SSE2_fFF): Likewise.
	(WRAPPER_IMPL_AVX): Likewise.
	(WRAPPER_IMPL_AVX_ff): Likewise.
	(WRAPPER_IMPL_AVX_fFF): Likewise.
	(WRAPPER_IMPL_AVX512): Likewise.
	(WRAPPER_IMPL_AVX512_ff): Likewise.
	* sysdeps/x86_64/fpu/svml_s_wrapper_impl.h (WRAPPER_IMPL_SSE2):
	Likewise.
	(WRAPPER_IMPL_SSE2_ff): Likewise.
	(WRAPPER_IMPL_SSE2_fFF): Likewise.
	(WRAPPER_IMPL_AVX): Likewise.
	(WRAPPER_IMPL_AVX_ff): Likewise.
	(WRAPPER_IMPL_AVX_fFF): Likewise.
	(WRAPPER_IMPL_AVX512): Likewise.
	(WRAPPER_IMPL_AVX512_ff): Likewise.
	(WRAPPER_IMPL_AVX512_fFF): Likewise.
2016-03-16 14:24:19 -07:00
Roland McGrath
3bd80c0de2 Fix tst-audit10 build when -mavx512f is not supported. 2016-03-08 12:32:59 -08:00
Florian Weimer
3c0f7407ee tst-audit4, tst-audit10: Compile AVX/AVX-512 code separately [BZ #19269]
This ensures that GCC will not use unsupported instructions before
the run-time check to ensure support.
2016-03-07 16:00:25 +01:00
H.J. Lu
fee9eb6200 Group AVX512 functions in .text.avx512 section
* sysdeps/x86_64/multiarch/memcpy-avx512-no-vzeroupper.S:
	Replace .text with .text.avx512.
	* sysdeps/x86_64/multiarch/memset-avx512-no-vzeroupper.S:
	Likewise.
2016-03-06 16:48:11 -08:00
H.J. Lu
16b23e0363 Replace PREINIT_FUNCTION@PLT with *%rax in call
Since we have loaded the address of PREINIT_FUNCTION into %rax, we can
avoid an extra branch to the PLT slot.

	[BZ #19745]
	* sysdeps/x86_64/crti.S (_init): Replace PREINIT_FUNCTION@PLT
	with *%rax in call.
2016-03-04 16:15:41 -08:00
H.J. Lu
21683b5a7d Replace @PLT with @GOTPCREL(%rip) in call
Since __libc_start_main is called very early, lazy binding isn't relevant
here.  Use an indirect branch via the GOT to avoid an extra branch to
the PLT slot.

	[BZ #19745]
	* sysdeps/x86_64/start.S (_start): Replace __libc_start_main@PLT
	with *__libc_start_main@GOTPCREL(%rip) in call.
2016-03-04 16:15:41 -08:00
H.J. Lu
97f7112728 Add a comment in sysdeps/x86_64/Makefile
Mention recursive calls when ENTRY is used in _mcount.S.

	* sysdeps/x86_64/Makefile (sysdep_noprof): Add a comment.
2016-03-04 08:44:58 -08:00
H.J. Lu
14a1d7cc4c x86-64: Fix memcpy IFUNC selection
Check Fast_Unaligned_Load, instead of Slow_BSF, and also check for
Fast_Copy_Backward to enable __memcpy_ssse3_back.  The existing selection
order is updated to the following selection order (a C sketch of this
chain follows the list):

1. __memcpy_avx_unaligned if AVX_Fast_Unaligned_Load bit is set.
2. __memcpy_sse2_unaligned if Fast_Unaligned_Load bit is set.
3. __memcpy_sse2 if SSSE3 isn't available.
4. __memcpy_ssse3_back if Fast_Copy_Backward bit is set.
5. __memcpy_ssse3
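
A C sketch of that decision chain (pick_memcpy is a hypothetical helper;
the boolean parameters stand in for the real cpu-features bits and the
strings for the IFUNC targets):

static const char *
pick_memcpy (int avx_fast_unaligned_load, int fast_unaligned_load,
             int has_ssse3, int fast_copy_backward)
{
  if (avx_fast_unaligned_load)
    return "__memcpy_avx_unaligned";
  if (fast_unaligned_load)
    return "__memcpy_sse2_unaligned";
  if (!has_ssse3)
    return "__memcpy_sse2";
  if (fast_copy_backward)
    return "__memcpy_ssse3_back";
  return "__memcpy_ssse3";
}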

	[BZ #18880]
	* sysdeps/x86_64/multiarch/memcpy.S: Check Fast_Unaligned_Load,
	instead of Slow_BSF, and also check for Fast_Copy_Backward to
	enable __memcpy_ssse3_back.
2016-03-04 08:39:07 -08:00
Paul Pluzhnikov
5cdc3d9db0 2016-03-03 Paul Pluzhnikov <ppluzhnikov@google.com>
[BZ #19490]
	* sysdeps/x86_64/_mcount.S (_mcount): Add unwind descriptor.
	(__fentry__): Likewise
2016-03-03 09:53:49 -08:00
H.J. Lu
87a07a4376 Copy x86_64 _mcount.op from _mcount.o
No need to compile x86_64 _mcount.S with -pg.  We can just copy the
normal static object.

	* gmon/Makefile (noprof): Add $(sysdep_noprof).
	* sysdeps/x86_64/Makefile (sysdep_noprof): Add _mcount.
2016-03-03 06:56:22 -08:00
H.J. Lu
ec215346b9 Call x86-64 __mcount_internal/__sigjmp_save directly
Since __mcount_internal and __sigjmp_save are internal to x86-64 libc.so:

3532: 0000000000104530   289 FUNC    LOCAL  DEFAULT   13 __mcount_internal
3391: 0000000000034170    38 FUNC    LOCAL  DEFAULT   13 __sigjmp_save

they can be called directly without PLT.

	* sysdeps/x86_64/_mcount.S (C_LABEL(_mcount)): Call
	__mcount_internal directly.
	(C_LABEL(__fentry__)): Likewise.
	* sysdeps/x86_64/setjmp.S (__sigsetjmp): Call __sigjmp_save
	directly.
2016-03-01 16:58:07 -08:00
H.J. Lu
8d9c92017d [x86_64] Set DL_RUNTIME_UNALIGNED_VEC_SIZE to 8
Due to GCC bug:

   https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58066

__tls_get_addr may be called with 8-byte stack alignment.  Although
this bug has been fixed in GCC 4.9.4, 5.3 and 6, we can't assume
that the stack will always be aligned to 16 bytes.  Since SSE optimized
memory/string functions with aligned SSE register load and store are
used in the dynamic linker, we must set DL_RUNTIME_UNALIGNED_VEC_SIZE
to 8 so that _dl_runtime_resolve_sse will align the stack before
calling _dl_fixup:

Dump of assembler code for function _dl_runtime_resolve_sse:
   0x00007ffff7deea90 <+0>:	push   %rbx
   0x00007ffff7deea91 <+1>:	mov    %rsp,%rbx
   0x00007ffff7deea94 <+4>:	and    $0xfffffffffffffff0,%rsp
                                ^^^^^^^^^^^ Align stack to 16 bytes
   0x00007ffff7deea98 <+8>:	sub    $0x100,%rsp
   0x00007ffff7deea9f <+15>:	mov    %rax,0xc0(%rsp)
   0x00007ffff7deeaa7 <+23>:	mov    %rcx,0xc8(%rsp)
   0x00007ffff7deeaaf <+31>:	mov    %rdx,0xd0(%rsp)
   0x00007ffff7deeab7 <+39>:	mov    %rsi,0xd8(%rsp)
   0x00007ffff7deeabf <+47>:	mov    %rdi,0xe0(%rsp)
   0x00007ffff7deeac7 <+55>:	mov    %r8,0xe8(%rsp)
   0x00007ffff7deeacf <+63>:	mov    %r9,0xf0(%rsp)
   0x00007ffff7deead7 <+71>:	movaps %xmm0,(%rsp)
   0x00007ffff7deeadb <+75>:	movaps %xmm1,0x10(%rsp)
   0x00007ffff7deeae0 <+80>:	movaps %xmm2,0x20(%rsp)
   0x00007ffff7deeae5 <+85>:	movaps %xmm3,0x30(%rsp)
   0x00007ffff7deeaea <+90>:	movaps %xmm4,0x40(%rsp)
   0x00007ffff7deeaef <+95>:	movaps %xmm5,0x50(%rsp)
   0x00007ffff7deeaf4 <+100>:	movaps %xmm6,0x60(%rsp)
   0x00007ffff7deeaf9 <+105>:	movaps %xmm7,0x70(%rsp)

	[BZ #19679]
	* sysdeps/x86_64/dl-trampoline.S (DL_RUNIME_UNALIGNED_VEC_SIZE):
	Renamed to ...
	(DL_RUNTIME_UNALIGNED_VEC_SIZE): This.  Set to 8.
	(DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
	(DL_RUNTIME_RESOLVE_REALIGN_STACK): This.  Updated.
	(DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
	(DL_RUNTIME_RESOLVE_REALIGN_STACK): This.
	* sysdeps/x86_64/dl-trampoline.h
	(DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
	(DL_RUNTIME_RESOLVE_REALIGN_STACK): This.
2016-02-19 15:45:09 -08:00
Andrew Senkevich
a5df3210a6 Use PIC relocation in ALIAS_IMPL
Since libmvec_nonshared.a may be linked into shared objects, ALIAS_IMPL
should use PIC relocation.

	[BZ #19590]
	* sysdeps/x86_64/fpu/svml_finite_alias.S (ALIAS_IMPL): Use PIC
	relocation.
2016-02-17 14:23:32 -08:00
Paul Pluzhnikov
b274130206 2016-01-20 Paul Pluzhnikov <ppluzhnikov@google.com>
[BZ #19490]
* sysdeps/unix/sysv/linux/x86_64/pthread_cond_broadcast.S (pthread_cond_broadcast): Use ENTRY/END
* sysdeps/unix/sysv/linux/x86_64/pthread_cond_signal.S (pthread_cond_signal): Likewise
* sysdeps/x86_64/nptl/pthread_spin_lock.S (pthread_spin_lock): Likewise
* sysdeps/x86_64/nptl/pthread_spin_trylock.S (pthread_spin_trylock): Likewise
* sysdeps/x86_64/nptl/pthread_spin_unlock.S (pthread_spin_unlock): Likewise
2016-01-20 13:39:20 -08:00
Andrew Senkevich
df782dc690 Fixed build with assembler w/o AVX-512 support.
* sysdeps/x86_64/multiarch/ifunc-impl-list.c: Fixed build with
    assembler not supporting AVX-512.
2016-01-19 14:34:53 +03:00
Andrew Senkevich
214a44f394 Fixed typos in __memcpy_chk.
* sysdeps/x86_64/multiarch/memcpy_chk.S: Fixed typos.
2016-01-16 14:42:26 +03:00
Andrew Senkevich
72276d6e88 Added memcpy/memmove family optimized with AVX512 for KNL hardware.
Added AVX512 implementations of memcpy, mempcpy, memmove, memcpy_chk,
mempcpy_chk, memmove_chk.
It shows an average improvement of more than 30% over the AVX versions
on KNL hardware (performance results in the thread
<https://sourceware.org/ml/libc-alpha/2016-01/msg00258.html>).

    * sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Added new files.
    * sysdeps/x86_64/multiarch/ifunc-impl-list.c: Added new tests.
    * sysdeps/x86_64/multiarch/memcpy-avx512-no-vzeroupper.S: New file.
    * sysdeps/x86_64/multiarch/mempcpy-avx512-no-vzeroupper.S: Likewise.
    * sysdeps/x86_64/multiarch/memmove-avx512-no-vzeroupper.S: Likewise.
    * sysdeps/x86_64/multiarch/memcpy.S: Added new IFUNC branch.
    * sysdeps/x86_64/multiarch/memcpy_chk.S: Likewise.
    * sysdeps/x86_64/multiarch/memmove.c: Likewise.
    * sysdeps/x86_64/multiarch/memmove_chk.c: Likewise.
    * sysdeps/x86_64/multiarch/mempcpy.S: Likewise.
    * sysdeps/x86_64/multiarch/mempcpy_chk.S: Likewise.
2016-01-16 00:49:45 +03:00
Joseph Myers
f7a9f785e5 Update copyright dates with scripts/update-copyrights. 2016-01-04 16:05:18 +00:00
Andrew Senkevich
83d776f979 Added memset optimized with AVX512 for KNL hardware.
It shows an improvement of up to 28% over the AVX2 memset (performance
results attached at <https://sourceware.org/ml/libc-alpha/2015-12/msg00052.html>).

    * sysdeps/x86_64/multiarch/memset-avx512-no-vzeroupper.S: New file.
    * sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Added new file.
    * sysdeps/x86_64/multiarch/ifunc-impl-list.c: Added new tests.
    * sysdeps/x86_64/multiarch/memset.S: Added new IFUNC branch.
    * sysdeps/x86_64/multiarch/memset_chk.S: Likewise.
    * sysdeps/x86/cpu-features.h (bit_Prefer_No_VZEROUPPER,
    index_Prefer_No_VZEROUPPER): New.
    * sysdeps/x86/cpu-features.c (init_cpu_features): Set the
    Prefer_No_VZEROUPPER for Knights Landing.
2015-12-19 02:47:28 +03:00
Andrew Senkevich
977a30801f Better workaround for aliases of *_finite symbols in vector math library.
The old workaround based on assembly aliases can lead to a link failure
(bug 19058).  This patch implements the workaround in another way to
avoid that.

    [BZ #19058]
    * math/Makefile ($(inst_libdir)/libm.so): Added libmvec_nonshared.a
    to AS_NEEDED.
    * sysdeps/x86/fpu/bits/math-vector.h: Removed code with old workaround.
    * sysdeps/x86_64/fpu/Makefile (libmvec-support,
    libmvec-static-only-routines): Added new file.
    * sysdeps/x86_64/fpu/svml_finite_alias.S: New file.
2015-11-27 16:22:26 +03:00
Joseph Myers
01189b083b Fix i386/x86_64 log* (1) zero sign for -ffinite-math-only (bug 19213).
For the -ffinite-math-only versions of various x86_64 and x86 log*
functions, a zero result from log* (1) is returned with incorrect sign
in round-downward mode.  This patch fixes this in a similar way to the
previous fixes for the non-*_finite versions of the functions.

Tested for x86_64 and x86 (including an i586 build), together with a
patch that will be applied separately to enable the main libm-test.inc
tests for the finite-math-only functions.
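
The property being fixed can be checked with a small test (a sketch, not
part of the patch; build with -ffinite-math-only so the *_finite entry
point is used, and link with -lm):

#include <fenv.h>
#include <math.h>
#include <stdio.h>

int
main (void)
{
  fesetround (FE_DOWNWARD);
  volatile double one = 1.0;    /* keep the call from being folded away */
  double r = log (one);
  printf ("log (1) in FE_DOWNWARD: %g, signbit %d\n", r, signbit (r) != 0);
  return signbit (r) != 0;      /* the result must be +0 */
}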

	[BZ #19213]
	* sysdeps/i386/fpu/e_log.S (__log_finite): Ensure +0 is always
	returned for argument 1.
	* sysdeps/i386/fpu/e_logf.S (__logf_finite): Likewise.
	* sysdeps/i386/fpu/e_logl.S (__logl_finite): Likewise.
	* sysdeps/i386/i686/fpu/e_logl.S (__logl_finite): Likewise.
	* sysdeps/x86_64/fpu/e_log10l.S (__log10l_finite): Likewise.
	* sysdeps/x86_64/fpu/e_log2l.S (__log2l_finite): Likewise.
	* sysdeps/x86_64/fpu/e_logl.S (__logl_finite): Likewise.
2015-11-05 21:56:31 +00:00
Joseph Myers
9f9f27248b Remove miscellaneous GCC >= 4.7 version conditionals.
This patch removes miscellaneous __GNUC_PREREQ (4, 7) conditionals
that are now dead.

Tested for x86_64 and x86 (testsuite, and that installed stripped
shared libraries are unchanged by the patch).

	* sysdeps/arm/atomic-machine.h
	[__GNUC_PREREQ (4, 7) && __GCC_HAVE_SYNC_COMPARE_AND_SWAP_4]:
	Change conditional to [__GCC_HAVE_SYNC_COMPARE_AND_SWAP_4].
	[__GCC_HAVE_SYNC_COMPARE_AND_SWAP_4 && !__GNUC_PREREQ (4, 7)]:
	Remove conditional code.
	[!__GNUC_PREREQ (4, 7) || !__GCC_HAVE_SYNC_COMPARE_AND_SWAP_4]:
	Change conditional to [!__GCC_HAVE_SYNC_COMPARE_AND_SWAP_4].
	* sysdeps/i386/sysdep.h [__ASSEMBLER__ && __GNUC_PREREQ (4, 7)]:
	Change conditional to [__ASSEMBLER__].
	[__ASSEMBLER__ && !__GNUC_PREREQ (4, 7)]: Remove conditional code.
	[!__ASSEMBLER__ && __GNUC_PREREQ (4, 7)]: Change conditional to
	[!__ASSEMBLER__].
	[!__ASSEMBLER__ && !__GNUC_PREREQ (4, 7)]: Remove conditional
	code.
	* sysdeps/unix/sysv/linux/sh/atomic-machine.h (rNOSP): Remove
	conditional macro definitions.
	(__arch_compare_and_exchange_val_8_acq): Use "u" instead of rNOSP.
	(__arch_compare_and_exchange_val_16_acq): Likewise.
	(__arch_compare_and_exchange_val_32_acq): Likewise.
	(atomic_exchange_and_add): Likewise.
	(atomic_add): Likewise.
	(atomic_add_negative): Likewise.
	(atomic_add_zero): Likewise.
	(atomic_bit_set): Likewise.
	(atomic_bit_test_set): Likewise.
	* sysdeps/x86_64/atomic-machine.h [__GNUC_PREREQ (4, 7)]: Make
	code unconditional.
	[!__GNUC_PREREQ (4, 7)]: Remove conditional code.
2015-11-04 21:34:36 +00:00
Joseph Myers
91bcb95ad4 Remove cpuid.h configure tests.
There are configure tests for the cpuid.h header for x86 / x86_64.
GCC 4.3 and later install this header, so those tests are obsolete.
This patch removes them.

Tested for x86_64 and x86 (testsuite, and that installed shared
libraries are unchanged by the patch).
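
Since GCC 4.3 the header can simply be used unconditionally; a minimal
example (not from the patch):

#include <cpuid.h>
#include <stdio.h>

int
main (void)
{
  unsigned int eax, ebx, ecx, edx;
  /* __get_cpuid returns 0 if the requested leaf is not supported.  */
  if (__get_cpuid (1, &eax, &ebx, &ecx, &edx))
    printf ("SSE4.1: %s\n", (ecx & bit_SSE4_1) ? "yes" : "no");
  return 0;
}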

	* sysdeps/i386/configure.ac (cpuid.h): Do not test for header.
	* sysdeps/i386/configure: Regenerated.
	* sysdeps/x86_64/configure.ac (cpuid.h): Do not test for header.
	* sysdeps/x86_64/configure: Regenerated.
2015-10-29 14:43:46 +00:00
Joseph Myers
2145f97cee Handle more state in i386/x86_64 fesetenv (bug 16068).
fenv_t should include architecture-specific floating-point modes and
status flags.  i386 and x86_64 fesetenv limit which bits they use from
the x87 status and control words, when using saved state, and limit
which parts of the state they set to fixed values, when using
FE_DFL_ENV / FE_NOMASK_ENV.  The following should be included but are
excluded in at least some cases: status and masking for the "denormal
operand" exception (which isn't part of FE_ALL_EXCEPT); precision
control (explicitly mentioned in Annex F as something that counts as
part of the floating-point environment); MXCSR FZ and DAZ bits (for
FE_DFL_ENV and FE_NOMASK_ENV).  This patch arranges for this extra
state to be handled by fesetenv (and thereby by feupdateenv, which
calls fesetenv).

(Note that glibc functions using floating point are not generally
expected to work correctly with non-default values of this state,
especially precision control, but it is still logically part of the
floating-point environment and should be handled as such by fesetenv.
Changes to the state relating to subnormals ought generally to work
with libm functions when the arguments aren't subnormal and neither
are the expected results; that's a consequence of functions avoiding
spurious internal underflows.)

A question arising from this is whether FE_NOMASK_ENV should or should
not mask the "denormal operand" exception.  I decided it should mask
that exception.  This is the status quo - previously that exception
could only be unmasked by direct manipulation of control registers
(possibly via <fpu_control.h>).  In addition, it means that use of
FE_NOMASK_ENV leaves a floating-point environment the same as could be
obtained by fesetenv (FE_DFL_ENV); feenableexcept (FE_ALL_EXCEPT);,
rather than an environment in which an exception is unmasked that
could only be masked again by using fesetenv with FE_DFL_ENV (or a
previously saved environment) - this exception not being usable with
other <fenv.h> functions because it's outside FE_ALL_EXCEPT.

Tested for x86_64 and x86.
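
As a rough sketch (not the glibc code itself) of what handling the extra
SSE state means: DAZ is MXCSR bit 6, FZ is bit 15, and the "denormal
operand" mask is bit 8, so a default-environment setter has to do
something along these lines (shown with the <xmmintrin.h> intrinsics;
the real __fesetenv edits the mxcsr word stored in the fenv_t before
loading it):

  #include <xmmintrin.h>

  /* Hypothetical sketch: put the SSE part of the state into its default
     configuration - every exception masked (including the x86-only
     "denormal operand" one) and the FZ and DAZ bits cleared.  */
  static void
  set_default_sse_state (void)
  {
    unsigned int mxcsr = _mm_getcsr ();
    mxcsr |= 0x1f80;            /* Set mask bits 7-12: all exceptions masked.  */
    mxcsr &= ~0x8040u;          /* Clear FZ (bit 15) and DAZ (bit 6).  */
    _mm_setcsr (mxcsr);
  }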

	[BZ #16068]
	* sysdeps/i386/fpu/fesetenv.c: Include <fpu_control.h>.
	(FE_ALL_EXCEPT_X86): New macro.
	(__fesetenv): Use FE_ALL_EXCEPT_X86 in most places instead of
	FE_ALL_EXCEPT.  Ensure precision control is included in
	floating-point state.  Ensure that FE_DFL_ENV and FE_NOMASK_ENV
	handle "denormal operand exception" and clear FZ and DAZ bits.
	* sysdeps/x86_64/fpu/fesetenv.c: Include <fpu_control.h>.
	(FE_ALL_EXCEPT_X86): New macro.
	(__fesetenv): Use FE_ALL_EXCEPT_X86 in most places instead of
	FE_ALL_EXCEPT.  Ensure precision control is included in
	floating-point state.  Ensure that FE_DFL_ENV and FE_NOMASK_ENV
	handle "denormal operand exception" and clear FZ and DAZ bits.
	* sysdeps/x86/fpu/test-fenv-sse-2.c: New file.
	* sysdeps/x86/fpu/test-fenv-x87.c: Likewise.
	* sysdeps/x86/fpu/Makefile [$(subdir) = math] (tests): Add
	test-fenv-x87 and test-fenv-sse-2.
	[$(subdir) = math] (CFLAGS-test-fenv-sse-2.c): New variable.
2015-10-28 22:58:29 +00:00
Joseph Myers
0b9af583a5 Fix i386/x86_64 fesetenv SSE exception clearing (bug 19181).
The i386 and x86_64 versions of fesetenv, when called with FE_DFL_ENV
or FE_NOMASK_ENV as argument, do not clear SSE exceptions raised in
MXCSR.  These arguments should, like other fenv_t values, represent
the whole of the floating-point state, so such exceptions should be
cleared; this patch adds the required clearing.  (Discovered while
working on bug 16068.)

Tested for x86_64 and x86.
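
A minimal reproducer in the spirit of the new tests (simplified; the
real tests arrange for the exception to be raised in SSE state
specifically):

  #include <fenv.h>
  #include <stdio.h>

  int
  main (void)
  {
    feraiseexcept (FE_DIVBYZERO);
    fesetenv (FE_DFL_ENV);
    /* Before the fix an exception sitting in MXCSR could survive the
       fesetenv (FE_DFL_ENV) call; afterwards it must be cleared.  */
    int ok = fetestexcept (FE_DIVBYZERO) == 0;
    puts (ok ? "PASS" : "FAIL");
    return !ok;
  }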

	[BZ #19181]
	* sysdeps/i386/fpu/fesetenv.c (__fesetenv): Clear already-raised
	SSE exceptions when argument is FE_DFL_ENV or FE_NOMASK_ENV.
	* sysdeps/x86_64/fpu/fesetenv.c (__fesetenv): Likewise.
	* math/test-fenv-clear-main.c: New file.
	* math/test-fenv-clear.c: Likewise.
	* math/Makefile (tests): Add test-fenv-clear.
	* sysdeps/x86/fpu/test-fenv-clear-sse.c: New file.
	* sysdeps/x86/fpu/Makefile [$(subdir) = math] (tests): Add
	test-fenv-clear-sse.
	[$(subdir) = math] (CFLAGS-test-fenv-clear-sse.c): New variable.
2015-10-28 18:50:20 +00:00
Joseph Myers
c871b9b096 Remove -mavx2 configure tests.
There are configure tests for the -mavx2 compiler option.  AVX2
support was added in GCC 4.7, so these tests are now obsolete; this
patch removes them.

Tested for x86_64 and x86 (testsuite, and that installed stripped
shared libraries are unchanged by the patch).

	* sysdeps/i386/configure.ac (libc_cv_cc_avx2): Remove configure
	test.
	* sysdeps/i386/configure: Regenerated.
	* sysdeps/x86_64/configure.ac (libc_cv_cc_avx2): Remove configure
	test.
	* sysdeps/x86_64/configure: Regenerated.
	* config.h.in (HAVE_AVX2_SUPPORT): Remove #undef.
	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	memset-avx2 unconditionally instead of conditionally on
	[$(config-cflags-avx2) = yes].
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list) [HAVE_AVX2_SUPPORT]: Make code
	unconditional.
	* sysdeps/x86_64/multiarch/memset.S [HAVE_AVX2_SUPPORT]: Likewise.
	* sysdeps/x86_64/multiarch/memset_chk.S
	[IS_IN (libc) && SHARED && HAVE_AVX2_SUPPORT]: Change conditional
	to [IS_IN (libc) && SHARED].
2015-10-28 13:29:03 +00:00
Florian Weimer
e39edb0552 x86_64: Regenerate ulps [BZ #19168]
This comes from running “make regen-ulps” on AMD Opteron 6272 CPUs.
2015-10-26 13:15:48 +01:00
Joseph Myers
9d1687b2df Add more libm tests (ilogb, is*, j0, j1, jn, lgamma, log*).
This patch improves the libm test coverage for a few more functions.

Tested for x86_64 and x86.

	* math/auto-libm-test-in: Add more tests of log, log10, log1p and
	log2.
	* math/auto-libm-test-out: Regenerated.
	* math/libm-test.inc (MAX_EXP): New macro.
	(ilogb_test_data): Add more tests.
	(isfinite_test_data): Likewise.
	(isgreater_test_data): Likewise.
	(isgreaterequal_test_data): Likewise.
	(isinf_test_data): Likewise.
	(isless_test_data): Likewise.
	(islessequal_test_data): Likewise.
	(islessgreater_test_data): Likewise.
	(isnan_test_data): Likewise.
	(isnormal_test_data): Likewise.
	(issignaling_test_data): Likewise.
	(isunordered_test_data): Likewise.
	(j0_test_data): Likewise.
	(j1_test_data): Likewise.
	(jn_test_data): Likewise.
	(lgamma_test_data): Likewise.
	(log_test_data): Likewise.
	(log10_test_data): Likewise.
	(log1p_test_data): Likewise.
	(log2_test_data): Likewise.
	(logb_test_data): Likewise.
	* sysdeps/x86_64/fpu/libm-test-ulps: Update.
2015-10-23 22:46:05 +00:00
Joseph Myers
846d9a4a3a Fix i386 / x86_64 nearbyint exception clearing (bug 15491).
The implementations of nearbyint functions using x87 floating point
(i386 all versions, x86_64 long double only) use the fclex
instruction, which clears any exceptions that were raised before the
function was called.  These functions must not clear exceptions that
were raised before they were called.

This patch fixes these functions to save and restore the whole
floating-point environment (fnstenv / fldenv) as the way of avoiding
raising "inexact" (recall that there isn't an x87 instruction for
loading just the status word, so the whole environment has to be saved
and loaded instead - the code already saved and loaded the control
word, which is now obtained from the saved environment after this
patch, to disable traps on "inexact").  In the case of the long double
functions, any "invalid" exception from frndint (applied to a
signaling NaN) needs merging into the saved state; this issue doesn't
apply to the float and double functions because that exception would
have been raised when the argument is loaded, before the environment
is saved.
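
In C terms the fixed approach corresponds roughly to the usual
feholdexcept/fesetenv pattern rather than clearing exceptions at the
end; an illustrative sketch only (the real code is x87 assembly, and
the long double variant additionally merges any "invalid" from frndint
back in):

  #include <fenv.h>
  #include <math.h>

  static double
  nearbyint_sketch (double x)
  {
    fenv_t env;
    feholdexcept (&env);        /* Save the environment, go non-stop.  */
    double result = rint (x);   /* May raise "inexact".  */
    fesetenv (&env);            /* Discards the new "inexact" but keeps
                                   anything raised before the call.  */
    return result;
  }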

	[BZ #15491]
	* sysdeps/i386/fpu/s_nearbyint.S (__nearbyint): Save and restore
	floating-point environment instead of clearing all exceptions.
	* sysdeps/i386/fpu/s_nearbyintf.S (__nearbyintf): Likewise.
	* sysdeps/i386/fpu/s_nearbyintl.S (__nearbyintl): Likewise,
	merging in "invalid" exceptions from frndint.
	* sysdeps/x86_64/fpu/s_nearbyintl.S (__nearbyintl): Likewise.
	* math/test-nearbyint-except.c: New file.
	* math/Makefile (tests): Add test-nearbyint-except.
2015-10-22 23:14:55 +00:00
Joseph Myers
bd2260a206 Convert 231 sysdeps function definitions to prototype style.
This mostly automatically-generated patch converts 231 sysdeps
function definitions in glibc from old-style K&R to prototype-style.

For __aio_sigqueue and __gai_sigqueue I had to add internal_function
to the definitions as noted by Florian in
<https://sourceware.org/ml/libc-alpha/2015-10/msg00595.html> to keep
the functions compiling on x86 after conversion to prototype
definitions.  Otherwise, the patch is automatically generated with all
the same exclusions and caveats as in
<https://sourceware.org/ml/libc-alpha/2015-10/msg00594.html> except
that it's a patch for sysdeps files.

Tested for x86_64 and x86 (testsuite, and that installed stripped
shared libraries are unchanged by the patch).  Also tested for arm,
mips64 and powerpc32 that installed stripped shared libraries are
unchanged by the patch.
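
For reference, the mechanical shape of each conversion, shown on a
made-up function rather than one from the patch (the conversion
replaces the former form with the latter):

  /* Old-style (K&R) definition, as previously found in sysdeps files: */
  int
  add_one (x)
       int x;
  {
    return x + 1;
  }

  /* Prototype-style definition produced by the conversion: */
  int
  add_one (int x)
  {
    return x + 1;
  }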

	* sysdeps/arm/backtrace.c (__backtrace): Convert to
	prototype-style function definition.
	* sysdeps/i386/backtrace.c (__backtrace): Likewise.
	* sysdeps/i386/ffs.c (__ffs): Likewise.
	* sysdeps/i386/i686/ffs.c (__ffs): Likewise.
	* sysdeps/ia64/nptl/pthread_spin_lock.c (pthread_spin_lock):
	Likewise.
	* sysdeps/ia64/nptl/pthread_spin_trylock.c (pthread_spin_trylock):
	Likewise.
	* sysdeps/ieee754/ldbl-128/e_log2l.c (__ieee754_log2l): Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_log2l.c (__ieee754_log2l):
	Likewise.
	* sysdeps/m68k/ffs.c (__ffs): Likewise.
	* sysdeps/m68k/m680x0/fpu/e_acos.c (FUNC): Likewise.
	* sysdeps/m68k/m680x0/fpu/e_fmod.c (FUNC): Likewise.
	* sysdeps/mach/adjtime.c (__adjtime): Likewise.
	* sysdeps/mach/gettimeofday.c (__gettimeofday): Likewise.
	* sysdeps/mach/hurd/_exit.c (_exit): Likewise.
	* sysdeps/mach/hurd/access.c (__access): Likewise.
	* sysdeps/mach/hurd/adjtime.c (__adjtime): Likewise.
	* sysdeps/mach/hurd/chdir.c (__chdir): Likewise.
	* sysdeps/mach/hurd/chmod.c (__chmod): Likewise.
	* sysdeps/mach/hurd/chown.c (__chown): Likewise.
	* sysdeps/mach/hurd/cthreads.c (cthread_keycreate): Likewise.
	(cthread_getspecific): Likewise.
	(cthread_setspecific): Likewise.
	(__libc_getspecific): Likewise.
	* sysdeps/mach/hurd/euidaccess.c (__euidaccess): Likewise.
	* sysdeps/mach/hurd/faccessat.c (faccessat): Likewise.
	* sysdeps/mach/hurd/fchdir.c (__fchdir): Likewise.
	* sysdeps/mach/hurd/fchmod.c (__fchmod): Likewise.
	* sysdeps/mach/hurd/fchmodat.c (fchmodat): Likewise.
	* sysdeps/mach/hurd/fchown.c (__fchown): Likewise.
	* sysdeps/mach/hurd/fchownat.c (fchownat): Likewise.
	* sysdeps/mach/hurd/flock.c (__flock): Likewise.
	* sysdeps/mach/hurd/fsync.c (fsync): Likewise.
	* sysdeps/mach/hurd/ftruncate.c (__ftruncate): Likewise.
	* sysdeps/mach/hurd/getgroups.c (__getgroups): Likewise.
	* sysdeps/mach/hurd/gethostname.c (__gethostname): Likewise.
	* sysdeps/mach/hurd/getitimer.c (__getitimer): Likewise.
	* sysdeps/mach/hurd/getlogin_r.c (__getlogin_r): Likewise.
	* sysdeps/mach/hurd/getpgid.c (__getpgid): Likewise.
	* sysdeps/mach/hurd/getrusage.c (__getrusage): Likewise.
	* sysdeps/mach/hurd/getsockname.c (__getsockname): Likewise.
	* sysdeps/mach/hurd/group_member.c (__group_member): Likewise.
	* sysdeps/mach/hurd/isatty.c (__isatty): Likewise.
	* sysdeps/mach/hurd/lchown.c (__lchown): Likewise.
	* sysdeps/mach/hurd/link.c (__link): Likewise.
	* sysdeps/mach/hurd/linkat.c (linkat): Likewise.
	* sysdeps/mach/hurd/listen.c (__listen): Likewise.
	* sysdeps/mach/hurd/mkdir.c (__mkdir): Likewise.
	* sysdeps/mach/hurd/mkdirat.c (mkdirat): Likewise.
	* sysdeps/mach/hurd/openat.c (__openat): Likewise.
	* sysdeps/mach/hurd/poll.c (__poll): Likewise.
	* sysdeps/mach/hurd/readlink.c (__readlink): Likewise.
	* sysdeps/mach/hurd/readlinkat.c (readlinkat): Likewise.
	* sysdeps/mach/hurd/recv.c (__recv): Likewise.
	* sysdeps/mach/hurd/rename.c (rename): Likewise.
	* sysdeps/mach/hurd/renameat.c (renameat): Likewise.
	* sysdeps/mach/hurd/revoke.c (revoke): Likewise.
	* sysdeps/mach/hurd/rewinddir.c (__rewinddir): Likewise.
	* sysdeps/mach/hurd/rmdir.c (__rmdir): Likewise.
	* sysdeps/mach/hurd/seekdir.c (seekdir): Likewise.
	* sysdeps/mach/hurd/send.c (__send): Likewise.
	* sysdeps/mach/hurd/setdomain.c (setdomainname): Likewise.
	* sysdeps/mach/hurd/setegid.c (setegid): Likewise.
	* sysdeps/mach/hurd/seteuid.c (seteuid): Likewise.
	* sysdeps/mach/hurd/setgid.c (__setgid): Likewise.
	* sysdeps/mach/hurd/setgroups.c (setgroups): Likewise.
	* sysdeps/mach/hurd/sethostid.c (sethostid): Likewise.
	* sysdeps/mach/hurd/sethostname.c (sethostname): Likewise.
	* sysdeps/mach/hurd/setlogin.c (setlogin): Likewise.
	* sysdeps/mach/hurd/setpgid.c (__setpgid): Likewise.
	* sysdeps/mach/hurd/setregid.c (__setregid): Likewise.
	* sysdeps/mach/hurd/setreuid.c (__setreuid): Likewise.
	* sysdeps/mach/hurd/settimeofday.c (__settimeofday): Likewise.
	* sysdeps/mach/hurd/setuid.c (__setuid): Likewise.
	* sysdeps/mach/hurd/shutdown.c (shutdown): Likewise.
	* sysdeps/mach/hurd/sigaction.c (__sigaction): Likewise.
	* sysdeps/mach/hurd/sigaltstack.c (__sigaltstack): Likewise.
	* sysdeps/mach/hurd/sigpending.c (sigpending): Likewise.
	* sysdeps/mach/hurd/sigprocmask.c (__sigprocmask): Likewise.
	* sysdeps/mach/hurd/sigsuspend.c (__sigsuspend): Likewise.
	* sysdeps/mach/hurd/socket.c (__socket): Likewise.
	* sysdeps/mach/hurd/symlink.c (__symlink): Likewise.
	* sysdeps/mach/hurd/symlinkat.c (symlinkat): Likewise.
	* sysdeps/mach/hurd/telldir.c (telldir): Likewise.
	* sysdeps/mach/hurd/truncate.c (__truncate): Likewise.
	* sysdeps/mach/hurd/umask.c (__umask): Likewise.
	* sysdeps/mach/hurd/unlink.c (__unlink): Likewise.
	* sysdeps/mach/hurd/unlinkat.c (unlinkat): Likewise.
	* sysdeps/mips/mips64/__longjmp.c (__longjmp): Likewise.
	* sysdeps/posix/alarm.c (alarm): Likewise.
	* sysdeps/posix/cuserid.c (cuserid): Likewise.
	* sysdeps/posix/dirfd.c (dirfd): Likewise.
	* sysdeps/posix/dup.c (__dup): Likewise.
	* sysdeps/posix/dup2.c (__dup2): Likewise.
	* sysdeps/posix/euidaccess.c (euidaccess): Likewise.
	(main): Likewise.
	* sysdeps/posix/flock.c (__flock): Likewise.
	* sysdeps/posix/fpathconf.c (__fpathconf): Likewise.
	* sysdeps/posix/getcwd.c (__getcwd): Likewise.
	* sysdeps/posix/gethostname.c (__gethostname): Likewise.
	* sysdeps/posix/gettimeofday.c (__gettimeofday): Likewise.
	* sysdeps/posix/isatty.c (__isatty): Likewise.
	* sysdeps/posix/killpg.c (killpg): Likewise.
	* sysdeps/posix/libc_fatal.c (__libc_fatal): Likewise.
	* sysdeps/posix/mkfifoat.c (mkfifoat): Likewise.
	* sysdeps/posix/raise.c (raise): Likewise.
	* sysdeps/posix/remove.c (remove): Likewise.
	* sysdeps/posix/rename.c (rename): Likewise.
	* sysdeps/posix/rewinddir.c (__rewinddir): Likewise.
	* sysdeps/posix/seekdir.c (seekdir): Likewise.
	* sysdeps/posix/sigblock.c (__sigblock): Likewise.
	* sysdeps/posix/sigignore.c (sigignore): Likewise.
	* sysdeps/posix/sigintr.c (siginterrupt): Likewise.
	* sysdeps/posix/signal.c (__bsd_signal): Likewise.
	* sysdeps/posix/sigset.c (sigset): Likewise.
	* sysdeps/posix/sigsuspend.c (__sigsuspend): Likewise.
	* sysdeps/posix/sysconf.c (__sysconf): Likewise.
	* sysdeps/posix/sysv_signal.c (__sysv_signal): Likewise.
	* sysdeps/posix/time.c (time): Likewise.
	* sysdeps/posix/ttyname.c (getttyname): Likewise.
	(ttyname): Likewise.
	* sysdeps/posix/ttyname_r.c (__ttyname_r): Likewise.
	* sysdeps/posix/utime.c (utime): Likewise.
	* sysdeps/powerpc/fpu/s_isnan.c (__isnan): Likewise.
	* sysdeps/powerpc/nptl/pthread_spin_lock.c (pthread_spin_lock):
	Likewise.
	* sysdeps/powerpc/nptl/pthread_spin_trylock.c
	(pthread_spin_trylock): Likewise.
	* sysdeps/pthread/aio_error.c (aio_error): Likewise.
	* sysdeps/pthread/aio_read.c (aio_read): Likewise.
	* sysdeps/pthread/aio_read64.c (aio_read64): Likewise.
	* sysdeps/pthread/aio_write.c (aio_write): Likewise.
	* sysdeps/pthread/aio_write64.c (aio_write64): Likewise.
	* sysdeps/pthread/flockfile.c (__flockfile): Likewise.
	* sysdeps/pthread/ftrylockfile.c (__ftrylockfile): Likewise.
	* sysdeps/pthread/funlockfile.c (__funlockfile): Likewise.
	* sysdeps/pthread/timer_create.c (timer_create): Likewise.
	* sysdeps/pthread/timer_getoverr.c (timer_getoverrun): Likewise.
	* sysdeps/pthread/timer_gettime.c (timer_gettime): Likewise.
	* sysdeps/s390/ffs.c (__ffs): Likewise.
	* sysdeps/s390/nptl/pthread_spin_lock.c (pthread_spin_lock):
	Likewise.
	* sysdeps/s390/nptl/pthread_spin_trylock.c (pthread_spin_trylock):
	Likewise.
	* sysdeps/sh/nptl/pthread_spin_lock.c (pthread_spin_lock):
	Likewise.
	* sysdeps/sparc/nptl/pthread_barrier_destroy.c
	(pthread_barrier_destroy): Likewise.
	* sysdeps/sparc/nptl/pthread_barrier_wait.c
	(__pthread_barrier_wait): Likewise.
	* sysdeps/sparc/sparc32/e_sqrt.c (__ieee754_sqrt): Likewise.
	* sysdeps/sparc/sparc32/pthread_barrier_wait.c
	(__pthread_barrier_wait): Likewise.
	* sysdeps/sparc/sparc32/sem_init.c (__old_sem_init): Likewise.
	* sysdeps/tile/memcmp.c (memcmp_common_alignment): Likewise.
	(memcmp_not_common_alignment): Likewise.
	(MEMCMP): Likewise.
	* sysdeps/tile/wordcopy.c (_wordcopy_fwd_aligned): Likewise.
	(_wordcopy_fwd_dest_aligned): Likewise.
	(_wordcopy_bwd_aligned): Likewise.
	(_wordcopy_bwd_dest_aligned): Likewise.
	* sysdeps/unix/bsd/ftime.c (ftime): Likewise.
	* sysdeps/unix/bsd/gtty.c (gtty): Likewise.
	* sysdeps/unix/bsd/stty.c (stty): Likewise.
	* sysdeps/unix/bsd/tcflow.c (tcflow): Likewise.
	* sysdeps/unix/bsd/tcflush.c (tcflush): Likewise.
	* sysdeps/unix/bsd/tcgetattr.c (__tcgetattr): Likewise.
	* sysdeps/unix/bsd/tcgetpgrp.c (tcgetpgrp): Likewise.
	* sysdeps/unix/bsd/tcsendbrk.c (tcsendbreak): Likewise.
	* sysdeps/unix/bsd/tcsetattr.c (tcsetattr): Likewise.
	* sysdeps/unix/bsd/tcsetpgrp.c (tcsetpgrp): Likewise.
	* sysdeps/unix/bsd/ualarm.c (ualarm): Likewise.
	* sysdeps/unix/bsd/wait3.c (__wait3): Likewise.
	* sysdeps/unix/getlogin_r.c (__getlogin_r): Likewise.
	* sysdeps/unix/sockatmark.c (sockatmark): Likewise.
	* sysdeps/unix/stime.c (stime): Likewise.
	* sysdeps/unix/sysv/linux/_exit.c (_exit): Likewise.
	* sysdeps/unix/sysv/linux/aio_sigqueue.c (__aio_sigqueue):
	Likewise.  Use internal_function.
	* sysdeps/unix/sysv/linux/arm/sigaction.c (__libc_sigaction):
	Convert to prototype-style function definition.
	* sysdeps/unix/sysv/linux/faccessat.c (faccessat): Likewise.
	* sysdeps/unix/sysv/linux/fchmodat.c (fchmodat): Likewise.
	* sysdeps/unix/sysv/linux/fpathconf.c (__fpathconf): Likewise.
	* sysdeps/unix/sysv/linux/gai_sigqueue.c (__gai_sigqueue):
	Likewise.  Use internal_function.
	* sysdeps/unix/sysv/linux/gethostid.c (sethostid): Convert to
	prototype-style function definition.
	* sysdeps/unix/sysv/linux/getlogin_r.c (__getlogin_r_loginuid):
	Likewise.
	(__getlogin_r): Likewise.
	* sysdeps/unix/sysv/linux/getpt.c (__posix_openpt): Likewise.
	* sysdeps/unix/sysv/linux/hppa/pthread_cond_broadcast.c
	(__pthread_cond_broadcast): Likewise.
	* sysdeps/unix/sysv/linux/hppa/pthread_cond_destroy.c
	(__pthread_cond_destroy): Likewise.
	* sysdeps/unix/sysv/linux/hppa/pthread_cond_init.c
	(__pthread_cond_init): Likewise.
	* sysdeps/unix/sysv/linux/hppa/pthread_cond_signal.c
	(__pthread_cond_signal): Likewise.
	* sysdeps/unix/sysv/linux/hppa/pthread_cond_wait.c
	(__pthread_cond_wait): Likewise.
	* sysdeps/unix/sysv/linux/i386/getmsg.c (getmsg): Likewise.
	* sysdeps/unix/sysv/linux/i386/setegid.c (setegid): Likewise.
	* sysdeps/unix/sysv/linux/ia64/sigaction.c (__libc_sigaction):
	Likewise.
	* sysdeps/unix/sysv/linux/ia64/sigpending.c (sigpending):
	Likewise.
	* sysdeps/unix/sysv/linux/ia64/sigprocmask.c (__sigprocmask):
	Likewise.
	* sysdeps/unix/sysv/linux/mips/sigaction.c (__libc_sigaction):
	Likewise.
	* sysdeps/unix/sysv/linux/msgget.c (msgget): Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/ftruncate64.c
	(__ftruncate64): Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/truncate64.c
	(truncate64): Likewise.
	* sysdeps/unix/sysv/linux/pt-raise.c (raise): Likewise.
	* sysdeps/unix/sysv/linux/pthread_getcpuclockid.c
	(pthread_getcpuclockid): Likewise.
	* sysdeps/unix/sysv/linux/pthread_getname.c (pthread_getname_np):
	Likewise.
	* sysdeps/unix/sysv/linux/pthread_setname.c (pthread_setname_np):
	Likewise.
	* sysdeps/unix/sysv/linux/pthread_sigmask.c (pthread_sigmask):
	Likewise.
	* sysdeps/unix/sysv/linux/pthread_sigqueue.c (pthread_sigqueue):
	Likewise.
	* sysdeps/unix/sysv/linux/raise.c (raise): Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-64/sigaction.c
	(__libc_sigaction): Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-64/sigpending.c (sigpending):
	Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-64/sigprocmask.c
	(__sigprocmask): Likewise.
	* sysdeps/unix/sysv/linux/semget.c (semget): Likewise.
	* sysdeps/unix/sysv/linux/semop.c (semop): Likewise.
	* sysdeps/unix/sysv/linux/setrlimit64.c (setrlimit64): Likewise.
	* sysdeps/unix/sysv/linux/shmat.c (shmat): Likewise.
	* sysdeps/unix/sysv/linux/shmdt.c (shmdt): Likewise.
	* sysdeps/unix/sysv/linux/shmget.c (shmget): Likewise.
	* sysdeps/unix/sysv/linux/sigaction.c (__libc_sigaction):
	Likewise.
	* sysdeps/unix/sysv/linux/sigpending.c (sigpending): Likewise.
	* sysdeps/unix/sysv/linux/sigprocmask.c (__sigprocmask): Likewise.
	* sysdeps/unix/sysv/linux/sigqueue.c (__sigqueue): Likewise.
	* sysdeps/unix/sysv/linux/sigstack.c (sigstack): Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc64/sigpending.c (sigpending):
	Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc64/sigprocmask.c
	(__sigprocmask): Likewise.
	* sysdeps/unix/sysv/linux/speed.c (cfgetospeed): Likewise.
	(cfgetispeed): Likewise.
	(cfsetospeed): Likewise.
	(cfsetispeed): Likewise.
	* sysdeps/unix/sysv/linux/tcflow.c (tcflow): Likewise.
	* sysdeps/unix/sysv/linux/tcflush.c (tcflush): Likewise.
	* sysdeps/unix/sysv/linux/tcgetattr.c (__tcgetattr): Likewise.
	* sysdeps/unix/sysv/linux/tcsetattr.c (tcsetattr): Likewise.
	* sysdeps/unix/sysv/linux/time.c (time): Likewise.
	* sysdeps/unix/sysv/linux/timer_create.c (timer_create): Likewise.
	* sysdeps/unix/sysv/linux/timer_delete.c (timer_delete): Likewise.
	* sysdeps/unix/sysv/linux/timer_getoverr.c (timer_getoverrun):
	Likewise.
	* sysdeps/unix/sysv/linux/timer_gettime.c (timer_gettime):
	Likewise.
	* sysdeps/unix/sysv/linux/x86_64/sigpending.c (sigpending):
	Likewise.
	* sysdeps/unix/sysv/linux/x86_64/sigprocmask.c (__sigprocmask):
	Likewise.
	* sysdeps/x86_64/backtrace.c (__backtrace): Likewise.
2015-10-19 12:04:33 +00:00
H.J. Lu
9edf9b18b1 Mark x86 _dl_unmap/_dl_make_tlsdesc_dynamic hidden
Since x86 _dl_unmap and _dl_make_tlsdesc_dynamic are only used
internally in ld.so, they can be made hidden.
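
In source terms this amounts to adding glibc's attribute_hidden macro
to the internal declarations; spelled with plain GCC syntax, the effect
is roughly this (a sketch; see the headers listed below for the real
declarations):

  struct link_map;

  /* Hidden: the symbol is not exported from ld.so and references to it
     bind locally, without PLT/GOT indirection.  */
  extern void _dl_unmap (struct link_map *map)
    __attribute__ ((visibility ("hidden")));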

	[BZ #19122]
	* sysdeps/i386/dl-lookupcfg.h (_dl_unmap): Add attribute_hidden.
	* sysdeps/i386/dl-tlsdesc.h (_dl_make_tlsdesc_dynamic):
	Likewise.
	* sysdeps/x86_64/dl-tlsdesc.h (_dl_make_tlsdesc_dynamic):
	Likewise.
	* sysdeps/x86_64/dl-lookupcfg.h (_dl_unmap): Likewise.
2015-10-15 13:48:54 -07:00
H.J. Lu
d3d9c95aef Support PLT and GOT references in local PIC check
The linker in binutils 2.26 and newer generates GOT references instead
of PLT references when -z now is passed to the linker.  We need to
extend scripts/localplt.awk to allow either PLT or GOT references.

	[BZ #19007]
	* scripts/localplt.awk: Also allow GOT references.
	* sysdeps/unix/sysv/linux/i386/localplt.data: Mark
	_Unwind_Find_FDE, calloc, memalign, realloc and __libc_memalign
	with "+ REL R_386_GLOB_DAT".
	* sysdeps/x86_64/localplt.data: Mark calloc, memalign, realloc
	and __libc_memalign with "+ RELA R_X86_64_GLOB_DAT".
2015-10-14 06:00:02 -07:00
H.J. Lu
0a5768fe9c Support x86-64 assembler without AVX512
When the x86-64 assembler doesn't support AVX512, we should make
_dl_runtime_resolve_avx512/_dl_runtime_profile_avx512 aliases of
_dl_runtime_resolve_avx/_dl_runtime_profile_avx.  Tested on x86-64
using GCC 5.2 with binutils 20151008 and GCC 4.8 with binutils 20130219.
There are no differences in ld.so with binutils 20151008.  There are no
unexpected failures with binutils 20130219 and 20151008.

	[BZ #19124]
	* sysdeps/x86_64/dl-trampoline.S [!HAVE_AVX512_ASM_SUPPORT]
	(_dl_runtime_resolve_avx512): Make it a hidden alias of
	_dl_runtime_resolve_avx.
	(_dl_runtime_profile_avx512): Make it a hidden alias of
	_dl_runtime_profile_avx.
2015-10-13 10:36:27 -07:00
H.J. Lu
4b71ce6c1a Update lrint/lrintf/lrintl for x32
The x86_64 versions of lrint/lrintf/lrintl are aliases for the long
long versions, which isn't correct for x32, where exceptions must
respect overflow for 32-bit long.  This patch adds separate versions of
the long functions for x32 that convert to 32-bit long and raise the
right exceptions for that conversion, while keeping the aliases in the
non-x32 case.

Tested on x86_64 and x32.  There are no code changes in libm.so on
x86_64.
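
A hypothetical illustration of what the alias gets wrong on x32, where
long is 32-bit (this program is not part of the patch):

  #include <fenv.h>
  #include <math.h>
  #include <stdio.h>

  int
  main (void)
  {
    feclearexcept (FE_ALL_EXCEPT);
    /* 2^40 fits in long long but not in a 32-bit long, so on x32 this
       conversion must raise FE_INVALID; an llrint alias raises nothing.  */
    long r = lrint (0x1p40);
    printf ("FE_INVALID raised: %d\n", fetestexcept (FE_INVALID) != 0);
    (void) r;
    return 0;
  }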

	* sysdeps/x86_64/fpu/s_llrint.S (__lrint): Add alias only if
	__ILP32__ isn't defined.
	(lrint): Likewise.
	* sysdeps/x86_64/fpu/s_llrintf.S (__lrintf): Likewise.
	(lrintf): Likewise.
	* sysdeps/x86_64/fpu/s_llrintl.S (__lrintl): Likewise.
	(lrintl): Likewise.
	* sysdeps/x86_64/x32/fpu/s_lrint.S: New file.
	* sysdeps/x86_64/x32/fpu/s_lrintf.S: Likewise.
	* sysdeps/x86_64/x32/fpu/s_lrintl.S: Likewise.
2015-10-09 11:42:10 -07:00
Joseph Myers
ae5d8eaed0 Remove configure tests for -mno-vzeroupper support.
GCC added support for -mno-vzeroupper in version 4.6.  Thus the
configure tests for this support are obsolete, and this patch removes
them.

Tested for x86_64 and x86 (testsuite, and that installed stripped
shared libraries are unchanged by this patch).

	* sysdeps/i386/configure.ac (libc_cv_cc_novzeroupper): Remove
	configure test.
	* sysdeps/i386/configure: Regenerated.
	* sysdeps/x86_64/configure.ac (libc_cv_cc_novzeroupper): Remove
	configure test.
	* sysdeps/x86_64/configure: Regenerated.
	* sysdeps/x86_64/Makefile [$(config-cflags-novzeroupper) = yes]:
	Make code unconditional.
2015-10-09 16:03:48 +00:00
Joseph Myers
b7848899a5 Remove configure tests for FMA4 support.
GCC added support for -mfma4 in version 4.5.  Thus the configure tests
for this support are obsolete, and this patch removes them.

Tested for x86_64 and x86 (testsuite, and that installed stripped
shared libraries are unchanged by this patch).

	* sysdeps/i386/configure.ac (libc_cv_cc_fma4): Remove configure
	test.
	* sysdeps/i386/configure: Regenerated.
	* sysdeps/x86_64/configure.ac (libc_cv_cc_fma4): Remove configure
	test.
	* sysdeps/x86_64/configure: Regenerated.
	* sysdeps/x86_64/fpu/multiarch/Makefile [$(have-mfma4) = yes]:
	Make code unconditional.
	* sysdeps/x86_64/fpu/multiarch/e_asin.c [HAVE_FMA4_SUPPORT]:
	Likewise.
	* sysdeps/x86_64/fpu/multiarch/e_atan2.c [HAVE_FMA4_SUPPORT]:
	Likewise.
	[!HAVE_FMA4_SUPPORT]: Remove conditional code.
	* sysdeps/x86_64/fpu/multiarch/e_exp.c [HAVE_FMA4_SUPPORT]: Make
	code unconditional.
	[!HAVE_FMA4_SUPPORT]: Remove conditional code.
	* sysdeps/x86_64/fpu/multiarch/e_log.c [HAVE_FMA4_SUPPORT]: Make
	code unconditional.
	[!HAVE_FMA4_SUPPORT]: Remove conditional code.
	* sysdeps/x86_64/fpu/multiarch/e_pow.c [HAVE_FMA4_SUPPORT]: Make
	code unconditional.
	* sysdeps/x86_64/fpu/multiarch/s_atan.c [HAVE_FMA4_SUPPORT]: Make
	code unconditional.
	[!HAVE_FMA4_SUPPORT]: Remove conditional code.
	* sysdeps/x86_64/fpu/multiarch/s_fma.c [HAVE_FMA4_SUPPORT]: Make
	code unconditional.
	[!HAVE_FMA4_SUPPORT]: Remove conditional code.
	* sysdeps/x86_64/fpu/multiarch/s_fmaf.c [HAVE_FMA4_SUPPORT]: Make
	code unconditional.
	[!HAVE_FMA4_SUPPORT]: Remove conditional code.
	* sysdeps/x86_64/fpu/multiarch/s_sin.c [HAVE_FMA4_SUPPORT]: Make
	code unconditional.
	[!HAVE_FMA4_SUPPORT]: Remove conditional code.
	* sysdeps/x86_64/fpu/multiarch/s_tan.c [HAVE_FMA4_SUPPORT]: Make
	code unconditional.
	[!HAVE_FMA4_SUPPORT]: Remove conditional code.
	* config.h.in (HAVE_FMA4_SUPPORT): Remove #undef.
2015-10-09 16:02:54 +00:00
Joseph Myers
1b12cd7f4d Remove configure tests for AVX support.
GCC added support for -mavx and -msse2avx in version 4.4.  Thus the
configure tests for this support are obsolete, and this patch removes
them.

Tested for x86_64 and x86 (testsuite, and that installed stripped
shared libraries are unchanged by this patch).

	* sysdeps/i386/configure.ac (libc_cv_cc_avx): Remove configure
	test.
	(libc_cv_cc_sse2avx): Likewise.
	* sysdeps/i386/configure: Regenerated.
	* sysdeps/i386/i686/multiarch/Makefile
	[$(subdir)$(config-cflags-avx) = mathyes]: Change conditional to
	[$(subdir) = math].
	* sysdeps/i386/i686/multiarch/s_fma-fma.c [HAVE_AVX_SUPPORT]: Make
	code unconditional.
	* sysdeps/i386/i686/multiarch/s_fma.c [HAVE_AVX_SUPPORT]:
	Likewise.
	* sysdeps/i386/i686/multiarch/s_fmaf-fma.c [HAVE_AVX_SUPPORT]:
	Likewise.
	* sysdeps/i386/i686/multiarch/s_fmaf.c [HAVE_AVX_SUPPORT]:
	Likewise.
	* sysdeps/x86_64/configure.ac (libc_cv_cc_avx): Remove configure
	test.
	(libc_cv_cc_sse2avx): Likewise.
	* sysdeps/x86_64/configure: Regenerated.
	* sysdeps/x86_64/Makefile [$(config-cflags-avx) = yes]: Make code
	unconditional.
	* sysdeps/x86_64/dl-trampoline.h (_dl_runtime_profile)
	[HAVE_AVX_SUPPORT || HAVE_AVX512_ASM_SUPPORT]: Make code
	unconditional.
	(_dl_runtime_profile)
	[!(HAVE_AVX_SUPPORT || HAVE_AVX512_ASM_SUPPORT)]: Remove
	conditional code.
	* sysdeps/x86_64/fpu/multiarch/Makefile
	[$(config-cflags-sse2avx) = yes]: Make code unconditional.
	* sysdeps/x86_64/fpu/multiarch/e_atan2.c
	[HAVE_FMA4_SUPPORT || HAVE_AVX_SUPPORT]: Likewise.
	* sysdeps/x86_64/fpu/multiarch/e_exp.c
	[HAVE_FMA4_SUPPORT || HAVE_AVX_SUPPORT]: Likewise.
	* sysdeps/x86_64/fpu/multiarch/e_log.c
	[HAVE_FMA4_SUPPORT || HAVE_AVX_SUPPORT]: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_atan.c
	[HAVE_FMA4_SUPPORT || HAVE_AVX_SUPPORT]: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_fma.c [HAVE_AVX_SUPPORT]:
	Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_fmaf.c [HAVE_AVX_SUPPORT]:
	Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_sin.c
	[HAVE_FMA4_SUPPORT || HAVE_AVX_SUPPORT]: Likewise.
	* sysdeps/x86_64/fpu/multiarch/s_tan.c
	[HAVE_FMA4_SUPPORT || HAVE_AVX_SUPPORT]: Likewise.
	* sysdeps/x86_64/multiarch/strcmp.S [HAVE_AVX_SUPPORT]: Likewise.
	* config.h.in (HAVE_AVX_SUPPORT): Remove #undef.
	(HAVE_SSE2AVX_SUPPORT): Likewise.
2015-10-08 15:59:32 +00:00
Joseph Myers
b75bc69cdf Don't use dbl-64/wordsize-64 lround based on llround for ILP32 (bug 19079).
The implementation of lround in dbl-64/wordsize-64 as an alias or
wrapper for llround is always incorrect when long is not 64-bit,
because it misses required exceptions in overflow cases, as shown by
my recently added tests.  This patch removes that alias / wrapper in
the non-LP64 case, together with the REGISTER_CAST_INT32_TO_INT64
macro, restoring the previous version of lround for dbl-64/wordsize-64
(newly conditioned on !_LP64).

Tested for x86_64, and for mips64 with use of dbl-64/wordsize-64
enabled.

	[BZ #19079]
	* sysdeps/ieee754/dbl-64/wordsize-64/s_lround.c: Restore previous
	file, conditioned on [!_LP64].
	* sysdeps/ieee754/dbl-64/wordsize-64/s_llround.c
	[!_LP64] (__lround): Do not define as function or alias.
	[!_LP64] (lround): Likewise.
	[!_LP64] (__lroundl): Likewise.
	[!_LP64] (lroundl): Likewise.
	* sysdeps/tile/sysdep.h (REGISTER_CAST_INT32_TO_INT64): Remove
	macro.
	* sysdeps/x86_64/x32/sysdep.h (REGISTER_CAST_INT32_TO_INT64):
	Likewise.
2015-10-07 00:40:12 +00:00
Joseph Myers
3b7aa5bf59 Remove configure tests for SSE4 support.
GCC added support for -msse4 in version 4.3.  Thus the configure tests
for it are obsolete, and this patch removes them.

Tested for x86_64 and x86 (testsuite, and that installed stripped
shared libraries are unchanged by this patch).

	* sysdeps/i386/configure.ac (libc_cv_cc_sse4): Remove configure
	test.
	* sysdeps/i386/configure: Regenerated.
	* sysdeps/i386/i686/multiarch/Makefile
	[$(config-cflags-sse4) = yes]: Make code unconditional.
	* sysdeps/i386/i686/multiarch/strcspn.S [HAVE_SSE4_SUPPORT]:
	Likewise.
	* sysdeps/i386/i686/multiarch/strspn.S [HAVE_SSE4_SUPPORT]:
	Likewise.
	* sysdeps/x86_64/configure.ac (libc_cv_cc_sse4): Remove configure
	test.
	* sysdeps/x86_64/configure: Regenerated.
	* sysdeps/x86_64/multiarch/Makefile [$(config-cflags-sse4) = yes]:
	Make code unconditional.
	* sysdeps/x86_64/multiarch/strcspn.S [HAVE_SSE4_SUPPORT]:
	Likewise.
	* sysdeps/x86_64/multiarch/strspn.S [HAVE_SSE4_SUPPORT]: Likewise.
	* config.h.in (HAVE_SSE4_SUPPORT): Remove #undef.
2015-10-06 20:47:40 +00:00
Paul Pluzhnikov
57352c2201 sysdeps/x86_64/fpu/libm-test-ulps: Regenerated on Haswell. 2015-10-03 11:31:21 -07:00
Joseph Myers
93e448cbed Improve test coverage of real libm functions [a-e]*.
This patch improves test coverage of the real libm functions [a-e]*,
ensuring that special cases and ranges of input values of potential
significance (such as close to overflow and underflow thresholds) are
more systematically covered.

This is a followup to
<https://sourceware.org/ml/libc-alpha/2013-12/msg00757.html> which
covered [a-c]* (however, I found more weaknesses in the coverage of
those functions when preparing this patch, hence the additional tests
being added for them here).

Addition of a test for acosh (-qNaN) is temporarily deferred, to be
included as part of a fix for bug 19032 which was discovered in the
course of adding these tests (and which illustrates the use of testing
-qNaN as well as +qNaN as input even to functions for which the sign
of a NaN isn't meant to be significant).

Tested for x86_64 and x86.

	* math/auto-libm-test-in: Add more tests of acos, acosh, asin,
	atan, atan2, atanh, cbrt, cos, cosh, erf, erfc, exp, exp10, exp2
	and expm1.
	* math/auto-libm-test-out: Regenerated.
	* math/libm-test.inc (acos_test_data): Add more tests.
	(asin_test_data): Likewise.
	(asinh_test_data): Likewise.
	(atan_test_data): Likewise.
	(atanh_test_data): Likewise.
	(atan2_test_data): Likewise.
	(cbrt_test_data): Likewise.
	(ceil_test_data): Likewise.
	(copysign_test_data): Likewise.
	(cos_test_data): Likewise.
	(cosh_test_data): Likewise.
	(erf_test_data): Likewise.
	(erfc_test_data): Likewise.
	(exp_test_data): Likewise.
	(exp10_test_data): Likewise.
	(exp2_test_data): Likewise.
	(expm1_test_data): Likewise.
	* sysdeps/x86_64/fpu/libm-test-ulps: Update.
2015-09-30 18:06:02 +00:00
Joseph Myers
a5721ebc68 Fix clog, clog10 inaccuracy (bug 19016).
For arguments with X^2 + Y^2 close to 1, clog and clog10 avoid large
errors from log(hypot) by computing X^2 + Y^2 - 1 in a way that avoids
cancellation error and then using log1p.

However, the thresholds for using that approach still result in log
being used on arguments as large as sqrt(13/16) > 0.9, leading to
significant errors, in some cases above the 9ulp maximum allowed in
glibc libm.  This patch arranges for the approach using log1p to be
used in any cases where |X|, |Y| < 1 and X^2 + Y^2 >= 0.5 (with the
existing allowance for cases where one of X and Y is very small),
adjusting the __x2y2m1 functions to work with the wider range of
inputs.  This way, log only gets used on arguments below sqrt(1/2) (or
substantially above 1), where the error involved is much less.

Tested for x86_64, x86, mips64 and powerpc.  For the ulps regeneration
I removed the existing clog and clog10 ulps before regenerating to
allow any reduced ulps to appear.  Tests added include those found by
random test generation to produce large ulps either before or after
the patch, and some found by trying inputs close to the (0.75, 0.5)
threshold where the potential errors from using log are largest.
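
Schematically, the log1p path computes the following; this is a
simplified sketch rather than the glibc code (the real __x2y2m1 uses
exact multi-part products instead of fma):

  #include <complex.h>
  #include <math.h>

  /* x*x + y*y - 1 without catastrophic cancellation, for |x|, |y| < 1
     with x*x + y*y >= 0.5.  */
  static double
  x2y2m1_sketch (double x, double y)
  {
    double x2hi = x * x, x2lo = fma (x, x, -x2hi);   /* x2hi + x2lo == x*x */
    double y2hi = y * y, y2lo = fma (y, y, -y2hi);   /* y2hi + y2lo == y*y */
    return ((x2hi - 1.0) + y2hi) + (x2lo + y2lo);
  }

  static double complex
  clog_sketch (double x, double y)
  {
    /* log (hypot (x, y)) == 0.5 * log1p (x*x + y*y - 1).  */
    return 0.5 * log1p (x2y2m1_sketch (x, y)) + I * atan2 (y, x);
  }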

	[BZ #19016]
	* sysdeps/generic/math_private.h (__x2y2m1f): Update comment to
	allow more cases with X^2 + Y^2 >= 0.5.
	* sysdeps/ieee754/dbl-64/x2y2m1.c (__x2y2m1): Likewise.  Add -1 as
	normal element in sum instead of special-casing based on values of
	arguments.
	* sysdeps/ieee754/dbl-64/x2y2m1f.c (__x2y2m1f): Update comment.
	* sysdeps/ieee754/ldbl-128/x2y2m1l.c (__x2y2m1l): Likewise.  Add
	-1 as normal element in sum instead of special-casing based on
	values of arguments.
	* sysdeps/ieee754/ldbl-128ibm/x2y2m1l.c (__x2y2m1l): Likewise.
	* sysdeps/ieee754/ldbl-96/x2y2m1.c [FLT_EVAL_METHOD != 0]
	(__x2y2m1): Update comment.
	* sysdeps/ieee754/ldbl-96/x2y2m1l.c (__x2y2m1l): Likewise.  Add -1
	as normal element in sum instead of special-casing based on values
	of arguments.
	* math/s_clog.c (__clog): Handle more cases using log1p without
	hypot.
	* math/s_clog10.c (__clog10): Likewise.
	* math/s_clog10f.c (__clog10f): Likewise.
	* math/s_clog10l.c (__clog10l): Likewise.
	* math/s_clogf.c (__clogf): Likewise.
	* math/s_clogl.c (__clogl): Likewise.
	* math/auto-libm-test-in: Add more tests of clog and clog10.
	* math/auto-libm-test-out: Regenerated.
	* sysdeps/i386/fpu/libm-test-ulps: Update.
	* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
2015-09-28 22:11:22 +00:00
Joseph Myers
fa752c6981 Fix powf inaccuracy (bug 18956).
The flt-32 version of powf can be inaccurate because of bugs in the
extra-precision calculation of (x-1)/(x+1) or (x-1.5)/(x+1.5) as part
of calculating log(x) with extra precision: a constant used (as part
of adding 1 or 1.5 through integer arithmetic) is incorrect, and then
the code fails to mask a computed high part before using it in
arithmetic that relies on s_h*t_h being exactly representable.  This
patch fixes these bugs.

Tested for x86_64 and x86.  x86_64 ulps for powf removed and
regenerated to reflect reduced ulps from the increased accuracy for
existing tests.
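
For reference, the masking step the fix restores looks roughly like
this (a sketch; e_powf.c spells it with the GET_FLOAT_WORD /
SET_FLOAT_WORD macros):

  #include <stdint.h>
  #include <string.h>

  /* Keep only the upper bits of a float (mask 0xfffff000, as in the fix)
     so that the product of two such "high parts" is exactly representable
     and s_h*t_h carries no rounding error.  */
  static float
  high_part (float x)
  {
    uint32_t u;
    memcpy (&u, &x, sizeof u);
    u &= 0xfffff000u;
    memcpy (&x, &u, sizeof u);
    return x;
  }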

	[BZ #18956]
	* sysdeps/ieee754/flt-32/e_powf.c (__ieee754_powf): Add 0x00400000
	not 0x0040000 for high bit of mantissa.  Mask with 0xfffff000 when
	extracting high part.
	* math/auto-libm-test-in: Add another test of pow.
	* math/auto-libm-test-out: Regenerated.
	* sysdeps/x86_64/fpu/libm-test-ulps: Update.
2015-09-26 00:27:06 +00:00
Joseph Myers
6ace393821 Fix pow missing underflows (bug 18825).
Similar to various other bugs in this area, pow functions can fail to
raise the underflow exception when the result is tiny and inexact but
one or more low bits of the intermediate result that is scaled down
(or, in the i386 case, converted from a wider evaluation format) are
zero.  This patch forces the exception in a similar way to previous
fixes, thereby concluding the fixes for known bugs with missing
underflow exceptions currently filed in Bugzilla.

Tested for x86_64, x86, mips64 and powerpc.
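
The forcing technique, reduced to rough C (a sketch of what the
math_check_force_underflow* macros and the new asm macros amount to,
not the literal code):

  #include <float.h>
  #include <math.h>

  /* If the (possibly exactly scaled) result is tiny, perform a deliberately
     underflowing, inexact operation so the underflow flag gets raised even
     though the result itself needed no rounding.  */
  static double
  check_force_underflow (double result)
  {
    if (fabs (result) < DBL_MIN)
      {
        volatile double force = DBL_MIN;
        force *= force;         /* Raises underflow (and inexact).  */
      }
    return result;
  }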

	[BZ #18825]
	* sysdeps/i386/fpu/i386-math-asm.h (FLT_NARROW_EVAL_UFLOW_NONNAN):
	New macro.
	(DBL_NARROW_EVAL_UFLOW_NONNAN): Likewise.
	(LDBL_CHECK_FORCE_UFLOW_NONNAN): Likewise.
	* sysdeps/i386/fpu/e_pow.S: Use DEFINE_DBL_MIN.
	(__ieee754_pow): Use DBL_NARROW_EVAL_UFLOW_NONNAN instead of
	DBL_NARROW_EVAL, reloading the PIC register as needed.
	* sysdeps/i386/fpu/e_powf.S: Use DEFINE_FLT_MIN.
	(__ieee754_powf): Use FLT_NARROW_EVAL_UFLOW_NONNAN instead of
	FLT_NARROW_EVAL.  Use separate return path for case when first
	argument is NaN.
	* sysdeps/i386/fpu/e_powl.S: Include <i386-math-asm.h>.  Use
	DEFINE_LDBL_MIN.
	(__ieee754_powl): Use LDBL_CHECK_FORCE_UFLOW_NONNAN, reloading the
	PIC register.
	* sysdeps/ieee754/dbl-64/e_pow.c (__ieee754_pow): Use
	math_check_force_underflow_nonneg.
	* sysdeps/ieee754/flt-32/e_powf.c (__ieee754_powf): Force
	underflow for subnormal result.
	* sysdeps/ieee754/ldbl-128/e_powl.c (__ieee754_powl): Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_powl.c (__ieee754_powl): Use
	math_check_force_underflow_nonneg.
	* sysdeps/x86/fpu/powl_helper.c (__powl_helper): Use
	math_check_force_underflow.
	* sysdeps/x86_64/fpu/x86_64-math-asm.h
	(LDBL_CHECK_FORCE_UFLOW_NONNAN): New macro.
	* sysdeps/x86_64/fpu/e_powl.S: Include <x86_64-math-asm.h>.  Use
	DEFINE_LDBL_MIN.
	(__ieee754_powl): Use LDBL_CHECK_FORCE_UFLOW_NONNAN.
	* math/auto-libm-test-in: Add more tests of pow.
	* math/auto-libm-test-out: Regenerated.
2015-09-25 22:29:10 +00:00
Joseph Myers
b2a64460ba Refactor x86_64 libm code forcing underflow exceptions.
This patch refactors code in sysdeps/x86_64/fpu that forces underflow
exceptions and closely follows corresponding i386 code to use common
macros in x86_64-math-asm.h for that purpose.  This is mainly about
keeping the code similar to the i386 code as far as possible, since
each macro apart from DEFINE_LDBL_MIN ends up used only once.  It
would be possible to do a further refactoring to share these macros
between i386 and x86_64 (with i386 using the fcomip / fucomip versions
when building for i686 and above), but I have no immediate plans to do
so.

Tested for x86_64.

	* sysdeps/x86_64/fpu/x86_64-math-asm.h: New file.
	* sysdeps/x86_64/fpu/e_exp2l.S: Include <x86_64-math-asm.h>.
	(ldbl_min): Replace with use of DEFINE_LDBL_MIN.
	(__ieee754_exp2l): Use LDBL_CHECK_FORCE_UFLOW_NONNEG_NAN.
	* sysdeps/x86_64/fpu/e_expl.S: Include <x86_64-math-asm.h>.
	[!USE_AS_EXPM1L] (cmin): Replace with use of DEFINE_LDBL_MIN.
	(IEEE754_EXPL): Use LDBL_CHECK_FORCE_UFLOW_NONNEG.
2015-09-24 22:25:30 +00:00
Joseph Myers
51df260506 Fix x86_64 fma4 pow inappropriate contraction (bug 19003).
The x86_64 fma4 version of pow fails to disable contraction of
operations other than those explicitly intended to use fma
instructions, so resulting in large ulps errors on processors with
fma4 instructions, as in bug 18104 (165ulp for the test added for that
bug; error originally reported by "blaaa" on #glibc).  This patch adds
$(config-cflags-nofma) for e_pow-fma4.c, corresponding to the use for
e_pow.c in sysdeps/ieee754/dbl-64/Makefile.

Tested for x86_64 on a processor with fma4.

	[BZ #19003]
	* sysdeps/x86_64/fpu/multiarch/Makefile (CFLAGS-e_pow-fma4.c): Add
	$(config-cflags-nofma).
2015-09-24 16:48:32 +00:00
Joseph Myers
da2f4f2dd5 Make scalbn set errno (bug 6803).
As noted in bug 6803, scalbn fails to set errno on overflow and
underflow.  This patch fixes this by making scalbn an alias of ldexp,
which has exactly the same semantics (for floating-point types with
radix 2) and already has wrappers that deal with setting errno,
instead of an alias of the internal __scalbn (which ldexp calls).

Notes:

* Where compat symbols were defined for scalbn functions, I didn't
  change what they point to (to keep the patch minimal), so such
  compat symbols continue to go directly to the non-errno-setting
  functions.

* Mike, I didn't do anything with the IA64 versions of these
  functions, where I think both the ldexp and scalbn functions already
  deal with setting errno.  As a cleanup (not needed to fix this bug)
  however you might want to make those functions into aliases for
  IA64; there is no need for them to be separate function
  implementations at all.

* This concludes the fix for bug 6803 since the scalb and scalbln
  cases of that bug were fixed some time ago.

Tested for x86_64, x86, mips64 and powerpc.
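
The aliasing pattern, reduced to plain GCC syntax with made-up demo
names (in glibc it is spelled with the weak_alias macro in
math/s_ldexp.c and friends):

  #include <errno.h>
  #include <math.h>

  /* Shape of the errno-setting wrapper in math/s_ldexp.c (simplified;
     scalbn here stands in for the internal __scalbn):  */
  double
  __ldexp_demo (double value, int exp)
  {
    if (!isfinite (value) || value == 0.0)
      return value;
    value = scalbn (value, exp);
    if (!isfinite (value) || value == 0.0)
      errno = ERANGE;
    return value;
  }

  /* scalbn has identical semantics for radix-2 types, so it now simply
     becomes another weak name for the wrapper instead of an alias of the
     raw __scalbn:  */
  double scalbn_demo (double, int)
    __attribute__ ((weak, alias ("__ldexp_demo")));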

	[BZ #6803]
	* math/s_ldexp.c (scalbn): Define as weak alias of __ldexp.
	[NO_LONG_DOUBLE] (scalbnl): Define as weak alias of __ldexp.
	* math/s_ldexpf.c (scalbnf): Define as weak alias of __ldexpf.
	* math/s_ldexpl.c (scalbnl): Define as weak alias of __ldexpl.
	* sysdeps/i386/fpu/s_scalbn.S (scalbn): Remove alias.
	* sysdeps/i386/fpu/s_scalbnf.S (scalbnf): Likewise.
	* sysdeps/i386/fpu/s_scalbnl.S (scalbnl): Likewise.
	* sysdeps/ieee754/dbl-64/s_scalbn.c (scalbn): Likewise.
	[NO_LONG_DOUBLE] (scalbnl): Likewise.
	* sysdeps/ieee754/dbl-64/wordsize-64/s_scalbn.c (scalbn):
	Likewise.
	[NO_LONG_DOUBLE] (scalbnl): Likewise.
	* sysdeps/ieee754/flt-32/s_scalbnf.c (scalbnf): Likewise.
	* sysdeps/ieee754/ldbl-128/s_scalbnl.c (scalbnl): Likewise.
	* sysdeps/ieee754/ldbl-128ibm/s_scalbnl.c (scalbnl): Remove
	long_double_symbol calls.
	* sysdeps/ieee754/ldbl-64-128/s_scalbnl.c (scalbnl): Likewise.
	* sysdeps/ieee754/ldbl-opt/s_ldexpl.c (__ldexpl_2): Define as
	strong alias of __ldexpl.
	(scalbnl): Define using long_double_symbol.
	* sysdeps/m68k/m680x0/fpu/s_scalbn.c (__CONCATX(scalbn,suffix)):
	Remove alias.
	* sysdeps/sparc/sparc64/soft-fp/s_scalbnl.c (scalbnl): Likewise.
	* sysdeps/x86_64/fpu/s_scalbnl.S (scalbnl): Likewise.
	* math/libm-test.inc (scalbn_test_data): Add errno expectations.
	(scalbln_test_data): Add more errno expectations.
2015-09-16 21:11:00 +00:00
Joseph Myers
903af5af9a Fix exp2 missing underflows (bug 16521).
Various exp2 implementations in glibc can miss underflow exceptions
when the scaling down part of the calculation is exact (or, in the x86
case, when the conversion from extended precision to the target
precision is exact).  This patch forces the exception in a similar way
to previous fixes.

The x86 exp2f changes may in fact not be needed for this purpose -
it's likely to be the case that no argument of type float has an exp2
result so close to an exact subnormal float value that it equals that
value when rounded to 64 bits (even taking account of variation
between different x86 implementations).  However, they are included
for consistency with the changes to exp2 and so as to fix the exp2f
part of bug 18875 by ensuring that excess range and precision is
removed from underflowing return values.

Tested for x86_64, x86 and mips64.

	[BZ #16521]
	[BZ #18875]
	* math/e_exp2l.c (__ieee754_exp2l): Force underflow exception for
	small results.
	* sysdeps/i386/fpu/e_exp2.S (dbl_min): New object.
	(MO): New macro.
	(__ieee754_exp2): For small results, force underflow exception and
	remove excess range and precision from return value.
	* sysdeps/i386/fpu/e_exp2f.S (flt_min): New object.
	(MO): New macro.
	(__ieee754_exp2f): For small results, force underflow exception
	and remove excess range and precision from return value.
	* sysdeps/i386/fpu/e_exp2l.S (ldbl_min): New object.
	(MO): New macro.
	(__ieee754_exp2l): Force underflow exception for small results.
	* sysdeps/ieee754/dbl-64/e_exp2.c (__ieee754_exp2): Likewise.
	* sysdeps/ieee754/flt-32/e_exp2f.c (__ieee754_exp2f): Likewise.
	* sysdeps/x86_64/fpu/e_exp2l.S (ldbl_min): New object.
	(MO): New macro.
	(__ieee754_exp2l): Force underflow exception for small results.
	* math/auto-libm-test-in: Add more tests of exp2.
	* math/auto-libm-test-out: Regenerated.
2015-09-14 22:00:12 +00:00
Joseph Myers
a1f99ba28b Add more random libm test inputs (mainly for ldbl-128).
This patch adds more libm test inputs found through random test
generation to increase previously known ulps.  This particular test
generation was run for mips64, so most of the increased ulps are for
ldbl-128 (float and double having been fairly well covered by such
testing for x86_64), but there's the odd ulps increase for other
formats.

Tested for x86_64, x86 and mips64.

	* math/auto-libm-test-in: Add more tests of acos, acosh, asin,
	asinh, atan, atan2, atanh, cabs, carg, cos, csqrt, erfc, exp,
	exp10, exp2, log, log1p, log2, pow, sin, sincos, sinh, tan and
	tanh.
	* math/auto-libm-test-out: Regenerated.
	* sysdeps/i386/fpu/libm-test-ulps: Update.
	* sysdeps/mips/mips32/libm-test-ulps: Likewise.
	* sysdeps/mips/mips64/libm-test-ulps: Likewise.
	* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
2015-09-12 00:01:38 +00:00
Joseph Myers
de071d199a Move bits/atomic.h to atomic-machine.h (bug 14912).
It was noted in
<https://sourceware.org/ml/libc-alpha/2012-09/msg00305.html> that the
bits/*.h naming scheme should only be used for installed headers.
This patch renames bits/atomic.h to atomic-machine.h to follow that
convention.

This is the only change in this series that needs to change the
filename rather than simply removing a directory level (because both
atomic.h and bits/atomic.h exist at present).

Tested for x86_64 (testsuite, and that installed stripped shared
libraries are unchanged by the patch).

	[BZ #14912]
	* sysdeps/aarch64/bits/atomic.h: Move to ...
	* sysdeps/aarch64/atomic-machine.h: ...here.
	(_AARCH64_BITS_ATOMIC_H): Rename macro to
	_AARCH64_ATOMIC_MACHINE_H.
	* sysdeps/alpha/bits/atomic.h: Move to ...
	* sysdeps/alpha/atomic-machine.h: ...here.
	* sysdeps/arm/bits/atomic.h: Move to ...
	* sysdeps/arm/atomic-machine.h: ...here.  Update comments.
	* bits/atomic.h: Move to ...
	* sysdeps/generic/atomic-machine.h: ...here.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/i386/bits/atomic.h: Move to ...
	* sysdeps/i386/atomic-machine.h: ...here.
	* sysdeps/ia64/bits/atomic.h: Move to ...
	* sysdeps/ia64/atomic-machine.h: ...here.
	* sysdeps/m68k/coldfire/bits/atomic.h: Move to ...
	* sysdeps/m68k/coldfire/atomic-machine.h: ...here.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/m68k/m680x0/m68020/bits/atomic.h: Move to ...
	* sysdeps/m68k/m680x0/m68020/atomic-machine.h: ...here.
	* sysdeps/microblaze/bits/atomic.h: Move to ...
	* sysdeps/microblaze/atomic-machine.h: ...here.
	* sysdeps/mips/bits/atomic.h: Move to ...
	* sysdeps/mips/atomic-machine.h: ...here.
	(_MIPS_BITS_ATOMIC_H): Rename macro to _MIPS_ATOMIC_MACHINE_H.
	* sysdeps/powerpc/bits/atomic.h: Move to ...
	* sysdeps/powerpc/atomic-machine.h: ...here.  Update comments.
	* sysdeps/powerpc/powerpc32/bits/atomic.h: Move to ...
	* sysdeps/powerpc/powerpc32/atomic-machine.h: ...here.  Update
	comments.  Include <atomic-machine.h> instead of <bits/atomic.h>.
	* sysdeps/powerpc/powerpc64/bits/atomic.h: Move to ...
	* sysdeps/powerpc/powerpc64/atomic-machine.h: ...here.  Include
	<atomic-machine.h> instead of <bits/atomic.h>.
	* sysdeps/s390/bits/atomic.h: Move to ...
	* sysdeps/s390/atomic-machine.h: ...here.
	* sysdeps/sparc/sparc32/bits/atomic.h: Move to ...
	* sysdeps/sparc/sparc32/atomic-machine.h: ...here.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/sparc/sparc32/sparcv9/bits/atomic.h: Move to ...
	* sysdeps/sparc/sparc32/sparcv9/atomic-machine.h: ...here.
	* sysdeps/sparc/sparc64/bits/atomic.h: Move to ...
	* sysdeps/sparc/sparc64/atomic-machine.h: ...here.
	* sysdeps/tile/bits/atomic.h: Move to ...
	* sysdeps/tile/atomic-machine.h: ...here.
	* sysdeps/tile/tilegx/bits/atomic.h: Move to ...
	* sysdeps/tile/tilegx/atomic-machine.h: ...here.  Include
	<sysdeps/tile/atomic-machine.h> instead of
	<sysdeps/tile/bits/atomic.h>.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/tile/tilepro/bits/atomic.h: Move to ...
	* sysdeps/tile/tilepro/atomic-machine.h: ...here.  Include
	<sysdeps/tile/atomic-machine.h> instead of
	<sysdeps/tile/bits/atomic.h>.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/unix/sysv/linux/arm/bits/atomic.h: Move to ...
	* sysdeps/unix/sysv/linux/arm/atomic-machine.h: ...here.  Include
	<sysdeps/arm/atomic-machine.h> instead of
	<sysdeps/arm/bits/atomic.h>.
	* sysdeps/unix/sysv/linux/hppa/bits/atomic.h: Move to ...
	* sysdeps/unix/sysv/linux/hppa/atomic-machine.h: ...here.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/unix/sysv/linux/m68k/coldfire/bits/atomic.h: Move to ...
	* sysdeps/unix/sysv/linux/m68k/coldfire/atomic-machine.h: ...here.
	(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
	* sysdeps/unix/sysv/linux/nios2/bits/atomic.h: Move to ...
	* sysdeps/unix/sysv/linux/nios2/atomic-machine.h: ...here.
	(_NIOS2_BITS_ATOMIC_H): Rename macro to _NIOS2_ATOMIC_MACHINE_H.
	* sysdeps/unix/sysv/linux/sh/bits/atomic.h: Move to ...
	* sysdeps/unix/sysv/linux/sh/atomic-machine.h: ...here.
	* sysdeps/x86_64/bits/atomic.h: Move to ...
	* sysdeps/x86_64/atomic-machine.h: ...here.
	* include/atomic.h: Include <atomic-machine.h> instead of
	<bits/atomic.h>.
2015-09-11 20:00:19 +00:00
Joseph Myers
00a7073c38 Add more randomly-generated libm tests.
This patch adds more libm test inputs found through random test
generation to increase observed ulps on x86_64.

Tested for x86_64 and x86.

	* math/auto-libm-test-in: Add more tests of acosh, atanh, cbrt,
	cosh, csqrt, erfc, expm1 and lgamma.
	* math/auto-libm-test-out: Regenerated.
	* sysdeps/i386/fpu/libm-test-ulps: Update.
	* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
2015-09-11 15:03:10 +00:00
Joseph Myers
050f29c188 Fix lgamma (negative) inaccuracy (bug 2542, bug 2543, bug 2558).
The existing implementations of lgamma functions (except for the ia64
versions) use the reflection formula for negative arguments.  This
suffers large inaccuracy from cancellation near zeros of lgamma (near
where the gamma function is +/- 1).

This patch fixes this inaccuracy.  For arguments above -2, there are
no zeros and no large cancellation, while for sufficiently large
negative arguments the zeros are so close to integers that even for
integers +/- 1ulp the log(gamma(1-x)) term dominates and cancellation
is not significant.  Thus, it is only necessary to take special care
about cancellation for arguments around a limited number of zeros.

Accordingly, this patch uses precomputed tables of relevant zeros,
expressed as the sum of two floating-point values.  The log of the
ratio of two sines can be computed accurately using log1p in cases
where log would lose accuracy.  The log of the ratio of two gamma(1-x)
values can be computed using Stirling's approximation (the difference
between two values of that approximation to lgamma being computable
without computing the two values and then subtracting), with
appropriate adjustments (which don't reduce accuracy too much) in
cases where 1-x is too small to use Stirling's approximation directly.

In the interval from -3 to -2, using the ratios of sines and of
gamma(1-x) can still produce too much cancellation between those two
parts of the computation (and that interval is also the worst interval
for computing the ratio between gamma(1-x) values, which computation
becomes more accurate, while being less critical for the final result,
for larger 1-x).  Because this can result in errors slightly above
those accepted in glibc, this interval is instead dealt with by
polynomial approximations.  Separate polynomial approximations to
(|gamma(x)|-1)(x-n)/(x-x0) are used for each interval of length 1/8
from -3 to -2, where n (-3 or -2) is the nearest integer to the
1/8-interval and x0 is the zero of lgamma in the relevant half-integer
interval (-3 to -2.5 or -2.5 to -2).

Together, the two approaches are intended to give sufficient accuracy
for all negative arguments in the problem range.  Outside that range,
the previous implementation continues to be used.

Tested for x86_64, x86, mips64 and powerpc.  The mips64 and powerpc
testing shows up pre-existing problems for ldbl-128 and ldbl-128ibm
with large negative arguments giving spurious "invalid" exceptions
(exposed by newly added tests for cases this patch doesn't affect the
logic for); I'll address those problems separately.
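
For reference, the reflection formula in question, written out naively
(this is where the cancellation near the zeros of lgamma comes from;
not code from the patch):

  #include <math.h>

  /* The reflection formula the existing code relies on:
       Gamma(x) * Gamma(1 - x) = pi / sin(pi * x)
     which, used directly, gives the cancellation-prone computation below.  */
  static double
  lgamma_reflect_naive (double x)   /* negative, non-integer x */
  {
    return log (M_PI / fabs (sin (M_PI * x))) - lgamma (1.0 - x);
  }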

	[BZ #2542]
	[BZ #2543]
	[BZ #2558]
	* sysdeps/ieee754/dbl-64/e_lgamma_r.c (__ieee754_lgamma_r): Call
	__lgamma_neg for arguments from -28.0 to -2.0.
	* sysdeps/ieee754/flt-32/e_lgammaf_r.c (__ieee754_lgammaf_r): Call
	__lgamma_negf for arguments from -15.0 to -2.0.
	* sysdeps/ieee754/ldbl-128/e_lgammal_r.c (__ieee754_lgammal_r):
	Call __lgamma_negl for arguments from -48.0 or -50.0 to -2.0.
	* sysdeps/ieee754/ldbl-96/e_lgammal_r.c (__ieee754_lgammal_r):
	Call __lgamma_negl for arguments from -33.0 to -2.0.
	* sysdeps/ieee754/dbl-64/lgamma_neg.c: New file.
	* sysdeps/ieee754/dbl-64/lgamma_product.c: Likewise.
	* sysdeps/ieee754/flt-32/lgamma_negf.c: Likewise.
	* sysdeps/ieee754/flt-32/lgamma_productf.c: Likewise.
	* sysdeps/ieee754/ldbl-128/lgamma_negl.c: Likewise.
	* sysdeps/ieee754/ldbl-128/lgamma_productl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/lgamma_negl.c: Likewise.
	* sysdeps/ieee754/ldbl-128ibm/lgamma_productl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/lgamma_negl.c: Likewise.
	* sysdeps/ieee754/ldbl-96/lgamma_product.c: Likewise.
	* sysdeps/ieee754/ldbl-96/lgamma_productl.c: Likewise.
	* sysdeps/generic/math_private.h (__lgamma_negf): New prototype.
	(__lgamma_neg): Likewise.
	(__lgamma_negl): Likewise.
	(__lgamma_product): Likewise.
	(__lgamma_productl): Likewise.
	* math/Makefile (libm-calls): Add lgamma_neg and lgamma_product.
	* math/auto-libm-test-in: Add more tests of lgamma.
	* math/auto-libm-test-out: Regenerated.
	* sysdeps/i386/fpu/libm-test-ulps: Update.
	* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
2015-09-10 22:27:58 +00:00
Joseph Myers
ec999b8e5e Move bits/libc-lock.h and bits/libc-lockP.h out of bits/ (bug 14912).
It was noted in
<https://sourceware.org/ml/libc-alpha/2012-09/msg00305.html> that the
bits/*.h naming scheme should only be used for installed headers.
This patch renames bits/libc-lock.h to plain libc-lock.h and
bits/libc-lockP.h to plain libc-lockP.h to follow that convention.

Note that I don't know where libc-lockP.h comes from for Hurd (the
Hurd libc-lock.h includes libc-lockP.h, but the only libc-lockP.h in
the glibc source tree is for NPTL) - some unmerged patch? - but I
updated the #include in the Hurd libc-lock.h anyway.

Tested for x86_64 (testsuite, and that installed stripped shared
libraries are unchanged by the patch).

	[BZ #14912]
	* bits/libc-lock.h: Move to ...
	* sysdeps/generic/libc-lock.h: ...here.
	(_BITS_LIBC_LOCK_H): Rename macro to _LIBC_LOCK_H.
	* sysdeps/mach/hurd/bits/libc-lock.h: Move to ...
	* sysdeps/mach/hurd/libc-lock.h: ...here.
	(_BITS_LIBC_LOCK_H): Rename macro to _LIBC_LOCK_H.
	[_LIBC]: Include <libc-lockP.h> instead of <bits/libc-lockP.h>.
	* sysdeps/mach/bits/libc-lock.h: Move to ...
	* sysdeps/mach/libc-lock.h: ...here.
	(_BITS_LIBC_LOCK_H): Rename macro to _LIBC_LOCK_H.
	* sysdeps/nptl/bits/libc-lock.h: Move to ...
	* sysdeps/nptl/libc-lock.h: ...here.
	(_BITS_LIBC_LOCK_H): Rename macro to _LIBC_LOCK_H.
	* sysdeps/nptl/bits/libc-lockP.h: Move to ...
	* sysdeps/nptl/libc-lockP.h: ...here.
	(_BITS_LIBC_LOCKP_H): Rename macro to _LIBC_LOCKP_H.
	* crypt/crypt_util.c: Include <libc-lock.h> instead of
	<bits/libc-lock.h>.
	* dirent/scandir-tail.c: Likewise.
	* dlfcn/dlerror.c: Likewise.
	* elf/dl-close.c: Likewise.
	* elf/dl-iteratephdr.c: Likewise.
	* elf/dl-lookup.c: Likewise.
	* elf/dl-open.c: Likewise.
	* elf/dl-support.c: Likewise.
	* elf/dl-writev.h: Likewise.
	* elf/rtld.c: Likewise.
	* grp/fgetgrent.c: Likewise.
	* gshadow/fgetsgent.c: Likewise.
	* gshadow/sgetsgent.c: Likewise.
	* iconv/gconv_conf.c: Likewise.
	* iconv/gconv_db.c: Likewise.
	* iconv/gconv_dl.c: Likewise.
	* iconv/gconv_int.h: Likewise.
	* iconv/gconv_trans.c: Likewise.
	* include/link.h: Likewise.
	* inet/getnameinfo.c: Likewise.
	* inet/getnetgrent.c: Likewise.
	* inet/getnetgrent_r.c: Likewise.
	* intl/bindtextdom.c: Likewise.
	* intl/dcigettext.c: Likewise.
	* intl/finddomain.c: Likewise.
	* intl/gettextP.h: Likewise.
	* intl/loadmsgcat.c: Likewise.
	* intl/localealias.c: Likewise.
	* intl/textdomain.c: Likewise.
	* libidn/idn-stub.c: Likewise.
	* libio/libioP.h: Likewise.
	* locale/duplocale.c: Likewise.
	* locale/freelocale.c: Likewise.
	* locale/newlocale.c: Likewise.
	* locale/setlocale.c: Likewise.
	* login/getutent_r.c: Likewise.
	* login/getutid_r.c: Likewise.
	* login/getutline_r.c: Likewise.
	* login/utmp-private.h: Likewise.
	* login/utmpname.c: Likewise.
	* malloc/mtrace.c: Likewise.
	* misc/efgcvt.c: Likewise.
	* misc/error.c: Likewise.
	* misc/fstab.c: Likewise.
	* misc/getpass.c: Likewise.
	* misc/mntent.c: Likewise.
	* misc/syslog.c: Likewise.
	* nis/nis_call.c: Likewise.
	* nis/nis_callback.c: Likewise.
	* nis/nss-default.c: Likewise.
	* nis/nss_compat/compat-grp.c: Likewise.
	* nis/nss_compat/compat-initgroups.c: Likewise.
	* nis/nss_compat/compat-pwd.c: Likewise.
	* nis/nss_compat/compat-spwd.c: Likewise.
	* nis/nss_nis/nis-alias.c: Likewise.
	* nis/nss_nis/nis-ethers.c: Likewise.
	* nis/nss_nis/nis-grp.c: Likewise.
	* nis/nss_nis/nis-hosts.c: Likewise.
	* nis/nss_nis/nis-network.c: Likewise.
	* nis/nss_nis/nis-proto.c: Likewise.
	* nis/nss_nis/nis-pwd.c: Likewise.
	* nis/nss_nis/nis-rpc.c: Likewise.
	* nis/nss_nis/nis-service.c: Likewise.
	* nis/nss_nis/nis-spwd.c: Likewise.
	* nis/nss_nisplus/nisplus-alias.c: Likewise.
	* nis/nss_nisplus/nisplus-ethers.c: Likewise.
	* nis/nss_nisplus/nisplus-grp.c: Likewise.
	* nis/nss_nisplus/nisplus-hosts.c: Likewise.
	* nis/nss_nisplus/nisplus-initgroups.c: Likewise.
	* nis/nss_nisplus/nisplus-network.c: Likewise.
	* nis/nss_nisplus/nisplus-proto.c: Likewise.
	* nis/nss_nisplus/nisplus-pwd.c: Likewise.
	* nis/nss_nisplus/nisplus-rpc.c: Likewise.
	* nis/nss_nisplus/nisplus-service.c: Likewise.
	* nis/nss_nisplus/nisplus-spwd.c: Likewise.
	* nis/ypclnt.c: Likewise.
	* nptl/libc_pthread_init.c: Likewise.
	* nss/getXXbyYY.c: Likewise.
	* nss/getXXent.c: Likewise.
	* nss/getXXent_r.c: Likewise.
	* nss/nss_db/db-XXX.c: Likewise.
	* nss/nss_db/db-netgrp.c: Likewise.
	* nss/nss_db/nss_db.h: Likewise.
	* nss/nss_files/files-XXX.c: Likewise.
	* nss/nss_files/files-alias.c: Likewise.
	* nss/nsswitch.c: Likewise.
	* posix/regex_internal.h: Likewise.
	* posix/wordexp.c: Likewise.
	* pwd/fgetpwent.c: Likewise.
	* resolv/res_hconf.c: Likewise.
	* resolv/res_libc.c: Likewise.
	* shadow/fgetspent.c: Likewise.
	* shadow/lckpwdf.c: Likewise.
	* shadow/sgetspent.c: Likewise.
	* socket/opensock.c: Likewise.
	* stdio-common/reg-modifier.c: Likewise.
	* stdio-common/reg-printf.c: Likewise.
	* stdio-common/reg-type.c: Likewise.
	* stdio-common/vfprintf.c: Likewise.
	* stdio-common/vfscanf.c: Likewise.
	* stdlib/abort.c: Likewise.
	* stdlib/cxa_atexit.c: Likewise.
	* stdlib/fmtmsg.c: Likewise.
	* stdlib/random.c: Likewise.
	* stdlib/setenv.c: Likewise.
	* string/strsignal.c: Likewise.
	* sunrpc/auth_none.c: Likewise.
	* sunrpc/bindrsvprt.c: Likewise.
	* sunrpc/create_xid.c: Likewise.
	* sunrpc/key_call.c: Likewise.
	* sunrpc/rpc_thread.c: Likewise.
	* sysdeps/arm/backtrace.c: Likewise.
	* sysdeps/generic/ldsodefs.h: Likewise.
	* sysdeps/generic/stdio-lock.h: Likewise.
	* sysdeps/generic/unwind-dw2-fde.c: Likewise.
	* sysdeps/i386/backtrace.c: Likewise.
	* sysdeps/ieee754/ldbl-opt/nldbl-compat.c: Likewise.
	* sysdeps/m68k/backtrace.c: Likewise.
	* sysdeps/mach/hurd/cthreads.c: Likewise.
	* sysdeps/mach/hurd/dirstream.h: Likewise.
	* sysdeps/mach/hurd/malloc-machine.h: Likewise.
	* sysdeps/nptl/malloc-machine.h: Likewise.
	* sysdeps/nptl/stdio-lock.h: Likewise.
	* sysdeps/posix/dirstream.h: Likewise.
	* sysdeps/posix/getaddrinfo.c: Likewise.
	* sysdeps/posix/system.c: Likewise.
	* sysdeps/pthread/aio_suspend.c: Likewise.
	* sysdeps/s390/s390-32/backtrace.c: Likewise.
	* sysdeps/s390/s390-64/backtrace.c: Likewise.
	* sysdeps/unix/sysv/linux/check_pf.c: Likewise.
	* sysdeps/unix/sysv/linux/if_index.c: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-32/getutent_r.c: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-32/getutid_r.c: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-32/getutline_r.c: Likewise.
	* sysdeps/unix/sysv/linux/shm-directory.c: Likewise.
	* sysdeps/unix/sysv/linux/system.c: Likewise.
	* sysdeps/x86_64/backtrace.c: Likewise.
	* time/alt_digit.c: Likewise.
	* time/era.c: Likewise.
	* time/tzset.c: Likewise.
	* wcsmbs/wcsmbsload.c: Likewise.
	* nptl/tst-initializers1.c (do_test): Refer to <libc-lock.h>
	instead of <bits/libc-lock.h> in comment.
2015-09-08 21:11:03 +00:00
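For the internal files listed above the edit is mechanical; schematically
(an illustrative snippet, not taken from any specific file in the list):

/* Before the rename.  */
#include <bits/libc-lock.h>

/* After the rename.  */
#include <libc-lock.h>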
H.J. Lu
38d22f9f48 Don't disable SSE in x86-64 ld.so
Since x86-64 ld.so preserves vector registers now, we can use SSE in
x86-64 ld.so.  We should run tst-ld-sse-use.sh only on i386.

	* sysdeps/x86/Makefile [$(subdir) == elf] (CFLAGS-.os,
	tests-special, $(objpfx)tst-ld-sse-use.out): Moved to ...
	* sysdeps/i386/Makefile [$(subdir) == elf] (CFLAGS-.os,
	tests-special, $(objpfx)tst-ld-sse-use.out): Here.  Update
	comments.
	* sysdeps/x86_64/Makefile [$(subdir) == elf] (CFLAGS-.os): Add
	-mno-mmx for $(all-rtld-routines).
	* sysdeps/x86/tst-ld-sse-use.sh: Moved to ...
	* sysdeps/i386/tst-ld-sse-use.sh: Here.  Replace x86-64 with
	i386.
2015-08-26 07:55:42 -07:00
H.J. Lu
d8725b1fba Use SSE2 optimized strcmp in x86-64 ld.so
Since ld.so preserves vector registers now, we can use the same SSE2
optimized strcmp in x86-64 libc and ld.so.

	* sysdeps/x86_64/strcmp.S: Remove "#if !IS_IN (libc)".
2015-08-25 12:38:11 -07:00
H.J. Lu
2194737e77 Replace %xmm[8-12] with %xmm[0-4]
Since ld.so preserves vector registers now, we can use %xmm[0-4] to
avoid the REX prefix.

	* sysdeps/x86_64/strlen.S: Replace %xmm[8-12] with %xmm[0-4].
2015-08-25 08:51:23 -07:00
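The encoding win is small but real: touching %xmm8-%xmm15 forces a REX
prefix that %xmm0-%xmm7 do not need.  The byte sequences below are written
from memory of the SDM encoding rules, so treat them as illustrative only:

/* movdqu (%rdi), %xmm0  -- no REX prefix needed.  */
static const unsigned char enc_xmm0[] = { 0xF3, 0x0F, 0x6F, 0x07 };

/* movdqu (%rdi), %xmm8  -- REX.R (0x44) required to reach %xmm8.  */
static const unsigned char enc_xmm8[] = { 0xF3, 0x44, 0x0F, 0x6F, 0x07 };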
H.J. Lu
2339c6f4bd Remove x86-64 rtld-xxx.c and rtld-xxx.S
Since ld.so preserves vector registers now, we can use the regular,
non-ifunc string and memory functions in ld.so.

	* sysdeps/x86_64/rtld-memcmp.c: Removed.
	* sysdeps/x86_64/rtld-memset.S: Likewise.
	* sysdeps/x86_64/rtld-strchr.S: Likewise.
	* sysdeps/x86_64/rtld-strlen.S: Likewise.
	* sysdeps/x86_64/multiarch/rtld-memcmp.c: Likewise.
	* sysdeps/x86_64/multiarch/rtld-memset.S: Likewise.
2015-08-25 08:50:06 -07:00
H.J. Lu
5f92ec52e7 Replace %xmm8 with %xmm0
Since ld.so preserves vector registers now, we can use %xmm0 to avoid
the REX prefix.

	* sysdeps/x86_64/memset.S: Replace %xmm8 with %xmm0.
2015-08-25 08:48:34 -07:00
Ondřej Bílka
0d0325ed4b Fix strcpy_chk and stpcpy_chk performance.
As I wrote in previous patches, the performance of the checked strcpy and
stpcpy is poor: they do not use SSE2 and are currently around four times
slower than strcpy and stpcpy.

Since the bug shows that these functions are not performance sensitive, I
decided to simply improve the generic implementation instead, for easier
maintenance.

        * debug/strcpy_chk.c: Improve performance.
        * debug/stpcpy_chk.c: Likewise.
        * sysdeps/x86_64/strcpy_chk.S: Remove.
        * sysdeps/x86_64/stpcpy_chk.S: Remove.
2015-08-25 15:06:33 +02:00
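A hedged sketch of the approach (not the patch itself): the generic checked
copy can lean on strlen and memcpy, which already have per-arch optimized
versions, instead of copying byte by byte.  __chk_fail is the existing
fortify failure hook; the function name below is invented:

#include <string.h>

extern void __chk_fail (void) __attribute__ ((noreturn));

/* Measure once, check that the destination is large enough, then copy
   in bulk (including the terminating NUL).  */
char *
strcpy_chk_sketch (char *dest, const char *src, size_t destlen)
{
  size_t len = strlen (src) + 1;
  if (len > destlen)
    __chk_fail ();
  return memcpy (dest, src, len);
}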
H.J. Lu
f3dcae82d5 Save and restore vector registers in x86-64 ld.so
This patch adds SSE, AVX and AVX512 versions of _dl_runtime_resolve
and _dl_runtime_profile, which save and restore the first 8 vector
registers used for parameter passing.  elf_machine_runtime_setup
selects the proper _dl_runtime_resolve or _dl_runtime_profile based
on _dl_x86_cpu_features.  It avoids the race condition caused by the
FOREIGN_CALL macros, which are only used for x86-64.

The performance impact of saving and restoring 8 vector registers is
negligible on Nehalem, Sandy Bridge, Ivy Bridge and Haswell when
ld.so is optimized with SSE2.

	[BZ #15128]
	* sysdeps/x86_64/Makefile [$(subdir) == elf] (tests): Add
	ifuncmain8.
	(modules-names): Add ifuncmod8.
	($(objpfx)ifuncmain8): New rule.
	* sysdeps/x86_64/dl-machine.h: Include <dl-procinfo.h> and
	<cpuid.h>.
	(elf_machine_runtime_setup): Use _dl_runtime_resolve_sse,
	_dl_runtime_resolve_avx, or _dl_runtime_resolve_avx512,
	_dl_runtime_profile_sse, _dl_runtime_profile_avx, or
	_dl_runtime_profile_avx512, based on HAS_ARCH_FEATURE.
	* sysdeps/x86_64/dl-trampoline.S: Rewrite.
	* sysdeps/x86_64/dl-trampoline.h: Likewise.
	* sysdeps/x86_64/ifuncmain8.c: New file.
	* sysdeps/x86_64/ifuncmod8.c: Likewise.
	* sysdeps/x86_64/nptl/tcb-offsets.sym (RTLD_SAVESPACE_SSE):
	Removed.
	* sysdeps/x86_64/nptl/tls.h (__128bits): Removed.
	(tcbhead_t): Change rtld_must_xmm_save to __glibc_unused1.
	Change rtld_savespace_sse to __glibc_unused2.
	(RTLD_CHECK_FOREIGN_CALL): Removed.
	(RTLD_ENABLE_FOREIGN_CALL): Likewise.
	(RTLD_PREPARE_FOREIGN_CALL): Likewise.
	(RTLD_FINALIZE_FOREIGN_CALL): Likewise.
2015-08-25 04:34:13 -07:00
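A hedged, self-contained sketch of the selection logic described above; it
uses GCC's __builtin_cpu_supports rather than glibc's internal
HAS_ARCH_FEATURE, and it only picks a name instead of patching the GOT:

#include <stdio.h>

/* Pick the trampoline variant the dynamic linker would install,
   preferring the widest register set the CPU can use.  */
static const char *
pick_resolver (void)
{
  if (__builtin_cpu_supports ("avx512f"))
    return "_dl_runtime_resolve_avx512";
  if (__builtin_cpu_supports ("avx"))
    return "_dl_runtime_resolve_avx";
  return "_dl_runtime_resolve_sse";
}

int
main (void)
{
  puts (pick_resolver ());
  return 0;
}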
H.J. Lu
daa4db69fc Remove the unused IFUNC files
sysdeps/i386/i686/multiarch/strcasestr-c.c became unused after

commit 1818483b15
Author: Andreas Schwab <schwab@suse.de>
Date:   Wed Dec 18 11:53:27 2013 +1000

    Remove use of SSE4.2 functions for strstr on i686

which contains

-sysdep_routines += strcspn-c strpbrk-c strspn-c strstr-c strcasestr-c
+sysdep_routines += strcspn-c strpbrk-c strspn-c

sysdeps/x86_64/multiarch/strcasestr.c became useless after

commit 584b18eb4d
Author: Ondřej Bílka <neleai@seznam.cz>
Date:   Sat Dec 14 19:33:56 2013 +0100

    Add strstr with unaligned loads. Fixes bug 12100.

which changes sysdeps/x86_64/multiarch/strcasestr.c to

libc_ifunc (__strcasestr, __strcasestr_sse2);

This patch removes these files.

	* sysdeps/i386/i686/multiarch/strcasestr-c.c: Removed.
	* sysdeps/x86_64/multiarch/strcasestr.c: Likewise.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c (__libc_ifunc_impl_list):
	Remove strcasestr.
2015-08-20 12:47:20 -07:00
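For context, a hedged sketch of the GNU IFUNC mechanism that the libc_ifunc
macro quoted above wraps; the names here are invented, this is not glibc
code, and it needs an ELF toolchain with ifunc support:

#include <string.h>

/* Baseline implementation; a real multiarch setup would have several.  */
static size_t
my_strlen_generic (const char *s)
{
  return strlen (s);
}

/* The resolver runs once, when the symbol is first bound, and returns
   the implementation to use; a real selector would test CPU features.  */
static size_t (*resolve_my_strlen (void)) (const char *)
{
  return my_strlen_generic;
}

size_t my_strlen (const char *)
  __attribute__ ((ifunc ("resolve_my_strlen")));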
H.J. Lu
1ae6c72dc1 Move x86_64 init-arch.h to sysdeps/x86/init-arch.h
Move sysdeps/x86_64/multiarch/init-arch.h to sysdeps/x86/init-arch.h,
which can be used for both i386 and x86_64.

	* sysdeps/i386/i686/multiarch/init-arch.h: Removed.
	* sysdeps/unix/sysv/linux/x86/init-arch.h: Likewise.
	* sysdeps/x86_64/cacheinfo.c: Include <init-arch.h> instead
	of "multiarch/init-arch.h".
	* sysdeps/x86_64/multiarch/init-arch.h: Renamed to ...
	* sysdeps/x86/init-arch.h: This.
2015-08-20 04:29:23 -07:00
Petar Jovanovic
fa19d5c48a Fix dynamic linker issue with bind-now
Fix the bind-now case when DT_REL and DT_JMPREL sections are separate
and there is a gap between them.

	[BZ #14341]
	* elf/dynamic-link.h (elf_machine_lazy_rel): Properly handle the
	case when there is a gap between DT_REL and DT_JMPREL sections.
	* sysdeps/x86_64/Makefile (tests): Add tst-split-dynreloc.
	(LDFLAGS-tst-split-dynreloc): New.
	(tst-split-dynreloc-ENV): Likewise.
	* sysdeps/x86_64/tst-split-dynreloc.c: New file.
	* sysdeps/x86_64/tst-split-dynreloc.lds: Likewise.
2015-08-19 05:37:01 -07:00
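Schematically (a sketch of the idea, not the actual elf/dynamic-link.h
change): the two relocation tables may only be processed as one block when
the second starts exactly where the first ends; otherwise each range has to
be walked on its own.

#include <stddef.h>
#include <stdint.h>

struct reloc_range
{
  uintptr_t start;   /* d_ptr of DT_REL/DT_RELA or DT_JMPREL */
  size_t size;       /* DT_RELSZ/DT_RELASZ or DT_PLTRELSZ, in bytes */
};

/* Bind-now processing may merge the two tables only when they are
   contiguous; a gap means two separate passes.  */
static int
ranges_contiguous (const struct reloc_range *rel,
                   const struct reloc_range *jmprel)
{
  return rel->start + rel->size == jmprel->start;
}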