Commit Graph

110 Commits

Author SHA1 Message Date
Andrew Senkevich
ee2196bb67 Fixed the wrong vector sincos/sincosf ABI to make it compatible with the
current vector function declaration "#pragma omp declare simd notinbranch",
according to which vector sincos should take vectors of pointers for the
second and third parameters.  The fix implements the affected entry points as
wrappers around the versions that take the second and third parameters as
plain pointers.
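
A minimal C sketch of the wrapper idea for the 2-lane double case on x86-64, using GCC vector extensions; the names and the signature of the wrapped core routine are illustrative assumptions, not the actual libmvec symbols:

    typedef double v2df __attribute__ ((vector_size (16)));
    typedef long long v2di __attribute__ ((vector_size (16)));

    /* Existing implementation: second and third parameters are plain
       pointers to the result vectors (assumed signature).  */
    extern void sincos2_core (v2df x, v2df *sines, v2df *cosines);

    /* Fixed ABI per "#pragma omp declare simd notinbranch": one sine
       pointer and one cosine pointer per lane, passed as vectors of
       pointers.  The wrapper unpacks them and scatters the results.  */
    void
    sincos2_fixed_abi (v2df x, v2di sin_ptrs, v2di cos_ptrs)
    {
      v2df s, c;
      sincos2_core (x, &s, &c);
      *(double *) sin_ptrs[0] = s[0];
      *(double *) sin_ptrs[1] = s[1];
      *(double *) cos_ptrs[0] = c[0];
      *(double *) cos_ptrs[1] = c[1];
    }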

    [BZ #20024]
    * sysdeps/x86/fpu/test-math-vector-sincos.h: New.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core_sse4.S: Fixed ABI
    of this implementation of vector function.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core_avx2.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core_avx512.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core_avx512.S:
    Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core_sse4.S: Likewise.
    * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core_avx2.S: Likewise.
    * sysdeps/x86_64/fpu/svml_d_sincos2_core.S: Likewise.
    * sysdeps/x86_64/fpu/svml_d_sincos4_core.S: Likewise.
    * sysdeps/x86_64/fpu/svml_d_sincos4_core_avx.S: Likewise.
    * sysdeps/x86_64/fpu/svml_d_sincos8_core.S: Likewise.
    * sysdeps/x86_64/fpu/svml_s_sincosf16_core.S: Likewise.
    * sysdeps/x86_64/fpu/svml_s_sincosf4_core.S: Likewise.
    * sysdeps/x86_64/fpu/svml_s_sincosf8_core.S: Likewise.
    * sysdeps/x86_64/fpu/svml_s_sincosf8_core_avx.S: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen2-wrappers.c: Use another wrapper
    for testing vector sincos with fixed ABI.
    * sysdeps/x86_64/fpu/test-double-vlen4-avx2-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen4-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen8-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen16-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8-avx2-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-libmvec-sincos-avx.c: New test.
    * sysdeps/x86_64/fpu/test-double-libmvec-sincos-avx2.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-libmvec-sincos-avx512.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-libmvec-sincos.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-sincosf-avx.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-sincosf-avx2.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-sincosf-avx512.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-libmvec-sincosf.c: Likewise.
    * sysdeps/x86_64/fpu/Makefile: Added new tests.
2016-07-01 14:15:38 +03:00
H.J. Lu
13efa86ece Check Prefer_ERMS in memmove/memcpy/mempcpy/memset
Although the Enhanced REP MOVSB/STOSB (ERMS) implementations of memmove,
memcpy, mempcpy and memset aren't used by current processors, this patch
adds a Prefer_ERMS check to memmove, memcpy, mempcpy and memset so that
they can be used in the future.
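
For context, a hedged sketch of what an ERMS-based memcpy essentially boils down to: a single REP MOVSB, which microcode on ERMS-capable processors turns into a fast block copy (illustrative code, not glibc's implementation):

    #include <stddef.h>

    static void *
    erms_memcpy (void *dst, const void *src, size_t n)
    {
      void *ret = dst;
      /* REP MOVSB copies RCX bytes from [RSI] to [RDI].  */
      asm volatile ("rep movsb"
                    : "+D" (dst), "+S" (src), "+c" (n)
                    :
                    : "memory");
      return ret;
    }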

	* sysdeps/x86/cpu-features.h (bit_arch_Prefer_ERMS): New.
	(index_arch_Prefer_ERMS): Likewise.
	* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Return
	__memcpy_erms for Prefer_ERMS.
	* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
	(__memmove_erms): Enabled for libc.a.
	* sysdeps/x86_64/multiarch/memmove.S (__libc_memmove): Return
	__memmove_erms for Prefer_ERMS.
	* sysdeps/x86_64/multiarch/mempcpy.S (__mempcpy): Return
	__mempcpy_erms for Prefer_ERMS.
	* sysdeps/x86_64/multiarch/memset.S (memset): Return
	__memset_erms for Prefer_ERMS.
2016-06-30 07:58:11 -07:00
Andreas Schwab
fea56491c4 Avoid array-bounds warning for strncat on i586 (bug 20260) 2016-06-29 17:15:40 +02:00
H.J. Lu
91655fc307 Check FMA after COMMON_CPUID_INDEX_80000001
Since the FMA4 bit is in COMMON_CPUID_INDEX_80000001 and FMA4 requires
AVX, determine whether FMA4 is usable only after COMMON_CPUID_INDEX_80000001
is available and AVX is known to be usable.
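
A hedged sketch of the ordering this enforces, using GCC's <cpuid.h>; FMA4 is reported in CPUID leaf 0x80000001, ECX bit 16, but only counts as usable when AVX is usable:

    #include <cpuid.h>
    #include <stdbool.h>

    static bool
    fma4_usable (bool avx_usable)
    {
      unsigned int eax, ebx, ecx, edx;
      /* Leaf 0x80000001 must be read before the FMA4 bit can be tested.  */
      if (!__get_cpuid (0x80000001, &eax, &ebx, &ecx, &edx))
        return false;
      return avx_usable && (ecx & (1u << 16)) != 0;   /* ECX bit 16 = FMA4 */
    }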

	[BZ #20195]
	* sysdeps/x86/cpu-features.c (get_common_indeces): Move FMA4
	check to ...
	(init_cpu_features): Here.
2016-06-07 08:00:40 -07:00
H.J. Lu
d6af2388f7 Count number of logical processors sharing L2 cache
For Intel processors, when there are both L2 and L3 caches, the SMT level
type should be used to count the number of available logical processors
sharing the L2 cache.  If there is only an L2 cache, the core level type
should be used instead.  The number of available logical processors
sharing the L2 cache should be used for non-inclusive L2 and L3 caches.
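
A hedged sketch of the counting logic using CPUID leaf 0xB (extended topology): pick the SMT-level count when an L3 exists, otherwise the core-level count. The field layouts follow the CPUID documentation; the helper itself is illustrative:

    #include <cpuid.h>

    static unsigned int
    threads_sharing_l2 (int has_l3)
    {
      unsigned int wanted = has_l3 ? 1 : 2;   /* level type: 1 = SMT, 2 = core */
      if (__get_cpuid_max (0, 0) < 0xb)
        return 1;
      for (unsigned int sub = 0; ; sub++)
        {
          unsigned int eax, ebx, ecx, edx;
          __cpuid_count (0xb, sub, eax, ebx, ecx, edx);
          unsigned int level_type = (ecx >> 8) & 0xff;
          if (level_type == 0)
            break;                            /* no more topology levels */
          if (level_type == wanted)
            return ebx & 0xffff;              /* logical processors at this level */
        }
      return 1;
    }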

	* sysdeps/x86/cacheinfo.c (init_cacheinfo): Count number of
	available logical processors with SMT level type sharing L2
	cache for Intel processors.
2016-05-27 15:16:51 -07:00
H.J. Lu
b7598b1b85 Remove special L2 cache case for Knights Landing
The L2 cache is shared by 2 cores on Knights Landing, which has 4 threads
per core:

https://en.wikipedia.org/wiki/Xeon_Phi#Knights_Landing

So the L2 cache is shared by 8 threads on Knights Landing, as reported by
CPUID.  We should remove the special L2 cache case for Knights Landing.

	[BZ #18185]
	* sysdeps/x86/cacheinfo.c (init_cacheinfo): Don't limit threads
	sharing L2 cache to 2 for Knights Landing.
2016-05-20 14:42:00 -07:00
H.J. Lu
de71e0421b Correct Intel processor level type mask from CPUID
Intel CPUID with EAX == 11 returns:

ECX Bits 07 - 00: Level number. Same value in ECX input.
    Bits 15 - 08: Level type.
    ^^^^^^^^^^^^^^^^^^^^^^^^ This is level type.
    Bits 31 - 16: Reserved.

Intel processor level type mask should be 0xff00, not 0xff0.

	[BZ #20119]
	* sysdeps/x86/cacheinfo.c (init_cacheinfo): Correct Intel
	processor level type mask for CPUID with EAX == 11.
2016-05-19 10:02:36 -07:00
H.J. Lu
7c08d791ee Check the HTT bit before counting logical threads
Skip counting logical threads for Intel processors if the HTT bit is 0,
which indicates there is only a single logical processor.
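
A hedged sketch of the check, using GCC's <cpuid.h>; HTT is CPUID leaf 1, EDX bit 28:

    #include <cpuid.h>
    #include <stdbool.h>

    static bool
    has_htt (void)
    {
      unsigned int eax, ebx, ecx, edx;
      if (!__get_cpuid (1, &eax, &ebx, &ecx, &edx))
        return false;
      return (edx & (1u << 28)) != 0;   /* EDX bit 28 = HTT */
    }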

	* sysdeps/x86/cacheinfo.c (init_cacheinfo): Skip counting
	logical threads if the HTT bit is 0.
	* sysdeps/x86/cpu-features.h (bit_cpu_HTT): New.
	(index_cpu_HTT): Likewise.
	(reg_HTT): Likewise.
2016-05-19 09:09:00 -07:00
H.J. Lu
9e4ec3e816 Support non-inclusive caches on Intel processors
* sysdeps/x86/cacheinfo.c (init_cacheinfo): Check and support
	non-inclusive caches on Intel processors.
2016-05-13 07:18:35 -07:00
H.J. Lu
2a1f15b1a9 Remove x86 ifunc-defines.sym and rtld-global-offsets.sym
Merge x86 ifunc-defines.sym with x86 cpu-features-offsets.sym.  Remove
x86 ifunc-defines.sym and rtld-global-offsets.sym.  No code changes on
i686 and x86-64.

	* sysdeps/i386/i686/multiarch/Makefile (gen-as-const-headers):
	Remove ifunc-defines.sym.
	* sysdeps/x86_64/multiarch/Makefile (gen-as-const-headers):
	Likewise.
	* sysdeps/i386/i686/multiarch/ifunc-defines.sym: Removed.
	* sysdeps/x86/rtld-global-offsets.sym: Likewise.
	* sysdeps/x86_64/multiarch/ifunc-defines.sym: Likewise.
	* sysdeps/x86/Makefile (gen-as-const-headers): Remove
	rtld-global-offsets.sym.
	* sysdeps/x86_64/multiarch/ifunc-defines.sym: Merged with ...
	* sysdeps/x86/cpu-features-offsets.sym: This.
	* sysdeps/x86/cpu-features.h: Include <cpu-features-offsets.h>
	instead of <ifunc-defines.h> and <rtld-global-offsets.h>.
2016-05-11 05:51:39 -07:00
H.J. Lu
a9558b49b3 Move sysdeps/x86_64/cacheinfo.c to sysdeps/x86
Move sysdeps/x86_64/cacheinfo.c to sysdeps/x86.  No code changes on x86
and x86_64.

	* sysdeps/i386/cacheinfo.c: Include <sysdeps/x86/cacheinfo.c>
	instead of <sysdeps/x86_64/cacheinfo.c>.
	* sysdeps/x86_64/cacheinfo.c: Moved to ...
	* sysdeps/x86/cacheinfo.c: Here.
2016-05-08 08:49:18 -07:00
H.J. Lu
2e2d9796da Detect Intel Goldmont and Airmont processors
Updated based on the model numbers of Goldmont and Airmont processors in
the Intel 64 and IA-32 Architectures Software Developer's Manual,
Volume 3, Revision 058.

	* sysdeps/x86/cpu-features.c (init_cpu_features): Detect Intel
	Goldmont and Airmont processors.
2016-04-15 05:23:06 -07:00
H.J. Lu
27d3ce1467 Remove Fast_Copy_Backward from Intel Core processors
Intel Core i3, i5 and i7 processors have fast unaligned copy, and copy
backward is ignored.  Remove Fast_Copy_Backward from Intel Core
processors to avoid confusion.

	* sysdeps/x86/cpu-features.c (init_cpu_features): Don't set
	bit_arch_Fast_Copy_Backward for Intel Core processors.
2016-04-01 15:09:14 -07:00
H.J. Lu
0791f91dff Initial Enhanced REP MOVSB/STOSB (ERMS) support
Newer Intel processors support Enhanced REP MOVSB/STOSB (ERMS), which
has a feature bit in CPUID.  This patch adds the Enhanced REP MOVSB/STOSB
(ERMS) bit to x86 cpu-features.
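
A hedged sketch of detecting the bit with GCC's <cpuid.h>; ERMS is CPUID leaf 7 (sub-leaf 0), EBX bit 9:

    #include <cpuid.h>
    #include <stdbool.h>

    static bool
    has_erms (void)
    {
      unsigned int eax, ebx, ecx, edx;
      if (__get_cpuid_max (0, 0) < 7)
        return false;
      __cpuid_count (7, 0, eax, ebx, ecx, edx);
      return (ebx & (1u << 9)) != 0;   /* EBX bit 9 = ERMS */
    }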

	* sysdeps/x86/cpu-features.h (bit_cpu_ERMS): New.
	(index_cpu_ERMS): Likewise.
	(reg_ERMS): Likewise.
2016-03-28 19:23:31 -07:00
H.J. Lu
e41b395523 [x86] Add a feature bit: Fast_Unaligned_Copy
On AMD processors, memcpy optimized with unaligned SSE load is
slower than memcpy optimized with aligned SSSE3, while other string
functions are faster with unaligned SSE load.  A feature bit,
Fast_Unaligned_Copy, is added to select memcpy optimized with
unaligned SSE load.

	[BZ #19583]
	* sysdeps/x86/cpu-features.c (init_cpu_features): Set
	Fast_Unaligned_Copy with Fast_Unaligned_Load for Intel
	processors.  Set Fast_Copy_Backward for AMD Excavator
	processors.
	* sysdeps/x86/cpu-features.h (bit_arch_Fast_Unaligned_Copy):
	New.
	(index_arch_Fast_Unaligned_Copy): Likewise.
	* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Check
	Fast_Unaligned_Copy instead of Fast_Unaligned_Load.
2016-03-28 04:40:03 -07:00
H.J. Lu
f781a9e961 Set index_arch_AVX_Fast_Unaligned_Load only for Intel processors
Since only Intel processors with AVX2 have fast unaligned load, we
should set index_arch_AVX_Fast_Unaligned_Load only for Intel processors.

Move AVX, AVX2, AVX512, FMA and FMA4 detection into get_common_indeces
and call get_common_indeces for other processors.

Add CPU_FEATURES_CPU_P and CPU_FEATURES_ARCH_P to avoid loading
GLRO(dl_x86_cpu_features) in cpu-features.c.

	[BZ #19583]
	* sysdeps/x86/cpu-features.c (get_common_indeces): Remove
	inline.  Check family before setting family, model and
	extended_model.  Set AVX, AVX2, AVX512, FMA and FMA4 usable
	bits here.
	(init_cpu_features): Replace HAS_CPU_FEATURE and
	HAS_ARCH_FEATURE with CPU_FEATURES_CPU_P and
	CPU_FEATURES_ARCH_P.  Set index_arch_AVX_Fast_Unaligned_Load
	for Intel processors with usable AVX2.  Call get_common_indeces
	for other processors with family == NULL.
	* sysdeps/x86/cpu-features.h (CPU_FEATURES_CPU_P): New macro.
	(CPU_FEATURES_ARCH_P): Likewise.
	(HAS_CPU_FEATURE): Use CPU_FEATURES_CPU_P.
	(HAS_ARCH_FEATURE): Use CPU_FEATURES_ARCH_P.
2016-03-22 07:47:20 -07:00
H.J. Lu
6aa3e97e25 Add _arch_/_cpu_ to index_*/bit_* in x86 cpu-features.h
index_* and bit_* macros are used to access the cpuid and feature arrays of
struct cpu_features.  It is very easy to mistakenly use bits and indices of
the cpuid array on the feature array, especially in assembly code.  For
example, sysdeps/i386/i686/multiarch/bcopy.S has

	HAS_CPU_FEATURE (Fast_Rep_String)

which should be

	HAS_ARCH_FEATURE (Fast_Rep_String)

We change index_* and bit_* to index_cpu_*/index_arch_* and
bit_cpu_*/bit_arch_* so that we can catch such errors at build time.
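
A hedged illustration of the naming scheme after the rename; the SSE2 bit really is EDX bit 26 of CPUID leaf 1, but the index values below are placeholders, not the literal glibc definitions:

    /* CPUID-reported features live in the cpu array ...  */
    #define bit_cpu_SSE2               (1 << 26)   /* CPUID leaf 1, EDX bit 26 */
    #define index_cpu_SSE2             0           /* slot in the cpuid array */

    /* ... while glibc-internal feature bits live in the feature array, so
       mixing the two namespaces is now caught at build time.  */
    #define bit_arch_Fast_Rep_String   (1 << 0)
    #define index_arch_Fast_Rep_String 0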

	[BZ #19762]
	* sysdeps/unix/sysv/linux/x86_64/64/dl-librecon.h
	(EXTRA_LD_ENVVARS): Add _arch_ to index_*/bit_*.
	* sysdeps/x86/cpu-features.c (init_cpu_features): Likewise.
	* sysdeps/x86/cpu-features.h (bit_*): Renamed to ...
	(bit_arch_*): This for feature array.
	(bit_*): Renamed to ...
	(bit_cpu_*): This for cpu array.
	(index_*): Renamed to ...
	(index_arch_*): This for feature array.
	(index_*): Renamed to ...
	(index_cpu_*): This for cpu array.
	[__ASSEMBLER__] (HAS_FEATURE): Add and use field.
	[__ASSEMBLER__] (HAS_CPU_FEATURE): Pass cpu to HAS_FEATURE.
	[__ASSEMBLER__] (HAS_ARCH_FEATURE): Pass arch to HAS_FEATURE.
	[!__ASSEMBLER__] (HAS_CPU_FEATURE): Replace index_##name and
	bit_##name with index_cpu_##name and bit_cpu_##name.
	[!__ASSEMBLER__] (HAS_ARCH_FEATURE): Replace index_##name and
	bit_##name with index_arch_##name and bit_arch_##name.
2016-03-10 05:27:07 -08:00
H.J. Lu
2b35e48c0c Define _HAVE_STRING_ARCH_mempcpy to 1 for x86
Since x86 has an optimized mempcpy and GCC can inline mempcpy on x86,
define _HAVE_STRING_ARCH_mempcpy to 1 for x86.

	[BZ #19759]
	* sysdeps/x86/bits/string.h (_HAVE_STRING_ARCH_mempcpy): New.
2016-03-08 10:57:41 -08:00
H.J. Lu
16396c41de Add _STRING_INLINE_unaligned and string_private.h
As discussed in

https://sourceware.org/ml/libc-alpha/2015-10/msg00403.html

the setting of _STRING_ARCH_unaligned currently controls the external
GLIBC ABI as well as selecting the use of unaligned accesses within
GLIBC.

Since _STRING_ARCH_unaligned was recently changed for AArch64, this
would potentially break the ABI in GLIBC 2.23, so split the uses and add
_STRING_INLINE_unaligned to select the string ABI.  This setting must be
fixed for each target, while _STRING_ARCH_unaligned may be changed from
release to release.  _STRING_ARCH_unaligned is used unconditionally in
glibc, but <bits/string.h>, which defines _STRING_ARCH_unaligned, isn't
included with -Os.  Since _STRING_ARCH_unaligned is internal to glibc and
may change between glibc releases, it should be made private to glibc:
it should be defined in the new string_private.h header file, which is
included unconditionally from the internal <string.h> for the glibc
build.
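
A plausible sketch of what the generic string_private.h carries (the x86 version would define the macro to 1); this mirrors the idea described above rather than quoting the actual file:

    /* Define to 1 if the architecture can access unaligned multi-byte
       variables efficiently; kept private to the glibc build.  */
    #define _STRING_ARCH_unaligned 0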

	[BZ #19462]
	* bits/string.h (_STRING_ARCH_unaligned): Renamed to ...
	(_STRING_INLINE_unaligned): This.
	* include/string.h: Include <string_private.h>.
	* string/bits/string2.h: Replace _STRING_ARCH_unaligned with
	_STRING_INLINE_unaligned.
	* sysdeps/aarch64/bits/string.h (_STRING_ARCH_unaligned): Removed.
	(_STRING_INLINE_unaligned): New.
	* sysdeps/aarch64/string_private.h: New file.
	* sysdeps/generic/string_private.h: Likewise.
	* sysdeps/m68k/m680x0/m68020/string_private.h: Likewise.
	* sysdeps/s390/string_private.h: Likewise.
	* sysdeps/x86/string_private.h: Likewise.
	* sysdeps/m68k/m680x0/m68020/bits/string.h
	(_STRING_ARCH_unaligned): Renamed to ...
	(_STRING_INLINE_unaligned): This.
	* sysdeps/s390/bits/string.h (_STRING_ARCH_unaligned): Renamed
	to ...
	(_STRING_INLINE_unaligned): This.
	* sysdeps/sparc/bits/string.h (_STRING_ARCH_unaligned): Renamed
	to ...
	(_STRING_INLINE_unaligned): This.
	* sysdeps/x86/bits/string.h (_STRING_ARCH_unaligned): Renamed
	to ...
	(_STRING_INLINE_unaligned): This.
2016-02-18 14:55:29 -02:00
Amit Pawar
d7890e6947 Set index_Fast_Unaligned_Load for Excavator family CPUs
GLIBC benchtest test cases show that SSE2_Unaligned-based implementations
perform faster than SSE2-based implementations for these routines:
strcmp, strcat, strncat, stpcpy, stpncpy, strcpy, strncpy and strstr.
The index_Fast_Unaligned_Load flag is set for Excavator family 0x15
CPUs.  This makes the SSE2_Unaligned-based implementations the default
for these routines.

	[BZ #19467]
	* sysdeps/x86/cpu-features.c (init_cpu_features): Set
	index_Fast_Unaligned_Load flag for Excavator family CPUs.
2016-01-14 08:14:31 -08:00
Joseph Myers
f7a9f785e5 Update copyright dates with scripts/update-copyrights. 2016-01-04 16:05:18 +00:00
Andrew Senkevich
83d776f979 Added memset optimized with AVX512 for KNL hardware.
It shows an improvement of up to 28% over the AVX2 memset (performance
results are attached at <https://sourceware.org/ml/libc-alpha/2015-12/msg00052.html>).

    * sysdeps/x86_64/multiarch/memset-avx512-no-vzeroupper.S: New file.
    * sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Added new file.
    * sysdeps/x86_64/multiarch/ifunc-impl-list.c: Added new tests.
    * sysdeps/x86_64/multiarch/memset.S: Added new IFUNC branch.
    * sysdeps/x86_64/multiarch/memset_chk.S: Likewise.
    * sysdeps/x86/cpu-features.h (bit_Prefer_No_VZEROUPPER,
    index_Prefer_No_VZEROUPPER): New.
    * sysdeps/x86/cpu-features.c (init_cpu_features): Set the
    Prefer_No_VZEROUPPER for Knights Landing.
2015-12-19 02:47:28 +03:00
H.J. Lu
b9eb92ab05 Add Prefer_MAP_32BIT_EXEC to map executable pages with MAP_32BIT
According to the Silvermont software optimization guide, for 64-bit
applications, branch prediction performance can be negatively impacted
when the target of a branch is more than 4GB away from the branch.  Add
the Prefer_MAP_32BIT_EXEC bit so that mmap will try to map executable
pages with MAP_32BIT first.  NB: MAP_32BIT maps to an address in the
lower 2GB, not the lower 4GB.  Prefer_MAP_32BIT_EXEC reduces the bits
available for address space layout randomization (ASLR); it is therefore
always disabled for SUID programs and can only be enabled by setting the
environment variable LD_PREFER_MAP_32BIT_EXEC.

On Fedora 23, this patch speeds up GCC 5 testsuite by 3% on Silvermont.
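
A hedged sketch of the "try MAP_32BIT first" idea for executable mappings on Linux/x86-64; glibc's real hook lives in its internal mmap wrapper and is gated on Prefer_MAP_32BIT_EXEC, so this helper is only illustrative:

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <stddef.h>

    static void *
    mmap_exec_32bit_first (size_t len)
    {
      void *p = mmap (NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_32BIT, -1, 0);
      if (p == MAP_FAILED)   /* fall back to the normal address range */
        p = mmap (NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      return p;
    }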

	[BZ #19367]
	* sysdeps/unix/sysv/linux/wordsize-64/mmap.c: New file.
	* sysdeps/unix/sysv/linux/x86_64/64/dl-librecon.h: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/64/mmap.c: Likewise.
	* sysdeps/x86/cpu-features.h (bit_Prefer_MAP_32BIT_EXEC): New.
	(index_Prefer_MAP_32BIT_EXEC): Likewise.
2015-12-15 13:16:02 -08:00
H.J. Lu
c9afcaaafa Enable Silvermont optimizations for Knights Landing
Knights Landing processor is based on Silvermont.  This patch enables
Silvermont optimizations for Knights Landing.

	* sysdeps/x86/cpu-features.c (init_cpu_features): Enable
	Silvermont optimizations for Knights Landing.
2015-12-15 11:46:54 -08:00
Andrew Senkevich
377ed004f2 Utilize x86_64 vector math functions w/o -fopenmp.
This patch allows x86_64 vector math functions to be used with GCC 6.*
without OpenMP SIMD constructs.  For additional details please visit
<https://sourceware.org/glibc/wiki/libmvec#Example_2>.
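
A hedged usage sketch: once math-vector.h declares the scalar functions with __attribute__ ((__simd__)), GCC 6 and later can vectorize a plain loop into libmvec calls with no OpenMP pragmas.  The compile line is an assumption; adjust it to the target:

    /* gcc -O3 -march=haswell -ffast-math vecsin.c -lmvec -lm */
    #include <math.h>

    void
    apply_sin (double *restrict out, const double *restrict in, int n)
    {
      for (int i = 0; i < n; i++)
        out[i] = sin (in[i]);   /* may be compiled into a vector sin call */
    }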

    * sysdeps/x86/fpu/bits/math-vector.h: W/o -fopenmp declare vector math
    functions with GCC 6.* __attribute__ ((__simd__)).
2015-12-07 21:58:26 +03:00
H.J. Lu
9627da32ec Update family and model detection for AMD CPUs
AMD CPUs use a similar encoding scheme for extended family and model
to Intel CPUs, as shown in:

http://support.amd.com/TechDocs/25481.pdf

This patch updates get_common_indeces to get the family and model for
both Intel and AMD CPUs when family == 0x0f.
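
A hedged sketch of the decoding this makes common to both vendors; per the CPUID documentation, the extended fields are added only when the base family is 0x0f:

    static void
    decode_signature (unsigned int eax, unsigned int *family,
                      unsigned int *model)
    {
      *family = (eax >> 8) & 0x0f;
      *model = (eax >> 4) & 0x0f;
      if (*family == 0x0f)
        {
          *family += (eax >> 20) & 0xff;        /* extended family */
          *model += ((eax >> 16) & 0x0f) << 4;  /* extended model */
        }
    }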

	[BZ #19214]
	* sysdeps/x86/cpu-features.c (get_common_indeces): Add an
	argument to return extended model.  Update family and model
	with extended family and model when family == 0x0f.
	(init_cpu_features): Updated.
2015-11-30 09:01:31 -08:00
Andrew Senkevich
977a30801f Better workaround for aliases of *_finite symbols in vector math library.
The old workaround based on assembly aliases can lead to a link failure
(bug 19058).  This patch implements the workaround in another way to
avoid that.

    [BZ #19058]
    * math/Makefile ($(inst_libdir)/libm.so): Added libmvec_nonshared.a
    to AS_NEEDED.
    * sysdeps/x86/fpu/bits/math-vector.h: Removed code with old workaround.
    * sysdeps/x86_64/fpu/Makefile (libmvec-support,
    libmvec-static-only-routines): Added new file.
    * sysdeps/x86_64/fpu/svml_finite_alias.S: New file.
2015-11-27 16:22:26 +03:00
H.J. Lu
89569c8bb6 Run tst-prelink test for GLOB_DAT reloc
Run the tst-prelink test on targets with the GLOB_DAT relocation.

	* config.make.in (have-glob-dat-reloc): New.
	* configure.ac (libc_cv_has_glob_dat): New.  Set to yes if
	target supports GLOB_DAT relocation.  AC_SUBST.
	* configure: Regenerated.
	* elf/Makefile (tests): Add tst-prelink.
	(tests-special): Add $(objpfx)tst-prelink-cmp.out.
	(tst-prelink-ENV): New.
	($(objpfx)tst-prelink-conflict.out): Likewise.
	($(objpfx)tst-prelink-cmp.out): Likewise.
	* sysdeps/x86/tst-prelink.c: Moved to ...
	* elf/tst-prelink.c: Here.
	* sysdeps/x86/tst-prelink.exp: Moved to ...
	* elf/tst-prelink.exp: Here.
	* sysdeps/x86/Makefile (tests): Don't add tst-prelink.
	(tst-prelink-ENV): Removed.
	($(objpfx)tst-prelink-conflict.out): Likewise.
	($(objpfx)tst-prelink-cmp.out): Likewise.
	(tests-special): Don't add $(objpfx)tst-prelink-cmp.out.
2015-11-14 12:00:38 -08:00
H.J. Lu
fe534fe898 Add a test for prelink output
This test applies to i386 and x86_64 which set R_386_GLOB_DAT and
R_X86_64_GLOB_DAT to ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA.

	[BZ #19178]
	* sysdeps/x86/Makefile (tests): Add tst-prelink.
	(tst-prelink-ENV): New.
	($(objpfx)tst-prelink-conflict.out): Likewise.
	($(objpfx)tst-prelink-cmp.out): Likewise.
	(tests-special): Add $(objpfx)tst-prelink-cmp.out.
	* sysdeps/x86/tst-prelink.c: New file.
	* sysdeps/x86/tst-prelink.exp: Likewise.
2015-11-10 12:27:24 -08:00
Joseph Myers
2145f97cee Handle more state in i386/x86_64 fesetenv (bug 16068).
fenv_t should include architecture-specific floating-point modes and
status flags.  i386 and x86_64 fesetenv limit which bits they use from
the x87 status and control words, when using saved state, and limit
which parts of the state they set to fixed values, when using
FE_DFL_ENV / FE_NOMASK_ENV.  The following should be included but are
excluded in at least some cases: status and masking for the "denormal
operand" exception (which isn't part of FE_ALL_EXCEPT); precision
control (explicitly mentioned in Annex F as something that counts as
part of the floating-point environment); MXCSR FZ and DAZ bits (for
FE_DFL_ENV and FE_NOMASK_ENV).  This patch arranges for this extra
state to be handled by fesetenv (and thereby by feupdateenv, which
calls fesetenv).

(Note that glibc functions using floating point are not generally
expected to work correctly with non-default values of this state,
especially precision control, but it is still logically part of the
floating-point environment and should be handled as such by fesetenv.
Changes to the state relating to subnormals ought generally to work
with libm functions when the arguments aren't subnormal and neither
are the expected results; that's a consequence of functions avoiding
spurious internal underflows.)

A question arising from this is whether FE_NOMASK_ENV should or should
not mask the "denormal operand" exception.  I decided it should mask
that exception.  This is the status quo - previously that exception
could only be unmasked by direct manipulation of control registers
(possibly via <fpu_control.h>).  In addition, it means that use of
FE_NOMASK_ENV leaves a floating-point environment the same as could be
obtained by fesetenv (FE_DFL_ENV); feenableexcept (FE_ALL_EXCEPT);,
rather than an environment in which an exception is unmasked that
could only be masked again by using fesetenv with FE_DFL_ENV (or a
previously saved environment) - this exception not being usable with
other <fenv.h> functions because it's outside FE_ALL_EXCEPT.

Tested for x86_64 and x86.
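
For reference, a hedged illustration of the extra SSE state involved: FZ (flush-to-zero) is MXCSR bit 15 and DAZ (denormals-are-zero) is bit 6, and FE_DFL_ENV / FE_NOMASK_ENV are expected to clear both.  This uses the SSE intrinsics rather than glibc's internal fesetenv code:

    #include <xmmintrin.h>

    static void
    clear_fz_daz (void)
    {
      unsigned int mxcsr = _mm_getcsr ();
      mxcsr &= ~0x8000u;   /* FZ,  bit 15 */
      mxcsr &= ~0x0040u;   /* DAZ, bit 6 */
      _mm_setcsr (mxcsr);
    }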

	[BZ #16068]
	* sysdeps/i386/fpu/fesetenv.c: Include <fpu_control.h>.
	(FE_ALL_EXCEPT_X86): New macro.
	(__fesetenv): Use FE_ALL_EXCEPT_X86 in most places instead of
	FE_ALL_EXCEPT.  Ensure precision control is included in
	floating-point state.  Ensure that FE_DFL_ENV and FE_NOMASK_ENV
	handle "denormal operand exception" and clear FZ and DAZ bits.
	* sysdeps/x86_64/fpu/fesetenv.c: Include <fpu_control.h>.
	(FE_ALL_EXCEPT_X86): New macro.
	(__fesetenv): Use FE_ALL_EXCEPT_X86 in most places instead of
	FE_ALL_EXCEPT.  Ensure precision control is included in
	floating-point state.  Ensure that FE_DFL_ENV and FE_NOMASK_ENV
	handle "denormal operand exception" and clear FZ and DAZ bits.
	* sysdeps/x86/fpu/test-fenv-sse-2.c: New file.
	* sysdeps/x86/fpu/test-fenv-x87.c: Likewise.
	* sysdeps/x86/fpu/Makefile [$(subdir) = math] (tests): Add
	test-fenv-x87 and test-fenv-sse-2.
	[$(subdir) = math] (CFLAGS-test-fenv-sse-2.c): New variable.
2015-10-28 22:58:29 +00:00
Joseph Myers
0b9af583a5 Fix i386/x86_64 fesetenv SSE exception clearing (bug 19181).
The i386 and x86_64 versions of fesetenv, when called with FE_DFL_ENV
or FE_NOMASK_ENV as argument, do not clear SSE exceptions raised in
MXCSR.  These arguments should, like other fenv_t values, represent
the whole of the floating-point state, so such exceptions should be
cleared; this patch adds the required clearing.  (Discovered while
working on bug 16068.)

Tested for x86_64 and x86.

	[BZ #19181]
	* sysdeps/i386/fpu/fesetenv.c (__fesetenv): Clear already-raised
	SSE exceptions when argument is FE_DFL_ENV or FE_NOMASK_ENV.
	* sysdeps/x86_64/fpu/fesetenv.c (__fesetenv): Likewise.
	* math/test-fenv-clear-main.c: New file.
	* math/test-fenv-clear.c: Likewise.
	* math/Makefile (tests): Add test-fenv-clear.
	* sysdeps/x86/fpu/test-fenv-clear-sse.c: New file.
	* sysdeps/x86/fpu/Makefile [$(subdir) = math] (tests): Add
	test-fenv-clear-sse.
	[$(subdir) = math] (CFLAGS-test-fenv-clear-sse.c): New variable.
2015-10-28 18:50:20 +00:00
Joseph Myers
6ace393821 Fix pow missing underflows (bug 18825).
Similar to various other bugs in this area, pow functions can fail to
raise the underflow exception when the result is tiny and inexact but
one or more low bits of the intermediate result that is scaled down
(or, in the i386 case, converted from a wider evaluation format) are
zero.  This patch forces the exception in a similar way to previous
fixes, thereby concluding the fixes for known bugs with missing
underflow exceptions currently filed in Bugzilla.

Tested for x86_64, x86, mips64 and powerpc.
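
A hedged sketch of the forcing idiom (it mirrors the idea behind math_check_force_underflow, not its exact code): when the result is known to be tiny and nonzero, performing an operation that is itself tiny and inexact guarantees the underflow flag is raised:

    #include <float.h>
    #include <math.h>

    static void
    force_underflow_if_tiny (double result)
    {
      if (result != 0.0 && fabs (result) < DBL_MIN)
        {
          volatile double tiny = DBL_MIN;
          volatile double force = tiny * tiny;   /* tiny and inexact: raises underflow */
          (void) force;
        }
    }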

	[BZ #18825]
	* sysdeps/i386/fpu/i386-math-asm.h (FLT_NARROW_EVAL_UFLOW_NONNAN):
	New macro.
	(DBL_NARROW_EVAL_UFLOW_NONNAN): Likewise.
	(LDBL_CHECK_FORCE_UFLOW_NONNAN): Likewise.
	* sysdeps/i386/fpu/e_pow.S: Use DEFINE_DBL_MIN.
	(__ieee754_pow): Use DBL_NARROW_EVAL_UFLOW_NONNAN instead of
	DBL_NARROW_EVAL, reloading the PIC register as needed.
	* sysdeps/i386/fpu/e_powf.S: Use DEFINE_FLT_MIN.
	(__ieee754_powf): Use FLT_NARROW_EVAL_UFLOW_NONNAN instead of
	FLT_NARROW_EVAL.  Use separate return path for case when first
	argument is NaN.
	* sysdeps/i386/fpu/e_powl.S: Include <i386-math-asm.h>.  Use
	DEFINE_LDBL_MIN.
	(__ieee754_powl): Use LDBL_CHECK_FORCE_UFLOW_NONNAN, reloading the
	PIC register.
	* sysdeps/ieee754/dbl-64/e_pow.c (__ieee754_pow): Use
	math_check_force_underflow_nonneg.
	* sysdeps/ieee754/flt-32/e_powf.c (__ieee754_powf): Force
	underflow for subnormal result.
	* sysdeps/ieee754/ldbl-128/e_powl.c (__ieee754_powl): Likewise.
	* sysdeps/ieee754/ldbl-128ibm/e_powl.c (__ieee754_powl): Use
	math_check_force_underflow_nonneg.
	* sysdeps/x86/fpu/powl_helper.c (__powl_helper): Use
	math_check_force_underflow.
	* sysdeps/x86_64/fpu/x86_64-math-asm.h
	(LDBL_CHECK_FORCE_UFLOW_NONNAN): New macro.
	* sysdeps/x86_64/fpu/e_powl.S: Include <x86_64-math-asm.h>.  Use
	DEFINE_LDBL_MIN.
	(__ieee754_powl): Use LDBL_CHECK_FORCE_UFLOW_NONNAN.
	* math/auto-libm-test-in: Add more tests of pow.
	* math/auto-libm-test-out: Regenerated.
2015-09-25 22:29:10 +00:00
Joseph Myers
522e02ab8a Rename bits/linkmap.h to linkmap.h (bug 14912).
It was noted in
<https://sourceware.org/ml/libc-alpha/2012-09/msg00305.html> that the
bits/*.h naming scheme should only be used for installed headers.
This patch renames bits/linkmap.h to plain linkmap.h to follow that
convention.

Tested for x86_64 (testsuite, and that installed stripped shared
libraries are unchanged by the patch).

	[BZ #14912]
	* bits/linkmap.h: Move to ...
	* sysdeps/generic/linkmap.h: ...here.
	* sysdeps/aarch64/bits/linkmap.h: Move to ...
	* sysdeps/aarch64/linkmap.h: ...here.
	* sysdeps/arm/bits/linkmap.h: Move to ...
	* sysdeps/arm/linkmap.h: ...here.
	* sysdeps/hppa/bits/linkmap.h: Move to ...
	* sysdeps/hppa/linkmap.h: ...here.
	* sysdeps/ia64/bits/linkmap.h: Move to ...
	* sysdeps/ia64/linkmap.h: ...here.
	* sysdeps/mips/bits/linkmap.h: Move to ...
	* sysdeps/mips/linkmap.h: ...here.
	* sysdeps/s390/bits/linkmap.h: Move to ...
	* sysdeps/s390/linkmap.h: ...here.
	* sysdeps/sh/bits/linkmap.h: Move to ...
	* sysdeps/sh/linkmap.h: ...here.
	* sysdeps/x86/bits/linkmap.h: Move to ...
	* sysdeps/x86/linkmap.h: ...here.
	* include/link.h: Include <linkmap.h> instead of <bits/linkmap.h>.
2015-09-04 19:44:27 +00:00
H.J. Lu
75dd0a8f3d Detect and select i586/i686 implementation at run-time
We detect i586 and i686 features at run time by checking the CX8 and CMOV
CPUID feature bits.  We can use this information to select the best
implementation in ix86 multiarch.  HAS_I586/HAS_I686 is true if i586/i686
instructions are available on the processor.

Due to the reordering and the other nifty extensions in i686, it is not
really good to use heavily i586 optimized code on an i686.  It's better
to use i486 code if it isn't an i586.  USE_I586/USE_I686 is true if
i586/i686 implementation should be used for the processor.  USE_I586
is true only if i686 instructions aren't available.  If i686 instructions
are available, we always choose i686 or i486 implementation, in that order,
and we never choose i586 implementation for i686-class processors.
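
A hedged sketch of the run-time detection: CX8 (cmpxchg8b) is CPUID leaf 1, EDX bit 8, and CMOV is EDX bit 15; glibc uses them as proxies for i586-class and i686-class instruction availability:

    #include <cpuid.h>
    #include <stdbool.h>

    static void
    detect_i586_i686 (bool *has_i586, bool *has_i686)
    {
      unsigned int eax, ebx, ecx, edx;
      *has_i586 = *has_i686 = false;
      if (__get_cpuid (1, &eax, &ebx, &ecx, &edx))
        {
          *has_i586 = (edx & (1u << 8)) != 0;    /* CX8 */
          *has_i686 = (edx & (1u << 15)) != 0;   /* CMOV */
        }
    }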

	* sysdeps/i386/init-arch.h: New file.
	* sysdeps/i386/i586/init-arch.h: Likewise.
	* sysdeps/i386/i686/init-arch.h: Likewise.
	* sysdeps/x86/cpu-features.c (init_cpu_features): Set bit_I586
	bit if CX8 is available.  Set bit_I686 bit if CMOV is available.
	* sysdeps/x86/cpu-features.h (bit_I586): New.
	(bit_I686): Likewise.
	(bit_CX8): Likewise.
	(bit_CMOV): Likewise.
	(index_CX8): Likewise.
	(index_CMOV): Likewise.
	(index_I586): Likewise.
	(index_I686): Likewise.
	(reg_CX8): Likewise.
	(reg_CMOV): Likewise.
	(HAS_I586): Defined as HAS_ARCH_FEATURE (I586) if i586 isn't
	available at compile-time.
	(HAS_I686): Defined as HAS_ARCH_FEATURE (I686) if i686 isn't
	available at compile-time.
	* sysdeps/x86/init-arch.h (USE_I586): New macro.
	(USE_I686): Likewise.
2015-08-27 09:06:44 -07:00
H.J. Lu
38d22f9f48 Don't disable SSE in x86-64 ld.so
Since x86-64 ld.so preserves vector registers now, we can use SSE in
x86-64 ld.so.  We should run tst-ld-sse-use.sh only on i386.

	* sysdeps/x86/Makefile [$(subdir) == elf] (CFLAGS-.os,
	tests-special, $(objpfx)tst-ld-sse-use.out): Moved to ...
	* sysdeps/i386/Makefile [$(subdir) == elf] (CFLAGS-.os,
	tests-special, $(objpfx)tst-ld-sse-use.out): Here.  Update
	comments.
	* sysdeps/x86_64/Makefile [$(subdir) == elf] (CFLAGS-.os): Add
	-mno-mmx for $(all-rtld-routines).
	* sysdeps/x86/tst-ld-sse-use.sh: Moved to ...
	* sysdeps/i386/tst-ld-sse-use.sh: Here.  Replace x86-64 with
	i386.
2015-08-26 07:55:42 -07:00
H.J. Lu
1ae6c72dc1 Move x86_64 init-arch.h to sysdeps/x86/init-arch.h
Move sysdeps/x86_64/multiarch/init-arch.h to sysdeps/x86/init-arch.h
which can be used for both i386 and x86_64.

	* sysdeps/i386/i686/multiarch/init-arch.h: Removed.
	* sysdeps/unix/sysv/linux/x86/init-arch.h: Likewise.
	* sysdeps/x86_64/cacheinfo.c: Include <init-arch.h> instead
	of "multiarch/init-arch.h".
	* sysdeps/x86_64/multiarch/init-arch.h: Renamed to ...
	* sysdeps/x86/init-arch.h: This.
2015-08-20 04:29:23 -07:00
Florian Weimer
cd4e69ed3e nptl: Document crash due to incorrect use of locks 2015-08-20 08:44:37 +02:00
H.J. Lu
477fa2c843 Also check __i586__/__i686__ for HAS_I586/HAS_I686
* sysdeps/x86/cpu-features.h (HAS_I586): Defined to 1 if
	__i586__ is defined.
	(HAS_I686): Defined to 1 if __i686__ is defined.
2015-08-19 04:19:58 -07:00
H.J. Lu
1814df5b02 Define HAS_CPUID/HAS_I586/HAS_I686 from -march=
cpuid, i586 and i686 instructions are available if the processor
specified by -march= supports them.  We can use this information
to determine whether those instructions can be used safely.
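
A hedged sketch of deriving compile-time defaults from -march=; the GCC predefines __i586__ and __i686__ indicate the target already guarantees those instruction sets, so the run-time check can be skipped (macro names are illustrative):

    #if defined __i586__ || defined __i686__ || defined __x86_64__
    # define HAS_CPUID 1
    #else
    # define HAS_CPUID 0
    #endif

    #if defined __i686__
    # define HAS_I686 1
    #else
    # define HAS_I686 0
    #endif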

	* sysdeps/x86/cpu-features.c (init_cpu_features): Check
	whether cpuid is available only if HAS_CPUID is 0.
	* sysdeps/x86/cpu-features.h (HAS_CPUID): New.
	(HAS_I586): Likewise.
	(HAS_I686): Likewise.
2015-08-18 08:00:00 -07:00
H.J. Lu
a5cf909b8f Check if cpuid is available in init_cpu_features
Since not all i486 processors support cpuid, we call __get_cpuid_max to
check whether cpuid is available before using it when not compiling for
i586, i686 or x86-64.
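
A hedged sketch of that availability check; __get_cpuid_max from GCC's <cpuid.h> returns 0 when the CPUID instruction itself is unsupported (it probes the EFLAGS ID bit on i386):

    #include <cpuid.h>

    static int
    cpuid_available (void)
    {
      return __get_cpuid_max (0, 0) != 0;
    }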

	* sysdeps/x86/cpu-features.c (init_cpu_features): Call
	__get_cpuid_max if not compiling for i586, i686 nor x86-64.
2015-08-13 04:53:03 -07:00
H.J. Lu
e2e4f56056 Add _dl_x86_cpu_features to rtld_global
This patch adds _dl_x86_cpu_features to rtld_global in x86 ld.so
and initializes it early before __libc_start_main is called so that
cpu_features is always available when it is used and we can avoid
calling __init_cpu_features in IFUNC selectors.

	* sysdeps/i386/dl-machine.h: Include <cpu-features.c>.
	(dl_platform_init): Call init_cpu_features.
	* sysdeps/i386/dl-procinfo.c (_dl_x86_cpu_features): New.
	* sysdeps/i386/i686/cacheinfo.c
	(DISABLE_PREFERRED_MEMORY_INSTRUCTION): Removed.
	* sysdeps/i386/i686/multiarch/Makefile (aux): Remove init-arch.
	* sysdeps/i386/i686/multiarch/Versions: Removed.
	* sysdeps/i386/i686/multiarch/ifunc-defines.sym (KIND_OFFSET):
	Removed.
	* sysdeps/i386/ldsodefs.h: Include <cpu-features.h>.
	* sysdeps/unix/sysv/linux/x86/Makefile
	(libpthread-sysdep_routines): Remove init-arch.
	* sysdeps/unix/sysv/linux/x86_64/dl-procinfo.c: Include
	<sysdeps/x86_64/dl-procinfo.c> instead of
	<sysdeps/generic/dl-procinfo.c>.
	* sysdeps/x86/Makefile [$(subdir) == csu] (gen-as-const-headers):
	Add cpu-features-offsets.sym and rtld-global-offsets.sym.
	[$(subdir) == elf] (sysdep-dl-routines): Add dl-get-cpu-features.
	[$(subdir) == elf] (tests): Add tst-get-cpu-features.
	[$(subdir) == elf] (tests-static): Add
	tst-get-cpu-features-static.
	* sysdeps/x86/Versions: New file.
	* sysdeps/x86/cpu-features-offsets.sym: Likewise.
	* sysdeps/x86/cpu-features.c: Likewise.
	* sysdeps/x86/cpu-features.h: Likewise.
	* sysdeps/x86/dl-get-cpu-features.c: Likewise.
	* sysdeps/x86/libc-start.c: Likewise.
	* sysdeps/x86/rtld-global-offsets.sym: Likewise.
	* sysdeps/x86/tst-get-cpu-features-static.c: Likewise.
	* sysdeps/x86/tst-get-cpu-features.c: Likewise.
	* sysdeps/x86_64/dl-procinfo.c: Likewise.
	* sysdeps/x86_64/cacheinfo.c (__cpuid_count): Removed.
	Assume USE_MULTIARCH is defined and don't check it.
	(is_intel): Replace __cpu_features with GLRO(dl_x86_cpu_features).
	(is_amd): Likewise.
	(max_cpuid): Likewise.
	(intel_check_word): Likewise.
	(__cache_sysconf): Don't call __init_cpu_features.
	(__x86_preferred_memory_instruction): Removed.
	(init_cacheinfo): Don't call __init_cpu_features. Replace
	__cpu_features with GLRO(dl_x86_cpu_features).
	* sysdeps/x86_64/dl-machine.h: Include <cpu-features.c>.
	(dl_platform_init): Call init_cpu_features.
	* sysdeps/x86_64/ldsodefs.h: Include <cpu-features.h>.
	* sysdeps/x86_64/multiarch/Makefile (aux): Remove init-arch.
	* sysdeps/x86_64/multiarch/Versions: Removed.
	* sysdeps/x86_64/multiarch/cacheinfo.c: Likewise.
	* sysdeps/x86_64/multiarch/init-arch.c: Likewise.
	* sysdeps/x86_64/multiarch/ifunc-defines.sym (KIND_OFFSET):
	Removed.
	* sysdeps/x86_64/multiarch/init-arch.h: Rewrite.
2015-08-13 03:41:22 -07:00
Igor Zamyatin
14c5cbabc2 Preserve bound registers for pointer pass/return
We need to save/restore bound registers and add a BND prefix before
branches in _dl_runtime_profile so that bound registers for pointer
passing and return are preserved when LD_AUDIT is used.

	[BZ #18134]
	* sysdeps/i386/configure.ac: Set HAVE_MPX_SUPPORT.
	* sysdeps/i386/configure: Regenerated.
	* sysdeps/i386/dl-trampoline.S (PRESERVE_BND_REGS_PREFIX): New.
	(_dl_runtime_profile): Save and restore Intel MPX return bound
	registers when calling _dl_call_pltexit.  Add
	PRESERVE_BND_REGS_PREFIX before return.
	* sysdeps/i386/link-defines.sym (LRV_BND0_OFFSET): New.
	(LRV_BND1_OFFSET): Likewise.
	* sysdeps/x86/bits/link.h (La_i86_retval): Add lrv_bnd0 and
	lrv_bnd1.
	* sysdeps/x86_64/dl-trampoline.S (_dl_runtime_profile): Fix
	typo in bndmov encoding.
	* sysdeps/x86_64/dl-trampoline.h: Properly save and restore
	Intel MPX bound registers.  Add PRESERVE_BND_REGS_PREFIX before
	branch instructions to preserve bounds.
2015-07-09 06:50:12 -07:00
Torvald Riegel
213a2be7b4 Do not create invalid pointers in C code of string functions.
Some of the x86 string functions create pointers, derived from the input
strings, that may point outside of those strings.  When this happens in C code,
the compiler can potentially detect this, leading to warnings in
application code when those string functions are inlined.  Perform those
operations in the assembly code instead of the C code to fix this.
2015-07-07 13:40:12 +02:00
Andrew Senkevich
a6336cc446 Vector sincosf for x86_64 and tests.
Here is an implementation of vectorized sincosf containing SSE, AVX,
AVX2 and AVX512 versions according to the Vector ABI
<https://groups.google.com/forum/#!topic/x86-64-abi/LmppCfN1rZ4>.
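
For orientation, the Vector ABI name mangling these commits implement is _ZGV + ISA letter (b = SSE, c = AVX, d = AVX2, e = AVX512) + N for not-in-branch (unmasked) + lane count + one letter per parameter kind ('v' for a vector argument) + '_' + scalar name.  A hedged declaration of one such entry point, using a GCC vector type:

    typedef float v4sf __attribute__ ((vector_size (16)));

    /* The unmasked 4-lane SSE variant of expf under this scheme
       (declaration only; linking requires libmvec).  */
    extern v4sf _ZGVbN4v_expf (v4sf x);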

    * NEWS: Mention addition of x86_64 vector sincosf.
    * math/test-float-vlen16.h: Added wrapper for sincosf tests.
    * math/test-float-vlen4.h: Likewise.
    * math/test-float-vlen8.h: Likewise.
    * sysdeps/unix/sysv/linux/x86_64/libmvec.abilist: New symbols added.
    * sysdeps/x86/fpu/bits/math-vector.h: Added sincosf SIMD declaration.
    * sysdeps/x86_64/fpu/Makefile (libmvec-support): Added new files.
    * sysdeps/x86_64/fpu/Versions: New versions added.
    * sysdeps/x86_64/fpu/libm-test-ulps: Regenerated.
    * sysdeps/x86_64/fpu/multiarch/Makefile (libmvec-sysdep_routines):
    Added build of SSE, AVX2 and AVX512 IFUNC versions.
    * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core.S
    * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core_avx512.S
    * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core.S
    * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core_sse4.S
    * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core.S
    * sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core_avx2.S
    * sysdeps/x86_64/fpu/svml_s_sincosf16_core.S
    * sysdeps/x86_64/fpu/svml_s_sincosf4_core.S
    * sysdeps/x86_64/fpu/svml_s_sincosf8_core.S
    * sysdeps/x86_64/fpu/svml_s_sincosf8_core_avx.S
    * sysdeps/x86_64/fpu/svml_s_sincosf_data.S: New file.
    * sysdeps/x86_64/fpu/svml_s_sincosf_data.h: New file.
    * sysdeps/x86_64/fpu/svml_s_wrapper_impl.h: Added 3 argument wrappers.
    * sysdeps/x86_64/fpu/test-float-vlen16.c: Vector sincosf tests.
    * sysdeps/x86_64/fpu/test-float-vlen16-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen4.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8-avx2-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8-avx2.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8.c: Likewise.
2015-06-18 20:11:27 +03:00
Andrew Senkevich
c9a8c526ac Vector sincos for x86_64 and tests.
Here is an implementation of vectorized sincos containing SSE, AVX,
AVX2 and AVX512 versions according to the Vector ABI
<https://groups.google.com/forum/#!topic/x86-64-abi/LmppCfN1rZ4>.

    * NEWS: Mention addition of x86_64 vector sincos.
    * bits/libm-simd-decl-stubs.h: Added stubs for sincos.
    * math/math.h (__MATHDECL_VEC): New macro.
    * math/bits/mathcalls.h: Added sincos declaration with __MATHDECL_VEC.
    * math/gen-libm-have-vector-test.sh: Added generation of sincos wrapper
    declaration under condition.
    * math/test-vec-loop.h (TEST_VEC_LOOP): Refactored.
    * math/test-double-vlen2.h: Added wrapper for sincos tests, reflected
    TEST_VEC_LOOP change.
    * math/test-double-vlen4.h: Likewise.
    * math/test-double-vlen8.h: Likewise.
    * math/test-float-vlen16.h: Reflected TEST_VEC_LOOP change.
    * math/test-float-vlen4.h: Likewise.
    * math/test-float-vlen8.h: Likewise.
    * sysdeps/unix/sysv/linux/x86_64/libmvec.abilist: New symbols added.
    * sysdeps/x86/fpu/bits/math-vector.h: Added sincos SIMD declaration.
    * sysdeps/x86_64/fpu/Makefile (libmvec-support): Added new files.
    * sysdeps/x86_64/fpu/Versions: New versions added.
    * sysdeps/x86_64/fpu/libm-test-ulps: Regenerated.
    * sysdeps/x86_64/fpu/multiarch/Makefile (libmvec-sysdep_routines):
    Added build of SSE, AVX2 and AVX512 IFUNC versions.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core_sse4.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core_avx2.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core_avx512.S: New file.
    * sysdeps/x86_64/fpu/svml_d_sincos2_core.S: New file.
    * sysdeps/x86_64/fpu/svml_d_sincos4_core.S: New file.
    * sysdeps/x86_64/fpu/svml_d_sincos4_core_avx.S: New file.
    * sysdeps/x86_64/fpu/svml_d_sincos8_core.S: New file.
    * sysdeps/x86_64/fpu/svml_d_sincos_data.S: New file.
    * sysdeps/x86_64/fpu/svml_d_sincos_data.h: New file.
    * sysdeps/x86_64/fpu/svml_d_wrapper_impl.h: Added wrappers for sincos.
    * sysdeps/x86_64/fpu/test-double-vlen2-wrappers.c: Vector sincos tests.
    * sysdeps/x86_64/fpu/test-double-vlen2.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen4-avx2-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen4-avx2.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen4-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen4.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen8-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen8.c: Likewise.
2015-06-18 17:55:55 +03:00
Andrew Senkevich
8aa92022e2 Vector powf for x86_64 and tests.
Here is an implementation of vectorized powf containing SSE, AVX,
AVX2 and AVX512 versions according to the Vector ABI
<https://groups.google.com/forum/#!topic/x86-64-abi/LmppCfN1rZ4>.

    * sysdeps/unix/sysv/linux/x86_64/libmvec.abilist: New symbols added.
    * sysdeps/x86/fpu/bits/math-vector.h: Added SIMD declaration and asm
    redirections for powf.
    * sysdeps/x86_64/fpu/Makefile (libmvec-support): Added new files.
    * sysdeps/x86_64/fpu/Versions: New versions added.
    * sysdeps/x86_64/fpu/libm-test-ulps: Regenerated.
    * sysdeps/x86_64/fpu/multiarch/Makefile (libmvec-sysdep_routines):
    Added build of SSE, AVX2 and AVX512 IFUNC versions.
    * sysdeps/x86_64/fpu/svml_s_wrapper_impl.h: Added 2 argument wrappers.
    * sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core_avx512.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_s_powf4_core.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_s_powf4_core_sse4.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_s_powf8_core.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_s_powf8_core_avx2.S: New file.
    * sysdeps/x86_64/fpu/svml_s_powf16_core.S: New file.
    * sysdeps/x86_64/fpu/svml_s_powf4_core.S: New file.
    * sysdeps/x86_64/fpu/svml_s_powf8_core.S: New file.
    * sysdeps/x86_64/fpu/svml_s_powf8_core_avx.S: New file.
    * sysdeps/x86_64/fpu/svml_s_powf_data.S: New file.
    * sysdeps/x86_64/fpu/svml_s_powf_data.h: New file.
    * sysdeps/x86_64/fpu/test-float-vlen16-wrappers.c: Vector powf tests.
    * sysdeps/x86_64/fpu/test-float-vlen16.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen4.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8-avx2-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8-avx2.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8.c: Likewise.
    * math/test-float-vlen16.h: Fixed 2 argument macro.
    * math/test-float-vlen4.h: Likewise.
    * math/test-float-vlen8.h: Likewise.
    * NEWS: Mention addition of x86_64 vector powf.
2015-06-18 17:04:07 +03:00
Andrew Senkevich
c10b9b13f7 Vector pow for x86_64 and tests.
Here is an implementation of vectorized pow containing SSE, AVX,
AVX2 and AVX512 versions according to the Vector ABI
<https://groups.google.com/forum/#!topic/x86-64-abi/LmppCfN1rZ4>.

    * bits/libm-simd-decl-stubs.h: Added stubs for pow.
    * math/bits/mathcalls.h: Added pow declaration with __MATHCALL_VEC.
    * sysdeps/unix/sysv/linux/x86_64/libmvec.abilist: New versions added.
    * sysdeps/x86/fpu/bits/math-vector.h: Added SIMD declaration and asm
    redirections for pow.
    * sysdeps/x86_64/fpu/Makefile (libmvec-support): Added new files.
    * sysdeps/x86_64/fpu/Versions: New versions added.
    * sysdeps/x86_64/fpu/libm-test-ulps: Regenerated.
    * sysdeps/x86_64/fpu/multiarch/Makefile (libmvec-sysdep_routines): Added
    build of SSE, AVX2 and AVX512 IFUNC versions.
    * sysdeps/x86_64/fpu/svml_d_wrapper_impl.h: Added 2 argument wrappers.
    * sysdeps/x86_64/fpu/multiarch/svml_d_pow2_core.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_d_pow2_core_sse4.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_d_pow4_core.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_d_pow4_core_avx2.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core_avx512.S: New file.
    * sysdeps/x86_64/fpu/svml_d_pow2_core.S: New file.
    * sysdeps/x86_64/fpu/svml_d_pow4_core.S: New file.
    * sysdeps/x86_64/fpu/svml_d_pow4_core_avx.S: New file.
    * sysdeps/x86_64/fpu/svml_d_pow8_core.S: New file.
    * sysdeps/x86_64/fpu/svml_d_pow_data.S: New file.
    * sysdeps/x86_64/fpu/svml_d_pow_data.h: New file.
    * sysdeps/x86_64/fpu/test-double-vlen2-wrappers.c: Added vector pow test.
    * sysdeps/x86_64/fpu/test-double-vlen2.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen4-avx2-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen4-avx2.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen4-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen4.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen8-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen8.c: Likewise.
    * NEWS: Mention addition of x86_64 vector pow.
2015-06-17 16:22:26 +03:00
Andrew Senkevich
1663be053d Vector expf for x86_64 and tests.
Here is an implementation of vectorized expf containing SSE, AVX,
AVX2 and AVX512 versions according to the Vector ABI
<https://groups.google.com/forum/#!topic/x86-64-abi/LmppCfN1rZ4>.

    * sysdeps/unix/sysv/linux/x86_64/libmvec.abilist: New symbols added.
    * sysdeps/x86/fpu/bits/math-vector.h: Added SIMD declaration and asm
    redirections for expf.
    * sysdeps/x86_64/fpu/Makefile (libmvec-support): Added new files.
    * sysdeps/x86_64/fpu/Versions: New versions added.
    * sysdeps/x86_64/fpu/libm-test-ulps: Regenerated.
    * sysdeps/x86_64/fpu/multiarch/Makefile (libmvec-sysdep_routines): Added
    build of SSE, AVX2 and AVX512 IFUNC versions.
    * sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core_avx512.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_s_expf4_core.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_s_expf4_core_sse4.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_s_expf8_core.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_s_expf8_core_avx2.S: New file.
    * sysdeps/x86_64/fpu/svml_s_expf16_core.S: New file.
    * sysdeps/x86_64/fpu/svml_s_expf4_core.S: New file.
    * sysdeps/x86_64/fpu/svml_s_expf8_core.S: New file.
    * sysdeps/x86_64/fpu/svml_s_expf8_core_avx.S: New file.
    * sysdeps/x86_64/fpu/svml_s_expf_data.S: New file.
    * sysdeps/x86_64/fpu/svml_s_expf_data.h: New file.
    * sysdeps/x86_64/fpu/test-float-vlen16-wrappers.c: Vector expf tests.
    * sysdeps/x86_64/fpu/test-float-vlen16.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen4.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8-avx2-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8-avx2.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8.c: Likewise.
    * NEWS: Mention addition of x86_64 vector expf.
2015-06-17 16:10:51 +03:00
Andrew Senkevich
9c02f663f6 Vector exp for x86_64 and tests.
Here is an implementation of vectorized exp containing SSE, AVX,
AVX2 and AVX512 versions according to the Vector ABI
<https://groups.google.com/forum/#!topic/x86-64-abi/LmppCfN1rZ4>.

    * bits/libm-simd-decl-stubs.h: Added stubs for exp.
    * math/bits/mathcalls.h: Added exp declaration with __MATHCALL_VEC.
    * sysdeps/unix/sysv/linux/x86_64/libmvec.abilist: New versions added.
    * sysdeps/x86/fpu/bits/math-vector.h: Added SIMD declaration and asm
    redirections for exp.
    * sysdeps/x86_64/fpu/Makefile (libmvec-support): Added new files.
    * sysdeps/x86_64/fpu/Versions: New versions added.
    * sysdeps/x86_64/fpu/libm-test-ulps: Regenerated.
    * sysdeps/x86_64/fpu/multiarch/Makefile (libmvec-sysdep_routines): Added
    build of SSE, AVX2 and AVX512 IFUNC versions.
    * sysdeps/x86_64/fpu/multiarch/svml_d_exp2_core.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_d_exp2_core_sse4.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_d_exp4_core.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_d_exp4_core_avx2.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core_avx512.S: New file.
    * sysdeps/x86_64/fpu/svml_d_exp2_core.S: New file.
    * sysdeps/x86_64/fpu/svml_d_exp4_core.S: New file.
    * sysdeps/x86_64/fpu/svml_d_exp4_core_avx.S: New file.
    * sysdeps/x86_64/fpu/svml_d_exp8_core.S: New file.
    * sysdeps/x86_64/fpu/svml_d_exp_data.S: New file.
    * sysdeps/x86_64/fpu/svml_d_exp_data.h: New file.
    * sysdeps/x86_64/fpu/test-double-vlen2-wrappers.c: Added vector exp test.
    * sysdeps/x86_64/fpu/test-double-vlen2.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen4-avx2-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen4-avx2.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen4-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen4.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen8-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-double-vlen8.c: Likewise.
    * NEWS: Mention addition of x86_64 vector exp.
2015-06-17 15:58:05 +03:00
Andrew Senkevich
774488f88a Vector logf for x86_64 and tests.
Here is an implementation of vectorized logf containing SSE, AVX,
AVX2 and AVX512 versions according to the Vector ABI
<https://groups.google.com/forum/#!topic/x86-64-abi/LmppCfN1rZ4>.

    * sysdeps/unix/sysv/linux/x86_64/libmvec.abilist: New symbols added.
    * sysdeps/x86/fpu/bits/math-vector.h: Added SIMD declaration and asm
    redirections for logf.
    * sysdeps/x86_64/fpu/Makefile (libmvec-support): Added new files.
    * sysdeps/x86_64/fpu/Versions: New versions added.
    * sysdeps/x86_64/fpu/libm-test-ulps: Regenerated.
    * sysdeps/x86_64/fpu/multiarch/Makefile (libmvec-sysdep_routines): Added
    build of SSE, AVX2 and AVX512 IFUNC versions.
    * sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core_avx512.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core_sse4.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core.S: New file.
    * sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core_avx2.S: New file.
    * sysdeps/x86_64/fpu/svml_s_logf16_core.S: New file.
    * sysdeps/x86_64/fpu/svml_s_logf4_core.S: New file.
    * sysdeps/x86_64/fpu/svml_s_logf8_core.S: New file.
    * sysdeps/x86_64/fpu/svml_s_logf8_core_avx.S: New file.
    * sysdeps/x86_64/fpu/svml_s_logf_data.S: New file.
    * sysdeps/x86_64/fpu/svml_s_logf_data.h: New file.
    * sysdeps/x86_64/fpu/test-float-vlen16-wrappers.c: Vector logf tests.
    * sysdeps/x86_64/fpu/test-float-vlen16.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen4.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8-avx2-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8-avx2.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8-wrappers.c: Likewise.
    * sysdeps/x86_64/fpu/test-float-vlen8.c: Likewise.
    * NEWS: Mention addition of x86_64 vector logf.
2015-06-17 15:53:00 +03:00