Commit Graph

18 Commits

Author SHA1 Message Date
Joseph Myers
b168057aaa Update copyright dates with scripts/update-copyrights. 2015-01-02 16:29:47 +00:00
Ling Ma
05f3633da4 Improve 64bit memcpy performance for Haswell CPU with AVX instruction
In this patch we take advantage of HSW memory bandwidth, reduce branch
mispredictions by avoiding branch instructions, and force the destination
to be aligned using AVX instructions (see the sketch after this entry).

The CPU2006 403.gcc benchmark indicates this patch improves performance
by 2% to 10%.
2014-07-30 08:02:35 -07:00
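A minimal sketch, in C intrinsics, of the technique described above: one unaligned 32-byte head store, round the destination up to a 32-byte boundary, stream aligned AVX stores, and finish the tail with one overlapping unaligned store. The function name and the n >= 64 assumption are mine for illustration; the real code is hand-written assembly, not C.

#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative only: assumes n >= 64, non-overlapping buffers, -mavx.  */
static void *
avx_copy_large (void *dst, const void *src, size_t n)
{
  char *d = dst;
  const char *s = src;

  /* Head: one unaligned 32-byte copy, no size-dependent branch tree.  */
  _mm256_storeu_si256 ((__m256i *) d,
                       _mm256_loadu_si256 ((const __m256i *) s));

  /* Force the destination onto a 32-byte boundary for the main loop.  */
  size_t skew = 32 - ((uintptr_t) d & 31);
  d += skew;
  s += skew;
  n -= skew;

  /* Main loop: unaligned loads, aligned stores to the destination.  */
  for (; n >= 32; d += 32, s += 32, n -= 32)
    _mm256_store_si256 ((__m256i *) d,
                        _mm256_loadu_si256 ((const __m256i *) s));

  /* Tail: one overlapping unaligned 32-byte copy covers the remainder.  */
  _mm256_storeu_si256 ((__m256i *) (d + n - 32),
                       _mm256_loadu_si256 ((const __m256i *) (s + n - 32)));
  return dst;
}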
H.J. Lu
f2fef657d8 Enable AVX2 optimized memset only if -mavx2 works
	* config.h.in (HAVE_AVX2_SUPPORT): New #undef.
	* sysdeps/i386/configure.ac: Set HAVE_AVX2_SUPPORT and
	config-cflags-avx2.
	* sysdeps/x86_64/configure.ac: Likewise.
	* sysdeps/i386/configure: Regenerated.
	* sysdeps/x86_64/configure: Likewise.
	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	memset-avx2 only if config-cflags-avx2 is yes.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c (__libc_ifunc_impl_list):
	Tests for memset_chk and memset only if HAVE_AVX2_SUPPORT is
	defined.
	* sysdeps/x86_64/multiarch/memset.S: Define multiple versions
	only if HAVE_AVX2_SUPPORT is defined.
	* sysdeps/x86_64/multiarch/memset_chk.S: Likewise.
2014-07-14 07:58:27 -07:00
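A minimal sketch of the guard this commit sets up, written as plain C rather than the actual assembly/ifunc machinery: the AVX2 memset is only declared, built, and selectable when configure found that the compiler accepts -mavx2 and therefore defined HAVE_AVX2_SUPPORT. The selector function and its flag argument are hypothetical; in glibc the selection lives in memset.S and ifunc-impl-list.c.

#include <string.h>

typedef void *(*memset_fn) (void *, int, size_t);

#ifdef HAVE_AVX2_SUPPORT
/* Built from memset-avx2.S only when config-cflags-avx2 is yes.  */
extern void *__memset_avx2 (void *dest, int c, size_t n);
#endif

/* Hypothetical selector, shown only to illustrate the #ifdef guard.  */
static memset_fn
select_memset (int cpu_has_usable_avx2)
{
#ifdef HAVE_AVX2_SUPPORT
  if (cpu_has_usable_avx2)
    return __memset_avx2;
#endif
  return memset;
}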
H.J. Lu
d92d8f8a42 Add ifunc tests for x86_64 memset_chk and memset
This patch adds ifunc tests for x86_64 memset_chk and memset.  It also
defines HAS_AVX2 with AVX2_Usable, since AVX2 may not be usable even if
the processor supports AVX2 (see the sketch after this entry).

	* sysdeps/x86_64/multiarch/ifunc-impl-list.c (__libc_ifunc_impl_list):
	Add tests for memset_chk and memset.

	* sysdeps/x86_64/multiarch/init-arch.h (HAS_AVX2): Defined
	with AVX2_Usable.
2014-06-20 14:52:29 -07:00
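A standalone sketch of why "has AVX2" and "AVX2 usable" differ, which is what tying HAS_AVX2 to AVX2_Usable captures: the CPUID feature bit alone is not enough, because the OS must also have enabled YMM state saving via OSXSAVE/XCR0. This is not the init-arch.h code; the helper below is hypothetical and uses GCC's <cpuid.h> plus a raw xgetbv.

#include <cpuid.h>
#include <stdbool.h>

static bool
avx2_usable (void)
{
  unsigned int eax, ebx, ecx, edx;

  /* CPUID.(EAX=7,ECX=0):EBX bit 5 is the AVX2 feature bit.  */
  if (!__get_cpuid_count (7, 0, &eax, &ebx, &ecx, &edx)
      || !(ebx & (1u << 5)))
    return false;

  /* CPUID.1:ECX bit 27 (OSXSAVE) means XGETBV may be executed.  */
  if (!__get_cpuid (1, &eax, &ebx, &ecx, &edx) || !(ecx & (1u << 27)))
    return false;

  /* XCR0 bits 1 (XMM state) and 2 (YMM state) must both be enabled.  */
  unsigned int xcr0_lo, xcr0_hi;
  __asm__ ("xgetbv" : "=a" (xcr0_lo), "=d" (xcr0_hi) : "c" (0));
  (void) xcr0_hi;
  return (xcr0_lo & 0x6) == 0x6;
}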
Allan McRae
d4697bc93d Update copyright notices with scripts/update-copyrights 2014-01-01 22:00:23 +10:00
Allan McRae
6f8e37ebf8 Update file name in x86_64 ifunc list
File name update missed in commit 584b18eb.
2013-12-16 13:00:39 +10:00
Ondřej Bílka
584b18eb4d Add strstr with unaligned loads. Fixes bug 12100.
The SSE4.2 version of strstr used the pcmpistri instruction, which is quite
inefficient. A faster way is to look for pairs of characters: it needs only
SSE2, is faster than pcmpistri, and for real strings the pairs we look for
are relatively rare (see the sketch after this entry).

For linear time complexity we use the buy-or-rent technique, which switches
to the two-way algorithm when superlinear behaviour is detected.
2013-12-14 20:08:13 +01:00
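A minimal sketch of the "pairs of characters" filter with SSE2 intrinsics: scan 16 positions at a time for places where both needle[0] and needle[1] match, then verify the rare candidates with memcmp. The function is hypothetical and shows only the filtering core; it assumes needle_len >= 2 and ignores the page-boundary handling and the buy-or-rent switch to the two-way algorithm that the real implementation adds.

#include <emmintrin.h>
#include <string.h>

static const char *
sse2_strstr_core (const char *hay, size_t hay_len,
                  const char *needle, size_t needle_len)
{
  const __m128i first  = _mm_set1_epi8 (needle[0]);
  const __m128i second = _mm_set1_epi8 (needle[1]);

  for (size_t i = 0; i + needle_len <= hay_len; i += 16)
    {
      /* Unaligned loads of the text at offsets i and i + 1.  */
      __m128i b0 = _mm_loadu_si128 ((const __m128i *) (hay + i));
      __m128i b1 = _mm_loadu_si128 ((const __m128i *) (hay + i + 1));

      /* A mask bit is set only where needle[0] and needle[1] both match.  */
      unsigned int mask =
        _mm_movemask_epi8 (_mm_and_si128 (_mm_cmpeq_epi8 (b0, first),
                                          _mm_cmpeq_epi8 (b1, second)));

      while (mask != 0)
        {
          unsigned int pos = __builtin_ctz (mask);
          if (i + pos + needle_len <= hay_len
              && memcmp (hay + i + pos, needle, needle_len) == 0)
            return hay + i + pos;
          mask &= mask - 1;   /* drop the lowest candidate bit */
        }
    }
  return NULL;
}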
Ondřej Bílka
dc1a95c730 Faster strrchr. 2013-09-26 19:23:01 +02:00
Ondřej Bílka
5905e7b3e2 Faster strchr implementation. 2013-09-11 17:07:38 +02:00
Ondřej Bílka
8f02859f17 Add unaligned strcmp. 2013-09-03 16:27:10 +02:00
Ondřej Bílka
0186c6e97e Fix rawmemchr regression on bulldozer. 2013-08-30 10:14:37 +02:00
Ondrej Bilka
2d48b41c8f Faster memcpy on x64.
We add a new memcpy version that uses unaligned loads, which are fast on
modern processors. This enables a second improvement: avoiding the computed
jump, which is a relatively expensive operation (see the sketch after this
entry).

Tests available here:
http://kam.mff.cuni.cz/~ondra/memcpy_profile_result27_04_13.tar.bz2
2013-05-20 08:24:41 +02:00
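A minimal sketch of both ideas in this commit for one small-size case: unaligned fixed-size loads and stores, with the tail handled by an overlapping copy instead of a computed jump into a table of per-length cases. Illustrative only (the real code is assembly); the helper assumes 8 <= n <= 16 and non-overlapping buffers, and uses fixed-size memcpy calls so the compiler emits single unaligned moves.

#include <stdint.h>
#include <string.h>

static void
copy_8_to_16 (void *dst, const void *src, size_t n)
{
  uint64_t head, tail;

  /* Two possibly-overlapping 8-byte moves cover every length from 8 to 16,
     so no branch tree and no jump table indexed by n is needed.  */
  memcpy (&head, src, 8);
  memcpy (&tail, (const char *) src + n - 8, 8);
  memcpy (dst, &head, 8);
  memcpy ((char *) dst + n - 8, &tail, 8);
}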
Ondrej Bilka
37bb363f03 Faster strlen on x64. 2013-03-18 07:39:12 +01:00
Ondrej Bilka
80f844c9d8 Remove Prefer_SSE_for_memop on x64 2013-03-11 15:39:08 +01:00
Ondrej Bilka
87bd9bc4bd Revert " * sysdeps/x86_64/strlen.S: Replace with new SSE2 based implementation"
This reverts commit b79188d717.
2013-03-06 22:27:18 +01:00
Ondrej Bilka
b79188d717 * sysdeps/x86_64/strlen.S: Replace with new SSE2 based implementation
which is faster on all x86_64 architectures.
	Tested on AMD, Intel Nehalem, SNB, IVB.
2013-03-06 21:54:01 +01:00
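A minimal sketch of an SSE2 strlen core like the one this commit refers to: compare 16 bytes at a time against zero and use the first set bit of the byte mask to locate the terminator. The helper is hypothetical and assumes a 16-byte-aligned pointer; the real code also handles the unaligned first block.

#include <emmintrin.h>
#include <stddef.h>

static size_t
sse2_strlen_aligned (const char *s)   /* s must be 16-byte aligned */
{
  const __m128i zero = _mm_setzero_si128 ();

  for (const char *p = s; ; p += 16)
    {
      /* pcmpeqb against zero, then pmovmskb: one mask bit per NUL byte.  */
      __m128i chunk = _mm_load_si128 ((const __m128i *) p);
      unsigned int mask = _mm_movemask_epi8 (_mm_cmpeq_epi8 (chunk, zero));
      if (mask != 0)
        return (size_t) (p - s) + __builtin_ctz (mask);
    }
}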
Joseph Myers
568035b787 Update copyright notices with scripts/update-copyrights. 2013-01-02 19:05:09 +00:00
H.J. Lu
ac49ecaf9d Add x86-64 __libc_ifunc_impl_list 2012-10-11 16:41:12 -07:00