glibc/sysdeps/x86_64/multiarch/memset.S
H.J. Lu 5e8c5bb1ac X86-64: Remove the previous SSE2/AVX2 memsets
Since the new SSE2/AVX2 memsets are faster than the previous ones, we
can remove the old implementations and replace them with the new ones.
This reduces the size of libc.so by about 900 bytes.

There is no change in IFUNC selection on processors where the SSE2 and
AVX2 memsets weren't used before.  Where the SSE2 or AVX2 memset was
used, the new SSE2 or AVX2 memset optimized with Enhanced REP STOSB
(ERMS) will be selected on processors that have ERMS.  The new AVX512
memset will be used for processors with AVX512 which prefer vzeroupper.
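
For readers unfamiliar with IFUNC dispatch, here is a rough C sketch of
the same selection idea, using GCC's ifunc attribute (needs a
reasonably recent GCC or Clang on an ELF system).  Everything in it is
illustrative: my_memset and the variant functions are hypothetical
stand-ins for glibc's __memset_* symbols, the sketch issues CPUID
directly whereas glibc reads its cached feature bits, the AVX2 check
omits the OS XSAVE test that glibc's AVX2_Usable includes, and the
AVX512/Prefer_No_VZEROUPPER cases handled by the real assembly resolver
below are left out.

    #include <cpuid.h>
    #include <stddef.h>

    typedef void *(*memset_fn) (void *, int, size_t);

    /* Hypothetical stand-ins for __memset_sse2_unaligned and friends;
       each is just a byte loop here.  */
    #define DEFINE_VARIANT(name)		\
    static void *				\
    name (void *s, int c, size_t n)		\
    {						\
      unsigned char *p = s;			\
      while (n--)				\
        *p++ = (unsigned char) c;		\
      return s;					\
    }

    DEFINE_VARIANT (memset_sse2_unaligned)
    DEFINE_VARIANT (memset_sse2_unaligned_erms)
    DEFINE_VARIANT (memset_avx2_unaligned)
    DEFINE_VARIANT (memset_avx2_unaligned_erms)

    /* The resolver runs once at load time; its result becomes the
       permanent target of my_memset.  */
    static memset_fn
    resolve_memset (void)
    {
      unsigned int eax, ebx, ecx, edx;

      /* CPUID leaf 7, subleaf 0: EBX bit 5 is AVX2, bit 9 is ERMS.  */
      if (!__get_cpuid_count (7, 0, &eax, &ebx, &ecx, &edx))
        return memset_sse2_unaligned;

      int avx2 = (ebx >> 5) & 1;	/* glibc's AVX2_Usable also checks
					   OS-enabled YMM state.  */
      int erms = (ebx >> 9) & 1;

      if (avx2)
        return erms ? memset_avx2_unaligned_erms : memset_avx2_unaligned;
      return erms ? memset_sse2_unaligned_erms : memset_sse2_unaligned;
    }

    void *my_memset (void *, int, size_t)
         __attribute__ ((ifunc ("resolve_memset")));

Calls to my_memset then dispatch to whichever variant the resolver
picked when the binary was loaded.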

	[BZ #19881]
	* sysdeps/x86_64/multiarch/memset-sse2-unaligned-erms.S: Folded
	into ...
	* sysdeps/x86_64/memset.S: This.
	(__bzero): Removed.
	(__memset_tail): Likewise.
	(__memset_chk): Likewise.
	(memset): Likewise.
	(MEMSET_CHK_SYMBOL): New. Define only if MEMSET_SYMBOL isn't
	defined.
	(MEMSET_SYMBOL): Define only if MEMSET_SYMBOL isn't defined.
	* sysdeps/x86_64/multiarch/memset-avx2.S: Removed.
	(__memset_zero_constant_len_parameter): Check SHARED instead of
	PIC.
	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Remove
	memset-avx2 and memset-sse2-unaligned-erms.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Remove __memset_chk_sse2,
	__memset_chk_avx2, __memset_sse2 and __memset_avx2.
	* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
	(__bzero): Enabled.
	* sysdeps/x86_64/multiarch/memset.S (memset): Replace
	__memset_sse2 and __memset_avx2 with __memset_sse2_unaligned
	and __memset_avx2_unaligned.  Use __memset_sse2_unaligned_erms
	or __memset_avx2_unaligned_erms if processor has ERMS.  Support
	__memset_avx512_unaligned_erms and __memset_avx512_unaligned.
	(memset): Removed.
	(__memset_chk): Likewise.
	(MEMSET_SYMBOL): New.
	(libc_hidden_builtin_def): Replace __memset_sse2 with
	__memset_sse2_unaligned.
	* sysdeps/x86_64/multiarch/memset_chk.S (__memset_chk): Replace
	__memset_chk_sse2 and __memset_chk_avx2 with
	__memset_chk_sse2_unaligned and __memset_chk_avx2_unaligned.
	Use __memset_chk_sse2_unaligned_erms or
	__memset_chk_avx2_unaligned_erms if processor has ERMS.  Support
	__memset_chk_avx512_unaligned_erms and
	__memset_chk_avx512_unaligned.
2016-06-08 13:56:14 -07:00
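
The *_erms variants named throughout the change list above are built
around Enhanced REP STOSB.  As a minimal sketch (not glibc's actual
code, which also switches between vector stores and REP STOSB by
buffer size), the core of such a variant reduces to one microcoded
string instruction:

    #include <stddef.h>

    /* On CPUs reporting ERMS, a bare "rep stosb" is competitive with
       wide vector stores for large buffers.  RDI holds the
       destination, RCX the count, AL the fill byte.  */
    static void *
    memset_erms (void *s, int c, size_t n)
    {
      void *dst = s;
      __asm__ __volatile__ ("rep stosb"
			    : "+D" (dst), "+c" (n)
			    : "a" (c)
			    : "memory");
      return s;
    }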

/* Multiple versions of memset
   All versions must be listed in ifunc-impl-list.c.
   Copyright (C) 2014-2016 Free Software Foundation, Inc.
   This file is part of the GNU C Library.

   The GNU C Library is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public
   License as published by the Free Software Foundation; either
   version 2.1 of the License, or (at your option) any later version.

   The GNU C Library is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   Lesser General Public License for more details.

   You should have received a copy of the GNU Lesser General Public
   License along with the GNU C Library; if not, see
   <http://www.gnu.org/licenses/>.  */

#include <sysdep.h>
#include <shlib-compat.h>
#include <init-arch.h>

/* Define multiple versions only for the definition in libc.  */
#if IS_IN (libc)
ENTRY(memset)
	.type	memset, @gnu_indirect_function
	LOAD_RTLD_GLOBAL_RO_RDX
	/* Default to SSE2, preferring the ERMS variant when the
	   processor has Enhanced REP STOSB.  */
	lea	__memset_sse2_unaligned_erms(%rip), %RAX_LP
	HAS_CPU_FEATURE (ERMS)
	jnz	1f
	lea	__memset_sse2_unaligned(%rip), %RAX_LP
1:
	/* Upgrade to AVX2 when usable, again preferring ERMS.  */
	HAS_ARCH_FEATURE (AVX2_Usable)
	jz	2f
	lea	__memset_avx2_unaligned_erms(%rip), %RAX_LP
	HAS_CPU_FEATURE (ERMS)
	jnz	L(AVX512F)
	lea	__memset_avx2_unaligned(%rip), %RAX_LP
L(AVX512F):
# ifdef HAVE_AVX512_ASM_SUPPORT
	HAS_ARCH_FEATURE (AVX512F_Usable)
	jz	2f
	/* Processors that prefer to avoid VZEROUPPER get the
	   no-vzeroupper variant; the rest get the unaligned
	   (ERMS) AVX512 variants.  */
	lea	__memset_avx512_no_vzeroupper(%rip), %RAX_LP
	HAS_ARCH_FEATURE (Prefer_No_VZEROUPPER)
	jnz	2f
	lea	__memset_avx512_unaligned_erms(%rip), %RAX_LP
	HAS_CPU_FEATURE (ERMS)
	jnz	2f
	lea	__memset_avx512_unaligned(%rip), %RAX_LP
# endif
2:	ret
END(memset)
#endif

#if IS_IN (libc)
# define MEMSET_SYMBOL(p,s)	p##_sse2_##s

# ifdef SHARED
#  undef libc_hidden_builtin_def
/* It doesn't make sense to send libc-internal memset calls through a PLT.
   The speedup we get from using SSE2 instructions is likely eaten away
   by the indirect call in the PLT.  */
#  define libc_hidden_builtin_def(name) \
	.globl __GI_memset; __GI_memset = __memset_sse2_unaligned
# endif

# undef strong_alias
# define strong_alias(original, alias)
#endif

#include "../memset.S"
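
The libc_hidden_builtin_def redefinition above exists so that
libc-internal memset calls bind directly to __memset_sse2_unaligned
instead of going through the PLT.  The C sketch below illustrates the
same aliasing idea with GCC attributes; the names are hypothetical, and
glibc's real mechanism is the __GI_* symbol convention generated by
its macros.

    #include <stddef.h>

    /* Stand-in for a concrete implementation such as
       __memset_sse2_unaligned.  */
    void *
    my_memset_impl (void *s, int c, size_t n)
    {
      unsigned char *p = s;
      while (n--)
        *p++ = (unsigned char) c;
      return s;
    }

    /* Exported name: in a shared library this symbol is interposable,
       so external calls may go through the PLT.  */
    void *my_memset (void *, int, size_t)
         __attribute__ ((alias ("my_memset_impl")));

    /* Hidden alias (the role __GI_memset plays in libc.so): internal
       callers reference this name and the linker binds it directly,
       with no PLT indirection and no chance of interposition.  */
    void *my_memset_internal (void *, int, size_t)
         __attribute__ ((alias ("my_memset_impl"),
			 visibility ("hidden")));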