mirror of https://sourceware.org/git/glibc.git
synced 2024-12-25 12:11:10 +00:00

commit 935971ba6b
Optimize x86-64 memcmp/wmemcmp with AVX2.  It uses vector compare as
much as possible.  It is as fast as SSE4 memcmp for size <= 16 bytes
and up to 2X faster for size > 16 bytes on Haswell and Skylake.  Select
AVX2 memcmp/wmemcmp on AVX2 machines where vzeroupper is preferred and
AVX unaligned load is fast.

NB: It uses TZCNT instead of BSF since TZCNT produces the same result
as BSF for non-zero input.  TZCNT is faster than BSF and is executed
as BSF if the machine doesn't support TZCNT.

Key features:

1. For sizes from 2 to 7 bytes, load as big endian with movbe and bswap
   to avoid branches.
2. Use overlapping compare to avoid a branch.
3. Use vector compare when size >= 4 bytes for memcmp or size >= 8
   bytes for wmemcmp.
4. If size is 8 * VEC_SIZE or less, unroll the loop.
5. Compare 4 * VEC_SIZE at a time with the aligned first memory area.
6. Use 2 vector compares when size is 2 * VEC_SIZE or less.
7. Use 4 vector compares when size is 4 * VEC_SIZE or less.
8. Use 8 vector compares when size is 8 * VEC_SIZE or less.

	* sysdeps/x86/cpu-features.h (index_cpu_MOVBE): New.
	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines):
	Add memcmp-avx2 and wmemcmp-avx2.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Test __memcmp_avx2 and
	__wmemcmp_avx2.
	* sysdeps/x86_64/multiarch/memcmp-avx2.S: New file.
	* sysdeps/x86_64/multiarch/wmemcmp-avx2.S: Likewise.
	* sysdeps/x86_64/multiarch/memcmp.S: Use __memcmp_avx2 on AVX2
	machines if AVX unaligned load is fast and vzeroupper is
	preferred.
	* sysdeps/x86_64/multiarch/wmemcmp.S: Use __wmemcmp_avx2 on
	AVX2 machines if AVX unaligned load is fast and vzeroupper is
	preferred.
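Features 1 and 2 (big-endian loads plus overlapping compares to avoid branches) can be sketched in portable C.  This is a minimal illustration, not the commit's assembly: `memcmp_small` and `load_be` are hypothetical names, the `memcpy` + `__builtin_bswap32` pair stands in for movbe/bswap, and only the 4..8-byte case is shown.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: read n (4 <= n <= 8) bytes at p as a big-endian
   64-bit value using two OVERLAPPING 4-byte loads.  The overlap region
   is counted twice, but identically for both inputs, so ordering is
   preserved; big-endian order makes integer compare match byte-wise
   lexicographic compare.  */
static uint64_t load_be(const void *p, size_t n)
{
    const unsigned char *s = p;
    uint32_t head, tail;
    memcpy(&head, s, 4);              /* first 4 bytes */
    memcpy(&tail, s + n - 4, 4);      /* last 4 bytes, may overlap head */
    head = __builtin_bswap32(head);   /* stands in for movbe/bswap */
    tail = __builtin_bswap32(tail);
    return ((uint64_t)head << 32) | tail;
}

/* Branch-reduced memcmp for 4 <= n <= 8: one compare, no per-byte loop. */
int memcmp_small(const void *a, const void *b, size_t n)
{
    assert(n >= 4 && n <= 8);
    uint64_t x = load_be(a, n);
    uint64_t y = load_be(b, n);
    if (x == y)
        return 0;
    return x > y ? 1 : -1;
}
```

If the first four bytes differ, the difference lands in the high half and decides the compare; if they are equal, the overlapped bytes inside `tail` are also equal, so the first genuinely differing byte decides it, exactly as memcmp requires.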
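The TZCNT note above refers to locating the first mismatching byte inside a vector compare mask.  A hedged scalar sketch: in the real code the mask comes from VPCMPEQB + VPMOVMSKB over 32 bytes, while here a plain loop builds it so the example runs on any machine; `diff_mask32` and `first_diff` are illustrative names only.

```c
#include <stddef.h>
#include <stdint.h>

/* Scalar stand-in for VPCMPEQB + VPMOVMSKB: bit i of the result is set
   iff a[i] != b[i] within a 32-byte block.  */
static uint32_t diff_mask32(const unsigned char *a, const unsigned char *b)
{
    uint32_t mask = 0;
    for (size_t i = 0; i < 32; i++)
        if (a[i] != b[i])
            mask |= (uint32_t)1 << i;
    return mask;
}

/* Index of the first mismatch in a 32-byte block, or 32 if equal.
   __builtin_ctz typically compiles to TZCNT (or BSF) on x86-64; like
   BSF, its result is only defined for non-zero input, hence the guard
   -- which mirrors why TZCNT and BSF are interchangeable here.  */
size_t first_diff(const unsigned char *a, const unsigned char *b)
{
    uint32_t mask = diff_mask32(a, b);
    return mask ? (size_t)__builtin_ctz(mask) : 32;
}
```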
bcopy.S
ifunc-impl-list.c
Makefile
memcmp-avx2-movbe.S
memcmp-sse4.S
memcmp-ssse3.S
memcmp.S
memcpy_chk.S
memcpy-ssse3-back.S
memcpy-ssse3.S
memcpy.S
memmove_chk.S
memmove-avx512-no-vzeroupper.S
memmove-avx512-unaligned-erms.S
memmove-avx-unaligned-erms.S
memmove-ssse3-back.S
memmove-ssse3.S
memmove-vec-unaligned-erms.S
memmove.S
mempcpy_chk.S
mempcpy.S
memset_chk.S
memset-avx2-unaligned-erms.S
memset-avx512-no-vzeroupper.S
memset-avx512-unaligned-erms.S
memset-vec-unaligned-erms.S
memset.S
sched_cpucount.c
stpcpy-sse2-unaligned.S
stpcpy-ssse3.S
stpcpy.S
stpncpy-c.c
stpncpy-sse2-unaligned.S
stpncpy-ssse3.S
stpncpy.S
strcasecmp_l-ssse3.S
strcasecmp_l.S
strcat-sse2-unaligned.S
strcat-ssse3.S
strcat.S
strchr-sse2-no-bsf.S
strchr.S
strcmp-sse2-unaligned.S
strcmp-sse42.S
strcmp-ssse3.S
strcmp.S
strcpy-sse2-unaligned.S
strcpy-ssse3.S
strcpy.S
strcspn-c.c
strcspn.S
strncase_l-ssse3.S
strncase_l.S
strncat-c.c
strncat-sse2-unaligned.S
strncat-ssse3.S
strncat.S
strncmp-ssse3.S
strncmp.S
strncpy-c.c
strncpy-sse2-unaligned.S
strncpy-ssse3.S
strncpy.S
strpbrk-c.c
strpbrk.S
strspn-c.c
strspn.S
strstr-sse2-unaligned.S
strstr.c
test-multiarch.c
varshift.c
varshift.h
wcscpy-c.c
wcscpy-ssse3.S
wcscpy.S
wmemcmp-avx2-movbe.S
wmemcmp-c.c
wmemcmp-sse4.S
wmemcmp-ssse3.S
wmemcmp.S
wmemset_chk-nonshared.S
wmemset_chk.c
wmemset.c
wmemset.h