mirror of
https://sourceware.org/git/glibc.git
commit 8b4416d83c
These new memcpy functions are the 32-bit version of the x86_64 SSE2
unaligned memcpy. The average memcpy performance benefit is 18% on
Silvermont; other platforms improved by about 35%. Benchmarked on
Silvermont, Haswell, Ivy Bridge, Sandy Bridge and Westmere; performance
results are attached in
https://sourceware.org/ml/libc-alpha/2014-07/msg00157.html

	* sysdeps/i386/i686/multiarch/bcopy-sse2-unaligned.S: New file.
	* sysdeps/i386/i686/multiarch/memcpy-sse2-unaligned.S: Likewise.
	* sysdeps/i386/i686/multiarch/memmove-sse2-unaligned.S: Likewise.
	* sysdeps/i386/i686/multiarch/mempcpy-sse2-unaligned.S: Likewise.
	* sysdeps/i386/i686/multiarch/bcopy.S: Select the sse2_unaligned
	version if bit_Fast_Unaligned_Load is set.
	* sysdeps/i386/i686/multiarch/memcpy.S: Likewise.
	* sysdeps/i386/i686/multiarch/memcpy_chk.S: Likewise.
	* sysdeps/i386/i686/multiarch/memmove.S: Likewise.
	* sysdeps/i386/i686/multiarch/memmove_chk.S: Likewise.
	* sysdeps/i386/i686/multiarch/mempcpy.S: Likewise.
	* sysdeps/i386/i686/multiarch/mempcpy_chk.S: Likewise.
	* sysdeps/i386/i686/multiarch/Makefile (sysdep_routines): Add
	bcopy-sse2-unaligned, memcpy-sse2-unaligned,
	memmove-sse2-unaligned and mempcpy-sse2-unaligned.
	* sysdeps/i386/i686/multiarch/ifunc-impl-list.c (MAX_IFUNC):
	Set to 4.
	(__libc_ifunc_impl_list): Test __bcopy_sse2_unaligned,
	__memmove_chk_sse2_unaligned, __memmove_sse2_unaligned,
	__memcpy_chk_sse2_unaligned, __memcpy_sse2_unaligned,
	__mempcpy_chk_sse2_unaligned, and __mempcpy_sse2_unaligned.
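The commit wires the new variants in through glibc's IFUNC mechanism: the dispatch stubs (memcpy.S and friends) test bit_Fast_Unaligned_Load in __cpu_features and hand the dynamic linker a pointer to the chosen implementation, once, at symbol-resolution time. The following is a minimal C sketch of both halves under stated assumptions, not the actual glibc code: an SSE2 unaligned-load copy loop standing in for the hand-written i386 assembly in memcpy-sse2-unaligned.S, and a GCC-style IFUNC resolver standing in for the assembly dispatch. The names `my_memcpy`, `cpu_has_fast_unaligned_load`, and `memcpy_byte_fallback` are hypothetical, not glibc internals.

```c
#include <emmintrin.h>  /* SSE2 intrinsics; build with -msse2 on i386 */
#include <stddef.h>

/* Stand-in for the SSE2 unaligned variant: _mm_loadu_si128 and
   _mm_storeu_si128 tolerate any alignment, so on CPUs where unaligned
   16-byte loads are fast the bulk loop needs no alignment prologue.
   The real memcpy-sse2-unaligned.S is hand-written assembly.  */
static void *
memcpy_sse2_unaligned_sketch (void *dst, const void *src, size_t n)
{
  unsigned char *d = dst;
  const unsigned char *s = src;
  while (n >= 16)
    {
      _mm_storeu_si128 ((__m128i *) d,
                        _mm_loadu_si128 ((const __m128i *) s));
      d += 16;
      s += 16;
      n -= 16;
    }
  while (n-- > 0)              /* byte tail */
    *d++ = *s++;
  return dst;
}

/* Hypothetical baseline, standing in for the existing i686 memcpy.  */
static void *
memcpy_byte_fallback (void *dst, const void *src, size_t n)
{
  unsigned char *d = dst;
  const unsigned char *s = src;
  while (n-- > 0)
    *d++ = *s++;
  return dst;
}

/* Hypothetical probe.  glibc instead tests bit_Fast_Unaligned_Load in
   __cpu_features, filled in from CPUID at startup; always returning 1
   keeps this sketch self-contained.  */
static int
cpu_has_fast_unaligned_load (void)
{
  return 1;
}

typedef void *(*memcpy_fn) (void *, const void *, size_t);

/* IFUNC resolver: the dynamic linker calls this once when my_memcpy is
   first bound and binds the symbol to whatever it returns, so later
   calls pay nothing for the CPU check.  */
static memcpy_fn
resolve_my_memcpy (void)
{
  return cpu_has_fast_unaligned_load ()
         ? memcpy_sse2_unaligned_sketch
         : memcpy_byte_fallback;
}

void *my_memcpy (void *dst, const void *src, size_t n)
  __attribute__ ((ifunc ("resolve_my_memcpy")));
```

Doing the selection once in the resolver rather than per call is the point of the IFUNC scheme. It is also why the ChangeLog touches ifunc-impl-list.c: each new variant is registered in __libc_ifunc_impl_list so the test suite and benchtests can exercise every implementation, not only the one the resolver would pick on the host CPU.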
Listing of sysdeps/i386/i686/:

fpu/
multiarch/
nptl/
add_n.S
bcopy.S
bzero.S
cacheinfo.c
dl-hash.h
ffs.c
hp-timing.h
Implies
Makefile
memcmp.S
memcpy_chk.S
memcpy.S
memmove_chk.S
memmove.S
mempcpy_chk.S
mempcpy.S
memset_chk.S
memset.S
memusage.h
pthread_spin_trylock.S
stack-aliasing.h
strcmp.S
strtok_r.S
strtok.S
tst-stack-align.h