The performance improvement is about 20%-30% for
larger cases and about 1%-5% for smaller cases.
Used SIMD load/store instead of GPR for large
overlapping forward moves.
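As an illustration (not the actual patch, which is hand-written AArch64
assembly), here is a minimal C sketch of the idea using NEON intrinsics;
the function name is made up:

  #include <arm_neon.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Illustrative sketch only: forward move using 16-byte SIMD
     loads/stores.  Copying in ascending order is safe for an
     overlapping forward move (dst < src), and each SIMD access
     moves 16 bytes where a GPR access moves at most 8.  */
  static void
  forward_move_simd (uint8_t *dst, const uint8_t *src, size_t n)
  {
    size_t i = 0;

    /* One 16-byte SIMD register per iteration instead of a pair
       of 8-byte GPR loads/stores.  */
    for (; i + 16 <= n; i += 16)
      vst1q_u8 (dst + i, vld1q_u8 (src + i));

    /* Byte tail; the real code handles the tail differently.  */
    for (; i < n; i++)
      dst[i] = src[i];
  }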
Reused the existing memcpy implementation for smaller moves
or overlapping backward moves.
Fixed the existing memcpy implementation so that it can
handle the overlapping case.
Simplified loop tails in the memcpy implementation: use a
branchless overlapping sequence of fixed-length loads/stores
instead of branching on the remaining size.
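A minimal C sketch of that tail technique, assuming a remainder of 16 to
32 bytes (the function name is made up; the real code is assembly and
also covers shorter remainders):

  #include <arm_neon.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Copy a tail of 16..32 bytes with two fixed-size, possibly
     overlapping 16-byte chunks: one at the start and one ending
     exactly at the last byte.  No branch on the exact size is
     needed.  Both loads are done before the stores.  */
  static void
  copy_tail_16_32 (uint8_t *dst, const uint8_t *src, size_t n)
  {
    uint8x16_t head = vld1q_u8 (src);
    uint8x16_t tail = vld1q_u8 (src + n - 16);
    vst1q_u8 (dst, head);
    vst1q_u8 (dst + n - 16, tail);
  }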
A cleanup/optimization converting adjacent str instructions to stp.
Added __memmove_thunderx2 to the list of available
implementations.
Here is the updated patch for improving the long unaligned
code path (the one using the "ext" instruction).
1. The always-taken conditional branch at the beginning is
removed.
2. Epilogue code is placed after the end of the loop to
reduce the number of branches.
3. The redundant "mov" instructions inside the loop are gone
thanks to the changed order of the registers in the "ext"
instructions inside the loop; the prologue gets an additional
"ext" instruction (see the sketch after this list).
4. Updating the count in the prologue was hoisted out, as
it is the same update for each prologue.
5. Invariant code of the loop epilogue was hoisted out.
6. As the current ext chunk is exactly 16 instructions long,
a "nop" was added at the beginning of the code sequence so
that the loop entry for all the chunks is aligned.
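To illustrate point 3, here is a hedged C sketch using NEON intrinsics;
the fixed 3-byte shift, the two-way unrolling and the function name are
assumptions for illustration only, while the real code is assembly
dispatching to fixed-shift chunks:

  #include <arm_neon.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Illustrative only: copy 'chunks' aligned 16-byte output chunks
     when the source data starts 3 bytes into the aligned chunk at
     src_aligned.  'chunks' is assumed even; the reads run one aligned
     chunk past the data produced, and edge handling at both ends is
     omitted (the real code deals with both carefully).  */
  static void
  copy_ext3_unrolled (uint8_t *dst, const uint8_t *src_aligned,
                      size_t chunks)
  {
    uint8x16_t a = vld1q_u8 (src_aligned);   /* prime the window */

    for (size_t i = 0; i < chunks; i += 2)
      {
        uint8x16_t b = vld1q_u8 (src_aligned + 16 * (i + 1));
        vst1q_u8 (dst + 16 * i, vextq_u8 (a, b, 3));

        /* Second half of the unrolled body: 'b' now plays the role
           of the older register, so no "mov" is needed to slide the
           window.  */
        a = vld1q_u8 (src_aligned + 16 * (i + 2));
        vst1q_u8 (dst + 16 * (i + 1), vextq_u8 (b, a, 3));
      }
  }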
* sysdeps/aarch64/multiarch/memcpy_thunderx2.S: Clean up branching
and remove redundant code.
Since aligned loads and stores are a huge performance
advantage, the implementation always tries to do aligned
accesses.  Besides the cases where the src and dst addresses
are both aligned, or are unaligned by the same amount, there
are cases where src and dst are not evenly unaligned relative
to each other.  For such cases (if the length is big enough)
the ext instruction is used to merge and shift two memory
chunks loaded from two adjacent aligned locations, and the
adjusted chunk is then stored to an aligned address.
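As an illustration of the merge-and-shift step, here is a minimal C
sketch using the vextq_u8 intrinsic (the C counterpart of "ext"),
assuming a fixed mutual misalignment of 3 bytes; the function name is
made up and the handling of the first and last bytes is omitted:

  #include <arm_neon.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Illustrative only: dst is 16-byte aligned and the wanted source
     data starts 3 bytes into the aligned chunk at src_aligned.  Each
     output chunk is built from two adjacent aligned source chunks;
     the reads run one aligned chunk past the data produced.  */
  static void
  copy_via_ext3 (uint8_t *dst, const uint8_t *src_aligned, size_t chunks)
  {
    uint8x16_t a = vld1q_u8 (src_aligned);

    for (size_t i = 0; i < chunks; i++)
      {
        uint8x16_t b = vld1q_u8 (src_aligned + 16 * (i + 1));
        /* Concatenate a:b and take the 16 bytes starting at byte 3:
           aligned load + ext + aligned store.  */
        vst1q_u8 (dst + 16 * i, vextq_u8 (a, b, 3));
        a = b;   /* slide the window to the next aligned chunk */
      }
  }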
Performance gain against the current T2 implementation:
memcpy-large: 65K-32M: +40% - +10%
memcpy-walk: 128-32M: +20% - +2%
* sysdeps/aarch64/multiarch/Makefile (sysdep_routines):
Add memcpy_thunderx2.
* sysdeps/aarch64/multiarch/ifunc-impl-list.c (MAX_IFUNC):
Increment to 4.
(__libc_ifunc_impl_list): Add __memcpy_thunderx2.
* sysdeps/aarch64/multiarch/memcpy.c (libc_ifunc): Add IS_THUNDERX2
and IS_THUNDERX2PA checks.
* sysdeps/aarch64/multiarch/memcpy_thunderx.S (USE_THUNDERX2):
Use macro to set name appropriately.
(memcpy): Use USE_THUNDERX2 macro to modify prefetches.
* sysdeps/aarch64/multiarch/memcpy_thunderx2.S: New file.
* sysdeps/unix/sysv/linux/aarch64/cpu-features.h (IS_THUNDERX2PA):
New macro.
(IS_THUNDERX2): New macro.
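For context, here is a self-contained C sketch of what the new MIDR
checks decode; the implementer codes and part numbers are quoted from
memory of cpu-features.h and should be treated as assumptions:

  #include <stdint.h>
  #include <stdio.h>

  /* MIDR_EL1: implementer in bits [31:24], part number in bits [15:4].  */
  #define MIDR_IMPLEMENTOR(midr) (((midr) >> 24) & 0xff)
  #define MIDR_PARTNUM(midr)     (((midr) >> 4) & 0xfff)

  /* ThunderX2 sold under the Cavium implementer code ('C').  */
  #define IS_THUNDERX2(midr)   (MIDR_IMPLEMENTOR (midr) == 'C'   \
                                && MIDR_PARTNUM (midr) == 0xaf)
  /* Pre-acquisition parts that still report the Broadcom code ('B').  */
  #define IS_THUNDERX2PA(midr) (MIDR_IMPLEMENTOR (midr) == 'B'   \
                                && MIDR_PARTNUM (midr) == 0x516)

  int
  main (void)
  {
    uint32_t midr = ((uint32_t) 'C' << 24) | (0xaf << 4);  /* example */
    printf ("thunderx2=%d thunderx2pa=%d\n",
            IS_THUNDERX2 (midr), IS_THUNDERX2PA (midr));
    return 0;
  }

In memcpy.c the ifunc simply selects __memcpy_thunderx2 when either
check matches, alongside the existing CPU-specific variants.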