glibc/sysdeps/powerpc/powerpc64/le/power10
Lucas A. M. Magalhaes a55e2da270 powerpc: Optimized memcmp for power10
This patch is based on __memcmp_power8 and the recent
__strlen_power10.

Improvements from __memcmp_power8:

1. Don't need alignment code.

   On POWER10, lxvp and lxvl do not generate alignment interrupts, so
they are safe for use on caching-inhibited memory.  Notice that the
comparison in the main loop waits for both VSRs to be ready.
Therefore aligning only one of the input addresses does not improve
performance, and aligning both would require a vperm, which adds too
much overhead.

2. Uses new POWER10 instructions

   This code uses lxvp to decrease load contention by loading 32 bytes
per instruction.
   vextractbm is used to shorten the tail code that calculates the
return value.

3. Performance improvement

   This version is around 35% faster on average.  I saw no
performance regressions for any length or alignment.

Thanks Matheus for helping me out with some details.

Co-authored-by: Matheus Castanho <msc@linux.ibm.com>
Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
2021-05-31 18:00:20 -03:00
fpu powerpc: Add support for POWER10 2020-06-29 10:08:38 -03:00
multiarch powerpc: Add support for POWER10 2020-06-29 10:08:38 -03:00
Implies powerpc: Add support for POWER10 2020-06-29 10:08:38 -03:00
memcmp.S powerpc: Optimized memcmp for power10 2021-05-31 18:00:20 -03:00
memcpy.S powerpc64le: Optimize memcpy for POWER10 2021-04-30 18:12:08 -03:00
memmove.S powerpc64le: Optimized memmove for POWER10 2021-04-30 18:12:08 -03:00
memset.S powerpc64le: Optimize memset for POWER10 2021-04-30 18:12:08 -03:00
rawmemchr.S powerpc: Add optimized rawmemchr for POWER10 2021-05-17 10:30:35 -03:00
strlen.S powerpc: Add optimized rawmemchr for POWER10 2021-05-17 10:30:35 -03:00