Commit Graph

67 Commits

Author SHA1 Message Date
Rajalakshmi Srinivasaraghavan
98408b95b1 powerpc: POWER7 strncpy optimization for unaligned string
This patch optimizes strncpy for power7 for an unaligned source or
destination address.  The source or destination address is aligned
to a doubleword, and the data is shifted according to the alignment
and combined with the previously loaded data so it can be written out
as a doubleword.  For each load, the cmpb instruction is used for a
faster null check.

The new optimization shows a 10 to 70% performance improvement for
longer strings, though it does not show a big difference for strings
shorter than 16 bytes due to the additional checks.  Hence this new
algorithm is restricted to strings longer than 16 bytes.
2015-02-12 13:16:08 -05:00
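For illustration, a rough C analogue of the cmpb-based null check used
here (a sketch only; the real code is a single cmpb against a zero
register, which yields a full 0xff/0x00 byte mask, and the helper name
is illustrative):

    #include <stdint.h>

    /* Return nonzero iff any byte of DWORD is zero, i.e. the test that
       cmpb performs in hardware in one instruction.  */
    static inline uint64_t
    has_zero_byte (uint64_t dword)
    {
      return (dword - 0x0101010101010101ULL)
             & ~dword & 0x8080808080808080ULL;
    }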
Adhemerval Zanella
ce6615c9c6 powerpc: Fix POWER7/PPC64 performance regression on LE
This patch fixes a performance regression in the POWER7/PPC64 memcmp
port for little endian.  The LE code uses the 'ldbrx' instruction to
read memory in byte-reversed form; however, ISA 2.06 provides only the
indexed form, which uses a register value as an additional index
instead of a fixed value encoded in the instruction.

The port strategy for LE used an r0 index value and updated the
address value on each compare-loop iteration.  For large compare
sizes, this added 8 more instructions, plus some more depending on the
trailing size.  This patch fixes it by pre-calculating indexes,
removing the address updates in the loops and trailing-size handling.

For large sizes it shows a considerable gain, with performance
doubling to match BE.
2015-01-13 14:35:40 -05:00
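For illustration, a C analogue of the byte-reversed compare (a sketch
assuming a little-endian host; __builtin_bswap64 stands in for what
ldbrx does in hardware, the tail is reduced to a byte loop, and the
function name is illustrative):

    #include <stdint.h>
    #include <string.h>

    static int
    memcmp_sketch (const unsigned char *a, const unsigned char *b, size_t n)
    {
      size_t off = 0;                    /* bump an index, not the addresses */
      for (; n - off >= 8; off += 8)
        {
          uint64_t wa, wb;
          memcpy (&wa, a + off, 8);
          memcpy (&wb, b + off, 8);
          wa = __builtin_bswap64 (wa);   /* ldbrx-style byte reversal */
          wb = __builtin_bswap64 (wb);
          if (wa != wb)
            return wa < wb ? -1 : 1;
        }
      for (; off < n; off++)             /* trailing bytes */
        if (a[off] != b[off])
          return a[off] < b[off] ? -1 : 1;
      return 0;
    }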
Rajalakshmi Srinivasaraghavan
72607db038 powerpc: Optimize POWER7 strcmp trailing checks
This patch optimizes the POWER7 trailing check by avoiding byte read
operations and instead using bitwise operations on the doubleword
that has already been read.
2015-01-13 14:35:40 -05:00
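For illustration, a C sketch of the idea (assuming the final doubleword
holds the string bytes in little-endian order, first byte in the least
significant position; the helper name is illustrative):

    #include <stdint.h>

    /* Compare the already-loaded final doublewords A and B with bitwise
       operations only, honouring the first NUL byte in A.  */
    static int
    trailing_cmp (uint64_t a, uint64_t b)
    {
      uint64_t zero = (a - 0x0101010101010101ULL)
                      & ~a & 0x8080808080808080ULL;   /* NUL-byte flags */
      uint64_t diff = a ^ b;
      if (diff == 0)
        return 0;
      unsigned first_diff = __builtin_ctzll (diff) >> 3;
      if (zero != 0 && (unsigned) (__builtin_ctzll (zero) >> 3) < first_diff)
        return 0;                        /* equal through the terminator */
      unsigned ca = (a >> (8 * first_diff)) & 0xff;
      unsigned cb = (b >> (8 * first_diff)) & 0xff;
      return ca < cb ? -1 : 1;
    }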
Adhemerval Zanella
9f2f36e5a9 powerpc: Optimized strncat for POWER7/PPC64
With 3eb38795db (Simplify strncat) the generic algorithm uses
strlen, strnlen, and memcpy.  This is faster than the current POWER7
implementation, especially for unaligned strings (where the POWER7
code uses byte-by-byte operations).

This patch removes the assembly implementation and uses a multiarch
specialization based on the default algorithm, calling the optimized
POWER7 symbols.
2015-01-13 11:28:40 -05:00
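In outline, the resulting algorithm looks like the following C sketch
(the multiarch specialization dispatches to the optimized POWER7
strlen/strnlen/memcpy symbols; the function name is illustrative):

    #include <string.h>

    char *
    strncat_sketch (char *dest, const char *src, size_t n)
    {
      size_t dest_len = strlen (dest);       /* find the end of DEST */
      size_t copy_len = strnlen (src, n);    /* never read past N bytes */
      memcpy (dest + dest_len, src, copy_len);
      dest[dest_len + copy_len] = '\0';
      return dest;
    }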
Joseph Myers
b168057aaa Update copyright dates with scripts/update-copyrights. 2015-01-02 16:29:47 +00:00
Rajalakshmi Srinivasaraghavan
f59ad976ed powerpc: POWER7 strcpy optimization for unaligned strings
This patch optimizes strcpy for ppc64/power7 for an unaligned source
or destination address.  The source or destination address is aligned
to a doubleword, and the data is shifted according to the alignment
and combined with the previously loaded data so it can be written out
as a doubleword.  For each load, the cmpb instruction is used for a
faster null check.

The word aligned optimization is also removed, since the new unaligned
code path shows better results handling word-aligned strings.

More combinations of unaligned inputs are also added to the benchtests
to measure the improvement.  The new optimization shows a 2 to 80%
performance improvement for longer strings, though it does not show a
big difference for strings shorter than 16 bytes due to the additional
checks.
2014-12-31 14:35:59 -05:00
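For illustration, a C sketch of the shift-and-merge technique described
above, reduced to copying whole doublewords (the cmpb NUL check is
omitted, a big-endian byte order is assumed, and the names are
illustrative):

    #include <stddef.h>
    #include <stdint.h>

    /* Copy COUNT doublewords from unaligned SRC to doubleword-aligned
       DST by merging pairs of aligned loads: the high bytes of each
       output come from the previous load, the low bytes from the
       current one.  Like the assembly, this deliberately reads one
       aligned doubleword past the source data.  */
    static void
    copy_shifted (uint64_t *dst, const unsigned char *src, size_t count)
    {
      unsigned int mis = (uintptr_t) src & 7;   /* misalignment in bytes */
      const uint64_t *asrc = (const uint64_t *) (src - mis);
      unsigned int shift = 8 * mis;
      uint64_t prev = *asrc++;                  /* first aligned load */
      while (count--)
        {
          uint64_t cur = *asrc++;
          *dst++ = shift ? (prev << shift) | (cur >> (64 - shift)) : prev;
          prev = cur;
        }
    }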
Adhemerval Zanella
0f0a1c82f5 powerpc: Add powerpc64 strpbrk optimization
This patch makes the POWER7 optimized strpbrk generic by using
default doubleword stores to zero the hash, instead of VSX
instructions.  Performance on POWER7/POWER8 does not change.
2014-12-02 13:34:02 -05:00
Adhemerval Zanella
bb2542e0ae powerpc: Add powerpc64 strcspn optimization
This patch makes the POWER7 optimized strcspn generic by using
default doubleword stores to zero the hash, instead of VSX
instructions.  Performance on POWER7/POWER8 does not change.
2014-12-02 07:16:24 -05:00
Adhemerval Zanella
2e8a2de2da powerpc: Add powerpc64 strspn optimization
This patch makes the POWER7 optimized strspn generic by using
default doubleword stores to zero the hash, instead of VSX
instructions.  Performance on POWER7/POWER8 machines does not change.
2014-12-02 07:15:58 -05:00
Siddhesh Poyarekar
a109996ef9 Remove IS_IN_libm
Replace with IS_IN (libm). Generated code unchanged on x86_64.

        * include/math.h: Use IS_IN instead of IS_IN_libm.
        * sysdeps/alpha/fpu/s_copysign.c: Likewise.
        * sysdeps/ieee754/ldbl-128ibm/s_copysignl.c: Likewise.
        * sysdeps/ieee754/ldbl-128ibm/s_finitel.c: Likewise.
        * sysdeps/ieee754/ldbl-128ibm/s_fmal.c: Likewise.
        * sysdeps/ieee754/ldbl-128ibm/s_frexpl.c: Likewise.
        * sysdeps/ieee754/ldbl-128ibm/s_isinfl.c: Likewise.
        * sysdeps/ieee754/ldbl-128ibm/s_isnanl.c: Likewise.
        * sysdeps/ieee754/ldbl-128ibm/s_modfl.c: Likewise.
        * sysdeps/ieee754/ldbl-128ibm/s_scalbnl.c: Likewise.
        * sysdeps/ieee754/ldbl-128ibm/s_signbitl.c: Likewise.
        * sysdeps/ieee754/ldbl-64-128/s_copysignl.c: Likewise.
        * sysdeps/ieee754/ldbl-64-128/s_finitel.c: Likewise.
        * sysdeps/ieee754/ldbl-64-128/s_frexpl.c: Likewise.
        * sysdeps/ieee754/ldbl-64-128/s_isinfl.c: Likewise.
        * sysdeps/ieee754/ldbl-64-128/s_isnanl.c: Likewise.
        * sysdeps/ieee754/ldbl-64-128/s_modfl.c: Likewise.
        * sysdeps/ieee754/ldbl-64-128/s_scalbnl.c: Likewise.
        * sysdeps/ieee754/ldbl-64-128/s_signbitl.c: Likewise.
        * sysdeps/ieee754/ldbl-64-128/w_scalblnl.c: Likewise.
        * sysdeps/ieee754/ldbl-opt/s_copysign.c: Likewise.
        * sysdeps/ieee754/ldbl-opt/s_finite.c: Likewise.
        * sysdeps/ieee754/ldbl-opt/s_frexp.c: Likewise.
        * sysdeps/ieee754/ldbl-opt/s_isinf.c: Likewise.
        * sysdeps/ieee754/ldbl-opt/s_isnan.c: Likewise.
        * sysdeps/ieee754/ldbl-opt/s_ldexp.c: Likewise.
        * sysdeps/ieee754/ldbl-opt/s_ldexpl.c: Likewise.
        * sysdeps/ieee754/ldbl-opt/s_modf.c: Likewise.
        * sysdeps/ieee754/ldbl-opt/s_scalbln.c: Likewise.
        * sysdeps/ieee754/ldbl-opt/s_scalbn.c: Likewise.
        * sysdeps/powerpc/power5+/fpu/s_modf.c: Likewise.
        * sysdeps/powerpc/powerpc32/fpu/s_copysign.S: Likewise.
        * sysdeps/powerpc/powerpc32/fpu/s_copysignl.S: Likewise.
        * sysdeps/powerpc/powerpc32/fpu/s_isnan.S: Likewise.
        * sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_copysign.c: Likewise.
        * sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_finite.c: Likewise.
        * sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_isinf.c: Likewise.
        * sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_isnan.c: Likewise.
        * sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_modf.c: Likewise.
        * sysdeps/powerpc/powerpc32/power5/fpu/s_isnan.S: Likewise.
        * sysdeps/powerpc/powerpc32/power6/fpu/s_copysign.S: Likewise.
        * sysdeps/powerpc/powerpc32/power6/fpu/s_isnan.S: Likewise.
        * sysdeps/powerpc/powerpc32/power7/fpu/s_finite.S: Likewise.
        * sysdeps/powerpc/powerpc32/power7/fpu/s_isinf.S: Likewise.
        * sysdeps/powerpc/powerpc32/power7/fpu/s_isnan.S: Likewise.
        * sysdeps/powerpc/powerpc64/fpu/multiarch/s_copysign.c: Likewise.
        * sysdeps/powerpc/powerpc64/fpu/multiarch/s_finite.c: Likewise.
        * sysdeps/powerpc/powerpc64/fpu/multiarch/s_isinf.c: Likewise.
        * sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan.c: Likewise.
        * sysdeps/powerpc/powerpc64/fpu/multiarch/s_modf.c: Likewise.
        * sysdeps/powerpc/powerpc64/fpu/s_copysign.S: Likewise.
        * sysdeps/powerpc/powerpc64/fpu/s_copysignl.S: Likewise.
        * sysdeps/powerpc/powerpc64/fpu/s_isnan.S: Likewise.
        * sysdeps/powerpc/powerpc64/power5/fpu/s_isnan.S: Likewise.
        * sysdeps/powerpc/powerpc64/power6/fpu/s_copysign.S: Likewise.
        * sysdeps/powerpc/powerpc64/power6/fpu/s_isnan.S: Likewise.
        * sysdeps/powerpc/powerpc64/power6x/fpu/s_isnan.S: Likewise.
        * sysdeps/powerpc/powerpc64/power7/fpu/s_finite.S: Likewise.
        * sysdeps/powerpc/powerpc64/power7/fpu/s_isinf.S: Likewise.
        * sysdeps/powerpc/powerpc64/power7/fpu/s_isnan.S: Likewise.
        * sysdeps/powerpc/powerpc64/power8/fpu/s_finite.S: Likewise.
        * sysdeps/powerpc/powerpc64/power8/fpu/s_isinf.S: Likewise.
        * sysdeps/powerpc/powerpc64/power8/fpu/s_isnan.S: Likewise.
        * sysdeps/sparc/sparc32/fpu/s_signbitl.S: Likewise.
        * sysdeps/sparc/sparc32/sparcv9/fpu/s_isnan.S: Likewise.
        * sysdeps/unix/sysv/linux/alpha/fraiseexcpt.S: Likewise.
2014-11-24 11:41:47 +05:30
Adhemerval Zanella
3b473fecdf PowerPC: multiarch bzero cleanup for PPC64
This patch cleans up the multiarch bzero for powerpc64 by removing
the multiarch objects and instead using the memset implementation
embedded in each multiarch optimization.  The generated code is
essentially the same, apart from TB_TOCLESS (which is not essential).
2014-09-10 07:39:46 -04:00
Adhemerval Zanella
87868c2418 PowerPC: Align power7 memcpy using VSX to quadword
This patch changes the power7 memcpy to use VSX instructions only when
memory is aligned to a quadword.  This avoids unaligned kernel traps
on non-cacheable memory (for instance, memory-mapped I/O).
2014-07-07 15:41:27 -05:00
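A plausible form of the guard, in C (illustrative only; the actual
check lives in the memcpy assembly):

    #include <stdint.h>

    /* Take the VSX path only when the memory is quadword (16-byte)
       aligned, so no unaligned vector access can trap on non-cacheable
       memory such as memory-mapped I/O.  */
    static int
    can_use_vsx (const void *dst, const void *src)
    {
      return (((uintptr_t) dst | (uintptr_t) src) & 15) == 0;
    }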
Adhemerval Zanella
17762f6625 PowerPC: optimized memmove for POWER7/PPC64
This patch adds an optimized memmove for POWER7/powerpc64.
Basically, the idea is to use the POWER7 memcpy for non-overlapping
memory regions and an optimized backward memcpy for memory regions
that overlap (similar to the idea of string/memmove.c).

The backward memcpy algorithm used is similar to the one used for the
POWER7 memcpy, with adjustments made for alignment.  The difference is
that memory is always aligned to 16 bytes before using VSX/Altivec
instructions.
2014-07-07 15:41:21 -05:00
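The dispatch can be pictured with this C sketch (structure only; the
real code aligns to 16 bytes before the VSX/Altivec loops and uses an
optimized backward copy rather than a byte loop):

    #include <stddef.h>
    #include <string.h>

    void *
    memmove_sketch (void *dst, const void *src, size_t n)
    {
      unsigned char *d = dst;
      const unsigned char *s = src;
      if (d < s || d >= s + n)           /* no destructive overlap */
        return memcpy (dst, src, n);     /* forward path */
      while (n--)                        /* backward copy for overlap */
        d[n] = s[n];
      return dst;
    }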
Vidya Ranganathan
e23d3d2690 PowerPC: Optimized strcmp for PPC64/POWER7
Optimization is achieved on 8-byte-aligned strings with doubleword
comparison using the cmpb instruction.  On unaligned strings, loop
unrolling is applied for a gain on POWER7.
2014-06-11 08:39:31 -05:00
Adhemerval Zanella
ed36bfa18f PowerPC: Fix optimized strncat strlen call
This patch fixes the optimized ppc64/power7 strncat strlen call for
static builds without ifunc enabled.  The strlen symbol to call in
that situation is just strlen, instead of __GI_strlen (since the
__GI_ alias is only created for shared objects).
2014-06-06 09:37:07 -05:00
Vidya Ranganathan
f360f94a05 PowerPC: strncpy/stpncpy optimization for PPC64/POWER7
The optimization is achieved by the following techniques (see the
sketch after this entry):
  > data alignment [gain from aligned memory access on read/write]
  > loop unrolling/unwinding, from which POWER7 gains performance
    [gain by reduction of branch penalty]
  > zero padding done by calling the optimized memset
2014-05-06 09:54:25 -05:00
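The overall structure, as a C sketch (the tuned assembly implements
each step with the techniques listed above; the function name is
illustrative):

    #include <string.h>

    char *
    strncpy_sketch (char *dst, const char *src, size_t n)
    {
      size_t len = strnlen (src, n);         /* bounded source length */
      memcpy (dst, src, len);                /* aligned, unrolled copy */
      if (len < n)
        memset (dst + len, '\0', n - len);   /* zero padding via memset */
      return dst;
    }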
Adhemerval Zanella
de21c33c06 PowerPC: Fix --disable-multi-arch builds
This patch fixes some powerpc32 and powerpc64 builds with the
--disable-multi-arch option along with different --with-cpu=powerN
settings.  It cleans up the Implies directories by removing the
multiarch folder for non-multiarch configurations, and also fixes two
assembly implementations: powerpc64/power7/strncat.S, which was
calling the wrong strlen; and power8/fpu/s_isnan.S, which was missing
the hidden_def and weak_alias directives.
2014-04-09 06:22:53 -05:00
Alan Modra
af6b17973c Correct prefetch hint in power7 memrchr.
Typo fix.

	* sysdeps/powerpc/powerpc64/power7/memrchr.S: Correct stream hint.
2014-04-02 13:42:27 +10:30
Adhemerval Zanella
6f23d0939e PowerPC: optimized strpbrk for POWER7
This patch adds an optimized strpbrk for POWER7 by using a different
algorithm than the default implementation: it constructs a table based
on the 'accept' argument and uses this table to check for any
occurrence in the input string.  The idea is similar to the one x86_64
uses.  For PowerPC, some tunings were added, such as loop unrolling
and clearing the table memory with VSX instructions.
2014-03-20 19:46:13 -05:00
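The algorithm, as a C sketch (the PowerPC code clears the table with
vector/doubleword stores and unrolls the scan; the strcspn/strspn
variants below use the same table but count a prefix length instead of
returning a pointer):

    #include <stddef.h>

    char *
    strpbrk_sketch (const char *s, const char *accept)
    {
      unsigned char table[256] = { 0 };   /* membership table ("hash") */
      for (; *accept; ++accept)
        table[(unsigned char) *accept] = 1;
      for (; *s; ++s)
        if (table[(unsigned char) *s])
          return (char *) s;
      return NULL;
    }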
Adhemerval Zanella
6eaf95cbfa PowerPC: optimized strcspn for PPC64/POWER7
This patch adds an optimized strcspn for POWER7 by using a different
algorithm than the default implementation: it constructs a table based
on the 'accept' argument and uses this table to check for any
occurrence in the input string.  The idea is similar to the one x86_64
uses.  For PowerPC, some tunings were added, such as loop unrolling
and aligning the table's stack memory to 16 bytes (so the VSX clear
can run without alignment issues).
2014-03-20 11:24:52 -05:00
Vidya Ranganathan
e65caf1f1d PowerPC: strspn optimization for PPC64/POWER7
The optimization is achieved by the following techniques:
  > hashing of the needle.
  > hashing avoids scanning duplicate entries in the needle across the string.
  > initializing the hash table with vector instructions (VSX) by quadword access.
  > unrolling when scanning for a character in the string across the hash table.
2014-03-11 08:54:33 -05:00
Adhemerval Zanella
ba9cc0714e PowerPC: strncat optimization for PPC64
The optimization is achieved by the following techniques:
1. Doubleword-aligned memory access and compares using the
   cmpb instruction.
2. Loop unrolling for byte load/store.
3. CPU prefetch to avoid cache misses.
2014-03-10 07:25:09 -05:00
Rajalakshmi Srinivasaraghavan
c7debbdfac PowerPC: strrchr optimization for POWER7/PPC64
This patch optimizes strrchr() for ppc64.  It uses aligned memory
access along with the cmpb instruction and CPU prefetch to avoid
cache misses, for a speed improvement.
2014-03-03 08:06:41 -06:00
Allan McRae
d4697bc93d Update copyright notices with scripts/update-copyrights 2014-01-01 22:00:23 +10:00
Andreas Schwab
83f5c32d21 Fix uses of CALL_MCOUNT in ppc64 assembler sources 2013-12-19 17:06:48 +01:00
Adhemerval Zanella
69bbc63d88 PowerPC: Adjust multiarch Implies for PowerPC64
This patch adds Implies files in the multiarch folder for POWER chips
so multiarch is enabled when building with the --with-cpu=powerN
option.
2013-12-13 14:58:02 -05:00
Adhemerval Zanella
8a29a3d00b PowerPC: multiarch memset/bzero for PowerPC64 2013-12-13 14:33:16 -05:00
Adhemerval Zanella
5e6a4d4b9e PowerPC: Adjust multiarch Implies for PowerPC64
This patch adds Implies files in the multiarch folder for POWER chips
so multiarch is enabled when building with the --with-cpu=powerN
option.
2013-12-13 14:29:27 -05:00
Adhemerval Zanella
24eeafdb44 PowerPC: Optimized mpn functions for PowerPC64/POWER7
This patch adds optimized __mpn_add_n/__mpn_sub_n for PowerPC64/POWER7.
They originally come from GMP, with adjustments for glibc.
2013-12-06 11:52:31 -06:00
Adhemerval Zanella
2d9470b2ae PowerPC: multiarch logb/logbf/logbl for PowerPC32 2013-12-06 05:47:05 -06:00
Adhemerval Zanella
69f13dbf06 PowerPC: strcpy/stpcpy optimization for PPC64/POWER7
This patch unifies the strcpy and stpcpy implementations for PPC64
and PPC64/POWER7.  The idea for the default powerpc64 implementation
is to provide both doubleword and word aligned memory access.

The PPC64/POWER7 version also provides doubleword and word memory
access, removes the branch hints, uses the cmpb instruction to compare
doublewords/words, and adds an optimization for inputs of the same
alignment.
2013-10-25 13:28:24 -05:00
Alan Modra
466b039332 PowerPC LE memchr and memrchr
http://sourceware.org/ml/libc-alpha/2013-08/msg00105.html

Like strnlen, memchr and memrchr had a number of defects fixed by this
patch as well as adding little-endian support.  The first one I
noticed was that the entry to the main loop needlessly checked for
"are we done yet?" when we know the size is large enough that we can't
be done.  The second defect I noticed was that the main loop count was
wrong, which in turn meant that the small loop needed to handle an
extra word.  Thirdly, there is nothing to say that the string can't
wrap around zero, except of course that we'd normally hit a segfault
on trying to read from address zero.  Fixing that simplified a number
of places:

-	/* Are we done already?  */
-	addi    r9,r8,8
-	cmpld	r9,r7
-	bge	L(null)

becomes

+	cmpld	r8,r7
+	beqlr

However, the exit gets an extra test because I test for being on the
last word and, if so, whether the byte offset is less than the end.
Overall, the change is a win.

Lastly, memrchr used the wrong cache hint.

	* sysdeps/powerpc/powerpc64/power7/memchr.S: Replace rlwimi with
	insrdi.  Make better use of reg selection to speed exit slightly.
	Schedule entry path a little better.  Remove useless "are we done"
	checks on entry to main loop.  Handle wrapping around zero address.
	Correct main loop count.  Handle single left-over word from main
	loop inline rather than by using loop_small.  Remove extra word
	case in loop_small caused by wrong loop count.  Add little-endian
	support.
	* sysdeps/powerpc/powerpc32/power7/memchr.S: Likewise.
	* sysdeps/powerpc/powerpc64/power7/memrchr.S: Likewise.  Use proper
	cache hint.
	* sysdeps/powerpc/powerpc32/power7/memrchr.S: Likewise.
	* sysdeps/powerpc/powerpc64/power7/rawmemchr.S: Add little-endian
	support.  Avoid rlwimi.
	* sysdeps/powerpc/powerpc32/power7/rawmemchr.S: Likewise.
2013-10-04 10:41:46 +09:30
Alan Modra
3be87c77d2 PowerPC LE memset
http://sourceware.org/ml/libc-alpha/2013-08/msg00104.html

One of the things I noticed when looking at power7 timing is that rlwimi
is cracked and the two resulting insns have a register dependency.
That makes it a little slower than the equivalent rldimi.

	* sysdeps/powerpc/powerpc64/memset.S: Replace rlwimi with
        insrdi.  Formatting.
	* sysdeps/powerpc/powerpc64/power4/memset.S: Likewise.
	* sysdeps/powerpc/powerpc64/power6/memset.S: Likewise.
	* sysdeps/powerpc/powerpc64/power7/memset.S: Likewise.
	* sysdeps/powerpc/powerpc32/power4/memset.S: Likewise.
	* sysdeps/powerpc/powerpc32/power6/memset.S: Likewise.
	* sysdeps/powerpc/powerpc32/power7/memset.S: Likewise.
2013-10-04 10:41:35 +09:30
Alan Modra
759cfef3ac PowerPC LE memcpy
http://sourceware.org/ml/libc-alpha/2013-08/msg00103.html

Little-endian support for memcpy.  I spent some time cleaning up the
64-bit power7 memcpy, in order to avoid the extra alignment traps
power7 takes for little-endian.  It probably would have been better
to copy the linux kernel version of memcpy.

	* sysdeps/powerpc/powerpc32/power4/memcpy.S: Add little endian support.
	* sysdeps/powerpc/powerpc32/power6/memcpy.S: Likewise.
	* sysdeps/powerpc/powerpc32/power7/memcpy.S: Likewise.
	* sysdeps/powerpc/powerpc32/power7/mempcpy.S: Likewise.
	* sysdeps/powerpc/powerpc64/memcpy.S: Likewise.
	* sysdeps/powerpc/powerpc64/power4/memcpy.S: Likewise.
	* sysdeps/powerpc/powerpc64/power6/memcpy.S: Likewise.
	* sysdeps/powerpc/powerpc64/power7/memcpy.S: Likewise.
	* sysdeps/powerpc/powerpc64/power7/mempcpy.S: Likewise.  Make better
	use of regs.  Use power7 mtocrf.  Tidy function tails.
2013-10-04 10:41:24 +09:30
Alan Modra
fe6e95d717 PowerPC LE memcmp
http://sourceware.org/ml/libc-alpha/2013-08/msg00102.html

This is a rather large patch due to formatting and renaming.  The
formatting changes were to make it possible to compare power7 and
power4 versions of memcmp.  Using different register defines came
about while I was wrestling with the code, trying to find spare
registers at one stage.  I found it much simpler if we refer to a reg
by the same name throughout a function, so it's better if short-term
multiple use regs like rTMP are referred to using their register
number.  I made the cr field usage changes when attempting to reload
rWORDn regs in the exit path to byte swap before comparing when
little-endian.  That proved a bad idea due to the pipelining involved
in the main loop;  Offsets to reload the regs were different first
time around the loop..  Anyway, I left the cr field usage changes in
place for consistency.

Aside from these more-or-less cosmetic changes, I fixed a number of
places where an early exit path restores regs unnecessarily, removed
some dead code, and optimised one or two exits.

	* sysdeps/powerpc/powerpc64/power7/memcmp.S: Add little-endian support.
	Formatting.  Consistently use rXXX register defines or rN defines.
	Use early exit labels that avoid restoring unused non-volatile regs.
	Make cr field use more consistent with rWORDn compares.  Rename
	regs used as shift registers for unaligned loop, using rN defines
	for short lifetime/multiple use regs.
	* sysdeps/powerpc/powerpc64/power4/memcmp.S: Likewise.
	* sysdeps/powerpc/powerpc32/power7/memcmp.S: Likewise.  Exit with
	addi 1,1,64 to pop stack frame.  Simplify return value code.
	* sysdeps/powerpc/powerpc32/power4/memcmp.S: Likewise.
2013-10-04 10:40:56 +09:30
Alan Modra
664318c3eb PowerPC LE strchr
http://sourceware.org/ml/libc-alpha/2013-08/msg00101.html

Adds little-endian support to optimised strchr assembly.  I've also
tweaked the big-endian code a little.  In power7/strchr.S there's a
check in the tail of the function that we didn't match 0 before
finding a c match, done by comparing leading zero counts.  It's just
as valid, and quicker, to compare the raw output from cmpb.

Another little tweak is to use rldimi/insrdi in place of rlwimi for
the power7 strchr functions.  Since rlwimi is cracked, it is a few
cycles slower.  rldimi can be used on the 32-bit power7 functions
too.

	* sysdeps/powerpc/powerpc64/power7/strchr.S (strchr): Add little-endian
	support.  Correct typos, formatting.  Optimize tail.  Use insrdi
	rather than rlwimi.
	* sysdeps/powerpc/powerpc32/power7/strchr.S: Likewise.
	* sysdeps/powerpc/powerpc64/power7/strchrnul.S (__strchrnul): Add
	little-endian support.  Correct typos.
	* sysdeps/powerpc/powerpc32/power7/strchrnul.S: Likewise.  Use insrdi
	rather than rlwimi.
	* sysdeps/powerpc/powerpc64/strchr.S (rTMP4, rTMP5): Define.  Use
	in loop and entry code to keep "and." results.
	(strchr): Add little-endian support.  Comment.  Move cntlzd
	earlier in tail.
	* sysdeps/powerpc/powerpc32/strchr.S: Likewise.
2013-10-04 10:40:22 +09:30
Alan Modra
8a7413f9b0 PowerPC LE strcmp and strncmp
http://sourceware.org/ml/libc-alpha/2013-08/msg00099.html

More little-endian support.  I leave the main strcmp loops unchanged
(well, except for renumbering rTMP to something other than r0, since
it's needed in an addi insn) and modify the tail for little-endian.

I noticed some of the big-endian tail code was a little untidy so have
cleaned that up too.

	* sysdeps/powerpc/powerpc64/strcmp.S (rTMP2): Define as r0.
	(rTMP): Define as r11.
	(strcmp): Add little-endian support.  Optimise tail.
	* sysdeps/powerpc/powerpc32/strcmp.S: Similarly.
	* sysdeps/powerpc/powerpc64/strncmp.S: Likewise.
	* sysdeps/powerpc/powerpc32/strncmp.S: Likewise.
	* sysdeps/powerpc/powerpc64/power4/strncmp.S: Likewise.
	* sysdeps/powerpc/powerpc32/power4/strncmp.S: Likewise.
	* sysdeps/powerpc/powerpc64/power7/strncmp.S: Likewise.
	* sysdeps/powerpc/powerpc32/power7/strncmp.S: Likewise.
2013-10-04 10:39:52 +09:30
Alan Modra
33ee81de05 PowerPC LE strnlen
http://sourceware.org/ml/libc-alpha/2013-08/msg00098.html

The existing strnlen code has a number of defects, so this patch is more
than just adding little-endian support.  The changes here are similar to
those for memchr.

	* sysdeps/powerpc/powerpc64/power7/strnlen.S (strnlen): Add
	little-endian support.  Remove unnecessary "are we done" tests.
	Handle "s" wrapping around zero and extremely large "size".
	Correct main loop count.  Handle single left-over word from main
	loop inline rather than by using small_loop.  Correct comments.
	Delete "zero" tail, use "end_max" instead.
	* sysdeps/powerpc/powerpc32/power7/strnlen.S: Likewise.
2013-10-04 10:39:42 +09:30
Alan Modra
db9b4570c5 PowerPC LE strlen
http://sourceware.org/ml/libc-alpha/2013-08/msg00097.html

This is the first of nine patches adding little-endian support to the
existing optimised string and memory functions.  I did spend some
time with a power7 simulator looking at cycle by cycle behaviour for
memchr, but most of these patches have not been run on cpu simulators
to check that we are going as fast as possible.  I'm sure PowerPC can
do better.  However, the little-endian support mostly leaves main
loops unchanged, so I'm banking on previous authors having done a
good job on big-endian.  As with most code you stare at long enough,
I found some improvements for big-endian too.

Little-endian support for strlen.  Like most of the string functions,
I leave the main word or multiple-word loops substantially unchanged,
just needing to modify the tail.

Removing the branch in the power7 functions is just a tidy.  .align
produces a branch anyway.  Modifying regs in the non-power7 functions
is to suit the new little-endian tail.

	* sysdeps/powerpc/powerpc64/power7/strlen.S (strlen): Add little-endian
	support.  Don't branch over align.
	* sysdeps/powerpc/powerpc32/power7/strlen.S: Likewise.
	* sysdeps/powerpc/powerpc64/strlen.S (strlen): Add little-endian support.
	Rearrange tmp reg use to suit.  Comment.
	* sysdeps/powerpc/powerpc32/strlen.S: Likewise.
2013-10-04 10:39:32 +09:30
Alan Modra
7b88401f3b PowerPC floating point little-endian [12 of 15]
http://sourceware.org/ml/libc-alpha/2013-08/msg00087.html

Fixes for little-endian in 32-bit assembly.

	* sysdeps/powerpc/sysdep.h (LOWORD, HIWORD, HISHORT): Define.
	* sysdeps/powerpc/powerpc32/fpu/s_copysign.S: Load little-endian
	words of double from correct stack offsets.
	* sysdeps/powerpc/powerpc32/fpu/s_copysignl.S: Likewise.
	* sysdeps/powerpc/powerpc32/fpu/s_lrint.S: Likewise.
	* sysdeps/powerpc/powerpc32/fpu/s_lround.S: Likewise.
	* sysdeps/powerpc/powerpc32/power4/fpu/s_llrint.S: Likewise.
	* sysdeps/powerpc/powerpc32/power4/fpu/s_llrintf.S: Likewise.
	* sysdeps/powerpc/powerpc32/power5+/fpu/s_llround.S: Likewise.
	* sysdeps/powerpc/powerpc32/power5+/fpu/s_lround.S: Likewise.
	* sysdeps/powerpc/powerpc32/power5/fpu/s_isnan.S: Likewise.
	* sysdeps/powerpc/powerpc32/power6/fpu/s_isnan.S: Likewise.
	* sysdeps/powerpc/powerpc32/power6/fpu/s_llrint.S: Likewise.
	* sysdeps/powerpc/powerpc32/power6/fpu/s_llrintf.S: Likewise.
	* sysdeps/powerpc/powerpc32/power6/fpu/s_llround.S: Likewise.
	* sysdeps/powerpc/powerpc32/power7/fpu/s_finite.S: Likewise.
	* sysdeps/powerpc/powerpc32/power7/fpu/s_isinf.S: Likewise.
	* sysdeps/powerpc/powerpc32/power7/fpu/s_isnan.S: Likewise.
	* sysdeps/powerpc/powerpc64/power7/fpu/s_finite.S: Use HISHORT.
	* sysdeps/powerpc/powerpc64/power7/fpu/s_isinf.S: Likewise.
2013-10-04 10:35:43 +09:30
Adhemerval Zanella
5430fc65a1 PowerPC: fix POWER7 memrchr for some large inputs 2013-09-05 09:32:56 -05:00
Joseph Myers
2d67d91ac0 Remove powerpc64 bounded-pointers code. 2013-03-06 00:10:21 +00:00
Anton Blanchard
2ccdea26f2 Fix spelling errors in sysdeps/powerpc files. 2013-01-07 11:20:53 -06:00
Joseph Myers
568035b787 Update copyright notices with scripts/update-copyrights. 2013-01-02 19:05:09 +00:00
Will Schmidt
14a50c9d23 [Powerpc] Tune/optimize powerpc{32,64}/power7/memchr.S.
Assorted tweaking, twisting and tuning to squeeze a few additional cycles
out of the memchr code.  Changes include bypassing the shift pairs
(sld, srd) when they are not required, and unrolling the small_loop that
handles short and trailing strings.

Per scrollpipe data measuring aligned strings for 64-bit, these changes
save between five and eight cycles (9-13% overall) for short strings (<32).
Longer aligned strings see a slight improvement of 1-3% due to bypassing
the shifts and the instruction rearranging.
2012-08-21 14:20:55 -05:00
Adhemerval Zanella
45470df378 PowerPC: libm ABI update
Updates the libm abilist for POWER6 and POWER7.
2012-05-22 15:34:02 -03:00
Adhemerval Zanella
777b1eea9d PowerPC - logb[f|l] optimization for POWER7
This patch provides optimized logb (1.2x on PPC32 and 2.5x on PPC64),
logbf (1.1x on PPC32 and 2.2x on PPC64), and logbl (1.3x on PPC32 and
50% on PPC64) for the POWER7 processor.
2012-05-15 10:32:28 -05:00
Paul Eggert
59ba27a63a Replace FSF snail mail address with URLs. 2012-02-09 23:18:22 +00:00
Adhemerval Zanella
f0b264f174 Optimized strcasecmp for Power7 2011-12-17 20:32:59 -05:00
Will Schmidt
2270037782 power7 strncmp optimization 2011-09-07 21:56:57 -04:00