Commit Graph

12 Commits

Author SHA1 Message Date
Joseph Myers
bfff8b1bec Update copyright dates with scripts/update-copyrights. 2017-01-01 00:14:16 +00:00
Adhemerval Zanella
b224637928 Fix powerpc64/power7 memchr for large input sizes
The current optimized powerpc64/power7 memchr uses a strategy of checking
'p' against align(p+n) (where 'p' is the input char pointer and 'n' the
maximum number of bytes to check) without guarding against possible
overflow in the pointer addition when 'n' is large.

The problem was triggered by 3038145ca2, where the default rawmemchr (used
to create the ppc64 rawmemchr in ifunc selection) now calls
memchr (p, c, (size_t)-1) in its implementation.

This patch fixes it by implementing a saturated addition in which an
overflow clamps the end pointer to UINTPTR_MAX.

Checked on powerpc64le-linux-gnu.
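A minimal C sketch of that saturated end-pointer computation (illustrative
only; the helper name and use of uintptr_t are assumptions here, the actual
fix lives in the power7 assembly):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: compute p + n as an end address, clamping to
   UINTPTR_MAX when the addition would wrap.  With this, a call such as
   memchr (p, c, (size_t)-1) bounds the scan at the top of the address
   space instead of wrapping the limit below 'p'.  */
static uintptr_t
saturated_end (const void *p, size_t n)
{
  uintptr_t start = (uintptr_t) p;
  if (n > UINTPTR_MAX - start)   /* p + n would overflow */
    return UINTPTR_MAX;          /* saturate at the top of the address space */
  return start + n;
}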

	[BZ #20971]
	* sysdeps/powerpc/powerpc64/power7/memchr.S (__memchr): Avoid
	overflow in pointer addition.
	* string/test-memchr.c (do_test): Add an argument to pass as
	the size to memchr.
	(test_main): Add check for SIZE_MAX.
2016-12-16 11:30:20 -02:00
Joseph Myers
f7a9f785e5 Update copyright dates with scripts/update-copyrights. 2016-01-04 16:05:18 +00:00
Joseph Myers
b168057aaa Update copyright dates with scripts/update-copyrights. 2015-01-02 16:29:47 +00:00
Allan McRae
d4697bc93d Update copyright notices with scripts/update-copyrights 2014-01-01 22:00:23 +10:00
Andreas Schwab
83f5c32d21 Fix uses of CALL_MCOUNT in ppc64 assembler sources 2013-12-19 17:06:48 +01:00
Alan Modra
466b039332 PowerPC LE memchr and memrchr
http://sourceware.org/ml/libc-alpha/2013-08/msg00105.html

Like strnlen, memchr and memrchr had a number of defects fixed by this
patch as well as adding little-endian support.  The first one I
noticed was that the entry to the main loop needlessly checked for
"are we done yet?" when we know the size is large enough that we can't
be done.  The second defect I noticed was that the main loop count was
wrong, which in turn meant that the small loop needed to handle an
extra word.  Thirdly, there is nothing to say that the string can't
wrap around zero, except of course that we'd normally hit a segfault
on trying to read from address zero.  Fixing that simplified a number
of places:

-	/* Are we done already?  */
-	addi    r9,r8,8
-	cmpld	r9,r7
-	bge	L(null)

becomes

+	cmpld	r8,r7
+	beqlr

However, the exit gains an extra test, because I first test for being on
the last word and then, if so, whether the byte offset is less than the
end.  Overall, the change is a win.
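For illustration, a rough C analogue of that equality-style exit
(hypothetical names and structure; the real code is the assembly above):

#include <stddef.h>
#include <stdint.h>

/* Scan word-sized chunks from 'p' up to and including the word at 'last'.
   Exiting on p == last (like the cmpld/beqlr pair above) avoids an ordered
   comparison against a one-past-the-end pointer, which could wrap past zero
   when the buffer ends at the highest addresses.  A real implementation
   would also check the byte offset within the final word against the true
   end, as noted above.  */
static const unsigned char *
scan_words (const unsigned char *p, const unsigned char *last, unsigned char c)
{
  for (;;)
    {
      for (size_t i = 0; i < sizeof (uint64_t); i++)
        if (p[i] == c)
          return p + i;
      if (p == last)              /* on the last word: stop */
        return NULL;
      p += sizeof (uint64_t);
    }
}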

Lastly, memrchr used the wrong cache hint.

	* sysdeps/powerpc/powerpc64/power7/memchr.S: Replace rlwimi with
	insrdi.  Make better use of reg selection to speed exit slightly.
	Schedule entry path a little better.  Remove useless "are we done"
	checks on entry to main loop.  Handle wrapping around zero address.
	Correct main loop count.  Handle single left-over word from main
	loop inline rather than by using loop_small.  Remove extra word
	case in loop_small caused by wrong loop count.  Add little-endian
	support.
	* sysdeps/powerpc/powerpc32/power7/memchr.S: Likewise.
	* sysdeps/powerpc/powerpc64/power7/memrchr.S: Likewise.  Use proper
	cache hint.
	* sysdeps/powerpc/powerpc32/power7/memrchr.S: Likewise.
	* sysdeps/powerpc/powerpc64/power7/rawmemchr.S: Add little-endian
	support.  Avoid rlwimi.
	* sysdeps/powerpc/powerpc32/power7/rawmemchr.S: Likewise.
2013-10-04 10:41:46 +09:30
Joseph Myers
2d67d91ac0 Remove powerpc64 bounded-pointers code. 2013-03-06 00:10:21 +00:00
Joseph Myers
568035b787 Update copyright notices with scripts/update-copyrights. 2013-01-02 19:05:09 +00:00
Will Schmidt
14a50c9d23 [Powerpc] Tune/optimize powerpc{32,64}/power7/memchr.S.
Assorted tweaking, twisting and tuning to squeeze a few additional cycles
out of the memchr code.  Changes include bypassing the shift pairs
(sld,srd) when they are not required, and unrolling the small_loop that
handles short and trailing strings.

Per scrollpipe data measuring aligned strings for 64-bit, these changes
save between five and eight cycles (9-13% overall) for short strings (<32).
Longer aligned strings see a slight improvement of 1-3% due to bypassing the
shifts and the instruction rearranging.
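
A loose C sketch of the "bypass the shifts when not required" idea (purely
illustrative and assumption-laden; the actual code works on doublewords
with sld/srd in assembly, and the byte handling below assumes
little-endian order):

#include <stdint.h>
#include <string.h>

/* Load the aligned doubleword containing 's'; only a misaligned pointer
   pays for the shift that discards the bytes before 's'.  An already
   aligned pointer takes the fast path with no shift at all.  */
static uint64_t
first_word (const unsigned char *s)
{
  uintptr_t misalign = (uintptr_t) s % sizeof (uint64_t);
  const unsigned char *aligned = s - misalign;
  uint64_t w;
  memcpy (&w, aligned, sizeof w);   /* aligned doubleword containing s */
  if (misalign == 0)
    return w;                       /* no shifting needed */
  return w >> (8 * misalign);       /* drop leading bytes (LE layout) */
}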
2012-08-21 14:20:55 -05:00
Paul Eggert
59ba27a63a Replace FSF snail mail address with URLs. 2012-02-09 23:18:22 +00:00
Luis Machado
fe2f79db99 powerpc: Various P7-optimized string functions 2010-08-19 07:38:41 -07:00