This is another one where we'll be wanting the base symbols for
powerpc64le rather than just a power7 variant.
* sysdeps/powerpc/powerpc64/multiarch/strncase_l-power7.c: Include
string/strncase_l.c, not string/strncase.c.
(USE_IN_EXTENDED_LOCALE_MODEL): Don't define.
(libc_hidden_def): Redefine.
The routine being assembled here is strcasecmp_l, so ask for that via
__STRCMP and STRCMP defines. That change means tweaking the power7
override. Needed for later powerpc64le changes where we want the base
symbols, not just a power7 variant.
* sysdeps/powerpc/powerpc64/multiarch/strcasecmp_l-power7.S:
(__STRCMP, STRCMP, __strcasecmp_l): Define.
(__strcasecmp): Don't define.
These functions aren't used in ld.so at the moment since we don't have
strcmp or strncmp ifuncs for them there. Remove the ld.so bloat.
* sysdeps/powerpc/powerpc64/multiarch/strcmp-power8.S: Wrap in
IS_IN (libc).
* sysdeps/powerpc/powerpc64/multiarch/strcmp-power9.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strncmp-power8.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strncmp-power9.S: Likewise.
USE_AS_STPNCPY is defined by sysdeps/powerpc/powerpc64/power8/stpncpy.S,
included by this file.
* sysdeps/powerpc/powerpc64/multiarch/stpncpy-power8.S: Don't define
USE_AS_STPNCPY.
It seems to me that libc.a should not contain any of the __GI_
symbols, and certainly --enable-multi-arch ought to not add to the
list. At the end of this patch series we have the following in both
--enable-multi-arch and --disable-multi-arch libc.a:
0000000000000000 T __GI___readdir64
0000000000000000 T __GI___fxstatat64
0000000000000000 T __GI_getrlimit
0000000000000000 T __GI___getrlimit
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan-ppc64.S (hidden_def):
Redefine only when SHARED.
POWER9 DD2.1 and earlier have an issue where some cache-inhibited
vector loads trap to the kernel, causing a performance degradation. To
handle this in memcpy and memmove, lvx/stvx is used for aligned
addresses instead of lxvd2x/stxvd2x.
Reference: https://patchwork.ozlabs.org/patch/814059/
* sysdeps/powerpc/powerpc64/power7/memcpy.S: Replace
lxvd2x/stxvd2x with lvx/stvx.
* sysdeps/powerpc/powerpc64/power7/memmove.S: Likewise.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.vnet.ibm.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
cfi info for stack adjust needs to be on the insn doing the adjust.
cfi describing register saves can be anywhere after the save insn but
before the reg is altered. Fewer locations with cfi result in smaller
cfi programs and possibly slightly faster exception handling. Thus
the LR cfi_offset move.
The idea behind adjusting sp after restoring regs is to break a
register dependency chain, in this case by not using r1 immediately
after it is modified.
The missing LR cfi_restore meant that code after the blr (at
unaligned_lt_16 and other labels) would have cfi saying LR was at
cfa+16, but that code is reached without LR having been saved.
* sysdeps/powerpc/powerpc64/power8/strncpy.S: Move LR cfi.
Adjust stack after restoring regs. Add missing LR cfi_restore.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.vnet.ibm.com>
This patch moves the frame setup and teardown to immediately around
the single memset call, as has been done for power8. I've also
decreased FRAMESIZE to that needed to save the two callee-saved
registers used. Plus added cfi.
* sysdeps/powerpc/powerpc64/power7/strncpy.S: Decrease FRAMESIZE.
Move LR save and frame setup/teardown and LR restore to
immediately around memset call. Provide cfi.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.vnet.ibm.com>
Fix the ifdef clause that was being used the wrong way around, setting
a wrong value of the carry bit.
This also corrects two memory accesses that mistakenly referred to r0
where the immediate value 0 was intended.
[BZ #22142]
* stdio-common/tst-printf.c (fp_test): Add tests for DBL_MAX and
-DBL_MAX.
(do_test): Likewise.
* stdio-common/tst-printf.sh: Likewise.
* sysdeps/powerpc/powerpc64/power7/add_n.S: Invert the initial
ifdef clause in order to set the carry bit right. Replace r0 by
0 without changing the behavior.
Recent commit 59ba2d2b54 failed to add __memrchr_power8 to the
ifunc list. This patch also handles discarding unwanted bytes for
unaligned inputs in the power8 optimization.
2017-10-05 Rajalakshmi Srinivasaraghavan <raji@linux.vnet.ibm.com>
* sysdeps/powerpc/powerpc64/multiarch/memrchr-ppc64.c: Revert
back to powerpc32 file.
* sysdeps/powerpc/powerpc64/multiarch/memrchr.c
(memrchr): Add __memrchr_power8 to ifunc list.
* sysdeps/powerpc/powerpc64/power8/memrchr.S: Mask
extra bytes for unaligned inputs.
This patch makes dbl-64 modf use libm_alias_double. Both the dbl-64
and dbl-64/wordsize-64 versions are changed, and the ldbl-opt version
is changed to define the libc compat symbol only. Because of
multiarch wrappers, the changed implementations are made not to define
aliases at all if __modf is defined as a macro, as with other
functions, so avoiding duplicate compat symbols while allowing those
wrappers to be simplified.
Tested for x86_64, and verified with build-many-glibcs.py that
installed stripped shared libraries are unchanged by the patch.
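For illustration, the changed dbl-64 files end up with roughly this shape
(a sketch of the aliasing pattern only, not the literal file contents):
#include <libm-alias-double.h>
double
__modf (double x, double *iptr)
{
  /* ... existing implementation, unchanged ... */
}
#ifndef __modf
libm_alias_double (__modf, modf)
#endif
The multiarch wrappers (e.g. s_modf-ppc64.c) define __modf as a macro
before including this file, so the alias block is skipped there and no
duplicate compat symbols result.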
* sysdeps/ieee754/dbl-64/s_modf.c: Include <libm-alias-double.h>.
(modf): Define using libm_alias_double, only if [!__modf].
* sysdeps/ieee754/dbl-64/wordsize-64/s_modf.c: Include
<libm-alias-double.h>.
(modf): Define using libm_alias_double, only if [!__modf].
* sysdeps/ieee754/ldbl-opt/s_modf.c (modfl): Only define libc
compat symbol here.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_modf-ppc32.c
(weak_alias): Do not undefine and redefine.
(strong_alias): Likewise.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_modf-ppc64.c
(weak_alias): Likewise.
(strong_alias): Likewise.
This patch makes dbl-64 logb use libm_alias_double. Both the dbl-64
and dbl-64/wordsize-64 versions are changed, and the ldbl-opt version
is removed. Because of multiarch wrappers, the changed
implementations are made not to define aliases at all if __logb is
defined as a macro, as with other functions, so avoiding duplicate
compat symbols while allowing those wrappers to be simplified.
Tested for x86_64, and verified with build-many-glibcs.py that
installed stripped shared libraries are unchanged (except on alpha
where changes from using the wordsize-64 version are expected).
* sysdeps/ieee754/dbl-64/s_logb.c: Include <libm-alias-double.h>.
(logb): Define using libm_alias_double, only if [!__logb].
* sysdeps/ieee754/dbl-64/wordsize-64/s_logb.c: Include
<libm-alias-double.h>.
(logb): Define using libm_alias_double, only if [!__logb].
* sysdeps/ieee754/ldbl-opt/s_logb.c: Remove file.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_logb-ppc32.c
(weak_alias): Do not undefine and redefine.
(strong_alias): Likewise.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_logb-ppc64.c
(weak_alias): Likewise.
(strong_alias): Likewise.
The new generic expf and exp2f code doesn't need wrappers any more; it
sets errno inline, so the wrappers are only used on targets that need them.
(If the wrapper is needed, then the top-level wrapper code is included,
otherwise an empty w_exp*f.c is used to suppress the wrapper.)
A powerpc64 expf implementation includes the expf C code directly, which
needed some changes.
* sysdeps/ieee754/flt-32/e_exp2f.c (__exp2f): Define without wrapper.
* sysdeps/ieee754/flt-32/e_expf.c (__expf): Likewise.
* sysdeps/ieee754/flt-32/w_exp2f.c: New file.
* sysdeps/ieee754/flt-32/w_expf.c: New file.
* sysdeps/powerpc/powerpc64/fpu/multiarch/e_expf-ppc64.c: Update for
the new expf code.
* sysdeps/powerpc/powerpc64/fpu/multiarch/w_expf.c: New file.
* sysdeps/powerpc/powerpc64/power8/fpu/w_expf.c: New file.
* sysdeps/m68k/m680x0/fpu/w_exp2f.c: New file.
* sysdeps/m68k/m680x0/fpu/w_expf.c: New file.
* sysdeps/i386/fpu/w_exp2f.c: New file.
* sysdeps/i386/fpu/w_expf.c: New file.
* sysdeps/i386/i686/fpu/multiarch/w_expf.c: New file.
* sysdeps/x86_64/fpu/w_expf.c: New file.
Vectorized loops are used for sizes greater than 32B to improve
performance over the power7 optimization. This shows an average
improvement of 25%, depending on the position of the search
character. The performance is the same for shorter strings.
On powerpc64le, compiler support for float128 is not enabled by default
on gcc. To enable it, the flag -mfloat128 must be passed as a command
line option to the compiler. This means that only the few files that
actively have -mfloat128 passed as an argument get compiler support for
float128, whereas all other files don't.
When -mfloat128 becomes enabled by default on powerpc [1], all the files
that do not currently have compiler support for float128 enabled during
their compilation will start to have it. This will lead to build
errors in s_finite.c, s_isinf.c, and s_isnan.c.
The errors are due to the unintended macro expansion of __finitef128 to
__redirect___finitef128 in math/bits/mathcalls-helper-functions.h. In
that header, __MATHDECL_1 takes '__finite' and 'f128' as arguments and
concatenates them. However, since '__finite' has been redefined in
s_finite.c, the function declaration becomes __redirect___finitef128:
extern int __redirect___finitef128 (_Float128 __value) __attribute__ ((__nothrow__ )) __attribute__ ((__const__));
This declaration itself is OK. The problem arises when include/math.h
creates the hidden prototype ('hidden_proto (__finitef128)'), which
expands to:
extern __typeof (__finitef128) __finitef128 __attribute__ ((visibility ("hidden")));
Since __finitef128 is not declared, __typeof fails. This effect was
already true for the 'float' and 'long double' versions and is now true
for float128. Likewise for isinff128 and isnanf128.
This patch defines __finitef128 as __redirect___finitef128 in
sysdeps/powerpc/powerpc64/fpu/multiarch/s_finite.c, similarly to what's
done for the float and long double versions of these functions, to get
rid of the build error. Likewise for isinff128 and isnanf128.
[1] https://gcc.gnu.org/ml/gcc-patches/2017-08/msg01028.html
Tested for powerpc64 and powerpc64le.
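The fix follows the pattern already used for the narrower types; a
sketch of s_finite.c (abbreviated):
#define __finite __redirect___finite
#define __finitef __redirect___finitef
#define __finitel __redirect___finitel
#define __finitef128 __redirect___finitef128   /* the new define */
#include <math.h>
#undef __finite
#undef __finitef
#undef __finitel
#undef __finitef128
With the define in place, the hidden_proto expansion refers to
__redirect___finitef128, which is declared, so __typeof succeeds.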
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_finite.c
(__finitef128): Define to __redirect___finitef128.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_isinf.c
(__isinff128): Define to __redirect___isinff128.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan.c
(__isnanf128): Define to __redirect___isnanf128.
As per the section "3.1.4.2 Alignment Interrupts" of the "POWER8 Processor
User's Manual for the Single-Chip Module", an alignment interrupt is reported
for misaligned stores to Caching-inhibited storage. As memset is used by
some drivers for DMA (like xorg), this patch avoids misaligned stores for
sizes less than 8 in memset.
Some math functions have to be distributed in libc because they're
required by printf.
libc and libm require their own builds of these functions, e.g. libc
functions have to call __stack_chk_fail_local in order to bypass the
PLT, while libm functions have to call __stack_chk_fail.
While math/Makefile treats the generic cases, e.g. s_isinff, the
multiarch Makefile has to treat its own files, e.g. s_isinff-ppc64.
[BZ #21745]
* sysdeps/powerpc/powerpc64/fpu/multiarch/Makefile:
[$(subdir) = math] (sysdep_calls): New variable. Has the
previous contents of sysdep_routines, but re-sorted.
[$(subdir) = math] (sysdep_routines): Re-use the contents from
sysdep_calls.
[$(subdir) = math] (libm-sysdep_routines): Remove the functions
defined in sysdep_calls and replace by the respective m_* names.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan-ppc64.S:
(compat_symbol): Undefine to avoid duplicated compat symbols in
libc.
This makes the __tls_get_addr_opt test run as a shared library, and so
actually test that DTPMOD64/DTPREL64 pairs are processed by ld.so to
support the __tls_get_addr_opt call stub fast return. After a
2017-01-24 patch (binutils f0158f4416) ld.bfd no longer emitted
unnecessary dynamic relocations against local thread variables,
instead setting up the __tls_index GOT entries for the call stub fast
return. This meant tst-tlsopt-powerpc passed but did not check ld.so
relocation support. After a 2017-07-16 patch (binutils 676ee2b5fa)
ld.bfd no longer set up the __tls_index GOT entries for the call stub
fast return, and tst-tlsopt-powerpc failed.
Compiling mod-tlsopt-powerpc.c with -DSHARED exposed a bug in
powerpc64/tls-macros.h, which defines a __TLS_GET_ADDR macro that
clashes with one defined in dl-tls.h. The tls-macros.h version is
only used in that file, so delete it and expand its uses inline.
* sysdeps/powerpc/mod-tlsopt-powerpc.c: Extract from
tst-tlsopt-powerpc.c with function name change and no test harness.
* sysdeps/powerpc/tst-tlsopt-powerpc.c: Remove body of test.
Call tls_get_addr_opt_test.
* sysdeps/powerpc/Makefile (LDFLAGS-tst-tlsopt-powerpc): Don't define.
(modules-names): Add mod-tlsopt-powerpc.
(mod-tlsopt-powerpc.so-no-z-defs): Define.
(tst-tlsopt-powerpc): Depend on .so.
* sysdeps/powerpc/powerpc64/tls-macros.h (__TLS_GET_ADDR): Don't
define. Expand use in TLS_GD and TLS_LD.
The ucontext_t type has a tag struct ucontext. As with previous such
issues for siginfo_t and stack_t, this tag is not permitted by POSIX
(is not in a reserved namespace), and so namespace conformance means
breaking C++ name mangling for this type.
In this case, the type does need to have some tag rather than just a
typedef name, because it includes a pointer to itself. This patch
uses struct ucontext_t as the new tag, so the type is mangled as
ucontext_t (the POSIX *_t reservation applies in all namespaces, not
just the namespace of ordinary identifiers). Another reserved name
such as struct __ucontext could of course be used.
Because of other namespace issues, this patch does not by itself fix
bug 21457 or allow any XFAILs to be removed.
Tested for x86_64, and with build-many-glibcs.py.
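For illustration, the shape of the type and why some tag is needed
(member names and order vary by architecture; this is only a sketch):
typedef struct ucontext_t      /* tag was previously 'struct ucontext' */
  {
    unsigned long int uc_flags;
    struct ucontext_t *uc_link; /* the self-reference needs a usable tag */
    stack_t uc_stack;
    mcontext_t uc_mcontext;
    sigset_t uc_sigmask;
  } ucontext_t;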
[BZ #21457]
* sysdeps/arm/sys/ucontext.h (struct ucontext): Rename to struct
ucontext_t.
* sysdeps/generic/sys/ucontext.h (struct ucontext): Likewise.
* sysdeps/i386/sys/ucontext.h (struct ucontext): Likewise.
* sysdeps/m68k/sys/ucontext.h (struct ucontext): Likewise.
* sysdeps/mips/sys/ucontext.h (struct ucontext): Likewise.
* sysdeps/unix/sysv/linux/aarch64/sys/ucontext.h (struct
ucontext): Likewise.
* sysdeps/unix/sysv/linux/alpha/sys/ucontext.h (struct ucontext):
Likewise.
* sysdeps/unix/sysv/linux/arm/sys/ucontext.h (struct ucontext):
Likewise.
* sysdeps/unix/sysv/linux/hppa/sys/ucontext.h (struct ucontext):
Likewise.
* sysdeps/unix/sysv/linux/ia64/sys/ucontext.h (struct ucontext):
Likewise.
* sysdeps/unix/sysv/linux/m68k/sys/ucontext.h (struct ucontext):
Likewise.
* sysdeps/unix/sysv/linux/mips/sys/ucontext.h (struct ucontext):
Likewise.
* sysdeps/unix/sysv/linux/nios2/sys/ucontext.h (struct ucontext):
Likewise.
* sysdeps/unix/sysv/linux/powerpc/sys/ucontext.h (struct
ucontext): Likewise.
* sysdeps/unix/sysv/linux/s390/sys/ucontext.h (struct ucontext):
Likewise.
* sysdeps/unix/sysv/linux/sh/sys/ucontext.h (struct ucontext):
Likewise.
* sysdeps/unix/sysv/linux/sparc/sys/ucontext.h (struct ucontext):
Likewise.
* sysdeps/unix/sysv/linux/tile/sys/ucontext.h (struct ucontext):
Likewise.
* sysdeps/unix/sysv/linux/x86/sys/ucontext.h (struct ucontext):
Likewise.
* sysdeps/powerpc/powerpc32/backtrace.c (struct
rt_signal_frame_32): Likewise.
* sysdeps/powerpc/powerpc64/backtrace.c (struct signal_frame_64):
Likewise.
* sysdeps/unix/sysv/linux/aarch64/kernel_rt_sigframe.h (struct
kernel_rt_sigframe): Likewise.
* sysdeps/unix/sysv/linux/aarch64/sigcontextinfo.h (SIGCONTEXT):
Likewise.
* sysdeps/unix/sysv/linux/arm/register-dump.h (register_dump):
Likewise.
* sysdeps/unix/sysv/linux/arm/sigcontextinfo.h (SIGCONTEXT):
Likewise.
* sysdeps/unix/sysv/linux/hppa/profil-counter.h
(__profil_counter): Likewise.
* sysdeps/unix/sysv/linux/microblaze/sigcontextinfo.h
(SIGCONTEXT): Likewise.
* sysdeps/unix/sysv/linux/mips/kernel_rt_sigframe.h (struct
kernel_rt_sigframe): Likewise.
* sysdeps/unix/sysv/linux/nios2/kernel_rt_sigframe.h (struct
kernel_rt_sigframe): Likewise.
* sysdeps/unix/sysv/linux/nios2/sigcontextinfo.h (SIGCONTEXT):
Likewise.
* sysdeps/unix/sysv/linux/sh/makecontext.S (__makecontext):
Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc64/makecontext.c
(__start_context): Likewise.
* sysdeps/unix/sysv/linux/tile/sigcontextinfo.h (SIGCONTEXT):
Likewise.
* sysdeps/unix/sysv/linux/x86_64/register-dump.h (register_dump):
Likewise.
* sysdeps/unix/sysv/linux/x86_64/sigcontextinfo.h (SIGCONTEXT):
Likewise.
sysdeps/powerpc/powerpc64/fpu/multiarch/s_sinf-ppc64.c should fall back to
sysdeps/powerpc/fpu/s_sinf.c not to sysdeps/ieee754/flt-32/s_sinf.c.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_sinf-ppc64.c: Include
s_sinf.c from sysdeps/powerpc/fpu/ instead of sysdeps/ieee754/flt-32/.
<locale.h> is specified to define locale_t in POSIX.1-2008, and so are
all of the headers that define functions that take locale_t arguments.
Under _GNU_SOURCE, the additional headers that define such functions
have also always defined locale_t. Therefore, there is no need to use
__locale_t in public function prototypes, nor in any internal code.
* ctype/ctype-c99_l.c, ctype/ctype.h, ctype/ctype_l.c
* include/monetary.h, include/stdlib.h, include/time.h
* include/wchar.h, locale/duplocale.c, locale/freelocale.c
* locale/global-locale.c, locale/langinfo.h, locale/locale.h
* locale/localeinfo.h, locale/newlocale.c
* locale/nl_langinfo_l.c, locale/uselocale.c
* localedata/bug-usesetlocale.c, localedata/tst-xlocale2.c
* stdio-common/vfscanf.c, stdlib/monetary.h, stdlib/stdlib.h
* stdlib/strfmon_l.c, stdlib/strtod_l.c, stdlib/strtof_l.c
* stdlib/strtol.c, stdlib/strtol_l.c, stdlib/strtold_l.c
* stdlib/strtoll_l.c, stdlib/strtoul_l.c, stdlib/strtoull_l.c
* string/strcasecmp.c, string/strcoll_l.c, string/string.h
* string/strings.h, string/strncase.c, string/strxfrm_l.c
* sysdeps/ieee754/float128/strtof128_l.c
* sysdeps/ieee754/float128/wcstof128.c
* sysdeps/ieee754/float128/wcstof128_l.c
* sysdeps/ieee754/ldbl-128ibm/strtold_l.c
* sysdeps/ieee754/ldbl-64-128/strtold_l.c
* sysdeps/ieee754/ldbl-opt/nldbl-compat.c
* sysdeps/ieee754/ldbl-opt/nldbl-strfmon_l.c
* sysdeps/ieee754/ldbl-opt/nldbl-strtold_l.c
* sysdeps/ieee754/ldbl-opt/nldbl-wcstold_l.c
* sysdeps/powerpc/powerpc32/power7/strcasecmp.S
* sysdeps/powerpc/powerpc64/power7/strcasecmp.S
* sysdeps/x86_64/strcasecmp_l-nonascii.c
* sysdeps/x86_64/strncase_l-nonascii.c, time/strftime_l.c
* time/strptime_l.c, time/time.h, wcsmbs/mbsrtowcs_l.c
* wcsmbs/wchar.h, wcsmbs/wcscasecmp.c, wcsmbs/wcsncase.c
* wcsmbs/wcstod.c, wcsmbs/wcstod_l.c, wcsmbs/wcstof.c
* wcsmbs/wcstof_l.c, wcsmbs/wcstol_l.c, wcsmbs/wcstold.c
* wcsmbs/wcstold_l.c, wcsmbs/wcstoll_l.c, wcsmbs/wcstoul_l.c
* wcsmbs/wcstoull_l.c, wctype/iswctype_l.c
* wctype/towctrans_l.c, wctype/wcfuncs_l.c
* wctype/wctrans_l.c, wctype/wctype.h, wctype/wctype_l.c:
Change all uses of __locale_t to locale_t.
These machine-dependent inline string functions have never been on by
default, and even if they were a good idea at the time they were
introduced, they haven't really been touched in ten to fifteen years
and probably aren't a good idea on current-gen processors. Current
thinking is that this class of optimization is best left to the
compiler.
* bits/string.h, string/bits/string.h
* sysdeps/aarch64/bits/string.h
* sysdeps/m68k/m680x0/m68020/bits/string.h
* sysdeps/s390/bits/string.h, sysdeps/sparc/bits/string.h
* sysdeps/x86/bits/string.h: Delete file.
* string/string.h: Don't include bits/string.h.
* string/bits/string3.h: Rename to bits/string_fortified.h.
No need to undef various symbols that the removed headers
might have defined as macros.
* string/Makefile (headers): Remove bits/string.h, change
bits/string3.h to bits/string_fortified.h.
* string/string-inlines.c: Update commentary. Remove definitions
of various macros that nothing looks at anymore. Don't directly
include bits/string.h. Set _STRING_INLINE_unaligned here, based on
compiler-predefined macros.
* string/strncat.c: If STRNCAT is not defined, or STRNCAT_PRIMARY
_is_ defined, provide internal hidden alias __strncat.
* include/string.h: Declare internal hidden alias __strncat.
Only forward __stpcpy to __builtin_stpcpy if __NO_STRING_INLINES is
not defined.
* include/bits/string3.h: Rename to bits/string_fortified.h,
update to match above.
* sysdeps/i386/string-inlines.c: Define compat symbols for
everything formerly defined by sysdeps/x86/bits/string.h.
Make existing definitions into compat symbols as well.
Remove some no-longer-necessary messing around with macros.
* sysdeps/powerpc/powerpc32/power4/multiarch/mempcpy.c
* sysdeps/powerpc/powerpc64/multiarch/mempcpy.c
* sysdeps/powerpc/powerpc64/multiarch/stpcpy.c
* sysdeps/s390/multiarch/mempcpy.c
No need to define _HAVE_STRING_ARCH_mempcpy.
Do define __NO_STRING_INLINES and NO_MEMPCPY_STPCPY_REDIRECT.
* sysdeps/i386/i686/multiarch/strncat-c.c
* sysdeps/s390/multiarch/strncat-c.c
* sysdeps/x86_64/multiarch/strncat-c.c
Define STRNCAT_PRIMARY. Don't change definition of libc_hidden_def.
ELFv2 functions with localentry:0 are those with a single entry point,
ie. global entry == local entry, that have no requirement on r2 or
r12 and guarantee r2 is unchanged on return. Such an external
function can be called via the PLT without saving r2 or restoring it
on return, avoiding a common load-hit-store for small functions.
This patch implements the ld.so changes necessary for this
optimization. ld.so needs to check that an optimized plt call
sequence is in fact calling a function implemented with localentry:0,
and emit a fatal error otherwise.
The elf/testobj6.c change is to stop "error while loading shared
libraries: expected localentry:0 `preload'" when running
elf/preloadtest, which we'd get otherwise.
* elf/elf.h (PPC64_OPT_LOCALENTRY): Define.
* sysdeps/alpha/dl-machine.h (elf_machine_fixup_plt): Add
refsym and sym parameters. Adjust callers.
* sysdeps/aarch64/dl-machine.h (elf_machine_fixup_plt): Likewise.
* sysdeps/arm/dl-machine.h (elf_machine_fixup_plt): Likewise.
* sysdeps/generic/dl-machine.h (elf_machine_fixup_plt): Likewise.
* sysdeps/hppa/dl-machine.h (elf_machine_fixup_plt): Likewise.
* sysdeps/i386/dl-machine.h (elf_machine_fixup_plt): Likewise.
* sysdeps/ia64/dl-machine.h (elf_machine_fixup_plt): Likewise.
* sysdeps/m68k/dl-machine.h (elf_machine_fixup_plt): Likewise.
* sysdeps/microblaze/dl-machine.h (elf_machine_fixup_plt): Likewise.
* sysdeps/mips/dl-machine.h (elf_machine_fixup_plt): Likewise.
* sysdeps/nios2/dl-machine.h (elf_machine_fixup_plt): Likewise.
* sysdeps/powerpc/powerpc32/dl-machine.h (elf_machine_fixup_plt):
Likewise.
* sysdeps/s390/s390-32/dl-machine.h (elf_machine_fixup_plt): Likewise.
* sysdeps/s390/s390-64/dl-machine.h (elf_machine_fixup_plt): Likewise.
* sysdeps/sh/dl-machine.h (elf_machine_fixup_plt): Likewise.
* sysdeps/sparc/sparc32/dl-machine.h (elf_machine_fixup_plt): Likewise.
* sysdeps/sparc/sparc64/dl-machine.h (elf_machine_fixup_plt): Likewise.
* sysdeps/tile/dl-machine.h (elf_machine_fixup_plt): Likewise.
* sysdeps/x86_64/dl-machine.h (elf_machine_fixup_plt): Likewise.
* sysdeps/powerpc/powerpc64/dl-machine.c (_dl_error_localentry): New.
(_dl_reloc_overflow): Increase buffer size. Formatting.
* sysdeps/powerpc/powerpc64/dl-machine.h (ppc64_local_entry_offset):
Delete reloc param, add refsym and sym. Check optimized plt
call stubs for localentry:0 functions. Adjust callers.
(elf_machine_fixup_plt, elf_machine_plt_conflict): Add refsym
and sym parameters. Adjust callers.
(_dl_reloc_overflow): Move attribute.
(_dl_error_localentry): Declare.
* elf/dl-runtime.c (_dl_fixup): Save original sym. Pass
refsym and sym to elf_machine_fixup_plt.
* elf/testobj6.c (preload): Call printf.
Makes __stpncpy_power8 call __memset_power8 directly rather than via an
IFUNC. Fixes a missing _mcount, and removes some redundant NOPS. The
*_is_local defines are also used in a followup patch.
* sysdeps/powerpc/powerpc64/multiarch/strncpy-power7.S: Define
MEMSET_is_local.
* sysdeps/powerpc/powerpc64/multiarch/strncpy-power8.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/stpncpy-power7.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/stpncpy-power8.S: Likewise.
Define MEMSET.
* sysdeps/powerpc/powerpc64/multiarch/strstr-power7.S: Define
STRLEN_is_local, STRNLEN_is_local, and STRCHR_is_local.
* sysdeps/powerpc/powerpc64/power7/strstr.S: Likewise. Don't add
nop after local calls.
* sysdeps/powerpc/powerpc64/power7/strncpy.S: Define MEMSET_is_local.
Don't add nop after local call.
* sysdeps/powerpc/powerpc64/power8/strncpy.S: Likewise. Add missing
CALL_MCOUNT.
.align on some targets takes a byte alignment, and on others, like
powerpc, the log2 of the byte alignment (so ".align 4" requests 4-byte
alignment on x86 but 16-byte alignment on powerpc). It's a good idea to
avoid .align, particularly since x86 and powerpc differ. This patch fixes
the occurrences of .align in powerpc64/sysdep.h, renames DOT_LABEL
since the macro doesn't have anything to do with adding dots, removes
extraneous semicolons, and fixes some formatting.
* sysdeps/powerpc/powerpc64/sysdep.h: Formatting.
(FUNC_LABEL): Rename from DOT_LABEL.
(ENTRY_1): Use FUNC_LABEL and remove leading space from label.
Use .p2align rather than .align.
(TRACEBACK, TRACEBACK_MASK): Use .p2align rather than .align.
(ABORT_TRANSACTION): Likewise.
(ENTRY_1, ENTRY_2, END_2, LOCALENTRY): Remove unnecessary semicolons,
particularly at end. Add semicolon at invocation as necessary.
(TRACEBACK, TRACEBACK_MASK, PSEUDO, PSEUDO_NOERRNO): Likewise.
(PSEUDO_ERRVAL, PPC64_LOAD_FUNCPTR, OPD_ENT): Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strrchr-power8.S (ENTRY,
END): Adjust to suit.
I think the FRAME_PARM[1-9]_SAVE macros confuse the code, particularly
FRAME_PARM9_SAVE. There are only 8 parameter save slots!
* sysdeps/powerpc/powerpc64/sysdep.h (FRAME_BACKCHAIN,
FRAME_CR_SAVE, FRAME_LR_SAVE): Move out of conditional.
(FRAME_PARM1_SAVE, FRAME_PARM2_SAVE, FRAME_PARM3_SAVE,
FRAME_PARM4_SAVE, FRAME_PARM5_SAVE, FRAME_PARM6_SAVE,
FRAME_PARM7_SAVE, FRAME_PARM8_SAVE, FRAME_PARM9_SAVE): Delete.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/makecontext.S: Replace
uses of FRAME_PARM[1-9]_SAVE with FRAME_PARM_SAVE plus offset.
The macros used in assembly were broken on powerpc64 ELFv1.
* sysdeps/powerpc/powerpc64/sysdep.h (call_mcount_parm_offset): Delete.
(SAVE_ARG, REST_ARG, CFI_SAVE_ARG): Correct.
This patch optimizes the generic spinlock code.
The type pthread_spinlock_t is a typedef to volatile int on all archs.
Passing a volatile pointer to the atomic macros which are not mapped to the
C11 atomic builtins can lead to extra stores and loads to stack if such
a macro creates a temporary variable by using "__typeof (*(mem)) tmp;".
Thus, those macros which are used by spinlock code - atomic_exchange_acquire,
atomic_load_relaxed, atomic_compare_exchange_weak - have to be adjusted.
According to the comment from Szabolcs Nagy, the type of a cast expression is
unqualified (see http://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_423.htm):
__typeof ((__typeof (*(mem))) *(mem)) tmp;
Thus from spinlock perspective the variable tmp is of type int instead of
type volatile int. This patch adjusts those macros in include/atomic.h.
With this construct GCC >= 5 omits the extra stores and loads.
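For illustration, with a volatile int *mem the two declarations differ
like this (hypothetical temporaries):
__typeof (*(mem)) tmp1;                       /* volatile int */
__typeof ((__typeof (*(mem))) *(mem)) tmp2;   /* plain int: the cast result is unqualified */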
The atomic macros are replaced by the C11-like atomic macros and thus
the code is aligned with them. The pthread_spin_unlock implementation now
uses release memory order instead of sequentially consistent memory order.
The issue with passed volatile int pointers applies to the C11-like atomic
macros as well as to the ones used before.
I've added a __glibc_likely hint to the first atomic exchange in
pthread_spin_lock in order to return immediately to the caller if the lock is
free. Without the hint, there is an additional jump if the lock is free.
I've added the atomic_spin_nop macro within the loop of plain reads.
The plain reads are also done with the C11-like atomic_load_relaxed macro.
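A simplified sketch of the resulting generic lock loop (not the exact
nptl source; the glibc-internal atomic macros are assumed in scope, and
the first try may be a CAS instead, see below):
int
pthread_spin_lock (pthread_spinlock_t *lock)
{
  int val = 0;
  /* Fast path: the uncontended lock is taken with a single exchange.  */
  if (__glibc_likely (atomic_exchange_acquire (lock, 1) == 0))
    return 0;
  do
    {
      /* Contended: spin on plain relaxed loads until the lock looks free.  */
      do
        {
          atomic_spin_nop ();
          val = atomic_load_relaxed (lock);
        }
      while (val != 0);
    }
  while (!atomic_compare_exchange_weak_acquire (lock, &val, 1));
  return 0;
}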
The new define ATOMIC_EXCHANGE_USES_CAS determines if the first try to acquire
the spinlock in pthread_spin_lock or pthread_spin_trylock is an exchange
or a CAS. This is defined in atomic-machine.h for all architectures.
The define SPIN_LOCK_READS_BETWEEN_CMPXCHG is now removed.
There is no technical reason for throwing in a CAS every now and then,
and so far we have no evidence that it can improve performance.
If that were the case, we would have to adjust other spin-waiting loops
elsewhere, too! Using a CAS loop without plain reads is not a good idea
on many targets and wasn't used by any of them. Thus there is now no
option to do so.
Architectures now use the generic spinlock automatically if they
do not provide their own implementation. Thus the pthread_spin_lock.c files
in the sysdeps folders are deleted.
ChangeLog:
* NEWS: Mention new spinlock implementation.
* include/atomic.h:
(__atomic_val_bysize): Cast type to omit volatile qualifier.
(atomic_exchange_acq): Likewise.
(atomic_load_relaxed): Likewise.
(ATOMIC_EXCHANGE_USES_CAS): Check definition.
* nptl/pthread_spin_init.c (pthread_spin_init):
Use atomic_store_relaxed.
* nptl/pthread_spin_lock.c (pthread_spin_lock):
Use C11-like atomic macros.
* nptl/pthread_spin_trylock.c (pthread_spin_trylock):
Likewise.
* nptl/pthread_spin_unlock.c (pthread_spin_unlock):
Use atomic_store_release.
* sysdeps/aarch64/nptl/pthread_spin_lock.c: Delete File.
* sysdeps/arm/nptl/pthread_spin_lock.c: Likewise.
* sysdeps/hppa/nptl/pthread_spin_lock.c: Likewise.
* sysdeps/m68k/nptl/pthread_spin_lock.c: Likewise.
* sysdeps/microblaze/nptl/pthread_spin_lock.c: Likewise.
* sysdeps/mips/nptl/pthread_spin_lock.c: Likewise.
* sysdeps/nios2/nptl/pthread_spin_lock.c: Likewise.
* sysdeps/aarch64/atomic-machine.h (ATOMIC_EXCHANGE_USES_CAS): Define.
* sysdeps/alpha/atomic-machine.h: Likewise.
* sysdeps/arm/atomic-machine.h: Likewise.
* sysdeps/i386/atomic-machine.h: Likewise.
* sysdeps/ia64/atomic-machine.h: Likewise.
* sysdeps/m68k/coldfire/atomic-machine.h: Likewise.
* sysdeps/m68k/m680x0/m68020/atomic-machine.h: Likewise.
* sysdeps/microblaze/atomic-machine.h: Likewise.
* sysdeps/mips/atomic-machine.h: Likewise.
* sysdeps/powerpc/powerpc32/atomic-machine.h: Likewise.
* sysdeps/powerpc/powerpc64/atomic-machine.h: Likewise.
* sysdeps/s390/atomic-machine.h: Likewise.
* sysdeps/sparc/sparc32/atomic-machine.h: Likewise.
* sysdeps/sparc/sparc32/sparcv9/atomic-machine.h: Likewise.
* sysdeps/sparc/sparc64/atomic-machine.h: Likewise.
* sysdeps/tile/tilegx/atomic-machine.h: Likewise.
* sysdeps/tile/tilepro/atomic-machine.h: Likewise.
* sysdeps/unix/sysv/linux/hppa/atomic-machine.h: Likewise.
* sysdeps/unix/sysv/linux/m68k/coldfire/atomic-machine.h: Likewise.
* sysdeps/unix/sysv/linux/nios2/atomic-machine.h: Likewise.
* sysdeps/unix/sysv/linux/sh/atomic-machine.h: Likewise.
* sysdeps/x86_64/atomic-machine.h: Likewise.
This implementation is based on the one already used at
sysdeps/powerpc/powerpc64/fpu/multiarch/s_sinf-power8.S.
* sysdeps/powerpc/powerpc64/fpu/multiarch/Makefile
[$(subdir) = math] (libm-sysdep_routines): Add s_cosf-power8 and
s_cosf-ppc64.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_cosf-power8.S: New file.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_cosf-ppc64.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_cosf.c: Likewise.
* sysdeps/powerpc/powerpc64/power8/fpu/s_cosf.S: Likewise.
With the read consolidation, which uses the SYSCALL_CANCEL macro, a frame
pointer is created in the syscall code and this makes the powerpc
backtrace obtain a bogus entry for the signal handling case.
This is because it does not set up the correct frame pointer register
(r1) based on the value saved by the kernel sigreturn. It was not
failing before because the syscall frame pointer register was the same one
used for the next frame (the function that actually called the syscall).
This patch fixes it by setting up the next stack frame using the value
saved by the kernel sigreturn. It fixes tst-backtrace{5,6} after
the read consolidation patch.
Checked on powerpc-linux-gnu and powerpc64le-linux-gnu.
* sysdeps/powerpc/powerpc32/backtrace.c (is_sigtramp_address): Use
void* for argument type and use VDSO_SYMBOL macro.
(is_sigtramp_address_rt): Likewise.
(__backtrace): Setup expected frame pointer address for signal
handling.
* sysdeps/powerpc/powerpc64/backtrace.c (is_sigtramp_address): Use
void* for argument type and use VDSO_SYMBOL macro.
(__backtrace): Setup expected frame pointer address for signal
handling.
The P7 code is used for strings <= 32B; for strings > 32B, vectorized
loops are used. This shows an average 25% improvement, depending on the
position of the search character. The performance is the same for
shorter strings.
Tested on ppc64 and ppc64le.
With the new optimized strnlen for POWER8 [1], this patch adds
a power8 strncat to make use of the optimized strlen and strnlen.
This is faster than the current POWER7 implementation for larger strings.
Tested on powerpc64 and powerpc64le.
[1] https://sourceware.org/ml/libc-alpha/2017-03/msg00491.html
* sysdeps/powerpc/powerpc64/multiarch/Makefile (sysdep_routines): Add
strncat-power8.
* sysdeps/powerpc/powerpc64/multiarch/strncat.c (strncat): Add
__strncat_power8 to ifunc list.
* sysdeps/powerpc/powerpc64/multiarch/ifunc-impl-list.c
(strncat): Add __strncat_power8 to list of strncat functions.
* sysdeps/powerpc/powerpc64/multiarch/strncat-power8.c: New file.
Clean up the IFUNC implementations for powerpc in order to remove
unneeded macro definitions.
Tested on ppc64le with and without --disable-multi-arch flag.
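As an illustration of the pattern (memcmp shown; the other files in this
series follow the same shape):
/* multiarch/memcmp-power7.S reduces to roughly:  */
#define MEMCMP __memcmp_power7
#include <sysdeps/powerpc/powerpc64/power7/memcmp.S>
/* while power7/memcmp.S gains a default so it still builds standalone:  */
#ifndef MEMCMP
# define MEMCMP memcmp
#endif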
* sysdeps/powerpc/powerpc64/multiarch/memcmp-power4.S: Define the
implementation-specific function name and remove unneeded
macro definitions.
* sysdeps/powerpc/powerpc64/multiarch/memcmp-power7.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/memmove-power7.S: Likewise.
* sysdeps/powerpc/powerpc64/power4/memcmp.S: Set a default function
name if not defined and pass as parameter to macros accordingly.
* sysdeps/powerpc/powerpc64/power7/memcmp.S: Likewise.
* sysdeps/powerpc/powerpc64/power7/memmove.S: Likewise.
Clean up the IFUNC implementations for powerpc in order to remove
unneeded macro definitions.
Tested on ppc64le with and without --disable-multi-arch flag.
* sysdeps/powerpc/powerpc64/multiarch/memcpy-a2.S: Define the
implementation-specific function name and remove unneeded
macro definitions.
* sysdeps/powerpc/powerpc64/multiarch/memcpy-cell.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/memcpy-power4.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/memcpy-power6.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/memcpy-power7.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/memcpy-ppc64.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/mempcpy-power7.S: Likewise.
* sysdeps/powerpc/powerpc64/a2/memcpy.S: Set a default function
name if not defined and pass as parameter to macros accordingly.
* sysdeps/powerpc/powerpc64/cell/memcpy.S: Likewise.
* sysdeps/powerpc/powerpc64/memcpy.S: Likewise.
* sysdeps/powerpc/powerpc64/power4/memcpy.S: Likewise.
* sysdeps/powerpc/powerpc64/power6/memcpy.S: Likewise.
* sysdeps/powerpc/powerpc64/power7/memcpy.S: Likewise.
* sysdeps/powerpc/powerpc64/power7/mempcpy.S: Likewise.
Clean up the IFUNC implementations for powerpc in order to remove
unneeded macro definitions.
Tested on ppc64le with and without --disable-multi-arch flag.
* sysdeps/powerpc/powerpc64/multiarch/memchr-power7.S: Define the
implementation-specific function name and remove unneeded macro
definitions.
* sysdeps/powerpc/powerpc64/multiarch/memrchr-power7.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/rawmemchr-power7.S: Likewise.
* sysdeps/powerpc/powerpc64/power7/memchr.S: Set a default
function name if not defined and pass as parameter to macros
accordingly.
* sysdeps/powerpc/powerpc64/power7/memrchr.S: Likewise.
* sysdeps/powerpc/powerpc64/power7/rawmemchr.S: Likewise.
Clean up the IFUNC implementations for powerpc in order to remove
unneeded macro definitions.
Tested on ppc64le with and without --disable-multi-arch flag.
* sysdeps/powerpc/powerpc64/multiarch/memset-power4.S: Define the
implementation-specific function name and remove unneeded macro
definitions.
* sysdeps/powerpc/powerpc64/multiarch/memset-power6.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/memset-power7.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/memset-power8.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/memset-ppc64.S: Likewise.
* sysdeps/powerpc/powerpc64/memset.S: Set a default function name if
not defined and pass as parameter to macros accordingly.
* sysdeps/powerpc/powerpc64/power4/memset.S: Likewise.
* sysdeps/powerpc/powerpc64/power6/memset.S: Likewise.
* sysdeps/powerpc/powerpc64/power7/memset.S: Likewise.
* sysdeps/powerpc/powerpc64/power8/memset.S: Likewise.
Clean up the IFUNC implementations for powerpc in order to remove
unneeded macro definitions.
Tested on ppc64le with and without --disable-multi-arch flag.
* sysdeps/powerpc/powerpc64/multiarch/strcasestr-power8.S: Define the
strcasestr implementation name and remove unneeded macro definitions.
* sysdeps/powerpc/powerpc64/multiarch/strstr-power7.S: Define
strstr implementation name and remove unneeded macro definitions.
* sysdeps/powerpc/powerpc64/power7/strstr.S: Set a default function
name if not defined and pass as parameter to macros accordingly.
* sysdeps/powerpc/powerpc64/power8/strcasestr.S: Likewise.
Clean up the IFUNC implementations for powerpc in order to remove
unneeded macro definitions.
Tested on ppc64le with and without --disable-multi-arch flag.
* sysdeps/powerpc/powerpc64/multiarch/strchr-power7.S: Define the
implementation-specific function name and remove unneeded macro
definitions.
* sysdeps/powerpc/powerpc64/multiarch/strchr-power8.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strchr-ppc64.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strchrnul-power7.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strchrnul-power8.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strrchr-power7.S: Likewise.
* sysdeps/powerpc/powerpc64/power7/strchr.S: Set a default
function name if not defined and pass as parameter to macros
accordingly.
* sysdeps/powerpc/powerpc64/power7/strchrnul.S: Likewise.
* sysdeps/powerpc/powerpc64/power7/strrchr.S: Likewise.
* sysdeps/powerpc/powerpc64/power8/strchr.S: Likewise.
* sysdeps/powerpc/powerpc64/strchr.S: Likewise.
Clean up the IFUNC implementations for powerpc in order to remove
unneeded macro definitions.
Tested on ppc64le with and without --disable-multi-arch flag.
* sysdeps/powerpc/powerpc64/multiarch/strlen-power7.S: Define
the strlen implementation name and remove unneeded macro definitions.
* sysdeps/powerpc/powerpc64/multiarch/strlen-power8.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strlen-ppc64.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strnlen-power7.S: Define
the strnlen implementation name and remove unneeded macro definitions.
* sysdeps/powerpc/powerpc64/power7/strlen.S: Set a default function
name if not defined and pass as parameter to macros accordingly.
* sysdeps/powerpc/powerpc64/power7/strnlen.S: Likewise.
* sysdeps/powerpc/powerpc64/power8/strlen.S: Likewise.
* sysdeps/powerpc/powerpc64/strlen.S: Likewise.
Clean up the IFUNC implementations for powerpc in order to remove
unneeded macro definitions.
Tested on ppc64le with and without --disable-multi-arch flag.
* sysdeps/powerpc/powerpc64/multiarch/strcasecmp_l-power7.S: Define
the implementation-specific function name and remove unneeded
macro definitions.
* sysdeps/powerpc/powerpc64/multiarch/strcmp-power7.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strcmp-power8.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strcmp-power9.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strcmp-ppc64.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strncmp-power4.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strncmp-power7.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strncmp-power8.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strncmp-power9.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strncmp-ppc64.S: Likewise.
* sysdeps/powerpc/powerpc64/power4/strncmp.S: Set a default function
name if not defined and pass as parameter to macros accordingly.
* sysdeps/powerpc/powerpc64/power7/strcmp.S: Likewise.
* sysdeps/powerpc/powerpc64/power7/strncmp.S: Likewise.
* sysdeps/powerpc/powerpc64/power8/strcmp.S: Likewise.
* sysdeps/powerpc/powerpc64/power8/strncmp.S: Likewise.
* sysdeps/powerpc/powerpc64/power9/strcmp.S: Likewise.
* sysdeps/powerpc/powerpc64/power9/strncmp.S: Likewise.
* sysdeps/powerpc/powerpc64/strcmp.S: Likewise.
* sysdeps/powerpc/powerpc64/strncmp.S: Likewise.
Clean up the IFUNC implementations for powerpc in order to remove
unneeded macro definitions.
Tested on ppc64le with and without --disable-multi-arch flag.
* sysdeps/powerpc/powerpc64/multiarch/stpcpy-power8.S: Define the
implementation-specific function name and remove unneeded macro
definitions.
* sysdeps/powerpc/powerpc64/multiarch/stpncpy-power7.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/stpncpy-power8.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strcpy-power8.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strncpy-power7.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strncpy-power8.S: Likewise.
* sysdeps/powerpc/powerpc64/power7/strncpy.S: Set a default
function name if not defined.
* sysdeps/powerpc/powerpc64/power8/strcpy.S: Likewise.
* sysdeps/powerpc/powerpc64/power8/strncpy.S: Likewise.
Add a strnlen optimized for POWER8 and long strings. It delivers the
same performance as the POWER7 implementation for short strings.
It takes advantage of reasonably performing unaligned loads
and bit permutes to check the first 1-16 bytes until
quadword aligned, then checks in 64-byte strides until unsafe,
then in 16-byte strides, truncating the count if need be.
The POWER7 code is reused for strings shorter than 32 bytes.
Tested on ppc64 and ppc64le.
* sysdeps/powerpc/powerpc64/multiarch/Makefile
(sysdep_routines): Add strnlen-power8.
* sysdeps/powerpc/powerpc64/multiarch/ifunc-impl-list.c
(strnlen): Add __strnlen_power8 to list of strnlen functions.
* sysdeps/powerpc/powerpc64/multiarch/strnlen-power8.S:
New file.
* sysdeps/powerpc/powerpc64/multiarch/strnlen.c
(__strnlen): Add __strnlen_power8 to ifunc list.
* sysdeps/powerpc/powerpc64/power8/strnlen.S: New file.
For strings >16B and <32B the existing algorithm takes more time than the
default implementation when the strings are placed close to the end of a
page. This is due to byte-by-byte access when handling the page cross.
This is improved by following the >32B code path, where the address is
adjusted to aligned memory before doing a load doubleword operation
instead of loading bytes.
Tested on powerpc64 and powerpc64le.
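A C illustration of the idea (the actual change is in the assembler
implementation; the helper name here is hypothetical):
#include <stdint.h>
/* An 8-byte load from an 8-byte-aligned address cannot cross a page
   boundary, so it is safe even when the string ends close to the end of
   a page; the bytes that precede the string start must then be ignored
   or masked by the caller.  */
static inline uint64_t
aligned_head_load (const char *s)
{
  const uint64_t *p = (const uint64_t *) ((uintptr_t) s & ~(uintptr_t) 7);
  return *p;
}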
Based on comments on the previous attempt to address BZ#16640 [1],
the idea is not to support invalid use of strtok (the original
bug report proposal). This led to a new optimized strtok
implementation [2].
The idea of this patch is to fix BZ#16640 by aligning all the
implementations to the same contract. However, with the newer strtok
code it is better to remove the old assembly versions instead of
fixing them.
For x86 it is a gain in all cases since the new implementation can
potentially use the sse2/sse42 implementations of strspn and strcspn.
This shows better performance on both i686 and x86_64 using
the string benchtests.
On powerpc64 the gains are mixed; only for larger inputs
or keys are some gains shown (based on the benchtests it seems that
there are gains for keys larger than 10 and inputs larger
than 32). I would prefer to remove the optimized implementation,
first for code simplicity and second because more could be gained
by using better optimized strcspn/strspn
code (as for x86). However, if the powerpc arch maintainers prefer, I
can send a v2 with the assembly code adjusted instead.
Checked on x86_64-linux-gnu, i686-linux-gnu, and powerpc64le-linux-gnu.
[BZ #16640]
* sysdeps/i386/i686/strtok.S: Remove file.
* sysdeps/i386/i686/strtok_r.S: Likewise.
* sysdeps/i386/strtok.S: Likewise.
* sysdeps/i386/strtok_r.S: Likewise.
* sysdeps/powerpc/powerpc64/strtok.S: Likewise.
* sysdeps/powerpc/powerpc64/strtok_r.S: Likewise.
* sysdeps/x86_64/strtok.S: Likewise.
* sysdeps/x86_64/strtok_r.S: Likewise.
[1] https://sourceware.org/ml/libc-alpha/2016-10/msg00411.html
[2] https://sourceware.org/ml/libc-alpha/2016-12/msg00461.html
Since commit 6e46de42fe the default strcat implementation is essentially
the same as the specialized ia64 and powerpc ones. This patch removes the
redundant implementations and adjusts the powerpc64 ifunc code to use the
default one.
Checked on powerpc32-linux-gnu (default and power4) and ia64-linux build
and on powerpc64le-linux-gnu.
* sysdeps/ia64/strcat.c: Remove file.
* sysdeps/powerpc/strcat.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strcat-power7.c: Use default
C implementation.
* sysdeps/powerpc/powerpc64/multiarch/strcat-power8.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strcat-ppc64.c: Likewise.
The P7 code is used for strings <= 32B; for strings > 32B, vectorized
loops are used. This shows an average 25% improvement, depending on the
position of the search character. The performance is the same for
shorter strings.
Tested on ppc64 and ppc64le.
The current optimized powerpc64/power7 memchr uses a strategy of checking
p versus align(p+n) (where 'p' is the input char pointer and 'n' the
maximum size to check for the byte) without taking care of possible
overflow in the pointer addition for large 'n'.
It was triggered by 3038145ca2, where the default rawmemchr (used to
create the ppc64 rawmemchr in ifunc selection) now uses
memchr (p, c, (size_t)-1) in its implementation.
This patch fixes it by implementing a saturated addition where overflow
clamps the end pointer to UINTPTR_MAX.
Checked on powerpc64le-linux-gnu.
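A C sketch of the saturating end-address computation (illustrative only;
the actual fix is in the power7 memchr assembler source):
#include <stddef.h>
#include <stdint.h>
static inline uintptr_t
saturated_end (const void *p, size_t n)
{
  uintptr_t end = (uintptr_t) p + n;
  /* If the addition wrapped around, clamp to UINTPTR_MAX instead.  */
  return end < (uintptr_t) p ? UINTPTR_MAX : end;
}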
[BZ #20971]
* sysdeps/powerpc/powerpc64/power7/memchr.S (__memchr): Avoid
overflow in pointer addition.
* string/test-memchr.c (do_test): Add an argument to pass as
the size on memchr.
(test_main): Add check for SIZE_MAX.
Commit c7debbdfac redirected the internal strrchr to the default powerpc64
implementation by redefining weak_alias at
sysdeps/powerpc/powerpc64/multiarch/strrchr-ppc64.c:
#undef weak_alias
#define weak_alias(name, aliasname) \
extern __typeof (__strrchr_ppc) aliasname \
__attribute__ ((weak, alias ("__strrchr_ppc")));
This creates a __GI_strrchr alias that clashes with the IFUNC symbol in
strrchr.os. There is no need to define the default version as the internal
version, since ifunc should work internally for powerpc64. This patch
removes the weak_alias indirection.
Checked on powerpc64le.
* sysdeps/powerpc/powerpc64/multiarch/strrchr-ppc64.c (weak_alias):
Remove redirection to __strrchr_ppc.
Commit 142e0a9953 redirected the internal stpcpy to the default powerpc64
implementation by redefining weak_alias at
sysdeps/powerpc/powerpc64/multiarch/stpcpy-ppc64.c:
#undef weak_alias
#define weak_alias(name, aliasname) \
extern __typeof (__stpcpy_ppc) aliasname \
__attribute__ ((weak, alias ("__stpcpy_ppc")));
This creates a __GI_stpcpy alias that clashes with the IFUNC symbol in
stpcpy.os. There is no need to define the default version as the internal
version, since ifunc should work internally for powerpc64. This patch
removes the weak_alias indirection.
Checked on powerpc64le.
* sysdeps/powerpc/powerpc64/multiarch/stpcpy-ppc64.c (weak_alias):
Remove redirection to __stpcpy_ppc.
Building glibc for powerpc64 with recent (2.27.51.20161012) binutils,
with multi-arch enabled, I get the error:
../sysdeps/powerpc/powerpc64/power6/memset.S: Assembler messages:
../sysdeps/powerpc/powerpc64/power6/memset.S:254: Error: operand out of range (5 is not between 0 and 1)
../sysdeps/powerpc/powerpc64/power6/memset.S:254: Error: operand out of range (128 is not between 0 and 31)
../sysdeps/powerpc/powerpc64/power6/memset.S:254: Error: missing operand
Indeed, cmpli is documented as a four-operand instruction, and looking
at nearby code it seems likely cmpldi was intended. This patch fixes
this powerpc64 code accordingly, and makes a corresponding change to
the powerpc32 code.
Tested for powerpc, powerpc64 and powerpc64le by Tulio Magno Quites
Machado Filho.
* sysdeps/powerpc/powerpc32/power6/memset.S (memset): Use cmplwi
instead of cmpli.
* sysdeps/powerpc/powerpc64/power6/memset.S (memset): Use cmpldi
instead of cmpli.
The powerpc (hard-float) implementations of copysignl, both 32-bit and
64-bit, raise spurious "invalid" exceptions when the first argument is
a signaling NaN. copysign functions should never raise exceptions
even for signaling NaNs.
The problem is the use of an fcmpu instruction to test the sign of the
high part of the long double argument. This patch fixes the functions
to use fsel instead (as used for fabsl following my fixes for a
similar bug there), or to examine the integer representation for older
32-bit processors without fsel.
Tested for powerpc64 and powerpc32 (configurations with and without
fsel used).
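For the no-fsel path, the sign can be read from the integer representation
without any floating-point comparison; a C illustration of the idea
(hypothetical helper, assuming IEEE binary64 layout for the high double):
#include <stdint.h>
#include <string.h>
/* Nonzero if the sign bit of the high double is set; no floating-point
   compare is involved, so a signaling NaN cannot raise "invalid".  */
static int
high_double_sign_is_set (double hi)
{
  uint64_t bits;
  memcpy (&bits, &hi, sizeof bits);
  return (bits >> 63) != 0;
}
The fsel path achieves the same effect in hardware: fsel selects based on
whether its first operand is greater than or equal to zero, without
raising exceptions.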
[BZ #20718]
* sysdeps/powerpc/powerpc32/fpu/s_copysignl.S (__copysignl): Do
not use floating-point comparisons to test sign.
* sysdeps/powerpc/powerpc64/fpu/s_copysignl.S (__copysignl):
Likewise.
The current s390 ifunc resolver for vector optimized functions and the common
libc_ifunc macro in include/libc-symbols.h use something like this to
generate ifunc'ed functions:
extern void *__resolve___strlen (unsigned long int dl_hwcap) asm ("strlen");
asm (".type strlen, %gnu_indirect_function");
This leads to false debug information:
objdump --dwarf=info libc.so:
...
<1><1e6424>: Abbrev Number: 43 (DW_TAG_subprogram)
<1e6425> DW_AT_external : 1
<1e6425> DW_AT_name : (indirect string, offset: 0x1146e): __resolve___strlen
<1e6429> DW_AT_decl_file : 1
<1e642a> DW_AT_decl_line : 23
<1e642b> DW_AT_linkage_name: (indirect string, offset: 0x1147a): strlen
<1e642f> DW_AT_prototyped : 1
<1e642f> DW_AT_type : <0x1e4ccd>
<1e6433> DW_AT_low_pc : 0x998e0
<1e643b> DW_AT_high_pc : 0x16
<1e6443> DW_AT_frame_base : 1 byte block: 9c (DW_OP_call_frame_cfa)
<1e6445> DW_AT_GNU_all_call_sites: 1
<1e6445> DW_AT_sibling : <0x1e6459>
<2><1e6449>: Abbrev Number: 44 (DW_TAG_formal_parameter)
<1e644a> DW_AT_name : (indirect string, offset: 0x1845): dl_hwcap
<1e644e> DW_AT_decl_file : 1
<1e644f> DW_AT_decl_line : 23
<1e6450> DW_AT_type : <0x1e4c8d>
<1e6454> DW_AT_location : 0x122115 (location list)
...
The debuginfo for the ifunc-resolver function contains the DW_AT_linkage_name
field, which names the real function name "strlen". If you perform an inferior
function call to strlen in lldb, then it fails due to something like that:
"error: no matching function for call to 'strlen'
candidate function not viable: no known conversion from 'const char [6]'
to 'unsigned long' for 1st argument"
The unsigned long is the dl_hwcap argument of the resolver function.
The strlen function itself has no debuginfo.
The s390 ifunc resolver for memset & co uses something like that:
asm (".globl FUNC"
".type FUNC, @gnu_indirect_function"
".set FUNC, __resolve_FUNC");
This way the debuginfo for the ifunc-resolver function does not contain the
DW_AT_linkage_name field and the real function has no debuginfo either.
Using this strategy for the vector optimized functions leads to some trouble
for functions like strnlen. Here we have __strnlen and a weak alias strnlen.
The __strnlen function is the ifunc function, which is realized with the asm
statement above. The weak_alias macro can't be used here due to an undefined
symbol:
gcc ../sysdeps/s390/multiarch/strnlen.c -c ...
In file included from <command-line>:0:0:
../sysdeps/s390/multiarch/strnlen.c:28:24: error: ‘strnlen’ aliased to undefined symbol ‘__strnlen’
weak_alias (__strnlen, strnlen)
^
./../include/libc-symbols.h:111:26: note: in definition of macro ‘_weak_alias’
extern __typeof (name) aliasname __attribute__ ((weak, alias (#name)));
^
../sysdeps/s390/multiarch/strnlen.c:28:1: note: in expansion of macro ‘weak_alias’
weak_alias (__strnlen, strnlen)
^
make[2]: *** [build/string/strnlen.o] Error 1
As the __strnlen function is defined with asm statements, the function name
__strnlen isn't known to gcc. But the weak alias can also be done with an
asm statement to resolve this issue:
__asm__ (".weak strnlen\n\t"
".set strnlen,__strnlen\n");
In order to use the weak_alias macro, gcc needs to know the ifunc function. The
minimum gcc to build glibc is currently 4.7, which supports attribute((ifunc)).
See https://gcc.gnu.org/onlinedocs/gcc-4.7.0/gcc/Function-Attributes.html.
It is only supported if gcc is configured with --enable-gnu-indirect-function
or enables it by default, as it does for at least the Intel and s390x
architectures.
This patch uses the old behaviour if gcc support is not available.
Usage of attribute ifunc is something like that:
__typeof (FUNC) FUNC __attribute__ ((ifunc ("__resolve_FUNC")));
Then gcc produces the same .globl, .type, .set assembler instructions as above.
And the debuginfo does not contain the DW_AT_linkage_name field and there is no
debuginfo for the real function either.
But in order to get it to work, there is also some extra work to do.
Currently, the glibc internal symbol on s390x, e.g. __GI___strnlen, is not the
ifunc symbol, but the fallback __strnlen_c symbol. Thus I have to omit the
libc_hidden_def macro in strnlen.c (which contains the ifunc function __strnlen)
because it is already handled in strnlen-c.c (which contains __strnlen_c).
Due to libc_hidden_proto (__strnlen) in string.h, compiling fails:
gcc ../sysdeps/s390/multiarch/strnlen.c -c ...
In file included from <command-line>:0:0:
../sysdeps/s390/multiarch/strnlen.c:53:24: error: ‘strnlen’ aliased to undefined symbol ‘__strnlen’
weak_alias (__strnlen, strnlen)
^
./../include/libc-symbols.h:111:26: note: in definition of macro ‘_weak_alias’
extern __typeof (name) aliasname __attribute__ ((weak, alias (#name)));
^
../sysdeps/s390/multiarch/strnlen.c:53:1: note: in expansion of macro ‘weak_alias’
weak_alias (__strnlen, strnlen)
^
make[2]: *** [build/string/strnlen.os] Error 1
I have to redirect the prototypes for __strnlen in string.h and create a copy
of the prototype to use as the ifunc function:
__typeof (__redirect___strnlen) __strnlen __attribute__ ((ifunc ("__resolve_strnlen")));
weak_alias (__strnlen, strnlen)
This way there is no trouble with the internal __GI_* symbols.
Glibc builds fine with this construct and the debuginfo is "correct".
For functions without a __GI_* symbol like memccpy this redirection is not needed.
This patch adjusts the common libc_ifunc and libm_ifunc macro to use gcc
attribute ifunc. Due to this change, the macro users where the __GI_* symbol
does not target the ifunc symbol have to be prepared with the redirection
construct.
Furthermore a configure check to test gcc support is added. If it is not supported,
the old behaviour is used.
This patch also prepares the libc_ifunc macro to be usable by the s390 ifunc
macros. The s390 ifunc resolver functions have an hwcap parameter and not all
resolvers need the same initialization code. The next patch in this series
changes the s390 ifunc macros to use this common one.
ChangeLog:
* include/libc-symbols.h (__ifunc_resolver):
New macro is used by __ifunc* macros.
(__ifunc): New macro uses gcc attribute ifunc or inline assembly
depending on HAVE_GCC_IFUNC.
(libc_ifunc, libm_ifunc): Use __ifunc as base macro.
(libc_ifunc_redirected, libc_ifunc_hidden, libm_ifunc_init): New macro.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_finite.c:
Redirect ifunced function in header for using as type for ifunc function.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_finitef.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_isinf.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_isinff.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_isnan.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/multiarch/memcmp.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/multiarch/memcpy.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/multiarch/memmove.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/multiarch/mempcpy.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/multiarch/memset.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/multiarch/rawmemchr.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/multiarch/strchr.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/multiarch/strlen.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/multiarch/strncmp.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/multiarch/strnlen.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_finite.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_finitef.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_isinf.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_isinff.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/memcmp.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/mempcpy.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/rawmemchr.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/stpncpy.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strcat.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strchr.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strcmp.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strcpy.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strncmp.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strncpy.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strnlen.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strrchr.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strstr.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/wcschr.c: Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/s_isnanf.c:
Add libc_hidden_def() and use libc_ifunc_hidden() macro
instead of libc_ifunc() macro.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnanf.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/stpcpy.c: Likewise.
Commit a6a4395d fixed the modf implementation by compiling s_modf.c and
s_modff.c with -fsignaling-nans. However these files are also included
from the pre-POWER5+ implementation, and thus these files should also
be compiled with -fsignaling-nans.
ChangeLog:
[BZ #20240]
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/Makefile
(CFLAGS-s_modf-ppc32.c): New variable.
(CFLAGS-s_modff-ppc32.c): Likewise.
* sysdeps/powerpc/powerpc64/fpu/multiarch/Makefile
(CFLAGS-s_modf-ppc64.c): Likewise.
(CFLAGS-s_modff-ppc64.c): Likewise.
If the input values are unaligned and there are null characters in
memory before the starting addresses of the inputs, strcasecmp returns
an incorrect value. Fix it by masking out the bits that are not part of
the string.
This implementation is based on the one already used in
sysdeps/x86_64/fpu/e_expf.S. It improves performance by ~14% on average
in synthetic benchmarks, at the cost of decreasing accuracy to 1 ULP.
atomic_compare_and_exchange_bool_rel and
catomic_compare_and_exchange_bool_rel are removed and replaced with the
new C11-like atomic_compare_exchange_weak_release. The concurrent code
in nscd/cache.c has not been reviewed yet, so this patch does not add
detailed comments.
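The substitution follows this pattern (a sketch with placeholder names
'var', 'oldval', 'newval' and handle_failure (), not code from nscd or
nptl):

/* Old form (removed): evaluates to true when the exchange did NOT
   happen.  */
if (atomic_compare_and_exchange_bool_rel (&var, newval, oldval))
  handle_failure ();

/* New C11-like form: returns true when the exchange DID happen, takes
   a pointer to the expected value, and updates it with the observed
   value on failure.  Note the inverted sense of the test.  */
__typeof (var) expected = oldval;
if (!atomic_compare_exchange_weak_release (&var, &expected, newval))
  handle_failure ();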
* nscd/cache.c (cache_add): Use new C11-like atomic operation instead
of atomic_compare_and_exchange_bool_rel.
* nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_full): Likewise.
* include/atomic.h (atomic_compare_and_exchange_bool_rel,
catomic_compare_and_exchange_bool_rel): Remove.
* sysdeps/aarch64/atomic-machine.h
(atomic_compare_and_exchange_bool_rel): Likewise.
* sysdeps/alpha/atomic-machine.h
(atomic_compare_and_exchange_bool_rel): Likewise.
* sysdeps/arm/atomic-machine.h
(atomic_compare_and_exchange_bool_rel): Likewise.
* sysdeps/mips/atomic-machine.h
(atomic_compare_and_exchange_bool_rel): Likewise.
* sysdeps/tile/atomic-machine.h
(atomic_compare_and_exchange_bool_rel): Likewise.
Some architectures have their own versions of fdim functions, which
are missing errno setting (bug 6796) and may also return sNaN instead
of qNaN for sNaN input, in the case of the x86 / x86_64 long double
versions (bug 20256).
These versions are not actually doing anything that a compiler
couldn't generate, just straightforward comparisons / arithmetic (and,
in the x86 / x86_64 case, testing for NaNs with fxam, which isn't
actually needed once you use an unordered comparison and let the NaNs
pass through the same subtraction as non-NaN inputs). This patch
removes the x86 / x86_64 / powerpc versions, so that those
architectures use the generic C versions, which correctly handle
setting errno and deal properly with sNaN inputs. This seems better
than dealing with setting errno in lots of .S versions.
The i386 versions also return results with excess range and precision,
which is not appropriate for a function exactly defined by reference
to IEEE operations. For errno setting to work correctly on overflow,
it's necessary to remove excess range with math_narrow_eval, which
this patch duly does in the float and double versions so that the
tests can reliably pass on x86. For float, this avoids any double
rounding issues as the long double precision is more than twice that
of float. For double, double rounding issues will need to be
addressed separately, so this patch does not fully fix bug 20255.
Tested for x86_64, x86 and powerpc.
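For reference, the generic C version being switched to is essentially
the following (a simplified sketch of math/s_fdim.c after this change):

#include <errno.h>
#include <math.h>
#include <math_private.h>

double
__fdim (double x, double y)
{
  /* An unordered comparison lets NaNs (including sNaNs) fall through to
     the subtraction below, which quiets them and raises "invalid".  */
  if (islessequal (x, y))
    return 0.0;

  /* math_narrow_eval discards any excess range and precision so that an
     overflow is representable and errno can be set reliably.  */
  double r = math_narrow_eval (x - y);
  if (isinf (r) && !isinf (x) && !isinf (y))
    __set_errno (ERANGE);

  return r;
}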
[BZ #6796]
[BZ #20255]
[BZ #20256]
* math/s_fdim.c: Include <math_private.h>.
(__fdim): Use math_narrow_eval on result.
* math/s_fdimf.c: Include <math_private.h>.
(__fdimf): Use math_narrow_eval on result.
* sysdeps/i386/fpu/s_fdim.S: Remove file.
* sysdeps/i386/fpu/s_fdimf.S: Likewise.
* sysdeps/i386/fpu/s_fdiml.S: Likewise.
* sysdeps/i386/i686/fpu/s_fdim.S: Likewise.
* sysdeps/i386/i686/fpu/s_fdimf.S: Likewise.
* sysdeps/i386/i686/fpu/s_fdiml.S: Likewise.
* sysdeps/powerpc/fpu/s_fdim.c: Likewise.
* sysdeps/powerpc/fpu/s_fdimf.c: Likewise.
* sysdeps/powerpc/powerpc32/fpu/s_fdim.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_fdim.c: Likewise.
* sysdeps/x86_64/fpu/s_fdiml.S: Likewise.
* math/libm-test.inc (fdim_test_data): Expect errno setting on
overflow. Add sNaN tests.
This implementation uses vectors to improve performance over the
current byte-by-byte implementation for POWER7. The performance
improvement is up to 4x. This patch is tested on powerpc64 and
powerpc64le.
The powerpc64 versions of ceil, floor, round, trunc, rint, nearbyint
and their float versions return sNaN for sNaN input when they should
return qNaN. This patch fixes them to add a NaN argument to itself to
quiet sNaNs before returning.
Tested for powerpc64.
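In C terms the fix amounts to the following (illustrative only; the
actual change is in the assembly implementations):

#include <math.h>

double
quiet_nan_result (double x)
{
  /* Adding a NaN to itself quiets an sNaN (raising "invalid") and
     returns the required qNaN.  */
  if (isnan (x))
    return x + x;
  return x;
}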
[BZ #20160]
* sysdeps/powerpc/powerpc64/fpu/s_ceil.S (__ceil): Add NaN
argument to itself before returning the result.
* sysdeps/powerpc/powerpc64/fpu/s_ceilf.S (__ceilf): Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_floor.S (__floor): Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_floorf.S (__floorf): Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_nearbyint.S (__nearbyint):
Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_nearbyintf.S (__nearbyintf):
Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_rint.S (__rint): Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_rintf.S (__rintf): Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_round.S (__round): Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_roundf.S (__roundf): Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_trunc.S (__trunc): Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_truncf.S (__truncf): Likewise.
The powerpc implementations of fabsl for ldbl-128ibm (both powerpc32
and powerpc64) wrongly raise the "invalid" exception for sNaN
arguments. fabs functions should be quiet for all inputs including
signaling NaNs. The problem is the use of a comparison instruction
fcmpu to determine if the high part of the argument is negative and so
the low part needs to be negated; such instructions raise "invalid"
for sNaNs.
There is a pure integer implementation of fabsl in
sysdeps/ieee754/ldbl-128ibm/s_fabsl.c. However, it's not necessary to
use it to avoid such exceptions. The fsel instruction does not raise
exceptions for sNaNs, and can be used in place of the original
comparison. (Note that if the high part is zero or a NaN, it does not
matter whether the low part is negated; the choice of whether the low
part of a zero is +0 or -0 does not affect the value, and the low part
of a NaN does not affect the value / payload either.)
The condition in GCC for fsel to be available is TARGET_PPC_GFXOPT,
corresponding to the _ARCH_PPCGR predefined macro. fsel is available
on all 64-bit processors supported by GCC. A few 32-bit processors
supported by GCC do not have TARGET_PPC_GFXOPT despite having hard
float support. To support those processors, integer code (similar to
that in copysignl) is included for the !_ARCH_PPCGR case for
powerpc32.
Tested for powerpc32 (configurations with and without _ARCH_PPCGR) and
powerpc64.
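For reference, the integer fallback amounts to something like the
following C sketch operating on the high/low double pair of an IBM long
double (the real code is assembly; names are illustrative):

#include <stdint.h>
#include <string.h>

static void
fabsl_pair_sketch (double *hi, double *lo)
{
  uint64_t hi_bits, lo_bits;
  memcpy (&hi_bits, hi, sizeof hi_bits);
  memcpy (&lo_bits, lo, sizeof lo_bits);
  if (hi_bits >> 63)                  /* high part negative?  */
    lo_bits ^= UINT64_C (1) << 63;    /* negate the low part  */
  hi_bits &= ~(UINT64_C (1) << 63);   /* clear the high part's sign  */
  memcpy (hi, &hi_bits, sizeof *hi);
  memcpy (lo, &lo_bits, sizeof *lo);
}

No floating-point comparison is involved, so no exception can be raised
for sNaN inputs.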
[BZ #20157]
* sysdeps/powerpc/powerpc32/fpu/s_fabsl.S (__fabsl): Use fsel to
determine whether to negate low half if [_ARCH_PPCGR], and integer
comparison otherwise.
* sysdeps/powerpc/powerpc64/fpu/s_fabsl.S (__fabsl): Use fsel to
determine whether to negate low half.
Continuing fixes for ceil, floor and trunc functions not to raise the
"inexact" exception, this patch fixes the versions used on older
powerpc64 processors. As was done with the round implementations some
time ago, the save of floating-point state is moved after the first
floating-point operation on the input to ensure that any "invalid"
exception from signaling NaN input is included in the saved state, and
then the whole state gets restored rather than just the rounding mode.
This has no effect on configurations using the power5+ code, since
such processors can do these operations with a single instruction (and
those instructions do not set "inexact", so are correct for TS 18661-1
semantics).
Tested for powerpc64.
[BZ #15479]
* sysdeps/powerpc/powerpc64/fpu/s_ceil.S (__ceil): Move save of
floating-point state after first floating-point operation on
input. Restore full floating-point state instead of just rounding
mode.
* sysdeps/powerpc/powerpc64/fpu/s_ceilf.S (__ceilf): Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_floor.S (__floor): Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_floorf.S (__floorf): Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_trunc.S (__trunc): Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_truncf.S (__truncf): Likewise.
The file sysdeps/powerpc/sysdeps.h defines aliases for condition register
operands. E.g.: 'cr7' means condition register 7. On the one hand, this
increases readability, as it makes it easier for readers to know whether the
operand is a condition register, a general purpose register or an immediate.
On the other hand, this permits that condition registers be written as if they
were general purpose, and vice-versa, thus reducing the readability of the
code.
This commit removes some of these unintentional misuses.
The changes have no effect on the final code. Checked with objdump.
Call __memset_power8 to pad, with zeros, the remaining bytes in the
destination string in __strncpy_power8 and __stpncpy_power8. This
improves performance when n is larger than the input string, giving a
~30% gain for larger strings without much impact on shorter strings.
This patch optimizes the strcasestr function for POWER >= 8 systems.
The average improvement of this optimization is ~40%; it compares 16
bytes at a time using vector instructions. This patch is tested on
powerpc64 and powerpc64le.
This utilizes vectors and bitmasks. For a small needle and a large
haystack, the performance improvement is up to 8x. For short strings
(0-4B), the cost of computing the bitmask dominates, and it is a tad
slower.
This patch removes the powerpc64 optimized strspn, strcspn, and
strpbrk assembly implementations now that the default C ones implement
the same strategy. On internal glibc benchtests the current
implementations show similar performance with -O2.
Tested on powerpc64le (POWER8).
* sysdeps/powerpc/powerpc64/strcspn.S: Remove file.
* sysdeps/powerpc/powerpc64/strpbrk.S: Remove file.
* sysdeps/powerpc/powerpc64/strspn.S: Remove file.
This patch adds a new feature for powerpc. In order to get faster access to
the HWCAP/HWCAP2 bits and platform number (i.e. for implementing
__builtin_cpu_is () / __builtin_cpu_supports () in GCC) without the overhead of
reading from the auxiliary vector, we now reserve space for them in the TCB.
This is an ABI change for GLIBC 2.23.
A new versioned symbol '__parse_hwcap_and_convert_at_platform' is available to
get the data from the auxiliary vector and parse it, and store it for later use
in the TLS initialization code. This function is called very early
(in _dl_sysdep_start () via DL_PLATFORM_INFO for the dynamic linking case, and
in __libc_start_main () for the static linking case) to make sure the data is
available at the time of TLS initialization.
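For context, the consumer side this enables in GCC looks like the
following (an illustrative user-level example, not part of this patch):

#include <stdio.h>

int
main (void)
{
  /* These builtins read the HWCAP/HWCAP2 bits and the platform string
     that glibc now caches in the TCB, avoiding an auxiliary-vector
     lookup at each call.  */
  if (__builtin_cpu_supports ("arch_2_07"))
    puts ("ISA 2.07 (POWER8) features available");
  if (__builtin_cpu_is ("power8"))
    puts ("running on a POWER8 platform");
  return 0;
}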
* sysdeps/powerpc/Makefile (sysdep-dl-routines): Add hwcapinfo.
(sysdep_routines): Likewise.
(sysdep-rtld-routines): Likewise.
[$(subdir) = nptl] (tests): Add test-get_hwcap and test-get_hwcap-static.
[$(subdir) = nptl] (tests-static): Add test-get_hwcap-static.
* sysdeps/powerpc/Versions: Added new
__parse_hwcap_and_convert_at_platform symbol to GLIBC_2.23.
* sysdeps/powerpc/hwcapinfo.c: New file.
(__tcb_parse_hwcap_and_convert_at_platform): New function to initialize
and parse hwcap, hwcap2 and platform number information.
* sysdeps/powerpc/hwcapinfo.h: New file. Creates global variables
to store HWCAP+HWCAP2 and platform number.
* sysdeps/powerpc/nptl/tcb-offsets.sym: Added new offsets
for HWCAP+HWCAP2 and platform number in the TCB.
* sysdeps/powerpc/nptl/tls.h: New functionality. Stores
the HWCAP, HWCAP2 and platform number in the TCB.
(dtv): Added new fields for HWCAP+HWCAP2 and platform number.
(TLS_INIT_TP): Included calls to add the hwcap and
at_platform values in the TCB in TP initialization.
(TLS_DEFINE_INIT_TP): Likewise.
(THREAD_GET_HWCAP): New macro.
(THREAD_SET_HWCAP): Likewise.
(THREAD_GET_AT_PLATFORM): Likewise.
(THREAD_SET_AT_PLATFORM): Likewise.
* sysdeps/powerpc/powerpc32/dl-machine.h:
(dl_platform_init): New function that calls
__parse_hwcap_and_convert_at_platform for the dynamic linking case for
powerpc32.
* sysdeps/powerpc/powerpc64/dl-machine.h: Likewise, for powerpc64.
* sysdeps/powerpc/test-get_hwcap-static.c: New file. Testcase for
this functionality, static linking case.
* sysdeps/powerpc/test-get_hwcap.c: New file. Likewise, dynamic
linking case.
* sysdeps/unix/sysv/linux/powerpc/libc-start.c: Added call to
__parse_hwcap_and_convert_at_platform for the static linking case.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/ld.abilist:
Included the new __parse_hwcap_and_convert_at_platform symbol in the
ABI list for GLIBC 2.23.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/ld-le.abilist:
Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/ld.abilist:
Likewise.
The powerpc hard-float round and roundf functions, both 32-bit and
64-bit, raise spurious "inexact" exceptions for integer arguments from
adding 0.5 and rounding to integer toward zero.
Since these functions already save and restore the rounding mode, it's
natural to make them restore the full floating-point state instead to
fix this bug, which this patch does. The save of the state is moved
after the first floating-point operation on the input so that any
"invalid" exceptions from signaling NaN inputs are properly
preserved. As a consequence of this approach to the fix, "inexact"
for noninteger arguments (disallowed by TS 18661-1 but not by C99/C11,
see bug 15479) is also avoided for these implementations; this is
*not* a general fix for bug 15479 since plenty of other
implementations of various functions still raise spurious "inexact"
for noninteger arguments.
This issue and fix do not apply to builds using power5+ versions of
round and roundf, which use the frin instruction and avoid "inexact"
exceptions that way.
This patch should get hard-float powerpc32 and powerpc64 (default
function implementations) back to a state where test-float and
test-double will pass after ulps regeneration.
Tested for powerpc32 and powerpc64.
[BZ #15479]
[BZ #19238]
* sysdeps/powerpc/powerpc32/fpu/s_round.S (__round): Save
floating-point state after first operation on input. Restore full
state rather than just rounding mode.
* sysdeps/powerpc/powerpc32/fpu/s_roundf.S (__roundf): Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_round.S (__round): Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_roundf.S (__roundf): Likewise.
Similar to bug 19134 for powerpc32, the powerpc64 implementations of
lround, lroundf, llround, llroundf can raise spurious "inexact"
exceptions for integer arguments from adding 0.5 then converting to
integer (this does not apply to the power5+ version for double, which
uses the frin instruction which is defined never to raise "inexact"; I
don't know why power5+ doesn't use that version for float as well).
This patch fixes the bug in a similar way to the powerpc32 bug, by
testing for integers (adding and subtracting 2^52 and comparing with
the value before that addition and subtraction) and not adding 0.5 in
that case.
The powerpc maintainers may wish to look at making power5+ / power6x /
power8 use frin for float lround / llround as well as for double,
unless there's some reason I've missed that this isn't beneficial.
Tested for powerpc64.
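In C terms the integer test is equivalent to the following sketch for
nonnegative double arguments (the fix itself is in assembly):

long long int
llround_sketch (double x)   /* assumes 0 <= x < 2^52 */
{
  /* Adding and subtracting 2^52 rounds away any fractional part, so the
     result equals x only when x is already an integer; in that case 0.5
     must not be added before converting.  */
  double t = (x + 0x1p52) - 0x1p52;
  if (t == x)
    return (long long int) x;
  return (long long int) (x + 0.5);
}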
[BZ #19235]
* sysdeps/powerpc/powerpc64/fpu/s_llround.S (__llround): Do not
add 0.5 to integer arguments.
* sysdeps/powerpc/powerpc64/fpu/s_llroundf.S (__llroundf):
Likewise.
(.LC2): New object.
Similar to bug 15491 recently fixed for x86_64 / x86, the powerpc
(both powerpc32 and powerpc64) hard-float implementations of
nearbyintf and nearbyint wrongly clear an "inexact" exception that was
raised before the function was called; this shows up as failure of the
test math/test-nearbyint-except added when that bug was fixed. They
also wrongly leave traps on "inexact" disabled if they were enabled
before the function was called.
This patch fixes the bugs similar to how the x86 bug was fixed: saving
and restoring the whole floating-point state, both to restore the
original "inexact" flag state and to restore the original state of
whether traps on "inexact" were enabled. Because there's a convenient
point in the powerpc implementations to save state after any sNaN
arguments will have raised "invalid" but before "inexact" traps need
to be disabled, no special handling for "invalid" is needed as in the
x86 version.
Tested for powerpc64 and powerpc32, where it fixes the
math/test-nearbyint-except failure as well as fixing the new test
math/test-nearbyint-except-2 added by this patch. Also tested for
x86_64 and x86 that the new test passes.
If powerpc experts see a more efficient way of doing this
(e.g. instruction positioning that's better for pipelines on typical
processors) then of course followups optimizing the fix are welcome.
[BZ #19228]
* sysdeps/powerpc/powerpc32/fpu/s_nearbyint.S (__nearbyint): Save
and restore full floating-point state.
* sysdeps/powerpc/powerpc32/fpu/s_nearbyintf.S (__nearbyintf):
Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_nearbyint.S (__nearbyint):
Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_nearbyintf.S (__nearbyintf):
Likewise.
* math/test-nearbyint-except-2.c: New file.
* math/Makefile (tests): Add test-nearbyint-except-2.
The file sysdeps/powerpc/sysdeps.h defines aliases for register operands,
which add the letter 'r' as a prefix to a register name. E.g.: register 20
can be written as 'r20', instead of '20'. On the one hand, this increases
readability, as it makes it easier for readers to know whether the operand is a
register or an immediate. On the other hand, this permits that immediate
operands be written as if they were registers, and vice-versa, thus reducing
the readability of the code.
This commit removes some of these unintentional misuses.
This commit also increases readability of the code by adding the prefix 'cr' to
some uses of the control register.
Both changes have no effect on the final code. Checked with objdump.
* sysdeps/powerpc/powerpc64/power8/strncpy.S: Remove or add register
prefix from operands.
It was noted in
<https://sourceware.org/ml/libc-alpha/2012-09/msg00305.html> that the
bits/*.h naming scheme should only be used for installed headers.
This patch renames bits/atomic.h to atomic-machine.h to follow that
convention.
This is the only change in this series that needs to change the
filename rather than simply removing a directory level (because both
atomic.h and bits/atomic.h exist at present).
Tested for x86_64 (testsuite, and that installed stripped shared
libraries are unchanged by the patch).
[BZ #14912]
* sysdeps/aarch64/bits/atomic.h: Move to ...
* sysdeps/aarch64/atomic-machine.h: ...here.
(_AARCH64_BITS_ATOMIC_H): Rename macro to
_AARCH64_ATOMIC_MACHINE_H.
* sysdeps/alpha/bits/atomic.h: Move to ...
* sysdeps/alpha/atomic-machine.h: ...here.
* sysdeps/arm/bits/atomic.h: Move to ...
* sysdeps/arm/atomic-machine.h: ...here. Update comments.
* bits/atomic.h: Move to ...
* sysdeps/generic/atomic-machine.h: ...here.
(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
* sysdeps/i386/bits/atomic.h: Move to ...
* sysdeps/i386/atomic-machine.h: ...here.
* sysdeps/ia64/bits/atomic.h: Move to ...
* sysdeps/ia64/atomic-machine.h: ...here.
* sysdeps/m68k/coldfire/bits/atomic.h: Move to ...
* sysdeps/m68k/coldfire/atomic-machine.h: ...here.
(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
* sysdeps/m68k/m680x0/m68020/bits/atomic.h: Move to ...
* sysdeps/m68k/m680x0/m68020/atomic-machine.h: ...here.
* sysdeps/microblaze/bits/atomic.h: Move to ...
* sysdeps/microblaze/atomic-machine.h: ...here.
* sysdeps/mips/bits/atomic.h: Move to ...
* sysdeps/mips/atomic-machine.h: ...here.
(_MIPS_BITS_ATOMIC_H): Rename macro to _MIPS_ATOMIC_MACHINE_H.
* sysdeps/powerpc/bits/atomic.h: Move to ...
* sysdeps/powerpc/atomic-machine.h: ...here. Update comments.
* sysdeps/powerpc/powerpc32/bits/atomic.h: Move to ...
* sysdeps/powerpc/powerpc32/atomic-machine.h: ...here. Update
comments. Include <atomic-machine.h> instead of <bits/atomic.h>.
* sysdeps/powerpc/powerpc64/bits/atomic.h: Move to ...
* sysdeps/powerpc/powerpc64/atomic-machine.h: ...here. Include
<atomic-machine.h> instead of <bits/atomic.h>.
* sysdeps/s390/bits/atomic.h: Move to ...
* sysdeps/s390/atomic-machine.h: ...here.
* sysdeps/sparc/sparc32/bits/atomic.h: Move to ...
* sysdeps/sparc/sparc32/atomic-machine.h: ...here.
(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
* sysdeps/sparc/sparc32/sparcv9/bits/atomic.h: Move to ...
* sysdeps/sparc/sparc32/sparcv9/atomic-machine.h: ...here.
* sysdeps/sparc/sparc64/bits/atomic.h: Move to ...
* sysdeps/sparc/sparc64/atomic-machine.h: ...here.
* sysdeps/tile/bits/atomic.h: Move to ...
* sysdeps/tile/atomic-machine.h: ...here.
* sysdeps/tile/tilegx/bits/atomic.h: Move to ...
* sysdeps/tile/tilegx/atomic-machine.h: ...here. Include
<sysdeps/tile/atomic-machine.h> instead of
<sysdeps/tile/bits/atomic.h>.
(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
* sysdeps/tile/tilepro/bits/atomic.h: Move to ...
* sysdeps/tile/tilepro/atomic-machine.h: ...here. Include
<sysdeps/tile/atomic-machine.h> instead of
<sysdeps/tile/bits/atomic.h>.
(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
* sysdeps/unix/sysv/linux/arm/bits/atomic.h: Move to ...
* sysdeps/unix/sysv/linux/arm/atomic-machine.h: ...here. Include
<sysdeps/arm/atomic-machine.h> instead of
<sysdeps/arm/bits/atomic.h>.
* sysdeps/unix/sysv/linux/hppa/bits/atomic.h: Move to ...
* sysdeps/unix/sysv/linux/hppa/atomic-machine.h: ...here.
(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
* sysdeps/unix/sysv/linux/m68k/coldfire/bits/atomic.h: Move to ...
* sysdeps/unix/sysv/linux/m68k/coldfire/atomic-machine.h: ...here.
(_BITS_ATOMIC_H): Rename macro to _ATOMIC_MACHINE_H.
* sysdeps/unix/sysv/linux/nios2/bits/atomic.h: Move to ...
* sysdeps/unix/sysv/linux/nios2/atomic-machine.h: ...here.
(_NIOS2_BITS_ATOMIC_H): Rename macro to _NIOS2_ATOMIC_MACHINE_H.
* sysdeps/unix/sysv/linux/sh/bits/atomic.h: Move to ...
* sysdeps/unix/sysv/linux/sh/atomic-machine.h: ...here.
* sysdeps/x86_64/bits/atomic.h: Move to ...
* sysdeps/x86_64/atomic-machine.h: ...here.
* include/atomic.h: Include <atomic-machine.h> instead of
<bits/atomic.h>.
Fix usage of tabort in generated syscalls. r0 has special meaning
when used with this instruction, thus it will not generate
persistent errors, nor return an error code. This mitigates poor
CPU usage when performing elided critical sections.
Additionally, transactions should be aborted when entering a user
invoked syscall. Otherwise the results of the transaction may be
undefined.
2015-08-25 Paul E. Murphy <murphyp@linux.vnet.ibm.com>
* sysdeps/powerpc/powerpc32/sysdep.h (ABORT_TRANSACTION): Use
register other than r0 for tabort, it has special meaning.
* sysdeps/powerpc/powerpc64/sysdep.h (ABORT_TRANSACTION): Likewise.
* sysdeps/unix/sysv/linux/powerpc/syscall.S (syscall): Abort
transaction before starting syscall.
Instead of checking the needle length, a constant number 'n' of
comparisons is performed before falling back to the default
implementation. This patch is tested on powerpc64 and powerpc64le.
2015-08-25 Rajalakshmi Srinivasaraghavan <raji@linux.vnet.ibm.com>
* sysdeps/powerpc/powerpc64/power7/strstr.S: Handle worst case.
In powerpc64, memchr was always pointing to the internal __GI_memchr
implementation. This patch fixes that and makes it use the
optimized POWER7 version when appropriate.
* sysdeps/powerpc/powerpc64/multiarch/memchr-ppc64.c: Make
memchr not point to the internal __GI_memchr implementation.
When building with --disable-multi-arch, the memmove and strstr POWER7
optimizations create and use symbols that conflict with the expected
conform tests.
* sysdeps/powerpc/powerpc64/power7/memmove.S (bcopy): Change to
__bcopy and add a weak alias to bcopy.
* sysdeps/powerpc/powerpc64/power7/strstr.S (strstr): Use __strnlen
for static build.
This patch uses the default strcpy/stpcpy implementation for
POWER7/PPC64. This is faster for most benchtest inputs, and for
multiarch the implementation uses the POWER7 strlen and memcpy.
* string/stpcpy.c (__stpcpy): Use STPCPY to redefine symbol name and
cleanup macro usage.
* string/strcpy.c (strcpy): Use STRCPY to redefine symbol name.
* sysdeps/powerpc/powerpc64/multiarch/stpcpy-power7.S: Remove file.
* sysdeps/powerpc/powerpc64/multiarch/stpcpy-ppc64.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strcpy-power7.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strcpy-ppc64.S: Likewise.
* sysdeps/powerpc/powerpc64/power7/stpcpy.S: Likewise.
* sysdeps/powerpc/powerpc64/power7/strcpy.S: Likewise.
* sysdeps/powerpc/powerpc64/power7/strcpy.c: Likewise.
* sysdeps/powerpc/powerpc64/stpcpy.S: Likewise.
* sysdeps/powerpc/powerpc64/strcpy.S: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strcpy.c
[SHARED && IS_IN (libc)]: Include <string/strcpy.c>.
* sysdeps/powerpc/powerpc64/multiarch/stpcpy.c
[SHARED && IS_IN (libc)]: Include <string/stpcpy.c>.
* sysdeps/powerpc/powerpc64/multiarch/stpcpy-power7.c: New file.
* sysdeps/powerpc/powerpc64/multiarch/stpcpy-ppc64.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strcpy-power7.c: Likewise.
* sysdeps/powerpc/powerpc64/multiarch/strcpy-ppc64.c: Likewise.
* sysdeps/powerpc/powerpc64/power7/strcpy.c: Likewise.
This patch fixes the strstr build with the --disable-multi-arch option.
The optimization calls the __strstr_ppc symbol, which is always built
for multiarch configurations but not when multiarch is disabled. This
patch fixes it by adding the default C implementation object with the
expected symbol name.
* sysdeps/powerpc/powerpc64/power7/Makefile [$(subdir) = string]
(sysdep_routines): Add strstr-ppc64.
* sysdeps/powerpc/powerpc64/power7/strstr-ppc64.c: New file.
This patch optimizes the strstr function for POWER >= 7 systems. The
performance gain is obtained using aligned memory accesses and the cmpb
instruction for quicker comparisons. The average improvement of this
optimization is ~40%. Tested on ppc64 and ppc64le.
2015-07-16 Rajalakshmi Srinivasaraghavan <raji@linux.vnet.ibm.com>
* sysdeps/powerpc/powerpc64/multiarch/Makefile: Add strstr().
* sysdeps/powerpc/powerpc64/multiarch/ifunc-impl-list.c: Likewise.
* sysdeps/powerpc/powerpc64/power7/strstr.S: New File.
* sysdeps/powerpc/powerpc64/multiarch/strstr-power7.S: New File.
* sysdeps/powerpc/powerpc64/multiarch/strstr-ppc64.c: New File.
* sysdeps/powerpc/powerpc64/multiarch/strstr.c: New File.
IFUNC is difficult to correctly implement on any target needing a GOT
to support position independent code, due to the dependency on order
of dynamic relocations. ld.so should be changed to apply IFUNC
relocations last, globally, because without that it is actually
impossible to write an IFUNC resolver in C that works in all
situations. Case in point, vfork in libpthread.so is an IFUNC with
the resolver returning &__libc_vfork. (system and fork are similar.)
If another shared library, libA say, uses vfork then it is quite
possible that libpthread.so hasn't been dynamically relocated before
the unfortunate libA is dynamically relocated. In that case the GOT
entry for &__libc_vfork is still zero, so the IFUNC resolver returns
NULL. LD_BIND_NOW=1 results in libA PLT dynamic relocations being
applied using this NULL value and ld.so segfaults.
This patch hardens ld.so to not segfault on a NULL from an IFUNC
resolver. It also fixes a problem with undefined weak. If you leave
the plt entry as-is for undefined weak then if the entry is ever
called it will loop in ld.so rather than segfaulting.
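A reduced C sketch of the resolver situation described above
(simplified and with an invented symbol name 'my_vfork'; the real code
lives in libpthread):

#include <sys/types.h>

typedef pid_t vfork_fn_t (void);
extern vfork_fn_t __libc_vfork;

static vfork_fn_t *
vfork_resolver (void)
{
  /* If this object has not been dynamically relocated yet, the GOT
     entry behind &__libc_vfork still holds zero, so ld.so is handed a
     NULL function address.  */
  return &__libc_vfork;
}

pid_t my_vfork (void) __attribute__ ((ifunc ("vfork_resolver")));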
* sysdeps/powerpc/powerpc64/dl-machine.h (elf_machine_fixup_plt):
Don't segfault if ifunc resolver returns a NULL. Do set plt to
zero for undefined weak.
(elf_machine_plt_conflict): Similarly.
This patch is glibc support for a PowerPC TLS optimization, inspired
by Alexandre Oliva's TLS optimization for other processors,
http://www.lsd.ic.unicamp.br/~oliva/writeups/TLS/RFC-TLSDESC-x86.txt
In essence, this optimization uses a zero module id in the tls_index
GOT entry to indicate that a TLS variable is allocated space in the
static TLS area. A special plt call linker stub for __tls_get_addr
checks for such a tls_index and if found, returns the offset
immediately. The linker communicates the fact that the special
__tls_get_addr stub is used by setting a bit in the dynamic tag
DT_PPC64_OPT/DT_PPC_OPT. glibc communicates to the linker that this
optimization is available by the presence of __tls_get_addr_opt.
tst-tlsmod2.so is built with -Wl,--no-tls-get-addr-optimize for
tst-tls-dlinfo, which otherwise would fail since it tests that no
static tls is allocated. The ld option --no-tls-get-addr-optimize has
been available since binutils-2.20 so doesn't need a configure test.
* NEWS: Advertise TLS optimization.
* elf/elf.h (R_PPC_TLSGD, R_PPC_TLSLD, DT_PPC_OPT, PPC_OPT_TLS): Define.
(DT_PPC_NUM): Increment.
* elf/dynamic-link.h (HAVE_STATIC_TLS): Define.
(CHECK_STATIC_TLS): Use here.
* sysdeps/powerpc/powerpc32/dl-machine.h (elf_machine_rela): Optimize
TLS descriptors.
* sysdeps/powerpc/powerpc64/dl-machine.h (elf_machine_rela): Likewise.
* sysdeps/powerpc/dl-tls.c: New file.
* sysdeps/powerpc/Versions: Add __tls_get_addr_opt.
* sysdeps/powerpc/tst-tlsopt-powerpc.c: New tls test.
* sysdeps/unix/sysv/linux/powerpc/Makefile: Add new test.
Build tst-tlsmod2.so with --no-tls-get-addr-optimize.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/ld.abilist: Update.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/ld.abilist: Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/ld-le.abilist: Likewise.
This feature doesn't depend on the linker, as can be seen from the
actual test. It's a compiler feature.
* sysdeps/powerpc/powerpc64/configure.ac: Correct "linker support
for overlapping .opd entries" to "support...".
* sysdeps/powerpc/powerpc64/configure: Regenerate.
With the AIX port deprecated there is no need to check/define
HAVE_ASM_GLOBAL_DOT_NAME anymore, since the current minimum supported
binutils (2.22) does not emit global symbols with a dot.
This patch removes all the HAVE_ASM_GLOBAL_DOT_NAME definitions and
checks for the powerpc64 port.
This patch fixes the missing "__memcpy_ppc" symbol for the
memmove-ppc64 object in static builds. Since the memcpy ifunc is not
enabled in static mode, the specialized symbols are not provided. The
patch changes the call to use just "__memcpy" instead.
This patch optimizes strncpy for POWER7 for unaligned source or
destination addresses. The source or destination address is aligned to
a doubleword and the data is shifted based on the alignment and
combined with the previously loaded data to be written as a doubleword.
For each load, the cmpb instruction is used for a faster null check.
The new optimization shows a 10 to 70% performance improvement for
longer strings, though it does not show a big difference for string
sizes less than 16 due to the additional checks. Hence this new
algorithm is restricted to strings longer than 16.
This patch cleans up some multiarch code related to the memmove
optimization. Initial IFUNC support added specialized wordcopy symbols
which turned into local IFUNC calls used by the default memmove
implementation. The patch removes the internal IFUNC for the wordcopy
symbols and uses local branches in the memmove optimization instead.
This patch cleans up some multiarch code related to the memmove
optimization. Initial IFUNC support added specialized wordcopy symbols
which turned into local IFUNC calls used by the default memmove
implementation.
This change removes them and uses the optimized memmove instead for
supported chips.
This patch simplifies the default bcopy symbol for powerpc64 by just
using memmove instead of implementing it with the default bcopy. Since
the symbol is deprecated, it trades speed for code size.
Some powerpc64 processors (the e5500 core for instance) do not provide
the fsqrt instruction; however, the current check used in
math_private.h is __WORDSIZE and _ARCH_PWR4 (ISA 2.02). This patch
changes it to use the compiler macro _ARCH_PPCSQ (which is the same
condition GCC uses to decide whether to generate the fsqrt
instruction).
It fixes BZ#16576.
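The guard in sysdeps/powerpc/fpu/math_private.h then becomes, in
outline (a sketch, not the verbatim change):

/* _ARCH_PPCSQ is predefined by GCC exactly when it would itself emit
   fsqrt, so use it instead of __WORDSIZE/_ARCH_PWR4.  */
#ifdef _ARCH_PPCSQ
static inline double
__ieee754_sqrt_sketch (double x)
{
  double z;
  __asm__ ("fsqrt %0,%1" : "=f" (z) : "f" (x));
  return z;
}
#endif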
The GLIBC memset optimization for POWER8 uses the '.machine power8'
directive, which is only officially supported on binutils 2.24+. This
causes a build failure on older binutils.
Since the point of '.machine power8' is to correctly assemble the
'mtvsrd' instruction, and that is already handled by the MTVSRD_V1_R4
macro, there is no real need for it. The patch replaces power8 with
power7 in the .machine directive.
It fixes BZ#17869.
This patch fixes the elf/ifuncmain6pie failure when building with GCC
4.9+. For some reason, the compiler removes the branch-taken code at
resolve_ifunc (sysdeps/powerpc/powerpc64/dl-machine.h) as dead code,
and thus the testcase fails because the ifunc resolver branches to an
invalid memory location. It is fixed by explicitly making the value
depend on the odp variable, to avoid the compiler optimization.
It fixes BZ#17868.
This patch fixes a performance regression in the POWER7/PPC64 memcmp
port for Little Endian. The LE code uses the 'ldbrx' instruction to
read memory in byte-reversed form; however, ISA 2.06 only provides the
indexed form, which uses a register value as an additional index
instead of a fixed value encoded in the instruction.
The port strategy for LE uses the r0 index value and updates the
address on each compare loop iteration. For large compare sizes this
adds 8 more instructions, plus some more depending on the trailing
size. This patch fixes it by using pre-calculated indexes to remove the
address updates in the loops and trailing sizes.
For large sizes it shows a considerable gain, doubling performance and
pairing with BE.
This patch adds an optimized POWER8 strncmp. The implementation
focuses on speeding up unaligned cases, following the ideas of the
POWER8 strcmp.
The algorithm first checks the initial 16 bytes, then aligns the first
source argument and uses unaligned loads on the second argument only.
Additional checks for page boundaries are done for the unaligned cases
(where the source alignments differ).
This patch optimizes the POWER7 trailing check by avoiding byte read
operations and instead using the already-loaded doubleword with bitwise
operations.
This patch adds an optimized POWER8 strcmp using unaligned accesses.
The algorithm first checks the initial 16 bytes, then aligns the first
source argument and uses unaligned loads on the second argument only.
Additional checks for page boundaries are done for the unaligned cases.
This patch adds an optimized POWER8 st{r,p}ncpy using unaligned
accesses. It shows a 10%-80% improvement over the optimized POWER7 one
that uses only aligned accesses, especially on unaligned inputs.
The algorithm first reads and checks 16 bytes (if the inputs do not
cross a 4K page boundary). It then realigns the source to 16 bytes and
issues a 16-byte read-and-compare loop to speed up null byte checks for
large strings. Also, differently from the POWER7 optimization, the null
padding is done inline in the implementation using possibly unaligned
accesses, instead of relying on a memset call. A special case is added
for page-crossing reads.
With 3eb38795db (Simplify strncat) the generic algorithm uses strlen,
strnlen, and memcpy. This is faster than the current POWER7
implementation, especially for unaligned strings (where the POWER7 code
uses byte-by-byte operations).
This patch removes the assembly implementation and uses a multiarch
specialization based on the default algorithm calling the optimized
POWER7 symbols.
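For reference, the simplified generic strategy the strncat change
relies on is roughly the following C sketch (the real string/strncat.c
may differ in details):

#include <string.h>

char *
strncat_sketch (char *s1, const char *s2, size_t n)
{
  char *end = s1 + strlen (s1);    /* find the end of the destination  */
  size_t len = strnlen (s2, n);    /* copy at most n bytes of s2  */
  end[len] = '\0';                 /* terminate the result  */
  memcpy (end, s2, len);
  return s1;
}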
This patch adds an optimized POWER8 strcpy using unaligned accesses.
For strings up to 16 bytes the implementation first calculates the
string size, like strlen, and issues a memcpy. For larger strings, the
source is first aligned to 16 bytes and then processed in a loop that
reads 16 bytes and combines the cmpb results for speedup. A special
case is added for page-crossing reads.
It shows a 30%-60% improvement over the optimized POWER7 one that uses
only aligned accesses.
The Linux kernel powerpc documentation states that issuing a syscall
inside a transaction is not recommended and may lead to undefined
behavior. It also states that syscalls do not abort a transaction, nor
do they run in transactional state.
To avoid side effects being visible outside transactions, GLIBC with
lock elision enabled will issue a transaction abort instruction just
before all syscalls if the hardware supports hardware transactions.
This patch optimizes strcpy for ppc64/power7 for unaligned source or
destination addresses. The source or destination address is aligned to
a doubleword and the data is shifted based on the alignment and
combined with the previously loaded data to be written as a doubleword.
For each load, the cmpb instruction is used for a faster null check.
The word-aligned optimization is also removed, since the new unaligned
code path shows better results handling word-aligned strings.
More combinations of unaligned inputs are also added to the benchtests
to measure the improvement. The new optimization shows a 2 to 80%
performance improvement for longer strings, though it does not show a
big difference for string sizes less than 16 due to the additional
checks.
Use of strftime, a C90 function, ends up bringing in wcschr, which is
not a C90 function. Although not a conformance bug (C90 reserves
wcs*), this is still contrary to glibc practice of avoiding relying on
those reservations; this patch arranges for the internal uses to use
__wcschr instead, with wcschr being a weak alias. This is more
complicated than some such patches because of the various IFUNC
definitions of wcschr (which include code redefining libc_hidden_def
in a way that involves creating __GI_wcschr manually and so also needs
to create __GI___wcschr after the change of internal uses to use
__wcschr).
Tested for x86_64 and 32-bit x86 (testsuite, and that disassembly of
installed shared libraries is unchanged by the patch).
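The underlying naming pattern is the usual glibc one, sketched below
(simplified; the multiarch wrappers add the IFUNC layers on top of it):

#include <stddef.h>
#include <wchar.h>

wchar_t *
__wcschr (const wchar_t *wcs, wchar_t wc)
{
  do
    if (*wcs == wc)
      return (wchar_t *) wcs;
  while (*wcs++ != L'\0');
  return NULL;
}
libc_hidden_def (__wcschr)
/* The ISO C name is only a weak alias, so internal references can use
   __wcschr without relying on the user-reservable wcs* namespace.  */
weak_alias (__wcschr, wcschr)
libc_hidden_weak (wcschr)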
2014-12-10 Joseph Myers <joseph@codesourcery.com>
Adhemerval Zanella <azanella@linux.vnet.ibm.com>
[BZ #17634]
* wcsmbs/wcschr.c [!WCSCHR] (wcschr): Define as __wcschr.
Undefine after defining function. Define as weak alias of
__wcschr. Use libc_hidden_weak.
* include/wchar.h (__wcschr): Declare. Use libc_hidden_proto.
* sysdeps/i386/i686/multiarch/wcschr-c.c [IS_IN (libc) && SHARED]
(libc_hidden_def): Also define __GI___wcschr alias.
* sysdeps/i386/i686/multiarch/wcschr.S (wcschr): Rename to
__wcschr and define as weak alias of __wcschr.
* sysdeps/powerpc/power6/wcschr.c [!WCSCHR] (WCSCHR): Define as
__wcschr.
[!WCSCHR] (DEFAULT_WCSCHR): Define.
[DEFAULT_WCSCHR] (__wcschr): Use libc_hidden_def.
[DEFAULT_WCSCHR] (wcschr): Define as weak alias of __wcschr. Use
libc_hidden_weak. Do not use libc_hidden_def.
* sysdeps/powerpc/powerpc32/power4/multiarch/wcschr-ppc32.c
[IS_IN (libc) && SHARED] (libc_hidden_def): Also define
__GI___wcschr alias.
* sysdeps/powerpc/powerpc32/power4/multiarch/wcschr.c
[IS_IN (libc)] (wcschr): Define as macro expanding to
__redirect_wcschr.
[IS_IN (libc)] (__wcschr_ppc): Use __redirect_wcschr in typeof.
[IS_IN (libc)] (__wcschr_power6): Likewise.
[IS_IN (libc)] (__wcschr_power7): Likewise.
[IS_IN (libc)] (__libc_wcschr): New. Define with libc_ifunc
instead of wcschr.
[IS_IN (libc)] (wcschr): Undefine and define as weak alias of
__libc_wcschr.
[!IS_IN (libc)] (libc_hidden_def): Do not undefine and redefine.
* sysdeps/powerpc/powerpc64/multiarch/wcschr.c (wcschr): Rename to
__wcschr and define as weak alias of __wcschr. Use
libc_hidden_builtin_def.
* sysdeps/x86_64/wcschr.S (wcschr): Rename to __wcschr and define
as weak alias of __wcschr. Use libc_hidden_weak.
* time/alt_digit.c (_nl_get_walt_digit): Use __wcschr instead of
wcschr.
* time/era.c (_nl_init_era_entries): Likewise.
* conform/Makefile (test-xfail-ISO/time.h/linknamespace): Remove
variable.
(test-xfail-XPG3/time.h/linknamespace): Likewise.
(test-xfail-XPG4/time.h/linknamespace): Likewise.
This patch makes the POWER7 optimized strpbrk generic by using
default doubleword stores to zero the hash, instead of VSX
instructions. Performance on POWER7/POWER8 does not change.
This patch makes the POWER7 optimized strcspn generic by using
default doubleword stores to zero the hash, instead of VSX
instructions. Performance on POWER7/POWER8 does not change.
This patch makes the POWER7 optimized strspn generic by using
default doubleword stores to zero the hash, instead of VSX
instructions. Performance on POWER7/POWER8 machines does not change.
This patch optimizes strtok and strtok_r for POWERPC64.
A table of 256 characters is created and marked based on the 'accept'
argument, and it is used to check for any occurrence in the input
string. Loop unrolling is also used to gain improvements.
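In C terms the approach is roughly the following (an illustrative
sketch; the optimized implementation is in assembly and also unrolls
its loops):

#include <stddef.h>

static char *
strtok_r_sketch (char *s, const char *delim, char **save)
{
  unsigned char is_delim[256] = { 0 };
  for (const unsigned char *d = (const unsigned char *) delim; *d; ++d)
    is_delim[*d] = 1;

  if (s == NULL)
    s = *save;
  while (*s != '\0' && is_delim[(unsigned char) *s])   /* skip delimiters  */
    ++s;
  if (*s == '\0')
    {
      *save = s;
      return NULL;
    }

  char *token = s;
  while (*s != '\0' && !is_delim[(unsigned char) *s])  /* scan the token  */
    ++s;
  if (*s != '\0')
    *s++ = '\0';
  *save = s;
  return token;
}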
On powerpc, atomic_exchange_and_add is implemented without any
barriers. This patch adds the missing instruction and memory barriers
for acquire and release semantics.
This sets __HAVE_64B_ATOMICS if 64-bit atomics are provided. It also sets
USE_ATOMIC_COMPILER_BUILTINS to true if the existing atomic ops use the
__atomic* builtins (aarch64, mips partially) or if this has been
tested (x86_64); otherwise, this is set to false so that C11 atomics will
be based on the existing atomic operations.
This patch fixes the build of C mempcpy and stpcpy by disabling the
redirection to __mempcpy and __stpcpy asm names if
NO_MEMPCPY_STPCPY_REDIRECT is defined, and defining that macro in the
relevant source files.
Tested for powerpc32 that the build is fixed.
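The pattern in the affected source files is simply the following
(sketch):

/* Define the guard before <string.h> so the declarations of mempcpy
   and stpcpy are not redirected to the __mempcpy/__stpcpy asm names
   while this C implementation is being built.  */
#define NO_MEMPCPY_STPCPY_REDIRECT
#include <string.h>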
* include/string.h [NO_MEMPCPY_STPCPY_REDIRECT] (mempcpy): Do not
redeclare with asm name.
[NO_MEMPCPY_STPCPY_REDIRECT] (stpcpy): Likewise.
* string/mempcpy.c (NO_MEMPCPY_STPCPY_REDIRECT): Define before
including <string.h>.
* string/stpcpy.c (NO_MEMPCPY_STPCPY_REDIRECT): Likewise.
* sysdeps/powerpc/powerpc32/power4/multiarch/mempcpy.c
[!NOT_IN_libc] (NO_MEMPCPY_STPCPY_REDIRECT): Likewise.
* sysdeps/powerpc/powerpc64/multiarch/mempcpy.c
[!NOT_IN_libc] (NO_MEMPCPY_STPCPY_REDIRECT): Likewise.
* sysdeps/powerpc/powerpc64/multiarch/stpcpy.c
[SHARED && !NOT_IN_libc] (NO_MEMPCPY_STPCPY_REDIRECT): Likewise.
Completing the removal of the obsolete INTDEF / INTUSE mechanism, this
patch removes the final use - that for _dl_starting_up - replacing it
by rtld_hidden_def / rtld_hidden_proto. Having removed the last use,
the mechanism itself is also removed.
Tested for x86_64 that installed stripped shared libraries are
unchanged by the patch. (This is not much of a test since this
variable is only defined and used in the !HAVE_INLINED_SYSCALLS case.)
[BZ #14132]
* include/libc-symbols.h (INTUSE): Remove macro.
(INTDEF): Likewise.
(INTVARDEF): Likewise.
(_INTVARDEF): Likewise.
(INTDEF2): Likewise.
(INTVARDEF2): Likewise.
* elf/rtld.c [!HAVE_INLINED_SYSCALLS] (_dl_starting_up): Use
rtld_hidden_def instead of INTVARDEF.
* sysdeps/generic/ldsodefs.h [IS_IN_rtld]
(_dl_starting_up_internal): Remove declaration.
(_dl_starting_up): Use rtld_hidden_proto.
* elf/dl-init.c [!HAVE_INLINED_SYSCALLS] (_dl_starting_up): Remove
declaration.
[!HAVE_INLINED_SYSCALLS] (_dl_starting_up_internal): Likewise.
(_dl_init) [!HAVE_INLINED_SYSCALLS]: Don't use INTUSE with
_dl_starting_up.
* elf/dl-writev.h (_dl_writev): Likewise.
* sysdeps/powerpc/powerpc64/dl-machine.h [!HAVE_INLINED_SYSCALLS]
(DL_STARTING_UP_DEF): Use __GI__dl_starting_up instead of
_dl_starting_up_internal.
Continuing the removal of the obsolete INTDEF / INTUSE mechanism, this
patch replaces its use for _dl_argv with rtld_hidden_data_def and
rtld_hidden_proto. Some places in .S files that previously used
_dl_argv_internal or INTUSE(_dl_argv) now use __GI__dl_argv directly
(there are plenty of existing examples of such direct use of __GI_*).
A single place in rtld.c previously used _dl_argv without INTUSE,
apparently accidentally, while the rtld_hidden_proto mechanism avoids
such accidental omissions. As a consequence, this patch *does*
change the contents of stripped ld.so. However, the installed
stripped shared libraries are identical to those you get if instead of
this patch you change that single _dl_argv use to use INTUSE, without
any other changes.
Tested for x86_64 (testsuite as well as comparison of installed
stripped shared libraries as described above).
[BZ #14132]
* sysdeps/generic/ldsodefs.h (_dl_argv): Use rtld_hidden_proto.
[IS_IN_rtld] (_dl_argv_internal): Do not declare.
(rtld_progname): Make macro definition unconditional.
* elf/rtld.c (_dl_argv): Use rtld_hidden_data_def instead of
INTDEF.
(dlmopen_doit): Do not use INTUSE with _dl_argv.
(dl_main): Likewise.
* elf/dl-sysdep.c (_dl_sysdep_start): Likewise.
* sysdeps/alpha/dl-machine.h (RTLD_START): Use __GI__dl_argv
instead of _dl_argv_internal.
* sysdeps/powerpc/powerpc32/dl-start.S (_dl_start_user): Use
__GI__dl_argv instead of INTUSE(_dl_argv).
* sysdeps/powerpc/powerpc64/dl-machine.h (RTLD_START): Use
__GI__dl_argv instead of _dl_argv_internal.