This patch continues the preparation for additional _FloatN / _FloatNx
function aliases by using libm_alias_ldouble for sysdeps/i386/fpu long
double functions, so that they can have _Float64x aliases added in
future.
Tested for x86_64 (which includes some of these implementations) and
x86, including build-many-glibcs.py tests that installed stripped
shared libraries are unchanged by the patch.
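For reference, a rough sketch of the pattern each file is converted to; the
macro is the real one from <libm-alias-ldouble.h>, but the expansion shown is
only the simplified generic idea (the point being that a _Float64x alias can
later be added in that one macro rather than in every file):

#include <libm-alias-ldouble.h>

/* Roughly: libm_alias_ldouble (from, to) expands to
   weak_alias (from ## l, to ## l) in the generic case, with room for a
   _Float64x alias to be added later in the same place.  */
libm_alias_ldouble (__atan, atan)   /* was: weak_alias (__atanl, atanl) */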
* sysdeps/i386/fpu/e_expl.S: Include <libm-alias-ldouble.h>.
[USE_AS_EXPM1L] (expm1l): Define using libm_alias_ldouble.
* sysdeps/i386/fpu/s_asinhl.S: Include <libm-alias-ldouble.h>.
(asinhl): Define using libm_alias_ldouble.
* sysdeps/i386/fpu/s_atanl.c: Include <libm-alias-ldouble.h>.
(atanl): Define using libm_alias_ldouble.
* sysdeps/i386/fpu/s_cbrtl.S: Include <libm-alias-ldouble.h>.
(cbrtl): Define using libm_alias_ldouble.
* sysdeps/i386/fpu/s_ceill.S: Include <libm-alias-ldouble.h>.
(ceill): Define using libm_alias_ldouble.
* sysdeps/i386/fpu/s_copysignl.S: Include <libm-alias-ldouble.h>.
(copysignl): Define using libm_alias_ldouble.
* sysdeps/i386/fpu/s_fabsl.S: Include <libm-alias-ldouble.h>.
(fabsl): Define using libm_alias_ldouble.
* sysdeps/i386/fpu/s_floorl.S: Include <libm-alias-ldouble.h>.
(floorl): Define using libm_alias_ldouble.
* sysdeps/i386/fpu/s_fmaxl.S: Include <libm-alias-ldouble.h>.
(fmaxl): Define using libm_alias_ldouble.
* sysdeps/i386/fpu/s_fminl.S: Include <libm-alias-ldouble.h>.
(fminl): Define using libm_alias_ldouble.
* sysdeps/i386/fpu/s_frexpl.S: Include <libm-alias-ldouble.h>.
(frexpl): Define using libm_alias_ldouble.
* sysdeps/i386/fpu/s_llrintl.S: Include <libm-alias-ldouble.h>.
(llrintl): Define using libm_alias_ldouble.
* sysdeps/i386/fpu/s_logbl.c: Include <libm-alias-ldouble.h>.
(logbl): Define using libm_alias_ldouble.
* sysdeps/i386/fpu/s_lrintl.S: Include <libm-alias-ldouble.h>.
(lrintl): Define using libm_alias_ldouble.
* sysdeps/i386/fpu/s_nearbyintl.S: Include <libm-alias-ldouble.h>.
(nearbyintl): Define using libm_alias_ldouble.
* sysdeps/i386/fpu/s_nextafterl.c: Include <libm-alias-ldouble.h>.
(nextafterl): Define using libm_alias_ldouble.
* sysdeps/i386/fpu/s_remquol.S: Include <libm-alias-ldouble.h>.
(remquol): Define using libm_alias_ldouble.
* sysdeps/i386/fpu/s_rintl.c: Include <libm-alias-ldouble.h>.
(rintl): Define using libm_alias_ldouble.
* sysdeps/i386/fpu/s_truncl.S: Include <libm-alias-ldouble.h>.
(truncl): Define using libm_alias_ldouble.
* sysdeps/i386/i686/fpu/s_fmaxl.S: Include <libm-alias-ldouble.h>.
(fmaxl): Define using libm_alias_ldouble.
* sysdeps/i386/i686/fpu/s_fminl.S: Include <libm-alias-ldouble.h>.
(fminl): Define using libm_alias_ldouble.
This patch replaces i386 assembly versions of e_exp2f with generic
e_exp2f.c. For workload-spec2017.wrf, on Nehalem, it improves
performance by:
                         Before      After     Improvement
reciprocal-throughput   112.996    40.0454        182%
latency                 126.581    54.4479        132%
On Skylake, it improves performance by:
                         Before      After     Improvement
reciprocal-throughput   113.14     39.447         186%
latency                 136.068    55.684         144%
On IvyBridge with --disable-multi-arch, it improves performance by:
                         Before      After     Improvement
reciprocal-throughput   132.521    40.3759        228%
latency                 145.791    58.4587        149%
* sysdeps/i386/fpu/e_exp2f.S: Removed.
* sysdeps/i386/fpu/w_exp2f.c: Likewise.
* sysdeps/i386/fpu/libm-test-ulps: Updated for generic e_exp2f.c.
* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise.
* sysdeps/i386/i686/fpu/multiarch/Makefile (libm-sysdep_routines):
Add e_exp2f-sse2.
(CFLAGS-e_exp2f-sse2.c): New.
* sysdeps/i386/i686/fpu/multiarch/e_exp2f-sse2.c: New file.
* sysdeps/i386/i686/fpu/multiarch/e_exp2f.c: Likewise.
The new generic expf and exp2f code does not need wrappers any more: it
sets errno inline, so the wrappers are only used on targets that still
need them.
(If the wrapper is needed, the top-level wrapper code is included;
otherwise an empty w_exp*f.c is used to suppress the wrapper.)
A powerpc64 expf implementation includes the expf C code directly,
which needed some changes.
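A rough sketch of the arrangement (file contents simplified; only the i386
case is spelled out, as an example):

/* sysdeps/ieee754/flt-32/w_expf.c: effectively empty -- the new generic
   __expf sets errno itself, so no SVID wrapper is built.  */

/* sysdeps/i386/fpu/w_expf.c: the i386 assembly __ieee754_expf does not
   set errno, so pull in the top-level wrapper instead:  */
#include <math/w_expf.c>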
* sysdeps/ieee754/flt-32/e_exp2f.c (__exp2f): Define without wrapper.
* sysdeps/ieee754/flt-32/e_expf.c (__expf): Likewise.
* sysdeps/ieee754/flt-32/w_exp2f.c: New file.
* sysdeps/ieee754/flt-32/w_expf.c: New file.
* sysdeps/powerpc/powerpc64/fpu/multiarch/e_expf-ppc64.c: Update for
the new expf code.
* sysdeps/powerpc/powerpc64/fpu/multiarch/w_expf.c: New file.
* sysdeps/powerpc/powerpc64/power8/fpu/w_expf.c: New file.
* sysdeps/m68k/m680x0/fpu/w_exp2f.c: New file.
* sysdeps/m68k/m680x0/fpu/w_expf.c: New file.
* sysdeps/i386/fpu/w_exp2f.c: New file.
* sysdeps/i386/fpu/w_expf.c: New file.
* sysdeps/i386/i686/fpu/multiarch/w_expf.c: New file.
* sysdeps/x86_64/fpu/w_expf.c: New file.
This patch obsoletes the pow10, pow10f and pow10l functions (makes
them into compat symbols, not available for new ports or static
linking). The exp10 names for these functions are standardized (in TS
18661-4) and were added in the same glibc version (2.1) as pow10 so
source code can change to use them without any loss of portability.
Since pow10 is deliberately not provided for _Float128, only exp10,
this slightly simplifies moving to the new wrapper templates in the
!LIBM_SVID_COMPAT case, by avoiding needing to arrange for pow10,
pow10f and pow10l to be defined by those templates.
Tested for x86_64, and with build-many-glibcs.py.
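As an illustration, the compat-symbol arrangement in the wrappers looks
roughly like this (simplified; the SHLIB_COMPAT condition matches the one
named in the ChangeLog below):

#include <shlib-compat.h>
#if SHLIB_COMPAT (libm, GLIBC_2_1, GLIBC_2_27)
/* Keep pow10 as a compat symbol only; new links get exp10.  */
strong_alias (__exp10, __pow10)
compat_symbol (libm, __pow10, pow10, GLIBC_2_1);
#endif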
* manual/math.texi (pow10): Do not document.
(pow10f): Likewise.
(pow10l): Likewise.
* math/bits/mathcalls.h [__USE_GNU] (pow10): Do not declare.
* math/bits/math-finite.h [__USE_GNU] (pow10): Likewise.
* math/libm-test-exp10.inc (pow10_test): Remove.
(do_test): Do not call pow10.
* math/w_exp10_compat.c (pow10): Make into compat symbol.
[NO_LONG_DOUBLE] (pow10l): Likewise.
* math/w_exp10f_compat.c (pow10f): Likewise.
* math/w_exp10l_compat.c (pow10l): Likewise.
* sysdeps/ia64/fpu/e_exp10.S: Include <shlib-compat.h>.
(pow10): Make into compat symbol.
* sysdeps/ia64/fpu/e_exp10f.S: Include <shlib-compat.h>.
(pow10f): Make into compat symbol.
* sysdeps/ia64/fpu/e_exp10l.S: Include <shlib-compat.h>.
(pow10l): Make into compat symbol.
* sysdeps/ieee754/ldbl-opt/Makefile (libnldbl-calls): Remove
pow10.
(CFLAGS-nldbl-pow10.c): Remove variable.
* sysdeps/ieee754/ldbl-opt/nldbl-pow10.c: Remove file.
* sysdeps/ieee754/ldbl-opt/w_exp10_compat.c (pow10l): Condition on
[SHLIB_COMPAT (libm, GLIBC_2_1, GLIBC_2_27)].
* sysdeps/ieee754/ldbl-opt/w_exp10l_compat.c (compat_symbol):
Undefine and redefine.
(pow10l): Make into compat symbol.
* sysdeps/aarch64/libm-test-ulps: Remove pow10 ulps.
* sysdeps/alpha/fpu/libm-test-ulps: Likewise.
* sysdeps/arm/libm-test-ulps: Likewise.
* sysdeps/hppa/fpu/libm-test-ulps: Likewise.
* sysdeps/i386/fpu/libm-test-ulps: Likewise.
* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise.
* sysdeps/microblaze/libm-test-ulps: Likewise.
* sysdeps/mips/mips32/libm-test-ulps: Likewise.
* sysdeps/mips/mips64/libm-test-ulps: Likewise.
* sysdeps/nios2/libm-test-ulps: Likewise.
* sysdeps/powerpc/fpu/libm-test-ulps: Likewise.
* sysdeps/powerpc/nofpu/libm-test-ulps: Likewise.
* sysdeps/s390/fpu/libm-test-ulps: Likewise.
* sysdeps/sh/libm-test-ulps: Likewise.
* sysdeps/sparc/fpu/libm-test-ulps: Likewise.
* sysdeps/tile/libm-test-ulps: Likewise.
* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
__memset_zero_constant_len_parameter should have been removed by
commit 61062f5630
Author: Ulrich Drepper <drepper@redhat.com>
Date: Tue Mar 1 00:35:23 2005 +0000
2005-02-24 Roland McGrath <roland@redhat.com>
* debug/Versions (libc: GLIBC_2.4): Remove
__memset_zero_constant_len_parameter.
* sysdeps/generic/memset_chk.c: Remove alias and warning.
* misc/sys/cdefs.h (__warndecl): New macro.
* debug/warning-nop.c: New file.
* string/bits/string3.h (memset): Call __warn_memset_zero_len with no
arguments, instead of calling __memset_zero_constant_len_parameter.
Use __warndecl for __warn_memset_zero_len.
* debug/Makefile (routines): Add $(static-only-routines).
(static-only-routines): New variable.
This patch removes the last remaining pieces of it. Tested on i586,
i686 and x86-64.
[BZ #21790]
* sysdeps/i386/i586/memset.S
(__memset_zero_constant_len_parameter): Removed.
* sysdeps/i386/i686/memset.S
(__memset_zero_constant_len_parameter): Likewise.
* sysdeps/i386/i686/multiarch/memset_chk.S
(__memset_zero_constant_len_parameter): Likewise.
* sysdeps/x86_64/memset.S (__memset_zero_constant_len_parameter):
Likewise.
There is no need to define the multiarch __memmove_chk functions in
libc.a since they aren't used at all.
[BZ #21791]
* sysdeps/i386/i686/multiarch/memcpy-sse2-unaligned.S
(MEMCPY_CHK): Define only if SHARED is defined.
* sysdeps/i386/i686/multiarch/memcpy-ssse3-rep.S (MEMCPY_CHK):
Likewise.
* sysdeps/i386/i686/multiarch/memcpy-ssse3.S (MEMCPY_CHK):
Likewise.
Since there are no multiarch versions of memmove_chk and memset_chk in
libc.a, test the multiarch versions of memmove_chk and memset_chk only
in libc.so.
[BZ #21741]
* sysdeps/i386/i686/multiarch/ifunc-impl-list.c
(__libc_ifunc_impl_list): Test memmove_chk and memset_chk only
in libc.so.
This patch enables float128 support for x86_64 and x86. All GCC
versions that can build glibc provide the required support, but since
GCC 6 and before don't provide __builtin_nanq / __builtin_nansq, sNaN
tests and some tests of NaN payloads need to be disabled with such
compilers (this does not affect the generated glibc binaries at all,
just the tests). bits/floatn.h declares float128 support to be
available for GCC versions that provide the required libgcc support
(4.3 for x86_64, 4.4 for i386 GNU/Linux, 4.5 for i386 GNU/Hurd);
compilation-only support was present some time before then, but not
really useful without the libgcc functions.
fenv_private.h needed updating to avoid trying to put _Float128 values
in registers. I make no assertion of optimality of the
math_opt_barrier / math_force_eval definitions for this case; they are
simply intended to be sufficient to work correctly.
Tested for x86_64 and x86, with GCC 7 and GCC 6. (Testing for x32 was
compilation tests only with build-many-glibcs.py to verify the ABI
baseline updates. I have not done any testing for Hurd, although the
float128 support is enabled there as for GNU/Linux.)
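A minimal sketch of the idea (not the exact glibc definitions; the helper
name here is made up): the barrier for _Float128 works through a memory
operand, so the compiler never tries to keep the value in an x87 or SSE
register.

/* Force evaluation of a _Float128 expression via memory (sketch).  */
#define math_force_eval_f128(x) \
  __asm __volatile ("" : : "m" (x))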
* sysdeps/i386/Implies: Add ieee754/float128.
* sysdeps/x86_64/Implies: Likewise.
* sysdeps/x86/bits/floatn.h: New file.
* sysdeps/x86/float128-abi.h: Likewise.
* manual/math.texi (Mathematics): Document support for _Float128
on x86_64 and x86.
* sysdeps/i386/fpu/fenv_private.h: Include <bits/floatn.h>.
(math_opt_barrier): Do not put _Float128 values in floating-point
registers.
(math_force_eval): Likewise.
[__x86_64__] (SET_RESTORE_ROUNDF128): New macro.
* sysdeps/x86/fpu/Makefile [$(subdir) = math] (CPPFLAGS): Append
to Makefile variable.
* sysdeps/x86/fpu/e_sqrtf128.c: New file.
* sysdeps/x86/fpu/sfp-machine.h: Likewise. Based on libgcc.
* sysdeps/x86/math-tests.h: New file.
* math/libm-test-support.h (XFAIL_FLOAT128_PAYLOAD): New macro.
* math/libm-test-getpayload.inc (getpayload_test_data): Use
XFAIL_FLOAT128_PAYLOAD.
* math/libm-test-setpayload.inc (setpayload_test_data): Likewise.
* math/libm-test-totalorder.inc (totalorder_test_data): Likewise.
* math/libm-test-totalordermag.inc (totalordermag_test_data):
Likewise.
* sysdeps/unix/sysv/linux/i386/libc.abilist: Update.
* sysdeps/unix/sysv/linux/i386/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/x86_64/64/libc.abilist: Likewise.
* sysdeps/unix/sysv/linux/x86_64/64/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/x86_64/x32/libc.abilist: Likewise.
* sysdeps/unix/sysv/linux/x86_64/x32/libm.abilist: Likewise.
* sysdeps/i386/fpu/libm-test-ulps: Likewise.
* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise.
* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
Testing with GCC 7 for 32-bit x86 showed some ulps differences,
presumably from variation in when values with excess precision get
spilled to the stack and so lose that precision. This patch updates
the libm-test-ulps files accordingly.
* sysdeps/i386/fpu/libm-test-ulps: Update.
* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise.
These machine-dependent inline string functions have never been on by
default, and even if they were a good idea at the time they were
introduced, they haven't really been touched in ten to fifteen years
and probably aren't a good idea on current-gen processors. Current
thinking is that this class of optimization is best left to the
compiler.
* bits/string.h, string/bits/string.h
* sysdeps/aarch64/bits/string.h
* sysdeps/m68k/m680x0/m68020/bits/string.h
* sysdeps/s390/bits/string.h, sysdeps/sparc/bits/string.h
* sysdeps/x86/bits/string.h: Delete file.
* string/string.h: Don't include bits/string.h.
* string/bits/string3.h: Rename to bits/string_fortified.h.
No need to undef various symbols that the removed headers
might have defined as macros.
* string/Makefile (headers): Remove bits/string.h, change
bits/string3.h to bits/string_fortified.h.
* string/string-inlines.c: Update commentary. Remove definitions
of various macros that nothing looks at anymore. Don't directly
include bits/string.h. Set _STRING_INLINE_unaligned here, based on
compiler-predefined macros.
* string/strncat.c: If STRNCAT is not defined, or STRNCAT_PRIMARY
_is_ defined, provide internal hidden alias __strncat.
* include/string.h: Declare internal hidden alias __strncat.
Only forward __stpcpy to __builtin_stpcpy if __NO_STRING_INLINES is
not defined.
* include/bits/string3.h: Rename to bits/string_fortified.h,
update to match above.
* sysdeps/i386/string-inlines.c: Define compat symbols for
everything formerly defined by sysdeps/x86/bits/string.h.
Make existing definitions into compat symbols as well.
Remove some no-longer-necessary messing around with macros.
* sysdeps/powerpc/powerpc32/power4/multiarch/mempcpy.c
* sysdeps/powerpc/powerpc64/multiarch/mempcpy.c
* sysdeps/powerpc/powerpc64/multiarch/stpcpy.c
* sysdeps/s390/multiarch/mempcpy.c
No need to define _HAVE_STRING_ARCH_mempcpy.
Do define __NO_STRING_INLINES and NO_MEMPCPY_STPCPY_REDIRECT.
* sysdeps/i386/i686/multiarch/strncat-c.c
* sysdeps/s390/multiarch/strncat-c.c
* sysdeps/x86_64/multiarch/strncat-c.c
Define STRNCAT_PRIMARY. Don't change definition of libc_hidden_def.
Since commit d957c4d3fa (i386: Compile
rtld-*.os with -mno-sse -mno-mmx -mfpmath=387), vector intrinsics can
no longer be used in ld.so, even if the compiled code never makes it
into the final ld.so link. This commit adds the missing IS_IN (libc)
guard to the SSE 4.2 strcspn implementation, so that it can be used from
ld.so in the future.
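The added guard amounts to roughly the following (sketch; the SSE 4.2 body
itself is elided):

#if IS_IN (libc)
# include <nmmintrin.h>
/* SSE 4.2 strcspn implementation, only built for libc proper so that
   an rtld object compiled with -mno-sse never contains intrinsics.  */
#endif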
This is fairly complicated, not because the users of __need_Emath and
__need_error_t have complicated requirements, but because the core
changes had a lot of fallout.
__need_error_t exists for gnulib compatibility in argz.h and argp.h.
error_t itself is a Hurdism, an enum containing all the E-constants,
so you can do 'p (error_t) errno' in gdb and get a symbolic value.
argz.h and argp.h use it for function return values, and they want to
fall back to 'int' when that's not available. There is no reason why
these nonstandard headers cannot just go ahead and include all of
errno.h; so we do that.
__need_Emath is defined only by .S files; what they _really_ need is
for errno.h to avoid declaring anything other than the E-constants
(e.g. 'extern int __errno_location(void);' is a syntax error in
assembly language). This is replaced with a check for __ASSEMBLER__ in
errno.h, plus a carefully documented requirement for bits/errno.h not
to define anything other than macros. That in turn has the
consequence that bits/errno.h must not define errno - fortunately, all
live ports use the same definition of errno, so I've moved it to
errno.h. The Hurd bits/errno.h must also take care not to define
error_t when __ASSEMBLER__ is defined, which involves repeating all of
the definitions twice, but it's a generated file so that's okay.
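The resulting errno.h layout is roughly (heavily simplified):

#ifndef _ERRNO_H
#define _ERRNO_H 1
#include <bits/errno.h>          /* Only E-constant macros; .S-safe.  */
#ifndef __ASSEMBLER__
extern int *__errno_location (void);
# define errno (*__errno_location ())
#endif
#endif /* errno.h */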
* stdlib/errno.h: Remove __need_Emath and __need_error_t logic.
Reorganize file. Declare errno here. When __ASSEMBLER__ is
defined, don't declare anything other than the E-constants.
* include/errno.h: Change conditional for exposing internal
declarations to (not _ISOMAC and not __ASSEMBLER__).
* bits/errno.h: Remove logic for __need_Emath. Document
requirements for a port-specific bits/errno.h.
* sysdeps/unix/sysv/linux/bits/errno.h
* sysdeps/unix/sysv/linux/alpha/bits/errno.h
* sysdeps/unix/sysv/linux/hppa/bits/errno.h
* sysdeps/unix/sysv/linux/mips/bits/errno.h
* sysdeps/unix/sysv/linux/sparc/bits/errno.h:
Add multiple-include guard and check against improper inclusion.
Remove __need_Emath logic. Don't declare errno here. Ensure all
constants are defined as simple integer literals. Consistent
formatting.
* sysdeps/mach/hurd/errnos.awk: Likewise. Only define error_t and
enum __error_t_codes if __ASSEMBLER__ is not defined.
* sysdeps/mach/hurd/bits/errno.h: Regenerate.
* argp/argp.h, string/argz.h: Don't define __need_error_t before
including errno.h.
* sysdeps/i386/i686/fpu/multiarch/s_cosf-sse2.S
* sysdeps/i386/i686/fpu/multiarch/s_sincosf-sse2.S
* sysdeps/i386/i686/fpu/multiarch/s_sinf-sse2.S
* sysdeps/x86_64/fpu/s_cosf.S
* sysdeps/x86_64/fpu/s_sincosf.S
* sysdeps/x86_64/fpu/s_sinf.S:
Just include errno.h; don't define __need_Emath or include
bits/errno.h directly.
SSE2 memchr computes "edx + ecx - 16" where ecx is less than 16. Use
"edx - (16 - ecx)", instead of satured math, to avoid possible addition
overflow. This replaces
add %ecx, %edx
sbb %eax, %eax
or %eax, %edx
sub $16, %edx
with
neg %ecx
add $16, %ecx
sub %ecx, %edx
It is the same for x86_64, except with rcx/rdx instead of ecx/edx.
* sysdeps/i386/i686/multiarch/memchr-sse2.S (MEMCHR): Use
"edx - (16 - ecx)", instead of "edx + ecx - 16", to avoid possible
addition overflow.
* sysdeps/x86_64/memchr.S (memchr): Likewise.
This patch fixes the regression added by 23d2770 in the final address
overflow calculation. The subtraction of the considered size (16)
at line 120 is in the wrong place: for sizes less than 16 the subsequent
overflow check does not take an invalid size into account (since
the subtraction result will be negative). Also, the lea instruction
does not set the carry flag (CF) that is used by the subsequent jbe
to check for overflow.
The fix is to follow the x86_64 logic from 3daef2c, where the overflow
is checked first and a sub instruction is issued. In case of a resulting
negative size, CF will be set by the sub instruction and a NULL
result will be returned. The patch also adds tests similar to those
reported in the bug report.
Checked on i686-linux-gnu and x86_64-linux-gnu.
* string/test-memchr.c (do_test): Add BZ#21182 checks for address
near end of a page.
* sysdeps/i386/i686/multiarch/memchr-sse2.S (__memchr): Fix
overflow calculation.
This patch moves tests of catan and catanh with finite inputs (other
than the divide-by-zero cases producing an exact infinity) to using
the auto-libm-test machinery. Each of auto-libm-test-out-catan and
auto-libm-test-out-catanh takes about three seconds to generate on my
system (so in fact it wasn't necessary after all to defer the move to
auto-libm-test-* until the output files were split up by function).
Tested for x86_64 and x86 and ulps updated accordingly.
* math/auto-libm-test-in: Add tests of catan and catanh.
* math/auto-libm-test-out-catan: New generated file.
* math/auto-libm-test-out-catanh: Likewise.
* math/libm-test-catan.inc (catan_test_data): Use AUTO_TESTS_c_c.
Move tests with finite inputs, except divide-by-zero cases, to
auto-libm-test-in.
* math/libm-test-catanh.inc (catanh_test_data): Likewise.
* math/Makefile (libm-test-funcs-auto): Add catan and catanh.
(libm-test-funcs-noauto): Remove catan and catanh.
* sysdeps/i386/fpu/libm-test-ulps: Update.
* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise.
* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
This patch moves tests of casin and casinh with finite inputs to using
the auto-libm-test machinery. Each of auto-libm-test-out-casin and
auto-libm-test-out-casinh takes about 38 minutes to generate on my
system because of MPC slowness on special cases that appear in the
tests (with MPC 1.0.3; I don't know to what extent current MPC master
might speed it up).
Tested for x86_64 and x86 and ulps updated accordingly.
* math/auto-libm-test-in: Add tests of casin and casinh.
* math/auto-libm-test-out-casin: New generated file.
* math/auto-libm-test-out-casinh: Likewise.
* math/libm-test-casin.inc (casin_test_data): Use AUTO_TESTS_c_c.
Move tests with finite inputs to auto-libm-test-in.
* math/libm-test-casinh.inc (casinh_test_data): Likewise.
* math/Makefile (libm-test-funcs-auto): Add casin and casinh.
(libm-test-funcs-noauto): Remove casin and casinh.
* sysdeps/i386/fpu/libm-test-ulps: Update.
* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise.
* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
This patch moves tests of cacos and cacosh with finite inputs to using
the auto-libm-test machinery. Each of auto-libm-test-out-cacos and
auto-libm-test-out-cacosh takes about 80 minutes to generate on my
system because of MPC slowness on special cases that appear in the
tests (with MPC 1.0.3; I don't know to what extent current MPC master
might speed it up).
Tested for x86_64 and x86 and ulps updated accordingly.
* math/auto-libm-test-in: Add tests of cacos and cacosh.
* math/auto-libm-test-out-cacos: New generated file.
* math/auto-libm-test-out-cacosh: Likewise.
* math/libm-test-cacos.inc (cacos_test_data): Use AUTO_TESTS_c_c.
Move tests with finite inputs to auto-libm-test-in.
* math/libm-test-cacosh.inc (cacosh_test_data): Likewise.
* math/Makefile (libm-test-funcs-auto): Add cacos and cacosh.
(libm-test-funcs-noauto): Remove cacos and cacosh.
* sysdeps/i386/fpu/libm-test-ulps: Update.
* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise.
* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
This patch removes the COLORING_INCREMENT define and its usage in
allocatestack.c. It has not been used by any architecture since
564cd8b67e (glibc-2.3.3). The idea is to simplify the code by removing
an obsolete feature.
* nptl/allocatestack.c [COLORING_INCREMENT] (nptl_ncreated): Remove.
(allocate_stack): Remove COLORING_INCREMENT usage.
* nptl/stack-aliasing.h (COLORING_INCREMENT): Likewise.
* sysdeps/i386/i686/stack-aliasing.h (COLORING_INCREMENT): Likewise.
Based on comments on a previous attempt to address BZ#16640 [1],
the idea is not to support invalid use of strtok (the original
bug report proposal). That discussion led to a new optimized strtok
implementation [2].
The idea of this patch is to fix BZ#16640 by aligning all the
implementations to the same contract. However, with the newer strtok
code it is better to remove the old assembly versions instead of
fixing them.
For x86 it is a gain in all cases, since the new implementation can
potentially use the sse2/sse42 implementations of strspn and strcspn.
This shows better performance on both i686 and x86_64 using
the string benchtests.
On powerpc64 the gains are mixed: some gains show up only for larger
inputs or keys (based on the benchtests, for keys larger than 10 and
inputs larger than 32). I would still prefer to remove the optimized
implementation, first for code simplicity and second because further
gains could come from a better optimized strcspn/strspn implementation
(as for x86). However, if the powerpc arch maintainers prefer, I can
send a v2 with the assembly code adjusted instead.
Checked on x86_64-linux-gnu, i686-linux-gnu, and powerpc64le-linux-gnu.
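For reference, the generic C strtok that remains is essentially built on
strspn/strcspn, so the per-arch gains now come from those functions. A
simplified sketch (not the exact glibc code):

#include <string.h>

char *
strtok_sketch (char *s, const char *delim)
{
  static char *save;
  char *token;

  if (s == NULL)
    s = save;
  s += strspn (s, delim);               /* Skip leading delimiters.  */
  if (*s == '\0')
    {
      save = s;
      return NULL;
    }
  token = s;
  s = token + strcspn (token, delim);   /* Find the end of the token.  */
  if (*s != '\0')
    *s++ = '\0';                        /* Terminate and move past it.  */
  save = s;
  return token;
}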
[BZ #16640]
* sysdeps/i386/i686/strtok.S: Remove file.
* sysdeps/i386/i686/strtok_r.S: Likewise.
* sysdeps/i386/strtok.S: Likewise.
* sysdeps/i386/strtok_r.S: Likewise.
* sysdeps/powerpc/powerpc64/strtok.S: Likewise.
* sysdeps/powerpc/powerpc64/strtok_r.S: Likewise.
* sysdeps/x86_64/strtok.S: Likewise.
* sysdeps/x86_64/strtok_r.S: Likewise.
[1] https://sourceware.org/ml/libc-alpha/2016-10/msg00411.html
[2] https://sourceware.org/ml/libc-alpha/2016-12/msg00461.html
Similar to BZ#19387, BZ#21014, and BZ#20971, both x86 sse2 optimized
strncat assembly implementations fail to handle size overflow
correctly.
The x86_64 issue is in fact in strcpy-sse2-unaligned, but it is also
triggered by the optimized strncat implementation.
This patch uses a strategy similar to 3daef2c8ee, where saturated
math is used for the overflow case.
Checked on x86_64-linux-gnu and i686-linux-gnu. It fixes BZ #19390.
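The idea of the saturated math, expressed in C (the actual change is in
the assembly; the helper name is illustrative):

#include <stdint.h>

/* Compute p + n, clamping to the top of the address space if the
   addition wraps, so later end-of-buffer comparisons fail safely
   instead of pointing back into low memory.  */
static inline uintptr_t
saturated_add (uintptr_t p, uintptr_t n)
{
  uintptr_t end = p + n;
  if (end < p)              /* Overflow happened.  */
    end = UINTPTR_MAX;      /* Saturate.  */
  return end;
}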
[BZ #19390]
* string/test-strncat.c (test_main): Add tests with SIZE_MAX as
maximum string size.
* sysdeps/i386/i686/multiarch/strcat-sse2.S (STRCAT): Avoid overflow
in pointer addition.
* sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S (STRCPY):
Likewise.
Similar to BZ#19387 and BZ#20971, both i686 optimized memchr assembly
implementations (memchr-sse2-bsf and memchr-sse2) do not handle
size overflow correctly.
This is shown by the new tests added by commit 3daef2c8ee, where
both implementations fail when the size is SIZE_MAX.
This patch uses a strategy similar to 3daef2c8ee, where saturated
math is used for the overflow case.
Checked on i686-linux-gnu.
[BZ #21014]
* sysdeps/i386/i686/multiarch/memchr-sse2-bsf.S (MEMCHR): Avoid overflow
in pointer addition.
* sysdeps/i386/i686/multiarch/memchr-sse2.S (MEMCHR): Likewise.
Various fmax and fmin function implementations mishandle sNaN
arguments:
(a) When both arguments are NaNs, the return value should be a qNaN,
but sometimes it is an sNaN if at least one argument is an sNaN.
(b) Under TS 18661-1 semantics, if either argument is an sNaN then the
result should be a qNaN (whereas if one argument is a qNaN and the
other is not a NaN, the result should be the non-NaN argument).
Various implementations treat sNaNs like qNaNs here.
This patch fixes the x86 and x86_64 versions (ignoring float and
double for 32-bit x86 given the inability to reliably avoid the sNaN
turning into a qNaN before it gets to the called function). Tests of
sNaN inputs to these functions are added.
Note on architecture versions I haven't changed for this issue:
AArch64 already gets this right (it uses a hardware instruction with
the correct semantics for both quiet and signaling NaNs) and does not
need changes. It's possible Alpha, IA64, SPARC might need changes
(this would be shown by the testsuite if so).
Tested for x86_64 and x86 (both i686 and i586 builds, to cover the
different x86 implementations).
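The TS 18661-1 semantics being implemented, sketched in C (the actual
fixes are in assembly; adding the two arguments is what quiets a
signaling NaN):

#define _GNU_SOURCE 1
#include <math.h>

double
fmax_ts18661_sketch (double x, double y)
{
  /* If either argument is an sNaN, x + y raises "invalid" and yields
     a quiet NaN, which is the required result.  */
  if (issignaling (x) || issignaling (y))
    return x + y;
  /* A quiet NaN loses against a non-NaN argument.  */
  if (isnan (x))
    return y;
  if (isnan (y))
    return x;
  return x >= y ? x : y;
}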
[BZ #20947]
* sysdeps/i386/fpu/s_fmaxl.S (__fmaxl): Add the arguments when
either is a signaling NaN.
* sysdeps/i386/fpu/s_fminl.S (__fminl): Likewise. Make code
follow fmaxl more closely.
* sysdeps/i386/i686/fpu/s_fmaxl.S (__fmaxl): Add the arguments
when either is a signaling NaN.
* sysdeps/i386/i686/fpu/s_fminl.S (__fminl): Likewise.
* sysdeps/x86_64/fpu/s_fmax.S (__fmax): Likewise.
* sysdeps/x86_64/fpu/s_fmaxf.S (__fmaxf): Likewise.
* sysdeps/x86_64/fpu/s_fmaxl.S (__fmaxl): Likewise.
* sysdeps/x86_64/fpu/s_fmin.S (__fmin): Likewise.
* sysdeps/x86_64/fpu/s_fminf.S (__fminf): Likewise.
* sysdeps/x86_64/fpu/s_fminl.S (__fminl): Likewise.
* math/libm-test.inc (fmax_test_data): Add tests of sNaN inputs.
(fmin_test_data): Likewise.
manual/libm-err-tab.pl hardcodes a list of names for particular
platforms (mapping from sysdeps directory name to friendly name for
the manual). This goes against the principle of keeping information
about individual platforms in their corresponding sysdeps directory,
and the list is also very out-of-date regarding supported platforms
and their corresponding sysdeps directories.
This patch fixes this by adding a libm-test-ulps-name file alongside
each libm-test-ulps file. The script then gets the friendly name from
that file, which is required to exist, so it no longer needs to allow
for the mapping being missing.
Tested for x86_64.
[BZ #14139]
* manual/libm-err-tab.pl (%pplatforms): Initialize to empty.
(find_files): Obtain platform name from libm-test-ulps-name and
store in %pplatforms.
(canonicalize_platform): Remove.
(print_platforms): Use $pplatforms directly.
(by_platforms): Do not allow for platforms missing from
%pplatforms.
* sysdeps/aarch64/libm-test-ulps-name: New file.
* sysdeps/alpha/fpu/libm-test-ulps-name: Likewise.
* sysdeps/arm/libm-test-ulps-name: Likewise.
* sysdeps/generic/libm-test-ulps-name: Likewise.
* sysdeps/hppa/fpu/libm-test-ulps-name: Likewise.
* sysdeps/i386/fpu/libm-test-ulps-name: Likewise.
* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps-name: Likewise.
* sysdeps/ia64/fpu/libm-test-ulps-name: Likewise.
* sysdeps/m68k/coldfire/fpu/libm-test-ulps-name: Likewise.
* sysdeps/m68k/m680x0/fpu/libm-test-ulps-name: Likewise.
* sysdeps/microblaze/libm-test-ulps-name: Likewise.
* sysdeps/mips/mips32/libm-test-ulps-name: Likewise.
* sysdeps/mips/mips64/libm-test-ulps-name: Likewise.
* sysdeps/nios2/libm-test-ulps-name: Likewise.
* sysdeps/powerpc/fpu/libm-test-ulps-name: Likewise.
* sysdeps/powerpc/nofpu/libm-test-ulps-name: Likewise.
* sysdeps/s390/fpu/libm-test-ulps-name: Likewise.
* sysdeps/sh/libm-test-ulps-name: Likewise.
* sysdeps/sparc/fpu/libm-test-ulps-name: Likewise.
* sysdeps/tile/libm-test-ulps-name: Likewise.
* sysdeps/x86_64/fpu/libm-test-ulps-name: Likewise.
This was used by --enable-omitfp, and the bulk of it was removed in this
commit:
commit bdeba1354b
Author: Ulrich Drepper <drepper@gmail.com>
Date: Sat Jan 7 11:29:31 2012 -0500
Remove --enable-omitfp support
Some architectures have their own versions of fdim functions, which
are missing errno setting (bug 6796) and may also return sNaN instead
of qNaN for sNaN input, in the case of the x86 / x86_64 long double
versions (bug 20256).
These versions are not actually doing anything that a compiler
couldn't generate, just straightforward comparisons / arithmetic (and,
in the x86 / x86_64 case, testing for NaNs with fxam, which isn't
actually needed once you use an unordered comparison and let the NaNs
pass through the same subtraction as non-NaN inputs). This patch
removes the x86 / x86_64 / powerpc versions, so that those
architectures use the generic C versions, which correctly handle
setting errno and deal properly with sNaN inputs. This seems better
than dealing with setting errno in lots of .S versions.
The i386 versions also return results with excess range and precision,
which is not appropriate for a function exactly defined by reference
to IEEE operations. For errno setting to work correctly on overflow,
it's necessary to remove excess range with math_narrow_eval, which
this patch duly does in the float and double versions so that the
tests can reliably pass on x86. For float, this avoids any double
rounding issues as the long double precision is more than twice that
of float. For double, double rounding issues will need to be
addressed separately, so this patch does not fully fix bug 20255.
Tested for x86_64, x86 and powerpc.
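The generic C version those architectures now use looks roughly like
this (simplified sketch of math/s_fdim.c as changed here):

double
__fdim (double x, double y)
{
  /* Non-raising comparison: NaNs fall through to the subtraction and
     come out as qNaN, as the IEEE operation requires.  */
  if (islessequal (x, y))
    return 0.0;

  /* Strip excess x86 range/precision so the overflow check (and hence
     errno setting) is reliable.  */
  double r = math_narrow_eval (x - y);
  if (isinf (r) && !isinf (x) && !isinf (y))
    __set_errno (ERANGE);

  return r;
}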
[BZ #6796]
[BZ #20255]
[BZ #20256]
* math/s_fdim.c: Include <math_private.h>.
(__fdim): Use math_narrow_eval on result.
* math/s_fdimf.c: Include <math_private.h>.
(__fdimf): Use math_narrow_eval on result.
* sysdeps/i386/fpu/s_fdim.S: Remove file.
* sysdeps/i386/fpu/s_fdimf.S: Likewise.
* sysdeps/i386/fpu/s_fdiml.S: Likewise.
* sysdeps/i386/i686/fpu/s_fdim.S: Likewise.
* sysdeps/i386/i686/fpu/s_fdimf.S: Likewise.
* sysdeps/i386/i686/fpu/s_fdiml.S: Likewise.
* sysdeps/powerpc/fpu/s_fdim.c: Likewise.
* sysdeps/powerpc/fpu/s_fdimf.c: Likewise.
* sysdeps/powerpc/powerpc32/fpu/s_fdim.c: Likewise.
* sysdeps/powerpc/powerpc64/fpu/s_fdim.c: Likewise.
* sysdeps/x86_64/fpu/s_fdiml.S: Likewise.
* math/libm-test.inc (fdim_test_data): Expect errno setting on
overflow. Add sNaN tests.
The i386/x86_64 versions of logl return sNaN for sNaN input. This
patch fixes them to add a NaN input to itself so that qNaN is returned
in this case.
Tested for x86_64 and x86 (including a build for i586 to cover the
non-i686 logl version).
[BZ #20227]
* sysdeps/i386/fpu/e_logl.S (__ieee754_logl): Add NaN input to
itself.
* sysdeps/i386/i686/fpu/e_logl.S (__ieee754_logl): Likewise.
* sysdeps/x86_64/fpu/e_logl.S (__ieee754_logl): Likewise.
* math/libm-test.inc (log_test_data): Add sNaN tests.
Merge x86 ifunc-defines.sym with x86 cpu-features-offsets.sym. Remove
x86 ifunc-defines.sym and rtld-global-offsets.sym. No code changes on
i686 and x86-64.
* sysdeps/i386/i686/multiarch/Makefile (gen-as-const-headers):
Remove ifunc-defines.sym.
* sysdeps/x86_64/multiarch/Makefile (gen-as-const-headers):
Likewise.
* sysdeps/i386/i686/multiarch/ifunc-defines.sym: Removed.
* sysdeps/x86/rtld-global-offsets.sym: Likewise.
* sysdeps/x86_64/multiarch/ifunc-defines.sym: Likewise.
* sysdeps/x86/Makefile (gen-as-const-headers): Remove
rtld-global-offsets.sym.
* sysdeps/x86_64/multiarch/ifunc-defines.sym: Merged with ...
* sysdeps/x86/cpu-features-offsets.sym: This.
* sysdeps/x86/cpu-features.h: Include <cpu-features-offsets.h>
instead of <ifunc-defines.h> and <rtld-global-offsets.h>.
Bug 19848 reports cases where powl on x86 / x86_64 has error
accumulation, for small integer exponents, larger than permitted by
glibc's accuracy goals, at least in some rounding modes. This patch
further restricts the exponent range for which the
small-integer-exponent logic is used to limit the possible error
accumulation.
Tested for x86_64 and x86 and ulps updated accordingly.
[BZ #19848]
* sysdeps/i386/fpu/e_powl.S (p3): Rename to p2 and change value
from 8 to 4.
(__ieee754_powl): Compare integer exponent against 4 not 8.
* sysdeps/x86_64/fpu/e_powl.S (p3): Rename to p2 and change value
from 8 to 4.
(__ieee754_powl): Compare integer exponent against 4 not 8.
* math/auto-libm-test-in: Add more tests of pow.
* math/auto-libm-test-out: Regenerated.
* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Update.
* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
When building for i686, x86_64 or ARM with NDEBUG or with --with-cpu,
there are various variables and functions which are unused with
these settings.
This patch marks all such variables and functions with
__attribute__ ((unused)) to avoid the compiler warnings when building
with the aforementioned options.
The i386 ULPs are actually the i686/multiarch ones. The i686/multiarch
float ULPs are more precise as the SSE2 version (when available) uses
double for the cosf and sinf functions.
On the other hand the higher precision of the x86 FPU improves the
precision for a few other math functions.
* sysdeps/i386/fpu/libm-test-ulps: Move to ....
* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: ...here.
* sysdeps/i386/fpu/libm-test-ulps: Regenerate.
For the -ffinite-math-only versions of various x86_64 and x86 log*
functions, a zero result from log* (1) is returned with incorrect sign
in round-downward mode. This patch fixes this in a similar way to the
previous fixes for the non-*_finite versions of the functions.
Tested for x86_64 and x86 (including an i586 build), together with a
patch that will be applied separately to enable the main libm-test.inc
tests for the finite-math-only functions.
[BZ #19213]
* sysdeps/i386/fpu/e_log.S (__log_finite): Ensure +0 is always
returned for argument 1.
* sysdeps/i386/fpu/e_logf.S (__logf_finite): Likewise.
* sysdeps/i386/fpu/e_logl.S (__logl_finite): Likewise.
* sysdeps/i386/i686/fpu/e_logl.S (__logl_finite): Likewise.
* sysdeps/x86_64/fpu/e_log10l.S (__log10l_finite): Likewise.
* sysdeps/x86_64/fpu/e_log2l.S (__log2l_finite): Likewise.
* sysdeps/x86_64/fpu/e_logl.S (__logl_finite): Likewise.
There is a configure test for assembler support for -mtune=i686. This
option was added in binutils 2.18 so the test is obsolete; this patch
removes it.
Tested for x86 (testsuite, and that installed shared libraries are
unchanged by the patch).
* sysdeps/i386/configure.ac (libc_cv_as_i686): Remove configure
test.
* sysdeps/i386/configure: Regenerated.
* sysdeps/i386/i686/Makefile [$(config-asflags-i686) = yes]: Make
code unconditional.
GCC added support for -msse4 in version 4.3. Thus the configure tests
for it are obsolete, and this patch removes them.
Tested for x86_64 and x86 (testsuite, and that installed stripped
shared libraries are unchanged by this patch).
* sysdeps/i386/configure.ac (libc_cv_cc_sse4): Remove configure
test.
* sysdeps/i386/configure: Regenerated.
* sysdeps/i386/i686/multiarch/Makefile
[$(config-cflags-sse4) = yes]: Make code unconditional.
* sysdeps/i386/i686/multiarch/strcspn.S [HAVE_SSE4_SUPPORT]:
Likewise.
* sysdeps/i386/i686/multiarch/strspn.S [HAVE_SSE4_SUPPORT]:
Likewise.
* sysdeps/x86_64/configure.ac (libc_cv_cc_sse4): Remove configure
test.
* sysdeps/x86_64/configure: Regenerated.
* sysdeps/x86_64/multiarch/Makefile [$(config-cflags-sse4) = yes]:
Make code unconditional.
* sysdeps/x86_64/multiarch/strcspn.S [HAVE_SSE4_SUPPORT]:
Likewise.
* sysdeps/x86_64/multiarch/strspn.S [HAVE_SSE4_SUPPORT]: Likewise.
* config.h.in (HAVE_SSE4_SUPPORT): Remove #undef.
i386 exp, hypot and pow functions can return overflowing and
underflowing values with excess range and precision; Wilco
Dijkstra's patches to make isfinite etc. expand inline cause this
pre-existing issue to result in test failures.
This patch fixes those functions to avoid excess range and precision
in their return values. Appropriate macros are added for the repeated
code sequences; in future I'll add more such macros and refactor
existing code forcing underflow (with or without also eliminating
excess range and precision from the return value) to use such macros.
Tested for x86. If, after this patch, you still see x86 libm test
failures with excess range or precision, please file bugs in Bugzilla.
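What the *_NARROW_EVAL* macros do, expressed in C (the real macros are
x87 assembly sequences; this is just the idea):

/* Store the result to memory at its declared type and reload it, so
   the value the caller sees has no excess range or precision and any
   overflow/underflow is raised at float precision.  */
static inline float
flt_narrow_eval (float x)
{
  volatile float tmp = x;
  return tmp;
}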
[BZ #18980]
* sysdeps/i386/fpu/i386-math-asm.h (DEFINE_FLT_MIN): New macro.
(DEFINE_DBL_MIN): Likewise.
(FLT_NARROW_EVAL_UFLOW_NONNEG_NAN): Likewise.
(DBL_NARROW_EVAL_UFLOW_NONNEG_NAN): Likewise.
(FLT_NARROW_EVAL_UFLOW_NONNEG): Likewise.
(DBL_NARROW_EVAL_UFLOW_NONNEG): Likewise.
* sysdeps/i386/fpu/e_exp.S: Include <i386-math-asm.h>.
(dbl_min): Replace with use of DEFINE_DBL_MIN.
(__ieee754_exp): Use DBL_NARROW_EVAL_UFLOW_NONNEG_NAN.
(__exp_finite): Use DBL_NARROW_EVAL_UFLOW_NONNEG.
* sysdeps/i386/fpu/e_exp10.S: Include <i386-math-asm.h>.
(dbl_min): Replace with use of DEFINE_DBL_MIN.
(__ieee754_exp10): Use DBL_NARROW_EVAL_UFLOW_NONNEG_NAN.
* sysdeps/i386/fpu/e_exp10f.S: Include <i386-math-asm.h>.
(flt_min): Replace with use of DEFINE_FLT_MIN.
(__ieee754_exp10f): Use FLT_NARROW_EVAL_UFLOW_NONNEG_NAN.
* sysdeps/i386/fpu/e_exp2.S: Include <i386-math-asm.h>.
(dbl_min): Replace with use of DEFINE_DBL_MIN.
(__ieee754_exp2): Use DBL_NARROW_EVAL_UFLOW_NONNEG_NAN.
* sysdeps/i386/fpu/e_exp2f.S: Include <i386-math-asm.h>.
(flt_min): Replace with use of DEFINE_FLT_MIN.
(__ieee754_exp2f): Use FLT_NARROW_EVAL_UFLOW_NONNEG_NAN.
* sysdeps/i386/fpu/e_expf.S: Include <i386-math-asm.h>.
(flt_min): Replace with use of DEFINE_FLT_MIN.
(__ieee754_expf): Use FLT_NARROW_EVAL_UFLOW_NONNEG_NAN.
(__expf_finite): Use FLT_NARROW_EVAL_UFLOW_NONNEG.
* sysdeps/i386/fpu/e_hypot.S: Include <i386-math-asm.h>.
(__ieee754_hypot): Use DBL_NARROW_EVAL.
* sysdeps/i386/fpu/e_hypotf.S: Include <i386-math-asm.h>.
(__ieee754_hypotf): Use FLT_NARROW_EVAL.
* sysdeps/i386/fpu/e_pow.S: Include <i386-math-asm.h>.
(__ieee754_pow): Use DBL_NARROW_EVAL.
* sysdeps/i386/fpu/e_powf.S: Include <i386-math-asm.h>.
(__ieee754_powf): Use FLT_NARROW_EVAL.
* sysdeps/i386/i686/fpu/multiarch/e_expf-sse2.S
(__ieee754_expf_sse2): Convert double-precision result to single
precision.
* sysdeps/i386/fpu/libm-test-ulps: Update.
We detect i586 and i686 features at run time by checking the CX8 and CMOV
CPUID feature bits. We can use this information to select the best
implementation in ix86 multiarch. HAS_I586/HAS_I686 is true if i586/i686
instructions are available on the processor.
Due to the reordering and the other nifty extensions in i686, it is not
really good to use heavily i586-optimized code on an i686; it's better
to use i486 code if the processor isn't an i586. USE_I586/USE_I686 is true
if the i586/i686 implementation should be used for the processor. USE_I586
is true only if i686 instructions aren't available. If i686 instructions
are available, we always choose the i686 or i486 implementation, in that
order, and we never choose the i586 implementation for i686-class
processors.
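A sketch of the resulting selection logic (simplified; the real
definitions live in the init-arch.h files listed below):

/* Prefer i686 code; use i586 code only when i686 instructions are not
   available; otherwise fall back to i486 code.  */
#define USE_I686 HAS_I686
#define USE_I586 (HAS_I586 && !HAS_I686)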
* sysdeps/i386/init-arch.h: New file.
* sysdeps/i386/i586/init-arch.h: Likewise.
* sysdeps/i386/i686/init-arch.h: Likewise.
* sysdeps/x86/cpu-features.c (init_cpu_features): Set bit_I586
bit if CX8 is available. Set bit_I686 bit if CMOV is available.
* sysdeps/x86/cpu-features.h (bit_I586): New.
(bit_I686): Likewise.
(bit_CX8): Likewise.
(bit_CMOV): Likewise.
(index_CX8): Likewise.
(index_CMOV): Likewise.
(index_I586): Likewise.
(index_I686): Likewise.
(reg_CX8): Likewise.
(reg_CMOV): Likewise.
(HAS_I586): Defined as HAS_ARCH_FEATURE (I586) if i586 isn't
available at compile-time.
(HAS_I686): Defined as HAS_ARCH_FEATURE (I686) if i686 isn't
available at compile-time.
* sysdeps/x86/init-arch.h (USE_I586): New macro.
(USE_I686): Likewise.