glibc/sysdeps/generic/math_private.h


/*
 * ====================================================
 * Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
 *
 * Developed at SunPro, a Sun Microsystems, Inc. business.
 * Permission to use, copy, modify, and distribute this
 * software is freely granted, provided that this notice
 * is preserved.
 * ====================================================
 */
/*
 * from: @(#)fdlibm.h 5.1 93/09/24
 */
#ifndef _MATH_PRIVATE_H_
#define _MATH_PRIVATE_H_
#include <endian.h>
#include <stdint.h>
#include <sys/types.h>
#include <fenv.h>
#include <float.h>
#include <get-rounding-mode.h>
/* Gather machine dependent _Floatn support. */
#include <bits/floatn.h>
/* The original fdlibm code used statements like:
     n0 = ((*(int*)&one)>>29)^1;     * index of high word *
     ix0 = *(n0+(int*)&x);           * high word of x *
     ix1 = *((1-n0)+(int*)&x);       * low word of x *
   to dig two 32 bit words out of the 64 bit IEEE floating point
   value.  That is non-ANSI, and, moreover, the gcc instruction
   scheduler gets it wrong.  We instead use the following macros.
   Unlike the original code, we determine the endianness at compile
   time, not at run time; I don't see much benefit to selecting
   endianness at run time.  */
/* A union which permits us to convert between a double and two 32 bit
   ints.  */
#if __FLOAT_WORD_ORDER == __BIG_ENDIAN
typedef union
{
  double value;
  struct
  {
    uint32_t msw;
    uint32_t lsw;
  } parts;
  uint64_t word;
} ieee_double_shape_type;
#endif
#if __FLOAT_WORD_ORDER == __LITTLE_ENDIAN
typedef union
{
  double value;
  struct
  {
    uint32_t lsw;
    uint32_t msw;
  } parts;
  uint64_t word;
} ieee_double_shape_type;
#endif
/* Get two 32 bit ints from a double. */
#define EXTRACT_WORDS(ix0,ix1,d) \
do { \
  ieee_double_shape_type ew_u; \
  ew_u.value = (d); \
  (ix0) = ew_u.parts.msw; \
  (ix1) = ew_u.parts.lsw; \
} while (0)
/* Get the more significant 32 bit int from a double. */
#ifndef GET_HIGH_WORD
# define GET_HIGH_WORD(i,d) \
do { \
  ieee_double_shape_type gh_u; \
  gh_u.value = (d); \
  (i) = gh_u.parts.msw; \
} while (0)
#endif
/* Get the less significant 32 bit int from a double. */
#ifndef GET_LOW_WORD
# define GET_LOW_WORD(i,d) \
do { \
  ieee_double_shape_type gl_u; \
  gl_u.value = (d); \
  (i) = gl_u.parts.lsw; \
} while (0)
#endif
/* Get all in one, efficient on 64-bit machines. */
#ifndef EXTRACT_WORDS64
# define EXTRACT_WORDS64(i,d) \
do { \
  ieee_double_shape_type gh_u; \
  gh_u.value = (d); \
  (i) = gh_u.word; \
} while (0)
#endif
/* Set a double from two 32 bit ints. */
#ifndef INSERT_WORDS
# define INSERT_WORDS(d,ix0,ix1) \
do { \
  ieee_double_shape_type iw_u; \
  iw_u.parts.msw = (ix0); \
  iw_u.parts.lsw = (ix1); \
  (d) = iw_u.value; \
} while (0)
#endif
/* Get all in one, efficient on 64-bit machines. */
#ifndef INSERT_WORDS64
# define INSERT_WORDS64(d,i) \
do { \
  ieee_double_shape_type iw_u; \
  iw_u.word = (i); \
  (d) = iw_u.value; \
} while (0)
#endif
/* Set the more significant 32 bits of a double from an int. */
#ifndef SET_HIGH_WORD
# define SET_HIGH_WORD(d,v) \
do { \
  ieee_double_shape_type sh_u; \
  sh_u.value = (d); \
  sh_u.parts.msw = (v); \
  (d) = sh_u.value; \
} while (0)
#endif
/* Set the less significant 32 bits of a double from an int. */
#ifndef SET_LOW_WORD
# define SET_LOW_WORD(d,v) \
do { \
  ieee_double_shape_type sl_u; \
  sl_u.value = (d); \
  sl_u.parts.lsw = (v); \
  (d) = sl_u.value; \
} while (0)
#endif
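
/* Usage sketch (illustrative only, not part of this header's API): the
   word macros above let libm code inspect and build doubles without any
   FP arithmetic.  A hypothetical test for negative zero:

     int
     is_negative_zero (double d)
     {
       uint32_t hi, lo;
       EXTRACT_WORDS (hi, lo, d);
       return hi == 0x80000000 && lo == 0;
     }

   Conversely, INSERT_WORDS (d, 0x7ff00000, 0) stores +Inf into D on
   IEEE 754 targets.  */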
/* A union which permits us to convert between a float and a 32 bit
   int.  */
typedef union
{
  float value;
  uint32_t word;
} ieee_float_shape_type;
/* Get a 32 bit int from a float. */
#ifndef GET_FLOAT_WORD
# define GET_FLOAT_WORD(i,d) \
do { \
  ieee_float_shape_type gf_u; \
  gf_u.value = (d); \
  (i) = gf_u.word; \
} while (0)
#endif
/* Set a float from a 32 bit int. */
#ifndef SET_FLOAT_WORD
# define SET_FLOAT_WORD(d,i) \
do { \
  ieee_float_shape_type sf_u; \
  sf_u.word = (i); \
  (d) = sf_u.value; \
} while (0)
#endif
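
/* A similar sketch for float (hypothetical helper, for illustration
   only): clearing the sign bit through the bit pattern computes the
   absolute value:

     float
     fabs_via_word (float f)
     {
       uint32_t w;
       GET_FLOAT_WORD (w, f);
       SET_FLOAT_WORD (f, w & 0x7fffffff);
       return f;
     }  */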
/* We need to guarantee an expansion of name when building
   ldbl-128 files as another type (e.g. _Float128).  */
#define mathx_hidden_def(name) hidden_def(name)
/* Get long double macros from a separate header. */
#include <math_ldbl.h>
/* Include function declarations for each floating-point type.  */
#define _Mdouble_ double
#define _MSUF_
#include <math_private_calls.h>
#undef _MSUF_
#undef _Mdouble_
#define _Mdouble_ float
#define _MSUF_ f
#define __MATH_DECLARING_FLOAT
#include <math_private_calls.h>
#undef __MATH_DECLARING_FLOAT
#undef _MSUF_
#undef _Mdouble_
#define _Mdouble_ long double
#define _MSUF_ l
#define __MATH_DECLARING_LONG_DOUBLE
#include <math_private_calls.h>
#undef __MATH_DECLARING_LONG_DOUBLE
#undef _MSUF_
#undef _Mdouble_
#if __HAVE_DISTINCT_FLOAT128
# define _Mdouble_ _Float128
# define _MSUF_ f128
# define __MATH_DECLARING_FLOATN
# include <math_private_calls.h>
# undef __MATH_DECLARING_FLOATN
# undef _MSUF_
# undef _Mdouble_
#endif
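
/* The repeated inclusion above works by token pasting: for each type,
   math_private_calls.h glues the current _MSUF_ onto a base name.  As a
   rough sketch (the real math_private_calls.h is more elaborate), its
   declarations take a form like:

     #define __MSUF_X(x, suffix) x ## suffix
     #define __MSUF_S(x, suffix) __MSUF_X (x, suffix)
     #define __MSUF(x) __MSUF_S (x, _MSUF_)

     extern _Mdouble_ __MSUF (__ieee754_exp) (_Mdouble_);

   so a single textual declaration yields __ieee754_exp, __ieee754_expf,
   __ieee754_expl and, where distinct, __ieee754_expf128.  */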
#if __HAVE_DISTINCT_FLOAT128
/* __builtin_isinf_sign is broken in GCC < 7 for float128. */
# if ! __GNUC_PREREQ (7, 0)
#  include <ieee754_float128.h>
extern inline int
__isinff128 (_Float128 x)
{
  int64_t hx, lx;
  GET_FLOAT128_WORDS64 (hx, lx, x);
  /* LX becomes zero iff X is +/-Inf: the low word must be zero and the
     high word, with the sign bit masked off, must match the infinity
     bit pattern.  */
  lx |= (hx & 0x7fffffffffffffffLL) ^ 0x7fff000000000000LL;
  /* Fold "LX != 0" into the sign bit of LX.  */
  lx |= -lx;
  /* Yield 1 for +Inf, -1 for -Inf, 0 otherwise.  */
  return ~(lx >> 63) & (hx >> 62);
}
# endif
extern inline _Float128
fabsf128 (_Float128 x)
{
  return __builtin_fabsf128 (x);
}
#endif
/* Prototypes for functions of the IBM Accurate Mathematical Library. */
extern double __exp1 (double __x, double __xx, double __error);
extern double __sin (double __x);
extern double __cos (double __x);
extern int __branred (double __x, double *__a, double *__aa);
extern void __doasin (double __x, double __dx, double __v[]);
extern void __dubsin (double __x, double __dx, double __v[]);
extern void __dubcos (double __x, double __dx, double __v[]);
extern double __halfulp (double __x, double __y);
extern double __sin32 (double __x, double __res, double __res1);
extern double __cos32 (double __x, double __res, double __res1);
extern double __mpsin (double __x, double __dx, bool __range_reduce);
extern double __mpcos (double __x, double __dx, bool __range_reduce);
extern double __slowexp (double __x);
extern double __slowpow (double __x, double __y, double __z);
extern void __docos (double __x, double __dx, double __v[]);
#ifndef math_opt_barrier
# define math_opt_barrier(x) \
({ __typeof (x) __x = (x); __asm ("" : "+m" (__x)); __x; })
# define math_force_eval(x) \
({ __typeof (x) __x = (x); __asm __volatile__ ("" : : "m" (__x)); })
#endif
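
/* Usage sketch (hypothetical fragments): math_force_eval makes an
   otherwise-dead computation visible to the FPU, e.g. to raise an
   underflow exception on purpose:

     double tiny = 0x1p-1022;
     math_force_eval (tiny * tiny);   * underflow and inexact raised *

   while math_opt_barrier returns its argument but keeps the compiler
   from constant-folding or moving the operation across it:

     double t = math_opt_barrier (x) - y;  */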
/* math_narrow_eval reduces its floating-point argument to the range
   and precision of its semantic type.  (The original evaluation may
   still occur with excess range and precision, so the result may be
   affected by double rounding.)  */
#if FLT_EVAL_METHOD == 0
# define math_narrow_eval(x) (x)
#else
# if FLT_EVAL_METHOD == 1
#  define excess_precision(type) __builtin_types_compatible_p (type, float)
# else
#  define excess_precision(type) (__builtin_types_compatible_p (type, float) \
                                  || __builtin_types_compatible_p (type, \
                                                                   double))
# endif
# define math_narrow_eval(x) \
  ({ \
    __typeof (x) math_narrow_eval_tmp = (x); \
    if (excess_precision (__typeof (math_narrow_eval_tmp))) \
      __asm__ ("" : "+m" (math_narrow_eval_tmp)); \
    math_narrow_eval_tmp; \
  })
#endif
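
/* Usage sketch: on targets where FLT_EVAL_METHOD is 2 (classic x87,
   for example), a float expression may be carried with excess range
   and precision; math_narrow_eval forces the store that rounds it to
   its semantic type before it is compared or returned (hypothetical
   fragment):

     float z = math_narrow_eval (x * y);  */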
#define fabs_tg(x) __MATH_TG ((x), (__typeof (x)) __builtin_fabs, (x))
#define min_of_type_f FLT_MIN
#define min_of_type_ DBL_MIN
#define min_of_type_l LDBL_MIN
#define min_of_type_f128 FLT128_MIN
#define min_of_type(x) __MATH_TG ((x), (__typeof (x)) min_of_type_, )
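
/* __MATH_TG (from math.h) picks an expansion based on the type of its
   first argument, so fabs_tg (x) becomes the matching __builtin_fabs*
   call and min_of_type (x) becomes FLT_MIN, DBL_MIN, LDBL_MIN or
   FLT128_MIN.  Illustrative behavior, not literal preprocessor output:

     float f;
     min_of_type (f)   * acts as FLT_MIN *
     fabs_tg (f)       * acts as __builtin_fabsf (f) *  */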
/* If X (which is not a NaN) is subnormal, force an underflow
exception. */
#define math_check_force_underflow(x) \
do \
{ \
__typeof (x) force_underflow_tmp = (x); \
if (fabs_tg (force_underflow_tmp) \
< min_of_type (force_underflow_tmp)) \
{ \
__typeof (force_underflow_tmp) force_underflow_tmp2 \
= force_underflow_tmp * force_underflow_tmp; \
math_force_eval (force_underflow_tmp2); \
} \
} \
while (0)
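/* Illustrative use (a sketch, not code from this header): a libm
   routine that has computed a possibly-subnormal, inexact result can
   force the underflow exception with

     double result = tiny_computation (x);
     math_check_force_underflow (result);
     return result;

   where tiny_computation stands for whatever produced the value.
   Squaring the value inside the macro yields an operation that
   underflows whenever the result is below the least normal magnitude,
   and math_force_eval keeps the compiler from discarding it.  */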
/* Likewise, but X is also known to be nonnegative. */
#define math_check_force_underflow_nonneg(x) \
do \
{ \
__typeof (x) force_underflow_tmp = (x); \
if (force_underflow_tmp \
< min_of_type (force_underflow_tmp)) \
{ \
__typeof (force_underflow_tmp) force_underflow_tmp2 \
= force_underflow_tmp * force_underflow_tmp; \
math_force_eval (force_underflow_tmp2); \
} \
} \
while (0)
/* Likewise, for both real and imaginary parts of a complex
result. */
#define math_check_force_underflow_complex(x) \
do \
{ \
__typeof (x) force_underflow_complex_tmp = (x); \
math_check_force_underflow (__real__ force_underflow_complex_tmp); \
math_check_force_underflow (__imag__ force_underflow_complex_tmp); \
} \
while (0)
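/* For example (illustrative only), a complex function whose real or
   imaginary part may be tiny could end with

     __complex__ double retval = kernel (x);
     math_check_force_underflow_complex (retval);
     return retval;

   where kernel is a hypothetical helper; the macro simply applies the
   real check above to each part in turn.  */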
/* The standards only specify one variant of the fenv.h interfaces.
But at least for some architectures we can be more efficient if we
know what operations are going to be performed. Therefore we
define additional interfaces. By default they refer to the normal
interfaces. */
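/* For instance (an assumed override, not one defined here), an
   architecture header processed before this point might supply a
   cheaper float variant with

     #define libc_feholdexceptf my_arch_feholdexceptf

   and any name still undefined falls back to the generic default
   below.  */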
static __always_inline void
default_libc_feholdexcept (fenv_t *e)
{
(void) __feholdexcept (e);
}
#ifndef libc_feholdexcept
# define libc_feholdexcept default_libc_feholdexcept
#endif
#ifndef libc_feholdexceptf
# define libc_feholdexceptf default_libc_feholdexcept
#endif
#ifndef libc_feholdexceptl
# define libc_feholdexceptl default_libc_feholdexcept
#endif
static __always_inline void
default_libc_fesetround (int r)
{
(void) __fesetround (r);
}
#ifndef libc_fesetround
# define libc_fesetround default_libc_fesetround
#endif
#ifndef libc_fesetroundf
# define libc_fesetroundf default_libc_fesetround
#endif
#ifndef libc_fesetroundl
# define libc_fesetroundl default_libc_fesetround
#endif
static __always_inline void
default_libc_feholdexcept_setround (fenv_t *e, int r)
{
__feholdexcept (e);
__fesetround (r);
}
#ifndef libc_feholdexcept_setround
# define libc_feholdexcept_setround default_libc_feholdexcept_setround
#endif
#ifndef libc_feholdexcept_setroundf
# define libc_feholdexcept_setroundf default_libc_feholdexcept_setround
#endif
#ifndef libc_feholdexcept_setroundl
# define libc_feholdexcept_setroundl default_libc_feholdexcept_setround
#endif
#ifndef libc_feholdsetround_53bit
# define libc_feholdsetround_53bit libc_feholdsetround
#endif
#ifndef libc_fetestexcept
# define libc_fetestexcept fetestexcept
#endif
#ifndef libc_fetestexceptf
# define libc_fetestexceptf fetestexcept
#endif
#ifndef libc_fetestexceptl
# define libc_fetestexceptl fetestexcept
#endif
static __always_inline void
default_libc_fesetenv (fenv_t *e)
{
(void) __fesetenv (e);
}
#ifndef libc_fesetenv
# define libc_fesetenv default_libc_fesetenv
#endif
#ifndef libc_fesetenvf
# define libc_fesetenvf default_libc_fesetenv
#endif
#ifndef libc_fesetenvl
# define libc_fesetenvl default_libc_fesetenv
#endif
static __always_inline void
default_libc_feupdateenv (fenv_t *e)
{
(void) __feupdateenv (e);
}
#ifndef libc_feupdateenv
# define libc_feupdateenv default_libc_feupdateenv
#endif
#ifndef libc_feupdateenvf
# define libc_feupdateenvf default_libc_feupdateenv
#endif
#ifndef libc_feupdateenvl
# define libc_feupdateenvl default_libc_feupdateenv
#endif
#ifndef libc_feresetround_53bit
# define libc_feresetround_53bit libc_feresetround
#endif
static __always_inline int
default_libc_feupdateenv_test (fenv_t *e, int ex)
{
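  /* Test the requested flags before restoring the environment: flags
     recorded in *E must not leak into the result.  __feupdateenv then
     installs *E and re-raises the exceptions seen here.  */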
int ret = fetestexcept (ex);
__feupdateenv (e);
return ret;
}
#ifndef libc_feupdateenv_test
# define libc_feupdateenv_test default_libc_feupdateenv_test
#endif
#ifndef libc_feupdateenv_testf
# define libc_feupdateenv_testf default_libc_feupdateenv_test
#endif
#ifndef libc_feupdateenv_testl
# define libc_feupdateenv_testl default_libc_feupdateenv_test
#endif
/* Save and set the rounding mode. The use of fenv_t to store the old mode
allows a target-specific version of this function to avoid converting the
rounding mode from the fpu format. By default we have no choice but to
manipulate the entire env. */
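/* A sketch of the intended pairing (illustrative, not part of this
   header):

     fenv_t env;
     libc_feholdsetround (&env, FE_TONEAREST);
     (computation that must run under round-to-nearest)
     libc_feresetround (&env);

   With the defaults below this holds and replays exceptions as well;
   a target-specific version may restore only the rounding mode.  */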
#ifndef libc_feholdsetround
# define libc_feholdsetround libc_feholdexcept_setround
#endif
#ifndef libc_feholdsetroundf
# define libc_feholdsetroundf libc_feholdexcept_setroundf
#endif
#ifndef libc_feholdsetroundl
# define libc_feholdsetroundl libc_feholdexcept_setroundl
#endif
/* ... and the reverse. */
#ifndef libc_feresetround
# define libc_feresetround libc_feupdateenv
#endif
#ifndef libc_feresetroundf
# define libc_feresetroundf libc_feupdateenvf
#endif
#ifndef libc_feresetroundl
# define libc_feresetroundl libc_feupdateenvl
#endif
/* ... and a version that may also discard exceptions. */
#ifndef libc_feresetround_noex
# define libc_feresetround_noex libc_fesetenv
#endif
#ifndef libc_feresetround_noexf
# define libc_feresetround_noexf libc_fesetenvf
#endif
#ifndef libc_feresetround_noexl
# define libc_feresetround_noexl libc_fesetenvl
#endif
#ifndef HAVE_RM_CTX
# define HAVE_RM_CTX 0
#endif
#if HAVE_RM_CTX
/* Set/Restore Rounding Modes only when necessary. If defined, these functions
set/restore floating point state only if the state needed within the lexical
block is different from the current state. This saves a lot of time when
the floating point unit is much slower than the fixed point units. */
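/* The expected calling pattern is (a sketch, assuming the target
   supplies the libc_fe*_ctx functions):

     struct rm_ctx ctx;
     libc_feholdsetround_ctx (&ctx, FE_TONEAREST);
     (computation at a known rounding mode)
     libc_feresetround_ctx (&ctx);

   and both calls can be nearly free when the current mode already
   matches the requested one.  */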
# ifndef libc_feholdsetround_noex_ctx
# define libc_feholdsetround_noex_ctx libc_feholdsetround_ctx
# endif
# ifndef libc_feholdsetround_noexf_ctx
# define libc_feholdsetround_noexf_ctx libc_feholdsetroundf_ctx
# endif
# ifndef libc_feholdsetround_noexl_ctx
# define libc_feholdsetround_noexl_ctx libc_feholdsetroundl_ctx
# endif
# ifndef libc_feresetround_noex_ctx
# define libc_feresetround_noex_ctx libc_fesetenv_ctx
# endif
# ifndef libc_feresetround_noexf_ctx
# define libc_feresetround_noexf_ctx libc_fesetenvf_ctx
# endif
# ifndef libc_feresetround_noexl_ctx
# define libc_feresetround_noexl_ctx libc_fesetenvl_ctx
# endif
#else
/* Default implementation using standard fenv functions.
Avoid unnecessary rounding mode changes by first checking the
current rounding mode. Note the use of __glibc_unlikely is
important for performance. */
static __always_inline void
libc_feholdsetround_ctx (struct rm_ctx *ctx, int round)
{
ctx->updated_status = false;
/* Update rounding mode only if different. */
if (__glibc_unlikely (round != get_rounding_mode ()))
{
ctx->updated_status = true;
__fegetenv (&ctx->env);
__fesetround (round);
}
}
static __always_inline void
libc_feresetround_ctx (struct rm_ctx *ctx)
{
/* Restore the rounding mode if updated. */
if (__glibc_unlikely (ctx->updated_status))
__feupdateenv (&ctx->env);
}
static __always_inline void
libc_feholdsetround_noex_ctx (struct rm_ctx *ctx, int round)
{
/* Save exception flags and rounding mode. */
__fegetenv (&ctx->env);
/* Update rounding mode only if different. */
if (__glibc_unlikely (round != get_rounding_mode ()))
__fesetround (round);
}
static __always_inline void
libc_feresetround_noex_ctx (struct rm_ctx *ctx)
{
/* Restore exception flags and rounding mode. */
__fesetenv (&ctx->env);
}
# define libc_feholdsetroundf_ctx libc_feholdsetround_ctx
# define libc_feholdsetroundl_ctx libc_feholdsetround_ctx
# define libc_feresetroundf_ctx libc_feresetround_ctx
# define libc_feresetroundl_ctx libc_feresetround_ctx
# define libc_feholdsetround_noexf_ctx libc_feholdsetround_noex_ctx
# define libc_feholdsetround_noexl_ctx libc_feholdsetround_noex_ctx
# define libc_feresetround_noexf_ctx libc_feresetround_noex_ctx
# define libc_feresetround_noexl_ctx libc_feresetround_noex_ctx
#endif
#ifndef libc_feholdsetround_53bit_ctx
# define libc_feholdsetround_53bit_ctx libc_feholdsetround_ctx
#endif
#ifndef libc_feresetround_53bit_ctx
# define libc_feresetround_53bit_ctx libc_feresetround_ctx
#endif
#define SET_RESTORE_ROUND_GENERIC(RM,ROUNDFUNC,CLEANUPFUNC) \
struct rm_ctx ctx __attribute__((cleanup (CLEANUPFUNC ## _ctx))); \
ROUNDFUNC ## _ctx (&ctx, (RM))
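/* For illustration: with ROUNDFUNC = libc_feholdsetround and CLEANUPFUNC =
   libc_feresetround, the macro above pastes "_ctx" onto both names, so
   SET_RESTORE_ROUND (RM) below expands roughly to:

     struct rm_ctx ctx __attribute__ ((cleanup (libc_feresetround_ctx)));
     libc_feholdsetround_ctx (&ctx, (RM));

   The GCC cleanup attribute arranges for the reset function to run
   automatically on every exit path from the enclosing block.  */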
/* Set the rounding mode within a lexical block. Restore the rounding mode to
the value at the start of the block. The exception mode must be preserved.
Exceptions raised within the block must be set in the exception flags.
Non-stop mode may be enabled inside the block. */
#define SET_RESTORE_ROUND(RM) \
SET_RESTORE_ROUND_GENERIC (RM, libc_feholdsetround, libc_feresetround)
#define SET_RESTORE_ROUNDF(RM) \
SET_RESTORE_ROUND_GENERIC (RM, libc_feholdsetroundf, libc_feresetroundf)
#define SET_RESTORE_ROUNDL(RM) \
SET_RESTORE_ROUND_GENERIC (RM, libc_feholdsetroundl, libc_feresetroundl)
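/* Example usage (a hypothetical function, not part of this header):

     double
     add_downward (double x, double y)
     {
       double ret;
       SET_RESTORE_ROUND (FE_DOWNWARD);
       ret = x + y;
       return ret;
     }

   The addition is evaluated in round-downward mode; the caller's rounding
   mode is restored automatically when the block is left, on any exit path,
   via the cleanup handler declared by the macro.  */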
/* Set the rounding mode within a lexical block. Restore the rounding mode to
the value at the start of the block. The exception mode must be preserved.
Exceptions raised within the block must be discarded, and exception flags
are restored to the value at the start of the block.
Non-stop mode may be enabled inside the block. */
#define SET_RESTORE_ROUND_NOEX(RM) \
SET_RESTORE_ROUND_GENERIC (RM, libc_feholdsetround_noex, \
libc_feresetround_noex)
#define SET_RESTORE_ROUND_NOEXF(RM) \
SET_RESTORE_ROUND_GENERIC (RM, libc_feholdsetround_noexf, \
libc_feresetround_noexf)
#define SET_RESTORE_ROUND_NOEXL(RM) \
SET_RESTORE_ROUND_GENERIC (RM, libc_feholdsetround_noexl, \
libc_feresetround_noexl)
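/* Example usage of the _NOEX form (a hypothetical function): unlike
   SET_RESTORE_ROUND, exceptions raised inside the block are discarded,
   because the generic reset above calls __fesetenv on the environment
   saved at entry rather than __feupdateenv:

     double
     probe_tonearest (double x)
     {
       double ret;
       SET_RESTORE_ROUND_NOEX (FE_TONEAREST);
       ret = x * x;
       return ret;
     }

   Any flags raised by the multiplication are thrown away; the flags and
   rounding mode active on entry are reinstated on scope exit.  */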
/* Like SET_RESTORE_ROUND, but also set rounding precision to 53 bits. */
#define SET_RESTORE_ROUND_53BIT(RM) \
SET_RESTORE_ROUND_GENERIC (RM, libc_feholdsetround_53bit, \
libc_feresetround_53bit)
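/* A sketch of the 53-bit form (hypothetical usage): on targets whose FPU
   computes in a wider format by default (for instance x87 extended
   precision), an architecture-specific override of the _53bit_ctx hooks
   limits intermediate results to 53-bit precision, so double arithmetic
   rounds as IEEE double:

     double
     sum53 (double a, double b)
     {
       double r;
       SET_RESTORE_ROUND_53BIT (FE_TONEAREST);
       r = a + b;
       return r;
     }

   With the generic definitions above, the 53-bit form simply behaves like
   SET_RESTORE_ROUND.  */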
#endif /* _MATH_PRIVATE_H_ */