This patch replaces i386 assembly versions of e_exp2f with generic
e_exp2f.c. For workload-spec2017.wrf, on Nehalem, it improves
performance by:
                       Before      After       Improvement
reciprocal-throughput  112.996     40.0454     182%
latency                126.581     54.4479     132%
On Skylake, it improves performance by:
                       Before      After       Improvement
reciprocal-throughput  113.14      39.447      186%
latency                136.068     55.684      144%
On IvyBridge with --disable-multi-arch, it improves performance by:
                       Before      After       Improvement
reciprocal-throughput  132.521     40.3759     228%
latency                145.791     58.4587     149%
* sysdeps/i386/fpu/e_exp2f.S: Removed.
* sysdeps/i386/fpu/w_exp2f.c: Likewise.
* sysdeps/i386/fpu/libm-test-ulps: Updated for generic e_exp2f.c.
* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Likewise.
* sysdeps/i386/i686/fpu/multiarch/Makefile (libm-sysdep_routines):
Add e_exp2f-sse2.
(CFLAGS-e_exp2f-sse2.c): New.
* sysdeps/i386/i686/fpu/multiarch/e_exp2f-sse2.c: New file.
* sysdeps/i386/i686/fpu/multiarch/e_exp2f.c: Likewise.
The bits/floatn.h header currently only has defines relating to
_Float128. This patch adds defines relating to other _FloatN /
_FloatNx types.
The approach taken is to add defines for all _FloatN / _FloatNx types
known to GCC, and to put them in a common bits/floatn-common.h header
included at the end of all the individual bits/floatn.h headers. If
in future some defines become different for different glibc
configurations, they will move out into the separate bits/floatn.h
headers.
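For illustration, the common header contains defines of the following
kind for each type (a simplified sketch; the exact values and
conditions in bits/floatn-common.h may differ):
--
/* Defined to 1 if the current compiler invocation provides a
   floating-point type with the right format for _Float32.  */
#define __HAVE_FLOAT32 1

/* Defined to 1 if the corresponding __HAVE_<type> macro is 1 and the
   type is ABI-distinct from the corresponding standard C type.  */
#define __HAVE_DISTINCT_FLOAT32 0

/* Macro to append the constant suffix for the type, usable even with
   compilers that lack the _FloatN literal suffixes.  */
#if __GNUC_PREREQ (7, 0) && !defined __cplusplus
# define __f32(x) x##f32
#else
# define __f32(x) x##f
#endif
--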
Some defines are expected always to be the same across glibc ports.
Corresponding defines are nevertheless put in this header. The intent
is that where there are conditionals (in headers or in non-installed
files) that can just repeat the same or nearly the same logic for each
floating-point type, they should do so, even if in fact the cases for
some types could be unconditionally present or absent because the same
conditionals are true or false for all glibc configurations. This
should make the glibc code with such conditionals easier to read,
because the reader can just see that the same conditionals are
repeated for each type, rather than seeing different conditionals for
different types and needing to reason, at each location with such
differences, why those differences are indeed correct there. (Cases
involving per-format rather than per-type logic are more likely still
to need differences in how they handle different types.)
Having such defines and conditionals also helps in incremental
preparation for adding _Float32 / _Float64 / _Float32x / _Float64x
function aliases. I intend subsequent patches to add such
conditionals corresponding to those already present for _Float128, as
well as making more architecture-specific function implementations use
common macros to define aliases in preparation for adding such _FloatN
/ _FloatNx aliases.
Tested for x86_64.
* bits/floatn-common.h: New file.
* math/Makefile (headers): Add bits/floatn-common.h.
* bits/floatn.h: Include <bits/floatn-common.h>.
* sysdeps/ia64/bits/floatn.h: Likewise.
* sysdeps/ieee754/ldbl-128/bits/floatn.h: Likewise.
* sysdeps/mips/ieee754/bits/floatn.h: Likewise.
* sysdeps/powerpc/bits/floatn.h: Likewise.
* sysdeps/x86/bits/floatn.h: Likewise.
GCC 8 emits a warning for aliases between functions with incompatible
types, and such aliases are used extensively in C implementations of
ifunc resolvers (for instance weak_alias from the internal symbol name
to the external one, or libc_hidden_def to set up the ifunc for
internal use).
This breaks the build when the ifunc resolver is not defined using the
GCC attribute extension (HAVE_GCC_IFUNC being 0).  Although this
compiler feature is enabled by default for all architectures that
currently have multiarch support, a user might still try to build
glibc with a compiler lacking the extension.  In that case this patch
simply disables the multiarch folder in the sysdeps selection.
GCC 7 and before still build the ifuncs regardless of compiler support
(although, without the attribute support, the debug information is not
optimal).
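For example, the kind of construct that triggers the warning (reported
by GCC 8 under -Wattribute-alias, which -Werror turns into a build
failure) is an alias whose declared type does not match the aliased
symbol, as in this minimal sketch (names are illustrative, not glibc
code):
--
#include <string.h>

/* An ifunc resolver written in C returns a pointer to the chosen
   implementation...  */
void *my_memcpy_resolver (void) { return (void *) memcpy; }

/* ... but the exported symbol is declared with the function's own
   type and aliased to the resolver, so the two types are incompatible
   and GCC 8 warns.  */
extern __typeof (memcpy) my_memcpy
  __attribute__ ((alias ("my_memcpy_resolver")));
--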
Checked with builds on architectures with multiarch support (aarch64,
arm, sparc, s390, powerpc, x86_64, i386), with multiarch enabled and
disabled, and with GCC 7 and GCC 8.
	* configure.ac (libc_cv_gcc_incompatible_alias): New define:
	indicates whether the compiler emits a warning for aliases between
	functions with incompatible types.
As noted by Florian Weimer, the current Linux posix_spawn implementation
can trigger an assert if the auxiliary process is terminated before
actually setting the err member:
340   /* Child must set args.err to something non-negative - we rely on
341      the parent and child sharing VM.  */
342   args.err = -1;
[...]
362   new_pid = CLONE (__spawni_child, STACK (stack, stack_size), stack_size,
363                    CLONE_VM | CLONE_VFORK | SIGCHLD, &args);
364
365   if (new_pid > 0)
366     {
367       ec = args.err;
368       assert (ec >= 0);
Another possible issue is killing the child between setting the err
member and actually calling execve.  In this case the process will not
run, but posix_spawn will also not report any error:
269
270   args->err = 0;
271   args->exec (args->file, args->argv, args->envp);
As suggested by Andreas Schwab, this patch removes the faulty assert
and also treats any signal that arrives before fork and execve as if
the spawn had been successful (thus leaving it to the caller to figure
this out).  Unlike Florian, I cannot see why using atomics to set err
would help here: the code essentially runs sequentially (due to
CLONE_VFORK), and I think it would not be legal for the compiler to
evaluate ec without checking the new_pid result (thus there is no need
for a compiler barrier).
Summarizing the possible scenarios on posix_spawn execution, we
have:
1. For the default case with a successful execution, args.err will be 0,
   the pid will not be collected, and it will be reported to the caller.
2. For the default failure case, args.err will be positive and the pid
   will be collected by waitpid.  An error will be reported to the
   caller.
3. For the unlikely case where the process was terminated and not
   collected by a caller signal handler, it will be reported as a
   successful execution and not be collected by posix_spawn (since
   args.err will be 0).  The caller will need to handle this case
   itself (see the sketch after this list).
4. For the unlikely case where the process was terminated and collected
   by the caller, we have 3 other possible scenarios:
   4.1. The auxiliary process was terminated with args.err equal to 0:
        it will be handled as 1. (so it does not matter if we hit the
        pid reuse race, since we won't possibly collect an unexpected
        process).
   4.2. The auxiliary process was terminated after execve (due to a
        failure in calling it) and before setting args.err to -1: it
        will also be handled as 1., but with the issue of not being
        able to report a possible execve failure to the caller.
   4.3. The auxiliary process was terminated after args.err was set
        to -1: this is the case where it is possible to hit the pid
        reuse race, where we will need to collect the auxiliary pid
        but cannot be sure it is the expected one.  I think for this
        case we need to change waitpid to use WNOHANG, to avoid
        hanging indefinitely on the call, and report an error to the
        caller, since we can't differentiate between a default failure
        as in 2. and a possible pid reuse race issue.
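A minimal caller-side sketch of scenario 3 (illustrative names, not
part of the patch): posix_spawn reporting success does not guarantee
the child reached execve, so the caller still has to look at the
status collected by waitpid.
--
#include <spawn.h>
#include <stdio.h>
#include <sys/wait.h>

extern char **environ;

int
run (char *const argv[])
{
  pid_t pid;
  int err = posix_spawn (&pid, argv[0], NULL, NULL, argv, environ);
  if (err != 0)
    return err;          /* Scenario 2: spawn itself reported a failure.  */

  int status;
  if (waitpid (pid, &status, 0) == pid && WIFSIGNALED (status))
    /* Scenario 3: the child was killed before it could run; posix_spawn
       reported success, so the caller has to notice it here.  */
    fprintf (stderr, "child killed by signal %d\n", WTERMSIG (status));
  return 0;
}
--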
Checked on x86_64-linux-gnu.
* sysdeps/unix/sysv/linux/spawni.c (__spawnix): Handle the case where
the auxiliary process is terminated by a signal before calling _exit
or execve.
In _dl_runtime_resolve, use fxsave/xsave/xsavec to preserve all vector,
mask and bound registers.  This simplifies _dl_runtime_resolve and
supports different calling conventions.  ld.so code size is reduced by
more than 1 KB.  However, using fxsave/xsave/xsavec takes a few more
cycles than saving and restoring vector and bound registers
individually.
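The idea, in a simplified C sketch using the compiler intrinsics rather
than the actual dl-trampoline.S assembly (resolve_symbol, the mask bits
and the buffer size below are illustrative only; the real buffer size
and alignment come from CPUID):
--
/* Build with -mxsave -mxsavec.  */
#include <immintrin.h>
#include <stdint.h>

/* Illustrative state-component mask: x87, SSE and AVX state.  */
#define STATE_SAVE_MASK (1 << 0 | 1 << 1 | 1 << 2)

void *resolve_symbol (void *link_map, unsigned long reloc_arg);

void *
resolve_with_xsavec (void *link_map, unsigned long reloc_arg)
{
  /* XSAVE areas must be 64-byte aligned; the real buffer lives on the
     stack and its size is taken from cpu_features.  */
  static _Alignas (64) uint8_t xsave_area[4096];

  _xsavec (xsave_area, STATE_SAVE_MASK);
  void *addr = resolve_symbol (link_map, reloc_arg);
  _xrstor (xsave_area, STATE_SAVE_MASK);
  return addr;
}
--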
Latency for _dl_runtime_resolve to lookup the function, foo, from one
shared library plus libc.so:
                              Before     After     Change
Westmere (SSE)/fxsave         345        866       151%
IvyBridge (AVX)/xsave         420        643       53%
Haswell (AVX)/xsave           713        1252      75%
Skylake (AVX+MPX)/xsavec      559        719       28%
Skylake (AVX512+MPX)/xsavec   145        272       87%
Ryzen (AVX)/xsavec            280        553       97%
This is the worst case, where the portion of time spent saving and
restoring registers is larger than in the majority of cases.  With the
smaller _dl_runtime_resolve code size, the overall performance impact
is negligible.  On IvyBridge, differences in build and test time of
binutils with lazy binding GCC and binutils are noise.  On Westmere,
differences in bootstrap and "make check" time of GCC 7 with lazy
binding GCC and binutils are also noise.
[BZ #21265]
* sysdeps/x86/cpu-features-offsets.sym (XSAVE_STATE_SIZE_OFFSET):
New.
* sysdeps/x86/cpu-features.c: Include <libc-pointer-arith.h>.
(get_common_indeces): Set xsave_state_size, xsave_state_full_size
and bit_arch_XSAVEC_Usable if needed.
(init_cpu_features): Remove bit_arch_Use_dl_runtime_resolve_slow
and bit_arch_Use_dl_runtime_resolve_opt.
* sysdeps/x86/cpu-features.h (bit_arch_Use_dl_runtime_resolve_opt):
Removed.
(bit_arch_Use_dl_runtime_resolve_slow): Likewise.
(bit_arch_Prefer_No_AVX512): Updated.
(bit_arch_MathVec_Prefer_No_AVX512): Likewise.
(bit_arch_XSAVEC_Usable): New.
(STATE_SAVE_OFFSET): Likewise.
(STATE_SAVE_MASK): Likewise.
[__ASSEMBLER__]: Include <cpu-features-offsets.h>.
(cpu_features): Add xsave_state_size and xsave_state_full_size.
(index_arch_Use_dl_runtime_resolve_opt): Removed.
(index_arch_Use_dl_runtime_resolve_slow): Likewise.
(index_arch_XSAVEC_Usable): New.
* sysdeps/x86/cpu-tunables.c (TUNABLE_CALLBACK (set_hwcaps)):
Support XSAVEC_Usable. Remove Use_dl_runtime_resolve_slow.
* sysdeps/x86_64/Makefile (tst-x86_64-1-ENV): New if tunables
is enabled.
* sysdeps/x86_64/dl-machine.h (elf_machine_runtime_setup):
Replace _dl_runtime_resolve_sse, _dl_runtime_resolve_avx,
_dl_runtime_resolve_avx_slow, _dl_runtime_resolve_avx_opt,
_dl_runtime_resolve_avx512 and _dl_runtime_resolve_avx512_opt
with _dl_runtime_resolve_fxsave, _dl_runtime_resolve_xsave and
_dl_runtime_resolve_xsavec.
* sysdeps/x86_64/dl-trampoline.S (DL_RUNTIME_UNALIGNED_VEC_SIZE):
Removed.
(DL_RUNTIME_RESOLVE_REALIGN_STACK): Check STATE_SAVE_ALIGNMENT
instead of VEC_SIZE.
(REGISTER_SAVE_BND0): Removed.
(REGISTER_SAVE_BND1): Likewise.
(REGISTER_SAVE_BND3): Likewise.
(REGISTER_SAVE_RAX): Always defined to 0.
(VMOV): Removed.
(_dl_runtime_resolve_avx): Likewise.
(_dl_runtime_resolve_avx_slow): Likewise.
(_dl_runtime_resolve_avx_opt): Likewise.
(_dl_runtime_resolve_avx512): Likewise.
(_dl_runtime_resolve_avx512_opt): Likewise.
(_dl_runtime_resolve_sse): Likewise.
(_dl_runtime_resolve_sse_vex): Likewise.
(USE_FXSAVE): New.
(_dl_runtime_resolve_fxsave): Likewise.
(USE_XSAVE): Likewise.
(_dl_runtime_resolve_xsave): Likewise.
(USE_XSAVEC): Likewise.
(_dl_runtime_resolve_xsavec): Likewise.
* sysdeps/x86_64/dl-trampoline.h (_dl_runtime_resolve_avx512):
Removed.
(_dl_runtime_resolve_avx512_opt): Likewise.
(_dl_runtime_resolve_avx): Likewise.
(_dl_runtime_resolve_avx_opt): Likewise.
(_dl_runtime_resolve_sse): Likewise.
(_dl_runtime_resolve_sse_vex): Likewise.
(_dl_runtime_resolve_fxsave): New.
(_dl_runtime_resolve_xsave): Likewise.
(_dl_runtime_resolve_xsavec): Likewise.
This patch adds single-threaded fast paths to _int_free.
Bypass the explicit locking for larger allocations.
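A sketch of the shape of the change (free_chunk_unlocked is a
hypothetical placeholder; SINGLE_THREAD_P and the __libc_lock macros
are glibc internals, and the real _int_free code differs in detail):
--
if (!have_lock)
  {
    if (SINGLE_THREAD_P)
      /* Single-threaded: free the chunk without taking the arena lock.  */
      free_chunk_unlocked (av, p);
    else
      {
        __libc_lock_lock (av->mutex);
        free_chunk_unlocked (av, p);
        __libc_lock_unlock (av->mutex);
      }
  }
--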
* malloc/malloc.c (_int_free): Add SINGLE_THREAD_P fast paths.
Remove the bogus targets (and source) that supposedly build ga_test.
This code was added to resolv very early in the development process
but does not appear to be an actual test program. The target for
building this file is tests but because the glibc Make system is
built the way it is, the target is overridden by higher-level tests
targets and, therefore, the ga_test program is never built. Removing
the target and the source code makes the resolv/Makefile less confusing.
Tested by building and running 'make check' on a 64-bit host running
kernel 4.10.0-19, configured with
--prefix=/home/hawkinsw/code/glibc-build/install
--enable-hardcoded-path-in-tests
--disable-mathvec
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
When --enable-static-pie is used to configure glibc, we need to use
_dl_relocate_static_pie to compute load address in static PIE.
* sysdeps/m68k/dl-machine.h (elf_machine_load_address): Use
_dl_relocate_static_pie instead of _dl_start to compute load
address in static PIE.
After commit 37f802f864 (Remove
__need_IOV_MAX and __need_FOPEN_MAX), UIO_MAXIOV is no longer supplied
(indirectly) through <bits/stdio_lim.h>, so sysdeps/posix/sysconf.c no
longer sees the definition.
This patch adds a MIPS-specific bits/floatn.h header. This header is
identical to the ldbl-128 version except for the comment at the top;
the purpose is to ensure that a 32-bit MIPS build installs a header
that is the same as in a 64-bit MIPS build and so properly shows
_Float128 support to be available for 64-bit compilations, on the
general principle of an installation for one multilib providing
headers also suitable for other multilibs.
Tested with build-many-glibcs.py.
* sysdeps/mips/ieee754/bits/floatn.h: New file.
Similar to bug 21987 for SPARC, MIPS64 wrongly installs the ldbl-128
version of bits/long-double.h, meaning incorrect results when using
headers installed from a 64-bit installation for a 32-bit build. (I
haven't actually seen this cause build failures before its interaction
with bits/floatn.h did so - installed headers wrongly expecting
_Float128 to be available in a 32-bit configuration.)
This patch fixes the bug by moving the MIPS header to
sysdeps/mips/ieee754, which comes before sysdeps/ieee754/ldbl-128 in
the sysdeps directory ordering. (bits/floatn.h will need a similar
fix - duplicating the ldbl-128 version for MIPS will suffice - for
headers from a 32-bit installation to be correct for 64-bit builds.)
Tested with build-many-glibcs.py (compilers build for
mips64-linux-gnu, where there was previously a libstdc++ build failure
as at
<https://sourceware.org/ml/libc-testresults/2017-q4/msg00130.html>).
[BZ #22322]
* sysdeps/mips/bits/long-double.h: Move to ....
* sysdeps/mips/ieee754/bits/long-double.h: ... here.
This patch fixes a deadlock in the fastbin consistency check.
If we fail the fast check due to concurrent modifications to
the next chunk or system_mem, we should not lock if we already
have the arena lock. Simplify the check to make it obviously
correct.
* malloc/malloc.c (_int_free): Fix deadlock bug in consistency check.
This patch adds support for *f128 function aliases on platforms where
long double has the binary128 format (and thus GCC 7 provides the
_Float128 type with the same ABI as long double but as a distinct type
in terms of C type compatibility). This is the same API as provided
in glibc 2.26 for powerpc64le / x86_64 / x86 / ia64 where _Float128
has a different format from long double, with the bulk of the API
coming from TS 18661-3. All the functions alias the corresponding
long double functions, and __* function names are not provided since
those are only needed once for each floating-point format, not more
than once for different types with the same format (so for example,
-ffinite-math-only maps foof128 to __fool_finite, while type-generic
macros end up calling e.g. __issignalingl for _Float128 arguments on
such platforms).
The preparation for this feature was done in previous patches, so this
one just needs to add the relevant makefile and header definitions,
and update macro definitions of libm_alias_ldouble_other_r, to turn on
the feature, and update documentation and ABI baselines.
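The effect, sketched for a single function (simplified; weak_alias is
glibc's internal aliasing macro from libc-symbols.h, the real code uses
the libm_alias_ldouble macros, and the stub body below stands in for
the actual ldbl-128 implementation):
--
#include <bits/floatn.h>

long double
__sinl (long double x)
{
  /* Stand-in for the real implementation.  */
  return x;
}
weak_alias (__sinl, sinl)
#if __HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128
/* Same format and ABI as long double, so the f128 name is simply
   another alias of the long double entry point.  */
weak_alias (__sinl, sinf128)
#endif
--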
Tested (a) for x86_64, (b) for aarch64, (c) with build-many-glibcs.py
with both GCC 6 and GCC 7.
* sysdeps/ieee754/ldbl-128/Makeconfig: New file.
* sysdeps/ieee754/ldbl-128/bits/floatn.h: Likewise.
* sysdeps/ieee754/ldbl-128/float128-abi.h: Likewise.
* sysdeps/generic/libm-alias-ldouble.h: Include <bits/floatn.h>.
[__HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128]
(libm_alias_ldouble_other_r): Also create _Float128 alias.
* sysdeps/ieee754/ldbl-opt/libm-alias-ldouble.h: Include
<bits/floatn.h>.
[__HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128]
(libm_alias_ldouble_other_r): Also create _Float128 alias.
* manual/math.texi (Mathematics): Document additional architecture
support for _Float128.
* sysdeps/unix/sysv/linux/aarch64/libc.abilist: Update.
* sysdeps/unix/sysv/linux/aarch64/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/alpha/libc.abilist: Likewise.
* sysdeps/unix/sysv/linux/alpha/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/mips/mips64/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/mips/mips64/n32/libc.abilist: Likewise.
* sysdeps/unix/sysv/linux/mips/mips64/n64/libc.abilist: Likewise.
* sysdeps/unix/sysv/linux/s390/s390-32/libc.abilist: Likewise.
* sysdeps/unix/sysv/linux/s390/s390-32/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/s390/s390-64/libc.abilist: Likewise.
* sysdeps/unix/sysv/linux/s390/s390-64/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc32/libc.abilist: Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc32/libm.abilist: Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc64/libc.abilist: Likewise.
* sysdeps/unix/sysv/linux/sparc/sparc64/libm.abilist: Likewise.
This patch rewrites the aarch64 elf_machine_load_address to use the
special _DYNAMIC symbol instead of _dl_start.
The static address of the _DYNAMIC symbol is stored in the first GOT
entry.  Here is the change which makes this solution work (part of
binutils 2.24):
https://sourceware.org/ml/binutils/2013-06/msg00248.html
The i386 and x86_64 targets use the same method as well.
The original implementation relies on the R_AARCH64_ABS32 relocation
being resolved at link time with the static address fitting in 32 bits.
However, in LP64 the address is normally 64 bits wide.  The replacement
is a C version which should be portable in all cases.
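A sketch of that C version (ElfW, attribute_hidden and
elf_machine_dynamic are glibc-internal helpers assumed available;
elf_machine_dynamic returns the link-time address of _DYNAMIC read
from the first GOT entry):
--
static inline ElfW(Addr) __attribute__ ((unused))
elf_machine_load_address (void)
{
  /* The run-time address of _DYNAMIC minus its link-time address
     (stored in GOT[0]) is the load address.  */
  extern ElfW(Dyn) _DYNAMIC[] attribute_hidden;
  return (ElfW(Addr)) &_DYNAMIC - elf_machine_dynamic ();
}
--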
* sysdeps/aarch64/dl-machine.h (elf_machine_load_address): Use
_DYNAMIC symbol to calculate load address.
A performance regression was introduced by commit
84d74e427a "powerpc: Cleanup fenv_private.h".
In the powerpc implementation of SET_RESTORE_ROUND, there is the
following code in the "SET" function (slightly simplified):
--
  old.fenv = fegetenv_register ();
  new.l = (old.l & _FPU_MASK_TRAPS_RN) | r;     (1)
  if (new.l != old.l)                           (2)
    {
      if ((old.l & _FPU_ALL_TRAPS) != 0)
        (void) __fe_mask_env ();
      fesetenv_register (new.fenv);             (3)
--
Line (1) sets the value of "new" to the current value of FPSCR,
but masks off summary bits, exceptions, non-IEEE mode, and
rounding mode, then ORs in the new rounding mode.
Line (2) compares this new value to the current value in order to
avoid setting a new value in the FPSCR (line (3)) unless something
significant has changed (exception enables or rounding mode).
The summary bits are not germane to the comparison, but are cleared
in "new" and preserved in "old", resulting in false negative
comparisons, and unnecessarily setting the FPSCR in those cases
with associated negative performance impacts.
The solution is to treat the summaries identically for "new" and "old":
- save them in SET
- leave them alone otherwise
- restore the saved values in RESTORE
Also minor changes:
- expand _FPU_MASK_RN to 64bit hex, to match other MASKs
- treat bit 52 (left-to-right) as reserved (since it is)
* sysdeps/powerpc/fpu/fenv_private.h (_FPU_MASK_TRAPS_RN):
(_FPU_MASK_FRAC_INEX_RET_CC): Fix masks to more properly handle
summary bits.
(_FPU_MASK_RN): Expand _FPU_MASK_RN to 64bit hex.
(_FPU_MASK_NOT_RN_NI): Treat bit 52 (left-to-right) as reserved.
Reviewed-by: Tulio Magno Quites Machado Filho <tuliom@linux.vnet.ibm.com>
[BZ #16777]
* localedata/locales/pl_PL (LC_MONETARY): Use U+202F as mon_thousands_sep
and improve readability by using more ASCII.
* localedata/locales/pl_PL (LC_NUMERIC): Use U+202F as thousands_sep
and improve readability by using more ASCII.
When using compilers before GCC 7, include/float.h provides fallback
definitions of FLT128_* constants. These definitions use 'Q' constant
suffixes, which work for configurations where _Float128 is ABI-distinct
from long double, but not where it has the same ABI as long double.
This patch changes the definitions to use the __f128 macro from
<bits/floatn.h>, so allowing them to work in the non-distinct
_Float128 case (where they are used in building glibc tests, not for
building glibc itself) as well.
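For example, the FLT128_MAX fallback then takes roughly this form (the
exact layout in include/float.h may differ):
--
# define FLT128_MAX __f128 (1.18973149535723176508575932662800702e+4932)
--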
Tested (a) with build-many-glibcs.py with GCC 6 (installed stripped
shared libraries unchanged by the patch); (b) with
build-many-glibcs.py with GCC 6 together with the main patch to enable
float128 aliases; (c) for x86_64 with both GCC 6 and GCC 7.
* include/float.h [!__GNUC_PREREQ (7, 0) && __HAVE_FLOAT128 &&
__GLIBC_USE (IEC_60559_TYPES_EXT)] (FLT128_MAX): Define using
__f128.
[!__GNUC_PREREQ (7, 0) && __HAVE_FLOAT128 && __GLIBC_USE
(IEC_60559_TYPES_EXT)] (FLT128_EPSILON): Likewise.
[!__GNUC_PREREQ (7, 0) && __HAVE_FLOAT128 && __GLIBC_USE
(IEC_60559_TYPES_EXT)] (FLT128_MIN): Likewise.
[!__GNUC_PREREQ (7, 0) && __HAVE_FLOAT128 && __GLIBC_USE
(IEC_60559_TYPES_EXT)] (FLT128_TRUE_MIN): Likewise.
The current malloc initialization is quite convoluted. Instead of
sometimes calling malloc_consolidate from ptmalloc_init, call
malloc_init_state early so that the main_arena is always initialized.
The special initialization can now be removed from malloc_consolidate.
This also fixes BZ #22159.
Check all calls to malloc_consolidate and remove calls that are
redundant initialization after ptmalloc_init, like in int_mallinfo
and __libc_mallopt (but keep the latter as consolidation is required for
set_max_fast). Update comments to improve clarity.
Remove impossible initialization check from _int_malloc, fix assert
in do_check_malloc_state to ensure arena->top != 0. Fix the obvious bugs
in do_check_free_chunk and do_check_remalloced_chunk to enable single
threaded malloc debugging (do_check_malloc_state is not thread safe!).
[BZ #22159]
* malloc/arena.c (ptmalloc_init): Call malloc_init_state.
* malloc/malloc.c (do_check_free_chunk): Fix build bug.
(do_check_remalloced_chunk): Fix build bug.
(do_check_malloc_state): Add assert that checks arena->top.
(malloc_consolidate): Remove initialization.
(int_mallinfo): Remove call to malloc_consolidate.
(__libc_mallopt): Clarify why malloc_consolidate is needed.
Currently free typically uses 2 atomic operations per call.  The
have_fastchunks flag indicates whether there are recently freed blocks
in the fastbins.  This is purely an optimization to avoid calling
malloc_consolidate too often and to avoid the overhead of walking all
fast bins even if all of them are empty during a sequence of
allocations.  However, using catomic_or to update the flag is
completely unnecessary, since it can be changed into a simple boolean
accessed using relaxed atomics.  There is no change in multi-threaded
behaviour, given that the flag is already approximate (it may be set
when there are no blocks in any fast bins, or it may be clear when
there are free blocks that could be consolidated).
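Sketched (atomic_store_relaxed and atomic_load_relaxed are glibc's
internal relaxed-MO atomic macros; the surrounding code is simplified):
--
/* Setting the flag after a chunk is freed into a fast bin...  */
atomic_store_relaxed (&av->have_fastchunks, true);

/* ... and testing it where consolidation may be needed.  */
if (atomic_load_relaxed (&av->have_fastchunks))
  malloc_consolidate (av);
--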
Performance of malloc/free improves by 27% on a simple benchmark on AArch64
(both single and multithreaded). The number of load/store exclusive instructions
is reduced by 33%. Bench-malloc-thread speeds up by ~3% in all cases.
* malloc/malloc.c (FASTCHUNKS_BIT): Remove.
(have_fastchunks): Remove.
(clear_fastchunks): Remove.
(set_fastchunks): Remove.
(malloc_state): Add have_fastchunks.
(malloc_init_state): Use have_fastchunks.
(do_check_malloc_state): Remove incorrect invariant checks.
(_int_malloc): Use have_fastchunks.
(_int_free): Likewise.
(malloc_consolidate): Likewise.
The functions tcache_get and tcache_put show up in profiles as they
are a critical part of the tcache code. Inline them to give tcache
a 16% performance gain. Since this improves multi-threaded cases
as well, it helps offset any potential performance loss due to adding
single-threaded fast paths.
* malloc/malloc.c (tcache_put): Inline.
(tcache_get): Inline.
The Valencian (meridional Catalan) locale is basically a copy of the
Catalan locale. The point of having a separate locale is only for PO
translations. This locale is already provided by several distributions
and is already supported by various projects like LibreOffice, Mozilla,
Gnome, KDE.
Aurelien Jarno <aurelien@aurel32.net>
[BZ #2522]
* localedata/locales/ca_ES@valencia: New file.
* localedata/SUPPORTED: Add ca_ES@valencia/UTF-8.
When using gcc < 6.x, signbit does not use the type-generic
__builtin_signbit builtin; instead it uses __MATH_TG.
However, when library support for float128 is available, __MATH_TG uses
__builtin_types_compatible_p, which is not available in C++ mode.
On the other hand, libstdc++ undefines (in cmath) many macros from
math.h, including signbit, so that it can provide its own functions.
However, during its configure tests, libstdc++ just tests for the
availability of the macros (it does not undefine them, nor does it
provide its own functions).
Finally, libstdc++ configure tests include math.h and get the definition
of signbit that uses __MATH_TG (and __builtin_types_compatible_p).
Since libstdc++ does not undefine the macros during its configure
tests, they fail.
This patch lets signbit use the builtin in C++ mode when gcc < 6.x is
used. This allows the configure test in libstdc++ to work.
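The resulting conditional is roughly the following (simplified from
math.h; the pre-6.x C branch still goes through __MATH_TG, as described
above):
--
#if __GNUC_PREREQ (6, 0)
# define signbit(x) __builtin_signbit (x)
#elif defined __cplusplus
/* In C++ mode __MATH_TG cannot be used, because it relies on
   __builtin_types_compatible_p, which is a C-only builtin; the builtin
   is enough to let libstdc++'s configure test work.  */
# define signbit(x) __builtin_signbit (x)
#else
# define signbit(x) __MATH_TG ((x), __signbit, (x))
#endif
--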
Tested for x86_64.
[BZ #22296]
	* math/math.h: Let signbit use the builtin in C++ mode with gcc
	< 6.x.
Cc: Gabriel F. T. Gomes <gftg@linux.vnet.ibm.com>
Cc: Joseph Myers <joseph@codesourcery.com>
This patch adds two extra configurations for arm-linux-gnueabihf to
cover multiarch support:
1. arm-linux-gnueabihf-v7a: enables multiarch support by using
-march=armv7-a.
2. Same as 1. but with --disable-multiarch.
Checked with build-many-glibcs.py for both options.
* scripts/build-many-glibcs.py (Context.add_all_configs):
Add arm-linux-gnueabihf multiarch extra_glibcs.
This patch moves the generic definition from x86_64 init-arch to a
common header, ifunc-init.h.  No functional change is expected.
Checked on a x86_64-linux-gnu build.
* sysdeps/generic/ifunc-init.h: New file.
* sysdeps/x86/init-arch.h: Use generic ifunc-init.h.
CLDR uses this pattern as well.
[BZ #22019]
* localedata/locales/el_GR: Set n_cs_precedes to 0.
	* localedata/locales/el_CY: Copy "el_GR" because it is identical.
	* stdlib/tst-strfmon_l.c: Adapt test case.
With support for _Float128 functions on platforms where that type has
the same ABI as long double, as well as on platforms where it is
ABI-distinct, those functions will need to be exported from glibc's
shared libraries at appropriate symbol versions in each case.
This patch avoids duplication of lists of symbols to export by moving
the symbols other than __* to math/Versions and stdlib/Versions.
There, they are conditional on <float128-abi.h> defining
FLOAT128_VERSION and a default version of that header is added that
does not define that macro. Enabling the float128 function aliases
will then include adding a sysdeps/ieee754/ldbl-128/float128-abi.h
that defines FLOAT128_VERSION to GLIBC_2.27. Symbols __* remain in
sysdeps/ieee754/float128/Versions; those symbols should be present
only once per floating-point format, not once per type.
Note that if any platforms currently lacking support for a type with
binary128 format get glibc support for such a type in future (whether
only as _Float128, or also as a new long double format), and new libm
functions (present for all types) have been added by then, additional
macros will be needed to allow such functions to get a version of the
form "GLIBC_2.28 if the platform had _Float128 support by then, or the
later version at which that platform had _Float128 support added".
This is not however a preexisting condition, but would have applied
equally to the existing support for _Float128 as an ABI-distinct
type. New all-type libm functions should just be added to the
appropriate symbol version (currently GLIBC_2.27) for all types, with
such special-case handling for _Float128 versions (and _Float64x as
well in future) waiting until someone actually wants to add support
for _Float128 to an existing platform after a release in which that
platform and a post-2.26 libm function had support but that platform
lacked _Float128 support.
Tested with build-many-glibcs.py that installed stripped shared
libraries are unchanged by this patch. Also tested in conjunction
with the remaining changes to enable float128 aliases.
* sysdeps/generic/float128-abi.h: New file.
* sysdeps/ieee754/float128/Versions (FLOAT128_VERSION): Move
non-__prefixed symbols to ....
* math/Versions: ... here. Include <float128-abi.h>.
	* stdlib/Versions: ... and here.  Include <float128-abi.h>.
Since glibc 2.24, __malloc_initialize_hook is a compat symbol. As a
result, the link editor does not export a definition of
__malloc_initialize_hook from the main program, so that it no longer
interposes the variable definition in libc.so. Specifying the symbol
version restores the exported symbol.
This patch adds support for running libm tests for float128 in the
case where the float128 functions are aliases of long double
functions. In this case, the sysdeps Makeconfig file
(i.e. sysdeps/ieee754/ldbl-128/Makeconfig) will need to define
"float128-alias-fcts = yes" to enable the tests.
Tested for x86_64. Also tested with build-many-glibcs.py; installed
stripped shared libraries are unchanged by the patch. Also tested
together with changes to enable the float128 aliases.
* math/Makefile (test-types): Add
$(type-float128-$(float128-alias-fcts)).
* math/test-float128.h (TYPE_STR): Define conditional on
[FLT128_MANT_DIG == LDBL_MANT_DIG].
(ULP_IDX): Likewise.
(ULP_I_IDX): Likewise.
This patch adds support for building strtof128, wcstof128, strtof128_l
and wcstof128_l as aliases, in the case of __HAVE_FLOAT128 &&
!__HAVE_DISTINCT_FLOAT128.
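The pattern, sketched for the narrow-character case (simplified;
weak_alias is glibc's internal aliasing macro, the __strtold stub below
stands in for the real implementation, and the actual files also handle
the wide-character variant and the STRTOLD/NEW macros):
--
#include <bits/floatn.h>
#if __HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128
/* Hide the header declaration so an alias with this name can be made.  */
# define strtof128 __hide_strtof128
#endif
#include <stdlib.h>
#if __HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128
# undef strtof128
#endif

long double
__strtold (const char *nptr, char **endptr)
{
  /* Stand-in for the real implementation.  */
  return 0;
}
weak_alias (__strtold, strtold)
#if __HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128
weak_alias (__strtold, strtof128)
#endif
--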
Tested with build-many-glibcs.py that installed stripped shared
libraries are unchanged by this patch. Also tested together with
changes to enable float128 aliases.
	* stdlib/strtold.c: Include <bits/floatn.h>.
[__HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128] (strtof128): Define
and later undefine as macro. Define as weak alias if
[!USE_WIDE_CHAR].
[__HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128] (wcstof128): Define
and later undefine as macro. Define as weak alias if
[USE_WIDE_CHAR].
* sysdeps/ieee754/ldbl-128/strtold_l.c [__HAVE_FLOAT128 &&
!__HAVE_DISTINCT_FLOAT128] (strtof128_l): Define and later
undefine as macro. Define as weak alias if [!USE_WIDE_CHAR].
[__HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128] (wcstof128_l):
Define and later undefine as macro. Define as weak alias if
[USE_WIDE_CHAR].
* sysdeps/ieee754/ldbl-64-128/strtold_l.c: Include
<bits/floatn.h>.
[__HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128] (strtof128_l):
Define and later undefine as macro. Define as weak alias if
[!USE_WIDE_CHAR].
[__HAVE_FLOAT128 && !__HAVE_DISTINCT_FLOAT128] (wcstof128_l):
Define and later undefine as macro. Define as weak alias if
[USE_WIDE_CHAR].