sys/cdefs.h has a macro __long_double_t used in two places in glibc.
long double has been a standard part of C since C89; there is no need for
such an alias for it. This patch removes that macro and uses long
double directly everywhere. As an implementation-namespace,
undocumented symbol, it should not be considered part of the API for
users, and codesearch.debian.net shows no sign of it being used
outside glibc in a way that would break with this patch.
Tested for x86_64.
* misc/sys/cdefs.h (__long_double_t): Remove.
* stdio-common/printf_fp.c (__printf_fp_l): Use long double
instead of __long_double_t.
* stdlib/strfmon_l.c (__vstrfmon_l): Likewise.
The compare_strings.py script generates a graph for the benchmarks it
compares, and that fails if no X display is available. Avoid the
error and ensure that the graph is only generated and saved as a
PNG file.
* benchtests/scripts/compare_strings.py: Avoid display error
when generating graph.
This patch allows one to provide, via an optional -base option, the
name of the function that all other functions are compared against.
This is useful when pitting one implementation of a string function
against alternatives. In the absence of this option, comparisons are
done against the first ifunc in the list.
* benchtests/scripts/compare_strings.py (main): Add an
optional -base option.
(process_results): New argument base_func.
The hardcoded 'memcpy' name turns up in other derived tests like
mempcpy.
* benchtests/bench-memcpy.c (test_main): Use TEST_NAME instead of
hardcoding memcpy.
* benchtests/bench-memcpy-large.c (test_main): Likewise.
* benchtests/bench-memcpy-random.c (test_main): Likewise.
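As a rough sketch of the derived-benchmark pattern this supports (the
exact macro names used by bench-mempcpy.c are assumed here), a derived
test only has to override the name and implementation before reusing
the memcpy harness, and with TEST_NAME used throughout it then reports
its results under the right name:
#define MEMCPY mempcpy
#define TEST_NAME "mempcpy"
#include "bench-memcpy.c"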
This patch reimplements the libm-internal min_of_type macro to use
__MATH_TG instead of its own local type-generic implementation, so
simplifying the code and reducing the number of different type-generic
implementation variants in use in glibc.
Tested for x86_64.
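For illustration only (this is not the glibc definition), the
suffix-based dispatch that __MATH_TG provides corresponds to a C11
_Generic selection keyed on an expression rather than on a type name:
#include <float.h>
/* Illustrative sketch: pick the minimum normal value for the type of
   the expression X, defaulting to double for integer-like types.  */
#define min_of_type_sketch(x)           \
  _Generic ((x),                        \
            float: FLT_MIN,             \
            long double: LDBL_MIN,      \
            default: DBL_MIN)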
* sysdeps/generic/math_private.h (__EXPR_FLT128): Remove macro.
(min_of_type_f): New macro.
(min_of_type_): Likewise.
(min_of_type_l): Likewise.
(min_of_type_f128): Likewise.
(min_of_type): Define using __MATH_TG and taking an expression
argument.
(math_check_force_underflow): Pass expression instead of type to
min_of_type.
(math_check_force_underflow_nonneg): Likewise.
Since all x86 IFUNC selectors are implemented in C, assembly versions of
HAS_CPU_FEATURE and HAS_ARCH_FEATURE can be removed.
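As a generic illustration of what an IFUNC selector written in C looks
like (this uses the plain GCC ifunc attribute and
__builtin_cpu_supports, not glibc's internal selector macros), the CPU
feature checks happen entirely in C code:
#include <stddef.h>
#include <string.h>

static void *
my_memcpy_generic (void *dst, const void *src, size_t n)
{
  return memcpy (dst, src, n);
}

static void *
my_memcpy_avx2 (void *dst, const void *src, size_t n)
{
  return memcpy (dst, src, n);  /* Stand-in for an AVX2 body.  */
}

/* The resolver runs at relocation time and returns the chosen
   implementation.  */
static void *(*
resolve_my_memcpy (void)) (void *, const void *, size_t)
{
  __builtin_cpu_init ();
  return __builtin_cpu_supports ("avx2")
         ? my_memcpy_avx2 : my_memcpy_generic;
}

void *my_memcpy (void *, const void *, size_t)
     __attribute__ ((ifunc ("resolve_my_memcpy")));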
* sysdeps/x86/cpu-features.h [__ASSEMBLER__]
(LOAD_RTLD_GLOBAL_RO_RDX, HAS_FEATURE, LOAD_FUNC_GOT_EAX,
HAS_CPU_FEATURE, HAS_ARCH_FEATURE): Removed.
Since start.o may be compiled as PIC, we should check PIC instead of
SHARED. Also avoid dynamic relocation against main in static PIE since
_start is the entry point before the executable is relocated.
* sysdeps/i386/start.S (_start): Check PIC instead of
SHARED. Avoid dynamic relocation against main in static PIE.
tst-prelink.c checks for conflict with GLOB_DAT relocation against stdio.
On i386, there is no GLOB_DAT relocation against stdio with PIE. We
should compile tst-prelink.c without PIE.
[BZ #21815]
* elf/Makefile (CFLAGS-tst-prelink.c): New.
(LDFLAGS-tst-prelink): Likewise.
Define I386_USE_SYSENTER to 0 or 1 so that special versions of syscalls
with "int $0x80" can be provided for static PIE during self relocation.
Also check PIC instead of SHARED for the PIC version of the syscall
macros.
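A sketch of the intended preprocessor shape (the macro bodies are
abbreviated and the default value is simplified; the real header
derives the default from the existing configuration):
#ifndef I386_USE_SYSENTER
# define I386_USE_SYSENTER 1   /* Static-PIE self-relocation code can
                                  pre-define this to 0.  */
#endif

#if I386_USE_SYSENTER
# ifdef PIC                    /* Previously: ifdef SHARED.  */
#  define ENTER_KERNEL call *%gs:SYSINFO_OFFSET
# else
#  define ENTER_KERNEL call *_dl_sysinfo
# endif
#else
# define ENTER_KERNEL int $0x80   /* Usable before self-relocation.  */
#endif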
* sysdeps/unix/sysv/linux/i386/sysdep.h (I386_USE_SYSENTER):
Define to 0 or 1 if not defined.
(ENTER_KERNEL): Check if I386_USE_SYSENTER is 1 and check PIC.
(INTERNAL_SYSCALL_MAIN_INLINE): Likewise.
(INTERNAL_SYSCALL_NCS): Likewise.
(LOADARGS_1): Likewise.
(LOADARGS_5): Likewise.
(RESTOREARGS_1): Likewise.
(RESTOREARGS_5): Likewise.
Since apply_irel is called before memcpy and mempcpy are called, we
can use IFUNC memcpy and mempcpy in libc.a.
* sysdeps/x86_64/memmove.S (MEMCPY_SYMBOL): Don't check SHARED.
(MEMPCPY_SYMBOL): Likewise.
* sysdeps/x86_64/multiarch/ifunc-impl-list.c
(__libc_ifunc_impl_list): Test memcpy and mempcpy in libc.a.
* sysdeps/x86_64/multiarch/memcpy-ssse3-back.S: Also include
in libc.a.
* sysdeps/x86_64/multiarch/memcpy-ssse3.S: Likewise.
* sysdeps/x86_64/multiarch/memmove-avx512-no-vzeroupper.S:
Likewise.
* sysdeps/x86_64/multiarch/memcpy.c: Also include in libc.a.
(__hidden_ver1): Don't use in libc.a.
* sysdeps/x86_64/multiarch/memmove-sse2-unaligned-erms.S
(__mempcpy): Don't create a weak alias in libc.a.
* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S: Support
libc.a.
* sysdeps/x86_64/multiarch/mempcpy.c: Also include in libc.a.
(__hidden_ver1): Don't use in libc.a.
Since gold doesn't support INSERT in linker script:
https://sourceware.org/bugzilla/show_bug.cgi?id=21676
tst-split-dynreloc fails to link with gold. Check if linker supports
INSERT in linker script before using it.
* config.make.in (have-insert): New.
* configure.ac (libc_cv_insert): New. Set to yes if linker
supports INSERT in linker script.
(AC_SUBST(libc_cv_insert)): New.
* configure: Regenerated.
* sysdeps/x86_64/Makefile (tests): Add tst-split-dynreloc only
if $(have-insert) == yes.
Gold doesn't support protected data symbols:
configure:5672: checking linker support for protected data symbol
configure:5682: gcc -fuse-ld=gold -nostdlib -nostartfiles -fno-stack-protector -fPIC -shared conftest.c -o conftest.so
configure:5685: $? = 0
configure:5692: gcc -fuse-ld=gold -nostdlib -nostartfiles -fno-stack-protector conftest.c -o conftest conftest.so
/usr/local/bin/ld.gold: error: /tmp/ccXWoofs.o: cannot make copy relocation for protected symbol 'bar', defined in conftest.so
collect2: error: ld returned 1 exit status
Run vismain only if the linker supports protected data symbols.
* elf/Makefile (tests): Add vismain only if
$(have-protected-data) == yes.
(tests-pie): Likewise.
On AVX machines with XGETBV (ECX == 1) like Skylake processors,
(gdb) disass _dl_runtime_resolve_avx_opt
Dump of assembler code for function _dl_runtime_resolve_avx_opt:
0x0000000000015890 <+0>: push %rax
0x0000000000015891 <+1>: push %rcx
0x0000000000015892 <+2>: push %rdx
0x0000000000015893 <+3>: mov $0x1,%ecx
0x0000000000015898 <+8>: xgetbv
0x000000000001589b <+11>: mov %eax,%r11d
0x000000000001589e <+14>: pop %rdx
0x000000000001589f <+15>: pop %rcx
0x00000000000158a0 <+16>: pop %rax
0x00000000000158a1 <+17>: and $0x4,%r11d
0x00000000000158a5 <+21>: bnd je 0x16200 <_dl_runtime_resolve_sse_vex>
End of assembler dump.
is slower than:
(gdb) disass _dl_runtime_resolve_avx_slow
Dump of assembler code for function _dl_runtime_resolve_avx_slow:
0x0000000000015850 <+0>: vorpd %ymm0,%ymm1,%ymm8
0x0000000000015854 <+4>: vorpd %ymm2,%ymm3,%ymm9
0x0000000000015858 <+8>: vorpd %ymm4,%ymm5,%ymm10
0x000000000001585c <+12>: vorpd %ymm6,%ymm7,%ymm11
0x0000000000015860 <+16>: vorpd %ymm8,%ymm9,%ymm9
0x0000000000015865 <+21>: vorpd %ymm10,%ymm11,%ymm10
0x000000000001586a <+26>: vpcmpeqd %xmm8,%xmm8,%xmm8
0x000000000001586f <+31>: vorpd %ymm9,%ymm10,%ymm10
0x0000000000015874 <+36>: vptest %ymm10,%ymm8
0x0000000000015879 <+41>: bnd jae 0x158b0 <_dl_runtime_resolve_avx>
0x000000000001587c <+44>: vzeroupper
0x000000000001587f <+47>: bnd jmpq 0x16200 <_dl_runtime_resolve_sse_vex>
End of assembler dump.
(gdb)
since xgetbv takes many more cycles than single-cycle operations like
vorpd/vpcmpeqd/vptest. _dl_runtime_resolve_opt should be used only with
AVX512, where AVX512 instructions lead to lower CPU frequency on Skylake
server.
[BZ #21871]
* sysdeps/x86/cpu-features.c (init_cpu_features): Set
bit_arch_Use_dl_runtime_resolve_opt only with AVX512F.
__memset_zero_constant_len_parameter should have been removed by
commit 61062f5630:
Author: Ulrich Drepper <drepper@redhat.com>
Date: Tue Mar 1 00:35:23 2005 +0000
2005-02-24 Roland McGrath <roland@redhat.com>
* debug/Versions (libc: GLIBC_2.4): Remove
__memset_zero_constant_len_parameter.
* sysdeps/generic/memset_chk.c: Remove alias and warning.
* misc/sys/cdefs.h (__warndecl): New macro.
* debug/warning-nop.c: New file.
* string/bits/string3.h (memset): Call __warn_memset_zero_len with no
arguments, instead of calling __memset_zero_constant_len_parameter.
Use __warndecl for __warn_memset_zero_len.
* debug/Makefile (routines): Add $(static-only-routines).
(static-only-routines): New variable.
This patch removes the last remaining pieces of it. Tested on i586,
i686 and x86_64.
[BZ #21790]
* sysdeps/i386/i586/memset.S
(__memset_zero_constant_len_parameter): Removed.
* sysdeps/i386/i686/memset.S
(__memset_zero_constant_len_parameter): Likewise.
* sysdeps/i386/i686/multiarch/memset_chk.S
(__memset_zero_constant_len_parameter): Likewise.
* sysdeps/x86_64/memset.S (__memset_zero_constant_len_parameter):
Likewise.
The return type of the getentropy stub is wrongly defined as ssize_t,
while both the <sys/random.h> header and the Linux implementation
define it as int. This patch fixes that.
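A sketch of the corrected stub (the real file also carries the usual
stub-warning machinery): the fallback now matches the int return type
declared in <sys/random.h>:
#include <sys/random.h>
#include <errno.h>

int
getentropy (void *buffer, size_t length)
{
  errno = ENOSYS;
  return -1;
}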
Changelog:
* stdlib/getentropy.c (getentropy): Change return type to int.
For the locales doi_IN, kok_IN, and sat_IN, the words for
“yes” and “no” apparently already appear in yesexpr and noexpr;
copy them from there to add yesstr and nostr.
Also make yesexpr and noexpr more readable by using
the POSIX portable character set.
* locales/doi_IN (LC_MESSAGES): Add yesstr and nostr.
* locales/kok_IN (LC_MESSAGES): Add yesstr and nostr.
* locales/sat_IN (LC_MESSAGES): Add yesstr and nostr.
This reverts commit 8f75515080, “Fix yesexpr in en_DK locale”.
* locales/en_DK (LC_MESSAGES): Restore original yesexpr, noexpr,
yesstr, and nostr. Convert them to ASCII and add a comment
explaining why we want to have them like this.
Also make the expressions more readable by using the POSIX portable
character set instead of Unicode code points.
* locales/agr_PE (LC_MESSAGES): Drop .* from yesexpr and noexpr.
This makes the __tls_get_addr_opt test run as a shared library, and so
actually test that DTPMOD64/DTPREL64 pairs are processed by ld.so to
support the __tls_get_addr_opt call stub fast return. After a
2017-01-24 patch (binutils f0158f4416) ld.bfd no longer emitted
unnecessary dynamic relocations against local thread variables,
instead setting up the __tls_index GOT entries for the call stub fast
return. This meant tst-tlsopt-powerpc passed but did not check ld.so
relocation support. After a 2017-07-16 patch (binutils 676ee2b5fa)
ld.bfd no longer set up the __tls_index GOT entries for the call stub
fast return, and tst-tlsopt-powerpc failed.
Compiling mod-tlsopt-powerpc.c with -DSHARED exposed a bug in
powerpc64/tls-macros.h, which defines a __TLS_GET_ADDR macro that
clashes with one defined in dl-tls.h. The tls-macros.h version is
only used in that file, so delete it and expand its use.
* sysdeps/powerpc/mod-tlsopt-powerpc.c: Extract from
tst-tlsopt-powerpc.c with function name change and no test harness.
* sysdeps/powerpc/tst-tlsopt-powerpc.c: Remove body of test.
Call tls_get_addr_opt_test.
* sysdeps/powerpc/Makefile (LDFLAGS-tst-tlsopt-powerpc): Don't define.
(modules-names): Add mod-tlsopt-powerpc.
(mod-tlsopt-powerpc.so-no-z-defs): Define.
(tst-tlsopt-powerpc): Depend on .so.
* sysdeps/powerpc/powerpc64/tls-macros.h (__TLS_GET_ADDR): Don't
define. Expand use in TLS_GD and TLS_LD.
csu/libc-start.c now insists on calling __libc_init_secure, while the Hurd
port already performs the equivalent setup "very early" in dl-sysdep.c
and init-first.c.
* sysdeps/mach/hurd/enbl-secure.c (__libc_init_secure): Define
function.
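A minimal sketch of the new definition, assuming it can simply be a
no-op because the secure-mode setup has already run by the time
csu/libc-start.c calls it:
/* Secure-mode setup already happens in dl-sysdep.c and init-first.c;
   only the symbol itself is needed here.  */
void
__libc_init_secure (void)
{
}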
When a tgmath.h macro is passed a double argument and an argument of
type __int128, it generates a call to a long double function (although
the result still gets converted to type double). __int128 is similar
enough to the integer types that it should be handled consistently with
them: always like double for these macros, rather than sometimes
like double and sometimes like long double. This patch fixes the
logic accordingly and makes gen-tgmath-tests.py generate tests for
__int128.
Tested for x86_64 and x86.
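A small illustration of the underlying pitfall (not the tgmath.h code
itself), assuming an x86_64 target where both __int128 and long double
occupy 16 bytes:
double d;
__int128 i;
/* A size test on the integer argument alone misclassifies it as long
   double; the usual arithmetic conversions on the combined expression
   give double, which is what the macros now key on.  */
_Static_assert (sizeof (d + i) == sizeof (double),
                "double + __int128 has type double");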
[BZ #21686]
* math/tgmath.h (__TGMATH_BINARY_REAL_ONLY): Add arguments before
comparing size with that of double.
(__TGMATH_BINARY_REAL_STD_ONLY): Likewise.
(__TGMATH_BINARY_REAL_RET_ONLY): Likewise.
(__TGMATH_TERNARY_FIRST_SECOND_REAL_ONLY): Likewise.
(__TGMATH_TERNARY_REAL_ONLY): Likewise.
(__TGMATH_BINARY_REAL_IMAG): Likewise.
* math/gen-tgmath-tests.py (Type.init_types): Create __int128 and
unsigned __int128 types.
The tgmath.h macros produce errors for bit-field arguments, because
they apply sizeof and typeof to the arguments. This patch fixes them
to use unary + systematically before using sizeof or typeof on
arguments that might be bit-fields (note that __real__ of a bit-field
is still a bit-field for this purpose, since it's an lvalue).
gen-tgmath-tests.py is extended to add tests for this case.
Tested for x86_64.
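A short illustration of the trick (a hypothetical example, not glibc
code): sizeof and __typeof cannot be applied to a bit-field directly,
but unary + promotes the bit-field to int first:
#include <stddef.h>

struct s { unsigned int bits : 3; };

size_t
promoted_bitfield_size (struct s v)
{
  /* sizeof (v.bits) would be rejected; the promoted expression is
     fine and has type int.  */
  return sizeof (+v.bits);
}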
[BZ #21685]
* math/tgmath.h (__tgmath_real_type): Use unary + on potentially
bit-field expressions passed to sizeof or typeof.
[__HAVE_FLOAT128 && __GLIBC_USE (IEC_60559_TYPES_EXT)]
(__TGMATH_F128): Likewise.
[__HAVE_FLOAT128 && __GLIBC_USE (IEC_60559_TYPES_EXT)]
(__TGMATH_CF128): Likewise.
(__TGMATH_UNARY_REAL_ONLY): Likewise.
(__TGMATH_UNARY_REAL_RET_ONLY): Likewise.
(__TGMATH_BINARY_FIRST_REAL_ONLY): Likewise.
(__TGMATH_BINARY_FIRST_REAL_STD_ONLY): Likewise.
(__TGMATH_BINARY_REAL_ONLY): Likewise.
(__TGMATH_BINARY_REAL_STD_ONLY): Likewise.
(__TGMATH_BINARY_REAL_RET_ONLY): Likewise.
(__TGMATH_TERNARY_FIRST_SECOND_REAL_ONLY): Likewise.
(__TGMATH_TERNARY_REAL_ONLY): Likewise.
(__TGMATH_TERNARY_FIRST_REAL_RET_ONLY): Likewise.
(__TGMATH_UNARY_REAL_IMAG): Likewise.
(__TGMATH_UNARY_IMAG): Likewise.
(__TGMATH_UNARY_REAL_IMAG_RET_REAL): Likewise.
(__TGMATH_BINARY_REAL_IMAG): Likewise.
* math/gen-tgmath-tests.py (Type.init_types): Create bit_field
type.
(define_vars_for_type): Handle bit_field type specially.
(Tests.__init__): Declare structure with bit-field element.
There is no need to define multiarch __memmove_chk in libc.a since it
is not used at all.
[BZ #21791]
* sysdeps/i386/i686/multiarch/memcpy-sse2-unaligned.S
(MEMCPY_CHK): Define only if SHARED is defined.
* sysdeps/i386/i686/multiarch/memcpy-ssse3-rep.S (MEMCPY_CHK):
Likewise.
* sysdeps/i386/i686/multiarch/memcpy-ssse3.S (MEMCPY_CHK):
Likewise.
I incorrectly assumed that the ChangeLog numbers (.1, .2, etc.) are in
order. They are not; the latest non-current ChangeLog is the one
with the highest number. Fixed.
65810f0ef0 fixed a robust mutex bug but
introduced BZ 21778: if the CAS used to try to acquire a lock fails, the
expected value is not updated, which breaks other cases in the lock
acquisition loop. The fix is to simply update the expected value with
the value returned by the CAS, which ensures that behavior is as if the
first case with the CAS never happened (if the CAS fails).
This is a regression introduced in the last release.
Tested on x86_64, i686, ppc64, ppc64le, s390x, aarch64, armv7hl.
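A minimal sketch of the pattern (not the glibc code; the helper name is
made up): a failed compare-and-swap must refresh the locally cached
expected value before the rest of the acquisition loop runs again, and
C11 compare_exchange performs that refresh through its expected
out-parameter:
#include <stdatomic.h>

static void
lock_acquire_sketch (atomic_int *lockword, int self_tid)
{
  int expected = 0;
  while (!atomic_compare_exchange_weak_explicit (lockword, &expected,
                                                 self_tid,
                                                 memory_order_acquire,
                                                 memory_order_relaxed))
    {
      /* EXPECTED now holds the value actually observed in the lock
         word; leaving it stale here is the essence of the bug.  */
      if (expected != 0)
        {
          /* Owner-died handling / futex wait would go here.  */
          expected = 0;   /* Retry only against an unlocked word.  */
        }
      /* If EXPECTED was 0, the weak CAS failed spuriously; retry.  */
    }
}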