Commit Graph

15954 Commits

Author SHA1 Message Date
Adhemerval Zanella Netto
e7d1c58664 mips: Add the clone3 wrapper
It follows the internal signature:

extern int clone3 (struct clone_args *__cl_args, size_t __size,
                   int (*__func) (void *__arg), void *__arg);

Checked on mips64el-linux-gnueabihf, mips64el-n32-linux-gnu, and
mipsel-linux-gnu.
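
A minimal illustrative sketch (not part of the commit) of how this internal
wrapper could be invoked; struct clone_args comes from <linux/sched.h>, and
the flags, stack handling, and helper names below are assumptions made only
for the example:

```c
/* Sketch only: clone3 here declares the glibc-internal wrapper whose
   signature is quoted above; it is not a public API.  */
#define _GNU_SOURCE
#include <linux/sched.h>   /* struct clone_args */
#include <signal.h>        /* SIGCHLD */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

extern int clone3 (struct clone_args *cl_args, size_t size,
                   int (*func) (void *arg), void *arg);

static int
child (void *arg)
{
  puts ((const char *) arg);
  return 0;
}

static int
spawn_child (void)
{
  size_t stack_size = 64 * 1024;
  char *stack = malloc (stack_size);   /* child stack for the callback */
  if (stack == NULL)
    return -1;

  struct clone_args args =
    {
      .exit_signal = SIGCHLD,          /* reap the child like fork */
      .stack = (uintptr_t) stack,
      .stack_size = stack_size,
    };
  /* The wrapper runs child () in the new task, as clone does.  */
  return clone3 (&args, sizeof args, child, (void *) "hello from clone3");
}
```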
2023-09-05 10:15:48 -03:00
Adhemerval Zanella Netto
b56f7fe79e arm: Add the clone3 wrapper
It follows the internal signature:

  extern int clone3 (struct clone_args *__cl_args, size_t __size,
		    int (*__func) (void *__arg), void *__arg);

Checked on arm-linux-gnueabihf.
2023-09-05 10:15:48 -03:00
Samuel Thibault
8076906109 htl: Fix stack information for main thread
We can easily directly ask the kernel with vm_region rather than
assuming a one-page stack.
2023-09-03 21:11:29 +02:00
Szabolcs Nagy
d2123d6827 elf: Fix slow tls access after dlopen [BZ #19924]
In short: __tls_get_addr checks the global generation counter and, if
the current dtv is older, _dl_update_slotinfo updates the dtv up to the
generation of the accessed module. So if the global generation is newer
than the generation of the module, __tls_get_addr keeps hitting the
slow dtv update path. The dtv update path includes a number of checks
to see if any update is needed, and this already causes measurable tls
access slowdown after dlopen.

It may be possible to detect up-to-date dtv faster.  But if there are
many modules loaded (> TLS_SLOTINFO_SURPLUS) then this requires at
least walking the slotinfo list.

This patch tries to update the dtv to the global generation instead, so
after a dlopen the tls access slow path is only hit once.  The modules
with larger generation than the accessed one were not necessarily
synchronized before, so additional synchronization is needed.

This patch uses acquire/release synchronization when accessing the
generation counter.

Note: in the x86_64 version of dl-tls.c the generation is only loaded
once, since relaxed mo is not faster than acquire mo load.

I have not benchmarked this. Tested by Adhemerval Zanella on aarch64,
powerpc, sparc, and x86, who reported that it fixes the performance
issue of bug 19924.
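
An illustrative sketch (not from the patch) of the access pattern this
affects; the module name tlsmod.so and the accessor read_tls_var are
hypothetical:

```c
#include <dlfcn.h>
#include <stdio.h>

int
main (void)
{
  /* Hypothetical module that defines a TLS variable and an accessor.  */
  void *handle = dlopen ("tlsmod.so", RTLD_NOW);
  if (handle == NULL)
    {
      fprintf (stderr, "dlopen: %s\n", dlerror ());
      return 1;
    }
  int (*read_tls_var) (void) = (int (*) (void)) dlsym (handle, "read_tls_var");

  /* Every call below goes through __tls_get_addr.  Before this patch the
     module's dtv generation could stay behind the global generation, so
     each access took the slow update path; now the slow path is hit once.  */
  long sum = 0;
  for (int i = 0; i < 1000000; i++)
    sum += read_tls_var ();
  printf ("%ld\n", sum);

  dlclose (handle);
  return 0;
}
```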

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2023-09-01 08:21:37 +01:00
H.J. Lu
1493622f4f x86: Check the lower byte of EAX of CPUID leaf 2 [BZ #30643]
The old Intel software developer manual specified that the low byte of
EAX of CPUID leaf 2 returned 1, which indicated the number of rounds of
CPUID leaf 2 needed to retrieve the complete cache information. The
newer Intel manual has been changed to state that it should always return 1
and be ignored.  If the lower byte isn't 1, CPUID leaf 2 can't be used.
In this case, we ignore CPUID leaf 2 and use CPUID leaf 4 instead.  If
CPUID leaf 4 doesn't contain the cache information, cache information
isn't available at all.  This addresses BZ #30643.
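
For illustration only (not the glibc code), a sketch of the described check
using GCC's <cpuid.h> helpers:

```c
#include <cpuid.h>
#include <stdbool.h>

/* Returns true when CPUID leaf 2 can be used for cache information.  */
static bool
cpuid_leaf2_usable (void)
{
  unsigned int eax, ebx, ecx, edx;
  if (!__get_cpuid (2, &eax, &ebx, &ecx, &edx))
    return false;                      /* Leaf 2 is not available.  */
  /* Per the newer SDM, the low byte of EAX should always be 1; if it is
     not, leaf 2 can't be used and leaf 4 should be queried instead.  */
  return (eax & 0xff) == 1;
}
```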
2023-08-29 12:57:41 -07:00
dengjianbo
693918b6dd LoongArch: Change loongarch to LoongArch in comments 2023-08-29 10:35:38 +08:00
dengjianbo
ea7698a616 LoongArch: Add ifunc support for memcmp{aligned, lsx, lasx}
According to the glibc memcmp microbenchmark test results (compared with
the generic memcmp), this implementation shows a performance improvement
except when the length is less than 3; details as below (a generic
ifunc-selection sketch follows the table):

Name             Percent of time reduced
memcmp-lasx      16%-74%
memcmp-lsx       20%-50%
memcmp-aligned   5%-20%
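
As a generic illustration of the ifunc mechanism (not the actual glibc
resolver), the sketch below selects one of several hypothetical my_memcmp
variants at load time based on AT_HWCAP; the HWCAP_LOONGARCH_LSX/LASX bits
are assumed to come from the kernel's <asm/hwcap.h>:

```c
#include <stddef.h>
#include <sys/auxv.h>
#include <asm/hwcap.h>   /* HWCAP_LOONGARCH_LSX, HWCAP_LOONGARCH_LASX (LoongArch only) */

typedef int (*memcmp_fn) (const void *, const void *, size_t);

/* Stand-in for the per-ISA implementations, which are really assembly.  */
static int
generic_cmp (const void *a, const void *b, size_t n)
{
  const unsigned char *x = a, *y = b;
  for (size_t i = 0; i < n; i++)
    if (x[i] != y[i])
      return x[i] - y[i];
  return 0;
}

int my_memcmp_aligned (const void *a, const void *b, size_t n) { return generic_cmp (a, b, n); }
int my_memcmp_lsx (const void *a, const void *b, size_t n) { return generic_cmp (a, b, n); }
int my_memcmp_lasx (const void *a, const void *b, size_t n) { return generic_cmp (a, b, n); }

/* The dynamic linker calls the resolver once and binds my_memcmp to the
   implementation it returns.  */
static memcmp_fn
my_memcmp_resolver (void)
{
  unsigned long hwcap = getauxval (AT_HWCAP);
  if (hwcap & HWCAP_LOONGARCH_LASX)
    return my_memcmp_lasx;
  if (hwcap & HWCAP_LOONGARCH_LSX)
    return my_memcmp_lsx;
  return my_memcmp_aligned;
}

int my_memcmp (const void *, const void *, size_t)
  __attribute__ ((ifunc ("my_memcmp_resolver")));
```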
2023-08-29 10:35:38 +08:00
dengjianbo
1b1e9b7c10 LoongArch: Add ifunc support for memset{aligned, unaligned, lsx, lasx}
According to the glibc memset microbenchmark test results, for the LSX and
LASX versions a few cases with length less than 8 experience performance
degradation; overall, the LASX version could reduce the runtime by about
15%-75% and the LSX version by about 15%-50%.

The unaligned version uses unaligned memory accesses to set data whose
length is less than 64 and to make the address 8-byte aligned. For this part,
the performance is better than the aligned version. Compared with the generic
version, the performance is close when the length is larger than 128. When
the length is 8-128, the unaligned version could reduce the runtime by about
30%-70% and the aligned version by about 20%-50%.
2023-08-29 10:35:38 +08:00
dengjianbo
55e84dc6ed LoongArch: Add ifunc support for memrchr{lsx, lasx}
According to the glibc memrchr microbenchmark, this implementation could reduce
the runtime as follows:

Name            Percent of runtime reduced
memrchr-lasx    20%-83%
memrchr-lsx     20%-64%
2023-08-29 10:35:38 +08:00
dengjianbo
60bcb9acbf LoongArch: Add ifunc support for memchr{aligned, lsx, lasx}
According to the glibc memchr microbenchmark, this implementation could reduce
the runtime as follows:

Name               Percent of runtime reduced
memchr-lasx        37%-83%
memchr-lsx         30%-66%
memchr-aligned     0%-15%
2023-08-29 10:35:38 +08:00
dengjianbo
f8664fe215 LoongArch: Add ifunc support for rawmemchr{aligned, lsx, lasx}
According to the glibc rawmemchr microbenchmark, a few cases tested with
char '\0' experience performance degradation because the lasx and lsx
versions don't handle '\0' separately. Overall, the rawmemchr-lasx
implementation could reduce the runtime by about 40%-80%, the rawmemchr-lsx
implementation by about 40%-66%, and the rawmemchr-aligned implementation
by about 20%-40%.
2023-08-29 10:35:38 +08:00
Xi Ruoyao
3efa26749e LoongArch: Micro-optimize LD_PCREL
We are requiring Binutils >= 2.41, so explicit relocation syntax is
always supported by the assembler.  Use it to reduce one instruction.

Signed-off-by: Xi Ruoyao <xry111@xry111.site>
2023-08-29 10:35:38 +08:00
Xi Ruoyao
aac842d0ed LoongArch: Remove support code for old linker in start.S
We are requiring Binutils >= 2.41, so la.pcrel always works here.

Signed-off-by: Xi Ruoyao <xry111@xry111.site>
2023-08-29 10:35:38 +08:00
Xi Ruoyao
e757412c3e LoongArch: Simplify the autoconf check for static PIE
We are strictly requiring GAS >= 2.41 now, so we don't need to check
assembler capability anymore.

Signed-off-by: Xi Ruoyao <xry111@xry111.site>
2023-08-29 10:35:38 +08:00
Kir Kolyshkin
42c960a4f1 Add F_SEAL_EXEC from Linux 6.3 to bits/fcntl-linux.h.
This patch adds the new F_SEAL_EXEC constant from Linux 6.3 (see Linux
commit 6fd7353829c, "mm/memfd: add F_SEAL_EXEC") to bits/fcntl-linux.h.
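
An illustrative usage sketch (not from the patch); it needs Linux 6.3 or
newer for F_SEAL_EXEC and a glibc with this change:

```c
#define _GNU_SOURCE
#include <sys/mman.h>   /* memfd_create, MFD_* */
#include <fcntl.h>      /* F_ADD_SEALS, F_SEAL_EXEC */
#include <err.h>

int
main (void)
{
  int fd = memfd_create ("example", MFD_CLOEXEC | MFD_ALLOW_SEALING);
  if (fd < 0)
    err (1, "memfd_create");
  /* Once sealed, the execute permission bits of the memfd can no longer
     be changed (e.g. via fchmod).  */
  if (fcntl (fd, F_ADD_SEALS, F_SEAL_EXEC) < 0)
    err (1, "F_ADD_SEALS");
  return 0;
}
```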

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2023-08-28 14:51:39 -03:00
Adhemerval Zanella
87ced255bd m68k: Use M68K_SCALE_AVAILABLE on __mpn_lshift and __mpn_rshift
This patch adds a new macro, M68K_SCALE_AVAILABLE, similar to gmp's
scale_available_p (mpn/m68k/m68k-defs.m4), that expands to 1 if a
scale factor can be used in addressing modes.  It is used
instead of __mc68020__ for some optimization decisions.

Checked on a build for m68k-linux-gnu target mc68020 and mc68040.
2023-08-25 10:07:24 -03:00
Adhemerval Zanella
b85880633f m68k: Fix build with -mcpu=68040 or higher (BZ 30740)
GCC currently does not define __mc68020__ for -mcpu=68040 or higher,
which breaks memcpy/memmove assumptions.  Since this memory copy
optimization seems only intended for m68020, disable it for other m680X0
variants.

Checked on a build for m68k-linux-gnu target mc68020 and mc68040.
2023-08-25 10:07:24 -03:00
dengjianbo
ddbb74f5c2 LoongArch: Add ifunc support for strncmp{aligned, lsx}
Based on the glibc microbenchmark, only a few short inputs with the
strncmp-aligned and strncmp-lsx implementations experience performance
degradation. Overall, strncmp-aligned could reduce the runtime by 0%-10%
for aligned comparison and 10%-25% for unaligned comparison, and
strncmp-lsx could reduce the runtime by about 0%-60%.
2023-08-24 17:19:47 +08:00
dengjianbo
82d9426e4a LoongArch: Add ifunc support for strcmp{aligned, lsx}
Based on the glibc microbenchmark, the strcmp-aligned implementation could
reduce the runtime by 0%-10% for aligned comparison and 10%-20% for unaligned
comparison, and the strcmp-lsx implementation could reduce the runtime by 0%-50%.
2023-08-24 17:19:47 +08:00
dengjianbo
e74d959862 LoongArch: Add ifunc support for strnlen{aligned, lsx, lasx}
Based on the glibc microbenchmark, the strnlen-aligned implementation could
reduce the runtime by more than 10%, the strnlen-lsx implementation by about
50%-78%, and the strnlen-lasx implementation by about 50%-88%.
2023-08-24 17:19:47 +08:00
Guy-Fleury Iteriteka
1dc0bc8f07 htl: move pthread_attr_setdetachstate into libc
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-11-gfleury@disroot.org>
2023-08-24 01:57:22 +02:00
Guy-Fleury Iteriteka
92a6c26470 htl: move pthread_attr_getdetachstate into libc
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-10-gfleury@disroot.org>
2023-08-24 01:57:17 +02:00
Guy-Fleury Iteriteka
c2c9feebdc htl: move pthread_attr_setschedpolicy into libc
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-9-gfleury@disroot.org>
2023-08-24 01:57:16 +02:00
Guy-Fleury Iteriteka
0f3a39072b htl: move pthread_attr_getschedpolicy into libc
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-8-gfleury@disroot.org>
2023-08-24 01:57:14 +02:00
Guy-Fleury Iteriteka
fb2d92a5b3 htl: move pthread_attr_setinheritsched into libc
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-7-gfleury@disroot.org>
2023-08-24 01:57:13 +02:00
Guy-Fleury Iteriteka
62cf5d2bb3 htl: move pthread_attr_getinheritsched into libc
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-6-gfleury@disroot.org>
2023-08-24 01:57:11 +02:00
Guy-Fleury Iteriteka
79de1a0ca2 htl: move pthread_attr_getschedparam into libc
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-5-gfleury@disroot.org>
2023-08-24 01:57:10 +02:00
Guy-Fleury Iteriteka
3caa6362d0 htl: move pthread_setschedparam into libc
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-4-gfleury@disroot.org>
2023-08-24 01:57:08 +02:00
Guy-Fleury Iteriteka
a1a942fb5f htl: move pthread_getschedparam into libc
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-3-gfleury@disroot.org>
2023-08-24 01:57:04 +02:00
Guy-Fleury Iteriteka
9dfa256216 htl: move pthread_equal into libc
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230716084414.107245-2-gfleury@disroot.org>
2023-08-24 01:56:57 +02:00
Florian Weimer
65a5112ede Linux: Avoid conflicting types in ld.so --list-diagnostics
The path auxv[*].a_val could either be an integer or a string,
depending on the a_type value.  Use a separate field, a_val_string, to
simplify mechanical parsing of the --list-diagnostics output.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2023-08-23 08:12:48 +02:00
H.J. Lu
a8ecb126d4 x86_64: Add log1p with FMA
On Skylake, it changes log1p bench performance by:

        Before       After     Improvement
max     63.349       58.347       8%
min     4.448        5.651        -30%
mean    12.0674      10.336       14%

The minimum code path is

 if (hx < 0x3FDA827A)                          /* x < 0.41422  */
    {
      if (__glibc_unlikely (ax >= 0x3ff00000))           /* x <= -1.0 */
        {
	   ...
        }
      if (__glibc_unlikely (ax < 0x3e200000))           /* |x| < 2**-29 */
        {
          math_force_eval (two54 + x);          /* raise inexact */
          if (ax < 0x3c900000)                  /* |x| < 2**-54 */
            {
	      ...
            }
          else
            return x - x * x * 0.5;

FMA and non-FMA code sequences look similar.  Non-FMA version is slightly
faster.  Since log1p is called by asinh and atanh, it improves asinh
performance by:

        Before       After     Improvement
max     75.645       63.135       16%
min     10.074       10.071       0%
mean    15.9483      14.9089      6%

and improves atanh performance by:

        Before       After     Improvement
max     91.768       75.081       18%
min     15.548       13.883       10%
mean    18.3713      16.8011      8%
2023-08-21 10:44:26 -07:00
Andreas Schwab
ce99601fa8 Remove references to the defunct db2 subdir
The db2 subdir was removed more than 20 years ago.
2023-08-21 18:20:53 +02:00
Stefan Liebler
f5f96b784b s390x: Fix static PIE condition for toolchain bootstrapping.
The static PIE configure check uses link tests.  When bootstrapping
a cross-toolchain, the link tests fail due to missing crt-files /
libc.so.  As we explicitly want to test an issue in binutils (ld),
we now also explicitly check for known linker versions.

See also commit 368b7c614b ("S390: Use compile-only instead of also
link-tests in configure").
2023-08-18 10:57:59 +02:00
Andreas Schwab
464fd8249e m68k: fix __mpn_lshift and __mpn_rshift for non-68020
From revision 03f3d275d0d6 in the gmp repository.
2023-08-17 21:56:14 +02:00
Sam James
369f373057 sysdeps: tst-bz21269: fix -Wreturn-type
Thanks to Andreas Schwab for reporting.

Fixes: 652b9fdb77
Signed-off-by: Sam James <sam@gentoo.org>
2023-08-17 09:30:57 +01:00
dengjianbo
8944ba483f Loongarch: Add ifunc support for memcpy{aligned, unaligned, lsx, lasx} and memmove{aligned, unaligned, lsx, lasx}
These implementations improve the time to copy data in the glibc
microbenchmark as below:
memcpy-lasx       reduces the runtime about 8%-76%
memcpy-lsx        reduces the runtime about 8%-72%
memcpy-unaligned  reduces the runtime of unaligned data copying up to 40%
memcpy-aligned    reduces the runtime of unaligned data copying up to 25%
memmove-lasx      reduces the runtime about 20%-73%
memmove-lsx       reduces the runtime about 50%
memmove-unaligned reduces the runtime of unaligned data moving up to 40%
memmove-aligned   reduces the runtime of unaligned data moving up to 25%
2023-08-17 10:12:18 +08:00
dengjianbo
ba67bc8e0a Loongarch: Add ifunc support for strchr{aligned, lsx, lasx} and strchrnul{aligned, lsx, lasx}
These implementations improve the time to run strchr{nul}
microbenchmark in glibc as below:
strchr-lasx       reduces the runtime about 50%-83%
strchr-lsx        reduces the runtime about 30%-67%
strchr-aligned    reduces the runtime about 10%-20%
strchrnul-lasx    reduces the runtime about 50%-83%
strchrnul-lsx     reduces the runtime about 36%-65%
strchrnul-aligned reduces the runtime about 6%-10%
2023-08-17 10:12:18 +08:00
Sam James
652b9fdb77 sysdeps: tst-bz21269: handle ENOSYS & skip appropriately
SYS_modify_ldt requires CONFIG_MODIFY_LDT_SYSCALL to be set in the kernel, which
some distributions may disable for hardening. Check if that's the case (unset)
and mark the test as UNSUPPORTED if so.

Reviewed-by: DJ Delorie <dj@redhat.com>
Signed-off-by: Sam James <sam@gentoo.org>
2023-08-16 21:01:39 +01:00
Sam James
e0b712dd91 sysdeps: tst-bz21269: fix test parameter
All callers pass 1 or 0x11 anyway (same meaning according to man page),
but still.

Reviewed-by: DJ Delorie <dj@redhat.com>
Signed-off-by: Sam James <sam@gentoo.org>
2023-08-16 21:01:37 +01:00
Samuel Thibault
81dcf8b3d1 hurd: Fix strictness of <mach/thread_state.h>
Fixes: db25bc52026f ("hurd: Add prototype for and thus fix _hurdsig_abort_rpcs call")
2023-08-16 00:12:52 +02:00
H.J. Lu
1b214630ce x86_64: Add expm1 with FMA
On Skylake, it improves expm1 bench performance by:

        Before       After     Improvement
max     70.204       68.054       3%
min     20.709       16.2         22%
mean    22.1221      16.7367      24%

NB: Add

extern long double __expm1l (long double);
extern long double __expm1f128 (long double);

for __typeof (__expm1l) and __typeof (__expm1f128) when __expm1 is
defined since __expm1 may be expanded in their declarations which
causes the build failure.
2023-08-14 08:14:19 -07:00
dengjianbo
135407f431 Loongarch: Add ifunc support and add different versions of strlen
strlen-lasx is implemented with LASX SIMD instructions (256-bit)
strlen-lsx is implemented with LSX SIMD instructions (128-bit)
strlen-align is implemented with basic LoongArch (LA) instructions and never uses unaligned memory access
2023-08-14 09:47:09 +08:00
dengjianbo
cb7954c4c2 LoongArch: Add minimum binutils required version
LoongArch glibc can now include some LASX/LSX vector instruction code, so
change the required minimum binutils version to 2.41, which supports
vector instructions. HAVE_LOONGARCH_VEC_ASM is removed accordingly.
2023-08-14 09:47:09 +08:00
dengjianbo
57b2c14272 LoongArch: Redefine macro LEAF/ENTRY.
The following usages of the LEAF/ENTRY macros are all feasible:
1. LEAF(fcn) -- the align value of fcn is .align 3(default value)
2. LEAF(fcn, 6) -- the align value of fcn is .align 6
2023-08-14 09:47:09 +08:00
Noah Goldstein
084fb31bc2 x86: Fix incorrect scope of setting shared_per_thread [BZ# 30745]
The:

```
    if (shared_per_thread > 0 && threads > 0)
      shared_per_thread /= threads;
```

code was accidentally moved inside the else scope.  This doesn't
match how it was previously (before af992e7abd).

This patch fixes that by putting the division after the `else` block.
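
A minimal sketch of the corrected control flow (the variable names follow
the commit message; the surrounding cache-detection logic is only a
stand-in):

```c
/* Sketch only: hypothetical stand-in for the real detection code.  */
static long int
compute_shared_per_thread (int has_leaf4_info, long int detected_shared,
                           long int legacy_shared, unsigned int threads)
{
  long int shared_per_thread;

  if (has_leaf4_info)
    shared_per_thread = detected_shared;
  else
    shared_per_thread = legacy_shared;

  /* The fix: the division sits after the if/else so it applies to both
     paths, instead of accidentally living inside the else scope.  */
  if (shared_per_thread > 0 && threads > 0)
    shared_per_thread /= threads;

  return shared_per_thread;
}
```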
2023-08-11 15:33:08 -05:00
H.J. Lu
f6b10ed8e9 x86_64: Add log2 with FMA
On Skylake, it improves log2 bench performance by:

        Before       After     Improvement
max     208.779      63.827       69%
min     9.977        6.55         34%
mean    10.366       6.8191       34%
2023-08-11 07:49:45 -07:00
Florian Weimer
039ff51ac7 nscd: Do not rebuild getaddrinfo (bug 30709)
The nscd daemon caches hosts data from NSS modules verbatim, without
filtering protocol families or sorting them (otherwise separate caches
would be needed for certain ai_flags combinations).  The cache
implementation is completely separate from the getaddrinfo code.  This
means that rebuilding getaddrinfo is not needed.  The only function
actually used is __bump_nl_timestamp from check_pf.c, and this change
moves it into nscd/connections.c.

Tested on x86_64-linux-gnu with -fexceptions, built with
build-many-glibcs.py.  I also backported this patch into a distribution
that still supports nscd and verified manually that caching still works.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-08-11 10:10:16 +02:00
H.J. Lu
881546979d x86_64: Sort fpu/multiarch/Makefile
Sort Makefile variables using scripts/sort-makefile-lines.py.

No code generation changes observed in libm.  No regressions on x86_64.
2023-08-10 11:23:25 -07:00
Adhemerval Zanella
c73c96a4a1 i686: Fix build with --disable-multiarch
Since i686 provides the fortified wrappers for memcpy, mempcpy,
memmove, and memset on the same string implementation, the static
build tries to optimize it by not tying the fortified wrappers
to the string routines (to avoid pulling in the fortified functions if
they are not required).

Checked on i686-linux-gnu building with different options:
default and --disable-multi-arch plus default, --disable-default-pie,
--enable-fortify-source={2,3}, and --enable-fortify-source={2,3}
with --disable-default-pie.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-08-10 10:29:29 -03:00
Adhemerval Zanella
51cb52214f x86_64: Fix build with --disable-multiarch (BZ 30721)
With multiarch disabled, the default memmove implementation provides
the fortify routines for memcpy, mempcpy, and memmove.  However, it
does not provide the internal hidden definitions used when building
with fortify enabled.  The memset has a similar issue.

Checked on x86_64-linux-gnu building with different options:
default and --disable-multi-arch plus default, --disable-default-pie,
--enable-fortify-source={2,3}, and --enable-fortify-source={2,3}
with --disable-default-pie.
Tested-by: Andreas K. Huettel <dilfridge@gentoo.org>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-08-10 10:29:29 -03:00
Joseph Myers
b163fca6c3 Add PTRACE_SET_SYSCALL_USER_DISPATCH_CONFIG etc. from Linux 6.4 to sys/ptrace.h
Linux 6.4 adds new constants PTRACE_SET_SYSCALL_USER_DISPATCH_CONFIG
and PTRACE_GET_SYSCALL_USER_DISPATCH_CONFIG.  Add those to all
relevant sys/ptrace.h headers, along with adding the associated
argument structure to bits/ptrace-shared.h (named struct
__ptrace_sud_config there following the usual convention for such
structures).

Tested for x86_64 and with build-many-glibcs.py.
2023-08-08 14:38:22 +00:00
Joseph Myers
c8c20039c7 Add PACKET_VNET_HDR_SZ from Linux 6.4 to netpacket/packet.h
Linux 6.4 adds a new constant PACKET_VNET_HDR_SZ; add it to glibc's
netpacket/packet.h.

Tested for x86_64.
2023-08-08 14:37:45 +00:00
Samuel Thibault
e3ae80adbc hurd: Make error_t an int in C++
Defining error_t as enum __error_t_codes conveniently makes the
debugger print symbolic values, but in C++ int is not interoperable with
enum __error_t_codes, leading to build issues in C++ applications, so let's
revert error_t to int in C++.
2023-08-08 16:07:57 +02:00
наб
92861d93cd linux: statvfs: allocate spare for f_type
This is the only missing part in struct statvfs.
The LSB calls [f]statfs() deprecated, and its weird types are definitely
off-putting. However, its use is required to get f_type.

Instead, allocate one of the six spares to f_type,
copied directly from struct statfs.
This then becomes a small glibc extension to the standard interface
on Linux and the Hurd, instead of two different interfaces, one of which
is quite odd due to being an ABI type, and there no longer is any reason
to use statfs().

The underlying kernel type is a mess, but all architectures agree on u32
(or more) for the ABI, and all filesystem magicks are 32-bit integers.
We don't lose any generality by using u32, and by doing so we both make
the API consistent with the Hurd, and allow C++
  switch(f_type) { case RAMFS_MAGIC: ...; }

Also fix tst-statvfs so that it actually fails;
as it stood, all it did was return 0 always.
Test that statfs()'s and statvfs()'s f_types are the same.
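
An illustrative sketch (not from the patch) of the resulting usage with a
glibc that has this change; TMPFS_MAGIC comes from <linux/magic.h>:

```c
#include <sys/statvfs.h>
#include <linux/magic.h>   /* TMPFS_MAGIC */
#include <stdio.h>

int
main (void)
{
  struct statvfs sv;
  if (statvfs ("/dev/shm", &sv) != 0)
    {
      perror ("statvfs");
      return 1;
    }
  /* f_type mirrors struct statfs's f_type, so filesystem magic numbers
     can be compared directly, without calling statfs ().  */
  switch (sv.f_type)
    {
    case TMPFS_MAGIC:
      puts ("tmpfs");
      break;
    default:
      printf ("other filesystem, f_type = %#lx\n", (unsigned long) sv.f_type);
    }
  return 0;
}
```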

Link: https://lore.kernel.org/linux-man/f54kudgblgk643u32tb6at4cd3kkzha6hslahv24szs4raroaz@ogivjbfdaqtb/t/#u
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2023-08-08 09:29:06 -03:00
наб
a9847e2c66 hurd: statvfs: __f_type -> f_type
No further changes needed ([f]statvfs() just casts to struct statfs *
and calls [f]statfs()).

Link: https://lore.kernel.org/linux-man/f54kudgblgk643u32tb6at4cd3kkzha6hslahv24szs4raroaz@ogivjbfdaqtb/t/#u
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
2023-08-08 09:29:06 -03:00
Samuel Thibault
53da64d1cf htl: Initialize ___pthread_self early
When using jemalloc, malloc() needs to use TSD, while libpthread
initialization needs malloc(). Having ___pthread_self set early to some
static storage allows TSD to work early, thus allowing jemalloc and
libpthread to initialize together.

This incidentally simplifies __pthread_enable/disable_asynccancel and
__pthread_self, now that ___pthread_self is always initialized.
2023-08-08 12:19:29 +02:00
Samuel Thibault
644aa127b9 htl: Add support for static TSD data
When using jemalloc, malloc() needs to use TSD, while libpthread
initialization needs malloc(). Supporting a static TSD area allows jemalloc
and libpthread to initialize together.
2023-08-08 12:17:48 +02:00
Sajan Karumanchi
dcad5c8578 x86: Fix for cache computation on AMD legacy cpus.
Some legacy AMD CPUs and hypervisors have the _cpuid_ '0x8000_001D'
set to zero, thus resulting in zeroed-out computed cache values. This
patch reintroduces the old way of cache computation as a
fail-safe option to handle these exceptions.
It also fixes the 'level4_cache_size' value through handle_amd().

Reviewed-by: Premachandra Mallappa <premachandra.mallappa@amd.com>
Tested-by: Florian Weimer <fweimer@redhat.com>
2023-08-06 19:10:42 +05:30
Samuel Thibault
53850f044f hurd: Rework generating errno.h
We only need to give gawk the headers that actually define error
numbers, so filter out the other included headers early.
2023-08-06 22:35:01 +02:00
Samuel Thibault
41d8c3bc33 powerpc longjmp: Fix build after chk hidden builtin fix
04bf7d2d8a ("chk: Add and fix hidden builtin definitions for *_chk")
added an #undef for longjmp and siglongjmp to compensate for the
definition in include/setjmp.h, but missed doing so for the powerpc
version too.

Fixes: 04bf7d2d8a ("chk: Add and fix hidden builtin definitions for
*_chk")
2023-08-04 10:03:59 +02:00
Yang Yujie
c579293f67 LoongArch: Fix static PIE condition for toolchain bootstrapping.
This patch allows the static PIE startfile rcrt1.o to be built
without requiring libgcc_s.so from GCC, which depends on libc
in the first place.
2023-08-04 14:04:37 +08:00
Joseph Myers
bd154cdb9e Add IP_PROTOCOL from Linux 6.4 to bits/in.h
Linux 6.4 adds a new constant IP_PROTOCOL; add it to glibc's
bits/in.h.

Tested for x86_64.
2023-08-01 17:22:12 +00:00
Joseph Myers
47b76f6d1d Update kernel version to 6.4 in header constant tests
This patch updates the kernel version in the tests tst-mman-consts.py,
tst-mount-consts.py and tst-pidfd-consts.py to 6.4.  (There are no new
constants covered by these tests in 6.4 that need any other header
changes.)

Tested with build-many-glibcs.py.
2023-08-01 12:43:04 +00:00
Mahesh Bodapati
21841f0d56 PowerPC: Influence cpu/arch hwcap features via GLIBC_TUNABLES
This patch enables the option to influence the hwcaps used by PowerPC.
The environment variable GLIBC_TUNABLES=glibc.cpu.hwcaps=-xxx,yyy,-zzz....
can be used to enable CPU/ARCH feature yyy and disable CPU/ARCH features
xxx and zzz, where the feature name is case-sensitive and has to match the
ones mentioned in the file sysdeps/powerpc/dl-procinfo.c.

Note that the hwcap tunables are only used in IFUNC selection.
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2023-08-01 07:41:17 -05:00
H.J. Lu
1547d6a64f <sys/platform/x86.h>: Add APX support
Add support for Intel Advanced Performance Extensions:

https://www.intel.com/content/www/us/en/developer/articles/technical/advanced-performance-extensions-apx.html

to <sys/platform/x86.h>.
2023-07-27 08:42:32 -07:00
Adhemerval Zanella Netto
dbc4b032dc linux: Fix i686 with gcc6
In __convert_scm_timestamps, GCC 6 issues a warning that tvts[0]/tvts[1]
may be used uninitialized; however, they are only used if type is set to a
value different from 0 (done by either COMPAT_SO_TIMESTAMP_OLD or
COMPAT_SO_TIMESTAMPNS_OLD, which fall through to the 'common' label).

The warning does not show with gcc 7 or more recent versions.

Checked on i686-linux-gnu.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2023-07-26 09:45:55 -03:00
Adhemerval Zanella Netto
0b1a76c577 i386: Remove memset_chk-nonshared.S
Similar to memcpy, mempcpy, and memmove, there is no need for a
specific memset_chk-nonshared.S.  It can be provided by
memset-ia32.S itself for the static library.

Checked on i686-linux-gnu.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2023-07-26 09:45:55 -03:00
Adhemerval Zanella Netto
f8f9a27257 i386: Fix build with --enable-fortify=3
The i386 string routines provide multiple internal definitions
for memcpy, memmove, and mempcpy chk routines:

  $ objdump -t libc.a | grep __memcpy_chk
  00000000 g     F .text  0000000e __memcpy_chk
  00000000 g     F .text  00000013 __memcpy_chk
  $ objdump -t libc.a | grep __mempcpy_chk
  00000000 g     F .text  0000000e __mempcpy_chk
  00000000 g     F .text  00000013 __mempcpy_chk
  $ objdump -t libc.a | grep __memmove_chk
  00000000 g     F .text  0000000e __memmove_chk
  00000000 g     F .text  00000013 __memmove_chk

Although this is not an issue for normal static builds, with fortify=3
glibc itself might use the fortified chk functions and thus the static
build might fail with multiple definitions.  For instance:

x86_64-glibc-linux-gnu-gcc -m32 -march=i686 -o [...]math/test-signgam-uchar-static -nostdlib -nostartfiles -static -static-pie [...]
x86_64-glibc-linux-gnu/bin/ld: [...]/libc.a(mempcpy-ia32.o):
in function `__mempcpy_chk': [...]/glibc-git/string/../sysdeps/i386/i686/mempcpy.S:32: multiple definition of `__mempcpy_chk';
[...]/libc.a(mempcpy_chk-nonshared.o):[...]/debug/../sysdeps/i386/mempcpy_chk.S:28: first defined here
collect2: error: ld returned 1 exit status
make[2]: *** [../Rules:298:

There is no need for mem*-nonshared.S; the __mem*_chk routines
are already provided by the assembly routines.

Checked on i686-linux-gnu with gcc 13 built with fortify=1,2,3 and
without fortify.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2023-07-26 09:45:55 -03:00
Adhemerval Zanella Netto
648c3b574d powerpc: Fix powerpc64 strchrnul build with old gcc
The compiler might not see that the internal definition is an alias
due to the libc_ifunc macro, which redefines __strchrnul.  With
gcc 6 it fails with:

In file included from <command-line>:0:0:
./../include/libc-symbols.h:472:33: error: ‘__EI___strchrnul’ aliased to
undefined symbol ‘__GI___strchrnul’
   extern thread __typeof (name) __EI_##name \
                                 ^
./../include/libc-symbols.h:468:3: note: in expansion of macro
‘__hidden_ver2’
   __hidden_ver2 (, local, internal, name)
   ^~~~~~~~~~~~~
./../include/libc-symbols.h:476:29: note: in expansion of macro
‘__hidden_ver1’
 #  define hidden_def(name)  __hidden_ver1(__GI_##name, name, name);
                             ^~~~~~~~~~~~~
./../include/libc-symbols.h:557:32: note: in expansion of macro
‘hidden_def’
 # define libc_hidden_def(name) hidden_def (name)
                                ^~~~~~~~~~
../sysdeps/powerpc/powerpc64/multiarch/strchrnul.c:38:1: note: in
expansion of macro ‘libc_hidden_def’
 libc_hidden_def (__strchrnul)
 ^~~~~~~~~~~~~~~

Use libc_ifunc_hidden, as stpcpy does.  Checked on powerpc64 with
gcc 6 and gcc 13.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2023-07-26 09:45:22 -03:00
Aurelien Jarno
a3eac15251 MIPS: Update mips32 and mips64 libm test ulps
Generated on a Cavium Octeon III 2 board running Linux version 4.19.249
and GCC 13.1.0.

Needed due to commit cf7ffdd8a5 ("added pair of inputs for hypotf in
binary32").
2023-07-25 22:20:57 +02:00
Stefan Liebler
637aac2ae3 Include sys/rseq.h in tst-rseq-disable.c
Starting with commit 2c6b4b272e
"nptl: Unconditionally use a 32-byte rseq area", the testcase
misc/tst-rseq-disable is UNSUPPORTED as RSEQ_SIG is not defined.

The mentioned commit removes inclusion of sys/rseq.h in nptl/descr.h.
Thus just include sys/rseq.h in the tst-rseq-disable.c as also done
in tst-rseq.c and tst-rseq-nptl.c.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2023-07-25 12:27:30 +02:00
Aurelien Jarno
7fcdc2380c riscv: Update rvd libm test ulps
Generated on a VisionFive 2 board running Linux version 6.4.2 and
GCC 13.1.0.

Needed due to commit cf7ffdd8a5 ("added pair of inputs for hypotf in
binary32").
2023-07-22 15:55:33 +02:00
Andreas K. Hüttel
6d457ff36a Update x86_64 libm-test-ulps (x32 ABI)
Based on feedback by Mike Gilbert <floppym@gentoo.org>
Linux-6.1.38-dist x86_64 AMD Phenom-tm- II X6 1055T Processor
-march=amdfam10
failures occur for x32 ABI

Signed-off-by: Andreas K. Hüttel <dilfridge@gentoo.org>
2023-07-19 16:56:54 +02:00
Noah Goldstein
8b9a0af8ca [PATCH v1] x86: Use 3/4*sizeof(per-thread-L3) as low bound for NT threshold.
On some machines we end up with incomplete cache information. This can
make the new calculation of `sizeof(total-L3)/custom-divisor` end up
lower than intended (and lower than the prior value). So reintroduce
the old bound as a lower bound to avoid potentially regressing code
where we don't have complete information to make the decision.
Reviewed-by: DJ Delorie <dj@redhat.com>
2023-07-18 22:34:34 -05:00
Noah Goldstein
47f7472178 x86: Fix slight bug in shared_per_thread cache size calculation.
After:
```
    commit af992e7abd
    Author: Noah Goldstein <goldstein.w.n@gmail.com>
    Date:   Wed Jun 7 13:18:01 2023 -0500

        x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
```

which split `shared` (cumulative cache size) from `shared_per_thread` (cache
size per socket), the `shared_per_thread` value *can* be slightly off from
the previous calculation.

Previously we added `core` even if `threads_l2` was invalid, and only
used `threads_l2` to divide `core` if it was present. The changed
version only included `core` if `threads_l2` was valid.

This change restores the old behavior if `threads_l2` is invalid by
adding the entire value of `core`.
Reviewed-by: DJ Delorie <dj@redhat.com>
2023-07-18 20:56:25 -05:00
Andreas K. Hüttel
2037f8ad01 Update i686 libm-test-ulps (again)
Based on feedback by Arsen Arsenović <arsen@gentoo.org>
Linux-6.1.38-gentoo-dist-hardened x86_64 AMD Ryzen 7 3800X 8-Core Processor
-march=x86-64-v2

Signed-off-by: Andreas K. Hüttel <dilfridge@gentoo.org>
2023-07-19 01:32:13 +02:00
Andreas K. Hüttel
86e56ecf2f Update i686 libm-test-ulps
Signed-off-by: Andreas K. Hüttel <dilfridge@gentoo.org>
2023-07-18 23:12:24 +02:00
Siddhesh Poyarekar
c6cb8783b5 configure: Use autoconf 2.71
Bump autoconf requirement to 2.71 to allow regenerating configure on
more recent distributions.  autoconf 2.71 has been in Fedora since F36
and is the current version in Debian stable (bookworm).  It appears to
be current in Gentoo as well.

All sysdeps configure and preconfigure scripts have also been
regenerated; all changes are trivial transformations that do not affect
functionality.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2023-07-17 10:08:10 -04:00
Adhemerval Zanella
5a70ac9d39 Update sparc libm-test-ulps 2023-07-17 10:09:44 -03:00
Adhemerval Zanella
721f30116c s390: Add the clone3 wrapper
It follows the internal signature:

  extern int clone3 (struct clone_args *__cl_args, size_t __size,
                     int (*__func) (void *__arg), void *__arg);

Checked on s390x-linux-gnu and s390-linux-gnu.
2023-07-13 10:26:34 -03:00
Adhemerval Zanella
dddc88587a sparc: Fix la_symbind for bind-now (BZ 23734)
The sparc ABI has multiple cases on how to handle JMP_SLOT relocations
(sparc_fixup_plt/sparc64_fixup_plt).  For BINDNOW, _dl_audit_symbind
is responsible for setting up the final relocation value; while for
lazy binding _dl_fixup/_dl_profile_fixup will call the audit callback
and tail call elf_machine_fixup_plt (which will call
sparc64_fixup_plt).

This patch fixes it by issuing the SPARC-specific routine on bind-now and
forwarding the audit value to elf_machine_fixup_plt for lazy resolution.
It fixes the la_symbind bind-now tests on sparc64 and sparcv9:

  elf/tst-audit24a
  elf/tst-audit24b
  elf/tst-audit24c
  elf/tst-audit24d

Checked on sparc64-linux-gnu and sparcv9-linux-gnu.
Tested-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
2023-07-12 15:29:08 -03:00
Andreas Schwab
ca230f5833 i386: make debug wrappers compatible with static PIE
Static PIE requires the use of PLT relocation.
2023-07-12 14:38:13 +02:00
caiyinyu
0e1324e655 LoongArch: Fix soft-float bug about _dl_runtime_resolve{,lsx,lasx} 2023-07-11 11:57:12 +08:00
caiyinyu
7f079fdc16 LoongArch: Add vector implementation for _dl_runtime_resolve. 2023-07-11 10:56:01 +08:00
caiyinyu
0d341d09f2 LoongArch: config: Added HAVE_LOONGARCH_VEC_ASM.
This patch checks whether the assembler supports vector instructions to
generate LASX/LSX code or not, and then defines the HAVE_LOONGARCH_VEC_ASM macro.

Support for vector instructions was added in binutils 2.41.
See:
https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=75b2f521b101d974354f6ce9ed7c054d8b2e3b7a

commit 75b2f521b101d974354f6ce9ed7c054d8b2e3b7a
Author: mengqinggang <mengqinggang@loongson.cn>
Date:   Thu Jun 22 10:35:28 2023 +0800

LoongArch: gas: Add lsx and lasx instructions support

gas/ChangeLog:

        * config/tc-loongarch.c (md_parse_option): Add lsx and lasx option.
        (loongarch_after_parse_args): Add lsx and lasx option.

opcodes/ChangeLog:

        * loongarch-opc.c (struct loongarch_ase): Add lsx and lasx
        instructions.
2023-07-11 10:56:01 +08:00
Frédéric Bérat
19f9f7f9d5 sysdeps: Add missing hidden definitions for i386
Add missing libc_hidden_builtin_def for memset_chk and MEMCPY_CHK on
i386.
2023-07-10 14:48:07 +02:00
Frédéric Bérat
e30048fdc1 sysdeps/s390: Exclude fortified routines from being built with _FORTIFY_SOURCE
Depending on the build configuration, the [routine]-c.c files may be chosen
to provide the fortified routines' implementation.  While the [routine].c
implementations were automatically excluded, the [routine]-c.c ones were
not.  This patch fixes that by adding these files to the list to be
filtered.
2023-07-10 14:48:04 +02:00
caiyinyu
0567edf1b2 LoongArch: config: Rewrite check on static PIE.
It's better to add "\" before "EOF" and remove "\"
before "$".
2023-07-07 09:01:51 +08:00
John David Anglin
5000549746 Revert "hppa: Drop 16-byte pthread lock alignment"
This change reverts commits c4468cd399
and ab991a3d1b.
2023-07-06 15:47:50 +00:00
Frédéric Bérat
02261d1bd9 sysdeps/ieee754/ldbl-128ibm-compat: Fix warn unused result
Return values from the *scanf and *asprintf routines are now properly checked
in test-scanf-ldbl-compat-template.c and test-printf-ldbl-compat.c.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-07-05 16:59:48 +02:00
Frédéric Bérat
ba745eff46 misc/bits/syslog.h: Clearly separate declaration from definition
This allows including bits/syslog-decl.h in include/sys/syslog.h and
therefore creating the libc_hidden_builtin_proto (__syslog_chk)
prototype.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-07-05 16:59:48 +02:00
Frédéric Bérat
64f9857507 wchar: Avoid PLT entries with _FORTIFY_SOURCE
The change is meant to avoid unwanted PLT entries for the wmemset and
wcrtomb routines when _FORTIFY_SOURCE is set.

On top of that, ensure that *_chk routines have their hidden builtin
definitions available.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-07-05 16:59:48 +02:00
Frédéric Bérat
505c884aeb stdio: Ensure *_chk routines have their hidden builtin definition available
If libc_hidden_builtin_{def,proto} isn't properly set for *_chk routines,
there are unwanted PLT entries in libc.so.

There is a special case with __asprintf_chk:
If ldbl_* macros are used for asprintf, the ABI gets broken on s390x;
if they aren't, ppc64le doesn't build due to multiple asm redirections.

This is due to the inclusion of bits/stdio-ldbl.h for ppc64le, whereas it
isn't included for s390x. This header creates redirections, which are not
compatible with the ones generated using libc_hidden_def.
Yet, we can't use libc_hidden_ldbl_proto on s390x since it will not
create a simple strong alias (e.g. as done on x86_64), but a versioned
alias, leading to ABI breakage.

This results in errors on s390x:
/usr/bin/ld: glibc/iconv/../libio/bits/stdio2.h:137: undefined reference
to `__asprintf_chk'

Original __asprintf_chk symbols:
00000000001395b0 T __asprintf_chk
0000000000177e90 T __nldbl___asprintf_chk

__asprintf_chk symbols with ldbl_* macros:
000000000012d590 t ___asprintf_chk
000000000012d590 t __asprintf_chk@@GLIBC_2.4
000000000012d590 t __GI___asprintf_chk
000000000012d590 t __GL____asprintf_chk___asprintf_chk
0000000000172240 T __nldbl___asprintf_chk

__asprintf_chk symbols with the patch:
000000000012d590 t ___asprintf_chk
000000000012d590 T __asprintf_chk
000000000012d590 t __GI___asprintf_chk
0000000000172240 T __nldbl___asprintf_chk

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2023-07-05 16:59:48 +02:00
Frédéric Bérat
dd8486ffc1 string: Ensure *_chk routines have their hidden builtin definition available
If libc_hidden_builtin_{def,proto} isn't properly set for *_chk routines,
there are unwanted PLT entries in libc.so.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-07-05 16:59:48 +02:00
Frédéric Bérat
ba96ff24b2 sysdeps: Ensure ieee128*_chk routines to be properly named
The *_chk routines' naming doesn't match the names that would be generated
using libc_hidden_ldbl_proto.  Since the macro is needed for some of
these *_chk functions for _FORTIFY_SOURCE to be enabled, that needed to
be fixed.
While at it, all the *_chk functions get renamed appropriately for
consistency, even if not strictly necessary.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: Paul E. Murphy <murphyp@linux.ibm.com>
2023-07-05 16:59:48 +02:00
Frédéric Bérat
20c894d21e Exclude routines from fortification
Since the _FORTIFY_SOURCE feature uses some routines of Glibc, they need to
be excluded from the fortification.

On top of that:
 - some tests explicitly verify that some level of fortification works
   appropriately, we therefore shouldn't modify the level set for them.
 - some objects need to be built with optimization disabled, which
   prevents _FORTIFY_SOURCE from being used for them.

Assembler files that implement architecture specific versions of the
fortified routines were not excluded from _FORTIFY_SOURCE as there is no
C header included that would impact their behavior.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-07-05 16:59:48 +02:00
Sergey Bugaev
27cb2bb93d hurd: Implement MAP_EXCL
MAP_FIXED is defined to silently replace any existing mappings at the
address range being mapped over. This, however, is dangerous and only
rarely desired behavior.

Various Unix systems provide replacements or additions to MAP_FIXED:

* SerenityOS and Linux provide MAP_FIXED_NOREPLACE. If the address space
  already contains a mapping in the requested range, Linux returns
  EEXIST. SerenityOS returns ENOMEM, however that is a bug, as the
  MAP_FIXED_NOREPLACE implementation is intended to be compatible with
  Linux.

* FreeBSD provides the MAP_EXCL flag that has to be used in combination
  with MAP_FIXED. It returns EINVAL if the requested range already
  contains existing mappings. This is directly analogous to the O_EXCL
  flag in the open () call.

* DragonFly BSD, NetBSD, and OpenBSD provide MAP_TRYFIXED, but with
  different semantics. DragonFly BSD returns ENOMEM if the requested
  range already contains existing mappings. NetBSD does not return an
  error, but instead creates the mapping at a different address if the
  requested range contains mappings. OpenBSD behaves the same, but also
  notes that this is the default behavior even without MAP_TRYFIXED
  (which is the case on the Hurd too).

Since the Hurd leans closer to the BSD side, add MAP_EXCL as the primary
API to request the behavior of not replacing existing mappings. Declare
MAP_FIXED_NOREPLACE and MAP_TRYFIXED as aliases of (MAP_FIXED|MAP_EXCL),
so any existing software that checks for either of those macros will
pick them up automatically. For compatibility with Linux, return EEXIST
if a mapping already exists.
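
An illustrative sketch (not from the patch) of the intended use: claim a
specific range but fail instead of clobbering an existing mapping; the
address used here is an arbitrary example:

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <errno.h>
#include <stdio.h>

int
main (void)
{
  void *want = (void *) 0x20000000;   /* arbitrary example address */
  void *p = mmap (want, 4096, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE, -1, 0);
  if (p == MAP_FAILED)
    {
      if (errno == EEXIST)
        puts ("range already mapped; nothing was replaced");
      else
        perror ("mmap");
      return 1;
    }
  printf ("mapped at %p\n", p);
  return 0;
}
```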

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230625231751.404120-5-bugaevc@gmail.com>
2023-07-03 01:38:14 +02:00
Sergey Bugaev
19c3b31812 hurd: Fix mapping at address 0 with MAP_FIXED
A zero address passed to mmap () typically means the caller doesn't have
any specific preferred address. Not so if MAP_FIXED is passed: in this
case 0 means the literal address 0. Fix this case to pass anywhere = 0 into vm_map.

Also add some documentation.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230625231751.404120-4-bugaevc@gmail.com>
2023-07-03 01:38:12 +02:00
Sergey Bugaev
f84c3ceb04 hurd: Fix calling vm_deallocate (NULL)
Only call vm_deallocate when we do have the old buffer, and check for
unexpected errors.

Spotted while debugging a msgids/readdir issue on x86_64-gnu.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230625231751.404120-3-bugaevc@gmail.com>
2023-07-03 01:38:12 +02:00
Sergey Bugaev
4b5e576fc2 hurd: Map brk non-executable
The rest of the heap (backed by individual pages) is already mapped RW.
Mapping these pages RWX presents a security hazard.

Also, in another branch memory gets allocated using vm_allocate, which
sets memory protection to VM_PROT_DEFAULT (which is RW). The mismatch
between protections prevents Mach from coalescing the VM map entries.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230625231751.404120-2-bugaevc@gmail.com>
2023-07-03 01:38:08 +02:00
Sergey Bugaev
019b0bbc84 htl: Let Mach place thread stacks
Instead of trying to allocate a thread stack at a specific address,
looping over the address space, just set the ANYWHERE flag in
vm_allocate (). The previous behavior:

- defeats ASLR (for Mach versions that support ASLR),
- is particularly slow if the lower 4 GB of the address space are mapped
  inaccessible, as we're planning to do on 64-bit Hurd,
- is just silly.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230625231751.404120-1-bugaevc@gmail.com>
2023-07-03 01:25:33 +02:00
Samuel Thibault
efdb85183a mach: strerror must not return NULL (bug 30555)
This follows 1d44530a5b ("string: strerror must not return NULL (bug 30555)"):

«
    For strerror, this fixes commit 28aff04781 ("string:
    Implement strerror in terms of strerror_l").  This commit avoids
    returning NULL for strerror_l as well, although POSIX allows this
    behavior for strerror_l.
»
2023-07-02 11:27:51 +00:00
John David Anglin
181e991dfb hppa: xfail debug/tst-ssp-1 when have-ssp is yes (gcc-12 and later) 2023-07-01 18:26:18 +00:00
Samuel Thibault
494714d407 hurd: Make getrandom return ENOSYS when /dev/random is not set up
So that callers (e.g. __arc4random_buf) don't try calling it again.
2023-07-01 14:23:40 +02:00
H.J. Lu
6259ab3941 ld.so: Always use MAP_COPY to map the first segment [BZ #30452]
The first segment in a shared library may be read-only, not executable.
To support LD_PREFER_MAP_32BIT_EXEC on such shared libraries, we also
check MAP_DENYWRITE to decide if MAP_32BIT should be passed to mmap.
Normally the first segment is mapped with MAP_COPY, which is defined
as (MAP_PRIVATE | MAP_DENYWRITE).  But if the segment alignment is
greater than the page size, MAP_COPY isn't used to allocate enough
space to ensure that the segment can be properly aligned.  Map the
first segment with MAP_COPY in this case to fix BZ #30452.
2023-06-30 10:42:42 -07:00
Joe Ramsay
4a9392ffc2 aarch64: Add vector implementations of exp routines
Optimised implementations for single and double precision, Advanced
SIMD and SVE, copied from Arm Optimized Routines.

As previously, data tables are used via a barrier to prevent
overly aggressive constant inlining. Special-case handlers are
marked NOINLINE to avoid incurring the penalty of switching call
standards unnecessarily.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2023-06-30 09:04:26 +01:00
Joe Ramsay
78c01a5cbe aarch64: Add vector implementations of log routines
Optimised implementations for single and double precision, Advanced
SIMD and SVE, copied from Arm Optimized Routines. Log lookup table
added as HIDDEN symbol to allow it to be shared between AdvSIMD and
SVE variants.

As previously, data tables are used via a barrier to prevent
overly aggressive constant inlining. Special-case handlers are
marked NOINLINE to avoid incurring the penalty of switching call
standards unnecessarily.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2023-06-30 09:04:22 +01:00
Joe Ramsay
3bb1af2051 aarch64: Add vector implementations of sin routines
Optimised implementations for single and double precision, Advanced
SIMD and SVE, copied from Arm Optimized Routines.

As previously, data tables are used via a barrier to prevent
overly aggressive constant inlining. Special-case handlers are
marked NOINLINE to avoid incurring the penalty of switching call
standards unnecessarily.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2023-06-30 09:04:16 +01:00
Joe Ramsay
aed39a3aa3 aarch64: Add vector implementations of cos routines
Replace the loop-over-scalar placeholder routines with optimised
implementations from Arm Optimized Routines (AOR).

Also add some headers containing utilities for aarch64 libmvec
routines, and update libm-test-ulps.

Data tables for new routines are used via a pointer with a
barrier on it, in order to prevent overly aggressive constant
inlining in GCC. This allows a single adrp, combined with offset
loads, to be used for every constant in the table.

Special-case handlers are marked NOINLINE in order to confine the
save/restore overhead of switching from vector to normal calling
standard. This way we only incur the extra memory access in the
exceptional cases. NOINLINE definitions have been moved to
math_private.h in order to reduce duplication.

AOR exposes a config option, WANT_SIMD_EXCEPT, to enable
selective masking (and later fixing up) of invalid lanes, in
order to trigger fp exceptions correctly (AdvSIMD only). This is
tested and maintained in AOR, however it is configured off at
source level here for performance reasons. We keep the
WANT_SIMD_EXCEPT blocks in routine sources to greatly simplify
the upstreaming process from AOR to glibc.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2023-06-30 09:04:10 +01:00
Joseph Myers
1a21693e16 Update syscall lists for Linux 6.4
Linux 6.4 adds the riscv_hwprobe syscall on riscv and enables
memfd_secret on s390.  Update syscall-names.list and regenerate the
arch-syscall.h headers with build-many-glibcs.py update-syscalls.

Tested with build-many-glibcs.py.
2023-06-28 21:22:14 +00:00
Adhemerval Zanella
d35fbd3e68 linux: Return unsupported if procfs can not be mount on tst-ttyname-namespace
Trying to mount procfs can fail for multiple reasons: proc is locked
due to the container configuration, the mount syscall is filtered by a
Linux Security Module, or some other security or hardening mechanism
that Linux might eventually add.

The test does require a new procfs without binding to the parent, and
fully fixing it would require changing how the container is created
(which is out of the scope of the test itself).  Instead of trying to
foresee every possible scenario, fail as unsupported if procfs cannot
be mounted.

Checked on aarch64-linux-gnu.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-06-28 09:19:11 -03:00
Adhemerval Zanella
a9fed5ea81 linux: Split tst-ttyname
The tst-ttyname-direct.c test checks ttyname with procfs mounted in
bind mode (MS_BIND|MS_REC), while tst-ttyname-namespace.c checks it
with procfs mounted with MS_NOSUID|MS_NOEXEC|MS_NODEV in a new
namespace.

Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-06-28 09:18:23 -03:00
Adhemerval Zanella
b29e70657d x86: Adjust Linux x32 dl-cache inclusion path
It fixes the x32 build failure introduced by 45e2483a6c.

Checked on a x86_64-linux-gnu-x32 build.
2023-06-26 16:51:30 -03:00
Joe Simmons-Talbott
9a17a193b4 check_native: Get rid of alloca
Use malloc rather than alloca to avoid potential stack overflow.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2023-06-26 10:17:47 -03:00
Joe Simmons-Talbott
48170127d9 ifaddrs: Get rid of alloca
Use scratch_buffer and malloc rather than alloca to avoid potential stack
overflows.
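
For illustration, a sketch of the scratch_buffer pattern that replaces
alloca in these changes; <scratch_buffer.h> is a glibc-internal header, and
the element count and type here are hypothetical:

```c
#include <scratch_buffer.h>   /* glibc-internal, not installed for applications */
#include <net/if.h>           /* struct ifreq, used here only as an example */
#include <stdbool.h>
#include <stddef.h>

static bool
example (size_t nifs)
{
  struct scratch_buffer buf;
  scratch_buffer_init (&buf);        /* starts with a small on-stack array */

  /* Grow to hold nifs elements; falls back to malloc when the on-stack
     array is too small, and fails cleanly instead of overflowing.  */
  if (!scratch_buffer_set_array_size (&buf, nifs, sizeof (struct ifreq)))
    return false;                    /* allocation failed (ENOMEM) */

  struct ifreq *ifr = buf.data;
  /* ... use ifr[0 .. nifs-1] ... */
  (void) ifr;

  scratch_buffer_free (&buf);        /* releases heap memory if it was used */
  return true;
}
```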
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2023-06-26 10:17:39 -03:00
Sergey Bugaev
45e2483a6c x86: Make dl-cache.h and readelflib.c not Linux-specific
These files could be useful to any port that wants to use ld.so.cache.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2023-06-26 10:04:31 -03:00
Frederic Berat
99f9ae4ed0 benchtests: fix warn unused result
A few tests needed to properly check asprintf and system call return
values with _FORTIFY_SOURCE enabled.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-06-22 00:21:19 -04:00
Frederic Berat
d636339306 sysdeps/powerpc/fpu/tst-setcontext-fpscr.c: Fix warn unused result
The fread return value needs to be checked when fortification
is enabled, hence use the xfread helper.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-06-22 00:21:17 -04:00
Frederic Berat
1bc85effd5 sysdeps/{i386, x86_64}/mempcpy_chk.S: fix linknamespace for __mempcpy_chk
On i386 and x86_64, for libc.a specifically, __mempcpy_chk calls
mempcpy which leads POSIX routines to call non-POSIX mempcpy indirectly.

This leads the linknamespace test to fail when glibc is built with
__FORTIFY_SOURCE=3.

Since calling mempcpy doesn't bring any benefit for libc.a, directly
call __mempcpy instead.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-06-22 00:20:52 -04:00
Joe Simmons-Talbott
9e6863a537 hurd: readv: Get rid of alloca
Replace alloca with a scratch_buffer to avoid potential stack overflows.

Checked on i686-gnu and x86_64-linux-gnu
Message-Id: <20230619144334.2902429-1-josimmon@redhat.com>
2023-06-20 19:15:10 +02:00
Joe Simmons-Talbott
c6957bddb9 hurd: writev: Add back cleanup handler
There is a potential memory leak for large writes due to writev being a
"shall occur" cancellation point.  Add back the cleanup handler removed
in cf30aa43a5.

Checked on i686-gnu and x86_64-linux-gnu.
Message-Id: <20230619143842.2901522-1-josimmon@redhat.com>
2023-06-20 18:37:04 +02:00
Paul Pluzhnikov
4290aed051 Fix misspellings -- BZ 25337 2023-06-19 21:58:33 +00:00
Frédéric Bérat
20b6b8e8a5 tests: replace read by xread
With fortification enabled, the return value of read calls needs to be
checked, as it gets the __wur macro applied.

Note on read call removal from  sysdeps/pthread/tst-cancel20.c and
sysdeps/pthread/tst-cancel21.c:
It is assumed that this second read call was there to overcome the race
condition between pipe closure and thread cancellation that could happen
in the original code. Since this race condition got fixed by
d0e3ffb7a5 the second call seems
superfluous. Hence, instead of checking for the return value of read, it
looks reasonable to simply remove it.
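
As a sketch of the idea (the real xread helper lives in glibc's test support
library and may differ), a checked-read wrapper of this shape consumes the
__wur-annotated return value and aborts on failure:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Read exactly len bytes or terminate; sketch only.  */
static void
xread_sketch (int fd, void *buf, size_t len)
{
  char *p = buf;
  while (len > 0)
    {
      ssize_t ret = read (fd, p, len);
      if (ret < 0)
        {
          perror ("read");
          exit (1);
        }
      if (ret == 0)
        {
          fputs ("unexpected end of file\n", stderr);
          exit (1);
        }
      p += ret;
      len -= ret;
    }
}
```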
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-06-19 09:14:56 -04:00
Joe Simmons-Talbott
cf30aa43a5 hurd: writev: Get rid of alloca
Use a scratch_buffer rather than alloca to avoid potential stack
overflows.

Checked on i686-gnu and x86_64-linux-gnu
Message-Id: <20230608155844.976554-1-josimmon@redhat.com>
2023-06-19 02:45:19 +02:00
Joe Simmons-Talbott
01dd2875f8 grantpt: Get rid of alloca
Replace alloca with a scratch_buffer to avoid potential stack overflows.
Message-Id: <20230613191631.1080455-1-josimmon@redhat.com>
2023-06-18 01:08:04 +02:00
Florian Weimer
388ae538dd hurd: Add strlcpy, strlcat, wcslcpy, wcslcat to libc.abilist 2023-06-15 10:05:25 +02:00
Florian Weimer
b54e5d1c92 Add the wcslcpy, wcslcat functions
These functions are about to be added to POSIX, under Austin Group
issue 986.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-06-14 18:10:24 +02:00
Florian Weimer
454a20c875 Implement strlcpy and strlcat [BZ #178]
These functions are about to be added to POSIX, under Austin Group
issue 986.

The fortified strlcat implementation does not raise SIGABRT if the
destination buffer does not contain a null terminator; it just
inherits the non-failing regular strlcat behavior.
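
An illustrative usage sketch (semantics as in the BSD originals; needs a
glibc that includes these functions): both return the length of the string
they tried to create, so a result >= the buffer size signals truncation.

```c
#include <stdio.h>
#include <string.h>

int
main (void)
{
  char buf[8];

  /* strlcpy always NUL-terminates (for a non-zero size) and returns
     strlen (src), so truncation is easy to detect.  */
  size_t n = strlcpy (buf, "truncate me", sizeof buf);
  if (n >= sizeof buf)
    printf ("copy truncated: wanted %zu bytes, buf holds \"%s\"\n",
            n + 1, buf);

  strlcpy (buf, "ab", sizeof buf);
  n = strlcat (buf, "cdefghij", sizeof buf);
  if (n >= sizeof buf)
    printf ("concatenation truncated (%zu): \"%s\"\n", n, buf);

  return 0;
}
```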

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-06-14 18:10:08 +02:00
Frederic Berat
7ba426a111 tests: replace fgets by xfgets
With fortification enabled, the return value of fgets calls needs to be
checked, as it gets the __wur macro applied.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-06-13 19:59:08 -04:00
Dridi Boukelmoune
658f601f2a posix: Handle success in gai_strerror()
Signed-off-by: Dridi Boukelmoune <dridi.boukelmoune@gmail.com>
Reviewed-by: Arjun Shankar <arjun@redhat.com>
2023-06-13 20:54:49 +02:00
caiyinyu
eaa5b1cce8 LoongArch: Add support for dl_runtime_profile
This commit fixes the FAIL item: elf/tst-sprof-basic.
2023-06-13 10:27:45 +08:00
Noah Goldstein
180897c161 x86: Make the divisor in setting non_temporal_threshold cpu specific
Different systems prefer different divisors.

From benchmarks[1] so far the following divisors have been found:
    ICX     : 2
    SKX     : 2
    BWD     : 8

For Intel, we are generalizing that BWD and older prefers 8 as a
divisor, and SKL and newer prefers 2. This number can be further tuned
as benchmarks are run.

[1]: https://github.com/goldsteinn/memcpy-nt-benchmarks
Reviewed-by: DJ Delorie <dj@redhat.com>
2023-06-12 11:33:39 -05:00
Noah Goldstein
f193ea20ed x86: Refactor Intel init_cpu_features
This patch should have no effect on existing functionality.

The current code, which has a single switch for model detection and
setting preferred features, is difficult to follow/extend. The cases
use magic numbers and many microarchitectures are missing. This makes
it difficult to reason about what is implemented so far and/or
how/where to add support for new features.

This patch splits the model detection and preference setting stages so
that CPU preferences can be set based on a complete list of available
microarchitectures, rather than based on model magic numbers.
Reviewed-by: DJ Delorie <dj@redhat.com>
2023-06-12 11:33:39 -05:00
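
A rough sketch of the two-stage structure the refactor describes; the model
numbers, enum names, and preference field below are illustrative examples,
not the actual glibc tables.

  /* Illustrative only: first map model numbers to a named
     microarchitecture, then set preferences from that name.  */
  enum intel_arch
    { ARCH_UNKNOWN, ARCH_BROADWELL, ARCH_SKYLAKE_X, ARCH_ICELAKE_X };

  static enum intel_arch
  detect_arch (unsigned int model)
  {
    switch (model)
      {
      case 0x4f: case 0x56: return ARCH_BROADWELL;
      case 0x55:            return ARCH_SKYLAKE_X;
      case 0x6a: case 0x6c: return ARCH_ICELAKE_X;
      default:              return ARCH_UNKNOWN;
      }
  }

  static void
  set_prefs (enum intel_arch arch, unsigned long *nt_divisor)
  {
    /* Preferences keyed off the microarchitecture name, not magic
       model numbers scattered through one big switch.  */
    *nt_divisor = (arch == ARCH_BROADWELL || arch == ARCH_UNKNOWN) ? 8 : 2;
  }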
Noah Goldstein
af992e7abd x86: Increase non_temporal_threshold to roughly sizeof_L3 / 4
Currently `non_temporal_threshold` is set to roughly
`3/4 * sizeof_L3 / ncores_per_socket`.  This patch updates that value
to roughly `sizeof_L3 / 4`.

The original value (specifically dividing the `ncores_per_socket`) was
done to limit the amount of other threads' data a `memcpy`/`memset`
could evict.

Dividing by 'ncores_per_socket', however, leads to exceedingly low
non-temporal thresholds and to using non-temporal stores in cases
where REP MOVSB is multiple times faster.

Furthermore, non-temporal stores are written directly to main memory,
so using them at a size much smaller than L3 can place soon-to-be-accessed
data much further away than it otherwise could be. As well, modern
machines are able to detect streaming patterns (especially if REP MOVSB
is used) and provide LRU hints to the memory subsystem. This in effect
caps the total amount of eviction at 1/cache_associativity, far below
meaningfully thrashing the entire cache.

As best I can tell, the benchmarks that led to this small threshold
were done comparing non-temporal stores versus standard cacheable
stores. A better comparison (linked below) is against REP MOVSB, which,
on the measured systems, is nearly 2x faster than non-temporal stores
at the low end of the previous threshold, and within 10% for copies
over 100MB (well past even the current threshold). In cases with a
low number of threads competing for bandwidth, REP MOVSB is ~2x faster
up to `sizeof_L3`.

The divisor of `4` is a somewhat arbitrary value. From benchmarks it
seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
such as Broadwell prefer something closer to `8`. This patch is meant
to be followed up by another one to make the divisor cpu-specific, but
in the meantime (and for easier backporting), this patch settles on
`4` as a middle-ground.

Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
stores were done using:
https://github.com/goldsteinn/memcpy-nt-benchmarks

Spreadsheet results (also available as a PDF in the GitHub repository):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
Reviewed-by: DJ Delorie <dj@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2023-06-12 11:33:39 -05:00
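
To make the change concrete with assumed example numbers: for a part with a
32 MiB shared L3 and 16 cores per socket, the old formula gives roughly
3/4 * 32 MiB / 16 = 1.5 MiB, while the new one gives 32 MiB / 4 = 8 MiB, so
many more mid-sized copies stay on the faster REP MOVSB/cacheable path.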
Florian Weimer
7d42120928 pthreads: Use _exit to terminate the tst-stdio1 test
Previously, the exit function was used, but this causes the test to
block (until the timeout) once exit is changed to lock stdio streams
during flush.
2023-06-06 11:39:06 +02:00
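
A minimal sketch of the distinction the fix relies on; the helper name is
illustrative, not the test's actual code.

  #include <unistd.h>

  static void
  terminate_now (void)
  {
    /* exit (0) would run atexit handlers and flush stdio, which can
       block if another thread holds a stream lock; _exit (0) terminates
       the process immediately without touching the streams.  */
    _exit (0);
  }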
Adhemerval Zanella
d4963a844d linux: Fail as unsupported if personality call is filtered
Container management default seccomp filter [1] only accepts
personality(2) with PER_LINUX, (0x0), UNAME26 (0x20000),
PER_LINUX32 (0x8), UNAME26 | PER_LINUX32, and 0xffffffff (to query
current personality)

Although the documentation only states it is blocked to prevent
'enabling BSD emulation' (PER_BSD, not implemented by Linux), checking
the repository log shows the real reason is to block the ASLR disable
flag (ADDR_NO_RANDOMIZE) and other poorly supported emulations.

So handle EPERM and fail as UNSUPPORTED when we cannot really check
for BZ#19408.

Checked on aarch64-linux-gnu.

[1] https://github.com/moby/moby/blob/master/profiles/seccomp/default.json

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2023-06-05 12:51:48 -03:00
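
A hedged sketch of the UNSUPPORTED handling, assuming the test-support
FAIL_UNSUPPORTED macro from <support/check.h>; the helper name and message
are illustrative, not the actual test code.

  #include <errno.h>
  #include <support/check.h>    /* glibc test support.  */
  #include <sys/personality.h>

  static void
  try_persona (unsigned long persona)
  {
    int r = personality (persona);
    if (r == -1 && errno == EPERM)
      /* Blocked by a seccomp filter (e.g. the container default), so
         the test cannot exercise BZ#19408.  */
      FAIL_UNSUPPORTED ("personality (0x%lx) filtered by seccomp", persona);
  }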
Joseph Myers
be9b883ddd Remove MAP_VARIABLE from hppa bits/mman.h
As suggested in
<https://sourceware.org/pipermail/libc-alpha/2023-February/145890.html>,
remove the MAP_VARIABLE define from the hppa bits/mman.h, for
consistency with Linux 6.2, which removed the define there.

Tested with build-many-glibcs.py for hppa-linux-gnu.
2023-06-05 14:35:25 +00:00
Sergey Bugaev
67f704ab69 hurd: Fix x86_64 sigreturn restoring bogus reply_port
Since the area of the user's stack we use for the registers dump (and
otherwise as __sigreturn2's stack) can and does overlap the sigcontext,
we have to be very careful about the order of loads and stores that we
do. In particular we have to load sc_reply_port before we start
clobbering the sigcontext.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
2023-06-04 19:05:51 +02:00
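
The real fix is in the x86_64 assembly of the Hurd sigreturn path; as a
hedged C sketch of the load-before-clobber pattern it describes, using a
made-up structure and helper name:

  #include <string.h>

  struct fake_sigcontext { unsigned long sc_reply_port; unsigned long gprs[16]; };

  /* Illustrative only: SCRATCH may overlap *CTX, so anything still
     needed from the sigcontext must be loaded before the overlapping
     stores begin.  */
  static unsigned long
  save_reply_port_then_reuse (struct fake_sigcontext *ctx,
                              void *scratch, size_t len)
  {
    unsigned long reply_port = ctx->sc_reply_port;  /* load first */
    memset (scratch, 0, len);                       /* may clobber *CTX */
    return reply_port;
  }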
Paul Pluzhnikov
2cbeda847b Fix a few more typos I missed in previous round -- BZ 25337 2023-06-02 23:46:32 +00:00
Alejandro Colomar
5013f6fc6c Use __nonnull for the epoll_wait(2) family of syscalls
Signed-off-by: Alejandro Colomar <alx@kernel.org>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2023-06-01 14:50:42 -03:00
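
What the annotation amounts to, in a hedged sketch using the __nonnull
macro from <sys/cdefs.h>; my_epoll_wait is an illustrative name, not the
exact glibc header text.

  #include <sys/cdefs.h>
  #include <sys/epoll.h>

  /* Argument 2 (the events array) must not be NULL, so the compiler
     can warn when a literal NULL is passed.  */
  extern int my_epoll_wait (int __epfd, struct epoll_event *__events,
                            int __maxevents, int __timeout)
       __nonnull ((2));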
Alejandro Colomar
cc5372806a Fix invalid use of NULL in epoll_pwait2(2) test
epoll_pwait2(2)'s second argument should be nonnull.  We're going to add
__nonnull to the prototype, so let's fix the test accordingly.  We can
use a dummy variable to avoid passing NULL.

Reported-by: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
Signed-off-by: Alejandro Colomar <alx@kernel.org>
2023-06-01 14:50:35 -03:00
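
A sketch of the dummy-variable approach, assuming the documented
epoll_pwait2 signature; the helper name and argument values are
illustrative, not the test's actual code.

  #include <sys/epoll.h>
  #include <time.h>

  static int
  probe_epoll_pwait2 (int epfd)
  {
    /* Pass a dummy event array instead of NULL, which the new
       __nonnull annotation would flag.  */
    struct epoll_event ev;
    struct timespec timeout = { 0, 0 };
    return epoll_pwait2 (epfd, &ev, 1, &timeout, NULL);
  }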
Joe Simmons-Talbott
884012db20 getipv4sourcefilter: Get rid of alloca
Use a scratch_buffer rather than alloca to avoid potential stack
overflows.
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2023-06-01 14:47:12 -03:00
Joe Simmons-Talbott
d1eaab5a79 getsourcefilter: Get rid of alloca.
Use a scratch_buffer rather than alloca to avoid potential stack
overflows.
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2023-06-01 14:46:09 -03:00
Frédéric Bérat
29e25f6f13 tests: fix warn unused results
With fortification enabled, a few function calls' returned results need
to be checked, as they get the __wur macro applied.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-06-01 13:01:32 -04:00
Frédéric Bérat
026a84a54d tests: replace write by xwrite
Using write without checking the result leads to warn-unused-result
warnings when __wur is enabled.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-06-01 12:40:05 -04:00
H.J. Lu
a8c8889978 x86-64: Use YMM registers in memcmpeq-evex.S
Since the assembly source file with the -evex suffix should use YMM
registers, not ZMM registers, include x86-evex256-vecs.h by default to
use YMM registers in memcmpeq-evex.S.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
2023-06-01 09:21:14 -07:00
Adhemerval Zanella
5f828ff824 io: Fix F_GETLK, F_SETLK, and F_SETLKW for powerpc64
Unlike other 64-bit architectures, powerpc64 defines the
LFS POSIX lock constants with values similar to the 32-bit ABI, which
are meant to be used with the fcntl64 syscall.  Since the powerpc64
kABI does not have fcntl64, the constants are adjusted with the
FCNTL_ADJUST_CMD macro.

Commit 4d0fe291ae changed the logic so that the generic LFS constants
are equal to the default values, which is now wrong for powerpc64.

Fix the values by explicitly defining the previous glibc constants
(powerpc64 does not need to use the 32-bit kABI values, but doing so
simplifies FCNTL_ADJUST_CMD, which should be kept for compatibility).

Checked on powerpc64-linux-gnu and powerpc-linux-gnu.
2023-05-31 15:31:02 -03:00
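
A rough sketch of the kind of adjustment FCNTL_ADJUST_CMD performs,
assuming F_GETLK64..F_SETLKW64 are contiguous as they are on powerpc; the
helper name is illustrative and this is not the actual glibc macro.

  #define _LARGEFILE64_SOURCE
  #include <fcntl.h>

  /* Illustrative only: map the LFS lock commands (the 32-bit-style
     values powerpc64 keeps) back to the plain fcntl commands before
     issuing the syscall.  */
  static int
  adjust_lock_cmd (int cmd)
  {
    if (cmd >= F_GETLK64 && cmd <= F_SETLKW64)
      cmd -= F_GETLK64 - F_GETLK;
    return cmd;
  }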
Paul Pluzhnikov
65cc53fe7c Fix misspellings in sysdeps/ -- BZ 25337 2023-05-30 23:02:29 +00:00
Adhemerval Zanella
4d0fe291ae io: Fix record locking constants on 32 bit arch with 64 bit default time_t (BZ#30477)
For architectures with default 64-bit time_t support, the kernel does
not provide separate LFS and non-LFS values for F_GETLK, F_SETLK, and
F_SETLKW (the default values used for 64-bit architectures are used).

This might be considered an ABI break, but the currently exported
values are bogus anyway.

The POSIX lockf is not affected since it is aliased to lockf64,
which already uses the LFS values.

Checked on i686-linux-gnu; the new tests were also run on riscv32.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2023-05-30 08:53:07 -03:00