Commit Graph

16416 Commits

Author SHA1 Message Date
Adhemerval Zanella
12b8dd7718 math: Fix log10f on some ABIs
The commit 9247f53219 triggered some regressions on loongarch and
riscv:

math/test-float-log10
math/test-float32-log10

This is due to an incorrect sync with CORE-MATH for the special 0.0/-0.0
inputs.

Checked on aarch64-linux-gnu and loongarch64-linux-gnu-lp64d.
2024-11-07 07:59:43 -03:00
caiyinyu
1b70a0a024 nptl: fix __builtin_thread_pointer detection on LoongArch
Signed-off-by: caiyinyu <caiyinyu@loongson.cn>
2024-11-07 14:08:30 +08:00
Florian Weimer
ba60be8735 math: Fix incorrect results of exp10m1f with some GCC versions
On GCC 11 (x86-64), the previous code produced test failures like
this one:

Failure: Test: exp10m1_towardzero (-0x1.1p+4)
Result:
 is:         -1.00000000e+00  -0x1.000000p+0
 should be:  -9.99999940e-01  -0x1.fffffep-1
 difference:  5.96046447e-08   0x1.000000p-24
 ulp       :  1.0000
 max.ulp   :  0.0000

Apply a similar fix to exp2m1f.

Co-authored-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2024-11-06 16:09:05 +01:00
Yury Khrustalev
ff254cabd6 misc: Align argument name for pkey_*() functions with the manual
Rename the access_rights argument to access_restrictions for the
following functions:

 - pkey_alloc()
 - pkey_set()

as this argument refers to access restrictions rather than access
rights, and the previous name might have been misleading.
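
A minimal usage sketch (not part of this patch; pkey_mprotect and the
PKEY_DISABLE_* constants are the existing glibc API), showing the
access-restrictions argument of both functions:

  #define _GNU_SOURCE
  #include <sys/mman.h>
  #include <stddef.h>

  int
  protect_page (void *page, size_t len)
  {
    /* Allocate a key with writes restricted from the start.  */
    int key = pkey_alloc (0, PKEY_DISABLE_WRITE);
    if (key < 0)
      return -1;                      /* No protection-key support.  */
    /* Tag the mapping with the key.  */
    if (pkey_mprotect (page, len, PROT_READ | PROT_WRITE, key) < 0)
      return -1;
    /* Later: lift all access restrictions for this thread.  */
    return pkey_set (key, 0);
  }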

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2024-11-06 13:11:33 +00:00
Florian Weimer
f2326c2ec0 elf: Introduce _dl_relocate_object_no_relro
And make _dl_protect_relro apply RELRO conditionally.

Reviewed-by: DJ Delorie <dj@redhat.com>
2024-11-06 10:33:44 +01:00
Aurelien Jarno
273694cd78 Add Arm HWCAP2_* constants from Linux 3.15 and 6.2 to <bits/hwcap.h>
Linux 3.15 and 6.2 added HWCAP2_* values for Arm. These bits have
already been added to dl-procinfo.{c,h} in commits 9aea0cb842 and
8ebe9c0b38. Also add them to <bits/hwcap.h> so that they can be used
in user code, for example to check bits in the value returned by
getauxval(AT_HWCAP2).
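
For instance (illustration only, Arm-specific; HWCAP2_CRC32 is one of the
Linux 3.15 bits):

  #include <sys/auxv.h>   /* Pulls in <bits/hwcap.h>.  */
  #include <stdio.h>

  int
  main (void)
  {
    unsigned long int hwcap2 = getauxval (AT_HWCAP2);
    if (hwcap2 & HWCAP2_CRC32)
      puts ("CRC32 instructions available");
    return 0;
  }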

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Yury Khrustalev <yury.khrustalev@arm.com>
2024-11-05 21:03:37 +01:00
Joe Ramsay
2d82d781a5 AArch64: Remove SVE erf and erfc tables
By using a combination of mask-and-add instead of the shift-based
index calculation, the routines can share the same table as other
variants with no performance degradation.

The tables change name because of other changes in downstream AOR.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2024-11-01 16:10:41 +00:00
Adhemerval Zanella
6d477b8de8 x86_64: Add exp2m1f with FMA
The CORE-MATH exp2m1f implementation showed slightly worse latency
when using the x86_64 baseline ABI.  This patch adds an ifunc variant
with similar performance for x86_64-v3.

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-11-01 11:27:40 -03:00
Adhemerval Zanella
c28f8d7f19 x86_64: Add exp10m1f with FMA
The CORE-MATH exp10m1f implementation showed slightly worse latency
when using the x86_64 baseline ABI.  This patch adds an ifunc variant
with similar performance for x86_64-v3.

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-11-01 11:27:40 -03:00
Adhemerval Zanella
f338c7c5f5 math: Use log10p1f from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows slightly better performance than the generic log10p1f.

The code was adapted to glibc style and to use the definition of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x64_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (M1,
gcc 13.2.1), and powerpc (POWER10, gcc 13.2.1):

Latency                      master        patched   improvement
x86_64                      68.5251        32.2627        52.92%
x86_64v2                    68.8912        32.7887        52.41%
x86_64v3                    59.3427        27.0521        54.41%
i686                        162.026        103.383        36.19%
aarch64                     26.8513        14.5695        45.74%
power10                     12.7426         8.4929        33.35%
powerpc                     16.6768        9.29135        44.29%

reciprocal-throughput        master        patched   improvement
x86_64                      26.0969        12.4023        52.48%
x86_64v2                    25.0045        11.0748        55.71%
x86_64v3                    20.5610        10.2995        49.91%
i686                        89.8842        78.5211        12.64%
aarch64                     17.1200         9.4832        44.61%
power10                      6.7814         6.4258         5.24%
powerpc                      15.769         7.6825        51.28%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-11-01 11:27:40 -03:00
Adhemerval Zanella
8ae9e51376 math: Use log1pf from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows slightly better performance than the generic log1pf.

The code was adapted to glibc style and to use the definition of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x64_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (M1,
gcc 13.2.1), and powerpc (POWER10, gcc 13.2.1):

Latency                      master        patched   improvement
x86_64                      71.8142        38.9668        45.74%
x86_64v2                    71.9094        39.1321        45.58%
x86_64v3                    60.1000        32.4016        46.09%
i686                        147.105        104.258        29.13%
aarch64                     26.4439        14.0050        47.04%
power10                     19.4874         9.4146        51.69%
powerpc                     17.6145        8.00736        54.54%

reciprocal-throughput        master        patched   improvement
x86_64                      19.7604        12.7254        35.60%
x86_64v2                    19.0039        11.9455        37.14%
x86_64v3                    16.8559        11.9317        29.21%
i686                        82.3426        73.9718        10.17%
aarch64                     14.4665         7.9614        44.97%
power10                     11.9974         8.4117        29.89%
powerpc                     7.15222         6.0914        14.83%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-11-01 11:27:39 -03:00
Adhemerval Zanella
c369580814 math: Use log2p1f from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance compared to the generic log2p1f.

The code was adapted to glibc style and to use the definition of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x64_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency                      master        patched   improvement
x86_64                      70.1462        47.0090        32.98%
x86_64v2                    70.2513        47.6160        32.22%
x86_64v3                    60.4840        39.9443        33.96%
i686                        164.068        122.909        25.09%
aarch64                     25.9169        16.9207        34.71%
power10                     18.1261        9.8592         45.61%
powerpc                     17.2683        9.38665        45.64%

reciprocal-throughput        master        patched   improvement
x86_64                      26.2240        16.4082        37.43%
x86_64v2                    25.0911        15.7480        37.24%
x86_64v3                    20.9371        11.7264        43.99%
i686                        90.4209        95.3073        -5.40%
aarch64                     16.8537        8.9561         46.86%
power10                     12.9401        6.5555         49.34%
powerpc                     9.01763        7.54745        16.30%

The performance decrease for i686 is mostly due to the use of the x87
FPU; when building with '-msse2 -mfpmath=sse':

                             master        patched   improvement
latency                     164.068        102.982        37.23%
reciprocal-throughput       89.1968        82.5117         7.49%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-11-01 11:27:39 -03:00
Adhemerval Zanella
9247f53219 math: Use log10f from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance compared to the generic log10f.

The code was adapted to glibc style and to use the definition of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x64_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency                      master        patched   improvement
x86_64                      49.9017        33.5143        32.84%
x86_64v2                    50.4878        33.5623        33.52%
x86_64v3                    50.0991        27.6078        44.89%
i686                        140.874        106.086        24.69%
aarch64                     19.2846        11.3573        41.11%
power10                     14.0994        7.7739        44.86%
powerpc                     14.2898        7.92497        44.54%

reciprocal-throughput        master        patched   improvement
x86_64                      17.8336        12.9074        27.62%
x86_64v2                    16.4418        11.3220        31.14%
x86_64v3                    15.6002        10.5158        32.59%
i686                        66.0678        80.2287        -21.43%
aarch64                      9.4906        6.8393        27.94%
power10                      7.5255        5.5084        26.80%
powerpc                      9.5204        6.98055        26.68%

The performance decrease for i686 is mostly due to the use of the x87
FPU; when building with '-msse2 -mfpmath=sse':

                             master        patched   improvement
latency                     140.874        77.1137        45.26%
reciprocal-throughput        64.481        56.4397        12.47%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-11-01 11:27:39 -03:00
Adhemerval Zanella
bbd578b38d math: Use expm1f from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance compared to the generic expm1f.

The code was adapted to glibc style and to use the definition of
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x64_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency                      master        patched   improvement
x86_64                      96.7402        36.4026        62.37%
x86_64v2                    97.5391        33.4625        65.69%
x86_64v3                    82.1778        30.8668        62.44%
i686                         120.58        94.8302        21.35%
aarch64                     32.3558        12.8881        60.17%
power10                     23.5087        9.8574         58.07%
powerpc                     23.4776        9.06325        61.40%

reciprocal-throughput        master        patched   improvement
x86_64                      27.8224        15.9255        42.76%
x86_64v2                    27.8364        9.6438         65.36%
x86_64v3                    20.3227        9.6146         52.69%
i686                        63.5629        59.4718         6.44%
aarch64                     17.4838        7.1082         59.34%
power10                     12.4644        8.7829         29.54%
powerpc                     14.2152        5.94765        58.16%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-11-01 11:27:35 -03:00
Adhemerval Zanella
5c22fd25c1 math: Use exp2m1f from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance compared to the generic exp2m1f.

The code was adapted to glibc style and to use the definition of
math_config.h (to handle errno, overflow, and underflow).  The
only change is to handle FLT_MAX_EXP for FE_DOWNWARD or FE_TOWARDZERO.

The benchmark inputs are based on exp2f ones.

Benchtest on x64_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency                      master        patched   improvement
x86_64                      40.6042        48.7104       -19.96%
x86_64v2                    40.7506        35.9032        11.90%
x86_64v3                    35.2301        31.7956        9.75%
i686                        102.094        94.6657        7.28%
aarch64                     18.2704        15.1387        17.14%
power10                     11.9444         8.2402        31.01%

reciprocal-throughput        master        patched   improvement
x86_64                      20.8683        16.1428        22.64%
x86_64v2                    19.5076        10.4474        46.44%
x86_64v3                    19.2106        10.4014        45.86%
i686                        56.4054        59.3004        -5.13%
aarch64                     12.0781         7.3953        38.77%
power10                      6.5306         5.9388         9.06%

The generic implementation calls __ieee754_exp2f and x86_64 provides
an optimized ifunc version (built with -mfma -mavx2, not correctly
rounded).  This explains the performance difference for x86_64.

Same for i686, where the ABI provides an optimized __ieee754_exp2f
version built with '-msse2 -mfpmath=sse'.  When built with the same
flags, the new algorithm shows better performance:

                            master        patched    improvement
latency                    102.094        91.2823         10.59%
reciprocal-throughput      56.4054        52.7984          6.39%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-11-01 11:27:35 -03:00
Adhemerval Zanella
5fa89852fa math: Use exp10m1f from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance compared to the generic exp10m1f.

The code was adapted to glibc style and to use the definition of
math_config.h (to handle errno, overflow, and underflow).  I mostly
fixed some small issues in corner cases (sNaN handling, -INFINITY,
a specific overflow check).

Benchtest on x64_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency                      master        patched   improvement
x86_64                      45.4690        49.5845        -9.05%
x86_64v2                    46.1604        36.2665        21.43%
x86_64v3                    37.8442        31.0359        17.99%
i686                        121.367        93.0079        23.37%
aarch64                     21.1126        15.0165        28.87%
power10                     12.7426        8.4929         33.35%

reciprocal-throughput        master        patched   improvement
x86_64                      19.6005        17.4005        11.22%
x86_64v2                    19.6008        11.1977        42.87%
x86_64v3                    17.5427        10.2898        41.34%
i686                        59.4215        60.9675        -2.60%
aarch64                     13.9814        7.9173         43.37%
power10                      6.7814        6.4258          5.24%

The generic implementation calls __ieee754_exp10f, which has an
optimized version that is not correctly rounded; this is the main
culprit of the latency difference for x86_64 and of the throughput
difference for i686.

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-11-01 11:27:26 -03:00
Adhemerval Zanella
345e9c7d0b math: Add e_gammaf_r to glibc code and style
Also remove the use of builtins in favor of standard names; the compiler
already inlines them (if supported) with the current compiler options.
This also fixes an issue where __builtin_roundeven is not supported on
GCC older than version 10.
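
As an illustration (not the patch itself), using the standard name looks
like this; with a new enough GCC and suitable options the call is
inlined, and otherwise it becomes a libm call rather than a build failure:

  #define _GNU_SOURCE
  #include <math.h>

  static float
  round_to_even (float x)
  {
    /* Instead of __builtin_roundevenf (x).  */
    return roundevenf (x);
  }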

Checked on x86_64-linux-gnu and i686-linux-gnu.

Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-11-01 11:17:04 -03:00
caiyinyu
93ced0e1b8 LoongArch: Add RSEQ_SIG in rseq.h.
Signed-off-by: caiyinyu <caiyinyu@loongson.cn>
2024-11-01 10:41:20 +08:00
Michael Jeanson
3d24fb25ef nptl: Add <thread_pointer.h> for LoongArch
This will be required by the rseq extensible ABI implementation on all
Linux architectures exposing the '__rseq_size' and '__rseq_offset'
symbols to set the initial value of the 'cpu_id' field which can be used
by applications to test if rseq is available and registered. As long as
the symbols are exposed it is valid for an application to perform this
test even if rseq is not yet implemented in libc for this architecture.

Both code paths are compile tested with build-many-glibcs.py but I don't
have access to any hardware to run the tests.

Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Reviewed-by: Arjun Shankar <arjun@redhat.com>
2024-11-01 10:41:20 +08:00
Sachin Monga
383e4f53cb powerpc64: Obviate the need for ROP protection in clone/clone3
Save lr in a non-volatile register before scv in clone/clone3.
For clone, the non-volatile register was unused and already
saved/restored.  Remove the dead code from clone.

Signed-off-by: Sachin Monga <smonga@linux.ibm.com>
Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
2024-10-30 16:50:04 -04:00
Sachin Monga
f144dae4a1 powerpc64le: Adhere to ABI stack alignment requirement
The ABI requires all stack frames be 16-byte aligned.

Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
2024-10-28 16:12:34 -05:00
Joe Ramsay
1cf29fbc5b AArch64: Small optimisation in AdvSIMD erf and erfc
In both routines, reduce register pressure such that GCC 14 emits no
spills for erf and fewer spills for erfc.  Also use a more efficient
comparison for the special case in erf.

Benchtests show erf improves by 6.4%, erfc by 1.0%.
2024-10-28 15:01:37 +00:00
Florian Weimer
4f5f8343c3 Linux: Match kernel text for SCHED_ macros
This avoids -Werror build issues in strace, which bundles UAPI
headers, but does not include them as system headers.

Fixes commit c444cc1d83
("Linux: Add missing scheduler constants to <sched.h>").

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2024-10-25 16:46:30 +02:00
Joseph Myers
c5dd659f22 Add more tests of pthread_mutexattr_gettype and pthread_mutexattr_settype
Add basic tests of pthread_mutexattr_gettype and
pthread_mutexattr_settype with each valid mutex kind, plus test for
EINVAL with an invalid mutex kind.

Tested for x86_64.
2024-10-23 16:45:15 +00:00
DJ Delorie
81439a116c configure: default to --prefix=/usr on GNU/Linux
I'm getting tired of always typing --prefix=/usr,
so make it the default.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-10-22 18:11:49 -04:00
Joseph Myers
b371ed2726 Check time arguments to pthread_timedjoin_np and pthread_clockjoin_np
The pthread_timedjoin_np and pthread_clockjoin_np functions do not
check that a valid time has been specified.  The documentation for
these functions in the glibc manual isn't sufficiently detailed to say
if they should, but consistency with POSIX functions such as
pthread_mutex_timedlock and pthread_cond_timedwait strongly indicates
that an EINVAL error is appropriate (even if there might be some
ambiguity about exactly where such a check should go in relation to
other checks for whether the thread exists, whether it's immediately
joinable, etc.).  Copy the logic for such a check used in
pthread_rwlock_common.c.
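
The added check is roughly the following sketch (hypothetical helper
name; the actual code mirrors pthread_rwlock_common.c):

  #include <errno.h>
  #include <time.h>

  /* Reject an invalid absolute timeout with EINVAL before any waiting.  */
  static int
  check_abstime (const struct timespec *abstime)
  {
    if (abstime->tv_nsec < 0 || abstime->tv_nsec >= 1000000000)
      return EINVAL;
    return 0;
  }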

pthread_join_common had some logic calling valid_nanoseconds before
commit 9e92278ffa, "nptl: Remove
clockwait_tid"; I haven't checked exactly what cases that detected.

Tested for x86_64 and x86.
2024-10-21 20:56:48 +00:00
Adhemerval Zanella
ab564362d0 linux: Fix tst-syscall-restart.c on old gcc (BZ 32283)
This avoids a 'parameter name omitted' error.
2024-10-18 08:48:22 -03:00
Adhemerval Zanella
2c1903cbba sparc: Fix restartable syscalls (BZ 32173)
The commit 'sparc: Use Linux kABI for syscall return'
(86c5d2cf0c) did not take into account
a subtle sparc syscall kABI constraint.  For syscalls that might block
indefinitely, on an interrupt (like SIGCONT) the kernel will set the
instruction pointer to just before the syscall:

arch/sparc/kernel/signal_64.c
476 static void do_signal(struct pt_regs *regs, unsigned long orig_i0)
477 {
[...]
525                 if (restart_syscall) {
526                         switch (regs->u_regs[UREG_I0]) {
527                         case ERESTARTNOHAND:
528                         case ERESTARTSYS:
529                         case ERESTARTNOINTR:
530                                 /* replay the system call when we are done */
531                                 regs->u_regs[UREG_I0] = orig_i0;
532                                 regs->tpc -= 4;
533                                 regs->tnpc -= 4;
534                                 pt_regs_clear_syscall(regs);
535                                 fallthrough;
536                         case ERESTART_RESTARTBLOCK:
537                                 regs->u_regs[UREG_G1] = __NR_restart_syscall;
538                                 regs->tpc -= 4;
539                                 regs->tnpc -= 4;
540                                 pt_regs_clear_syscall(regs);
541                         }

However, on a SIGCONT it seems that the 'g1' register is clobbered after
the syscall returns.  Before 86c5d2cf0c, 'g1' was always set just before
the 'ta' instruction, which then reloads the syscall number and restarts
the syscall.

On master, where the 'g1' setup might be placed well before 'ta':

  $ cat test.c
  #include <unistd.h>

  int main ()
  {
    pause ();
  }
  $ gcc test.c -o test
  $ strace -f ./t
  [...]
  ppoll(NULL, 0, NULL, NULL, 0

On another terminal

  $ kill -STOP 2262828

  $ strace -f ./t
  [...]
  --- SIGSTOP {si_signo=SIGSTOP, si_code=SI_USER, si_pid=2521813, si_uid=8289} ---
  --- stopped by SIGSTOP ---

And then

  $ kill -CONT 2262828

Results in:

  --- SIGCONT {si_signo=SIGCONT, si_code=SI_USER, si_pid=2521813, si_uid=8289} ---
  restart_syscall(<... resuming interrupted ppoll ...>) = -1 EINTR (Interrupted system call)

Where the expected behaviour would be:

  $ strace -f ./t
  [...]
  ppoll(NULL, 0, NULL, NULL, 0)           = ? ERESTARTNOHAND (To be restarted if no handler)
  --- SIGSTOP {si_signo=SIGSTOP, si_code=SI_USER, si_pid=2521813, si_uid=8289} ---
  --- stopped by SIGSTOP ---
  --- SIGCONT {si_signo=SIGCONT, si_code=SI_USER, si_pid=2521813, si_uid=8289} ---
  ppoll(NULL, 0, NULL, NULL, 0

Just moving the 'g1' setting near the syscall asm does not suffice;
the compiler might optimize it away (as I saw in cancellation.c when
trying this fix).  Instead, I have changed the inline asm to put the
'g1' setup inside the asm block.  This requires changing the asm
constraint for INTERNAL_SYSCALL_NCS, since the syscall number is not
constant.

Checked on sparc64-linux-gnu.

Reported-by: René Rebe <rene@exactcode.de>
Tested-by: Sam James <sam@gentoo.org>
Reviewed-by: Sam James <sam@gentoo.org>
2024-10-16 14:54:24 -03:00
caiyinyu
2fffaffde8 LoongArch: Regenerate loongarch/arch-syscall.h by build-many-glibcs.py update-syscalls. 2024-10-12 15:50:11 +08:00
Paul Zimmermann
392b3f0971 replace tgammaf by the CORE-MATH implementation
The CORE-MATH implementation is correctly rounded (for any rounding mode).
This can be checked by exhaustive tests in a few minutes, since there are
fewer than 2^32 values to check against, for example, GNU MPFR.
This patch also adds some bench values for tgammaf.

Tested on x86_64 and x86 (cfarm26).

With the initial GNU libc code it gave on an Intel(R) Core(TM) i7-8700:

      "tgammaf": {
       "": {
        "duration": 3.50188e+09,
        "iterations": 2e+07,
        "max": 602.891,
        "min": 65.1415,
        "mean": 175.094
       }
      }

With the new code:

      "tgammaf": {
       "": {
        "duration": 3.30825e+09,
        "iterations": 5e+07,
        "max": 211.592,
        "min": 32.0325,
        "mean": 66.1649
       }
      }

With the initial GNU libc code it gave on cfarm26 (i686):

  "tgammaf": {
   "": {
    "duration": 3.70505e+09,
    "iterations": 6e+06,
    "max": 2420.23,
    "min": 243.154,
    "mean": 617.509
   }
  }

With the new code:

  "tgammaf": {
   "": {
    "duration": 3.24497e+09,
    "iterations": 1.8e+07,
    "max": 1238.15,
    "min": 101.155,
    "mean": 180.276
   }
  }

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>

Changes in v2:
    - include <math.h> (fix the linknamespace failures)
    - restored original benchtests/strcoll-inputs/filelist#en_US.UTF-8 file
    - restored original wrapper code (math/w_tgammaf_compat.c),
      except for the dealing with the sign
    - removed the tgammaf/float entries in all libm-test-ulps files
    - address other comments from Joseph Myers
      (https://sourceware.org/pipermail/libc-alpha/2024-July/158736.html)

Changes in v3:
    - pass NULL argument for signgam from w_tgammaf_compat.c
    - use of math_narrow_eval
    - added more comments

Changes in v4:
    - initialize local_signgam to 0 in math/w_tgamma_template.c
    - replace sysdeps/ieee754/dbl-64/gamma_productf.c by dummy file

Changes in v5:
    - do not mention local_signgam any more in math/w_tgammaf_compat.c
    - initialize local_signgam to 1 instead of 0 in w_tgamma_template.c
      and added comment

Changes in v6:
    - pass NULL as 2nd argument of __ieee754_gammaf_r in
      w_tgammaf_compat.c, and check for NULL in e_gammaf_r.c

Changes in v7:
    - added Signed-off-by line for Alexei Sibidanov (author of the code)

Changes in v8:
    - added Signed-off-by line for Paul Zimmermann (submitter of the patch)

Changes in v9:
    - address comments from review by Adhemerval Zanella
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2024-10-11 11:12:32 +02:00
Adhemerval Zanella
5ffc903216 misc: Add support for Linux uio.h RWF_ATOMIC flag
Linux 6.11 adds the new flag for pwritev2 (commit
c34fc6f26ab86d03a2d47446f42b6cd492dfdc56).
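
A usage sketch (not part of this patch; needs the updated headers, a
6.11+ kernel, and a file system/device with atomic-write support,
otherwise the call fails):

  #define _GNU_SOURCE
  #include <sys/uio.h>

  ssize_t
  write_atomic (int fd, void *buf, size_t len, off_t off)
  {
    struct iovec iov = { .iov_base = buf, .iov_len = len };
    /* The whole write is applied atomically (all-or-nothing).  */
    return pwritev2 (fd, &iov, 1, off, RWF_ATOMIC);
  }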

Checked on x86_64-linux-gnu on 6.11 kernel.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-10-10 10:28:01 -03:00
Adhemerval Zanella
934d0bf426 Update kernel version to 6.11 in header constant tests
This patch updates the kernel version in the tests tst-mount-consts.py
and tst-sched-consts.py to 6.11.

There are no new constants covered by these tests in 6.11.

Tested with build-many-glibcs.py.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-10-10 10:27:55 -03:00
Adhemerval Zanella
f6e849fd7c linux: Add MAP_DROPPABLE from Linux 6.11
This requests that the page never be written out to swap: it will be
zeroed under memory pressure (so the kernel can just drop the page), it
is inherited across fork, it is not counted against the @code{mlock}
budget, and if there is not enough memory to service a page fault there
is no fatal error (so no signal is sent).
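
A usage sketch (not part of this patch; needs a 6.11+ kernel and the
updated headers) for a cache-like allocation whose pages the kernel may
drop:

  #define _GNU_SOURCE
  #include <sys/mman.h>

  void *
  alloc_droppable (size_t len)
  {
    /* Pages may be silently zeroed under memory pressure, so the
       contents must be recomputable.  */
    void *p = mmap (NULL, len, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_DROPPABLE, -1, 0);
    return p == MAP_FAILED ? NULL : p;
  }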

Tested with build-many-glibcs.py.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-10-10 10:27:53 -03:00
Adhemerval Zanella
86f06282cc Update PIDFD_* constants for Linux 6.11
Linux 6.11 adds some more PIDFD_* constants for 'pidfs: allow retrieval
of namespace file descriptors'
(5b08bd408534bfb3a7cf5778da5b27d4e4fffe12).

Tested with build-many-glibcs.py.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-10-10 10:27:51 -03:00
Adhemerval Zanella
02de16df48 Update syscall lists for Linux 6.11
The syscall changes in Linux 6.11 are:

  * fstat/newfstatat for loongarch (it should be safe to add since
    255dc1e4ed that undefine them).
  * clone3 for nios2, which only adds the entry point but defines
    __ARCH_BROKEN_SYS_CLONE3 (the syscall will always return ENOSYS).
  * uretprobe for x86_64 and x32.

Update syscall-names.list and regenerate the arch-syscall.h headers
with build-many-glibcs.py update-syscalls.

Tested with build-many-glibcs.py.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-10-10 10:27:49 -03:00
Joseph Myers
0e8738a48c Fix header guard in sysdeps/mach/hurd/x86_64/vm_param.h
GCC mainline produces a -Wheader-guard error building for x86_64-gnu.
Fix what seems to be incorrect macro naming in the #ifndef
conditional.

Tested with build-many-glibcs.py for x86_64-gnu (GCC mainline).

Message-ID: <fd800046-5ecb-ebd5-4df1-29d4eb3d5433@redhat.com>
2024-10-09 19:16:53 +02:00
Adhemerval Zanella
d40ac01cbb stdlib: Make abort/_Exit AS-safe (BZ 26275)
The recursive lock used on abort does not synchronize with a new process
creation (either by fork-like interfaces or posix_spawn ones), nor is
it reinitialized after fork().

Also, the SIGABRT unblock before raise() shows another race condition,
where a fork or posix_spawn() call by another thread, just after the
recursive lock release and before the SIGABRT signal, might create
programs with a non-expected signal mask.  With the default option
(without POSIX_SPAWN_SETSIGDEF), the process can see SIG_DFL for
SIGABRT, where it should be SIG_IGN.

To make it AS-safe, raise() does not change the process signal mask,
and an AS-safe lock is used if a SIGABRT handler is installed or the
signal is blocked or ignored.  With the signal mask change removed,
there is no need for a recursive lock.  The lock is also taken on
both _Fork() and posix_spawn(), to avoid the spawned process seeing
the abort handler as SIG_DFL.

A read-write lock is used to avoid serializing _Fork and posix_spawn
execution.  Both sigaction (SIGABRT) and abort() require taking the lock
as a writer (since both change the disposition).

The fallback is also simplified: there is no need to use a loop of
ABORT_INSTRUCTION after _exit() (if the syscall does not terminate the
process, the system is broken).

The proposed fix changes how code that uses setjmp in a SIGABRT handler
behaves, since glibc's setjmp does not save the signal mask.  So usage
like the example below will now always abort.

  static volatile int chk_fail_ok;
  static jmp_buf chk_fail_buf;

  static void
  handler (int sig)
  {
    if (chk_fail_ok)
      {
        chk_fail_ok = 0;
        longjmp (chk_fail_buf, 1);
      }
    else
      _exit (127);
  }
  [...]
  signal (SIGABRT, handler);
  [....]
  chk_fail_ok = 1;
  if (! setjmp (chk_fail_buf))
    {
      // Something that can call abort, like a failed fortify function.
      chk_fail_ok = 0;
      printf ("FAIL\n");
    }

Such cases will need to use sigsetjmp instead.
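
A sketch of the sigsetjmp variant of the example above: a nonzero
savesigs argument makes sigsetjmp save the signal mask and siglongjmp
restore it, which plain setjmp/longjmp no longer do here.

  #include <setjmp.h>
  #include <unistd.h>

  static volatile int chk_fail_ok;
  static sigjmp_buf chk_fail_buf;

  static void
  handler (int sig)
  {
    if (chk_fail_ok)
      {
        chk_fail_ok = 0;
        siglongjmp (chk_fail_buf, 1);
      }
    else
      _exit (127);
  }
  /* ... and use sigsetjmp (chk_fail_buf, 1) instead of
     setjmp (chk_fail_buf) in the test body.  */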

The _dl_start_profile function calls sigaction through _profil, and to
avoid pulling abort() into the loader the call is replaced with
__libc_sigaction.

Checked on x86_64-linux-gnu and aarch64-linux-gnu.

Reviewed-by: DJ Delorie <dj@redhat.com>
2024-10-08 14:40:12 -03:00
Adhemerval Zanella
55d33108c7 linux: Use GLRO(dl_vdso_time) on time
The BZ#24967 fix (1bdda52fe9) missed the time implementation for
architectures that define USE_IFUNC_TIME.  Although this is not an
issue, since there is no pointer mangling, there is also no need to
call dl_vdso_vsym because the vDSO setup was already done by the
loader.

Checked on x86_64-linux-gnu and i686-linux-gnu.
2024-10-08 13:28:21 -03:00
Adhemerval Zanella
02b195d30f linux: Use GLRO(dl_vdso_gettimeofday) on gettimeofday
The BZ#24967 fix (1bdda52fe9) missed the gettimeofday implementation
for architectures that define USE_IFUNC_GETTIMEOFDAY.  Although this is
not an issue, since there is no pointer mangling, there is also no need
to call dl_vdso_vsym because the vDSO setup was already done by the
loader.

Checked on x86_64-linux-gnu and i686-linux-gnu.
2024-10-08 13:28:21 -03:00
Stefan Liebler
7949f552cb S390: Don't use r11 for cu-instructions as used as frame-pointer. [BZ# 32192]
Building the s390-specific iconv modules - utf16-utf32-z9.c, utf8-utf32-z9.c
and utf8-utf16-z9.c - with -fno-omit-frame-pointer leads to a build error
"error: %r11 cannot be used in 'asm' here", as r11 is needed as the frame
pointer.

The cuXY instructions need two even-odd register pairs; therefore register
pinning is used.  This patch just uses a different register pair.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2024-10-08 10:13:02 +02:00
Carlos O'Donell
cae9944a6c Fix whitespace related license issues.
Several copies of the licenses in files contained whitespace-related
problems.  Two cases are addressed here: the first is two spaces after
a period, which appears between "PURPOSE." and "See"; the other is a
space after the last forward slash in the URL.  Both issues are
corrected, and the licenses now match the official textual description
of the license (and the other license in the sources).

Since these whitespace changes do not alter the paragraph structure of
the license, nor create new sentences, they do not change the license.
2024-10-07 18:08:16 -04:00
Bruno Haible
e67f8e6dbd hurd: Add missing va_end call in fcntl implementation. [BZ #32234]
* sysdeps/mach/hurd/fcntl.c (__libc_fcntl): Add va_end call in two code paths.
2024-10-03 20:18:29 +02:00
Andreas Schwab
a36814e145 riscv: align .preinit_array (bug 32228)
The section contains an array of pointers, so it should be aligned to
pointer size.
2024-10-02 13:04:30 +02:00
Adhemerval Zanella
5e8cfc5d62 linux: sparc: Fix clone for LEON/sparcv8 (BZ 31394)
The sparc clone mitigation (faeaa3bc9f) added the use of
flushw, which is not supported by LEON/sparcv8.  As discussed on
libc-alpha, 'ta 3' is a working alternative [1].

[1] https://sourceware.org/pipermail/libc-alpha/2024-August/158905.html

Checked with a build for sparcv8-linux-gnu targeting LEON.

Acked-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
2024-10-01 10:37:21 -03:00
Adhemerval Zanella
49c3682ce1 linux: sparc: Fix syscall_cancel for LEON
LEON2/LEON3 are both sparcv8, which supports neither branch hints
(bne,pn) nor the return instruction.

Checked with a build for sparcv8-linux-gnu targeting LEON.  I also
checked some cancellation tests with qemu-system (targeting LEON3).

Acked-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
2024-10-01 10:37:21 -03:00
Wilco Dijkstra
44fa9c1080 math: Improve layout of expf data
GCC aligns global data to 16 bytes if its size is >= 16 bytes.  This patch
changes the exp2f_data struct slightly so that the fields are better aligned.
As a result on targets that support them, load-pair instructions accessing
poly_scaled and invln2_scaled are now 16-byte aligned.
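
A toy illustration (not the actual exp2f_data layout): once the global
itself is 16-byte aligned, ordering the fields so that the ones fetched
together start at a multiple of 16 keeps those load-pair accesses
aligned.

  #include <assert.h>
  #include <stddef.h>

  struct toy_data
  {
    double shift;           /* offset 0 */
    double other;           /* offset 8 */
    double invln2_scaled;   /* offset 16: start of an aligned pair */
    double poly_scaled[1];  /* offset 24 */
  };
  static_assert (offsetof (struct toy_data, invln2_scaled) % 16 == 0,
                 "the pair starts on a 16-byte boundary");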

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2024-10-01 13:39:26 +01:00
Noah Goldstein
483443d321 x86/string: Fixup alignment of main loop in str{n}cmp-evex [BZ #32212]
The loop should be aligned to 32 bytes so that it can ideally run out
of the DSB.  This is particularly important on Skylake-Server, where
deficiencies in its DSB implementation make it prone to not being
able to run loops out of the DSB.

For example running strcmp-evex on 200Mb string:

32-byte aligned loop:
    - 43,399,578,766      idq.dsb_uops
not 32-byte aligned loop:
    - 6,060,139,704       idq.dsb_uops

This results in a 25% performance degradation for the non-aligned
version.

The fix is to just ensure the code layout is such that the loop is
aligned (which was previously the case but was accidentally dropped
in 84e7c46df).

NB: The fix was actually 64-byte alignment. This is because 64-byte
alignment generally produces more stable performance than 32-byte
aligned code (cache line crosses can affect perf), so if we are going
past 16-byte alignment, we might as well go to 64.  64-byte alignment
also matches most other functions we over-align, so it creates a
common point of optimization.

Times are reported as the ratio of Time_With_Patch /
Time_Without_Patch.  Lower is better.

The value reported is the geometric mean of the ratio across
all tests in bench-strcmp and bench-strncmp.

Note this patch is only attempting to improve the Skylake-Server
strcmp for long strings. The rest of the numbers are only to test for
regressions.

Tigerlake Results Strings <= 512:
    strcmp : 1.026
    strncmp: 0.949

Tigerlake Results Strings > 512:
    strcmp : 0.994
    strncmp: 0.998

Skylake-Server Results Strings <= 512:
    strcmp : 0.945
    strncmp: 0.943

Skylake-Server Results Strings > 512:
    strcmp : 0.778
    strncmp: 1.000

The 2.6% regression on TGL-strcmp is due to slowdowns caused by
changes in alignment of the code handling small sizes (mostly in the
page-cross logic).  These should be safe to ignore because 1) we
previously only 16-byte aligned the function, so this behavior is not
new and was essentially up to chance before this patch, and 2) this
type of alignment-related regression on small sizes really only comes
up in tight micro-benchmark loops and is unlikely to have any effect
on real-world performance.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-09-30 07:40:40 -07:00
Florian Weimer
b300078d97 Linux: Block signals around _Fork (bug 32215)
This hides the inconsistent TCB state (missing robust mutex list) from
signal handlers.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2024-09-28 09:44:25 +02:00
Andreas Schwab
5f62cf88c4 Fix missing randomness in __gen_tempname (bug 32214)
Make sure to update the random value even when getrandom fails.

Fixes: 686d542025 ("posix: Sync tempname with gnulib")
2024-09-26 11:45:44 +02:00
Pavel Kozlov
cc84cd389c arc: Cleanup arcbe
Remove the mention of the arcbe ABI to avoid any confusion.
The ARC big-endian ABI is no longer supported.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2024-09-25 15:54:07 +01:00