Commit Graph

16456 Commits

Adhemerval Zanella
8ae9e51376 math: Use log1pf from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows slightly better performance than the generic log1pf.

The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtests on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (M1,
gcc 13.2.1), and powerpc (POWER10, gcc 13.2.1):

Latency                      master        patched   improvement
x86_64                      71.8142        38.9668        45.74%
x86_64v2                    71.9094        39.1321        45.58%
x86_64v3                    60.1000        32.4016        46.09%
i686                        147.105        104.258        29.13%
aarch64                     26.4439        14.0050        47.04%
power10                     19.4874         9.4146        51.69%
powerpc                     17.6145        8.00736        54.54%

reciprocal-throughput        master        patched   improvement
x86_64                      19.7604        12.7254        35.60%
x86_64v2                    19.0039        11.9455        37.14%
x86_64v3                    16.8559        11.9317        29.21%
i686                        82.3426        73.9718        10.17%
aarch64                     14.4665         7.9614        44.97%
power10                     11.9974         8.4117        29.89%
powerpc                     7.15222         6.0914        14.83%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-11-01 11:27:39 -03:00
Adhemerval Zanella
c369580814 math: Use log2p1f from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance compared to the generic log2p1f.

The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtests on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency                      master        patched   improvement
x86_64                      70.1462        47.0090        32.98%
x86_64v2                    70.2513        47.6160        32.22%
x86_64v3                    60.4840        39.9443        33.96%
i686                        164.068        122.909        25.09%
aarch64                     25.9169        16.9207        34.71%
power10                     18.1261        9.8592         45.61%
powerpc                     17.2683        9.38665        45.64%

reciprocal-throughput        master        patched   improvement
x86_64                      26.2240        16.4082        37.43%
x86_64v2                    25.0911        15.7480        37.24%
x86_64v3                    20.9371        11.7264        43.99%
i686                        90.4209        95.3073        -5.40%
aarch64                     16.8537        8.9561         46.86%
power10                     12.9401        6.5555         49.34%
powerpc                     9.01763        7.54745        16.30%

The performance decrease for i686 is mostly due to the use of the x87
FPU, when building with '-msse2 -mfpmath=sse':

                             master        patched   improvement
latency                     164.068        102.982        37.23%
reciprocal-throughput       89.1968        82.5117         7.49%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-11-01 11:27:39 -03:00
Adhemerval Zanella
9247f53219 math: Use log10f from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance compared to the generic log10f.

The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtests on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency                      master        patched   improvement
x86_64                      49.9017        33.5143        32.84%
x86_64v2                    50.4878        33.5623        33.52%
x86_64v3                    50.0991        27.6078        44.89%
i686                        140.874        106.086        24.69%
aarch64                     19.2846        11.3573        41.11%
power10                     14.0994        7.7739        44.86%
powerpc                     14.2898        7.92497        44.54%

reciprocal-throughput        master        patched   improvement
x86_64                      17.8336        12.9074        27.62%
x86_64v2                    16.4418        11.3220        31.14%
x86_64v3                    15.6002        10.5158        32.59%
i686                        66.0678        80.2287        -21.43%
aarch64                      9.4906        6.8393        27.94%
power10                      7.5255        5.5084        26.80%
powerpc                      9.5204        6.98055        26.68%

The performance decrease for i686 is mostly due to the use of the x87
FPU, when building with '-msse2 -mfpmath=sse':

                             master        patched   improvement
latency                     140.874        77.1137        45.26%
reciprocal-throughput        64.481        56.4397        12.47%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-11-01 11:27:39 -03:00
Adhemerval Zanella
bbd578b38d math: Use expm1f from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance compared to the generic expm1f.

The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtests on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency                      master        patched   improvement
x86_64                      96.7402        36.4026        62.37%
x86_64v2                    97.5391        33.4625        65.69%
x86_64v3                    82.1778        30.8668        62.44%
i686                         120.58        94.8302        21.35%
aarch64                     32.3558        12.8881        60.17%
power10                     23.5087        9.8574         58.07%
powerpc                     23.4776        9.06325        61.40%

reciprocal-throughput        master        patched   improvement
x86_64                      27.8224        15.9255        42.76%
x86_64v2                    27.8364        9.6438         65.36%
x86_64v3                    20.3227        9.6146         52.69%
i686                        63.5629        59.4718         6.44%
aarch64                     17.4838        7.1082         59.34%
power10                     12.4644        8.7829         29.54%
powerpc                     14.2152        5.94765        58.16%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-11-01 11:27:35 -03:00
Adhemerval Zanella
5c22fd25c1 math: Use exp2m1f from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance compared to the generic exp2m1f.

The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).  The only
change is to handle FLT_MAX_EXP for FE_DOWNWARD or FE_TOWARDZERO.

The benchmark inputs are based on exp2f ones.

Benchtests on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency                      master        patched   improvement
x86_64                      40.6042        48.7104       -19.96%
x86_64v2                    40.7506        35.9032        11.90%
x86_64v3                    35.2301        31.7956        9.75%
i686                        102.094        94.6657        7.28%
aarch64                     18.2704        15.1387        17.14%
power10                     11.9444         8.2402        31.01%

reciprocal-throughput        master        patched   improvement
x86_64                      20.8683        16.1428        22.64%
x86_64v2                    19.5076        10.4474        46.44%
x86_64v3                    19.2106        10.4014        45.86%
i686                        56.4054        59.3004        -5.13%
aarch64                     12.0781         7.3953        38.77%
power10                      6.5306         5.9388         9.06%

The generic implementation calls __ieee754_exp2f and x86_64 provides
an optimized ifunc version (built with -mfma -mavx2, not correctly
rounded).  This explains the performance difference for x86_64.

Same for i686, where the ABI provides an optimized __ieee754_exp2f
version built with '-msse2 -mfpmath=sse'.  When built with the same
flags, the new algorithm shows better performance:

                            master        patched    improvement
latency                    102.094        91.2823         10.59%
reciprocal-throughput      56.4054        52.7984          6.39%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-11-01 11:27:35 -03:00
Adhemerval Zanella
5fa89852fa math: Use exp10m1f from CORE-MATH
The CORE-MATH implementation is correctly rounded (for any rounding mode)
and shows better performance compared to the generic exp10m1f.

The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).  I mostly
fixed some small issues in corner cases (sNaN handling, -INFINITY,
a specific overflow check).

Benchtests on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency                      master        patched   improvement
x86_64                      45.4690        49.5845        -9.05%
x86_64v2                    46.1604        36.2665        21.43%
x86_64v3                    37.8442        31.0359        17.99%
i686                        121.367        93.0079        23.37%
aarch64                     21.1126        15.0165        28.87%
power10                     12.7426        8.4929         33.35%

reciprocal-throughput        master        patched   improvement
x86_64                      19.6005        17.4005        11.22%
x86_64v2                    19.6008        11.1977        42.87%
x86_64v3                    17.5427        10.2898        41.34%
i686                        59.4215        60.9675        -2.60%
aarch64                     13.9814        7.9173         43.37%
power10                      6.7814        6.4258          5.24%

The generic implementation calls __ieee754_exp10f, which has an
optimized version (although not correctly rounded), which is the main
culprit of the latency difference for x86_64 and the throughput
difference for i686.

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-11-01 11:27:26 -03:00
Adhemerval Zanella
345e9c7d0b math: Add e_gammaf_r to glibc code and style
Also remove the use of builtins in favor of standard names; the
compiler already inlines them (if supported) with the current compiler
options.  It also fixes an issue where __builtin_roundeven is not
supported on GCC older than version 10.

Checked on x86_64-linux-gnu and i686-linux-gnu.

Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-11-01 11:17:04 -03:00
caiyinyu
93ced0e1b8 LoongArch: Add RSEQ_SIG in rseq.h.
Signed-off-by: caiyinyu <caiyinyu@loongson.cn>
2024-11-01 10:41:20 +08:00
Michael Jeanson
3d24fb25ef nptl: Add <thread_pointer.h> for LoongArch
This will be required by the rseq extensible ABI implementation on all
Linux architectures that expose the '__rseq_size' and '__rseq_offset'
symbols to set the initial value of the 'cpu_id' field, which
applications can use to test if rseq is available and registered.  As
long as the symbols are exposed, it is valid for an application to
perform this test even if rseq is not yet implemented in libc for this
architecture.
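
As an illustration (a sketch, not glibc code; it assumes GCC's
__builtin_thread_pointer and the <sys/rseq.h> symbols described above),
such a test could look like:

```
#include <stdbool.h>
#include <stdint.h>
#include <sys/rseq.h>

static inline bool
rseq_available (void)
{
  /* __rseq_size is 0 when libc did not register rseq with the kernel.  */
  if (__rseq_size == 0)
    return false;
  struct rseq *rs = (struct rseq *)
    ((char *) __builtin_thread_pointer () + __rseq_offset);
  /* cpu_id stays negative (RSEQ_CPU_ID_UNINITIALIZED or
     RSEQ_CPU_ID_REGISTRATION_FAILED) unless registration succeeded.  */
  return (int32_t) rs->cpu_id >= 0;
}
```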

Both code paths are compile tested with build-many-glibcs.py but I don't
have access to any hardware to run the tests.

Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Reviewed-by: Arjun Shankar <arjun@redhat.com>
2024-11-01 10:41:20 +08:00
Sachin Monga
383e4f53cb powerpc64: Obviate the need for ROP protection in clone/clone3
Save lr in a non-volatile register before scv in clone/clone3.
For clone, the non-volatile register was unused and already
saved/restored.  Remove the dead code from clone.

Signed-off-by: Sachin Monga <smonga@linux.ibm.com>
Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
2024-10-30 16:50:04 -04:00
Sachin Monga
f144dae4a1 powerpc64le: Adhere to ABI stack alignment requirement
The ABI requires all stack frames be 16-byte aligned.

Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
2024-10-28 16:12:34 -05:00
Joe Ramsay
1cf29fbc5b AArch64: Small optimisation in AdvSIMD erf and erfc
In both routines, reduce register pressure such that GCC 14 emits no
spills for erf and fewer spills for erfc.  Also use more efficient
comparison for the special-case in erf.

Benchtests show erf improves by 6.4%, erfc by 1.0%.
2024-10-28 15:01:37 +00:00
Florian Weimer
4f5f8343c3 Linux: Match kernel text for SCHED_ macros
This avoids -Werror build issues in strace, which bundles UAPI
headers, but does not include them as system headers.

Fixes commit c444cc1d83
("Linux: Add missing scheduler constants to <sched.h>").

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2024-10-25 16:46:30 +02:00
Joseph Myers
c5dd659f22 Add more tests of pthread_mutexattr_gettype and pthread_mutexattr_settype
Add basic tests of pthread_mutexattr_gettype and
pthread_mutexattr_settype with each valid mutex kind, plus test for
EINVAL with an invalid mutex kind.

Tested for x86_64.
2024-10-23 16:45:15 +00:00
DJ Delorie
81439a116c configure: default to --prefix=/usr on GNU/Linux
I'm getting tired of always typing --prefix=/usr, so I'm making it the
default.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-10-22 18:11:49 -04:00
Joseph Myers
b371ed2726 Check time arguments to pthread_timedjoin_np and pthread_clockjoin_np
The pthread_timedjoin_np and pthread_clockjoin_np functions do not
check that a valid time has been specified.  The documentation for
these functions in the glibc manual isn't sufficiently detailed to say
if they should, but consistency with POSIX functions such as
pthread_mutex_timedlock and pthread_cond_timedwait strongly indicates
that an EINVAL error is appropriate (even if there might be some
ambiguity about exactly where such a check should go in relation to
other checks for whether the thread exists, whether it's immediately
joinable, etc.).  Copy the logic for such a check used in
pthread_rwlock_common.c.
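
The check is essentially the following (a sketch of the pattern, not
the verbatim glibc code):

```
#include <errno.h>
#include <time.h>

/* A timeout whose tv_nsec is outside [0, 1000000000) is invalid.  */
static int
check_abstime (const struct timespec *abstime)
{
  if (abstime->tv_nsec < 0 || abstime->tv_nsec >= 1000000000)
    return EINVAL;
  return 0;
}
```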

pthread_join_common had some logic calling valid_nanoseconds before
commit 9e92278ffa, "nptl: Remove
clockwait_tid"; I haven't checked exactly what cases that detected.

Tested for x86_64 and x86.
2024-10-21 20:56:48 +00:00
Adhemerval Zanella
ab564362d0 linux: Fix tst-syscall-restart.c on old gcc (BZ 32283)
To avoid a 'parameter name omitted' error.
2024-10-18 08:48:22 -03:00
Adhemerval Zanella
2c1903cbba sparc: Fix restartable syscalls (BZ 32173)
The commit 'sparc: Use Linux kABI for syscall return'
(86c5d2cf0c) did not take into account
a subtle sparc syscall kABI constraint.  For syscalls that might block
indefinitely, on an interrupt (like SIGCONT) the kernel will set the
instruction pointer to just before the syscall:

arch/sparc/kernel/signal_64.c
476 static void do_signal(struct pt_regs *regs, unsigned long orig_i0)
477 {
[...]
525                 if (restart_syscall) {
526                         switch (regs->u_regs[UREG_I0]) {
527                         case ERESTARTNOHAND:
528                         case ERESTARTSYS:
529                         case ERESTARTNOINTR:
530                                 /* replay the system call when we are done */
531                                 regs->u_regs[UREG_I0] = orig_i0;
532                                 regs->tpc -= 4;
533                                 regs->tnpc -= 4;
534                                 pt_regs_clear_syscall(regs);
535                                 fallthrough;
536                         case ERESTART_RESTARTBLOCK:
537                                 regs->u_regs[UREG_G1] = __NR_restart_syscall;
538                                 regs->tpc -= 4;
539                                 regs->tnpc -= 4;
540                                 pt_regs_clear_syscall(regs);
541                         }

However, on a SIGCONT it seems that the 'g1' register is being clobbered
after the syscall returns.  Before 86c5d2cf0c, 'g1' was always placed
just before the 'ta' instruction, which then reloads the syscall number
and restarts the syscall.

On master, where 'g1' might be placed before 'ta':

  $ cat test.c
  #include <unistd.h>

  int main ()
  {
    pause ();
  }
  $ gcc test.c -o test
  $ strace -f ./t
  [...]
  ppoll(NULL, 0, NULL, NULL, 0

On another terminal

  $ kill -STOP 2262828

  $ strace -f ./t
  [...]
  --- SIGSTOP {si_signo=SIGSTOP, si_code=SI_USER, si_pid=2521813, si_uid=8289} ---
  --- stopped by SIGSTOP ---

And then

  $ kill -CONT 2262828

Results in:

  --- SIGCONT {si_signo=SIGCONT, si_code=SI_USER, si_pid=2521813, si_uid=8289} ---
  restart_syscall(<... resuming interrupted ppoll ...>) = -1 EINTR (Interrupted system call)

Where the expected behaviour would be:

  $ strace -f ./t
  [...]
  ppoll(NULL, 0, NULL, NULL, 0)           = ? ERESTARTNOHAND (To be restarted if no handler)
  --- SIGSTOP {si_signo=SIGSTOP, si_code=SI_USER, si_pid=2521813, si_uid=8289} ---
  --- stopped by SIGSTOP ---
  --- SIGCONT {si_signo=SIGCONT, si_code=SI_USER, si_pid=2521813, si_uid=8289} ---
  ppoll(NULL, 0, NULL, NULL, 0

Just moving the 'g1' setting near the syscall asm does not suffice;
the compiler might optimize it away (as I saw in cancellation.c when
trying this fix).  Instead, I have changed the inline asm to put the
'g1' setup inside the asm block.  This requires changing the asm
constraint for INTERNAL_SYSCALL_NCS, since the syscall number is not
constant.
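
As a sketch of the idea (a hypothetical simplification, not the actual
INTERNAL_SYSCALL_NCS macro; error reporting via the carry flag is
omitted):

```
static inline long
syscall1 (long sysno, long arg0)
{
  /* Binding the syscall number to %g1 as an asm input keeps the setup
     glued to the trap instruction, so the compiler cannot sink or
     clobber it.  */
  register long g1 __asm__ ("g1") = sysno;
  register long o0 __asm__ ("o0") = arg0;
  __asm__ __volatile__ ("ta 0x6d"
                        : "+r" (o0)
                        : "r" (g1)
                        : "memory");
  return o0;
}
```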

Checked on sparc64-linux-gnu.

Reported-by: René Rebe <rene@exactcode.de>
Tested-by: Sam James <sam@gentoo.org>
Reviewed-by: Sam James <sam@gentoo.org>
2024-10-16 14:54:24 -03:00
caiyinyu
2fffaffde8 LoongArch: Regenerate loongarch/arch-syscall.h by build-many-glibcs.py update-syscalls.
2024-10-12 15:50:11 +08:00
Paul Zimmermann
392b3f0971 replace tgammaf by the CORE-MATH implementation
The CORE-MATH implementation is correctly rounded (for any rounding mode).
This can be checked by exhaustive tests in a few minutes, since there are
fewer than 2^32 values to check against a reference such as GNU MPFR.
This patch also adds some bench values for tgammaf.
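
Such an exhaustive loop looks roughly like this (a sketch; plain double
tgamma stands in as an approximate oracle, whereas a real check
compares against a correctly-rounded reference such as MPFR):

```
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int
main (void)
{
  unsigned long mismatches = 0;
  uint32_t bits = 0;
  do
    {
      float x, got, want;
      memcpy (&x, &bits, sizeof x);   /* Iterate over every float.  */
      got = tgammaf (x);
      want = (float) tgamma ((double) x);
      if (memcmp (&got, &want, sizeof got) != 0
          && !(isnan (got) && isnan (want)))
        mismatches++;
    }
  while (++bits != 0);
  printf ("%lu mismatches\n", mismatches);
  return 0;
}
```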

Tested on x86_64 and x86 (cfarm26).

With the initial GNU libc code it gave on an Intel(R) Core(TM) i7-8700:

      "tgammaf": {
       "": {
        "duration": 3.50188e+09,
        "iterations": 2e+07,
        "max": 602.891,
        "min": 65.1415,
        "mean": 175.094
       }
      }

With the new code:

      "tgammaf": {
       "": {
        "duration": 3.30825e+09,
        "iterations": 5e+07,
        "max": 211.592,
        "min": 32.0325,
        "mean": 66.1649
       }
      }

With the initial GNU libc code it gave on cfarm26 (i686):

  "tgammaf": {
   "": {
    "duration": 3.70505e+09,
    "iterations": 6e+06,
    "max": 2420.23,
    "min": 243.154,
    "mean": 617.509
   }
  }

With the new code:

  "tgammaf": {
   "": {
    "duration": 3.24497e+09,
    "iterations": 1.8e+07,
    "max": 1238.15,
    "min": 101.155,
    "mean": 180.276
   }
  }

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>

Changes in v2:
    - include <math.h> (fix the linknamespace failures)
    - restored original benchtests/strcoll-inputs/filelist#en_US.UTF-8 file
    - restored original wrapper code (math/w_tgammaf_compat.c),
      except for the handling of the sign
    - removed the tgammaf/float entries in all libm-test-ulps files
    - address other comments from Joseph Myers
      (https://sourceware.org/pipermail/libc-alpha/2024-July/158736.html)

Changes in v3:
    - pass NULL argument for signgam from w_tgammaf_compat.c
    - use of math_narrow_eval
    - added more comments

Changes in v4:
    - initialize local_signgam to 0 in math/w_tgamma_template.c
    - replace sysdeps/ieee754/dbl-64/gamma_productf.c by dummy file

Changes in v5:
    - do not mention local_signgam any more in math/w_tgammaf_compat.c
    - initialize local_signgam to 1 instead of 0 in w_tgamma_template.c
      and added comment

Changes in v6:
    - pass NULL as 2nd argument of __ieee754_gammaf_r in
      w_tgammaf_compat.c, and check for NULL in e_gammaf_r.c

Changes in v7:
    - added Signed-off-by line for Alexei Sibidanov (author of the code)

Changes in v8:
    - added Signed-off-by line for Paul Zimmermann (submitter of the patch)

Changes in v9:
    - address comments from review by Adhemerval Zanella
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2024-10-11 11:12:32 +02:00
Adhemerval Zanella
5ffc903216 misc: Add support for Linux uio.h RWF_ATOMIC flag
Linux 6.11 adds the new flag for pwritev2 (commit
c34fc6f26ab86d03a2d47446f42b6cd492dfdc56).

Checked on x86_64-linux-gnu on 6.11 kernel.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-10-10 10:28:01 -03:00
Adhemerval Zanella
934d0bf426 Update kernel version to 6.11 in header constant tests
This patch updates the kernel version in the tests tst-mount-consts.py,
and tst-sched-consts.py to 6.11.

There are no new constants covered by these tests in 6.11.

Tested with build-many-glibcs.py.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-10-10 10:27:55 -03:00
Adhemerval Zanella
f6e849fd7c linux: Add MAP_DROPPABLE from Linux 6.11
This requests that the page never be written out to swap: it will be
zeroed under memory pressure (so the kernel can just drop the page), it
is inherited by fork, it is not counted against the @code{mlock}
budget, and if there is not enough memory to service a page fault there
is no fatal error (so no signal is sent).

Tested with build-many-glibcs.py.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-10-10 10:27:53 -03:00
Adhemerval Zanella
86f06282cc Update PIDFD_* constants for Linux 6.11
Linux 6.11 adds some more PIDFD_* constants for 'pidfs: allow retrieval
of namespace file descriptors'
(5b08bd408534bfb3a7cf5778da5b27d4e4fffe12).

Tested with build-many-glibcs.py.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-10-10 10:27:51 -03:00
Adhemerval Zanella
02de16df48 Update syscall lists for Linux 6.11
Linux 6.11 changes for syscall are:

  * fstat/newfstatat for loongarch (it should be safe to add them since
    255dc1e4ed undefines them).
  * clone3 for nios2, which only adds the entry point but defined
    __ARCH_BROKEN_SYS_CLONE3 (the syscall will always return ENOSYS).
  * uretprobe for x86_64 and x32.

Update syscall-names.list and regenerate the arch-syscall.h headers
with build-many-glibcs.py update-syscalls.

Tested with build-many-glibcs.py.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-10-10 10:27:49 -03:00
Joseph Myers
0e8738a48c Fix header guard in sysdeps/mach/hurd/x86_64/vm_param.h
GCC mainline produces a -Wheader-guard error building for x86_64-gnu.
Fix what seems to be incorrect macro naming in the #ifndef
conditional.

Tested with build-many-glibcs.py for x86_64-gnu (GCC mainline).

Message-ID: <fd800046-5ecb-ebd5-4df1-29d4eb3d5433@redhat.com>
2024-10-09 19:16:53 +02:00
Adhemerval Zanella
d40ac01cbb stdlib: Make abort/_Exit AS-safe (BZ 26275)
The recursive lock used on abort does not synchronize with new process
creation (either by fork-like interfaces or posix_spawn ones), nor is
it reinitialized after fork().

Also, the SIGABRT unblock before raise() shows another race condition,
where a fork or posix_spawn() call by another thread, just after the
recursive lock release and before the SIGABRT signal, might create
programs with a non-expected signal mask.  With the default option
(without POSIX_SPAWN_SETSIGDEF), the process can see SIG_DFL for
SIGABRT, where it should be SIG_IGN.

To fix the AS-safety, raise() does not change the process signal mask,
and an AS-safe lock is used if a SIGABRT handler is installed or the
signal is blocked or ignored.  With the signal mask change removed,
there is no need to use a recursive lock.  The lock is also taken on
both _Fork() and posix_spawn(), to prevent the spawned process from
seeing the abort handler as SIG_DFL.

A read-write lock is used to avoid serializing _Fork and posix_spawn
execution.  Both sigaction (SIGABRT) and abort() require taking the
lock as writer (since both change the disposition).

The fallback is also simplified: there is no need to use a loop of
ABORT_INSTRUCTION after _exit() (if the syscall does not terminate the
process, the system is broken).

The proposed fix changes how setjmp works in a SIGABRT handler, where
glibc does not save the signal mask.  So usage like the below will now
always abort.

  static volatile int chk_fail_ok;
  static jmp_buf chk_fail_buf;

  static void
  handler (int sig)
  {
    if (chk_fail_ok)
      {
        chk_fail_ok = 0;
        longjmp (chk_fail_buf, 1);
      }
    else
      _exit (127);
  }
  [...]
  signal (SIGABRT, handler);
  [....]
  chk_fail_ok = 1;
  if (! setjmp (chk_fail_buf))
    {
      // Something that calls abort, like a failed fortify function.
      chk_fail_ok = 0;
      printf ("FAIL\n");
    }

Such cases will need to use sigsetjmp instead.
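
For example, the adjusted pattern (a sketch) becomes:

```
#include <setjmp.h>
#include <signal.h>
#include <stdlib.h>

static sigjmp_buf chk_fail_buf;

static void
handler (int sig)
{
  /* siglongjmp also restores the mask saved by sigsetjmp.  */
  siglongjmp (chk_fail_buf, 1);
}

int
main (void)
{
  signal (SIGABRT, handler);
  if (! sigsetjmp (chk_fail_buf, 1))   /* Nonzero savesigs saves the mask.  */
    abort ();
  return 0;   /* Reached via siglongjmp.  */
}
```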

The _dl_start_profile calls sigaction through _profil, and to avoid
pulling abort() into the loader the call is replaced with
__libc_sigaction.

Checked on x86_64-linux-gnu and aarch64-linux-gnu.

Reviewed-by: DJ Delorie <dj@redhat.com>
2024-10-08 14:40:12 -03:00
Adhemerval Zanella
55d33108c7 linux: Use GLRO(dl_vdso_time) on time
The BZ#24967 fix (1bdda52fe9) missed the time implementation for
architectures that define USE_IFUNC_TIME.  Although it is not
an issue, since there is no pointer mangling, there is also no need
to call dl_vdso_vsym since the vDSO setup was already done by the
loader.

Checked on x86_64-linux-gnu and i686-linux-gnu.
2024-10-08 13:28:21 -03:00
Adhemerval Zanella
02b195d30f linux: Use GLRO(dl_vdso_gettimeofday) on gettimeofday
The BZ#24967 fix (1bdda52fe9) missed the gettimeofday implementation for
architectures that define USE_IFUNC_GETTIMEOFDAY.  Although it is not
an issue, since there is no pointer mangling, there is also no need
to call dl_vdso_vsym since the vDSO setup was already done by the
loader.

Checked on x86_64-linux-gnu and i686-linux-gnu.
2024-10-08 13:28:21 -03:00
Stefan Liebler
7949f552cb S390: Don't use r11 for cu-instructions as used as frame-pointer. [BZ# 32192]
Building the s390 specific iconv modules - utf16-utf32-z9.c, utf8-utf32-z9.c
and utf8-utf16-z9.c - with -fno-omit-frame-pointer leads to a build error
"error: %r11 cannot be used in 'asm' here" as r11 is needed as frame-pointer.

The cuXY-instructions need two even-odd register pairs.  Therefore register
pinning is used.  This patch just uses a different register pair.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2024-10-08 10:13:02 +02:00
Carlos O'Donell
cae9944a6c Fix whitespace related license issues.
Several copies of the licenses in files contained whitespace related
problems.  Two cases are addressed here, the first is two spaces
after a period which appears between "PURPOSE." and "See". The other
is a space after the last forward slash in the URL. Both issues are
corrected and the licenses now match the official textual description
of the license (and the other license in the sources).

Since these whitespace changes do not alter the paragraph structure of
the license, nor create new sentences, they do not change the license.
2024-10-07 18:08:16 -04:00
Bruno Haible
e67f8e6dbd hurd: Add missing va_end call in fcntl implementation. [BZ #32234]
* sysdeps/mach/hurd/fcntl.c (__libc_fcntl): Add va_end call in two code paths.
2024-10-03 20:18:29 +02:00
Andreas Schwab
a36814e145 riscv: align .preinit_array (bug 32228)
The section contains an array of pointers, so it should be aligned to
pointer size.
2024-10-02 13:04:30 +02:00
Adhemerval Zanella
5e8cfc5d62 linux: sparc: Fix clone for LEON/sparcv8 (BZ 31394)
The sparc clone mitigation (faeaa3bc9f) added the use of flushw, which
is not supported by LEON/sparcv8.  As discussed on libc-alpha, 'ta 3'
is a working alternative [1].

[1] https://sourceware.org/pipermail/libc-alpha/2024-August/158905.html

Checked with a build for sparcv8-linux-gnu targeting LEON.

Acked-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
2024-10-01 10:37:21 -03:00
Adhemerval Zanella
49c3682ce1 linux: sparc: Fix syscall_cancel for LEON
LEON2/LEON3 are both sparcv8, which does not support branch hints
(bne,pn) nor the return instruction.

Checked with a build for sparcv8-linux-gnu targeting LEON.  I also
checked some cancellation tests with qemu-system (targeting LEON3).

Acked-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
2024-10-01 10:37:21 -03:00
Wilco Dijkstra
44fa9c1080 math: Improve layout of expf data
GCC aligns global data to 16 bytes if their size is >= 16 bytes.  This patch
changes the exp2f_data struct slightly so that the fields are better aligned.
As a result on targets that support them, load-pair instructions accessing
poly_scaled and invln2_scaled are now 16-byte aligned.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2024-10-01 13:39:26 +01:00
Noah Goldstein
483443d321 x86/string: Fixup alignment of main loop in str{n}cmp-evex [BZ #32212]
The loop should be aligned to 32 bytes so that it can ideally run out
of the DSB.  This is particularly important on Skylake-Server, where
deficiencies in its DSB implementation make it prone to not being able
to run loops out of the DSB.

For example, running strcmp-evex on a 200MB string:

32-byte aligned loop:
    - 43,399,578,766      idq.dsb_uops
not 32-byte aligned loop:
    - 6,060,139,704       idq.dsb_uops

This results in a 25% performance degradation for the non-aligned
version.

The fix is to just ensure the code layout is such that the loop is
aligned. (Which was previously the case but was accidentally dropped
in 84e7c46df).

NB: The fix was actually 64-byte alignment.  This is because 64-byte
alignment generally produces more stable performance than 32-byte
aligned code (cache line crosses can affect perf), so if we are going
past 16-byte alignment, we might as well go to 64.  64-byte alignment
also matches most other functions we over-align, so it creates a
common point of optimization.

Times are reported as the ratio of Time_With_Patch /
Time_Without_Patch.  Lower is better.

The values being reported are the geometric mean of the ratio across
all tests in bench-strcmp and bench-strncmp.

Note this patch is only attempting to improve the Skylake-Server
strcmp for long strings. The rest of the numbers are only to test for
regressions.

Tigerlake Results Strings <= 512:
    strcmp : 1.026
    strncmp: 0.949

Tigerlake Results Strings > 512:
    strcmp : 0.994
    strncmp: 0.998

Skylake-Server Results Strings <= 512:
    strcmp : 0.945
    strncmp: 0.943

Skylake-Server Results Strings > 512:
    strcmp : 0.778
    strncmp: 1.000

The 2.6% regression on TGL-strcmp is due to slowdowns caused by
changes in the alignment of code handling small sizes (mostly in the
page-cross logic).  These should be safe to ignore because 1) we
previously only 16-byte aligned the function, so this behavior is not
new and was essentially up to chance before this patch, and 2) this
type of alignment-related regression on small sizes really only comes
up in tight micro-benchmark loops and is unlikely to have any effect
on real-world performance.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-09-30 07:40:40 -07:00
Florian Weimer
b300078d97 Linux: Block signals around _Fork (bug 32215)
This hides the inconsistent TCB state (missing robust mutex list) from
signal handlers.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2024-09-28 09:44:25 +02:00
Andreas Schwab
5f62cf88c4 Fix missing randomness in __gen_tempname (bug 32214)
Make sure to update the random value even if getrandom fails.
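
The pattern being restored is roughly (an illustrative sketch, not the
exact glibc code; the clock-based fallback mix here is hypothetical):

```
#include <stdint.h>
#include <sys/random.h>
#include <time.h>

static uint64_t
next_random (uint64_t value)
{
  uint64_t r;
  if (getrandom (&r, sizeof r, GRND_NONBLOCK) == sizeof r)
    return r;
  /* Even on failure, advance VALUE so the next attempt differs.  */
  struct timespec ts;
  clock_gettime (CLOCK_MONOTONIC, &ts);
  return value ^ ((uint64_t) ts.tv_sec << 32) ^ (uint64_t) ts.tv_nsec;
}
```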

Fixes: 686d542025 ("posix: Sync tempname with gnulib")
2024-09-26 11:45:44 +02:00
Pavel Kozlov
cc84cd389c arc: Cleanup arcbe
Remove the mention of the arcbe ABI to avoid misleading anyone.
The ARC big endian ABI is no longer supported.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2024-09-25 15:54:07 +01:00
Florian Weimer
4ff55d08df arc: Remove HAVE_ARC_BE macro and disable big-endian port
It is no longer needed, now that ARC is always little endian.
2024-09-25 11:25:22 +02:00
caiyinyu
255dc1e4ed LoongArch: Undef __NR_fstat and __NR_newfstatat.
In Linux 6.11, fstat and newfstatat are added back. To avoid the messy
usage of the fstat, newfstatat, and statx system calls, we will continue
using statx only in glibc, maintaining consistency with previous versions of
the LoongArch-specific glibc implementation.

Signed-off-by: caiyinyu <caiyinyu@loongson.cn>
Reviewed-by: Xi Ruoyao <xry111@xry111.site>
Suggested-by: Florian Weimer <fweimer@redhat.com>
2024-09-25 10:00:42 +08:00
Florian Weimer
7e21a65c58 misc: Enable internal use of memory protection keys
This adds the necessary hidden prototypes.
2024-09-24 13:23:10 +02:00
Joe Ramsay
16a59571e4 AArch64: Simplify rounding-multiply pattern in several AdvSIMD routines
This operation can be simplified to use simpler multiply-round-convert
sequence, which uses fewer instructions and constants.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2024-09-23 15:44:08 +01:00
Joe Ramsay
7900ac490d AArch64: Improve codegen in users of ADVSIMD expm1f helper
Rearrange operations so MOV is not necessary in reduction or around
the special-case handler.  Reduce memory access by using more indexed
MLAs in polynomial.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2024-09-23 15:44:07 +01:00
Joe Ramsay
5bc100bd4b AArch64: Improve codegen in users of AdvSIMD log1pf helper
log1pf is quite register-intensive - use fewer registers for the
polynomial, and make various changes to shorten dependency chains in
parent routines.  There is now no spilling with GCC 14.  Accuracy moves
around a little - comments adjusted accordingly but does not require
regen-ulps.

Use the helper in log1pf as well, instead of having separate
implementations.  The more accurate polynomial means special-casing can
be simplified, and the shorter dependency chain avoids the usual dance
around v0, which is otherwise difficult.

There is a small duplication of vectors containing 1.0f (or 0x3f800000) -
GCC is not currently able to efficiently handle values which fit in FMOV
but not MOVI, and are reinterpreted to integer.  There may be potential
for more optimisation if this is fixed.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2024-09-23 15:44:07 +01:00
Joe Ramsay
a15b1394b5 AArch64: Improve codegen in SVE F32 logs
Reduce MOVPRFXs by using unpredicated (non-destructive) instructions
where possible.  Similar to the recent change to AdvSIMD F32 logs,
adjust special-case arguments and bounds to allow for more optimal
register usage.  For all 3 routines one MOVPRFX remains in the
reduction, which cannot be avoided as immediate AND and ASR are both
destructive.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2024-09-23 15:44:07 +01:00
Joe Ramsay
7b8c134b54 AArch64: Improve codegen in SVE expf & related routines
Reduce MOV and MOVPRFX by improving special-case handling.  Use inline
helper to duplicate the entire computation between the special- and
non-special case branches, removing the contention for z0 between x
and the return value.

Also rearrange some MLAs and MLSs - by making the multiplicand the
destination we can avoid a MOVPRFX in several cases.  Also change which
constants go in the vector used for lanewise ops - the last lane is no
longer wasted.

Spotted that the shift was incorrect in exp2f and exp10f, w.r.t. the
comment that explains it.  Fixed - worst-case ULP for exp2f moves
around but it doesn't change significantly for either routine.

Worst-case error for coshf increases due to passing x to exp rather
than abs(x) - updated the comment, but does not require regen-ulps.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2024-09-23 15:44:07 +01:00
Florian Weimer
6f3f6c506c Linux: readdir64_r should not skip d_ino == 0 entries (bug 32126)
This is the same bug as bug 12165, but for readdir_r.  The
regression test covers both bug 12165 and bug 32126.

Reviewed-by: DJ Delorie <dj@redhat.com>
2024-09-21 19:32:34 +02:00
Florian Weimer
e92718552e Linux: Use readdir64_r for compat __old_readdir64_r (bug 32128)
It is not necessary to do the conversion at the getdents64
layer for readdir64_r.  Doing it piecewise for readdir64
is slightly simpler and allows deleting __old_getdents64.

This fixes bug 32128 because readdir64_r handles the length
check correctly.

Reviewed-by: DJ Delorie <dj@redhat.com>
2024-09-21 19:32:34 +02:00
Joe Ramsay
751a5502be AArch64: Add vector logp1 alias for log1p
This enables vectorisation of C23 logp1, which is an alias for log1p.
There are no new tests or ulp entries because the new symbols are simply
aliases.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2024-09-19 17:53:34 +01:00
Sergey Bugaev
4524670545 hurd: Avoid file_check_access () RPC for access (F_OK)
A common use case of access () / faccessat () is checking for file
existence, not any specific access permissions.  In that case, we can
avoid doing the file_check_access () RPC; whether the given path had
been successfully resolved to a file is all we need to know to answer.

This is prompted by GLib switching to use faccessat (F_OK) to implement
g_file_query_exists () for local files.
https://gitlab.gnome.org/GNOME/glib/-/merge_requests/4272
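
The pattern in question is an existence-only check such as:

```
#include <fcntl.h>
#include <stdbool.h>
#include <unistd.h>

static bool
file_exists (const char *path)
{
  /* F_OK needs no permission bits, so path resolution alone answers it.  */
  return faccessat (AT_FDCWD, path, F_OK, 0) == 0;
}
```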

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-ID: <20240919101439.179663-1-bugaevc@gmail.com>
2024-09-19 14:18:39 +02:00
Florian Weimer
c444cc1d83 Linux: Add missing scheduler constants to <sched.h>
And add a test, misc/tst-sched-consts, that checks
consistency with <sched.h>.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2024-09-11 10:05:08 +02:00
Florian Weimer
21571ca0d7 Linux: Add the sched_setattr and sched_getattr functions
And struct sched_attr.

In sysdeps/unix/sysv/linux/bits/sched.h, the hack that defines
sched_param around the inclusion of <linux/sched/types.h> is quite
ugly, but the definition of struct sched_param has already been
dropped by the kernel, so there is nothing else we can do and maintain
compatibility of <sched.h> with a wide range of kernel header
versions.  (An alternative would involve introducing a separate header
for this functionality, but this seems unnecessary.)

The existing sched_* functions that change scheduler parameters
are already incompatible with PTHREAD_PRIO_PROTECT mutexes, so
there is no harm in adding more functionality in this area.

The documentation mostly defers to the Linux manual pages.
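
A minimal usage sketch, following the Linux sched_setattr(2) manual
page (field names come from the kernel's struct sched_attr):

```
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>

int
main (void)
{
  struct sched_attr attr;
  memset (&attr, 0, sizeof attr);
  attr.size = sizeof attr;
  attr.sched_policy = SCHED_BATCH;
  attr.sched_nice = 10;
  if (sched_setattr (0, &attr, 0) != 0)   /* tid 0 means this thread.  */
    perror ("sched_setattr");
  return 0;
}
```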

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2024-09-11 10:05:08 +02:00
Wilco Dijkstra
8ecb477ea1 AArch64: Remove memset-reg.h
Remove memset-reg.h by moving register definitions into the memset
implementations.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2024-09-10 14:18:03 +01:00
Wilco Dijkstra
cec3aef324 AArch64: Optimize memset
Improve small memsets by avoiding branches and using overlapping stores.
Use DC ZVA for copies over 128 bytes.  Remove unnecessary code for ZVA sizes
other than 64 and 128.  Performance of random memset benchmark improves by 24%
on Neoverse N1.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2024-09-09 15:30:00 +01:00
John David Anglin
3fc1d3bc33 hppa: Update libm-test-ulps
2024-09-09 09:57:42 -04:00
Joe Ramsay
8b09af572b aarch64: Avoid redundant MOVs in AdvSIMD F32 logs
Since the last operation is destructive, the first argument to the FMA
also has to be the first argument to the special-case in order to
avoid unnecessary MOVs. Reorder arguments and adjust special-case
bounds to facilitate this.

Reviewed-by: Wilco Dijkstra  <Wilco.Dijkstra@arm.com>
2024-09-09 13:03:49 +01:00
mengqinggang
6252c59f15 LoongArch: Fix macro redefined warning in tls-desc.S
Undef macro to avoid redefined warning.
2024-09-06 15:46:13 +08:00
Florian Weimer
a8c433856f i386: Update ulps
As seen on an AMD Ryzen 9 7950X CPU when building with GCC 14
with SSE2 math.
2024-09-05 22:25:55 +02:00
Florian Weimer
cc3e743fc0 powerpc64le: Build new strtod tests with long double ABI flags (bug 32145)
This fixes several test failures:

=====FAIL: stdlib/tst-strtod1i.out=====
Locale tests
all OK
Locale tests
all OK
Locale tests
strtold("1,5") returns -6,38643e+367 and not 1,5
strtold("1.5") returns 1,5 and not 1
strtold("1.500") returns 1 and not 1500
strtold("36.893.488.147.419.103.232") returns 1500 and not 3,68935e+19
Locale tests
all OK

=====FAIL: stdlib/tst-strtod3.out=====
0: got wrong results -2.5937e+4826, expected 0

=====FAIL: stdlib/tst-strtod4.out=====
0: got wrong results -6,38643e+367, expected 0
1: got wrong results 0, expected 1e+06
2: got wrong results 1e+06, expected 10

=====FAIL: stdlib/tst-strtod5i.out=====
0: got wrong results -6,38643e+367, expected 0
2: got wrong results 0, expected -0
4: got wrong results -0, expected 0
5: got wrong results 0, expected -0
6: got wrong results -0, expected 0
7: got wrong results 0, expected -0
8: got wrong results -0, expected 0
9: got wrong results 0, expected -0
10: got wrong results -0, expected 0
11: got wrong results 0, expected -0
12: got wrong results -0, expected 0
13: got wrong results 0, expected -0
14: got wrong results -0, expected 0
15: got wrong results 0, expected -0
16: got wrong results -0, expected 0
17: got wrong results 0, expected -0
18: got wrong results -0, expected 0
20: got wrong results 0, expected -0
22: got wrong results -0, expected 0
23: got wrong results 0, expected -0
24: got wrong results -0, expected 0
25: got wrong results 0, expected -0
26: got wrong results -0, expected 0
27: got wrong results 0, expected -0

Fixes commit 3fc063dee0
("Make __strtod_internal tests type-generic").

Suggested-by: Joseph Myers <josmyers@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2024-09-05 22:02:23 +02:00
Florian Weimer
61f2c2e1d1 Linux: readdir_r needs to report getdents failures (bug 32124)
Upon error, return the errno value set by the __getdents call
in __readdir_unlocked.  Previously, kernel-reported errors
were ignored.

Reviewed-by: DJ Delorie <dj@redhat.com>
2024-09-05 12:05:32 +02:00
Florian Weimer
ed416ee402 i386: Update ulps
As seen on an unspecified Intel system with glibc compiled
with GCC 8.
2024-09-05 09:57:25 +02:00
Adhemerval Zanella
1927f718fc linux: mips: Fix syscall_cancel build for __mips_isa_rev >= 6
Use beqzc instead of bnel.

Checked with a mipsisa64r6el-n64-linux-gnu build and some nptl
cancellation tests on qemu.
2024-09-02 12:30:45 -03:00
Jeevitha Palanisamy
29f0db6a2e powerpc64: Fix syscall_cancel build for powerpc64le-linux-gnu [BZ #32125]
In __syscall_cancel_arch, there's a tail call to __syscall_do_cancel.
On P10, since the caller uses the TOC and the callee is using
PC-relative addressing, there's only a branch instruction with no NOPs
to restore the TOC, which causes the build error. The fix involves adding
the NOTOC directive to the branch instruction, informing the linker
not to generate a TOC stub, thus resolving the issue.
2024-08-30 08:50:47 -05:00
Feifei Wang
ca90758b2a x86: Enable non-temporal memset for Hygon processors
This patch uses the 'Avoid_Non_Temporal_Memset' flag to access the
non-temporal memset implementation for Hygon processors.

Test Results:

hygon1 arch
x86_memset_non_temporal_threshold = 8MB
size                          new performance time / old performance time
1MB                           0.994
4MB                           0.996
8MB                           0.670
16MB                          0.343
32MB                          0.355

hygon2 arch
x86_memset_non_temporal_threshold = 8MB
size                          new performance time / old performance time
1MB                           1
4MB                           1
8MB                           1.312
16MB                          0.822
32MB                          0.830

hygon3 arch
x86_memset_non_temporal_threshold = 8MB
size                          new performance time / old performance time
1MB                           1
4MB                           0.990
8MB                           0.737
16MB                          0.390
32MB                          0.401

For the Hygon arch with this patch, non-temporal stores can improve
performance by 20% - 65%.

Signed-off-by: Feifei Wang <wangfeifei@hygon.cn>
Reviewed-by: Jing Li <lijing@hygon.cn>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-08-26 10:01:58 -07:00
Feifei Wang
d14aecbffc x86: Add cache information support for Hygon processors
Add a Hygon branch in the dl_init_cacheinfo function to initialize
cache size variables for Hygon processors.  In the meantime, add a
handle_hygon() function to get cache information.

Signed-off-by: Feifei Wang <wangfeifei@hygon.cn>
Reviewed-by: Jing Li <lijing@hygon.cn>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-08-26 10:01:58 -07:00
Feifei Wang
6b08116b2d x86: Add new architecture type for Hygon processors
Add a new architecture type, arch_kind_hygon, to split the Hygon
branch from AMD.  This is to facilitate the Hygon processors making
settings that are suitable for their own characteristics.

Signed-off-by: Feifei Wang <wangfeifei@hygon.cn>
Reviewed-by: Jing Li <lijing@hygon.cn>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-08-26 10:01:58 -07:00
Samuel Thibault
f071795d80 mach: Fix bogus negative return
One can be very unlucky and call time_now just before a second switch,
with mach_msg sleeping just long enough for the second time_now call to
count one second too many (or even more if scheduling is really
unlucky).

So we have to protect against returning a bogus negative value in such
a case.
2024-08-25 03:35:29 +02:00
Mahesh Bodapati
82b5340ebd powerpc64: Optimize strcpy and stpcpy for Power9/10
This patch modifies the current Power9 implementation of strcpy and
stpcpy to optimize it for Power9 and Power10.

No new Power10 instructions are used, so the original Power9 strcpy
is modified instead of creating a new implementation for Power10.

The changes also affect stpcpy, which uses the same implementation
with some additional code before returning.

Improvements compared to the old Power9 version:

Use simple comparisons for the first ~512 bytes:
  The main loop is good for long strings, but comparing 16B each time is
  better for shorter strings. After aligning the address to 16 bytes, we
  unroll the loop four times, checking 128 bytes each time. There may be
  some overlap with the main loop for unaligned strings, but it is better
  for shorter strings.

Use a 64-byte loop for longer strings:
  Use 4 consecutive lxv/stxv instructions.

Showed an average improvement of 13%.

Reviewed-by: Paul E. Murphy <murphyp@linux.ibm.com>
Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
2024-08-23 16:48:32 -05:00
Adhemerval Zanella
89b53077d2 nptl: Fix Race conditions in pthread cancellation [BZ#12683]
The current racy approach is to enable asynchronous cancellation
before making the syscall, restore the previous cancellation type once
the syscall returns, and check if cancellation has happened during the
cancellation entrypoint.

As described in BZ#12683, this approach shows 2 problems:

  1. Cancellation can act after the syscall has returned from the
     kernel, but before userspace saves the return value.  It might
     result in a resource leak if the syscall allocated a resource or
     had a side effect (partial read/write), and there is no way for
     the program to handle it with cancellation handlers.

  2. If a signal is handled while the thread is blocked at a cancellable
     syscall, the entire signal handler runs with asynchronous
     cancellation enabled.  This can lead to issues if the signal
     handler call functions which are async-signal-safe but not
     async-cancel-safe.

For the cancellation to work correctly, there are 5 points at which the
cancellation signal could arrive:

	[ ... )[ ... )[ syscall ]( ...
	   1      2        3    4   5

  1. Before initial testcancel, e.g. [*... testcancel)
  2. Between testcancel and syscall start, e.g. [testcancel...syscall start)
  3. While syscall is blocked and no side effects have yet taken
     place, e.g. [ syscall ]
  4. Same as 3 but with side-effects having occurred (e.g. a partial
     read or write).
  5. After syscall end e.g. (syscall end...*]

And libc wants to act on cancellation in cases 1, 2, and 3 but not
in cases 4 or 5.  For the 4 and 5 cases, the cancellation will eventually
happen in the next cancellable entrypoint without any further external
event.

The proposed solution for each case is:

  1. Do a conditional branch based on whether the thread has received
     a cancellation request;

  2. It can be caught by the signal handler determining that the saved
     program counter (from the ucontext_t) is in some address range
     beginning just before the "testcancel" and ending with the
     syscall instruction.

  3. SIGCANCEL can be caught by the signal handler and determine that
     the saved program counter (from the ucontext_t) is in the address
     range beginning just before "testcancel" and ending with the first
     uninterruptable (via a signal) syscall instruction that enters the
     kernel.

  4. In this case, except for certain syscalls that ALWAYS fail with
     EINTR even for non-interrupting signals, the kernel will reset
     the program counter to point at the syscall instruction during
     signal handling, so that the syscall is restarted when the signal
     handler returns.  So, from the signal handler's standpoint, this
     looks the same as case 2, and thus it's taken care of.

  5. For syscalls with side-effects, the kernel cannot restart the
     syscall; when it's interrupted by a signal, the kernel must cause
     the syscall to return with whatever partial result is obtained
     (e.g. partial read or write).

  6. The saved program counter points just after the syscall
     instruction, so the signal handler won't act on cancellation.
     This is similar to 4. since the program counter is past the syscall
     instruction.

So The proposed fixes are:

  1. Remove the enable_asynccancel/disable_asynccancel function usage in
     cancellable syscall definition and instead make them call a common
     symbol that will check if cancellation is enabled (__syscall_cancel
     at nptl/cancellation.c), call the arch-specific cancellable
     entry-point (__syscall_cancel_arch), and cancel the thread when
     required.

  2. Provide an arch-specific generic system call wrapper function
     that contains global markers.  These markers will be used in the
     SIGCANCEL signal handler to check if the interruption has been
     called in a valid syscall and if the syscall has side effects.

     A reference implementation sysdeps/unix/sysv/linux/syscall_cancel.c
     is provided.  However, the markers may not be set on correct
     expected places depending on how INTERNAL_SYSCALL_NCS is
     implemented by the architecture.  It is expected that all
     architectures add an arch-specific implementation.

  3. Rewrite the SIGCANCEL asynchronous handler to check both the
     canceling type and whether the current IP from the signal handler
     falls between the global markers, and act accordingly (see the
     sketch after this list).

  4. Adjust libc code to replace LIBC_CANCEL_ASYNC/LIBC_CANCEL_RESET to
     use the appropriate cancelable syscalls.

  5. Adjust 'lowlevellock-futex.h' arch-specific implementations to
     provide cancelable futex calls.
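
The marker test in the handler is conceptually (a sketch for x86_64;
the helper name is illustrative, see the reference implementation for
the real logic):

```
#define _GNU_SOURCE
#include <stdbool.h>
#include <stdint.h>
#include <ucontext.h>

extern const char __syscall_cancel_arch_start[];
extern const char __syscall_cancel_arch_end[];

static bool
pc_in_cancellable_window (const ucontext_t *uc)
{
  uintptr_t pc = (uintptr_t) uc->uc_mcontext.gregs[REG_RIP];
  /* Act on cancellation only if the thread was interrupted between the
     initial testcancel and the syscall instruction itself.  */
  return pc >= (uintptr_t) __syscall_cancel_arch_start
         && pc < (uintptr_t) __syscall_cancel_arch_end;
}
```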

Some architectures require specific support on syscall handling:

  * On i386 the syscall cancel bridge needs to use the old int80
    instruction because, with the optimized vDSO symbol, the resulting
    PC value for an interrupted syscall points to an address outside
    the expected markers in __syscall_cancel_arch.  It has been
    discussed in LKML [1] how the kernel could help userland accomplish
    this, but afaik the discussion has stalled.

    Also, sysenter should not be used directly by libc since its calling
    convention is set by the kernel depending of the underlying x86 chip
    (check kernel commit 30bfa7b3488bfb1bb75c9f50a5fcac1832970c60).

  * mips o32 is the only kABI that requires 7-argument syscalls, and
    to avoid adding a requirement on all architectures to support it,
    mips support is added with extra internal defines.

Checked on aarch64-linux-gnu, arm-linux-gnueabihf, powerpc-linux-gnu,
powerpc64-linux-gnu, powerpc64le-linux-gnu, i686-linux-gnu, and
x86_64-linux-gnu.

[1] https://lkml.org/lkml/2016/3/8/1105
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2024-08-23 14:27:43 -03:00
Adhemerval Zanella
745c3cc10f elf: Make dl-fptr and dl-symaddr hppa specific
With the ia64 removal, function descriptor support is only used by
HPPA, and new architectures do not seem to be leaning towards this
design.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2024-08-19 14:54:07 -03:00
Matthew Sterrett
294a892769 x86: Unifies 'strnlen-evex' and 'strnlen-evex512' implementations.
This commit uses a common implementation 'strnlen-evex-base.S' for both
'strnlen-evex' and 'strnlen-evex512'

This patch serves both to reduce the number of implementations and to
make some small optimizations that benefit strnlen-evex and
strnlen-evex512.

All tests pass on x86.

Benchmarks were taken on SKX.
https://www.intel.com/content/www/us/en/products/sku/123613/intel-core-i97900x-xseries-processor-13-75m-cache-up-to-4-30-ghz/specifications.html

Geometric mean for strnlen-evex over all benchmarks (N=10) was (new/old) 0.881
Geometric mean for strnlen-evex512 over all benchmarks (N=10) was (new/old) 0.953

Code Size Changes:
    strnlen-evex       :  +31 bytes
    strnlen-evex512    :  +156 bytes
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
2024-08-19 07:34:02 -07:00
Maciej W. Rozycki
9fb237a1c8 nptl: Fix extraneous testing run by tst-rseq-nptl in the test driver
Fix an issue with commit 8f4632deb3 ("Linux: rseq registration tests")
and prevent testing from being run in the test driver process itself
rather than just in the test child where one has been forked.  The
problem here is the unguarded use of a destructor to call a part of the
testing.  The destructor function, 'do_rseq_destructor_test' is called
implicitly at program completion, however because it is associated with
the executable itself rather than an individual process, it is called
both in the test child *and* in the test driver itself.

Prevent this from happening by providing a guard variable that only
enables test invocation from 'do_rseq_destructor_test' in the process
that has first run 'do_test'.  Consequently extra testing is invoked
from 'do_rseq_destructor_test' only once and in the correct process,
regardless of the use or the lack of of the '--direct' option.  Where
called in the controlling test driver process that has neved called
'do_test' the destructor function silently returns right away without
taking any further actions, letting the test driver fail gracefully
where applicable.

This arrangement prevents 'tst-rseq-nptl' from ever causing testing to
hang forever and never complete, such as currently happening with the
'mips-linux-gnu' (o32 ABI) target.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2024-08-16 14:38:33 +01:00
Carlos O'Donell
b22923abb0 Report error if setaffinity wrapper fails (Bug 32040)
Previously, if the setaffinity wrapper failed, the rest of the subtest
would not execute and the current subtest would be reported as passing.
Now if the setaffinity wrapper fails the subtest is correctly reported
as failing.  Tested manually by changing the conditions of the affinity
call, including setting size to zero or checking the wrong condition.
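
A minimal sketch of the new behavior, using plain libc calls rather
than the glibc test harness (the helper name and error handling are
illustrative):

```c
#define _GNU_SOURCE
#include <err.h>
#include <sched.h>
#include <stddef.h>

/* Hypothetical helper: fail loudly instead of silently skipping the
   rest of the subtest when the affinity call does not succeed.  */
static void
set_affinity_or_fail (const cpu_set_t *set, size_t size)
{
  if (sched_setaffinity (0, size, set) != 0)
    err (1, "sched_setaffinity (0, %zu, set)", size);
}
```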

No regressions on x86_64.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2024-08-15 15:28:48 -04:00
Noah Goldstein
f446d90fe6 x86: Add Avoid_STOSB tunable to allow NT memset without ERMS
The goal of this flag is to allow targets which don't prefer/have ERMS
to still access the non-temporal memset implementation.

There are 4 cases for tuning memset (see the sketch after this list):
    1) `Avoid_STOSB && Avoid_Non_Temporal_Memset`
        - Memset with temporal stores
    2) `Avoid_STOSB && !Avoid_Non_Temporal_Memset`
        - Memset with temporal/non-temporal stores. Non-temporal path
          goes through `rep stosb` path. We accomplish this by setting
          `x86_rep_stosb_threshold` to
          `x86_memset_non_temporal_threshold`.
    3) `!Avoid_STOSB && Avoid_Non_Temporal_Memset`
        - Memset with temporal stores/`rep stosb`
    4) `!Avoid_STOSB && !Avoid_Non_Temporal_Memset`
        - Memset with temporal stores/`rep stosb`/non-temporal stores.
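
A hedged sketch of the dispatch these four cases imply; names,
signature, and structure are illustrative, not glibc's actual
tunables code:

```c
#include <stdbool.h>
#include <stddef.h>

enum path { TEMPORAL, REP_STOSB, NON_TEMPORAL };

static enum path
memset_path (size_t n, bool avoid_stosb, bool avoid_nt,
             size_t rep_stosb_threshold, size_t non_temporal_threshold)
{
  /* Case 2 is implemented by folding the thresholds together, so the
     `rep stosb` dispatch slot hands large sizes to the non-temporal
     path.  */
  if (avoid_stosb && !avoid_nt)
    rep_stosb_threshold = non_temporal_threshold;

  if (!avoid_nt && n >= non_temporal_threshold)
    return NON_TEMPORAL;                  /* cases 2 and 4 */
  if (!avoid_stosb && n >= rep_stosb_threshold)
    return REP_STOSB;                     /* cases 3 and 4 */
  return TEMPORAL;                        /* always available */
}
```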
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-08-15 08:19:15 -07:00
Noah Goldstein
b93dddfaf4 x86: Use Avoid_Non_Temporal_Memset to control non-temporal path
This is just a refactor and there should be no behavioral change from
this commit.

The goal is to make `Avoid_Non_Temporal_Memset` a more universal knob
for controlling whether we use non-temporal memset rather than having
extra logic based on vendor.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-08-15 08:19:15 -07:00
Noah Goldstein
7da0886247 x86: Fix bug in strchrnul-evex512 [BZ #32078]
The issue was that we expected no matches with CHAR before the start
of the string in the page-cross case.

The check code in the page cross case:
```
    and    $0xffffffffffffffc0,%rax
    vmovdqa64 (%rax),%zmm17
    vpcmpneqb %zmm17,%zmm16,%k1
    vptestmb %zmm17,%zmm17,%k0{%k1}
    kmovq  %k0,%rax
    inc    %rax
    shr    %cl,%rax
    je     L(continue)
```

expects that all characters that neither match null nor CHAR will be
1s in `rax` prior to the `inc`.  Then, when no relevant match was
found, the `inc` will overflow all of the 1s to zero.

This is incorrect in the page-cross case, as the
`vmovdqa64 (%rax),%zmm17` loads from before the start of the input
string.

If there are matches with CHAR before the start of the string, `rax`
won't properly overflow.

The fix is quite simple. Just replace:

```
    inc    %rax
    shr    %cl,%rax
```
With:
```
    sar    %cl,%rax
    inc    %rax
```

The arithmetic shift will clear any matches prior to the start of the
string while maintaining the sign bit, so the 1s can properly overflow
to zero in the case of no matches.
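
A small C model of why the order matters (it mirrors the assembly
above and assumes `>>` on signed values is an arithmetic shift, as on
GCC and Clang, to model `sar`):

```c
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  /* Every in-string lane is "no relevant match" (bit set), but the
     lane below the string start spuriously matched CHAR (bit 0
     clear); cl = 1 shifts that junk lane out.  */
  uint64_t mask = ~1ULL;
  unsigned cl = 1;

  uint64_t buggy = (mask + 1) >> cl;                        /* inc; shr */
  uint64_t fixed = (uint64_t) (((int64_t) mask >> cl) + 1); /* sar; inc */

  /* buggy is nonzero (a false match); fixed is 0 (continue).  */
  printf ("buggy=%#llx fixed=%#llx\n",
          (unsigned long long) buggy, (unsigned long long) fixed);
  return 0;
}
```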
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-08-15 08:11:33 -07:00
Pavel Kozlov
cf03971f7a
ARC: Regenerate ULPs
Regenerate fpu and soft-fp ULPs. Based on results from HSDK-4xD board
with GCC 14 build.
Including new tests added by 0797283910.
2024-08-11 15:29:56 +02:00
mengqinggang
5662433c38 LoongArch: Add cfi instructions for _dl_tlsdesc_dynamic
In _dl_tlsdesc_dynamic, there are three 'addi.d sp, sp, -size'
instructions to allocate stack space for Float/LSX/LASX registers.
Every 'addi.d sp, sp, -size' needs a cfi_adjust_cfa_offset because
sp is used to compute the CFA.  But only one 'addi.d sp, sp, -size'
is executed at run time, according to the HWCAP value, while all
cfi_adjust_cfa_offset directives are applied during stack unwinding,
resulting in an incorrect CFA.

Split _dl_tlsdesc_dynamic into _dl_tlsdesc_dynamic,
_dl_tlsdesc_dynamic_lsx, and _dl_tlsdesc_dynamic_lasx, so that the
conflicting cfi instructions are distributed across the three
functions and each cfi instruction corresponds to its stack
adjustment instruction.
2024-08-09 09:06:17 +08:00
caiyinyu
d5f1da2a8a LoongArch: Regenerate ULPs
From new tests added by 0797283910.

Signed-off-by: caiyinyu <caiyinyu@loongson.cn>
2024-08-09 09:06:17 +08:00
Julian Zhu
a0ecbb4596 RISC-V: Regenerate ULPs
From new tests added by 0797283910.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2024-08-08 14:53:55 +02:00
Julian Zhu
0f39b60a7e MIPS: Regenerate ULPs
From new tests added by 0797283910.

Signed-off-by: Julian Zhu <jz531210@gmail.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
2024-08-08 14:53:53 +02:00
Florian Weimer
9446351dac powerpc64le: Update ulps
Based on results from a POWER8 system with a GCC 8 build.
2024-08-08 13:42:12 +02:00
Florian Weimer
bd410d14e1 s390x: Update ulps
Based on results from a z16 system with a GCC 8 build.
2024-08-08 13:01:02 +02:00
Adhemerval Zanella
6396e10b20 powerpc: Regenerate ULPs for soft-fp
From new tests added by 0797283910.
2024-08-07 11:02:03 -03:00
Adhemerval Zanella
6411dba836 powerpc: Update soft-fp ulps
From new tests added by 0797283910.
2024-08-07 11:02:03 -03:00
Adhemerval Zanella
1dcc107a1f sparc: Regenerate ULPs
From new tests added by 0797283910.
2024-08-07 11:02:03 -03:00
Adhemerval Zanella
f8aafb5a16 i386: Regenerate ULPs
From new tests added by 0797283910.
2024-08-07 11:02:03 -03:00
Adhemerval Zanella
d8023eb460 arm: Regenerate ULPs
From new tests added by 0797283910.
2024-08-07 11:02:03 -03:00
Adhemerval Zanella
e2f88d8524 aarch64: Regenerate ULPs
From new tests added by 0797283910.
2024-08-07 11:02:03 -03:00
Adhemerval Zanella
428c7383da sysdeps: Re-flow and sort multiline gnu/Makefile definitions 2024-08-07 11:02:03 -03:00
Wilco Dijkstra
3dc426b642 AArch64: Improve generic strlen
Improve performance by handling another 16 bytes before entering the loop.
Use ADDHN in the loop to avoid SHRN+FMOV when it terminates.  Change final
size computation to avoid increasing latency.  On Neoverse V1 performance
of the random strlen benchmark improves by 4.6%.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2024-08-07 14:58:46 +01:00
Paul Zimmermann
0797283910 added inputs giving large errors on x86_64 for new C23 functions
These functions are exp10m1, exp2m1, log10p1, log2p1.
Also regenerated ulps on x86_64.

For each format, there are 4 values, one for each rounding mode.
(For the intel96 format, there are 8 values, 4 for Intel hardware,
and 4 for AMD hardware. However, regen-ulps was only run on Intel.
It should be run in a separate patch on an AMD x86_64.)
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2024-08-07 14:28:46 +02:00
caiyinyu
d7eca2714f LoongArch: Update Ulps.
From new tests added by 4dc22baa84.

Signed-off-by: caiyinyu <caiyinyu@loongson.cn>
2024-08-06 09:23:56 +08:00
Florian Weimer
5097cd344f elf: Avoid re-initializing already allocated TLS in dlopen (bug 31717)
The old code used l_init_called as an indicator for whether TLS
initialization was complete.  However, it is possible that
TLS for an object is initialized, written to, and then dlopen
for this object is called again, and l_init_called is not true at
this point.  Previously, this resulted in TLS being initialized
twice, discarding any interim writes (technically introducing a
use-after-free bug even).
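
A hedged reproduction sketch of the user-visible bug (the module name
and symbol are hypothetical; the module would define
`__thread int tls_var;`):

```c
#include <dlfcn.h>
#include <stdio.h>

int
main (void)
{
  void *h1 = dlopen ("./tls-mod.so", RTLD_NOW);
  int *var = dlsym (h1, "tls_var");   /* thread-local address */
  *var = 42;                          /* interim write to TLS */

  /* Before the fix, a second dlopen of the same object could re-run
     TLS initialization and discard the write above.  */
  void *h2 = dlopen ("./tls-mod.so", RTLD_NOW);

  printf ("tls_var = %d\n", *var);    /* expected: 42 */
  dlclose (h2);
  dlclose (h1);
  return 0;
}
```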

This commit introduces an explicit per-object flag, l_tls_in_slotinfo.
It indicates whether _dl_add_to_slotinfo has been called for this
object.  This flag is used to avoid double-initialization of TLS.
In update_tls_slotinfo, the first_static_tls micro-optimization
is removed because preserving the initialization flag for subsequent
use by the second loop for static TLS is a bit complicated, and
another per-object flag does not seem to be worth it.  Furthermore,
the l_init_called flag is dropped from the second loop (for static
TLS initialization) because l_need_tls_init on its own prevents
double-initialization.

The remaining l_init_called usage in resize_scopes and update_scopes
is just an optimization due to the use of scope_has_map, so it is
not changed in this commit.

The isupper check ensures that the TLS of libc.so.6 is not reverted.
Such a revert happens if l_need_tls_init is not cleared in
_dl_allocate_tls_init for the main_thread case, now that
l_init_called is not checked anymore in update_tls_slotinfo
in elf/dl-open.c.

Reported-by: Jonathon Anderson <janderson@rice.edu>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2024-08-05 18:26:52 +02:00
Florian Weimer
fe06fb313b elf: Clarify and invert second argument of _dl_allocate_tls_init
Also remove an outdated comment: _dl_allocate_tls_init is
called as part of pthread_create.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2024-08-05 18:26:42 +02:00
Florian Weimer
7a630f7d33 x86: Tunables may incorrectly set Prefer_PMINUB_for_stringop (bug 32047)
Fixes commit 5bcf6265f2 ("x86:
Disable non-temporal memset on Skylake Server").

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
2024-08-02 18:08:14 +02:00
Florian Weimer
0df48472ff x86: Add missing switch/case fall-through markers to init_cpu_features
The commits introducing these fall-throughs intended them to
happen.
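
For illustration, the kind of marker involved (the function name and
model numbers below are made up, not the actual init_cpu_features
cases):

```c
/* Illustrative only.  */
static void
init_features (unsigned int model)
{
  switch (model)
    {
    case 0x55:
      /* Model-specific setup would go here.  */
      /* Fall through.  */
    case 0x6a:
      /* Setup shared by both models.  */
      break;
    }
}
```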

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
2024-08-02 18:08:14 +02:00
Samuel Thibault
8dc3f4f8ad hurd: Fix missing pthread_ compat symbol in libc
5476f8cd2e ("htl: move pthread_self info libc.") and
9dfa256216 ("htl: move pthread_equal into libc") to
1dc0bc8f07 ("htl: move pthread_attr_setdetachstate into libc")
moved some pthread_ symbols from libpthread.so to libc.so, but missed
adding the compat version like 5476f8cd2e ("htl: move pthread_self
info libc.") did: libc already had these symbols as forwards,
but versioned GLIBC_2.21, while the symbols in libpthread.so were
versioned GLIBC_2.12.

To keep executables built before this change running, we thus have to
add the GLIBC_2.12 version; otherwise execution fails with e.g.

/usr/lib/i386-gnu/libglib-2.0.so: symbol lookup error: /usr/lib/i386-gnu/libglib-2.0.so: undefined symbol: pthread_attr_setinheritsched, version GLIBC_2.12
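
A hedged sketch of the kind of compat export this adds, using glibc's
shlib-compat machinery (the symbol shown is the one from the error
above; the exact code is not part of this log):

```c
/* shlib-compat.h is a glibc-internal header.  */
#include <shlib-compat.h>

versioned_symbol (libc, __pthread_attr_setinheritsched,
                  pthread_attr_setinheritsched, GLIBC_2_21);
#if SHLIB_COMPAT (libc, GLIBC_2_12, GLIBC_2_21)
compat_symbol (libc, __pthread_attr_setinheritsched,
               pthread_attr_setinheritsched, GLIBC_2_12);
#endif
```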
2024-08-01 23:58:51 +02:00