Commit Graph

1821 Commits

Author SHA1 Message Date
Noah Goldstein
9469261cf1 x86: Only align destination to 1x VEC_SIZE in memset 4x loop
Current code aligns to 2x VEC_SIZE. Aligning to 2x has no effect on
performance other than potentially resulting in an additional
iteration of the loop.
1x maintains aligned stores (the only reason to align in this case)
and doesn't incur any unnecessary loop iterations.
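
A minimal C sketch of the trade-off described above (VEC_SIZE and the
rounding helper are illustrative, not the actual assembly):

```c
#include <stdint.h>
#include <stdio.h>

#define VEC_SIZE 32  /* illustrative vector width, not glibc's macro */

/* Round an address up to the next multiple of ALIGN (a power of two).  */
static unsigned char *
align_up (unsigned char *p, uintptr_t align)
{
  return (unsigned char *) (((uintptr_t) p + align - 1) & ~(align - 1));
}

int
main (void)
{
  unsigned char buf[4 * VEC_SIZE];
  unsigned char *dst = buf + 3;            /* deliberately misaligned */
  /* 1x VEC_SIZE: the loop stores are already aligned.  */
  unsigned char *loop_1x = align_up (dst, VEC_SIZE);
  /* 2x VEC_SIZE: the stores are no more aligned, but the loop may start
     up to VEC_SIZE bytes later, i.e. one potential extra iteration.  */
  unsigned char *loop_2x = align_up (dst, 2 * VEC_SIZE);
  printf ("bytes handled before the loop, 1x align: %td\n", loop_1x - dst);
  printf ("bytes handled before the loop, 2x align: %td\n", loop_2x - dst);
  return 0;
}
```
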
Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>
2023-11-28 12:06:19 -06:00
Adhemerval Zanella
55f41ef8de elf: Remove LD_PROFILE for static binaries
The _dl_non_dynamic_init does not parse LD_PROFILE, so profiling is not
enabled for dlopened objects.  Since dlopen is deprecated for static
objects, it is better to remove the support.

It also allows trimming the profile support out of libc.a.

Checked on x86_64-linux-gnu.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-11-21 16:15:42 -03:00
Adhemerval Zanella
4862d546c0 x86: Use dl-symbol-redir-ifunc.h on cpu-tunables
The dl-symbol-redir-ifunc.h redirects compiler-generated libcalls to
arch-specific memory implementations to avoid ifunc calls where they are
not yet possible. The memcmp-isa-default-impl.h aims to fix the same
issue by calling the specific memcmp implementation directly.

Using the memcmp symbol directly allows the compiler to inline the
memcmp calls (especially because _dl_tunable_set_hwcaps uses constant
values), generating better code.

Checked on x86_64-linux-gnu.

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-11-21 16:15:42 -03:00
Adhemerval Zanella
9c96c87d60 elf: Ignore GLIBC_TUNABLES for setuid/setgid binaries
The tunable privilege levels were a retrofit to try and keep the malloc
tunable environment variables' behavior unchanged across security
boundaries.  However, CVE-2023-4911 shows how tricky tunable parsing
can be in a security-sensitive environment.

Beyond parsing, the malloc tunables essentially change some
semantics on setuid/setgid processes.  Although it is not a direct
security issue, allowing users to change setuid/setgid semantics is not
a good security practice, and requires extra code and analysis to check
if each tunable is safe to use on all security boundaries.

It also means that security opt-in features, like aarch64 MTE, would
need to be explicitly enabled by an administrator with a wrapper script
or with a possible future system-wide tunable setting.

Co-authored-by: Siddhesh Poyarekar  <siddhesh@sourceware.org>
Reviewed-by: DJ Delorie <dj@redhat.com>
2023-11-21 16:15:42 -03:00
Noah Goldstein
b7f8b6b64b x86: Fix unchecked AVX512-VBMI2 usage in strrchr-evex-base.S
strrchr-evex-base used `vpcompress{b|d}` in the page cross logic but
was missing the CPU_FEATURE checks for VBMI2 in the
ifunc/ifunc-impl-list.

The fix is either to add those checks or change the logic to not use
`vpcompress{b|d}`. Choosing the latter here so that the strrchr-evex
implementation is usable on SKX.
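
For context, a hedged, self-contained sketch of the missing feature
check (this uses raw CPUID rather than glibc's CPU_FEATURE machinery,
and ignores the additional XSAVE/OS-support checks a real "usable" test
needs):

```c
#include <cpuid.h>
#include <stdbool.h>
#include <stdio.h>

/* The byte/word forms vpcompressb/vpcompressw require AVX512-VBMI2,
   reported in CPUID.(EAX=7, ECX=0):ECX bit 6.  Code using them must be
   gated on this bit, otherwise SKX-class CPUs would select an
   implementation they cannot run.  */
static bool
has_avx512_vbmi2 (void)
{
  unsigned int eax, ebx, ecx, edx;
  if (!__get_cpuid_count (7, 0, &eax, &ebx, &ecx, &edx))
    return false;
  return (ecx >> 6) & 1;
}

int
main (void)
{
  puts (has_avx512_vbmi2 () ? "AVX512-VBMI2 present" : "AVX512-VBMI2 missing");
  return 0;
}
```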

The new implementation is a bit slower, but this is in a cold path so
it's probably okay.
2023-11-15 11:09:44 -06:00
Noah Goldstein
a3c50bf46a x86: Prepare strrchr-evex and strrchr-evex512 for AVX10
This commit refactors `strrchr-evex` and `strrchr-evex512` to use a
common implementation: `strrchr-evex-base.S`.

The motivation is that `strrchr-evex` needed to be refactored to not use
64-bit masked registers in preparation for AVX10.

Once the vec-width masked register combining was removed, the EVEX and
EVEX512 implementations could easily be implemented in the same file
without any major overhead.

The net result is performance improvements (measured on TGL) for both
`strrchr-evex` and `strrchr-evex512`. Note, however, that there are some
regressions in the benchmark results, and it may be that many of the
cases driving the total geomean of improvement/regression across
bench-strrchr are cold. The point of the performance measurement is to
show there are no major regressions; the primary motivation is
preparation for AVX10.

Benchmarks were taken on TGL:
https://www.intel.com/content/www/us/en/products/sku/213799/intel-core-i711850h-processor-24m-cache-up-to-4-80-ghz/specifications.html

EVEX geometric_mean(N=5) of all benchmarks New / Original   : 0.74
EVEX512 geometric_mean(N=5) of all benchmarks New / Original: 0.87

Full check passes on x86.
2023-10-06 00:18:55 -05:00
Samuel Thibault
29d4591b07 hurd: Drop REG_GSFS and REG_ESDS from x86_64's ucontext
These are useless on x86_64, and __NGREG was actually wrong with them.
2023-09-28 00:10:13 +02:00
Szabolcs Nagy
d2123d6827 elf: Fix slow tls access after dlopen [BZ #19924]
In short: __tls_get_addr checks the global generation counter and, if
the current dtv is older, _dl_update_slotinfo updates the dtv up to the
generation of the accessed module. So if the global generation is newer
than the generation of the module, then __tls_get_addr keeps hitting the
slow dtv update path. The dtv update path includes a number of checks
to see if any update is needed, and this already causes a measurable TLS
access slowdown after dlopen.

It may be possible to detect up-to-date dtv faster.  But if there are
many modules loaded (> TLS_SLOTINFO_SURPLUS) then this requires at
least walking the slotinfo list.

This patch tries to update the dtv to the global generation instead, so
after a dlopen the tls access slow path is only hit once.  The modules
with larger generation than the accessed one were not necessarily
synchronized before, so additional synchronization is needed.

This patch uses acquire/release synchronization when accessing the
generation counter.
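
A hedged sketch of the idea in portable C (this is not glibc's dl-tls.c;
the slotinfo walk is elided):

```c
#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>

static _Atomic size_t global_gen;      /* bumped by dlopen/dlclose */
static _Thread_local size_t dtv_gen;   /* this thread's dtv generation */

/* Bring the thread's dtv all the way up to the global generation, so
   the next TLS access to any already-loaded module takes the fast path
   instead of re-entering the slow update path.  The acquire load pairs
   with the release store done when the generation is bumped.  */
static void
update_dtv_to_global_gen (void)
{
  size_t gen = atomic_load_explicit (&global_gen, memory_order_acquire);
  if (dtv_gen < gen)
    {
      /* ... walk the slotinfo list and refresh the dtv entries ... */
      dtv_gen = gen;
    }
}

int
main (void)
{
  /* Simulate a dlopen bumping the generation.  */
  atomic_store_explicit (&global_gen, 3, memory_order_release);
  update_dtv_to_global_gen ();
  printf ("dtv generation: %zu\n", dtv_gen);   /* prints 3 */
  return 0;
}
```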

Note: in the x86_64 version of dl-tls.c the generation is only loaded
once, since relaxed mo is not faster than acquire mo load.

I have not benchmarked this. Tested by Adhemerval Zanella on aarch64,
powerpc, sparc, x86 who reported that it fixes the performance issue
of bug 19924.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2023-09-01 08:21:37 +01:00
H.J. Lu
a8ecb126d4 x86_64: Add log1p with FMA
On Skylake, it changes log1p bench performance by:

        Before       After     Improvement
max     63.349       58.347       8%
min     4.448        5.651        -30%
mean    12.0674      10.336       14%

The minimum code path is

 if (hx < 0x3FDA827A)                          /* x < 0.41422  */
    {
      if (__glibc_unlikely (ax >= 0x3ff00000))           /* x <= -1.0 */
        {
	   ...
        }
      if (__glibc_unlikely (ax < 0x3e200000))           /* |x| < 2**-29 */
        {
          math_force_eval (two54 + x);          /* raise inexact */
          if (ax < 0x3c900000)                  /* |x| < 2**-54 */
            {
	      ...
            }
          else
            return x - x * x * 0.5;

FMA and non-FMA code sequences look similar.  Non-FMA version is slightly
faster.  Since log1p is called by asinh and atanh, it improves asinh
performance by:

        Before       After     Improvement
max     75.645       63.135       16%
min     10.074       10.071       0%
mean    15.9483      14.9089      6%

and improves atanh performance by:

        Before       After     Improvement
max     91.768       75.081       18%
min     15.548       13.883       10%
mean    18.3713      16.8011      8%
2023-08-21 10:44:26 -07:00
H.J. Lu
1b214630ce x86_64: Add expm1 with FMA
On Skylake, it improves expm1 bench performance by:

        Before       After     Improvement
max     70.204       68.054       3%
min     20.709       16.2         22%
mean    22.1221      16.7367      24%

NB: Add

extern long double __expm1l (long double);
extern long double __expm1f128 (long double);

for __typeof (__expm1l) and __typeof (__expm1f128) when __expm1 is
defined, since __expm1 may be expanded in their declarations, which
would cause a build failure.
2023-08-14 08:14:19 -07:00
H.J. Lu
f6b10ed8e9 x86_64: Add log2 with FMA
On Skylake, it improves log2 bench performance by:

        Before       After     Improvement
max     208.779      63.827       69%
min     9.977        6.55         34%
mean    10.366       6.8191       34%
2023-08-11 07:49:45 -07:00
H.J. Lu
881546979d x86_64: Sort fpu/multiarch/Makefile
Sort Makefile variables using scripts/sort-makefile-lines.py.

No code generation changes observed in libm.  No regressions on x86_64.
2023-08-10 11:23:25 -07:00
Adhemerval Zanella
51cb52214f x86_64: Fix build with --disable-multiarch (BZ 30721)
With multiarch disabled, the default memmove implementation provides
the fortify routines for memcpy, mempcpy, and memmove.  However, it
does not provide the internal hidden definitions used when building
with fortify enabled.  memset has a similar issue.

Checked on x86_64-linux-gnu building with different options:
default and --disable-multi-arch plus default, --disable-default-pie,
--enable-fortify-source={2,3}, and --enable-fortify-source={2,3}
with --disable-default-pie.
Tested-by: Andreas K. Huettel <dilfridge@gentoo.org>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-08-10 10:29:29 -03:00
Andreas K. Hüttel
6d457ff36a Update x86_64 libm-test-ulps (x32 ABI)
Based on feedback by Mike Gilbert <floppym@gentoo.org>:
Linux-6.1.38-dist x86_64, AMD Phenom(tm) II X6 1055T Processor,
-march=amdfam10; the failures occur for the x32 ABI.

Signed-off-by: Andreas K. Hüttel <dilfridge@gentoo.org>
2023-07-19 16:56:54 +02:00
Siddhesh Poyarekar
c6cb8783b5 configure: Use autoconf 2.71
Bump autoconf requirement to 2.71 to allow regenerating configure on
more recent distributions.  autoconf 2.71 has been in Fedora since F36
and is the current version in Debian stable (bookworm).  It appears to
be current in Gentoo as well.

All sysdeps configure and preconfigure scripts have also been
regenerated; all changes are trivial transformations that do not affect
functionality.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2023-07-17 10:08:10 -04:00
Frédéric Bérat
64f9857507 wchar: Avoid PLT entries with _FORTIFY_SOURCE
The change is meant to avoid unwanted PLT entries for the wmemset and
wcrtomb routines when _FORTIFY_SOURCE is set.

On top of that, ensure that *_chk routines have their hidden builtin
definitions available.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-07-05 16:59:48 +02:00
Frédéric Bérat
dd8486ffc1 string: Ensure *_chk routines have their hidden builtin definition available
If libc_hidden_builtin_{def,proto} isn't properly set for *_chk routines,
there are unwanted PLT entries in libc.so.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-07-05 16:59:48 +02:00
H.J. Lu
6259ab3941 ld.so: Always use MAP_COPY to map the first segment [BZ #30452]
The first segment in a shared library may be read-only, not executable.
To support LD_PREFER_MAP_32BIT_EXEC on such shared libraries, we also
check MAP_DENYWRITE to decide if MAP_32BIT should be passed to mmap.
Normally the first segment is mapped with MAP_COPY, which is defined
as (MAP_PRIVATE | MAP_DENYWRITE).  But if the segment alignment is
greater than the page size, MAP_COPY isn't used to allocate enough
space to ensure that the segment can be properly aligned.  Map the
first segment with MAP_COPY in this case to fix BZ #30452.
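
A hedged sketch of the flag logic described above (not ld.so's actual
code; MAP_COPY is a glibc-internal macro, reproduced here from the
definition the commit quotes, and MAP_32BIT is x86-64 Linux specific):

```c
#include <stdio.h>
#include <sys/mman.h>

#ifndef MAP_COPY
# define MAP_COPY (MAP_PRIVATE | MAP_DENYWRITE)
#endif

/* Only the first segment is mapped with MAP_COPY, so the presence of
   MAP_DENYWRITE in the flags marks "first segment" and makes it
   eligible for MAP_32BIT when LD_PREFER_MAP_32BIT_EXEC is in effect.  */
static int
segment_mmap_flags (int base_flags, int prefer_map_32bit_exec)
{
  int flags = base_flags;
  if (prefer_map_32bit_exec && (flags & MAP_DENYWRITE))
    flags |= MAP_32BIT;
  return flags;
}

int
main (void)
{
  printf ("first segment: %#x\n", segment_mmap_flags (MAP_COPY, 1));
  printf ("other segment: %#x\n", segment_mmap_flags (MAP_PRIVATE, 1));
  return 0;
}
```
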
2023-06-30 10:42:42 -07:00
Sergey Bugaev
45e2483a6c x86: Make dl-cache.h and readelflib.c not Linux-specific
These files could be useful to any port that wants to use ld.so.cache.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2023-06-26 10:04:31 -03:00
Frederic Berat
1bc85effd5 sysdeps/{i386, x86_64}/mempcpy_chk.S: fix linknamespace for __mempcpy_chk
On i386 and x86_64, for libc.a specifically, __mempcpy_chk calls
mempcpy, which leads POSIX routines to call the non-POSIX mempcpy
indirectly.

This leads the linknamespace test to fail when glibc is built with
__FORTIFY_SOURCE=3.

Since calling mempcpy doesn't bring any benefit for libc.a, directly
call __mempcpy instead.
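
A hedged sketch of the pattern (not the actual glibc sources; the
wrapper name is hypothetical and abort() stands in for __chk_fail):

```c
#include <stdlib.h>

/* glibc exports __mempcpy; calling it instead of the public mempcpy
   keeps POSIX callers from pulling in a non-POSIX symbol.  */
extern void *__mempcpy (void *dest, const void *src, size_t n);

static void *
my_mempcpy_chk (void *dest, const void *src, size_t n, size_t destlen)
{
  if (n > destlen)
    abort ();                     /* stand-in for __chk_fail */
  return __mempcpy (dest, src, n);
}

int
main (void)
{
  char buf[8];
  char *end = my_mempcpy_chk (buf, "abc", 4, sizeof buf);
  return end == buf + 4 ? 0 : 1;
}
```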

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-06-22 00:20:52 -04:00
H.J. Lu
a8c8889978 x86-64: Use YMM registers in memcmpeq-evex.S
Since assembly source files with the -evex suffix should use YMM
registers, not ZMM registers, include x86-evex256-vecs.h by default to
use YMM registers in memcmpeq-evex.S.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
2023-06-01 09:21:14 -07:00
Paul Pluzhnikov
1e9d5987fd Fix misspellings in sysdeps/x86_64 -- BZ 25337.
Applying this commit results in a bit-identical rebuild of libc.so.6,
math/libm.so.6, elf/ld-linux-x86-64.so.2, and mathvec/libmvec.so.1.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2023-05-23 10:25:11 +00:00
Paul Pluzhnikov
1d2971b525 Fix misspellings in sysdeps/x86_64/fpu/multiarch -- BZ 25337.
Applying this commit results in a bit-identical rebuild of
mathvec/libmvec.so.1 (which is the only binary that gets rebuilt).

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
2023-05-23 03:28:58 +00:00
Joe Ramsay
cd94326a13 Enable libmvec support for AArch64
This patch enables libmvec on AArch64. The proposed change is mainly
implementing build infrastructure to add the new routines to ABI,
tests and benchmarks. I have demonstrated how this all fits together
by adding implementations for vector cos, in both single and double
precision, targeting both Advanced SIMD and SVE.

The implementations of the routines themselves are just loops over the
scalar routine from libm for now, as we are more concerned with
getting the plumbing right at this point. We plan to contribute vector
routines from the Arm Optimized Routines repo that are compliant with
requirements described in the libmvec wiki.
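
A hedged sketch of that "loop over the scalar routine" approach (types
and names are illustrative, not the AArch64 libmvec ABI):

```c
#include <math.h>
#include <stdio.h>

#define VLEN 2  /* illustrative vector length for double precision */

typedef struct { double v[VLEN]; } vdouble;

/* Placeholder vector cos: defer each lane to the scalar libm routine
   for now; a real implementation would use Advanced SIMD or SVE.  */
static vdouble
vector_cos (vdouble x)
{
  vdouble r;
  for (int i = 0; i < VLEN; i++)
    r.v[i] = cos (x.v[i]);
  return r;
}

int
main (void)
{
  vdouble x = { { 0.0, M_PI } };
  vdouble r = vector_cos (x);
  printf ("%f %f\n", r.v[0], r.v[1]);   /* 1.000000 -1.000000 */
  return 0;
}
```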

Building libmvec requires at least GCC 10 for the SVE ACLE. To avoid
raising the minimum GCC version by such a big jump, we allow users to
disable libmvec if their compiler is too old.

Note that at this point users have to manually call the vector math
functions. This seems to be acceptable to some downstream users.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2023-05-03 12:09:49 +01:00
Samuel Thibault
6d4f183495 nptl: move tst-x86-64-tls-1 to nptl-only tests
It is essentially nptl-only.
2023-05-01 12:59:33 +02:00
Sergey Bugaev
c02b26455b hurd: Implement prefer_map_32bit_exec tunable
This makes the prefer_map_32bit_exec tunable no longer Linux-specific.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230423215526.346009-4-bugaevc@gmail.com>
2023-04-24 22:48:35 +02:00
Sergey Bugaev
57df0f16b4 hurd: Add sys/ucontext.h and sigcontext.h for x86_64
This is based on the Linux port's version, but laid out to match Mach's
struct i386_thread_state, much like the i386 version does.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
2023-04-10 20:11:43 +02:00
Florian Weimer
5d1ccdda7b x86_64: Fix asm constraints in feraiseexcept (bug 30305)
The divss instruction clobbers its first argument, and the constraints
need to reflect that.  Fortunately, with GCC 12, generated code does
not actually change, so there is no externally visible bug.
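
A hedged, x86-64-only illustration of the constraint issue (this is not
the glibc source): divss overwrites its destination operand, so that
operand must be an input/output ("+x"), not a pure input.

```c
#include <stdio.h>

int
main (void)
{
  float dividend = 1.0f;
  const float zero = 0.0f;
  /* dividend is both read and written by divss, hence "+x"; declaring
     it as a plain input would let the compiler assume the register
     still holds 1.0f afterwards.  */
  __asm__ __volatile__ ("divss %1, %0" : "+x" (dividend) : "x" (zero));
  printf ("result: %f (FE_DIVBYZERO raised)\n", dividend);
  return 0;
}
```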

Suggested-by: Jakub Jelinek <jakub@redhat.com>
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
2023-04-03 18:40:52 +02:00
Sergey Bugaev
8d873a4904 x86_64: Add rtld-stpncpy & rtld-strncpy
Just like the other existing rtld-str* files, this provides rtld with
usable versions of stpncpy and strncpy.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230319151017.531737-22-bugaevc@gmail.com>
2023-04-03 01:17:56 +02:00
Sergey Bugaev
fb9e7f6732 htl: Add tcb-offsets.sym for x86_64
The source code is the same as sysdeps/i386/htl/tcb-offsets.sym, but of
course the produced tcb-offsets.h will be different.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230319151017.531737-21-bugaevc@gmail.com>
2023-04-03 01:15:30 +02:00
Adhemerval Zanella Netto
33237fe83d Remove --enable-tunables configure option
And make tunables always supported.  The configure option was added in
glibc 2.25, and some features require it (such as the hwcap mask, huge
page support, and lock elision tuning).  It also simplifies the build
permutations.

Changes from v1:
 * Remove glibc.rtld.dynamic_sort changes, it is orthogonal and needs
   more discussion.
 * Cleanup more code.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2023-03-29 14:33:06 -03:00
Joe Ramsay
e4d336f1ac benchtests: Move libmvec benchtest inputs to benchtests directory
This allows other targets to use the same inputs for their own libmvec
microbenchmarks without having to duplicate them in their own
subdirectory.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2023-03-27 17:04:03 +01:00
Sergey Bugaev
35ce4c99e7 htl: Add pthreadtypes-arch.h for x86_64
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230221211932.296459-5-bugaevc@gmail.com>
2023-02-27 23:30:15 +01:00
H.J. Lu
04a558e669 x86_64: Update libm test ulps
Update libm test ulps for

commit 3efbf11fdf
Author: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Date:   Tue Feb 14 11:24:59 2023 +0100

    update auto-libm-test-out-hypot

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2023-02-27 08:39:32 -08:00
Sergey Bugaev
d08ae9c3fb hurd, htl: Add some x86_64-specific code
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230212111044.610942-12-bugaevc@gmail.com>
2023-02-12 16:35:03 +01:00
Samuel Thibault
bfb583e791 htl: Generalize i386 pt-machdep.h to x86 2023-02-12 16:33:39 +01:00
Adhemerval Zanella
22999b2f0f string: Add libc_hidden_proto for memrchr
Although the static linker can optimize it to a local call, this
follows the internal scheme of providing hidden protos and definitions.

Reviewed-by: Carlos Eduardo Seo <carlos.seo@linaro.org>
2023-02-08 17:13:58 -03:00
Adhemerval Zanella
7ea510127e string: Add libc_hidden_proto for strchrnul
Although the static linker can optimize it to a local call, this
follows the internal scheme of providing hidden protos and definitions.

Reviewed-by: Carlos Eduardo Seo <carlos.seo@linaro.org>
2023-02-08 17:13:56 -03:00
Adhemerval Zanella
d1a9b6d8e7 Parameterize op_t from memcopy.h
It moves the op_t definition out to a specific header, adds the
'may-alias' attribute, and cleans up its duplicated definitions.

Checked with a build and check with run-built-tests=no for all major
Linux ABIs.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2023-02-06 16:19:35 -03:00
Noah Goldstein
b2c474f8de x86: Fix strncat-avx2.S reading past length [BZ #30065]
Occurs when `src` has no null-term.

Two cases:

1) The zero-length check does:
```
    test    %rdx, %rdx
    jl      L(zero_len)
```
which doesn't actually check for zero (the check was at some point a
`decq` and the jump condition was never updated).

The fix is simply to change the jump to `jle`, i.e.:
```
    test    %rdx, %rdx
    jle     L(zero_len)
```

2) The length check in the page-cross case, which decides whether we
should continue, does:
```
    cmpq    %r8, %rdx
    jb      L(page_cross_small)
```
which means we would continue searching for the null-term if the length
ends at the end of a page and there is no null-term in `src`.

The fix is to change the jump to `jbe`:
```
    cmpq    %r8, %rdx
    jbe     L(page_cross_small)
```
2023-01-31 19:13:46 -06:00
Joseph Myers
6d7e8eda9b Update copyright dates with scripts/update-copyrights 2023-01-06 21:14:39 +00:00
Florian Weimer
e88b9f0e5c stdio-common: Convert vfprintf and related functions to buffers
vfprintf is entangled with vfwprintf (of course), __printf_fp,
__printf_fphex, __vstrfmon_l_internal, and the strfrom family of
functions.  The latter use the internal snprintf functionality,
so vsnprintf is converted as well.

The simplest conversion is __printf_fphex, followed by
__vstrfmon_l_internal and __printf_fp, and finally
__vfprintf_internal and __vfwprintf_internal.  __vsnprintf_internal
and strfrom* are mostly consuming the new interfaces, so they
are comparatively simple.

__printf_fp is a public symbol, so the FILE *-based interface had to be
preserved.

The __printf_fp rewrite does not change the actual binary-to-decimal
conversion algorithm, and digits are still not emitted directly to
the target buffer.  However, the staging buffer now uses bytes
instead of wide characters, and one buffer copy is eliminated.

The changes are at least performance-neutral in my testing.
Floating point printing and snprintf improved measurably, so that
this Lua script

  for i=1,5000000 do
      print(i, i * math.pi)
  end

runs about 5% faster for me.  To preserve fprintf performance for
a simple "%d" format, this commit has some logic changes under
LABEL (unsigned_number) to avoid additional function calls.  There
are certainly some very easy performance improvements here: binary,
octal and hexadecimal formatting can easily avoid the temporary work
buffer (the number of digits can be computed ahead-of-time using one
of the __builtin_clz* built-ins). Decimal formatting can use a
specialized version of _itoa_word for base 10.
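
A hedged sketch of the ahead-of-time digit count mentioned above
(illustrative only, not the code in this commit):

```c
#include <stdio.h>

/* Hexadecimal digits needed for V: significant bits rounded up to a
   multiple of 4.  __builtin_clzll is undefined for 0, so special-case it.  */
static unsigned int
hex_digits (unsigned long long v)
{
  if (v == 0)
    return 1;
  unsigned int bits = 64u - (unsigned int) __builtin_clzll (v);
  return (bits + 3) / 4;
}

int
main (void)
{
  printf ("%u %u %u\n", hex_digits (0), hex_digits (0xf), hex_digits (0x1234));
  /* prints: 1 1 4 */
  return 0;
}
```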

The existing (inconsistent) width handling between strfmon and printf
is preserved here.  __print_fp_buffer_1 would have to use
__translated_number_width to achieve ISO conformance for printf.

Test expectations in libio/tst-vtables-common.c are adjusted because
the internal staging buffer merges all virtual function calls into
one.

In general, stack buffer usage is greatly reduced, particularly for
unbuffered input streams.  __printf_fp can still use a large buffer
in binary128 mode for %g, though.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2022-12-19 18:56:54 +01:00
Noah Goldstein
b712be5264 x86: Prevent SIGSEGV in memcmp-sse2 when data is concurrently modified [BZ #29863]
In the case of INCORRECT usage of `memcmp(a, b, N)` where `a` and `b`
are concurrently modified as `memcmp` runs, there can be a SIGSEGV
in `L(ret_nonzero_vec_end_0)` because the sequential logic
assumes that `(rdx - 32 + rax)` is a positive 32-bit integer.

To be clear, this change does not mean that such usage of `memcmp` is
supported.  The program behaviour is undefined (UB) in the
presence of data races, and `memcmp` is incorrect when the values
of `a` and/or `b` are modified concurrently (data race). This UB
may manifest itself as a SIGSEGV. That being said, if we can
allow the idiomatic use cases, like those in yottadb with
opportunistic concurrency control (OCC), to execute without a
SIGSEGV, at no cost to regular use cases, then we can aim to
minimize harm to those existing users.

The fix replaces the 32-bit `addl %edx, %eax` with the 64-bit variant
`addq %rdx, %rax`. The one extra byte of code size from using the
64-bit instruction doesn't contribute to overall code size, as the
next target is aligned and has multiple bytes of `nop` padding
before it. As well, all the logic between the add and `ret` still
fits in the same fetch block, so the cost of this change is
basically zero.

The relevant sequential logic can be seen in the following
pseudo-code:
```
    /*
     * rsi = a
     * rdi = b
     * rdx = len - 32
     */
    /* cmp a[0:15] and b[0:15]. Since length is known to be [17, 32]
    in this case, this check is also assumed to cover a[0:(31 - len)]
    and b[0:(31 - len)].  */
    movups  (%rsi), %xmm0
    movups  (%rdi), %xmm1
    PCMPEQ  %xmm0, %xmm1
    pmovmskb %xmm1, %eax
    subl    %ecx, %eax
    jnz L(END_NEQ)

    /* cmp a[len-16:len-1] and b[len-16:len-1].  */
    movups  16(%rsi, %rdx), %xmm0
    movups  16(%rdi, %rdx), %xmm1
    PCMPEQ  %xmm0, %xmm1
    pmovmskb %xmm1, %eax
    subl    %ecx, %eax
    jnz L(END_NEQ2)
    ret

L(END2):
    /* Position first mismatch.  */
    bsfl    %eax, %eax

    /* The sequential version is able to assume this value is a
    positive 32-bit value because the first check included bytes in
    range a[0:(31 - len)] and b[0:(31 - len)] so `eax` must be
    greater than `31 - len` so the minimum value of `edx` + `eax` is
    `(len - 32) + (32 - len) >= 0`. In the concurrent case, however,
    `a` or `b` could have been changed, so a mismatch in `eax` less than
    or equal to `(31 - len)` is possible (the new low bound is `(16 -
    len)`). This can result in a negative 32-bit signed integer, which
    when zero-extended to 64 bits is a random large value that is out
    of bounds. */
    addl %edx, %eax

    /* Crash here because 32-bit negative number in `eax` zero
    extends to out of bounds 64-bit offset.  */
    movzbl  16(%rdi, %rax), %ecx
    movzbl  16(%rsi, %rax), %eax
```

This fix is quite simple: just make the `addl %edx, %eax` 64-bit (i.e.
`addq %rdx, %rax`). This prevents the 32-bit zero extension,
and since `eax` still has a low bound of `16 - len`, `rdx + rax`
is bounded by `(len - 32) + (16 - len) >= -16`. Since we have a
fixed offset of `16` in the memory access, this must be in bounds.
2022-12-15 09:09:35 -08:00
H.J. Lu
e5672763c4 x86-64 strncpy: Properly handle the length parameter [BZ# 29839]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits.  The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.

This patch fixes strncpy for x32.  Tested on x86-64 and x32.  On x86-64,
libc.so is the same with and without the fix.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
2022-12-02 08:18:41 -08:00
H.J. Lu
f566b02852 x86-64 strncat: Properly handle the length parameter [BZ# 24097]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits.  The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.

This patch fixes strncat for x32.  Tested on x86-64 and x32.  On x86-64,
libc.so is the same with and without the fix.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
2022-12-02 08:18:10 -08:00
Noah Goldstein
f704192911 x86/fpu: Factor out shared avx2/avx512 code in svml_{s|d}_wrapper_impl.h
The code is exactly the same for the two, so it is better to only
maintain one version.

All math and mathvec tests pass on x86.
2022-11-27 20:22:49 -08:00
Noah Goldstein
72f6a5a0ed x86/fpu: Cleanup code in svml_{s|d}_wrapper_impl.h
1. Remove unnecessary spills.
2. Fix some small missed optimizations.

All math and mathvec tests pass on x86.
2022-11-27 20:22:49 -08:00
Noah Goldstein
d371be4b11 x86/fpu: Reformat svml_{s|d}_wrapper_impl.h
Just reformat with the style convention used in other x86 assembler
files.  This doesn't change libm.so or libmvec.so.
2022-11-27 20:22:49 -08:00
Noah Goldstein
95177b78ff x86/fpu: Fix misspelled evex512 section in variety of svml files
```
.section .text.evex512, "ax", @progbits
```

was misspelled as:

```
.section .text.exex512, "ax", @progbits
```
2022-11-27 20:22:49 -08:00
Noah Goldstein
e1d082d9de x86/fpu: Add missing ISA sections to variety of svml files
Many sse4/avx2/avx512 files were just in .text.
2022-11-27 20:22:49 -08:00