Commit Graph

34582 Commits

Adhemerval Zanella
3c15600e79 elf: Fix elf/tst-pldd with --enable-hardcoded-path-in-tests (BZ#24506)
The elf/tst-pldd test (added by 1a4c27355e to fix BZ#18035) does
not expect the hardcoded paths that are output by pldd when the test
is built with --enable-hardcoded-path-in-tests.  Instead of showing
the ABI-installed library names for loader and libc (such as
ld-linux-x86-64.so.2 and libc.so.6 for x86_64), pldd shows the
default built ld.so and libc.so.

This makes the test fail because the expected loader/libc names do not match.

This patch fixes the elf/tst-pldd test by adding the canonical ld.so and
libc.so names to the expected list of possible outputs when parsing
the result output from pldd.  The test now handles both the default
build and the --enable-hardcoded-path-in-tests option.

Checked on x86_64-linux-gnu (built with and without
--enable-hardcoded-path-in-tests) and i686-linux-gnu.

	* elf/tst-pldd.c (in_str_list): New function.
	(do_test): Add default names for ld and libc as one option.
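
A minimal sketch of what such a lookup helper could look like (hypothetical code; the actual elf/tst-pldd implementation may differ):

```
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the helper named in the ChangeLog entry:
   return nonzero if NAME matches one of the entries in the
   NULL-terminated LIST.  */
static int
in_str_list (const char *name, const char *const list[])
{
  for (size_t i = 0; list[i] != NULL; i++)
    if (strcmp (name, list[i]) == 0)
      return 1;
  return 0;
}

int
main (void)
{
  /* Accept either the ABI-installed name or the in-tree default name.  */
  static const char *const ld_names[] =
    { "ld-linux-x86-64.so.2", "ld.so", NULL };
  printf ("%d\n", in_str_list ("ld.so", ld_names));   /* prints 1 */
  return 0;
}
```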

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit b2af6fb2ed)
2024-05-10 07:50:59 -07:00
H.J. Lu
6fc6b074ea Force DT_RPATH for --enable-hardcoded-path-in-tests
On Fedora 40/x86-64, the linker enables --enable-new-dtags by default, which
generates DT_RUNPATH instead of DT_RPATH.  Unlike DT_RPATH, DT_RUNPATH
only applies to DT_NEEDED entries in the executable and doesn't apply
to DT_NEEDED entries in shared libraries which are loaded via DT_NEEDED
entries in the executable.  Some glibc tests have libstdc++.so.6 in
DT_NEEDED, which has libm.so.6 in DT_NEEDED.  When DT_RUNPATH is generated,
/lib64/libm.so.6 is loaded for such tests.  If the newly built glibc is
older than glibc 2.36, these tests fail with

assert/tst-assert-c++: /export/build/gnu/tools-build/glibc-gitlab-release/build-x86_64-linux/libc.so.6: version `GLIBC_2.36' not found (required by /lib64/libm.so.6)
assert/tst-assert-c++: /export/build/gnu/tools-build/glibc-gitlab-release/build-x86_64-linux/libc.so.6: version `GLIBC_ABI_DT_RELR' not found (required by /lib64/libm.so.6)

Pass -Wl,--disable-new-dtags to linker when building glibc tests with
--enable-hardcoded-path-in-tests.  This fixes BZ #31719.

Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
(cherry picked from commit 2dcaf70643)
2024-05-10 06:16:48 -07:00
H.J. Lu
3d443b8d6a Don't make errlist.o[s].d depend on errlist-compat.h
stdio-common/errlist.o.d and stdio-common/errlist.os.d aren't generated
alongside stdio-common/errlist-compat.h.  Don't make them depend on
stdio-common/errlist-compat.h to avoid an infinite loop with make-4.4.  This
fixes BZ #31330.

Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>
2024-05-09 20:12:05 -07:00
Sergei Trofimovich
b1241520a0 Makerules: fix MAKEFLAGS assignment for upcoming make-4.4 [BZ# 29564]
make-4.4 will add long flags to MAKEFLAGS variable:

    * WARNING: Backward-incompatibility!
      Previously only simple (one-letter) options were added to the MAKEFLAGS
      variable that was visible while parsing makefiles.  Now, all options
      are available in MAKEFLAGS.

This causes locale builds to fail when long options are used:

    $ make --shuffle
    ...
    make  -C localedata install-locales
    make: invalid shuffle mode: '1662724426r'

The change fixes it by passing each option via whitespace and dashes.
That way each option is appended to both the single-word form and the
whitespace-separated form.

While at it, fix --silent mode detection in $(MAKEFLAGS) by filtering
out --long-options.  Otherwise options like the --shuffle flag enable
silent mode unintentionally.  The $(silent-make) variable consolidates
the checks.

Resolves: BZ# 29564

CC: Paul Smith <psmith@gnu.org>
CC: Siddhesh Poyarekar <siddhesh@gotplt.org>
Signed-off-by: Sergei Trofimovich <slyich@gmail.com>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
(cherry picked from commit 2d7ed98add)
2024-05-09 20:12:03 -07:00
Florian Weimer
ba7a5e3e0f elf: Disable some subtests of ifuncmain1, ifuncmain5 for !PIE
(cherry picked from commit 9cc9d61ee1)
2024-05-09 19:06:57 -07:00
Aurelien Jarno
b6e9f21071 testsuite: stdlib/isomac.c: add missing include
When running the testsuite, building stdlib/isomac.c outputs the
following warning:

  gcc -O   -D_GNU_SOURCE -DIS_IN_build -include /home/aurel32/glibc-build/config.h isomac.c -o /home/aurel32/glibc-build/stdlib/isomac
  isomac.c: In function ‘get_null_defines’:
  isomac.c:260:3: warning: implicit declaration of function ‘close’; did you mean ‘pclose’? [-Wimplicit-function-declaration]
     close (fd);
     ^~~~~
     pclose

Fix that by adding the <unistd.h> include.

Changelog:
	* stdlib/isomac.c: Include <unistd.h>.
(cherry picked from commit 11f382ee78)
2024-05-09 19:05:44 -07:00
Florian Weimer
76d9e44c59 resolv/tst-idna_name_classify: Isolate from system libraries
Loading NSS modules from static binaries uses installed system
libraries if LD_LIBRARY_PATH is not set.

(cherry picked from commit 8dddf0bd5a)
2024-05-09 15:32:52 -07:00
Wilco Dijkstra
2a7a9d3ac8 aarch64: Use memcpy_simd as the default memcpy
Since __memcpy_simd is the fastest memcpy on almost all cores, replace
the generic memcpy with it.

(cherry picked from commit 531717afbc)
2024-04-09 19:48:23 +01:00
Sunil K Pandey
d04b63770f x86_64: Optimize ffsll function code size.
The ffsll function randomly regresses by ~20%, depending on how the code
gets aligned in memory.  The ffsll function code size is 17 bytes.  Since
the default function alignment is 16 bytes, the function can be loaded at
memory aligned to 16, 32, 48 or 64 bytes.  When ffsll is loaded at a 16,
32 or 64 byte alignment, the entire code fits in a single 64-byte cache
line.  When it is loaded at a 48-byte alignment, it splits across two
cache lines, hence the random regression.

Reducing the ffsll function size from 17 bytes to 12 bytes ensures that
it will always fit in a single 64-byte cache line.

This patch fixes the random ffsll performance regression.
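
For reference, ffsll returns the 1-based index of the least significant set bit; the optimized x86_64 version is hand-written assembly, so the following is only a behavioral sketch in C:

```
#include <stdio.h>

/* Behavioral sketch of ffsll: 1-based index of the least significant
   set bit, or 0 for a zero argument.  */
static int
ffsll_ref (long long x)
{
  return x == 0 ? 0 : __builtin_ctzll ((unsigned long long) x) + 1;
}

int
main (void)
{
  printf ("%d %d %d\n", ffsll_ref (0), ffsll_ref (1), ffsll_ref (0x10));
  /* Prints: 0 1 5 */
  return 0;
}
```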

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 9d94997b5f)
2024-02-08 10:07:26 -08:00
Noah Goldstein
bb7c57219d x86: Fix incorrect scope of setting shared_per_thread [BZ# 30745]
This code:

```
    if (shared_per_thread > 0 && threads > 0)
      shared_per_thread /= threads;
```

was accidentally moved inside the else scope, which does not match how
it was placed previously (before af992e7abd).

This patch fixes that by putting the division after the `else` block.
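
Schematically, the intended control flow is something like the following (a simplified sketch with made-up parameter names, not the actual sysdeps/x86 cache-info code):

```
/* Simplified sketch: the per-thread division must run no matter which
   branch computed the value, so it belongs after the if/else rather
   than inside the else scope.  */
static void
compute_shared_per_thread (long int *shared_per_thread, long int fallback,
                           unsigned int threads, int have_l3_info)
{
  if (have_l3_info)
    {
      /* ... derive *shared_per_thread from the detected L3 size ... */
    }
  else
    {
      /* ... fall back to an L2-based estimate ... */
      *shared_per_thread = fallback;
    }

  /* The fix: keep the division outside the else block.  */
  if (*shared_per_thread > 0 && threads > 0)
    *shared_per_thread /= threads;
}
```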

(cherry picked from commit 084fb31bc2)
2023-09-11 22:48:28 -05:00
Noah Goldstein
83ce856e77 x86: Use 3/4*sizeof(per-thread-L3) as low bound for NT threshold.
On some machines we end up with incomplete cache information. This can
make the new calculation of `sizeof(total-L3)/custom-divisor` end up
lower than intended (and lower than the prior value). So reintroduce
the old bound as a lower bound to avoid potentially regressing code
where we don't have complete information to make the decision.
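
A rough sketch of the combined bound (simplified, made-up names; not the actual dl-cacheinfo.h code):

```
/* Sketch: never let the new sizeof(L3)/4 estimate fall below the old
   3/4 * per-thread-L3 bound, so incomplete cache information cannot
   push the threshold under the prior default.  */
static unsigned long int
nt_threshold_bounded (unsigned long int l3_total,
                      unsigned long int l3_per_thread)
{
  unsigned long int new_est = l3_total / 4;
  unsigned long int old_bound = l3_per_thread * 3 / 4;
  return new_est > old_bound ? new_est : old_bound;
}
```
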
Reviewed-by: DJ Delorie <dj@redhat.com>

(cherry picked from commit 8b9a0af8ca)
2023-09-11 22:48:28 -05:00
Noah Goldstein
f578da1020 x86: Fix slight bug in shared_per_thread cache size calculation.
After:
```
    commit af992e7abd
    Author: Noah Goldstein <goldstein.w.n@gmail.com>
    Date:   Wed Jun 7 13:18:01 2023 -0500

        x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
```

which split `shared` (cumulative cache size) from `shared_per_thread`
(cache size per socket), `shared_per_thread` *can* be slightly off from
the previous calculation.

Previously we added `core` even if `threads_l2` was invalid, and only
used `threads_l2` to divide `core` if it was present. The changed
version only included `core` if `threads_l2` was valid.

This change restores the old behavior if `threads_l2` is invalid by
adding the entire value of `core`.
Reviewed-by: DJ Delorie <dj@redhat.com>

(cherry picked from commit 47f7472178)
2023-09-11 22:48:28 -05:00
Noah Goldstein
d4386d345e x86: Increase non_temporal_threshold to roughly sizeof_L3 / 4
Currently `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
ncores_per_socket'.  This patch updates that value to roughly
'sizeof_L3 / 4'.

The original value (specifically dividing by `ncores_per_socket`) was
meant to limit the amount of other threads' data a `memcpy`/`memset`
could evict.

Dividing by 'ncores_per_socket', however, leads to exceedingly low
non-temporal thresholds and to using non-temporal stores in cases
where REP MOVSB is multiple times faster.

Furthermore, non-temporal stores are written directly to main memory,
so using them at a size much smaller than L3 can place soon-to-be-accessed
data much further away than it otherwise would be.  As well, modern
machines are able to detect streaming patterns (especially if REP MOVSB
is used) and provide LRU hints to the memory subsystem.  This in effect
caps the total amount of eviction at 1/cache_associativity, far below
meaningfully thrashing the entire cache.

As best I can tell, the benchmarks that led to this small threshold
were done comparing non-temporal stores versus standard cacheable
stores.  A better comparison (linked below) is to REP MOVSB which,
on the measured systems, is nearly 2x faster than non-temporal stores
at the low end of the previous threshold, and within 10% for copies
over 100MB (well past even the current threshold).  In cases with a
low number of threads competing for bandwidth, REP MOVSB is ~2x faster
up to `sizeof_L3`.

The divisor of `4` is a somewhat arbitrary value. From benchmarks it
seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
such as Broadwell prefer something closer to `8`. This patch is meant
to be followed up by another one to make the divisor cpu-specific, but
in the meantime (and for easier backporting), this patch settles on
`4` as a middle-ground.
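
A simplified illustration of the change (made-up helper names; the real logic lives in the x86 cache-info code):

```
#include <stdio.h>

/* Previous heuristic: 3/4 of the per-core share of L3.  */
static unsigned long int
nt_threshold_old (unsigned long int l3_size, unsigned int ncores_per_socket)
{
  return l3_size * 3 / 4 / ncores_per_socket;
}

/* New heuristic: a quarter of the total L3, independent of the core
   count, so the threshold no longer shrinks on many-core sockets.  */
static unsigned long int
nt_threshold_new (unsigned long int l3_size)
{
  return l3_size / 4;
}

int
main (void)
{
  unsigned long int l3 = 32UL * 1024 * 1024;   /* e.g. a 32 MiB L3 */
  printf ("old (16 cores): %lu\n", nt_threshold_old (l3, 16));  /* 1572864 */
  printf ("new:            %lu\n", nt_threshold_new (l3));      /* 8388608 */
  return 0;
}
```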

Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
stores were done using:
https://github.com/goldsteinn/memcpy-nt-benchmarks

Sheets results (also available as a PDF on GitHub):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
Reviewed-by: DJ Delorie <dj@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>

(cherry picked from commit af992e7abd)
2023-09-11 22:48:28 -05:00
Florian Weimer
66b1fe1d4f debug: Mark libSegFault.so as NODELETE
The signal handler installed in the ELF constructor cannot easily
be removed again (because the program may have changed handlers
in the meantime).  Mark the object as NODELETE so that the registered
handler function is never unloaded.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 23ee92deea)
2023-07-21 16:41:11 +02:00
Noah Goldstein
5ac4f45c1b x86: Fix wcsnlen-avx2 page cross length comparison [BZ #29591]
The previous implementation adjusted the length (rsi) to match the
byte count (eax), but since there is no bound on the length this can
cause overflow.

The fix is to just convert the byte count (eax) to a length by dividing
by sizeof (wchar_t) before the comparison.
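
A C-level sketch of that idea (the real code is AVX2 assembly, so the names here are illustrative only):

```
#include <stddef.h>
#include <wchar.h>

/* Sketch: convert the matched byte count into a wide-character count
   before comparing it against the unbounded maxlen, instead of
   scaling maxlen up and risking overflow.  */
static size_t
clamp_match (size_t byte_count, size_t maxlen)
{
  size_t wchar_count = byte_count / sizeof (wchar_t);
  return wchar_count < maxlen ? wchar_count : maxlen;
}
```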

Full check passes on x86-64 and build succeeds w/ and w/o multiarch.

(cherry picked from commit b0969fa53a)
2022-11-24 18:05:33 -08:00
Sunil K Pandey
114e3349de x86-64: Require BMI2 for avx2 functions [BZ #29611]
This patch fixes BZ #29611
2022-09-28 19:04:13 -07:00
H.J. Lu
986e7b911f x86-64: Require BMI2 for strchr-avx2.S [BZ #29611]
Since strchr-avx2.S updated by

commit 1f745ecc21
Author: noah <goldstein.w.n@gmail.com>
Date:   Wed Feb 3 00:38:59 2021 -0500

    x86-64: Refactor and improve performance of strchr-avx2.S

uses sarx:

c4 e2 72 f7 c0       	sarx   %ecx,%eax,%eax

for strchr-avx2 family functions, require BMI2 in ifunc-impl-list.c and
ifunc-avx2.h.

This fixes BZ #29611.

(cherry picked from commit 83c5b36822)
2022-09-28 19:04:00 -07:00
Andreas Schwab
6da40102c7 Add test for bug 29530
This tests for a bug that was introduced in commit edc1686af0 ("vfprintf:
Reuse work_buffer in group_number") and fixed as a side effect of commit
6caddd34bd ("Remove most vfprintf width/precision-dependent allocations
(bug 14231, bug 26211).").

(cherry picked from commit ca6466e8be)
2022-08-30 11:07:43 +02:00
Arjun Shankar
f22d1cb932 support: Add xsetlocale function
(cherry picked from commit cce35a50c1)
2022-08-30 11:07:43 +02:00
Joseph Myers
b2b761bc47 Fix memmove call in vfprintf-internal.c:group_number
A recent GCC mainline change introduces errors of the form:

vfprintf-internal.c: In function 'group_number':
vfprintf-internal.c:2093:15: error: 'memmove' specified bound between 9223372036854775808 and 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Werror=stringop-overflow=]
 2093 |               memmove (w, s, (front_ptr -s) * sizeof (CHAR_T));
      |               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is a genuine bug in the glibc code: s > front_ptr is always true
at this point in the code, and the intent is clearly for the
subtraction to be the other way round.  The other arguments to the
memmove call here also appear to be wrong; w and s point just *after*
the destination and source for copying the rest of the number, so the
size needs to be subtracted to get appropriate pointers for the
copying.  Adjust the memmove call to conform to the apparent intent of
the code, so fixing the -Wstringop-overflow error.
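
A standalone illustration of the pointer arithmetic described above, using a hypothetical buffer (not the actual vfprintf-internal.c code):

```
#include <stdio.h>
#include <string.h>

int
main (void)
{
  /* FRONT_PTR marks the start of the data to copy; S and W point just
     past the source and destination, and s > front_ptr always holds,
     so the copy length is s - front_ptr and both pointers have to be
     moved back by that amount.  */
  char buf[] = "XXXX1234";
  char *front_ptr = buf + 4;
  char *s = buf + 8;            /* just past the source "1234" */
  char *w = buf + 4;            /* just past the destination */
  size_t len = s - front_ptr;   /* 4, not front_ptr - s */

  memmove (w - len, front_ptr, len * sizeof (char));
  printf ("%.4s\n", buf);       /* prints "1234" */
  return 0;
}
```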

Now, if the original code were ever executed, a buffer overrun would
result.  However, I believe this code (introduced in commit
edc1686af0, "vfprintf: Reuse work_buffer
in group_number", so in glibc 2.26) is unreachable in prior glibc
releases (so there is no need for a bug in Bugzilla, no need to
consider any backports unless someone wants to build older glibc
releases with GCC 12 and no possibility of this buffer overrun
resulting in a security issue).

work_buffer is 1000 bytes / 250 wide characters.  This case is only
reachable if an initial part of the number, plus a grouped copy of the
rest of the number, fail to fit in that space; that is, if the grouped
number fails to fit in the space.  In the wide character case,
grouping is always one wide character, so even with a locale (of which
there aren't any in glibc) grouping every digit, a number would need
to occupy at least 125 wide characters to overflow, and a 64-bit
integer occupies at most 23 characters in octal including a leading 0.
In the narrow character case, the multibyte encoding of the grouping
separator would need to be at least 42 bytes to overflow, again
supposing grouping every digit, but MB_LEN_MAX is 16.  So even if we
admit the case of artificially constructed locales not shipped with
glibc, given that such a locale would need to use one of the character
sets supported by glibc, this code cannot be reached at present.  (And
POSIX only actually specifies the ' flag for grouping for decimal
output, though glibc acts on it for other bases as well.)

With binary output (if you consider use of grouping there to be
valid), you'd need a 15-byte multibyte character for overflow; I don't
know if any supported character set has such a character (if, again,
we admit constructed locales using grouping every digit and a grouping
separator chosen to have a multibyte encoding as long as possible, as
well as accepting use of grouping with binary), but given that we have
this code at all (clearly it's not *correct*, or in accordance with
the principle of avoiding arbitrary limits, to skip grouping on
running out of internal space like that), I don't think it should need
any further changes for binary printf support to go in.

On the other hand, support for large sizes of _BitInt in printf (see
the N2858 proposal) *would* require something to be done about such
arbitrary limits (presumably using dynamic allocation in printf again,
for sufficiently large _BitInt arguments only - currently only
floating-point uses dynamic allocation, and, as previously discussed,
that could actually be replaced by bounded allocation given smarter
code).

Tested with build-many-glibcs.py for aarch64-linux-gnu (GCC mainline).
Also tested natively for x86_64.

(cherry picked from commit db6c4935fa)
2022-08-30 11:07:43 +02:00
Joseph Myers
5b5dd5a43d Remove most vfprintf width/precision-dependent allocations (bug 14231, bug 26211).
The vfprintf implementation (used for all printf-family functions)
contains complicated logic to allocate internal buffers of a size
depending on the width and precision used for a format, using either
malloc or alloca depending on that size, and with consequent checks
for size overflow and allocation failure.

As noted in bug 26211, the version of that logic used when '$' plus
argument number formats are in use is missing the overflow checks,
which can result in segfaults (quite possibly exploitable, I didn't
try to work that out) when the width or precision is in the range
0x7fffffe0 through 0x7fffffff (maybe smaller values as well in the
wprintf case on 32-bit systems, when the multiplication by sizeof
(CHAR_T) can overflow).

All that complicated logic in fact appears to be useless.  As far as I
can tell, there has been no need (outside the floating-point printf
code, which does its own allocations) for allocations depending on
width or precision since commit
3e95f6602b ("Remove limitation on size
of precision for integers", Sun Sep 12 21:23:32 1999 +0000).  Thus,
this patch removes that logic completely, thereby fixing both problems
with excessive allocations for large width and precision for
non-floating-point formats, and the problem with missing overflow
checks with such allocations.  Note that this does have the
consequence that width and precision up to INT_MAX are now allowed
where previously INT_MAX / sizeof (CHAR_T) - EXTSIZ or more would have
been rejected, so could potentially expose any other overflows where
the value would previously have been rejected by those removed checks.

I believe this completely fixes bugs 14231 and 26211.

Excessive allocations are still possible in the floating-point case
(bug 21127), as are other integer or buffer overflows (see bug 26201).
This does not address the cases where a precision larger than INT_MAX
(embedded in the format string) would be meaningful without printf's
return value overflowing (when it's used with a string format, or %g
without the '#' flag, so the actual output will be much smaller), as
mentioned in bug 17829 comment 8; using size_t internally for
precision to handle that case would be complicated by struct
printf_info being a public ABI.  Nor does it address the matter of an
INT_MIN width being negated (bug 17829 comment 7; the same logic
appears a second time in the file as well, in the form of multiplying
by -1).  There may be other sources of memory allocations with malloc
in printf functions as well (bug 24988, bug 16060).  From inspection,
I think there are also integer overflows in two copies of "if ((width
-= len) < 0)" logic (where width is int, len is size_t and a very long
string could result in spurious padding being output on a 32-bit
system before printf overflows the count of output characters).

Tested for x86-64 and x86.

(cherry picked from commit 6caddd34bd)
2022-08-30 11:07:36 +02:00
Adhemerval Zanella
b307c13ef3 stdio: Add tests for printf multibyte conversion leak [BZ#25691]
Checked on x86_64-linux-gnu and i686-linux-gnu.

(cherry picked from commit 910a835dc9)
2022-08-30 11:03:58 +02:00
Florian Weimer
e68db8bf5a stdio: Remove memory leak from multibyte conversion [BZ#25691]
This is an updated version of a previous patch [1] with the
following changes:

  - Use compiler overflow builtins on done_add_func function.
  - Define the scratch +utstring_converted_wide_string using
    CHAR_T.
  - Added a testcase and mention the bug report.
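
For reference, the compiler overflow builtins mentioned in the first item work roughly like this (a generic sketch, not the actual done_add_func code):

```
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Generic sketch: add LENGTH to *DONE only if the sum does not
   overflow an int; otherwise report failure to the caller.  */
static bool
done_add_checked (int *done, size_t length)
{
  int result;
  if (length > INT_MAX
      || __builtin_add_overflow (*done, (int) length, &result))
    return false;
  *done = result;
  return true;
}

int
main (void)
{
  int done = INT_MAX - 1;
  printf ("%d\n", done_add_checked (&done, 5));   /* prints 0 (overflow) */
  return 0;
}
```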

Both default and wide printf functions might leak memory when
handling multibyte character conversion, depending on the size
of the input (whether or not __libc_use_alloca triggers the fallback
heap allocation).

This patch fixes it by removing the extra memory allocation on
string formatting with conversion parts.

The testcase uses an input argument size that triggers memory leaks
on unpatched code (using a scratch buffer, the threshold to use
heap allocation is lower).

Checked on x86_64-linux-gnu and i686-linux-gnu.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>

[1] https://sourceware.org/pipermail/libc-alpha/2017-June/082098.html

(cherry picked from commit 3cc4a8367c)
2022-08-30 11:03:56 +02:00
H.J. Lu
efe736ebe7 NEWS: Add a bug fix entry for BZ #28896 2022-02-18 19:14:26 -08:00
Noah Goldstein
8c21bd940e x86: Fix TEST_NAME to make it a string in tst-strncmp-rtm.c
Previously TEST_NAME was passing a function pointer.  This didn't fail
because of the -Wno-error flag (used to allow for overflow sizes passed
to strncmp/wcsncmp).

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
(cherry picked from commit b98d0bbf74)
2022-02-18 19:14:01 -08:00
Noah Goldstein
9e69d744b3 x86: Test wcscmp RTM in the wcsncmp overflow case [BZ #28896]
In the overflow fallback, strncmp-avx2-rtm and wcsncmp-avx2-rtm would
call strcmp-avx2 and wcscmp-avx2 respectively.  These have no checks
around vzeroupper and would trigger spurious aborts.  This commit
fixes that.

test-strcmp, test-strncmp, test-wcscmp, and test-wcsncmp all pass on
AVX2 machines with and without RTM.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

(cherry picked from commit 7835d611af)
2022-02-18 19:13:56 -08:00
Noah Goldstein
428e34eab5 x86: Fallback {str|wcs}cmp RTM in the ncmp overflow case [BZ #28896]
In the overflow fallback, strncmp-avx2-rtm and wcsncmp-avx2-rtm would
call strcmp-avx2 and wcscmp-avx2 respectively.  These have no checks
around vzeroupper and would trigger spurious aborts.  This commit
fixes that.

test-strcmp, test-strncmp, test-wcscmp, and test-wcsncmp all pass on
AVX2 machines with and without RTM.

Co-authored-by: H.J. Lu <hjl.tools@gmail.com>

(cherry picked from commit c627209832)
2022-02-18 19:13:50 -08:00
H.J. Lu
a486152569 string: Add a testcase for wcsncmp with SIZE_MAX [BZ #28755]
Verify that wcsncmp (L("abc"), L("abd"), SIZE_MAX) == 0.  The new test
fails without

commit ddf0992cf5
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Sun Jan 9 16:02:21 2022 -0600

    x86: Fix __wcsncmp_avx2 in strcmp-avx2.S [BZ# 28755]

and

commit 7e08db3359
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Sun Jan 9 16:02:28 2022 -0600

    x86: Fix __wcsncmp_evex in strcmp-evex.S [BZ# 28755]

This is for BZ #28755.

Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>

(cherry picked from commit aa5a720056)
2022-02-17 11:43:34 -08:00
H.J. Lu
28689d6255 x86-64: Test strlen and wcslen with 0 in the RSI register [BZ #28064]
commit 6f573a27b6
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Wed Jun 23 01:19:34 2021 -0400

    x86-64: Add wcslen optimize for sse4.1

added wcsnlen-sse4.1 to the wcslen ifunc implementation list.  Since the
random value in the RSI register is larger than the wide-character
string length in the existing wcslen test, it didn't trigger the wcslen
test failure.  Add a test to force 0 into the RSI register before calling
wcslen.

(cherry picked from commit a6e7c3745d)
2022-02-01 12:51:16 -08:00
Noah Goldstein
352cb39fa0 x86: Remove wcsnlen-sse4_1 from wcslen ifunc-impl-list [BZ #28064]
The following commit

commit 6f573a27b6
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Wed Jun 23 01:19:34 2021 -0400

    x86-64: Add wcslen optimize for sse4.1

Added wcsnlen-sse4.1 to the wcslen ifunc implementation list and did
not add wcslen-sse4.1 to wcslen ifunc implementation list. This commit
fixes that by removing wcsnlen-sse4.1 from the wcslen ifunc
implementation list and adding wcslen-sse4.1 to the ifunc
implementation list.

Testing:
test-wcslen.c, test-rsi-wcslen.c, and test-rsi-strlen.c are passing as
well as all other tests in wcsmbs and string.

Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
(cherry picked from commit 0679442def)
2022-02-01 12:51:09 -08:00
H.J. Lu
5d94c86a47 x86: Black list more Intel CPUs for TSX [BZ #27398]
Disable TSX and enable RTM_ALWAYS_ABORT for Intel CPUs listed in:

https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html

This fixes BZ #27398.

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
(cherry picked from commit 1e000d3d33)
2022-02-01 07:38:16 -08:00
H.J. Lu
7aac95739a x86: Check RTM_ALWAYS_ABORT for RTM [BZ #28033]
From

https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html

* Intel TSX will be disabled by default.
* The processor will force abort all Restricted Transactional Memory (RTM)
  transactions by default.
* A new CPUID bit CPUID.07H.0H.EDX[11](RTM_ALWAYS_ABORT) will be enumerated,
  which is set to indicate to updated software that the loaded microcode is
  forcing RTM abort.
* On processors that enumerate support for RTM, the CPUID enumeration bits
  for Intel TSX (CPUID.07H.0H.EBX[11] and CPUID.07H.0H.EBX[4]) continue to
  be set by default after microcode update.
* Workloads that were benefited from Intel TSX might experience a change
  in performance.
* System software may use a new bit in Model-Specific Register (MSR) 0x10F
  TSX_FORCE_ABORT[TSX_CPUID_CLEAR] functionality to clear the Hardware Lock
  Elision (HLE) and RTM bits to indicate to software that Intel TSX is
  disabled.

1. Add RTM_ALWAYS_ABORT to CPUID features.
2. Set RTM usable only if RTM_ALWAYS_ABORT isn't set.  This skips the
string/tst-memchr-rtm etc. testcases on the affected processors, which
always fail after a microcode update.
3. Check RTM feature, instead of usability, against /proc/cpuinfo.
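
For illustration, the new CPUID bit can be queried like this with GCC's cpuid.h (glibc itself goes through its internal cpu-features code, so this is only a sketch):

```
#include <cpuid.h>
#include <stdio.h>

int
main (void)
{
  unsigned int eax, ebx, ecx, edx;
  if (__get_cpuid_count (7, 0, &eax, &ebx, &ecx, &edx))
    {
      int rtm = (ebx >> 11) & 1;               /* CPUID.07H.0H.EBX[11] */
      int rtm_always_abort = (edx >> 11) & 1;  /* CPUID.07H.0H.EDX[11] */
      printf ("RTM=%d RTM_ALWAYS_ABORT=%d -> RTM usable=%d\n",
              rtm, rtm_always_abort, rtm && !rtm_always_abort);
    }
  return 0;
}
```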

This fixes BZ #28033.

(cherry picked from commit ea8e465a6b)
2022-02-01 07:38:09 -08:00
H.J. Lu
a0ed5893fc NEWS: Add a bug fix entry for BZ #27974 2022-01-27 16:27:20 -08:00
Noah Goldstein
ac1d6f25d6 String: Add overflow tests for strnlen, memchr, and strncat [BZ #27974]
This commit adds tests for a bug in the wide char variant of the
functions where the implementation may assume that maxlen for wcsnlen
or n for wmemchr/strncat will not overflow when multiplied by
sizeof(wchar_t).

These tests show the following implementations failing on x86_64:

wcsnlen-sse4_1
wcsnlen-avx2

wmemchr-sse2
wmemchr-avx2

strncat would fail as well if it were on a system that preferred
either of the failing wcsnlen implementations, as it relies on
wcsnlen.
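
The overflow scenario looks roughly like this (a sketch of the kind of call the new tests exercise, not the test harness itself):

```
#include <stdint.h>
#include <stdio.h>
#include <wchar.h>

int
main (void)
{
  /* With a maxlen near SIZE_MAX, an implementation that internally
     computes maxlen * sizeof (wchar_t) wraps around and may read far
     past the string; a correct wcsnlen still returns 3 here.  */
  const wchar_t s[] = L"abc";
  printf ("%zu\n", wcsnlen (s, SIZE_MAX));
  return 0;
}
```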

Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
(cherry picked from commit da5a6fba0f)
2022-01-27 16:26:26 -08:00
Noah Goldstein
f9fe740ec4 x86: Optimize strlen-evex.S
No bug. This commit optimizes strlen-evex.S. The
optimizations are mostly small things but they add up to roughly
10-30% performance improvement for strlen. The results for strnlen are
bit more ambiguous. test-strlen, test-strnlen, test-wcslen, and
test-wcsnlen are all passing.

Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
(cherry picked from commit 4ba6558684)
2022-01-27 16:26:21 -08:00
Noah Goldstein
dd92fa3029 x86: Fix overflow bug in wcsnlen-sse4_1 and wcsnlen-avx2 [BZ #27974]
This commit fixes the bug mentioned in the previous commit.

The previous implementations of wcsnlen in these files relied
on maxlen * sizeof(wchar_t), which was not guaranteed by the standard.

The new overflow tests added in the previous commit now
pass (As well as all the other tests).
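
Conceptually the fix amounts to not trusting maxlen * sizeof (wchar_t); a C-level sketch of the safe pattern (the actual change is in the SSE4.1/AVX2 assembly):

```
#include <stddef.h>
#include <stdint.h>

/* Sketch: clamp the wide-character count so the derived byte count
   cannot wrap when multiplied by sizeof (wchar_t).  */
static size_t
byte_bound (size_t maxlen)
{
  if (maxlen > SIZE_MAX / sizeof (wchar_t))
    return SIZE_MAX;            /* effectively unbounded */
  return maxlen * sizeof (wchar_t);
}
```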

Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
(cherry picked from commit a775a7a3eb)
2022-01-27 16:26:15 -08:00
Noah Goldstein
2599967449 x86-64: Add wcslen optimize for sse4.1
No bug. This commit adds the ifunc / build infrastructure
necessary for wcslen to prefer the sse4.1 implementation
in strlen-vec.S. test-wcslen.c is passing.

Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
(cherry picked from commit 6f573a27b6)
2022-01-27 16:26:10 -08:00
H.J. Lu
ccf4e0edde x86-64: Move strlen.S to multiarch/strlen-vec.S
Since strlen.S contains SSE2 version of strlen/strnlen and SSE4.1
version of wcslen/wcsnlen, move strlen.S to multiarch/strlen-vec.S
and include multiarch/strlen-vec.S from SSE2 and SSE4.1 variants.
This also removes the unused symbols, __GI___strlen_sse2 and
__GI___wcsnlen_sse4_1.

(cherry picked from commit a0db678071)
2022-01-27 16:26:04 -08:00
Alice Xu
3cee1ddad2 x86-64: Fix an unknown vector operation in memchr-evex.S
An unknown vector operation occurred in commit 2a76821c30.  Fix it
by using "ymm{k1}{z}" instead of "ymm {k1} {z}".

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
(cherry picked from commit 6ea916adfa)
2022-01-27 16:25:58 -08:00
Noah Goldstein
cba2fbe17f x86: Optimize memchr-evex.S
No bug. This commit optimizes memchr-evex.S. The optimizations include
replacing some branches with cmovcc, avoiding some branches entirely
in the less_4x_vec case, making the page cross logic less strict,
saving some ALU in the alignment process, and most importantly
increasing ILP in the 4x loop. test-memchr, test-rawmemchr, and
test-wmemchr are all passing.

Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
(cherry picked from commit 2a76821c30)
2022-01-27 16:25:53 -08:00
Noah Goldstein
0a3b2efccc x86: Optimize strlen-avx2.S
No bug. This commit optimizes strlen-avx2.S. The optimizations are
mostly small things but they add up to roughly 10-30% performance
improvement for strlen. The results for strnlen are a bit more
ambiguous. test-strlen, test-strnlen, test-wcslen, and test-wcsnlen
are all passing.

Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
(cherry picked from commit aaa23c3507)
2022-01-27 16:25:48 -08:00
Noah Goldstein
c51e9501c2 x86: Fix overflow bug with wmemchr-sse2 and wmemchr-avx2 [BZ #27974]
This commit fixes the bug mentioned in the previous commit.

The previous implementations of wmemchr in these files relied
on n * sizeof(wchar_t), which was not guaranteed by the standard.

The new overflow tests added in the previous commit now
pass (As well as all the other tests).

Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
(cherry picked from commit 645a158978)
2022-01-27 16:25:42 -08:00
Noah Goldstein
5c38877df5 x86: Optimize memchr-avx2.S
No bug. This commit optimizes memchr-avx2.S. The optimizations include
replacing some branches with cmovcc, avoiding some branches entirely
in the less_4x_vec case, making the page cross logic less strict,
and saving a few instructions in the loop return path. test-memchr,
test-rawmemchr, and test-wmemchr are all passing.

Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
(cherry picked from commit acfd088a19)
2022-01-27 16:25:37 -08:00
H.J. Lu
8fa2afcec3 test-strnlen.c: Check that strnlen won't go beyond the maximum length
Place strings ending at page boundary without the null byte.  If an
implementation goes beyond EXP_LEN, it will trigger the segfault.
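
A rough sketch of the technique (the real test-strnlen.c harness differs in detail):

```
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int
main (void)
{
  /* Map two pages, make the second inaccessible, and place a string
     that ends exactly at the page boundary with no null terminator.
     An implementation reading past EXP_LEN faults on the guard page.  */
  size_t page = sysconf (_SC_PAGESIZE);
  char *p = mmap (NULL, 2 * page, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p == MAP_FAILED)
    return 1;
  mprotect (p + page, page, PROT_NONE);

  size_t exp_len = 16;
  char *s = p + page - exp_len;
  memset (s, 'a', exp_len);

  printf ("%zu\n", strnlen (s, exp_len));   /* prints 16, must not fault */
  return 0;
}
```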

(cherry picked from commit cb882b21b6)
2022-01-27 16:25:31 -08:00
H.J. Lu
0717959eb3 test-strnlen.c: Initialize wchar_t string with wmemset [BZ #27655]
Use wmemset to initialize wchar_t string.

(cherry picked from commit 86859b7e58)
2022-01-27 16:25:26 -08:00
H.J. Lu
3fc179e2b2 x86-64: Require BMI2 for __strlen_evex and __strnlen_evex
Since __strlen_evex and __strnlen_evex added by

commit 1fd8c163a8
Author: H.J. Lu <hjl.tools@gmail.com>
Date:   Fri Mar 5 06:24:52 2021 -0800

    x86-64: Add ifunc-avx2.h functions with 256-bit EVEX

use sarx:

c4 e2 6a f7 c0       	sarx   %edx,%eax,%eax

require BMI2 for __strlen_evex and __strnlen_evex in ifunc-impl-list.c.
ifunc-avx2.h already requires BMI2 for EVEX implementation.

(cherry picked from commit 55bf411b45)
2022-01-27 16:25:20 -08:00
H.J. Lu
c3535cb6cd NEWS: Add a bug fix entry for BZ #27457 2022-01-27 12:25:41 -08:00
Sunil K Pandey
aff5eec9ce x86-64: Fix ifdef indentation in strlen-evex.S
Fix some ifdef indentation in strlen-evex.S which is off by 1 and
confusing to read.

(cherry picked from commit 595c22ecd8)
2022-01-27 12:07:11 -08:00
H.J. Lu
7056607e6a x86-64: Use ZMM16-ZMM31 in AVX512 memmove family functions
Update ifunc-memmove.h to select the function optimized with AVX512
instructions using ZMM16-ZMM31 registers to avoid RTM abort with usable
AVX512VL since VZEROUPPER isn't needed at function exit.

(cherry picked from commit e4fda46310)
2022-01-27 12:07:11 -08:00
H.J. Lu
a54fae8df4 x86-64: Use ZMM16-ZMM31 in AVX512 memset family functions
Update ifunc-memset.h/ifunc-wmemset.h to select the function optimized
with AVX512 instructions using ZMM16-ZMM31 registers to avoid RTM abort
with usable AVX512VL and AVX512BW since VZEROUPPER isn't needed at
function exit.

(cherry picked from commit 4e2d8f3527)
2022-01-27 12:07:11 -08:00