Rename the identifier sz to __sz everywhere.
Fixes: a643f60c53 ("Make sure that the fortified function conditionals are constant")
(cherry picked from commit 39ca997ab3)
(redone from scratch because of many conflicts)
The __getrandom_nocancel used by __arc4random_buf uses
INLINE_SYSCALL_CALL (which returns -1 and sets errno), and the loop
checks the return value instead of errno to fall back to /dev/urandom.
The malloc code now uses __getrandom_nocancel_nostatus, which uses
INTERNAL_SYSCALL_CALL, so there is no need to use the variant that does
not set errno (BZ#29624).
Checked on x86_64-linux-gnu.
Reviewed-by: Xi Ruoyao <xry111@xry111.site>
(cherry picked from commit 184b9e530e)
The following patch uses the GCC 14 __builtin_stdc_* builtins in stdbit.h
for the type-generic macros, so that when compiled with GCC 14 or later,
the header supports not just 8/16/32/64-bit unsigned integers, but also
128-bit ones (if the target supports them) and unsigned _BitInt (any
supported precision), and so that the macros don't expand their arguments
multiple times and can be evaluated in constant expressions.
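As an illustration (a sketch only, not the actual glibc macro definitions),
with GCC 14 a type-generic macro can simply forward to the corresponding
builtin, which accepts any unsigned integer type, evaluates its argument
exactly once, and can appear in constant expressions:
  #if defined __GNUC__ && __GNUC__ >= 14
  # define stdc_leading_zeros(x) (__builtin_stdc_leading_zeros (x))
  # define stdc_bit_width(x) (__builtin_stdc_bit_width (x))
  #else
  /* Otherwise keep dispatching on sizeof (x) to the width-specific
     inline helpers, as before.  */
  #endif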
The new testcase is gcc's gcc/testsuite/gcc.dg/builtin-stdc-bit-1.c
adjusted to test stdbit.h and the type-generic macros in there instead
of the builtins and adjusted to use glibc test framework rather than
gcc style tests with __builtin_abort ().
Signed-off-by: Jakub Jelinek <jakub@redhat.com>
Reviewed-by: Joseph Myers <josmyers@redhat.com>
(cherry picked from commit da89496337)
In qsort_r we allocate a buffer sized QSORT_STACK_SIZE (1024) on the
stack and we intend to use it if all elements can fit into it. But there
is a typo:
  if (total_size < sizeof buf)
    buf = tmp;
  else
    /* allocate a buffer on heap and use it ... */
Here "buf" is a pointer, so sizeof buf is just 4 or 8 instead of 1024.
There is also a minor issue: we should use "<=" instead of "<".
This bug was detected while debugging some strange heap corruption when
running the Ruby-3.3.0 test suite (on an experimental Linux From Scratch
build using Binutils-2.41.90 and Glibc trunk, and also on Fedora Rawhide
[1]). It seems Ruby is doing some wild "optimization" by jumping into
the middle of qsort_r instead of calling it normally, resulting in a
double free of buf if we allocate it on the heap. The issue can be
reproduced deterministically with:
LD_PRELOAD=/usr/lib/libc_malloc_debug.so MALLOC_CHECK_=3 \
LD_LIBRARY_PATH=. ./ruby test/runner.rb test/ruby/test_enum.rb
in the Ruby-3.3.0 tree after building it. This change would hide the
issue for Ruby, but Ruby is likely still buggy (if it uses this
"optimization" when sorting larger arrays).
[1]: https://kojipkgs.fedoraproject.org/work/tasks/9729/111889729/build.log
Signed-off-by: Xi Ruoyao <xry111@xry111.site>
Adjust the testing approach to start from scenarios with only 2
elements, as insertion sort no longer handles such cases.
Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
When malloc fails to allocate a buffer and falls back to heapsort, the
current heapsort implementation does not perform sorting when there are
exactly two elements. Heapsort is now skipped only when there is
exactly one element.
Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The mergesort removal from qsort implementation (commit 03bf8357e8)
had the side-effect of making sorting nonstable. Although neither
POSIX nor C standard specify that qsort should be stable, it seems
that it has become an instance of Hyrum's law where multiple programs
expect it.
Also, the resulting introsort implementation is not faster than
the previous mergesort (which makes the change even less appealing).
This patch restores the previous mergesort implementation, with the
exception of the machinery that checks the resulting allocation against
_SC_PHYS_PAGES (it only adds complexity, and the heuristic does not
always make sense depending on the system configuration and load).
The alloca usage was replaced with a fixed-size buffer.
For the fallback mechanism, the implementation uses heapsort. It is
simpler than quicksort, and it does not suffer from adversarial
inputs. With memory overcommit, it should be rarely triggered.
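A rough sketch of the resulting control flow (names are illustrative,
not the actual glibc internals):
  char tmp[QSORT_STACK_SIZE];       /* fixed-size buffer instead of alloca */
  void *scratch = tmp;
  if (total_size > sizeof tmp)
    scratch = malloc (total_size);
  if (scratch == NULL)
    {
      /* No scratch space for mergesort: fall back to in-place heapsort,
         which needs only O(1) extra memory.  */
      heapsort_r (base, n, size, cmp, arg);
      return;
    }
  mergesort_r (base, n, size, cmp, arg, scratch);
  if (scratch != tmp)
    free (scratch);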
The drawback is that mergesort requires O(n) extra space, and since it is
allocated with malloc, the function is AS-signal-unsafe. It should be
feasible to change it to use mmap, although I am not sure how urgent
it is. The heapsort is also nonstable, so programs that require a
stable sort would still be subject to this latent issue.
The tst-qsort5 is removed since it will not create quicksort adversarial
inputs with the current qsort_r implementation.
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
With gcc 6.5.0, 7.5.0, 8.5.0, and 9.5.0, the tst-stdbit-Wconversion test
issues the following warnings:
../stdlib/stdbit.h: In function ‘__clo16_inline’:
../stdlib/stdbit.h:128:26: error: conversion to ‘uint16_t {aka short
unsigned int}’ from ‘int’ may alter its value [-Werror=conversion]
return __clz16_inline (~__x);
^
../stdlib/stdbit.h: In function ‘__clo8_inline’:
../stdlib/stdbit.h:134:25: error: conversion to ‘uint8_t {aka unsigned
char}’ from ‘int’ may alter its value [-Werror=conversion]
return __clz8_inline (~__x);
^
../stdlib/stdbit.h: In function ‘__cto16_inline’:
../stdlib/stdbit.h:232:26: error: conversion to ‘uint16_t {aka short
unsigned int}’ from ‘int’ may alter its value [-Werror=conversion]
return __ctz16_inline (~__x);
^
../stdlib/stdbit.h: In function ‘__cto8_inline’:
../stdlib/stdbit.h:238:25: error: conversion to ‘uint8_t {aka unsigned
char}’ from ‘int’ may alter its value [-Werror=conversion]
return __ctz8_inline (~__x);
^
../stdlib/stdbit.h: In function ‘__bf16_inline’:
../stdlib/stdbit.h:701:23: error: conversion to ‘uint16_t {aka short
unsigned int}’ from ‘int’ may alter its value [-Werror=conversion]
return __x == 0 ? 0 : ((uint16_t) 1) << (__bw16_inline (__x) - 1);
~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../stdlib/stdbit.h: In function ‘__bf8_inline’:
../stdlib/stdbit.h:707:23: error: conversion to ‘uint8_t {aka unsigned
char}’ from ‘int’ may alter its value [-Werror=conversion]
return __x == 0 ? 0 : ((uint8_t) 1) << (__bw8_inline (__x) - 1);
~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../stdlib/stdbit.h: In function ‘__bc16_inline’:
../stdlib/stdbit.h:751:59: error: conversion to ‘uint16_t {aka short
unsigned int}’ from ‘int’ may alter its value [-Werror=conversion]
return __x <= 1 ? 1 : ((uint16_t) 2) << (__bw16_inline (__x - 1) -
1);
^~~
../stdlib/stdbit.h:751:23: error: conversion to ‘uint16_t {aka short
unsigned int}’ from ‘int’ may alter its value [-Werror=conversion]
return __x <= 1 ? 1 : ((uint16_t) 2) << (__bw16_inline (__x - 1) -
1);
~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../stdlib/stdbit.h: In function ‘__bc8_inline’:
../stdlib/stdbit.h:757:57: error: conversion to ‘uint8_t {aka unsigned
char}’ from ‘int’ may alter its value [-Werror=conversion]
return __x <= 1 ? 1 : ((uint8_t) 2) << (__bw8_inline (__x - 1) - 1);
^~~
../stdlib/stdbit.h:757:23: error: conversion to ‘uint8_t {aka unsigned
char}’ from ‘int’ may alter its value [-Werror=conversion]
return __x <= 1 ? 1 : ((uint8_t) 2) << (__bw8_inline (__x - 1) - 1);
It seems to boil down to __builtin_clz not having variants for 8 or
16 bits. Fix it by explicitly casting to the expected types.
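For example (illustrative of the kind of change, not the exact hunks):
  return __clz16_inline ((uint16_t) ~__x);   /* was: __clz16_inline (~__x) */
  return __ctz8_inline ((uint8_t) ~__x);     /* was: __ctz8_inline (~__x) */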
Checked on x86_64-linux-gnu and i686-linux-gnu with gcc 9.5.0.
C23 adds a header <stdbit.h> with various functions and type-generic
macros for bit-manipulation of unsigned integers (plus macro defines
related to endianness). Implement this header for glibc.
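A quick illustration of the new interfaces (an example only, not part of
the patch; expected values are in the comments):
  #include <stdbit.h>
  #include <stdint.h>
  int
  main (void)
  {
    unsigned int x = 0x10u;
    unsigned int lz = stdc_leading_zeros (x);        /* type-generic macro */
    unsigned int bw = stdc_bit_width ((uint8_t) 5);  /* 3 */
    _Bool single = stdc_has_single_bit (x);          /* true: 0x10 is a power of two */
    return (lz != 0 && bw == 3 && single) ? 0 : 1;
  }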
The functions have both inline definitions in the header (referenced
by macros defined in the header) and copies with external linkage in
the library (which are implemented in terms of those macros to avoid
duplication). They are documented in the glibc manual. Tests, as
well as verifying results for various inputs (of both the macros and
the out-of-line functions), verify the types of those results (which
showed up a bug in an earlier version with the type-generic macro
stdc_has_single_bit wrongly returning a promoted type), that the
macros can be used at top level in a source file (so don't use ({})),
that they evaluate their arguments exactly once, and that the macros
for the type-specific functions have the expected implicit conversions
to the relevant argument type.
Jakub previously referred to -Wconversion warnings in type-generic
macros, so I've included a test with -Wconversion (but the only
warnings I saw and fixed from that test were actually in inline
functions in the <stdbit.h> header - not anything coming from use of
the type-generic macros themselves).
This implementation of the type-generic macros does not handle
unsigned __int128, or unsigned _BitInt types with a width other than
that of a standard integer type (and C23 doesn't require the header to
handle such types either). Support for those types, using the new
type-generic built-in functions Jakub's added for GCC 14, can
reasonably be added in a followup (along of course with associated
tests).
This implementation doesn't do anything special to handle C++, or have
any tests of functionality in C++ beyond the existing tests that all
headers can be compiled in C++ code; it's not clear exactly what form
this header should take in C++, but probably not one using macros.
DIS ballot comment AT-107 asks for the word "count" to be added to the
names of the stdc_leading_zeros, stdc_leading_ones,
stdc_trailing_zeros and stdc_trailing_ones functions and macros. I
don't think it's likely to be accepted (accepting any technical
comments would mean having an FDIS ballot), but if it is accepted at
the WG14 meeting (22-26 January in Strasbourg, starting with DIS
ballot comment handling) then there would still be time to update
glibc for the renaming before the 2.39 release.
The new functions and header are placed in the stdlib/ directory in
glibc, rather than creating a new toplevel stdbit/ or putting them in
string/ alongside ffs.
Tested for x86_64 and x86.
Verify that setjmp and longjmp work correctly between user contexts.
Arrange stacks for uctx_func1 and uctx_func2 so that ____longjmp_chk
works when setjmp and longjmp are called from different user contexts.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
When _FORTIFY_SOURCE is defined to 2, ____longjmp_chk is called,
instead of longjmp. ____longjmp_chk compares the relative stack
values to decide if it is called from a stack frame which called
setjmp. If not, ____longjmp_chk assumes that an alternate signal
stack is used. Since comparing the relative stack values isn't
reliable with user contexts, ____longjmp_chk will fail when there is
no signal. Undefine _FORTIFY_SOURCE to avoid ____longjmp_chk in the
user context test.
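A minimal sketch of the workaround in the test source, before any header
is included so the command-line definition never takes effect (the
headers shown are illustrative):
  #undef _FORTIFY_SOURCE
  #include <setjmp.h>
  #include <ucontext.h>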
The previous check did not do anything because tmp_ptr already
points before run_ptr due to the way it is initialized.
Fixes commit e4d8117b82
("stdlib: Avoid another self-comparison in qsort").
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The existing logic avoided internal stack overflow. To avoid
a denial-of-service condition with adversarial input, it is necessary
to fall back to heapsort when tail-recursing deeply, too, because deep
tail recursion does not result in a deep stack of pending partitions
(so the existing check does not catch it).
The new test stdlib/tst-qsort5 is based on Douglas McIlroy's paper
on this subject.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The previous implementation did not consistently apply the rule that
the child nodes of node K are at 2 * K + 1 and 2 * K + 2, or
that the parent node is at (K - 1) / 2.
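Expressed as code, the intended index relations for a zero-based heap are:
  static inline size_t heap_left (size_t k)   { return 2 * k + 1; }
  static inline size_t heap_right (size_t k)  { return 2 * k + 2; }
  static inline size_t heap_parent (size_t k) { return (k - 1) / 2; }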
Add an internal test that targets the heapsort implementation
directly.
Reported-by: Stepan Golosunov <stepan@golosunov.pp.ru>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
In the insertion phase, we could run off the start of the array if the
comparison function never returns zero. In that case, it never finds the
initial element that terminates the iteration.
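Conceptually, the fix bounds the backward scan by the start of the array
instead of relying on the comparison reaching zero (an illustrative
sketch, not the exact hunk):
  while (tmp_ptr > base_ptr && cmp (run_ptr, tmp_ptr - size, arg) < 0)
    tmp_ptr -= size;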
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This improves compatibility with applications which assume that qsort
does not invoke the comparison function with equal pointer arguments.
The newly introduced branches should be predictable, as they usually
lead to a call to the comparison function. If the prediction fails, we
avoid calling the function.
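An illustrative shape of the added guards (names hypothetical): only
invoke the comparison function when the two pointers differ, for example
during pivot selection:
  if (mid != lo && cmp (mid, lo, arg) < 0)
    do_swap (mid, lo, size, swap_type);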
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This patch adds qsort and qsort_r tests to trigger the worst-case
scenario for quicksort (for which glibc currently lacks coverage).
The test is done with random input, different internal types (uint8_t,
uint16_t, uint32_t, uint64_t, large size), and different numbers of
elements.
Checked on x86_64-linux-gnu and i686-linux-gnu.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
This patch removes the mergesort optimization from the qsort
implementation and uses introsort instead. The mergesort implementation
has some issues:
- It is AS-safe only for certain type sizes (if the total size is less
than 1 KB, with large element sizes also forcing memory allocation),
which contradicts the function documentation. Although not required
by the C standard, it is preferable and doable to have an O(1) space
implementation.
- The malloc for certain element sizes and numbers of elements adds
arbitrary latency (it might even be worse if malloc is interposed).
- To avoid triggering swap from the memory allocation, the implementation
relies on system information that might be virtualized (for instance,
VMs with overcommitted memory), which might lead to use of swap even if
the system advertises more memory than it actually has. The check also
has the downside of issuing syscalls where none are expected (although
only once per execution).
- The mergesort is suboptimal on an already sorted array (BZ#21719).
The introsort implementation is already optimized to use constant extra
space (due to the limit on the total number of elements imposed by the
maximum VM size) and thus can be used to avoid the malloc usage issues.
The resulting performance is slower due to the usage of quicksort,
especially in the worst-case scenario (partially sorted or already
sorted arrays), and due to the fact that mergesort uses slightly
improved swap operations.
This change also renders the BZ#21719 fix unnecessary (since it was meant
to fix the sorted-input performance degradation for mergesort). The
manual is also updated to indicate the function is now async-cancel
safe.
Checked on x86_64-linux-gnu.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
This patch makes the quicksort implementation act as introsort, to
avoid worst-case performance (thus making it O(n log n)). It switches
to heapsort when the recursion depth reaches 2*log2(total elements). The
heapsort is a textbook implementation.
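An illustrative way to compute the depth limit described above,
2 * floor(log2(total_elems)):
  size_t depth_limit = 0;
  for (size_t n = total_elems; n > 1; n >>= 1)
    depth_limit += 2;
  /* Once the partitioning recursion exceeds depth_limit, the remaining
     range is sorted with heapsort instead.  */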
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
The optimization takes into consideration that the most common element
sizes are either 32 or 64 bits and that inputs are aligned to the word
boundary. This is similar to what msort does.
For larger element sizes the swap operation uses memcpy/mempcpy with a
small fixed-size buffer (so the compiler might inline the operations).
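An illustrative sketch of the swap selection (helper names hypothetical):
  if (size == sizeof (uint64_t) && ((uintptr_t) base % _Alignof (uint64_t)) == 0)
    swap_func = swap_u64;        /* single 64-bit load/store per element */
  else if (size == sizeof (uint32_t) && ((uintptr_t) base % _Alignof (uint32_t)) == 0)
    swap_func = swap_u32;
  else
    swap_func = swap_generic;    /* memcpy/mempcpy through a small fixed buffer */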
Checked on x86_64-linux-gnu.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
The grouping verification only worked for a single-byte thousands
separator. With a multi-byte separator it returned as if no separators
were present. The actual parsing in str_to_mpn will then go wrong when
there are multiple adjacent multi-byte separators in the number.
On GCC before 11, IPA can make the fortified realpath aware that the
buffer size is not large enough (8 bytes instead of PATH_MAX bytes).
Fix this by using a buffer that is large enough.
Since the _FORTIFY_SOURCE feature uses some routines of Glibc, they need to
be excluded from the fortification.
On top of that:
- some tests explicitly verify that some level of fortification works
appropriately; we therefore shouldn't modify the level set for them.
- some objects need to be built with optimization disabled, which
prevents _FORTIFY_SOURCE from being used for them.
Assembler files that implement architecture specific versions of the
fortified routines were not excluded from _FORTIFY_SOURCE as there is no
C header included that would impact their behavior.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
ISO C2x defines scanf %b for input of binary integers (with an
optional 0b or 0B prefix). Implement such support, along with the
corresponding SCNb* macros in <inttypes.h>. Unlike the support for
binary integers with 0b or 0B prefix with scanf %i, this is supported
in all versions of scanf (independent of the standards mode used for
compilation), because there are no backwards compatibility concerns
(%b wasn't previously a supported format) the way there were for %i.
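For example (an illustration of the new format, not part of the patch):
  #include <inttypes.h>
  #include <stdio.h>
  int
  main (void)
  {
    unsigned int u;
    uint32_t v;
    sscanf ("0b1010", "%b", &u);        /* u == 10 */
    sscanf ("1011", "%" SCNb32, &v);    /* v == 11 */
    return !(u == 10 && v == 11);
  }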
Tested for x86_64 and x86.
There is no fork detection in the current arc4random implementation, so
use a lower number of subprocesses in the fork tests. The tests now run
in 0.1s instead of 8s on a Ryzen 9 5900X.
Checked on x86_64-linux-gnu.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
With fortification enabled, a few function call results need to be
checked, as the functions are marked with the __wur macro.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
With fortification enabled, a few function call results need to be
checked, as the functions are marked with the __wur macro.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
When the base is 0 or 2 and the first two characters are '0' and 'b',
but the rest are not binary digits, this is not an error: strtol must
return 0 and ENDPTR must point to the 'x' or 'b'.
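For example (illustrating the required behavior):
  #include <stdlib.h>
  int
  main (void)
  {
    char *end;
    long value = strtol ("0b2", &end, 0);
    /* value must be 0 and end must point at "b2": only the initial '0'
       is consumed, because no binary digit follows the prefix.  */
    return !(value == 0 && end[0] == 'b');
  }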
Checked on x86_64-linux-gnu and i686-linux-gnu.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
The last loop could attempt to overflow beyond INT_MAX on 32-bit
architectures.
Also switch to GNU style.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Test minimum and maximum long long values, zero, 32-bit crossover points, and
part of the range of long long values. Use '-fno-builtin' to ensure we are
testing the implementation.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Test minimum and maximum long values, zero, and part of the range
of long values. Use '-fno-builtin' to ensure we are testing the
implementation.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Test minimum and maximum int values, zero, and part of the range
of int values. Use '-fno-builtin' to ensure we are testing the
implementation.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Reflow Makefile.
Sort using scripts/sort-makefile-lines.py.
No code generation changes observed in binary artifacts.
No regressions on x86_64 and i686.
All of the mentioned variables are gone. gcc is just the default and
argv[1] can be used instead. /usr/include isn't hard-coded and you can
pass argv[2] with -I... to adjust.
Signed-off-by: Ahelenia Ziemiańska <nabijaczleweli@nabijaczleweli.xyz>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Prevent sh from interpreting a user string as shell options if it
starts with '-' or '+'. Since the version of /bin/sh used for testing
system() is different from the full-fledged system /bin/sh, add support
to it for handling "--" after "-c". Add a testcase to ensure the
expected behavior.
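Conceptually, after the change system ("-foo") runs the shell as if by
(illustrative, not the literal implementation):
  execl (SHELL_PATH, SHELL_NAME, "-c", "--", "-foo", (char *) 0);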
Signed-off-by: Joe Simmons-Talbott <josimmon@redhat.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
They are both used by __libc_freeres to free all library malloc
allocated resources to help tooling like mtrace or valgrind with
memory leak tracking.
The current scheme uses assembly markers and linker script entries
to consolidate the free routine function pointers in the RELRO segment
and the to-be-freed buffers in BSS.
This patch changes it to use specific free functions for
libc_freeres_ptrs buffers and call the function pointer array directly
with call_function_static_weak.
It allows the removal of both the internal macros and the linker
script sections.
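A sketch of the new pattern (names illustrative):
  /* In the subsystem: an ordinary function releases its cached allocation.  */
  static void *cached_buffer;
  void
  __libc_subsys_freemem (void)
  {
    free (cached_buffer);
    cached_buffer = NULL;
  }
  /* In __libc_freeres: invoke it only if the subsystem was linked in.  */
  call_function_static_weak (__libc_subsys_freemem);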
Checked on x86_64-linux-gnu, i686-linux-gnu, and aarch64-linux-gnu.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>