Separate the malloc check implementation from the malloc hooks. They
still use the hooks but are now maintained in a separate file.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
The variable and function pair appear to provide a way for users to
set conditional breakpoints in mtrace when a specific address is
returned by the allocator. This can be achieved by using conditional
breakpoints in gdb so it is redundant. There is no documentation of
this interface in the manual either, so it appears to have been a hack
that got added to debug malloc. Deprecate these symbols and do not
call tr_break anymore.
Reviewed-by: DJ Delorie <dj@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Dependencies on hooks.c and arena.c get auto-computed when generating
malloc.o{,s}.d so there is no need to add them manually.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Andreas Schwab <schwab@linux-m68k.org>
After commit 1e26d35193 ("malloc: Fix
tcache leak after thread destruction [BZ #22111]"),
tcache_shutting_down is still set too late: when we detach a
thread that has no tcache allocated, tcache_shutting_down
remains false.
Reviewed-by: DJ Delorie <dj@redhat.com>
Like malloc-check, add generic rules to run all tests in malloc by
linking with libmcheck.a so as to provide coverage for mcheck().
Currently the following 12 tests fail:
FAIL: malloc/tst-malloc-backtrace-mcheck
FAIL: malloc/tst-malloc-fork-deadlock-mcheck
FAIL: malloc/tst-malloc-stats-cancellation-mcheck
FAIL: malloc/tst-malloc-tcache-leak-mcheck
FAIL: malloc/tst-malloc-thread-exit-mcheck
FAIL: malloc/tst-malloc-thread-fail-mcheck
FAIL: malloc/tst-malloc-usable-static-mcheck
FAIL: malloc/tst-malloc-usable-static-tunables-mcheck
FAIL: malloc/tst-malloc-usable-tunables-mcheck
FAIL: malloc/tst-malloc_info-mcheck
FAIL: malloc/tst-memalign-mcheck
FAIL: malloc/tst-posix_memalign-mcheck
and they have been added to tests-exclude-mcheck for now to maintain
the status quo. At least the last two can be attributed to bugs in
mcheck(), but I haven't fixed them here since they should be fixed by
removing the malloc hooks. The others need to be triaged to determine
whether they are due to mcheck bugs or to actual bugs.
Reviewed-by: DJ Delorie <dj@redhat.com>
Austin Group issue 62 [1] dropped the async-signal-safe requirement
for fork and provided an async-signal-safe _Fork replacement that
does not run the atfork handlers. It will be included in the next
POSIX standard.
This allows closing a long-standing issue to make fork AS-safe
(BZ#4737). As indicated in the bug, besides the internal lock for the
atfork handlers itself, there is no guarantee that the handlers
themselves will not introduce further AS-safety issues.
The idea is to synchronize fork with the required internal locks so
that children of multithreaded processes can use most of the standard
functions (even though POSIX states that only AS-safe functions should
be used). In signal handlers, _Fork should be used instead, and only
AS-safe functions should be used.
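A minimal sketch of the intended signal-handler usage (the handler and
its body are hypothetical; _Fork takes no arguments and returns a
pid_t, like fork, and is declared via <unistd.h> as in the patch):

  #include <unistd.h>

  /* Inside a signal handler, _Fork is AS-safe because it does not run
     the atfork handlers; fork is not.  */
  static void
  handler (int sig)
  {
    pid_t pid = _Fork ();
    if (pid == 0)
      /* Child: only AS-safe functions may be used here.  */
      _exit (0);
  }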
For testing, the new tst-_Fork only checks basic usage. I also added
a new tst-mallocfork3, which uses the same strategy as tst-mallocfork2
to check for deadlocks but uses threads instead of subprocesses
(and it does deadlock if _Fork is replaced with fork).
[1] https://austingroupbugs.net/view.php?id=62
MALLOC_CHECK_ and mcheck() are two different malloc checking features.
tst-mcheck does not check mcheck(), instead it checks MALLOC_CHECK_,
so rename the file to avoid confusion.
This commit removes the ELF constructor and internal variables from
dlfcn/dlfcn.c. The file now serves the same purpose as
nptl/libpthread-compat.c, so it is renamed to dlfcn/libdl-compat.c.
The use of libdl-shared-only-routines ensures that libdl.a is empty.
This commit adjusts the test suite not to use $(libdl). The libdl.so
symbolic link is no longer installed.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Since the test uses multiples of 160 for the malloc size, the total
variable should also use a multiple of 160 instead of 16 so that the
comparison is meaningful. So fix it.
Also change the ">" to ">=" so that the test is technically valid.
Reviewed-by: DJ Delorie <dj@redhat.com>
When MALLOC_CHECK_ is non-zero, the realloc hook failed to set errno
to ENOMEM when called with a too-large size. Run the test
tst-malloc-too-large also with MALLOC_CHECK_=3 to catch that.
To help detect common kinds of memory (and other resource) management
bugs, GCC 11 adds support for the detection of mismatched calls to
allocation and deallocation functions. At each call site to a known
deallocation function GCC checks the set of allocation functions
the former can be paired with and, if the two don't match, issues
a -Wmismatched-dealloc warning (something similar happens in C++
for mismatched calls to new and delete). GCC also uses the same
mechanism to detect attempts to deallocate objects not allocated
by any allocation function (or pointers past the first byte into
allocated objects) by -Wfree-nonheap-object.
This support is enabled for built-in functions like malloc and free.
To extend it beyond those, GCC extends attribute malloc to designate
a deallocation function to which pointers returned from the allocation
function may be passed to deallocate the allocated objects. Another,
optional argument designates the positional argument to which
the pointer must be passed.
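For illustration, a sketch of the extended attribute on a hypothetical
allocator pair (my_buf_alloc/my_buf_free are made-up names; the
attribute form is the one GCC 11 accepts):

  #include <stddef.h>

  void my_buf_free (void *p);

  /* Pointers returned by my_buf_alloc may be passed only as the first
     argument of my_buf_free; passing them to free or another
     deallocator triggers -Wmismatched-dealloc.  */
  __attribute__ ((malloc, malloc (my_buf_free, 1)))
  void *my_buf_alloc (size_t n);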
This change is the first step in enabling this extended support for
Glibc.
(FYI, this is a repost of
https://sourceware.org/pipermail/libc-alpha/2019-July/105035.html now
that FSF papers have been signed and confirmed on the FSF side.)
This trivial patch attempts to fix BZ 24106. Basically, the bash used
locally when building glibc on the host must not leak into the
installed glibc, as the system where it is installed might be
different and use another bash location.
So I have looked for all occurrences of @BASH@ or $(BASH) in installed
files and replaced them with /bin/bash. This was suggested by Florian
Weimer in the bug report.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
This replaces the FREE_P macro with the __nptl_stack_in_use inline
function. stack_list_del is renamed to __nptl_stack_list_del,
stack_list_add to __nptl_stack_list_add, __deallocate_stack to
__nptl_deallocate_stack, free_stacks to __nptl_free_stacks.
It is convenient to move __libpthread_freeres into libc at the
same time. This removes the temporary __default_pthread_attr_freeres
export and restores full freeres coverage for __default_pthread_attr.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Calling free directly may end up freeing a pointer allocated by the
dynamic loader using malloc from libc.so in the base namespace using
the allocator from libc.so in a secondary namespace, which results in
crashes.
This commit redirects the free call through GLRO and the dynamic
linker, to reach the correct namespace. It also cleans up the dlerror
handling along the way, so that pthread_setspecific is no longer
needed (which avoids triggering bug 24774).
This is a workaround (hack) for a gcc optimization issue (PR 99551).
Without this, the generated code may evaluate the expression in the
cold path, which causes a performance regression for small allocations
in the memory tagging disabled (common) case.
Reviewed-by: DJ Delorie <dj@redhat.com>
The internal _mid_memalign already returns newly tagged memory.
(__libc_memalign and posix_memalign already relied on this, this
patch fixes the other call sites.)
Reviewed-by: DJ Delorie <dj@redhat.com>
The previous patch ensured that all chunk to mem computations use
chunk2rawmem, so now we can rename it to chunk2mem, and in the few
cases where the tag of mem is relevant chunk2mem_tag can be used.
Replaced tag_at (chunk2rawmem (x)) with chunk2mem_tag (x).
Renamed chunk2rawmem to chunk2mem.
Reviewed-by: DJ Delorie <dj@redhat.com>
The difference between chunk2mem and chunk2rawmem is that the latter
does not get the memory tag for the returned pointer. It turns out
chunk2rawmem almost always works:
The input of chunk2mem is a chunk pointer that is untagged so it can
access the chunk header. All memory that is not user allocated heap
memory is untagged, which in the current implementation means that it
has the 0 tag, but this patch does not rely on the tag value. The
patch relies on the fact that chunk operations are either done on
untagged chunks or without memory access to the user-owned part.
Internal interface contracts:
sysmalloc: Returns untagged memory.
_int_malloc: Returns untagged memory.
_int_free: Takes untagged memory.
_int_memalign: Returns untagged memory.
_int_realloc: Takes and returns tagged memory.
So only _int_realloc and functions outside this list need care.
Alignment checks do not need the right tag and tcache works with
untagged memory.
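As a rough sketch of the distinction (simplified; the real macros in
malloc.c use the chunk header size and extra casts, and tag_at is the
internal tag-lookup helper):

  /* Untagged conversion: fine for header access and size checks.  */
  #define chunk2rawmem(p) ((void *) ((char *) (p) + 2 * SIZE_SZ))
  /* Tagged conversion: returns the pointer carrying the user tag.  */
  #define chunk2mem(p) ((void *) tag_at ((char *) (p) + 2 * SIZE_SZ))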
tag_at was kept in realloc after an mremap, which is not strictly
necessary, since the pointer is only used to retag the memory, but this
way the tag is guaranteed to be different from the old tag.
Reviewed-by: DJ Delorie <dj@redhat.com>
The comment explained why a different tag is used after mremap, but
for that a correctly tagged pointer must be passed to tag_new_usable.
Use chunk2mem to get the tag.
Reviewed-by: DJ Delorie <dj@redhat.com>
This is a pure refactoring change that does not affect behaviour.
The CHUNK_AVAILABLE_SIZE name was unclear, the memsize name tries to
follow the existing convention of mem denoting the allocation that is
handed out to the user, while chunk is its internally used container.
The user owned memory for a given chunk starts at chunk2mem(p) and
the size is memsize(p). It is not valid to use on dumped heap chunks.
Moved the definition next to other chunk and mem related macros.
Reviewed-by: DJ Delorie <dj@redhat.com>
Use the runtime check where possible: it should not cause a slowdown
in the !USE_MTAG case, since then mtag_enabled is constant false, but
it allows compiling the tagging logic, so it is less likely to break
or diverge when developers only test the !USE_MTAG case.
Reviewed-by: DJ Delorie <dj@redhat.com>
The branches may be better optimized since mtag_enabled is widely used.
A granule size larger than a chunk header is not supported, since then
we cannot have both the chunk header and the user area granule-aligned.
To fix that for targets with a large granule, the chunk layout has to
change. So the code that attempted to handle the granule mask
generically was changed.
This simplified CHUNK_AVAILABLE_SIZE and the logic in
malloc_usable_size.
Reviewed-by: DJ Delorie <dj@redhat.com>
When glibc is built with memory tagging support (USE_MTAG) but it is
not enabled at runtime (mtag_enabled), an unconditional memset was
used even though it can often be avoided.
This is for performance when tagging is supported but not enabled.
The extra check should have no overhead: tag_new_zero_region already
had a runtime check which the compiler can now optimize away.
Reviewed-by: DJ Delorie <dj@redhat.com>
The memset API is suboptimal and does not provide much benefit. Memory
tagging only needs a zeroing memset (and only for memory that is sized
and aligned to multiples of the tag granule), so change the internal
API and the target hooks accordingly. This simplifies the
implementation of the target hook.
Reviewed-by: DJ Delorie <dj@redhat.com>
A flag check can be faster than function pointers because of how
branch prediction and speculation work, and it can also remove a layer
of indirection when there is a mismatch between the malloc internal
tag_* API and the __libc_mtag_* target hooks.
Memory tagging wrapper functions are moved to malloc.c from arena.c and
the logic now checks mtag_enabled. The definition of tag_new_usable is
moved after the chunk-related definitions.
This refactoring also allows using mtag_enabled checks instead of
USE_MTAG ifdefs when memory tagging support only changes code logic
when memory tagging is enabled at runtime. Note: an "if (false)" code
block is optimized away even at -O0 by gcc.
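A sketch of the resulting pattern, assuming the internal helpers
(mem2chunk, CHUNK_AVAILABLE_SIZE, CHUNK_HDR_SZ and the __libc_mtag_*
hooks) roughly as they existed at the time:

  static __always_inline void *
  tag_new_usable (void *ptr)
  {
    /* Without USE_MTAG, mtag_enabled is constant false and gcc drops
       the whole block, so no ifdef is needed.  */
    if (__glibc_unlikely (mtag_enabled) && ptr)
      {
        mchunkptr cp = mem2chunk (ptr);
        ptr = __libc_mtag_tag_region (__libc_mtag_new_tag (ptr),
                                      CHUNK_AVAILABLE_SIZE (cp)
                                      - CHUNK_HDR_SZ);
      }
    return ptr;
  }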
Reviewed-by: DJ Delorie <dj@redhat.com>
This does not change behaviour, just removes one layer of indirection
in the internal memory tagging logic.
Use tag_ and mtag_ prefixes instead of __tag_ and __mtag_ since these
are all symbols with internal linkage, private to malloc.c, so there
is no user namespace pollution issue.
Reviewed-by: DJ Delorie <dj@redhat.com>
Either the memory belongs to the dumped area, in which case we don't
want to tag (the dumped area has the same tag as malloc internal data
so tagging is unnecessary, but chunks there may not have the right
alignment for the tag granule), or the memory will be unmapped
immediately (and thus tagging is not useful).
Reviewed-by: DJ Delorie <dj@redhat.com>
The chunk cannot be a dumped one here. The only non-obvious cases
are free and realloc which may be called on a dumped area chunk,
but in both cases it can be verified that tagging is already
avoided for dumped area chunks.
Reviewed-by: DJ Delorie <dj@redhat.com>
This is only used internally in malloc.c; the extern declaration
was wrong, as __mtag_mmap_flags has internal linkage.
Reviewed-by: DJ Delorie <dj@redhat.com>
At an _int_free call site in realloc the wrong size was used for tag
clearing: the chunk header of the next chunk was also cleared, which
may work in practice but is logically wrong.
The tag clearing is moved before the memcpy to save a tag computation;
this avoids a chunk2mem. Another chunk2mem is removed because newmem
does not have to be recomputed. Whitespace was fixed too.
Reviewed-by: DJ Delorie <dj@redhat.com>
_int_free must be called with a chunk that has its tag reset. This was
missing in a rare case that could crash when heap tagging is enabled:
when, in a multi-threaded process, the current arena runs out of memory
during realloc but another arena still has space to finish the realloc,
_int_free was called without clearing the user allocation tags.
Fixes bug 27468.
Reviewed-by: DJ Delorie <dj@redhat.com>
This essentially folds compat_symbol_unique functionality into
compat_symbol.
This change eliminates the need for intermediate aliases for defining
multiple symbol versions, for both compat_symbol and versioned_symbol.
Some binutils versions do not support multiple versions per symbol on
some targets, so aliases are automatically introduced, similar to what
compat_symbol_unique did. To reduce symbol table sizes, a configure
check is added to avoid these aliases if they are not needed.
The new mechanism works with data symbols as well as function symbols,
due to the way an assembler-level redirect is used. It is not
compatible with weak symbols for old binutils versions, which is why
the definition of __malloc_initialize_hook had to be changed. This
is not a loss of functionality because weak symbols do not matter
to dynamic linking.
The placeholder symbol needs repeating in nptl/libpthread-compat.c
now that compat_symbol is used, but that seems more obvious than
introducing yet another macro.
A subtle difference was that compat_symbol_unique made the symbol
global automatically. compat_symbol does not do this, so static
had to be removed from the definition of
__libpthread_version_placeholder.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
compat_symbol_reference no longer needs tests-internal. Do not build
the test at all for newer targets, so that no spurious UNSUPPORTED
result is generated. Use compat_symbol_reference for
__malloc_initialize_hook as well, eliminating the need for -rdynamic.
Reviewed-by: DJ Delorie <dj@redhat.com>
This will be used to consolidate the libgcc_s access for backtrace
and pthread_cancel.
Unlike the existing backtrace implementations, it provides some
hardening based on pointer mangling.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
It syncs with gnulib version a8bac4d49. The main changes are:
- Remove the usage of anonymous union within DYNARRAY_STRUCT.
- Use DYNARRAY_FREE instead of DYNARRAY_NAME (free) so that
Gnulib does not change 'free' to 'rpl_free'.
- Use __nonnull instead of __attribute__ ((nonnull ())).
- Use __attribute_maybe_unused__ instead of
__attribute__ ((unused, nonnull (1))).
- Use _Noreturn instead of __attribute__ ((noreturn)).
The only difference with gnulib is:
--- glibc
+++ gnulib
@@ -18,6 +18,7 @@
#include <dynarray.h>
#include <stdio.h>
+#include <stdlib.h>
void
__libc_dynarray_at_failure (size_t size, size_t index)
@@ -27,7 +28,6 @@
__snprintf (buf, sizeof (buf), "Fatal glibc error: "
"array index %zu not less than array length %zu\n",
index, size);
- __libc_fatal (buf);
#else
abort ();
#endif
This seems to be a wrong sync from gnulib (the code is used in the
loader and thus requires __libc_fatal instead of abort).
Checked on x86_64-linux-gnu.
I've updated copyright dates in glibc for 2021. This is the patch for
the changes not generated by scripts/update-copyrights and subsequent
build / regeneration of generated files. As well as the usual annual
updates, mainly dates in --version output (minus csu/version.c which
previously had to be handled manually but is now successfully updated
by update-copyrights), there is a small change to the copyright notice
in NEWS which should let NEWS get updated automatically next year.
Please remember to include 2021 in the dates for any new files added
in future (which means updating any existing uncommitted patches you
have that add new files to use the new copyright dates in them).
I used these shell commands:
../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")
and then ignored the output, which consisted of lines saying "FOO:
warning: copyright statement not found" for each of 6694 files FOO.
I then removed trailing white space from benchtests/bench-pthread-locks.c
and iconvdata/tst-iconv-big5-hkscs-to-2ucs4.c, to work around this
diagnostic from Savannah:
remote: *** pre-commit check failed ...
remote: *** error: lines with trailing whitespace found
remote: error: hook declined to update refs/heads/master
Similar to the fix 69fda43b8d, save and restore errno for the hook
functions used for MALLOC_CHECK_=3.
It fixes the malloc/tst-free-errno-mcheck regression.
Checked on x86_64-linux-gnu.
In the next release of POSIX, free must preserve errno
<https://www.austingroupbugs.net/view.php?id=385>.
Modify __libc_free to save and restore errno, so that
any internal munmap etc. syscalls do not disturb the caller's errno.
Add a test malloc/tst-free-errno.c (almost all by Bruno Haible),
and document that free preserves errno.
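A sketch of the change in __libc_free (simplified; the real function
also handles hooks and the mmap/arena split):

  void
  __libc_free (void *mem)
  {
    int err = errno;  /* free must leave errno unchanged on success.  */
    /* ... munmap_chunk or _int_free ... */
    __set_errno (err);
  }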
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The MTE patch to add malloc support incorrectly padded the size passed
to _int_realloc by SIZE_SZ when it ought to have passed just the
chunksize. Revert that bit of the change so that realloc works
correctly with MALLOC_CHECK_ set.
This also brings the realloc_check implementation back in sync with
libc_realloc.
This new variable allows various subsystems in glibc to run all or
some of their tests with MALLOC_CHECK_=3. This patch adds
infrastructure support for this variable as well as an implementation
in malloc/Makefile to allow running some of the tests with
MALLOC_CHECK_=3.
At present some tests in malloc/ have been excluded from the mcheck
tests either because they're specifically testing MALLOC_CHECK_ or
they are failing in master even without the Memory Tagging patches
that prompted this work. Some tests were reviewed and found to need
specific error points that MALLOC_CHECK_ defeats by terminating early
but a thorough review of all tests is needed to bring them into mcheck
coverage.
The following failures are seen in current master:
FAIL: malloc/tst-malloc-fork-deadlock-mcheck
FAIL: malloc/tst-malloc-stats-cancellation-mcheck
FAIL: malloc/tst-malloc-thread-fail-mcheck
FAIL: malloc/tst-realloc-mcheck
FAIL: malloc/tst-reallocarray-mcheck
All of these are due to the Memory Tagging patchset and will be fixed
separately.
This patch adds the basic support for memory tagging.
Various flavours are supported, particularly being able to turn on
tagged memory at run time: this allows the same code to be used on
systems where memory tagging support is not present, without needing
a separate build of glibc. Also, depending on whether the kernel
supports it, the code will use mmap for the default arena if morecore
does not, or cannot, support tagged memory (on AArch64 it is not
available).
All the hooks use function pointers to allow this to work without
needing ifuncs.
Reviewed-by: DJ Delorie <dj@redhat.com>
The secondary/non-primary/inner libc (loaded via dlmopen, LD_AUDIT,
static dlopen) must not use sbrk to allocate memory because that would
interfere with allocations in the outer libc. On Linux, this does not
matter because sbrk itself was changed to fail in secondary libcs.
_dl_addr occasionally shows up in profiles, but had to be used before
because __libc_multiple_libcs was unreliable. So this change achieves
a slight reduction in startup time.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
If the linked list of a tcache bin contains a loop, it causes an
infinite loop in _int_free when freeing the tcache. A PoC that
triggers such an infinite loop is in the Bugzilla entry (#27052). The
loop should terminate once it exceeds mp_.tcache_count, and the
program should abort. The affected glibc versions are 2.29 and
later.
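The fix can be sketched as bounding the tcache walk in _int_free
(simplified; REVEAL_PTR is the Safe-Linking decoding macro described
further down):

  size_t cnt = 0;
  for (tmp = tcache->entries[tc_idx];
       tmp != NULL;
       tmp = REVEAL_PTR (tmp->next), ++cnt)
    {
      if (cnt >= mp_.tcache_count)
        malloc_printerr ("free(): too many chunks detected in tcache");
      if (tmp == e)
        malloc_printerr ("free(): double free detected in tcache 2");
    }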
Reviewed-by: DJ Delorie <dj@redhat.com>
This provides the struct nss_module type, which combines the old
struct service_library type with the known_function tree, by
statically allocating space for all function pointers.
struct nss_module is fairly large (536 bytes), but it will be
shared across NSS databases. The old known_function handling
had some per-function overhead (at least 32 bytes per looked-up
function, and more for long function names), so overall this is not
too bad. Resolving all functions at load time simplifies locking,
and the repeated lookups should be fast because the caches are hot
at this point.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
The tls.h inclusion is not really required and limits possible
definitions in more arch-specific headers.
This is a cleanup to allow inline functions in sysdep.h, more
specifically on i386 and ia64, which require access to some tls
definitions of their own.
No semantic changes expected; checked with a build against all
affected ABIs.
malloc debug: fix a compile error when the macro MALLOC_DEBUG is set
to a value greater than 1. This is because commit e9c4fe93b3 changed
the struct malloc_chunk member "size" to "mchunk_size".
The reproduction is as follows:
step 1: modify the related Makefile:
vim ../glibc/malloc/Makefile
CPPFLAGS-malloc.o += -DMALLOC_DEBUG=2
step 2: ../configure --prefix=/usr
make -j32
This causes the compile error:
/home/liqingqing/glibc_upstream/buildglibc/malloc/malloc.o
In file included from malloc.c:1899:0:
arena.c: In function 'dump_heap':
arena.c:422:58: error: 'struct malloc_chunk' has no member named 'size'
fprintf (stderr, "chunk %p size %10lx", p, (long) p->size);
^~
arena.c:428:17: error: 'struct malloc_chunk' has no member named 'size'
else if (p->size == (0 | PREV_INUSE))
Reviewed-by: DJ Delorie <dj@redhat.com>
This patch adds the ABI-related bits to reflect the new mallinfo2
function, and adds a test case to verify basic functionality.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
It fixes the build issue below introduced by e3960d1c57 (Add
mallinfo2 function that supports sizes >= 4GB). It moves the
__MALLOC_DEPRECATED to the usual place for function attributes:
In file included from ../include/malloc.h:3,
from ../sysdeps/x86_64/multiarch/../../../test-skeleton.c:31,
from ../sysdeps/x86_64/multiarch/test-multiarch.c:96:
../malloc/malloc.h:118:1: error: empty declaration [-Werror]
118 | __MALLOC_DEPRECATED;
It also adds the required deprecated warning suppression on the tests.
Checked on x86_64-linux-gnu.
Sun RPC was removed from glibc. This includes the rpcgen program,
librpcsvc, and the Sun RPC headers. The test for bug #20790 (a test
for rpcgen) was also removed.
Backward compatibility for old programs is kept only for architectures
and ABIs that have been added in or before version 2.28.
libtirpc is mature enough, librpcsvc and rpcgen are provided in
rpcsvc-proto project.
NOTE: libnsl code depends on Sun RPC (installed libnsl headers use
installed Sun RPC headers), thus --enable-obsolete-rpc was a dependency
for --enable-obsolete-nsl (removed in a previous commit).
The arc ABI list file has to be updated because the port was added
with the sunrpc symbols.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
__morecore, __after_morecore_hook, and __default_morecore had not
been deprecated in commit 7d17596c19
("Mark malloc hook variables as deprecated"), probably by accident.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
The buffer allocation uses the same strategy as strsignal.
Checked on x86-64-linux-gnu, i686-linux-gnu, powerpc64le-linux-gnu,
and s390x-linux-gnu.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
The per-thread state is refactored to use two strategies:
1. The default one uses a TLS structure, which will be placed in the
static TLS space (using the __thread keyword).
2. Linux allocates it via struct pthread and accesses it through
THREAD_* macros.
The default strategy has the disadvantage of increasing the libc.so
static TLS consumption and thus decreasing the possible surplus used
in some scenarios (which might be mitigated by the BZ#25051 fix).
It is used only on Hurd, where accessing the thread storage in the
single-threaded case is not straightforward (afaiu; Hurd developers
could correct me here).
The fallback static allocation used on allocation failure is also
removed: defining its size is problematic without synchronizing with
translated messages (to avoid partial translations), and the resulting
usage is not thread-safe.
Checked on x86-64-linux-gnu, i686-linux-gnu, powerpc64le-linux-gnu,
and s390x-linux-gnu.
Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
The code for set_max_fast() stores an "impossibly small value"
instead of zero, when the parameter is zero. However, for
small values of the parameter (ex: 1 or 2) the computation
results in a zero being stored anyway.
This patch checks for the parameter being small enough for the
computation to result in zero instead, so that a zero is never
stored.
key values which result in zero being stored:
x86-64: 1..7 (or other 64-bit)
i686: 1..11
armhfp: 1..3 (or other 32-bit)
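A sketch of the adjusted macro, combining this change with the earlier
MIN_CHUNK_SIZE-based "impossibly small" value (described further down):

  #define set_max_fast(s)                                            \
    global_max_fast = (((size_t) (s) <= MALLOC_ALIGN_MASK - SIZE_SZ) \
                       ? MIN_CHUNK_SIZE / 2                          \
                       : ((s + SIZE_SZ) & ~MALLOC_ALIGN_MASK))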
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Adding the test "tst-safe-linking" for testing that Safe-Linking works
as expected. The test checks these 3 main flows:
* tcache protection
* fastbin protection
* malloc_consolidate() correctness
As there is a random chance of 1/16 that the alignment will remain
correct, the test checks each flow up to 10 times, using different
random values for the pointer corruption. As a result, the chance of a
false failure of a given tested flow is 2**(-40), thus highly
unlikely.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Alignment checks should be performed on the user's buffer and NOT
on the mchunkptr as was done before. This caused bugs in 32-bit
versions, because there 2*sizeof(size_t) != MALLOC_ALIGNMENT.
As the tcache works on users' buffers, it uses the aligned_OK()
check, while the rest work on mchunkptr and therefore check using
misaligned_chunk().
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Removed unneeded '\' chars from end of lines and fixed some
indentation issues that were introduced in the original
Safe-Linking patch.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Safe-Linking is a security mechanism that protects single-linked
lists (such as the fastbins and tcache) from being tampered with by
attackers.
The mechanism makes use of randomness from ASLR (mmap_base), and when
combined with chunk alignment integrity checks, it protects the "next"
pointers from being hijacked by an attacker.
While Safe-Unlinking protects double-linked lists (such as the small
bins), there wasn't any similar protection for attacks against
single-linked lists. This solution protects against 3 common attacks:
* Partial pointer override: modifies the lower bytes (Little Endian)
* Full pointer override: hijacks the pointer to an attacker's location
* Unaligned chunks: pointing the list to an unaligned address
The design assumes an attacker doesn't know where the heap is located,
and uses the ASLR randomness to "sign" the single-linked pointers. We
mark the pointer as P and the location in which it is stored as L, and
the calculation will be:
* PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
* *L = PROTECT(P)
This way, the random bits from the address L (which start at the bit
in the PAGE_SHIFT position), will be merged with LSB of the stored
protected pointer. This protection layer prevents an attacker from
modifying the pointer into a controlled value.
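In code, the scheme looks roughly like the macros the patch adds
(with PAGE_SHIFT hard-coded as 12 here):

  /* "Sign" a next pointer P with the location L it is stored at.  */
  #define PROTECT_PTR(pos, ptr) \
    ((__typeof (ptr)) ((((size_t) pos) >> 12) ^ ((size_t) ptr)))
  /* Recover the original pointer; &ptr is L.  */
  #define REVEAL_PTR(ptr)  PROTECT_PTR (&ptr, ptr)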
An additional check that the chunks are MALLOC_ALIGNed adds an
important layer:
* Attackers can't point to illegal (unaligned) memory addresses
* Attackers must guess correctly the alignment bits
On standard 32 bit Linux machines, an attack will directly fail 7
out of 8 times, and on 64 bit machines it will fail 15 out of 16
times.
This proposed patch was benchmarked and its effect on the overall
performance of the heap was negligible and couldn't be distinguished
from the default variance between tests on the vanilla version. A
similar protection was added to Chromium's version of TCMalloc
in 2012, and according to their documentation it had an overhead of
less than 2%.
Reviewed-by: DJ Delorie <dj@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
If the test fails due to some unexpected failure after the children
are created, either in the signal handler by calling abort or in the
main loop, the created children might not be killed properly.
This patch fixes it by:
* Avoiding aborting in the signal handler, by setting a flag that an
error has occurred and adding a check in the main loop.
* Adding an atexit handler to kill the child processes.
Checked on x86_64-linux-gnu.
pvalloc is guaranteed to round up the allocation size to the page
size, so applications can assume that the memory region is larger
than the passed-in argument. The alloc_size attribute cannot express
that.
The test case is based on a suggestion from Jakub Jelinek.
This fixes commit 9bf8e29ca1 ("malloc:
make malloc fail with requests larger than PTRDIFF_MAX (BZ#23741)").
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This patch moves the vDSO setup from libc to the loader code, just
after the vDSO link_map setup. For the static case, the initialization
is moved to _dl_non_dynamic_init instead.
Instead of using the mangled pointer, the vDSO data is set as
attribute_relro (on _rtld_global_ro for shared or _dl_vdso_* for
static). It is read-only even with partial relro.
It fixes BZ#24967 now that the vDSO pointer is setup earlier than
malloc interposition is called.
Also, vDSO calls should not be a problem for static dlopen as
indicated by BZ#20802. The vDSO pointer would be zero-initialized
and the syscall will be issued instead.
Checked on x86_64-linux-gnu, i686-linux-gnu, aarch64-linux-gnu,
arm-linux-gnueabihf, powerpc64le-linux-gnu, powerpc64-linux-gnu,
powerpc-linux-gnu, s390x-linux-gnu, sparc64-linux-gnu, and
sparcv9-linux-gnu. I also ran some tests on mips.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
I've updated copyright dates in glibc for 2020. This is the patch for
the changes not generated by scripts/update-copyrights and subsequent
build / regeneration of generated files. As well as the usual annual
updates, mainly dates in --version output (minus libc.texinfo which
previously had to be handled manually but is now successfully updated
by update-copyrights), there is a fix to
sysdeps/unix/sysv/linux/powerpc/bits/termios-c_lflag.h where a typo in
the copyright notice meant it failed to be updated automatically.
Please remember to include 2020 in the dates for any new files added
in future (which means updating any existing uncommitted patches you
have that add new files to use the new copyright dates in them).
do_set_tcache_max, do_set_mxfast:
Fix two instances of comparing "size_t < 0".
Both cases have an upper limit, so the "negative value" case
is already handled via overflow semantics.
do_set_tcache_max, do_set_tcache_count:
Fix the return value on error. Note: currently not used.
mallopt:
Pass the return value of the helper functions to the user. Behavior
should only actually change for mxfast, where we restore the old
(pre-tunables) behavior.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
set_max_fast sets the "impossibly small" value based on,
eventually, MALLOC_ALIGNMENT. The comparisons for the smallest
chunk use, eventually, MIN_CHUNK_SIZE. Note that i386
is the only platform where these are the same, so a smallest
chunk *would* be put in a no-fastbins fastbin.
This change calculates the "impossibly small" value
based on MIN_CHUNK_SIZE instead, so that we can know it will
always be impossibly small.
Fixes <total type="rest" size="..."> incorrectly showing as 0 most
of the time.
The rest value being wrong is significant because to compute the
actual amount of memory handed out via malloc, the user must subtract
it from <system type="current" size="...">. That result being wrong
makes investigating memory fragmentation issues like
<https://bugzilla.redhat.com/show_bug.cgi?id=843478> close to
impossible.
memusagestat may indirectly link against libpthread. The built
libpthread should be used, but that is only possible if it has been
built before the malloc programs.
GCC mainline has recently added warn_unused_result attributes to some
malloc-like built-in functions, where glibc previously had them in its
headers only for __USE_FORTIFY_LEVEL > 0. This results in those
attributes being newly in effect for building the glibc testsuite, so
resulting in new warnings that break the build where tests
deliberately call such functions and ignore the result. Thus patch
duly adds calls to DIAG_* macros around those calls to disable the
warning.
Tested with build-many-glibcs.py for aarch64-linux-gnu.
* malloc/tst-calloc.c: Include <libc-diag.h>.
(null_test): Ignore -Wunused-result around calls to calloc.
* malloc/tst-mallocfork.c: Include <libc-diag.h>.
(do_test): Ignore -Wunused-result around call to malloc.
Change the tcache->counts[] entries to uint16_t - this removes
the limit set by char and allows a larger tcache. Remove a few
redundant asserts.
bench-malloc-thread with 4 threads is ~15% faster on Cortex-A72.
Reviewed-by: DJ Delorie <dj@redhat.com>
* malloc/malloc.c (MAX_TCACHE_COUNT): Increase to UINT16_MAX.
(tcache_put): Remove redundant assert.
(tcache_get): Remove redundant asserts.
(__libc_malloc): Check tcache count is not zero.
* manual/tunables.texi (glibc.malloc.tcache_count): Update maximum.
The tcache counts[] array is a char, which has a very small range and
thus may overflow. When setting the tcache_count tunable, there is no
overflow check. However, the tunable must not be larger than the
maximum value of the tcache counts[] array; otherwise it can overflow
when filling the tcache.
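The fix can be sketched as a clamped update, with MAX_TCACHE_COUNT set
to the largest value the counts[] element type can hold:

  static __always_inline int
  do_set_tcache_count (size_t value)
  {
    /* Ignore requests that could overflow a per-bin counter.  */
    if (value <= MAX_TCACHE_COUNT)
      mp_.tcache_count = value;
    return 1;
  }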
[BZ #24531]
* malloc/malloc.c (MAX_TCACHE_COUNT): New define.
(do_set_tcache_count): Only update if count is small enough.
* manual/tunables.texi (glibc.malloc.tcache_count): Document max value.
This synchronization method has a lower overhead and makes
it more likely that the signal arrives during one of the critical
functions.
Also test for fork deadlocks explicitly.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
The memusagestat is the only binary that has its own link line which
causes it to be linked against the existing installed C library. It
has been this way since it was originally committed in 1999, but I
don't see any reason as to why. Since we want all the programs we
build locally to be against the new copy of glibc, change the build
to be like all other programs.
Remove do_set_mallopt_check prototype since it is unused.
* malloc/arena.c (do_set_mallopt_check): Removed.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: DJ Delorie <dj@redhat.com>
As discussed previously on libc-alpha [1], this patch follows up on the
idea and adds both the __attribute_alloc_size__ on the malloc functions
(malloc, calloc, realloc, reallocarray, valloc, pvalloc, and memalign)
and limits the maximum requested allocation size to PTRDIFF_MAX (taking
into consideration internal padding and alignment).
This aligns glibc with gcc expected size defined by default warning
-Walloc-size-larger-than value which warns for allocation larger than
PTRDIFF_MAX. It also aligns with gcc expectation regarding libc and
expected size, such as described in PR#67999 [2] and previously discussed
ISO C11 issues [3] on libc-alpha.
From the RFC thread [4] and previous discussion, it seems that consensus
is only to limit such requested size for malloc functions, not the system
allocation one (mmap, sbrk, etc.).
The implementation changes checked_request2size to check for both overflow
and maximum object size up to PTRDIFF_MAX. No additional checks are done
on sysmalloc, so it can still issue mmap with values larger than
PTRDIFF_MAX depending on the requested size.
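The changed helper can be sketched as follows (simplified;
request2size adds the internal padding and alignment):

  /* Returns false when the request cannot be satisfied without
     exceeding PTRDIFF_MAX; callers then fail with ENOMEM.  */
  static inline bool
  checked_request2size (size_t req, size_t *sz)
  {
    if (__glibc_unlikely (req > PTRDIFF_MAX))
      return false;
    *sz = request2size (req);
    return true;
  }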
The __attribute_alloc_size__ is for functions that return a pointer
only, which means it cannot be applied to posix_memalign (see remarks
in GCC PR#87683 [5]). The runtime checks limiting the maximum
requested allocation size do apply to posix_memalign.
Checked on x86_64-linux-gnu and i686-linux-gnu.
[1] https://sourceware.org/ml/libc-alpha/2018-11/msg00223.html
[2] https://gcc.gnu.org/bugzilla//show_bug.cgi?id=67999
[3] https://sourceware.org/ml/libc-alpha/2011-12/msg00066.html
[4] https://sourceware.org/ml/libc-alpha/2018-11/msg00224.html
[5] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87683
[BZ #23741]
* malloc/hooks.c (malloc_check, realloc_check): Use
__builtin_add_overflow on overflow check and adapt to
checked_request2size change.
* malloc/malloc.c (__libc_malloc, __libc_realloc, _mid_memalign,
__libc_pvalloc, __libc_calloc, _int_memalign): Limit maximum
allocation size to PTRDIFF_MAX.
(REQUEST_OUT_OF_RANGE): Remove macro.
(checked_request2size): Change to inline function and limit maximum
requested size to PTRDIFF_MAX.
(__libc_malloc, __libc_realloc, _int_malloc, _int_memalign): Limit
maximum allocation size to PTRDIFF_MAX.
(_mid_memalign): Use _int_memalign call for overflow check.
(__libc_pvalloc): Use __builtin_add_overflow on overflow check.
(__libc_calloc): Use __builtin_mul_overflow for overflow check and
limit maximum requested size to PTRDIFF_MAX.
* malloc/malloc.h (malloc, calloc, realloc, reallocarray, memalign,
valloc, pvalloc): Add __attribute_alloc_size__.
* stdlib/stdlib.h (malloc, realloc, reallocarray, valloc): Likewise.
* malloc/tst-malloc-too-large.c (do_test): Add check for allocation
larger than PTRDIFF_MAX.
* malloc/tst-memalign.c (do_test): Disable -Walloc-size-larger-than=
around tests of malloc with negative sizes.
* malloc/tst-posix_memalign.c (do_test): Likewise.
* malloc/tst-pvalloc.c (do_test): Likewise.
* malloc/tst-valloc.c (do_test): Likewise.
* malloc/tst-reallocarray.c (do_test): Replace call to reallocarray
with resulting size allocation larger than PTRDIFF_MAX with
reallocarray_nowarn.
(reallocarray_nowarn): New function.
* NEWS: Mention the malloc function semantic change.
If an error occurs during the tracing operation, particularly during a
call to lock_and_info() which calls _dl_addr, we may end up calling back
into the malloc-subsystem and relock the loader lock and deadlock. For
all intents and purposes the call to _dl_addr can call any of the malloc
family API functions and so we should disable all tracing before calling
such loader functions. This is similar to the strategy that the new
malloc tracer takes when calling the real malloc, namely that all
tracing ceases at the boundary to the real function and any faults at
that point are the purview of the library (though the new tracer does
this on a per-thread basis in an MT-safe fashion). Since the new tracer
and the hook deprecation are not yet complete we must fix these issues
where we can.
Tested on x86_64 with no regressions.
Co-authored-by: Kwok Cheung Yeung <kcy@codesourcery.com>
Reviewed-by: DJ Delorie <dj@redhat.com>
Fixes bug 24216. This patch adds security checks for the bk and
bk_nextsize pointers of chunks in a large bin when inserting a chunk
from the unsorted bin. It was possible to write the pointer to victim
(the newly inserted chunk) to arbitrary memory locations if the bk or
bk_nextsize pointers of the next large bin chunk got corrupted.
One group of warnings seen with -Wextra is warnings for static or
inline not at the start of a declaration (-Wold-style-declaration).
This patch fixes various such cases for inline, ensuring it comes at
the start of the declaration (after any static). A common case of the
fix is "static inline <type> __always_inline"; the definition of
__always_inline starts with __inline, so the natural change is to
"static __always_inline <type>". Other cases of the warning may be
harder to fix (one pattern is a function definition that gets
rewritten to be static by an including file, "#define funcname static
wrapped_funcname" or similar), but it seems worth fixing these cases
with inline anyway.
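For example, the tunable helpers in malloc.c change along these lines:

  /* before */ static inline int __always_inline do_set_top_pad (size_t value);
  /* after  */ static __always_inline int do_set_top_pad (size_t value);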
Tested for x86_64.
* elf/dl-load.h (_dl_postprocess_loadcmd): Use __always_inline
before return type, without separate inline.
* elf/dl-tunables.c (maybe_enable_malloc_check): Likewise.
* elf/dl-tunables.h (tunable_is_name): Likewise.
* malloc/malloc.c (do_set_trim_threshold): Likewise.
(do_set_top_pad): Likewise.
(do_set_mmap_threshold): Likewise.
(do_set_mmaps_max): Likewise.
(do_set_mallopt_check): Likewise.
(do_set_perturb_byte): Likewise.
(do_set_arena_test): Likewise.
(do_set_arena_max): Likewise.
(do_set_tcache_max): Likewise.
(do_set_tcache_count): Likewise.
(do_set_tcache_unsorted_limit): Likewise.
* nis/nis_subr.c (count_dots): Likewise.
* nptl/allocatestack.c (advise_stack_range): Likewise.
* sysdeps/ieee754/dbl-64/s_sin.c (do_cos): Likewise.
(do_sin): Likewise.
(reduce_sincos): Likewise.
(do_sincos): Likewise.
* sysdeps/unix/sysv/linux/x86/elision-conf.c
(do_set_elision_enable): Likewise.
(TUNABLE_CALLBACK_FNDECL): Likewise.
One of the warnings that appears with -Wextra is "ordered comparison
of pointer with integer zero" in malloc.c:tcache_get, for the
assertion:
assert (tcache->entries[tc_idx] > 0);
Indeed, a "> 0" comparison does not make sense for
tcache->entries[tc_idx], which is a pointer. My guess is that
tcache->counts[tc_idx] is what's intended here, and this patch changes
the assertion accordingly.
Tested for x86_64.
* malloc/malloc.c (tcache_get): Compare tcache->counts[tc_idx]
with 0, not tcache->entries[tc_idx].
Commit 6923f6db1e ("malloc: Use current
(C11-style) atomics for fastbin access") caused a substantial
performance regression on POWER and Aarch64, and the old atomics,
while hard to prove correct, seem to work in practice.
This commit removes the custom memcpy implementation from _int_realloc
for small chunk sizes. The ncopies variable has the wrong type, and
an integer wraparound could cause the existing code to copy too few
elements (leaving the new memory region mostly uninitialized).
Therefore, removing this code fixes bug 24027.
This one tests for BZ#23907 where the double free
test didn't check the tcache bin bounds before dereferencing
the bin.
[BZ #23907]
* malloc/tst-tcfree3.c: New.
* malloc/Makefile: Add it.
The previous check could read beyond the end of the tcache entry
array. If the e->key == tcache cookie check happened to pass, this
would result in crashes.
This commit is in preparation of turning the macro into a proper
function. The output arguments of the macro were in fact unused.
Also clean up uses of __builtin_expect.
On Thu, Jan 11, 2018 at 3:50 PM, Florian Weimer <fweimer@redhat.com> wrote:
> On 11/07/2017 04:27 PM, Istvan Kurucsai wrote:
>>
>> + next = chunk_at_offset (victim, size);
>
>
> For new code, we prefer declarations with initializers.
Noted.
>> +  if (__glibc_unlikely (chunksize_nomask (victim) <= 2 * SIZE_SZ)
>> +      || __glibc_unlikely (chunksize_nomask (victim) > av->system_mem))
>> +    malloc_printerr("malloc(): invalid size (unsorted)");
>> +  if (__glibc_unlikely (chunksize_nomask (next) < 2 * SIZE_SZ)
>> +      || __glibc_unlikely (chunksize_nomask (next) > av->system_mem))
>> +    malloc_printerr("malloc(): invalid next size (unsorted)");
>> +  if (__glibc_unlikely ((prev_size (next) & ~(SIZE_BITS)) != size))
>> +    malloc_printerr("malloc(): mismatching next->prev_size (unsorted)");
>
>
> I think this check is redundant because prev_size (next) and chunksize
> (victim) are loaded from the same memory location.
I'm fairly certain that it compares mchunk_size of victim against
mchunk_prev_size of the next chunk, i.e. the size of victim in its
header and footer.
>> +  if (__glibc_unlikely (bck->fd != victim)
>> +      || __glibc_unlikely (victim->fd != unsorted_chunks (av)))
>> +    malloc_printerr("malloc(): unsorted double linked list corrupted");
>> +  if (__glibc_unlikely (prev_inuse(next)))
>> +    malloc_printerr("malloc(): invalid next->prev_inuse (unsorted)");
>
>
> There's a missing space after malloc_printerr.
Noted.
> Why do you keep using chunksize_nomask? We never investigated why the
> original code uses it. It may have been an accident.
You are right, I don't think it makes a difference in these checks. So
the size local can be reused for the checks against victim. For next,
leaving it as such avoids the masking operation.
> Again, for non-main arenas, the checks against av->system_mem could be made
> tighter (against the heap size). Maybe you could put the condition into a
> separate inline function?
We could also do a chunk boundary check similar to what I proposed in
the thread for the first patch in the series to be even more strict.
I'll gladly try to implement either but believe that refining these
checks would bring less benefits than in the case of the top chunk.
Intra-arena or intra-heap overlaps would still be doable here with
unsorted chunks and I don't see any way to counter that besides more
generic measures like randomizing allocations and your metadata
encoding patches.
I've attached a revised version with the above comments incorporated
but without the refined checks.
Thanks,
Istvan
From a12d5d40fd7aed5fa10fc444dcb819947b72b315 Mon Sep 17 00:00:00 2001
From: Istvan Kurucsai <pistukem@gmail.com>
Date: Tue, 16 Jan 2018 14:48:16 +0100
Subject: [PATCH v2 1/1] malloc: Additional checks for unsorted bin integrity I.
Ensure the following properties of chunks encountered during binning:
- victim chunk has reasonable size
- next chunk has reasonable size
- next->prev_size == victim->size
- valid double linked list
- PREV_INUSE of next chunk is unset
* malloc/malloc.c (_int_malloc): Additional binning code checks.
The House of Force is a well-known technique to exploit heap
overflow. In essence, this exploit takes three steps:
1. Overwrite the size of the top chunk with a very large value (e.g. -1).
2. Request x bytes from the top chunk. As the size of the top chunk
is corrupted, x can be arbitrarily large and the top chunk will
still be offset by x.
3. The next allocation from the top chunk will thus be controllable.
If we verify the size of the top chunk at step 2, we can stop such an
attack.
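The verification can be sketched as follows (simplified from the
use-top path in _int_malloc):

  victim = av->top;
  size = chunksize (victim);
  /* A top chunk larger than the memory the arena ever obtained from
     the system can only be the result of corruption.  */
  if (__glibc_unlikely (size > av->system_mem))
    malloc_printerr ("malloc(): corrupted top size");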
The __libc_freeres framework does not extend to non-libc.so objects.
This causes problems in general for valgrind and mtrace detecting
unfreed objects in both libdl.so and libpthread.so. This change is
a pre-requisite to properly moving the malloc hooks out of malloc
since such a move now requires precise accounting of all allocated
data before destructors are run.
This commit adds a proper hook in libc.so.6 for both libdl.so and
libpthread.so; this ensures that shm-directory.c, which uses
freeit () to free memory, is handled properly. We also remove the
nptl_freeres hook and fall back to using the weak-ref-and-check idiom
for a loaded libpthread.so, thus making this process similar for
all DSOs.
Lastly we follow best practice and use explicit free calls for
both libdl.so and libpthread.so instead of the generic hook process
which has undefined order.
Tested on x86_64 with no regressions.
Signed-off-by: DJ Delorie <dj@redhat.com>
Signed-off-by: Carlos O'Donell <carlos@redhat.com>
This patch mechanically removes all remaining uses, and the
definitions, of the following libio name aliases:
name replaced with
---- -------------
_IO_FILE FILE
_IO_fpos_t __fpos_t
_IO_fpos64_t __fpos64_t
_IO_size_t size_t
_IO_ssize_t ssize_t or __ssize_t
_IO_off_t off_t
_IO_off64_t off64_t
_IO_pid_t pid_t
_IO_uid_t uid_t
_IO_wint_t wint_t
_IO_va_list va_list or __gnuc_va_list
_IO_BUFSIZ BUFSIZ
_IO_cookie_io_functions_t cookie_io_functions_t
__io_read_fn cookie_read_function_t
__io_write_fn cookie_write_function_t
__io_seek_fn cookie_seek_function_t
__io_close_fn cookie_close_function_t
I used __fpos_t and __fpos64_t instead of fpos_t and fpos64_t because
the definitions of fpos_t and fpos64_t depend on the largefile mode.
I used __ssize_t and __gnuc_va_list in a handful of headers where
namespace cleanliness might be relevant even though they're
internal-use-only. In all other cases, I used the public-namespace
name.
There are a tiny handful of places where I left a use of 'struct _IO_FILE'
alone, because it was being used together with 'struct _IO_FILE_plus'
or 'struct _IO_FILE_complete' in the same arithmetic expression.
Because this patch was almost entirely done with search and replace, I
may have introduced indentation botches. I did proofread the diff,
but I may have missed something.
The ChangeLog below calls out all of the places where this was not a
pure search-and-replace change.
Installed stripped libraries and executables are unchanged by this patch,
except that some assertions in vfscanf.c change line numbers.
* libio/libio.h (_IO_FILE): Delete; all uses changed to FILE.
(_IO_fpos_t): Delete; all uses changed to __fpos_t.
(_IO_fpos64_t): Delete; all uses changed to __fpos64_t.
(_IO_size_t): Delete; all uses changed to size_t.
(_IO_ssize_t): Delete; all uses changed to ssize_t or __ssize_t.
(_IO_off_t): Delete; all uses changed to off_t.
(_IO_off64_t): Delete; all uses changed to off64_t.
(_IO_pid_t): Delete; all uses changed to pid_t.
(_IO_uid_t): Delete; all uses changed to uid_t.
(_IO_wint_t): Delete; all uses changed to wint_t.
(_IO_va_list): Delete; all uses changed to va_list or __gnuc_va_list.
(_IO_BUFSIZ): Delete; all uses changed to BUFSIZ.
(_IO_cookie_io_functions_t): Delete; all uses changed to
cookie_io_functions_t.
(__io_read_fn): Delete; all uses changed to cookie_read_function_t.
(__io_write_fn): Delete; all uses changed to cookie_write_function_t.
(__io_seek_fn): Delete; all uses changed to cookie_seek_function_t.
(__io_close_fn): Delete: all uses changed to cookie_close_function_t.
* libio/iofopncook.c: Remove unnecessary forward declarations.
* libio/iolibio.h: Correct outdated commentary.
* malloc/malloc.c (__malloc_stats): Remove unnecessary casts.
* stdio-common/fxprintf.c (__fxprintf_nocancel):
Remove unnecessary casts.
* stdio-common/getline.c: Use _IO_getdelim directly.
Don't redefine ssize_t.
* stdio-common/printf_fp.c, stdio_common/printf_fphex.c
* stdio-common/printf_size.c: Don't redefine size_t or FILE.
Remove outdated comments.
* stdio-common/vfscanf.c: Don't redefine va_list.
malloc_stats means to disable cancellation for writes to stderr while
it runs, but it restores stderr->_flags2 with |= instead of =, so what
it actually does is disable cancellation on stderr permanently.
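The fix amounts to restoring the saved flags with a plain assignment;
as a sketch:

  int old_flags2 = stderr->_flags2;
  stderr->_flags2 |= _IO_FLAGS2_NOTCANCEL;
  /* ... print the statistics ... */
  stderr->_flags2 = old_flags2;  /* was: stderr->_flags2 |= old_flags2 */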
[BZ #22830]
* malloc/malloc.c (__malloc_stats): Restore stderr->_flags2
correctly.
* malloc/tst-malloc-stats-cancellation.c: New test case.
* malloc/Makefile: Add new test case.
This avoids assert definition conflicts if some of the headers used by
malloc.c happens to include assert.h. Malloc still needs a malloc-avoiding
implementation, which we get by redirecting __assert_fail to malloc's
__malloc_assert.
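The redirect can be sketched as:

  #include <assert.h>

  /* Route assertion failures to malloc's own, malloc-avoiding
     reporting function instead of the libc __assert_fail.  */
  #ifndef NDEBUG
  # define __assert_fail(assertion, file, line, function) \
      __malloc_assert (assertion, file, line, function)
  #endif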
* malloc/malloc.c: Include <assert.h>.
(assert): Do not define.
[!defined NDEBUG] (__assert_fail): Define to __malloc_assert.
When posix_memalign is called with an alignment less than MALLOC_ALIGNMENT
and a requested size close to SIZE_MAX, it falls back to malloc code
(because the alignment of a block returned by malloc is sufficient to
satisfy the call). In this case, an integer overflow in _int_malloc leads
to posix_memalign incorrectly returning successfully.
Upon fixing this and writing a somewhat thorough regression test, it was
discovered that when posix_memalign is called with an alignment larger than
MALLOC_ALIGNMENT (so it uses _int_memalign instead) and a requested size
close to SIZE_MAX, a different integer overflow in _int_memalign leads to
posix_memalign incorrectly returning successfully.
Both integer overflows affect other memory allocation functions that use
_int_malloc (one affected malloc in x86) or _int_memalign as well.
This commit fixes both integer overflows. In addition to this, it adds a
regression test to guard against false successful allocations by the
following memory allocation functions when called with too-large allocation
sizes and, where relevant, various valid alignments:
malloc, realloc, calloc, reallocarray, memalign, posix_memalign,
aligned_alloc, valloc, and pvalloc.
This patch increases timeouts on three tests I observed timing out on
slow systems.
* malloc/tst-malloc-tcache-leak.c (TIMEOUT): Define to 50.
* posix/tst-glob-tilde.c (TIMEOUT): Define to 200.
* resolv/tst-resolv-res_ninit.c (TIMEOUT): Define to 50.
POSIX explicitly says that applications should check errno only after
failure, so the errno value can be clobbered on success as long as it
is not set to zero.
Changelog:
[BZ #22611]
* malloc/tst-realloc.c (do_test): Remove the test checking that errno
is unchanged on success.
When the per-thread cache is enabled, __libc_malloc uses request2size (which
does not perform an overflow check) to calculate the chunk size from the
requested allocation size. This leads to an integer overflow causing malloc
to incorrectly return the last successfully allocated block when called with
a very large size argument (close to SIZE_MAX).
This commit uses checked_request2size instead, removing the overflow.
It does not make sense to register separate cleanup functions for arena
and tcache since they're always going to be called together. Call the
tcache cleanup function from within arena_thread_freeres since it at
least makes the order of those cleanups clear in the code.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Update all sourceware links to https. The website redirects
everything to https anyway so let the web server do a bit less work.
The only reference that remains unchanged is the one in the old
ChangeLog, since it didn't seem worth changing it.
* NEWS: Update sourceware link to https.
* configure.ac: Likewise.
* crypt/md5test-giant.c: Likewise.
* dlfcn/bug-atexit1.c: Likewise.
* dlfcn/bug-atexit2.c: Likewise.
* localedata/README: Likewise.
* malloc/tst-mallocfork.c: Likewise.
* manual/install.texi: Likewise.
* nptl/tst-pthread-getattr.c: Likewise.
* stdio-common/tst-fgets.c: Likewise.
* stdio-common/tst-fwrite.c: Likewise.
* sunrpc/Makefile: Likewise.
* sysdeps/arm/armv7/multiarch/memcpy_impl.S: Likewise.
* wcsmbs/tst-mbrtowc2.c: Likewise.
* configure: Regenerate.
* INSTALL: Regenerate.
This commit adds a "subheaps" field to the malloc_info output that
shows the number of heaps that were allocated to extend a non-main
arena.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
This patch adds a single-threaded fast path to malloc, realloc,
calloc and memalloc. When we're single-threaded, we can bypass
arena_get (which always locks the arena it returns) and just use
the main arena. Also avoid retrying a different arena since
there is just the main arena.
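In __libc_malloc the fast path looks roughly like this (simplified):

  if (SINGLE_THREAD_P)
    {
      /* No locking and no retry logic: use the main arena directly.  */
      victim = _int_malloc (&main_arena, bytes);
      assert (!victim || chunk_is_mmapped (mem2chunk (victim))
              || &main_arena == arena_for_chunk (mem2chunk (victim)));
      return victim;
    }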
* malloc/malloc.c (__libc_malloc): Add SINGLE_THREAD_P path.
(__libc_realloc): Likewise.
(_mid_memalign): Likewise.
(__libc_calloc): Likewise.
This patch adds single-threaded fast paths to _int_free.
Bypass the explicit locking for larger allocations.
* malloc/malloc.c (_int_free): Add SINGLE_THREAD_P fast paths.
This patch fixes a deadlock in the fastbin consistency check.
If we fail the fast check due to concurrent modifications to
the next chunk or system_mem, we should not lock if we already
have the arena lock. Simplify the check to make it obviously
correct.
* malloc/malloc.c (_int_free): Fix deadlock bug in consistency check.
The current malloc initialization is quite convoluted. Instead of
sometimes calling malloc_consolidate from ptmalloc_init, call
malloc_init_state early so that the main_arena is always initialized.
The special initialization can now be removed from malloc_consolidate.
This also fixes BZ #22159.
Check all calls to malloc_consolidate and remove calls that are
redundant initialization after ptmalloc_init, like in int_mallinfo
and __libc_mallopt (but keep the latter as consolidation is required for
set_max_fast). Update comments to improve clarity.
Remove impossible initialization check from _int_malloc, fix assert
in do_check_malloc_state to ensure arena->top != 0. Fix the obvious bugs
in do_check_free_chunk and do_check_remalloced_chunk to enable single
threaded malloc debugging (do_check_malloc_state is not thread safe!).
[BZ #22159]
* malloc/arena.c (ptmalloc_init): Call malloc_init_state.
* malloc/malloc.c (do_check_free_chunk): Fix build bug.
(do_check_remalloced_chunk): Fix build bug.
(do_check_malloc_state): Add assert that checks arena->top.
(malloc_consolidate): Remove initialization.
(int_mallinfo): Remove call to malloc_consolidate.
(__libc_mallopt): Clarify why malloc_consolidate is needed.
Currently free typically uses 2 atomic operations per call. The have_fastchunks
flag indicates whether there are recently freed blocks in the fastbins. This
is purely an optimization to avoid calling malloc_consolidate too often and
avoiding the overhead of walking all fast bins even if all are empty during a
sequence of allocations. However using catomic_or to update the flag is
completely unnecessary since it can be changed into a simple boolean and
accessed using relaxed atomics. There is no change in multi-threaded behaviour
given the flag is already approximate (it may be set when there are no blocks in
any fast bins, or it may be clear when there are free blocks that could be
consolidated).
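After the change the flag is a plain boolean member of malloc_state,
accessed roughly as follows:

  /* The flag is approximate by design, so relaxed atomics suffice.  */
  static inline bool
  have_fastchunks (mstate av)
  {
    return atomic_load_relaxed (&av->have_fastchunks);
  }

  /* Setting and clearing, e.g. in _int_free / malloc_consolidate:  */
  atomic_store_relaxed (&av->have_fastchunks, true);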
Performance of malloc/free improves by 27% on a simple benchmark on AArch64
(both single and multithreaded). The number of load/store exclusive instructions
is reduced by 33%. Bench-malloc-thread speeds up by ~3% in all cases.
* malloc/malloc.c (FASTCHUNKS_BIT): Remove.
(have_fastchunks): Remove.
(clear_fastchunks): Remove.
(set_fastchunks): Remove.
(malloc_state): Add have_fastchunks.
(malloc_init_state): Use have_fastchunks.
(do_check_malloc_state): Remove incorrect invariant checks.
(_int_malloc): Use have_fastchunks.
(_int_free): Likewise.
(malloc_consolidate): Likewise.
The functions tcache_get and tcache_put show up in profiles as they
are a critical part of the tcache code. Inline them to give tcache
a 16% performance gain. Since this improves multi-threaded cases
as well, it helps offset any potential performance loss due to adding
single-threaded fast paths.
* malloc/malloc.c (tcache_put): Inline.
(tcache_get): Inline.
Since glibc 2.24, __malloc_initialize_hook is a compat symbol. As a
result, the link editor does not export a definition of
__malloc_initialize_hook from the main program, so that it no longer
interposes the variable definition in libc.so. Specifying the symbol
version restores the exported symbol.
realloc_check has
unsigned char *magic_p;
...
__libc_lock_lock (main_arena.mutex);
const mchunkptr oldp = mem2chunk_check (oldmem, &magic_p);
__libc_lock_unlock (main_arena.mutex);
if (!oldp)
malloc_printerr ("realloc(): invalid pointer");
...
if (newmem == NULL)
*magic_p ^= 0xFF;
with
static void malloc_printerr(const char *str) __attribute__ ((noreturn));
GCC 7 -O3 warns
hooks.c: In function ‘realloc_check’:
hooks.c:352:14: error: ‘magic_p’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
*magic_p ^= 0xFF;
due to the GCC bug:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82090
This patch silences GCC 7 by using DIAG_IGNORE_NEEDS_COMMENT.
[BZ #22052]
* malloc/hooks.c (realloc_check): Use DIAG_IGNORE_NEEDS_COMMENT
to silence -O3 -Wall warning with GCC 7.
The malloc tcache added in 2.26 will leak all of the elements remaining
in the cache and the cache structure itself when a thread exits. The
defect is that we do not set tcache_shutting_down early enough, and the
thread simply recreates the tcache and places the elements back onto a
new tcache which is subsequently lost as the thread exits (unfreed
memory). The fix is relatively simple, move the setting of
tcache_shutting_down earlier in tcache_thread_freeres. We add a test
case which uses mallinfo and some heuristics to look for unaccounted for
memory usage between the start and end of a thread start/join loop. It
is very reliable at detecting that there is a leak given the number of
iterations. Without the fix the test will consume 122MiB of leaked
memory.
Problem reported by Florian Weimer [1] and solution suggested by
Andreas Schwab [2]. It also sets the same buffer size independent
of the architecture's max_align_t size.
Checked on x86_64-linux-gnu and i686-linux-gnu.
* lib/malloc/scratch_buffer.h (struct scratch_buffer):
Use a union instead of a max_align_t array for __space,
so that __space is the same size on all platforms.
* malloc/scratch_buffer_grow_preserve.c
(__libc_scratch_buffer_grow_preserve): Likewise.
[1] https://sourceware.org/ml/libc-alpha/2017-09/msg00693.html
[2] https://sourceware.org/ml/libc-alpha/2017-09/msg00695.html