This is a workaround (hack) for a gcc optimization issue (PR 99551).
Without this, the generated code may evaluate the expression in the
cold path, which causes a performance regression for small allocations
in the memory-tagging-disabled (common) case.
Reviewed-by: DJ Delorie <dj@redhat.com>
The internal _mid_memalign already returns newly tagged memory.
(__libc_memalign and posix_memalign already relied on this; this
patch fixes the other call sites.)
Reviewed-by: DJ Delorie <dj@redhat.com>
The previous patch ensured that all chunk-to-mem computations use
chunk2rawmem, so now we can rename it to chunk2mem, and in the few
cases where the tag of mem is relevant, chunk2mem_tag can be used.
Replaced tag_at (chunk2rawmem (x)) with chunk2mem_tag (x).
Renamed chunk2rawmem to chunk2mem.
Reviewed-by: DJ Delorie <dj@redhat.com>
The difference between chunk2mem and chunk2rawmem is that the latter
does not get the memory tag for the returned pointer. It turns out
chunk2rawmem almost always works:
The input of chunk2mem is a chunk pointer that is untagged, so it can
be used to access the chunk header. All memory that is not
user-allocated heap memory is untagged, which in the current
implementation means that it has the 0 tag, but this patch does not
rely on the tag value. The patch relies on the fact that chunk
operations are either done on untagged chunks or without memory access
to the user-owned part.
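For illustration, the relationship between the two macros can be
sketched as follows (simplified; CHUNK_HDR_SZ and tag_at as in
malloc.c, the exact definitions may differ):

  /* Sketch only: convert a chunk pointer to a user pointer.  The plain
     variant does no tag computation; the _tag variant also derives the
     memory tag for the returned pointer.  */
  #define chunk2mem(p)      ((void *) ((char *) (p) + CHUNK_HDR_SZ))
  #define chunk2mem_tag(p)  ((void *) tag_at ((char *) (p) + CHUNK_HDR_SZ))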
Internal interface contracts:
sysmalloc: Returns untagged memory.
_int_malloc: Returns untagged memory.
_int_free: Takes untagged memory.
_int_memalign: Returns untagged memory.
_int_realloc: Takes and returns tagged memory.
So only _int_realloc and functions outside this list need care.
Alignment checks do not need the right tag and tcache works with
untagged memory.
tag_at was kept in realloc after an mremap, which is not strictly
necessary, since the pointer is only used to retag the memory, but this
way the tag is guaranteed to be different from the old tag.
Reviewed-by: DJ Delorie <dj@redhat.com>
The comment explained why a different tag is used after mremap, but
for that a correctly tagged pointer must be passed to tag_new_usable.
Use chunk2mem to get the tag.
Reviewed-by: DJ Delorie <dj@redhat.com>
This is a pure refactoring change that does not affect behaviour.
The CHUNK_AVAILABLE_SIZE name was unclear; the memsize name tries to
follow the existing convention of mem denoting the allocation that is
handed out to the user, while chunk is its internally used container.
The user-owned memory for a given chunk starts at chunk2mem(p) and
its size is memsize(p). It is not valid to use on dumped heap chunks.
Moved the definition next to other chunk and mem related macros.
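A simplified sketch of the intended semantics (the real definition
must also account for the tag granule when memory tagging is enabled):

  /* Sketch only: size of the user-owned memory of chunk p, which
     starts at chunk2mem (p).  Not valid for dumped heap chunks.  */
  #define memsize(p) \
    (chunksize (p) - CHUNK_HDR_SZ + (chunk_is_mmapped (p) ? 0 : SIZE_SZ))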
Reviewed-by: DJ Delorie <dj@redhat.com>
Use the runtime check where possible: it should not cause a slowdown
in the !USE_MTAG case, since then mtag_enabled is constant false, but
it allows the tagging logic to be compiled, so it is less likely to
break or diverge when developers only test the !USE_MTAG case.
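The pattern relies on mtag_enabled collapsing to a compile-time
constant in the !USE_MTAG case; a minimal sketch of the idea (not the
verbatim glibc code):

  #if USE_MTAG
  static bool mtag_enabled = false;  /* May be set at startup.  */
  #else
  /* Constant false: guarded tagging code is still compiled and
     type-checked, but eliminated by the compiler.  */
  # define mtag_enabled false
  #endif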
Reviewed-by: DJ Delorie <dj@redhat.com>
The branches may be better optimized since mtag_enabled is widely used.
A granule size larger than a chunk header is not supported, since then
the chunk header and the user area cannot both be granule aligned. To
fix that for targets with a large granule, the chunk layout has to
change. So code that attempted to handle the granule mask generically
was changed.
This simplified CHUNK_AVAILABLE_SIZE and the logic in malloc_usable_size.
Reviewed-by: DJ Delorie <dj@redhat.com>
When glibc is built with memory tagging support (USE_MTAG) but it is
not enabled at runtime (mtag_enabled), an unconditional memset was
used even though it can often be avoided.
This is for performance when tagging is supported but not enabled.
The extra check should have no overhead: tag_new_zero_region already
had a runtime check which the compiler can now optimize away.
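The resulting shape of the logic in calloc is roughly (illustrative,
not the verbatim code):

  if (__glibc_unlikely (mtag_enabled))
    /* Tagging the region also zeroes it, so no separate memset is
       needed; with !USE_MTAG this call is compiled away.  */
    return tag_new_zero_region (mem, memsize (p));
  /* Otherwise fall through to the plain clearing code, which can skip
     bytes already known to be zero.  */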
Reviewed-by: DJ Delorie <dj@redhat.com>
The memset api is suboptimal and does not provide much benefit. Memory
tagging only needs a zeroing memset (and only for memory that's sized
and aligned to multiples of the tag granule), so change the internal
api and the target hooks accordingly. This is to simplify the
implementation of the target hook.
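Instead of a general memset-with-tag hook, the narrowed interface only
needs something along these lines (prototype sketch, names as used in
this series):

  /* Zero the region [ptr, ptr + size) and update its memory tags; ptr
     and size must be aligned to the tag granule.  */
  void *__libc_mtag_tag_zero_region (void *ptr, size_t size);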
Reviewed-by: DJ Delorie <dj@redhat.com>
A flag check can be faster than function pointers because of how
branch prediction and speculation work, and it can also remove a layer
of indirection when there is a mismatch between the malloc internal
tag_* api and the __libc_mtag_* target hooks.
Memory tagging wrapper functions are moved to malloc.c from arena.c
and the logic now checks mtag_enabled. The definition of
tag_new_usable is moved after chunk-related definitions.
This refactoring also allows using mtag_enabled checks instead of
USE_MTAG ifdefs when memory tagging support only changes code logic
when memory tagging is enabled at runtime. Note: an "if (false)" code
block is optimized away even at -O0 by gcc.
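An illustrative before/after shape of a call site (simplified):

  /* Before: an indirect call through a target hook pointer.  */
  ptr = __tag_new_usable (ptr);

  /* After: a direct call guarded by a flag that predicts well and is
     constant-folded away entirely when USE_MTAG is off.  */
  if (mtag_enabled)
    ptr = tag_new_usable (ptr);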
Reviewed-by: DJ Delorie <dj@redhat.com>
This does not change behaviour, just removes one layer of indirection
in the internal memory tagging logic.
Use tag_ and mtag_ prefixes instead of __tag_ and __mtag_ since these
are all symbols with internal linkage, private to malloc.c, so there
is no user namespace pollution issue.
Reviewed-by: DJ Delorie <dj@redhat.com>
Either the memory belongs to the dumped area, in which case we don't
want to tag (the dumped area has the same tag as malloc internal data
so tagging is unnecessary, but chunks there may not have the right
alignment for the tag granule), or the memory will be unmapped
immediately (and thus tagging is not useful).
Reviewed-by: DJ Delorie <dj@redhat.com>
This is only used internally in malloc.c; the extern declaration
was wrong, as __mtag_mmap_flags has internal linkage.
Reviewed-by: DJ Delorie <dj@redhat.com>
At an _int_free call site in realloc the wrong size was used for tag
clearing: the chunk header of the next chunk was also cleared, which
may work in practice but is logically wrong.
The tag clearing is moved before the memcpy to save a tag computation;
this avoids one chunk2mem. Another chunk2mem is removed because newmem
does not have to be recomputed. Whitespace is fixed too.
Reviewed-by: DJ Delorie <dj@redhat.com>
_int_free must be called with a chunk whose tag has been reset. This
was missing in a rare case that could crash when heap tagging is
enabled: when, in a multi-threaded process, the current arena runs out
of memory during realloc but another arena still has space to finish
the realloc, _int_free was called without clearing the user
allocation tags.
Fixes bug 27468.
Reviewed-by: DJ Delorie <dj@redhat.com>
This essentially folds compat_symbol_unique functionality into
compat_symbol.
This change eliminates the need for intermediate aliases for defining
multiple symbol versions, for both compat_symbol and versioned_symbol.
Some binutils versions do not support multiple versions per symbol on
some targets, so aliases are automatically introduced, similar to what
compat_symbol_unique did. To reduce symbol table sizes, a configure
check is added to avoid these aliases if they are not needed.
The new mechanism works with data symbols as well as function symbols,
due to the way an assembler-level redirect is used. It is not
compatible with weak symbols for old binutils versions, which is why
the definition of __malloc_initialize_hook had to be changed. This
is not a loss of functionality because weak symbols do not matter
to dynamic linking.
The placeholder symbol needs repeating in nptl/libpthread-compat.c
now that compat_symbol is used, but that seems more obvious than
introducing yet another macro.
A subtle difference was that compat_symbol_unique made the symbol
global automatically. compat_symbol does not do this, so static
had to be removed from the definition of
__libpthread_version_placeholder.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
I used these shell commands:
../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")
and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 6694 files FOO.
I then removed trailing white space from benchtests/bench-pthread-locks.c
and iconvdata/tst-iconv-big5-hkscs-to-2ucs4.c, to work around this
diagnostic from Savannah:
remote: *** pre-commit check failed ...
remote: *** error: lines with trailing whitespace found
remote: error: hook declined to update refs/heads/master
In the next release of POSIX, free must preserve errno
<https://www.austingroupbugs.net/view.php?id=385>.
Modify __libc_free to save and restore errno, so that
any internal munmap etc. syscalls do not disturb the caller's errno.
Add a test malloc/tst-free-errno.c (almost all by Bruno Haible),
and document that free preserves errno.
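A minimal sketch of the save/restore (the real code also has to handle
the hook and mmap paths):

  void
  __libc_free (void *mem)
  {
    int err = errno;    /* Save the caller's errno.  */
    /* ... the actual freeing work, which may call munmap and thus
       clobber errno ... */
    __set_errno (err);  /* Restore it before returning.  */
  }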
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This patch adds the basic support for memory tagging.
Various flavours are supported, particularly being able to turn on
tagged memory at run-time: this allows the same code to be used on
systems where memory tagging support is not present without needing
a separate build of glibc. Also, depending on whether the kernel
supports it, the code will use mmap for the default arena if morecore
does not, or cannot, support tagged memory (on AArch64 it is not
available).
All the hooks use function pointers to allow this to work without
needing ifuncs.
Reviewed-by: DJ Delorie <dj@redhat.com>
The secondary/non-primary/inner libc (loaded via dlmopen, LD_AUDIT,
static dlopen) must not use sbrk to allocate memory because that would
interfere with allocations in the outer libc. On Linux, this does not
matter because sbrk itself was changed to fail in secondary libcs.
_dl_addr occasionally shows up in profiles, but had to be used before
because __libc_multiple_libcs was unreliable. So this change achieves
a slight reduction in startup time.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
If the tcache linked list contains a loop, _int_free enters an
infinite loop when freeing the tcache. A PoC which triggers such an
infinite loop is in Bugzilla (#27052). The loop should terminate when
it exceeds mp_.tcache_count iterations, and the program should abort.
The affected glibc versions are 2.29 and later.
Reviewed-by: DJ Delorie <dj@redhat.com>
This patch adds the ABI-related bits to reflect the new mallinfo2
function, and adds a test case to verify basic functionality.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The code for set_max_fast() stores an "impossibly small value"
instead of zero when the parameter is zero. However, for
small values of the parameter (e.g. 1 or 2) the computation
results in a zero being stored anyway.
This patch checks for the parameter being small enough for the
computation to result in zero instead, so that a zero is never
stored.
Key values which result in zero being stored:
x86-64: 1..7 (or other 64-bit)
i686: 1..11
armhfp: 1..3 (or other 32-bit)
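A sketch of the corrected macro (close to, but not necessarily, the
exact committed form):

  /* Any parameter small enough that rounding would yield zero is now
     treated like zero and gets the sentinel value.  */
  #define set_max_fast(s)                                             \
    global_max_fast = (((size_t) (s) <= MALLOC_ALIGN_MASK - SIZE_SZ)  \
                       ? SMALLBIN_WIDTH                               \
                       : ((s + SIZE_SZ) & ~MALLOC_ALIGN_MASK))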
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Alignment checks should be performed on the user's buffer and NOT
on the mchunkptr as was done before. This caused bugs in 32-bit
versions, because there 2 * sizeof (size_t) != MALLOC_ALIGNMENT.
As the tcache works on users' buffers it uses the aligned_OK()
check, while the rest work on the mchunkptr and therefore check
using misaligned_chunk().
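For illustration, the two checks look roughly like this (simplified
from malloc.c):

  /* User buffers must be MALLOC_ALIGNMENT aligned.  */
  #define aligned_OK(m)  (((unsigned long) (m) & MALLOC_ALIGN_MASK) == 0)

  /* For a chunk pointer, check the alignment of the user buffer it
     hands out, not of the chunk header itself.  */
  #define misaligned_chunk(p)                                             \
    ((uintptr_t) (MALLOC_ALIGNMENT == 2 * SIZE_SZ ? (p) : chunk2mem (p))  \
     & MALLOC_ALIGN_MASK)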
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Removed unneeded '\' chars from the ends of lines and fixed some
indentation issues that were introduced in the original
Safe-Linking patch.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Safe-Linking is a security mechanism that protects single-linked
lists (such as the fastbin and tcache) from being tampered with by
attackers.
The mechanism makes use of randomness from ASLR (mmap_base), and when
combined with chunk alignment integrity checks, it protects the "next"
pointers from being hijacked by an attacker.
While Safe-Unlinking protects double-linked lists (such as the small
bins), there wasn't any similar protection for attacks against
single-linked lists. This solution protects against 3 common attacks:
* Partial pointer override: modifies the lower bytes (Little Endian)
* Full pointer override: hijacks the pointer to an attacker's location
* Unaligned chunks: pointing the list to an unaligned address
The design assumes an attacker doesn't know where the heap is located,
and uses the ASLR randomness to "sign" the single-linked pointers. We
mark the pointer as P and the location in which it is stored as L, and
the calculation will be:
* PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
* *L = PROTECT(P)
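In C this corresponds roughly to the following (sketch; glibc
hardcodes a 12-bit page shift in the real PROTECT_PTR macro):

  /* L is the address at which the pointer is stored, P the pointer
     being protected.  XOR is its own inverse, so the same operation
     both protects and reveals.  */
  #define PROTECT_PTR(pos, ptr) \
    ((__typeof (ptr)) ((((size_t) pos) >> PAGE_SHIFT) ^ ((size_t) ptr)))
  #define REVEAL_PTR(ptr)  PROTECT_PTR (&ptr, ptr)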
This way, the random bits from the address L (which start at the bit
in the PAGE_SHIFT position) will be merged with the LSBs of the stored
protected pointer. This protection layer prevents an attacker from
modifying the pointer into a controlled value.
An additional check that the chunks are MALLOC_ALIGNed adds an
important layer:
* Attackers can't point to illegal (unaligned) memory addresses
* Attackers must guess correctly the alignment bits
On standard 32 bit Linux machines, an attack will directly fail 7
out of 8 times, and on 64 bit machines it will fail 15 out of 16
times.
This proposed patch was benchmarked and its effect on the overall
performance of the heap was negligible and couldn't be distinguished
from the default variance between tests on the vanilla version. A
similar protection was added to Chromium's version of TCMalloc
in 2012, and according to their documentation it had an overhead of
less than 2%.
Reviewed-by: DJ Delorie <dj@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
do_set_tcache_max, do_set_mxfast:
Fix two instances of comparing "size_t < 0"
Both cases have an upper limit, so the "negative value" case
is already handled via overflow semantics.
do_set_tcache_max, do_set_tcache_count:
Fix return value on error. Note: currently not used.
mallopt:
Pass the return value of the helper functions to the user. Behavior
should only actually change for mxfast, where we restore the old
(pre-tunables) behavior.
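As an example, the corrected shape of one helper (simplified sketch,
not the verbatim code):

  static __always_inline int
  do_set_tcache_max (size_t value)
  {
    /* size_t is unsigned, so a "negative" input is just a huge value
       and is rejected by the upper-limit check.  */
    if (value <= MAX_TCACHE_SIZE)
      {
        mp_.tcache_max_bytes = value;
        mp_.tcache_bins = csize2tidx (request2size (value)) + 1;
        return 1;  /* Accepted.  */
      }
    return 0;      /* Error, now propagated by mallopt.  */
  }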
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
set_max_fast sets the "impossibly small" value based on,
eventually, MALLOC_ALIGNMENT. The comparison for the smallest
chunk used is, eventually, MIN_CHUNK_SIZE. Note that i386
is the only platform where these are the same, so a smallest
chunk *would* be put in a no-fastbins fastbin.
This change calculates the "impossibly small" value
based on MIN_CHUNK_SIZE instead, so that we can know it will
always be impossibly small.
Fixes `<total type="rest" size="..."> incorrectly showing as 0 most
of the time.
The rest value being wrong is significant because to compute the
actual amount of memory handed out via malloc, the user must subtract
it from <system type="current" size="...">. That result being wrong
makes investigating memory fragmentation issues like
<https://bugzilla.redhat.com/show_bug.cgi?id=843478> close to
impossible.
Change the tcache->counts[] entries to uint16_t - this removes
the limit set by char and allows a larger tcache. Remove a few
redundant asserts.
bench-malloc-thread with 4 threads is ~15% faster on Cortex-A72.
Reviewed-by: DJ Delorie <dj@redhat.com>
* malloc/malloc.c (MAX_TCACHE_COUNT): Increase to UINT16_MAX.
(tcache_put): Remove redundant assert.
(tcache_get): Remove redundant asserts.
(__libc_malloc): Check tcache count is not zero.
* manual/tunables.texi (glibc.malloc.tcache_count): Update maximum.
The tcache counts[] array is a char, which has a very small range and
thus may overflow. When setting the tcache_count tunable, there was no
overflow check. However, the tunable must not be larger than the
maximum value of the tcache counts[] array, otherwise it can overflow
when filling the tcache.
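A sketch of the added check (names as in malloc.c):

  static __always_inline int
  do_set_tcache_count (size_t value)
  {
    /* Reject values that could overflow the narrow counts[] entries.  */
    if (value <= MAX_TCACHE_COUNT)
      mp_.tcache_count = value;
    return 1;
  }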
[BZ #24531]
* malloc/malloc.c (MAX_TCACHE_COUNT): New define.
(do_set_tcache_count): Only update if count is small enough.
* manual/tunables.texi (glibc.malloc.tcache_count): Document max value.
As discussed previously on libc-alpha [1], this patch follows up on the
idea and adds both the __attribute_alloc_size__ attribute on malloc
functions (malloc, calloc, realloc, reallocarray, valloc, pvalloc, and
memalign) and a limit on the maximum requested allocation size up to
PTRDIFF_MAX (taking into consideration internal padding and alignment).
This aligns glibc with the size gcc expects, as defined by the default
value of the -Walloc-size-larger-than warning, which warns for
allocations larger than PTRDIFF_MAX. It also aligns with gcc's
expectations regarding libc and expected sizes, as described in
PR#67999 [2] and previously discussed ISO C11 issues [3] on libc-alpha.
From the RFC thread [4] and previous discussion, it seems the consensus
is to limit the requested size only for the malloc functions, not for
the system allocation ones (mmap, sbrk, etc.).
The implementation changes checked_request2size to check for both
overflow and a maximum object size up to PTRDIFF_MAX. No additional
checks are done in sysmalloc, so it can still issue mmap with values
larger than PTRDIFF_MAX depending on the requested size.
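The changed helper has roughly this shape (sketch; attributes omitted):

  /* Check whether req, once padded and aligned, is a valid size below
     PTRDIFF_MAX; if so, store the converted size in *sz.  */
  static inline bool
  checked_request2size (size_t req, size_t *sz)
  {
    if (__glibc_unlikely (req > PTRDIFF_MAX))
      return false;
    *sz = request2size (req);
    return true;
  }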
The __attribute_alloc_size__ attribute is only for functions that
return a pointer, which means it cannot be applied to posix_memalign
(see remarks in GCC PR#87683 [5]). The runtime checks that limit the
maximum requested allocation size do apply to posix_memalign.
Checked on x86_64-linux-gnu and i686-linux-gnu.
[1] https://sourceware.org/ml/libc-alpha/2018-11/msg00223.html
[2] https://gcc.gnu.org/bugzilla//show_bug.cgi?id=67999
[3] https://sourceware.org/ml/libc-alpha/2011-12/msg00066.html
[4] https://sourceware.org/ml/libc-alpha/2018-11/msg00224.html
[5] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87683
[BZ #23741]
* malloc/hooks.c (malloc_check, realloc_check): Use
__builtin_add_overflow on overflow check and adapt to
checked_request2size change.
* malloc/malloc.c (__libc_malloc, __libc_realloc, _mid_memalign,
__libc_pvalloc, __libc_calloc, _int_memalign): Limit maximum
allocation size to PTRDIFF_MAX.
(REQUEST_OUT_OF_RANGE): Remove macro.
(checked_request2size): Change to inline function and limit maximum
requested size to PTRDIFF_MAX.
(__libc_malloc, __libc_realloc, _int_malloc, _int_memalign): Limit
maximum allocation size to PTRDIFF_MAX.
(_mid_memalign): Use _int_memalign call for overflow check.
(__libc_pvalloc): Use __builtin_add_overflow on overflow check.
(__libc_calloc): Use __builtin_mul_overflow for overflow check and
limit maximum requested size to PTRDIFF_MAX.
* malloc/malloc.h (malloc, calloc, realloc, reallocarray, memalign,
valloc, pvalloc): Add __attribute_alloc_size__.
* stdlib/stdlib.h (malloc, realloc, reallocarray, valloc): Likewise.
* malloc/tst-malloc-too-large.c (do_test): Add check for allocation
larger than PTRDIFF_MAX.
* malloc/tst-memalign.c (do_test): Disable -Walloc-size-larger-than=
around tests of malloc with negative sizes.
* malloc/tst-posix_memalign.c (do_test): Likewise.
* malloc/tst-pvalloc.c (do_test): Likewise.
* malloc/tst-valloc.c (do_test): Likewise.
* malloc/tst-reallocarray.c (do_test): Replace call to reallocarray
with resulting size allocation larger than PTRDIFF_MAX with
reallocarray_nowarn.
(reallocarray_nowarn): New function.
* NEWS: Mention the malloc function semantic change.
Fixes bug 24216. This patch adds security checks for the bk and
bk_nextsize pointers of chunks in the large bin when inserting a chunk
from the unsorted bin. It was possible to write the pointer to victim
(the newly inserted chunk) to arbitrary memory locations if the bk or
bk_nextsize pointers of the next large bin chunk got corrupted.
One group of warnings seen with -Wextra is warnings for static or
inline not at the start of a declaration (-Wold-style-declaration).
This patch fixes various such cases for inline, ensuring it comes at
the start of the declaration (after any static). A common case of the
fix is "static inline <type> __always_inline"; the definition of
__always_inline starts with __inline, so the natural change is to
"static __always_inline <type>". Other cases of the warning may be
harder to fix (one pattern is a function definition that gets
rewritten to be static by an including file, "#define funcname static
wrapped_funcname" or similar), but it seems worth fixing these cases
with inline anyway.
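For instance (illustrative, using one of the functions changed below):

  /* Warns with -Wextra: __always_inline expands to __inline plus an
     attribute, so __inline is not at the start of the declaration.  */
  static inline int __always_inline do_set_top_pad (size_t value);

  /* Fixed form.  */
  static __always_inline int do_set_top_pad (size_t value);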
Tested for x86_64.
* elf/dl-load.h (_dl_postprocess_loadcmd): Use __always_inline
before return type, without separate inline.
* elf/dl-tunables.c (maybe_enable_malloc_check): Likewise.
* elf/dl-tunables.h (tunable_is_name): Likewise.
* malloc/malloc.c (do_set_trim_threshold): Likewise.
(do_set_top_pad): Likewise.
(do_set_mmap_threshold): Likewise.
(do_set_mmaps_max): Likewise.
(do_set_mallopt_check): Likewise.
(do_set_perturb_byte): Likewise.
(do_set_arena_test): Likewise.
(do_set_arena_max): Likewise.
(do_set_tcache_max): Likewise.
(do_set_tcache_count): Likewise.
(do_set_tcache_unsorted_limit): Likewise.
* nis/nis_subr.c (count_dots): Likewise.
* nptl/allocatestack.c (advise_stack_range): Likewise.
* sysdeps/ieee754/dbl-64/s_sin.c (do_cos): Likewise.
(do_sin): Likewise.
(reduce_sincos): Likewise.
(do_sincos): Likewise.
* sysdeps/unix/sysv/linux/x86/elision-conf.c
(do_set_elision_enable): Likewise.
(TUNABLE_CALLBACK_FNDECL): Likewise.
One of the warnings that appears with -Wextra is "ordered comparison
of pointer with integer zero" in malloc.c:tcache_get, for the
assertion:
assert (tcache->entries[tc_idx] > 0);
Indeed, a "> 0" comparison does not make sense for
tcache->entries[tc_idx], which is a pointer. My guess is that
tcache->counts[tc_idx] is what's intended here, and this patch changes
the assertion accordingly.
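With the fix, the assertion becomes:

  assert (tcache->counts[tc_idx] > 0);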
Tested for x86_64.
* malloc/malloc.c (tcache_get): Compare tcache->counts[tc_idx]
with 0, not tcache->entries[tc_idx].
Commit 6923f6db1e ("malloc: Use current
(C11-style) atomics for fastbin access") caused a substantial
performance regression on POWER and AArch64, and the old atomics,
while hard to prove correct, seem to work in practice.
This commit removes the custom memcpy implementation from _int_realloc
for small chunk sizes. The ncopies variable has the wrong type, and
an integer wraparound could cause the existing code to copy too few
elements (leaving the new memory region mostly uninitialized).
Therefore, removing this code fixes bug 24027.
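The shape of the problem, roughly (not the verbatim code):

  /* ncopies holds a count of INTERNAL_SIZE_T words but was declared
     as unsigned int, so a sufficiently large copysize wraps the count
     and the copy loop moves too few elements.  The fix simply calls
     memcpy (newmem, oldmem, oldsize - SIZE_SZ) instead.  */
  unsigned long copysize = oldsize - SIZE_SZ;
  unsigned int  ncopies  = copysize / sizeof (INTERNAL_SIZE_T);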
The previous check could read beyond the end of the tcache entry
array. If the e->key == tcache cookie check happened to pass, this
would result in crashes.
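A sketch of the guarded check (assuming the usual malloc.c names;
simplified):

  /* Validate the bin index before touching tcache->entries, so the
     double-free cookie check cannot read past the end of the array.  */
  size_t tc_idx = csize2tidx (size);
  if (tcache != NULL && tc_idx < mp_.tcache_bins)
    {
      /* Only here is it safe to walk tcache->entries[tc_idx] and
         compare each entry's key against the tcache cookie.  */
    }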