Address space for heap segments is reserved in a mmap call with
MAP_ANONYMOUS | MAP_PRIVATE and protection flags PROT_NONE. This
reservation does not count against the RSS limit of the process or
system. Backing memory is allocated using mprotect in alloc_new_heap
and grow_heap, and at this point, the allocator expects the kernel
to provide memory (subject to memory overcommit).
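For illustration, a hedged sketch of this reserve-then-commit pattern
(function names are illustrative; the real logic lives in
alloc_new_heap and grow_heap and also handles alignment and
HEAP_MAX_SIZE):

  #include <stddef.h>
  #include <sys/mman.h>

  /* Reserve address space only: PROT_NONE mappings have no backing
     memory and do not count against overcommit accounting.  */
  static void *
  reserve_heap (size_t size)
  {
    void *p = mmap (NULL, size, PROT_NONE,
                    MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
    return p == MAP_FAILED ? NULL : p;
  }

  /* Commit a prefix of the reservation; from here on the kernel is
     expected to provide memory, subject to overcommit.  */
  static int
  commit_heap (void *heap, size_t size)
  {
    return mprotect (heap, size, PROT_READ | PROT_WRITE);
  }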
The SIGSEGV that might be generated due to MAP_NORESERVE (according
to the mmap manual page) does not seem to occur in practice; it is
always SIGKILL from the OOM killer. Even if there is a way that SIGSEGV
could be generated, it is confusing to applications that this only
happens for secondary heaps, not for large mmap-based allocations,
and not for the main arena.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Redirect internal assertion failures to __libc_assert_fail, based on
__libc_message, which writes directly to STDERR_FILENO
and calls abort. Also disable message translation and reword the
error message slightly (adjusting stdlib/tst-bz20544 accordingly).
As a result of these changes, malloc no longer needs its own
redefinition of __assert_fail.
__libc_assert_fail needs to be stubbed out during rtld dependency
analysis because the rtld rebuilds turn __libc_assert_fail into
__assert_fail, which is unconditionally provided by elf/dl-minimal.c.
This change is not possible for the public assert macro and its
__assert_fail function because POSIX requires that the diagnostic
be written to stderr.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Since commit ec2c1fcefb ("malloc:
Abort on heap corruption, without a backtrace [BZ #21754]"),
__libc_message always terminates the process. Since commit
a289ea09ea ("Do not print backtraces
on fatal glibc errors"), the backtrace facility has been removed.
Therefore, remove enum __libc_message_action and the action
argument of __libc_message, and mark __libc_message as _Noreturn.
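The resulting interface change is roughly as follows (a sketch based
on the description above; the exact header wording may differ):

  /* Before: the caller chose the action.  */
  void __libc_message (enum __libc_message_action action,
                       const char *fmt, ...);

  /* After: the function always terminates the process.  */
  _Noreturn void __libc_message (const char *fmt, ...);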
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Cancellation currently cannot happen at this point because dlopen
as used by the unwind link always performs additional allocations
for libgcc_s.so.1, even if it has been loaded already as a dependency
of the main executable. But it seems prudent not to rely on this
quirk.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
It is prudent not to run too much code after detecting heap
corruption, and __fxprintf is really complex. The line number
and file name do not carry much information, so they are not
included in the error message. (__libc_message only supports %s
formatting.)
The function name and assertion should provide some context.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
In-band signaling avoids an uninitialized variable warning when
building with -Og and GCC 12.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
I used these shell commands:
../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")
and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 7061 files FOO.
I then removed trailing white space from math/tgmath.h,
support/tst-support-open-dev-null-range.c, and
sysdeps/x86_64/multiarch/strlen-vec.S, to work around the following
obscure pre-commit check failure diagnostics from Savannah. I don't
know why I run into these diagnostics whereas others evidently do not.
remote: *** 912-#endif
remote: *** 913:
remote: *** 914-
remote: *** error: lines with trailing whitespace found
...
remote: *** error: sysdeps/unix/sysv/linux/statx_cp.c: trailing lines
The current limit on MALLOC_MMAP_THRESHOLD is either 1 Mbyte (for
32-bit apps) or 32 Mbytes (for 64-bit apps). This value was set by a
patch dated 2006 (15 years ago). Attempts to set the threshold higher
are currently ignored.
The default behavior is appropriate for many highly parallel
applications where many processes or threads are sharing RAM. In other
situations where the number of active processes or threads closely
matches the number of cores, a much higher limit may be desired by the
application designer. By today's standards on personal computers and
small servers, 2 Gbytes of RAM per core is commonly available. On
larger systems 4 Gbytes or more of RAM is sometimes available.
Instead of raising the limit to match current needs, this patch
removes the upper limit on the tunable, leaving the decision up
to the user of the tunable to judge the best value for their needs.
This patch does not change any of the defaults for malloc tunables,
retaining the current behavior of the dynamic malloc mmap threshold.
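For illustration, with the cap removed an application can request a
threshold above the old limits, either via the
glibc.malloc.mmap_threshold tunable or programmatically (the value
below is an example, not a recommendation):

  #include <malloc.h>

  int
  main (void)
  {
    /* Request a 1 GiB mmap threshold; previously values above
       HEAP_MAX_SIZE were silently ignored.  */
    mallopt (M_MMAP_THRESHOLD, 1 << 30);
    return 0;
  }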
bugzilla 27801 - Remove upper limit on tunable MALLOC_MMAP_THRESHOLD
Reviewed-by: DJ Delorie <dj@redhat.com>
malloc/malloc.c: Changed do_set_mmap_threshold to remove the test
for HEAP_MAX_SIZE.
This patch adds huge page support to main arena allocation, enabled
with the tunable glibc.malloc.hugetlb=2. The patch essentially
disables the sbrk() call in __glibc_morecore() (similar to what
memory tagging does when sbrk() does not support it) and falls back
to the default page size if the huge page allocation fails.
Checked on x86_64-linux-gnu.
Reviewed-by: DJ Delorie <dj@redhat.com>
It is enabled by default when glibc.malloc.hugetlb is set to 2 or
higher. It also uses non-configurable minimum and maximum values,
currently set to 1 and 4 times the selected huge page size,
respectively.
The arena allocation with huge pages does not use MAP_NORESERVE. As
indicated by the kernel's internal documentation [1], the flag might
trigger a SIGBUS on a soft page fault if no pages are left in the
pool at the time of the memory access.
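A hedged sketch of the resulting allocation pattern (simplified; the
real code also applies the configured huge page size and flags):

  #include <stddef.h>
  #include <sys/mman.h>

  /* Map huge pages without MAP_NORESERVE so that pool exhaustion is
     reported as an mmap failure (ENOMEM) instead of a later SIGBUS.  */
  static void *
  arena_mmap_huge (size_t size)
  {
    void *p = mmap (NULL, size, PROT_READ | PROT_WRITE,
                    MAP_ANONYMOUS | MAP_PRIVATE | MAP_HUGETLB, -1, 0);
    return p == MAP_FAILED ? NULL : p;
  }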
On systems without a reserved huge page pool, this just stresses the
mmap(MAP_HUGETLB) allocation failure path. To improve test coverage,
a pool with some preallocated pages must be created.
Checked on x86_64-linux-gnu with no reserved pages, 10 reserved pages
(which triggers mmap(MAP_HUGETLB) failures), and with 256 reserved
pages (which does not trigger mmap(MAP_HUGETLB) failures).
[1] https://www.kernel.org/doc/html/v4.18/vm/hugetlbfs_reserv.html#resv-map-modifications
Reviewed-by: DJ Delorie <dj@redhat.com>
With the morecore hook removed, there is no easy way to provide huge
page support in the glibc allocator without resorting to transparent
huge pages (THP). And some users and programs do prefer to use huge
pages directly instead of THP for multiple reasons: no splitting and
re-merging by the VM, no TLB shootdowns for running processes, fast
allocation from the reserve pool, no competition with the rest of the
processes unlike THP, no swapping at all, etc.
This patch extends the 'glibc.malloc.hugetlb' tunable: the value
'2' means to use huge pages directly with the system default size,
while a larger value means a specific page size that is matched
against the sizes supported by the system. Currently only memory
allocated in sysmalloc() is handled; the arenas still use the
default system page size.
For testing, a new rule, tests-malloc-hugetlb2, is added; it runs
the added tests with the required GLIBC_TUNABLES setting. On systems
without a reserved huge page pool, this just stresses the
mmap(MAP_HUGETLB) allocation failure path. To improve test coverage,
a pool with some preallocated pages must be created.
Checked on x86_64-linux-gnu.
Reviewed-by: DJ Delorie <dj@redhat.com>
To increase effectiveness with Transparent Huge Pages in madvise
mode, the large page size is used instead of the default page size
for the sbrk increment in the main arena.
Checked on x86_64-linux-gnu.
Reviewed-by: DJ Delorie <dj@redhat.com>
Linux Transparent Huge Pages (THP) currently supports three
different states: 'never', 'madvise', and 'always'. 'never' is
self-explanatory and 'always' enables THP for all anonymous pages.
However, 'madvise' is still the default on some systems, and in that
case THP is only used if the memory range is explicitly advised by
the program through a madvise(MADV_HUGEPAGE) call.
To enable it, a new tunable is provided, 'glibc.malloc.hugetlb';
setting it to a value different from 0 enables the madvise call.
This patch issues the madvise(MADV_HUGEPAGE) call after a successful
mmap() call in sysmalloc() for sizes larger than the default huge
page size. The madvise() call is disabled if the system does not
support THP or has the mode set to 'never'. Note that Linux supports
only one page size for THP, even if the architecture supports
multiple sizes.
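A minimal sketch of the pattern described above (thp_pagesize stands
in for the detected default THP size; the real code also consults the
system THP mode):

  #include <stddef.h>
  #include <sys/mman.h>

  static void *
  sysmalloc_mmap_thp (size_t size, size_t thp_pagesize)
  {
    void *p = mmap (NULL, size, PROT_READ | PROT_WRITE,
                    MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
    if (p == MAP_FAILED)
      return NULL;
    /* Advisory: a failed madvise does not fail the allocation.  */
    if (size >= thp_pagesize)
      madvise (p, size, MADV_HUGEPAGE);
    return p;
  }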
For testing, a new rule, tests-malloc-hugetlb1, is added; it runs
the added tests with the required GLIBC_TUNABLES setting.
Checked on x86_64-linux-gnu.
Reviewed-by: DJ Delorie <dj@redhat.com>
Hoist the NULL check for malloc_usable_size into its entry points in
malloc-debug and malloc and assume non-NULL in all callees. This
fixes BZ #28506.
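The shape of the fix, roughly (musable is the internal helper; treat
the exact form as an assumption):

  size_t
  malloc_usable_size (void *mem)
  {
    if (mem == NULL)
      return 0;
    /* The internal callee can now assume mem != NULL.  */
    return musable (mem);
  }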
Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
We stopped adding "Contributed by" or similar lines in sources in 2012
in favour of git logs and keeping the Contributors section of the
glibc manual up to date. Removing these lines makes the license
header a bit more consistent across files and also removes the
possibility of error in attribution when license blocks or files are
copied across since the contributed-by lines don't actually reflect
reality in those cases.
Move all "Contributed by" and similar lines (Written by, Test by,
etc.) into a new file CONTRIBUTED-BY to retain record of these
contributions. These contributors are also mentioned in
manual/contrib.texi, so we just maintain this additional record as a
courtesy to the earlier developers.
The following scripts were used to filter a list of files to edit in
place and to clean up the CONTRIBUTED-BY file respectively. These
were not added to the glibc sources because they're not expected to be
of any use in future given that this is a one time task:
https://gist.github.com/siddhesh/b5ecac94eabfd72ed2916d6d8157e7dc
https://gist.github.com/siddhesh/15ea1f5e435ace9774f485030695ee02
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Remove unused code and declare __libc_mallopt when !IS_IN (libc) to
allow the debug hook to build with --disable-tunables.
Also, run tst-ifunc-isa-2* tests only when tunables are enabled since
the result depends on it.
Tested on x86_64.
Reported-by: Matheus Castanho <msc@linux.ibm.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
These deprecated functions are only safe to call from
__malloc_initialize_hook and as a result, are not useful in the
general case. Move the implementations to libc_malloc_debug so that
existing binaries that need it will now have to preload the debug DSO
to work correctly.
This also allows simplification of the core malloc implementation by
dropping all the undumping support code that was added to make
malloc_set_state work.
One known breakage is that of ancient emacs binaries that depend on
this. They will now crash when running with this libc. With
LD_BIND_NOW=1, it will terminate immediately because of not being able
to find malloc_set_state but with lazy binding it will crash in
unpredictable ways. It will need a preloaded libc_malloc_debug.so so
that its initialization hook is executed to allow its malloc
implementation to work properly.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
The malloc-check debugging feature is tightly integrated into glibc
malloc, so thanks to an idea from Florian Weimer, much of the malloc
implementation has been moved into libc_malloc_debug.so to support
malloc-check. Due to this, glibc malloc and malloc-check can no
longer work together; they use altogether different (but identical)
structures for heap management. This should not make a difference,
though, since the malloc-check hook is not disabled anywhere;
malloc_set_state does disable it, but it does so early enough that it
shouldn't cause any problems.
The malloc check tunable is now in the debug DSO and has no effect
when the DSO is not preloaded.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
Now that mcheck no longer needs to check __malloc_initialized (and no
other third party hook can since the symbol is not exported), make the
variable boolean and static so that it is used strictly within malloc.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
Remove all malloc hook uses from core malloc functions and move it
into a new library libc_malloc_debug.so. With this, the hooks now no
longer have any effect on the core library.
libc_malloc_debug.so is a malloc interposer that needs to be preloaded
to get hooks functionality back so that the debugging features that
depend on the hooks, i.e. malloc-check, mcheck and mtrace work again.
Without the preloaded DSO these debugging features will be nops.
These features will be ported away from hooks in subsequent patches.
Similarly, legacy applications that need hooks functionality need to
preload libc_malloc_debug.so.
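For example, a legacy hook user of this shape now only sees its hook
fire when the debug DSO is preloaded (e.g.
LD_PRELOAD=libc_malloc_debug.so ./app); the hook variables are
long-deprecated, and whether <malloc.h> still declares them depends
on the glibc version:

  #include <malloc.h>
  #include <stdio.h>

  static void *(*old_hook) (size_t, const void *);

  static void *
  my_malloc_hook (size_t size, const void *caller)
  {
    (void) caller;
    __malloc_hook = old_hook;        /* Avoid recursion.  */
    void *p = malloc (size);
    fprintf (stderr, "malloc (%zu) = %p\n", size, p);
    old_hook = __malloc_hook;
    __malloc_hook = my_malloc_hook;
    return p;
  }

  static void __attribute__ ((constructor))
  install_hook (void)
  {
    old_hook = __malloc_hook;
    __malloc_hook = my_malloc_hook;
  }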
The symbols exported by libc_malloc_debug.so are maintained at exactly
the same version as libc.so.
Finally, static binaries will no longer be able to use malloc
debugging features since they cannot preload the debugging DSO.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
Make the __morecore and __default_morecore symbols compat-only and
remove their declarations from the API. Also, include morecore.c
directly into malloc.c; this should ideally get merged into malloc in
a future cleanup.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
Remove __after_morecore_hook from the API and finalize the symbol so
that it can no longer be used in new applications. Old applications
using __after_morecore_hook will find that their hook is no longer
called.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
As a result, it is not necessary to specify __attribute__ ((nocommon))
on individual definitions.
GCC 10 defaults to -fno-common on all architectures except ARC,
but this change is compatible with older GCC versions and ARC, too.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
_int_realloc is correctly declared at the top to be static, but
incorrectly defined without the static keyword. Fix that. The
generated binaries have identical code.
The tcache allocator layer uses the tcache pointer as a key to
identify a block that may be freed twice. Since this is in the
application data area, an attacker exploiting a use-after-free could
potentially get access to the entire tcache structure through this
key. A detailed write-up was provided by Awarau here:
https://awaraucom.wordpress.com/2020/07/19/house-of-io-remastered/
Replace this static pointer use for key checking with one that is
generated at malloc initialization. The first attempt is through
getrandom with a fallback to random_bits(), which is a simple
pseudo-random number generator based on the clock. The fallback ought
to be sufficient since the goal of the randomness is only to make the
key arbitrary enough that it is very unlikely to collide with user
data.
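The initialization now looks roughly like this (a sketch; tcache_key
is the internal variable and random_bits the glibc-internal fallback,
both shown for shape only):

  #include <stdint.h>
  #include <sys/random.h>

  extern uint32_t random_bits (void);  /* glibc-internal clock-based PRNG.  */

  static uintptr_t tcache_key;

  static void
  tcache_key_initialize (void)
  {
    /* Prefer kernel entropy; GRND_NONBLOCK avoids stalling at startup.  */
    if (getrandom (&tcache_key, sizeof (tcache_key), GRND_NONBLOCK)
        != sizeof (tcache_key))
      tcache_key = random_bits ();
  }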
Co-authored-by: Eyal Itkin <eyalit@checkpoint.com>
After commit 1e26d35193 ("malloc: Fix
tcache leak after thread destruction [BZ #22111]"),
tcache_shutting_down is still not set early enough: when we detach a
thread that has no tcache allocated, tcache_shutting_down would still
be false.
Reviewed-by: DJ Delorie <dj@redhat.com>
This is a workaround (hack) for a gcc optimization issue (PR 99551).
Without this, the generated code may eagerly evaluate an expression
that is only needed in the cold path, which causes a performance
regression for small allocations in the memory tagging disabled
(common) case.
Reviewed-by: DJ Delorie <dj@redhat.com>
The internal _mid_memalign already returns newly tagged memory.
(__libc_memalign and posix_memalign already relied on this, this
patch fixes the other call sites.)
Reviewed-by: DJ Delorie <dj@redhat.com>
The previous patch ensured that all chunk to mem computations use
chunk2rawmem, so now we can rename it to chunk2mem, and in the few
cases where the tag of mem is relevant chunk2mem_tag can be used.
Replaced tag_at (chunk2rawmem (x)) with chunk2mem_tag (x).
Renamed chunk2rawmem to chunk2mem.
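After the rename, the two accessors look roughly like this
(CHUNK_HDR_SZ and tag_at as used in malloc.c; the exact definitions
are an assumption):

  /* Chunk pointer to user mem pointer, without deriving a tag.  */
  #define chunk2mem(p)     ((void *) ((char *) (p) + CHUNK_HDR_SZ))
  /* Same, but derive the memory tag for the returned pointer.  */
  #define chunk2mem_tag(p) ((void *) tag_at ((char *) (p) + CHUNK_HDR_SZ))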
Reviewed-by: DJ Delorie <dj@redhat.com>
The difference between chunk2mem and chunk2rawmem is that the latter
does not get the memory tag for the returned pointer. It turns out
chunk2rawmem almost always works:
The input of chunk2mem is a chunk pointer that is untagged so it can
access the chunk header. All memory that is not user allocated heap
memory is untagged, which in the current implementation means that it
has the 0 tag, but this patch does not rely on the tag value. The
patch relies on the fact that chunk operations are either done on
untagged chunks or without memory access to the user-owned part.
Internal interface contracts:
sysmalloc: Returns untagged memory.
_int_malloc: Returns untagged memory.
_int_free: Takes untagged memory.
_int_memalign: Returns untagged memory.
_int_realloc: Takes and returns tagged memory.
So only _int_realloc and functions outside this list need care.
Alignment checks do not need the right tag and tcache works with
untagged memory.
tag_at was kept in realloc after an mremap, which is not strictly
necessary, since the pointer is only used to retag the memory, but this
way the tag is guaranteed to be different from the old tag.
Reviewed-by: DJ Delorie <dj@redhat.com>
The comment explained why a different tag is used after mremap, but
for that a correctly tagged pointer should be passed to
tag_new_usable. Use chunk2mem to get the tag.
Reviewed-by: DJ Delorie <dj@redhat.com>
This is a pure refactoring change that does not affect behaviour.
The CHUNK_AVAILABLE_SIZE name was unclear, the memsize name tries to
follow the existing convention of mem denoting the allocation that is
handed out to the user, while chunk is its internally used container.
The user owned memory for a given chunk starts at chunk2mem(p) and
the size is memsize(p). It is not valid to use on dumped heap chunks.
Moved the definition next to other chunk and mem related macros.
Reviewed-by: DJ Delorie <dj@redhat.com>
Use the runtime check where possible: it should not cause a slowdown
in the !USE_MTAG case since mtag_enabled is then constant false, but
it allows compiling the tagging logic so it's less likely to break or
diverge when developers only test the !USE_MTAG case.
Reviewed-by: DJ Delorie <dj@redhat.com>
The branches may be better optimized since mtag_enabled is widely used.
Granule size larger than a chunk header is not supported since then we
cannot have both the chunk header and user area granule aligned. To
fix that for targets with large granule, the chunk layout has to change.
So code that attempted to handle the granule mask generically was changed.
This simplified CHUNK_AVAILABLE_SIZE and the logic in malloc_usable_size.
Reviewed-by: DJ Delorie <dj@redhat.com>
When glibc is built with memory tagging support (USE_MTAG) but it is
not enabled at runtime (mtag_enabled), an unconditional memset was
used even though it can often be avoided.
This is for performance when tagging is supported but not enabled.
The extra check should have no overhead: tag_new_zero_region already
had a runtime check which the compiler can now optimize away.
Reviewed-by: DJ Delorie <dj@redhat.com>
The memset api is suboptimal and does not provide much benefit. Memory
tagging only needs a zeroing memset (and only for memory that's sized
and aligned to multiples of the tag granule), so change the internal
api and the target hooks accordingly. This is to simplify the
implementation of the target hook.
Reviewed-by: DJ Delorie <dj@redhat.com>
A flag check can be faster than function pointers because of how
branch prediction and speculation works and it can also remove a layer
of indirection when there is a mismatch between the malloc internal
tag_* api and __libc_mtag_* target hooks.
Memory tagging wrapper functions are moved to malloc.c from arena.c
and the logic now checks mtag_enabled. The definition of
tag_new_usable is moved after chunk related definitions.
This refactoring also allows using mtag_enabled checks instead of
USE_MTAG ifdefs when memory tagging support only changes code logic
when memory tagging is enabled at runtime. Note: an "if (false)" code
block is optimized away even at -O0 by gcc.
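A minimal illustration of the pattern (simplified; not the exact
glibc code):

  #include <stdbool.h>

  #ifdef USE_MTAG
  static bool mtag_enabled;      /* Set once during initialization.  */
  #else
  # define mtag_enabled false    /* Constant: tagging blocks fold away.  */
  #endif

  static inline void *
  tag_new_usable (void *p)
  {
    if (mtag_enabled && p != NULL)
      {
        /* Retag the user region via the __libc_mtag_* target hooks.  */
      }
    return p;
  }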
Reviewed-by: DJ Delorie <dj@redhat.com>
This does not change behaviour, just removes one layer of indirection
in the internal memory tagging logic.
Use tag_ and mtag_ prefixes instead of __tag_ and __mtag_ since these
are all symbols with internal linkage, private to malloc.c, so there
is no user namespace pollution issue.
Reviewed-by: DJ Delorie <dj@redhat.com>
Either the memory belongs to the dumped area, in which case we don't
want to tag (the dumped area has the same tag as malloc internal data
so tagging is unnecessary, but chunks there may not have the right
alignment for the tag granule), or the memory will be unmapped
immediately (and thus tagging is not useful).
Reviewed-by: DJ Delorie <dj@redhat.com>
This is only used internally in malloc.c; the extern declaration
was wrong, since __mtag_mmap_flags has internal linkage.
Reviewed-by: DJ Delorie <dj@redhat.com>
At an _int_free call site in realloc, the wrong size was used for tag
clearing: the chunk header of the next chunk was also cleared, which
may work in practice, but is logically wrong.
The tag clearing is moved before the memcpy to save a tag
computation; this avoids a chunk2mem. Another chunk2mem is removed
because newmem does not have to be recomputed. Whitespace was fixed
too.
Reviewed-by: DJ Delorie <dj@redhat.com>
_int_free must be called with a chunk that has its tag reset. This
was missing in a rare case that could crash when heap tagging is
enabled: when, in a multi-threaded process, the current arena runs
out of memory during realloc but another arena still has space to
finish the realloc, _int_free was called without clearing the user
allocation tags.
Fixes bug 27468.
Reviewed-by: DJ Delorie <dj@redhat.com>