malloc: Improve MAP_HUGETLB with glibc.malloc.hugetlb=2

Even for explicit large page support, allocation might use mmap without
the hugepage bit set when the requested size is above mmap_threshold but
below the used large page size: in that case mmap is issued directly,
and MAP_HUGETLB is only set if the allocation size is at least as large
as the used large page.

To force such allocations to use large pages, also tune mmap_threshold
(if it has not been explicitly set by a tunable).  This forces the
allocation to follow the sbrk path, which will fall back to mmap (trying
large pages before falling back to the default mmap).
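
A rough sketch of the decision flow described above (a simplified
illustration only, not the actual glibc sysmalloc code; the state
variables, sizes, and helper function below are stand-ins chosen for the
example):

/* Illustrative sketch of the system-allocation path with
   glibc.malloc.hugetlb=2.  hp_pagesize stands for the configured large
   page size and always_fail_morecore for the forced sbrk failure; the
   real logic lives in malloc/malloc.c (sysmalloc, sysmalloc_mmap,
   sysmalloc_mmap_fallback).  */
#define _GNU_SOURCE
#include <stdbool.h>
#include <stddef.h>
#include <sys/mman.h>

static size_t mmap_threshold = 128 * 1024;    /* glibc default threshold.  */
static size_t hp_pagesize = 2 * 1024 * 1024;  /* Example 2 MiB large page.  */
static bool always_fail_morecore = true;      /* Set when hugetlb=2 is on.  */

static void *
anon_mmap (size_t nb, int extra_flags)
{
  void *p = mmap (NULL, nb, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS | extra_flags, -1, 0);
  return p == MAP_FAILED ? NULL : p;
}

static void *
allocate_sketch (size_t nb)
{
  if (nb >= mmap_threshold)
    {
      /* Direct mmap path: MAP_HUGETLB is only used when the request is
         at least one large page.  Before this commit, a request between
         the default threshold and hp_pagesize landed here and got a
         plain mapping.  */
      if (nb >= hp_pagesize)
        {
          void *p = anon_mmap (nb, MAP_HUGETLB);
          if (p != NULL)
            return p;
        }
      return anon_mmap (nb, 0);
    }

  /* sbrk path: with hugetlb=2, morecore always fails and the mmap
     fallback tries MAP_HUGETLB first.  Raising mmap_threshold to
     hp_pagesize routes the in-between sizes through here as well.  */
  if (always_fail_morecore)
    {
      size_t sz = nb < hp_pagesize ? hp_pagesize : nb;
      void *p = anon_mmap (sz, MAP_HUGETLB);
      if (p != NULL)
        return p;
      return anon_mmap (nb, 0);
    }
  return NULL;  /* The real code would extend the heap via sbrk here.  */
}

int
main (void)
{
  /* A 512 KiB request: takes the direct-mmap branch under the default
     threshold setting, the fallback branch once the threshold is tuned
     up to hp_pagesize.  */
  return allocate_sketch (512 * 1024) != NULL ? 0 : 1;
}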

Checked on x86_64-linux-gnu.
Reviewed-by: DJ Delorie <dj@redhat.com>
Tested-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Adhemerval Zanella 2023-11-23 14:29:15 -03:00
parent a4c3f5f46e
commit bc6d79f4ae

@@ -312,10 +312,17 @@ ptmalloc_init (void)
 # endif
   TUNABLE_GET (mxfast, size_t, TUNABLE_CALLBACK (set_mxfast));
   TUNABLE_GET (hugetlb, size_t, TUNABLE_CALLBACK (set_hugetlb));
   if (mp_.hp_pagesize > 0)
-    /* Force mmap for main arena instead of sbrk, so hugepages are explicitly
-       used.  */
-    __always_fail_morecore = true;
+    {
+      /* Force mmap for main arena instead of sbrk, so MAP_HUGETLB is always
+         tried.  Also tune the mmap threshold, so allocation smaller than the
+         large page will also try to use large pages by falling back
+         to sysmalloc_mmap_fallback on sysmalloc.  */
+      if (!TUNABLE_IS_INITIALIZED (mmap_threshold))
+        do_set_mmap_threshold (mp_.hp_pagesize);
+      __always_fail_morecore = true;
+    }
 }
 /* Managing heaps and arenas (for concurrent threads) */
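
One possible way to observe the effect (a hedged example, not part of the
commit): allocate a block between the 128 KiB default mmap threshold and
the 2 MiB large page size, then print the KernelPageSize of the mapping
backing it from /proc/self/smaps.  This assumes 2 MiB huge pages have
been reserved (e.g. via vm.nr_hugepages) and the program is run as
GLIBC_TUNABLES=glibc.malloc.hugetlb=2 ./a.out; with this change the field
should read 2048 kB, without it (or when no huge pages are available) it
reads 4 kB.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main (void)
{
  /* 512 KiB: above the 128 KiB default mmap threshold, below a 2 MiB
     large page, i.e. exactly the range this commit changes.  */
  void *p = malloc (512 * 1024);
  if (p == NULL)
    return 1;
  uintptr_t addr = (uintptr_t) p;

  FILE *f = fopen ("/proc/self/smaps", "r");
  if (f == NULL)
    return 1;

  char line[512];
  int in_target = 0;
  while (fgets (line, sizeof line, f) != NULL)
    {
      uintptr_t start, end;
      /* Mapping header lines look like "START-END perms ...".  */
      if (sscanf (line, "%" SCNxPTR "-%" SCNxPTR, &start, &end) == 2)
        in_target = addr >= start && addr < end;
      else if (in_target && strncmp (line, "KernelPageSize:", 15) == 0)
        fputs (line, stdout);  /* "2048 kB" when MAP_HUGETLB was used.  */
    }

  fclose (f);
  free (p);
  return 0;
}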