Commit Graph

363 Commits

Florian Weimer
524d796d5f malloc: Update heap dumping/undumping comments [BZ #23351]
Also remove a few now-unused declarations and definitions.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2018-06-29 14:55:15 +02:00
Francois Goichon
bdc3009b8f malloc: harden removal from unsorted list
* malloc/malloc.c (_int_malloc): Added check before removing from
unsorted list.
2018-03-14 16:25:57 -04:00
Florian Weimer
229855e598 malloc: Revert sense of prev_inuse in comments
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2018-03-09 16:21:22 +01:00
Zack Weinberg
9964a14579 Mechanically remove _IO_ name aliases for types and constants.
This patch mechanically removes all remaining uses, and the
definitions, of the following libio name aliases:

 name                         replaced with
 ----                         -------------
 _IO_FILE                     FILE
 _IO_fpos_t                   __fpos_t
 _IO_fpos64_t                 __fpos64_t
 _IO_size_t                   size_t
 _IO_ssize_t                  ssize_t or __ssize_t
 _IO_off_t                    off_t
 _IO_off64_t                  off64_t
 _IO_pid_t                    pid_t
 _IO_uid_t                    uid_t
 _IO_wint_t                   wint_t
 _IO_va_list                  va_list or __gnuc_va_list
 _IO_BUFSIZ                   BUFSIZ
 _IO_cookie_io_functions_t    cookie_io_functions_t
 __io_read_fn                 cookie_read_function_t
 __io_write_fn                cookie_write_function_t
 __io_seek_fn                 cookie_seek_function_t
 __io_close_fn                cookie_close_function_t

I used __fpos_t and __fpos64_t instead of fpos_t and fpos64_t because
the definitions of fpos_t and fpos64_t depend on the largefile mode.
I used __ssize_t and __gnuc_va_list in a handful of headers where
namespace cleanliness might be relevant even though they're
internal-use-only.  In all other cases, I used the public-namespace
name.

There are a tiny handful of places where I left a use of 'struct _IO_FILE'
alone, because it was being used together with 'struct _IO_FILE_plus'
or 'struct _IO_FILE_complete' in the same arithmetic expression.

Because this patch was almost entirely done with search and replace, I
may have introduced indentation botches.  I did proofread the diff,
but I may have missed something.

The ChangeLog below calls out all of the places where this was not a
pure search-and-replace change.

Installed stripped libraries and executables are unchanged by this patch,
except that some assertions in vfscanf.c change line numbers.

	* libio/libio.h (_IO_FILE): Delete; all uses changed to FILE.
	(_IO_fpos_t): Delete; all uses changed to __fpos_t.
	(_IO_fpos64_t): Delete; all uses changed to __fpos64_t.
	(_IO_size_t): Delete; all uses changed to size_t.
	(_IO_ssize_t): Delete; all uses changed to ssize_t or __ssize_t.
	(_IO_off_t): Delete; all uses changed to off_t.
	(_IO_off64_t): Delete; all uses changed to off64_t.
	(_IO_pid_t): Delete; all uses changed to pid_t.
	(_IO_uid_t): Delete; all uses changed to uid_t.
	(_IO_wint_t): Delete; all uses changed to wint_t.
	(_IO_va_list): Delete; all uses changed to va_list or __gnuc_va_list.
	(_IO_BUFSIZ): Delete; all uses changed to BUFSIZ.
	(_IO_cookie_io_functions_t): Delete; all uses changed to
	cookie_io_functions_t.
	(__io_read_fn): Delete; all uses changed to cookie_read_function_t.
	(__io_write_fn): Delete; all uses changed to cookie_write_function_t.
	(__io_seek_fn): Delete; all uses changed to cookie_seek_function_t.
	(__io_close_fn): Delete: all uses changed to cookie_close_function_t.

	* libio/iofopncook.c: Remove unnecessary forward declarations.
	* libio/iolibio.h: Correct outdated commentary.
	* malloc/malloc.c (__malloc_stats): Remove unnecessary casts.
	* stdio-common/fxprintf.c (__fxprintf_nocancel):
	Remove unnecessary casts.
	* stdio-common/getline.c: Use _IO_getdelim directly.
	Don't redefine ssize_t.
	* stdio-common/printf_fp.c, stdio-common/printf_fphex.c
	* stdio-common/printf_size.c: Don't redefine size_t or FILE.
	Remove outdated comments.
	* stdio-common/vfscanf.c: Don't redefine va_list.
2018-02-21 14:11:05 -05:00
Zack Weinberg
402ecba487 [BZ #22830] malloc_stats: restore cancellation for stderr correctly.
malloc_stats means to disable cancellation for writes to stderr while
it runs, but it restores stderr->_flags2 with |= instead of =, so what
it actually does is disable cancellation on stderr permanently.

	[BZ #22830]
	* malloc/malloc.c (__malloc_stats): Restore stderr->_flags2
	correctly.
	* malloc/tst-malloc-stats-cancellation.c: New test case.
	* malloc/Makefile: Add new test case.
2018-02-10 16:24:17 -05:00
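
For illustration, a self-contained sketch of the |=-versus-= restore bug in the commit above; the struct and the NOTCANCEL constant below are stand-ins for glibc's stderr->_flags2 and _IO_FLAGS2_NOTCANCEL, not the real internals:

#include <stdio.h>

#define NOTCANCEL 0x2                  /* stand-in for _IO_FLAGS2_NOTCANCEL */

struct stream { int flags2; };

static void stats_broken (struct stream *s)
{
  int saved = s->flags2;
  s->flags2 |= NOTCANCEL;              /* disable cancellation while printing */
  /* ... write the statistics ... */
  s->flags2 |= saved;                  /* bug: NOTCANCEL stays set forever */
}

static void stats_fixed (struct stream *s)
{
  int saved = s->flags2;
  s->flags2 |= NOTCANCEL;
  /* ... write the statistics ... */
  s->flags2 = saved;                   /* restore the original flags exactly */
}

int main (void)
{
  struct stream s = { 0 };
  stats_broken (&s);
  printf ("after broken restore: %#x\n", s.flags2);   /* prints 0x2 */
  s.flags2 = 0;
  stats_fixed (&s);
  printf ("after fixed restore:  0x%x\n", s.flags2);  /* prints 0x0 */
  return 0;
}
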
Samuel Thibault
406e7a0a47 malloc: Use assert.h's assert macro
This avoids assert definition conflicts if some of the headers used by
malloc.c happen to include assert.h.  Malloc still needs a malloc-avoiding
assertion implementation, which we get by redirecting __assert_fail to
malloc's __malloc_assert.

	* malloc/malloc.c: Include <assert.h>.
	(assert): Do not define.
	[!defined NDEBUG] (__assert_fail): Define to __malloc_assert.
2018-01-29 23:00:17 +01:00
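
A stand-alone sketch of that redirection, assuming a glibc-style assert.h whose assert macro expands to a call to __assert_fail; the __malloc_assert body here is only illustrative:

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Allocation-free reporter, standing in for malloc's internal
   __malloc_assert.  */
static void __malloc_assert (const char *assertion, const char *file,
                             unsigned int line, const char *function)
{
  fprintf (stderr, "%s:%u: %s: assertion '%s' failed\n",
           file, line, function ? function : "??", assertion);
  abort ();
}

/* Redirect assert()'s failure hook so a failing assert inside malloc
   never calls back into the allocator.  */
#ifndef NDEBUG
# define __assert_fail(assertion, file, line, function) \
  __malloc_assert (assertion, file, line, function)
#endif

int main (void)
{
  assert (1 + 1 == 2);            /* passes */
  assert (0 && "demo failure");   /* aborts through __malloc_assert */
  return 0;
}
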
Arjun Shankar
8e448310d7 Fix integer overflows in internal memalign and malloc functions [BZ #22343]
When posix_memalign is called with an alignment less than MALLOC_ALIGNMENT
and a requested size close to SIZE_MAX, it falls back to malloc code
(because the alignment of a block returned by malloc is sufficient to
satisfy the call).  In this case, an integer overflow in _int_malloc leads
to posix_memalign incorrectly returning successfully.

Upon fixing this and writing a somewhat thorough regression test, it was
discovered that when posix_memalign is called with an alignment larger than
MALLOC_ALIGNMENT (so it uses _int_memalign instead) and a requested size
close to SIZE_MAX, a different integer overflow in _int_memalign leads to
posix_memalign incorrectly returning successfully.

Both integer overflows affect other memory allocation functions that use
_int_malloc (one of the overflows affected malloc on x86) or _int_memalign
as well.

This commit fixes both integer overflows.  In addition to this, it adds a
regression test to guard against false successful allocations by the
following memory allocation functions when called with too-large allocation
sizes and, where relevant, various valid alignments:
malloc, realloc, calloc, reallocarray, memalign, posix_memalign,
aligned_alloc, valloc, and pvalloc.
2018-01-18 17:55:45 +01:00
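
The arithmetic behind the bug can be seen in isolation: padding a request that is already close to SIZE_MAX wraps around to a small value instead of failing (the constants below are illustrative, not glibc's exact expressions):

#include <stdint.h>
#include <stdio.h>

int main (void)
{
  size_t request = SIZE_MAX - 32;              /* near-SIZE_MAX request */
  size_t alignment = 4096;
  size_t padded = request + alignment + 16;    /* unsigned wrap-around */
  printf ("request = %zu\n", request);
  printf ("padded  = %zu (wrapped: %s)\n", padded,
          padded < request ? "yes" : "no");
  return 0;
}
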
Istvan Kurucsai
249a5895f1 malloc: Ensure that the consolidated fast chunk has a sane size. 2018-01-12 15:26:20 +01:00
Joseph Myers
688903eb3e Update copyright dates with scripts/update-copyrights.
* All files with FSF copyright notices: Update copyright dates
	using scripts/update-copyrights.
	* locale/programs/charmap-kw.h: Regenerated.
	* locale/programs/locfile-kw.h: Likewise.
2018-01-01 00:32:25 +00:00
Arjun Shankar
34697694e8 Fix integer overflow in malloc when tcache is enabled [BZ #22375]
When the per-thread cache is enabled, __libc_malloc uses request2size (which
does not perform an overflow check) to calculate the chunk size from the
requested allocation size. This leads to an integer overflow causing malloc
to incorrectly return the last successfully allocated block when called with
a very large size argument (close to SIZE_MAX).

This commit uses checked_request2size instead, removing the overflow.
2017-11-30 13:42:53 +01:00
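
A rough, self-contained model of the difference between the two conversions; the functions below imitate request2size/checked_request2size only in spirit, and the limits are illustrative rather than glibc's actual range check:

#include <stdint.h>
#include <stdio.h>

#define SIZE_SZ     sizeof (size_t)
#define ALIGN_MASK  (2 * SIZE_SZ - 1)
#define MINSIZE     (4 * SIZE_SZ)

static size_t request2size (size_t req)
{
  /* No overflow check: for req near SIZE_MAX the additions wrap around.  */
  return (req + SIZE_SZ + ALIGN_MASK < MINSIZE)
         ? MINSIZE
         : (req + SIZE_SZ + ALIGN_MASK) & ~ALIGN_MASK;
}

static int checked_request2size (size_t req, size_t *out)
{
  /* Reject requests so large that the padding above would wrap.  */
  if (req > SIZE_MAX - 2 * MINSIZE)
    return 0;
  *out = request2size (req);
  return 1;
}

int main (void)
{
  size_t req = SIZE_MAX - 2, sz;
  printf ("request2size maps SIZE_MAX - 2 to %zu (a tiny chunk)\n",
          request2size (req));
  printf ("checked_request2size accepts it: %s\n",
          checked_request2size (req, &sz) ? "yes" : "no");
  return 0;
}
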
Florian Weimer
0a947e061d malloc: Call tcache destructor in arena_thread_freeres
It does not make sense to register separate cleanup functions for arena
and tcache since they're always going to be called together.  Call the
tcache cleanup function from within arena_thread_freeres since it at
least makes the order of those cleanups clear in the code.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2017-11-23 14:47:31 +01:00
Florian Weimer
34eb41579c malloc: Account for all heaps in an arena in malloc_info [BZ #22439]
This commit adds a "subheaps" field to the malloc_info output that
shows the number of heaps that were allocated to extend a non-main
arena.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2017-11-15 11:40:41 +01:00
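
malloc_info itself is a public glibc interface, so the new field can be observed directly; a minimal usage example (the exact XML attributes, including the subheaps count, vary by glibc version):

#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int main (void)
{
  /* Touch the allocator so the report has some content, then dump the
     malloc_info XML (which, after this change, accounts for every heap
     backing a non-main arena) to stdout.  */
  void *p = malloc (1 << 20);
  if (malloc_info (0, stdout) != 0)
    perror ("malloc_info");
  free (p);
  return 0;
}
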
Florian Weimer
7a9368a117 malloc: Add missing arena lock in malloc_info [BZ #22408]
Obtain the size information while the arena lock is acquired, and only
print it later.
2017-11-15 11:39:01 +01:00
Wilco Dijkstra
905a7725e9 Add single-threaded path to _int_malloc
This patch adds single-threaded fast paths to _int_malloc.

	* malloc/malloc.c (_int_malloc): Add SINGLE_THREAD_P path.
2017-10-24 12:43:05 +01:00
Wilco Dijkstra
3f6bb8a32e Add single-threaded path to malloc/realloc/calloc/memalloc
This patch adds a single-threaded fast path to malloc, realloc,
calloc and memalign.  When we're single-threaded, we can bypass
arena_get (which always locks the arena it returns) and just use
the main arena.  Also avoid retrying a different arena since
there is just the main arena.

	* malloc/malloc.c (__libc_malloc): Add SINGLE_THREAD_P path.
	(__libc_realloc): Likewise.
	(_mid_memalign): Likewise.
	(__libc_calloc): Likewise.
2017-10-24 12:39:24 +01:00
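
A toy model of the pattern, with stand-in names (the thread counter, the arena lock, and arena_alloc below are placeholders for glibc's SINGLE_THREAD_P, the arena mutex, and _int_malloc):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t arena_lock = PTHREAD_MUTEX_INITIALIZER;
static int thread_count = 1;       /* would be maintained at thread creation */

static void *arena_alloc (size_t bytes)
{
  return malloc (bytes);           /* placeholder for allocating from an arena */
}

static void *fastpath_malloc (size_t bytes)
{
  if (thread_count == 1)
    /* Single-threaded: nobody else can touch the arena, so the lock and the
       retry-with-another-arena fallback are pure overhead.  */
    return arena_alloc (bytes);

  pthread_mutex_lock (&arena_lock);
  void *p = arena_alloc (bytes);
  pthread_mutex_unlock (&arena_lock);
  return p;
}

int main (void)
{
  void *p = fastpath_malloc (64);
  printf ("allocated %p\n", p);
  free (p);
  return 0;
}
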
Wilco Dijkstra
6d43de4b85 Fix build issue with SINGLE_THREAD_P
Add sysdep-cancel.h include.

	* malloc/malloc.c (sysdep-cancel.h): Add include.
2017-10-20 17:39:47 +01:00
Wilco Dijkstra
a15d53e2de Add single-threaded path to _int_free
This patch adds single-threaded fast paths to _int_free.
Bypass the explicit locking for larger allocations.

	* malloc/malloc.c (_int_free): Add SINGLE_THREAD_P fast paths.
2017-10-20 17:31:06 +01:00
Wilco Dijkstra
d74e6f6c0d Fix deadlock in _int_free consistency check
This patch fixes a deadlock in the fastbin consistency check.
If we fail the fast check due to concurrent modifications to
the next chunk or system_mem, we should not lock if we already
have the arena lock.  Simplify the check to make it obviously
correct.

	* malloc/malloc.c (_int_free): Fix deadlock bug in consistency check.
2017-10-19 18:19:55 +01:00
Wilco Dijkstra
2c2245b92c Fix build failure on tilepro due to unsupported atomics
* malloc/malloc.c (malloc_state): Use int for have_fastchunks since
	not all targets support atomics on bool.
2017-10-18 12:20:55 +01:00
Wilco Dijkstra
3381be5cde Improve malloc initialization sequence
The current malloc initialization is quite convoluted. Instead of
sometimes calling malloc_consolidate from ptmalloc_init, call
malloc_init_state early so that the main_arena is always initialized.
The special initialization can now be removed from malloc_consolidate.
This also fixes BZ #22159.

Check all calls to malloc_consolidate and remove calls that are
redundant initialization after ptmalloc_init, like in int_mallinfo
and __libc_mallopt (but keep the latter as consolidation is required for
set_max_fast).  Update comments to improve clarity.

Remove the impossible initialization check from _int_malloc, and fix the
assert in do_check_malloc_state to ensure arena->top != 0.  Fix the obvious
bugs in do_check_free_chunk and do_check_remalloced_chunk to enable
single-threaded malloc debugging (do_check_malloc_state is not thread-safe!).

	[BZ #22159]
	* malloc/arena.c (ptmalloc_init): Call malloc_init_state.
	* malloc/malloc.c (do_check_free_chunk): Fix build bug.
	(do_check_remalloced_chunk): Fix build bug.
	(do_check_malloc_state): Add assert that checks arena->top.
	(malloc_consolidate): Remove initialization.
	(int_mallinfo): Remove call to malloc_consolidate.
	(__libc_mallopt): Clarify why malloc_consolidate is needed.
2017-10-17 18:55:16 +01:00
Wilco Dijkstra
e956075a5a Use relaxed atomics for malloc have_fastchunks
Currently free typically uses two atomic operations per call.  The
have_fastchunks flag indicates whether there are recently freed blocks in the
fastbins.  It is purely an optimization to avoid calling malloc_consolidate
too often and to avoid the overhead of walking all fast bins even when they
are all empty during a sequence of allocations.  However, using catomic_or to
update the flag is unnecessary, since it can be changed into a simple boolean
and accessed using relaxed atomics.  There is no change in multi-threaded
behaviour, given that the flag is already approximate (it may be set when
there are no blocks in any fast bins, or it may be clear when there are free
blocks that could be consolidated).

Performance of malloc/free improves by 27% on a simple benchmark on AArch64
(both single and multithreaded). The number of load/store exclusive instructions
is reduced by 33%. Bench-malloc-thread speeds up by ~3% in all cases.

	* malloc/malloc.c (FASTCHUNKS_BIT): Remove.
	(have_fastchunks): Remove.
	(clear_fastchunks): Remove.
	(set_fastchunks): Remove.
	(malloc_state): Add have_fastchunks.
	(malloc_init_state): Use have_fastchunks.
	(do_check_malloc_state): Remove incorrect invariant checks.
	(_int_malloc): Use have_fastchunks.
	(_int_free): Likewise.
	(malloc_consolidate): Likewise.
2017-10-17 18:43:31 +01:00
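
In standard C11 the same idea looks roughly like this; glibc uses its internal atomic wrappers and, per the tilepro fix above, an int rather than a bool, so this is only a sketch of the relaxed-ordering argument:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* The flag is only a hint, so atomicity is needed but ordering is not.  */
static _Atomic bool have_fastchunks = false;

static void note_fastbin_insert (void)
{
  atomic_store_explicit (&have_fastchunks, true, memory_order_relaxed);
}

static void maybe_consolidate (void)
{
  if (atomic_load_explicit (&have_fastchunks, memory_order_relaxed))
    {
      atomic_store_explicit (&have_fastchunks, false, memory_order_relaxed);
      puts ("walking the fast bins and consolidating");
    }
}

int main (void)
{
  note_fastbin_insert ();
  maybe_consolidate ();   /* consolidates once */
  maybe_consolidate ();   /* flag already clear, nothing to do */
  return 0;
}
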
Wilco Dijkstra
e4dd4ace56 Inline tcache functions
The functions tcache_get and tcache_put show up in profiles as they
are a critical part of the tcache code.  Inline them to give tcache
a 16% performance gain.  Since this improves multi-threaded cases
as well, it helps offset any potential performance loss due to adding
single-threaded fast paths.

	* malloc/malloc.c (tcache_put): Inline.
	(tcache_get): Inline.
2017-10-17 18:25:43 +01:00
Carlos O'Donell
1e26d35193 malloc: Fix tcache leak after thread destruction [BZ #22111]
The malloc tcache added in 2.26 will leak all of the elements remaining
in the cache and the cache structure itself when a thread exits. The
defect is that we do not set tcache_shutting_down early enough, and the
thread simply recreates the tcache and places the elements back onto a
new tcache which is subsequently lost as the thread exits (unfreed
memory). The fix is relatively simple, move the setting of
tcache_shutting_down earlier in tcache_thread_freeres.  We add a test
case which uses mallinfo and some heuristics to look for unaccounted-for
memory usage between the start and end of a thread start/join loop.  It
is very reliable at detecting a leak given the number of iterations.
Without the fix the test consumes 122 MiB of leaked memory.
2017-10-06 09:31:52 -07:00
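
A self-contained toy of the ordering fix; tcache, tcache_init and tcache_thread_freeres here are simplified stand-ins for the real per-thread structures:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

static bool tcache_shutting_down = false;
static void *tcache = NULL;

static void tcache_init (void)
{
  if (tcache_shutting_down)
    return;                       /* refuse to recreate during teardown */
  if (tcache == NULL)
    tcache = malloc (64);
}

static void tcache_thread_freeres (void)
{
  tcache_shutting_down = true;    /* the fix: set the flag first */
  free (tcache);
  tcache = NULL;
  /* Any allocation below now sees the flag and does not rebuild the
     cache, so nothing is leaked when the thread exits.  */
  tcache_init ();
}

int main (void)
{
  tcache_init ();
  tcache_thread_freeres ();
  printf ("tcache after teardown: %p\n", tcache);   /* (nil) */
  return 0;
}
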
Florian Weimer
0c71122c0c malloc: Remove the internal_function attribute 2017-08-31 15:59:06 +02:00
Florian Weimer
24cffce736 malloc: Resolve compilation failure in NDEBUG mode
In _int_free, the locked variable is not used if NDEBUG is defined.
2017-08-31 15:34:22 +02:00
Florian Weimer
5129873a8e malloc: Change top_check return type to void
After commit ec2c1fcefb
(malloc: Abort on heap corruption, without a backtrace), the function
always returns 0.
2017-08-31 14:16:52 +02:00
Florian Weimer
a9da0bb266 malloc: Remove corrupt arena flag
This is no longer needed because we now abort immediately
once heap corruption is detected.
2017-08-30 20:08:34 +02:00
Florian Weimer
ac3ed168d0 malloc: Remove check_action variable [BZ #21754]
Clean up calls to malloc_printerr and trim its argument list.

This also removes a few bits of work done before calling
malloc_printerr (such as unlocking operations).

The tunable/environment variable still enables the lightweight
additional malloc checking, but mallopt (M_CHECK_ACTION)
no longer has any effect.
2017-08-30 20:08:34 +02:00
Florian Weimer
ec2c1fcefb malloc: Abort on heap corruption, without a backtrace [BZ #21754]
The stack trace printing caused deadlocks and has itself been
targeted by code execution exploits.
2017-08-30 16:39:41 +02:00
Florian Weimer
eac43cbb8d malloc: Avoid optimizer warning with GCC 7 and -O3 2017-08-10 15:58:28 +02:00
H.J. Lu
ed421fca42 Avoid backtrace from __stack_chk_fail [BZ #12189]
__stack_chk_fail is called on a corrupted stack, and a stack backtrace is
very unreliable against a corrupted stack.  __libc_message is changed to accept
enum __libc_message_action and call BEFORE_ABORT only if action includes
do_backtrace.  __fortify_fail_abort is added to avoid backtrace from
__stack_chk_fail.

	[BZ #12189]
	* debug/Makefile (CFLAGS-tst-ssp-1.c): New.
	(tests): Add tst-ssp-1 if -fstack-protector works.
	* debug/fortify_fail.c: Include <stdbool.h>.
	(_fortify_fail_abort): New function.
	(__fortify_fail): Call _fortify_fail_abort.
	(__fortify_fail_abort): Add a hidden definition.
	* debug/stack_chk_fail.c: Include <stdbool.h>.
	(__stack_chk_fail): Call __fortify_fail_abort, instead of
	__fortify_fail.
	* debug/tst-ssp-1.c: New file.
	* include/stdio.h (__libc_message_action): New enum.
	(__libc_message): Replace int with enum __libc_message_action.
	(__fortify_fail_abort): New hidden prototype.
	* malloc/malloc.c (malloc_printerr): Update __libc_message calls.
	* sysdeps/posix/libc_fatal.c (__libc_message): Replace int
	with enum __libc_message_action.  Call BEFORE_ABORT only if
	action includes do_backtrace.
	(__libc_fatal): Update __libc_message call.
2017-07-11 07:44:14 -07:00
DJ Delorie
d5c3fafc43 Add per-thread cache to malloc
* config.make.in: Enable experimental malloc option.
* configure.ac: Likewise.
* configure: Regenerate.
* manual/install.texi: Document it.
* INSTALL: Regenerate.
* malloc/Makefile: Likewise.
* malloc/malloc.c: Add per-thread cache (tcache).
(tcache_put): New.
(tcache_get): New.
(tcache_thread_freeres): New.
(tcache_init): New.
(__libc_malloc): Use cached chunks if available.
(__libc_free): Initialize tcache if needed.
(__libc_realloc): Likewise.
(__libc_calloc): Likewise.
(_int_malloc): Prefill tcache when appropriate.
(_int_free): Likewise.
(do_set_tcache_max): New.
(do_set_tcache_count): New.
(do_set_tcache_unsorted_limit): New.
* manual/probes.texi: Document new probes.
* malloc/arena.c: Add new tcache tunables.
* elf/dl-tunables.list: Likewise.
* manual/tunables.texi: Document them.
* NEWS: Mention the per-thread cache.
2017-07-06 13:37:30 -04:00
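
A minimal stand-alone sketch of the tcache idea, not glibc's implementation: each thread keeps small singly linked lists of freed blocks that can be reused without touching the shared arena (bin sizing, count limits and teardown omitted):

#include <stdio.h>
#include <stdlib.h>

enum { TCACHE_BINS = 64 };

struct tc_entry { struct tc_entry *next; };

static __thread struct tc_entry *bins[TCACHE_BINS];
static __thread unsigned counts[TCACHE_BINS];

static void tcache_put (void *block, size_t bin)   /* called from "free" */
{
  struct tc_entry *e = block;
  e->next = bins[bin];
  bins[bin] = e;
  ++counts[bin];
}

static void *tcache_get (size_t bin)               /* called from "malloc" */
{
  struct tc_entry *e = bins[bin];
  if (e == NULL)
    return NULL;
  bins[bin] = e->next;
  --counts[bin];
  return e;
}

int main (void)
{
  void *p = malloc (32);
  tcache_put (p, 1);                /* free into the per-thread cache */
  void *q = tcache_get (1);         /* reused immediately, no lock taken */
  printf ("cached block reused: %s\n", p == q ? "yes" : "no");
  free (q);
  return 0;
}
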
DJ Delorie
3b5f801ddb Tweak realloc/MREMAP comment to be more accurate.
MMap'd memory isn't shrunk without MREMAP, but IIRC this is intentional for
performance reasons.  Regardless, this patch tweaks the existing comment to
be more accurate wrt the existing code.

	[BZ #21411]
	* malloc/malloc.c: Tweak realloc/MREMAP comment to be more accurate.
2017-05-03 16:28:01 -04:00
Florian Weimer
025b33ae84 malloc: Turn cfree into a compatibility symbol 2017-04-18 11:50:58 +02:00
Wladimir J. van der Laan
622222846a Call the right helper function when setting mallopt M_ARENA_MAX (BZ #21338)
Fixes a typo introduced in commit
be7991c070. This caused
mallopt(M_ARENA_MAX) as well as the environment variable
MALLOC_ARENA_MAX to not work as intended because it set the
wrong internal parameter.

	[BZ #21338]
	* malloc/malloc.c: Call do_set_arena_max for M_ARENA_MAX
	instead of the incorrect do_set_arena_test.
2017-04-01 12:39:10 +05:30
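
Usage of the now-working knob from application code on glibc (the same limit can also be set through the MALLOC_ARENA_MAX environment variable):

#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int main (void)
{
  /* Cap the process at two arenas; mallopt returns 1 on success.  */
  if (mallopt (M_ARENA_MAX, 2) != 1)
    fprintf (stderr, "mallopt (M_ARENA_MAX) failed\n");

  void *p = malloc (128);
  free (p);
  return 0;
}
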
DJ Delorie
17f487b7af Further harden glibc malloc metadata against 1-byte overflows.
Additional check for chunk_size == next->prev->chunk_size in unlink()

2017-03-17  Chris Evans  <scarybeasts@gmail.com>

	* malloc/malloc.c (unlink): Add consistency check between size and
	next->prev->size, to further harden against 1-byte overflows.
2017-03-17 15:31:38 -04:00
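
The shape of such a check can be shown with a self-contained toy list; this is not glibc's unlink macro, which additionally compares the chunk's recorded size against the size stored in the next chunk as described above:

#include <stdio.h>
#include <stdlib.h>

struct node { struct node *fd, *bk; };

static void safe_unlink (struct node *p)
{
  /* Classic back-pointer consistency check before unlinking; the commit
     above adds a further size comparison on top of checks like this.  */
  if (p->fd->bk != p || p->bk->fd != p)
    {
      fprintf (stderr, "corrupted double-linked list\n");
      abort ();
    }
  p->fd->bk = p->bk;
  p->bk->fd = p->fd;
}

int main (void)
{
  struct node a, b, c;
  a.fd = &b; b.fd = &c; c.fd = &a;   /* circular list a <-> b <-> c */
  a.bk = &c; b.bk = &a; c.bk = &b;
  safe_unlink (&b);                  /* consistency checks pass */
  printf ("a now links to c: %s\n", a.fd == &c ? "yes" : "no");
  return 0;
}
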
Zack Weinberg
9090848d06 Narrowing the visibility of libc-internal.h even further.
posix/wordexp-test.c used libc-internal.h for PTR_ALIGN_DOWN; similar
to what was done with libc-diag.h, I have split the definitions of
cast_to_integer, ALIGN_UP, ALIGN_DOWN, PTR_ALIGN_UP, and PTR_ALIGN_DOWN
out into a new header, libc-pointer-arith.h.

It then occurred to me that the remaining declarations in libc-internal.h
are mostly to do with early initialization, and probably most of the
files including it, even in the core code, don't need it anymore.  Indeed,
only 19 files actually need what remains of libc-internal.h.  23 others
need libc-diag.h instead, and 12 need libc-pointer-arith.h instead.
No file needs more than one of them, and 16 don't need any of them!

So, with this patch, libc-internal.h stops including libc-diag.h as
well as losing the pointer arithmetic macros, and all including files
are adjusted.

	* include/libc-pointer-arith.h: New file.  Define
	cast_to_integer, ALIGN_UP, ALIGN_DOWN, PTR_ALIGN_UP, and
	PTR_ALIGN_DOWN here.
	* include/libc-internal.h: Definitions of above macros
	moved from here.  Don't include libc-diag.h anymore either.
	* posix/wordexp-test.c: Include stdint.h and libc-pointer-arith.h.
	Don't include libc-internal.h.

	* debug/pcprofile.c, elf/dl-tunables.c, elf/soinit.c, io/openat.c
	* io/openat64.c, misc/ptrace.c, nptl/pthread_clock_gettime.c
	* nptl/pthread_clock_settime.c, nptl/pthread_cond_common.c
	* string/strcoll_l.c, sysdeps/nacl/brk.c
	* sysdeps/unix/clock_settime.c
	* sysdeps/unix/sysv/linux/i386/get_clockfreq.c
	* sysdeps/unix/sysv/linux/ia64/get_clockfreq.c
	* sysdeps/unix/sysv/linux/powerpc/get_clockfreq.c
	* sysdeps/unix/sysv/linux/sparc/sparc64/get_clockfreq.c:
	Don't include libc-internal.h.

	* elf/get-dynamic-info.h, iconv/loop.c
	* iconvdata/iso-2022-cn-ext.c, locale/weight.h, locale/weightwc.h
	* misc/reboot.c, nis/nis_table.c, nptl_db/thread_dbP.h
	* nscd/connections.c, resolv/res_send.c, soft-fp/fmadf4.c
	* soft-fp/fmasf4.c, soft-fp/fmatf4.c, stdio-common/vfscanf.c
	* sysdeps/ieee754/dbl-64/e_lgamma_r.c
	* sysdeps/ieee754/dbl-64/k_rem_pio2.c
	* sysdeps/ieee754/flt-32/e_lgammaf_r.c
	* sysdeps/ieee754/flt-32/k_rem_pio2f.c
	* sysdeps/ieee754/ldbl-128/k_tanl.c
	* sysdeps/ieee754/ldbl-128ibm/k_tanl.c
	* sysdeps/ieee754/ldbl-96/e_lgammal_r.c
	* sysdeps/ieee754/ldbl-96/k_tanl.c, sysdeps/nptl/futex-internal.h:
	Include libc-diag.h instead of libc-internal.h.

	* elf/dl-load.c, elf/dl-reloc.c, locale/programs/locarchive.c
	* nptl/nptl-init.c, string/strcspn.c, string/strspn.c
	* malloc/malloc.c, sysdeps/i386/nptl/tls.h
	* sysdeps/nacl/dl-map-segments.h, sysdeps/x86_64/atomic-machine.h
	* sysdeps/unix/sysv/linux/spawni.c
	* sysdeps/x86_64/nptl/tls.h:
	Include libc-pointer-arith.h instead of libc-internal.h.

	* elf/get-dynamic-info.h, sysdeps/nacl/dl-map-segments.h
	* sysdeps/x86_64/atomic-machine.h:
	Add multiple include guard.
2017-03-01 20:33:46 -05:00
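
The pointer-arithmetic macros that moved read roughly as follows (paraphrased; a power-of-two size is assumed), together with a small usage example:

#include <stdint.h>
#include <stdio.h>

#define ALIGN_DOWN(base, size)     ((base) & -((__typeof__ (base)) (size)))
#define ALIGN_UP(base, size)       ALIGN_DOWN ((base) + (size) - 1, (size))
#define PTR_ALIGN_DOWN(base, size) \
  ((__typeof__ (base)) ALIGN_DOWN ((uintptr_t) (base), (size)))
#define PTR_ALIGN_UP(base, size) \
  ((__typeof__ (base)) ALIGN_UP ((uintptr_t) (base), (size)))

int main (void)
{
  char buf[64];
  printf ("ALIGN_UP (13, 8)   = %d\n", ALIGN_UP (13, 8));     /* 16 */
  printf ("ALIGN_DOWN (13, 8) = %d\n", ALIGN_DOWN (13, 8));   /* 8 */
  printf ("%p aligned up to 16 -> %p\n", (void *) (buf + 3),
          (void *) PTR_ALIGN_UP (buf + 3, 16));
  return 0;
}
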
Joseph Myers
bfff8b1bec Update copyright dates with scripts/update-copyrights. 2017-01-01 00:14:16 +00:00
Florian Weimer
ae9166f2b8 malloc: Update comments about chunk layout 2016-10-28 22:36:58 +02:00
Florian Weimer
681421f3ca sysmalloc: Initialize previous size field of mmaped chunks
With different encodings of the header, the previous zero initialization
may be insufficient and produce an invalid encoding.
2016-10-28 16:49:04 +02:00
Florian Weimer
e9c4fe93b3 malloc: Use accessors for chunk metadata access
This change allows us to change the encoding of these struct members
in a centralized fashion.
2016-10-28 16:45:45 +02:00
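
A sketch of what "accessors" means here, with a struct and flag bits that only loosely mirror malloc's real chunk header:

#include <stddef.h>
#include <stdio.h>

struct chunk { size_t prev_size; size_t size_and_flags; };

#define SIZE_BITS ((size_t) 0x7)   /* low bits hold PREV_INUSE and friends */

/* Every read/write of the header goes through these helpers, so the
   on-heap encoding can later be changed in one central place.  */
static inline size_t chunksize (const struct chunk *p)
{
  return p->size_and_flags & ~SIZE_BITS;
}

static inline void set_chunksize (struct chunk *p, size_t sz)
{
  p->size_and_flags = sz | (p->size_and_flags & SIZE_BITS);
}

int main (void)
{
  struct chunk c = { 0, 0 };
  set_chunksize (&c, 128);
  c.size_and_flags |= 0x1;                         /* mark PREV_INUSE */
  printf ("chunk size = %zu\n", chunksize (&c));   /* 128 */
  return 0;
}
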
Siddhesh Poyarekar
be7991c070 Static inline functions for mallopt helpers
Make mallopt helper functions for each mallopt parameter so that it
can be called consistently in other areas, like setting tunables.

	* malloc/malloc.c (do_set_mallopt_check): New function.
	(do_set_mmap_threshold): Likewise.
	(do_set_mmaps_max): Likewise.
	(do_set_top_pad): Likewise.
	(do_set_perturb_byte): Likewise.
	(do_set_trim_threshold): Likewise.
	(do_set_arena_max): Likewise.
	(do_set_arena_test): Likewise.
	(__libc_mallopt): Use them.
2016-10-27 08:34:55 +05:30
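
A compact model of the refactoring: one helper per parameter, shared by the mallopt-style dispatch and, later, the tunables code; all names and defaults below are stand-ins:

#include <stdio.h>

static long mmap_threshold = 128 * 1024;
static long trim_threshold = 128 * 1024;

static int do_set_mmap_threshold (long value) { mmap_threshold = value; return 1; }
static int do_set_trim_threshold (long value) { trim_threshold = value; return 1; }

enum { MY_M_MMAP_THRESHOLD = -3, MY_M_TRIM_THRESHOLD = -1 };

static int my_mallopt (int param, long value)
{
  switch (param)
    {
    case MY_M_MMAP_THRESHOLD: return do_set_mmap_threshold (value);
    case MY_M_TRIM_THRESHOLD: return do_set_trim_threshold (value);
    default:                  return 0;
    }
}

int main (void)
{
  my_mallopt (MY_M_MMAP_THRESHOLD, 256 * 1024);
  printf ("mmap threshold = %ld, trim threshold = %ld\n",
          mmap_threshold, trim_threshold);
  return 0;
}
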
Florian Weimer
e863cce57b malloc: Remove malloc_get_state, malloc_set_state [BZ #19473]
After the removal of __malloc_initialize_hook, newly compiled
Emacs binaries are no longer able to use these interfaces.
malloc_get_state is only used during the Emacs build process,
so we provide a stub implementation only.  Existing Emacs binaries
will not call this stub function, but still reference the symbol.

The rewritten tst-mallocstate test constructs a dumped heap
which should approximate what existing Emacs binaries pass
to glibc malloc.
2016-10-26 13:28:28 +02:00
Siddhesh Poyarekar
68fc2ccc1a Remove redundant definitions of M_ARENA_* macros
The M_ARENA_MAX and M_ARENA_TEST macros are defined in malloc.c as
well as malloc.h, and the former is unnecessary.  This patch removes
the duplicate.  Tested on x86_64 to verify that the generated code
remains unchanged barring changed line numbers to __malloc_assert.

	* malloc/malloc.c (M_ARENA_TEST, M_ARENA_MAX): Remove.
2016-10-26 15:07:34 +05:30
Siddhesh Poyarekar
c1234e60f9 Document the M_ARENA_* mallopt parameters
The M_ARENA_* mallopt parameters are in wide use in production to
control the number of arenas that a long-lived process creates, and
hence there is no point in stating that this interface is non-public.
Document this interface and remove the obsolete comment.

	* manual/memory.texi (M_ARENA_TEST): Add documentation.
	(M_ARENA_MAX): Likewise.
	* malloc/malloc.c: Remove obsolete comment.
2016-10-26 15:06:21 +05:30
Florian Weimer
cbb47fa1c6 malloc: Manual part of conversion to __libc_lock
This removes the old mutex_t-related definitions from malloc-machine.h,
too.
2016-09-21 16:28:08 +02:00
Florian Weimer
4bf5f2224b malloc: Automated part of conversion to __libc_lock 2016-09-06 12:49:54 +02:00
Florian Weimer
5bc17330eb elf: dl-minimal malloc needs to respect fundamental alignment
The dynamic linker currently uses __libc_memalign for TLS-related
allocations.  The goal is to switch to malloc instead.  If the minimal
malloc follows the ABI fundamental alignment, we can assume that malloc
provides this alignment, and thus skip explicit alignment in a few
cases as an optimization.

It was requested on libc-alpha that MALLOC_ALIGNMENT should be used,
although this results in wasted space if MALLOC_ALIGNMENT is larger
than the fundamental alignment.  (The dynamic linker cannot assume
that the non-minimal malloc will provide an alignment of
MALLOC_ALIGNMENT; the ABI provides _Alignof (max_align_t) only.)
2016-08-03 16:11:01 +02:00
Florian Weimer
92e1ab0eb5 Revert __malloc_initialize_hook symbol poisoning
It turns out the Emacs-internal malloc implementation uses
__malloc_* symbols.  If glibc poisons them in <stdc-predef.h>,
Emacs will no longer compile.
2016-06-20 11:11:29 +02:00
Florian Weimer
073f82140c malloc_usable_size: Use correct size for dumped fake mapped chunks
The adjustment for the size computation in commit
1e8a8875d6 is needed in
malloc_usable_size, too.
2016-06-11 12:09:19 +02:00