Commit Graph

332 Commits

Author SHA1 Message Date
DJ Delorie
d5c3fafc43 Add per-thread cache to malloc
* config.make.in: Enable experimental malloc option.
* configure.ac: Likewise.
* configure: Regenerate.
* manual/install.texi: Document it.
* INSTALL: Regenerate.
* malloc/Makefile: Likewise.
* malloc/malloc.c: Add per-thread cache (tcache).
(tcache_put): New.
(tcache_get): New.
(tcache_thread_freeres): New.
(tcache_init): New.
(__libc_malloc): Use cached chunks if available.
(__libc_free): Initialize tcache if needed.
(__libc_realloc): Likewise.
(__libc_calloc): Likewise.
(_int_malloc): Prefill tcache when appropriate.
(_int_free): Likewise.
(do_set_tcache_max): New.
(do_set_tcache_count): New.
(do_set_tcache_unsorted_limit): New.
* manual/probes.texi: Document new probes.
* malloc/arena.c: Add new tcache tunables.
* elf/dl-tunables.list: Likewise.
* manual/tunables.texi: Document them.
* NEWS: Mention the per-thread cache.
2017-07-06 13:37:30 -04:00
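In outline, the per-thread cache is a small array of singly linked lists of recently freed chunks, one list per size class, consulted before any arena lock is taken. A minimal sketch of the idea, with illustrative names and limits rather than the actual glibc code:

  #include <stddef.h>

  #define TCACHE_BINS   64   /* illustrative number of size classes */
  #define TCACHE_COUNT   7   /* illustrative per-bin limit */

  /* Each cached chunk's user area holds the link to the next one. */
  typedef struct tcache_entry { struct tcache_entry *next; } tcache_entry;

  typedef struct
  {
    unsigned char counts[TCACHE_BINS];
    tcache_entry *entries[TCACHE_BINS];
  } tcache_t;

  static __thread tcache_t tcache;   /* one cache per thread, zero-initialized */

  /* Stash a freed block in its bin; the caller checks the count limit. */
  static void tcache_put (void *block, size_t bin)
  {
    tcache_entry *e = block;
    e->next = tcache.entries[bin];
    tcache.entries[bin] = e;
    ++tcache.counts[bin];
  }

  /* Pop a cached block from a known non-empty bin; no lock needed. */
  static void *tcache_get (size_t bin)
  {
    tcache_entry *e = tcache.entries[bin];
    tcache.entries[bin] = e->next;
    --tcache.counts[bin];
    return e;
  }

malloc then checks the matching bin before falling back to the locked arena path, and free refills it up to the per-bin limit; the tcache_count and tcache_max tunables named in the ChangeLog bound exactly these two quantities.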
DJ Delorie
3b5f801ddb Tweak realloc/MREMAP comment to be more accurate.
MMap'd memory isn't shrunk without MREMAP, but IIRC this is intentional for
performance reasons.  Regardless, this patch tweaks the existing comment to
be more accurate wrt the existing code.

	[BZ #21411]
	* malloc/malloc.c: Tweak realloc/MREMAP comment to be more accurate.
2017-05-03 16:28:01 -04:00
Florian Weimer
025b33ae84 malloc: Turn cfree into a compatibility symbol 2017-04-18 11:50:58 +02:00
Wladimir J. van der Laan
622222846a Call the right helper function when setting mallopt M_ARENA_MAX (BZ #21338)
Fixes a typo introduced in commit
be7991c070. This caused
mallopt(M_ARENA_MAX) as well as the environment variable
MALLOC_ARENA_MAX to not work as intended because it set the
wrong internal parameter.

	[BZ #21338]
	* malloc/malloc.c: Call do_set_arena_max for M_ARENA_MAX
	instead of incorrect do_set_arena_test.
2017-04-01 12:39:10 +05:30
DJ Delorie
17f487b7af Further harden glibc malloc metadata against 1-byte overflows.
Additional check for chunk_size == next->prev->chunk_size in unlink()

2017-03-17  Chris Evans  <scarybeasts@gmail.com>

	* malloc/malloc.c (unlink): Add consistency check between size and
	next->prev->size, to further harden against 1-byte overflows.
2017-03-17 15:31:38 -04:00
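The hardening relies on a heap invariant: the size stored in a free chunk's own header must equal the prev_size recorded in the chunk that follows it in memory, so a single-byte overflow into the size field is caught before the list pointers are trusted. A hedged sketch of that kind of check, with simplified field names rather than the real unlink macro:

  #include <stddef.h>
  #include <stdlib.h>

  /* Simplified free-chunk header; glibc's real malloc_chunk differs. */
  struct chunk
  {
    size_t prev_size;   /* copy of the previous free chunk's size */
    size_t size;
    struct chunk *fd;   /* next chunk in this bin */
    struct chunk *bk;   /* previous chunk in this bin */
  };

  /* The chunk that follows p in memory starts p->size bytes later. */
  static struct chunk *next_chunk (struct chunk *p)
  {
    return (struct chunk *) ((char *) p + p->size);
  }

  static void unlink_chunk (struct chunk *p)
  {
    /* New consistency check: a corrupted size usually breaks the
       size == next->prev_size invariant, so abort before unlinking. */
    if (next_chunk (p)->prev_size != p->size)
      abort ();
    /* Pre-existing corrupted-double-linked-list check, then the unlink. */
    if (p->fd->bk != p || p->bk->fd != p)
      abort ();
    p->fd->bk = p->bk;
    p->bk->fd = p->fd;
  }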
Zack Weinberg
9090848d06 Narrowing the visibility of libc-internal.h even further.
posix/wordexp-test.c used libc-internal.h for PTR_ALIGN_DOWN; similar
to what was done with libc-diag.h, I have split the definitions of
cast_to_integer, ALIGN_UP, ALIGN_DOWN, PTR_ALIGN_UP, and PTR_ALIGN_DOWN
to a new header, libc-pointer-arith.h.

It then occurred to me that the remaining declarations in libc-internal.h
are mostly to do with early initialization, and probably most of the
files including it, even in the core code, don't need it anymore.  Indeed,
only 19 files actually need what remains of libc-internal.h.  23 others
need libc-diag.h instead, and 12 need libc-pointer-arith.h instead.
No file needs more than one of them, and 16 don't need any of them!

So, with this patch, libc-internal.h stops including libc-diag.h as
well as losing the pointer arithmetic macros, and all including files
are adjusted.

        * include/libc-pointer-arith.h: New file.  Define
	cast_to_integer, ALIGN_UP, ALIGN_DOWN, PTR_ALIGN_UP, and
        PTR_ALIGN_DOWN here.
        * include/libc-internal.h: Definitions of above macros
	moved from here.  Don't include libc-diag.h anymore either.
	* posix/wordexp-test.c: Include stdint.h and libc-pointer-arith.h.
        Don't include libc-internal.h.

	* debug/pcprofile.c, elf/dl-tunables.c, elf/soinit.c, io/openat.c
	* io/openat64.c, misc/ptrace.c, nptl/pthread_clock_gettime.c
	* nptl/pthread_clock_settime.c, nptl/pthread_cond_common.c
	* string/strcoll_l.c, sysdeps/nacl/brk.c
	* sysdeps/unix/clock_settime.c
	* sysdeps/unix/sysv/linux/i386/get_clockfreq.c
	* sysdeps/unix/sysv/linux/ia64/get_clockfreq.c
	* sysdeps/unix/sysv/linux/powerpc/get_clockfreq.c
	* sysdeps/unix/sysv/linux/sparc/sparc64/get_clockfreq.c:
	Don't include libc-internal.h.

	* elf/get-dynamic-info.h, iconv/loop.c
	* iconvdata/iso-2022-cn-ext.c, locale/weight.h, locale/weightwc.h
	* misc/reboot.c, nis/nis_table.c, nptl_db/thread_dbP.h
	* nscd/connections.c, resolv/res_send.c, soft-fp/fmadf4.c
	* soft-fp/fmasf4.c, soft-fp/fmatf4.c, stdio-common/vfscanf.c
	* sysdeps/ieee754/dbl-64/e_lgamma_r.c
	* sysdeps/ieee754/dbl-64/k_rem_pio2.c
	* sysdeps/ieee754/flt-32/e_lgammaf_r.c
	* sysdeps/ieee754/flt-32/k_rem_pio2f.c
	* sysdeps/ieee754/ldbl-128/k_tanl.c
	* sysdeps/ieee754/ldbl-128ibm/k_tanl.c
	* sysdeps/ieee754/ldbl-96/e_lgammal_r.c
	* sysdeps/ieee754/ldbl-96/k_tanl.c, sysdeps/nptl/futex-internal.h:
	Include libc-diag.h instead of libc-internal.h.

        * elf/dl-load.c, elf/dl-reloc.c, locale/programs/locarchive.c
        * nptl/nptl-init.c, string/strcspn.c, string/strspn.c
	* malloc/malloc.c, sysdeps/i386/nptl/tls.h
	* sysdeps/nacl/dl-map-segments.h, sysdeps/x86_64/atomic-machine.h
	* sysdeps/unix/sysv/linux/spawni.c
        * sysdeps/x86_64/nptl/tls.h:
        Include libc-pointer-arith.h instead of libc-internal.h.

	* elf/get-dynamic-info.h, sysdeps/nacl/dl-map-segments.h
	* sysdeps/x86_64/atomic-machine.h:
        Add multiple include guard.
2017-03-01 20:33:46 -05:00
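The helpers gathered into libc-pointer-arith.h are all small power-of-two rounding macros, roughly of the following shape (a sketch consistent with how they are used, not necessarily the verbatim definitions):

  #include <stdint.h>

  /* Round down/up to 'size', which must be a power of two. */
  #define ALIGN_DOWN(base, size)  ((base) & -((__typeof__ (base)) (size)))
  #define ALIGN_UP(base, size)    ALIGN_DOWN ((base) + (size) - 1, (size))

  /* The same operations on pointers, via a round trip through uintptr_t. */
  #define PTR_ALIGN_DOWN(base, size) \
    ((__typeof__ (base)) ALIGN_DOWN ((uintptr_t) (base), (size)))
  #define PTR_ALIGN_UP(base, size) \
    ((__typeof__ (base)) ALIGN_UP ((uintptr_t) (base), (size)))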
Joseph Myers
bfff8b1bec Update copyright dates with scripts/update-copyrights. 2017-01-01 00:14:16 +00:00
Florian Weimer
ae9166f2b8 malloc: Update comments about chunk layout 2016-10-28 22:36:58 +02:00
Florian Weimer
681421f3ca sysmalloc: Initialize previous size field of mmaped chunks
With different encodings of the header, the previous zero initialization
may be insufficient and produce an invalid encoding.
2016-10-28 16:49:04 +02:00
Florian Weimer
e9c4fe93b3 malloc: Use accessors for chunk metadata access
This change allows us to change the encoding of these struct members
in a centralized fashion.
2016-10-28 16:45:45 +02:00
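Concretely, direct reads of the size words give way to accessor helpers, so a later change to the on-heap encoding only touches the accessors. A hedged sketch of the pattern, with a deliberately reduced chunk header:

  #include <stddef.h>

  /* Reduced chunk header for illustration; the real malloc_chunk differs. */
  struct chunk
  {
    size_t prev_size;        /* size of the previous chunk, if free */
    size_t size_and_flags;   /* chunk size plus low-order flag bits */
  };

  #define SIZE_BITS ((size_t) 0x7)   /* PREV_INUSE and friends live here */

  static inline size_t chunksize (const struct chunk *p)
  { return p->size_and_flags & ~SIZE_BITS; }

  static inline size_t prev_size (const struct chunk *p)
  { return p->prev_size; }

  static inline void set_prev_size (struct chunk *p, size_t sz)
  { p->prev_size = sz; }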
Siddhesh Poyarekar
be7991c070 Static inline functions for mallopt helpers
Add mallopt helper functions, one per mallopt parameter, so that they
can be called consistently in other areas, such as when setting tunables.

	* malloc/malloc.c (do_set_mallopt_check): New function.
	(do_set_mmap_threshold): Likewise.
	(do_set_mmaps_max): Likewise.
	(do_set_top_pad): Likewise.
	(do_set_perturb_byte): Likewise.
	(do_set_trim_threshold): Likewise.
	(do_set_arena_max): Likewise.
	(do_set_arena_test): Likewise.
	(__libc_mallopt): Use them.
2016-10-27 08:34:55 +05:30
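The shape of the change is one static inline helper per parameter, with __libc_mallopt becoming a thin dispatcher; a sketch with an illustrative two-field parameter block (the real struct malloc_par has many more members):

  #include <stddef.h>

  static struct { size_t trim_threshold; size_t mmap_threshold; } mp_;

  static inline int do_set_trim_threshold (size_t value)
  { mp_.trim_threshold = value; return 1; }

  static inline int do_set_mmap_threshold (size_t value)
  { mp_.mmap_threshold = value; return 1; }

  int __libc_mallopt (int param, int value)
  {
    switch (param)
      {
      case -1: /* M_TRIM_THRESHOLD */
        return do_set_trim_threshold (value);
      case -3: /* M_MMAP_THRESHOLD */
        return do_set_mmap_threshold (value);
      default:
        return 0;
      }
  }

The tunables machinery can then call the do_set_* helpers directly instead of going through mallopt; getting the dispatch right matters, and the M_ARENA_MAX fix in 622222846a above corrects exactly such a mix-up.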
Florian Weimer
e863cce57b malloc: Remove malloc_get_state, malloc_set_state [BZ #19473]
After the removal of __malloc_initialize_hook, newly compiled
Emacs binaries are no longer able to use these interfaces.
malloc_get_state is only used during the Emacs build process,
so we provide a stub implementation only.  Existing Emacs binaries
will not call this stub function, but still reference the symbol.

The rewritten tst-mallocstate test constructs a dumped heap
which should approximates what existing Emacs binaries pass
which should approximate what existing Emacs binaries pass
to glibc malloc.
2016-10-26 13:28:28 +02:00
Siddhesh Poyarekar
68fc2ccc1a Remove redundant definitions of M_ARENA_* macros
The M_ARENA_MAX and M_ARENA_TEST macros are defined in malloc.c as
well as malloc.h, and the former is unnecessary.  This patch removes
the duplicate.  Tested on x86_64 to verify that the generated code
remains unchanged, barring changed line numbers passed to __malloc_assert.

	* malloc/malloc.c (M_ARENA_TEST, M_ARENA_MAX): Remove.
2016-10-26 15:07:34 +05:30
Siddhesh Poyarekar
c1234e60f9 Document the M_ARENA_* mallopt parameters
The M_ARENA_* mallopt parameters are in wide use in production to
control the number of arenas that a long lived process creates and
hence there is no point in stating that this interface is non-public.
Document this interface and remove the obsolete comment.

	* manual/memory.texi (M_ARENA_TEST): Add documentation.
	(M_ARENA_MAX): Likewise.
	* malloc/malloc.c: Remove obsolete comment.
2016-10-26 15:06:21 +05:30
Florian Weimer
cbb47fa1c6 malloc: Manual part of conversion to __libc_lock
This removes the old mutex_t-related definitions from malloc-machine.h,
too.
2016-09-21 16:28:08 +02:00
Florian Weimer
4bf5f2224b malloc: Automated part of conversion to __libc_lock 2016-09-06 12:49:54 +02:00
Florian Weimer
5bc17330eb elf: dl-minimal malloc needs to respect fundamental alignment
The dynamic linker currently uses __libc_memalign for TLS-related
allocations.  The goal is to switch to malloc instead.  If the minimal
malloc follows the ABI fundamental alignment, we can assume that malloc
provides this alignment, and thus skip explicit alignment in a few
cases as an optimization.

It was requested on libc-alpha that MALLOC_ALIGNMENT should be used,
although this results in wasted space if MALLOC_ALIGNMENT is larger
than the fundamental alignment.  (The dynamic linker cannot assume
that the non-minimal malloc will provide an alignment of
MALLOC_ALIGNMENT; the ABI provides _Alignof (max_align_t) only.)
2016-08-03 16:11:01 +02:00
Florian Weimer
92e1ab0eb5 Revert __malloc_initialize_hook symbol poisoning
It turns out the Emacs-internal malloc implementation uses
__malloc_* symbols.  If glibc poisons them in <stdc-predef.h>,
Emacs will no longer compile.
2016-06-20 11:11:29 +02:00
Florian Weimer
073f82140c malloc_usable_size: Use correct size for dumped fake mapped chunks
The adjustment for the size computation in commit
1e8a8875d6 is needed in
malloc_usable_size, too.
2016-06-11 12:09:19 +02:00
Florian Weimer
2ba3cfa160 malloc: Remove __malloc_initialize_hook from the API [BZ #19564]
__malloc_initialize_hook is interposed by application code, so
the usual approach to define a compatibility symbol does not work.
This commit adds a new mechanism based on #pragma GCC poison in
<stdc-predef.h>.
2016-06-10 10:46:05 +02:00
Florian Weimer
1e8a8875d6 malloc: Correct size computation in realloc for dumped fake mmapped chunks
For regular mmapped chunks there are two size fields (hence a reduction
by 2 * SIZE_SZ bytes), but for fake chunks, we only have one size field,
so we need to subtract SIZE_SZ bytes.

This was initially reported as Emacs bug 23726.
2016-06-08 20:50:21 +02:00
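Expressed as arithmetic (with SIZE_SZ being sizeof (size_t)), the difference is one header word; a tiny sketch with a hypothetical predicate name standing in for however the allocator recognizes a dumped fake chunk:

  #include <stddef.h>

  #define SIZE_SZ (sizeof (size_t))

  static size_t usable_size (size_t chunk_size, int is_dumped_fake)
  {
    return is_dumped_fake
           ? chunk_size - SIZE_SZ        /* fake chunks: one size field */
           : chunk_size - 2 * SIZE_SZ;   /* regular mmapped chunks: two */
  }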
Florian Weimer
dea39b13e2 malloc: Correct malloc alignment on 32-bit architectures [BZ #6527]
After the heap rewriting added in commit
4cf6c72fd2 (malloc: Rewrite dumped heap
for compatibility in __malloc_set_state), we can change malloc alignment
for new allocations because the alignment of old allocations no longer
matters.

We need to increase the malloc state version number, so that binaries
containing dumped heaps of the new layout will not try to run on
previous versions of glibc, resulting in obscure crashes.

This commit addresses a failure of tst-malloc-thread-fail on the
affected architectures (32-bit ppc and mips) because the test checks
pointer alignment.
2016-05-24 08:05:15 +02:00
Florian Weimer
4cf6c72fd2 malloc: Rewrite dumped heap for compatibility in __malloc_set_state
This will allow us to change many aspects of the malloc implementation
while preserving compatibility with existing Emacs binaries.

As a result, existing Emacs binaries will have a larger RSS, and Emacs
needs a few more milliseconds to start.  This overhead is specific
to Emacs (and will go away once Emacs switches to its internal malloc).

The new checks to make free and realloc compatible with the dumped heap
are confined to the mmap paths, which are already quite slow due to the
munmap overhead.

This commit weakens some security checks, but only for heap pointers
in the dumped main arena.  By default, this area is empty, so those
checks are as effective as before.
2016-05-13 14:16:39 +02:00
Florian Weimer
8a727af925 malloc: Remove malloc hooks from fork handler
The fork handler now runs so late that there is no risk anymore that
other fork handlers in the same thread use malloc, so it is no
longer necessary to install malloc hooks which made a subset
of malloc functionality available to the thread that called fork.
2016-04-14 09:18:30 +02:00
Florian Weimer
29d794863c malloc: Run fork handler as late as possible [BZ #19431]
Previously, a thread M invoking fork would acquire locks in this order:

  (M1) malloc arena locks (in the registered fork handler)
  (M2) libio list lock

A thread F invoking flush (NULL) would acquire locks in this order:

  (F1) libio list lock
  (F2) individual _IO_FILE locks

A thread G running getdelim would use this order:

  (G1) _IO_FILE lock
  (G2) malloc arena lock

After executing (M1), (F1), (G1), none of the threads can make progress.

This commit changes the fork lock order to:

  (M'1) libio list lock
  (M'2) malloc arena locks

It explicitly encodes the lock order in the implementations of fork,
and does not rely on the registration order, thus avoiding the deadlock.
2016-04-14 09:17:02 +02:00
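The fix is a textbook deadlock-avoidance change: every path that needs both locks must take them in the same global order. A toy illustration with pthread mutexes standing in for the libio list lock and an arena lock (not glibc's actual fork code):

  #include <pthread.h>

  static pthread_mutex_t list_lock  = PTHREAD_MUTEX_INITIALIZER;
  static pthread_mutex_t arena_lock = PTHREAD_MUTEX_INITIALIZER;

  /* fork's prepare step now encodes the order explicitly: libio list lock
     first (M'1), arena locks second (M'2), consistent with the F and G
     orderings above, so no cycle can form. */
  static void fork_prepare (void)
  {
    pthread_mutex_lock (&list_lock);
    pthread_mutex_lock (&arena_lock);
  }

  /* Locks are released in reverse order after the fork. */
  static void fork_parent (void)
  {
    pthread_mutex_unlock (&arena_lock);
    pthread_mutex_unlock (&list_lock);
  }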
Tulio Magno Quites Machado Filho
b43f552a8a Fix type of parameter passed by malloc_consolidate
atomic_exchange_acq() expected a pointer, but was receiving an integer.
2016-03-11 18:09:40 -03:00
Florian Weimer
59eda029a8 malloc: Remove NO_THREADS
No functional change.  It was not possible to build without
threading support before.
2016-02-19 17:07:45 +01:00
Florian Weimer
ca135f824b malloc: Remove max_total_mem member from struct malloc_par
Also note that sumblks in struct mallinfo is always 0.
No functional change.
2016-02-19 17:07:04 +01:00
Florian Weimer
00d4e2ea35 malloc: Remove arena_mem variable
The computed value is never used.  The accesses were data races.
2016-02-19 17:06:33 +01:00
Joseph Myers
f7a9f785e5 Update copyright dates with scripts/update-copyrights. 2016-01-04 16:05:18 +00:00
Florian Weimer
90c400bd49 malloc: Fix list_lock/arena lock deadlock [BZ #19182]
* malloc/arena.c (list_lock): Document lock ordering requirements.
	(free_list_lock): New lock.
	(ptmalloc_lock_all): Comment on free_list_lock.
	(ptmalloc_unlock_all2): Reinitialize free_list_lock.
	(detach_arena): Update comment.  free_list_lock is now needed.
	(_int_new_arena): Use free_list_lock around detach_arena call.
	Acquire arena lock after list_lock.  Add comment, including FIXME
	about incorrect synchronization.
	(get_free_list): Switch to free_list_lock.
	(reused_arena): Acquire free_list_lock around detach_arena call
	and attached threads counter update.  Add two FIXMEs about
	incorrect synchronization.
	(arena_thread_freeres): Switch to free_list_lock.
	* malloc/malloc.c (struct malloc_state): Update comments to
	mention free_list_lock.
2015-12-21 16:42:46 +01:00
Florian Weimer
400e12265d Replace MUTEX_INITIALIZER with _LIBC_LOCK_INITIALIZER in generic code
* sysdeps/mach/hurd/libc-lock.h (_LIBC_LOCK_INITIALIZER): Define.
	(__libc_lock_define_initialized): Use it.
	* sysdeps/nptl/libc-lockP.h (_LIBC_LOCK_INITIALIZER): Define.
	* malloc/arena.c (list_lock): Use _LIBC_LOCK_INITIALIZER.
	* malloc/malloc.c (main_arena): Likewise.
	* sysdeps/generic/malloc-machine.h (MUTEX_INITIALIZER): Remove.
	* sysdeps/nptl/malloc-machine.h (MUTEX_INITIALIZER): Remove.
2015-11-24 16:37:15 +01:00
David Kastrup
8ba14398e6 Don't macro-expand failed assertion expression [BZ #18604]
[BZ #18604]
	* assert/assert.h (assert): Don't macro-expand failed assertion
	expression in error message.
	* malloc/malloc.c (assert): Likewise.
2015-11-03 23:26:15 +01:00
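The underlying preprocessor rule is that a macro argument is expanded before substitution unless it is an operand of # or ##, so stringizing through a helper macro reports the expanded expression. A small self-contained illustration of the behavior the fix relies on (not glibc's actual assert definition):

  #include <stdio.h>

  static void report (const char *msg)
  { printf ("assertion failed: %s\n", msg); }

  #define STRINGIFY(x)   #x
  /* 'e' is not an operand of # here, so it is macro-expanded first. */
  #define OLD_ASSERT(e)  ((e) ? (void) 0 : report (STRINGIFY (e)))
  /* Using # directly keeps the expression exactly as the caller wrote it. */
  #define NEW_ASSERT(e)  ((e) ? (void) 0 : report (#e))

  #define FOO (1 == 2)

  int main (void)
  {
    OLD_ASSERT (FOO);   /* prints: assertion failed: (1 == 2) */
    NEW_ASSERT (FOO);   /* prints: assertion failed: FOO */
    return 0;
  }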
Florian Weimer
a62719ba90 malloc: Prevent arena free_list from turning cyclic [BZ #19048]
[BZ #19048]
	* malloc/malloc.c (struct malloc_state): Update comment.  Add
	attached_threads member.
	(main_arena): Initialize attached_threads.
	* malloc/arena.c (list_lock): Update comment.
	(ptmalloc_lock_all, ptmalloc_unlock_all): Likewise.
	(ptmalloc_unlock_all2): Reinitialize arena reference counts.
	(detach_arena): New function.
	(_int_new_arena): Initialize arena reference count and detach
	replaced arena.
	(get_free_list, reused_arena): Update reference count and detach
	replaced arena.
	(arena_thread_freeres): Update arena reference count and only put
	unreferenced arenas on the free list.
2015-10-28 21:29:23 +01:00
Joseph Myers
9dd346ff43 Convert 113 more function definitions to prototype style (files with assertions).
This mostly automatically-generated patch converts 113 function
definitions in glibc from old-style K&R to prototype-style.  Following
my other recent such patches, this one deals with the case of function
definitions in files that either contain assertions or where grep
suggested they might contain assertions - and thus where it isn't
possible to use a simple object code comparison as a sanity check on
the correctness of the patch, because line numbers are changed.

A few such automatically-generated changes needed to be supplemented
by manual changes for the result to compile.  openat64 had a prototype
declaration with "..." but an old-style definition in
sysdeps/unix/sysv/linux/dl-openat64.c, and "..." needed adding to the
generated prototype in the definition (I've filed
<https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68024> for diagnosing
such cases in GCC; the old state was undefined behavior not requiring
a diagnostic, but one seems a good idea).  In addition, as Florian has
noted regparm attribute mismatches between declaration and definition
are only diagnosed for prototype definitions, and five functions
needed internal_function added to their definitions (in the case of
__pthread_mutex_cond_lock, via the macro definition of
__pthread_mutex_lock) to compile on i386.

After this patch is in, remaining old-style definitions are probably
most readily fixed manually before we can turn on
-Wold-style-definition for all builds.

Tested for x86_64 and x86 (testsuite).

	* crypt/md5-crypt.c (__md5_crypt_r): Convert to prototype-style
	function definition.
	* crypt/sha256-crypt.c (__sha256_crypt_r): Likewise.
	* crypt/sha512-crypt.c (__sha512_crypt_r): Likewise.
	* debug/backtracesyms.c (__backtrace_symbols): Likewise.
	* elf/dl-minimal.c (_itoa): Likewise.
	* hurd/hurdmalloc.c (malloc): Likewise.
	(free): Likewise.
	(realloc): Likewise.
	* inet/inet6_option.c (inet6_option_space): Likewise.
	(inet6_option_init): Likewise.
	(inet6_option_append): Likewise.
	(inet6_option_alloc): Likewise.
	(inet6_option_next): Likewise.
	(inet6_option_find): Likewise.
	* io/ftw.c (FTW_NAME): Likewise.
	(NFTW_NAME): Likewise.
	(NFTW_NEW_NAME): Likewise.
	(NFTW_OLD_NAME): Likewise.
	* libio/iofwide.c (_IO_fwide): Likewise.
	* libio/strops.c (_IO_str_init_static_internal): Likewise.
	(_IO_str_init_static): Likewise.
	(_IO_str_init_readonly): Likewise.
	(_IO_str_overflow): Likewise.
	(_IO_str_underflow): Likewise.
	(_IO_str_count): Likewise.
	(_IO_str_seekoff): Likewise.
	(_IO_str_pbackfail): Likewise.
	(_IO_str_finish): Likewise.
	* libio/wstrops.c (_IO_wstr_init_static): Likewise.
	(_IO_wstr_overflow): Likewise.
	(_IO_wstr_underflow): Likewise.
	(_IO_wstr_count): Likewise.
	(_IO_wstr_seekoff): Likewise.
	(_IO_wstr_pbackfail): Likewise.
	(_IO_wstr_finish): Likewise.
	* locale/programs/localedef.c (normalize_codeset): Likewise.
	* locale/programs/locarchive.c (add_locale_to_archive): Likewise.
	(add_locales_to_archive): Likewise.
	(delete_locales_from_archive): Likewise.
	* malloc/malloc.c (__libc_mallinfo): Likewise.
	* math/gen-auto-libm-tests.c (init_fp_formats): Likewise.
	* misc/tsearch.c (__tfind): Likewise.
	* nptl/pthread_attr_destroy.c (__pthread_attr_destroy): Likewise.
	* nptl/pthread_attr_getdetachstate.c
	(__pthread_attr_getdetachstate): Likewise.
	* nptl/pthread_attr_getguardsize.c (pthread_attr_getguardsize):
	Likewise.
	* nptl/pthread_attr_getinheritsched.c
	(__pthread_attr_getinheritsched): Likewise.
	* nptl/pthread_attr_getschedparam.c
	(__pthread_attr_getschedparam): Likewise.
	* nptl/pthread_attr_getschedpolicy.c
	(__pthread_attr_getschedpolicy): Likewise.
	* nptl/pthread_attr_getscope.c (__pthread_attr_getscope):
	Likewise.
	* nptl/pthread_attr_getstack.c (__pthread_attr_getstack):
	Likewise.
	* nptl/pthread_attr_getstackaddr.c (__pthread_attr_getstackaddr):
	Likewise.
	* nptl/pthread_attr_getstacksize.c (__pthread_attr_getstacksize):
	Likewise.
	* nptl/pthread_attr_init.c (__pthread_attr_init_2_1): Likewise.
	(__pthread_attr_init_2_0): Likewise.
	* nptl/pthread_attr_setdetachstate.c
	(__pthread_attr_setdetachstate): Likewise.
	* nptl/pthread_attr_setguardsize.c (pthread_attr_setguardsize):
	Likewise.
	* nptl/pthread_attr_setinheritsched.c
	(__pthread_attr_setinheritsched): Likewise.
	* nptl/pthread_attr_setschedparam.c
	(__pthread_attr_setschedparam): Likewise.
	* nptl/pthread_attr_setschedpolicy.c
	(__pthread_attr_setschedpolicy): Likewise.
	* nptl/pthread_attr_setscope.c (__pthread_attr_setscope):
	Likewise.
	* nptl/pthread_attr_setstack.c (__pthread_attr_setstack):
	Likewise.
	* nptl/pthread_attr_setstackaddr.c (__pthread_attr_setstackaddr):
	Likewise.
	* nptl/pthread_attr_setstacksize.c (__pthread_attr_setstacksize):
	Likewise.
	* nptl/pthread_condattr_setclock.c (pthread_condattr_setclock):
	Likewise.
	* nptl/pthread_create.c (__find_in_stack_list): Likewise.
	* nptl/pthread_getattr_np.c (pthread_getattr_np): Likewise.
	* nptl/pthread_mutex_cond_lock.c (__pthread_mutex_lock): Define to
	use internal_function.
	* nptl/pthread_mutex_init.c (__pthread_mutex_init): Convert to
	prototype-style function definition.
	* nptl/pthread_mutex_lock.c (__pthread_mutex_lock): Likewise.
	(__pthread_mutex_cond_lock_adjust): Likewise.  Use
	internal_function.
	* nptl/pthread_mutex_timedlock.c (pthread_mutex_timedlock):
	Convert to prototype-style function definition.
	* nptl/pthread_mutex_trylock.c (__pthread_mutex_trylock):
	Likewise.
	* nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_usercnt):
	Likewise.
	(__pthread_mutex_unlock): Likewise.
	* nptl_db/td_ta_clear_event.c (td_ta_clear_event): Likewise.
	* nptl_db/td_ta_set_event.c (td_ta_set_event): Likewise.
	* nptl_db/td_thr_clear_event.c (td_thr_clear_event): Likewise.
	* nptl_db/td_thr_event_enable.c (td_thr_event_enable): Likewise.
	* nptl_db/td_thr_set_event.c (td_thr_set_event): Likewise.
	* nss/makedb.c (process_input): Likewise.
	* posix/fnmatch.c (__strchrnul): Likewise.
	(__wcschrnul): Likewise.
	(fnmatch): Likewise.
	* posix/fnmatch_loop.c (FCT): Likewise.
	* posix/glob.c (globfree): Likewise.
	(__glob_pattern_type): Likewise.
	(__glob_pattern_p): Likewise.
	* posix/regcomp.c (re_compile_pattern): Likewise.
	(re_set_syntax): Likewise.
	(re_compile_fastmap): Likewise.
	(regcomp): Likewise.
	(regerror): Likewise.
	(regfree): Likewise.
	* posix/regexec.c (regexec): Likewise.
	(re_match): Likewise.
	(re_search): Likewise.
	(re_match_2): Likewise.
	(re_search_2): Likewise.
	(re_search_stub): Likewise.  Use internal_function
	(re_copy_regs): Likewise.
	(re_set_registers): Convert to prototype-style function
	definition.
	(prune_impossible_nodes): Likewise.  Use internal_function.
	* resolv/inet_net_pton.c (inet_net_pton): Convert to
	prototype-style function definition.
	(inet_net_pton_ipv4): Likewise.
	* stdlib/strtod_l.c (____STRTOF_INTERNAL): Likewise.
	* sysdeps/pthread/aio_cancel.c (aio_cancel): Likewise.
	* sysdeps/pthread/aio_suspend.c (aio_suspend): Likewise.
	* sysdeps/pthread/timer_delete.c (timer_delete): Likewise.
	* sysdeps/unix/sysv/linux/dl-openat64.c (openat64): Likewise.
	Make variadic.
	* time/strptime_l.c (localtime_r): Convert to prototype-style
	function definition.
	* wcsmbs/mbsnrtowcs.c (__mbsnrtowcs): Likewise.
	* wcsmbs/mbsrtowcs_l.c (__mbsrtowcs_l): Likewise.
	* wcsmbs/wcsnrtombs.c (__wcsnrtombs): Likewise.
	* wcsmbs/wcsrtombs.c (__wcsrtombs): Likewise.
2015-10-20 11:54:09 +00:00
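For readers unfamiliar with the older syntax, the mechanical change looks like the following (a purely illustrative function, not taken from any of the files above):

  /* Old-style (K&R) definition: parameter types follow the declarator. */
  int
  count_positive (buf, len)
       const int *buf;
       int len;
  {
    int n = 0;
    for (int i = 0; i < len; i++)
      if (buf[i] > 0)
        n++;
    return n;
  }

  /* Equivalent prototype-style definition. */
  int
  count_positive_proto (const int *buf, int len)
  {
    int n = 0;
    for (int i = 0; i < len; i++)
      if (buf[i] > 0)
        n++;
    return n;
  }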
Carlos O'Donell
ca6be1655b Use ALIGN_DOWN in systrim.
While doing code review I converted another bespoke round down, and
corrected a comment.

The comment spoke about keeping at least one page allocated even
during systrim, which is not correct. The code does nothing to keep
a page allocated. The code does attempt to keep PAD padding as
documented in comments and MINSIZE as required by design.

Historically in 2002 when Ulrich wrote the code (fa8d436c) the math
was inlined into one statement which did reserve an extra page:
extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
There is no reason given for this extra page.

In 2010 Anton Blanchard's change (b9b42ee0) from division
to shifts removed the extra page by dropping the "+ (pagesz-1)", which
meant we might have attempted to return -0 via MORECORE. The fix by Will
Newton in 2014 added a check for extra being zero (51a7380b).

From first principles I see no reason why we should keep an extra
page of memory from being trimmed back to the OS. The only sensible
interface is to honour PAD padding as the function is documented,
with the caveat that MINSIZE is maintained for the top chunk.

That we've been using this code for 5+ years with no extra page
allocated is sufficient evidence that the comment should be changed
to match the code I'm touching.

Tested on x86_64 and i686, no regressions.
2015-09-14 15:32:47 -04:00
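A small worked example of the arithmetic being discussed, assuming a 4096-byte page and an illustrative MINSIZE; the second expression is simply a power-of-two round-down, not a claim about the exact current systrim code:

  #include <stdio.h>
  #include <stddef.h>

  #define MINSIZE 32                                  /* illustrative */
  #define ALIGN_DOWN(base, size) ((base) & -((size_t) (size)))

  int main (void)
  {
    size_t pagesz = 4096, pad = 0;
    size_t top_size = 3 * pagesz + MINSIZE;           /* three releasable pages */

    /* 2002 formula: the trailing "- 1" always holds one page back. */
    size_t extra_2002 =
      ((top_size - pad - MINSIZE + (pagesz - 1)) / pagesz - 1) * pagesz;

    /* Plain round-down of the releasable amount trims all three pages. */
    size_t extra_aligned = ALIGN_DOWN (top_size - pad - MINSIZE, pagesz);

    printf ("%zu vs %zu\n", extra_2002, extra_aligned);   /* 8192 vs 12288 */
    return 0;
  }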
Siddhesh Poyarekar
fff94fa224 Avoid deadlock in malloc on backtrace (BZ #16159)
When the malloc subsystem detects some kind of memory corruption,
depending on the configuration it prints the error, a backtrace, a
memory map and then aborts the process.  In this process, the
backtrace() call may result in a call to malloc, resulting in
various kinds of problematic behavior.

In one case, the malloc it calls may detect a corruption and call
backtrace again, and a stack overflow may result due to the infinite
recursion.  In another case, the malloc it calls may deadlock on an
arena lock with the malloc (or free, realloc, etc.) that detected the
corruption.  In yet another case, if the program is linked with
pthreads, backtrace may do a pthread_once initialization, which
deadlocks on itself.

In all these cases, the program exit is not as intended.  This is
avoidable by marking the arena that malloc detected a corruption on,
as unusable.  The following patch does that.  Features of this patch
are as follows:

- A flag is added to the mstate struct of the arena to indicate if the
  arena is corrupt.

- The flag is checked whenever malloc functions try to get a lock on
  an arena.  If the arena is unusable, a NULL is returned, causing the
  malloc to use mmap or try the next arena.

- malloc_printerr sets the corrupt flag on the arena when it detects a
  corruption.

- free does not concern itself with the flag at all.  It is not
  important since the backtrace workflow does not need free.  A free
  in a parallel thread may cause another corruption, but that's not
  new.

- The flag check and set are not atomic and may race.  This is fine
  since we don't care about contention during the flag check.  We want
  to make sure that the malloc call in the backtrace does not trip on
  itself and all that action happens in the same thread and not across
  threads.

I verified that the test case does not show any regressions due to
this patch.  I also ran the malloc benchmarks and found an
insignificant difference in timings (< 2%).

	* malloc/Makefile (tests): New test case tst-malloc-backtrace.
	* malloc/arena.c (arena_lock): Check if arena is corrupt.
	(reused_arena): Find a non-corrupt arena.
	(heap_trim): Pass arena to unlink.
	* malloc/hooks.c (malloc_check_get_size): Pass arena to
	malloc_printerr.
	(top_check): Likewise.
	(free_check): Likewise.
	(realloc_check): Likewise.
	* malloc/malloc.c (malloc_printerr): Add arena argument.
	(unlink): Likewise.
	(munmap_chunk): Adjust.
	(ARENA_CORRUPTION_BIT): New macro.
	(arena_is_corrupt): Likewise.
	(set_arena_corrupt): Likewise.
	(sysmalloc): Use mmap if there are no usable arenas.
	(_int_malloc): Likewise.
	(__libc_malloc): Don't fail if arena_get returns NULL.
	(_mid_memalign): Likewise.
	(__libc_calloc): Likewise.
	(__libc_realloc): Adjust for additional argument to
	malloc_printerr.
	(_int_free): Likewise.
	(malloc_consolidate): Likewise.
	(_int_realloc): Likewise.
	(_int_memalign): Don't touch corrupt arenas.
	* malloc/tst-malloc-backtrace.c: New test case.
2015-05-19 06:40:38 +05:30
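The ChangeLog names the new pieces; a reduced sketch of how such a flag bit is typically checked and set (the struct is cut down to what the example needs, and the bit value is illustrative):

  /* Reduced arena state; the real struct malloc_state is much larger. */
  struct malloc_state
  {
    int flags;
    /* ... mutex, bins, top chunk, ... */
  };

  #define ARENA_CORRUPTION_BIT  4
  #define arena_is_corrupt(A)   (((A)->flags & ARENA_CORRUPTION_BIT) != 0)
  #define set_arena_corrupt(A)  ((A)->flags |= ARENA_CORRUPTION_BIT)

  /* Allocation paths refuse to lock a corrupt arena; returning NULL here
     makes the caller fall back to mmap or another arena, so the malloc
     calls made while printing the error and backtrace never re-enter the
     damaged arena. */
  static struct malloc_state *arena_get_if_usable (struct malloc_state *a)
  {
    return arena_is_corrupt (a) ? 0 : a;
  }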
Siddhesh Poyarekar
94c5a52a84 Consolidate arena_lookup and arena_lock into a single arena_get
This seems to have been left behind as an artifact of some old changes
and can now be merged.  Verified that the only generated code change
on x86_64 is that of line numbers in asserts, like so:

@@ -27253,7 +27253,7 @@ Disassembly of section .text:
   416f09:      48 89 42 20             mov    %rax,0x20(%rdx)
   416f0d:      e9 7e f6 ff ff          jmpq   416590 <_int_free+0x230>
   416f12:      b9 3f 9f 4a 00          mov    $0x4a9f3f,%ecx
-  416f17:      ba d5 0f 00 00          mov    $0xfd5,%edx
+  416f17:      ba d6 0f 00 00          mov    $0xfd6,%edx
   416f1c:      be a8 9b 4a 00          mov    $0x4a9ba8,%esi
   416f21:      bf 6a 9c 4a 00          mov    $0x4a9c6a,%edi
   416f26:      e8 45 e8 ff ff          callq  415770 <__malloc_assert>
2015-02-18 11:06:06 +05:30
Carlos O'Donell
8a35c3fe12 Use alignment macros, pagesize and powerof2.
We are replacing all of the bespoke alignment code with
ALIGN_UP, ALIGN_DOWN, PTR_ALIGN_UP, and PTR_ALIGN_DOWN.
This cleans up malloc/malloc.c, malloc/arena.c, and
elf/dl-reloc.c. It also makes all the code consistently
use pagesize and powerof2 as required.

Code size is reduced with the removal of precomputed
pagemask, and use of pagesize instead. No measurable
difference in performance.

No regressions on x86_64.
2015-02-17 19:29:15 -05:00
Joseph Myers
b168057aaa Update copyright dates with scripts/update-copyrights. 2015-01-02 16:29:47 +00:00
Roland McGrath
af102d9529 Remove explicit inline on malloc perturb functions. 2014-12-17 10:41:28 -08:00
Steve Ellcey
fc56e97093 2014-12-11 Steve Ellcey <sellcey@imgtec.com>
* malloc/malloc.c: Fix powerof2 check.
2014-12-11 08:14:17 -08:00
Joseph Myers
c52ff39e8e Fix malloc_info namespace (bug 17570).
malloc_info is defined in the same file as malloc and free, but is not
an ISO C function, so should be a weak symbol.  This patch makes it
so.

Tested for x86_64 (testsuite, and that disassembly of installed shared
libraries is unchanged by the patch).

	[BZ #17570]
	* malloc/malloc.c (malloc_info): Rename to __malloc_info and
	define as weak alias of __malloc_info.
2014-11-12 22:31:38 +00:00
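The namespace fix follows the usual glibc pattern: the code lives under an implementation-reserved name and the public name becomes a weak alias, so a conforming program may define its own malloc_info without conflict. A sketch using the GCC/ELF alias attribute rather than glibc's weak_alias macro:

  #include <stdio.h>

  /* Implementation under the reserved name. */
  int __malloc_info (int options, FILE *fp)
  {
    /* ... emit the malloc statistics XML here ... */
    (void) options;
    (void) fp;
    return 0;
  }

  /* The public name is only a weak alias to it. */
  extern __typeof (__malloc_info) malloc_info
    __attribute__ ((weak, alias ("__malloc_info")));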
Florian Weimer
52ffbdf25a malloc: additional unlink hardening for non-small bins [BZ #17344]
Turn two asserts into a conditional call to malloc_printerr.  The
memory locations are accessed later anyway, so the performance
impact is minor.
2014-09-11 10:59:05 +02:00
Sean Anderson
bb2ce41656 malloc: fix comment typo 2014-08-12 05:24:29 -04:00
Will Newton
51a7380b89 malloc/malloc.c: Avoid calling sbrk unnecessarily with zero
Due to my bad review suggestion for the fix for BZ #15089, a check
that prevented sbrk from being called with a zero argument was removed
from systrim.  Add the check back to avoid this useless work.

ChangeLog:

2014-06-19  Will Newton  <will.newton@linaro.org>

	* malloc/malloc.c (systrim): If extra is zero then return
	early.
2014-06-19 14:34:08 +01:00
Siddhesh Poyarekar
9fa76613d0 Fix format specifier for n_mmaps 2014-06-02 23:38:32 +05:30
Siddhesh Poyarekar
62a5881678 Fix formatting in malloc_info 2014-05-30 22:44:45 +05:30
Siddhesh Poyarekar
4d653a59ff Add mmap usage in malloc_info output
The current malloc_info XML output only has information about
allocations on the heap.  Add the number of mappings and the total
mmapped size to the output to complete the picture.
2014-05-30 22:43:52 +05:30
Ondřej Bílka
987c02692a Remove mi_arena nested function. 2014-05-30 13:25:43 +02:00