Commit Graph

392 Commits

Eyal Itkin
49c3c37651 Fix alignment bug in Safe-Linking
Alignment checks should be performed on the user's buffer and NOT
on the mchunkptr as was done before. This caused bugs in 32-bit
versions, because there 2*sizeof(size_t) != MALLOC_ALIGNMENT.

As the tcache works on users' buffers, it uses the aligned_OK()
check, while the rest work on the mchunkptr and therefore check
using misaligned_chunk().
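
A minimal sketch of the two predicates (shapes assumed from the commit
message, not copied verbatim from the tree):

  /* User-buffer check, used by the tcache paths.  */
  #define aligned_OK(m)  (((unsigned long) (m) & MALLOC_ALIGN_MASK) == 0)

  /* Chunk-pointer check, used by the fastbin paths: translate the
     mchunkptr to the user buffer first, unless the two alignments
     coincide (i.e. unless MALLOC_ALIGNMENT == 2 * SIZE_SZ).  */
  #define misaligned_chunk(p)                                            \
    ((uintptr_t) (MALLOC_ALIGNMENT == 2 * SIZE_SZ ? (p) : chunk2mem (p)) \
     & MALLOC_ALIGN_MASK)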

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2020-03-31 21:48:54 -04:00
Eyal Itkin
768358b6a8 Typo fixes and CR cleanup in Safe-Linking
Removed unneeded '\' chars from end of lines and fixed some
indentation issues that were introduced in the original
Safe-Linking patch.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2020-03-31 21:48:54 -04:00
Eyal Itkin
a1a486d70e Add Safe-Linking to fastbins and tcache
Safe-Linking is a security mechanism that protects single-linked
lists (such as the fastbin and tcache) from being tampered with by attackers.
The mechanism makes use of randomness from ASLR (mmap_base), and when
combined with chunk alignment integrity checks, it protects the "next"
pointers from being hijacked by an attacker.

While Safe-Unlinking protects double-linked lists (such as the small
bins), there wasn't any similar protection for attacks against
single-linked lists. This solution protects against 3 common attacks:
  * Partial pointer override: modifies the lower bytes (Little Endian)
  * Full pointer override: hijacks the pointer to an attacker's location
  * Unaligned chunks: pointing the list to an unaligned address

The design assumes an attacker doesn't know where the heap is located,
and uses the ASLR randomness to "sign" the single-linked pointers. We
mark the pointer as P and the location in which it is stored as L, and
the calculation will be:
  * PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
  * *L = PROTECT(P)

This way, the random bits from the address L (which start at the
PAGE_SHIFT bit position) are merged with the LSBs of the stored
protected pointer. This protection layer prevents an attacker from
modifying the pointer into a controlled value.
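
A self-contained sketch of the masking (a PAGE_SHIFT of 12 is assumed,
as on 4K-page targets; macro names and demo values are illustrative,
not glibc's exact definitions):

  #include <stdint.h>
  #include <stdio.h>

  /* PROTECT(P) := (L >> PAGE_SHIFT) XOR (P).  XOR is its own inverse,
     so applying the same operation to a stored value reveals it.  */
  #define PROTECT_PTR(pos, ptr) \
    ((void *) ((((uintptr_t) (pos)) >> 12) ^ ((uintptr_t) (ptr))))
  #define REVEAL_PTR(pos, ptr) PROTECT_PTR (pos, ptr)

  int main (void)
  {
    void *next = (void *) 0x7f1234567890;  /* P: the "next" pointer */
    void **slot = &next;                   /* L: where it is stored */
    void *masked = PROTECT_PTR (slot, next);
    printf ("stored:   %p\n", masked);     /* garbled without knowing L */
    printf ("revealed: %p\n", REVEAL_PTR (slot, masked));
    return 0;
  }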

An additional check that the chunks are MALLOC_ALIGNed adds an
important layer:
  * Attackers can't point to illegal (unaligned) memory addresses
  * Attackers must guess correctly the alignment bits

On standard 32-bit Linux machines, an attack will fail outright 7
out of 8 times, and on 64-bit machines it will fail 15 out of 16
times.

This proposed patch was benchmarked and its effect on the overall
performance of the heap was negligible and couldn't be distinguished
from the normal variance between test runs on the vanilla version. A
similar protection was added to Chromium's version of TCMalloc
in 2012, and according to their documentation it had an overhead of
less than 2%.

Reviewed-by: DJ Delorie <dj@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2020-03-29 13:03:14 -04:00
Joseph Myers
d614a75396 Update copyright dates with scripts/update-copyrights. 2020-01-01 00:14:33 +00:00
DJ Delorie
16554464bc Correct range checking in mallopt/mxfast/tcache [BZ #25194]
do_set_tcache_max, do_set_mxfast:
Fix two instances of comparing "size_t < 0".
Both cases have an upper limit, so the "negative value" case
is already handled via unsigned overflow semantics.

do_set_tcache_max, do_set_tcache_count:
Fix return value on error.  Note: currently not used.

mallopt:
Pass the return value of the helper functions to the user.  Behavior
should only actually change for mxfast, where we restore the old
(pre-tunables) behavior.
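
A standalone illustration of the unsigned-comparison bug (not glibc
code):

  #include <stdio.h>
  #include <stddef.h>

  int main (void)
  {
    size_t value = (size_t) -1;  /* a "negative" request wraps to SIZE_MAX */
    if (value < 0)               /* always false for an unsigned type */
      puts ("never reached");
    /* The upper-bound test alone already rejects wrapped values,
       because they wrap to huge numbers: */
    size_t limit = 80;           /* illustrative upper limit */
    puts (value <= limit ? "accepted" : "rejected");  /* prints "rejected" */
    return 0;
  }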

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2019-12-05 16:46:37 -05:00
DJ Delorie
ff12e0fb91 Base max_fast on alignment, not width, of bins (Bug 24903)
set_max_fast sets the "impossibly small" value based on,
eventually, MALLOC_ALIGNMENT.  The comparison for the smallest
chunk used is based, eventually, on MIN_CHUNK_SIZE.  Note that i386
is the only platform where these are the same, so a smallest
chunk *would* be put in a no-fastbins fastbin.

This change calculates the "impossibly small" value
based on MIN_CHUNK_SIZE instead, so that we can know it will
always be impossibly small.
2019-10-30 22:59:11 -04:00
Paul Eggert
5a82c74822 Prefer https to http for gnu.org and fsf.org URLs
Also, change sources.redhat.com to sourceware.org.
This patch was automatically generated by running the following shell
script, which uses GNU sed, and which avoids modifying files imported
from upstream:

sed -ri '
  s,(http|ftp)(://(.*\.)?(gnu|fsf|sourceware)\.org($|[^.]|\.[^a-z])),https\2,g
  s,(http|ftp)(://(.*\.)?)sources\.redhat\.com($|[^.]|\.[^a-z]),https\2sourceware.org\4,g
' \
  $(find $(git ls-files) -prune -type f \
      ! -name '*.po' \
      ! -name 'ChangeLog*' \
      ! -path COPYING ! -path COPYING.LIB \
      ! -path manual/fdl-1.3.texi ! -path manual/lgpl-2.1.texi \
      ! -path manual/texinfo.tex ! -path scripts/config.guess \
      ! -path scripts/config.sub ! -path scripts/install-sh \
      ! -path scripts/mkinstalldirs ! -path scripts/move-if-change \
      ! -path INSTALL ! -path  locale/programs/charmap-kw.h \
      ! -path po/libc.pot ! -path sysdeps/gnu/errlist.c \
      ! '(' -name configure \
            -execdir test -f configure.ac -o -f configure.in ';' ')' \
      ! '(' -name preconfigure \
            -execdir test -f preconfigure.ac ';' ')' \
      -print)

and then by running 'make dist-prepare' to regenerate files built
from the altered files, and then executing the following to cleanup:

  chmod a+x sysdeps/unix/sysv/linux/riscv/configure
  # Omit irrelevant whitespace and comment-only changes,
  # perhaps from a slightly-different Autoconf version.
  git checkout -f \
    sysdeps/csky/configure \
    sysdeps/hppa/configure \
    sysdeps/riscv/configure \
    sysdeps/unix/sysv/linux/csky/configure
  # Omit changes that caused a pre-commit check to fail like this:
  # remote: *** error: sysdeps/powerpc/powerpc64/ppc-mcount.S: trailing lines
  git checkout -f \
    sysdeps/powerpc/powerpc64/ppc-mcount.S \
    sysdeps/unix/sysv/linux/s390/s390-64/syscall.S
  # Omit change that caused a pre-commit check to fail like this:
  # remote: *** error: sysdeps/sparc/sparc64/multiarch/memcpy-ultra3.S: last line does not end in newline
  git checkout -f sysdeps/sparc/sparc64/multiarch/memcpy-ultra3.S
2019-09-07 02:43:31 -07:00
DJ Delorie
c48d92b430 Add glibc.malloc.mxfast tunable
* elf/dl-tunables.list: Add glibc.malloc.mxfast.
* manual/tunables.texi: Document it.
* malloc/malloc.c (do_set_mxfast): New.
(__libc_mallopt): Call it.
* malloc/arena.c: Add mxfast tunable.
* malloc/tst-mxfast.c: New.
* malloc/Makefile: Add it.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2019-08-09 14:04:03 -04:00
Niklas Hambüchen
b6d2c4475d malloc: Fix missing accounting of top chunk in malloc_info [BZ #24026]
Fixes `<total type="rest" size="...">` incorrectly showing as 0 most
of the time.

The rest value being wrong is significant because to compute the
actual amount of memory handed out via malloc, the user must subtract
it from <system type="current" size="...">. That result being wrong
makes investigating memory fragmentation issues like
<https://bugzilla.redhat.com/show_bug.cgi?id=843478> close to
impossible.
2019-08-08 22:02:27 +02:00
Florian Weimer
b0f6679bcd malloc: Remove unwanted leading whitespace in malloc_info [BZ #24867]
It was introduced in commit 6c8dbf00f5
("Reformat malloc to gnu style.").

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2019-08-01 14:07:08 +02:00
Wilco Dijkstra
1f50f2ad85 Small tcache improvements
Change the tcache->counts[] entries to uint16_t - this removes
the limit set by char and allows a larger tcache.  Remove a few
redundant asserts.

bench-malloc-thread with 4 threads is ~15% faster on Cortex-A72.

Reviewed-by: DJ Delorie <dj@redhat.com>

	* malloc/malloc.c (MAX_TCACHE_COUNT): Increase to UINT16_MAX.
	(tcache_put): Remove redundant assert.
	(tcache_get): Remove redundant asserts.
	(__libc_malloc): Check tcache count is not zero.
	* manual/tunables.texi (glibc.malloc.tcache_count): Update maximum.
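
A sketch of the widened per-thread cache (the layout follows the
description above; the exact glibc struct may differ):

  typedef struct tcache_entry
  {
    struct tcache_entry *next;
  } tcache_entry;

  typedef struct tcache_perthread_struct
  {
    uint16_t counts[TCACHE_MAX_BINS];  /* was: char counts[TCACHE_MAX_BINS] */
    tcache_entry *entries[TCACHE_MAX_BINS];
  } tcache_perthread_struct;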
2019-05-17 18:16:20 +01:00
Wilco Dijkstra
5ad533e8e6 Fix tcache count maximum (BZ #24531)
The tcache counts[] array is a char, which has a very small range and thus
may overflow.  When setting the tcache_count tunable, there is no overflow check.
However the tunable must not be larger than the maximum value of the tcache
counts[] array, otherwise it can overflow when filling the tcache.

	[BZ #24531]
	* malloc/malloc.c (MAX_TCACHE_COUNT): New define.
	(do_set_tcache_count): Only update if count is small enough.
	* manual/tunables.texi (glibc.malloc.tcache_count): Document max value.
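
A sketch of the clamped setter (assumed form; values that cannot be
represented in the counts[] entries are silently ignored):

  static __always_inline int
  do_set_tcache_count (size_t value)
  {
    if (value <= MAX_TCACHE_COUNT)  /* must fit the counts[] entries */
      mp_.tcache_count = value;
    return 1;
  }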
2019-05-10 16:38:21 +01:00
Adhemerval Zanella
9bf8e29ca1 malloc: make malloc fail with requests larger than PTRDIFF_MAX (BZ#23741)
As discussed previously on libc-alpha [1], this patch follows up on that idea
and adds both the __attribute_alloc_size__ attribute on the malloc functions
(malloc, calloc, realloc, reallocarray, valloc, pvalloc, and memalign) and a
limit on the maximum requested allocation size of up to PTRDIFF_MAX (taking
into consideration internal padding and alignment).

This aligns glibc with the size limit GCC expects, as defined by the default
value of the -Walloc-size-larger-than warning, which warns for allocations
larger than PTRDIFF_MAX.  It also aligns with GCC's expectations regarding
libc and expected sizes, as described in PR#67999 [2] and in the ISO C11
issues previously discussed [3] on libc-alpha.

From the RFC thread [4] and previous discussion, the consensus seems to be
to limit such requested sizes only for the malloc functions, not for the
system allocation ones (mmap, sbrk, etc.).

The implementation changes checked_request2size to check both for overflow
and for a maximum object size up to PTRDIFF_MAX.  No additional checks are
done in sysmalloc, so it can still issue mmap with values larger than
PTRDIFF_MAX depending on the requested size.

The __attribute_alloc_size__ attribute applies only to functions that return
a pointer, which means it cannot be applied to posix_memalign (see remarks in
GCC PR#87683 [5]).  The runtime checks limiting the maximum requested
allocation size do, however, apply to posix_memalign.
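
A sketch of the reworked helper (the signature follows the ChangeLog
below; the body is an assumption):

  /* Pad and align REQ, failing if the result could exceed PTRDIFF_MAX.  */
  static inline bool
  checked_request2size (size_t req, size_t *sz)
  {
    if (__glibc_unlikely (req > PTRDIFF_MAX))
      return false;
    *sz = request2size (req);
    return true;
  }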

Checked on x86_64-linux-gnu and i686-linux-gnu.

[1] https://sourceware.org/ml/libc-alpha/2018-11/msg00223.html
[2] https://gcc.gnu.org/bugzilla//show_bug.cgi?id=67999
[3] https://sourceware.org/ml/libc-alpha/2011-12/msg00066.html
[4] https://sourceware.org/ml/libc-alpha/2018-11/msg00224.html
[5] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87683

	[BZ #23741]
	* malloc/hooks.c (malloc_check, realloc_check): Use
	__builtin_add_overflow on overflow check and adapt to
	checked_request2size change.
	* malloc/malloc.c (__libc_malloc, __libc_realloc, _mid_memalign,
	__libc_pvalloc, __libc_calloc, _int_memalign): Limit maximum
	allocation size to PTRDIFF_MAX.
	(REQUEST_OUT_OF_RANGE): Remove macro.
	(checked_request2size): Change to inline function and limit maximum
	requested size to PTRDIFF_MAX.
	(__libc_malloc, __libc_realloc, _int_malloc, _int_memalign): Limit
	maximum allocation size to PTRDIFF_MAX.
	(_mid_memalign): Use _int_memalign call for overflow check.
	(__libc_pvalloc): Use __builtin_add_overflow on overflow check.
	(__libc_calloc): Use __builtin_mul_overflow for overflow check and
	limit maximum requested size to PTRDIFF_MAX.
	* malloc/malloc.h (malloc, calloc, realloc, reallocarray, memalign,
	valloc, pvalloc): Add __attribute_alloc_size__.
	* stdlib/stdlib.h (malloc, realloc, reallocarray, valloc): Likewise.
	* malloc/tst-malloc-too-large.c (do_test): Add check for allocation
	larger than PTRDIFF_MAX.
	* malloc/tst-memalign.c (do_test): Disable -Walloc-size-larger-than=
	around tests of malloc with negative sizes.
	* malloc/tst-posix_memalign.c (do_test): Likewise.
	* malloc/tst-pvalloc.c (do_test): Likewise.
	* malloc/tst-valloc.c (do_test): Likewise.
	* malloc/tst-reallocarray.c (do_test): Replace call to reallocarray
	with resulting size allocation larger than PTRDIFF_MAX with
	reallocarray_nowarn.
	(reallocarray_nowarn): New function.
	* NEWS: Mention the malloc function semantic change.
2019-04-18 17:30:06 -03:00
Adam Maris
5b06f538c5 malloc: Check for large bin list corruption when inserting unsorted chunk
Fixes bug 24216.  This patch adds security checks for the bk and bk_nextsize
pointers of chunks in a large bin when inserting a chunk from the unsorted
bin.  It was possible to write the pointer to victim (the newly inserted
chunk) to arbitrary memory locations if the bk or bk_nextsize pointers of
the next large-bin chunk got corrupted.
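
A sketch of the added checks (forms and error strings assumed) at
large-bin insertion time:

  /* Before linking victim into the nextsize list: */
  if (__glibc_unlikely (fwd->bk_nextsize->fd_nextsize != fwd))
    malloc_printerr ("malloc(): largebin double linked list corrupted (nextsize)");

  /* Before linking victim between bck and fwd: */
  if (__glibc_unlikely (bck->fd != fwd))
    malloc_printerr ("malloc(): largebin double linked list corrupted (bk)");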
2019-03-14 16:51:16 -04:00
Joseph Myers
c2d8f0b704 Avoid "inline" after return type in function definitions.
One group of warnings seen with -Wextra is warnings for static or
inline not at the start of a declaration (-Wold-style-declaration).

This patch fixes various such cases for inline, ensuring it comes at
the start of the declaration (after any static).  A common case of the
fix is "static inline <type> __always_inline"; the definition of
__always_inline starts with __inline, so the natural change is to
"static __always_inline <type>".  Other cases of the warning may be
harder to fix (one pattern is a function definition that gets
rewritten to be static by an including file, "#define funcname static
wrapped_funcname" or similar), but it seems worth fixing these cases
with inline anyway.

Tested for x86_64.

	* elf/dl-load.h (_dl_postprocess_loadcmd): Use __always_inline
	before return type, without separate inline.
	* elf/dl-tunables.c (maybe_enable_malloc_check): Likewise.
	* elf/dl-tunables.h (tunable_is_name): Likewise.
	* malloc/malloc.c (do_set_trim_threshold): Likewise.
	(do_set_top_pad): Likewise.
	(do_set_mmap_threshold): Likewise.
	(do_set_mmaps_max): Likewise.
	(do_set_mallopt_check): Likewise.
	(do_set_perturb_byte): Likewise.
	(do_set_arena_test): Likewise.
	(do_set_arena_max): Likewise.
	(do_set_tcache_max): Likewise.
	(do_set_tcache_count): Likewise.
	(do_set_tcache_unsorted_limit): Likewise.
	* nis/nis_subr.c (count_dots): Likewise.
	* nptl/allocatestack.c (advise_stack_range): Likewise.
	* sysdeps/ieee754/dbl-64/s_sin.c (do_cos): Likewise.
	(do_sin): Likewise.
	(reduce_sincos): Likewise.
	(do_sincos): Likewise.
	* sysdeps/unix/sysv/linux/x86/elision-conf.c
	(do_set_elision_enable): Likewise.
	(TUNABLE_CALLBACK_FNDECL): Likewise.
2019-02-06 17:16:43 +00:00
Joseph Myers
77dc0d8643 Fix assertion in malloc.c:tcache_get.
One of the warnings that appears with -Wextra is "ordered comparison
of pointer with integer zero" in malloc.c:tcache_get, for the
assertion:

  assert (tcache->entries[tc_idx] > 0);

Indeed, a "> 0" comparison does not make sense for
tcache->entries[tc_idx], which is a pointer.  My guess is that
tcache->counts[tc_idx] is what's intended here, and this patch changes
the assertion accordingly.

Tested for x86_64.

	* malloc/malloc.c (tcache_get): Compare tcache->counts[tc_idx]
	with 0, not tcache->entries[tc_idx].
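
Before and after, per the ChangeLog above:

  assert (tcache->entries[tc_idx] > 0);  /* before: pointer ordered against 0 */
  assert (tcache->counts[tc_idx] > 0);   /* after: count compared with 0 */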
2019-02-04 23:46:58 +00:00
Florian Weimer
71effcea34 malloc: Revert fastbins to old-style atomics
Commit 6923f6db1e ("malloc: Use current
(C11-style) atomics for fastbin access") caused a substantial
performance regression on POWER and Aarch64, and the old atomics,
while hard to prove correct, seem to work in practice.
2019-01-18 22:38:32 +01:00
Joseph Myers
04277e02d7 Update copyright dates with scripts/update-copyrights.
* All files with FSF copyright notices: Update copyright dates
	using scripts/update-copyrights.
	* locale/programs/charmap-kw.h: Regenerated.
	* locale/programs/locfile-kw.h: Likewise.
2019-01-01 00:11:28 +00:00
Florian Weimer
b50dd3bc8c malloc: Always call memcpy in _int_realloc [BZ #24027]
This commit removes the custom memcpy implementation from _int_realloc
for small chunk sizes.  The ncopies variable has the wrong type, and
an integer wraparound could cause the existing code to copy too few
elements (leaving the new memory region mostly uninitialized).
Therefore, removing this code fixes bug 24027.
2018-12-31 22:09:37 +01:00
Istvan Kurucsai
c0e82f1173 malloc: Check the alignment of mmapped chunks before unmapping.
* malloc/malloc.c (munmap_chunk): Verify chunk alignment.
2018-12-21 00:15:28 -05:00
Istvan Kurucsai
ebe544bf6e malloc: Add more integrity checks to mremap_chunk.
* malloc/malloc.c (mremap_chunk): Additional checks.
2018-12-20 23:31:37 -05:00
Florian Weimer
affec03b71 malloc: tcache: Validate tc_idx before checking for double-frees [BZ #23907]
The previous check could read beyond the end of the tcache entry
array.  If the e->key == tcache cookie check happened to pass, this
would result in crashes.
2018-11-26 20:06:37 +01:00
DJ Delorie
bcdaad21d4 malloc: tcache double free check
* malloc/malloc.c (tcache_entry): Add key field.
(tcache_put): Set it.
(tcache_get): Likewise.
(_int_free): Check for double free in tcache.
* malloc/tst-tcfree1.c: New.
* malloc/tst-tcfree2.c: New.
* malloc/Makefile: Run the new tests.
* manual/probes.texi: Document memory_tcache_double_free probe.

* dlfcn/dlerror.c (check_free): Prevent double frees.
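
A sketch of the mechanism (shapes follow the ChangeLog; the exact code
may differ):

  typedef struct tcache_entry
  {
    struct tcache_entry *next;
    /* New field: marks a chunk as sitting in this thread's tcache.  */
    struct tcache_perthread_struct *key;
  } tcache_entry;

  /* In _int_free: a chunk whose key still points at our tcache was very
     likely freed once already; walk the bin to rule out a coincidence.  */
  if (__glibc_unlikely (e->key == tcache))
    {
      tcache_entry *tmp;
      for (tmp = tcache->entries[tc_idx]; tmp != NULL; tmp = tmp->next)
        if (tmp == e)
          malloc_printerr ("free(): double free detected in tcache 2");
    }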
2018-11-20 13:24:09 -05:00
Florian Weimer
6923f6db1e malloc: Use current (C11-style) atomics for fastbin access 2018-11-13 10:36:10 +01:00
Florian Weimer
1ecba1fafc malloc: Convert the unlink macro to the unlink_chunk function
This commit is in preparation for turning the macro into a proper
function.  The output arguments of the macro were in fact unused.

Also clean up uses of __builtin_expect.
2018-11-12 14:35:06 +01:00
Florian Weimer
35cfefd960 malloc: Add ChangeLog for accidentally committed change
Commit b90ddd08f6 ("malloc: Additional
checks for unsorted bin integrity I.") was committed without a
whitespace fix, so it is adjusted here as well.
2018-08-20 14:57:13 +02:00
Istvan Kurucsai
b90ddd08f6 malloc: Additional checks for unsorted bin integrity I.
On Thu, Jan 11, 2018 at 3:50 PM, Florian Weimer <fweimer@redhat.com> wrote:
> On 11/07/2017 04:27 PM, Istvan Kurucsai wrote:
>>
>> +          next = chunk_at_offset (victim, size);
>
>
> For new code, we prefer declarations with initializers.

Noted.

>> +          if (__glibc_unlikely (chunksize_nomask (victim) <= 2 * SIZE_SZ)
>> +              || __glibc_unlikely (chunksize_nomask (victim) >
>> av->system_mem))
>> +            malloc_printerr("malloc(): invalid size (unsorted)");
>> +          if (__glibc_unlikely (chunksize_nomask (next) < 2 * SIZE_SZ)
>> +              || __glibc_unlikely (chunksize_nomask (next) >
>> av->system_mem))
>> +            malloc_printerr("malloc(): invalid next size (unsorted)");
>> +          if (__glibc_unlikely ((prev_size (next) & ~(SIZE_BITS)) !=
>> size))
>> +            malloc_printerr("malloc(): mismatching next->prev_size
>> (unsorted)");
>
>
> I think this check is redundant because prev_size (next) and chunksize
> (victim) are loaded from the same memory location.

I'm fairly certain that it compares mchunk_size of victim against
mchunk_prev_size of the next chunk, i.e. the size of victim in its
header and footer.

>> +          if (__glibc_unlikely (bck->fd != victim)
>> +              || __glibc_unlikely (victim->fd != unsorted_chunks (av)))
>> +            malloc_printerr("malloc(): unsorted double linked list
>> corrupted");
>> +          if (__glibc_unlikely (prev_inuse(next)))
>> +            malloc_printerr("malloc(): invalid next->prev_inuse
>> (unsorted)");
>
>
> There's a missing space after malloc_printerr.

Noted.

> Why do you keep using chunksize_nomask?  We never investigated why the
> original code uses it.  It may have been an accident.

You are right, I don't think it makes a difference in these checks. So
the size local can be reused for the checks against victim. For next,
leaving it as such avoids the masking operation.

> Again, for non-main arenas, the checks against av->system_mem could be made
> tighter (against the heap size).  Maybe you could put the condition into a
> separate inline function?

We could also do a chunk boundary check similar to what I proposed in
the thread for the first patch in the series to be even more strict.
I'll gladly try to implement either but believe that refining these
checks would bring less benefits than in the case of the top chunk.
Intra-arena or intra-heap overlaps would still be doable here with
unsorted chunks and I don't see any way to counter that besides more
generic measures like randomizing allocations and your metadata
encoding patches.

I've attached a revised version with the above comments incorporated
but without the refined checks.

Thanks,
Istvan

From a12d5d40fd7aed5fa10fc444dcb819947b72b315 Mon Sep 17 00:00:00 2001
From: Istvan Kurucsai <pistukem@gmail.com>
Date: Tue, 16 Jan 2018 14:48:16 +0100
Subject: [PATCH v2 1/1] malloc: Additional checks for unsorted bin integrity
 I.

Ensure the following properties of chunks encountered during binning:
- victim chunk has reasonable size
- next chunk has reasonable size
- next->prev_size == victim->size
- valid double linked list
- PREV_INUSE of next chunk is unset

    * malloc/malloc.c (_int_malloc): Additional binning code checks.
2018-08-17 16:04:02 +02:00
Moritz Eckert
d6db68e66d malloc: Mitigate null-byte overflow attacks
* malloc/malloc.c (_int_free): Check for corrupt prev_size vs size.
(malloc_consolidate): Likewise.
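
A sketch of the added consistency check (assumed form) on the
backward-consolidation path of _int_free:

  if (!prev_inuse (p))
    {
      INTERNAL_SIZE_T prevsize = prev_size (p);
      p = chunk_at_offset (p, -((long) prevsize));
      /* The previous chunk's size field must match its footer.  */
      if (__glibc_unlikely (chunksize (p) != prevsize))
        malloc_printerr ("corrupted size vs. prev_size");
      unlink (av, p, bck, fwd);
    }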
2018-08-16 21:26:16 -04:00
Pochang Chen
30a17d8c95 malloc: Verify size of top chunk.
The House of Force is a well-known technique to exploit heap
overflow. In essence, this exploit takes three steps:
1. Overwrite the size of the top chunk with a very large value (e.g. -1).
2. Request x bytes from the top chunk. As the size of the top chunk
   is corrupted, x can be arbitrarily large and the top chunk will
   still be offset by x.
3. The next allocation from top chunk will thus be controllable.

If we verify the size of the top chunk at step 2, we can stop such an attack.
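
A sketch of the added check in _int_malloc (assumed form): the top
chunk can never be larger than the memory obtained from the system.

  victim = av->top;
  size = chunksize (victim);

  if (__glibc_unlikely (size > av->system_mem))
    malloc_printerr ("malloc(): corrupted top size");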
2018-08-16 15:24:24 -04:00
Florian Weimer
524d796d5f malloc: Update heap dumping/undumping comments [BZ #23351]
Also remove a few now-unused declarations and definitions.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2018-06-29 14:55:15 +02:00
Francois Goichon
bdc3009b8f malloc: harden removal from unsorted list
* malloc/malloc.c (_int_malloc): Added check before removing from
unsorted list.
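
A sketch of the hardened removal (assumed form): the back link of the
chunk being taken off the unsorted list must still point at it.

  /* remove from unsorted list */
  if (__glibc_unlikely (bck->fd != victim))
    malloc_printerr ("malloc(): corrupted unsorted chunks 3");
  unsorted_chunks (av)->bk = bck;
  bck->fd = unsorted_chunks (av);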
2018-03-14 16:25:57 -04:00
Florian Weimer
229855e598 malloc: Revert sense of prev_inuse in comments
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2018-03-09 16:21:22 +01:00
Zack Weinberg
9964a14579 Mechanically remove _IO_ name aliases for types and constants.
This patch mechanically removes all remaining uses, and the
definitions, of the following libio name aliases:

 name                         replaced with
 ----                         -------------
 _IO_FILE                     FILE
 _IO_fpos_t                   __fpos_t
 _IO_fpos64_t                 __fpos64_t
 _IO_size_t                   size_t
 _IO_ssize_t                  ssize_t or __ssize_t
 _IO_off_t                    off_t
 _IO_off64_t                  off64_t
 _IO_pid_t                    pid_t
 _IO_uid_t                    uid_t
 _IO_wint_t                   wint_t
 _IO_va_list                  va_list or __gnuc_va_list
 _IO_BUFSIZ                   BUFSIZ
 _IO_cookie_io_functions_t    cookie_io_functions_t
 __io_read_fn                 cookie_read_function_t
 __io_write_fn                cookie_write_function_t
 __io_seek_fn                 cookie_seek_function_t
 __io_close_fn                cookie_close_function_t

I used __fpos_t and __fpos64_t instead of fpos_t and fpos64_t because
the definitions of fpos_t and fpos64_t depend on the largefile mode.
I used __ssize_t and __gnuc_va_list in a handful of headers where
namespace cleanliness might be relevant even though they're
internal-use-only.  In all other cases, I used the public-namespace
name.

There are a tiny handful of places where I left a use of 'struct _IO_FILE'
alone, because it was being used together with 'struct _IO_FILE_plus'
or 'struct _IO_FILE_complete' in the same arithmetic expression.

Because this patch was almost entirely done with search and replace, I
may have introduced indentation botches.  I did proofread the diff,
but I may have missed something.

The ChangeLog below calls out all of the places where this was not a
pure search-and-replace change.

Installed stripped libraries and executables are unchanged by this patch,
except that some assertions in vfscanf.c change line numbers.

	* libio/libio.h (_IO_FILE): Delete; all uses changed to FILE.
	(_IO_fpos_t): Delete; all uses changed to __fpos_t.
	(_IO_fpos64_t): Delete; all uses changed to __fpos64_t.
	(_IO_size_t): Delete; all uses changed to size_t.
	(_IO_ssize_t): Delete; all uses changed to ssize_t or __ssize_t.
	(_IO_off_t): Delete; all uses changed to off_t.
	(_IO_off64_t): Delete; all uses changed to off64_t.
	(_IO_pid_t): Delete; all uses changed to pid_t.
	(_IO_uid_t): Delete; all uses changed to uid_t.
	(_IO_wint_t): Delete; all uses changed to wint_t.
	(_IO_va_list): Delete; all uses changed to va_list or __gnuc_va_list.
	(_IO_BUFSIZ): Delete; all uses changed to BUFSIZ.
	(_IO_cookie_io_functions_t): Delete; all uses changed to
	cookie_io_functions_t.
	(__io_read_fn): Delete; all uses changed to cookie_read_function_t.
	(__io_write_fn): Delete; all uses changed to cookie_write_function_t.
	(__io_seek_fn): Delete; all uses changed to cookie_seek_function_t.
	(__io_close_fn): Delete: all uses changed to cookie_close_function_t.

	* libio/iofopncook.c: Remove unnecessary forward declarations.
	* libio/iolibio.h: Correct outdated commentary.
	* malloc/malloc.c (__malloc_stats): Remove unnecessary casts.
	* stdio-common/fxprintf.c (__fxprintf_nocancel):
	Remove unnecessary casts.
	* stdio-common/getline.c: Use _IO_getdelim directly.
	Don't redefine ssize_t.
	* stdio-common/printf_fp.c, stdio_common/printf_fphex.c
	* stdio-common/printf_size.c: Don't redefine size_t or FILE.
	Remove outdated comments.
	* stdio-common/vfscanf.c: Don't redefine va_list.
2018-02-21 14:11:05 -05:00
Zack Weinberg
402ecba487 [BZ #22830] malloc_stats: restore cancellation for stderr correctly.
malloc_stats means to disable cancellation for writes to stderr while
it runs, but it restores stderr->_flags2 with |= instead of =, so what
it actually does is disable cancellation on stderr permanently.
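
The shape of the one-line fix, sketched with the member names from the
message above:

  stderr->_flags2 |= old_flags2 & _IO_FLAGS2_NOTCANCEL;  /* bug: can only set */
  stderr->_flags2 = old_flags2;                          /* fix: restore exactly */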

	[BZ #22830]
	* malloc/malloc.c (__malloc_stats): Restore stderr->_flags2
        correctly.
        * malloc/tst-malloc-stats-cancellation.c: New test case.
        * malloc/Makefile: Add new test case.
2018-02-10 16:24:17 -05:00
Samuel Thibault
406e7a0a47 malloc: Use assert.h's assert macro
This avoids assert definition conflicts if some of the headers used by
malloc.c happen to include assert.h.  Malloc still needs a malloc-avoiding
implementation, which we get by redirecting __assert_fail to malloc's
__malloc_assert.

	* malloc/malloc.c: Include <assert.h>.
	(assert): Do not define.
	[!defined NDEBUG] (__assert_fail): Define to __malloc_assert.
2018-01-29 23:00:17 +01:00
Arjun Shankar
8e448310d7 Fix integer overflows in internal memalign and malloc functions [BZ #22343]
When posix_memalign is called with an alignment less than MALLOC_ALIGNMENT
and a requested size close to SIZE_MAX, it falls back to malloc code
(because the alignment of a block returned by malloc is sufficient to
satisfy the call).  In this case, an integer overflow in _int_malloc leads
to posix_memalign incorrectly returning successfully.

Upon fixing this and writing a somewhat thorough regression test, it was
discovered that when posix_memalign is called with an alignment larger than
MALLOC_ALIGNMENT (so it uses _int_memalign instead) and a requested size
close to SIZE_MAX, a different integer overflow in _int_memalign leads to
posix_memalign incorrectly returning successfully.

Both integer overflows affect other memory allocation functions that use
_int_malloc (one affected malloc in x86) or _int_memalign as well.
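
A standalone illustration of the wraparound (constants are for a
64-bit target; not glibc code):

  #include <stdio.h>
  #include <stdint.h>

  int main (void)
  {
    size_t size_sz = sizeof (size_t);     /* 8 */
    size_t align_mask = 2 * size_sz - 1;  /* 15 */
    size_t req = SIZE_MAX - 2;            /* request close to SIZE_MAX */
    /* The usual pad-and-align step wraps around to a tiny value,
       so the allocation appears to succeed.  */
    size_t nb = (req + size_sz + align_mask) & ~align_mask;
    printf ("padded size: %zu\n", nb);    /* prints 16 */
    return 0;
  }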

This commit fixes both integer overflows.  In addition to this, it adds a
regression test to guard against false successful allocations by the
following memory allocation functions when called with too-large allocation
sizes and, where relevant, various valid alignments:
malloc, realloc, calloc, reallocarray, memalign, posix_memalign,
aligned_alloc, valloc, and pvalloc.
2018-01-18 17:55:45 +01:00
Istvan Kurucsai
249a5895f1 malloc: Ensure that the consolidated fast chunk has a sane size. 2018-01-12 15:26:20 +01:00
Joseph Myers
688903eb3e Update copyright dates with scripts/update-copyrights.
* All files with FSF copyright notices: Update copyright dates
	using scripts/update-copyrights.
	* locale/programs/charmap-kw.h: Regenerated.
	* locale/programs/locfile-kw.h: Likewise.
2018-01-01 00:32:25 +00:00
Arjun Shankar
34697694e8 Fix integer overflow in malloc when tcache is enabled [BZ #22375]
When the per-thread cache is enabled, __libc_malloc uses request2size (which
does not perform an overflow check) to calculate the chunk size from the
requested allocation size. This leads to an integer overflow causing malloc
to incorrectly return the last successfully allocated block when called with
a very large size argument (close to SIZE_MAX).

This commit uses checked_request2size instead, removing the overflow.
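
The shape of the fix (a sketch; at the time, checked_request2size was a
macro that sets ENOMEM and returns 0 from the caller on an out-of-range
request):

  size_t tbytes = request2size (bytes);  /* before: no overflow check */

  size_t tbytes;
  checked_request2size (bytes, tbytes);  /* after */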
2017-11-30 13:42:53 +01:00
Florian Weimer
0a947e061d malloc: Call tcache destructor in arena_thread_freeres
It does not make sense to register separate cleanup functions for arena
and tcache since they're always going to be called together.  Call the
tcache cleanup function from within arena_thread_freeres since it at
least makes the order of those cleanups clear in the code.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2017-11-23 14:47:31 +01:00
Florian Weimer
34eb41579c malloc: Account for all heaps in an arena in malloc_info [BZ #22439]
This commit adds a "subheaps" field to the malloc_info output that
shows the number of heaps that were allocated to extend a non-main
arena.

Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2017-11-15 11:40:41 +01:00
Florian Weimer
7a9368a117 malloc: Add missing arena lock in malloc_info [BZ #22408]
Obtain the size information while the arena lock is acquired, and only
print it later.
2017-11-15 11:39:01 +01:00
Wilco Dijkstra
905a7725e9 Add single-threaded path to _int_malloc
This patch adds single-threaded fast paths to _int_malloc.

	* malloc/malloc.c (_int_malloc): Add SINGLE_THREAD_P path.
2017-10-24 12:43:05 +01:00
Wilco Dijkstra
3f6bb8a32e Add single-threaded path to malloc/realloc/calloc/memalloc
This patch adds a single-threaded fast path to malloc, realloc,
calloc and memalign.  When we're single-threaded, we can bypass
arena_get (which always locks the arena it returns) and just use
the main arena.  Also avoid retrying a different arena since
there is just the main arena.
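
A sketch of the fast path in __libc_malloc (assumed form):

  if (SINGLE_THREAD_P)
    {
      victim = _int_malloc (&main_arena, bytes);
      assert (!victim || chunk_is_mmapped (mem2chunk (victim))
              || &main_arena == arena_for_chunk (mem2chunk (victim)));
      return victim;
    }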

	* malloc/malloc.c (__libc_malloc): Add SINGLE_THREAD_P path.
	(__libc_realloc): Likewise.
	(_mid_memalign): Likewise.
	(__libc_calloc): Likewise.
2017-10-24 12:39:24 +01:00
Wilco Dijkstra
6d43de4b85 Fix build issue with SINGLE_THREAD_P
Add sysdep-cancel.h include.

	* malloc/malloc.c (sysdep-cancel.h): Add include.
2017-10-20 17:39:47 +01:00
Wilco Dijkstra
a15d53e2de Add single-threaded path to _int_free
This patch adds single-threaded fast paths to _int_free.
Bypass the explicit locking for larger allocations.

	* malloc/malloc.c (_int_free): Add SINGLE_THREAD_P fast paths.
2017-10-20 17:31:06 +01:00
Wilco Dijkstra
d74e6f6c0d Fix deadlock in _int_free consistency check
This patch fixes a deadlock in the fastbin consistency check.
If we fail the fast check due to concurrent modifications to
the next chunk or system_mem, we should not lock if we already
have the arena lock.  Simplify the check to make it obviously
correct.
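
A sketch of the simplified check (assumed form): once the unlocked test
fails, take the lock only if we do not already hold it, and retest.

  bool fail = true;
  /* Without the lock, concurrent modifications of the next chunk or of
     system_mem can produce a false positive; redo the test locked.  */
  if (!have_lock)
    {
      __libc_lock_lock (av->mutex);
      fail = (chunksize_nomask (chunk_at_offset (p, size)) <= 2 * SIZE_SZ
              || chunksize (chunk_at_offset (p, size)) >= av->system_mem);
      __libc_lock_unlock (av->mutex);
    }
  if (fail)
    malloc_printerr ("free(): invalid next size (fast)");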

	* malloc/malloc.c (_int_free): Fix deadlock bug in consistency check.
2017-10-19 18:19:55 +01:00
Wilco Dijkstra
2c2245b92c Fix build failure on tilepro due to unsupported atomics
* malloc/malloc.c (malloc_state): Use int for have_fastchunks since
        not all targets support atomics on bool.
2017-10-18 12:20:55 +01:00
Wilco Dijkstra
3381be5cde Improve malloc initialization sequence
The current malloc initialization is quite convoluted. Instead of
sometimes calling malloc_consolidate from ptmalloc_init, call
malloc_init_state early so that the main_arena is always initialized.
The special initialization can now be removed from malloc_consolidate.
This also fixes BZ #22159.

Check all calls to malloc_consolidate and remove calls that are
redundant initialization after ptmalloc_init, like in int_mallinfo
and __libc_mallopt (but keep the latter as consolidation is required for
set_max_fast).  Update comments to improve clarity.

Remove impossible initialization check from _int_malloc, fix assert
in do_check_malloc_state to ensure arena->top != 0.  Fix the obvious bugs
in do_check_free_chunk and do_check_remalloced_chunk to enable single
threaded malloc debugging (do_check_malloc_state is not thread safe!).

	[BZ #22159]
	* malloc/arena.c (ptmalloc_init): Call malloc_init_state.
	* malloc/malloc.c (do_check_free_chunk): Fix build bug.
	(do_check_remalloced_chunk): Fix build bug.
	(do_check_malloc_state): Add assert that checks arena->top.
	(malloc_consolidate): Remove initialization.
	(int_mallinfo): Remove call to malloc_consolidate.
	(__libc_mallopt): Clarify why malloc_consolidate is needed.
2017-10-17 18:55:16 +01:00
Wilco Dijkstra
e956075a5a Use relaxed atomics for malloc have_fastchunks
Currently free typically uses 2 atomic operations per call.  The have_fastchunks
flag indicates whether there are recently freed blocks in the fastbins.  This
is purely an optimization to avoid calling malloc_consolidate too often and
to avoid the overhead of walking all fast bins even if all are empty during a
sequence of allocations.  However, using catomic_or to update the flag is
completely unnecessary since it can be changed into a simple boolean and
accessed using relaxed atomics.  There is no change in multi-threaded behaviour
given the flag is already approximate (it may be set when there are no blocks in
any fast bins, or it may be clear when there are free blocks that could be
consolidated).

Performance of malloc/free improves by 27% on a simple benchmark on AArch64
(both single and multithreaded). The number of load/store exclusive instructions
is reduced by 33%. Bench-malloc-thread speeds up by ~3% in all cases.
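
A sketch of the relaxed accesses (shapes assumed; the build fix above
later changes the field from bool to int):

  /* In struct malloc_state: */
  bool have_fastchunks;

  /* Readers, e.g. before deciding to consolidate: */
  if (atomic_load_relaxed (&av->have_fastchunks))
    malloc_consolidate (av);

  /* Writers, e.g. after pushing a freed chunk onto a fastbin: */
  atomic_store_relaxed (&av->have_fastchunks, true);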

	* malloc/malloc.c (FASTCHUNKS_BIT): Remove.
	(have_fastchunks): Remove.
	(clear_fastchunks): Remove.
	(set_fastchunks): Remove.
	(malloc_state): Add have_fastchunks.
	(malloc_init_state): Use have_fastchunks.
	(do_check_malloc_state): Remove incorrect invariant checks.
	(_int_malloc): Use have_fastchunks.
	(_int_free): Likewise.
	(malloc_consolidate): Likewise.
2017-10-17 18:43:31 +01:00