Since the size argument is unsigned, we should use unsigned Jcc
instructions, instead of signed ones, to check the size.
Tested on x86-64 and x32, with and without --disable-multi-arch.
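As a worked illustration (not glibc code), a length whose sign bit is set
is misclassified by a signed comparison but handled correctly by an
unsigned one:
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  int64_t s = -1;               /* bit pattern 0xffff...ff */
  uint64_t len = (uint64_t) s;  /* as a length: huge, not negative */

  /* Unsigned compare (ja/jb style): the huge length is > 16.  */
  printf ("unsigned: %d\n", len > 16);   /* prints 1 */
  /* Signed compare (jg/jl style): the same bits look negative.  */
  printf ("signed:   %d\n", s > 16);     /* prints 0 */
  return 0;
}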
[BZ #24155]
CVE-2019-7309
* NEWS: Updated for CVE-2019-7309.
* sysdeps/x86_64/memcmp.S: Use RDX_LP for size. Clear the
upper 32 bits of RDX register for x32. Use unsigned Jcc
instructions, instead of signed.
* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memcmp-2.
* sysdeps/x86_64/x32/tst-size_t-memcmp-2.c: New test.
(cherry picked from commit 3f635fb433)
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
This patch fixes strnlen/wcsnlen for x32. Tested on x86-64 and x32. On
x86-64, libc.so is the same with and without the fix.
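As a minimal sketch (illustrative only; not the actual
sysdeps/x86_64/x32/tst-size_t-*.c tests or their shared test-size_t.h),
one way such a regression test can arrange non-zero upper bits on x32 is
to pass the 32-bit length as part of an 8-byte struct, so the length
register's upper half carries the other member:
/* x32-only sketch: an 8-byte struct travels in one 64-bit register, so
   the 32-bit length in its low half arrives with non-zero upper bits.  */
#include <string.h>
#include <stdint.h>
#include <stdio.h>

typedef struct
{
  uint32_t len;    /* size_t is 32 bits on x32 */
  uint32_t junk;   /* fills the upper half of the register */
} arg_t;

static size_t
__attribute__ ((noinline, noclone))
do_strnlen (const char *s, arg_t a)
{
  /* A buggy assembly strnlen that uses the full 64-bit register would
     see a.len | ((uint64_t) a.junk << 32) as the maximum length.  */
  return strnlen (s, a.len);
}

int
main (void)
{
  char buf[16] = "abc";
  arg_t a = { sizeof buf, 0xdeadbeef };
  printf ("%zu\n", do_strnlen (buf, a));   /* expect 3 */
  return 0;
}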
[BZ #24097]
CVE-2019-6488
* sysdeps/x86_64/multiarch/strlen-avx2.S: Use RSI_LP for length.
Clear the upper 32 bits of RSI register.
* sysdeps/x86_64/strlen.S: Use RSI_LP for length.
* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-strnlen
and tst-size_t-wcsnlen.
* sysdeps/x86_64/x32/tst-size_t-strnlen.c: New file.
* sysdeps/x86_64/x32/tst-size_t-wcsnlen.c: Likewise.
(cherry picked from commit 5165de69c0)
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
This patch fixes strncpy for x32. Tested on x86-64 and x32. On x86-64,
libc.so is the same with and without the fix.
[BZ #24097]
CVE-2019-6488
* sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S: Use RDX_LP
for length.
* sysdeps/x86_64/multiarch/strcpy-ssse3.S: Likewise.
* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-strncpy.
* sysdeps/x86_64/x32/tst-size_t-strncpy.c: New file.
(cherry picked from commit c7c54f65b0)
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
This patch fixes the strncmp family for x32. Tested on x86-64 and x32.
On x86-64, libc.so is the same with and without the fix.
[BZ #24097]
CVE-2019-6488
* sysdeps/x86_64/multiarch/strcmp-avx2.S: Use RDX_LP for length.
* sysdeps/x86_64/multiarch/strcmp-sse42.S: Likewise.
* sysdeps/x86_64/strcmp.S: Likewise.
* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-strncasecmp,
tst-size_t-strncmp and tst-size_t-wcsncmp.
* sysdeps/x86_64/x32/tst-size_t-strncasecmp.c: New file.
* sysdeps/x86_64/x32/tst-size_t-strncmp.c: Likewise.
* sysdeps/x86_64/x32/tst-size_t-wcsncmp.c: Likewise.
(cherry picked from commit ee915088a0)
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
This patch fixes memset/wmemset for x32. Tested on x86-64 and x32. On
x86-64, libc.so is the same with and without the fix.
[BZ #24097]
CVE-2019-6488
* sysdeps/x86_64/multiarch/memset-avx512-no-vzeroupper.S: Use
RDX_LP for length. Clear the upper 32 bits of RDX register.
* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S: Likewise.
* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memset and
tst-size_t-wmemset.
* sysdeps/x86_64/x32/tst-size_t-memset.c: New file.
* sysdeps/x86_64/x32/tst-size_t-wmemset.c: Likewise.
(cherry picked from commit 82d0b4a4d7)
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
This patch fixes memrchr for x32. Tested on x86-64 and x32. On x86-64,
libc.so is the same with and without the fix.
[BZ #24097]
CVE-2019-6488
* sysdeps/x86_64/memrchr.S: Use RDX_LP for length.
* sysdeps/x86_64/multiarch/memrchr-avx2.S: Likewise.
* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memrchr.
* sysdeps/x86_64/x32/tst-size_t-memrchr.c: New file.
(cherry picked from commit ecd8b842cf)
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
This patch fixes memcpy for x32. Tested on x86-64 and x32. On x86-64,
libc.so is the same with and without the fix.
[BZ #24097]
CVE-2019-6488
* sysdeps/x86_64/multiarch/memcpy-ssse3-back.S: Use RDX_LP for
length. Clear the upper 32 bits of RDX register.
* sysdeps/x86_64/multiarch/memcpy-ssse3.S: Likewise.
* sysdeps/x86_64/multiarch/memmove-avx512-no-vzeroupper.S:
Likewise.
* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:
Likewise.
* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memcpy.
* sysdeps/x86_64/x32/tst-size_t-memcpy.c: New file.
(cherry picked from commit 231c56760c)
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
This patch fixes memcmp/wmemcmp for x32. Tested on x86-64 and x32. On
x86-64, libc.so is the same with and without the fix.
[BZ #24097]
CVE-2019-6488
* sysdeps/x86_64/multiarch/memcmp-avx2-movbe.S: Use RDX_LP for
length. Clear the upper 32 bits of RDX register.
* sysdeps/x86_64/multiarch/memcmp-sse4.S: Likewise.
* sysdeps/x86_64/multiarch/memcmp-ssse3.S: Likewise.
* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memcmp and
tst-size_t-wmemcmp.
* sysdeps/x86_64/x32/tst-size_t-memcmp.c: New file.
* sysdeps/x86_64/x32/tst-size_t-wmemcmp.c: Likewise.
(cherry picked from commit b304fc201d)
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
This patch fixes memchr/wmemchr for x32. Tested on x86-64 and x32. On
x86-64, libc.so is the same with and without the fix.
[BZ #24097]
CVE-2019-6488
* sysdeps/x86_64/memchr.S: Use RDX_LP for length. Clear the
upper 32 bits of RDX register.
* sysdeps/x86_64/multiarch/memchr-avx2.S: Likewise.
* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memchr and
tst-size_t-wmemchr.
* sysdeps/x86_64/x32/test-size_t.h: New file.
* sysdeps/x86_64/x32/tst-size_t-memchr.c: Likewise.
* sysdeps/x86_64/x32/tst-size_t-wmemchr.c: Likewise.
(cherry picked from commit 97700a34f3)
Ignore 16 errors in math/test-ldouble-fma and 4 errors in
math/test-ildouble-fma when the IBM 128-bit long double format is used.
These errors are caused by spurious overflows from libgcc.
* math/libm-test-fma.inc (fma_test_data): Set
XFAIL_ROUNDING_IBM128_LIBGCC to more tests.
Signed-off-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
(cherry picked from commit ecdacd34a2)
Commit 1294b1892e ("Add support for sqrt asm redirects") added the
-fno-math-errno flag to build most of glibc in order to enable GCC
to inline math functions. Due to GCC bug #88576, the saving and restoring
of errno around calls to malloc is optimized out. In turn this causes
strerror to set errno to ENOMEM if it is passed an invalid error number
and if malloc sets errno to ENOMEM (which might happen even if it
succeeds). This is not allowed by POSIX.
This patch changes the build flags, building only libm with
-fno-math-errno and all the remaining code with -fmath-errno. This
should be safe as libm doesn't contain any code saving and restoring
errno around malloc. This patch can probably be reverted once the GCC
bug is fixed and available in stable releases.
Tested on x86-64, no regression in the testsuite.
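A hedged sketch of the property the new string/test-strerror-errno.c
test (added below) is meant to enforce; this is an illustration, not the
actual test source:
/* strerror on an unknown error number must not leave errno set to
   ENOMEM (POSIX permits EINVAL here, but not a spurious ENOMEM).  */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int
main (void)
{
  errno = 0;
  const char *msg = strerror (2147483647);   /* bogus error number */
  if (errno == ENOMEM)
    {
      printf ("FAIL: errno == ENOMEM after strerror (\"%s\")\n", msg);
      return 1;
    }
  return 0;
}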
Changelog:
[BZ #24024]
* Makeconfig: Build libm with -fno-math-errno but build the remaining
code with -fmath-errno.
* string/Makefile [$(build-shared)] (tests): Add test-strerror-errno.
[$(build-shared)] (LDLIBS-test-strerror-errno): New variable.
* string/test-strerror-errno.c: New file.
(cherry picked from commit 2ef4271688)
* with -O, -O1, -Os it fails with:
In file included from ../soft-fp/soft-fp.h:318,
from ../sysdeps/ieee754/soft-fp/s_fdiv.c:28:
../sysdeps/ieee754/soft-fp/s_fdiv.c: In function '__fdiv':
../soft-fp/op-2.h:98:25: error: 'R_f1' may be used uninitialized in this function [-Werror=maybe-uninitialized]
X##_f0 = (X##_f1 << (_FP_W_TYPE_SIZE - (N)) | X##_f0 >> (N) \
^~
../sysdeps/ieee754/soft-fp/s_fdiv.c:38:14: note: 'R_f1' was declared here
FP_DECL_D (R);
^
../soft-fp/op-2.h:37:36: note: in definition of macro '_FP_FRAC_DECL_2'
_FP_W_TYPE X##_f0 _FP_ZERO_INIT, X##_f1 _FP_ZERO_INIT
^
../soft-fp/double.h:95:24: note: in expansion of macro '_FP_DECL'
# define FP_DECL_D(X) _FP_DECL (2, X)
^~~~~~~~
../sysdeps/ieee754/soft-fp/s_fdiv.c:38:3: note: in expansion of macro 'FP_DECL_D'
FP_DECL_D (R);
^~~~~~~~~
../soft-fp/op-2.h:101:17: error: 'R_f0' may be used uninitialized in this function [-Werror=maybe-uninitialized]
: (X##_f0 << (_FP_W_TYPE_SIZE - (N))) != 0)); \
^~
../sysdeps/ieee754/soft-fp/s_fdiv.c:38:14: note: 'R_f0' was declared here
FP_DECL_D (R);
^
../soft-fp/op-2.h:37:14: note: in definition of macro '_FP_FRAC_DECL_2'
_FP_W_TYPE X##_f0 _FP_ZERO_INIT, X##_f1 _FP_ZERO_INIT
^
../soft-fp/double.h:95:24: note: in expansion of macro '_FP_DECL'
# define FP_DECL_D(X) _FP_DECL (2, X)
^~~~~~~~
../sysdeps/ieee754/soft-fp/s_fdiv.c:38:3: note: in expansion of macro 'FP_DECL_D'
FP_DECL_D (R);
^~~~~~~~~
Build tested with Yocto for ARM, AARCH64, X86, X86_64, PPC, MIPS, MIPS64
with -O, -O1, -Os.
For AARCH64 it needs one more fix in locale for -Os.
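A hedged sketch of the libc-diag.h pattern the entry below describes;
the exact placement around the soft-fp macros and the GCC version
argument are assumptions, not the verbatim s_fdiv.c change:
#include <libc-diag.h>

  /* GCC at -O/-O1/-Os cannot prove that R_f0/R_f1 are written before
     they are read inside the soft-fp macros, so it emits a spurious
     -Wmaybe-uninitialized error with -Werror.  */
  DIAG_PUSH_NEEDS_COMMENT;
  DIAG_IGNORE_NEEDS_COMMENT (8, "-Wmaybe-uninitialized");
  FP_DIV_D (R, X, Y);
  DIAG_POP_NEEDS_COMMENT;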
[BZ #19444]
* sysdeps/ieee754/soft-fp/s_fdiv.c: Include <libc-diag.h> and use
DIAG_PUSH_NEEDS_COMMENT, DIAG_IGNORE_NEEDS_COMMENT and
DIAG_POP_NEEDS_COMMENT to disable -Wmaybe-uninitialized.
(cherry picked from commit 4a06ceea33)
The pre-ARMv7 CPUs are missing atomic compare and exchange and/or
barrier instructions. Therefore those are implemented using kernel
assistance, calling a kernel function at a specific address, and passing
the arguments in the r0 to r4 registers. This is done by specifying
registers for local variables. The a_ptr variable is placed in the r2
register and declared with __typeof (mem). According to the GCC
documentation on local register variables, if mem is a constant pointer,
the compiler may substitute the variable with its initializer in asm
statements, which may cause the corresponding operand to appear in a
different register.
This happens in __libc_start_main with the pointer to the thread counter
for static binaries (but not the shared ones):
# ifdef SHARED
unsigned int *ptr = __libc_pthread_functions.ptr_nthreads;
# ifdef PTR_DEMANGLE
PTR_DEMANGLE (ptr);
# endif
# else
extern unsigned int __nptl_nthreads __attribute ((weak));
unsigned int *const ptr = &__nptl_nthreads;
# endif
This causes static binaries using threads to crash when the GNU libc is
built with GCC 8 and most notably tst-cancel21-static.
To fix that, use the same trick as for the volatile qualifier,
defining a_ptr as a union.
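An illustrative reduction of the problematic pattern (ARM-only,
hypothetical function; whether GCC actually substitutes the initializer
depends on optimization):
extern unsigned int __nptl_nthreads __attribute__ ((weak));

unsigned int
load_thread_count (void)
{
  /* A constant pointer used to initialize a local register variable:
     GCC may substitute the initializer into the asm operand, so the
     value is not guaranteed to be in r2 when the asm runs.  */
  unsigned int *const mem = &__nptl_nthreads;
  register unsigned int *a_ptr asm ("r2") = mem;
  unsigned int val;
  asm volatile ("ldr %0, [%1]" : "=r" (val) : "r" (a_ptr));
  return val;
}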
Changelog:
[BZ #24034]
* sysdeps/unix/sysv/linux/arm/atomic-machine.h
(__arm_assisted_compare_and_exchange_val_32_acq): Use uint32_t rather
than __typeof (...) for the a_ptr variable.
(cherry picked from commit fe20bb1d60)
<asm/syscalls.h> has been removed by
commit 27f8899d6002e11a6e2d995e29b8deab5aa9cc25
Author: David Abdurachmanov <david.abdurachmanov@gmail.com>
Date: Thu Nov 8 20:02:39 2018 +0100
riscv: add asm/unistd.h UAPI header
Marcin Juszkiewicz reported issues while generating syscall table for riscv
using 4.20-rc1. The patch refactors our unistd.h files to match some other
architectures.
- Add asm/unistd.h UAPI header, which has __ARCH_WANT_NEW_STAT only for 64-bit
- Remove asm/syscalls.h UAPI header and merge to asm/unistd.h
- Adjust kernel asm/unistd.h
So now asm/unistd.h UAPI header should show all syscalls for riscv.
<asm/syscalls.h> may be restored by
Subject: [PATCH] riscv: restore asm/syscalls.h UAPI header
Date: Tue, 11 Dec 2018 09:09:35 +0100
UAPI header asm/syscalls.h was merged into UAPI asm/unistd.h header,
which did resolve issue with missing syscalls macros resulting in
glibc (2.28) build failure. It also broke glibc in a different way:
asm/syscalls.h is being used by glibc. I noticed this while doing
Fedora 30/Rawhide mass rebuild.
The patch returns asm/syscalls.h header and incl. it into asm/unistd.h.
I plan to send a patch to glibc to use asm/unistd.h instead of
asm/syscalls.h
In the meantime, we use __has_include__, which was added in GCC 5, to
check if <asm/syscalls.h> exists before including it. Tested with
build-many-glibcs.py for riscv against kernel 4.19.12 and 4.20-rc7.
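A sketch of the conditional include the entry below describes (the
surrounding context in flush-icache.c is assumed):
/* Only pull in <asm/syscalls.h> when the installed kernel headers
   still provide it; __has_include__ is available since GCC 5.  */
#if __has_include__ (<asm/syscalls.h>)
# include <asm/syscalls.h>
#endif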
[BZ #24022]
* sysdeps/unix/sysv/linux/riscv/flush-icache.c: Check if
<asm/syscalls.h> exists with __has_include__ before including it.
(cherry picked from commit 0b9c84906f)
This commit removes the custom memcpy implementation from _int_realloc
for small chunk sizes. The ncopies variable has the wrong type, and
an integer wraparound could cause the existing code to copy too few
elements (leaving the new memory region mostly uninitialized).
Therefore, removing this code fixes bug 24027.
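A worked illustration of the wraparound with hypothetical values (this
is not the removed _int_realloc code):
/* On a 64-bit host, a copy size above 32 GiB makes a 32-bit element
   count wrap around, so the copy loop would move far too few words.  */
#include <stdio.h>

int
main (void)
{
  size_t copysize = (size_t) 40 << 32;              /* 160 GiB */
  unsigned int ncopies = copysize / sizeof (long);  /* wraps to 0 */
  printf ("elements to copy: %u, expected: %zu\n",
          ncopies, copysize / sizeof (long));
  return 0;
}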
(cherry picked from commit b50dd3bc8c)
Commit b4a5d26d88 (linux: Consolidate sigaction implementation) added
a wrong kernel_sigaction definition for m68k, meant for __NR_sigaction
instead of __NR_rt_sigaction as used by the generic Linux sigaction
implementation. This patch fixes it by using the Linux generic
definition meant for the RT kernel ABI.
Checked the signal tests on emulated m68k-linux-gnu (Aranym). It fixes
the faulty signal/tst-sigaction test, which now works as expected.
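For reference, a hedged sketch of the generic RT-ABI layout the entries
below adjust (simplified; the authoritative definition is
sysdeps/unix/sysv/linux/kernel_sigaction.h):
#include <signal.h>

/* Layout expected by __NR_rt_sigaction; sa_restorer is only present
   when the architecture defines SA_RESTORER (HAS_SA_RESTORER).  */
struct kernel_sigaction
{
  __sighandler_t k_sa_handler;
  unsigned long sa_flags;
#ifdef HAS_SA_RESTORER
  void (*sa_restorer) (void);
#endif
  sigset_t sa_mask;
};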
Adhemerval Zanella <adhemerval.zanella@linaro.org>
James Clarke <jrtc27@jrtc27.com>
[BZ #23967]
* sysdeps/unix/sysv/linux/kernel_sigaction.h (HAS_SA_RESTORER):
Define if SA_RESTORER is defined.
(kernel_sigaction): Define sa_restorer if HAS_SA_RESTORER is defined.
(SET_SA_RESTORER, RESET_SA_RESTORER): Define iff the macros are not
already defined.
* sysdeps/unix/sysv/linux/m68k/kernel_sigaction.h (SA_RESTORER,
kernel_sigaction, SET_SA_RESTORER, RESET_SA_RESTORER): Remove
definitions.
(HAS_SA_RESTORER): Define.
* sysdeps/unix/sysv/linux/sparc/kernel_sigaction.h (SA_RESTORER,
SET_SA_RESTORER, RESET_SA_RESTORER): Remove definition.
(HAS_SA_RESTORER): Define.
* sysdeps/unix/sysv/linux/nios2/kernel_sigaction.h: Include generic
kernel_sigaction after defining SET_SA_RESTORER and RESET_SA_RESTORER.
* sysdeps/unix/sysv/linux/powerpc/kernel_sigaction.h: Likewise.
* sysdeps/unix/sysv/linux/x86_64/sigaction.c: Likewise.
(cherry picked from commit 43a45c2d82)
Mark the ra register as undefined in _start, so that unwinding through
main works correctly. Also, don't use a tail call so that ra points after
the call to __libc_start_main, not after the previous call.
(cherry picked from commit 2dd12baa04)
In the read lock function (__pthread_rwlock_rdlock_full) there was a
code path which would fail to reload __readers while waiting for
PTHREAD_RWLOCK_RWAITING to change. This failure to reload __readers
into a local value meant that various conditionals used the old value
of __readers and with only two threads left it could result in an
indefinite stall of one of the readers (waiting for PTHREAD_RWLOCK_RWAITING
to go to zero, but it never would).
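A toy reduction of the fix (C11 atomics instead of glibc's internal
ones; names such as RWAITING are stand-ins, and the futex wait is
elided):
#include <stdatomic.h>

/* Stands in for rwlock->__data.__readers.  */
atomic_uint readers;
#define RWAITING 0x4u   /* stands in for PTHREAD_RWLOCK_RWAITING */

void
wait_for_rwaiting_clear (void)
{
  unsigned int r = atomic_load_explicit (&readers, memory_order_relaxed);
  while ((r & RWAITING) != 0)
    /* The fix: reload the shared word each iteration.  Without this,
       'r' stays stale and the conditions using it never change.  */
    r = atomic_load_explicit (&readers, memory_order_relaxed);
}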
(cherry picked from commit f21e8f8ca4)
Add CFI information about the offset of registers stored in the stack
frame.
[BZ #23614]
* sysdeps/powerpc/powerpc64/addmul_1.S (FUNC): Add CFI offset for
registers saved in the stack frame.
* sysdeps/powerpc/powerpc64/lshift.S (__mpn_lshift): Likewise.
* sysdeps/powerpc/powerpc64/mul_1.S (__mpn_mul_1): Likewise.
Signed-off-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
Reviewed-by: Gabriel F. T. Gomes <gabriel@inconstante.eti.br>
(cherry picked from commit 1d880d4a9b)
This one tests for BZ #23907, where the double-free check didn't
validate the tcache bin bounds before dereferencing the bin.
[BZ #23907]
* malloc/tst-tcfree3.c: New.
* malloc/Makefile: Add it.
(cherry picked from commit 7c9a7c6836)
There is a data-dependency between the fields of struct l_reloc_result
and the field used as the initialization guard. Users of the guard
expect writes to the structure to be observable when they also observe
the guard initialized. The solution for this problem is to use an acquire
and release load and store to ensure previous writes to the structure are
observable if the guard is initialized.
The previous implementation used DL_FIXUP_VALUE_ADDR (l_reloc_result->addr)
as the initialization guard, making it impossible for some architectures
to load and store it atomically, i.e. hppa and ia64, due to its larger size.
This commit adds an unsigned int to l_reloc_result to be used as the new
initialization guard of the struct, making it possible to load and store
it atomically in all architectures. The fix ensures that the values
observed in l_reloc_result are consistent and do not lead to crashes.
The algorithm is documented in the code in elf/dl-runtime.c
(_dl_profile_fixup). Not all data races have been eliminated.
Tested with build-many-glibcs and on powerpc, powerpc64, and powerpc64le.
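A self-contained toy model of the guard protocol (C11 atomics instead of
glibc's atomic_store_release/atomic_load_acquire; field names are
simplified and this is not the elf/dl-runtime.c code):
#include <stdatomic.h>
#include <stdio.h>

struct toy_reloc_result
{
  void *addr;          /* data protected by the guard */
  atomic_uint init;    /* the new initialization guard */
};

static void
publish (struct toy_reloc_result *r, void *value)
{
  r->addr = value;                                            /* write data */
  atomic_store_explicit (&r->init, 1, memory_order_release);  /* then guard */
}

static void *
consume (struct toy_reloc_result *r)
{
  /* The acquire load pairs with the release store above, so a reader
     that sees init != 0 also sees a consistent addr.  */
  if (atomic_load_explicit (&r->init, memory_order_acquire) != 0)
    return r->addr;
  return NULL;   /* not initialized yet: the caller must resolve first */
}

int
main (void)
{
  static struct toy_reloc_result r;
  printf ("%p\n", consume (&r));     /* NULL: guard not set */
  publish (&r, (void *) &r);
  printf ("%p\n", consume (&r));     /* now the published value */
  return 0;
}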
[BZ #23690]
* elf/dl-runtime.c (_dl_profile_fixup): Guarantee memory
modification order when accessing reloc_result->addr.
* include/link.h (reloc_result): Add field init.
* nptl/Makefile (tests): Add tst-audit-threads.
(modules-names): Add tst-audit-threads-mod1 and
tst-audit-threads-mod2.
Add rules to build tst-audit-threads.
* nptl/tst-audit-threads-mod1.c: New file.
* nptl/tst-audit-threads-mod2.c: Likewise.
* nptl/tst-audit-threads.c: Likewise.
* nptl/tst-audit-threads.h: Likewise.
Signed-off-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit e5d262effe)
* malloc/malloc.c (tcache_entry): Add key field.
(tcache_put): Set it.
(tcache_get): Likewise.
(_int_free): Check for double free in tcache.
* malloc/tst-tcfree1.c: New.
* malloc/tst-tcfree2.c: New.
* malloc/Makefile: Run the new tests.
* manual/probes.texi: Document memory_tcache_double_free probe.
* dlfcn/dlerror.c (check_free): Prevent double frees.
(cherry picked from commit bcdaad21d4)
malloc: tcache: Validate tc_idx before checking for double-frees [BZ #23907]
The previous check could read beyond the end of the tcache entry
array. If the e->key == tcache cookie check happened to pass, this
would result in crashes.
(cherry picked from commit affec03b71)
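A self-contained toy model of the mechanism described in the two tcache
double-free entries above (illustrative only; it is not the
malloc/malloc.c code, which uses the per-thread tcache pointer as the
key and malloc_printerr to abort):
#include <stdio.h>
#include <stddef.h>

#define TCACHE_BINS 64

typedef struct entry
{
  struct entry *next;
  void *key;              /* owning cache; set while the entry is cached */
} entry;

static entry *bins[TCACHE_BINS];
static void *const cache_cookie = bins;  /* stand-in for the per-thread tcache */

static void
cache_put (entry *e, size_t idx)
{
  e->key = cache_cookie;          /* mark as "currently in the cache" */
  e->next = bins[idx];
  bins[idx] = e;
}

static int
cache_free (entry *e, size_t idx)
{
  /* Validate idx before touching bins[idx] (the BZ #23907 part); the key
     is only a cheap filter, the list walk is the definitive check.  */
  if (idx < TCACHE_BINS && e->key == cache_cookie)
    for (entry *tmp = bins[idx]; tmp != NULL; tmp = tmp->next)
      if (tmp == e)
        {
          fprintf (stderr, "double free detected\n");
          return -1;
        }
  cache_put (e, idx);
  return 0;
}

int
main (void)
{
  entry e = { NULL, NULL };
  if (cache_free (&e, 3) != 0)
    return 1;
  return cache_free (&e, 3) == -1 ? 0 : 1;   /* second free must be caught */
}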
This is sometimes useful to determine if a test truly got stuck, or if
it was making progress (logging information to standard output) and
was merely slow to finish.
(cherry picked from commit 35e3fbc451)
Increase timeout from the default 20s to 100s. This test makes close to
20 million syscalls with distribution:
12327675 read
4143204 lseek
929475 close
929471 openat
92817 fstat
1431 write
...
The default timeout assumes each syscall can finish in 1us on average,
which is not true on slow machines.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* libio/tst-readline.c (TIMEOUT): Define.
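The (TIMEOUT) entry above follows the usual glibc test idiom of
overriding the default before the test driver is included; a minimal
sketch, assuming the test uses support/test-driver.c:
/* Allow this test up to 100 seconds instead of the default 20.  */
#define TIMEOUT 100
#include <support/test-driver.c>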
(cherry picked from commit ed643089cd)
Linux 4.19 does not add any new syscalls (some existing ones are added
to more architectures); this patch updates the version number in
syscall-names.list to reflect that it's still current for 4.19.
Tested with build-many-glibcs.py.
* sysdeps/unix/sysv/linux/syscall-names.list: Update kernel
version to 4.19.
(cherry picked from commit 029ad711b8)
[BZ #21716]
* time/tzfile.c (__tzfile_read): Check for memory exhaustion
when registering time zone abbreviations.
(cherry picked from commit e4e4fde51a)
On Thu, Jan 11, 2018 at 3:50 PM, Florian Weimer <fweimer@redhat.com> wrote:
> On 11/07/2017 04:27 PM, Istvan Kurucsai wrote:
>>
>> + next = chunk_at_offset (victim, size);
>
>
> For new code, we prefer declarations with initializers.
Noted.
>> + if (__glibc_unlikely (chunksize_nomask (victim) <= 2 * SIZE_SZ)
>> + || __glibc_unlikely (chunksize_nomask (victim) >
>> av->system_mem))
>> + malloc_printerr("malloc(): invalid size (unsorted)");
>> + if (__glibc_unlikely (chunksize_nomask (next) < 2 * SIZE_SZ)
>> + || __glibc_unlikely (chunksize_nomask (next) >
>> av->system_mem))
>> + malloc_printerr("malloc(): invalid next size (unsorted)");
>> + if (__glibc_unlikely ((prev_size (next) & ~(SIZE_BITS)) !=
>> size))
>> + malloc_printerr("malloc(): mismatching next->prev_size
>> (unsorted)");
>
>
> I think this check is redundant because prev_size (next) and chunksize
> (victim) are loaded from the same memory location.
I'm fairly certain that it compares mchunk_size of victim against
mchunk_prev_size of the next chunk, i.e. the size of victim in its
header and footer.
>> + if (__glibc_unlikely (bck->fd != victim)
>> + || __glibc_unlikely (victim->fd != unsorted_chunks (av)))
>> + malloc_printerr("malloc(): unsorted double linked list
>> corrupted");
>> + if (__glibc_unlikely (prev_inuse(next)))
>> + malloc_printerr("malloc(): invalid next->prev_inuse
>> (unsorted)");
>
>
> There's a missing space after malloc_printerr.
Noted.
> Why do you keep using chunksize_nomask? We never investigated why the
> original code uses it. It may have been an accident.
You are right, I don't think it makes a difference in these checks. So
the size local can be reused for the checks against victim. For next,
leaving it as such avoids the masking operation.
> Again, for non-main arenas, the checks against av->system_mem could be made
> tighter (against the heap size). Maybe you could put the condition into a
> separate inline function?
We could also do a chunk boundary check similar to what I proposed in
the thread for the first patch in the series to be even more strict.
I'll gladly try to implement either but believe that refining these
checks would bring less benefits than in the case of the top chunk.
Intra-arena or intra-heap overlaps would still be doable here with
unsorted chunks and I don't see any way to counter that besides more
generic measures like randomizing allocations and your metadata
encoding patches.
I've attached a revised version with the above comments incorporated
but without the refined checks.
Thanks,
Istvan
From a12d5d40fd7aed5fa10fc444dcb819947b72b315 Mon Sep 17 00:00:00 2001
From: Istvan Kurucsai <pistukem@gmail.com>
Date: Tue, 16 Jan 2018 14:48:16 +0100
Subject: [PATCH v2 1/1] malloc: Additional checks for unsorted bin integrity
I.
Ensure the following properties of chunks encountered during binning:
- victim chunk has reasonable size
- next chunk has reasonable size
- next->prev_size == victim->size
- valid double linked list
- PREV_INUSE of next chunk is unset
* malloc/malloc.c (_int_malloc): Additional binning code checks.
(cherry picked from commit b90ddd08f6)
The House of Force is a well-known technique to exploit a heap
overflow. In essence, this exploit takes three steps:
1. Overwrite the size of the top chunk with a very large value (e.g. -1).
2. Request x bytes from the top chunk. As the size of the top chunk
is corrupted, x can be arbitrarily large and the top chunk will
still be offset by x.
3. The next allocation from the top chunk will thus be controllable.
If we verify the size of the top chunk at step 2, we can stop such an
attack.
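A self-contained toy model of the step-2 check (illustrative; not the
actual _int_malloc code or error message):
#include <stdio.h>
#include <stddef.h>

/* Toy arena: a top chunk larger than everything the arena ever obtained
   from the system cannot be legitimate.  */
struct toy_arena { size_t top_size; size_t system_mem; };

static int
toy_alloc_from_top (struct toy_arena *av, size_t nb)
{
  if (av->top_size > av->system_mem)   /* the new sanity check */
    {
      fprintf (stderr, "corrupted top size\n");
      return -1;
    }
  if (nb > av->top_size)
    return -1;                         /* request does not fit in top */
  av->top_size -= nb;
  return 0;
}

int
main (void)
{
  /* Step 1 of the attack: top size overwritten with (size_t) -1.  */
  struct toy_arena av = { (size_t) -1, 132 * 1024 };
  /* Step 2: a huge request that a corrupted top would otherwise satisfy.  */
  return toy_alloc_from_top (&av, (size_t) 1 << 40) == -1 ? 0 : 1;
}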
(cherry picked from commit 30a17d8c95)
This patch updates sysdeps/unix/sysv/linux/syscall-names.list for
Linux 4.18. The io_pgetevents and rseq syscalls are added to the
kernel on various architectures, so need to be mentioned in this file.
Tested with build-many-glibcs.py.
* sysdeps/unix/sysv/linux/syscall-names.list: Update kernel
version to 4.18.
(io_pgetevents): New syscall.
(rseq): Likewise.
(cherry picked from commit 17b26500f9)
Linkers group input note sections with the same name into one output
note section with the same name. One output note section is placed in
one PT_NOTE segment. Since new linkers merge input .note.gnu.property
sections into one output .note.gnu.property section, there is only
one NT_GNU_PROPERTY_TYPE_0 note in one PT_NOTE segment with new linkers.
Since older linkers treat input .note.gnu.property section as a generic
note section and just concatenate all input .note.gnu.property sections
into one output .note.gnu.property section without merging them, we may
see multiple NT_GNU_PROPERTY_TYPE_0 notes in one PT_NOTE segment with
older linkers.
When an older linker is used to create the program on a CET-enabled OS,
the linker output has a single .note.gnu.property section with multiple
NT_GNU_PROPERTY_TYPE_0 notes, some of which have IBT and SHSTK enable
bits set even if the program isn't CET enabled. Such programs will
crash on CET-enabled machines. This patch updates the note parser:
1. Skip note parsing if a NT_GNU_PROPERTY_TYPE_0 note has been processed.
2. Check multiple NT_GNU_PROPERTY_TYPE_0 notes.
[BZ #23509]
* sysdeps/x86/dl-prop.h (_dl_process_cet_property_note): Skip
note parsing if a NT_GNU_PROPERTY_TYPE_0 note has been processed.
Update the l_cet field when processing NT_GNU_PROPERTY_TYPE_0 note.
Check multiple NT_GNU_PROPERTY_TYPE_0 notes.
* sysdeps/x86/link_map.h (l_cet): Expand to 3 bits. Add
lc_unknown.
(cherry picked from commit d524fa6c35)
The commit 'Disable TSX on some Haswell processors.' (2702856bf4) changed the
default flags for Haswell models. Previously, new models were handled by the
default switch path, which assumed a Core i3/i5/i7 if AVX is available. After
the patch, Haswell models (0x3f, 0x3c, 0x45, 0x46) do not set the flags
Fast_Rep_String, Fast_Unaligned_Load, Fast_Unaligned_Copy, and
Prefer_PMINUB_for_stringop (only the TSX one).
This patch fixes it by disentangling the TSX flag handling from the memory
optimization ones. The strstr case cited in the patch now selects
__strstr_sse2_unaligned as expected for the Haswell CPU.
Checked on x86_64-linux-gnu.
[BZ #23709]
* sysdeps/x86/cpu-features.c (init_cpu_features): Set TSX bits
independently of other flags.
(cherry picked from commit c3d8dc45c9)