Commit b4a5d26d88 (linux: Consolidate sigaction implementation) added
an incorrect kernel_sigaction definition for m68k, matching __NR_sigaction
instead of __NR_rt_sigaction as used by the generic Linux sigaction
implementation. This patch fixes it by using the generic Linux
definition meant for the RT kernel ABI.
Checked the signal tests on emulated m68k-linux-gnu (Aranym). It fixes
the previously failing signal/tst-sigaction, which now works as expected.
Adhemerval Zanella <adhemerval.zanella@linaro.org>
James Clarke <jrtc27@jrtc27.com>
[BZ #23967]
* sysdeps/unix/sysv/linux/kernel_sigaction.h (HAS_SA_RESTORER):
Define if SA_RESTORER is defined.
(kernel_sigaction): Define sa_restorer if HAS_SA_RESTORER is defined.
(SET_SA_RESTORER, RESET_SA_RESTORER): Define iff the macros are not
already defined.
* sysdeps/unix/sysv/linux/m68k/kernel_sigaction.h (SA_RESTORER,
kernel_sigaction, SET_SA_RESTORER, RESET_SA_RESTORER): Remove
definitions.
(HAS_SA_RESTORER): Define.
* sysdeps/unix/sysv/linux/sparc/kernel_sigaction.h (SA_RESTORER,
SET_SA_RESTORER, RESET_SA_RESTORER): Remove definitions.
(HAS_SA_RESTORER): Define.
* sysdeps/unix/sysv/linux/nios2/kernel_sigaction.h: Include the generic
kernel_sigaction.h after defining SET_SA_RESTORER and RESET_SA_RESTORER.
* sysdeps/unix/sysv/linux/powerpc/kernel_sigaction.h: Likewise.
* sysdeps/unix/sysv/linux/x86_64/sigaction.c: Likewise.
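For illustration, the consolidated generic definition now looks roughly
like this (a sketch, not the verbatim header; HAS_SA_RESTORER is defined
by architectures whose kernel ABI carries sa_restorer):

  #include <signal.h>

  /* Generic struct kernel_sigaction, matching the RT kernel ABI used
     with __NR_rt_sigaction (simplified sketch).  */
  struct kernel_sigaction
  {
    void (*k_sa_handler) (int);
    unsigned long sa_flags;
  #ifdef HAS_SA_RESTORER
    void (*sa_restorer) (void);
  #endif
    sigset_t sa_mask;
  };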
(cherry picked from commit 43a45c2d82)
Mark the ra register as undefined in _start, so that unwinding through
main works correctly. Also, don't use a tail call so that ra points after
the call to __libc_start_main, not after the previous call.
(cherry picked from commit 2dd12baa04)
In the read lock function (__pthread_rwlock_rdlock_full) there was a
code path which would fail to reload __readers while waiting for
PTHREAD_RWLOCK_RWAITING to change. This failure to reload __readers
into a local value meant that various conditionals used the old value
of __readers and, with only two threads left, it could result in an
indefinite stall of one of the readers (waiting for PTHREAD_RWLOCK_RWAITING
to go to zero, which would never happen).
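The pattern of the fix, as a standalone C11 sketch (RWAITING is a
hypothetical stand-in for the internal PTHREAD_RWLOCK_RWAITING bit, and
the real code futex-waits instead of spinning):

  #include <stdatomic.h>

  #define RWAITING 0x200  /* hypothetical stand-in flag bit */

  static void
  wait_until_not_rwaiting (_Atomic unsigned int *readers)
  {
    unsigned int r = atomic_load_explicit (readers, memory_order_relaxed);
    while ((r & RWAITING) != 0)
      /* The fix: reload the shared word on every iteration.  Without
         this, the loop and later conditionals act on a stale value and
         can stall indefinitely.  */
      r = atomic_load_explicit (readers, memory_order_relaxed);
  }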
(cherry picked from commit f21e8f8ca4)
Add CFI information about the offset of registers stored in the stack
frame.
[BZ #23614]
* sysdeps/powerpc/powerpc64/addmul_1.S (FUNC): Add CFI offset for
registers saved in the stack frame.
* sysdeps/powerpc/powerpc64/lshift.S (__mpn_lshift): Likewise.
* sysdeps/powerpc/powerpc64/mul_1.S (__mpn_mul_1): Likewise.
Signed-off-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
Reviewed-by: Gabriel F. T. Gomes <gabriel@inconstante.eti.br>
(cherry picked from commit 1d880d4a9b)
This one tests for BZ #23907, where the double-free
check didn't validate the tcache bin bounds before dereferencing
the bin.
[BZ #23907]
* malloc/tst-tcfree3.c: New.
* malloc/Makefile: Add it.
(cherry picked from commit 7c9a7c6836)
There is a data-dependency between the fields of struct l_reloc_result
and the field used as the initialization guard. Users of the guard
expect writes to the structure to be observable when they also observe
the guard initialized. The solution for this problem is to use an acquire
load and a release store, ensuring previous writes to the structure are
observable whenever the guard is observed as initialized.
The previous implementation used DL_FIXUP_VALUE_ADDR (l_reloc_result->addr)
as the initialization guard, making it impossible for some architectures
to load and store it atomically, i.e. hppa and ia64, due to its larger size.
This commit adds an unsigned int to l_reloc_result to be used as the new
initialization guard of the struct, making it possible to load and store
it atomically in all architectures. The fix ensures that the values
observed in l_reloc_result are consistent and do not lead to crashes.
The algorithm is documented in the code in elf/dl-runtime.c
(_dl_profile_fixup). Not all data races have been eliminated.
Tested with build-many-glibcs and on powerpc, powerpc64, and powerpc64le.
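The guard pattern, as a standalone C11 sketch (simplified; it elides the
handling of two threads resolving concurrently, which the real code in
_dl_profile_fixup takes care of):

  #include <stdatomic.h>

  struct reloc_result_sketch        /* stand-in for l_reloc_result */
  {
    void *addr;                     /* plus the other result fields */
    _Atomic unsigned int init;      /* the new guard member */
  };

  static void *
  fixup (struct reloc_result_sketch *r, void *(*resolve) (void))
  {
    /* Acquire: if init reads as 1, every write made before the matching
       release store below is guaranteed to be visible here.  */
    if (atomic_load_explicit (&r->init, memory_order_acquire) == 0)
      {
        r->addr = resolve ();       /* fill in the structure first */
        /* Release: publish the fields written above before the guard.  */
        atomic_store_explicit (&r->init, 1, memory_order_release);
      }
    return r->addr;
  }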
[BZ #23690]
* elf/dl-runtime.c (_dl_profile_fixup): Guarantee memory
modification order when accessing reloc_result->addr.
* include/link.h (reloc_result): Add field init.
* nptl/Makefile (tests): Add tst-audit-threads.
(modules-names): Add tst-audit-threads-mod1 and
tst-audit-threads-mod2.
Add rules to build tst-audit-threads.
* nptl/tst-audit-threads-mod1.c: New file.
* nptl/tst-audit-threads-mod2.c: Likewise.
* nptl/tst-audit-threads.c: Likewise.
* nptl/tst-audit-threads.h: Likewise.
Signed-off-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit e5d262effe)
* malloc/malloc.c (tcache_entry): Add key field.
(tcache_put): Set it.
(tcache_get): Likewise.
(_int_free): Check for double free in tcache.
* malloc/tst-tcfree1.c: New.
* malloc/tst-tcfree2.c: New.
* malloc/Makefile: Run the new tests.
* manual/probes.texi: Document memory_tcache_double_free probe.
* dlfcn/dlerror.c (check_free): Prevent double frees.
(cherry picked from commit bcdaad21d4)
malloc: tcache: Validate tc_idx before checking for double-frees [BZ #23907]
The previous check could read beyond the end of the tcache entry
array. If the e->key == tcache cookie check happened to pass, this
would result in crashes.
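A sketch combining the key field and the bounds check (simplified
stand-ins for the internal types in malloc/malloc.c, not the verbatim
code):

  #include <stddef.h>

  #define TCACHE_MAX_BINS 64

  typedef struct tcache_entry
  {
    struct tcache_entry *next;
    /* tcache_put stores the owning tcache here and tcache_get clears
       it, so a chunk already in tcache can be recognized when it is
       freed a second time.  */
    struct tcache_perthread_struct *key;
  } tcache_entry;

  typedef struct tcache_perthread_struct
  {
    char counts[TCACHE_MAX_BINS];
    tcache_entry *entries[TCACHE_MAX_BINS];
  } tcache_perthread_struct;

  static int
  tcache_double_free (tcache_perthread_struct *tcache,
                      tcache_entry *e, size_t tc_idx)
  {
    /* The BZ #23907 fix: validate tc_idx before touching entries[];
       chunks above the tcache size range would otherwise index past
       the end of the array.  */
    if (tc_idx >= TCACHE_MAX_BINS || tcache->entries[tc_idx] == NULL)
      return 0;
    if (e->key != tcache)
      return 0;
    /* The key matches, but that could be accidental reuse of the
       memory, so walk the bin to confirm.  */
    for (tcache_entry *p = tcache->entries[tc_idx]; p != NULL; p = p->next)
      if (p == e)
        return 1;
    return 0;
  }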
(cherry picked from commit affec03b71)
This is sometimes useful to determine if a test truly got stuck, or if
it was making progress (logging information to standard output) and
was merely slow to finish.
(cherry picked from commit 35e3fbc451)
Increase timeout from the default 20s to 100s. This test makes close to
20 million syscalls with distribution:
12327675 read
4143204 lseek
929475 close
929471 openat
92817 fstat
1431 write
...
The default timeout assumes each syscall can finish in 1us on average,
which is not true on slow machines.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* libio/tst-readline.c (TIMEOUT): Define.
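In glibc's test harness this is the usual one-liner; sketch of the
convention (the driver picks up TIMEOUT when it is defined before
inclusion):

  /* In libio/tst-readline.c, before the test driver is included.  */
  #define TIMEOUT 100
  #include <support/test-driver.c>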
(cherry picked from commit ed643089cd)
Linux 4.19 does not add any new syscalls (some existing ones are added
to more architectures); this patch updates the version number in
syscall-names.list to reflect that it's still current for 4.19.
Tested with build-many-glibcs.py.
* sysdeps/unix/sysv/linux/syscall-names.list: Update kernel
version to 4.19.
(cherry picked from commit 029ad711b8)
[BZ #21716]
* time/tzfile.c (__tzfile_read): Check for memory exhaustion
when registering time zone abbreviations.
(cherry picked from commit e4e4fde51a)
On Thu, Jan 11, 2018 at 3:50 PM, Florian Weimer <fweimer@redhat.com> wrote:
> On 11/07/2017 04:27 PM, Istvan Kurucsai wrote:
>>
>> + next = chunk_at_offset (victim, size);
>
>
> For new code, we prefer declarations with initializers.
Noted.
>> +  if (__glibc_unlikely (chunksize_nomask (victim) <= 2 * SIZE_SZ)
>> +      || __glibc_unlikely (chunksize_nomask (victim) > av->system_mem))
>> +    malloc_printerr("malloc(): invalid size (unsorted)");
>> +  if (__glibc_unlikely (chunksize_nomask (next) < 2 * SIZE_SZ)
>> +      || __glibc_unlikely (chunksize_nomask (next) > av->system_mem))
>> +    malloc_printerr("malloc(): invalid next size (unsorted)");
>> +  if (__glibc_unlikely ((prev_size (next) & ~(SIZE_BITS)) != size))
>> +    malloc_printerr("malloc(): mismatching next->prev_size (unsorted)");
>
>
> I think this check is redundant because prev_size (next) and chunksize
> (victim) are loaded from the same memory location.
I'm fairly certain that it compares mchunk_size of victim against
mchunk_prev_size of the next chunk, i.e. the size of victim in its
header and footer.
>> +  if (__glibc_unlikely (bck->fd != victim)
>> +      || __glibc_unlikely (victim->fd != unsorted_chunks (av)))
>> +    malloc_printerr("malloc(): unsorted double linked list corrupted");
>> +  if (__glibc_unlikely (prev_inuse(next)))
>> +    malloc_printerr("malloc(): invalid next->prev_inuse (unsorted)");
>
>
> There's a missing space after malloc_printerr.
Noted.
> Why do you keep using chunksize_nomask? We never investigated why the
> original code uses it. It may have been an accident.
You are right, I don't think it makes a difference in these checks. So
the size local can be reused for the checks against victim. For next,
leaving it as such avoids the masking operation.
> Again, for non-main arenas, the checks against av->system_mem could be made
> tighter (against the heap size). Maybe you could put the condition into a
> separate inline function?
We could also do a chunk boundary check similar to what I proposed in
the thread for the first patch in the series to be even more strict.
I'll gladly try to implement either but believe that refining these
checks would bring less benefits than in the case of the top chunk.
Intra-arena or intra-heap overlaps would still be doable here with
unsorted chunks and I don't see any way to counter that besides more
generic measures like randomizing allocations and your metadata
encoding patches.
I've attached a revised version with the above comments incorporated
but without the refined checks.
Thanks,
Istvan
From a12d5d40fd7aed5fa10fc444dcb819947b72b315 Mon Sep 17 00:00:00 2001
From: Istvan Kurucsai <pistukem@gmail.com>
Date: Tue, 16 Jan 2018 14:48:16 +0100
Subject: [PATCH v2 1/1] malloc: Additional checks for unsorted bin integrity I.
Ensure the following properties of chunks encountered during binning:
- victim chunk has reasonable size
- next chunk has reasonable size
- next->prev_size == victim->size
- valid double linked list
- PREV_INUSE of next chunk is unset
* malloc/malloc.c (_int_malloc): Additional binning code checks.
(cherry picked from commit b90ddd08f6)
The House of Force is a well-known technique to exploit heap
overflow. In essence, this exploit takes three steps:
1. Overwrite the size of the top chunk with a very large value (e.g. -1).
2. Request x bytes from the top chunk. As the size of the top chunk
is corrupted, x can be arbitrarily large and the top chunk will
still be offset by x.
3. The next allocation from the top chunk will thus be controllable.
If we verify the size of the top chunk at step 2, we can stop such an attack.
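The added step-2 check, sketched (close to, but not necessarily
verbatim, the code on the use-top path of _int_malloc):

  /* Servicing a request from the top chunk: a size larger than all
     memory the arena ever obtained must be corruption.  */
  victim = av->top;
  size = chunksize (victim);
  if (__glibc_unlikely (size > av->system_mem))
    malloc_printerr ("malloc(): corrupted top size");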
(cherry picked from commit 30a17d8c95)
This patch updates sysdeps/unix/sysv/linux/syscall-names.list for
Linux 4.18. The io_pgetevents and rseq syscalls are added to the
kernel on various architectures, so need to be mentioned in this file.
Tested with build-many-glibcs.py.
* sysdeps/unix/sysv/linux/syscall-names.list: Update kernel
version to 4.18.
(io_pgetevents): New syscall.
(rseq): Likewise.
(cherry picked from commit 17b26500f9)
Linkers group input note sections with the same name into one output
note section with the same name. One output note section is placed in
one PT_NOTE segment. Since new linkers merge input .note.gnu.property
sections into one output .note.gnu.property section, there is only
one NT_GNU_PROPERTY_TYPE_0 note in one PT_NOTE segment with new linkers.
Since older linkers treat input .note.gnu.property section as a generic
note section and just concatenate all input .note.gnu.property sections
into one output .note.gnu.property section without merging them, we may
see multiple NT_GNU_PROPERTY_TYPE_0 notes in one PT_NOTE segment with
older linkers.
When an older linker is used to create the program on a CET-enabled OS,
the linker output has a single .note.gnu.property section with multiple
NT_GNU_PROPERTY_TYPE_0 notes, some of which have the IBT and SHSTK enable
bits set even if the program isn't CET enabled. Such programs will
crash on CET-enabled machines. This patch updates the note parser:
1. Skip note parsing if a NT_GNU_PROPERTY_TYPE_0 note has been processed.
2. Check multiple NT_GNU_PROPERTY_TYPE_0 notes.
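As an illustration, a standalone sketch of such a note walk (64-bit
layout, 4-byte alignment assumed; the real parser lives in
sysdeps/x86/dl-prop.h and walk_property_notes is a hypothetical name):

  #include <elf.h>
  #include <stddef.h>
  #include <string.h>

  static void
  walk_property_notes (const Elf64_Nhdr *note, size_t size)
  {
    while (size >= sizeof (Elf64_Nhdr))
      {
        size_t namesz = (note->n_namesz + 3) & ~(size_t) 3;
        size_t descsz = (note->n_descsz + 3) & ~(size_t) 3;
        size_t entry = sizeof (Elf64_Nhdr) + namesz + descsz;
        if (entry > size)
          break;
        if (note->n_type == NT_GNU_PROPERTY_TYPE_0
            && note->n_namesz == sizeof "GNU"
            && memcmp (note + 1, "GNU", sizeof "GNU") == 0)
          {
            /* Process this note, then keep scanning: with older
               linkers the same segment may hold further
               NT_GNU_PROPERTY_TYPE_0 notes.  */
          }
        note = (const Elf64_Nhdr *) ((const char *) note + entry);
        size -= entry;
      }
  }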
[BZ #23509]
* sysdeps/x86/dl-prop.h (_dl_process_cet_property_note): Skip
note parsing if a NT_GNU_PROPERTY_TYPE_0 note has been processed.
Update the l_cet field when processing NT_GNU_PROPERTY_TYPE_0 note.
Check multiple NT_GNU_PROPERTY_TYPE_0 notes.
* sysdeps/x86/link_map.h (l_cet): Expand to 3 bits.  Add
lc_unknown.
(cherry picked from commit d524fa6c35)
The commit 'Disable TSX on some Haswell processors.' (2702856bf4) changed the
default flags for Haswell models. Previously, new models were handled by the
default switch path, which assumed a Core i3/i5/i7 if AVX is available. After
the patch, Haswell models (0x3f, 0x3c, 0x45, 0x46) do not set the flags
Fast_Rep_String, Fast_Unaligned_Load, Fast_Unaligned_Copy, and
Prefer_PMINUB_for_stringop (only the TSX flag is set).
This patch fixes it by disentangling the TSX flag handling from the memory
optimization ones. The strstr case cited in the patch now selects
__strstr_sse2_unaligned as expected for the Haswell cpu.
Checked on x86_64-linux-gnu.
[BZ #23709]
* sysdeps/x86/cpu-features.c (init_cpu_features): Set TSX bits
independently of other flags.
(cherry picked from commit c3d8dc45c9)
On systems without enough random-access memory, stdlib/test-bz22786
will go deeply into swap and time out, even with a substantial
TIMEOUTFACTOR. This commit adds a facility to construct repeating
strings with alias mappings, so that the requirement for physical
memory is reduced, and uses it in stdlib/test-bz22786.
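The facility is the support_blob_repeat interface; a hedged usage
sketch (signatures as found in support/blob_repeat.h, to the best of my
recollection):

  #include <support/blob_repeat.h>
  #include <support/check.h>

  static int
  do_test (void)
  {
    /* Ask for many copies of a small element; the allocation is backed
       by a repeating alias mapping instead of that much physical RAM.  */
    struct support_blob_repeat repeat
      = support_blob_repeat_allocate ("abc", 4, 1000 * 1000 * 1000);
    if (repeat.start == NULL)
      FAIL_UNSUPPORTED ("cannot allocate repeated blob");
    /* ... operate on repeat.start / repeat.size ... */
    support_blob_repeat_free (&repeat);
    return 0;
  }

  #include <support/test-driver.c>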
(cherry picked from commit f5e7e95921)
The test tries to allocate more than 2^31 bytes, which will always fail on
s390 as it has at most 2^31 bytes of address space.
Before commit 6c3a8a9d86, this test returned
unsupported if malloc failed. This patch re-enables that behaviour.
Furthermore, support_delete_temp_files() failed to remove the temp directory
in this case, as it was not empty due to the created symlink.
Thus the creation of the symlink is moved after the malloc call.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
ChangeLog:
* stdlib/test-bz22786.c (do_test): Return EXIT_UNSUPPORTED
if malloc fails.
(cherry picked from commit 3bad2358d6)
When new symbol versions were introduced without SVID compatible
error handling, the exp2f, log2f and powf symbols were accidentally
removed from the ia64 libm.a. The regression was introduced by
the commits
f5f0f52651
New expf and exp2f version without SVID compat wrapper
72d3d28108
New symbol version for logf, log2f and powf without SVID compat
With WEAK_LIBM_ENTRY(foo), there is a hidden __foo and a weak foo
symbol definition in both SHARED and !SHARED builds.
[BZ #23822]
* sysdeps/ia64/fpu/e_exp2f.S (exp2f): Use WEAK_LIBM_ENTRY.
* sysdeps/ia64/fpu/e_log2f.S (log2f): Likewise.
* sysdeps/ia64/fpu/e_powf.S (powf): Likewise.
(cherry picked from commit ba5b14c761)
We can use long int on sparcv9, but on sparc64, we must match the int
type used by the kernel (and not long int, as in POSIX).
(cherry picked from commit 7c5e34d7f1)
The race leads either to pthread_mutex_destroy returning EBUSY
or triggering an assertion (see the description in Bugzilla).
This patch fixes the race by ensuring that the elision path is
used in all cases if elision is enabled by the GLIBC_TUNABLES framework.
The __kind variable in struct __pthread_mutex_s is accessed concurrently.
Therefore we are now using the atomic macros.
The new testcase tst-mutex10 triggers the race on s390x and Intel.
Presumably also on POWER, but I don't have access to a POWER machine
with lock elision. At least the code for POWER is the same as on the other
two architectures.
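The access pattern, sketched with the relaxed-MO macros (illustrative,
not the verbatim diff; PTHREAD_MUTEX_ELISION_NP is the internal flag
bit the elision path sets):

  /* Readers of __kind now go through a relaxed atomic load ...  */
  int kind = atomic_load_relaxed (&mutex->__data.__kind);
  /* ... and FORCE_ELISION upgrades the type with a relaxed atomic
     store, so concurrent accesses never mix plain and atomic writes
     and the elision path, once enabled, is observed consistently.  */
  atomic_store_relaxed (&mutex->__data.__kind,
                        kind | PTHREAD_MUTEX_ELISION_NP);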
ChangeLog:
[BZ #23275]
* nptl/tst-mutex10.c: New File.
* nptl/Makefile (tests): Add tst-mutex10.
(tst-mutex10-ENV): New variable.
* sysdeps/unix/sysv/linux/s390/force-elision.h (FORCE_ELISION):
Ensure that elision path is used if elision is available.
* sysdeps/unix/sysv/linux/powerpc/force-elision.h (FORCE_ELISION):
Likewise.
* sysdeps/unix/sysv/linux/x86/force-elision.h (FORCE_ELISION):
Likewise.
* nptl/pthreadP.h (PTHREAD_MUTEX_TYPE, PTHREAD_MUTEX_TYPE_ELISION)
(PTHREAD_MUTEX_PSHARED): Use atomic_load_relaxed.
* nptl/pthread_mutex_consistent.c (pthread_mutex_consistent): Likewise.
* nptl/pthread_mutex_getprioceiling.c (pthread_mutex_getprioceiling):
Likewise.
* nptl/pthread_mutex_lock.c (__pthread_mutex_lock_full)
(__pthread_mutex_cond_lock_adjust): Likewise.
* nptl/pthread_mutex_setprioceiling.c (pthread_mutex_setprioceiling):
Likewise.
* nptl/pthread_mutex_timedlock.c (__pthread_mutex_timedlock): Likewise.
* nptl/pthread_mutex_trylock.c (__pthread_mutex_trylock): Likewise.
* nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_full): Likewise.
* sysdeps/nptl/bits/thread-shared-types.h (struct __pthread_mutex_s):
Add comments.
* nptl/pthread_mutex_destroy.c (__pthread_mutex_destroy):
Use atomic_load_relaxed and atomic_store_relaxed.
* nptl/pthread_mutex_init.c (__pthread_mutex_init):
Use atomic_store_relaxed.
(cherry picked from commit 403b4feb22)
When elf_machine_runtime_setup is called to set up the resolver, it should
use _dl_runtime_resolve_shstk or _dl_runtime_profile_shstk if SHSTK is
enabled by kernel.
Tested on i686 with and without --enable-cet as well as on CET emulator
with --enable-cet.
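Sketch of the selection in elf_machine_runtime_setup (paraphrased, not
verbatim; shstk_enabled stands for the check whether the kernel turned
SHSTK on, and the profiling case picks _dl_runtime_profile_shstk the
same way):

  extern void _dl_runtime_resolve (Elf32_Word) attribute_hidden;
  extern void _dl_runtime_resolve_shstk (Elf32_Word) attribute_hidden;

  /* The plain resolver would fault once a shadow stack is active.  */
  if (shstk_enabled)
    got[2] = (Elf32_Addr) &_dl_runtime_resolve_shstk;
  else
    got[2] = (Elf32_Addr) &_dl_runtime_resolve;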
[BZ #23716]
* sysdeps/i386/dl-cet.c: Removed.
* sysdeps/i386/dl-machine.h (_dl_runtime_resolve_shstk): New
prototype.
(_dl_runtime_profile_shstk): Likewise.
(elf_machine_runtime_setup): Use _dl_runtime_profile_shstk or
_dl_runtime_resolve_shstk if SHSTK is enabled by kernel.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
(cherry picked from commit 7b1f940676)
Although CLDR says otherwise, it is confirmed by Oqaasileriffik, the
official Greenlandic language regulator, that this change is correct.
[BZ #20209]
* localedata/locales/kl_GL (abday): Fix spelling of Sun (Sunday),
should be "sap" rather than "sab".
(day): Fix spelling of Sunday, should be "sapaat" rather than
"sabaat".
(cherry picked from commit dae3ed958c)
The fallback code of the Linux wrapper for preadv2/pwritev2 executes
regardless of the errno code from preadv2, instead of only in the case
where the syscall is not supported.
This fixes it by calling the fallback code iff errno is ENOSYS. The
patch also adds tests for both an invalid file descriptor and an invalid
iov_len and vector count.
The only discrepancy between preadv2 and the fallback code regarding
error reporting is when invalid flags are used. The fallback code
bails out earlier with ENOTSUP instead of EINVAL/EBADF when the syscall
is used.
Checked on x86_64-linux-gnu on a 4.4.0 and 4.15.0 kernel.
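A sketch close to the fixed wrapper (glibc-internal pieces such as
SYSCALL_CANCEL, LO_HI_LONG and __set_errno assumed; not the verbatim
sysdeps/unix/sysv/linux/preadv2.c):

  ssize_t
  preadv2 (int fd, const struct iovec *vector, int count, off_t offset,
           int flags)
  {
  #ifdef __NR_preadv2
    ssize_t result = SYSCALL_CANCEL (preadv2, fd, vector, count,
                                     LO_HI_LONG (offset), flags);
    /* Fall back iff the kernel lacks the syscall; any other errno
       (EBADF, EINVAL, ...) is reported to the caller as-is.  */
    if (result >= 0 || errno != ENOSYS)
      return result;
  #endif
    /* The fallback cannot emulate any flags, hence the ENOTSUP
       discrepancy described above.  */
    if (flags != 0)
      {
        __set_errno (ENOTSUP);
        return -1;
      }
    return preadv (fd, vector, count, offset);
  }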
[BZ #23579]
* misc/tst-preadvwritev2-common.c (do_test_with_invalid_fd): New
test.
* misc/tst-preadvwritev2.c, misc/tst-preadvwritev64v2.c (do_test):
Call do_test_with_invalid_fd.
* sysdeps/unix/sysv/linux/preadv2.c (preadv2): Use fallback code iff
errno is ENOSYS.
* sysdeps/unix/sysv/linux/preadv64v2.c (preadv64v2): Likewise.
* sysdeps/unix/sysv/linux/pwritev2.c (pwritev2): Likewise.
* sysdeps/unix/sysv/linux/pwritev64v2.c (pwritev64v2): Likewise.
(cherry picked from commit 7a16bdbb9f)
The function f1a, executed on a stack of size 32k, allocates an object of
size 32k on the stack. Make the stack variables static to reduce
excessive stack usage.
(cherry picked from commit f841c97e51)
Wrap the _start function with ENTRY and END to insert ENDBR32 at
function entry when CET is enabled. Since _start now includes CFI,
without "cfi_undefined (eip)" the unwinder may not terminate at _start
and we will get:
Program received signal SIGSEGV, Segmentation fault.
0xf7dc661e in ?? () from /lib/libgcc_s.so.1
Missing separate debuginfos, use: dnf debuginfo-install libgcc-8.2.1-3.0.fc28.i686
(gdb) bt
#0 0xf7dc661e in ?? () from /lib/libgcc_s.so.1
#1 0xf7dc7c18 in _Unwind_Backtrace () from /lib/libgcc_s.so.1
#2 0xf7f0d809 in __GI___backtrace (array=array@entry=0xffffc7d0,
size=size@entry=20) at ../sysdeps/i386/backtrace.c:127
#3 0x08049254 in compare (p1=p1@entry=0xffffcad0, p2=p2@entry=0xffffcad4)
at backtrace-tst.c:12
#4 0xf7e2a28c in msort_with_tmp (p=p@entry=0xffffca5c, b=b@entry=0xffffcad0,
n=n@entry=2) at msort.c:65
#5 0xf7e29f64 in msort_with_tmp (n=2, b=0xffffcad0, p=0xffffca5c)
at msort.c:53
#6 msort_with_tmp (p=p@entry=0xffffca5c, b=b@entry=0xffffcad0, n=n@entry=5)
at msort.c:53
#7 0xf7e29f64 in msort_with_tmp (n=5, b=0xffffcad0, p=0xffffca5c)
at msort.c:53
#8 msort_with_tmp (p=p@entry=0xffffca5c, b=b@entry=0xffffcad0, n=n@entry=10)
at msort.c:53
#9 0xf7e29f64 in msort_with_tmp (n=10, b=0xffffcad0, p=0xffffca5c)
at msort.c:53
#10 msort_with_tmp (p=p@entry=0xffffca5c, b=b@entry=0xffffcad0, n=n@entry=20)
at msort.c:53
#11 0xf7e2a5b6 in msort_with_tmp (n=20, b=0xffffcad0, p=0xffffca5c)
at msort.c:297
#12 __GI___qsort_r (b=b@entry=0xffffcad0, n=n@entry=20, s=s@entry=4,
cmp=cmp@entry=0x8049230 <compare>, arg=arg@entry=0x0) at msort.c:297
#13 0xf7e2a84d in __GI_qsort (b=b@entry=0xffffcad0, n=n@entry=20, s=s@entry=4,
cmp=cmp@entry=0x8049230 <compare>) at msort.c:308
#14 0x080490f6 in main (argc=2, argv=0xffffcbd4) at backtrace-tst.c:39
FAIL: debug/backtrace-tst
[BZ #23606]
* sysdeps/i386/start.S: Include <sysdep.h>.
(_start): Use ENTRY/END to insert ENDBR32 at entry when CET is
enabled. Add cfi_undefined (eip).
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
(cherry picked from commit 5a274db4ea)
The generic strstr in GLIBC 2.28 fails to match huge needles. The optimized
AVAILABLE macro reads ahead a large fixed amount to reduce the overhead of
repeatedly checking for the end of the string. However, if the needle length
is larger than this, two_way_long_needle may mistake this for the end
of the string and return NULL. This is fixed by adding the needle length to
the amount to read ahead.
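The repaired macro, sketched for string/strstr.c (the exact slack
beyond the needle length is an assumption of this sketch; the essential
change is the added n_l term):

  /* Ensure at least n_l bytes past position j are known readable
     before the matcher inspects them.  */
  #define AVAILABLE(h, h_l, j, n_l)                                  \
    (((j) + (n_l) <= (h_l))                                          \
     || ((h_l) += __strnlen ((void *) ((h) + (h_l)), (n_l) + 512),   \
         (j) + (n_l) <= (h_l)))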
[BZ #23637]
* string/test-strstr.c (pr23637): New function.
(test_main): Add tests with longer needles.
* string/strcasestr.c (AVAILABLE): Fix readahead distance.
* string/strstr.c (AVAILABLE): Likewise.
(cherry picked from commit 83a552b0bb)
If the compiler reduces the stack usage in function f1 before calling
into function f2, then when we swapcontext back to f1 and continue
execution we may overwrite registers that were spilled to the stack
while f2 was executing. Later when we return to f2 the corrupt
registers will be reloaded from the stack and the test will crash. This
was most commonly observed on i686 with __x86.get_pc_thunk.dx and
needing to save and restore $edx. Overall, i686 has few registers and
spilling to the stack is bound to happen; therefore the solution for
making this test robust is to split function f1 into two parts, f1a and
f1b, and allocate f1b its own stack such that subsequent execution does
not overwrite the stack in use by function f2.
Tested on i686 and x86_64.
Signed-off-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 791b350dc7)