It's necessary to stub out __libc_disable_asynccancel and
__libc_enable_asynccancel via rtld-stubbed-symbols because the new
direct references to the unwinder result in symbol conflicts when the
rtld exception handling from libc is linked in during the construction
of librtld.map.
unwind-forcedunwind.c is merged into unwind-resume.c. libc now needs
the functions that were previously only used in libpthread.
The GLIBC_PRIVATE exports of __libc_longjmp and __libc_siglongjmp are
no longer needed, so switch them to hidden symbols.
The symbol __pthread_unwind_next has been moved using
scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The fork generation counter, __fork_generation, is moved as well;
this eliminates the need for __fork_generation_pointer.
call_once remains in libpthread and calls the exported __pthread_once
symbol.
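For illustration, the remaining forwarder is essentially the
following (a sketch; the cast assumes once_flag is layout-compatible
with pthread_once_t):

  void
  call_once (once_flag *flag, void (*func) (void))
  {
    /* Forward to the implementation now exported from libc.  */
    __pthread_once ((pthread_once_t *) flag, func);
  }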
pthread_once and __pthread_once have been moved using
scripts/move-symbol-to-libc.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This internal symbol is used as part of the longjmp implementation.
Rename the file from nptl/pt-cleanup.c to nptl/pthread_cleanup_upto.c
so that the pt-* files remain restricted to libpthread.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The definitions in libc are sufficient; the forwarders are no longer
needed.
The symbols have been moved using scripts/move-symbol-to-libc.py.
s390-linux-gnu and s390x-linux-gnu need a new version placeholder
to keep the GLIBC_2.19 symbol version in libpthread.
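The placeholder follows the usual compat-symbol pattern, roughly (a
sketch; the placeholder symbol names are assumptions):

  #if SHLIB_COMPAT (libpthread, GLIBC_2_19, GLIBC_2_20)
  void
  __libpthread_version_placeholder_2 (void)
  {
  }
  compat_symbol (libpthread, __libpthread_version_placeholder_2,
                 __libpthread_version_placeholder, GLIBC_2_19);
  #endif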
Tested on i386-linux-gnu, powerpc64le-linux-gnu, s390x-linux-gnu,
x86_64-linux-gnu. Built with build-many-glibcs.py.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This affects _pthread_cleanup_pop, _pthread_cleanup_pop_restore,
_pthread_cleanup_push, _pthread_cleanup_push_defer. The symbols
have been moved using scripts/move-symbol-to-libc.py.
No new symbol versions are added because the symbols are turned into
compatibility symbols at the same time.
__pthread_cleanup_pop and __pthread_cleanup_push are added as
GLIBC_PRIVATE symbols because they are also used internally, for
glibc's own cancellation handling.
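For illustration, internal callers pair the two like this (a sketch
of the buffer-based calling convention):

  struct _pthread_cleanup_buffer buf;
  __pthread_cleanup_push (&buf, free, ptr);
  /* ... code that may be cancelled ...  */
  __pthread_cleanup_pop (&buf, 0);   /* 0: do not run the handler.  */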
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
It is still used internally. Since unwinding is now available
unconditionally, avoid indirect calls through function pointers loaded
from the stack by inlining the non-cancellation cleanup code. This
avoids a regression in security hardening.
The out-of-line __libc_cleanup_routine implementation is no longer
needed because the inline definition is now static __always_inline.
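Roughly, the inline definition now has this shape (a sketch; the
field names follow struct __pthread_cleanup_frame):

  static __always_inline void
  __libc_cleanup_routine (struct __pthread_cleanup_frame *f)
  {
    if (f->__do_it)
      f->__cancel_routine (f->__cancel_arg);
  }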
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Remove generic tlsdesc code related to lazy tlsdesc processing since
lazy tlsdesc relocation is no longer supported. This includes removing
GL(dl_load_lock) from _dl_make_tlsdesc_dynamic which is only called at
load time when that lock is already held.
A documentation comment is also added.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
libthread_db is loaded once GDB encounters libpthread, and at this
point, ld.so may not have been processed by GDB yet. As a result,
_rtld_global cannot be accessed by regular means from libthread_db.
To make this work until GDB can be fixed, access _rtld_global through
a pointer stored in libpthread.
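A minimal sketch of the idea (the variable name is an assumption):

  /* In libpthread; 'used' keeps the variable alive so libthread_db
     can look it up and read _rtld_global's address through it.  */
  static const void *const __nptl_rtld_global
    __attribute__ ((used)) = &_rtld_global;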
The new test does not reproduce bug 27744 with
--disable-hardcoded-path-in-tests, but is still a valid smoke test.
With --enable-hardcoded-path-in-tests, it is necessary to avoid
add-symbol-file because this can tickle a GDB bug.
Fixes commit 1daccf403b ("nptl: Move
stack list variables into _rtld_global").
Tested-by: Emil Velikov <emil.velikov@collabora.com>
No bug. This commit optimizes strlen-avx2.S. The optimizations are
mostly small things but they add up to roughly 10-30% performance
improvement for strlen. The results for strnlen are a bit more
ambiguous. test-strlen, test-strnlen, test-wcslen, and test-wcsnlen
are all passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
No bug. This commit optimizes strlen-evex.S. The
optimizations are mostly small things but they add up to roughly
10-30% performance improvement for strlen. The results for strnlen are
a bit more ambiguous. test-strlen, test-strnlen, test-wcslen, and
test-wcsnlen are all passing.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
No bug. This commit adds test cases and benchmarks for page cross and
for memset to the end of the page without crossing. In addition, this
commit adds sentinels at the start/end of tstbuf in test-memset.c to
test for overwrites.
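For illustration (buffer layout and guard value are arbitrary), the
sentinels work roughly like this:

  unsigned char tstbuf[LEN + 2];
  tstbuf[0] = 0x5a;              /* sentinel before the region */
  tstbuf[LEN + 1] = 0x5a;        /* sentinel after the region */
  memset (tstbuf + 1, c, LEN);
  /* Any write outside [1, LEN] flips a sentinel.  */
  TEST_VERIFY (tstbuf[0] == 0x5a && tstbuf[LEN + 1] == 0x5a);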
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
No bug. This commit adds an optimized case for the less_vec memset
path that uses the avx512vl/avx512bw mask store, avoiding excessive
branches. test-memset and test-wmemset are passing.
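The idea in C intrinsics form (a sketch, not the actual asm):

  #include <immintrin.h>

  /* For n < 16: a single masked store, no branching on n.  */
  static void
  memset_small (char *dst, int c, unsigned int n)
  {
    __mmask16 m = (__mmask16) ((1u << n) - 1);  /* low n bytes valid */
    _mm_mask_storeu_epi8 (dst, m, _mm_set1_epi8 ((char) c));
  }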
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Since strchr-avx2.S updated by
commit 1f745ecc21
Author: noah <goldstein.w.n@gmail.com>
Date: Wed Feb 3 00:38:59 2021 -0500
x86-64: Refactor and improve performance of strchr-avx2.S
uses sarx:
c4 e2 72 f7 c0 sarx %ecx,%eax,%eax
for strchr-avx2 family functions, require BMI2 in ifunc-impl-list.c and
ifunc-avx2.h.
Since __strlen_evex and __strnlen_evex added by
commit 1fd8c163a8
Author: H.J. Lu <hjl.tools@gmail.com>
Date: Fri Mar 5 06:24:52 2021 -0800
x86-64: Add ifunc-avx2.h functions with 256-bit EVEX
use sarx:
c4 e2 6a f7 c0 sarx %edx,%eax,%eax
require BMI2 for __strlen_evex and __strnlen_evex in ifunc-impl-list.c.
ifunc-avx2.h already requires BMI2 for EVEX implementation.
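In the ifunc selectors this amounts to also testing the BMI2 bit,
roughly (a sketch following the CPU_FEATURE_USABLE_P convention):

  const struct cpu_features *cpu_features = __get_cpu_features ();

  if (CPU_FEATURE_USABLE_P (cpu_features, AVX2)
      && CPU_FEATURE_USABLE_P (cpu_features, BMI2))
    return OPTIMIZE (avx2);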
The benchtests JSON schema allows {function {variant}} categorization
of results, whereas the pthread-locks tests had {function {variant
{subvariant}}}, which broke validation. Fix that by serializing the
subvariants as variant-subvariant. Also update the schema to
recognize the new benchmark attributes after fixing the naming
conventions.
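For example (the names here are illustrative): a result previously
nested as function pthread_mutex_lock -> variant testA -> subvariant
filler is now serialized under the single variant testA-filler.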
No bug. This commit expands the range of tests / benchmarks for
memmove and memcpy. The test expansion is mostly in the vein of
increasing the maximum size, increasing the number of unique
alignments tested, and testing both source < destination and vice
versa. The benchmark expansion is just to increase the number of
unique alignments. test-memcpy, test-memccpy, test-mempcpy,
test-memmove, and tst-memmove-overflow all pass.
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
This ensures that variables defined with
text_set_element/data_set_element/bss_set_element will be retained by
the linker.
Note: 'used' and 'retain' are orthogonal: 'used' makes sure the variable
will not be optimized out; 'retain' prevents section garbage collection
if the linker supports SHF_GNU_RETAIN.
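For illustration, the combination looks roughly like this (the macro
and variable names are placeholders):

  #if defined __has_attribute && __has_attribute (retain)
  # define attribute_used_retain __attribute__ ((used, retain))
  #else
  # define attribute_used_retain __attribute__ ((used))
  #endif

  extern void cleanup_fn (void);   /* hypothetical set member */
  /* 'used' keeps the variable; 'retain' (SHF_GNU_RETAIN) keeps its
     section under ld --gc-sections.  */
  static void (*const set_element) (void) attribute_used_retain
    = cleanup_fn;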
GNU ld 2.37 and LLD 13 will support -z start-stop-gc, which allows C
identifier name sections to be GCed even if there are live
__start_/__stop_ references.
Without the change, there are some static linking problems, e.g.
_IO_cleanup (libio/genops.c) may be discarded by ld --gc-sections, so
stdout is not flushed on exit.
Note: GCC may warn that the 'retain' attribute is ignored even though
__has_attribute(retain) is 1 (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99587).
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
No bug. This commit updates the large memcpy case (no overlap). The
update is to perform memcpy on either 2 or 4 contiguous pages at
once. This 1) helps to alleviate the effects of false memory aliasing
when destination and source have a close 4k alignment and 2) In most
cases and for most DRAM units is a modestly more efficient access
pattern. These changes are a clear performance improvement for
VEC_SIZE =16/32, though more ambiguous for VEC_SIZE=64. test-memcpy,
test-memccpy, test-mempcpy, test-memmove, and tst-memmove-overflow all
pass.
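In C terms the access pattern is roughly the following (an
illustrative sketch, not the actual vector code; CHUNK stands for the
vector width):

  /* Copy two contiguous 4 KiB pages with interleaved streams so that
     the accesses within one iteration fall into differently
     4k-aliased cache sets.  */
  for (size_t off = 0; off < 4096; off += CHUNK)
    {
      memcpy (dst + off, src + off, CHUNK);
      memcpy (dst + 4096 + off, src + 4096 + off, CHUNK);
    }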
Signed-off-by: Noah Goldstein <goldstein.w.n@gmail.com>
Some registers that can be clobbered by the kernel during a syscall are not
listed on the clobbers list in sysdeps/unix/sysv/linux/powerpc/sysdep.h.
For syscalls using sc:
- XER is zeroed by the kernel on exit
For syscalls using scv:
- XER is zeroed by the kernel on exit
- Unlike the sc case, most CR fields can be clobbered (according to
  the ELF ABI and the Linux kernel's syscall ABI for powerpc,
  linux/Documentation/powerpc/syscall64-abi.rst)
The same should apply to vsyscalls, which effectively execute a
function call but do not currently list these registers as clobbers
either.
These are likely not causing issues today, but they should be added to the
clobbers list just in case things change on the kernel side in the future.
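For illustration, the lists change roughly as follows (a sketch; the
authoritative lists are in sysdeps/unix/sysv/linux/powerpc/sysdep.h):

  /* sc clobbers, with "xer" added:  */
  "memory", "cr0", "ctr", "xer"
  /* scv clobbers, with "xer" and the volatile CR fields added:  */
  "memory", "cr0", "cr1", "cr5", "cr6", "cr7", "ctr", "lr", "xer"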
Reported-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Raphael M Zinsly <rzinsly@linux.ibm.com>
syslog opens '/dev/console' for LOG_CONS without O_CLOEXEC, so the
descriptor might leak in multithreaded programs that call fork.
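The fix is roughly the usual one-liner (the glibc-internal open
wrapper name is assumed here):

  fd = __open_nocancel (_PATH_CONSOLE,
                        O_WRONLY | O_NOCTTY | O_CLOEXEC);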
Checked on x86_64-linux-gnu.
MSG_NOSIGNAL was added in POSIX 2008 and Hurd seems to support it.
The SIGPIPE handling also makes the implementation not thread-safe
(due to the sigaction usage).
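Instead of temporarily ignoring SIGPIPE around the write, the send
can be issued roughly as:

  /* MSG_NOSIGNAL turns a dead peer into EPIPE instead of SIGPIPE,
     so no process-wide sigaction dance is needed.  */
  __send (fd, buf, len, MSG_NOSIGNAL);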
Checked on x86_64-linux-gnu.
The tst-wait4 test is moved to a common file and reused for the wait3
tests.
Checked on x86_64-linux-gnu and i686-linux-gnu.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
The test is also converted to use libsupport.
Checked on i686-linux-gnu and x86_64-linux-gnu.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
It uses stat to compare against the values set by lutimes.
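The check pattern is roughly the following (libsupport style; the
xlstat helper is an assumption):

  struct timeval tv[2] = { { 0x100, 0 }, { 0x200, 0 } };
  TEST_VERIFY_EXIT (lutimes (path, tv) == 0);

  struct stat st;
  xlstat (path, &st);
  TEST_COMPARE (st.st_atime, tv[0].tv_sec);
  TEST_COMPARE (st.st_mtime, tv[1].tv_sec);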
Checked on x86_64-linux-gnu and i686-linux-gnu.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
It uses stat to compare against the values set by futimes.
Checked on x86_64-linux-gnu and i686-linux-gnu.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Now that libsupport abstracts possibly missing Linux support (either
filesystems that cannot handle 64-bit timestamps or architectures
that do not handle values larger than unsigned 32-bit values), the
tests can be made generic.
Checked on x86_64-linux-gnu and i686-linux-gnu. I also built the
tests for i686-gnu.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Lazy tlsdesc relocation is racy because the static tls optimization and
tlsdesc management operations are done without holding the dlopen lock.
This is similar to commit b7cf203b5c
for aarch64, but it fixes a different race: bug 27137.
On i386 the code is a bit more complicated than on x86_64 because both
rel and rela relocs are supported.
Lazy tlsdesc relocation is racy because the static tls optimization and
tlsdesc management operations are done without holding the dlopen lock.
This is similar to commit b7cf203b5c
for aarch64, but it fixes a different race: bug 27137.
Another issue is that ld auditing ignores DT_BIND_NOW and thus tries to
relocate tlsdesc lazily, but that does not work in a BIND_NOW module
due to missing DT_TLSDESC_PLT. Unconditionally relocating tlsdesc at
load time fixes this bug 27721 too.
map is not valid to access here because it can be freed by a concurrent
dlclose: during tls access (via __tls_get_addr) _dl_update_slotinfo is
called without holding dlopen locks. So don't check the modid of map.
The map == 0 and map != 0 code paths can be shared (avoiding the dtv
resize in case of map == 0 is just an optimization: a larger dtv than
necessary would be fine too).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Since
commit a509eb117f
Avoid late dlopen failure due to scope, TLS slotinfo updates [BZ #25112]
the generation counter update is not needed in the failure path.
That commit ensures allocation in _dl_add_to_slotinfo happens before
the demarcation point in dlopen (it is called twice: the first call
only allocates, while dlopen can still be reverted on failure; the
second call performs the actual dtv updates, which then cannot fail).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>