The flt-32 implementation of powf wrongly uses x-1 instead of |x|-1
when computing log (x) for the case where |x| is close to 1 and y is
large. This patch fixes the logic accordingly. Relevant tests
existed for x close to 1, and corresponding tests are added for x
close to -1, as well as for some new variant cases.
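Roughly, the corrected logic amounts to the following sketch (illustrative
only, not the literal e_powf.c code): for large y and |x| near 1, the log
approximation has to start from |x| - 1, since x itself may be close to -1
and the sign of the result is handled separately.
  #include <math.h>
  /* Sketch only: first-order idea behind the fix.  */
  static float
  log_near_one (float x)
  {
    float ax = fabsf (x);     /* fixed: base the expansion on |x| ...   */
    float t = ax - 1.0f;      /* ... instead of the old  t = x - 1.0f.  */
    return t - 0.5f * t * t;  /* log (|x|) ~= t - t^2/2 near 1 */
  }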
Tested for x86_64 and x86.
[BZ #18647]
* sysdeps/ieee754/flt-32/e_powf.c (__ieee754_powf): For large y
and |x| close to 1, use absolute value of x when computing log.
* math/auto-libm-test-in: Add more tests of pow.
* math/auto-libm-test-out: Regenerated.
As discussed in the thread starting at
https://sourceware.org/ml/libc-alpha/2015-06/msg00098.html
it looks like the best option is to remove locale timezone information
from locales which currently provide it (in an incomplete or incorrect
fashion) rather than to start duplicating tzdata info in glibc.
This patch adds __nonnull annotations for wcscat, wcsncat, wcscmp and wcsncmp.
These added annotations match the annotations for strcat, strncat, strcmp and strncmp in glibc.
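For reference, the annotated declarations have roughly this shape (a sketch
of the wchar.h hunk, not a verbatim copy):
  /* __nonnull ((1, 2)) tells the compiler that both pointer arguments
     must not be NULL, matching the existing strcat/strcmp annotations.  */
  extern wchar_t *wcscat (wchar_t *__restrict __dest,
                          const wchar_t *__restrict __src)
       __THROW __nonnull ((1, 2));
  extern int wcscmp (const wchar_t *__s1, const wchar_t *__s2)
       __THROW __attribute_pure__ __nonnull ((1, 2));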
I keep trying to run tests with --help and then remembering that the
option does nothing except make the test throw an error.  That means I
have to dig into the source when I want to refer to flags or env vars
and re-read a good amount of code to find the nested locations.
Make this all much more user friendly with a usage screen that gets
printed out whenever an unknown option is specified.
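A minimal sketch of the pattern (the option letter and help text here are
hypothetical, not the actual test code):
  #include <getopt.h>
  #include <stdio.h>
  #include <stdlib.h>
  /* Print a usage screen instead of failing silently on unknown options.  */
  static void
  usage (const char *argv0, int status)
  {
    printf ("Usage: %s [-h]\n  -h  show this help and exit\n", argv0);
    exit (status);
  }
  int
  main (int argc, char **argv)
  {
    int opt;
    while ((opt = getopt (argc, argv, "h")) != -1)
      {
        if (opt == 'h')
          usage (argv[0], EXIT_SUCCESS);
        else  /* unknown option ('?'): show usage rather than a bare error */
          usage (argv[0], EXIT_FAILURE);
      }
    return 0;
  }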
On arches that set _STACK_GROWS_UP, the stacktop variable is declared
and set, but never actually used. Refactor the code a bit so that the
variable is only declared/set under _STACK_GROWS_DOWN settings.
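In outline (a sketch, not the literal code):
  #include <stddef.h>
  /* Only declare/set stacktop where it is actually consumed.  */
  static void *
  guess_stack_top (void *mem, size_t size)
  {
  #if _STACK_GROWS_DOWN
    char *stacktop = (char *) mem + size; /* needed only when growing down */
    return stacktop;
  #else
    return mem;            /* _STACK_GROWS_UP: no stacktop variable at all */
  #endif
  }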
<regexp.h> (not to be confused with <regex.h>) is an obsolete and
frankly horrible regular expression-matching API. It was part of SVID
but was withdrawn in Issue 5 (for reference, we're on Issue 7 now).
It doesn't do anything you can't do with <regex.h>, and using it
involves defining a bunch of macros before including the header.
Moreover, the code in regexp.h that uses those macros has been buggy
since its creation (in 1996) and no one has noticed, which indicates
to me that there are no users. (Specifically, RETURN() is used in a
whole bunch of cases where it should have been ERROR().)
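For the curious, using the header looks roughly like this; the macro names
are the documented SVID set, the bodies here are only illustrative:
  #include <stdlib.h>
  /* All of these must be defined before including <regexp.h>; they are
     consumed by the compile()/step() code that lives in the header.  */
  #define INIT       const char *sp = instring;
  #define GETC()     (*sp++)
  #define PEEKC()    (*sp)
  #define UNGETC(c)  (--sp)
  #define RETURN(p)  return (p)
  #define ERROR(v)   abort ()
  #include <regexp.h>
All of which is done far more cleanly by regcomp()/regexec() from <regex.h>.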
The header is given a warning and marked deprecated for 2.22.
See:
https://sourceware.org/ml/libc-alpha/2015-07/msg00862.html and
https://sourceware.org/ml/libc-alpha/2015-07/msg00871.html.
The semi-recent SYSCALL_CANCEL inclusion broke hppa due to the sysdep.h
headers not including the unix/sysdep.h headers. Rework the includes so
we match the other ports:
* hppa/sysdep.h:
- Do not include sys/syscall.h as the unix sysdep.h headers do it.
- Do not include config.h: libc-symbols.h already does that, config.h
has no #ifdef multiple-include protection, and it breaks when some
files do things like #undef __OPTIMIZE__.
* sysdeps/unix/sysv/linux/hppa/sysdep-cancel.h:
- Drop the generic/sysdep.h as the unix sysdep.h headers include it.
* sysdeps/unix/sysv/linux/hppa/sysdep.h:
- Change to the unix & core hppa sysdep header stacks.
- Undef a few defines that the core headers already set up for us.
The semi-recent SYSCALL_CANCEL macro imposes a slight nuance on the
implementation of INLINE_SYSCALL: the nr argument cannot be expanded
directly but must be passed on to another macro which may expand it.
Most arches don't notice because INLINE_SYSCALL is defined in terms
of INTERNAL_SYSCALL which has the additional layer of expansion, but
on hppa, it was attempting to expand it directly. That causes build
errors like so:
../sysdeps/unix/sysv/linux/sigsuspend.c: In function '__sigsuspend':
../sysdeps/unix/sysv/linux/sigsuspend.c:31:62: error:
implicit declaration of function 'LOAD_ARGS___SYSCALL_NARGS'
../sysdeps/unix/sysv/linux/sigsuspend.c:31:304: error:
called object 'LOAD_ARGS___SYSCALL_NARGS(set, 8)' is not a function
So rewrite hppa's INLINE_SYSCALL to use INTERNAL_SYSCALL like other
arches do. This is also a nice clean up as the two macros had quite
a bit of duplicated logic.
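The rewrite follows the usual shape, roughly (a sketch; the real hppa macro
carries a few more details):
  /* Let INTERNAL_SYSCALL provide the extra expansion layer for 'nr'.  */
  #define INLINE_SYSCALL(name, nr, args...)                           \
    ({                                                                \
      INTERNAL_SYSCALL_DECL (err);                                    \
      long int __result = INTERNAL_SYSCALL (name, err, nr, args);     \
      if (INTERNAL_SYSCALL_ERROR_P (__result, err))                   \
        {                                                             \
          __set_errno (INTERNAL_SYSCALL_ERRNO (__result, err));       \
          __result = -1L;                                             \
        }                                                             \
      __result;                                                       \
    })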
On x86, the linker in binutils 2.26 and newer consolidates R_*_JUMP_SLOT with
R_*_GLOB_DAT relocations against the same symbol. This patch extends
local PLT reference check to support alternate relocations.
[BZ #18078]
* scripts/check-localplt.awk: Support alternate relocations.
* scripts/localplt.awk: Also check relocations in DT_RELA/DT_REL
sections.
* sysdeps/unix/sysv/linux/i386/localplt.data: Mark free and
malloc entries with + REL R_386_GLOB_DAT.
* sysdeps/x86_64/localplt.data: New file.
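The markings in localplt.data look roughly like this (illustrative; see the
files themselves for the exact entries), where the '+' part names the
alternate relocation that is also acceptable:
  libc.so: free + REL R_386_GLOB_DAT
  libc.so: malloc + REL R_386_GLOB_DAT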
Way back in 2005 the atomic_exchange_and_add function was cleaned up to
avoid the explicit size checking and instead let gcc handle things itself.
Unfortunately that change ended up leaving behind a cast to int, even when
the incoming value was a long. This has flown under the radar for a long
time due to the function not being heavily used in the tree (especially as
a full 64bit field), but a recent change to semaphores made some nptl tests
fail reliably. This is due to the code packing two 32bit values into one
64bit variable (where the high 32bits contained the number of waiters), and
then the whole variable being atomically updated between threads. On ia64,
that meant we never atomically updated the count, so sometimes the sem_post
would not wake up the waiters.
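A hypothetical illustration of the truncation (not the glibc code):
  #include <stdint.h>
  #include <stdio.h>
  int
  main (void)
  {
    /* The semaphore packs the waiter count into the high 32 bits...  */
    uint64_t sem = ((uint64_t) 1 << 32) | 5;   /* 1 waiter, value 5 */
    /* ...so a stray cast to int discards it before the atomic op sees it.  */
    int truncated = (int) sem;
    printf ("full: %#llx  truncated: %#x\n",
            (unsigned long long) sem, (unsigned int) truncated);
    return 0;
  }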
This define made more sense in the pre-sanitized kernel headers days,
but since we require kernel versions that are sanitized, we don't need
this hack anymore.
This function actually checks for NULL arguments and the API has been
tentatively documented as using EINVAL in that case. We can debate
leaving it this way, but it should be done after the pending release.
Changes in support of -fno-plt also cause the elf/tst-audit* tests to
start passing on MIPS. This patch duly marks the relevant bug as
fixed in ChangeLog and NEWS.
The recently introduced TLS variables in the thread-local destructor
implementation (__cxa_thread_atexit_impl) used the default GD access
model, resulting in a call to __tls_get_addr. This causes a deadlock
with recent changes to the way TLS is initialized because DTV
allocations are delayed and hence despite knowing the offset to the
variable inside its TLS block, the thread has to take the global rtld
lock to safely update the TLS offset.
This causes deadlocks when a thread is instantiated and joined inside
a destructor of a dlopen'd DSO. The correct long term fix is to
somehow not take the lock, but that will need a much deeper change set
to alter the way in which the big rtld lock is used.
Instead, this patch just eliminates the call to __tls_get_addr for the
thread-local variables inside libc.so, libpthread.so and rtld by
building all of their units with -ftls-model=initial-exec.
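For reference, the per-variable equivalent looks like this (a sketch; the
actual change passes the flag for whole translation units via CFLAGS rather
than annotating individual variables):
  /* Default model: access may go through __tls_get_addr.  */
  static __thread int dtor_pending;
  /* Initial-exec model: a fixed offset from the thread pointer, no call.  */
  static __thread int dtor_pending_ie
    __attribute__ ((tls_model ("initial-exec")));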
There were concerns that the static storage for TLS is limited and
hence we should not be using it. Additionally, dynamically loaded
modules may result in libc.so looking for this static storage pretty
late in static binaries. Both concerns are valid when using TLSDESC
since that is where one may attempt to allocate a TLS block from
static storage for even those variables that are not IE. They're not
very strong arguments for the traditional TLS model though, since it
assumes that the static storage would be used sparingly and definitely
not by default. Hence, for now this would only theoretically affect
ARM architectures.
The impact is hence limited to statically linked binaries that dlopen
modules that in turn load libc.so, all that on arm hardware. It seems
like a small enough impact to justify fixing the larger problem that
currently affects everything everywhere.
This still does not solve the original problem completely. That is,
it is still possible to deadlock on the big rtld lock with a small
tweak to the test case attached to this patch. That problem is
however not a regression in 2.22 and hence could be tackled as a
separate project. The test case is picked up as is from Alex's patch.
This change has been tested to verify that it does not cause any
issues on x86_64.
ChangeLog:
[BZ #18457]
* nptl/Makefile (tests): New test case tst-join7.
(modules-names): New test case module tst-join7mod.
* nptl/tst-join7.c: New file.
* nptl/tst-join7mod.c: New file.
* Makeconfig (tls-model): Pass -ftls-model=initial-exec for
all translation units in libc.so, libpthread.so and rtld.
glibc supports the deprecated matherr hook for math error reporting. The
conform tests take this into consideration and whitelist this symbol when
running linknamespace tests.
The ia64 libm code has long provided two additional hooks in this space:
matherrf (for floats)
matherrl (for long doubles)
These hooks cause the conform tests to fail with chains that all look like:
[initial] __atan2 ->
[libm.a(e_atan2.o)] __libm_error_support ->
[libm.a(libm_error.o)] matherrf
We can't (losslessly) redirect existing usage of these funcs to matherr
because the structure passed in is different -- matherr uses a struct with
doubles while matherrf/matherrl use floats and long doubles respectively.
Plus, this has been part of the exported ABI since glibc-2.2.3, so it
doesn't feel right to change it so late.
Until we get around to obsoleting matherr entirely, whitelist these two
additional ia64 symbols.
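For reference, the double-based hook has roughly this shape; the ia64
matherrf/matherrl hooks take analogous records whose members are float and
long double (sketch only, the ia64 type names are omitted here):
  #include <math.h>
  /* SVID-style hook: returning 0 lets libm set errno / print as usual.  */
  int
  matherr (struct exception *exc)
  {
    return 0;
  }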
Since ia64 is little endian, sa_flags has to come before the padding
when splitting it from 64bits to 32bits.
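Schematically (an illustrative layout only, not the literal header):
  /* The kernel field is 64 bits wide but userspace exposes a 32-bit
     sa_flags, so on a little-endian arch the real flags live in the low
     half and must come first, with the padding after them.  */
  struct sigaction_layout_sketch
  {
    /* ... handler ... */
    int sa_flags;            /* low 32 bits: the actual flags             */
    int __glibc_reserved0;   /* high 32 bits: padding of the 64-bit field */
    /* ... sa_mask etc. ... */
  };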
Reported-by: Joseph Myers <joseph@codesourcery.com>
When a TLS destructor is registered, we set the DF_1_NODELETE flag to
signal that the object should not be destroyed. We then clear the
DF_1_NODELETE flag when all destructors are called, which is wrong -
the flag could have been set by other means too.
This patch replaces this use of the flag by using l_tls_dtor_count
directly to determine whether it is safe to unload the object. This
change has the added advantage of eliminating the lock taking when
calling the destructors, which could result in a deadlock. The patch
also fixes the test case tst-tls-atexit - it was making an invalid
dlclose call, which would just return an error silently.
I have also added a detailed note on concurrency which also aims to
justify why I chose the semantics I chose for accesses to
l_tls_dtor_count. Thanks to Torvald for his help in getting me
started on this and (literally) teaching me how to approach the
problem.
Change verified on x86_64; the test suite does not show any
regressions due to the patch.
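In outline, the new scheme amounts to this (a sketch using C11 atomics;
glibc uses its own atomic_* macros, and the real memory-order reasoning is
in the comment added to include/link.h):
  #include <stdatomic.h>
  #include <stddef.h>
  /* Stand-in for the l_tls_dtor_count member of struct link_map.  */
  struct map_sketch { _Atomic size_t l_tls_dtor_count; };
  static void
  register_dtor (struct map_sketch *lm)   /* __cxa_thread_atexit_impl */
  {
    /* Pin the DSO by bumping the count instead of setting DF_1_NODELETE.  */
    atomic_fetch_add_explicit (&lm->l_tls_dtor_count, 1,
                               memory_order_relaxed);
  }
  static void
  dtor_finished (struct map_sketch *lm)    /* __call_tls_dtors */
  {
    /* Release so the unloader observes the destructor's writes.  */
    atomic_fetch_sub_explicit (&lm->l_tls_dtor_count, 1,
                               memory_order_release);
  }
  static int
  safe_to_unload (struct map_sketch *lm)   /* _dl_close_worker */
  {
    /* Keep the DSO around while destructor calls are still pending.  */
    return atomic_load_explicit (&lm->l_tls_dtor_count,
                                 memory_order_acquire) == 0;
  }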
ChangeLog:
[BZ #18657]
* elf/dl-close.c (_dl_close_worker): Don't unload DSO if there
are pending TLS destructor calls.
* include/link.h (struct link_map): Add concurrency note for
L_TLS_DTOR_COUNT.
* stdlib/cxa_thread_atexit_impl.c (__cxa_thread_atexit_impl):
Don't touch the link map flag. Atomically increment
l_tls_dtor_count.
(__call_tls_dtors): Atomically decrement l_tls_dtor_count.
Avoid taking the load lock and don't touch the link map flag.
* stdlib/tst-tls-atexit-nodelete.c: New test case.
* stdlib/Makefile (tests): Use it.
* stdlib/tst-tls-atexit.c (do_test): dlopen
tst-tls-atexit-lib.so again before dlclose. Add conditionals
to allow tst-tls-atexit-nodelete test case to use it.