In <https://sourceware.org/ml/libc-alpha/2015-12/msg00543.html>,
Florian noted highly parallel builds being slowed down by
gen-libm-test.pl running during the build, when it should only run for
testing, not for building glibc itself.
This is a consequence of libm-test.c being listed in before-compile.
That listing in before-compile arose from the error reported in
<https://sourceware.org/ml/libc-hacker/1999-10/msg00054.html> when
building dependencies: at that time, dependencies were generated
separately from compilation, so if a source file included a generated
file it wasn't enough for the dependencies for the .o file to be
correct; the generated file also needed to be listed in before-compile.
Since <https://sourceware.org/ml/libc-hacker/2003-05/msg00001.html>,
dependencies are generated as a side-effect of compilation. This
means that having the right dependencies for the .o files for the
tests fully suffices to ensure that libm-test.c is generated by the
time it's needed; no entry in before-compile is needed. And we indeed
have such a dependency for all the tests using libm-test.c:
$(addprefix $(objpfx), $(libm-tests.o)): $(objpfx)libm-test.stmp
Thus, the before-compile definition is unnecessary, and this patch
removes it. (This may of course move serialization from the glibc
build to glibc testing, but I intend to split up libm-test.inc so that
tests for each (floating-point type, libm function) pair are built and
run separately, which should reduce that serialization.)
Tested for x86_64.
* math/Makefile (before-compile): Remove.
It is no longer necessary to preserve the flags parameter to `clone'
since commit c579f48edb (Remove cached PID/TID in clone).
Testing was performed successfully on sparcv9/Linux.
[BZ #21075]
* sysdeps/unix/sysv/linux/sparc/sparc64/clone.S (__clone): Remove
unused assignment.
* sysdeps/unix/sysv/linux/sparc/sparc32/clone.S (__clone): Likewise.
The macros lll_trylock and lll_cond_trylock are extended with a
__glibc_unlikely hint. Now the trylock macros are based on the same
assumption about a free/busy lock as lll_lock.
With the hint, gcc emits code in e.g. pthread_mutex_trylock which does
not use jumps if the lock is free. Without the hint it has to jump away
if the lock is free.
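As a rough sketch (not the literal patch), the hinted macro looks like
this, using glibc's internal atomic_compare_and_exchange_bool_acq helper,
which yields zero when the lock word was changed from 0 (free) to 1:

#define lll_trylock_sketch(lock) \
  __glibc_unlikely (atomic_compare_and_exchange_bool_acq (&(lock), 1, 0))

So the nonzero (busy) result is what the compiler treats as the unlikely,
out-of-line path.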
Tested on s390x, ppc.
ChangeLog:
* sysdeps/nptl/lowlevellock.h (lll_trylock, lll_cond_trylock):
Add __glibc_unlikely hint.
Based on comments on a previous attempt to address BZ #16640 [1],
the idea is to not support invalid use of strtok (the original
bug report's proposal). This led to a new, optimized strtok
implementation [2].
The idea of this patch is to fix BZ #16640 by aligning all the
implementations to the same contract. However, with the newer strtok
code it is better to remove the old assembly implementations instead
of fixing them.
For x86 this is a gain in all cases, since the new implementation can
potentially use the SSE2/SSE4.2 implementations of strspn and strcspn.
It shows better performance on both i686 and x86_64 in the string
benchtests.
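For reference, a minimal sketch of a strtok_r built on strspn/strcspn in
the style of the generic C approach (an illustration, not the actual
glibc code):

#include <string.h>

char *
strtok_r_sketch (char *s, const char *delim, char **save_ptr)
{
  if (s == NULL)
    s = *save_ptr;

  /* Skip leading delimiters.  */
  s += strspn (s, delim);
  if (*s == '\0')
    {
      *save_ptr = s;
      return NULL;
    }

  /* Find the end of the token and terminate it.  */
  char *token = s;
  s = token + strcspn (token, delim);
  if (*s != '\0')
    *s++ = '\0';
  *save_ptr = s;
  return token;
}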
On powerpc64 the gains are mixed: gains only show up for larger inputs
or keys (based on the benchtests, keys larger than 10 and inputs larger
than 32). I would still prefer to remove the optimized implementation,
first for code simplicity and second because further gains could be
obtained with better optimized strcspn/strspn code (as for x86).
However, if the powerpc arch maintainers prefer, I can send a v2 with
the assembly code adjusted instead.
Checked on x86_64-linux-gnu, i686-linux-gnu, and powerpc64le-linux-gnu.
[BZ #16640]
* sysdeps/i386/i686/strtok.S: Remove file.
* sysdeps/i386/i686/strtok_r.S: Likewise.
* sysdeps/i386/strtok.S: Likewise.
* sysdeps/i386/strtok_r.S: Likewise.
* sysdeps/powerpc/powerpc64/strtok.S: Likewise.
* sysdeps/powerpc/powerpc64/strtok_r.S: Likewise.
* sysdeps/x86_64/strtok.S: Likewise.
* sysdeps/x86_64/strtok_r.S: Likewise.
[1] https://sourceware.org/ml/libc-alpha/2016-10/msg00411.html
[2] https://sourceware.org/ml/libc-alpha/2016-12/msg00461.html
As noted in c1f0601389, the previous posix_fadvise consolidation
broke on MIPS o32. As stated in that commit message, MIPS o32 only
defines __NR_fadvise64 and it behaves like __NR_fadvise64_64.
This patch consolidates both the ARM and MIPS o32 versions by fixing
the option ARM uses (__NR_fadvise64_64 without the alignment required
by the ABI) and adding another option, __ASSUME_FADVISE64_AS_64_64,
which is used on MIPS o32.
When this option is used, posix_fadvise will follow the __NR_fadvise64_64
behavior (modulo whether __ASSUME_FADVISE64_64_6ARG is defined). For
MIPS, if __NR_fadvise64_64 is not defined, __NR_fadvise64 will be used.
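A simplified outline of the resulting preprocessor dispatch (this is a
sketch of the logic, not the actual sysdeps/unix/sysv/linux/posix_fadvise.c):

#if defined (__NR_fadvise64) && !defined (__ASSUME_FADVISE64_AS_64_64)
  /* Real fadvise64 syscall: (fd, offset, len, advise).  */
#elif defined (__ASSUME_FADVISE64_64_6ARG)
  /* fadvise64_64 with advise as second argument, avoiding the alignment
     padding: (fd, advise, offset, len).  */
#else
  /* fadvise64_64 layout with advise last and a possible alignment
     argument after fd; on MIPS o32 the syscall is named __NR_fadvise64
     but behaves like this, which is what __ASSUME_FADVISE64_AS_64_64
     selects.  */
# ifndef __NR_fadvise64_64
#  define __NR_fadvise64_64 __NR_fadvise64
# endif
#endif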
I also updated the posix_fadvise comments to better explain the
different kernel ABIs used by the supported architectures.
I checked on MIPS o32 and verified that posix_fadvise.o indeed uses the
7-argument syscall with the expected argument positions. I also checked
on i686-linux-gnu and arm-linux-gnueabihf.
* sysdeps/unix/sysv/linux/arm/posix_fadvise.c: Remove file.
* sysdeps/unix/sysv/linux/mips/mips32/posix_fadvise.c: Likewise.
* sysdeps/unix/sysv/linux/mips/kernel-features.h
(__ASSUME_FADVISE64_AS_64_64): Define.
* sysdeps/unix/sysv/linux/posix_fadvise.c [__NR_fadvise64]: Add
!defined __ASSUME_FADVISE64_AS_64_64 check before issuing the syscall.
[!__NR_fadvise64 && __ASSUME_FADVISE64_64_6ARG]: Remove
__ALIGNMENT_ARG usage.
[!__NR_fadvise64 && !__ASSUME_FADVISE64_64_6ARG]: Define
__NR_fadvise64_64 if it is not defined.
The child process of the tst-env-setuid test was correctly failing
with EXIT_UNSUPPORTED, but the parent did not carry that status forward
and failed instead. This patch fixes this so that tests on a nosuid
/tmp fail gracefully with UNSUPPORTED. Tested by making my tmpfs
nosuid.
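A sketch of the parent-side check (the helper name and structure are
illustrative, not the actual test code; EXIT_UNSUPPORTED is assumed to be
the test skeleton's value):

#include <stdio.h>
#include <sys/wait.h>

#ifndef EXIT_UNSUPPORTED
# define EXIT_UNSUPPORTED 77
#endif

/* Map the child's wait status to the parent's test result.  */
static int
check_child_status_sketch (int status)
{
  if (!WIFEXITED (status))
    return 1;
  if (WEXITSTATUS (status) == EXIT_UNSUPPORTED)
    /* Carry the child's UNSUPPORTED result forward.  */
    return EXIT_UNSUPPORTED;
  if (WEXITSTATUS (status) != 0)
    {
      printf ("child failed with status %d\n", WEXITSTATUS (status));
      return 1;
    }
  return 0;
}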
* elf/tst-env-setuid.c (do_execve): Return EXIT_UNSUPPORTED in
parent if child exited in that manner. Print WEXITSTATUS
instead of the raw status.
(do_test_prep): Rename to do_test.
(do_test): Return the result of run_executable_sgid.
(TEST_FUNCTION_ARGV): Adjust.
In _dl_nothread_init_static_tls() and init_one_static_tls() we must not
touch the DTV of other threads since we do not have ownership of them.
The DTV need not be initialized at this point anyway since only LD/GD
accesses will use them. If LD/GD accesses occur they will take care to
initialize their own thread's DTV.
Concurrency comments were removed from the patch since they need to be
reworked along with a full description of DTV ownership and when it is
or is not safe to modify these structures.
Alexandre Oliva's original patch and discussion:
https://sourceware.org/ml/libc-alpha/2016-09/msg00512.html
An IFUNC relocation against a definition in an unrelocated shared library
will lead to a segfault when the IFUNC function is called. This
patch allows such IFUNC relocations with a warning. This isn't
a real fix for
https://sourceware.org/bugzilla/show_bug.cgi?id=21041
It simply allows the program to load. The program will still segfault
when longjmp is called.
* sysdeps/i386/dl-machine.h (elf_machine_rel): Replace
_dl_fatal_printf with _dl_error_printf for IFUNC relocation
against unrelocated shared library.
* sysdeps/x86_64/dl-machine.h (elf_machine_rela): Likewise.
A setxid program that uses a glibc with tunables disabled may pass on
GLIBC_TUNABLES as is to its child processes. If the child process
ends up using a different glibc that has tunables enabled, it will end
up getting access to unsafe tunables. To fix this, remove
GLIBC_TUNABLES from the environment for setxid processes.
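For illustration only (the identifiers below are invented, not the glibc
internals), unsecvars.h-style lists are NUL-separated variable names that
the dynamic linker strips from the environment of AT_SECURE processes:

#include <stdbool.h>
#include <string.h>

static const char unsecure_envvars_sketch[] =
  "GLIBC_TUNABLES\0"
  "LD_PRELOAD\0"
  "LD_LIBRARY_PATH\0";

/* Return true if ENVLINE ("NAME=value") names a variable that must be
   removed for setxid processes.  */
static bool
is_unsecure_envvar_sketch (const char *envline)
{
  for (const char *p = unsecure_envvars_sketch; *p != '\0';
       p += strlen (p) + 1)
    {
      size_t len = strlen (p);
      if (strncmp (envline, p, len) == 0 && envline[len] == '=')
        return true;
    }
  return false;
}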
* sysdeps/generic/unsecvars.h: Add GLIBC_TUNABLES.
* elf/tst-env-setuid-tunables.c
(test_child_tunables)[!HAVE_TUNABLES]: Verify that
GLIBC_TUNABLES is removed in a setgid process.
Florian Weimer pointed out that we have three different kinds of
environment variables (and hence tunables):
1. Variables that are removed for setxid processes
2. Variables that are ignored in setxid processes but are passed on to
child processes
3. Variables that are passed on to child processes all the time
The tunables framework currently only does (2) and (3), when it should do (1)
for MALLOC_CHECK_. This patch enhances the is_secure flag in tunables
to an enum value that can specify which of the above three categories
the tunable (and its envvar alias) belongs to.
The default is for tunables to be in (1). Hence, all of the malloc
tunables barring MALLOC_CHECK_ are explicitly specified to belong to
category (2). There were discussions around abolishing category (2)
completely but we can do that as a separate exercise in 2.26.
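A paraphrased sketch of the resulting classification (the enumerator names
follow the ChangeLog below; the exact definitions in dl-tunable-types.h may
differ):

typedef enum
{
  /* (1) Removed from the environment of setxid (AT_SECURE) processes.  */
  TUNABLE_SECLEVEL_SXID_ERASE = 0,
  /* (2) Ignored by the setxid process itself but passed on to children.  */
  TUNABLE_SECLEVEL_SXID_IGNORE = 1,
  /* (3) Read and passed on to child processes all the time.  */
  TUNABLE_SECLEVEL_NONE = 2,
} tunable_seclevel_t;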
Tested on x86_64 to verify that there are no regressions.
[BZ #21073]
* elf/dl-tunable-types.h (tunable_seclevel_t): New enum.
* elf/dl-tunables.c (tunables_strdup): Remove.
(get_next_env): Also return the previous envp.
(parse_tunables): Erase tunables of category
TUNABLE_SECLEVEL_SXID_ERASE.
(maybe_enable_malloc_check): Make MALLOC_CHECK_
TUNABLE_SECLEVEL_NONE if /etc/setuid-debug is accessible.
(__tunables_init)[TUNABLES_FRONTEND ==
TUNABLES_FRONTEND_valstring]: Update GLIBC_TUNABLES envvar
after parsing.
[TUNABLES_FRONTEND != TUNABLES_FRONTEND_valstring]: Erase
tunable envvars of category TUNABLE_SECLEVEL_SXID_ERASE.
* elf/dl-tunables.h (struct _tunable): Change member is_secure
to security_level.
* elf/dl-tunables.list: Add security_level annotations for all
tunables.
* scripts/gen-tunables.awk: Recognize and generate enum values
for security_level.
* elf/tst-env-setuid.c: New test case.
* elf/tst-env-setuid-tunables.c: New test case.
* elf/Makefile (tests-static): Add them.
Since memset-vec-unaligned-erms.S has VDUP_TO_VEC0_AND_SET_RETURN at
function entry, memset optimized for AVX2 and AVX512 will always use
ymm/zmm registers. VZEROUPPER should be placed before ret in
L(stosb):
movq %rdx, %rcx
movzbl %sil, %eax
movq %rdi, %rdx
rep stosb
movq %rdx, %rax
ret
since it can be reached from
L(stosb_more_2x_vec):
cmpq $REP_STOSB_THRESHOLD, %rdx
ja L(stosb)
[BZ #21081]
* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
(L(stosb)): Add VZEROUPPER before ret.
The commit documents the ownership rules around 'struct pthread' and
when a thread can read or write to the descriptor. With those ownership
rules in place it becomes obvious that pd->stopped_start should not be
touched in several of the paths during thread startup, particularly so
for detached threads. In the case of detached threads, between the time
the thread is created by the OS kernel and the creating thread checks
pd->stopped_start, the detached thread might have already exited and the
memory for pd unmapped. As a regression test we add a simple test which
exercises this exact case by quickly creating detached threads with
large enough stacks to ensure the thread stack cache is bypassed and the
stacks are unmapped. Before the fix the testcase segfaults, after the
fix it works correctly and completes without issue.
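A minimal sketch of that kind of test (stack size and iteration count are
arbitrary here, chosen only to defeat the stack cache; this is not the
added testcase itself):

#include <pthread.h>
#include <stdlib.h>

static void *
thread_func (void *arg)
{
  return NULL;
}

int
main (void)
{
  pthread_attr_t attr;
  if (pthread_attr_init (&attr) != 0)
    abort ();
  if (pthread_attr_setdetachstate (&attr, PTHREAD_CREATE_DETACHED) != 0)
    abort ();
  /* Assumed to be large enough that exited threads' stacks are unmapped
     rather than kept in the stack cache.  */
  if (pthread_attr_setstacksize (&attr, 16 * 1024 * 1024) != 0)
    abort ();

  for (int i = 0; i < 32; i++)
    {
      pthread_t thr;
      /* Before the fix, pthread_create could touch pd->stopped_start after
         the detached thread had exited and its memory was unmapped.  */
      if (pthread_create (&thr, &attr, thread_func, NULL) != 0)
        abort ();
    }

  pthread_attr_destroy (&attr);
  return 0;
}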
For a detailed discussion see:
https://www.sourceware.org/ml/libc-alpha/2017-01/msg00505.html
The problem is basically that sys/ucontext.h is defining R0..R15
which happens to conflict with some packages like Firefox when
trying to build on SH.
The very same problem existed on arm back then [1] and it was fixed by
renaming R0..R15 to REG_R0..REG_R15. This patch employs a similar
strategy for SH.
Checked on sh4-linux-gnu with run-built-tests=no, and I also got reports
that it fixes the Firefox build on Debian sh4.
* sysdeps/unix/sysv/linux/sh/sh3/ucontext_i.sym: Use new REG_R*
constants instead of the old R* ones.
* sysdeps/unix/sysv/linux/sh/sh4/ucontext_i.sym: Likewise.
* sysdeps/unix/sysv/linux/sh/sys/ucontext.h (NGPREG): Rename...
(NGREG): ... to this, to fit in with other architectures.
(gpregset_t): Use new NGREG macro.
[__USE_GNU]: Remove condition; all architectures other than tile
are unconditional.
(R*): Rename to REG_R*.
* elf/dl-tunables.c (tunable_set_val_if_valid_range): Split into ...
(tunable_set_val_if_valid_range_signed) ... this, and ...
(tunable_set_val_if_valid_range_unsigned) ... this.
(tunable_initialize): Call the correct one of the above based on type.
I noticed that some libm-test-ulps files still had long-obsolete
entries for *_tonearest functions, which will no longer be used since
functions with FE_TONEAREST explicitly set aren't tested separately
from those functions with it as the default rounding mode any more.
This patch removes those obsolete entries. However, as they are a
sign of libm-test-ulps not having been regenerated from scratch for a
long time, I strongly advise people testing on those platforms to
remove / truncate the libm-test-ulps file, run "make regen-ulps" and
commit the regenerated-from-scratch file. (Ideally any failures of
libm tests still present after regeneration would be investigated /
fixed - there are several open "math" bugs spread across these
platforms - but simply regenerating from scratch improves things.)
* sysdeps/hppa/fpu/libm-test-ulps: Remove *_tonearest entries.
* sysdeps/ia64/fpu/libm-test-ulps: Likewise.
* sysdeps/m68k/m680x0/fpu/libm-test-ulps: Likewise.
* sysdeps/microblaze/libm-test-ulps: Likewise.
* sysdeps/sh/libm-test-ulps: Likewise.
This patch updates math/README.libm-test to have a more complete and
up-to-date list of the characters used in TEST_* macros to indicate
the types of function inputs and outputs.
* math/README.libm-test: Update list of characters for input and
output types.
This fixes the mutex pretty printer so that, if the owner ID isn't recorded
(such as in the current lock elision implementation), "Owner ID" will be shown
as "Unknown" instead of 0. It also changes the mutex printer output so that it
says "Acquired" instead of "Locked". The mutex tests are updated accordingly.
In addition, this adds a paragraph to the "Known issues" section of the
printers README explaining that the printer output isn't guaranteed to cover
every detail.
2017-01-14 Martin Galvan <martingalvan@sourceware.org>
* README.pretty-printers (Known issues): Warn about printers not
always covering everything.
* nptl/nptl-printers.py (MutexPrinter): Change output.
* nptl/test-mutex-printers.py: Fix test and adapt to changed output.
This patch adjusts s390 specific lock elision code after review
of the following patches:
-S390: Use own tbegin macro instead of __builtin_tbegin.
(8bfc4a2ab4)
-S390: Use new __libc_tbegin_retry macro in elision-lock.c.
(53c5c3d5ac)
-S390: Optimize lock-elision by decrementing adapt_count at unlock.
(dd037fb3df)
The futex value is no longer tested before starting a transaction,
__glibc_likely is used instead of __builtin_expect, and comments
are adjusted.
ChangeLog:
* sysdeps/unix/sysv/linux/s390/htm.h: Adjust comments.
* sysdeps/unix/sysv/linux/s390/elision-unlock.c: Likewise.
* sysdeps/unix/sysv/linux/s390/elision-lock.c: Adjust comments.
(__lll_lock_elision): Do not test futex before starting a
transaction. Use __glibc_likely instead of __builtin_expect.
* sysdeps/unix/sysv/linux/s390/elision-trylock.c: Adjust comments.
(__lll_trylock_elision): Do not test futex before starting a
transaction. Use __glibc_likely instead of __builtin_expect.
Add a convenience target for maintainers to download and incorporate
translation updates from translationproject.org. Invoke as follows:
make -r PARALLELMFLAGS="" -C ../po objdir=`pwd` update-translations
similar to generating libc.pot.
* po/Makefile (update-translations): New target.
MicroBlaze had clock_* functions exported from librt in glibc 2.18 and
2.19, as confirmed in
<https://sourceware.org/ml/libc-alpha/2017-01/msg00369.html>, and they
then disappeared in 2.20, presumably as a result of the fix
<https://sourceware.org/ml/libc-alpha/2014-02/msg00598.html> for a
Versions.def bug that had resulted in their unintended inclusion in
2.18 (followed by removal of the Versions.def mechanism that allowed
such bugs).
As they were released in that library, they should be considered part
of the GLIBC_2.18 ABI and so restored for the sake of any binaries
that expect them in that library. This patch restores them by adding
a MicroBlaze version of clock-compat.c that overrides SHLIB_COMPAT.
Tested (compilation only) with build-many-glibcs.py (where this fixes
the librt ABI test failure; elf/check-execstack still fails and still
needs architecture maintainer attention to fix it or XFAIL it with an
appropriate explanatory comment).
[BZ #21061]
* sysdeps/unix/sysv/linux/microblaze/clock-compat.c: New file.
When the value of an envvar is empty (not just '\0'), the loop in
tunables_init gets stuck infinitely because envp is not
incremented. Fix that by always incrementing envp in the loop.
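A sketch of the corrected scan (the function shape is illustrative; the
real get_next_env lives in elf/dl-tunables.c):

#include <stddef.h>
#include <string.h>

/* Advance over one environment entry per call; envp moves forward on
   every iteration, so an entry like "GLIBC_TUNABLES=" (empty value) can
   no longer make the caller's loop spin forever.  */
static char **
get_next_env_sketch (char **envp, const char **name, size_t *namelen,
                     const char **val)
{
  while (envp != NULL && *envp != NULL)
    {
      char *envline = *envp++;     /* Always advance, even on odd entries.  */
      char *eq = strchr (envline, '=');

      if (eq == NULL || eq == envline)
        continue;                  /* No '=' or empty name: skip it.  */

      *name = envline;
      *namelen = eq - envline;
      *val = eq + 1;               /* May point at '\0' for "NAME=".  */
      return envp;                 /* Resume position for the next call.  */
    }
  return NULL;
}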
The added test case (tst-empty-env.c) verifies the fix when the source
is configured with --enable-hardcoded-path-in-tests; thanks to Josh
Stone for providing the test case. Verified on x86_64.
* elf/dl-tunables.c (get_next_env): Always advance envp.
* stdlib/tst-empty-env.c: New test case.
* stdlib/Makefile (tests): Use it.
Bug 21047 reports that the clang assembler disallows the ARM
implementations of _FPU_GETCW and _FPU_SETCW.
These are deliberately written the way they are, using generic
coprocessor instructions (from the days when VFP was just one possible
coprocessor for ARM) that have the right encodings, to handle the case
of the instructions being used runtime-conditionally inside glibc,
where use of these macros is not meant to result in either the
assembler requiring VFP to be enabled at assembly time or in it
marking the object as using VFP. However, more recent ARM ARM
versions have restricted the definitions of the coprocessor
instructions and reportedly the clang assembler follows that in
disallowing those names for VFP instructions.
In the non-__SOFTFP__ case - which in fact is the only case where
these macro definitions can be used outside the build of glibc itself
- using VFP instruction names is of course fine, since we know that
VFP is enabled for that compilation. Thus, this patch uses the
current VFP names for these instructions in that case to improve
compatibility for this header file.
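The non-__SOFTFP__ definitions then amount to roughly the following
(sketch of the intent; see sysdeps/arm/fpu_control.h for the precise form):

#ifndef __SOFTFP__
# define _FPU_GETCW(cw) \
  __asm__ __volatile__ ("vmrs %0, fpscr" : "=r" (cw))
# define _FPU_SETCW(cw) \
  __asm__ __volatile__ ("vmsr fpscr, %0" : : "r" (cw))
#endif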
Tested for hard-float and soft-float builds of glibc, including that
installed stripped shared libraries are unchanged by the patch.
[BZ #21047]
* sysdeps/arm/fpu_control.h [!__SOFTFP__] (_FPU_GETCW): Use VFP
name for instruction.
[!__SOFTFP__] (_FPU_SETCW): Likewise.
A recent build-many-glibcs.py build
<https://sourceware.org/ml/libc-testresults/2017-q1/msg00067.html> ran
into what proves to be an old known bug
<https://gcc.gnu.org/bugzilla/show_bug.cgi?id=42980> with parallel
install of GCC (one which as discussed there might require automake
changes to fix). This patch makes build-many-glibcs.py avoid such
intermittent failures from parallel install by using -j1 for GCC make
install (the code in question also applies to binutils make install,
but it doesn't seem worth trying to avoid -j1 there; the builds and
installs of different toolchains are still fully parallel with each
other, this is only about the case when there are few enough of those
that multiple jobs can get used within a single make install).
* scripts/build-many-glibcs.py (Config.build_cross_tool): Use -j1
for make install.
On s390x this test failed with:
FAIL: explicit clear/test: expected 0 got 1
In setup_explicit_clear, the buffer is filled with the test_pattern.
On s390x the memcpy in prepare_test_buffer is done by loading
r4 / r5 with the test_pattern and using store multiple instruction
to store r4 / r5 to buf.
If explicit_bzero is resolved in setup_explicit_clear, r4 / r5 are
stored to the stack by _dl_runtime_resolve and the call to memmem in
count_test_patterns finds a hit of the test_pattern on the stack.
This patch resolves all symbols at program startup by linking with
-z now. This avoids the call to _dl_runtime_resolve within
setup_explicit_clear and the test passes.
ChangeLog:
[BZ #21006]
* string/Makefile (LDFLAGS-tst-xbzero-opt): New variable.
The soft-float powerpc version of swapcontext does not restore the
signal mask, resulting in stdlib/tst-setcontext2 failing:
after getcontext
after setcontext
after swapcontext
FAIL: SIGUSR2 is blocked after swapcontext.
This patch fixes this by adjusting the arguments passed to
__sigprocmask so that it restores the saved signal mask as well as
saving the existing one. (For hard-float, this code is only used for
a compat symbol, not for the current version of swapcontext.)
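In C terms the fix amounts to making the mask call both save and restore
(an illustration of the behavior, not the assembly change itself):

#include <signal.h>

/* Save the current mask into *SAVE_INTO (for the outgoing context) while
   installing *RESTORE_FROM (the mask recorded in the incoming context).
   Before the fix, the soft-float path effectively only performed the
   save.  */
static int
swap_signal_mask_sketch (sigset_t *save_into, const sigset_t *restore_from)
{
  return sigprocmask (SIG_SETMASK, restore_from, save_into);
}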
Tested for soft-float powerpc.
[BZ #21045]
* sysdeps/unix/sysv/linux/powerpc/powerpc32/swapcontext-common.S
(__CONTEXT_FUNC_NAME): Pass address of signal mask to be restored
to __sigprocmask.
As was done in b224637928, check for large size causing an overflow
in the loop that walks over the array.
Branching out of line here is the fastest approach for handling this
problem, since tile can bundle the instructions to compute the branch
test in parallel with doing the required memchr loop setup computation.
Unfortunately, the existing saturated ops (e.g. tilegx addxsc) are
all signed saturating ops, so they don't help with unsigned saturation.
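At the C level the guard roughly corresponds to the following sketch (the
actual change is in the tile assembly):

#include <stdint.h>
#include <stddef.h>

/* Compute the end pointer for a memchr-style walk, clamping it when
   s + size would wrap around the address space.  */
static inline const unsigned char *
memchr_end_sketch (const unsigned char *s, size_t size)
{
  if (size > UINTPTR_MAX - (uintptr_t) s)
    return (const unsigned char *) UINTPTR_MAX;
  return s + size;
}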
In 1e5834c38a ("Refactor Linux ipc_priv header") a different
approach to passing __IPC_64 as zero was created. The tile
architecture also needs to pass __IPC_64 as zero since it does
not set CONFIG_ARCH_WANT_IPC_PARSE_VERSION in the kernel.
So create a minimal ipc_priv.h that specifies __IPC_64 as zero.
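The new header is essentially a one-liner; roughly (a sketch, not the
literal file):

/* Minimal ipc_priv.h for an ABI whose kernel does not parse an IPC
   version in the command word: pass __IPC_64 as zero.  */
#include <sys/ipc.h>  /* For __key_t.  */

#define __IPC_64  0x0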
The ip6-bytestring resolver option corresponds to the RES_USEBSTRING flag
and not RES_NOIP6DOTINT. Thank you Michael Kerrisk for noticing and
pointing it out.
Any changes to the per-thread list of robust mutexes currently acquired as
well as the pending-operations entry are not simply sequential code but
basically concurrent with any actions taken by the kernel when it tries
to clean up after a crash. This is not quite like multi-thread concurrency
but more like signal-handler concurrency.
This patch fixes latent bugs by adding compiler barriers where necessary
to ensure that the kernel's crash handling sees consistent data.
This is meant to be easy to backport, so we do not use C11-style signal
fences yet.
* nptl/descr.h (ENQUEUE_MUTEX_BOTH, DEQUEUE_MUTEX): Add compiler
barriers and comments.
* nptl/pthread_mutex_lock.c (__pthread_mutex_lock_full): Likewise.
* nptl/pthread_mutex_timedlock.c (pthread_mutex_timedlock): Likewise.
* nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_full): Likewise.
Robust mutexes acquired at the time of a call to fork() do not remain
acquired by the forked child process. We have to clear the list of
acquired robust mutexes before registering this list with the kernel;
otherwise, if some of the robust mutexes are process-shared, the parent
process can alter the child's robust mutex list, which can lead to
deadlocks or even modification of memory that may not be occupied by a
mutex anymore.
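A simplified illustration of the child-side reset using the public kernel
structures (the real code manipulates the nptl-internal 'struct pthread'
fields instead):

#include <stddef.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

static struct robust_list_head robust_head_sketch;

/* In the child after fork: start with an empty robust list and register
   it, so none of the parent's (possibly process-shared) robust mutexes
   remain on the child's list.  */
static void
child_reset_robust_list_sketch (void)
{
  robust_head_sketch.list.next = &robust_head_sketch.list;
  robust_head_sketch.futex_offset = 0;
  robust_head_sketch.list_op_pending = NULL;
  syscall (SYS_set_robust_list, &robust_head_sketch,
           sizeof (robust_head_sketch));
}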
[BZ #19402]
* sysdeps/nptl/fork.c (__libc_fork): Clear list of acquired robust
mutexes.
lll_robust_unlock on i386 and x86_64 first sets the futex word to
FUTEX_WAITERS|0 before calling __lll_unlock_wake, which will set the
futex word to 0. If the thread is killed between these steps, then the
futex word will be FUTEX_WAITERS|0, and the kernel (at least current
upstream) will not set it to FUTEX_OWNER_DIED|FUTEX_WAITERS because 0 is
not equal to the TID of the crashed thread.
The lll_robust_lock assembly code on i386 and x86_64 is not prepared to
deal with this case because the fastpath tries to only CAS 0 to TID and
not FUTEX_WAITERS|0 to TID; the slowpath simply waits until it can CAS 0
to TID or the futex_word has the FUTEX_OWNER_DIED bit set.
This issue is fixed by removing the custom x86 assembly code and using
the generic C code instead. However, instead of adding more duplicate
code to the custom x86 lowlevellock.h, the code of the lll_robust* functions
is inlined into the single call sites that exist for each of these functions
in the pthread_mutex_* functions. The robust mutex paths in the latter
have been slightly reorganized to make them simpler.
This patch is meant to be easy to backport, so C11-style atomics are not
used.
[BZ #20985]
* nptl/Makefile: Adapt.
* nptl/pthread_mutex_cond_lock.c (LLL_ROBUST_MUTEX_LOCK): Remove.
(LLL_ROBUST_MUTEX_LOCK_MODIFIER): New.
* nptl/pthread_mutex_lock.c (LLL_ROBUST_MUTEX_LOCK): Remove.
(LLL_ROBUST_MUTEX_LOCK_MODIFIER): New.
(__pthread_mutex_lock_full): Inline lll_robust* functions and adapt.
* nptl/pthread_mutex_timedlock.c (pthread_mutex_timedlock): Inline
lll_robust* functions and adapt.
* nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_full): Likewise.
* sysdeps/nptl/lowlevellock.h (__lll_robust_lock_wait,
__lll_robust_lock, lll_robust_cond_lock, __lll_robust_timedlock_wait,
__lll_robust_timedlock, __lll_robust_unlock): Remove.
* sysdeps/unix/sysv/linux/i386/lowlevellock.h (lll_robust_lock,
lll_robust_cond_lock, lll_robust_timedlock, lll_robust_unlock): Remove.
* sysdeps/unix/sysv/linux/x86_64/lowlevellock.h (lll_robust_lock,
lll_robust_cond_lock, lll_robust_timedlock, lll_robust_unlock): Remove.
* sysdeps/unix/sysv/linux/sparc/lowlevellock.h (__lll_robust_lock_wait,
__lll_robust_lock, lll_robust_cond_lock, __lll_robust_timedlock_wait,
__lll_robust_timedlock, __lll_robust_unlock): Remove.
* nptl/lowlevelrobustlock.c: Remove file.
* nptl/lowlevelrobustlock.sym: Likewise.
* sysdeps/unix/sysv/linux/i386/lowlevelrobustlock.S: Likewise.
* sysdeps/unix/sysv/linux/x86_64/lowlevelrobustlock.S: Likewise.
After this update, math/test-ildouble, math/test-ldouble and
math/test-ldouble-finite pass on hard float, POWER < 7 builds.
Tested on powerpc, powerpc64 and powerpc64le.