Commit Graph

38104 Commits

Author SHA1 Message Date
Adhemerval Zanella
ecb94e9587 aarch64: Add math-use-builtins-f{max,min}.h
It allows removing the arch-specific implementations.
2021-12-13 10:08:46 -03:00
Adhemerval Zanella
583c4d424e math: Add math-use-builtins-fmin.h
It allows the architecture to use the builtin instead of the generic
implementation.
2021-12-13 10:08:43 -03:00
Adhemerval Zanella
72ab1eaec7 math: Add math-use-builtins-fmax.h
It allows the architecture to use the builtin instead of the generic
implementation.
2021-12-13 09:08:07 -03:00
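
A minimal sketch of the pattern the three commits above enable, assuming
the usual USE_*_BUILTIN convention (the macro names and the simplified
fallback here are illustrative, not the exact tree contents): the
architecture header opts in when the compiler builtin expands to a single
instruction, and the generic C file dispatches on the macro.

  /* Architecture header (sketch): set to 1 when __builtin_fmax maps to
     one instruction; otherwise leave it at 0 to get the generic code.  */
  #ifndef USE_FMAX_BUILTIN
  # define USE_FMAX_BUILTIN 0
  #endif

  #include <math.h>

  double
  fmax_sketch (double x, double y)
  {
  #if USE_FMAX_BUILTIN
    return __builtin_fmax (x, y);
  #else
    /* Simplified generic fallback; the real generic version is more
       careful about signed zeros and signaling NaNs.  */
    if (isnan (x))
      return y;
    if (isnan (y))
      return x;
    return x >= y ? x : y;
  #endif
  }
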
Adhemerval Zanella
2eb1cd2f47 math: Remove powerpc e_hypot
The generic implementation shows only slightly worse performance:

POWER10    reciprocal-throughput    latency
master                   8.28478    13.7253
new hypot                7.21945    13.1933

POWER9     reciprocal-throughput    latency
master                   13.4024    14.0967
new hypot                14.8479    15.8061

POWER8     reciprocal-throughput    latency
master                   15.5767    16.8885
new hypot                16.5371    18.4057

One way to improve it might be to make gcc generate xsmaxdp/xsmindp
for fmax/fmin (it only does so for -ffast-math; clang does so with
default options).

Checked on powerpc64-linux-gnu (power8) and powerpc64le-linux-gnu
(power9).
2021-12-13 09:08:07 -03:00
Adhemerval Zanella
a1d3c9b642 i386: Move hypot implementation to C
The generic hypotf is slightly slower, mostly due to the tricks the
assembly does to optimize isinf/isnan/issignaling.  The generic hypot
is way slower, since the optimized implementation uses the i386
default excess precision to issue the operation directly.  A similar
implementation is provided instead of using the generic one.

Checked on i686-linux-gnu.
2021-12-13 09:08:02 -03:00
Adhemerval Zanella
c212d6397e math: Use an improved algorithm for hypotl (ldbl-128)
This implementation is based on 'An Improved Algorithm for hypot(a,b)'
by Carlos F. Borges [1], using the MyHypot3 algorithm with the
following changes:

  - Handle qNaN and sNaN.
  - Tune the 'widely varying operands' path to avoid spurious underflow
    due to the multiplication, and fix the return value for the upward
    rounding mode.
  - Handle required underflow exception for subnormal results.

The main advantage of the new algorithm is its precision.  With
1e9 random input pairs in the range [LDBL_MIN, LDBL_MAX], the current
glibc implementation shows around 0.05% of results with an error of
1 ulp (453266 results), while the new implementation shows only
0.0001% of the total (1280).

Checked on aarch64-linux-gnu and x86_64-linux-gnu.

[1] https://arxiv.org/pdf/1904.09481.pdf
2021-12-13 09:02:34 -03:00
Adhemerval Zanella
aa9c28cde3 math: Use an improved algorithm for hypotl (ldbl-96)
This implementation is based on 'An Improved Algorithm for hypot(a,b)'
by Carlos F. Borges [1], using the MyHypot3 algorithm with the
following changes:

 - Handle qNaN and sNaN.
 - Tune the 'widely varying operands' path to avoid spurious underflow
   due to the multiplication, and fix the return value for the upward
   rounding mode.
 - Handle required underflow exception for subnormal results.

The main advantage of the new algorithm is its precision.  With
1e8 random input pairs in the range [LDBL_MIN, LDBL_MAX], the current
glibc implementation shows around 0.02% of results with an error of
1 ulp (23158 results), while the new implementation shows only
0.0001% of the total (111).

[1] https://arxiv.org/pdf/1904.09481.pdf
2021-12-13 09:02:34 -03:00
Wilco Dijkstra
ccfa865a82 math: Improve hypot performance with FMA
Improve hypot performance significantly by using fma when available. The
fma version has twice the throughput of the previous version and 70% of
the latency.  The non-fma version has 30% higher throughput and 10%
higher latency.

Max ULP error is 0.949 with fma and 0.792 without fma.

Passes GLIBC testsuite.
2021-12-13 09:02:34 -03:00
Wilco Dijkstra
6c848d7038 math: Use an improved algorithm for hypot (dbl-64)
This implementation is based on 'An Improved Algorithm for
hypot(a,b)' by Carlos F. Borges [1], using the MyHypot3 algorithm with
the following changes:

 - Handle qNaN and sNaN.
 - Tune the 'widely varying operands' path to avoid spurious underflow
   due to the multiplication, and fix the return value for the upward
   rounding mode.
 - Handle required underflow exception for denormal results.

The main advantage of the new algorithm is its precision: with
1e9 random input pairs in the range [DBL_MIN, DBL_MAX], the current
glibc implementation shows around 0.34% of results with an error of
1 ulp (3424869 results), while the new implementation shows only
0.002% of the total (18851).

The performance results are also only slightly worse than the current
implementation.  On x86_64 (Ryzen 5900X) with gcc 12:

Before:

  "hypot": {
   "workload-random": {
    "duration": 3.73319e+09,
    "iterations": 1.12e+08,
    "reciprocal-throughput": 22.8737,
    "latency": 43.7904,
    "max-throughput": 4.37184e+07,
    "min-throughput": 2.28361e+07
   }
  }

After:

  "hypot": {
   "workload-random": {
    "duration": 3.7597e+09,
    "iterations": 9.8e+07,
    "reciprocal-throughput": 23.7547,
    "latency": 52.9739,
    "max-throughput": 4.2097e+07,
    "min-throughput": 1.88772e+07
   }
  }

Co-Authored-By: Adhemerval Zanella  <adhemerval.zanella@linaro.org>

Checked on x86_64-linux-gnu and aarch64-linux-gnu.

[1] https://arxiv.org/pdf/1904.09481.pdf
2021-12-13 09:02:34 -03:00
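
As a rough illustration of the error-compensation idea behind the Borges
algorithm referenced above (a hedged sketch, not the actual glibc kernel,
which also handles scaling, NaN/sNaN and the underflow exception): compute
a first sqrt, then remove most of its rounding error with a single Newton
step whose residual is evaluated with fma.

  #include <math.h>

  static double
  hypot_sketch (double x, double y)
  {
    double ax = fabs (x), ay = fabs (y);
    if (ax < ay)
      {
        double t = ax;
        ax = ay;
        ay = t;
      }
    if (ay == 0.0)
      return ax;

    double h = sqrt (ax * ax + ay * ay);

    /* Residual of h*h - (ax*ax + ay*ay); fma recovers the rounding error
       of each product so the Newton correction h -= res / (2h) is
       accurate.  */
    double hh = h * h, xx = ax * ax, yy = ay * ay;
    double res = (hh - xx - yy)
                 + fma (h, h, -hh) - fma (ax, ax, -xx) - fma (ay, ay, -yy);
    return h - res / (2.0 * h);
  }
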
Adhemerval Zanella
7fe0ace3e2 math: Simplify hypotf implementation
Use a more optimized comparison to check for NaN and infinity, and
add an inlined issignaling implementation for float.  With gcc it
results in 2 FP comparisons.

The file copyright is also changed to GPL: the implementation was
completely changed by 7c10fd3515 to use double precision instead of
scaling, and this change removes all the GET_FLOAT_WORD usage.

Checked on x86_64-linux-gnu.
2021-12-13 09:02:30 -03:00
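
A sketch of what an inlined float issignaling can look like under IEEE
binary32 (exponent all ones, quiet bit clear, non-zero payload); this is
illustrative, not necessarily the helper the commit above adds.

  #include <stdint.h>
  #include <string.h>

  static inline int
  issignaling_flt_sketch (float x)
  {
    uint32_t ix;
    memcpy (&ix, &x, sizeof ix);                 /* Bit pattern of x.  */
    return (ix & 0x7fc00000u) == 0x7f800000u     /* Exp all ones, quiet bit 0.  */
           && (ix & 0x003fffffu) != 0;           /* Non-zero payload, so not Inf.  */
  }
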
Siddhesh Poyarekar
5afe4c0d69 Cleanup encoding in comments
Replace non-UTF-8 and non-ASCII characters in comments with their UTF-8
equivalents so that files don't end up with mixed encodings.  With this,
all files (except tests that actually test different encodings) have a
single encoding.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2021-12-13 10:01:45 +05:30
Siddhesh Poyarekar
23645707f1 Replace --enable-static-pie with --disable-default-pie
Build glibc programs and tests as PIE by default and enable static-pie
automatically if the architecture and toolchain supports it.

Also add a new configuration option --disable-default-pie to prevent
building programs as PIE.

Only the following architectures now have PIE disabled by default,
because they do not work at the moment: hppa, ia64, alpha and csky
don't work because the linker is unable to handle a pcrel relocation
generated from PIE objects.  The microblaze compiler is currently
failing with an ICE.  GNU hurd tries to enable static-pie, which does
not work and hence fails.  All these targets have default PIE disabled
at the moment and I have left it to the target maintainers to enable PIE
on their targets.

build-many-glibcs runs clean for all targets.  I also tested x86_64 on
Fedora and Ubuntu, to verify that the default build as well as
--disable-default-pie work as expected with both system toolchains.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2021-12-13 08:08:59 +05:30
Samuel Thibault
556a6126f8 hurd: Add rules for static PIE build
This fixes [BZ #28671].
2021-12-12 00:42:13 +01:00
Samuel Thibault
26803075e4 hurd: Fix gmon-static
We need to use crt0 for gmon-static too.
2021-12-12 00:42:12 +01:00
H.J. Lu
ea5814467a x86-64: Remove LD_PREFER_MAP_32BIT_EXEC support [BZ #28656]
Remove the LD_PREFER_MAP_32BIT_EXEC environment variable support since
the first PT_LOAD segment is no longer executable due to defaulting to
-z separate-code.

This fixes [BZ #28656].

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2021-12-10 14:01:34 -08:00
Florian Weimer
f1eeef945d elf: Use errcode instead of (unset) errno in rtld_chain_load 2021-12-10 21:34:30 +01:00
H.J. Lu
fc2334ab32 Add a testcase to check alignment of PT_LOAD segment [BZ #28676] 2021-12-10 11:26:08 -08:00
Rongwei Wang
718fdd87b1 elf: Properly align PT_LOAD segments [BZ #28676]
When the PT_LOAD segment alignment is greater than the page size,
allocate enough space to ensure that the segment can be properly
aligned.  This change makes it simple and practical for code segments
to use huge pages.

This fixes [BZ #28676].

Signed-off-by: Xu Yu <xuyu@linux.alibaba.com>
Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
2021-12-10 11:25:37 -08:00
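
The usual technique for honouring an alignment larger than the page size
is to over-reserve and trim; a hedged sketch follows (function and
variable names are illustrative, align is assumed to be a power of two,
and the actual change lives in the dynamic loader's segment-mapping code,
not in a standalone helper like this).

  #include <stdint.h>
  #include <sys/mman.h>

  static void *
  map_aligned_sketch (size_t len, size_t align)
  {
    size_t maplen = len + align;
    char *map = mmap (NULL, maplen, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (map == MAP_FAILED)
      return NULL;

    uintptr_t start = ((uintptr_t) map + align - 1) & ~(align - 1);
    uintptr_t end = (uintptr_t) map + maplen;

    /* Unmap the unaligned head and the unused tail, keeping
       [start, start + len) as the aligned reservation.  */
    if (start > (uintptr_t) map)
      munmap (map, start - (uintptr_t) map);
    if (end > start + len)
      munmap ((void *) (start + len), end - (start + len));
    return (void *) start;
  }
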
Florian Weimer
2e75604f83 elf: Install a symbolic link to ld.so as /usr/bin/ld.so
This makes ld.so features such as --preload, --audit,
and --list-diagnostics more accessible to end users because they
do not need to know the ABI name of the dynamic loader.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2021-12-10 16:06:58 +01:00
Florian Weimer
5cc3385654 nptl: Add one more barrier to nptl/tst-create1
Without the bar_ctor_finish barrier, it was possible that thread2
re-locked user_lock before ctor had a chance to lock it.  ctor then
blocked in its locking operation, xdlopen from the main thread
did not return, and thread2 was stuck waiting in bar_dtor:

thread 1: started.
thread 2: started.
thread 2: locked user_lock.
constructor started: 0.
thread 1: in ctor: started.
thread 3: started.
thread 3: done.
thread 2: unlocked user_lock.
thread 2: locked user_lock.

Fixes the test in commit 83b5323261
("elf: Avoid deadlock between pthread_create and ctors [BZ #28357]").

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2021-12-10 11:51:25 +01:00
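
A minimal sketch of the ordering the added barrier enforces, assuming a
two-party barrier initialized elsewhere with pthread_barrier_init (..., 2);
the identifiers are illustrative, not the actual nptl/tst-create1 ones.

  #include <pthread.h>

  static pthread_barrier_t ctor_locked;   /* Initialized for 2 waiters.  */
  static pthread_mutex_t user_lock = PTHREAD_MUTEX_INITIALIZER;

  static void
  ctor_side (void)
  {
    pthread_mutex_lock (&user_lock);
    pthread_barrier_wait (&ctor_locked);  /* Lock is held: release the peer.  */
    /* ... constructor work ... */
    pthread_mutex_unlock (&user_lock);
  }

  static void
  thread2_side (void)
  {
    pthread_barrier_wait (&ctor_locked);  /* Wait until the ctor holds the lock.  */
    pthread_mutex_lock (&user_lock);      /* Cannot overtake the ctor any more.  */
    pthread_mutex_unlock (&user_lock);
  }
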
Florian Weimer
627f5ede70 Remove TLS_TCB_ALIGN and TLS_INIT_TCB_ALIGN
TLS_INIT_TCB_ALIGN is not actually used.  TLS_TCB_ALIGN was likely
introduced to support a configuration where the thread pointer
does not have the same alignment as THREAD_SELF.  Only ia64 seems to use
that, but for the stack/pointer guard, not for storing tcbhead_t.
Some ports use TLS_TCB_OFFSET and TLS_PRE_TCB_SIZE to shift
the thread pointer, potentially landing in a different residue class
modulo the alignment, but the changes should not impact that.

In general, given that TLS variables have their own alignment
requirements, having different alignment for the (unshifted) thread
pointer and struct pthread would potentially result in dynamic
offsets, leading to more complexity.

hppa had different values before: __alignof__ (tcbhead_t), which
seems to be 4, and __alignof__ (struct pthread), which was 8
(old default) and is now 32.  However, it defines THREAD_SELF as:

/* Return the thread descriptor for the current thread.  */
# define THREAD_SELF \
  ({ struct pthread *__self;			\
	__self = __get_cr27();			\
	__self - 1;				\
   })

So the thread pointer points after struct pthread (hence __self - 1),
and they have to have the same alignment on hppa as well.

Similarly, on ia64, the definitions were different.  We have:

# define TLS_PRE_TCB_SIZE \
  (sizeof (struct pthread)						\
   + (PTHREAD_STRUCT_END_PADDING < 2 * sizeof (uintptr_t)		\
      ? ((2 * sizeof (uintptr_t) + __alignof__ (struct pthread) - 1)	\
	 & ~(__alignof__ (struct pthread) - 1))				\
      : 0))
# define THREAD_SELF \
  ((struct pthread *) ((char *) __thread_self - TLS_PRE_TCB_SIZE))

And TLS_PRE_TCB_SIZE is a multiple of the struct pthread alignment
(confirmed by the new _Static_assert in sysdeps/ia64/libc-tls.c).

On m68k, we have a larger gap between tcbhead_t and struct pthread.
But as far as I can tell, the port is fine with that.  The definition
of TCB_OFFSET is sufficient to handle the shifted TCB scenario.

This fixes commit 23c77f6018
("nptl: Increase default TCB alignment to 32").

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2021-12-09 23:47:49 +01:00
Florian Weimer
a41c8e9235 nptl: rseq failure after registration on main thread is fatal
This simplifies the application programming model.

Browser sandboxes have already been fixed:

  Sandbox is incompatible with rseq registration
  <https://bugzilla.mozilla.org/show_bug.cgi?id=1651701>

  Allow rseq in the Linux sandboxes. r=gcp
  <https://hg.mozilla.org/mozilla-central/rev/042425712eb1>

  Sandbox needs to support rseq system call
  <https://bugs.chromium.org/p/chromium/issues/detail?id=1104160>

  Linux sandbox: Allow rseq(2)
  <https://chromium.googlesource.com/chromium/src.git/+/230675d9ac8f1>

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2021-12-09 09:49:32 +01:00
Florian Weimer
c901c3e764 nptl: Add public rseq symbols and <sys/rseq.h>
The relationship between the thread pointer and the rseq area
is made explicit.  The constant offset can be used by JIT compilers
to optimize rseq access (e.g., for really fast sched_getcpu).

Extensibility is provided through __rseq_size and __rseq_flags.
(In the future, the kernel could request a different rseq size
via the auxiliary vector.)

Co-Authored-By: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2021-12-09 09:49:32 +01:00
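
A sketch of the fast-path access this ABI is meant to enable, assuming
exported __rseq_offset/__rseq_size variables and a compiler that provides
__builtin_thread_pointer (treat both as assumptions here; __rseq_size is 0
when no rseq area was registered).

  #include <sys/rseq.h>

  static inline int
  rseq_cpu_sketch (void)
  {
    if (__rseq_size == 0)
      return -1;                           /* rseq area not registered.  */
    struct rseq *rs = (struct rseq *)
      ((char *) __builtin_thread_pointer () + __rseq_offset);
    return (int) rs->cpu_id;
  }
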
Florian Weimer
e3e589829d nptl: Add glibc.pthread.rseq tunable to control rseq registration
This tunable allows applications to register the rseq area instead
of glibc.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2021-12-09 09:49:32 +01:00
Florian Weimer
1d350aa060 Linux: Use rseq to accelerate sched_getcpu
Co-Authored-By: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2021-12-09 09:49:32 +01:00
Florian Weimer
95e114a091 nptl: Add rseq registration
The rseq area is placed directly into struct pthread.  rseq
registration failure is not treated as an error, so it is possible
that threads run with inconsistent registration status.

<sys/rseq.h> is not yet installed as a public header.

Co-Authored-By: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
2021-12-09 09:49:32 +01:00
Florian Weimer
8d1927d8dc nptl: Introduce THREAD_GETMEM_VOLATILE
This will be needed for rseq TCB access.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2021-12-09 09:49:32 +01:00
Florian Weimer
ce2248ab91 nptl: Introduce <tcb-access.h> for THREAD_* accessors
These are common between most architectures.  Only the x86 targets
are outliers.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2021-12-09 09:49:32 +01:00
Florian Weimer
8dbeb0561e nptl: Add <thread_pointer.h> for defining __thread_pointer
<tls.h> already contains a definition that is quite similar,
but it is not consistent across architectures.

Only architectures for which rseq support is added are covered.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2021-12-09 09:49:32 +01:00
John David Anglin
409a735816 String: test-memcpy used unaligned types for buffers [BZ 28572]
commit d585ba47fc
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Mon Nov 1 00:49:48 2021 -0500

    string: Make tests birdirectional test-memcpy.c

That commit added tests where src/dst are not 4-byte aligned.  Since
src/dst are initialized/compared as the uint32_t type, which is 4-byte
aligned, this can break on some targets.

Fix the issue by introducing a new unaligned 4-byte type,
`unaligned_uint32_t`, and using it for src/dst.

Another alternative is to rely on memcpy/memcmp for
initializing/testing src/dst. Using memcpy for initializing in memcpy
tests, however, could lead to future bugs.

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
2021-12-07 22:19:50 -06:00
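
One common way to spell such an unaligned 4-byte type is a packed,
may_alias wrapper; a hedged sketch (the test's actual unaligned_uint32_t
typedef may be spelled differently).

  #include <stdint.h>

  typedef struct
  {
    uint32_t v;
  } __attribute__ ((packed, may_alias)) unaligned_u32;

  /* Accesses through 'unaligned_u32 *' are then valid at any offset, e.g.:
       ((unaligned_u32 *) (buf + 1))->v = 0x01020304u;  */
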
Aurelien Jarno
cbab7f7268 localedef: check magic value on archive load [BZ #28650]
localedef currently blindly trusts the archive header.  When passed an
archive file with the wrong endianness, this leads to a segmentation
fault:

  $ localedef --big-endian --list-archive /usr/lib/locale/locale-archive
  Segmentation fault (core dumped)

When passed non-archive files, assertion failures are reported in the
best case, but sometimes it can lead to a segmentation fault:

  $ localedef --list-archive /bin/true
  localedef: programs/locarchive.c:1643: show_archive_content: Assertion `used < GET (head->namehash_used)' failed.
  Aborted (core dumped)

  $ localedef --list-archive /usr/lib/locale/C.utf8/LC_COLLATE
  Segmentation fault (core dumped)

This patch improves the user experience by looking at the magic value,
which is always written but was never checked.  It should still be
possible to trigger a segmentation fault with crafted files, but this
already catches many cases.
2021-12-07 23:32:53 +01:00
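
A generic sketch of the kind of guard this adds: validate the magic field
(and the mapping size) before trusting anything else in the header.
Struct and constant names are illustrative, not the actual
programs/locarchive.c ones.

  #include <stddef.h>
  #include <stdint.h>

  struct archive_head_sketch
  {
    uint32_t magic;
    /* ... hash table sizes, string table offsets, ...  */
  };

  static int
  archive_usable (const void *mapped, size_t len, uint32_t expected_magic)
  {
    const struct archive_head_sketch *head = mapped;
    if (len < sizeof *head || head->magic != expected_magic)
      return 0;    /* Not a locale archive, or wrong endianness.  */
    return 1;
  }
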
H.J. Lu
ceeffe968c x86: Don't set Prefer_No_AVX512 for processors with AVX512 and AVX-VNNI
Don't set Prefer_No_AVX512 on processors with AVX512 and AVX-VNNI since
they won't lower CPU frequency when ZMM load and store instructions are
used.
2021-12-06 07:14:12 -08:00
Adhemerval Zanella
a329f68f2e linux: Add generic ioctl implementation
The powerpc implementation is refactored to use the default one.
2021-12-06 08:03:18 -03:00
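
The generic wrapper essentially forwards one variadic word to the kernel;
a sketch using syscall(2) in place of glibc's internal syscall macros
(which the real file uses).

  #include <stdarg.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int
  ioctl_sketch (int fd, unsigned long int request, ...)
  {
    va_list ap;
    va_start (ap, request);
    void *arg = va_arg (ap, void *);   /* The kernel takes a single word.  */
    va_end (ap);
    return syscall (SYS_ioctl, fd, request, arg);
  }
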
Adhemerval Zanella
00baddbb93 linux: Add generic syscall implementation
It also allows removing the hppa-specific implementation and
simplifying the riscv implementation a bit.
2021-12-06 08:03:11 -03:00
Florian Weimer
68007900be misc, nptl: Remove stray references to __condvar_load_64_relaxed
The function was renamed to __atomic_wide_counter_load_relaxed
in commit 8bd336a00a ("nptl: Extract
<bits/atomic_wide_counter.h> from pthread_cond_common.c").
2021-12-06 08:01:08 +01:00
Florian Weimer
4fb4e7e821 csu: Always use __executable_start in gmon-start.c
Current binutils defines __executable_start as the lowest text
address, so using the entry point address as a fallback is no
longer necessary.  As a result, overriding <entry.h> is only
necessary if the entry point is not called _start.

The previous approach of defining __ASSEMBLY__ to suppress the
declaration breaks if headers included by <entry.h> are not
compatible with __ASSEMBLY__.  This happens with rseq integration
because it is necessary to include kernel headers in more places.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2021-12-05 13:50:21 +01:00
Florian Weimer
c1cb2deeca elf: execve statically linked programs instead of crashing [BZ #28648]
Programs without dynamic dependencies and without a program
interpreter are now run via execve.

Previously, the dynamic linker either crashed while attempting to
read a non-existing dynamic segment (looking for DT_AUDIT/DT_DEPAUDIT
data), or the self-relocation in the static PIE executable crashed
because the outer dynamic linker had already applied RELRO protection.

<dl-execve.h> is needed because execve is not available in the
dynamic loader on Hurd.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2021-12-05 11:28:34 +01:00
H.J. Lu
bada2e312a Add --with-timeoutfactor=NUM to specify TIMEOUTFACTOR
On Ice Lake and Tiger Lake laptops, some test programs time out when
there are 3 "make check -j8" runs in parallel.  Add
--with-timeoutfactor=NUM to specify an integer to scale the timeout of
test programs, which can be overridden by the TIMEOUTFACTOR environment
variable.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2021-12-04 12:58:28 -08:00
Noah Goldstein
4df1fa6ddc x86-64: Use notl in EVEX strcmp [BZ #28646]
notl %edi must be used here, as the lower bits are for CHAR comparisons
that are potentially out of range and thus can be 0 without indicating a
mismatch.  This fixes BZ #28646.

Co-Authored-By: H.J. Lu <hjl.tools@gmail.com>
2021-12-03 21:14:11 -08:00
Florian Weimer
23c77f6018 nptl: Increase default TCB alignment to 32
rseq support will use a 32-byte aligned field in struct pthread,
so the whole struct needs to have at least that alignment.

nptl/tst-tls3mod.c uses TCB_ALIGNMENT, therefore include <descr.h>
to obtain the fallback definition.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2021-12-03 20:43:31 +01:00
Luca Boccassi
0656b649c5 elf: add definition for ELF_NOTE_FDO and NT_FDO_PACKAGING_METADATA note
As defined at https://systemd.io/COREDUMP_PACKAGE_METADATA/,
this note will be used starting from Fedora 36.

Signed-off-by: Luca Boccassi <bluca@debian.org>
2021-12-02 23:01:51 +01:00
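
For reference, the note described by that specification uses the owner
name "FDO" and a JSON payload; a sketch of the constants as given in the
spec (the exact elf.h spellings added by the commit should be checked in
the tree).

  /* Owner ("name") of the note and its type, per the systemd
     COREDUMP_PACKAGE_METADATA specification; the descriptor is a
     NUL-terminated JSON string describing the package.  */
  #define ELF_NOTE_FDO                 "FDO"
  #define NT_FDO_PACKAGING_METADATA    0xcafe1a7e
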
Wilco Dijkstra
b31bd11454 AArch64: Improve A64FX memcpy
v2 is a complete rewrite of the A64FX memcpy. Performance is improved
by streamlining the code, aligning all large copies and using a single
unrolled loop for all sizes. The code size for memcpy and memmove goes
down from 1796 bytes to 868 bytes. Performance is better in all cases:
bench-memcpy-random is 2.3% faster overall, bench-memcpy-large is ~33%
faster for large sizes, bench-memcpy-walk is 25% faster for small sizes
and 20% for the largest sizes. The geomean of all tests in bench-memcpy
is 5.1% faster, and total time is reduced by 4%.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2021-12-02 18:36:03 +00:00
Wilco Dijkstra
b51eb35c57 AArch64: Optimize memcmp
Rewrite memcmp to improve performance. On small and medium inputs performance
is 10-20% better. Large inputs use a SIMD loop processing 64 bytes per
iteration, which is 30-50% faster depending on the size.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2021-12-02 18:35:53 +00:00
Matheus Castanho
d120fb9941 powerpc64[le]: Fix CFI and LR save address for asm syscalls [BZ #28532]
Syscalls based on the assembly templates are missing CFI for r31, which
gets clobbered when scv is used, and the info for LR is inaccurate: it is
placed at the wrong LOC and does not use the proper offset.  LR was also
being saved to the callee's frame, while the ABI mandates it to be saved
to the caller's frame.  These issues are fixed by this commit.

After this change:

$ readelf -wF libc.so.6 | grep 0004b9d4.. -A 7 && objdump --disassemble=kill libc.so.6
00004a48 0000000000000020 00004a4c FDE cie=00000000 pc=000000000004b9d4..000000000004ba3c
   LOC           CFA      r31   ra
000000000004b9d4 r1+0     u     u
000000000004b9e4 r1+48    u     u
000000000004b9e8 r1+48    c-16  u
000000000004b9fc r1+48    c-16  c+16
000000000004ba08 r1+48    c-16
000000000004ba18 r1+48    u
000000000004ba1c r1+0     u

libc.so.6:     file format elf64-powerpcle

Disassembly of section .text:

000000000004b9d4 <kill>:
   4b9d4:       1f 00 4c 3c     addis   r2,r12,31
   4b9d8:       2c c3 42 38     addi    r2,r2,-15572
   4b9dc:       25 00 00 38     li      r0,37
   4b9e0:       d1 ff 21 f8     stdu    r1,-48(r1)
   4b9e4:       20 00 e1 fb     std     r31,32(r1)
   4b9e8:       98 8f ed eb     ld      r31,-28776(r13)
   4b9ec:       10 00 ff 77     andis.  r31,r31,16
   4b9f0:       1c 00 82 41     beq     4ba0c <kill+0x38>
   4b9f4:       a6 02 28 7d     mflr    r9
   4b9f8:       40 00 21 f9     std     r9,64(r1)
   4b9fc:       01 00 00 44     scv     0
   4ba00:       40 00 21 e9     ld      r9,64(r1)
   4ba04:       a6 03 28 7d     mtlr    r9
   4ba08:       08 00 00 48     b       4ba10 <kill+0x3c>
   4ba0c:       02 00 00 44     sc
   4ba10:       00 00 bf 2e     cmpdi   cr5,r31,0
   4ba14:       20 00 e1 eb     ld      r31,32(r1)
   4ba18:       30 00 21 38     addi    r1,r1,48
   4ba1c:       18 00 96 41     beq     cr5,4ba34 <kill+0x60>
   4ba20:       01 f0 20 39     li      r9,-4095
   4ba24:       40 48 23 7c     cmpld   r3,r9
   4ba28:       20 00 e0 4d     bltlr+
   4ba2c:       d0 00 63 7c     neg     r3,r3
   4ba30:       08 00 00 48     b       4ba38 <kill+0x64>
   4ba34:       20 00 e3 4c     bnslr+
   4ba38:       c8 32 fe 4b     b       2ed00 <__syscall_error>
        ...
   4ba44:       40 20 0c 00     .long 0xc2040
   4ba48:       68 00 00 00     .long 0x68
   4ba4c:       06 00 5f 5f     rlwnm   r31,r26,r0,0,3
   4ba50:       6b 69 6c 6c     xoris   r12,r3,26987
2021-11-30 15:18:52 -03:00
Adhemerval Zanella
efc6b2dbc4 linux: Implement pipe in terms of __NR_pipe2
The syscall pipe2 was added in linux 2.6.27 and glibc requires linux
3.2.0.  The patch removes the arch-specific implementations for alpha,
ia64, mips, sh, and sparc, which require a different kernel ABI
than the usual one.

Checked on x86_64-linux-gnu and with a build for the affected ABIs.
2021-11-30 13:13:03 -03:00
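
The resulting wrapper is essentially pipe2 with a zero flags argument; a
sketch (the real file goes through glibc's internal syscall macros rather
than the public pipe2).

  #define _GNU_SOURCE
  #include <unistd.h>

  int
  pipe_sketch (int fds[2])
  {
    return pipe2 (fds, 0);
  }
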
Adhemerval Zanella
5b3e31e312 linux: Implement mremap in C
Variadic function calls in syscalls.list do not work for all ABIs
(for instance, where the arguments are passed on the stack instead of
registers) and might have underlying issues depending on the variadic
type (for instance, if a 64-bit argument is used).

Checked on x86_64-linux-gnu.
2021-11-30 13:13:03 -03:00
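
A sketch of why a C implementation is needed here: the optional fifth
argument must be fetched with va_arg only when MREMAP_FIXED is set, which
a syscalls.list entry cannot express.  Illustrative code using syscall(2)
instead of the internal macros.

  #define _GNU_SOURCE
  #include <stdarg.h>
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  void *
  mremap_sketch (void *addr, size_t old_len, size_t new_len, int flags, ...)
  {
    void *new_addr = NULL;
    if (flags & MREMAP_FIXED)
      {
        va_list ap;
        va_start (ap, flags);
        new_addr = va_arg (ap, void *);
        va_end (ap);
      }
    return (void *) syscall (SYS_mremap, addr, old_len, new_len, flags,
                             new_addr);
  }
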
Adhemerval Zanella
83008fa495 linux: Add prlimit64 C implementation
The LFS prlimit64 requires an arch-specific implementation in
syscalls.list.  Instead add a generic one that handles the
required symbol alias for __RLIM_T_MATCHES_RLIM64_T.

HPPA is the only outlier which requires a different default
symbol.

Checked on x86_64-linux-gnu and with build for the affected ABIs.
2021-11-30 13:13:03 -03:00
Florian Weimer
df4cb2280e elf: Include <stdbool.h> in tst-tls20.c
The test uses the bool type.
2021-11-30 15:39:17 +01:00
Florian Weimer
3c7c511782 elf: Include <stdint.h> in tst-tls20.c
The test uses standard integer types.
2021-11-30 14:35:54 +01:00
Samuel Thibault
e49c3c5d7a hurd: Let report-wait use a weak reference to _hurd_itimer_thread
libc.so.0.3 does not seem to need this defined any more.
2021-11-28 21:26:25 +01:00