The first segment in a shared library may be read-only, not executable.
To support LD_PREFER_MAP_32BIT_EXEC on such shared libraries, we also
check MAP_DENYWRITE to decide if MAP_32BIT should be passed to mmap.
Normally the first segment is mapped with MAP_COPY, which is defined
as (MAP_PRIVATE | MAP_DENYWRITE). But if the segment alignment is
greater than the page size, MAP_COPY isn't used to allocate enough
space to ensure that the segment can be properly aligned. Map the
first segment with MAP_COPY in this case to fix BZ #30452.
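For illustration, a minimal sketch of the flag decision (simplified and not the actual dl-map-segments code; MAP_COPY here stands for MAP_PRIVATE | MAP_DENYWRITE):
#include <sys/mman.h>
/* Sketch only: pass MAP_32BIT for executable segments, or for the
   first (possibly read-only) segment of a shared library, which is
   mapped with MAP_COPY == (MAP_PRIVATE | MAP_DENYWRITE).  */
static int
add_map_32bit_flag (int prot, int flags)
{
  if ((prot & PROT_EXEC) != 0 || (flags & MAP_DENYWRITE) != 0)
    flags |= MAP_32BIT;
  return flags;
}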
Optimised implementations for single and double precision, Advanced
SIMD and SVE, copied from Arm Optimized Routines.
As previously, data tables are used via a barrier to prevent
overly aggressive constant inlining. Special-case handlers are
marked NOINLINE to avoid incurring the penalty of switching call
standards unnecessarily.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
Optimised implementations for single and double precision, Advanced
SIMD and SVE, copied from Arm Optimized Routines. Log lookup table
added as HIDDEN symbol to allow it to be shared between AdvSIMD and
SVE variants.
As previously, data tables are used via a barrier to prevent
overly aggressive constant inlining. Special-case handlers are
marked NOINLINE to avoid incurring the penalty of switching call
standards unnecessarily.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
Optimised implementations for single and double precision, Advanced
SIMD and SVE, copied from Arm Optimized Routines.
As previously, data tables are used via a barrier to prevent
overly aggressive constant inlining. Special-case handlers are
marked NOINLINE to avoid incurring the penalty of switching call
standards unnecessarily.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
Replace the loop-over-scalar placeholder routines with optimised
implementations from Arm Optimized Routines (AOR).
Also add some headers containing utilities for aarch64 libmvec
routines, and update libm-test-ulps.
Data tables for new routines are used via a pointer with a
barrier on it, in order to prevent overly aggressive constant
inlining in GCC. This allows a single adrp, combined with offset
loads, to be used for every constant in the table.
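For illustration, a minimal sketch of the barrier idiom (the table name and contents below are made up; the real tables live in the AdvSIMD/SVE routine sources):
static const struct
{
  double poly[4];
  double invln2;
} data = { { 0x1p-1, 0x1.5555555555555p-3, 0x1p-5, 0x1p-7 },
           0x1.71547652b82fep0 };
/* Hiding the table's address behind an empty asm keeps GCC from
   folding individual entries into inline constants, so every access
   compiles to one adrp plus an offset load.  */
#define ptr_barrier(x)                    \
  ({ __typeof (x) __x = (x);              \
     __asm__ ("" : "+r" (__x));           \
     __x; })
#define d ptr_barrier (&data)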
Special-case handlers are marked NOINLINE in order to confine the
save/restore overhead of switching from vector to normal calling
standard. This way we only incur the extra memory access in the
exceptional cases. NOINLINE definitions have been moved to
math_private.h in order to reduce duplication.
AOR exposes a config option, WANT_SIMD_EXCEPT, to enable
selective masking (and later fixing up) of invalid lanes, in
order to trigger fp exceptions correctly (AdvSIMD only). This is
tested and maintained in AOR, however it is configured off at
source level here for performance reasons. We keep the
WANT_SIMD_EXCEPT blocks in routine sources to greatly simplify
the upstreaming process from AOR to glibc.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
Linux 6.4 adds the riscv_hwprobe syscall on riscv and enables
memfd_secret on s390. Update syscall-names.list and regenerate the
arch-syscall.h headers with build-many-glibcs.py update-syscalls.
Tested with build-many-glibcs.py.
Trying to mount procfs can fail for multiple reasons: proc is locked
due to the container configuration, the mount syscall is filtered by a
Linux Security Module, or any other security or hardening mechanism
that Linux might eventually add.
The test does require a new procfs without binding to the parent, and
fully fixing it would require changing how the container is created
(which is out of the scope of the test itself). Instead of trying to
foresee every possible scenario, fail as unsupported if procfs cannot
be mounted.
Checked on aarch64-linux-gnu.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
The tst-ttyname-direct.c test checks ttyname with procfs mounted in
bind mode (MS_BIND|MS_REC), while tst-ttyname-namespace.c checks it
with procfs mounted with MS_NOSUID|MS_NOEXEC|MS_NODEV in a new
namespace.
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
These files could be useful to any port that wants to use ld.so.cache.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
A few tests needed to properly check the asprintf and system call return
values with _FORTIFY_SOURCE enabled.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
The fread routine return value needs to be checked when fortification
is enabled, hence use the xfread helper.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
On i386 and x86_64, for libc.a specifically, __mempcpy_chk calls
mempcpy, which leads POSIX routines to call the non-POSIX mempcpy
indirectly. This causes the linknamespace test to fail when glibc is
built with _FORTIFY_SOURCE=3.
Since calling mempcpy doesn't bring any benefit for libc.a, directly
call __mempcpy instead.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Replace alloca with a scratch_buffer to avoid potential stack overflows.
Checked on i686-gnu and x86_64-linux-gnu
Message-Id: <20230619144334.2902429-1-josimmon@redhat.com>
There is a potential memory leak for large writes due to writev being a
"shall occur" cancellation point. Add back the cleanup handler removed
in cf30aa43a5.
Checked on i686-gnu and x86_64-linux-gnu.
Message-Id: <20230619143842.2901522-1-josimmon@redhat.com>
With fortification enabled, the return value of read calls needs to be
checked, as it gets the __wur macro enabled.
Note on read call removal from sysdeps/pthread/tst-cancel20.c and
sysdeps/pthread/tst-cancel21.c:
It is assumed that this second read call was there to overcome the race
condition between pipe closure and thread cancellation that could happen
in the original code. Since this race condition got fixed by
d0e3ffb7a5, the second call seems
superfluous. Hence, instead of checking for the return value of read, it
looks reasonable to simply remove it.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Use a scratch_buffer rather than alloca to avoid potential stack
overflows.
Checked on i686-gnu and x86_64-linux-gnu
Message-Id: <20230608155844.976554-1-josimmon@redhat.com>
These functions are about to be added to POSIX, under Austin Group
issue 986.
The fortified strlcat implementation does not raise SIGABRT if the
destination buffer does not contain a null terminator, it just
inherits the non-failing regular strlcat behavior.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
With fortification enabled, the return value of fgets calls needs to be
checked, as it gets the __wur macro enabled.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Different systems prefer different divisors.
From benchmarks[1] so far the following divisors have been found:
ICX : 2
SKX : 2
BWD : 8
For Intel, we are generalizing that BWD and older prefers 8 as a
divisor, and SKL and newer prefers 2. This number can be further tuned
as benchmarks are run.
[1]: https://github.com/goldsteinn/memcpy-nt-benchmarks
Reviewed-by: DJ Delorie <dj@redhat.com>
This patch should have no effect on existing functionality.
The current code, which has a single switch for model detection and
setting preferred features, is difficult to follow/extend. The cases
use magic numbers and many microarchitectures are missing. This makes
it difficult to reason about what is implemented so far and/or
how/where to add support for new features.
This patch splits the model detection and preference setting stages so
that CPU preferences can be set based on a complete list of available
microarchitectures, rather than based on model magic numbers.
Reviewed-by: DJ Delorie <dj@redhat.com>
Currently `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
ncores_per_socket'. This patch updates that value to roughly
'sizeof_L3 / 4'.
The original value (specifically dividing the `ncores_per_socket`) was
done to limit the amount of other threads' data a `memcpy`/`memset`
could evict.
Dividing by 'ncores_per_socket', however, leads to exceedingly low
non-temporal thresholds and leads to using non-temporal stores in
cases where REP MOVSB is multiple times faster.
Furthermore, non-temporal stores are written directly to main memory
so using them at a size much smaller than L3 can place soon-to-be-accessed
data much further away than it otherwise could be. As well,
modern machines are able to detect streaming patterns (especially if
REP MOVSB is used) and provide LRU hints to the memory subsystem. This
in effect caps the total amount of eviction at 1/cache_associativity,
far below meaningfully thrashing the entire cache.
As best I can tell, the benchmarks that led to this small threshold
were done comparing non-temporal stores versus standard cacheable
stores. A better comparison (linked below) is to REP MOVSB which,
on the measured systems, is nearly 2x faster than non-temporal stores
at the low-end of the previous threshold, and within 10% for over
100MB copies (well past even the current threshold). In cases with a
low number of threads competing for bandwidth, REP MOVSB is ~2x faster
up to `sizeof_L3`.
The divisor of `4` is a somewhat arbitrary value. From benchmarks it
seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
such as Broadwell prefer something closer to `8`. This patch is meant
to be followed up by another one to make the divisor cpu-specific, but
in the meantime (and for easier backporting), this patch settles on
`4` as a middle-ground.
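As a worked example with illustrative numbers (not taken from the patch): on a hypothetical machine with a 32 MiB shared L3 and 16 cores per socket, the old formula gives 3/4 * 32 MiB / 16 = 1.5 MiB, while the new one gives 32 MiB / 4 = 8 MiB, so REP MOVSB keeps being used well past the old cut-over point.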
Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
stores were done using:
https://github.com/goldsteinn/memcpy-nt-benchmarks
Sheets results (also available in pdf on the github):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
Reviewed-by: DJ Delorie <dj@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Container management default seccomp filter [1] only accepts
personality(2) with PER_LINUX (0x0), UNAME26 (0x20000),
PER_LINUX32 (0x8), UNAME26 | PER_LINUX32, and 0xffffffff (to query the
current personality).
Although the documentation only states it is blocked to prevent
'enabling BSD emulation' (PER_BSD, not implemented by Linux), checking
the repository log shows the real reason is to block the ASLR disable
flag (ADDR_NO_RANDOMIZE) and other poorly supported emulations.
So handle EPERM and fail as UNSUPPORTED if we cannot really check for
BZ#19408.
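For illustration, a hedged sketch of the test logic (not the exact test source; FAIL_UNSUPPORTED comes from support/check.h):
#include <errno.h>
#include <sys/personality.h>
#include <support/check.h>
static void
check_seccomp_filter (void)
{
  /* 0xffffffff only queries the current personality; if even that is
     filtered, the environment cannot exercise BZ#19408.  */
  if (personality (0xffffffff) == -1 && errno == EPERM)
    FAIL_UNSUPPORTED ("personality(2) is filtered by a seccomp policy");
}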
Checked on aarch64-linux-gnu.
[1] https://github.com/moby/moby/blob/master/profiles/seccomp/default.json
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Since the area of the user's stack we use for the registers dump (and
otherwise as __sigreturn2's stack) can and does overlap the sigcontext,
we have to be very careful about the order of loads and stores that we
do. In particular we have to load sc_reply_port before we start
clobbering the sigcontext.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
epoll_pwait2(2)'s second argument should be nonnull. We're going to add
__nonnull to the prototype, so let's fix the test accordingly. We can
use a dummy variable to avoid passing NULL.
Reported-by: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
Signed-off-by: Alejandro Colomar <alx@kernel.org>
With fortification enabled, a few function call return values need to be
checked, as they get the __wur macro enabled.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Since the assembly source file with -evex suffix should use YMM registers,
not ZMM registers, include x86-evex256-vecs.h by default to use YMM
registers in memcmpeq-evex.S
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
Unlike other 64-bit architectures, powerpc64 defines the
LFS POSIX lock constants with values similar to the 32-bit ABI, which
are meant to be used with the fcntl64 syscall. Since the powerpc64 kABI
does not have fcntl, the constants are adjusted with the
FCNTL_ADJUST_CMD macro.
Commit 4d0fe291ae changed the logic so that the generic constants'
LFS values are equal to the default values, which is now wrong
for powerpc64.
Fix the values by explicitly defining the previous glibc constants
(powerpc64 does not need to use the 32-bit kABI values, but doing so
simplifies FCNTL_ADJUST_CMD, which should be kept for compatibility).
Checked on powerpc64-linux-gnu and powerpc-linux-gnu.
For architectures with default 64-bit time_t support, the kernel
does not provide LFS and non-LFS values for F_GETLK, F_SETLK, and
F_SETLKW (the default values used for 64-bit architectures are used).
This might be considered an ABI break, but the current exported
values are bogus anyway.
The POSIX lockf is not affected since it is aliased to lockf64,
which already uses the LFS values.
Checked on i686-linux-gnu and the new tests on a riscv32.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
The LoongArch glibc was using the value of the SHMLBA macro from common code,
which is __getpagesize() (16k), but this was inconsistent with the value of
the SHMLBA macro in the kernel, which is SZ_64K (64k). This caused several
shmat-related tests in LTP (Linux Test Project) to fail. This commit fixes
the issue by ensuring that glibc's SHMLBA macro value matches the value
used in the kernel like other architectures.
Use a scratch_buffer rather than either alloca or malloc to reduce the
possibility of a stack overflow.
Suggested-by: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
If `non_temporal_threshold` is below `minimum_non_temporal_threshold`,
it almost certainly means we failed to read the system's cache info.
In this case, rather than defaulting to the minimum correct value, we
should default to a value that gets at least reasonable
performance. 64MB is chosen conservatively to be at the very high
end. This should never cause non-temporal stores to be used in cases
where, had we read the cache info, they wouldn't have been used.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Linux 6.3 adds new constants MFD_NOEXEC_SEAL and MFD_EXEC. Add these
to bits/mman-shared.h (conditional on MFD_NOEXEC_SEAL not already
being defined, similar to the existing conditional on the older MFD_*
macros).
Tested for x86_64.
All fixes are in comments, so the binaries should be identical
before/after this commit, but I can't verify this.
Reviewed-by: Rajalakshmi Srinivasaraghavan <rajis@linux.ibm.com>
Applying this commit results in a bit-identical rebuild of
mathvec/libmvec.so.1 (which is the only binary that gets rebuilt).
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
We do not want mach_i386.h to get installed into machine/, but into
i386/ or x86_64/ depending where mach_i386.defs was found, i.e.
according to 32/64 bitness.
Some of the s390-specific configure checks are using compile and
link configure tests. Now use only compile tests as the link
tests fails when e.g. bootstrapping a cross-toolchain due to
missing crt-files/libc.so. This is achieved by using
AC_COMPILE_IFELSE in configure.ac file.
This is observable e.g. when using buildroot which builds glibc
only once or the build-many-glibcs.py script. Note that the latter
one is building glibc twice in the compilers-step (configure-checks
fail) and in the glibcs-step (configure-checks succeed).
Note that the s390-specific configure tests for static PIE have to
link an executable to test binutils support. Thus we can't fix
those tests.
The __hurd_fail () inline function is the dedicated, idiomatic way of
reporting errors in the Hurd part of glibc. Not only is it more concise
than '{ errno = err; return -1; }', it is since commit
6639cc1002
"hurd: Mark error functions as __COLD" marked with the cold attribute,
telling the compiler that this codepath is unlikely to be executed.
In one case, use __hurd_dfail () over the plain __hurd_fail ().
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230520115531.3911877-1-bugaevc@gmail.com>
Create a private hidden __hurd_thread_self alias, and use that one.
Fixes 2f8ecb58a5
"hurd: Fix x86_64 _hurd_tls_fork" and
c7fcce38c8
"hurd: Make sure to not use tcb->self"
Reported-by: Joseph Myers <joseph@codesourcery.com>
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Linux 6.3 adds six HWCAP2_SME* constants for AArch64; add them to the
corresponding bits/hwcap.h in glibc.
Tested with build-many-glibcs.py for aarch64-linux-gnu.
strlen, which is another ifunc-selected function, is invoked during
early static executable startup if the argv arrives from the exec
server. Make it not crash.
Checked on x86_64-gnu: statically linked executables launched after the
exec server is up now start up successfully.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230517191436.73636-10-bugaevc@gmail.com>
On x86_64, we have to pass function arguments in registers, not on the
stack. We also have to align the stack pointer in a specific way. Since
sharing the logic with i386 does not bring much benefit, split the file
back into i386- and x86_64-specific versions, and fix the x86_64 version
to set up the thread properly.
Bonus: i386 keeps doing the extra RPC inside __thread_set_pcsptp to
fetch the state of the thread before setting it; but x86_64 no lnoger
does that.
Checked on x86_64-gnu and i686-gnu.
Fixes be6d002ca2
"hurd: Set up the basic tree for x86_64-gnu"
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230517191436.73636-9-bugaevc@gmail.com>
It is illegal to call thread_get_state () on mach_thread_self (), so
this codepath cannot be used as-is to fork the calling thread's TLS.
Fortunately we can use THREAD_SELF (aka %fs:0x0) to find out the value
of our fs_base without calling into the kernel.
Fixes: f6cf701efc
"hurd: Implement TLS for x86_64"
Checked on x86_64-gnu: fork () now works!
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230517191436.73636-8-bugaevc@gmail.com>
Unlike sigstate->thread, tcb->self did not hold a Mach port reference on
the thread port it names. This means that the port can be deallocated,
and the name reused for something else, without anyone noticing. Using
tcb->self will then lead to port use-after-free.
Fortunately nothing was accessing tcb->self, other than it being
intially set to then-valid thread port name upon TCB initialization. To
assert that this keeps being the case without altering TCB layout,
rename self -> self_do_not_use, and stop initializing it.
Also, do not (re-)allocate a whole separate and unused stack for the
main thread, and just exit __pthread_setup early in this case.
Found upon attempting to use tcb->self and getting unexpected crashes.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230517191436.73636-7-bugaevc@gmail.com>
...instead of mach_setup_thread (), which is unsuitable for setting up
function calls.
Checked on x86_64-gnu: the signal thread no longer crashes upon trying
to process a message.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230517191436.73636-6-bugaevc@gmail.com>
The existing two macros, MACHINE_THREAD_STATE_SET_PC and
MACHINE_THREAD_STATE_SET_SP, can be used to set program counter and the
stack pointer registers in a machine-specific thread state structure.
Useful as it is, this may not be enough to set up the thread to make a
function call, because the machine-specific ABI may impose additional
requirements. In particular, x86_64 ABI requires that upon function
entry, the stack pointer is 8 less than 16-byte aligned (sp & 15 == 8).
To deal with this, introduce a new macro,
MACHINE_THREAD_STATE_SETUP_CALL (), which sets both stack and
instruction pointers, and also applies any machine-specific requirements
to make a valid function call. The default implementation simply
forwards to MACHINE_THREAD_STATE_SET_PC and MACHINE_THREAD_STATE_SET_SP,
but on x86_64 we additionally align the stack pointer.
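For illustration, a sketch of the x86_64 variant (argument names and the exact SET_PC/SET_SP signatures are simplified, not copied from the real thread_state.h):
#include <stdint.h>
#define MACHINE_THREAD_STATE_SETUP_CALL(ts, stack, size, pc)          \
  do                                                                  \
    {                                                                 \
      uintptr_t sp_ = (uintptr_t) (stack) + (size);                   \
      /* Round down to 16 bytes, then subtract 8 so that              \
         (sp & 15) == 8 at function entry, as the psABI expects       \
         after a call has pushed the return address.  */              \
      sp_ = (sp_ & ~(uintptr_t) 15) - 8;                              \
      MACHINE_THREAD_STATE_SET_SP (ts, sp_);                          \
      MACHINE_THREAD_STATE_SET_PC (ts, pc);                           \
    }                                                                 \
  while (0)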
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230517191436.73636-3-bugaevc@gmail.com>
This hasn't caused any problems yet but we are passing a pointer to struct
task_thread_times_info, which can cause problems if we write past the
existing size of the struct.
Message-Id: <ZGRDDNcOM2hA3CuT@jupiter.tail36e24.ts.net>
This patch updates the kernel version in the tests tst-mman-consts.py,
tst-mount-consts.py and tst-pidfd-consts.py to 6.3. (There are no new
constants covered by these tests in 6.3 that need any other header
changes.)
Tested with build-many-glibcs.py.
So I was able to reproduce the hangs in the original source, and debug
it, and fix it. In doing so, I realized that we can't use anything
complex to trigger the thread because that "anything" might also cause
the expected segfault and force everything out of sync again.
Here's what I ended up with, and it doesn't seem to hang where the
original one hung quite often (in a tight while..end loop). The key
changes are:
1. Calls to futex are error checked, with retries, to ensure that the
futexes are actually doing what they're supposed to be doing. In the
original code, nearly every futex call returned an error.
2. The main loop has checks for whether the thread ran or not, and
"unlocks" the thread if it didn't (this is how the original source
hangs).
Note: the usleep() is not for timing purposes, but just to give the
kernel an excuse to run the other thread at that time. The test will
not hang without it, but is more likely to test the right bugfix
if the usleep() is present.
The real i386_thread_state Mach structure has an alignment of 8 on
x86_64. However, in struct sigcontext, the compiler was packing sc_gs
(which is the first member of sc_i386_thread_state) into the same 8-byte
slot as sc_error; this resulted in the rest of sc_i386_thread_state
members having wrong offsets relative to each other, and the overall
sc_i386_thread_state layout mismatching that of i386_thread_state.
Fix this by explicitly adding the required padding members, and
statically asserting that this results in the desired alignment.
The same goes for sc_i386_float_state.
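For illustration, a compilable sketch of the approach (field names and types are hypothetical, not the actual Hurd struct sigcontext):
#include <stddef.h>
struct sigcontext_example
{
  int sc_onstack;
  int sc_mask;
  int sc_error;
  int sc_pad0;          /* Explicit padding: without it, sc_gs would share
                           the 8-byte slot occupied by sc_error, shifting
                           every later member relative to the real
                           thread-state layout.  */
  /* Members of the thread state, spelled out inline: */
  unsigned int sc_gs;
  unsigned int sc_fs;
  unsigned long sc_rax;
  /* ... */
};
_Static_assert (offsetof (struct sigcontext_example, sc_gs) % 8 == 0,
                "inlined thread state must be 8-byte aligned");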
Checked on x86_64-gnu.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230515083323.1358039-4-bugaevc@gmail.com>
sizeof (*stackframe) appears to be divisible by 16, but we should not
rely on that. So make sure to leave enough space for the stackframe
first, and then align the final pointer at 16 bytes.
Checked on x86_64-gnu.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230515083323.1358039-3-bugaevc@gmail.com>
Fixes 60f9bf9746
"hurd: Port trampoline.c to x86_64"
Checked on x86_64-gnu.
Reported-by: Bruno Haible <bruno@clisp.org>
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230515083323.1358039-2-bugaevc@gmail.com>
Reflow Makefile.
Sort using scripts/sort-makefile-lines.py.
No code generation changes observed in binary artifacts.
No regressions on x86_64 and i686.
Linux 6.3 has no new syscalls. Update the version number in
syscall-names.list to reflect that it is still current for 6.3.
Tested with build-many-glibcs.py.
While mach/kern_return.h happens to pull mach/machine/kern_return.h,
mach/machine/boolean.h, and mach/machine/vm_types.h (and realpath-ing them
exposes the machine-specific machine symlink content), those headers do not
actually define anything machine-specific for the content of errno.h.
So we can just rule these machine-specific headers out of the
dependency comment.
We already did the same change for Hurd
(https://git.savannah.gnu.org/cgit/hurd/hurd.git/commit/?id=ef5924402864ef049f40a39e73967628583bc1a4)
Due to MiG requiring the subsystem to be defined early in order to know the
size of a port, this was causing a division by zero error during ./configure.
We could have just moved subsystem to the top of the snippet, however it is
simpler to just remove the check given that we have no plans to use some other
MiG anyway.
HAVE_MIG_RETCODE is removed completely since this will be a no-op either
way (compiling against old Hurd headers will work the same, new Hurd
headers will result in the same stubs since retcode is a no-op).
Message-Id: <ZFspor91aoMwbh9T@jupiter.tail36e24.ts.net>
This patch redirects the error functions to the appropriate
long double variants, which enables the compiler to optimize
for the ieeelongdouble ABI.
Signed-off-by: Sachin Monga <smonga@linux.ibm.com>
Reflow all long lines adding comment terminators.
Sort all reflowed text using scripts/sort-makefile-lines.py.
No code generation changes observed in binary artifacts.
No regressions on x86_64 and i686.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Summary of changes:
- Use BAD_TYPECHECK to perform type checking in a cleaner way.
BAD_TYPECHECK is moved into sysdeps/mach/rpc.h to avoid duplication.
- Remove assertions for mach_msg_type_t since those won't work for
x86_64.
- Update message structs to use mach_msg_type_t directly.
- Use designated initializers.
Message-Id: <ZFa+roan3ioo0ONM@jupiter.tail36e24.ts.net>
Summary of the changes:
- Update msg_align to use ALIGN_UP like we have done in previous
patches. Use it below whenever necessary to avoid repeating the same
alignment logic.
- Define BAD_TYPECHECK to make it easier to do type checking in a few
places below.
- Update io2mach_type to use designated initializers.
- Make RetCodeType use mach_msg_type_t. mach_msg_type_t is 8 bytes on
x86_64, so this makes it portable.
- Also call msg_align for _IOT_COUNT2/_IOT_TYPE2 since it is more
correct.
Message-Id: <ZFMvVsuFKwIy2dUS@jupiter.tail36e24.ts.net>
arm_sve.h depends on stdint.h but that relies on libc headers unless
compiled in freestanding mode. Without this change a bootstrap glibc
build (that uses a compiler without installed libc headers) failed with
checking for availability of SVE ACLE... In file included from [...]/arm_sve.h:28,
from conftest.c:1:
[...]/stdint.h:9:16: fatal error: stdint.h: No such file or directory
9 | # include_next <stdint.h>
| ^~~~~~~~~~
compilation terminated.
configure: error: mathvec is enabled but compiler does not have SVE ACLE. [...]
This patch enables libmvec on AArch64. The proposed change is mainly
implementing build infrastructure to add the new routines to ABI,
tests and benchmarks. I have demonstrated how this all fits together
by adding implementations for vector cos, in both single and double
precision, targeting both Advanced SIMD and SVE.
The implementations of the routines themselves are just loops over the
scalar routine from libm for now, as we are more concerned with
getting the plumbing right at this point. We plan to contribute vector
routines from the Arm Optimized Routines repo that are compliant with
requirements described in the libmvec wiki.
Building libmvec requires minimum GCC 10 for SVE ACLE. To avoid raising
the minimum GCC by such a big jump, we allow users to disable libmvec
if their compiler is too old.
Note that at this point users have to manually call the vector math
functions. This seems to be acceptable to some downstream users.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
dev_t is 64-bit on Linux ports, so better increase its size on 64-bit
Hurd. It happens that this helps with BZ 23084 there: st_dev has type fsid_t
(quad) and is specified by POSIX to have type dev_t. Making dev_t 64bit
makes these match.
The standards want msg_lspid/msg_lrpid/shm_cpid/shm_lpid to be pid_t, see BZ
23083 and 23085.
We can leave them __rpc_pid_t on i386 for ABI compatibility, but avoid
hitting the issue on 64bit.
The standards want uid/cuid to be uid_t, gid/cgid to be gid_t and mode to be
mode_t, see BZ 23082.
We can leave them short ints on i386 for ABI compatibility, but avoid
hitting the issue on 64bit.
bits/ipc.h ends up being exactly the same in sysdeps/gnu/ and
sysdeps/unix/sysv/linux/, so remove the latter.
The standards want l_type and l_whence to be short ints, see BZ 23081.
We can leave them ints on i386 for ABI compatibility, but avoid hitting the
issue on 64bit.
These were created by creating stub files, running 'make update-abi',
and reviewing the results.
Also, set the baseline ABI to GLIBC_2.38, the (upcoming) first glibc
release to have x86_64-gnu support.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
If we're trying to interrupt an interruptible RPC, but the server fails
to respond to our __interrupt_operation () call, we instead destroy the
reply port we were expecting the reply to the RPC on.
Instead of deallocating the name completely, replace it with a dead
name, so the name won't get reused for some other right, and deallocate
it in _hurd_intr_rpc_mach_msg once we return from the signal handler.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230429201822.2605207-4-bugaevc@gmail.com>
We make lib{mach,hurd}user.so only call __mig_strlen which can be
relocated before libc.so is relocated, similar to what is done with
__mig_memcpy.
Message-Id: <ZE8DTRDpY2hpPZlJ@jupiter.tail36e24.ts.net>
Normally, in static builds, the first code that runs is _start, in e.g.
sysdeps/x86_64/start.S, which quickly calls __libc_start_main, passing
it the argv etc. Among the first things __libc_start_main does is
initializing the tunables (based on env), then CPU features, and then
calls _dl_relocate_static_pie (). Specifically, this runs ifunc
resolvers to pick, based on the CPU features discovered earlier, the
most suitable implementation of "string" functions such as memcpy.
Before that point, calling memcpy (or other ifunc-resolved functions)
will not work.
In the Hurd port, things are more complex. In order to get argv/env for
our process, glibc normally needs to do an RPC to the exec server,
unless our args/env are already located on the stack (which is what
happens to bootstrap processes spawned by GNU Mach). Fetching our
argv/env from the exec server has to be done before the call to
__libc_start_main, since we need to know what our argv/env are to pass
them to __libc_start_main.
On the other hand, the implementation of the RPC (and other initial
setup needed on the Hurd before __libc_start_main can be run) is not
very trivial. In particular, it may (and on x86_64, will) use memcpy.
But as described above, calling memcpy before __libc_start_main can not
work, since the GOT entry for it is not yet initialized at that point.
Work around this by pre-filling the GOT entry with the baseline version
of memcpy, __memcpy_sse2_unaligned. This makes it possible for early
calls to memcpy to just work. The initial value of the GOT entry is
unused on x86_64, and changing it won't interfere with the relocation
being performed later: once _dl_relocate_static_pie () is called, the
baseline version will get replaced with the most suitable one, and that
is what subsequent calls of memcpy are going to call.
Checked on x86_64-gnu.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230429201822.2605207-6-bugaevc@gmail.com>
Checked on x86_64-gnu.
[samuel.thibault@ens-lyon.org: Restored same comments as on i386]
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230429201822.2605207-3-bugaevc@gmail.com>
If any of the early boot-up tasks calls exit () or returns from main (),
terminate it properly instead of crashing on trying to dereference
_hurd_ports and getting forcibly terminated by the kernel.
We sadly cannot make the __USEPORT macro do the check for _hurd_ports
being unset, because it evaluates to the value of the expression
provided as the second argument, and that can be of any type; so there
is no single suitable fallback value for the macro to evaluate to in
case _hurd_ports is unset. Instead, each use site that wants to care for
this case will have to do its own checking.
Checked on x86_64-gnu.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230429131354.2507443-4-bugaevc@gmail.com>
There are reports of a hang in __check_pf:
https://github.com/JoeDog/siege/issues/4
It is reproducible only under specific configurations:
1. Large number of cores (>= 64) and large number of threads (> 3X of
the number of cores) with long lived socket connection.
2. Low power (frequency) mode.
3. Power management is enabled.
While holding lock, __check_pf calls make_request which calls __sendto
and __recvmsg. Since __sendto and __recvmsg are cancellation points,
lock held by __check_pf won't be released and can cause deadlock when
thread cancellation happens in __sendto or __recvmsg. Add a cancellation
cleanup handler for __check_pf to unlock the lock when cancelled by
another thread. This fixes BZ #20975 and the siege hang issue.
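For illustration, a hedged sketch of the idea (the real fix uses glibc-internal lock and cleanup primitives rather than the public pthread API):
#include <pthread.h>
static pthread_mutex_t pf_lock = PTHREAD_MUTEX_INITIALIZER;
static void
unlock_pf (void *arg)
{
  pthread_mutex_unlock (arg);
}
static void
check_pf_like (void)
{
  pthread_mutex_lock (&pf_lock);
  pthread_cleanup_push (unlock_pf, &pf_lock);
  /* make_request () calls __sendto/__recvmsg, both cancellation
     points; if the thread is cancelled here, the cleanup handler
     releases the lock instead of leaving it held forever.  */
  pthread_cleanup_pop (0);
  pthread_mutex_unlock (&pf_lock);
}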
In some cases, we do not want to go through the resolver for function
calls. For example, functions with vector arguments will use vector
registers to pass arguments. In the resolver, we do not save/restore the
vector argument registers for lazy binding efficiency. To avoid ruining
the vector arguments, functions with vector arguments will not go
through the resolver.
To achieve the goal, we will annotate the function symbols with
STO_RISCV_VARIANT_CC flag and add DT_RISCV_VARIANT_CC tag in the dynamic
section. In the first pass on PLT relocations, we do not set up to call
_dl_runtime_resolve. Instead, we resolve the functions directly.
Signed-off-by: Hsiangkai Wang <kai.wang@sifive.com>
Signed-off-by: Vincent Chen <vincent.chen@sifive.com>
Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
Link: https://inbox.sourceware.org/libc-alpha/20230314162512.35802-1-kito.cheng@sifive.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
The build of glibc for i686-gnu has been failing for a while with GCC
mainline / GCC 13:
../sysdeps/mach/hurd/getcwd.c: In function '__hurd_canonicalize_directory_name_internal':
../sysdeps/mach/hurd/getcwd.c:242:48: error: pointer 'file_name' may be used after 'realloc' [-Werror=use-after-free]
242 | file_namep = &buf[file_namep - file_name + size / 2];
| ~~~~~~~~~~~^~~~~~~~~~~
../sysdeps/mach/hurd/getcwd.c:236:25: note: call to 'realloc' here
236 | buf = realloc (file_name, size);
| ^~~~~~~~~~~~~~~~~~~~~~~~~
Fix by doing the subtraction before the reallocation.
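For illustration, the shape of the fix in isolation (simplified; not a copy of getcwd.c):
#include <stdlib.h>
/* Sketch: grow the buffer without touching the old pointer after
   realloc; returns the new buffer and updates *namep.  */
static char *
grow_buffer (char *file_name, char **namep, size_t size)
{
  size_t offset = *namep - file_name;      /* Subtract first ...  */
  char *buf = realloc (file_name, size);
  if (buf == NULL)
    return NULL;
  *namep = &buf[offset + size / 2];        /* ... then use only buf.  */
  return buf;
}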
Tested with build-many-glibcs.py for i686-gnu.
[samuel.thibault@ens-lyon.org: Removed mention of this being a bug]
Message-Id: <18587337-7815-4056-ebd0-724df262d591@codesourcery.com>
As fixed in 0822e3552a ("hurd: Don't pass FD_CLOEXEC in CMSG_DATA"),
senders currently don't have any flag to pass. We shouldn't blindly take
random flags that senders could be erroneously giving us.
This is a new flag that can be passed to recvmsg () to make it
atomically set the CLOEXEC flag on all the file descriptors received
using the SCM_RIGHTS mechanism. This is useful for all the same reasons
that the other XXX_CLOEXEC flags are useful: namely, it provides
atomicity with respect to another thread of the same process calling
(fork and then) exec at the same time.
This flag is already supported on Linux and FreeBSD. The flag's value,
0x40000, is chosen to match FreeBSD's.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230423160548.126576-2-bugaevc@gmail.com>
The flags are used by _hurd_intern_fd, which takes O_* flags, not FD_*.
Also, it is of no concern to the receiving process whether or not
the sender process wants to close its copy of sent file descriptor
upon exec, and it should not influence whether or not the received
file descriptor gets the FD_CLOEXEC flag set in the receiving process.
The latter should in fact be dependent on the MSG_CMSG_CLOEXEC flag
being passed to the recvmsg () call, which is going to be implemented
in the following commit.
Fixes 344e755248
"hurd: Support sending file descriptors over Unix sockets"
Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
This makes the prefer_map_32bit_exec tunable no longer Linux-specific.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230423215526.346009-4-bugaevc@gmail.com>
This is a flag that can be passed to mmap () to request that the mapping
being established should be located in the lower 2 GB area of the
address space, so only the lower 31 (not 32) bits can be set in its
address, and the address can be represented as a 32-bit integer without
truncating it.
This flag is intended to be compatible with Linux, FreeBSD, and Darwin
flags of the same name. Out of those systems, it appears Linux and
FreeBSD take MAP_32BIT to mean "map 31 bit", whereas Darwin allows the
32nd bit to be set in the address as well. The Hurd follows Linux and
FreeBSD behavior.
Unlike on those systems, on the Hurd MAP_32BIT is defined on all
supported architectures (which currently are only i386 and x86_64).
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230423215526.346009-1-bugaevc@gmail.com>
When opening a temporary file without O_CLOEXEC we risk leaking the
file descriptor if another thread calls (fork and then) exec while we
have the fd open. Fix this by consistently passing O_CLOEXEC everywhere
where we open a file for internal use (and not to return it to the user,
in which case the API defines whether or not the close-on-exec flag
shall be set on the returned fd).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230419160207.65988-4-bugaevc@gmail.com>
Properly differentiate between setting up the real TLS with
TLS_INIT_TP, and setting up the early TLS (__init1_tcbhead) in static
builds. In the latter case, don't yet migrate the reply port into the
TCB, and don't yet set __libc_tls_initialized to 1.
This also lets us move the __init1_desc assignment inside
_hurd_tls_init ().
Fixes cd019ddd89
"hurd: Don't leak __hurd_reply_port0"
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Created tunable glibc.pthread.stack_hugetlb to control when hugepages
can be used for stack allocation.
In case THP is enabled and glibc.pthread.stack_hugetlb is set to
0, glibc will madvise the kernel not to allow hugepages for stack
allocations.
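For illustration, a sketch of the effect (not the exact allocatestack.c code):
#include <sys/mman.h>
/* With glibc.pthread.stack_hugetlb=0, newly mapped thread stacks get
   advised away from transparent hugepages.  */
static void
disable_thp_for_stack (void *mem, size_t size)
{
  (void) madvise (mem, size, MADV_NOHUGEPAGE);
}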
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
We must not use the user's reply port (scp->sc_reply_port) for any of
our own RPCs, otherwise various things break. So, use MACH_PORT_DEAD as
a reply port when destroying our reply port, and make sure to do this
after _hurd_sigstate_unlock (), which may do a gsync_wake () RPC.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Optimize the fast paths (x < y) and (x/y < 2^12). Delay handling of special
cases to reduce the number of instructions executed before the fast paths.
Performance improvements for fmod:
Input            | Skylake | Zen2   | Neoverse V1
-----------------|---------|--------|------------
subnormals       | 11.8%   | 4.2%   | 11.5%
normal           | 3.9%    | 0.01%  | -0.5%
close-exponents  | 6.3%    | 5.6%   | 19.4%
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
When glibc is built as a shared library, TLS is always initialized by
the call of TLS_INIT_TP () macro made inside the dynamic loader, prior
to running the main program (see dl-call_tls_init_tp.h). We can take
advantage of this: we know for sure that __LIBC_NO_TLS () will evaluate
to 0 in all other cases, so let the compiler know that explicitly too.
Also, only define _hurd_tls_init () and TLS_INIT_TP () under the same
conditions (either !SHARED or inside rtld), to statically assert that
this is the case.
Other than a microoptimization, this also helps with avoiding awkward
sharing of the __libc_tls_initialized variable between ld.so and libc.so
that we would have to do otherwise -- we know for sure that no sharing
is required, simply because __libc_tls_initialized would always be set
to true inside libc.so.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230319151017.531737-25-bugaevc@gmail.com>
Now that the signal code no longer accesses it, the only real user of it
was mig-reply.c, so move the logic for managing the port there.
If we're in SHARED and outside of rtld, we know that __LIBC_NO_TLS ()
always evaluates to 0, and a TLS reply port will always be used, not
__hurd_reply_port0. Still, the compiler does not see that
__hurd_reply_port0 is never used due to its address being taken. To deal
with this, explicitly compile out __hurd_reply_port0 when we know we
won't use it.
Also, instead of accessing the port via THREAD_SELF->reply_port, this
uses THREAD_GETMEM and THREAD_SETMEM directly, avoiding possible
miscompilations.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
This reverts commit b37899d34d.
Apparently we load libc.so (and thus start using its functions) before
calling TLS_INIT_TP, so libc.so functions should not actually assume
that TLS is always set up.
Previously, once we set up TLS, we would implicitly switch from using
__hurd_reply_port0 to reply_port inside the TCB, leaving the former
unused. But we never deallocated it, so it got leaked.
Instead, migrate the port into the new TCB's reply_port slot. This
avoids both the port leak and an extra syscall to create a new reply
port for the TCB.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230319151017.531737-28-bugaevc@gmail.com>
If we're doing signals, that means we've already got the signal thread
running, and that implies TLS having been set up. So we know that
__hurd_local_reply_port will resolve to THREAD_SELF->reply_port, and can
access that directly using the THREAD_GETMEM and THREAD_SETMEM macros.
This avoids potential miscompilations, and should also be a tiny bit
faster.
Also, use mach_port_mod_refs () and not mach_port_destroy () to destroy
the receive right. mach_port_destroy () should *never* be used on
mach_task_self (); this can easily lead to port use-after-free
vulnerabilities if the task has any other references to the same port.
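For illustration, the safer pattern in isolation (standard Mach API; not the exact glibc change):
#include <mach.h>
/* Drop a single receive-right reference on PORT instead of calling
   mach_port_destroy (), which would tear down every right the task
   holds under that name.  */
static void
release_receive_right (mach_port_t port)
{
  mach_port_mod_refs (mach_task_self (), port,
                      MACH_PORT_RIGHT_RECEIVE, -1);
}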
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230319151017.531737-26-bugaevc@gmail.com>
When glibc is built as a shared library, TLS is always initialized by
the call of TLS_INIT_TP () macro made inside the dynamic loader, prior
to running the main program (see dl-call_tls_init_tp.h). We can take
advantage of this: we know for sure that __LIBC_NO_TLS () will evaluate
to 0 in all other cases, so let the compiler know that explicitly too.
Also, only define _hurd_tls_init () and TLS_INIT_TP () under the same
conditions (either !SHARED or inside rtld), to statically assert that
this is the case.
Other than a microoptimization, this also helps with avoiding awkward
sharing of the __libc_tls_initialized variable between ld.so and libc.so
that we would have to do otherwise -- we know for sure that no sharing
is required, simply because __libc_tls_initialized would always be set
to true inside libc.so.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230319151017.531737-25-bugaevc@gmail.com>
These are just regular local variables that are not accessed in any
funny ways, not even through a pointer. There's absolutely no reason to
declare them volatile. It only ends up hurting the quality of the
generated machine code.
If anything, it would make sense to declare sigsp as *pointing* to
volatile memory (volatile void *sigsp), but evidently that's not needed
either.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230403115621.258636-2-bugaevc@gmail.com>
This is based on the Linux port's version, but laid out to match Mach's
struct i386_thread_state, much like the i386 version does.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Rename x86_cpu_INDEX_7_ECX_1 to x86_cpu_INDEX_7_ECX_15 for the unused bit
15 in ECX from CPUID with EAX == 0x7 and ECX == 0.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
sysdeps/mach/hurd/htl/pt-pthread_self.c: New file.
htl/Makefile: Add it to the libc routines.
sysdeps/mach/hurd/htl/pt-sysdep.c (__pthread_self): Remove it.
sysdeps/mach/hurd/htl/pt-sysdep.h (__pthread_self): Add hidden property.
htl/Versions (__pthread_self): Version it as a private symbol.
Signed-off-by: Guy-Fleury Iteriteka <gfleury@disroot.org>
Message-Id: <20230318095826.1125734-3-gfleury@disroot.org>
As indicated by sparc kernel-features.h, even though sparc64 defines
__NR_pause, it is not supported (ENOSYS). Always use ppoll or the
64 bit time_t variant instead.
The error handling is moved to sysdeps/ieee754 version with no SVID
support. The compatibility symbol versions still use the wrapper
with SVID error handling around the new code. There is no new symbol
version nor compatibility code on !LIBM_SVID_COMPAT targets
(e.g. riscv).
The ia64 is unchanged, since it still uses the arch specific
__libm_error_region on its implementation. For both i686 and m68k,
which provide arch-specific implementations, wrappers are added so
no new symbols are added (which would require changing the
implementations).
It shows a small improvement; the results for fmod:
Architecture | Input | master | patch
-----------------|-----------------|----------|--------
x86_64 (Ryzen 9) | subnormals | 12.5049 | 9.40992
x86_64 (Ryzen 9) | normal | 296.939 | 296.738
x86_64 (Ryzen 9) | close-exponents | 16.0244 | 13.119
aarch64 (N1) | subnormal | 6.81778 | 4.33313
aarch64 (N1) | normal | 155.620 | 152.915
aarch64 (N1) | close-exponents | 8.21306 | 5.76138
armhf (N1) | subnormal | 15.1083 | 14.5746
armhf (N1) | normal | 244.833 | 241.738
armhf (N1) | close-exponents | 21.8182 | 22.457
Checked on x86_64-linux-gnu, i686-linux-gnu, and aarch64-linux-gnu.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
This uses a new algorithm similar to already proposed earlier [1].
With x = mx * 2^ex and y = my * 2^ey (mx, my, ex, ey being integers),
the simplest implementation is:
mx * 2^ex == 2 * mx * 2^(ex - 1)
while (ex > ey)
{
mx *= 2;
--ex;
mx %= my;
}
With mx/my being the mantissas of the floating-point numbers, on each
step the argument reduction can be improved by 8 bits (which is the
number of bits in uint32_t minus MANTISSA_WIDTH minus the sign bit):
while (ex > ey)
{
mx <<= 8;
ex -= 8;
mx %= my;
}
The implementation uses builtin clz and ctz, along with shifts to
convert hx/hy back to doubles. Different than the original patch,
this patch assumes the modulo/divide operation is slow, so it uses
multiplication with inverted values.
I see the following performance improvements using fmod benchtests
(result only show the 'mean' result):
Architecture | Input | master | patch
-----------------|-----------------|----------|--------
x86_64 (Ryzen 9) | subnormals | 17.2549 | 12.0318
x86_64 (Ryzen 9) | normal | 85.4096 | 49.9641
x86_64 (Ryzen 9) | close-exponents | 19.1072 | 15.8224
aarch64 (N1) | subnormal | 10.2182 | 6.81778
aarch64 (N1) | normal | 60.0616 | 20.3667
aarch64 (N1) | close-exponents | 11.5256 | 8.39685
I also see similar improvements on arm-linux-gnueabihf when running on
the N1 aarch64 chip, which uses a lot of soft-fp implementation (for
modulo and multiplication):
Architecture | Input | master | patch
-----------------|-----------------|----------|--------
armhf (N1) | subnormal | 11.6662 | 10.8955
armhf (N1) | normal | 69.2759 | 34.1524
armhf (N1) | close-exponents | 13.6472 | 18.2131
Instead of using the math_private.h definitions, I used
math_config.h, which is used on newer math implementations.
Co-authored-by: kirill <kirill.okhotnikov@gmail.com>
[1] https://sourceware.org/pipermail/libc-alpha/2020-November/119794.html
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
This uses a new algorithm similar to already proposed earlier [1].
With x = mx * 2^ex and y = my * 2^ey (mx, my, ex, ey being integers),
the simplest implementation is:
mx * 2^ex == 2 * mx * 2^(ex - 1)
while (ex > ey)
{
mx *= 2;
--ex;
mx %= my;
}
With mx/my being the mantissas of the double floating-point numbers, on
each step the argument reduction can be improved by 11 bits (which is
the number of bits in uint64_t minus MANTISSA_WIDTH minus the sign bit):
while (ex > ey)
{
mx <<= 11;
ex -= 11;
mx %= my;
}
The implementation uses builtin clz and ctz, along with shifts to
convert hx/hy back to doubles. Different than the original patch,
this patch assumes the modulo/divide operation is slow, so it uses
multiplication with inverted values.
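For illustration, a standalone version of the reduction loop above with the shift clamped to the remaining exponent difference (mx/my carry the implicit bit; this is not the actual e_fmod.c, which additionally replaces the modulo with multiplication by precomputed inverses):
#include <stdint.h>
static uint64_t
reduce (uint64_t mx, uint64_t my, int ex, int ey)
{
  while (ex > ey)
    {
      int shift = ex - ey > 11 ? 11 : ex - ey;   /* up to 64 - 52 - 1 bits  */
      mx <<= shift;
      ex -= shift;
      mx %= my;
    }
  return mx;
}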
I see the following performance improvements using fmod benchtests
(result only show the 'mean' result):
Architecture | Input | master | patch
-----------------|-----------------|----------|--------
x86_64 (Ryzen 9) | subnormals | 19.1584 | 12.5049
x86_64 (Ryzen 9) | normal | 1016.51 | 296.939
x86_64 (Ryzen 9) | close-exponents | 18.4428 | 16.0244
aarch64 (N1) | subnormal | 11.153 | 6.81778
aarch64 (N1) | normal | 528.649 | 155.62
aarch64 (N1) | close-exponents | 11.4517 | 8.21306
I also see similar improvements on arm-linux-gnueabihf when running on
the N1 aarch64 chip, which uses a lot of soft-fp implementation (for
modulo, clz, ctz, and multiplication):
Architecture | Input | master | patch
-----------------|-----------------|----------|--------
armhf (N1) | subnormal | 15.908 | 15.1083
armhf (N1) | normal | 837.525 | 244.833
armhf (N1) | close-exponents | 16.2111 | 21.8182
Instead of using the math_private.h definitions, I used
math_config.h, which is used on newer math implementations.
Co-authored-by: kirill <kirill.okhotnikov@gmail.com>
[1] https://sourceware.org/pipermail/libc-alpha/2020-November/119794.html
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Linux kernel uses AT_HWCAP2 to indicate if FSGSBASE instructions are
enabled. If the HWCAP2_FSGSBASE bit in AT_HWCAP2 is set, FSGSBASE
instructions can be used in user space. Define dl_check_hwcap2 to set
the FSGSBASE feature to active on Linux when the HWCAP2_FSGSBASE bit is
set.
Add a test to verify that FSGSBASE is active on current kernels.
NB: This test will fail if the kernel doesn't set the HWCAP2_FSGSBASE
bit in AT_HWCAP2 while fsgsbase shows up in /proc/cpuinfo.
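For illustration, a user-space detection sketch (HWCAP2_FSGSBASE is assumed to be the kernel's bit-1 value and is not necessarily exported by installed glibc headers):
#include <stdio.h>
#include <sys/auxv.h>
#ifndef HWCAP2_FSGSBASE
# define HWCAP2_FSGSBASE (1 << 1)   /* Assumed kernel value.  */
#endif
int
main (void)
{
  unsigned long hwcap2 = getauxval (AT_HWCAP2);
  printf ("FSGSBASE %s\n",
          (hwcap2 & HWCAP2_FSGSBASE) ? "enabled" : "disabled");
  return 0;
}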
Reviewed-by: Florian Weimer <fweimer@redhat.com>
The divss instruction clobbers its first argument, and the constraints
need to reflect that. Fortunately, with GCC 12, generated code does
not actually change, so there is no externally visible bug.
Suggested-by: Jakub Jelinek <jakub@redhat.com>
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
Just like the other existing rtld-str* files, this provides rtld with
usable versions of stpncpy and strncpy.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230319151017.531737-22-bugaevc@gmail.com>
The source code is the same as sysdeps/i386/htl/tcb-offsets.sym, but of
course the produced tcb-offsets.h will be different.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230319151017.531737-21-bugaevc@gmail.com>
These do not need any changes to be used on x86_64.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230319151017.531737-20-bugaevc@gmail.com>
This is more correct, if only because these fields are defined as having
the type unsigned int in the Mach headers, so casting them to a signed
int and then back is suboptimal.
Also, remove an extra reassignment of uesp -- this is another remnant of
the ecx kludge.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230319151017.531737-16-bugaevc@gmail.com>
There's nothing Mach- or Hurd-specific about it; any port that ends
up with rtld pulling in strncpy will need this.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230319151017.531737-15-bugaevc@gmail.com>
This was used for the value of libc-lock's owner when TLS is not yet set
up, so THREAD_SELF can not be used. Since the value need not be anything
specific -- it just has to be non-NULL -- we can just use a plain
constant, such as (void *) 1, for this. This avoids accessing the symbol
through GOT, and exporting it from libc.so in the first place.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230319151017.531737-12-bugaevc@gmail.com>
No one is or should be using __hurd_threadvar_stack_{offset,mask}; we
have proper TLS now. These two remaining variables are never set to
anything other than zero, so any code that would try to use them as
described would just dereference a zero pointer and crash. So remove
them entirely.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230319151017.531737-6-bugaevc@gmail.com>
And make it always supported. The configure option was added in glibc 2.25
and some features require it (such as hwcap mask, huge pages support, and
lock elision tuning). It also simplifies the build permutations.
Changes from v1:
* Remove glibc.rtld.dynamic_sort changes, it is orthogonal and needs
more discussion.
* Cleanup more code.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Prevent sh from interpreting a user string as shell options if it
starts with '-' or '+'. Since the version of /bin/sh used for testing
system() is different from the full-fledged system /bin/sh, add support
to it for handling "--" after "-c". Add a testcase to ensure the
expected behavior.
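For illustration, the shape of the change (hedged sketch; the real posix/system.c uses posix_spawn rather than execl):
#include <unistd.h>
/* Passing "--" after "-c" stops sh from treating a command that starts
   with '-' or '+' as shell options.  */
static void
run_command (const char *command)
{
  execl ("/bin/sh", "sh", "-c", "--", command, (char *) 0);
}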
Signed-off-by: Joe Simmons-Talbott <josimmon@redhat.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Instead define the required fields in system-dependent files. The only
system-dependent definition is FILENAME_MAX, which should match the
POSIX PATH_MAX, and it is obtained from either the kernel UAPI or Mach
headers. Currently it is set to a pre-defined value from current
kernels.
This avoids a circular dependency when including stdio.h in
gen-as-const-headers files.
Checked on x86_64-linux-gnu and i686-linux-gnu
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
They are both used by __libc_freeres to free all library malloc
allocated resources to help tooling like mtrace or valgrind with
memory leak tracking.
The current scheme uses assembly markers and linker script entries
to consolidate the free routine function pointers in the RELRO segment
and the to-be-freed buffers in BSS.
This patch changes it to use specific free functions for
libc_freeres_ptrs buffers and call the function pointer array directly
with call_function_static_weak.
It allows the removal of both the internal macros and the linker
script sections.
Checked on x86_64-linux-gnu, i686-linux-gnu, and aarch64-linux-gnu.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This allows other targets to use the same inputs for their own libmvec
microbenchmarks without having to duplicate them in their own
subdirectory.
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
Binutils 2.40 sets EF_LARCH_OBJABI_V1 for shared objects:
$ ld --version | head -n1
GNU ld (GNU Binutils) 2.40
$ echo 'int dummy;' > dummy.c
$ cc dummy.c -shared
$ readelf -h a.out | grep Flags
Flags: 0x43, DOUBLE-FLOAT, OBJ-v1
We need to ignore it in ldconfig or ldconfig will consider all shared
objects linked by Binutils 2.40 "unsupported". Maybe we should stop
setting EF_LARCH_OBJABI_V1 for shared objects, but Binutils 2.40 is
already released and we cannot change it.
LinuxThreads was removed about 12 years ago and the current
NPTL implementation only requires 4-byte alignment for pthread
locks.
The 16-byte alignment causes various issues. For example in
building ignition-msgs, we have:
/usr/include/google/protobuf/map.h:124:37: error: static assertion failed
124 | static_assert(alignof(value_type) <= 8, "");
| ~~~~~~~~~~~~~~~~~~~~^~~~
This is caused by the 16-byte pthread lock alignment.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
For better debug experience use separate code block with extra
cfi_* directives to run child (same as in __clone3).
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Use the clone3 wrapper on ARC. It doesn't care about stack alignment.
All callers should provide an aligned stack.
It follows the internal signature:
extern int clone3 (struct clone_args *__cl_args, size_t __size,
int (*__func) (void *__arg), void *__arg);
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Between versions v2.11 and v2.12 struct ntptimeval got new fields.
That wasn't a problem because a new function, ntp_gettimex, was created
(and made the default) to support the new struct. The old ntp_gettime
was not using the new fields, so it was safe to call it with the old
struct definition. Then, with commits 5613afe9e3 and b6ad64b907 (added
for 64-bit time_t support), ntp_gettime started setting the new fields.
Set the fields manually to maintain compatibility with the v2.11 struct
definition.
Resolves: #30156
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
According to man-pages-posix-2017, the shm_open() function may fail if the
length of the name argument exceeds {_POSIX_PATH_MAX}, setting ENAMETOOLONG.
Signed-off-by: abushwang <abushwangs@gmail.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Recorded in [BZ #30183]:
1. export GLIBC_TUNABLES=glibc.cpu.hwcaps=-AVX512
2. Add _dl_printf("p -- %s\n", p); just before switch(nl) in
sysdeps/x86/cpu-tunables.c
3. Compile and run ./testrun.sh /usr/bin/ls
you will get:
p -- -AVX512
p -- LC_ADDRESS=en_US.UTF-8
p -- LC_NUMERIC=C
...
The function, TUNABLE_CALLBACK (set_hwcaps)
(tunable_val_t *valp), checks far more than it should and it
should stop at the end of "-AVX512".
Fix bug that SIGCHLD is erroneously blocked forever in the following
scenario:
1. Thread A calls system but hasn't returned yet
2. Thread B calls another system but returns
SIGCHLD would be blocked forever in thread B after its system() returns,
even after the system() in thread A returns.
Although POSIX does not require it, the glibc system implementation aims
to be thread-safe and cancellation-safe. This bug was introduced in
5fb7fc9635 when we moved reverting the signal
mask to happen when the last concurrently running system() returns,
even though the signal mask is per-thread. This commit reverts that logic
and adds a test.
Signed-off-by: Adam Yi <ayi@janestreet.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This patch updates the kernel version in the tests tst-mman-consts.py,
tst-mount-consts.py and tst-pidfd-consts.py to 6.2. (There are no new
constants covered by these tests in 6.2 that need any other header
changes, and the removed MAP_VARIABLE for hppa was addressed
separately.)
Tested with build-many-glibcs.py.
The __builtin_arm_uqsub8 is an internal GCC builtin which might change
in a future release (the correct way is to include "arm_acle.h" and use
__uqsub8 ()). Since not all compilers support it, just use inline
assembly instead.
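A sketch of such an inline-assembly wrapper (simplified; the actual
glibc code may differ):

/* Per-byte unsigned saturating subtract; assembles only on Arm
   targets that provide the UQSUB8 instruction.  */
static inline unsigned int
uqsub8 (unsigned int a, unsigned int b)
{
  unsigned int r;
  asm ("uqsub8 %0, %1, %2" : "=r" (r) : "r" (a), "r" (b));
  return r;
}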
Checked on armv7a-linux-gnueabihf.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
The generic implementation already covers word access along with
cmpbge for both aligned and unaligned inputs, so use it instead.
Checked with qemu static for alpha-linux-gnu.
The default and power7 implementations just add word-aligned
access when inputs have the same alignment. The unaligned case
is still done by byte operations.
This is already covered by the generic implementation, which also adds
the unaligned input optimization.
Checked on powerpc64-linux-gnu built without multi-arch for powerpc64,
power7, power8, and power9 (built for LE).
Reviewed-by: Rajalakshmi Srinivasaraghavan <rajis@linux.ibm.com>
The default, power4, and power7 implementations just add word-aligned
access when inputs have the same alignment. The unaligned case
is still done by byte operations.
This is already covered by the generic implementation, which also adds
the unaligned input optimization.
Checked on powerpc-linux-gnu built without multi-arch for powerpc,
power4, and power7.
Reviewed-by: Rajalakshmi Srinivasaraghavan <rajis@linux.ibm.com>
C2x adds binary integer constants starting with 0b or 0B, and supports
those constants for the %i scanf format (in addition to the %b format,
which isn't yet implemented for scanf in glibc). Implement that scanf
support for glibc.
As with the strtol support, this is incompatible with previous C
standard versions, in that such an input string starting with 0b or 0B
was previously required to be parsed as 0 (with the rest of the input
potentially matching subsequent parts of the scanf format string).
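For illustration (my example, not part of the patch): when a call is
redirected to the new __isoc23_* entry points, the same input is
consumed differently.

#include <stdio.h>

int
main (void)
{
  int v = -1;
  /* C2x semantics: v becomes 10 and the whole "0b1010" is consumed.
     Old semantics: v becomes 0 and "b1010" is left for any following
     directives in the format string.  */
  sscanf ("0b1010", "%i", &v);
  printf ("%d\n", v);
  return 0;
}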
Thus this patch adds 12 new __isoc23_* functions per long double
format (12, 24 or 36 depending on how many long double formats the
glibc configuration supports), with appropriate header redirection
support (generally very closely following that for the __isoc99_*
scanf functions - note that __GLIBC_USE (DEPRECATED_SCANF) takes
precedence over __GLIBC_USE (C2X_STRTOL), so the case of GNU
extensions to C89 continues to get old-style GNU %a and does not get
this new feature). The function names would remain as __isoc23_* even
if C2x ends up published in 2024 rather than 2023.
When scanf %b support is added, I think it will be appropriate for all
versions of scanf to follow C2x rules for inputs to the %b format
(given that there are no compatibility concerns for a new format).
Tested for x86_64 (full glibc testsuite). The first version was also
tested for powerpc (32-bit) and powerpc64le (stdio-common/ and wcsmbs/
tests), and with build-many-glibcs.py.
Before GCC r13-2728, GCC would produce a normal dynamically linked
executable with -static-pie. I mistakenly believed it would produce a
statically linked executable, so I failed to detect the breakage. Then,
with Binutils 2.40 and (vanilla) GCC 12, libc_cv_static_pie_on_loongarch
is mistakenly enabled and causes a build failure with "undefined
reference to _DYNAMIC".
Fix the issue by disabling static PIE if -static-pie creates something
with an INTERP header.
"We don't need it any more"
The INTR_MSG_TRAP macro in intr-msg.h used to play a little trick with
the stack pointer: it would temporarily save the "real" stack pointer
into ecx, while setting esp to point to just before the message buffer,
and then invoke the mach_msg trap. This way, INTR_MSG_TRAP reused the
on-stack arguments laid out for the containing call of
_hurd_intr_rpc_mach_msg (), passing them to the mach_msg trap directly.
This, however, required special support in hurdsig.c and trampoline.c,
since they now had to recognize when a thread is inside the piece of
code where esp doesn't point to the real tip of the stack, and handle
this situation specially.
Commit 1d20f33ff4 removed the actual
temporary change of esp by re-pushing the mach_msg arguments onto
the stack and popping them back at the end. It did not, however, deal with
the rest of "the ecx kludge" code in other files, resulting in potential
crashes if a signal arrives in the middle of pushing arguments onto the
stack.
Fix that by removing "the ecx kludge". Instead, when we want a thread
to skip the RPC but cannot just make it jump to after the trap,
since it's not done adjusting the stack yet, set the SYSRETURN register
to MACH_SEND_INTERRUPTED (as we do anyway), and rely on the thread
itself for detecting this case and skipping the RPC.
This simplifies things somewhat and paves the way for a future x86_64
port of this code.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230301162355.426887-1-bugaevc@gmail.com>
The _FPU_SETCW and _FPU_GETCW macros are defined with inline assembly.
They use the sfpc and efpc instructions, respectively. But both contain
a spurious second operand that leads to a compile error with Clang.
Removing this operand works both with gcc/gas (since binutils 2.18) as
well as with clang/llvm.
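For reference, the fixed macros have roughly the following shape (a
sketch based on the description above, not a verbatim copy of
fpu_control.h):

#define _FPU_GETCW(cw) __asm__ __volatile__ ("efpc %0" : "=d" (cw))
#define _FPU_SETCW(cw) __asm__ __volatile__ ("sfpc %0" : : "d" (cw))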
Needed due to recent commits:
- "added pair of inputs for hypotf in binary32"
commit ID cf7ffdd8a5
- "update auto-libm-test-out-hypot"
commit ID 3efbf11fdf
Linux 6.2 adds six new Arm HWCAP values and two new HWCAP2 values; add
them to glibc's Arm bits/hwcap.h, with corresponding dl-procinfo.c and
dl-procinfo.h updates.
Tested with build-many-glibcs.py for arm-linux-gnueabi.
This is for future-proofing. On i386, it is 4-byte aligned anyway, but
on x86_64, we want it 8-byte aligned, not 4-byte aligned.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230214173722.428140-4-bugaevc@gmail.com>
Update libm test ulps for
commit 3efbf11fdf
Author: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Date: Tue Feb 14 11:24:59 2023 +0100
update auto-libm-test-out-hypot
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This patch implements the LoongArch-specific math barriers in order to
omit the store to and load from the stack when possible.
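As a rough illustration (a generic sketch, not the LoongArch code
itself): the generic barrier forces the value through memory, while an
arch-specific one can keep it in a register.

/* Generic version: "+m" forces a store to and a load from the stack.  */
#define math_opt_barrier(x) \
  ({ __typeof (x) __x = (x); __asm ("" : "+m" (__x)); __x; })

/* Register-based sketch: the value never leaves a register.  Real
   implementations choose the constraint ("f" for FP registers, "r"
   for general registers) according to the operand type.  */
#define math_opt_barrier_reg(x) \
  ({ __typeof (x) __x = (x); __asm ("" : "+f" (__x)); __x; })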
Signed-off-by: Xi Ruoyao <xry111@xry111.site>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The Linux kernel upstream commit 71bdea6f798b ("parisc: Align parisc
MADV_XXX constants with all other architectures") dropped the
parisc-specific MADV_* values in favour of the same constants as
other architectures. In the same commit a wrapper was added which
translates the old values to the standard MADV_* values to avoid
breakage of existing programs.
This upstream patch has been downported to all stable kernel trees as
well.
This patch now drops the parisc-specific constants from glibc to
allow newly compiled programs to use the standard MADV_* constants.
v2: Added NEWS section, based on feedback from Florian Weimer
Signed-off-by: Helge Deller <deller@gmx.de>
This drops all of the return address rewriting kludges. The only
remaining hack is the jump out of a call stack while adjusting the
stack pointer.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Linux 6.2 has no new syscalls. Update the version number in
syscall-names.list to reflect that it is still current for 6.2.
Tested with build-many-glibcs.py.
Crossing 2GB boundaries with indirect calls and jumps can use more
branch prediction resources on Intel Golden Cove CPU (see the
"Misprediction for Branches >2GB" section in Intel 64 and IA-32
Architectures Optimization Reference Manual.) There is visible
performance improvement on workloads with many PLT calls when executable
and shared libraries are mmapped below 2GB. Add the Prefer_MAP_32BIT_EXEC
bit so that mmap will try to map executable or denywrite pages in shared
libraries with MAP_32BIT first.
NB: Prefer_MAP_32BIT_EXEC reduces bits available for address space
layout randomization (ASLR), which is always disabled for SUID programs
and can only be enabled by the tunable, glibc.cpu.prefer_map_32bit_exec,
or the environment variable, LD_PREFER_MAP_32BIT_EXEC. This works only
between shared libraries or between shared libraries and executables with
addresses below 2GB. PIEs are usually loaded at a random address above
4GB by the kernel.
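For illustration (the value 1 is an assumption; only the tunable and
environment variable names above come from this patch):
$ LD_PREFER_MAP_32BIT_EXEC=1 ./application
$ GLIBC_TUNABLES=glibc.cpu.prefer_map_32bit_exec=1 ./application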
Linux 6.2 removed the hppa compatibility MAP_VARIABLE define. That
means that, whether or not we remove it in glibc, it needs to be
ignored in tst-mman-consts.py (since this macro comparison
infrastructure expects that new kernel header versions only add new
macros, not remove old ones).
Tested with build-many-glibcs.py for hppa-linux-gnu (Linux 6.2
headers).
Fix the computation to allow for cntfrq_el0 being larger than 1GHz.
Assume cntfrq_el0 is a multiple of 1MHz to increase the maximum
interval (1024 seconds at 1GHz).
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
It fixes the build after 7ea510127e and 22999b2f0f.
Checked with a build for s390x-linux-gnu with -march=z13.
Reviewed-by: Arjun Shankar <arjun@redhat.com>
__builtin_arm_uqsub8 is only available in GCC 10 or newer.
Checked on arm-linux-gnueabihf built with gcc 9.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
The default Linux implementation already handles the Linux generic
ABI interface used on newer architectures, so there is no need to
imply the generic version any longer.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
And disable it if the kernel does not support it.
Checked on x86_64-linux-gnu and i686-linux-gnu.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
And disable it if the kernel does not support it.
Checked on x86_64-linux-gnu and i686-linux-gnu.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
And remove redundant entries from other architectures' Versions files. The
version for fallocate64 was supposed to be 2.10, but it was then
added to 32-bit platforms in 2.11 because it mistakenly wasn't
exported for them in 2.10 (see the commit message for
1f3615a1c9).
The linux/generic did not exist before 2.15, i.e. when the tile
ports were added (and microblaze did not exist before 2.18), which
explains those differences but also illustrates that "2.11 for 32-bit,
2.10 for 64-bit" should be sufficient since versions older than the
minimum for the architecture are automatically adjusted.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
While cleaning up old libc version support, the deprecated libc4 code was
accidentally kept in `implicit_soname`, instead of the libc6 code.
This causes additional symlinks to be created by `ldconfig` for libraries
without a soname, e.g. a library `libsomething.123.456.789` without a soname
will create a `libsomething.123` -> `libsomething.123.456.789` symlink.
As the libc6 version of the `implicit_soname` code is a trivial `xstrdup`,
just inline it and remove `implicit_soname` altogether.
Some further simplification looks possible (e.g. the call to `create_links`
looks like a no-op if `soname == NULL`, other than the verbose printfs), but
logic is kept as-is for now.
Fixes: BZ #30125
Fixes: 8ee878592c ("Assume only FLAG_ELF_LIBC6 suport")
Signed-off-by: Joan Bruguera <joanbrugueram@gmail.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
And make it a bit more 64-bit ready. This is in preparation for moving
this file into x86/.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230218203717.373211-6-bugaevc@gmail.com>
This ensures that a timer_t value can be cast to struct timer_node *
and back.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230218203717.373211-5-bugaevc@gmail.com>
Fix a few more cases of build errors caused by mismatched types. This is a
continuation of f4315054b4.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230218203717.373211-3-bugaevc@gmail.com>
This is going to be done differently on x86_64.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230218203717.373211-2-bugaevc@gmail.com>
Add an extra check for compiler definitions to ensure that the compiler
provides hardware FPU instructions for sqrt and fma; otherwise use the
software implementation.
Since divide/sqrt and FMA hardware support on the CPU side is optional,
the compiler can be configured to generate hardware FPU instructions
but without using the FDDIV, FDSQRT, FSDIV, FSSQRT, FDMADD and FSMADD
instructions. In this case the __builtin_sqrt and __builtin_sqrtf
provided by the compiler can't be used inside the glibc code, as these
builtins are used in the implementations of the sqrt() and sqrtf()
functions but at the same time unfold to calls to sqrt() and sqrtf().
So it is possible to get code like this:
0001c4b4 <__ieee754_sqrtf>:
1c4b4: 0001 0000 b 0 ;1c4b4 <__ieee754_sqrtf>
The same is also true for __builtin_fma and __builtin_fmaf.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The ARCv2 ABI requires 4-byte stack pointer alignment. Don't allow
the use of an unaligned child stack in clone. As the stack grows down,
align it down.
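A minimal sketch of that adjustment in C (illustrative only; the actual
change is in the ARC clone code):

#include <stdint.h>

/* Round a downward-growing child stack pointer down to the 4-byte
   alignment required by the ARCv2 ABI.  */
static inline void *
align_stack_down (void *child_stack)
{
  return (void *) ((uintptr_t) child_stack & ~(uintptr_t) 3);
}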
This was pointed out by the misc/tst-misalign-clone-internal and
misc/tst-misalign-clone tests. The stack alignment fix makes these
tests pass.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
And use a packed structure instead. The compiler generates optimized
unaligned code if the architecture supports it.
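A sketch of the technique (type and function names are illustrative):

#include <stdint.h>

/* The packed attribute tells the compiler the field may be misaligned,
   so it emits whatever unaligned-load sequence the target supports.  */
struct unaligned32
{
  uint32_t value;
} __attribute__ ((packed));

static inline uint32_t
read_unaligned32 (const void *p)
{
  return ((const struct unaligned32 *) p)->value;
}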
Checked on x86_64-linux-gnu and i686-linux-gnu.
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Builds for s390 recently started failing with:
../sysdeps/s390/multiarch/ifunc-impl-list.c: In function '__libc_ifunc_impl_list':
../sysdeps/s390/multiarch/ifunc-impl-list.c:83:21: error: unused variable 'dl_hwcap' [-Werror=unused-variable]
83 | unsigned long int dl_hwcap = features->hwcap;
| ^~~~~~~~
https://sourceware.org/pipermail/libc-testresults/2023q1/010855.html
Add __attribute__ ((unused)) as already done for another variable
there.
Tested with build-many-glibcs.py (compilers and glibcs) for
s390x-linux-gnu and s390-linux-gnu.
Note: s390x-linux-gnu-O3 started failing with a different error
earlier; that problem may still need to be fixed after this fix is in.
https://sourceware.org/pipermail/libc-testresults/2023q1/010829.html
C2x adds binary integer constants starting with 0b or 0B, and supports
those constants in strtol-family functions when the base passed is 0
or 2. Implement that strtol support for glibc.
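For illustration (my example, not part of the patch): once a call is
redirected to the new __isoc23_* entry points, base 0 accepts the
binary prefix.

#include <stdio.h>
#include <stdlib.h>

int
main (void)
{
  char *end;
  /* C2x semantics: v == 5 and *end == '\0'.
     Old semantics: v == 0 and end points at "b101".  */
  long v = strtol ("0b101", &end, 0);
  printf ("%ld \"%s\"\n", v, end);
  return 0;
}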
As discussed at
<https://sourceware.org/pipermail/libc-alpha/2020-December/120414.html>,
this is incompatible with previous C standard versions, in that such
an input string starting with 0b or 0B was previously required to be
parsed as 0 (with the rest of the string unprocessed). Thus, as
proposed there, this patch adds 20 new __isoc23_* functions with
appropriate header redirection support. This patch does *not* do
anything about scanf %i (which will need 12 new functions per long
double variant, so 12, 24 or 36 depending on the glibc configuration),
instead leaving that for a future patch. The function names would
remain as __isoc23_* even if C2x ends up published in 2024 rather than
2023.
Making this change leads to the question of what should happen to
internal uses of these functions in glibc and its tests. The header
redirection (which applies for _GNU_SOURCE or any other feature test
macros enabling C2x features) has the effect of redirecting internal
uses but without those uses then ending up at a hidden alias (see the
comment in include/stdio.h about interaction with libc_hidden_proto).
It seems desirable for the default for internal uses to be the same
versions used by normal code using _GNU_SOURCE, so rather than doing
anything to disable that redirection, similar macro definitions to
those in include/stdio.h are added to the include/ headers for the new
functions.
Given that the default for uses in glibc is for the redirections to
apply, the next question is whether the C2x semantics are correct for
all those uses. Uses with the base fixed to 10, 16 or any other value
other than 0 or 2 can be ignored. I think this leaves the following
internal uses to consider (an important consideration for review of
this patch will be both whether this list is complete and whether my
conclusions on all entries in it are correct):
benchtests/bench-malloc-simple.c
benchtests/bench-string.h
elf/sotruss-lib.c
math/libm-test-support.c
nptl/perf.c
nscd/nscd_conf.c
nss/nss_files/files-parse.c
posix/tst-fnmatch.c
posix/wordexp.c
resolv/inet_addr.c
rt/tst-mqueue7.c
soft-fp/testit.c
stdlib/fmtmsg.c
support/support_test_main.c
support/test-container.c
sysdeps/pthread/tst-mutex10.c
I think all of these places are OK with the new semantics, except for
resolv/inet_addr.c, where the POSIX semantics of inet_addr do not
allow for binary constants; thus, I changed that file (to use
__strtoul_internal, whose semantics are unchanged) and added a test
for this case. In the case of posix/wordexp.c I think accepting
binary constants is OK since POSIX explicitly allows additional forms
of shell arithmetic expressions, and in stdlib/fmtmsg.c SEV_LEVEL is
not in POSIX so again I think accepting binary constants is OK.
Functions such as __strtol_internal, which are only exported for
compatibility with old binaries from when those were used in inline
functions in headers, have unchanged semantics; the __*_l_internal
versions (purely internal to libc and not exported) have a new
argument to specify whether to accept binary constants.
As well as for the standard functions, the header redirection also
applies to the *_l versions (GNU extensions), and to legacy functions
such as strtoq, to avoid confusing inconsistency (the *q functions
redirect to __isoc23_*ll rather than needing their own __isoc23_*
entry points). For the functions that are only declared with
_GNU_SOURCE, this means the old versions are no longer available for
normal user programs at all. An internal __GLIBC_USE_C2X_STRTOL macro
is used to control the redirections in the headers, and cases in glibc
that wish to avoid the redirections - the function implementations
themselves and the tests of the old versions of the GNU functions -
then undefine and redefine that macro to allow the old versions to be
accessed. (There would of course be greater complexity should we wish
to make any of the old versions into compat symbols / avoid them being
defined at all for new glibc ABIs.)
strtol_l.c has some similarity to strtol.c in gnulib, but has already
diverged some way (and isn't listed at all at
https://sourceware.org/glibc/wiki/SharedSourceFiles unlike strtoll.c
and strtoul.c); I haven't made any attempts at gnulib compatibility in
the changes to that file.
I note incidentally that inttypes.h and wchar.h are missing the
__nonnull present on declarations of this family of functions in
stdlib.h; I didn't make any changes in that regard for the new
declarations added.
This macro from Mach headers conflicts with how
sysdeps/x86_64/multiarch/strcmp-sse2.S expects it to be defined.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230214173722.428140-3-bugaevc@gmail.com>
* Micro-optimize TLS access using GCC's native support for gs-based
addressing when available (see the sketch after this list);
* Just use THREAD_GETMEM and THREAD_SETMEM instead of more inline
assembly;
* Sync tcbhead_t layout with NPTL, in particular update/fix __private_ss
offset;
* Statically assert that the two offsets that are a part of ABI are what
we expect them to be.
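A hypothetical sketch of what gs-based access can look like with GCC's
named address spaces (names and offset handling are illustrative, not
the actual Hurd code):

#ifdef __SEG_GS
/* Read a pointer-sized TCB slot located OFFSET bytes past the %gs
   segment base, with no explicit inline assembly.  */
static inline void *
tcb_read_pointer (unsigned long offset)
{
  return *(void * __seg_gs *) offset;
}
#endif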
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230214173722.428140-2-bugaevc@gmail.com>
'sem' is the opaque 'sem_t', 'isem' is the actual 'struct new_sem'.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230212111044.610942-6-bugaevc@gmail.com>
It has been decided that on x86_64, mach_msg_type_number_t stays 32-bit.
Therefore, it's not possible to use mach_msg_type_number_t
interchangeably with size_t; in particular, this breaks when a pointer to
a variable is passed to a MIG routine.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230212111044.610942-3-bugaevc@gmail.com>
Make the code flow more linear using early returns where possible. This
makes it so much easier to reason about what runs on error / successful
code paths.
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-Id: <20230212111044.610942-2-bugaevc@gmail.com>
We used to use .cfi_adjust_cfa_offset around %esp manipulation
asm instructions to fix unwinding, but when building glibc with
-fno-omit-frame-pointer this is bogus since in that case %ebp is the CFA and
does not move.
Instead, let's force -fno-omit-frame-pointer when building intr-msg.c so
that %ebp can always be used and no .cfi_adjust_cfa_offset is needed.
It follows the internal signature:
extern int clone3 (struct clone_args *__cl_args, size_t __size,
int (*__func) (void *__arg), void *__arg);
The powerpc64 ABI requires an initial stack frame so the child can
store/restore the TOC. It is created prior to calling clone3 by
adjusting the stack size (since the kernel will compute the stack as
stack plus size).
Checked on powerpc64-linux-gnu (power8, kernel 6.0) and
powerpc64le-linux-gnu (power9, kernel 4.18).
Reviewed-by: Paul E. Murphy <murphyp@linux.ibm.com>
Although the static linker can optimize it to a local call, it follows
the internal scheme of providing hidden protos and definitions.
Reviewed-by: Carlos Eduardo Seo <carlos.seo@linaro.org>
Although the static linker can optimize it to a local call, it follows
the internal scheme of providing hidden protos and definitions.
Reviewed-by: Carlos Eduardo Seo <carlos.seo@linaro.org>
The hard-float ABI and hard float are different:
Hard-float ABI: use float registers to pass float-type arguments.
Hard float: enable the hard-float ISA feature.
So with_fp_cond cannot represent these two features. When
-mfloat-abi=softfp, the float ABI is soft while hard float is enabled.
So add 'with_hard_float_abi' in preconfigure and define 'CSKY_HARD_FLOAT_ABI'
if the float ABI is hard, and use 'CSKY_HARD_FLOAT_ABI' to determine the
dynamic linker name, because the float ABI is what determines
compatibility.
And with_fp_cond is still needed to tell glibc whether to enable the
hard-float feature.
In addition, use AC_TRY_COMMAND to test gcc to ensure compatibility
between different versions of gcc. The original approach has a problem:
__CSKY_HARD_FLOAT_FPU_SF__ means the target only has the single-precision
hard-float ISA, so it is not defined on CPUs like ck810f.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This patch enables the option to influence the hwcaps and stfle bits used
by the s390-specific ifunc-resolvers. The formerly x86-specific
tunable glibc.cpu.hwcaps is also used on s390x for this task. In
addition, the user can also set a CPU arch-level like z13 instead of
individual HWCAP and STFLE features.
Note that the tunable only handles the features which are really used
in the IFUNC-resolvers. All others are ignored as the values are only
used inside glibc. Thus we can influence:
- HWCAP_S390_VXRS (z13)
- HWCAP_S390_VXRS_EXT (z14)
- HWCAP_S390_VXRS_EXT2 (z15)
- STFLE_MIE3 (z15)
The influenced hwcap/stfle-bits are stored in the s390-specific
cpu_features struct which also contains reserved fields for future
usage.
The ifunc-resolvers and users of stfle bits are adjusted to use the
information from cpu_features struct.
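For example, following the GLIBC_TUNABLES syntax shown earlier in this
log for x86 (the exact value is illustrative), a CPU arch-level could be
selected like this:
$ GLIBC_TUNABLES=glibc.cpu.hwcaps=z13 ./application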
On 31-bit, the ELF_MACHINE_IRELATIVE macro is now also defined.
Otherwise the new ifunc-resolvers segfault, as they depend on the
not-yet-processed _rtld_global_ro@GLIBC_PRIVATE relocation.
It uses the bitmanip extension to optimize index_first and index_last
with clz/ctz (using the generic implementation that routes to the
compiler builtin) and orc.b to check for null bytes.
Checked the string tests on riscv64 user mode.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
While ppc has the more important string functions in assembly,
there are still a few generic routines used.
Use the POWER6 CMPB insn to test for zeros.
Checked on powerpc64le-linux-gnu.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
While arm has the more important string functions in assembly,
there are still a few generic routines used.
Use the UQSUB8 insn to test for zeros.
Checked on armv7-linux-gnueabihf.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
While alpha has the more important string functions in assembly,
there are still a few for which the generic routines are used.
Use the CMPBGE insn, via the builtin, for testing of zeros. Use a
simplified expansion of __builtin_ctz when the insn isn't available.
Checked on alpha-linux-gnu.
Co-authored-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Use UXOR,SBZ to test for a zero byte within a word. While we can
get semi-decent code out of asm-goto, we would do slightly better
with a compiler builtin.
For index_zero et al, sequential testing of bytes is less expensive than
any tricks that involve a count-leading-zeros insn that we don't have.
Checked on hppa-linux-gnu.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
GCC's combine pass cannot merge (x >> c | y << (32 - c)) into a
double-word shift unless (1) the subtract is in the same basic block
and (2) the result of the subtract is used exactly once. Neither
condition is true for any use of MERGE.
By forcing the use of a double-word shift, we not only reduce
contention on SAR, but also allow the setting of SAR to be hoisted
outside of a loop.
Checked on hppa-linux-gnu.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
It also cleans up the multiple inclusion by leaving it to the ifunc
implementation to undef weak_alias and libc_hidden_def.
Co-authored-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
The new algorithm reads the first aligned word and masks off the
unwanted bytes (this strategy is similar to the arch-specific
implementations used on powerpc, sparc, and sh).
The loop now reads word-aligned addresses and checks them using the
has_eq macro.
Checked on x86_64-linux-gnu, i686-linux-gnu, powerpc-linux-gnu,
and powerpc64-linux-gnu by removing the arch-specific assembly
implementation and disabling multi-arch (it covers both LE and BE
for 64 and 32 bits).
Co-authored-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
The new algorithm now calls strchrnul.
Checked on x86_64-linux-gnu, i686-linux-gnu, powerpc-linux-gnu,
and powerpc64-linux-gnu by removing the arch-specific assembly
implementation and disabling multi-arch (it covers both LE and BE
for 64 and 32 bits).
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
The new algorithm reads the first aligned word and masks off the unwanted
bytes (this strategy is similar to the arch-specific implementations used
on powerpc, sparc, and sh).
The loop now reads word-aligned addresses and checks them using the
has_zero_eq function.
Checked on x86_64-linux-gnu, i686-linux-gnu, powerpc64-linux-gnu,
and powerpc-linux-gnu by removing the arch-specific assembly
implementation and disabling multi-arch (it covers both LE and BE
for 64 and 32 bits).
Co-authored-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
The new algorithm reads the first aligned word and masks off the
unwanted bytes (this strategy is similar to the arch-specific
implementations used on powerpc, sparc, and sh).
The loop now reads word-aligned addresses and checks them using the
has_zero macro.
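As a standalone illustration of the kind of bit trick such a macro
relies on (the classic formulation; the glibc string-fz* headers may
differ in detail):

#include <stdint.h>

/* Non-zero iff the 32-bit word X contains at least one zero byte.  */
static inline uint32_t
has_zero32 (uint32_t x)
{
  return (x - UINT32_C (0x01010101)) & ~x & UINT32_C (0x80808080);
}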
Checked on x86_64-linux-gnu, i686-linux-gnu, powerpc-linux-gnu,
and powerpc64-linux-gnu by removing the arch-specific assembly
implementation and disabling multi-arch (it covers both LE and BE
for 64 and 32 bits).
Co-authored-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>