and add _nocancel variants.
* sysdeps/mach/hurd/pread64.c (__libc_pread64): Call __pread64_nocancel
surrounded by enabling async cancel, to replace implementation moved to...
* sysdeps/mach/hurd/pread64_nocancel.c (__pread64_nocancel): ... here.
* sysdeps/mach/hurd/read.c (__libc_read): Call __read_nocancel surrounded by
enabling async cancel, to replace implementation moved to...
* sysdeps/mach/hurd/read_nocancel.c (__read_nocancel): ... here.
* sysdeps/mach/hurd/Makefile (sysdep_routines): Add read_nocancel and
pread64_nocancel.
* sysdeps/mach/hurd/not-cancel.h (__read_nocancel, __pread64_nocancel):
Replace macros with prototypes, with a hidden proto in libc.
* sysdeps/mach/hurd/dl-sysdep.c: Include <not-cancel.h>.
(__pread64_nocancel): New alias, check that it is not hidden.
(__read_nocancel): New alias, check that it is not hidden.
* sysdeps/mach/hurd/Versions (libc.GLIBC_PRIVATE): Add __read_nocancel and
__pread64_nocancel.
(ld.GLIBC_2.1): Add __pread64.
(ld.GLIBC_PRIVATE): Add __read_nocancel and __pread64_nocancel.
* sysdeps/mach/hurd/i386/ld.abilist (__pread64): Add symbol.
* sysdeps/mach/hurd/i386/localplt.data (__read_nocancel, __pread64,
__pread64_nocancel): Add references.
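The __libc_read/__libc_pread64 entries above follow the usual split into a
cancellable wrapper around a _nocancel worker. A self-contained sketch of
that pattern (using the public pthread_setcanceltype API here for
illustration; the real files use glibc-internal cancellation macros):
#include <pthread.h>
#include <unistd.h>

static ssize_t
read_nocancel_sketch (int fd, void *buf, size_t nbytes)
{
  /* Stand-in for the implementation moved to read_nocancel.c.  */
  return read (fd, buf, nbytes);
}

ssize_t
read_cancellable_sketch (int fd, void *buf, size_t nbytes)
{
  /* Enable asynchronous cancellation only around the worker call.  */
  int oldtype;
  pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, &oldtype);
  ssize_t ret = read_nocancel_sketch (fd, buf, nbytes);
  pthread_setcanceltype (oldtype, NULL);
  return ret;
}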
* sysdeps/i386/htl/Makefile: New file.
* sysdeps/i386/htl/tcb-offsets.sym: New file.
* sysdeps/mach/hurd/i386/Makefile [setjmp] (gen-as-const-headers): Add
signal-defines.sym.
* sysdeps/mach/hurd/i386/____longjmp_chk.S: Include tcb-offsets.h.
(____longjmp_chk): Harmonize with i386's __longjmp. Clear SS_ONSTACK
when jumping off the alternate stack.
* sysdeps/mach/hurd/i386/__longjmp.S: New file.
The existing macros are fragile and expect local variables with a
certain name. Fix this by defining them as functions with a default
implementation in a new header, dl-runtime.h, which arches can override
if need be.
This came up during the ARC port review, hence the need for the pltgot
argument in reloc_index(), which is not needed by existing ports.
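A rough illustration of what such a default header could provide (names and
signatures here are illustrative, not the exact glibc ones; the default
ignores the pltgot argument, which only ports like ARC need):
#include <stddef.h>
#include <stdint.h>

/* Default helpers replacing the old macros; an arch overrides this
   header if its PLT argument encoding differs.  */
static inline uintptr_t
reloc_offset (uintptr_t pltgot, uintptr_t reloc_arg)
{
  return reloc_arg;          /* the PLT argument already is the offset */
}

static inline uintptr_t
reloc_index (uintptr_t pltgot, uintptr_t reloc_arg, size_t size)
{
  return reloc_arg / size;   /* pltgot is unused by the default */
}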
This patch potentially affects only the hppa/x86 ports; it was
build-tested for both those configs and a few more.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This started as a trivial change to Anton's rawmemchr. I got
carried away. This is a hybrid between P8's asymptotically
faster 64B checks and extremely efficient small string checks,
e.g. <64B (and sometimes a little bit more depending on alignment).
The second trick is to align to 64B by running a 48B checking loop
16B at a time until we naturally align to 64B (i.e. checking 48/96/144
bytes/iteration based on the alignment after the first 5 comparisons).
This alleviates the need to check page boundaries.
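A rough C sketch of that strategy (the real code is POWER9 vector assembly;
the chunks are scanned byte-wise here purely for illustration):
#include <stddef.h>
#include <stdint.h>

size_t
strlen_sketch (const char *s)
{
  const char *p = s;

  /* Small-string handling: advance to a 16B boundary first.  */
  while ((uintptr_t) p % 16 != 0)
    {
      if (*p == '\0')
        return p - s;
      p++;
    }

  /* Check 16B at a time until naturally 64B-aligned, i.e. the
     48/96/144-byte pre-loop described above.  */
  while ((uintptr_t) p % 64 != 0)
    {
      for (int i = 0; i < 16; i++)
        if (p[i] == '\0')
          return (p - s) + i;
      p += 16;
    }

  /* Main loop: 64B blocks that never cross a page boundary, so no
     explicit page check is needed.  */
  for (;;)
    {
      for (int i = 0; i < 64; i++)
        if (p[i] == '\0')
          return (p - s) + i;
      p += 64;
    }
}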
Finally, explicitly use the P7 strlen with the runtime loader when building
for P9. We need to be cautious about vector/VSX extensions here on P9-only
builds.
This defines the macro such that it should behave best on all
supported powerpc targets. Likewise, this allows us to remove the
ppc64le-specific s_fmaf128.c.
I have verified powerpc64le multiarch and powerpc64le power9
no-multiarch builds continue to generate optimized fmaf128.
commit 7621e38bf3
Author: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Date: Tue Jan 29 17:43:45 2019 +0000
Add generic hp-timing support
removed the clock_gettime option. Restore the clock_gettime option for
some x86 CPUs on which the value from RDTSC may not be incremented at a
fixed rate.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
commit e9698175b0
Author: Lukasz Majewski <lukma@denx.de>
Date: Mon Mar 16 08:31:41 2020 +0100
y2038: Replace __clock_gettime with __clock_gettime64
breaks benchtests with sysdeps/generic/hp-timing.h:
In file included from ./bench-timing.h:23,
from ./bench-skeleton.c:25,
from
/export/build/gnu/tools-build/glibc-gitlab/build-x86_64-linux/benchtests/bench-rint.c:45:
./bench-skeleton.c: In function ‘main’:
../sysdeps/generic/hp-timing.h:37:23: error: storage size of ‘tv’ isn’t known
37 | struct __timespec64 tv; \
| ^~
Define HP_TIMING_NOW with clock_gettime in sysdeps/generic/hp-timing.h
if _ISOMAC is defined. Don't define __clock_gettime in bench-timing.h
since it is no longer needed.
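Roughly, the _ISOMAC branch described above has this shape (illustrative,
not the verbatim header):
#include <stdint.h>
#include <time.h>

typedef uint64_t hp_timing_t;

#ifdef _ISOMAC
/* Benchtests and other installed-header users only see public types,
   so use struct timespec and clock_gettime here.  */
# define HP_TIMING_NOW(var) \
  ({ \
    struct timespec tv; \
    clock_gettime (CLOCK_MONOTONIC, &tv); \
    (var) = tv.tv_nsec + UINT64_C (1000000000) * tv.tv_sec; \
  })
#endif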
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
There are two typos in TUNABLE_SET_VAL_IF_VALID_RANGE:
#define TUNABLE_SET_VAL_IF_VALID_RANGE(__cur, __val, __type) \
({ \
__type min = (__cur)->type.min; \
__type max = (__cur)->type.max; \
\
if ((__type) (__val) >= min && (__type) (val) <= max) \
^^^ Should be __val
{ \
(__cur)->val.numval = val; \
^^^ Should be __val
(__cur)->initialized = true; \
} \
})
Luckily, since all TUNABLE_SET_VAL_IF_VALID_RANGE usages are
TUNABLE_SET_VAL_IF_VALID_RANGE (cur, val, int64_t);
this didn't cause any issues.
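For reference, applying both annotated fixes gives the presumably intended
macro:
#define TUNABLE_SET_VAL_IF_VALID_RANGE(__cur, __val, __type) \
({ \
  __type min = (__cur)->type.min; \
  __type max = (__cur)->type.max; \
 \
  if ((__type) (__val) >= min && (__type) (__val) <= max) \
    { \
      (__cur)->val.numval = (__val); \
      (__cur)->initialized = true; \
    } \
})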
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
When detecting hole support, we write at 16MiB, and filesystems will
typically need two levels of indirection blocks to record that. On
filesystems with 8KB blocks, the two indirection blocks will require a
total of 16KB overhead, thus 32 512-byte sectors.
Spotted on GNU/Hurd with a 4KB-block filesystem, but also happens on Linux
with 4KB or 8KB block filesystems.
* support/support_descriptor_supports_holes.c
(support_descriptor_supports_holes): Set block_headroom to 32.
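A small worked check of the arithmetic above (constants reflect the 8KB
worst case discussed; purely illustrative):
#include <stdio.h>

int
main (void)
{
  const unsigned int block_size = 8 * 1024;    /* 8KB filesystem blocks */
  const unsigned int indirection_blocks = 2;   /* two levels for a 16MiB offset */
  const unsigned int sector_size = 512;

  unsigned int overhead = indirection_blocks * block_size;  /* 16KB */
  unsigned int headroom = overhead / sector_size;           /* 32 sectors */
  printf ("block_headroom = %u\n", headroom);
  return 0;
}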
The fmaf128 build evaluates an undefined macro.
For now, set USE_FMAL_BUILTIN and USE_FMAF128_BUILTIN to 0.
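One way to express that stop-gap (illustrative placement; the point is
simply that the macros are always defined before any #if test):
#ifndef USE_FMAL_BUILTIN
# define USE_FMAL_BUILTIN 0
#endif
#ifndef USE_FMAF128_BUILTIN
# define USE_FMAF128_BUILTIN 0
#endif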
Checked with a build for:
powerpc64le-linux-gnu-power9-disable-multi-arch
powerpc64le-linux-gnu-power9
powerpc64le-linux-gnu
powerpc64-linux-gnu-power8
powerpc64-linux-gnu
powerpc-linux-gnu-power4
powerpc-linux-gnu
timer_create needs to create threads with all signals blocked,
including SIGTIMER (which happens to equal SIGCANCEL).
Fixes commit b3cae39dcb ("nptl: Start
new threads with all signals blocked [BZ #25098]").
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This introduces the function __pthread_attr_extension to allocate the
extension space, which is freed by pthread_attr_destroy.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
union pthread_attr_transparent always has the correct size, even if
pthread_attr_t has padding that is not present in struct pthread_attr.
This should not result in an observable behavioral change. The
existing code appears to have been correct, but it was brittle because
it was not clear which functions were allowed to write to an entire
pthread_attr_t argument (e.g., by copying it).
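A simplified sketch of the idea (the stand-in struct below is illustrative,
not glibc's struct pthread_attr):
#include <pthread.h>
#include <sched.h>
#include <stddef.h>

struct pthread_attr_internal_sketch   /* stand-in for struct pthread_attr */
{
  struct sched_param schedparam;
  int schedpolicy;
  int flags;
  size_t guardsize;
  void *stackaddr;
  size_t stacksize;
};

union pthread_attr_transparent_sketch
{
  pthread_attr_t external;
  struct pthread_attr_internal_sketch internal;
};

/* Copying a whole union object always copies sizeof (pthread_attr_t)
   bytes, even if the internal struct is smaller than the public type.  */
_Static_assert (sizeof (union pthread_attr_transparent_sketch)
                >= sizeof (pthread_attr_t),
                "union covers the full public type");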
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
This allows reusing the storage after calling pthread_cond_destroy.
* sysdeps/htl/bits/types/struct___pthread_cond.h (__pthread_cond):
Replace unused struct __pthread_condimpl *__impl field with unsigned int
__wrefs.
(__PTHREAD_COND_INITIALIZER): Update accordingly.
* sysdeps/htl/pt-cond-timedwait.c (__pthread_cond_timedwait_internal):
Register as waiter in __wrefs field. On unregistering, wake any pending
pthread_cond_destroy.
* sysdeps/htl/pt-cond-destroy.c (__pthread_cond_destroy): Register wake
request in __wrefs.
* nptl/Makefile (tests): Move tst-cond20 tst-cond21 to...
* sysdeps/pthread/Makefile (tests): ... here.
* nptl/tst-cond20.c nptl/tst-cond21.c: Move to...
* sysdeps/pthread/tst-cond20.c sysdeps/pthread/tst-cond21.c: ... here.
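A rough, self-contained sketch of the __wrefs scheme referenced in the
entries above (the real htl code blocks instead of spinning; names are
illustrative):
#include <sched.h>
#include <stdatomic.h>

struct cond_sketch
{
  atomic_uint wrefs;   /* (number of waiters << 1) | destroy-pending bit */
};

static void
cond_wait_enter (struct cond_sketch *c)
{
  atomic_fetch_add (&c->wrefs, 2);              /* register as waiter */
}

static void
cond_wait_leave (struct cond_sketch *c)
{
  unsigned int old = atomic_fetch_sub (&c->wrefs, 2);
  if ((old & 1) != 0 && (old >> 1) == 1)
    {
      /* Last waiter and a destroy is pending: the real code wakes the
         thread sleeping in pthread_cond_destroy here.  */
    }
}

static void
cond_destroy (struct cond_sketch *c)
{
  atomic_fetch_or (&c->wrefs, 1);               /* announce the destroy */
  while ((atomic_load (&c->wrefs) >> 1) != 0)   /* wait for waiters to drain */
    sched_yield ();                             /* real code sleeps, not spins */
}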
The function mbstowcs, by an XSI extension to POSIX, accepts a null
pointer for the destination wchar_t array. This API behaviour allows
you to use the function to compute the length of the required wchar_t
array, i.e. it does the conversion without storing it and returns the
number of wide characters required.
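A usage example of that behaviour (a null destination returns the required
count without storing anything):
#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <wchar.h>

int
main (void)
{
  setlocale (LC_ALL, "");
  const char *s = "hello";

  size_t needed = mbstowcs (NULL, s, 0);   /* length only, nothing written */
  if (needed == (size_t) -1)
    return 1;                              /* invalid multibyte sequence */

  wchar_t *buf = malloc ((needed + 1) * sizeof (wchar_t));
  if (buf == NULL)
    return 1;
  mbstowcs (buf, s, needed + 1);           /* the actual conversion */
  printf ("%zu wide characters\n", needed);
  free (buf);
  return 0;
}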
We remove the __write_only__ markup for the first argument because it is
not accurate: the destination may be a null pointer, and so the length
argument may not apply. Without removing the markup, the new test case
cannot be compiled with -Werror=nonnull.
We add a new test case for mbstowcs which exercises the null destination
pointer behaviour which we have now explicitly documented.
The mbsrtowcs and mbsnrtowcs behave similarly, and mbsrtowcs is
documented as doing this in C11, even if the standard doesn't come out
and call out this specific use case. We add one note to each of
mbsrtowcs and mbsnrtowcs to call out that they support a null pointer
for the destination.
The wcsrtombs function behaves similarly but the other way around,
and allows you to use a null destination pointer to compute how many
bytes you would need to convert the wide character input. We document
this particular case also, but leave wcsnrtombs as a reference to
wcsrtombs, so the reader must still read the details of the semantics
for wcsrtombs.
Add validation for the pointer returned by backtrace_symbols ().
The type of the variables size and i is changed from size_t to int:
size collects the result of backtrace (), which returns an int, and i is
the loop counter, so it can be an int as well. Since size is now an int,
the printf format %zd is changed to %d.
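A sketch of the resulting pattern (not the exact test code):
#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

int
main (void)
{
  void *frames[64];
  int size = backtrace (frames, 64);          /* int, matching backtrace () */
  char **symbols = backtrace_symbols (frames, size);
  if (symbols == NULL)                        /* the added validation */
    {
      puts ("backtrace_symbols failed");
      return 1;
    }
  printf ("%d frames\n", size);               /* %d, since size is an int */
  for (int i = 0; i < size; i++)
    puts (symbols[i]);
  free (symbols);
  return 0;
}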
Reviewed-by: DJ Delorie <dj@redhat.com>
Linux overrides this file via sysdeps/unix/sysv/linux/i386/sysdep.c.
Hurd does not have sysdeps/unix/i386 on its search path, so it uses
csu/sysdep.c instead.
_hurdsig_preemptors and _hurdsig_preempted_set are not ABI symbols,
so do not declare them. HURD_PREEMPT_SIGNAL_P is an implementation
detail, so move it as well.
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
This fixes various build errors due to deprecation warnings.
Fixes commit 02802fafcf
("signal: Deprecate additional legacy signal handling functions").
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
In case the signal arrives before the __mach_msg call, we need to catch it
between the sigprocmask call and the __mach_msg call. Let's just reuse
the support for sigsuspend to make the signal send a message that
our __mach_msg call will just receive.
* hurd/hurdselect.c (_hurd_select): Add sigport and ss variables. When
sigmask is not NULL, create a sigport port and register as
ss->suspended. Add it to the portset. When we receive a message on it,
set error to EINTR. Clean up sigport and portset appropriately.
* hurd/hurdsig.c (wake_sigsuspend): Note that pselect also uses it.
Historically, this mechanism was used to process "nosegneg"
subdirectories, and it is still used to include the "tls"
subdirectories. With nosegneg support gone from ld.so, this part is
no longer useful.
The entire mechanism is not well-designed because it causes the
meaning of hwcap bits in ld.so.cache to depend on the kernel version
that was used to generate the cache, which makes it difficult to use
this mechanism for anything else in the future.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>