nptl: Fix race conditions in pthread cancellation [BZ#12683]

The current racy approach is to enable asynchronous cancellation
before making the syscall, restore the previous cancellation type
once the syscall returns, and check whether cancellation has happened
at the cancellation entrypoint.

As described in BZ#12683, this approach has two problems:

  1. Cancellation can act after the syscall has returned from the
     kernel, but before userspace saves the return value.  This can
     result in a resource leak if the syscall allocated a resource or
     produced a side effect (a partial read or write), and there is no
     way for the program to handle it with cancellation handlers.

  2. If a signal is handled while the thread is blocked at a cancellable
     syscall, the entire signal handler runs with asynchronous
     cancellation enabled.  This can lead to issues if the signal
     handler calls functions which are async-signal-safe but not
     async-cancel-safe.

For the cancellation to work correctly, there are 5 points at which the
cancellation signal could arrive:

	[ ... )[ ... )[ syscall ]( ...
	   1      2        3    4   5

  1. Before initial testcancel, e.g. [*... testcancel)
  2. Between testcancel and syscall start, e.g. [testcancel...syscall start)
  3. While syscall is blocked and no side effects have yet taken
     place, e.g. [ syscall ]
  4. Same as 3 but with side-effects having occurred (e.g. a partial
     read or write).
  5. After the syscall ends, e.g. (syscall end...*]

libc wants to act on cancellation in cases 1, 2, and 3, but not in
cases 4 or 5.  In cases 4 and 5, the cancellation will eventually
happen at the next cancellable entrypoint without any further external
event.

The proposed solution for each case is:

  1. Do a conditional branch based on whether the thread has received
     a cancellation request;

  2. It can be caught by the signal handler determining that the saved
     program counter (from the ucontext_t) is in some address range
     beginning just before the "testcancel" and ending with the
     syscall instruction.

  3. SIGCANCEL can be caught by the signal handler and determine that
     the saved program counter (from the ucontext_t) is in the address
     range beginning just before "testcancel" and ending with the first
     uninterruptible (via a signal) syscall instruction that enters the
     kernel.

  4. In this case, except for certain syscalls that ALWAYS fail with
     EINTR even for non-interrupting signals, the kernel will reset
     the program counter to point at the syscall instruction during
     signal handling, so that the syscall is restarted when the signal
     handler returns.  So, from the signal handler's standpoint, this
     looks the same as case 2, and thus it's taken care of.

  5. For syscalls with side-effects, the kernel cannot restart the
     syscall; when it's interrupted by a signal, the kernel must cause
     the syscall to return with whatever partial result is obtained
     (e.g. partial read or write).

  6. The saved program counter points just after the syscall
     instruction, so the signal handler won't act on cancellation.
     This is similar to 4. since the program counter is past the syscall
     instruction.

The proposed fixes are:

  1. Remove the enable_asynccancel/disable_asynccancel function usage in
     cancellable syscall definitions and instead make them call a common
     symbol that checks whether cancellation is enabled (__syscall_cancel
     at nptl/cancellation.c), calls the arch-specific cancellable
     entry point (__syscall_cancel_arch), and cancels the thread when
     required.

  2. Provide an arch-specific generic system call wrapper function
     that contains global markers.  These markers are used by the
     SIGCANCEL signal handler to check whether the interruption
     occurred within a valid cancellable syscall and whether the
     syscall has side effects.

     A reference implementation sysdeps/unix/sysv/linux/syscall_cancel.c
     is provided.  However, the markers may not end up at the expected
     places depending on how INTERNAL_SYSCALL_NCS is implemented by the
     architecture.  It is expected that all architectures add an
     arch-specific implementation.

  3. Rewrite the SIGCANCEL asynchronous handler to check both the
     cancellation type and whether the current IP from the signal
     context falls between the global markers, and act accordingly.

  4. Adjust libc code to replace LIBC_CANCEL_ASYNC/LIBC_CANCEL_RESET
     with the appropriate cancellable syscalls.

  5. Adjust 'lowlevellock-futex.h' arch-specific implementations to
     provide cancelable futex calls.

Some architectures require specific support on syscall handling:

  * On i386 the syscall cancel bridge needs to use the old int80
    instruction because, with the optimized vDSO symbol, the resulting
    PC value for an interrupted syscall points to an address outside
    the expected markers in __syscall_cancel_arch.  It has been
    discussed on LKML [1] how the kernel could help userland accomplish
    this, but as far as I know the discussion has stalled.

    Also, sysenter should not be used directly by libc since its calling
    convention is set by the kernel depending on the underlying x86 chip
    (check kernel commit 30bfa7b3488bfb1bb75c9f50a5fcac1832970c60).

  * mips o32 is the only kABI that requires 7-argument syscalls, and to
    avoid adding a requirement on all architectures to support them,
    mips support is added with extra internal defines.

Checked on aarch64-linux-gnu, arm-linux-gnueabihf, powerpc-linux-gnu,
powerpc64-linux-gnu, powerpc64le-linux-gnu, i686-linux-gnu, and
x86_64-linux-gnu.

[1] https://lkml.org/lkml/2016/3/8/1105
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Adhemerval Zanella 2024-06-25 16:17:44 -03:00
parent 55cd51d971
commit 89b53077d2
53 changed files with 2525 additions and 232 deletions

@@ -1328,11 +1328,8 @@ $(objpfx)dl-allobjs.os: $(all-rtld-routines:%=$(objpfx)%.os)
 # discovery mechanism is not compatible with the libc implementation
 # when compiled for libc.
 rtld-stubbed-symbols = \
-  __GI___pthread_disable_asynccancel \
-  __GI___pthread_enable_asynccancel \
   __libc_assert_fail \
-  __pthread_disable_asynccancel \
-  __pthread_enable_asynccancel \
+  __syscall_cancel \
   calloc \
   free \
   malloc \

@@ -204,6 +204,7 @@ routines = \
   sem_timedwait \
   sem_unlink \
   sem_wait \
+  syscall_cancel \
   tpp \
   unwind \
   vars \
@@ -235,7 +236,8 @@ CFLAGS-pthread_setcanceltype.c += -fexceptions -fasynchronous-unwind-tables

 # These are internal functions which have similar functionality as
 # setcancelstate and setcanceltype.
-CFLAGS-cancellation.c += -fasynchronous-unwind-tables
+CFLAGS-cancellation.c += -fexceptions -fasynchronous-unwind-tables
+CFLAGS-syscall_cancel.c += -fexceptions -fasynchronous-unwind-tables

 # Calling pthread_exit() must cause the registered cancel handlers to
 # be executed.  Therefore exceptions have to be thrown through this
@@ -279,6 +281,7 @@ tests = \
   tst-cancel7 \
   tst-cancel17 \
   tst-cancel24 \
+  tst-cancel31 \
   tst-cond26 \
   tst-context1 \
   tst-default-attr \
@@ -404,7 +407,10 @@ xtests += tst-eintr1
 test-srcs = tst-oddstacklimit

-gen-as-const-headers = unwindbuf.sym
+gen-as-const-headers = \
+  descr-const.sym \
+  unwindbuf.sym \
+  # gen-as-const-headers

 gen-py-const-headers := nptl_lock_constants.pysym
 pretty-printers := nptl-printers.py

@@ -18,74 +18,93 @@
 #include <setjmp.h>
 #include <stdlib.h>
 #include "pthreadP.h"
-#include <futex-internal.h>

-/* The next two functions are similar to pthread_setcanceltype() but
-   more specialized for the use in the cancelable functions like write().
-   They do not need to check parameters etc.  These functions must be
-   AS-safe, with the exception of the actual cancellation, because they
-   are called by wrappers around AS-safe functions like write().  */
-int
-__pthread_enable_asynccancel (void)
+/* Called by the INTERNAL_SYSCALL_CANCEL macro, check for cancellation and
+   return the syscall value or its negative error code.  */
+long int
+__internal_syscall_cancel (__syscall_arg_t a1, __syscall_arg_t a2,
+                           __syscall_arg_t a3, __syscall_arg_t a4,
+                           __syscall_arg_t a5, __syscall_arg_t a6,
+                           __SYSCALL_CANCEL7_ARG_DEF
+                           __syscall_arg_t nr)
+{
+  long int result;
+  struct pthread *pd = THREAD_SELF;
+
+  /* If cancellation is not enabled, call the syscall directly; do the same
+     for thread termination to avoid calling __syscall_do_cancel while
+     executing cleanup handlers.  */
+  int ch = atomic_load_relaxed (&pd->cancelhandling);
+  if (SINGLE_THREAD_P || !cancel_enabled (ch) || cancel_exiting (ch))
+    {
+      result = INTERNAL_SYSCALL_NCS_CALL (nr, a1, a2, a3, a4, a5, a6
+                                          __SYSCALL_CANCEL7_ARCH_ARG7);
+      if (INTERNAL_SYSCALL_ERROR_P (result))
+        return -INTERNAL_SYSCALL_ERRNO (result);
+      return result;
+    }
+
+  /* Call the arch-specific entry point that contains the global markers
+     to be checked by the SIGCANCEL handler.  */
+  result = __syscall_cancel_arch (&pd->cancelhandling, nr, a1, a2, a3, a4,
+                                  a5, a6 __SYSCALL_CANCEL7_ARCH_ARG7);
+
+  /* If the cancellable syscall was interrupted by SIGCANCEL and it has no
+     side-effect, cancel the thread if cancellation is enabled.  */
+  ch = atomic_load_relaxed (&pd->cancelhandling);
+  /* The behaviour here assumes that EINTR is returned only if there are no
+     visible side effects.  POSIX Issue 7 has not yet provided any stronger
+     language for close, and in theory the close syscall could return EINTR
+     and leave the file descriptor open (conforming and leaks).  It expects
+     that no such kernel is used with glibc.  */
+  if (result == -EINTR && cancel_enabled_and_canceled (ch))
+    __syscall_do_cancel ();
+
+  return result;
+}
+
+/* Called by the SYSCALL_CANCEL macro, check for cancellation and return the
+   syscall expected success value (usually 0) or, in case of failure, -1 and
+   set errno to the syscall return value.  */
+long int
+__syscall_cancel (__syscall_arg_t a1, __syscall_arg_t a2,
+                  __syscall_arg_t a3, __syscall_arg_t a4,
+                  __syscall_arg_t a5, __syscall_arg_t a6,
+                  __SYSCALL_CANCEL7_ARG_DEF __syscall_arg_t nr)
+{
+  int r = __internal_syscall_cancel (a1, a2, a3, a4, a5, a6,
+                                     __SYSCALL_CANCEL7_ARG nr);
+  return __glibc_unlikely (INTERNAL_SYSCALL_ERROR_P (r))
+         ? SYSCALL_ERROR_LABEL (INTERNAL_SYSCALL_ERRNO (r))
+         : r;
+}
+
+/* Called by __syscall_cancel_arch or the functions above to start the
+   thread cancellation.  */
+_Noreturn void
+__syscall_do_cancel (void)
 {
   struct pthread *self = THREAD_SELF;
-  int oldval = atomic_load_relaxed (&self->cancelhandling);

+  /* Disable thread cancellation to avoid cancellable entrypoints calling
+     __syscall_do_cancel recursively.  We use an atomic relaxed load to
+     check the state of cancelhandling; there is no particular ordering
+     requirement between the syscall call and the other thread setting our
+     cancelhandling with an atomic store acquire.
+
+     POSIX Issue 7 notes that the cancellation occurs asynchronously on the
+     target thread, which implies there are no ordering requirements.  It
+     does not need an MO release store here.  */
+  int oldval = atomic_load_relaxed (&self->cancelhandling);
   while (1)
     {
-      int newval = oldval | CANCELTYPE_BITMASK;
-
+      int newval = oldval | CANCELSTATE_BITMASK;
       if (oldval == newval)
        break;
-
       if (atomic_compare_exchange_weak_acquire (&self->cancelhandling,
                                                 &oldval, newval))
-       {
-         if (cancel_enabled_and_canceled_and_async (newval))
-           {
-             self->result = PTHREAD_CANCELED;
-             __do_cancel ();
-           }
-
-         break;
-       }
+       break;
     }

-  return oldval;
+  __do_cancel (PTHREAD_CANCELED);
 }
-libc_hidden_def (__pthread_enable_asynccancel)
-
-/* See the comment for __pthread_enable_asynccancel regarding
-   the AS-safety of this function.  */
-void
-__pthread_disable_asynccancel (int oldtype)
-{
-  /* If asynchronous cancellation was enabled before we do not have
-     anything to do.  */
-  if (oldtype & CANCELTYPE_BITMASK)
-    return;
-
-  struct pthread *self = THREAD_SELF;
-  int newval;
-  int oldval = atomic_load_relaxed (&self->cancelhandling);
-  do
-    {
-      newval = oldval & ~CANCELTYPE_BITMASK;
-    }
-  while (!atomic_compare_exchange_weak_acquire (&self->cancelhandling,
-                                                &oldval, newval));
-
-  /* We cannot return when we are being canceled.  Upon return the
-     thread might be things which would have to be undone.  The
-     following loop should loop until the cancellation signal is
-     delivered.  */
-  while (__glibc_unlikely ((newval & (CANCELING_BITMASK | CANCELED_BITMASK))
-                           == CANCELING_BITMASK))
-    {
-      futex_wait_simple ((unsigned int *) &self->cancelhandling, newval,
-                         FUTEX_PRIVATE);
-      newval = atomic_load_relaxed (&self->cancelhandling);
-    }
-}
-libc_hidden_def (__pthread_disable_asynccancel)

@@ -82,10 +82,7 @@ ___pthread_unregister_cancel_restore (__pthread_unwind_buf_t *buf)
                                           &cancelhandling, newval));

       if (cancel_enabled_and_canceled (cancelhandling))
-       {
-         self->result = PTHREAD_CANCELED;
-         __do_cancel ();
-       }
+       __do_cancel (PTHREAD_CANCELED);
     }
 }
 versioned_symbol (libc, ___pthread_unregister_cancel_restore,

nptl/descr-const.sym (new file)
@@ -0,0 +1,6 @@
+#include <tls.h>
+
+-- Not strictly offsets; these values are used for thread cancellation by
+-- the arch-specific cancel entrypoint.
+TCB_CANCELED_BIT	CANCELED_BIT
+TCB_CANCELED_BITMASK	CANCELED_BITMASK

@@ -425,6 +425,24 @@ struct pthread
                                   + sizeof ((struct pthread) {}.rseq_area))
 } __attribute ((aligned (TCB_ALIGNMENT)));

+static inline bool
+cancel_enabled (int value)
+{
+  return (value & CANCELSTATE_BITMASK) == 0;
+}
+
+static inline bool
+cancel_async_enabled (int value)
+{
+  return (value & CANCELTYPE_BITMASK) != 0;
+}
+
+static inline bool
+cancel_exiting (int value)
+{
+  return (value & EXITING_BITMASK) != 0;
+}
+
 static inline bool
 cancel_enabled_and_canceled (int value)
 {

@@ -69,10 +69,7 @@ __libc_cleanup_pop_restore (struct _pthread_cleanup_buffer *buffer)
                                           &cancelhandling, newval));

       if (cancel_enabled_and_canceled (cancelhandling))
-       {
-         self->result = PTHREAD_CANCELED;
-         __do_cancel ();
-       }
+       __do_cancel (PTHREAD_CANCELED);
     }
 }
 libc_hidden_def (__libc_cleanup_pop_restore)

@@ -23,6 +23,7 @@
 #include <sysdep.h>
 #include <unistd.h>
 #include <unwind-link.h>
+#include <cancellation-pc-check.h>
 #include <stdio.h>
 #include <gnu/lib-names.h>
 #include <sys/single_threaded.h>
@@ -40,31 +41,16 @@ sigcancel_handler (int sig, siginfo_t *si, void *ctx)
       || si->si_code != SI_TKILL)
     return;

+  /* Check if asynchronous cancellation mode is set or if the interrupted
+     instruction pointer falls within the cancellable syscall bridge.  For
+     interruptible syscalls with external side-effects (i.e. partial reads),
+     the kernel will set the IP to after __syscall_cancel_arch_end, thus
+     disabling the cancellation and allowing the process to handle such
+     conditions.  */
   struct pthread *self = THREAD_SELF;
-
   int oldval = atomic_load_relaxed (&self->cancelhandling);
-  while (1)
-    {
-      /* We are canceled now.  When canceled by another thread this flag
-        is already set but if the signal is directly send (internally or
-        from another process) is has to be done here.  */
-      int newval = oldval | CANCELING_BITMASK | CANCELED_BITMASK;
-
-      if (oldval == newval || (oldval & EXITING_BITMASK) != 0)
-       /* Already canceled or exiting.  */
-       break;
-
-      if (atomic_compare_exchange_weak_acquire (&self->cancelhandling,
-                                                &oldval, newval))
-       {
-         self->result = PTHREAD_CANCELED;
-
-         /* Make sure asynchronous cancellation is still enabled.  */
-         if ((oldval & CANCELTYPE_BITMASK) != 0)
-           /* Run the registered destructors and terminate the thread.  */
-           __do_cancel ();
-       }
-    }
+  if (cancel_async_enabled (oldval) || cancellation_pc_check (ctx))
+    __syscall_do_cancel ();
 }

 int
@@ -106,15 +92,13 @@ __pthread_cancel (pthread_t th)
   /* Some syscalls are never restarted after being interrupted by a signal
      handler, regardless of the use of SA_RESTART (they always fail with
      EINTR).  So pthread_cancel cannot send SIGCANCEL unless the cancellation
-     is enabled and set as asynchronous (in this case the cancellation will
-     be acted in the cancellation handler instead by the syscall wrapper).
-     Otherwise the target thread is set as 'cancelling' (CANCELING_BITMASK)
+     is enabled.
+
+     In this case the target thread is set as 'cancelled' (CANCELED_BITMASK)
      by atomically setting 'cancelhandling' and the cancelation will be acted
      upon on the next cancellation entrypoint in the target thread.

-     It also requires to atomically check if cancellation is enabled and
-     asynchronous, so both cancellation state and type are tracked on
-     'cancelhandling'.  */
+     It also requires to atomically check if cancellation is enabled, so the
+     state is also tracked on 'cancelhandling'.  */

   int result = 0;
   int oldval = atomic_load_relaxed (&pd->cancelhandling);
@@ -122,19 +106,17 @@ __pthread_cancel (pthread_t th)
   do
     {
     again:
-      newval = oldval | CANCELING_BITMASK | CANCELED_BITMASK;
+      newval = oldval | CANCELED_BITMASK;
       if (oldval == newval)
        break;

-      /* If the cancellation is handled asynchronously just send a
-        signal.  We avoid this if possible since it's more
-        expensive.  */
-      if (cancel_enabled_and_canceled_and_async (newval))
+      /* Only send the SIGCANCEL signal if cancellation is enabled, since
+        some syscalls are never restarted even with SA_RESTART.  The signal
+        will act iff async cancellation is enabled.  */
+      if (cancel_enabled (newval))
        {
-         /* Mark the cancellation as "in progress".  */
-         int newval2 = oldval | CANCELING_BITMASK;
          if (!atomic_compare_exchange_weak_acquire (&pd->cancelhandling,
-                                                    &oldval, newval2))
+                                                    &oldval, newval))
            goto again;

          if (pd == THREAD_SELF)
@@ -143,9 +125,8 @@ __pthread_cancel (pthread_t th)
               pthread_create, so the signal handler may not have been
               set up for a self-cancel.  */
            {
-             pd->result = PTHREAD_CANCELED;
-             if ((newval & CANCELTYPE_BITMASK) != 0)
-               __do_cancel ();
+             if (cancel_async_enabled (newval))
+               __do_cancel (PTHREAD_CANCELED);
            }
          else
            /* The cancellation handler will take care of marking the
@@ -154,19 +135,18 @@ __pthread_cancel (pthread_t th)
          break;
        }

-      /* A single-threaded process should be able to kill itself, since
-        there is nothing in the POSIX specification that says that it
-        cannot.  So we set multiple_threads to true so that cancellation
-        points get executed.  */
-      THREAD_SETMEM (THREAD_SELF, header.multiple_threads, 1);
-#ifndef TLS_MULTIPLE_THREADS_IN_TCB
-      __libc_single_threaded_internal = 0;
-#endif
     }
   while (!atomic_compare_exchange_weak_acquire (&pd->cancelhandling, &oldval,
                                                 newval));

+  /* A single-threaded process should be able to kill itself, since there is
+     nothing in the POSIX specification that says that it cannot.  So we set
+     multiple_threads to true so that cancellation points get executed.  */
+  THREAD_SETMEM (THREAD_SELF, header.multiple_threads, 1);
+#ifndef TLS_MULTIPLE_THREADS_IN_TCB
+  __libc_single_threaded_internal = 0;
+#endif
+
   return result;
 }
 versioned_symbol (libc, __pthread_cancel, pthread_cancel, GLIBC_2_34);

@@ -31,9 +31,7 @@ __pthread_exit (void *value)
                       " must be installed for pthread_exit to work\n");
     }

-  THREAD_SETMEM (THREAD_SELF, result, value);
-
-  __do_cancel ();
+  __do_cancel (value);
 }
 libc_hidden_def (__pthread_exit)
 weak_alias (__pthread_exit, pthread_exit)

@@ -48,7 +48,7 @@ __pthread_setcancelstate (int state, int *oldstate)
                                                 &oldval, newval))
        {
          if (cancel_enabled_and_canceled_and_async (newval))
-           __do_cancel ();
+           __do_cancel (PTHREAD_CANCELED);

          break;
        }

@@ -48,7 +48,7 @@ __pthread_setcanceltype (int type, int *oldtype)
          if (cancel_enabled_and_canceled_and_async (newval))
            {
              THREAD_SETMEM (self, result, PTHREAD_CANCELED);
-             __do_cancel ();
+             __do_cancel (PTHREAD_CANCELED);
            }

          break;

@@ -25,10 +25,7 @@ ___pthread_testcancel (void)
   struct pthread *self = THREAD_SELF;
   int cancelhandling = atomic_load_relaxed (&self->cancelhandling);
   if (cancel_enabled_and_canceled (cancelhandling))
-    {
-      self->result = PTHREAD_CANCELED;
-      __do_cancel ();
-    }
+    __do_cancel (PTHREAD_CANCELED);
 }
 versioned_symbol (libc, ___pthread_testcancel, pthread_testcancel, GLIBC_2_34);
 libc_hidden_ver (___pthread_testcancel, __pthread_testcancel)

nptl/tst-cancel31.c (new file)
@@ -0,0 +1,100 @@
/* Verify side-effects of cancellable syscalls (BZ #12683).
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<https://www.gnu.org/licenses/>. */
/* This testcase checks whether there is resource leakage if the syscall
   has returned from kernelspace, but before userspace saves the return
   value.  The 'leaker' thread should be able to close the file descriptor
   if the resource is already allocated, meaning that if the cancellation
   signal arrives *after* the open syscall returns from the kernel, the
   side effect should be visible to the application.  */
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdlib.h>
#include <support/xunistd.h>
#include <support/xthread.h>
#include <support/check.h>
#include <support/temp_file.h>
#include <support/support.h>
#include <support/descriptors.h>
static void *
writeopener (void *arg)
{
int fd;
for (;;)
{
fd = open (arg, O_WRONLY);
xclose (fd);
}
return NULL;
}
static void *
leaker (void *arg)
{
int fd = open (arg, O_RDONLY);
TEST_VERIFY_EXIT (fd > 0);
pthread_setcancelstate (PTHREAD_CANCEL_DISABLE, 0);
xclose (fd);
return NULL;
}
static int
do_test (void)
{
enum {
iter_count = 1000
};
char *dir = support_create_temp_directory ("tst-cancel28");
char *name = xasprintf ("%s/fifo", dir);
TEST_COMPARE (mkfifo (name, 0600), 0);
add_temp_file (name);
struct support_descriptors *descrs = support_descriptors_list ();
srand (1);
xpthread_create (NULL, writeopener, name);
for (int i = 0; i < iter_count; i++)
{
pthread_t td = xpthread_create (NULL, leaker, name);
struct timespec ts =
{ .tv_nsec = rand () % 100000, .tv_sec = 0 };
nanosleep (&ts, NULL);
/* Ignore the pthread_cancel result because it may be called after the
   thread has already exited.  */
pthread_cancel (td);
xpthread_join (td);
}
support_descriptors_check (descrs);
support_descriptors_free (descrs);
free (name);
return 0;
}
#include <support/test-driver.c>

@@ -0,0 +1,25 @@ (new file)
/* Types and macros used for syscall issuing.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<https://www.gnu.org/licenses/>. */
#ifndef _SYSCALL_TYPES_H
#define _SYSCALL_TYPES_H
typedef long int __syscall_arg_t;
#define __SSC(__x) ((__syscall_arg_t) (__x))
#endif

@@ -0,0 +1,54 @@ (new file)
/* Architecture specific code for pthread cancellation handling.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#ifndef _NPTL_CANCELLATION_PC_CHECK
#define _NPTL_CANCELLATION_PC_CHECK
#include <sigcontextinfo.h>
/* For syscalls with side-effects (e.g. a read that might return a partial
   result), the kernel cannot restart the syscall when interrupted by a
   signal; it must return from the call with whatever partial result was
   obtained.  In this case, the saved program counter is set just after the
   syscall instruction, so the SIGCANCEL handler should not act on
   cancellation.
   The __syscall_cancel_arch function, used for all cancellable syscalls,
   contains two extra markers, __syscall_cancel_arch_start and
   __syscall_cancel_arch_end.  The former points to just before the initial
   conditional branch that checks if the thread has received a cancellation
   request, while the latter points to the instruction after the one
   responsible for issuing the syscall.
   The function checks whether the program counter (PC) from ucontext_t CTX
   is within the start and end boundaries of the __syscall_cancel_arch
   bridge.  It returns TRUE if the PC is within the boundaries, meaning the
   syscall does not have any side effects, or FALSE otherwise.  */
static __always_inline bool
cancellation_pc_check (void *ctx)
{
/* Both are defined in syscall_cancel.S. */
extern const char __syscall_cancel_arch_start[1];
extern const char __syscall_cancel_arch_end[1];
uintptr_t pc = sigcontext_get_pc (ctx);
return pc >= (uintptr_t) __syscall_cancel_arch_start
&& pc < (uintptr_t) __syscall_cancel_arch_end;
}
#endif

@@ -21,7 +21,6 @@
 #ifndef __ASSEMBLER__
 # include <sysdep.h>
-# include <sysdep-cancel.h>
 # include <kernel-features.h>
 #endif
@@ -120,21 +119,10 @@
                                         nr_wake, nr_move, mutex, val)

 /* Like lll_futex_wait, but acting as a cancellable entrypoint.  */
 # define lll_futex_wait_cancel(futexp, val, private) \
   ({                                                                   \
-    int __oldtype = LIBC_CANCEL_ASYNC ();                              \
-    long int __err = lll_futex_wait (futexp, val, LLL_SHARED);         \
-    LIBC_CANCEL_RESET (__oldtype);                                     \
-    __err;                                                             \
-  })
-
-/* Like lll_futex_timed_wait, but acting as a cancellable entrypoint.  */
-# define lll_futex_timed_wait_cancel(futexp, val, timeout, private) \
-  ({                                                                   \
-    int __oldtype = LIBC_CANCEL_ASYNC ();                              \
-    long int __err = lll_futex_timed_wait (futexp, val, timeout, private); \
-    LIBC_CANCEL_RESET (__oldtype);                                     \
-    __err;                                                             \
+    int __op = __lll_private_flag (FUTEX_WAIT, private);               \
+    INTERNAL_SYSCALL_CANCEL (futex, futexp, __op, val, NULL);          \
   })

 #endif  /* !__ASSEMBLER__ */

@@ -261,10 +261,12 @@ libc_hidden_proto (__pthread_unregister_cancel)
 /* Called when a thread reacts on a cancellation request.  */
 static inline void
 __attribute ((noreturn, always_inline))
-__do_cancel (void)
+__do_cancel (void *result)
 {
   struct pthread *self = THREAD_SELF;

+  self->result = result;
+
   /* Make sure we get no more cancellations.  */
   atomic_fetch_or_relaxed (&self->cancelhandling, EXITING_BITMASK);
@@ -272,6 +274,13 @@ __do_cancel (void)
                   THREAD_GETMEM (self, cleanup_jmp_buf));
 }

+extern long int __syscall_cancel_arch (volatile int *, __syscall_arg_t nr,
+     __syscall_arg_t arg1, __syscall_arg_t arg2, __syscall_arg_t arg3,
+     __syscall_arg_t arg4, __syscall_arg_t arg5, __syscall_arg_t arg6
+     __SYSCALL_CANCEL7_ARCH_ARG_DEF) attribute_hidden;
+
+extern _Noreturn void __syscall_do_cancel (void) attribute_hidden;
+
 /* Internal prototypes.  */

@@ -104,6 +104,9 @@ GOT_LABEL:                     ;       \
 # define JUMPTARGET(name) name
 #endif

+#define TAIL_CALL_NO_RETURN(__func) \
+  b __func@local
+
 #if defined SHARED && defined PIC && !defined NO_HIDDEN
 # undef HIDDEN_JUMPTARGET
 # define HIDDEN_JUMPTARGET(name) __GI_##name##@local

@@ -352,6 +352,25 @@ LT_LABELSUFFIX(name,_name_end): ; \
   ENTRY (name);                                                        \
   DO_CALL (SYS_ify (syscall_name))

+#ifdef SHARED
+# define TAIL_CALL_NO_RETURN(__func) \
+    b JUMPTARGET(__func)
+#else
+# define TAIL_CALL_NO_RETURN(__func) \
+    .ifdef .Local ## __func; \
+    b .Local ## __func; \
+    .else; \
+.Local ## __func: \
+    mflr 0; \
+    std 0,FRAME_LR_SAVE(1); \
+    stdu 1,-FRAME_MIN_SIZE(1); \
+    cfi_adjust_cfa_offset(FRAME_MIN_SIZE); \
+    cfi_offset(lr,FRAME_LR_SAVE); \
+    bl JUMPTARGET(__func); \
+    nop; \
+    .endif
+#endif
+
 #ifdef SHARED
 #define TAIL_CALL_SYSCALL_ERROR \
     b JUMPTARGET (NOTOC (__syscall_error))

@@ -32,6 +32,10 @@ tf (void *arg)
char buf[100000];
while (write (fd[1], buf, sizeof (buf)) > 0);
/* The write can return -1/EPIPE if the pipe was closed before the
thread calls write; that counts as a side effect, so the pending
cancellation is only acted upon at the following cancellation
point.  */
pthread_testcancel ();
return (void *) 42l;
}

@@ -24,6 +24,7 @@
#define ALIGNARG(log2) log2
#define ASM_SIZE_DIRECTIVE(name) .size name,.-name
#define L(label) .L##label
#ifdef SHARED
#define PLTJMP(_x) _x##@PLT

@@ -24,6 +24,9 @@
#define SYSCALL__(name, args) PSEUDO (__##name, name, args)
#define SYSCALL(name, args) PSEUDO (name, name, args)
#ifndef __ASSEMBLER__
# include <errno.h>
#define __SYSCALL_CONCAT_X(a,b) a##b
#define __SYSCALL_CONCAT(a,b) __SYSCALL_CONCAT_X (a, b)
@@ -108,42 +111,148 @@
#define INLINE_SYSCALL_CALL(...) \
__INLINE_SYSCALL_DISP (__INLINE_SYSCALL, __VA_ARGS__)
#define __INTERNAL_SYSCALL_NCS0(name) \
INTERNAL_SYSCALL_NCS (name, 0)
#define __INTERNAL_SYSCALL_NCS1(name, a1) \
INTERNAL_SYSCALL_NCS (name, 1, a1)
#define __INTERNAL_SYSCALL_NCS2(name, a1, a2) \
INTERNAL_SYSCALL_NCS (name, 2, a1, a2)
#define __INTERNAL_SYSCALL_NCS3(name, a1, a2, a3) \
INTERNAL_SYSCALL_NCS (name, 3, a1, a2, a3)
#define __INTERNAL_SYSCALL_NCS4(name, a1, a2, a3, a4) \
INTERNAL_SYSCALL_NCS (name, 4, a1, a2, a3, a4)
#define __INTERNAL_SYSCALL_NCS5(name, a1, a2, a3, a4, a5) \
INTERNAL_SYSCALL_NCS (name, 5, a1, a2, a3, a4, a5)
#define __INTERNAL_SYSCALL_NCS6(name, a1, a2, a3, a4, a5, a6) \
INTERNAL_SYSCALL_NCS (name, 6, a1, a2, a3, a4, a5, a6)
#define __INTERNAL_SYSCALL_NCS7(name, a1, a2, a3, a4, a5, a6, a7) \
INTERNAL_SYSCALL_NCS (name, 7, a1, a2, a3, a4, a5, a6, a7)
/* Issue a syscall defined by its syscall number plus any other arguments
required.  It is similar to the INTERNAL_SYSCALL_NCS macro, but without
the need to pass the expected argument count as an extra parameter.  */
#define INTERNAL_SYSCALL_NCS_CALL(...) \
__INTERNAL_SYSCALL_DISP (__INTERNAL_SYSCALL_NCS, __VA_ARGS__)
/* Cancellation macros. */
#include <syscall_types.h>
/* Adjust both __syscall_cancel and the SYSCALL_CANCEL macro to support
7 arguments instead of the default 6 (currently only mips32).  This avoids
requiring each architecture to provide 7-argument versions of the
{INTERNAL,INLINE}_SYSCALL macros.  */
#ifdef HAVE_CANCELABLE_SYSCALL_WITH_7_ARGS
# define __SYSCALL_CANCEL7_ARG_DEF __syscall_arg_t a7,
# define __SYSCALL_CANCEL7_ARCH_ARG_DEF ,__syscall_arg_t a7
# define __SYSCALL_CANCEL7_ARG 0,
# define __SYSCALL_CANCEL7_ARG7 a7,
# define __SYSCALL_CANCEL7_ARCH_ARG7 , a7
#else #else
# define NO_SYSCALL_CANCEL_CHECKING SINGLE_THREAD_P # define __SYSCALL_CANCEL7_ARG_DEF
# define __SYSCALL_CANCEL7_ARCH_ARG_DEF
# define __SYSCALL_CANCEL7_ARG
# define __SYSCALL_CANCEL7_ARG7
# define __SYSCALL_CANCEL7_ARCH_ARG7
#endif
long int __internal_syscall_cancel (__syscall_arg_t a1, __syscall_arg_t a2,
__syscall_arg_t a3, __syscall_arg_t a4,
__syscall_arg_t a5, __syscall_arg_t a6,
__SYSCALL_CANCEL7_ARG_DEF
__syscall_arg_t nr) attribute_hidden;
long int __syscall_cancel (__syscall_arg_t arg1, __syscall_arg_t arg2,
__syscall_arg_t arg3, __syscall_arg_t arg4,
__syscall_arg_t arg5, __syscall_arg_t arg6,
__SYSCALL_CANCEL7_ARG_DEF
__syscall_arg_t nr) attribute_hidden;
#define __SYSCALL_CANCEL0(name) \
__syscall_cancel (0, 0, 0, 0, 0, 0, __SYSCALL_CANCEL7_ARG __NR_##name)
#define __SYSCALL_CANCEL1(name, a1) \
__syscall_cancel (__SSC (a1), 0, 0, 0, 0, 0, \
__SYSCALL_CANCEL7_ARG __NR_##name)
#define __SYSCALL_CANCEL2(name, a1, a2) \
__syscall_cancel (__SSC (a1), __SSC (a2), 0, 0, 0, 0, \
__SYSCALL_CANCEL7_ARG __NR_##name)
#define __SYSCALL_CANCEL3(name, a1, a2, a3) \
__syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), 0, 0, 0, \
__SYSCALL_CANCEL7_ARG __NR_##name)
#define __SYSCALL_CANCEL4(name, a1, a2, a3, a4) \
__syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), \
__SSC(a4), 0, 0, __SYSCALL_CANCEL7_ARG __NR_##name)
#define __SYSCALL_CANCEL5(name, a1, a2, a3, a4, a5) \
__syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), __SSC(a4), \
__SSC (a5), 0, __SYSCALL_CANCEL7_ARG __NR_##name)
#define __SYSCALL_CANCEL6(name, a1, a2, a3, a4, a5, a6) \
__syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), __SSC (a4), \
__SSC (a5), __SSC (a6), __SYSCALL_CANCEL7_ARG \
__NR_##name)
#define __SYSCALL_CANCEL7(name, a1, a2, a3, a4, a5, a6, a7) \
__syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), __SSC (a4), \
__SSC (a5), __SSC (a6), __SSC (a7), __NR_##name)
#define __SYSCALL_CANCEL_NARGS_X(a,b,c,d,e,f,g,h,n,...) n
#define __SYSCALL_CANCEL_NARGS(...) \
__SYSCALL_CANCEL_NARGS_X (__VA_ARGS__,7,6,5,4,3,2,1,0,)
#define __SYSCALL_CANCEL_CONCAT_X(a,b) a##b
#define __SYSCALL_CANCEL_CONCAT(a,b) __SYSCALL_CANCEL_CONCAT_X (a, b)
#define __SYSCALL_CANCEL_DISP(b,...) \
__SYSCALL_CANCEL_CONCAT (b,__SYSCALL_CANCEL_NARGS(__VA_ARGS__))(__VA_ARGS__)
/* Issue a cancellable syscall defined by its first argument plus any other
arguments required.  If an error occurs, the macro returns -1 and sets
errno accordingly.  */
#define __SYSCALL_CANCEL_CALL(...) \
__SYSCALL_CANCEL_DISP (__SYSCALL_CANCEL, __VA_ARGS__)
#define __INTERNAL_SYSCALL_CANCEL0(name) \
__internal_syscall_cancel (0, 0, 0, 0, 0, 0, __SYSCALL_CANCEL7_ARG \
__NR_##name)
#define __INTERNAL_SYSCALL_CANCEL1(name, a1) \
__internal_syscall_cancel (__SSC (a1), 0, 0, 0, 0, 0, \
__SYSCALL_CANCEL7_ARG __NR_##name)
#define __INTERNAL_SYSCALL_CANCEL2(name, a1, a2) \
__internal_syscall_cancel (__SSC (a1), __SSC (a2), 0, 0, 0, 0, \
__SYSCALL_CANCEL7_ARG __NR_##name)
#define __INTERNAL_SYSCALL_CANCEL3(name, a1, a2, a3) \
__internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), 0, \
0, 0, __SYSCALL_CANCEL7_ARG __NR_##name)
#define __INTERNAL_SYSCALL_CANCEL4(name, a1, a2, a3, a4) \
__internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), \
__SSC(a4), 0, 0, \
__SYSCALL_CANCEL7_ARG __NR_##name)
#define __INTERNAL_SYSCALL_CANCEL5(name, a1, a2, a3, a4, a5) \
__internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), \
__SSC(a4), __SSC (a5), 0, \
__SYSCALL_CANCEL7_ARG __NR_##name)
#define __INTERNAL_SYSCALL_CANCEL6(name, a1, a2, a3, a4, a5, a6) \
__internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), \
__SSC (a4), __SSC (a5), __SSC (a6), \
__SYSCALL_CANCEL7_ARG __NR_##name)
#define __INTERNAL_SYSCALL_CANCEL7(name, a1, a2, a3, a4, a5, a6, a7) \
__internal_syscall_cancel (__SSC (a1), __SSC (a2), __SSC (a3), \
__SSC (a4), __SSC (a5), __SSC (a6), \
__SSC (a7), __NR_##name)
/* Issue a cancellable syscall defined by syscall number NAME plus any other
arguments required.  If an error occurs, its value is returned as a
negative number, unmodified, and errno is not set.  */
#define __INTERNAL_SYSCALL_CANCEL_CALL(...) \
__SYSCALL_CANCEL_DISP (__INTERNAL_SYSCALL_CANCEL, __VA_ARGS__)
#if IS_IN (rtld)
/* The loader does not need to handle thread cancellation, so it uses the
direct syscall macros instead.  */
# define INTERNAL_SYSCALL_CANCEL(...) INTERNAL_SYSCALL_CALL(__VA_ARGS__)
# define SYSCALL_CANCEL(...) INLINE_SYSCALL_CALL (__VA_ARGS__)
#else
# define INTERNAL_SYSCALL_CANCEL(...) \
__INTERNAL_SYSCALL_CANCEL_CALL (__VA_ARGS__)
# define SYSCALL_CANCEL(...) \
__SYSCALL_CANCEL_CALL (__VA_ARGS__)
#endif
#endif /* __ASSEMBLER__ */
/* Machine-dependent sysdep.h files are expected to define the macro
PSEUDO (function_name, syscall_name) to emit assembly code to define the

@@ -0,0 +1,59 @@
/* Cancellable syscall wrapper. Linux/AArch64 version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <descr-const.h>
/* long int [x0] __syscall_cancel_arch (int *cancelhandling [x0],
long int nr [x1],
long int arg1 [x2],
long int arg2 [x3],
long int arg3 [x4],
long int arg4 [x5],
long int arg5 [x6],
long int arg6 [x7]) */
ENTRY (__syscall_cancel_arch)
.globl __syscall_cancel_arch_start
__syscall_cancel_arch_start:
/* if (*cancelhandling & CANCELED_BITMASK)
__syscall_do_cancel() */
ldr w0, [x0]
tbnz w0, TCB_CANCELED_BIT, 1f
/* Issue a 6 argument syscall, the nr [x1] being the syscall
number. */
mov x8, x1
mov x0, x2
mov x1, x3
mov x2, x4
mov x3, x5
mov x4, x6
mov x5, x7
svc 0x0
.globl __syscall_cancel_arch_end
__syscall_cancel_arch_end:
ret
1:
b __syscall_do_cancel
END (__syscall_cancel_arch)

@@ -0,0 +1,80 @@
/* Cancellable syscall wrapper. Linux/alpha version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <descr-const.h>
/* long int __syscall_cancel_arch (int *ch,
__syscall_arg_t nr,
__syscall_arg_t arg1,
__syscall_arg_t arg2,
__syscall_arg_t arg3,
__syscall_arg_t arg4,
__syscall_arg_t arg5,
__syscall_arg_t arg6) */
.set noreorder
.set noat
.set nomacro
ENTRY (__syscall_cancel_arch)
.frame sp, 16, ra, 0
.mask 0x4000000,-16
cfi_startproc
ldah gp, 0(t12)
lda gp, 0(gp)
lda sp, -16(sp)
cfi_def_cfa_offset (16)
mov a1, v0
stq ra, 0(sp)
cfi_offset (26, -16)
.prologue 1
.global __syscall_cancel_arch_start
__syscall_cancel_arch_start:
ldl t0, 0(a0)
addl zero, t0, t0
/* if (*ch & CANCELED_BITMASK) */
and t0, TCB_CANCELED_BITMASK, t0
bne t0, 1f
mov a2, a0
mov a3, a1
mov a4, a2
ldq a4, 16(sp)
mov a5, a3
ldq a5, 24(sp)
.set macro
callsys
.set nomacro
.global __syscall_cancel_arch_end
__syscall_cancel_arch_end:
subq zero, v0, t0
ldq ra, 0(sp)
cmovne a3, t0, v0
lda sp, 16(sp)
cfi_remember_state
cfi_restore (26)
cfi_def_cfa_offset (0)
ret zero, (ra), 1
.align 4
1:
cfi_restore_state
ldq t12, __syscall_do_cancel(gp) !literal!2
jsr ra, (t12), __syscall_do_cancel !lituse_jsr!2
cfi_endproc
END (__syscall_cancel_arch)

@@ -0,0 +1,56 @@
/* Cancellable syscall wrapper. Linux/ARC version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <descr-const.h>
/* long int __syscall_cancel_arch (int *cancelhandling,
__syscall_arg_t nr,
__syscall_arg_t arg1,
__syscall_arg_t arg2,
__syscall_arg_t arg3,
__syscall_arg_t arg4,
__syscall_arg_t arg5,
__syscall_arg_t arg6) */
ENTRY (__syscall_cancel_arch)
.globl __syscall_cancel_arch_start
__syscall_cancel_arch_start:
ld_s r12,[r0]
bbit1 r12, TCB_CANCELED_BITMASK, 1f
mov_s r8, r1
mov_s r0, r2
mov_s r1, r3
mov_s r2, r4
mov_s r3, r5
mov_s r4, r6
mov_s r5, r7
trap_s 0
.globl __syscall_cancel_arch_end
__syscall_cancel_arch_end:
j_s [blink]
.align 4
1: push_s blink
cfi_def_cfa_offset (4)
cfi_offset (31, -4)
bl @__syscall_do_cancel
END (__syscall_cancel_arch)

@@ -0,0 +1,78 @@
/* Cancellable syscall wrapper. Linux/arm version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <descr-const.h>
/* long int [r0] __syscall_cancel_arch (int *cancelhandling [r0],
long int nr [r1],
long int arg1 [r2],
long int arg2 [r3],
long int arg3 [SP],
long int arg4 [SP+4],
long int arg5 [SP+8],
long int arg6 [SP+12]) */
.syntax unified
ENTRY (__syscall_cancel_arch)
.fnstart
mov ip, sp
stmfd sp!, {r4, r5, r6, r7, lr}
.save {r4, r5, r6, r7, lr}
cfi_adjust_cfa_offset (20)
cfi_rel_offset (r4, 0)
cfi_rel_offset (r5, 4)
cfi_rel_offset (r6, 8)
cfi_rel_offset (r7, 12)
cfi_rel_offset (lr, 16)
.globl __syscall_cancel_arch_start
__syscall_cancel_arch_start:
/* if (*cancelhandling & CANCELED_BITMASK)
__syscall_do_cancel() */
ldr r0, [r0]
tst r0, #TCB_CANCELED_BITMASK
bne 1f
/* Issue a 6 argument syscall, the nr [r1] being the syscall
number. */
mov r7, r1
mov r0, r2
mov r1, r3
ldmfd ip, {r2, r3, r4, r5, r6}
svc 0x0
.globl __syscall_cancel_arch_end
__syscall_cancel_arch_end:
ldmfd sp!, {r4, r5, r6, r7, lr}
cfi_adjust_cfa_offset (-20)
cfi_restore (r4)
cfi_restore (r5)
cfi_restore (r6)
cfi_restore (r7)
cfi_restore (lr)
BX (lr)
1:
ldmfd sp!, {r4, r5, r6, r7, lr}
b __syscall_do_cancel
.fnend
END (__syscall_cancel_arch)

@@ -0,0 +1,114 @@
/* Cancellable syscall wrapper. Linux/csky version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <descr-const.h>
/* long int __syscall_cancel_arch (int *cancelhandling,
__syscall_arg_t nr,
__syscall_arg_t arg1,
__syscall_arg_t arg2,
__syscall_arg_t arg3,
__syscall_arg_t arg4,
__syscall_arg_t arg5,
__syscall_arg_t arg6) */
#ifdef SHARED
# define STACK_ADJ 4
#else
# define STACK_ADJ 0
#endif
ENTRY (__syscall_cancel_arch)
subi sp, sp, 16 + STACK_ADJ
cfi_def_cfa_offset (16 + STACK_ADJ)
#ifdef SHARED
st.w gb, (sp, 16)
lrw t1, 1f@GOTPC
cfi_offset (gb, -4)
grs gb, 1f
1:
#endif
st.w lr, (sp, 12)
st.w l3, (sp, 8)
st.w l1, (sp, 4)
st.w l0, (sp, 0)
#ifdef SHARED
addu gb, gb, t1
#endif
subi sp, sp, 16
cfi_def_cfa_offset (32 + STACK_ADJ)
cfi_offset (lr, -( 4 + STACK_ADJ))
cfi_offset (l3, -( 8 + STACK_ADJ))
cfi_offset (l1, -(12 + STACK_ADJ))
cfi_offset (l0, -(16 + STACK_ADJ))
mov l3, a1
mov a1, a3
ld.w a3, (sp, 32 + STACK_ADJ)
st.w a3, (sp, 0)
ld.w a3, (sp, 36 + STACK_ADJ)
st.w a3, (sp, 4)
ld.w a3, (sp, 40 + STACK_ADJ)
st.w a3, (sp, 8)
ld.w a3, (sp, 44 + STACK_ADJ)
st.w a3, (sp, 12)
.globl __syscall_cancel_arch_start
__syscall_cancel_arch_start:
ld.w t0, (a0, 0)
andi t0, t0, TCB_CANCELED_BITMASK
jbnez t0, 2f
mov a0, a2
ld.w a3, (sp, 4)
ld.w a2, (sp, 0)
ld.w l0, (sp, 8)
ld.w l1, (sp, 12)
trap 0
.globl __syscall_cancel_arch_end
__syscall_cancel_arch_end:
addi sp, sp, 16
cfi_remember_state
cfi_def_cfa_offset (16 + STACK_ADJ)
#ifdef SHARED
ld.w gb, (sp, 16)
cfi_restore (gb)
#endif
ld.w lr, (sp, 12)
cfi_restore (lr)
ld.w l3, (sp, 8)
cfi_restore (l3)
ld.w l1, (sp, 4)
cfi_restore (l1)
ld.w l0, (sp, 0)
cfi_restore (l0)
addi sp, sp, 16
cfi_def_cfa_offset (0)
rts
2:
cfi_restore_state
#ifdef SHARED
lrw a3, __syscall_do_cancel@GOTOFF
addu a3, a3, gb
jsr a3
#else
jbsr __syscall_do_cancel
#endif
END (__syscall_cancel_arch)

@@ -0,0 +1,81 @@
/* Cancellable syscall wrapper. Linux/hppa version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library. If not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <descr-const.h>
/* long int __syscall_cancel_arch (int *cancelhandling,
long int nr,
long int arg1,
long int arg2,
long int arg3,
long int arg4,
long int arg5,
long int arg6) */
.text
ENTRY(__syscall_cancel_arch)
stw %r2,-20(%r30)
ldo 128(%r30),%r30
cfi_def_cfa_offset (-128)
cfi_offset (2, -20)
ldw -180(%r30),%r28
copy %r26,%r20
stw %r28,-108(%r30)
ldw -184(%r30),%r28
copy %r24,%r26
stw %r28,-112(%r30)
ldw -188(%r30),%r28
stw %r28,-116(%r30)
ldw -192(%r30),%r28
stw %r4,-104(%r30)
stw %r28,-120(%r30)
copy %r25,%r28
copy %r23,%r25
#ifdef __PIC__
stw %r19,-32(%r30)
#endif
cfi_offset (4, 24)
.global __syscall_cancel_arch_start
__syscall_cancel_arch_start:
ldw 0(%r20),%r20
bb,< %r20,31-TCB_CANCELED_BIT,1f
ldw -120(%r30),%r21
ldw -116(%r30),%r22
ldw -112(%r30),%r23
ldw -108(%r30),%r24
copy %r19, %r4
ble 0x100(%sr2, %r0)
.global __syscall_cancel_arch_end
__syscall_cancel_arch_end:
copy %r28,%r20
copy %r4,%r19
ldw -148(%r30),%r2
ldw -104(%r30),%r4
bv %r0(%r2)
ldo -128(%r30),%r30
1:
bl __syscall_do_cancel,%r2
nop
nop
END(__syscall_cancel_arch)

@@ -0,0 +1,104 @@
/* Cancellable syscall wrapper. Linux/i686 version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <descr-const.h>
/* long int [eax] __syscall_cancel_arch (int *cancelhandling [SP],
long int nr [SP+4],
long int arg1 [SP+8],
long int arg2 [SP+12],
long int arg3 [SP+16],
long int arg4 [SP+20],
long int arg5 [SP+24],
long int arg6 [SP+28]) */
ENTRY (__syscall_cancel_arch)
pushl %ebp
cfi_def_cfa_offset (8)
cfi_offset (ebp, -8)
pushl %edi
cfi_def_cfa_offset (12)
cfi_offset (edi, -12)
pushl %esi
cfi_def_cfa_offset (16)
cfi_offset (esi, -16)
pushl %ebx
cfi_def_cfa_offset (20)
cfi_offset (ebx, -20)
.global __syscall_cancel_arch_start
__syscall_cancel_arch_start:
/* if (*cancelhandling & CANCELED_BITMASK)
__syscall_do_cancel() */
movl 20(%esp), %eax
testb $TCB_CANCELED_BITMASK, (%eax)
jne 1f
/* Issue a 6 argument syscall, the nr [%eax] being the syscall
number. */
movl 24(%esp), %eax
movl 28(%esp), %ebx
movl 32(%esp), %ecx
movl 36(%esp), %edx
movl 40(%esp), %esi
movl 44(%esp), %edi
movl 48(%esp), %ebp
/* We can not use the vDSO helper for syscall (__kernel_vsyscall)
because the returned PC from kernel will point to the vDSO page
instead of the expected __syscall_cancel_arch_{start,end}
marks. */
int $0x80
.global __syscall_cancel_arch_end
__syscall_cancel_arch_end:
popl %ebx
cfi_restore (ebx)
cfi_def_cfa_offset (16)
popl %esi
cfi_restore (esi)
cfi_def_cfa_offset (12)
popl %edi
cfi_restore (edi)
cfi_def_cfa_offset (8)
popl %ebp
cfi_restore (ebp)
cfi_def_cfa_offset (4)
ret
1:
/* Although __syscall_do_cancel does not return, the stack must be
restored so the unwind information stays correct.  */
popl %ebx
cfi_restore (ebx)
cfi_def_cfa_offset (16)
popl %esi
cfi_restore (esi)
cfi_def_cfa_offset (12)
popl %edi
cfi_restore (edi)
cfi_def_cfa_offset (8)
popl %ebp
cfi_restore (ebp)
cfi_def_cfa_offset (4)
jmp __syscall_do_cancel
END (__syscall_cancel_arch)

@@ -0,0 +1,50 @@
/* Cancellable syscall wrapper. Linux/loongarch version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <descr-const.h>
ENTRY (__syscall_cancel_arch)
.global __syscall_cancel_arch_start
__syscall_cancel_arch_start:
/* if (*cancelhandling & CANCELED_BITMASK)
__syscall_do_cancel() */
ld.w t0, a0, 0
andi t0, t0, TCB_CANCELED_BITMASK
bnez t0, 1f
/* Issue a 6 argument syscall. */
move t1, a1
move a0, a2
move a1, a3
move a2, a4
move a3, a5
move a4, a6
move a5, a7
move a7, t1
syscall 0
.global __syscall_cancel_arch_end
__syscall_cancel_arch_end:
jr ra
1:
b __syscall_do_cancel
END (__syscall_cancel_arch)

@@ -0,0 +1,84 @@
/* Cancellable syscall wrapper. Linux/m68k version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <descr-const.h>
/* long int __syscall_cancel_arch (int *cancelhandling,
__syscall_arg_t nr,
__syscall_arg_t arg1,
__syscall_arg_t arg2,
__syscall_arg_t arg3,
__syscall_arg_t arg4,
__syscall_arg_t arg5,
__syscall_arg_t arg6) */
ENTRY (__syscall_cancel_arch)
#ifdef __mcoldfire__
lea (-16,%sp),%sp
movem.l %d2-%d5,(%sp)
#else
movem.l %d2-%d5,-(%sp)
#endif
cfi_def_cfa_offset (20)
cfi_offset (2, -20)
cfi_offset (3, -16)
cfi_offset (4, -12)
cfi_offset (5, -8)
.global __syscall_cancel_arch_start
__syscall_cancel_arch_start:
move.l 20(%sp),%a0
move.l (%a0),%d0
#ifdef __mcoldfire__
move.w %d0,%ccr
jeq 1f
#else
btst #TCB_CANCELED_BIT,%d0
jne 1f
#endif
move.l 48(%sp),%a0
move.l 44(%sp),%d5
move.l 40(%sp),%d4
move.l 36(%sp),%d3
move.l 32(%sp),%d2
move.l 28(%sp),%d1
move.l 24(%sp),%d0
trap #0
.global __syscall_cancel_arch_end
__syscall_cancel_arch_end:
#ifdef __mcoldfire__
movem.l (%sp),%d2-%d5
lea (16,%sp),%sp
#else
movem.l (%sp)+,%d2-%d5
#endif
rts
1:
#ifdef PIC
bsr.l __syscall_do_cancel
#else
jsr __syscall_do_cancel
#endif
END (__syscall_cancel_arch)

@@ -0,0 +1,61 @@
/* Cancellable syscall wrapper. Linux/microblaze version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <descr-const.h>
/* long int __syscall_cancel_arch (int *cancelhandling,
long int nr,
long int arg1,
long int arg2,
long int arg3,
long int arg4,
long int arg5,
long int arg6) */
ENTRY (__syscall_cancel_arch)
.globl __syscall_cancel_arch_start
__syscall_cancel_arch_start:
lwi r3,r5,0
andi r3,r3,TCB_CANCELED_BITMASK
bneid r3,1f
addk r12,r6,r0
addk r5,r7,r0
addk r6,r8,r0
addk r7,r9,r0
addk r8,r10,r0
lwi r9,r1,56
lwi r10,r1,60
brki r14,8
.globl __syscall_cancel_arch_end
__syscall_cancel_arch_end:
nop
lwi r15,r1,0
rtsd r15,8
addik r1,r1,28
1:
brlid r15, __syscall_do_cancel
nop
END (__syscall_cancel_arch)

@@ -0,0 +1,128 @@
/* Cancellable syscall wrapper. Linux/mips32 version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <sys/asm.h>
#include <descr-const.h>
/* long int __syscall_cancel_arch (int *cancelhandling,
__syscall_arg_t nr,
__syscall_arg_t arg1,
__syscall_arg_t arg2,
__syscall_arg_t arg3,
__syscall_arg_t arg4,
__syscall_arg_t arg5,
__syscall_arg_t arg6,
__syscall_arg_t arg7) */
#define FRAME_SIZE 56
NESTED (__syscall_cancel_arch, FRAME_SIZE, fp)
.mask 0xc0070000,-SZREG
.fmask 0x00000000,0
PTR_ADDIU sp, -FRAME_SIZE
cfi_def_cfa_offset (FRAME_SIZE)
sw fp, 48(sp)
sw ra, 52(sp)
sw s2, 44(sp)
sw s1, 40(sp)
sw s0, 36(sp)
#ifdef __PIC__
.cprestore 16
#endif
cfi_offset (ra, -4)
cfi_offset (fp, -8)
cfi_offset (s2, -12)
cfi_offset (s1, -16)
cfi_offset (s0, -20)
move fp ,sp
cfi_def_cfa_register (fp)
.globl __syscall_cancel_arch_start
__syscall_cancel_arch_start:
lw v0, 0(a0)
andi v0, v0, TCB_CANCELED_BITMASK
bne v0, zero, 2f
addiu sp, sp, -16
addiu v0, sp, 16
sw v0, 24(fp)
move s0, a1
move a0, a2
move a1, a3
lw a2, 72(fp)
lw a3, 76(fp)
lw v0, 84(fp)
lw s1, 80(fp)
lw s2, 88(fp)
.set noreorder
subu sp, 32
sw s1, 16(sp)
sw v0, 20(sp)
sw s2, 24(sp)
move v0, s0
syscall
.globl __syscall_cancel_arch_end
__syscall_cancel_arch_end:
addiu sp, sp, 32
.set reorder
beq a3, zero, 1f
subu v0, zero, v0
1:
move sp, fp
cfi_remember_state
cfi_def_cfa_register (sp)
lw ra, 52(fp)
lw fp, 48(sp)
lw s2, 44(sp)
lw s1, 40(sp)
lw s0, 36(sp)
.set noreorder
.set nomacro
jr ra
addiu sp,sp,FRAME_SIZE
.set macro
.set reorder
cfi_def_cfa_offset (0)
cfi_restore (s0)
cfi_restore (s1)
cfi_restore (s2)
cfi_restore (fp)
cfi_restore (ra)
2:
cfi_restore_state
#ifdef __PIC__
PTR_LA t9, __syscall_do_cancel
jalr t9
#else
jal __syscall_do_cancel
#endif
END (__syscall_cancel_arch)

@@ -18,6 +18,10 @@
#ifndef _LINUX_MIPS_MIPS32_SYSDEP_H
#define _LINUX_MIPS_MIPS32_SYSDEP_H 1
/* mips32 has cancellable syscalls with 7 arguments (currently only
sync_file_range).  */
#define HAVE_CANCELABLE_SYSCALL_WITH_7_ARGS 1
/* There is some commonality.  */
#include <sysdeps/unix/sysv/linux/mips/sysdep.h>
#include <sysdeps/unix/sysv/linux/sysdep.h>

@@ -0,0 +1,28 @@
/* Types and macros used for syscall issuing. MIPS64n32 version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<https://www.gnu.org/licenses/>. */
#ifndef _SYSCALL_TYPES_H
#define _SYSCALL_TYPES_H
typedef long long int __syscall_arg_t;
/* Convert X to a long long, without losing any bits if it is already
one, and without a warning if it is a 32-bit pointer.  */
#endif

@@ -0,0 +1,108 @@
/* Cancellable syscall wrapper. Linux/mips64 version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <sys/asm.h>
#include <descr-const.h>
/* long int __syscall_cancel_arch (int *cancelhandling,
__syscall_arg_t nr,
__syscall_arg_t arg1,
__syscall_arg_t arg2,
__syscall_arg_t arg3,
__syscall_arg_t arg4,
__syscall_arg_t arg5,
__syscall_arg_t arg6,
__syscall_arg_t arg7) */
#define FRAME_SIZE 32
.text
NESTED (__syscall_cancel_arch, FRAME_SIZE, ra)
.mask 0x90010000, -SZREG
.fmask 0x00000000, 0
LONG_ADDIU sp, sp, -FRAME_SIZE
cfi_def_cfa_offset (FRAME_SIZE)
sd gp, 16(sp)
cfi_offset (gp, -16)
lui gp, %hi(%neg(%gp_rel(__syscall_cancel_arch)))
LONG_ADDU gp, gp, t9
sd ra, 24(sp)
sd s0, 8(sp)
cfi_offset (ra, -8)
cfi_offset (s0, -24)
LONG_ADDIU gp, gp, %lo(%neg(%gp_rel(__syscall_cancel_arch)))
.global __syscall_cancel_arch_start
__syscall_cancel_arch_start:
lw v0, 0(a0)
andi v0, v0, TCB_CANCELED_BITMASK
.set noreorder
.set nomacro
bne v0, zero, 2f
move s0, a1
.set macro
.set reorder
move a0, a2
move a1, a3
move a2, a4
move a3, a5
move a4, a6
move a5, a7
.set noreorder
move v0, s0
syscall
.set reorder
.global __syscall_cancel_arch_end
__syscall_cancel_arch_end:
.set noreorder
.set nomacro
bnel a3, zero, 1f
SUBU v0, zero, v0
.set macro
.set reorder
1:
ld ra, 24(sp)
ld gp, 16(sp)
ld s0, 8(sp)
.set noreorder
.set nomacro
jr ra
LONG_ADDIU sp, sp, FRAME_SIZE
.set macro
.set reorder
cfi_remember_state
cfi_def_cfa_offset (0)
cfi_restore (s0)
cfi_restore (gp)
cfi_restore (ra)
.align 3
2:
cfi_restore_state
LONG_L t9, %got_disp(__syscall_do_cancel)(gp)
.reloc 3f, R_MIPS_JALR, __syscall_do_cancel
3: jalr t9
END (__syscall_cancel_arch)


@@ -44,15 +44,7 @@
 #undef HAVE_INTERNAL_BRK_ADDR_SYMBOL
 #define HAVE_INTERNAL_BRK_ADDR_SYMBOL 1
 
-#if _MIPS_SIM == _ABIN32
-/* Convert X to a long long, without losing any bits if it is one
-   already or warning if it is a 32-bit pointer.  */
-# define ARGIFY(X) ((long long int) (__typeof__ ((X) - (X))) (X))
-typedef long long int __syscall_arg_t;
-#else
-# define ARGIFY(X) ((long int) (X))
-typedef long int __syscall_arg_t;
-#endif
+#include <syscall_types.h>
 
 /* Note that the original Linux syscall restart convention required the
    instruction immediately preceding SYSCALL to initialize $v0 with the
@@ -120,7 +112,7 @@ typedef long int __syscall_arg_t;
     long int _sys_result; \
 \
     { \
-      __syscall_arg_t _arg1 = ARGIFY (arg1); \
+      __syscall_arg_t _arg1 = __SSC (arg1); \
       register __syscall_arg_t __s0 asm ("$16") __attribute__ ((unused))\
         = (number); \
       register __syscall_arg_t __v0 asm ("$2"); \
@@ -144,8 +136,8 @@ typedef long int __syscall_arg_t;
     long int _sys_result; \
 \
     { \
-      __syscall_arg_t _arg1 = ARGIFY (arg1); \
-      __syscall_arg_t _arg2 = ARGIFY (arg2); \
+      __syscall_arg_t _arg1 = __SSC (arg1); \
+      __syscall_arg_t _arg2 = __SSC (arg2); \
       register __syscall_arg_t __s0 asm ("$16") __attribute__ ((unused))\
         = (number); \
       register __syscall_arg_t __v0 asm ("$2"); \
@@ -170,9 +162,9 @@ typedef long int __syscall_arg_t;
     long int _sys_result; \
 \
     { \
-      __syscall_arg_t _arg1 = ARGIFY (arg1); \
-      __syscall_arg_t _arg2 = ARGIFY (arg2); \
-      __syscall_arg_t _arg3 = ARGIFY (arg3); \
+      __syscall_arg_t _arg1 = __SSC (arg1); \
+      __syscall_arg_t _arg2 = __SSC (arg2); \
+      __syscall_arg_t _arg3 = __SSC (arg3); \
       register __syscall_arg_t __s0 asm ("$16") __attribute__ ((unused))\
         = (number); \
       register __syscall_arg_t __v0 asm ("$2"); \
@@ -199,10 +191,10 @@ typedef long int __syscall_arg_t;
     long int _sys_result; \
 \
     { \
-      __syscall_arg_t _arg1 = ARGIFY (arg1); \
-      __syscall_arg_t _arg2 = ARGIFY (arg2); \
-      __syscall_arg_t _arg3 = ARGIFY (arg3); \
-      __syscall_arg_t _arg4 = ARGIFY (arg4); \
+      __syscall_arg_t _arg1 = __SSC (arg1); \
+      __syscall_arg_t _arg2 = __SSC (arg2); \
+      __syscall_arg_t _arg3 = __SSC (arg3); \
+      __syscall_arg_t _arg4 = __SSC (arg4); \
       register __syscall_arg_t __s0 asm ("$16") __attribute__ ((unused))\
         = (number); \
       register __syscall_arg_t __v0 asm ("$2"); \
@@ -229,11 +221,11 @@ typedef long int __syscall_arg_t;
     long int _sys_result; \
 \
     { \
-      __syscall_arg_t _arg1 = ARGIFY (arg1); \
-      __syscall_arg_t _arg2 = ARGIFY (arg2); \
-      __syscall_arg_t _arg3 = ARGIFY (arg3); \
-      __syscall_arg_t _arg4 = ARGIFY (arg4); \
-      __syscall_arg_t _arg5 = ARGIFY (arg5); \
+      __syscall_arg_t _arg1 = __SSC (arg1); \
+      __syscall_arg_t _arg2 = __SSC (arg2); \
+      __syscall_arg_t _arg3 = __SSC (arg3); \
+      __syscall_arg_t _arg4 = __SSC (arg4); \
+      __syscall_arg_t _arg5 = __SSC (arg5); \
       register __syscall_arg_t __s0 asm ("$16") __attribute__ ((unused))\
         = (number); \
       register __syscall_arg_t __v0 asm ("$2"); \
@@ -261,12 +253,12 @@ typedef long int __syscall_arg_t;
     long int _sys_result; \
 \
     { \
-      __syscall_arg_t _arg1 = ARGIFY (arg1); \
-      __syscall_arg_t _arg2 = ARGIFY (arg2); \
-      __syscall_arg_t _arg3 = ARGIFY (arg3); \
-      __syscall_arg_t _arg4 = ARGIFY (arg4); \
-      __syscall_arg_t _arg5 = ARGIFY (arg5); \
-      __syscall_arg_t _arg6 = ARGIFY (arg6); \
+      __syscall_arg_t _arg1 = __SSC (arg1); \
+      __syscall_arg_t _arg2 = __SSC (arg2); \
+      __syscall_arg_t _arg3 = __SSC (arg3); \
+      __syscall_arg_t _arg4 = __SSC (arg4); \
+      __syscall_arg_t _arg5 = __SSC (arg5); \
+      __syscall_arg_t _arg6 = __SSC (arg6); \
       register __syscall_arg_t __s0 asm ("$16") __attribute__ ((unused))\
         = (number); \
       register __syscall_arg_t __v0 asm ("$2"); \


@@ -0,0 +1,95 @@
/* Cancellable syscall wrapper. Linux/nios2 version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <descr-const.h>
/* long int __syscall_cancel_arch (int *cancelhandling,
__syscall_arg_t nr,
__syscall_arg_t arg1,
__syscall_arg_t arg2,
__syscall_arg_t arg3,
__syscall_arg_t arg4,
__syscall_arg_t arg5,
__syscall_arg_t arg6) */
ENTRY (__syscall_cancel_arch)
#ifdef SHARED
addi sp, sp, -8
stw r22, 0(sp)
nextpc r22
1:
movhi r8, %hiadj(_gp_got - 1b)
addi r8, r8, %lo(_gp_got - 1b)
stw ra, 4(sp)
add r22, r22, r8
#else
addi sp, sp, -4
cfi_def_cfa_offset (4)
stw ra, 0(sp)
cfi_offset (31, -4)
#endif
.globl __syscall_cancel_arch_start
__syscall_cancel_arch_start:
ldw r3, 0(r4)
andi r3, r3, TCB_CANCELED_BITMASK
bne r3, zero, 3f
mov r10, r6
mov r2, r5
#ifdef SHARED
# define STACK_ADJ 4
#else
# define STACK_ADJ 0
#endif
ldw r9, (16 + STACK_ADJ)(sp)
mov r5, r7
ldw r8, (12 + STACK_ADJ)(sp)
ldw r7, (8 + STACK_ADJ)(sp)
ldw r6, (4 + STACK_ADJ)(sp)
mov r4, r10
trap
.globl __syscall_cancel_arch_end
__syscall_cancel_arch_end:
beq r7, zero, 2f
sub r2, zero, r2
2:
#ifdef SHARED
ldw ra, 4(sp)
ldw r22, 0(sp)
addi sp, sp, 8
#else
ldw ra, (0 + STACK_ADJ)(sp)
cfi_remember_state
cfi_restore (31)
addi sp, sp, 4
cfi_def_cfa_offset (0)
#endif
ret
3:
#ifdef SHARED
ldw r2, %call(__syscall_do_cancel)(r22)
callr r2
#else
cfi_restore_state
call __syscall_do_cancel
#endif
END (__syscall_cancel_arch)


@@ -0,0 +1,63 @@
/* Cancellable syscall wrapper. Linux/or1k version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <descr-const.h>
ENTRY (__syscall_cancel_arch)
l.addi r1, r1, -4
cfi_def_cfa_offset (4)
l.sw 0(r1), r9
cfi_offset (9, -4)
.global __syscall_cancel_arch_start
__syscall_cancel_arch_start:
/* if (*cancelhandling & CANCELED_BITMASK)
__syscall_do_cancel() */
l.movhi r19, hi(0)
l.lwz r17, 0(r3)
l.andi r17, r17, 8
l.sfeq r17, r19
l.bnf 1f
/* Issue a 6 argument syscall. */
l.or r11, r4, r4
l.or r3, r5, r5
l.or r4, r6, r6
l.or r5, r7, r7
l.or r6, r8, r8
l.lwz r7, 4(r1)
l.lwz r8, 8(r1)
l.sys 1
l.nop
.global __syscall_cancel_arch_end
__syscall_cancel_arch_end:
l.lwz r9, 0(r1)
l.jr r9
l.addi r1, r1, 4
cfi_remember_state
cfi_def_cfa_offset (0)
cfi_restore (9)
1:
cfi_restore_state
l.jal __syscall_do_cancel
l.nop
END (__syscall_cancel_arch)


@@ -0,0 +1,65 @@
/* Architecture specific code for pthread cancellation handling.
Linux/PowerPC version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#ifndef _NPTL_CANCELLATION_PC_CHECK
#define _NPTL_CANCELLATION_PC_CHECK
#include <sigcontextinfo.h>
/* For syscalls with side effects (e.g. a read that might return a partial
   result), the kernel cannot restart the syscall when it is interrupted by
   a signal; it must return from the call with whatever partial result is
   available.  In this case, the saved program counter is set just after the
   syscall instruction, so the SIGCANCEL handler should not act on
   cancellation.

   The __syscall_cancel_arch function, used for all cancellable syscalls,
   contains two extra markers, __syscall_cancel_arch_start and
   __syscall_cancel_arch_end.  The former points to just before the initial
   conditional branch that checks whether the thread has received a
   cancellation request, while the latter points to the instruction
   immediately after the one that issues the syscall.

   This function checks whether the program counter (PC) from ucontext_t
   CTX is within the start and end boundaries of the __syscall_cancel_arch
   bridge.  It returns TRUE if the PC is within the boundaries, meaning the
   syscall does not have any side effects; FALSE otherwise.  */
static __always_inline bool
cancellation_pc_check (void *ctx)
{
/* Both are defined in syscall_cancel.S. */
extern const char __syscall_cancel_arch_start[1];
extern const char __syscall_cancel_arch_end_sc[1];
#if defined(USE_PPC_SVC) && defined(__powerpc64__)
extern const char __syscall_cancel_arch_end_svc[1];
#endif
uintptr_t pc = sigcontext_get_pc (ctx);
  return pc >= (uintptr_t) __syscall_cancel_arch_start
#if defined(USE_PPC_SVC) && defined(__powerpc64__)
	 && ((THREAD_GET_HWCAP () & PPC_FEATURE2_SCV)
	     ? pc < (uintptr_t) __syscall_cancel_arch_end_svc
	     : pc < (uintptr_t) __syscall_cancel_arch_end_sc);
#else
	 && pc < (uintptr_t) __syscall_cancel_arch_end_sc;
#endif
}
#endif


@@ -0,0 +1,86 @@
/* Cancellable syscall wrapper. Linux/powerpc version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <descr-const.h>
/* long int [r3] __syscall_cancel_arch (int *cancelhandling [r3],
long int nr [r4],
long int arg1 [r5],
long int arg2 [r6],
long int arg3 [r7],
long int arg4 [r8],
long int arg5 [r9],
long int arg6 [r10]) */
ENTRY (__syscall_cancel_arch)
.globl __syscall_cancel_arch_start
__syscall_cancel_arch_start:
/* if (*cancelhandling & CANCELED_BITMASK)
__syscall_do_cancel() */
lwz r0,0(r3)
andi. r0,r0,TCB_CANCELED_BITMASK
bne 1f
/* Issue a 6 argument syscall, the nr [r4] being the syscall
number. */
mr r0,r4
mr r3,r5
mr r4,r6
mr r5,r7
mr r6,r8
mr r7,r9
mr r8,r10
#if defined(USE_PPC_SVC) && defined(__powerpc64__)
CHECK_SCV_SUPPORT r9 0f
stdu r1, -SCV_FRAME_SIZE(r1)
cfi_adjust_cfa_offset (SCV_FRAME_SIZE)
.machine "push"
.machine "power9"
scv 0
.machine "pop"
.globl __syscall_cancel_arch_end_svc
__syscall_cancel_arch_end_svc:
ld r9, SCV_FRAME_SIZE + FRAME_LR_SAVE(r1)
mtlr r9
addi r1, r1, SCV_FRAME_SIZE
cfi_restore (lr)
li r9, -4095
cmpld r3, r9
bnslr+
neg r3,r3
blr
0:
#endif
sc
.globl __syscall_cancel_arch_end_sc
__syscall_cancel_arch_end_sc:
bnslr+
neg r3,r3
blr
/* Although __syscall_do_cancel does not return, the stack still needs
   to be set up correctly for the unwinder.  */
1:
TAIL_CALL_NO_RETURN (__syscall_do_cancel)
END (__syscall_cancel_arch)


@@ -0,0 +1,67 @@
/* Cancellable syscall wrapper. Linux/riscv version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <descr-const.h>
/* long int __syscall_cancel_arch (int *cancelhandling,
__syscall_arg_t nr,
__syscall_arg_t arg1,
__syscall_arg_t arg2,
__syscall_arg_t arg3,
__syscall_arg_t arg4,
__syscall_arg_t arg5,
__syscall_arg_t arg6) */
#ifdef SHARED
.option pic
#else
.option nopic
#endif
ENTRY (__syscall_cancel_arch)
.globl __syscall_cancel_arch_start
__syscall_cancel_arch_start:
lw t1, 0(a0)
/* if (*ch & CANCELED_BITMASK) */
andi t1, t1, TCB_CANCELED_BITMASK
bne t1, zero, 1f
mv t3, a1
mv a0, a2
mv a1, a3
mv a2, a4
mv a3, a5
mv a4, a6
mv a5, a7
mv a7, t3
scall
.globl __syscall_cancel_arch_end
__syscall_cancel_arch_end:
ret
1:
addi sp, sp, -16
cfi_def_cfa_offset (16)
REG_S ra, (16-SZREG)(sp)
cfi_offset (ra, -SZREG)
call __syscall_do_cancel
END (__syscall_cancel_arch)


@@ -0,0 +1,62 @@
/* Cancellable syscall wrapper. Linux/s390 version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <descr-const.h>
/* long int __syscall_cancel_arch (int *cancelhandling,
__syscall_arg_t nr,
__syscall_arg_t arg1,
__syscall_arg_t arg2,
__syscall_arg_t arg3,
__syscall_arg_t arg4,
__syscall_arg_t arg5,
__syscall_arg_t arg6) */
ENTRY (__syscall_cancel_arch)
stm %r6,%r7,24(%r15)
cfi_offset (%r6, -72)
cfi_offset (%r7, -68)
.globl __syscall_cancel_arch_start
__syscall_cancel_arch_start:
/* if (*cancelhandling & CANCELED_BITMASK)
__syscall_do_cancel() */
tm 3(%r2),TCB_CANCELED_BITMASK
jne 1f
/* Issue a 6 argument syscall, the nr [%r1] being the syscall
number. */
lr %r1,%r3
lr %r2,%r4
lr %r3,%r5
lr %r4,%r6
lm %r5,%r7,96(%r15)
svc 0
.globl __syscall_cancel_arch_end
__syscall_cancel_arch_end:
lm %r6,%r7,24(%r15)
cfi_remember_state
cfi_restore (%r7)
cfi_restore (%r6)
br %r14
1:
cfi_restore_state
jg __syscall_do_cancel
END (__syscall_cancel_arch)


@@ -0,0 +1,62 @@
/* Cancellable syscall wrapper. Linux/s390x version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <descr-const.h>
/* long int __syscall_cancel_arch (int *cancelhandling,
__syscall_arg_t nr,
__syscall_arg_t arg1,
__syscall_arg_t arg2,
__syscall_arg_t arg3,
__syscall_arg_t arg4,
__syscall_arg_t arg5,
__syscall_arg_t arg6) */
ENTRY (__syscall_cancel_arch)
stmg %r6,%r7,48(%r15)
cfi_offset (%r6, -112)
cfi_offset (%r7, -104)
.globl __syscall_cancel_arch_start
__syscall_cancel_arch_start:
/* if (*cancelhandling & CANCELED_BITMASK)
__syscall_do_cancel() */
tm 3(%r2),TCB_CANCELED_BITMASK
jne 1f
/* Issue a 6 argument syscall, the nr [%r1] being the syscall
number. */
lgr %r1,%r3
lgr %r2,%r4
lgr %r3,%r5
lgr %r4,%r6
lmg %r5,%r7,160(%r15)
svc 0
.globl __syscall_cancel_arch_end
__syscall_cancel_arch_end:
lmg %r6,%r7,48(%r15)
cfi_remember_state
cfi_restore (%r7)
cfi_restore (%r6)
br %r14
1:
cfi_restore_state
jg __syscall_do_cancel
END (__syscall_cancel_arch)


@@ -0,0 +1,126 @@
/* Cancellable syscall wrapper. Linux/sh version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <descr-const.h>
/* long int __syscall_cancel_arch (int *cancelhandling,
long int nr,
long int arg1,
long int arg2,
long int arg3,
long int arg4,
long int arg5,
long int arg6) */
ENTRY (__syscall_cancel_arch)
#ifdef SHARED
mov.l r12,@-r15
cfi_def_cfa_offset (4)
cfi_offset (12, -4)
mova L(GT),r0
mov.l L(GT),r12
sts.l pr,@-r15
cfi_def_cfa_offset (8)
cfi_offset (17, -8)
add r0,r12
#else
sts.l pr,@-r15
cfi_def_cfa_offset (4)
cfi_offset (17, -4)
#endif
.globl __syscall_cancel_arch_start
__syscall_cancel_arch_start:
/* if (*cancelhandling & CANCELED_BITMASK)
__syscall_do_cancel() */
mov.l @r4,r0
tst #TCB_CANCELED_BITMASK,r0
bf/s 1f
/* Issue a 6 argument syscall. */
mov r5,r3
mov r6,r4
mov r7,r5
#ifdef SHARED
mov.l @(8,r15),r6
mov.l @(12,r15),r7
mov.l @(16,r15),r0
mov.l @(20,r15),r1
#else
mov.l @(4,r15),r6
mov.l @(8,r15),r7
mov.l @(12,r15),r0
mov.l @(16,r15),r1
#endif
trapa #0x16
.globl __syscall_cancel_arch_end
__syscall_cancel_arch_end:
/* The additional "or r0,r0" instructions are a workaround for a
   hardware issue:
   http://documentation.renesas.com/eng/products/mpumcu/tu/tnsh7456ae.pdf
 */
or r0,r0
or r0,r0
or r0,r0
or r0,r0
or r0,r0
lds.l @r15+,pr
cfi_remember_state
cfi_restore (17)
#ifdef SHARED
cfi_def_cfa_offset (4)
rts
mov.l @r15+,r12
cfi_def_cfa_offset (0)
cfi_restore (12)
.align 1
1:
cfi_restore_state
mov.l L(SC),r1
bsrf r1
L(M):
nop
.align 2
L(GT):
.long _GLOBAL_OFFSET_TABLE_
L(SC):
.long __syscall_do_cancel-(L(M)+2)
#else
cfi_def_cfa_offset (0)
rts
nop
.align 1
1:
cfi_restore_state
mov.l 2f,r1
jsr @r1
nop
.align 2
2:
.long __syscall_do_cancel
#endif
END (__syscall_cancel_arch)


@@ -88,14 +88,33 @@
       sc_ret; \
     })
 
+#define __SOCKETCALL_CANCEL1(__name, __a1) \
+  SYSCALL_CANCEL (socketcall, __name, \
+		  ((long int [1]) { (long int) __a1 }))
+#define __SOCKETCALL_CANCEL2(__name, __a1, __a2) \
+  SYSCALL_CANCEL (socketcall, __name, \
+		  ((long int [2]) { (long int) __a1, (long int) __a2 }))
+#define __SOCKETCALL_CANCEL3(__name, __a1, __a2, __a3) \
+  SYSCALL_CANCEL (socketcall, __name, \
+		  ((long int [3]) { (long int) __a1, (long int) __a2, \
+				    (long int) __a3 }))
+#define __SOCKETCALL_CANCEL4(__name, __a1, __a2, __a3, __a4) \
+  SYSCALL_CANCEL (socketcall, __name, \
+		  ((long int [4]) { (long int) __a1, (long int) __a2, \
+				    (long int) __a3, (long int) __a4 }))
+#define __SOCKETCALL_CANCEL5(__name, __a1, __a2, __a3, __a4, __a5) \
+  SYSCALL_CANCEL (socketcall, __name, \
+		  ((long int [5]) { (long int) __a1, (long int) __a2, \
+				    (long int) __a3, (long int) __a4, \
+				    (long int) __a5 }))
+#define __SOCKETCALL_CANCEL6(__name, __a1, __a2, __a3, __a4, __a5, __a6) \
+  SYSCALL_CANCEL (socketcall, __name, \
+		  ((long int [6]) { (long int) __a1, (long int) __a2, \
+				    (long int) __a3, (long int) __a4, \
+				    (long int) __a5, (long int) __a6 }))
+
-#define SOCKETCALL_CANCEL(name, args...) \
-  ({ \
-    int oldtype = LIBC_CANCEL_ASYNC (); \
-    long int sc_ret = __SOCKETCALL (SOCKOP_##name, args); \
-    LIBC_CANCEL_RESET (oldtype); \
-    sc_ret; \
-  })
+#define __SOCKETCALL_CANCEL(...) __SOCKETCALL_DISP (__SOCKETCALL_CANCEL,\
+						    __VA_ARGS__)
+
+#define SOCKETCALL_CANCEL(name, args...) \
+  __SOCKETCALL_CANCEL (SOCKOP_##name, args)
 
 #endif /* sys/socketcall.h */


@@ -0,0 +1,71 @@
/* Cancellable syscall wrapper. Linux/sparc32 version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <descr-const.h>
/* long int __syscall_cancel_arch (int *cancelhandling,
long int nr,
long int arg1,
long int arg2,
long int arg3,
long int arg4,
long int arg5,
long int arg6) */
ENTRY (__syscall_cancel_arch)
save %sp, -96, %sp
cfi_window_save
cfi_register (%o7, %i7)
cfi_def_cfa_register (%fp)
.globl __syscall_cancel_arch_start
__syscall_cancel_arch_start:
/* if (*cancelhandling & CANCELED_BITMASK)
__syscall_do_cancel() */
ld [%i0], %g2
andcc %g2, TCB_CANCELED_BITMASK, %g0
bne,pn %icc, 2f
/* Issue a 6 argument syscall. */
mov %i1, %g1
mov %i2, %o0
mov %i3, %o1
mov %i4, %o2
mov %i5, %o3
ld [%fp+92], %o4
ld [%fp+96], %o5
ta 0x10
.globl __syscall_cancel_arch_end
__syscall_cancel_arch_end:
bcc 1f
nop
sub %g0, %o0, %o0
1:
mov %o0, %i0
return %i7+8
nop
2:
call __syscall_do_cancel, 0
nop
nop
END (__syscall_cancel_arch)


@@ -0,0 +1,74 @@
/* Cancellable syscall wrapper. Linux/sparc64 version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <descr-const.h>
.register %g2, #scratch
/* long int __syscall_cancel_arch (int *cancelhandling,
long int nr,
long int arg1,
long int arg2,
long int arg3,
long int arg4,
long int arg5,
long int arg6) */
ENTRY (__syscall_cancel_arch)
save %sp, -176, %sp
cfi_window_save
cfi_register (%o7, %i7)
cfi_def_cfa_register (%fp)
.globl __syscall_cancel_arch_start
__syscall_cancel_arch_start:
/* if (*cancelhandling & CANCELED_BITMASK)
__syscall_do_cancel() */
lduw [%i0], %g2
andcc %g2, TCB_CANCELED_BITMASK, %g0
bne,pn %xcc, 2f
/* Issue a 6 argument syscall. */
mov %i1, %g1
mov %i2, %o0
mov %i3, %o1
mov %i4, %o2
mov %i5, %o3
ldx [%fp + STACK_BIAS + 176], %o4
ldx [%fp + STACK_BIAS + 184], %o5
ta 0x6d
.global __syscall_cancel_arch_end
__syscall_cancel_arch_end:
bcc,pt %xcc, 1f
nop
sub %g0, %o0, %o0
1:
mov %o0, %i0
return %i7+8
nop
2:
call __syscall_do_cancel, 0
nop
nop
END (__syscall_cancel_arch)


@@ -0,0 +1,73 @@
/* Pthread cancellation syscall bridge. Default Linux version.
Copyright (C) 2023 Free Software Foundation, Inc.
This file is part of the GNU C Library.
The GNU C Library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
The GNU C Library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with the GNU C Library; if not, see
<http://www.gnu.org/licenses/>. */
#include <sysdep.h>
#include <pthreadP.h>
#warning "This implementation should be used just as reference or for bootstrapping"
/* This is the generic version of the cancellable syscall code, which
   adds the label guards (__syscall_cancel_arch_{start,end}) used by the
   SIGCANCEL handler to check whether the cancelled syscall has side
   effects that need to be returned to the caller.

   This implementation should be used as a reference to document the
   implementation constraints:

     1. __syscall_cancel_arch_start should point just before the test
        of whether the thread has already been cancelled;
     2. __syscall_cancel_arch_end should point to the instruction
        immediately after the syscall one;
     3. It should return the syscall value, or a negative result if it
        has failed, similar to INTERNAL_SYSCALL_CALL.

   The __syscall_cancel_arch_end marker is required because the kernel
   signals an interrupted syscall with side effects by setting the signal
   frame program counter (in the ucontext_t third argument of an
   SA_SIGINFO signal handler) right after the syscall instruction.

   For some architectures, the INTERNAL_SYSCALL_NCS macro uses extra
   instructions to get the error condition from the kernel (such as
   powerpc and sparc, which check the condition register), or uses an
   out-of-line helper (ARM thumb), or uses a kernel helper gate (i686 or
   ia64).  In those cases the architecture should either adjust the
   macro or provide a custom __syscall_cancel_arch implementation.  */
long int
__syscall_cancel_arch (volatile int *ch, __syscall_arg_t nr,
__syscall_arg_t a1, __syscall_arg_t a2,
__syscall_arg_t a3, __syscall_arg_t a4,
__syscall_arg_t a5, __syscall_arg_t a6
__SYSCALL_CANCEL7_ARG_DEF)
{
#define ADD_LABEL(__label) \
asm volatile ( \
".global " __label "\t\n" \
__label ":\n");
ADD_LABEL ("__syscall_cancel_arch_start");
if (__glibc_unlikely (*ch & CANCELED_BITMASK))
__syscall_do_cancel();
long int result = INTERNAL_SYSCALL_NCS_CALL (nr, a1, a2, a3, a4, a5, a6
__SYSCALL_CANCEL7_ARG7);
ADD_LABEL ("__syscall_cancel_arch_end");
if (__glibc_unlikely (INTERNAL_SYSCALL_ERROR_P (result)))
return -INTERNAL_SYSCALL_ERRNO (result);
return result;
}


@@ -21,17 +21,5 @@
 #define _SYSDEP_CANCEL_H
 
 #include <sysdep.h>
-#include <tls.h>
-#include <errno.h>
-
-/* Set cancellation mode to asynchronous.  */
-extern int __pthread_enable_asynccancel (void);
-libc_hidden_proto (__pthread_enable_asynccancel)
-#define LIBC_CANCEL_ASYNC() __pthread_enable_asynccancel ()
-
-/* Reset to previous cancellation mode.  */
-extern void __pthread_disable_asynccancel (int oldtype);
-libc_hidden_proto (__pthread_disable_asynccancel)
-#define LIBC_CANCEL_RESET(oldtype) __pthread_disable_asynccancel (oldtype)
 
 #endif


@@ -0,0 +1,57 @@
/* Cancellable syscall wrapper.  Linux/x86_64 version.
   Copyright (C) 2023 Free Software Foundation, Inc.
   This file is part of the GNU C Library.

   The GNU C Library is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public
   License as published by the Free Software Foundation; either
   version 2.1 of the License, or (at your option) any later version.

   The GNU C Library is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   Lesser General Public License for more details.

   You should have received a copy of the GNU Lesser General Public
   License along with the GNU C Library; if not, see
   <http://www.gnu.org/licenses/>.  */

#include <sysdep.h>
#include <descr-const.h>

/* long int [rax]
   __syscall_cancel_arch (volatile int *cancelhandling [%rdi],
			  __syscall_arg_t nr   [%rsi],
			  __syscall_arg_t arg1 [%rdx],
			  __syscall_arg_t arg2 [%rcx],
			  __syscall_arg_t arg3 [%r8],
			  __syscall_arg_t arg4 [%r9],
			  __syscall_arg_t arg5 [SP+8],
			  __syscall_arg_t arg6 [SP+16])  */

ENTRY (__syscall_cancel_arch)
	.globl __syscall_cancel_arch_start
__syscall_cancel_arch_start:

	/* if (*cancelhandling & CANCELED_BITMASK)
	     __syscall_do_cancel ()  */
	mov	(%rdi), %eax
	testb	$TCB_CANCELED_BITMASK, (%rdi)
	jne	__syscall_do_cancel

	/* Issue a 6 argument syscall, the nr [%rax] being the syscall
	   number.  */
	mov	%rdi, %r11
	mov	%rsi, %rax
	mov	%rdx, %rdi
	mov	%rcx, %rsi
	mov	%r8, %rdx
	mov	%r9, %r10
	mov	8(%rsp), %r8
	mov	16(%rsp), %r9
	mov	%r11, 8(%rsp)
	syscall

	.globl __syscall_cancel_arch_end
__syscall_cancel_arch_end:
	ret
END (__syscall_cancel_arch)


@ -0,0 +1,34 @@
/* Types and macros used for syscall issuing.  x86_64/x32 version.
   Copyright (C) 2023 Free Software Foundation, Inc.
   This file is part of the GNU C Library.

   The GNU C Library is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public
   License as published by the Free Software Foundation; either
   version 2.1 of the License, or (at your option) any later version.

   The GNU C Library is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   Lesser General Public License for more details.

   You should have received a copy of the GNU Lesser General Public
   License along with the GNU C Library; if not, see
   <https://www.gnu.org/licenses/>.  */

#ifndef _SYSCALL_TYPES_H
#define _SYSCALL_TYPES_H

#include <libc-diag.h>

typedef long long int __syscall_arg_t;

/* Syscall arguments for x32 follow the x86_64 ABI; however, pointers
   are 32 bits and should be zero extended.  */
#define __SSC(__x)				\
  ({						\
    TYPEFY (__x, __tmp) = ARGIFY (__x);		\
    (__syscall_arg_t) __tmp;			\
  })

#endif


@@ -13,6 +13,3 @@ MULTIPLE_THREADS_OFFSET offsetof (tcbhead_t, multiple_threads)
 POINTER_GUARD offsetof (tcbhead_t, pointer_guard)
 FEATURE_1_OFFSET offsetof (tcbhead_t, feature_1)
 SSP_BASE_OFFSET offsetof (tcbhead_t, ssp_base)
+
+-- Not strictly offsets, but these values are also used in the TCB.
+TCB_CANCELED_BITMASK CANCELED_BITMASK