Commit Graph

171 Commits

Author SHA1 Message Date
Adhemerval Zanella
290db09546 nptl: Handle spurious EINTR when thread cancellation is disabled (BZ#29029)
Some Linux interfaces never restart after being interrupted by a signal
handler, regardless of the use of SA_RESTART [1].  It means that for
pthread cancellation, if the target thread disables cancellation with
pthread_setcancelstate and calls such interfaces (like poll or select),
it should not see spurious EINTR failures due to the internal SIGCANCEL.

However, recent changes made pthread_cancel always send the internal
signal, regardless of the target thread's cancellation status or type.
To fix it, the previous semantics are restored, where the cancel signal
is only sent if the target thread has cancellation enabled in
asynchronous mode.
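
As an illustration of the restored guarantee (a minimal user-level
sketch, not part of the patch):

--
#include <poll.h>
#include <pthread.h>
#include <stddef.h>

void *
worker (void *arg)
{
  int old;
  pthread_setcancelstate (PTHREAD_CANCEL_DISABLE, &old);

  /* With cancellation disabled, this poll must not fail with EINTR
     merely because another thread called pthread_cancel on us.  */
  struct pollfd pfd = { .fd = 0, .events = POLLIN };
  int ret = poll (&pfd, 1, -1);

  /* A pending (deferred) cancellation acts at the next cancellation
     point after re-enabling.  */
  pthread_setcancelstate (old, NULL);
  return (void *) (long) ret;
}
--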

The cancel state and cancel type are moved back to cancelhandling,
and atomic operations are used to synchronize between threads.  The
patch essentially reverts the following commits:

  8c1c0aae20 nptl: Move cancel type out of cancelhandling
  2b51742531 nptl: Move cancel state out of cancelhandling
  26cfbb7162 nptl: Remove CANCELING_BITMASK

However, I changed the atomic operations to follow the internal C11
semantics and removed the macro usage, which simplifies the resulting
code a bit (and removes another usage of the old atomic macros).

Checked on x86_64-linux-gnu, i686-linux-gnu, aarch64-linux-gnu,
and powerpc64-linux-gnu.

[1] https://man7.org/linux/man-pages/man7/signal.7.html

Reviewed-by: Florian Weimer <fweimer@redhat.com>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>

(cherry-picked from commit 404656009b)
2022-04-15 09:52:54 -03:00
Adhemerval Zanella
efb21b5fb2 elf: Fix initial-exec TLS access on audit modules (BZ #28096)
For audit modules and dependencies with initial-exec TLS, we cannot
set the initial TLS image during default loader initialization because
it would already have been set by the audit setup.  However, subsequent
thread creation needs to follow the default behaviour.

This patch fixes it by setting the l_auditing link_map field not only
for the audit modules, but also for all their dependencies.  This is
used in _dl_allocate_tls_init to avoid the static TLS initialization
at load time.

Checked on x86_64-linux-gnu, i686-linux-gnu, and aarch64-linux-gnu.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 254d3d5aef)
2022-04-08 14:18:12 -04:00
Florian Weimer
a8ac8c4725 nptl: Fix race between pthread_kill and thread exit (bug 12889)
A new thread exit lock and flag are introduced.  They are used to
detect that the thread is about to exit or has exited in
__pthread_kill_internal, and the signal is not sent in this case.
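
A toy model of the scheme (all names are hypothetical; the actual
glibc fields and locking differ):

--
#define _GNU_SOURCE
#include <errno.h>
#include <pthread.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

struct thread_desc
{
  pid_t tid;
  pthread_mutex_t exit_lock;    /* also taken by the exiting thread */
  int exit_pending;             /* set under exit_lock before the TID dies */
};

int
thread_kill_sketch (struct thread_desc *pd, int signo)
{
  int result = 0;
  pthread_mutex_lock (&pd->exit_lock);
  if (!pd->exit_pending)
    /* Only signal while the kernel TID is still guaranteed valid.  */
    result = syscall (SYS_tgkill, getpid (), pd->tid, signo) == 0 ? 0 : errno;
  pthread_mutex_unlock (&pd->exit_lock);
  return result;
}
--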

The test sysdeps/pthread/tst-pthread_cancel-select-loop.c is derived
from a downstream test originally written by Marek Polacek.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
(cherry picked from commit 526c3cf11e)
2021-09-13 13:38:51 +02:00
H.J. Lu
d8ea0d0168 Add an internal wrapper for clone, clone2 and clone3
The clone3 system call (since Linux 5.3) provides a superset of the
functionality of clone and clone2.  It also provides a number of API
improvements, including the ability to specify the size of the child's
stack area, which can be used by the kernel to compute the shadow stack size
when allocating the shadow stack.  Add:

extern int __clone_internal (struct clone_args *__cl_args,
			     int (*__func) (void *__arg), void *__arg);

to provide an abstract interface for clone, clone2 and clone3.

1. Simplify stack management for thread creation by passing both stack
base and size to create_thread.
2. Consolidate clone vs clone2 differences into a single file.
3. Call __clone3 if HAVE_CLONE3_WRAPPER is defined (see the sketch below).
If __clone3 returns -1 with ENOSYS, fall back to clone or clone2.
4. Use only __clone_internal to clone a thread.  Since the stack size
argument for create_thread is now unconditional, always pass stack size
to create_thread.
5. Enable the public clone3 wrapper in the future after it has been
added to all targets.
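
A minimal sketch of the fallback in item 3, assuming a __clone3 wrapper
taking (cl_args, size, func, arg) and the Linux UAPI struct clone_args;
the actual glibc implementation differs in detail:

--
#define _GNU_SOURCE
#include <errno.h>
#include <sched.h>          /* clone */
#include <stddef.h>
#include <stdint.h>
#include <linux/sched.h>    /* struct clone_args (Linux >= 5.3 headers) */

/* Assumed wrapper signature; not a public glibc interface.  */
extern int __clone3 (struct clone_args *cl_args, size_t size,
                     int (*func) (void *arg), void *arg);

int
clone_internal_sketch (struct clone_args *cl_args,
                       int (*func) (void *arg), void *arg)
{
#ifdef HAVE_CLONE3_WRAPPER
  int ret = __clone3 (cl_args, sizeof (*cl_args), func, arg);
  if (ret != -1 || errno != ENOSYS)
    return ret;
  /* clone3 unavailable (old kernel or seccomp sandbox, see below):
     fall back to the legacy interface.  */
#endif
  /* Legacy clone takes the initial stack top on stack-grows-down
     targets; clone2 (ia64) would take base and size instead.  */
  void *stack_top = (void *) (uintptr_t) (cl_args->stack + cl_args->stack_size);
  return clone (func, stack_top,
                (int) cl_args->flags | (int) cl_args->exit_signal, arg);
}
--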

NB: The sandbox will return ENOSYS for clone3 in both Chromium:

The following revision refers to this bug:
  218438259d

commit 218438259dd795456f0a48f67cbe5b4e520db88b
Author: Matthew Denton <mpdenton@chromium.org>
Date: Thu Jun 03 20:06:13 2021

Linux sandbox: return ENOSYS for clone3

Because clone3 uses a pointer argument rather than a flags argument, we
cannot examine the contents with seccomp, which is essential to
preventing sandboxed processes from starting other processes. So, we
won't be able to support clone3 in Chromium. This CL modifies the
BPF policy to return ENOSYS for clone3 so glibc always uses the fallback
to clone.

Bug: 1213452
Change-Id: I7c7c585a319e0264eac5b1ebee1a45be2d782303
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/2936184
Reviewed-by: Robert Sesek <rsesek@chromium.org>
Commit-Queue: Matthew Denton <mpdenton@chromium.org>
Cr-Commit-Position: refs/heads/master@{#888980}

[modify] https://crrev.com/218438259dd795456f0a48f67cbe5b4e520db88b/sandbox/linux/seccomp-bpf-helpers/baseline_policy.cc

and Firefox:

https://hg.mozilla.org/integration/autoland/rev/ecb4011a0c76

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2021-07-14 06:33:58 -07:00
Adhemerval Zanella
8c1c0aae20 nptl: Move cancel type out of cancelhandling
Now that the thread cancellation type is not accessed concurrently
anymore, it is possible to move it out of cancelhandling.

By removing the cancel state from the internal thread cancel handling
state, there is no need to check whether the cancelled bit was set in
the CAS operation.

This allows simplifying the cancellation wrappers, and
CANCEL_CANCELED_AND_ASYNCHRONOUS is removed.

Checked on x86_64-linux-gnu and aarch64-linux-gnu.
2021-06-09 15:16:45 -03:00
Adhemerval Zanella
2b51742531 nptl: Move cancel state out of cancelhandling
Now that the thread cancellation state is not accessed concurrently
anymore, it is possible to move it out of 'cancelhandling'.

The code is also simplified: CANCELLATION_P is replaced with an
internal pthread_testcancel call, and CANCELSTATE_BIT{MASK} is
removed.

With this behavior, pthread_setcancelstate does not need to act on
cancellation if the cancel type is asynchronous (it is already handled
either by pthread_setcanceltype or by the signal handler).

Checked on x86_64-linux-gnu and aarch64-linux-gnu.
2021-06-09 15:16:45 -03:00
Adhemerval Zanella
02189e8fb0 nptl: Deallocate the thread stack on setup failure (BZ #19511)
To set up either the thread scheduling parameters or affinity,
pthread_create enforces synchronization on the created thread, making
it wait until its parent either releases PD ownership or sends a
cancellation signal if a failure occurs.

However, cancelling the thread does not deallocate the newly created
stack, since cancellation expects a pthread_join to deallocate any
allocated thread resources (thread stack or TLS).

This patch changes how the thread resources are deallocated in case of
failure to be synchronous: the creating thread signals the created
thread to exit early so it can be joined.  The creating thread is
responsible for the resource cleanup before returning to the
caller.

To signal the creating thread that a failure has occurred, an unused
'struct pthread' member, parent_cancelhandling_unsed, now indicates
whether the setup has failed so the creating thread can exit properly.

This strategy also simplifies things by not using thread cancellation
and thus not loading libgcc_s in the signal handler (which is
avoided in thread cancellation since 'pthread_cancel' is the one
responsible for dlopening libgcc_s).  Another advantage is that since
the early exit is moved to the first step of thread creation, the
signal mask is not yet set and thus it cannot interfere with the
change-ID setxid handler.

Checked on x86_64-linux-gnu and aarch64-linux-gnu.
2021-06-09 15:16:45 -03:00
Florian Weimer
d03511f48f nptl: Eliminate the __static_tls_size, __static_tls_align_m1 variables
Use the __nptl_tls_static_size_for_stack inline function instead,
and the GLRO (dl_tls_static_align) value directly.

The computation of GLRO (dl_tls_static_align) in
_dl_determine_tlsoffset ensures that the alignment is at least
TLS_TCB_ALIGN, which is at least STACK_ALIGN (see allocate_stack).
Therefore, the additional rounding-up step is removed.

Also move the initialization of the default stack size from
__pthread_initialize_minimal_internal to __pthread_early_init.
This introduces an extra system call during single-threaded startup,
but this simplifies the initialization sequence.  No locking is
needed around the writes to __default_pthread_attr because the
process is single-threaded at this point.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2021-05-21 22:35:00 +02:00
Florian Weimer
c79a31fb36 nptl: Move stack cache management, __libpthread_freeres into libc
This replaces the FREE_P macro with the __nptl_stack_in_use inline
function.  stack_list_del is renamed to __nptl_stack_list_del,
stack_list_add to __nptl_stack_list_add, __deallocate_stack to
__nptl_deallocate_stack, free_stacks to __nptl_free_stacks.

It is convenient to move __libpthread_freeres into libc at the
same time.  This removes the temporary __default_pthread_attr_freeres
export and restores full freeres coverage for __default_pthread_attr.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2021-05-11 11:22:33 +02:00
Florian Weimer
732139dabe Linux: Move __reclaim_stacks into the fork implementation in libc
As a result, __libc_pthread_init is no longer needed.

Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2021-05-10 10:31:42 +02:00
Florian Weimer
652c7c6fe7 nptl: Simplify resetting the in-flight stack in __reclaim_stacks
stack_list_del overwrites the in-flight stack variable.

Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2021-05-10 10:31:41 +02:00
Florian Weimer
2dd87703d4 nptl: Move changing of stack permissions into ld.so
All the stack lists are now in _rtld_global, so it is possible
to change stack permissions directly from there, instead of
calling into libpthread to do the change.

Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2021-05-10 10:31:41 +02:00
Florian Weimer
ee07b3a722 nptl: Simplify the change_stack_perm calling convention
Only ia64 needs the page mask, and it is straightforward
to compute the value within the function itself.

Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2021-05-10 10:31:41 +02:00
Florian Weimer
9d124d81f0 nptl: Move more stack management variables into _rtld_global
Permissions of the cached stacks may have to be updated if an object
is loaded that requires executable stacks, so the dynamic loader
needs to know about these cached stacks.

The move of in_flight_stack and stack_cache_actsize is a requirement for
merging __reclaim_stacks into the fork implementation in libc.

Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2021-05-10 10:31:41 +02:00
Florian Weimer
0df5d8d404 nptl: Eliminate __pthread_multiple_threads
It is no longer needed after the SINGLE_THREADED_P consolidation.

Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2021-05-10 10:31:41 +02:00
Florian Weimer
321789f61a nptl: Export __libc_multiple_threads from libc as an internal symbol
This allows the elimination of the __libc_multiple_threads_ptr
variable in libpthread and its initialization procedure.

Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2021-05-10 10:31:41 +02:00
Florian Weimer
7cbf1c8416 elf, nptl: Initialize static TLS directly in ld.so
The stack list is available in ld.so since commit
1daccf403b ("nptl: Move stack list
variables into _rtld_global"), so it's possible to walk the stack
list directly in ld.so and perform the initialization there.

This eliminates an unprotected function pointer from _rtld_global
and reduces the libpthread initialization code.
2021-05-05 06:20:31 +02:00
Florian Weimer
486010a3c8 nptl: Move setxid broadcast implementation into libc
The signal handler is exported as __nptl_setxid_sighandler, so
that the libpthread initialization code can install it.  This
is sufficient for now because it is guaranteed to happen before
the first pthread_create call.
2021-04-21 19:49:51 +02:00
H.J. Lu
3e2f285c5f nptl: Remove MULTI_PAGE_ALIASING [BZ #23554]
MULTI_PAGE_ALIASING was introduced to mitigate an aliasing issue on
Pentium 4.  It is no longer needed for processors after Pentium 4.
2021-03-19 15:04:17 -07:00
Paul Eggert
2b778ceb40 Update copyright dates with scripts/update-copyrights
I used these shell commands:

../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")

and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 6694 files FOO.
I then removed trailing white space from benchtests/bench-pthread-locks.c
and iconvdata/tst-iconv-big5-hkscs-to-2ucs4.c, to work around this
diagnostic from Savannah:
remote: *** pre-commit check failed ...
remote: *** error: lines with trailing whitespace found
remote: error: hook declined to update refs/heads/master
2021-01-02 12:17:34 -08:00
Florian Weimer
1daccf403b nptl: Move stack list variables into _rtld_global
Now __thread_gscope_wait (the function behind THREAD_GSCOPE_WAIT,
formerly __wait_lookup_done) can be implemented directly in ld.so,
eliminating the unprotected GL (dl_wait_lookup_done) function
pointer.

Reviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>
2020-11-16 19:33:30 +01:00
Szabolcs Nagy
238032ead6 aarch64: enforce >=64K guard size [BZ #26691]
There are several compiler implementations that allow large stack
allocations to jump over the guard page at the end of the stack and
corrupt memory beyond that. See CVE-2017-1000364.

Compilers can emit code to probe the stack such that the guard page
cannot be skipped, but on aarch64 the probe interval is 64K by default
instead of the minimum supported page size (4K).

This patch enforces at least a 64K guard on aarch64 unless the guard
is disabled by setting its size to 0.  For backward compatibility
reasons the increased guard is not reported, so it is only observable
by exhausting the address space or parsing /proc/self/maps on Linux.

On other targets the patch has no effect. If the stack probe interval
is larger than a page size on a target then ARCH_MIN_GUARD_SIZE can
be defined to get a large enough stack guard on libc-allocated stacks.
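
A rough sketch of the sizing rule described above (ARCH_MIN_GUARD_SIZE
as named in this commit; the 64K value is the aarch64 one):

--
#include <stddef.h>

#ifndef ARCH_MIN_GUARD_SIZE
# define ARCH_MIN_GUARD_SIZE (64 * 1024)
#endif

/* A guard size of 0 means the guard is disabled and stays 0; otherwise
   the request is rounded up to the architecture minimum.  The increased
   guard is intentionally not reported back to the user.  */
size_t
effective_guard_size (size_t requested)
{
  if (requested == 0)
    return 0;
  return requested < ARCH_MIN_GUARD_SIZE ? ARCH_MIN_GUARD_SIZE : requested;
}
--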

The patch does not affect threads with user allocated stacks.

Fixes bug 26691.
2020-10-02 09:57:44 +01:00
Adhemerval Zanella
9deec7c8ba string: Remove old TLS usage on strsignal
The per-thread state is refactored to use two strategies:

  1. The default one uses a TLS structure, which will be placed in the
     static TLS space (using the __thread keyword); a sketch follows
     this list.

  2. Linux allocates it via struct pthread and accesses it through
     THREAD_* macros.
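
A rough sketch of strategy 1 (illustration only, not the actual glibc
code):

--
#include <stdio.h>

/* The buffer lives in static TLS via __thread, so each thread gets its
   own copy and the returned pointer stays valid per thread without
   locks.  The Linux strategy instead keeps the buffer in struct pthread
   and reaches it through THREAD_* macros.  */
static __thread char strsignal_buf[64];

const char *
unknown_signal_string (int signum)
{
  snprintf (strsignal_buf, sizeof strsignal_buf, "Unknown signal %d", signum);
  return strsignal_buf;
}
--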

The default strategy has the disadvantage of increasing libc.so static
TLS consumption and thus decreasing the possible surplus used in
some scenarios (which might be mitigated by the BZ#25051 fix).

It is used only on Hurd, where accessing the thread storage in the
single-thread case is not straightforward (afaiu, Hurd developers could
correct me here).

The fallback static allocation used for allocation failure is also
removed: defining its size is problematic without synchronizing with
translated messages (to avoid partial translation) and the resulting
usage is not thread-safe.

Checked on x86-64-linux-gnu, i686-linux-gnu, powerpc64le-linux-gnu,
and s390x-linux-gnu.

Tested-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2020-07-07 14:10:58 -03:00
Florian Weimer
c2322a561f nptl: Change type of __default_pthread_attr
union pthread_attr_transparent always has the correct size, even if
pthread_attr_t has padding that is not present in struct pthread_attr.

This should not result in an observable behavioral change.  The
existing code appears to have been correct, but it was brittle because
it was not clear which functions were allowed to write to an entire
pthread_attr_t argument (e.g., by copying it).

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2020-06-02 10:32:37 +02:00
Adhemerval Zanella
bc2eb9321e linux: Remove INTERNAL_SYSCALL_DECL
With all Linux ABIs using the expected Linux kABI to indicate
syscall errors, INTERNAL_SYSCALL_DECL is an empty declaration
on all ports.

This patch removes the 'err' argument of the INTERNAL_SYSCALL* macros
and removes the INTERNAL_SYSCALL_DECL usage.

Checked with a build against all affected ABIs.
2020-02-14 21:12:45 -03:00
Joseph Myers
d614a75396 Update copyright dates with scripts/update-copyrights. 2020-01-01 00:14:33 +00:00
Florian Weimer
e4b3707cea nptl: SIGCANCEL, SIGTIMER, SIGSETXID are always defined
All nptl targets have these signal definitions nowadays.  This
change also replaces the nptl-generic version of pthread_sigmask
with the Linux version.

Tested on x86_64-linux-gnu and i686-linux-gnu.  Built with
build-many-glibcs.py.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2019-10-18 14:29:04 +02:00
Paul Eggert
5a82c74822 Prefer https to http for gnu.org and fsf.org URLs
Also, change sources.redhat.com to sourceware.org.
This patch was automatically generated by running the following shell
script, which uses GNU sed, and which avoids modifying files imported
from upstream:

sed -ri '
  s,(http|ftp)(://(.*\.)?(gnu|fsf|sourceware)\.org($|[^.]|\.[^a-z])),https\2,g
  s,(http|ftp)(://(.*\.)?)sources\.redhat\.com($|[^.]|\.[^a-z]),https\2sourceware.org\4,g
' \
  $(find $(git ls-files) -prune -type f \
      ! -name '*.po' \
      ! -name 'ChangeLog*' \
      ! -path COPYING ! -path COPYING.LIB \
      ! -path manual/fdl-1.3.texi ! -path manual/lgpl-2.1.texi \
      ! -path manual/texinfo.tex ! -path scripts/config.guess \
      ! -path scripts/config.sub ! -path scripts/install-sh \
      ! -path scripts/mkinstalldirs ! -path scripts/move-if-change \
      ! -path INSTALL ! -path  locale/programs/charmap-kw.h \
      ! -path po/libc.pot ! -path sysdeps/gnu/errlist.c \
      ! '(' -name configure \
            -execdir test -f configure.ac -o -f configure.in ';' ')' \
      ! '(' -name preconfigure \
            -execdir test -f preconfigure.ac ';' ')' \
      -print)

and then by running 'make dist-prepare' to regenerate files built
from the altered files, and then executing the following to cleanup:

  chmod a+x sysdeps/unix/sysv/linux/riscv/configure
  # Omit irrelevant whitespace and comment-only changes,
  # perhaps from a slightly-different Autoconf version.
  git checkout -f \
    sysdeps/csky/configure \
    sysdeps/hppa/configure \
    sysdeps/riscv/configure \
    sysdeps/unix/sysv/linux/csky/configure
  # Omit changes that caused a pre-commit check to fail like this:
  # remote: *** error: sysdeps/powerpc/powerpc64/ppc-mcount.S: trailing lines
  git checkout -f \
    sysdeps/powerpc/powerpc64/ppc-mcount.S \
    sysdeps/unix/sysv/linux/s390/s390-64/syscall.S
  # Omit change that caused a pre-commit check to fail like this:
  # remote: *** error: sysdeps/sparc/sparc64/multiarch/memcpy-ultra3.S: last line does not end in newline
  git checkout -f sysdeps/sparc/sparc64/multiarch/memcpy-ultra3.S
2019-09-07 02:43:31 -07:00
Adhemerval Zanella
38cc11daa4 nptl: Remove pthread_clock_gettime pthread_clock_settime
This patch removes CLOCK_THREAD_CPUTIME_ID and CLOCK_PROCESS_CPUTIME_ID support
from the generic clock_gettime and clock_settime implementation.  For Linux, the
kernel already provides support through the syscall, and Hurd HTL lacks
__pthread_clock_gettime and __pthread_clock_settime internal implementations.

As described in the clock_gettime man-page [1] under 'Historical note for SMP
systems', implementing CLOCK_{THREAD,PROCESS}_CPUTIME_ID with timer registers
is error-prone and susceptible to timing and accuracy issues that the libc
cannot deal with without kernel support.

This allows removing unused code which, however, still incurs some runtime
overhead in thread creation (the struct pthread cpuclock_offset
initialization).

If Hurd eventually wants to support them, it should either implement them as
a kernel facility (or something related, due to its architecture) or in a
system-specific implementation.

Checked on aarch64-linux-gnu, x86_64-linux-gnu, and i686-linux-gnu. I also
checked on an i686-gnu build.

	* nptl/Makefile (libpthread-routines): Remove pthread_clock_gettime and
	pthread_clock_settime.
	* nptl/pthreadP.h (__find_thread_by_id): Remove prototype.
	* elf/dl-support.c [!HP_TIMING_NOAVAIL] (_dl_cpuclock_offset): Remove.
	(_dl_non_dynamic_init): Remove _dl_cpuclock_offset setting.
	* elf/rtld.c (_dl_start_final): Likewise.
	* nptl/allocatestack.c (__find_thread_by_id): Remove function.
	* sysdeps/generic/ldsodefs.h [!HP_TIMING_NOAVAIL] (_dl_cpuclock_offset):
	Remove.
	* sysdeps/mach/hurd/dl-sysdep.c [!HP_TIMING_NOAVAIL]
	(_dl_cpuclock_offset): Remove.
	* nptl/descr.h (struct pthread): Rename cpuclock_offset to
	cpuclock_offset_ununsed.
	* nptl/nptl-init.c (__pthread_initialize_minimal_internal): Remove
	cpuclock_offset set.
	* nptl/pthread_create.c (START_THREAD_DEFN): Likewise.
	* sysdeps/nptl/fork.c (__libc_fork): Likewise.
	* nptl/pthread_clock_gettime.c: Remove file.
	* nptl/pthread_clock_settime.c: Likewise.
	* sysdeps/unix/clock_gettime.c (hp_timing_gettime): Remove function.
	[HP_TIMING_AVAIL] (realtime_gettime): Remove CLOCK_THREAD_CPUTIME_ID
	and CLOCK_PROCESS_CPUTIME_ID support.
	* sysdeps/unix/clock_settime.c (hp_timing_gettime): Likewise.
	[HP_TIMING_AVAIL] (realtime_gettime): Likewise.
	* sysdeps/posix/clock_getres.c (hp_timing_getres): Likewise.
	[HP_TIMING_AVAIL] (__clock_getres): Likewise.
	* sysdeps/unix/clock_nanosleep.c (CPUCLOCK_P, INVALID_CLOCK_P):
	Likewise.
	(__clock_nanosleep): Remove CPUCLOCK_P and INVALID_CLOCK_P usage.

[1] http://man7.org/linux/man-pages/man2/clock_gettime.2.html
2019-03-22 15:37:43 -03:00
Joseph Myers
c2d8f0b704 Avoid "inline" after return type in function definitions.
One group of warnings seen with -Wextra is warnings for static or
inline not at the start of a declaration (-Wold-style-declaration).

This patch fixes various such cases for inline, ensuring it comes at
the start of the declaration (after any static).  A common case of the
fix is "static inline <type> __always_inline"; the definition of
__always_inline starts with __inline, so the natural change is to
"static __always_inline <type>".  Other cases of the warning may be
harder to fix (one pattern is a function definition that gets
rewritten to be static by an including file, "#define funcname static
wrapped_funcname" or similar), but it seems worth fixing these cases
with inline anyway.

Tested for x86_64.

	* elf/dl-load.h (_dl_postprocess_loadcmd): Use __always_inline
	before return type, without separate inline.
	* elf/dl-tunables.c (maybe_enable_malloc_check): Likewise.
	* elf/dl-tunables.h (tunable_is_name): Likewise.
	* malloc/malloc.c (do_set_trim_threshold): Likewise.
	(do_set_top_pad): Likewise.
	(do_set_mmap_threshold): Likewise.
	(do_set_mmaps_max): Likewise.
	(do_set_mallopt_check): Likewise.
	(do_set_perturb_byte): Likewise.
	(do_set_arena_test): Likewise.
	(do_set_arena_max): Likewise.
	(do_set_tcache_max): Likewise.
	(do_set_tcache_count): Likewise.
	(do_set_tcache_unsorted_limit): Likewise.
	* nis/nis_subr.c (count_dots): Likewise.
	* nptl/allocatestack.c (advise_stack_range): Likewise.
	* sysdeps/ieee754/dbl-64/s_sin.c (do_cos): Likewise.
	(do_sin): Likewise.
	(reduce_sincos): Likewise.
	(do_sincos): Likewise.
	* sysdeps/unix/sysv/linux/x86/elision-conf.c
	(do_set_elision_enable): Likewise.
	(TUNABLE_CALLBACK_FNDECL): Likewise.
2019-02-06 17:16:43 +00:00
Stefan Liebler
bc79db3fd4 Fix alignment of TLS variables for tls variant TLS_TCB_AT_TP [BZ #23403]
On architectures with TLS variant TLS_TCB_AT_TP, the alignment of TLS
variables is wrong when they are accessed from within a thread.
For the main thread the static TLS data is properly aligned.
For other threads the alignment depends on the alignment of the thread
pointer, as the static TLS data is located relative to this pointer.

This patch adds this alignment for TLS_TCB_AT_TP variants in the same way
as it is already done for TLS_DTV_AT_TP. The thread pointer is also already
properly aligned if the user provides their own stack for the new thread.
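
A minimal sketch of the added rounding for TLS_TCB_AT_TP (names and
shapes assumed; illustration only):

--
#include <stdint.h>

/* For TLS_TCB_AT_TP the thread descriptor/TCB sits at the top of the
   stack and static TLS is addressed relative to the thread pointer, so
   the descriptor address is rounded down to the static TLS alignment,
   mirroring what is already done for TLS_DTV_AT_TP.  */
uintptr_t
descriptor_address (uintptr_t stack_top, uintptr_t tcb_size,
                    uintptr_t tls_static_align)
{
  return (stack_top - tcb_size) & ~(tls_static_align - 1);
}
--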

This patch extends the testcase nptl/tst-tls1.c in order to check the
alignment of the tls variables and it adds a pthread_create invocation
with a user provided stack.
The test itself is migrated from test-skeleton.c to test-driver.c
and the missing support functions xpthread_attr_setstack and xposix_memalign
are added.

ChangeLog:

	[BZ #23403]
	* nptl/allocatestack.c (allocate_stack): Align pointer pd for
	TLS_TCB_AT_TP tls variant.
	* nptl/tst-tls1.c: Migrate to support/test-driver.c.
	Add alignment checks.
	* support/Makefile (libsupport-routines): Add xposix_memalign and
	xpthread_setstack.
	* support/support.h: Add xposix_memalign.
	* support/xthread.h: Add xpthread_attr_setstack.
	* support/xposix_memalign.c: New File.
	* support/xpthread_attr_setstack.c: Likewise.
2019-02-06 09:06:34 +01:00
Joseph Myers
04277e02d7 Update copyright dates with scripts/update-copyrights.
* All files with FSF copyright notices: Update copyright dates
	using scripts/update-copyrights.
	* locale/programs/charmap-kw.h: Regenerated.
	* locale/programs/locfile-kw.h: Likewise.
2019-01-01 00:11:28 +00:00
Florian Weimer
046bfed9de nptl: Use __mprotect consistently for _STACK_GROWS_UP 2018-07-12 15:01:43 +02:00
Carlos O'Donell
2827ab990a libc: Extend __libc_freeres framework (Bug 23329).
The __libc_freeres framework does not extend to non-libc.so objects.
This causes problems in general for valgrind and mtrace detecting
unfreed objects in both libdl.so and libpthread.so.  This change is
a pre-requisite to properly moving the malloc hooks out of malloc
since such a move now requires precise accounting of all allocated
data before destructors are run.

This commit adds a proper hook in libc.so.6 for both libdl.so and
for libpthread.so, this ensures that shm-directory.c which uses
freeit () to free memory is called properly.  We also remove the
nptl_freeres hook and fall back to using weak-ref-and-check idiom
for a loaded libpthread.so, thus making this process similar for
all DSOs.

Lastly we follow best practice and use explicit free calls for
both libdl.so and libpthread.so instead of the generic hook process
which has undefined order.

Tested on x86_64 with no regressions.

Signed-off-by: DJ Delorie <dj@redhat.com>
Signed-off-by: Carlos O'Donell <carlos@redhat.com>
2018-06-29 22:39:06 -04:00
H.J. Lu
0068c08588 nptl: Remove __ASSUME_PRIVATE_FUTEX
Since __ASSUME_PRIVATE_FUTEX is always defined, this patch removes the
!__ASSUME_PRIVATE_FUTEX paths.

Tested with build-many-glibcs.py.

	* nptl/allocatestack.c (allocate_stack): Remove the
	!__ASSUME_PRIVATE_FUTEX paths.
	* nptl/descr.h (header): Remove the !__ASSUME_PRIVATE_FUTEX path.
	* nptl/nptl-init.c (__pthread_initialize_minimal_internal):
	Likewise.
	* sysdeps/i386/nptl/tcb-offsets.sym (PRIVATE_FUTEX): Removed.
	* sysdeps/powerpc/nptl/tcb-offsets.sym (PRIVATE_FUTEX): Likewise.
	* sysdeps/sh/nptl/tcb-offsets.sym (PRIVATE_FUTEX): Likewise.
	* sysdeps/x86_64/nptl/tcb-offsets.sym (PRIVATE_FUTEX): Likewise.
	* sysdeps/i386/nptl/tls.h (tcbhead_t): Remove the
	!__ASSUME_PRIVATE_FUTEX path.
	* sysdeps/s390/nptl/tls.h (tcbhead_t): Likewise.
	* sysdeps/sparc/nptl/tls.h (tcbhead_t): Likewise.
	* sysdeps/x86_64/nptl/tls.h (tcbhead_t): Likewise.
	* sysdeps/unix/sysv/linux/i386/lowlevellock.S: Remove the
	!__ASSUME_PRIVATE_FUTEX macros.
	* sysdeps/unix/sysv/linux/lowlevellock-futex.h: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/cancellation.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/lowlevellock.S: Likewise.
	* sysdeps/unix/sysv/linux/kernel-features.h
	(__ASSUME_PRIVATE_FUTEX): Removed.
2018-05-17 04:25:10 -07:00
Szabolcs Nagy
630f4cc3aa [BZ #22637] Fix stack guard size accounting
Previously, if the user requested an S-byte stack and a G-byte guard when
creating a thread, the total mapping was S and the actual available stack
was S - G - static_tls, which is not what the user requested.

This patch fixes the guard size accounting by pretending the user
requested an S+G stack.  This way all later logic works out, except
when reporting the user-requested stack size (pthread_getattr_np)
or when computing the minimal stack size (__pthread_get_minstack).

Normally this will increase thread stack allocations by one page.
TLS accounting is not affected; that will require a separate fix.
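
In code form, the new accounting is essentially (a trivial sketch,
names assumed):

--
#include <stddef.h>

/* Size the mapping as if the user had asked for stacksize + guardsize,
   so the usable stack is no longer reduced by the guard; only
   pthread_getattr_np and __pthread_get_minstack compensate for this.  */
size_t
mapping_size (size_t user_stacksize, size_t guardsize)
{
  return user_stacksize + guardsize;
}
--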

	[BZ #22637]
	* nptl/descr.h (stackblock, stackblock_size): Update comments.
	* nptl/allocatestack.c (allocate_stack): Add guardsize to stacksize.
	* nptl/nptl-init.c (__pthread_get_minstack): Remove guardsize from
	stacksize.
	* nptl/pthread_getattr_np.c (pthread_getattr_np): Likewise.
2018-01-08 19:02:11 +00:00
Joseph Myers
688903eb3e Update copyright dates with scripts/update-copyrights.
* All files with FSF copyright notices: Update copyright dates
	using scripts/update-copyrights.
	* locale/programs/charmap-kw.h: Regenerated.
	* locale/programs/locfile-kw.h: Likewise.
2018-01-01 00:32:25 +00:00
Adhemerval Zanella
06be6368da nptl: Define __PTHREAD_MUTEX_{NUSERS_AFTER_KIND,USE_UNION}
This patch adds two new internal defines to set the internal
pthread_mutex_t layout required by the supported ABIs:

  1. __PTHREAD_MUTEX_NUSERS_AFTER_KIND, which controls whether the
     __nusers field is defined before or after __kind.  The preferred
     value is 0 for new ports, which places __nusers before __kind.

  2. __PTHREAD_MUTEX_USE_UNION, which controls whether the internal
     __spins and __list members are placed inside a union for
     linuxthreads compatibility.  The preferred value is 0 for new
     ports, which does not use a union to define both fields.

It fixes the wrong offset value of __kind on x86_64-linux-gnu-x32.
Checked with a make check run-built-tests=no on all affected ABIs.
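
A simplified sketch of how the two macros steer the layout (the real
struct __pthread_mutex_s has more members and per-ABI details):

--
struct pthread_mutex_sketch
{
  int __lock;
  unsigned int __count;
  int __owner;
#if !__PTHREAD_MUTEX_NUSERS_AFTER_KIND
  unsigned int __nusers;        /* preferred: __nusers before __kind */
#endif
  int __kind;
#if __PTHREAD_MUTEX_NUSERS_AFTER_KIND
  unsigned int __nusers;        /* legacy ABIs: __nusers after __kind */
#endif
#if __PTHREAD_MUTEX_USE_UNION
  union                         /* linuxthreads-compatible layout */
  {
    struct { short __spins; short __elision; } __elision_data;
    void *__list;               /* stands in for __pthread_slist_t */
  };
#else
  short __spins;
  short __elision;
  void *__list;                 /* stands in for __pthread_list_t */
#endif
};
--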

	[BZ #22298]
	* nptl/allocatestack.c (allocate_stack): Check if
	__PTHREAD_MUTEX_HAVE_PREV is non-zero, instead of if
	__PTHREAD_MUTEX_HAVE_PREV is defined.
	* nptl/descr.h (pthread): Likewise.
	* nptl/nptl-init.c (__pthread_initialize_minimal_internal):
	Likewise.
	* nptl/pthread_create.c (START_THREAD_DEFN): Likewise.
	* sysdeps/nptl/fork.c (__libc_fork): Likewise.
	* sysdeps/nptl/pthread.h (PTHREAD_MUTEX_INITIALIZER): Likewise.
	* sysdeps/nptl/bits/thread-shared-types.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New
	defines.
	(__pthread_internal_list): Check __PTHREAD_MUTEX_USE_UNION instead
	of __WORDSIZE for internal layout.
	(__pthread_mutex_s): Check __PTHREAD_MUTEX_NUSERS_AFTER_KIND instead
	of __WORDSIZE for internal __nusers layout and __PTHREAD_MUTEX_USE_UNION
	instead of __WORDSIZE whether to use an union for __spins and __list
	fields.
	(__PTHREAD_MUTEX_HAVE_PREV): Define also for __PTHREAD_MUTEX_USE_UNION
	case.
	* sysdeps/aarch64/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New
	defines.
	* sysdeps/alpha/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/arm/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/hppa/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/ia64/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/m68k/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/microblaze/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/mips/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/nios2/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/powerpc/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/s390/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/sh/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/sparc/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/tile/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.
	* sysdeps/x86/nptl/bits/pthreadtypes-arch.h
	(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
	Likewise.

Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2017-11-07 09:48:41 -02:00
Peter Zelezny
e4f530da0d nptl: Preserve error in setxid thread broadcast in coredumps [BZ #22153] 2017-10-13 22:51:56 +02:00
Florian Weimer
83b09837ed nptl: Remove internal_function attribute 2017-08-31 18:52:00 +02:00
Adhemerval Zanella
01b87c656f ia64: Fix thread stack allocation permission set (BZ #21672)
This patch fixes ia64 failures on thread exit by madvising the required
area, taking into consideration its disjoint stacks
(NEED_SEPARATE_REGISTER_STACK).  Also, the snippet that sets up the
madvise call to advise the kernel that the area won't be used anymore
in the near future is relocated to allocatestack.c (for consistency,
to put all stack management functions in one place).

Checked on x86_64-linux-gnu and i686-linux-gnu for sanity (since
no code changes are expected for architectures that do not
define NEED_SEPARATE_REGISTER_STACK), and I also got a report from
Sergei Trofimovich <slyfox@gentoo.org> that it fixes ia64-linux-gnu
failures.

	[BZ #21672]
	* nptl/allocatestack.c [_STACK_GROWS_DOWN] (setup_stack_prot):
	Set to use !NEED_SEPARATE_REGISTER_STACK as well.
	(advise_stack_range): New function.
	* nptl/pthread_create.c (START_THREAD_DEFN): Move the logic that marks
	the stack as no longer required to advise_stack_range in allocatestack.c.
2017-08-29 13:29:19 -03:00
Florian Weimer
e1d2ae8d21 NPTL: Remove internal_function from stack marking functions
These are called across DSO boundaries and therefore should use
the ABI calling convention.
2017-08-13 21:11:38 +02:00
John David Anglin
075385f98a Fix guard alignment in allocate_stack when stack grows up. 2017-07-15 12:40:13 -04:00
Adhemerval Zanella
fa872e1b62 Clean pthread functions namespaces for C11 threads
This patch adds internal definitions (through {libc_}hidden_{proto,def}) and
also changes some strong aliases to weak aliases for symbols that might be
used by C11 threads implementations.

The patchset should not change libc/libpthread functionality, although object
changes are expected (since internal symbols are now used instead) and the
final exported symbols through GLIBC_PRIVATE are also expanded (to cover
libpthread usage of __mmap{64}, __munmap, __mprotect).

Checked with a build for all major ABIs (aarch64-linux-gnu, alpha-linux-gnu,
arm-linux-gnueabi, i386-linux-gnu, ia64-linux-gnu, m68k-linux-gnu,
microblaze-linux-gnu [1], mips{64}-linux-gnu, nios2-linux-gnu,
powerpc{64le}-linux-gnu, s390{x}-linux-gnu, sparc{64}-linux-gnu,
tile{pro,gx}-linux-gnu, and x86_64-linux-gnu).

	* include/sched.h (__sched_get_priority_max): Add libc hidden proto.
	(__sched_get_prioriry_min): Likewise.
	* include/sys/mman.h (__mmap): Likewise.
	(__mmap64): Likewise.
	(__munmap): Likewise.
	(__mprotect): Likewise.
	* include/termios.h (__tcsetattr): Likewise.
	* include/time.h (__nanosleep): Use hidden_proto instead of
	libc_hidden_proto.
	* posix/nanosleep.c (__nanosleep): Likewise.
	* misc/Versions (libc): Export __mmap, __munmap, __mprotect,
	__sched_get_priority_min, and __sched_get_priority_max under
	GLIBC_PRIVATE.
	* nptl/allocatestack.c (__free_stacks): Use internal definition for
	libc symbols.
	(change_stack_perm): Likewise.
	(allocate_stack): Likewise.
	* sysdeps/posix/gethostname.c: Likewise.
	* nptl/tpp.c (__init_sched_fifo_prio): Likewise.
	* sysdeps/unix/sysv/linux/i386/smp.h (is_smp_system): Likewise.
	* sysdeps/unix/sysv/linux/powerpc/ioctl.c (__ioctl): Likewise.
	* nptl/pthreadP.h (__pthread_mutex_timedlock): Add definition.
	(__pthread_key_delete): Likewise.
	(__pthread_detach): Likewise.
	(__pthread_cancel): Likewise.
	(__pthread_mutex_trylock): Likewise.
	(__pthread_mutexattr_init): Likewise.
	(__pthread_mutexattr_settype): Likewise.
	* nptl/pthread_cancel.c (pthread_cancel): Change to internal name and
	create alias for exported one.
	* nptl/pthread_join.c (pthread_join): Likewise.
	* nptl/pthread_detach.c (pthread_detach): Likewise.
	* nptl/pthread_key_delete.c (pthread_key_delete): Likewise.
	* nptl/pthread_mutex_timedlock.c (pthread_mutex_timedlock): Likewise.
	* nptl/pthread_create.c: Change static requirements for pthread
	symbols.
	* nptl/pthread_equal.c (__pthread_equal): Change strong alias to weak
	for internal definition.
	* nptl/pthread_exit.c (__pthread_exit): Likewise.
	* nptl/pthread_getspecific.c (__pthread_getspecific): Likewise.
	* nptl/pthread_key_create.c (__pthread_key_create): Likewise.
	* nptl/pthread_mutex_destroy.c (__pthread_mutex_destroy): Likewise.
	* nptl/pthread_mutex_init.c (__pthread_mutex_init): Likewise.
	* nptl/pthread_mutex_lock.c (__pthread_mutex_lock): Likewise.
	* nptl/pthread_mutex_trylock.c (__pthread_mutex_trylock): Likewise.
	* nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock): Likewise.
	* nptl/pthread_mutexattr_init.c (__pthread_mutexattr_init): Likwise.
	* nptl/pthread_mutexattr_settype.c (__pthread_mutexattr_settype):
	Likewise.
	* nptl/pthread_self.c (__pthread_self): Likewise.
	* nptl/pthread_setspecific.c (__pthread_setspecific): Likewise.
	* sysdeps/unix/sysv/linux/tcsetattr.c (tcsetattr): Likewise.
	* misc/mmap.c (__mmap): Add internal symbol definition.
	* misc/mmap.c (__mmap64): Likewise.
	* sysdeps/unix/sysv/linux/mmap.c (__mmap): Likewise.
	* sysdeps/unix/sysv/linux/mmap64.c (__mmap): Likewise.
	(__mmap64): Likewise.
	* sysdeps/unix/sysv/linux/i386/Versions (libc) [GLIBC_PRIVATE):
	Add __uname.
2017-06-23 17:38:17 -03:00
Adhemerval Zanella
0edbf12301 nptl: Invert the mmap/mprotect logic on allocated stacks (BZ#18988)
The current allocate_stack logic for creating stacks is to first mmap all
the required memory with the desired protection and then mprotect the
guard area with PROT_NONE if required.  Although it works as expected,
it pessimizes the allocation because it requires the kernel to actually
increase the commit charge (it counts against the physical/swap
memory available to the system).
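
The new allocation order, as a minimal sketch (stack-grows-down case
with the guard at the low end; not the actual allocatestack.c code):

--
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

/* Reserve the whole stack with PROT_NONE, which takes no commit charge,
   then make only the usable area above the guard accessible; the guard
   itself simply stays PROT_NONE.  */
void *
allocate_stack_sketch (size_t size, size_t guardsize)
{
  void *mem = mmap (NULL, size, PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
  if (mem == MAP_FAILED)
    return NULL;
  if (mprotect ((char *) mem + guardsize, size - guardsize,
                PROT_READ | PROT_WRITE) != 0)
    {
      munmap (mem, size);
      return NULL;
    }
  return mem;
}
--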

The only issue is actually checking this change, since the side effects
are really Linux-specific and actually accounting for them would require
a kernel-specific test to parse the system-wide information.  On the
kernel I checked, /proc/self/statm does not show any meaningful
difference for vmm and/or rss before and after thread creation.  I could
only see really meaningful information by checking the system-wide
/proc/meminfo between thread creations: MemFree, MemAvailable, and
Committed_AS show a large difference without the patch.  I think trying
to use this kind of information in a testcase is fragile.

The BZ#18988 report shows that the committed pages are easily seen with
mlockall (MCL_FUTURE) (which locks all pages that become mapped in the
process); however, a more straightforward testcase shows that
pthread_create could be faster using this patch:

--
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

static const int inner_count = 256;
static const int outer_count = 128;

static pthread_attr_t a;

static
void *thread1(void *arg)
{
  return NULL;
}

static
void *sleeper(void *arg)
{
  pthread_t ts[inner_count];
  for (int i = 0; i < inner_count; i++)
    pthread_create (&ts[i], &a, thread1, NULL);
  for (int i = 0; i < inner_count; i++)
    pthread_join (ts[i], NULL);

  return NULL;
}

int main(void)
{
  pthread_attr_init(&a);
  pthread_attr_setguardsize(&a, 1<<20);
  pthread_attr_setstacksize(&a, 1134592);

  pthread_t ts[outer_count];
  for (int i = 0; i < outer_count; i++)
    {
      int r = pthread_create(&ts[i], &a, sleeper, NULL);
      assert(r == 0);
    }
  for (int i = 0; i < outer_count; i++)
    pthread_join(ts[i], NULL);
  return 0;
}

--

On x86_64 (4.4.0-45-generic, gcc 5.4.0) running the small benchtests
I see:

$ time ./test

real	0m3.647s
user	0m0.080s
sys	0m11.836s

While with the patch I see:

$ time ./test

real	0m0.696s
user	0m0.040s
sys	0m1.152s

So I added a pthread_create benchtest (thread_create) which checks
thread creation latency.  As with the simple benchtest above, I saw
improvements in thread creation on all architectures where I tested
the change.

Checked on x86_64-linux-gnu, i686-linux-gnu, aarch64-linux-gnu,
arm-linux-gnueabihf, powerpc64le-linux-gnu, sparc64-linux-gnu,
and sparcv9-linux-gnu.

	[BZ #18988]
	* benchtests/thread_create-inputs: New file.
	* benchtests/thread_create-source.c: Likewise.
	* support/xpthread_attr_setguardsize.c: Likewise.
	* support/Makefile (libsupport-routines): Add
	xpthread_attr_setguardsize object.
	* support/xthread.h: Add xpthread_attr_setguardsize prototype.
	* benchtests/Makefile (bench-pthread): Add thread_create.
	* nptl/allocatestack.c (allocate_stack): Call mmap with PROT_NONE and
	then mprotect the required area.
2017-06-14 17:22:35 -03:00
Adhemerval Zanella
37f8abad1c nptl: Remove COLORING_INCREMENT
This patch removes the COLORING_INCREMENT define and its usage in
allocatestack.c.  It has not been used by any architecture since
564cd8b67e (glibc-2.3.3).  The idea is to simplify the code by removing
obsolete code.

	* nptl/allocatestack.c [COLORING_INCREMENT] (nptl_ncreated): Remove.
	(allocate_stack): Remove COLORING_INCREMENT usage.
	* nptl/stack-aliasing.h (COLORING_INCREMENT): Likewise.
	* sysdeps/i386/i686/stack-aliasing.h (COLORING_INCREMENT): Likewise.
2017-02-06 15:58:32 -02:00
Alexandre Oliva
d675eaf7d9 Bug 20915: Do not initialize DTV of other threads.
In _dl_nothread_init_static_tls() and init_one_static_tls() we must not
touch the DTV of other threads since we do not have ownership of them.
The DTV need not be initialized at this point anyway since only LD/GD
accesses will use them. If LD/GD accesses occur they will take care to
initialize their own thread's DTV.

Concurrency comments were removed from the patch since they need to be
reworked along with a full description of DTV ownership and when it is
or is not safe to modify these structures.

Alexandre Oliva's original patch and discussion:
https://sourceware.org/ml/libc-alpha/2016-09/msg00512.html
2017-02-03 21:34:14 -05:00
Joseph Myers
bfff8b1bec Update copyright dates with scripts/update-copyrights. 2017-01-01 00:14:16 +00:00
Adhemerval Zanella
c579f48edb Remove cached PID/TID in clone
This patch removes the PID cache and its usage in current GLIBC code.  The
current usage is mainly a performance optimization to avoid the syscall;
however, it adds some issues:

  - The exposed clone syscall will try to set pid/tid to make the new
    thread somewhat compatible with current GLIBC assumptions.  This causes
    a set of issues with new workloads and usecases (such as BZ#17214 and
    [1]) as well as for new internal usage of clone to optimize other
    algorithms (such as clone plus CLONE_VM for posix_spawn, BZ#19957).

  - The caching complexity also added some bugs in the past [2] [3] and
    requires more effort from each port to handle such requirements (for
    both the clone and vfork implementations).

  - The caching performance gain is mainly on getpid and some specific
    code paths.  The getpid performance leverage is questionable [4],
    both for the idea of getpid being a hotspot and for the getpid
    implementation itself (if it is indeed a justifiable hotspot, a
    vDSO symbol could lead to a much simpler solution).

    Other usage is mainly on unusual code paths, such as pthread
    cancellation signal handling.

For thread creation (on stack allocation) the code simplification in fact
adds some performance gain, since there is no need to traverse the stack
cache and invalidate each element's pid.

Other thread usages will require a direct getpid syscall, such as
cancellation/setxid signal, thread cancellation, thread fail path (at
create_thread), and thread signal (pthread_kill and pthread_sigqueue).
However, these are hardly usual hotspots, and I think adding a syscall is
justifiable.

It also simplifies both the clone and vfork arch-specific implementations.
And by reviewing each fork implementation, there are some discrepancies
that this patch also solves:

  - microblaze clone/vfork does not set/reset the pid/tid field
  - hppa uses the default vfork implementation that falls back to fork.
    Since vfork is deprecated I do not think we should bother with it.

The patch also removes the TID caching in clone.  My understanding of
such semantics is that they try to provide some pthread usability after
a user program issues clone directly (as done by thread creation with
CLONE_PARENT_SETTID and the pthread tid member).  However, as stated
before in multiple discussion threads, GLIBC provides the clone syscall
without further supporting all these semantics.

I ran a full make check on x86_64, x32, i686, armhf, aarch64, and powerpc64le.
For sparc32, sparc64, and mips I ran the basic fork and vfork tests from the
posix/ folder (on a qemu system).  So it would require further testing
on alpha, hppa, ia64, m68k, nios2, s390, sh, and tile (I excluded microblaze
because it already implements the patch semantics regarding clone/vfork).

[1] https://codereview.chromium.org/800183004/
[2] https://sourceware.org/ml/libc-alpha/2006-07/msg00123.html
[3] https://sourceware.org/bugzilla/show_bug.cgi?id=15368
[4] http://yarchive.net/comp/linux/getpid_caching.html

	* sysdeps/nptl/fork.c (__libc_fork): Remove pid cache setting.
	* nptl/allocatestack.c (allocate_stack): Likewise.
	(__reclaim_stacks): Likewise.
	(setxid_signal_thread): Obtain pid through syscall.
	* nptl/nptl-init.c (sigcancel_handler): Likewise.
	(sighandle_setxid): Likewise.
	* nptl/pthread_cancel.c (pthread_cancel): Likewise.
	* sysdeps/unix/sysv/linux/pthread_kill.c (__pthread_kill): Likewise.
	* sysdeps/unix/sysv/linux/pthread_sigqueue.c (pthread_sigqueue):
	Likewise.
	* sysdeps/unix/sysv/linux/createthread.c (create_thread): Likewise.
	* sysdeps/unix/sysv/linux/getpid.c: Remove file.
	* nptl/descr.h (struct pthread): Change comment about pid value.
	* nptl/pthread_getattr_np.c (pthread_getattr_np): Remove thread
	pid assert.
	* sysdeps/unix/sysv/linux/pthread-pids.h (__pthread_initialize_pids):
	Do not set pid value.
	* nptl_db/td_ta_thr_iter.c (iterate_thread_list): Remove thread
	pid cache check.
	* nptl_db/td_thr_validate.c (td_thr_validate): Likewise.
	* sysdeps/aarch64/nptl/tcb-offsets.sym: Remove pid offset.
	* sysdeps/alpha/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/arm/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/hppa/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/i386/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/ia64/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/m68k/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/microblaze/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/mips/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/nios2/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/powerpc/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/s390/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/sh/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/sparc/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/tile/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/x86_64/nptl/tcb-offsets.sym: Likewise.
	* sysdeps/unix/sysv/linux/aarch64/clone.S: Remove pid and tid caching.
	* sysdeps/unix/sysv/linux/alpha/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/arm/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/hppa/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/ia64/clone2.S: Likewise.
	* sysdeps/unix/sysv/linux/mips/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/nios2/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-32/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/sh/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc32/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/tile/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/aarch64/vfork.S: Remove pid set and reset.
	* sysdeps/unix/sysv/linux/alpha/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/arm/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/ia64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/m68k/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/m68k/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/mips/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/nios2/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-32/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/sh/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc32/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/tile/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/vfork.S: Likewise.
	* sysdeps/unix/sysv/linux/tst-clone2.c (f): Remove direct pthread
	struct access.
	(clone_test): Remove function.
	(do_test): Rewrite to take into consideration that the pid is not cached anymore.
2016-11-24 19:38:51 -02:00
Alexandre Oliva
17af5da98c [PR19826] fix non-LE TLS in static programs
An earlier fix for TLS dropped early initialization of DTV entries for
modules using static TLS, leaving it for __tls_get_addr to set them
up.  That worked on platforms that require the GD access model to be
relaxed to LE in the main executable, but it caused a regression on
platforms that allow GD in the main executable, particularly in
statically-linked programs: they use a custom __tls_get_addr that does
not update the DTV, which fails when the DTV early initialization is
not performed.

In static programs, __libc_setup_tls performs the DTV initialization
for the main thread, but the DTV of other threads is set up in
_dl_allocate_tls_init, so that's the fix that matters.

Restoring the initialization in the remaining functions modified by
this patch was just for uniformity.  It's not clear that it is ever
needed: even on platforms that allow GD in the main executable, the
dynamically-linked version of __tls_get_addr would set up the DTV
entries, even for static TLS modules, while updating the DTV counter.

for  ChangeLog

	[BZ #19826]
	* elf/dl-tls.c (_dl_allocate_tls_init): Restore DTV early
	initialization of static TLS entries.
	* elf/dl-reloc.c (_dl_nothread_init_static_tls): Likewise.
	* nptl/allocatestack.c (init_one_static_tls): Likewise.
2016-09-21 22:01:16 -03:00