During packet parsing, only the binary name is available. If the name
equality check is performed before conversion to text, we can sometimes
skip the last step.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
During packet parsing, only the binary representation is available,
and it is convenient to check that directly for conformance with host
name requirements.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Redirect internal assertion failures to __libc_assert_fail, based on
__libc_message, which writes directly to STDERR_FILENO
and calls abort. Also disable message translation and reword the
error message slightly (adjusting stdlib/tst-bz20544 accordingly).
As a result of these changes, malloc no longer needs its own
redefinition of __assert_fail.
__libc_assert_fail needs to be stubbed out during rtld dependency
analysis because the rtld rebuilds turn __libc_assert_fail into
__assert_fail, which is unconditionally provided by elf/dl-minimal.c.
This change is not possible for the public assert macro and its
__assert_fail function because POSIX requires that the diagnostic
is written to stderr.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Since commit ec2c1fcefb ("malloc:
Abort on heap corruption, without a backtrace [BZ #21754]"),
__libc_message always terminates the process. Since commit
a289ea09ea ("Do not print backtraces
on fatal glibc errors"), the backtrace facility has been removed.
Therefore, remove enum __libc_message_action and the action
argument of __libc_message, and mark __libc_message as _Noreturn.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Rather than buffering 16 MiB of entropy in userspace (by way of
chacha20), simply call getrandom() every time.
This approach is doubtlessly slower, for now, but trying to prematurely
optimize arc4random appears to be leading toward all sorts of nasty
properties and gotchas. Instead, this patch takes a much more
conservative approach. The interface is added as a basic loop wrapper
around getrandom(), and then later, the kernel and libc can work
together on optimizing it.
This prevents numerous issues in which userspace is unaware of when it
really must throw away its buffer, since we avoid buffering
altogether. Future improvements may include userspace learning more from
the kernel about when to do that, which might make these sorts of
chacha20-based optimizations more possible. The current heuristic of 16
MiB is meaningless garbage that doesn't correspond to anything the
kernel might know about. So for now, let's just do something
conservative that we know is correct and won't lead to cryptographic
issues for users of this function.
This patch might be considered along the lines of, "optimization is the
root of all evil," in that the much more complex implementation it
replaces moves too fast without considering security implications,
whereas the incremental approach done here is a much safer way of going
about things. Once this lands, we can take our time in optimizing this
properly using new interplay between the kernel and userspace.
getrandom(0) is used, since that's the one that ensures the bytes
returned are cryptographically secure. But on systems without it, we
fall back to using /dev/urandom. This is unfortunate because it means
opening a file descriptor, but there's not much of a choice. Secondly,
as part of the fallback, in order to get more or less the same
properties of getrandom(0), we poll on /dev/random, and if the poll
succeeds at least once, then we assume the RNG is initialized. This is a
rough approximation, as the ancient "non-blocking pool" initialized
after the "blocking pool", not before, and it may not port back to all
ancient kernels, though it does to all kernels supported by glibc
(≥3.2), so generally it's the best approximation we can do.
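For illustration, the loop-wrapper idea described above amounts to something
like the following (a simplified sketch, not the actual glibc code; the
/dev/urandom and poll-based fallback paths are omitted):

  #include <errno.h>
  #include <sys/random.h>

  /* Sketch: fill BUF with N cryptographically secure bytes by looping
     over getrandom(0), retrying on EINTR and short reads.  */
  static void
  random_bytes_sketch (void *buf, size_t n)
  {
    unsigned char *p = buf;
    while (n > 0)
      {
        ssize_t l = getrandom (p, n, 0);  /* blocks until the RNG is seeded */
        if (l < 0)
          {
            if (errno == EINTR)
              continue;
            break;  /* this is where the /dev/urandom fallback would kick in */
          }
        p += l;
        n -= l;
      }
  }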
The motivation for including arc4random, in the first place, is to have
source-level compatibility with existing code. That means this patch
doesn't attempt to litigate the interface itself. It does, however,
choose a conservative approach for implementing it.
Cc: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: Cristian Rodríguez <crrodriguez@opensuse.org>
Cc: Paul Eggert <eggert@cs.ucla.edu>
Cc: Mark Harris <mark.hsj@gmail.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The implementation is based on scalar Chacha20 with per-thread cache.
It uses getrandom, or /dev/urandom as a fallback, to get the initial entropy,
and reseeds the internal state on every 16MB of consumed buffer.
To improve performance and lower memory consumption, the per-thread cache
is allocated lazily on the first call to an arc4random function, and if the
memory allocation fails getentropy or /dev/urandom is used as a fallback.
The cache is also cleared on thread exit iff it was initialized (so if
arc4random is not called it is not touched).
Although it is lock-free, arc4random is still not async-signal-safe
(the per thread state is not updated atomically).
The ChaCha20 implementation is based on RFC8439 [1], omitting the final
XOR of the keystream with the plaintext because the plaintext is a
stream of zeros. This strategy is similar to what OpenBSD arc4random
does.
The arc4random_uniform is based on previous work by Florian Weimer,
where the algorithm is based on Jérémie Lumbroso's paper Optimal Discrete
Uniform Generation from Coin Flips, and Applications (2013) [2], which
credits Donald E. Knuth and Andrew C. Yao, The complexity of nonuniform
random number generation (1976), for solving the general case.
The main advantage of this method is that the unit of randomness is not
the uniform random variable (uint32_t), but a random bit. It optimizes the
internal buffer sampling by initially consuming a 32-bit random variable
and then sampling byte per byte. Depending on the upper bound requested,
it might lead to better CPU utilization.
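For illustration, typical use of the new interfaces looks like this (a usage
sketch; the functions are declared in <stdlib.h>):

  #include <stdint.h>
  #include <stdlib.h>

  void
  example (void)
  {
    uint32_t r = arc4random ();           /* uniform 32-bit value */
    uint32_t d = arc4random_uniform (6);  /* uniform in [0, 6), no modulo bias */
    unsigned char key[32];
    arc4random_buf (key, sizeof key);     /* fill a buffer with random bytes */
    (void) r;
    (void) d;
  }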
Checked on x86_64-linux-gnu, aarch64-linux, and powerpc64le-linux-gnu.
Co-authored-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Yann Droneaud <ydroneaud@opteya.com>
[1] https://datatracker.ietf.org/doc/html/rfc8439
[2] https://arxiv.org/pdf/1304.1916.pdf
Compilers may not be able to apply asm redirections to functions after
these functions are used for the first time, e.g. clang 13.
Fix [BZ #27087] by applying all long double-related asm redirections
before using functions in bits/stdio.h.
However, as these asm redirections depend on the declarations provided
by libio/bits/stdio2.h, this header was split in two:
- libio/bits/stdio2-decl.h contains all function declarations;
- libio/bits/stdio2.h remains with the remaining contents, including
redirections.
This also adds the access attribute to __vsnprintf_chk that was missing.
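To illustrate the ordering constraint with hypothetical names (this is not
the actual libio code): all declarations and asm redirections have to come
first, as in stdio2-decl.h, because once clang 13 has emitted a use of a
function, a later redirection may no longer be applied to it.

  /* Declaration plus asm redirection first (the stdio2-decl.h role)...  */
  extern int myprintf (const char *fmt, ...) __asm__ ("__myprintf_redirected");

  /* ...and only then inline code that uses it (the stdio2.h role).
     With the order reversed, clang 13 may keep calling the original name.  */
  static inline int
  use_it (const char *s)
  {
    return myprintf (s);
  }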
Tested with build-many-glibcs.py.
Reviewed-by: Paul E. Murphy <murphyp@linux.ibm.com>
Besides weak_hidden_alias/declare_symbol_alias/hidden_data_ver, many
*_hidden_* macros are removed. If there is a rare need to use one, one
may write something like `#if IS_IN (libm)\nhidden_def (...)\n#endif`
instead.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
By adding an internal alias to avoid the GOT indirection.
On some architectures, __libc_single_threaded may be accessed through
copy relocations, and thus the copies need to be updated as well as
the default one.
This is done by adding a new internal macro,
libc_hidden_data_{proto,def}, which has an additional argument that
specifies the alias name (instead of the default __GI_ one).
Checked on x86_64-linux-gnu and i686-linux-gnu.
Reviewed-by: Fangrui Song <maskray@google.com>
Add a proper bounds check to __libc_ifunc_impl_list. This makes MAX_IFUNC
redundant and fixes several targets that will write outside the array.
To avoid unnecessarily large diffs, pass the maximum in the argument 'i' to
IFUNC_IMPL_ADD - 'max' can be used in new ifunc definitions and existing
ones can be updated if desired.
Passes build-many-glibcs.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
In multi-threaded programs, registering via pthread_atfork,
de-registering implicitly via dlclose, or running pthread_atfork
handlers during fork was protected by an internal lock. This meant
that a pthread_atfork handler attempting to register another handler or
dlclose a dynamically loaded library would lead to a deadlock.
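For example, before this fix a program along the following lines could
deadlock, because the prepare handler runs while the internal atfork lock is
held (a sketch, not one of the added tests):

  #include <pthread.h>
  #include <unistd.h>

  static void child_handler (void) { }

  static void
  prepare (void)
  {
    /* Registering another handler from inside a handler used to deadlock.  */
    pthread_atfork (NULL, NULL, child_handler);
  }

  int
  main (void)
  {
    pthread_atfork (prepare, NULL, NULL);
    fork ();
    return 0;
  }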
This commit fixes the deadlock in the following way:
During the execution of handlers at fork time, the atfork lock is
released prior to the execution of each handler and taken again upon its
return. Any handler registrations or de-registrations that occurred
during the execution of the handler are accounted for before proceeding
with further handler execution.
If a handler that hasn't been executed yet gets de-registered by another
handler during fork, it will not be executed. If a handler gets
registered by another handler during fork, it will not be executed
during that particular fork.
The possibility that handlers may now be registered or de-registered
during handler execution means that identifying the next handler to
run (after a given handler may have registered or de-registered others)
requires some bookkeeping. The fork_handler struct has an additional field, 'id',
which is assigned sequentially during registration. Thus, handlers are
executed in ascending order of 'id' during 'prepare', and descending
order of 'id' during parent/child handler execution after the fork.
Two tests are included:
* tst-atfork3:
This test exercises calling dlclose from prepare, parent, and child
handlers.
* tst-atfork4: This test exercises calling pthread_atfork and dlclose
from the prepare handler.
[BZ #24595, BZ #27054]
Co-authored-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The type does not depend on wide vs narrow preprocessor macros,
so it does not need to be customized in stdio-common/printf-parse.h.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The siglist.c is built with -fno-toplevel-reorder to avoid the compiler
reordering the compat assembly directives due to an assembler
issue [1] (fixed in 2.39).
This patch removes the compiler flag by splitting the compat symbol
generation into two phases. First, __sys_siglist and __sys_sigabbrev
are preprocessed, without any compat symbol directives, to generate
assembly source code. This generated assembly is then used as input
to a platform-agnostic siglist.S, which then creates the compat
definitions. This prevents the compiler from moving any compat
directives before the __sys_siglist definition itself.
Checked on a make check run-built-tests=no on all affected ABIs.
[1] https://sourceware.org/bugzilla/show_bug.cgi?id=29012
Reviewed-by: Fangrui Song <maskray@google.com>
The errlist.c is built with -fno-toplevel-reorder to avoid the compiler
reordering the compat assembly directives due to an assembler issue [1]
(fixed in 2.39).
This patch removes the compiler flag by splitting the compat symbol
generation into two phases. First, the internal _sys_errlist_internal
is preprocessed, without any compat symbol directives, to generate
assembly source code. This generated assembly is then used as input
to a platform-agnostic errlist-data.S, which then creates the compat
definitions. This prevents the compiler from moving any compat
directives before the _sys_errlist_internal definition itself.
Checked on a make check run-built-tests=no on all affected ABIs.
[1] https://sourceware.org/bugzilla/show_bug.cgi?id=29012
The GNU implementation of wcrtomb assumes that there are at least
MB_CUR_MAX bytes available in the destination buffer passed to wcrtomb
as the first argument. This is not compatible with the POSIX
definition, which only requires enough space for the input wide
character.
This does not break much in practice because when users supply buffers
smaller than MB_CUR_MAX (e.g. in ncurses), they compute and dynamically
allocate the buffer, which results in enough spare space (thanks to
usable_size in malloc and padding in alloca) that no actual buffer
overflow occurs. However when the code is built with _FORTIFY_SOURCE,
it runs into the hard check against MB_CUR_MAX in __wcrtomb_chk and
hence fails. It wasn't evident until now because dynamic allocations
would result in wcrtomb not being fortified, but with _FORTIFY_SOURCE=3
that limitation is gone, so such code now fails.
To fix this problem, introduce an internal buffer that is MB_LEN_MAX
long and use that to perform the conversion and then copy the resultant
bytes into the destination buffer. Also move the fortification check
into the main implementation, which checks the result after conversion
and aborts if the resultant byte count is greater than the destination
buffer size.
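A rough sketch of the idea (not the actual glibc implementation):

  #include <limits.h>
  #include <string.h>
  #include <wchar.h>

  /* Sketch: convert into a local MB_LEN_MAX buffer, then copy only the
     bytes actually produced, so the destination only needs room for this
     particular character.  */
  size_t
  wcrtomb_sketch (char *dst, wchar_t wc, mbstate_t *ps, size_t dstsize)
  {
    char tmp[MB_LEN_MAX];
    size_t n = wcrtomb (tmp, wc, ps);
    if (n == (size_t) -1)
      return n;                  /* encoding error */
    if (n > dstsize)
      return (size_t) -1;        /* the fortified entry point aborts here */
    memcpy (dst, tmp, n);
    return n;
  }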
One complication is that applications that assume the MB_CUR_MAX
limitation to be gone may not be able to run safely on older glibcs if
they use static destination buffers smaller than MB_CUR_MAX; dynamic
allocations will always have enough spare space that no actual overruns
will occur. One alternative to fixing this is to bump symbol version to
prevent them from running on older glibcs but that seems too strict a
constraint. Instead, since these users will only have made this
decision on reading the manual, I have put a note in the manual warning
them about the pitfalls of having static buffers smaller than
MB_CUR_MAX and running them on older glibc.
Benchmarking:
The wcrtomb microbenchmark shows significant increases in maximum
execution time for all locales, ranging from 10x for ar_SA.UTF-8 to
1.5x-2x for nearly everything else. The mean execution time however saw
practically no impact, with some results even being quicker, indicating
that cache locality has a much bigger role in the overhead.
Given that the additional copy uses a temporary buffer inside wcrtomb,
it's likely that a hot path will end up putting that buffer (which is
responsible for the additional overhead) in a similar place on stack,
giving the necessary cache locality to negate the overhead. However in
situations where wcrtomb ends up getting called at wildly different
spots on the call stack (or is on different call stacks, e.g. with
threads or different execution contexts) and is still a hotspot, the
performance lag will be visible.
Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
The EI_ABIVERSION field of the ELF header in executables and shared
libraries can be bumped to indicate the minimum ABI requirement on the
dynamic linker. However, EI_ABIVERSION in executables isn't checked by
the Linux kernel ELF loader nor the existing dynamic linker. Executables
will crash mysteriously if the dynamic linker doesn't support the ABI
features required by the EI_ABIVERSION field. The dynamic linker should
be changed to check EI_ABIVERSION in executables.
Add a glibc version, GLIBC_ABI_DT_RELR, to indicate DT_RELR support so
that the existing dynamic linkers will issue an error on executables with
GLIBC_ABI_DT_RELR dependency. When there is a DT_VERNEED entry with
libc.so on DT_NEEDED, issue an error if there is a DT_RELR entry without
GLIBC_ABI_DT_RELR dependency.
Support __placeholder_only_for_empty_version_map as the placeholder symbol
used only for empty version map to generate GLIBC_ABI_DT_RELR without any
symbols.
After 73fc4e28b9,
__libc_enable_secure_decided is always 0 and a statically linked
executable may overwrite __libc_enable_secure without considering
AT_SECURE.
__libc_enable_secure has been correctly initialized in _dl_aux_init,
so just remove __libc_enable_secure_decided and __libc_init_secure.
This allows us to remove some startup_get*id functions from
22b79ed7f4.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Prelinked binaries and libraries still work; the dynamic tags
DT_GNU_PRELINKED, DT_GNU_LIBLIST, and DT_GNU_CONFLICT are simply ignored
(meaning the process is relocated as usual).
The loader environment variable TRACE_PRELINKING is also removed,
since it was used solely by prelink.
Checked on x86_64-linux-gnu, i686-linux-gnu, and aarch64-linux-gnu.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
The timestamps created by __convert_scm_timestamps only make sense for
64-bit time_t programs; 32-bit time_t programs will ignore 64-bit time_t
timestamps since SO_TIMESTAMP will be defined to the old values (either by
glibc or by the kernel headers).
Worse, if the buffer is not sufficient, MSG_CTRUNC is set to indicate it
(which breaks some programs [1]).
This patch makes only the 64-bit time_t recvmsg and recvmmsg call
__convert_scm_timestamps. Also, the condition for calling it is changed
from __ASSUME_TIME64_SYSCALLS to __TIMESIZE != 64, since setsockopt
might be called by libraries built without _TIME_BITS=64. MSG_CTRUNC
is only set for the 64-bit symbols; it should happen only if 64-bit
time_t programs run on older kernels.
Checked on x86_64-linux-gnu and i686-linux-gnu.
[1] https://github.com/systemd/systemd/pull/20567
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Currently there is no proper way to set the controlling terminal through
posix_spawn in a race-free manner [1]. This forces shell implementations
to keep using fork+exec when launching background process groups,
even when using posix_spawn yields better performance.
This patch adds a new GNU extension so the creating process can
configure the created process terminal group. This is done with a new
flag, POSIX_SPAWN_TCSETPGROUP, along with two new attribute functions:
posix_spawnattr_tcsetpgrp_np, and posix_spawnattr_tcgetpgrp_np.
The function sets a new attribute, spawn-tcgroupfd, that references
the controlling terminal.
The controlling terminal is set after the spawn-pgroup attribute, and
uses spawn-tcgroupfd along with the current process group of the creating process
(so it is composable with POSIX_SPAWN_SETPGROUP).
To create a process and set the controlling terminal, one can use the
following sequence:
  posix_spawnattr_t attr;
  posix_spawnattr_init (&attr);
  posix_spawnattr_setflags (&attr, POSIX_SPAWN_TCSETPGROUP);
  posix_spawnattr_tcsetpgrp_np (&attr, tcfd);
If the idea is also to create a new process groups:
  posix_spawnattr_t attr;
  posix_spawnattr_init (&attr);
  posix_spawnattr_setflags (&attr, POSIX_SPAWN_TCSETPGROUP
                            | POSIX_SPAWN_SETPGROUP);
  posix_spawnattr_tcsetpgrp_np (&attr, tcfd);
  posix_spawnattr_setpgroup (&attr, 0);
The controlling terminal file descriptor is ignored if the new flag is
not set.
This interface is slightly different than the one provided by QNX [2],
which only provides the POSIX_SPAWN_TCSETPGROUP flag. The QNX
documentation does not specify how the controlling terminal is obtained
nor how it interacts with POSIX_SPAWN_SETPGROUP. Since the glibc
implementation is library based, it is more straightforward, and avoids
extra file descriptor operations, to require the caller to set up the
controlling terminal file descriptor (and it also allows a bit less
error handling by posix_spawn).
Checked on x86_64-linux-gnu and i686-linux-gnu.
[1] https://github.com/ksh93/ksh/issues/79
[2] https://www.qnx.com/developers/docs/7.0.0/index.html#com.qnx.doc.neutrino.lib_ref/topic/p/posix_spawn.html
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Tested-by: Carlos O'Donell <carlos@redhat.com>
It is similar to epoll_wait, with the difference that the timeout has
nanosecond resolution by using struct timespec instead of int.
Although the Linux interface only provides 64-bit time_t support, the old
32-bit interface is also provided (to keep in sync with current
practice and to not force opt-in on 64-bit time_t).
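This description appears to match epoll_pwait2; assuming so, a usage sketch:

  #include <sys/epoll.h>
  #include <time.h>

  int
  wait_quarter_second (int epfd)
  {
    struct epoll_event ev[8];
    struct timespec ts = { 0, 250000000 };         /* 250 ms, via tv_nsec */
    return epoll_pwait2 (epfd, ev, 8, &ts, NULL);  /* NULL sigmask */
  }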
Checked on x86_64-linux-gnu and i686-linux-gnu.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
gcc 12 now has support for the __builtin_dynamic_object_size builtin.
Adapt the macro checks to enable _FORTIFY_SOURCE=3 on gcc 12 and above.
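The difference is roughly the following (a sketch of the fortification idea,
not the actual fortified wrappers): __builtin_object_size only folds
compile-time-constant sizes, while __builtin_dynamic_object_size may return a
runtime expression, so dynamically sized buffers can be checked too.

  #include <stdlib.h>
  #include <string.h>

  void
  fill (size_t n)
  {
    char *p = malloc (n);
    if (p == NULL)
      return;
    /* With GCC 12, this can evaluate to the runtime value n instead of
       (size_t) -1 (unknown), which is what _FORTIFY_SOURCE=3 relies on.  */
    size_t sz = __builtin_dynamic_object_size (p, 0);
    if (sz == (size_t) -1 || n <= sz)
      memset (p, 0, n);
    free (p);
  }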
Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
I used these shell commands:
../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright
(cd ../glibc && git commit -am"[this commit message]")
and then ignored the output, which consisted of lines saying "FOO: warning:
copyright statement not found" for each of 7061 files FOO.
I then removed trailing white space from math/tgmath.h,
support/tst-support-open-dev-null-range.c, and
sysdeps/x86_64/multiarch/strlen-vec.S, to work around the following
obscure pre-commit check failure diagnostics from Savannah. I don't
know why I run into these diagnostics whereas others evidently do not.
remote: *** 912-#endif
remote: *** 913:
remote: *** 914-
remote: *** error: lines with trailing whitespace found
...
remote: *** error: sysdeps/unix/sysv/linux/statx_cp.c: trailing lines
It can be used to speed up the libgcc unwinder, and the internal
_dl_find_dso_for_object function (which is used for caller
identification in dlopen and related functions, and in dladdr).
_dl_find_object is in the internal namespace due to bug 28503.
If libgcc switches to _dl_find_object, this namespace issue will
be fixed. It is located in libc for two reasons: it is necessary
to forward the call to the static libc after static dlopen, and
there is a link ordering issue with -static-libgcc and libgcc_eh.a
because libc.so is not a linker script that includes ld.so in the
glibc build tree (so that GCC's internal -lc after libgcc_eh.a does
not pick up ld.so).
It is necessary to do the i386 customization in the
sysdeps/x86/bits/dl_find_object.h header shared with x86-64 because
otherwise, multilib installations are broken.
The implementation uses software transactional memory, as suggested
by Torvald Riegel. Two copies of the supporting data structures are
used, also achieving full async-signal-safety.
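Typical use from an unwinder or a dladdr-like caller looks roughly like this
(a sketch; _dl_find_object is declared in <dlfcn.h> under _GNU_SOURCE):

  #define _GNU_SOURCE
  #include <dlfcn.h>
  #include <stdio.h>

  void
  describe (void *pc)
  {
    struct dl_find_object dfo;
    if (_dl_find_object (pc, &dfo) == 0)   /* async-signal-safe lookup */
      printf ("mapped at [%p, %p), EH frame at %p\n",
              dfo.dlfo_map_start, dfo.dlfo_map_end, dfo.dlfo_eh_frame);
  }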
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
The vDSO is listed in the link_map chain, but is never the subject of
an la_objopen call. A new internal flag __RTLD_VDSO is added that
acts as __RTLD_OPENEXEC to allocate the required 'struct auditstate'
extra space for the 'struct link_map'.
The return value from the callback is currently ignored, since
glibc makes no PLT calls when using the vDSO, nor is the vDSO
exported directly.
Checked on x86_64-linux-gnu, i686-linux-gnu, and aarch64-linux-gnu.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
The rtld-audit interface introduces a slowdown due to enabling
profiling instrumentation (as if LD_AUDIT implied LD_PROFILE).
However, instrumenting is only necessary if one of the audit libraries
provides PLT callbacks (la_pltenter or la_pltexit symbols). Otherwise,
the slowdown can be avoided.
The following patch adjusts the logic that enables profiling to iterate
over all audit modules and check if any of those provides a PLT hook.
To keep la_symbind working even without PLT callbacks, _dl_fixup now
calls the audit callback if the module implements it.
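For example, an audit module along these lines defines la_symbind64 but no
la_pltenter/la_pltexit, so with this change it no longer forces the profiling
code path (a sketch for 64-bit targets):

  #include <link.h>
  #include <stdint.h>
  #include <stdio.h>

  unsigned int
  la_version (unsigned int version)
  {
    return LAV_CURRENT;
  }

  /* Now invoked from _dl_fixup for each binding even though no PLT
     profiling instrumentation is enabled.  */
  uintptr_t
  la_symbind64 (Elf64_Sym *sym, unsigned int ndx, uintptr_t *refcook,
                uintptr_t *defcook, unsigned int *flags, const char *symname)
  {
    fprintf (stderr, "binding %s\n", symname);
    return sym->st_value;
  }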
Co-authored-by: Alexander Monakov <amonakov@ispras.ru>
Checked on x86_64-linux-gnu, i686-linux-gnu, and aarch64-linux-gnu.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Hurd initialization stages use RUN_HOOK to run various initialization
functions. That, however, uses absolute addresses which need to be
relocated, which is done later by csu. We can instead easily make the
linker compute relative addresses, which thus don't need relocation.
The new SET_RELHOOK and RUN_RELHOOK macros implement this.
To increase effectiveness with Transparent Huge Pages with madvise, the
large page size is used instead of the page size for the sbrk increment
for the main arena.
Checked on x86_64-linux-gnu.
Reviewed-by: DJ Delorie <dj@redhat.com>
Now that Hurd implements both close_range and closefrom (f2c996597d),
we can make close_range() a base ABI and implement the default
closefrom() on top of close_range().
The generic closefrom() implementation based on __getdtablesize() is
moved to the generic close_range(). On Linux it will be overridden by
the auto-generated syscall, while on Hurd it will be a system-specific
implementation.
closefrom() now calls close_range() and __closefrom_fallback().
Since on Hurd close_range() does not fail, __closefrom_fallback() is an
empty static inline function set by __ASSUME_CLOSE_RANGE.
__ASSUME_CLOSE_RANGE also allows optimizing the Linux
__closefrom_fallback() implementation when --enable-kernel=5.9 or
higher is used.
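The resulting generic logic is roughly (a simplified sketch, not the actual
source files):

  #define _GNU_SOURCE
  #include <unistd.h>

  void
  closefrom_sketch (int lowfd)
  {
    /* Try the close_range syscall/RPC first.  */
    if (close_range (lowfd, ~0U, 0) == 0)
      return;
    /* __closefrom_fallback (lowfd) would run here on Linux kernels without
       close_range; on Hurd close_range cannot fail, so the fallback is an
       empty inline function.  */
  }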
Finally, the Linux-specific tst-close_range.c is moved to io and
enabled by default. The Linuxisms and CLOSE_RANGE_UNSHARE are
guarded so it can be built for Hurd (I have not actually tested it).
Checked on x86_64-linux-gnu, i686-linux-gnu, and with a i686-gnu
build.
It requires less boilerplate code for newer ports. The _Static_assert
checks from the internal setjmp are moved to their own internal test since
setjmp.h is included early by multiple headers (to generate
rtld-sizes.sym).
The riscv jmp_buf-macros.h check is also redundant; it is already
done by the riscv configure.ac.
Checked with a build for the affected architectures.
And make it an installed header. This addresses a few aliasing
violations (which do not seem to result in miscompilation due to
the use of atomics), and also enables use of wide counters in other
parts of the library.
The debug output in nptl/tst-cond22 has been adjusted to print
the 32-bit values instead because it avoids a big-endian/little-endian
difference.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This second patch contains the actual implementation of a new sorting algorithm
for shared objects in the dynamic loader, which solves the slow behavior that
the current "old" algorithm falls into when the DSO set contains circular
dependencies.
The new algorithm implemented here is simply depth-first search (DFS) to obtain
the Reverse-Post Order (RPO) sequence, a topological sort. A new l_visited:1
bitfield is added to struct link_map to more elegantly facilitate such a search.
The DFS algorithm is applied to the input maps[nmap-1] backwards towards
maps[0]. This has the effect of a more "shallow" recursion depth in general
since the input is in BFS order. Also, when combined with the natural order of
processing l_initfini[] at each node, this creates a resulting output sorting
closer to the intuitive "left-to-right" order in most cases.
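In outline, the new sort is the classic reverse-postorder DFS (a simplified
sketch over a hypothetical node type, not the actual _dl_sort_maps code):

  #include <stddef.h>

  struct node
  {
    struct node **deps;  /* l_initfini-style dependency list, NULL-terminated */
    int visited;         /* plays the role of the new l_visited:1 bit */
  };

  void
  dfs (struct node *n, struct node **out, size_t *pos)
  {
    n->visited = 1;
    for (struct node **d = n->deps; d != NULL && *d != NULL; d++)
      if (!(*d)->visited)
        dfs (*d, out, pos);
    out[(*pos)++] = n;   /* post-order: emitted after all its dependencies */
  }

  /* Running dfs over maps[nmaps-1] down to maps[0] and reversing the
     post-order output yields the RPO used as the dependency sort.  */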
Another notable implementation adjustment related to this _dl_sort_maps change
is the removal of the two char arrays 'used' and 'done' in _dl_close_worker to
represent two per-map attributes. This has been changed to simply use two new
bit-fields l_map_used:1, l_map_done:1 added to struct link_map. This also allows
discarding the clunky 'used' array sorting that _dl_sort_maps had to sometimes
do along the way.
Tunable support for switching between different sorting algorithms at runtime is
also added. A new tunable 'glibc.rtld.dynamic_sort' with current valid values 1
(old algorithm) and 2 (new DFS algorithm) has been added. At time of commit
of this patch, the default setting is 1 (old algorithm).
Signed-off-by: Chung-Lin Tang <cltang@codesourcery.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
This is an internal function meant to return the number of available
processors on which the process can be scheduled, as opposed to
__get_nprocs, which returns the number of available online CPUs in the
system.
The Linux implementation currently only calls __get_nprocs(), which
in turn calls sched_getaffinity.
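A sketch of the distinction (hypothetical helper name, not the internal
symbols):

  #define _GNU_SOURCE
  #include <sched.h>
  #include <sys/sysinfo.h>

  int
  schedulable_cpus (void)
  {
    cpu_set_t set;
    /* CPUs this process may run on, per its affinity mask.  */
    if (sched_getaffinity (0, sizeof set, &set) == 0)
      return CPU_COUNT (&set);
    /* Otherwise report the number of online CPUs in the system.  */
    return get_nprocs ();
  }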
Reviewed-by: Florian Weimer <fweimer@redhat.com>
TS 18661-1 and C2X specify predefined macros __STDC_IEC_60559_BFP__
and __STDC_IEC_60559_COMPLEX__, making __STDC_IEC_559__ and
__STDC_IEC_559_COMPLEX__ obsolescent (but still included in the
standard). Now that we have all the functions from TS 18661-1, define
these macros in stdc-predef.h, under the same conditions in which the
older macros are defined, since support for the floating-point
features in TS 18661-1 is now at the same level as that for those in
C11 and before (all library functions and other library APIs present,
but no standard pragma support).
The macros are defined for now with their TS 18661-1 values. C2X will
give them new values (listed as yyyymmL in the working drafts until
the final standard), at which point there will be the question of what
value to use in stdc-predef.h (where it could depend on
__STDC_VERSION__, but not on feature test macros defined by the user).
My inclination then would be to use the C2X value unconditionally
rather than using an older value to indicate TS support, and only have
any C standard version conditionals for the value when subsequent C
standard versions define further values.
(Note that I'm also inclined, when we implement the C2X change to the
return types of fromfp functions, to make that change unconditional
much like the change made to the types of totalorder functions, with
the old version only supported with compat symbols for already-linked
programs and not as an API for newly built objects. So using the C2X
value would also accurately reflect not supporting the versions of
APIs in the TS where those ended up being incompatible with the first
version actually added to the standard.)
Tested for x86_64.
We can't relocate entries in the dynamic section if it is read-only:
1. Add a l_ld_readonly field to struct link_map to indicate if dynamic
section is readonly and set it based on p_flags of PT_DYNAMIC segment.
2. Replace DL_RO_DYN_SECTION with dl_relocate_ld to decide if the dynamic
section should be relocated.
3. Remove DL_RO_DYN_TEMP_CNT.
4. Don't use a static dynamic section to make the read-only dynamic
section in the vDSO writable.
5. Remove the temp argument from elf_get_dynamic_info.
This fixes BZ #28340.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Copy regex-related files back from Gnulib, to fix a problem with
static checking of regex calls noted by Martin Sebor. This merges the
following changes:
* New macro __attribute_nonnull__ in misc/sys/cdefs.h, for use later
when copying other files back from Gnulib.
* Use __GNULIB_CDEFS instead of __GLIBC__ when deciding
whether to include bits/wordsize.h etc.
* Avoid duplicate entries in epsilon closure table.
* New regex.h macro _REGEX_NELTS to let regexec say that its pmatch
arg should contain nmatch elts. Use that for regexec, instead of
__attr_access (which is incorrect).
* New regex.h macro _Attr_access_ which is like __attr_access except
portable to non-glibc platforms.
* Add some DEBUG_ASSERTs to pacify gcc -fanalyzer and to catch
recently-fixed performance bugs if they recur.
* Add Gnulib-specific stuff to port the dynarray- and lock-using parts
of regex code to non-glibc platforms.
* Fix glibc bug 11053.
* Avoid some undefined behavior when popping an empty fail stack.
Glibc does not provide an interface for debuggers to access libraries
loaded in multiple namespaces via dlmopen.
The current rtld-debugger interface is described in the file:
elf/rtld-debugger-interface.txt
under the "Standard debugger interface" heading. This interface only
provides access to the first link-map (LM_ID_BASE).
1. Bump r_version to 2 when multiple namespaces are used. This triggers
the GDB bug:
https://sourceware.org/bugzilla/show_bug.cgi?id=28236
2. Add struct r_debug_extended to extend struct r_debug into a linked-list,
where each element corresponds to a unique namespace.
3. Initialize the r_debug_extended structure. Bump r_version to 2 for
the new namespace and add the new namespace to the namespace linked list.
4. Add _dl_debug_update to return the address of the struct r_debug of a
namespace.
5. Add a hidden symbol, _r_debug_extended, for struct r_debug_extended.
6. Provide the symbol, _r_debug, with size of struct r_debug, as an alias
of _r_debug_extended, for programs which reference _r_debug.
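The extended structure is essentially a linked list wrapped around the
existing one; its shape is roughly (a sketch, renamed to avoid clashing with
the real declaration):

  #include <link.h>

  struct r_debug_extended_sketch
    {
      struct r_debug base;                      /* existing per-namespace r_debug */
      struct r_debug_extended_sketch *r_next;   /* next namespace, or NULL */
    };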
This fixes BZ #15971.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
include/math.h has a mechanism to redirect internal calls to various
libm functions, that can often be inlined by the compiler, to call
non-exported __* names for those functions in the case when the calls
aren't inlined, with the redirection being disabled when
NO_MATH_REDIRECT. Add fma to the functions to which this mechanism is
applied.
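The mechanism is essentially an asm-name redirection guarded by
NO_MATH_REDIRECT, roughly (a simplified sketch of the idea, not the actual
include/math.h macros):

  #ifndef NO_MATH_REDIRECT
  /* Internal calls that are not inlined end up calling the non-exported
     __fma entry point instead of the public symbol.  */
  extern double fma (double __x, double __y, double __z) __asm__ ("__fma");
  #endif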
At present, libm-internal fma calls (generally to __builtin_fma*
functions) are only done when it's known the call will be inlined,
with alternative code not relying on an fma operation being used in
the caller otherwise. This patch is in preparation for adding the TS
18661 / C2X narrowing fma functions to glibc; it will be natural for
the narrowing function implementations to call the underlying fma
functions unconditionally, with this either being inlined or resulting
in an __fma* call. (Using two levels of round-to-odd computation like
that, in the case where there isn't an fma hardware instruction, isn't
optimal but is certainly a lot simpler for the initial implementation
than writing different narrowing fma implementations for all the
various pairs of formats.)
Tested with build-many-glibcs.py that installed stripped shared
libraries are unchanged by the patch (using
<https://sourceware.org/pipermail/libc-alpha/2021-September/130991.html>
to fix installed library stripping in build-many-glibcs.py). Also
tested for x86_64.
This macro definition enforces two arguments even with newer compilers
that also accept the single-argument form.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
We stopped adding "Contributed by" or similar lines in sources in 2012
in favour of git logs and keeping the Contributors section of the
glibc manual up to date. Removing these lines makes the license
header a bit more consistent across files and also removes the
possibility of error in attribution when license blocks or files are
copied across since the contributed-by lines don't actually reflect
reality in those cases.
Move all "Contributed by" and similar lines (Written by, Test by,
etc.) into a new file CONTRIBUTED-BY to retain record of these
contributions. These contributors are also mentioned in
manual/contrib.texi, so we just maintain this additional record as a
courtesy to the earlier developers.
The following scripts were used to filter a list of files to edit in
place and to clean up the CONTRIBUTED-BY file respectively. These
were not added to the glibc sources because they're not expected to be
of any use in future given that this is a one time task:
https://gist.github.com/siddhesh/b5ecac94eabfd72ed2916d6d8157e7dc
https://gist.github.com/siddhesh/15ea1f5e435ace9774f485030695ee02
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
The OpenRISC port I am working on, and apparently also the rv32 port
we have, sets __TIMESIZE == 64 && __WORDSIZE == 32. This causes the
size of time_t to be 8 bytes, but tv_sec in the kernel is still 32-bit,
causing truncation.
The truncations are unavoidable on these systems so skip the
testing/failures by guarding with __KERNEL_OLD_TIMEVAL_MATCHES_TIMEVAL64.
Also, further in the tests and in other parts of the code, checking for
time_t overflow does not work on 32-bit systems when time_t is 64-bit. As
suggested by Adhemerval, update the in_time_t_range function to assume
32 bits by using int32_t.
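With that change, the helper amounts to something like (a sketch of the
idea):

  #include <stdbool.h>
  #include <stdint.h>

  /* Does a 64-bit timestamp fit in a 32-bit time_t?  Using int32_t keeps
     the check meaningful even when time_t itself is 64-bit.  */
  static inline bool
  in_time_t_range_sketch (int64_t t)
  {
    int32_t s = (int32_t) t;
    return s == t;
  }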
This also brings in the stdint.h header so we can update other
usages of __int32_t to int32_t, as suggested by Adhemerval.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Tell the compiler that the xmalloc family of allocators always returns
non-NULL. xrealloc in locale/programs also always returns non-NULL,
but that conflicts with default realloc behaviour and that of xrealloc
in libsupport, so keep it as is for now and resolve the differences
later.
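In GCC terms this amounts to something like the following declaration (a
sketch; glibc spells the attribute through its own macros):

  #include <stddef.h>

  /* The compiler may drop NULL checks after calls and stops warning about
     potentially NULL results.  */
  void *xmalloc (size_t size) __attribute__ ((__returns_nonnull__));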
Reviewed-by: Florian Weimer <fweimer@redhat.com>