Commit Graph

41112 Commits

Alejandro Colomar
077613291b manual: floor(log2(fabs(x))) has rounding errors
Link: <https://inbox.sourceware.org/libc-alpha/20240305150131.GD3653@qaa.vinc17.org/T/#m3ceecda630012995339bcc5448fee451cf277a8b>
Reported-by: Vincent Lefevre <vincent@vinc17.net>
Suggested-by: Vincent Lefevre <vincent@vinc17.net>
Reviewed-by: DJ Delorie <dj@redhat.com>
Cc: Morten Welinder <mwelinder@gmail.com>
Cc: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
Cc: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Cc: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Alejandro Colomar <alx@kernel.org>
2024-04-03 09:16:22 -03:00
Alejandro Colomar
b7d15bd1f0 manual: logb(x) is floor(log2(fabs(x)))
log2(3) doesn't accept negative input, but it seems logb(3) does accept
it.

Link: <https://lore.kernel.org/linux-man/ZeYKUOKYS7G90SaV@debian/T/#u>
Reported-by: Morten Welinder <mwelinder@gmail.com>
Reviewed-by: DJ Delorie <dj@redhat.com>
Cc: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
Cc: Vincent Lefevre <vincent@vinc17.net>
Cc: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Cc: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Alejandro Colomar <alx@kernel.org>
2024-04-03 09:16:22 -03:00
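A minimal standalone sketch illustrating the distinction the two commits above document, assuming IEEE double and default rounding: for values just below a power of two, log2() can round up to the next integer, so floor(log2(fabs(x))) overshoots logb(x) by one.

  /* Hypothetical example, not part of the glibc manual change.  */
  #include <math.h>
  #include <stdio.h>

  int
  main (void)
  {
    double x = 0x1.fffffffffffffp+52;        /* 2^53 - 1 */
    printf ("logb(x)              = %g\n", logb (x));   /* 52 */
    printf ("floor(log2(fabs(x))) = %g\n",
            floor (log2 (fabs (x))));        /* typically 53 due to rounding */
    return 0;
  }
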
Adhemerval Zanella
4dcd674b66 powerpc: Add missing arch flags on rounding ifunc variants
The ifunc variants now use the powerpc implementation, which in turn
uses the compiler builtin.  Without the proper -mcpu switch, the builtin
does not generate the expected optimization.

Checked on powerpc-linux-gnu.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
2024-04-02 15:49:31 -03:00
Adhemerval Zanella
c0d59e3e0d math: Reformat Makefile.
Reflow all long lines adding comment terminators.
Sort all reflowed text using scripts/sort-makefile-lines.py.

No code generation changes observed in binary artifacts.
No regressions on x86_64 and i686.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-04-02 15:49:20 -03:00
Adhemerval Zanella
a4ed0471d7 Always define __USE_TIME_BITS64 when 64 bit time_t is used
It was raised on libc-help [1] that some Linux kernel interfaces expect
the libc to define __USE_TIME_BITS64 to indicate the time_t size for the
kABI.  Unlike what the initial y2038 design document [2] specifies,
__USE_TIME_BITS64 is only defined for ABIs that support more than one
time_t size (by defining _TIME_BITS for each module).

The 64-bit time_t redirects are now enabled using a different internal
define (__USE_TIME64_REDIRECTS).  There is no expected change in
semantics or code generation.

Checked on x86_64-linux-gnu, i686-linux-gnu, aarch64-linux-gnu, and
arm-linux-gnueabi.

[1] https://sourceware.org/pipermail/libc-help/2024-January/006557.html
[2] https://sourceware.org/glibc/wiki/Y2038ProofnessDesign

Reviewed-by: DJ Delorie <dj@redhat.com>
2024-04-02 15:28:36 -03:00
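A small sketch of how the macro described above can be observed from application code; it assumes a dual-ABI 32-bit target where _TIME_BITS=64 selects the 64-bit time_t, and it checks the internal __USE_TIME_BITS64 macro directly only for illustration.

  /* Illustrative only: applications normally just set _TIME_BITS.  */
  #define _TIME_BITS 64
  #define _FILE_OFFSET_BITS 64
  #include <stdio.h>
  #include <time.h>

  int
  main (void)
  {
  #ifdef __USE_TIME_BITS64
    puts ("__USE_TIME_BITS64 defined: 64-bit time_t ABI in effect");
  #else
    puts ("__USE_TIME_BITS64 not defined");
  #endif
    printf ("sizeof (time_t) = %zu\n", sizeof (time_t));
    return 0;
  }
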
Adhemerval Zanella
a0698a5e92 benchtests: Improve benchtests for strstr
Use the same strategy as bench-strstr.c (93eebae516 and 80b2bfb535),
and use json_ctx for output to help standardize the format across all
benchtests.
Reviewed-by: Arjun Shankar <arjun@redhat.com>
2024-04-01 13:52:00 -03:00
Adhemerval Zanella
721314c980 x86_64: Remove avx512 strstr implementation
As indicated in a recent thread, this is a simple brute-force algorithm
that checks the whole needle at a matching character pair (and does so
1 byte at a time after the first 64 bytes of a needle).  It also never
skips ahead, so it can attempt a match at every haystack position,
trying all of the needle each time, which the generic implementation
avoids.

As indicated by Wilco, a 4x larger needle and 16x larger haystack give
a clear 65x slowdown for both basic_strstr and __strstr_avx512:

  "ifuncs": ["basic_strstr", "twoway_strstr", "__strstr_avx512",
"__strstr_sse2_unaligned", "__strstr_generic"],

    {
     "len_haystack": 65536,
     "len_needle": 1024,
     "align_haystack": 0,
     "align_needle": 0,
     "fail": 1,
     "desc": "Difficult bruteforce needle",
     "timings": [4.0948e+07, 15094.5, 3.20818e+07, 108558, 10839.2]
    },
    {
     "len_haystack": 1048576,
     "len_needle": 4096,
     "align_haystack": 0,
     "align_needle": 0,
     "fail": 1,
     "desc": "Difficult bruteforce needle",
     "timings": [2.69767e+09, 100797, 2.08535e+09, 495706, 82666.9]
    }

PS: I don't have an AVX512-capable machine to verify this issue, but
    skimming through the code it does seem to follow what Wilco has
    described.
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
2024-03-27 13:48:16 -03:00
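A hedged sketch of the kind of worst-case input described above ("Difficult bruteforce needle"): a haystack of repeated 'a's and a needle of 'a's ending in 'b' forces a brute-force strstr to rescan the whole needle at every haystack position, while the generic two-way implementation skips ahead.  The sizes below mirror the second benchmark entry.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <time.h>

  int
  main (void)
  {
    size_t hs_len = 1 << 20, ne_len = 1 << 12;
    char *haystack = malloc (hs_len + 1);
    char *needle = malloc (ne_len + 1);
    memset (haystack, 'a', hs_len);
    haystack[hs_len] = '\0';
    memset (needle, 'a', ne_len);
    needle[ne_len - 1] = 'b';              /* guarantees a failed match */
    needle[ne_len] = '\0';

    clock_t t0 = clock ();
    char *p = strstr (haystack, needle);
    clock_t t1 = clock ();
    printf ("match: %s, elapsed: %.3f s\n", p == NULL ? "none" : "found",
            (double) (t1 - t0) / CLOCKS_PER_SEC);
    free (haystack);
    free (needle);
    return 0;
  }
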
Adhemerval Zanella
2e53eb9234 signal: Avoid system signal disposition to interfere with tests
Both tst-sigset2 and tst-signal1 expect that the SIGINT disposition
is set to SIG_DFL.
2024-03-27 13:47:09 -03:00
Palmer Dabbelt
96d1b9ac23 RISC-V: Fix the static-PIE non-relocated object check
The value of l_scope is only valid post relocation, so this original
check was triggering undefined behavior.  Instead just directly check to
see if the object has been relocated, at which point using l_scope is
safe.

Reported-by: Andreas Schwab <schwab@suse.de>
Closes: BZ #31317
Fixes: e0590f41fe ("RISC-V: Enable static-pie.")
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2024-03-25 15:17:13 +01:00
Sergey Bugaev
dc1a77269c htl: Implement some support for TLS_DTV_AT_TP
Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-ID: <20240323173301.151066-19-bugaevc@gmail.com>
2024-03-23 23:00:30 +01:00
Sergey Bugaev
a4273efa21 htl: Respect GL(dl_stack_flags) when allocating stacks
Previously, HTL would always allocate non-executable stacks.  This has
never been noticed, since GNU Mach on x86 ignores VM_PROT_EXECUTE and
makes all pages implicitly executable.  Since GNU Mach on AArch64
supports non-executable pages, HTL forgetting to pass VM_PROT_EXECUTE
immediately breaks any code that (unfortunately, still) relies on
executable stacks.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-ID: <20240323173301.151066-7-bugaevc@gmail.com>
2024-03-23 22:48:44 +01:00
Sergey Bugaev
b467cfcaee hurd: Use the RETURN_ADDRESS macro
This gives us PAC stripping on AArch64.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-ID: <20240323173301.151066-6-bugaevc@gmail.com>
2024-03-23 22:48:01 +01:00
Sergey Bugaev
6afeac1289 hurd: Disable Prefer_MAP_32BIT_EXEC on non-x86_64 for now
While we could support it on any architecture, the tunable is currently
only defined on x86_64.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-ID: <20240323173301.151066-5-bugaevc@gmail.com>
2024-03-23 22:47:46 +01:00
Sergey Bugaev
49aa652db8 Allow glibc to be compiled without EXEC_PAGESIZE
We would like to avoid statically defining any specific page size on
aarch64-gnu, and instead make sure that everything uses the dynamic
page size, available via vm_page_size and GLRO(dl_pagesize).

There are currently a few places in glibc that require EXEC_PAGESIZE
to be defined. Per Roland's suggestion [0], drop the static
GLRO(dl_pagesize) initializers (for now, only if EXEC_PAGESIZE is not
defined), and don't require EXEC_PAGESIZE definition for libio to
enable mmap usage.

[0]: https://mail.gnu.org/archive/html/bug-hurd/2011-10/msg00035.html

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-ID: <20240323173301.151066-4-bugaevc@gmail.com>
2024-03-23 22:47:26 +01:00
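For context, a tiny example of the dynamic page-size query that the commit above prefers over a static EXEC_PAGESIZE; this is ordinary application-level code, not the internal GLRO(dl_pagesize) machinery.

  #include <stdio.h>
  #include <unistd.h>

  int
  main (void)
  {
    long pagesize = sysconf (_SC_PAGESIZE);   /* determined at run time */
    printf ("page size: %ld bytes\n", pagesize);
    return 0;
  }
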
Sergey Bugaev
4648bfbbde hurd: Stop relying on VM_MAX_ADDRESS
We'd like to avoid committing to a specific size of virtual address
space (i.e. the value of VM_AARCH64_T0SZ) on AArch64.  While the current
version of GNU Mach still exports VM_MAX_ADDRESS for compatibility, we
should try to avoid relying on it when we can.  This piece of logic in
_hurdsig_getenv () doesn't actually care about the size of user-
accessible virtual address space, it just wants to preempt faults on any
addresses starting from the value of the P pointer and above.  So, use
(unsigned long int) -1 instead of VM_MAX_ADDRESS.

While at it, change the casts to (unsigned long int) and not just
(long int), since the type of struct hurd_signal_preemptor.{first,last}
is unsigned long int.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-ID: <20240323173301.151066-3-bugaevc@gmail.com>
2024-03-23 22:44:02 +01:00
Sergey Bugaev
7f02511e5b hurd: Move internal functions to internal header
Move _hurd_self_sigstate (), _hurd_critical_section_lock (), and
_hurd_critical_section_unlock () inline implementations (that were
already guarded by #if defined _LIBC) to the internal version of the
header.  While at it, add <tls.h> to the includes, and use
__LIBC_NO_TLS () unconditionally.

Signed-off-by: Sergey Bugaev <bugaevc@gmail.com>
Message-ID: <20240323173301.151066-2-bugaevc@gmail.com>
2024-03-23 22:43:07 +01:00
Stafford Horne
4a13b3ef46 stdlib: Fix tst-makecontext2 log when swapcontext fails
The log incorrectly prints that setcontext failed.  Update it to
indicate that swapcontext failed.

Signed-off-by: Stafford Horne <shorne@gmail.com>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-03-23 08:36:28 +00:00
Stafford Horne
ad05a42370 or1k: Add prctl wrapper to unwrap variadic args
On OpenRISC, variadic functions and regular functions have different
calling conventions, so this wrapper is needed to translate.  This
wrapper is copied from x86_64/x32.  I don't know the build system well
enough to find a cleaner way to share the code between x86_64/x32 and
or1k (maybe Implies?), so I went with the straight copy.

This fixes test failures:

  misc/tst-prctl
  nptl/tst-setgetname
2024-03-22 15:43:34 +00:00
Stafford Horne
df7e29e2a4 or1k: Only define fpu rounding and exceptions with hard-float
This fixes the test failure:

  math/test-fenv

If rounding mode and exception macros are defined, the fenv tests run
and always fail.  This patch adds an ifdef using the __or1k_hard_float__
macro provided by gcc to avoid defining these fenv macros when they
cannot be used.  This is similar to what is done for csky.

Note: I will post the or1k hard-float support soon, so I prefer to
leave the hard-float bits here for now.
2024-03-22 15:43:34 +00:00
Stafford Horne
2e982a3937 or1k: Update libm test ulps
To fix test failures:

    FAIL: math/test-float-hypot
    FAIL: math/test-float32-hypot
2024-03-22 15:43:34 +00:00
Wilco Dijkstra
2e94e2f5d2 AArch64: Check kernel version for SVE ifuncs
Old Linux kernels disable SVE after every system call.  Calling the
SVE-optimized memcpy afterwards will then cause a trap to reenable SVE.
As a result, applications with a high use of syscalls may run slower with
the SVE memcpy.  This is true for kernels from 4.15.0 up to, but not
including, 6.2.0, except for 5.14.0, which was patched.  Avoid this by
checking the kernel version and selecting the SVE ifunc on modern
kernels.

Parse the kernel version reported by uname() into a 24-bit kernel.major.minor
value without calling any library functions.  If uname() is not supported or
if the version format is not recognized, assume the kernel is modern.

Tested-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
2024-03-21 16:50:51 +00:00
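A rough sketch of the version parsing described above, assuming the same 8-bits-per-component packing; the real selection lives in the AArch64 ifunc code and avoids library calls entirely, while this standalone version uses stdio only to display the result, and it omits the 5.14.0 special case.

  #include <stdio.h>
  #include <sys/utsname.h>

  static unsigned int
  parse_kernel_version (void)
  {
    struct utsname buf;
    if (uname (&buf) != 0)
      return 0xffffff;                  /* unknown: assume a modern kernel */

    unsigned int version = 0, field = 0;
    int parts = 0;
    for (const char *p = buf.release; ; p++)
      {
        if (*p >= '0' && *p <= '9')
          field = field * 10 + (unsigned int) (*p - '0');
        else
          {
            version = (version << 8) | (field & 0xff);
            field = 0;
            if (++parts == 3 || *p == '\0')
              break;
          }
      }
    while (parts++ < 3)
      version <<= 8;
    return version;                     /* e.g. "6.2.0" -> 0x060200 */
  }

  int
  main (void)
  {
    unsigned int v = parse_kernel_version ();
    printf ("kernel %u.%u.%u, SVE memcpy eligible: %s\n",
            v >> 16, (v >> 8) & 0xff, v & 0xff,
            v >= 0x060200 ? "yes" : "no");  /* 5.14.0 exception omitted */
    return 0;
  }
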
Amrita H S
1ea0511456 powerpc: Placeholder and infrastructure/build support to add Power11 related changes.
The following three changes have been added to provide initial Power11 support.
    1. Add the directories to hold Power11 files.
    2. Add support to select Power11 libraries based on AT_PLATFORM.
    3. Let submachine=power11 be set automatically.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
2024-03-19 21:11:34 -05:00
Manjunath Matti
3ab9b88e2a powerpc: Add HWCAP3/HWCAP4 data to TCB for Power Architecture.
This patch adds a new feature for powerpc.  In order to get faster
access to the HWCAP3/HWCAP4 masks, similar to HWCAP/HWCAP2 (i.e. for
implementing __builtin_cpu_supports() in GCC) without the overhead of
reading them from the auxiliary vector, we now reserve space for them
in the TCB.

This is an ABI change for GLIBC 2.39.

Suggested-by: Peter Bergner <bergner@linux.ibm.com>
Reviewed-by: Peter Bergner <bergner@linux.ibm.com>
2024-03-19 17:19:27 -05:00
Adhemerval Zanella
3d53d18fc7 elf: Enable TLS descriptor tests on aarch64
The aarch64 port uses 'trad' for traditional tls and 'desc' for tls
descriptors, but unlike other targets it defaults to 'desc'.  The
gnutls2 configure check does not set aarch64 as an ABI that uses TLS
descriptors, which then disables some tests.

Also rename the internal machinery from gnu2 to tls descriptors.

Checked on aarch64-linux-gnu.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-03-19 14:53:30 -03:00
Adhemerval Zanella
64c7e34428 arm: Update _dl_tlsdesc_dynamic to preserve caller-saved registers (BZ 31372)
ARM _dl_tlsdesc_dynamic slow path has two issues:

  * The ip/r12 register is defined by the AAPCS as a scratch register,
    and gcc uses it to save the stack pointer before some function
    calls.  So it should be saved/restored as well.  This fixes
    tst-gnu2-tls2.

  * None of the possible VFP registers are saved/restored.  ARM has the
    additional complexity of different VFP bank sizes (depending on the
    VFP support of the chip).

The tst-gnu2-tls2 test is extended to check for VFP registers, although
only for hardfp builds.  Unlike setcontext, _dl_tlsdesc_dynamic does not
handle HWCAP_ARM_IWMMXT (I don't have a way to properly test it, and it
is almost a decade since newer hardware was released).

With this patch there is no need to mark tst-gnu2-tls2 as XFAIL.

Checked on arm-linux-gnueabihf.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-03-19 14:53:30 -03:00
Adhemerval Zanella
968b0ca944 Ignore undefined symbols for -mtls-dialect=gnu2
So it does not fail for arm configurations that default to -mtp=soft
(which issues a call to __aeabi_read_tp).
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-03-19 14:53:30 -03:00
Andreas Schwab
fd7ee2e6c5 Add tst-gnu2-tls2mod1 to test-internal-extras
That allows sysdeps/x86_64/tst-gnu2-tls2mod1.S to use internal headers.

Fixes: 717ebfa85c ("x86-64: Allocate state buffer space for RDI, RSI and RBX")
2024-03-19 14:28:28 +01:00
H.J. Lu
717ebfa85c x86-64: Allocate state buffer space for RDI, RSI and RBX
_dl_tlsdesc_dynamic preserves RDI, RSI and RBX before realigning stack.
After realigning stack, it saves RCX, RDX, R8, R9, R10 and R11.  Define
TLSDESC_CALL_REGISTER_SAVE_AREA to allocate space for RDI, RSI and RBX
to avoid clobbering saved RDI, RSI and RBX values on stack by xsave to
STATE_SAVE_OFFSET(%rsp).

   +==================+<- stack frame start aligned at 8 or 16 bytes
   |                  |<- RDI saved in the red zone
   |                  |<- RSI saved in the red zone
   |                  |<- RBX saved in the red zone
   |                  |<- paddings for stack realignment of 64 bytes
   |------------------|<- xsave buffer end aligned at 64 bytes
   |                  |<-
   |                  |<-
   |                  |<-
   |------------------|<- xsave buffer start at STATE_SAVE_OFFSET(%rsp)
   |                  |<- 8-byte padding for 64-byte alignment
   |                  |<- 8-byte padding for 64-byte alignment
   |                  |<- R11
   |                  |<- R10
   |                  |<- R9
   |                  |<- R8
   |                  |<- RDX
   |                  |<- RCX
   +==================+<- RSP aligned at 64 bytes

Define TLSDESC_CALL_REGISTER_SAVE_AREA, the total register save area
size for all integer registers, by adding 24 to STATE_SAVE_OFFSET, since
RDI, RSI and RBX are saved onto the stack without adjusting the stack
pointer first, using the red zone.  This fixes BZ #31501.
Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>
2024-03-18 19:45:13 -07:00
Darius Rad
f44f3aed31 riscv: Update nofpu libm test ulps
Fix two test failures.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2024-03-18 11:28:50 +01:00
Joseph Myers
4b0860d029 Add STATX_MNT_ID_UNIQUE from Linux 6.8 to bits/statx-generic.h
Linux 6.8 adds a new STATX_MNT_ID_UNIQUE constant.  Add it to glibc's
bits/statx-generic.h.

Tested for x86_64.
2024-03-15 22:22:50 +00:00
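A short usage sketch for the new constant; it assumes a Linux 6.8+ kernel and headers that already define STATX_MNT_ID_UNIQUE, and error handling is omitted.

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/stat.h>

  int
  main (void)
  {
    struct statx stx;
    if (statx (AT_FDCWD, ".", 0, STATX_MNT_ID_UNIQUE, &stx) == 0
        && (stx.stx_mask & STATX_MNT_ID_UNIQUE) != 0)
      printf ("unique mount ID: %llu\n",
              (unsigned long long) stx.stx_mnt_id);
    else
      puts ("STATX_MNT_ID_UNIQUE not reported by this kernel");
    return 0;
  }
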
Florian Weimer
7a76f21867 linux: Use rseq area unconditionally in sched_getcpu (bug 31479)
Originally, nptl/descr.h included <sys/rseq.h>, but we removed that in
commit 2c6b4b272e ("nptl: Unconditionally use a 32-byte rseq area").
After that, it was no longer ensured that the RSEQ_SIG macro (whose
definition that header provided) was defined during sched_getcpu.c
compilation.  This commit always checks the rseq area for CPU number
information before using the other approaches.

This adds an unnecessary (but well-predictable) branch on
architectures which do not define RSEQ_SIG, but its cost is small
compared to the system call.  Most architectures that have vDSO
acceleration for getcpu also have rseq support.

Fixes: 2c6b4b272e
Fixes: 1d350aa060
Reviewed-by: Arjun Shankar <arjun@redhat.com>
2024-03-15 19:08:24 +01:00
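A user-space sketch of the rseq-first approach described above; it is not the glibc implementation, it assumes glibc 2.35+ exporting __rseq_offset/__rseq_size via <sys/rseq.h>, a compiler providing __builtin_thread_pointer(), and it simplifies the fallback to a raw getcpu syscall.

  #include <stdio.h>
  #include <sys/rseq.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static int
  getcpu_sketch (void)
  {
    if (__rseq_size > 0)                  /* rseq area registered by glibc */
      {
        struct rseq *rs
          = (struct rseq *) ((char *) __builtin_thread_pointer ()
                             + __rseq_offset);
        unsigned int cpu = rs->cpu_id;
        if ((int) cpu >= 0)               /* negative means unregistered */
          return (int) cpu;
      }
    unsigned int cpu;
    if (syscall (SYS_getcpu, &cpu, NULL, NULL) == 0)  /* fallback path */
      return (int) cpu;
    return -1;
  }

  int
  main (void)
  {
    printf ("running on CPU %d\n", getcpu_sketch ());
    return 0;
  }
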
Szabolcs Nagy
73c26018ed aarch64: fix check for SVE support in assembler
Due to GCC bug 110901, -mcpu can override the -march setting when
compiling asm code, and thus a compiler targeting a specific cpu can
fail the configure check even when binutils gas supports SVE.

The workaround is that an explicit .arch directive overrides both -mcpu
and -march, and since that's what the actual SVE memcpy uses, the
configure check should use that too, even if the GCC issue is fixed
independently.

Reviewed-by: Florian Weimer <fweimer@redhat.com>
2024-03-14 14:27:56 +00:00
Joseph Myers
2367bf468c Update kernel version to 6.8 in header constant tests
This patch updates the kernel version in the tests tst-mman-consts.py,
tst-mount-consts.py and tst-pidfd-consts.py to 6.8.  (There are no new
constants covered by these tests in 6.8 that need any other header
changes.)

Tested with build-many-glibcs.py.
2024-03-13 19:46:21 +00:00
Joseph Myers
3de2f8755c Update syscall lists for Linux 6.8
Linux 6.8 adds five new syscalls.  Update syscall-names.list and
regenerate the arch-syscall.h headers with build-many-glibcs.py
update-syscalls.

Tested with build-many-glibcs.py.
2024-03-13 13:57:56 +00:00
Joseph Myers
cba186f2f0 Use Linux 6.8 in build-many-glibcs.py
This patch makes build-many-glibcs.py use Linux 6.8.

Tested with build-many-glibcs.py (host-libraries, compilers and glibcs
builds).
2024-03-13 13:30:30 +00:00
Adhemerval Zanella
4a76fb1da8 powerpc: Remove power8 strcasestr optimization
Similar to strstr (1e9a550ba4), power8 strcasestr does not show much
improvement compared to the generic implementation.  The geomean
on bench-strcasestr shows:

            __strcasestr_power8  __strcasestr_ppc
  power10                  1159              1120
  power9                   1640              1469
  power8                   1787              1904

The strcasestr uses the same 'trick' as power7 strstr to detect
potential quadratic behavior, which only adds overhead for inputs that
trigger quadratic behavior, and it is really a hack.

Checked on powerpc64le-linux-gnu.
Reviewed-by: DJ Delorie <dj@redhat.com>
2024-03-12 17:11:01 -03:00
Adhemerval Zanella
2149da3683 riscv: Fix alignment-ignorant memcpy implementation
The memcpy optimization (commit 587a1290a1) has a series
of mistakes:

  - The implementation is wrong: the chunk size calculation is wrong
    leading to invalid memory access.

  - It adds ifunc support by default, so --disable-multi-arch does
    not work as expected for riscv.

  - It mixes Linux files (the memcpy ifunc selection, which requires
    the vDSO/syscall mechanism) with generic support (the memcpy
    optimization itself).

  - There is no __libc_ifunc_impl_list, which makes testing only
    check the selected implementation instead of all the implementations
    supported by the system.

This patch also simplifies the bits required to enable ifunc: there is
no need for memcopy.h, nor to add Linux-specific files.

The __memcpy_noalignment tail handling now uses a branchless strategy
similar to aarch64 (overlapping 32-bit copies for sizes 4..7 and byte
copies for sizes 1..3).

Checked on riscv64 and riscv32 by explicitly enabling the function
in __libc_ifunc_impl_list on qemu-system.

Changes from v1:
* Implement the memcpy in assembly to correctly handle RISC-V
  strict alignment.
Reviewed-by: Evan Green <evan@rivosinc.com>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
2024-03-12 14:38:08 -03:00
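A C sketch of the branchless tail strategy mentioned above (the actual fix is RISC-V assembly): sizes 4..7 are finished with two overlapping 32-bit copies and sizes 1..3 with overlapping byte stores, so no per-byte loop is needed.  Function and variable names are illustrative.

  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  static void
  copy_tail (unsigned char *dst, const unsigned char *src, size_t n)
  {
    if (n >= 4)                       /* 4..7 bytes: two overlapping words */
      {
        uint32_t head, tail;
        memcpy (&head, src, 4);
        memcpy (&tail, src + n - 4, 4);
        memcpy (dst, &head, 4);
        memcpy (dst + n - 4, &tail, 4);
      }
    else if (n > 0)                   /* 1..3 bytes: overlapping byte stores */
      {
        dst[0] = src[0];
        dst[n / 2] = src[n / 2];
        dst[n - 1] = src[n - 1];
      }
  }
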
Andreas Schwab
2173173d57 linux/sigsetops: fix type confusion (bug 31468)
Each mask in the sigset array is an unsigned long, so fix __sigisemptyset
to use that instead of int.  The __sigword function returns a simple array
index, so it can return int instead of unsigned long.
2024-03-12 10:00:22 +01:00
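A simplified sketch of the type fix described above; it peeks at the glibc sigset_t representation (the __val array) purely for illustration, which is an internal detail.

  #include <signal.h>
  #include <stddef.h>

  /* Accumulate into unsigned long int: an 'int' accumulator would discard
     the upper bits of each mask word on LP64 targets.  */
  static int
  sigisemptyset_sketch (const sigset_t *set)
  {
    unsigned long int ret = 0;
    for (size_t i = 0; i < sizeof (set->__val) / sizeof (set->__val[0]); i++)
      ret |= set->__val[i];
    return ret == 0;
  }
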
caiyinyu
aeee41f1cf LoongArch: Correct {__ieee754, _}_scalb -> {__ieee754, _}_scalbf
2024-03-12 14:07:27 +08:00
Andreas Schwab
513331b788 duplocale: protect use of global locale (bug 23970)
Protect the global locale from being modified while we compute the size of
the locale category names.  That allows the use of the global locale in a
single thread, while all other threads use the thread safe locale
functions.
2024-03-11 09:52:59 +01:00
Sunil K Pandey
b6e3898194 x86-64: Simplify minimum ISA check ifdef conditional with if
Replace the minimum ISA check ifdef conditional with an if.  Since
MINIMUM_X86_ISA_LEVEL and AVX_X86_ISA_LEVEL are compile-time constants,
the compiler will perform constant folding and produce the same
results.

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
2024-03-03 15:47:53 -08:00
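An illustrative before/after of the pattern described above; the macro values are stand-ins, not the real ISA-level definitions, but they show why the plain if is folded to the same code.

  /* Stand-in values for illustration only.  */
  #define MINIMUM_X86_ISA_LEVEL 3
  #define AVX_X86_ISA_LEVEL 3

  /* Before: preprocessor conditional.  */
  const char *
  select_before (void)
  {
  #if MINIMUM_X86_ISA_LEVEL >= AVX_X86_ISA_LEVEL
    return "avx";
  #else
    return "sse2";
  #endif
  }

  /* After: both operands are compile-time constants, so the compiler
     folds the branch and emits the same code.  */
  const char *
  select_after (void)
  {
    if (MINIMUM_X86_ISA_LEVEL >= AVX_X86_ISA_LEVEL)
      return "avx";
    return "sse2";
  }
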
Joe Talbott
d370155b9a manual/tunables - Add entry for enable_secure tunable.
2024-03-01 17:43:03 +00:00
Joe Talbott
18a81441ba NEWS: Move enable_secure_tunable from 2.39 to 2.40.
2024-03-01 17:37:31 +00:00
Evan Green
587a1290a1 riscv: Add and use alignment-ignorant memcpy
For CPU implementations that can perform unaligned accesses with little
or no performance penalty, create a memcpy implementation that does not
bother aligning buffers. It will use a block of integer registers, a
single integer register, and fall back to bytewise copy for the
remainder.

Signed-off-by: Evan Green <evan@rivosinc.com>
Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2024-03-01 07:15:01 -08:00
Evan Green
a2b47f7d46 riscv: Add ifunc helper method to hwprobe.h
Add a little helper method so it's easier to fetch a single value from
the hwprobe function when used within an ifunc selector.

Signed-off-by: Evan Green <evan@rivosinc.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2024-03-01 07:15:00 -08:00
Evan Green
a29bb320a1 riscv: Enable multi-arg ifunc resolvers
RISC-V is apparently the first architecture to pass more than one
argument to ifunc resolvers. The helper macros in libc-symbols.h,
__ifunc_resolver(), __ifunc(), and __ifunc_hidden(), are incompatible
with this. These macros have an "arg" (non-final) parameter that
represents the parameter signature of the ifunc resolver. The result is
an inability to pass the required comma through in a single preprocessor
argument.

Rearrange the __ifunc_resolver() macro to be variadic, and pass the
types as those variable parameters. Move the guts of __ifunc() and
__ifunc_hidden() into new macros, __ifunc_args(), and
__ifunc_args_hidden(), that pass the variable arguments down through to
__ifunc_resolver(). Then redefine __ifunc() and __ifunc_hidden(), which
are used in a bunch of places, to simply shuffle the arguments down into
__ifunc_args[_hidden]. Finally, define a riscv-ifunc.h header, which
provides convenience macros to those looking to write ifunc selectors
that use both arguments.

Signed-off-by: Evan Green <evan@rivosinc.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2024-03-01 07:14:59 -08:00
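A stripped-down sketch of the preprocessor issue and the variadic fix described above; the macro names are simplified stand-ins for the libc-symbols.h ones, not the real definitions.

  #include <stdint.h>

  /* Before: the whole resolver signature must fit in one macro argument,
     so a signature containing a comma cannot be passed through.  */
  #define DECLARE_IFUNC_RESOLVER_OLD(name, arg) \
    void *name##_ifunc (arg)

  /* After: variadic, so a multi-argument signature passes straight
     through to the expansion.  */
  #define DECLARE_IFUNC_RESOLVER_NEW(name, ...) \
    void *name##_ifunc (__VA_ARGS__)

  /* DECLARE_IFUNC_RESOLVER_OLD (memcpy_impl, uint64_t hwcap, void *probe)
     would not compile: the macro expects exactly two arguments.  */
  DECLARE_IFUNC_RESOLVER_NEW (memcpy_impl, uint64_t hwcap, void *probe);
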
Evan Green
78308ce77a riscv: Add __riscv_hwprobe pointer to ifunc calls
The new __riscv_hwprobe() function is designed to be used by ifunc
selector functions. This presents a challenge for applications and
libraries, as ifunc selectors are invoked before all relocations have
been performed, so an external call to __riscv_hwprobe() from an ifunc
selector won't work. To address this, pass a pointer to the
__riscv_hwprobe() function into ifunc selectors as the second
argument (alongside dl_hwcap, which was already being passed).

Include a typedef as well for convenience, so that ifunc users don't
have to go through contortions to call this routine. Users will need to
remember to check the second argument for NULL, to account for older
glibcs that don't pass the function.

Signed-off-by: Evan Green <evan@rivosinc.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2024-03-01 07:14:58 -08:00
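A hedged sketch of an ifunc resolver of the shape described above: dl_hwcap plus a pointer to __riscv_hwprobe() as the second argument, which must be checked for NULL on older glibc.  The struct layout and key/value constants mirror the Linux hwprobe UAPI as understood here and should be verified against <asm/hwprobe.h>; the implementation names are made up for the example.

  #include <stddef.h>
  #include <stdint.h>

  /* Assumed to match the Linux UAPI; verify against <asm/hwprobe.h>.  */
  struct riscv_hwprobe { int64_t key; uint64_t value; };
  #define RISCV_HWPROBE_KEY_CPUPERF_0   5
  #define RISCV_HWPROBE_MISALIGNED_MASK 0x7
  #define RISCV_HWPROBE_MISALIGNED_FAST 0x3

  typedef int (*hwprobe_fn_t) (struct riscv_hwprobe *, size_t, size_t,
                               unsigned long int *, unsigned int);
  typedef void *(*memcpy_fn_t) (void *, const void *, size_t);

  extern void *memcpy_generic (void *, const void *, size_t);
  extern void *memcpy_noalignment (void *, const void *, size_t);

  static memcpy_fn_t
  memcpy_resolver (uint64_t hwcap, hwprobe_fn_t hwprobe)
  {
    struct riscv_hwprobe pair = { RISCV_HWPROBE_KEY_CPUPERF_0, 0 };
    (void) hwcap;

    if (hwprobe != NULL                     /* older glibc passes nothing */
        && hwprobe (&pair, 1, 0, NULL, 0) == 0
        && (pair.value & RISCV_HWPROBE_MISALIGNED_MASK)
           == RISCV_HWPROBE_MISALIGNED_FAST)
      return memcpy_noalignment;
    return memcpy_generic;
  }
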
Evan Green
e7919e0db2 riscv: Add hwprobe vdso call support
The new riscv_hwprobe syscall also comes with a vDSO for faster answers
to your most common questions. Call in today to speak with a kernel
representative near you!

Signed-off-by: Evan Green <evan@rivosinc.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2024-03-01 07:14:57 -08:00
Evan Green
c6c33339b4 linux: Introduce INTERNAL_VSYSCALL
Add an INTERNAL_VSYSCALL() macro that makes a vDSO call, falling back to
a regular syscall, but without setting errno. Instead, the return value
is plumbed straight out of the macro.

Signed-off-by: Evan Green <evan@rivosinc.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2024-03-01 07:14:56 -08:00
Evan Green
426d0e1aa8 riscv: Add Linux hwprobe syscall support
Add awareness and a thin wrapper function around a new Linux system call
that allows callers to get architecture and microarchitecture
information about the CPUs from the kernel. This can be used to
do things like dynamically choose a memcpy implementation.

Signed-off-by: Evan Green <evan@rivosinc.com>
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
2024-03-01 07:14:55 -08:00