c50e1c263e
This patch removes the arch-specific x86 assembly implementation of the low level locks and consolidates the 64-bit and 32-bit variants into a single implementation.

Unlike other architectures, the x86 lll_trylock, lll_lock, and lll_unlock implement a single-thread optimization that avoids the atomic (lock-prefixed) operation by using a plain cmpxchgl instead. This patch implements the same optimization generically through the new single-thread.h definitions, while keeping the previous semantics (a sketch of the fast path follows the ChangeLog entries below).

lll_cond_trylock, lll_cond_lock, and lll_timedlock just use atomic operations plus calls to lll_lock_wait*.

For __lll_lock_wait_private and __lll_lock_wait there is no indication that an assembly implementation is required over the generic one performance-wise.

Checked on x86_64-linux-gnu and i686-linux-gnu.

        * sysdeps/nptl/lowlevellock.h (__lll_trylock): New macro.
        (lll_trylock): Call __lll_trylock.
        * sysdeps/unix/sysv/linux/i386/libc-lowlevellock.S: Remove file.
        * sysdeps/unix/sysv/linux/i386/lll_timedlock_wait.c: Likewise.
        * sysdeps/unix/sysv/linux/i386/lowlevellock.S: Likewise.
        * sysdeps/unix/sysv/linux/i386/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/x86_64/libc-lowlevellock.S: Likewise.
        * sysdeps/unix/sysv/linux/x86_64/lll_timedlock_wait.c: Likewise.
        * sysdeps/unix/sysv/linux/x86_64/lowlevellock.S: Likewise.
        * sysdeps/unix/sysv/linux/x86_64/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/x86/lowlevellock.h: New file.
        * sysdeps/unix/sysv/linux/x86_64/cancellation.S: Include
        lowlevellock-futex.h.
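Below is a minimal, self-contained sketch of the single-thread trylock fast path described above. It is not the glibc code: `single_thread_p` is a hypothetical stand-in for the flag that glibc's single-thread.h exposes, and C11 atomics stand in for glibc's internal atomic macros; on x86 the compare-and-exchange in the slow path is what compiles down to a lock-prefixed cmpxchgl.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the single-thread flag maintained by the
   runtime (glibc's single-thread.h).  */
static bool single_thread_p = true;

/* Sketch of a trylock with a single-thread fast path: skip the atomic
   operation when the process is known to be single-threaded.  Returns 0
   on success, nonzero if the lock was already held.  */
static int
sketch_trylock (atomic_int *futex)
{
  if (single_thread_p)
    {
      /* No other thread can observe the lock word, so plain (relaxed)
         accesses are enough.  */
      int old = atomic_load_explicit (futex, memory_order_relaxed);
      if (old == 0)
        atomic_store_explicit (futex, 1, memory_order_relaxed);
      return old;
    }

  /* Multi-threaded: a real acquire compare-and-exchange.  */
  int expected = 0;
  return !atomic_compare_exchange_strong_explicit (futex, &expected, 1,
                                                   memory_order_acquire,
                                                   memory_order_relaxed);
}

int
main (void)
{
  atomic_int lock = 0;
  printf ("first trylock:  %d\n", sketch_trylock (&lock));  /* 0: acquired */
  printf ("second trylock: %d\n", sketch_trylock (&lock));  /* nonzero: busy */
  return 0;
}
```

Keeping this check on the generic C side rather than in per-arch assembly is what allows the i386 and x86_64 lowlevellock.S files to be removed while preserving the fast path.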