Some SPE opcodes clash with some recent PowerISA opcodes, and until
recently gas did not complain about it. However, binutils recently
changed this, and a VLE-configured gas no longer assembles some
instructions that might clash with VLE (HTM, for instance). It also
does not help that glibc builds hardware lock elision support by
default (regardless of assembler support).
Although the runtime will not actually enable TLE on SPE hardware
(since the kernel will not advertise it), I see little advantage in
adding HTM support to an SPE-built glibc. SPE uses an incompatible
ABI which does not allow sharing the same build with default
powerpc, and the HTM code slows down SPE without any benefit.
This patch fixes it by building the HTM code only when the SPE
configuration is not used.
Checked with a powerpc-linux-gnuspe build. I also ran some sniff
tests on e500 hardware without any issue.
[BZ #22926]
* sysdeps/powerpc/powerpc32/sysdep.h (ABORT_TRANSACTION_IMPL): Define
empty for __SPE__.
* sysdeps/powerpc/sysdep.h (ABORT_TRANSACTION): Likewise.
* sysdeps/unix/sysv/linux/powerpc/elision-lock.c (__lll_lock_elision):
Do not build hardware transactional code for __SPE__.
* sysdeps/unix/sysv/linux/powerpc/elision-trylock.c
(__lll_trylock_elision): Likewise.
* sysdeps/unix/sysv/linux/powerpc/elision-unlock.c
(__lll_unlock_elision): Likewise.
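A minimal sketch of the guard described above (illustrative names,
not the actual glibc sources): the HTM elision path is compiled only
when glibc is not configured for SPE, so SPE builds always fall
through to the conventional lock.

  static int try_htm_elision (int *lock);    /* tbegin-based path, omitted here */
  static int take_normal_lock (int *lock);   /* futex/atomic path, omitted here */

  static int
  elided_lock (int *lock, short *adapt_count)
  {
  #ifndef __SPE__
    if (*adapt_count <= 0 && try_htm_elision (lock) == 0)
      return 0;                    /* Critical section will run elided.  */
  #endif
    return take_normal_lock (lock);  /* SPE builds always end up here.  */
  }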
The update of *adapt_count after the release of the lock causes a race
condition: thread A unlocks, thread B continues and destroys the
mutex, and thread A then writes to the now-destroyed *adapt_count.
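A sketch of the problematic ordering (illustrative, not the verbatim
glibc unlock path); the fix performs the adapt_count update before
the lock release:

  static void
  racy_unlock (int *lock, short *adapt_count, int pshared)
  {
    lll_unlock (*lock, pshared);  /* Thread B may now lock, unlock and
                                     pthread_mutex_destroy the mutex.  */
    (*adapt_count)--;             /* Thread A then writes to memory that
                                     may already have been freed; the fix
                                     moves this update before the release
                                     above.  */
  }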
Work around a GCC behavior with the hardware transactional memory
built-ins. GCC does not treat the PowerPC transactional built-ins as
compiler barriers, allowing it to move instructions past the
transaction boundaries and alter the transaction's atomicity.
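One conventional workaround, sketched here assuming GCC with -mhtm
(this is not the exact glibc macro), is to surround the builtin with
empty asm statements carrying a memory clobber, so the compiler
cannot move memory accesses across the transaction boundary:

  static inline unsigned int
  tbegin_with_barriers (void)
  {
    unsigned int ret;
    __asm__ volatile ("" ::: "memory");  /* barrier before the tbegin */
    ret = __builtin_tbegin (0);
    __asm__ volatile ("" ::: "memory");  /* barrier after the tbegin  */
    return ret;
  }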
The skip_lock_out_of_tbegin_retries adaptive parameter was not being
used correctly, nor as described. With the fix, a transient abort
within the accepted number of retries no longer forces all users of
the lock onto the fallback path.
[BZ #19174]
* sysdeps/powerpc/nptl/elide.h (__elide_lock): Fix usage of
.skip_lock_out_of_tbegin_retries.
* sysdeps/unix/sysv/linux/powerpc/elision-lock.c
(__lll_lock_elision): Likewise, and respect a value of
try_tbegin <= 0.
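A sketch of the intended retry logic (assumed shape, not the verbatim
glibc source; aconf and adapt_count follow the glibc elision code,
take_normal_lock stands in for the fallback path, and the
persistent-failure check of the real code is omitted):

  static int
  elision_lock_retry (int *lock, short *adapt_count)
  {
    for (int i = aconf.try_tbegin; i > 0; i--)
      {
        if (__builtin_tbegin (0))
          {
            if (*lock == 0)
              return 0;              /* Lock free: the critical section
                                        runs inside the transaction.  */
            __builtin_tabort (0);    /* Lock busy: roll back; the tbegin
                                        above then returns 0.  */
          }
        /* The transaction failed to start or was aborted.  Only when the
           retry budget is exhausted do later callers skip elision; a
           single transient abort inside the budget no longer does so.  */
        if (i == 1 && aconf.skip_lock_out_of_tbegin_retries > 0)
          *adapt_count = aconf.skip_lock_out_of_tbegin_retries;
      }
    return take_normal_lock (lock);  /* Fall back to the default lock.  */
  }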
With TLE enabled, the adapt_count variable update incurs an 8%
overhead before entering the critical section of an elided mutex.
If it is instead done right after leaving the critical section,
this serialization can be avoided.
This alters the existing behavior of __lll_trylock_elision, which
now only decrements adapt_count if it successfully acquires the
lock.
* sysdeps/unix/sysv/linux/powerpc/elision-lock.c
(__lll_lock_elision): Remove adapt_count decrement...
* sysdeps/unix/sysv/linux/powerpc/elision-trylock.c
(__lll_trylock_elision): Likewise.
* sysdeps/unix/sysv/linux/powerpc/elision-unlock.c
(__lll_unlock_elision): ... to here. And utilize
new adapt_count parameter.
* sysdeps/unix/sysv/linux/powerpc/lowlevellock.h
(__lll_unlock_elision): Update to include adapt_count
parameter.
(lll_unlock_elision): Pass pointer to adapt_count
variable.
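A sketch of the reorganized unlock path (names follow the ChangeLog;
the body is illustrative and also keeps the adapt_count update before
the futex release, per the write-after-destruction fix described
earlier):

  int
  __lll_unlock_elision (int *lock, short *adapt_count, int pshared)
  {
    if (*lock == 0)
      __libc_tend (0);              /* Critical section was elided.  */
    else
      {
        /* The lock was really taken: update the adaptive counter here,
           after the critical section, instead of serializing threads on
           their way into it.  */
        if (*adapt_count > 0)
          (*adapt_count)--;
        lll_unlock (*lock, pshared);
      }
    return 0;
  }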
Power ISA 2.07B section B.5.5 relaxed the barrier requirement around
a TLE-enabled lock; it is now identical to that of a traditional lock.
2015-08-26 Paul E. Murphy <murphyp@linux.vnet.ibm.com>
* sysdeps/unix/sysv/linux/powerpc/elision-lock.c
(__arch_compare_and_exchange_val_32_acq): Remove and use common
definition. ISA 2.07B no longer requires full sync.
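In practice this means an acquire-ordered compare-and-exchange is
enough to take a TLE-enabled lock. A sketch using the GCC __atomic
builtins (an illustration of the semantics, not the glibc macro that
was removed):

  #include <stdbool.h>

  static inline bool
  lock_try_acquire (int *lock)
  {
    int expected = 0;
    /* Acquire ordering only; no heavyweight sync is required before the
       reservation loop under ISA 2.07B section B.5.5.  */
    return __atomic_compare_exchange_n (lock, &expected, 1, false,
                                        __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
  }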
This patch adds support for lock elision using ISA 2.07 hardware
transactional memory instructions for the pthread_mutex primitives.
Similar to the s390 version, the force-elision logic defined in
'force-elision.h' is only enabled if ENABLE_LOCK_ELISION is defined.
Also, the lock elision code can be built even with a compiler that
does not provide HTM support through builtins.
However, I have noted that the performance is sub-optimal due to
scheduling pressures.
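The core pattern, sketched here assuming GCC's HTM builtins with
-mhtm (glibc can also emit the instructions through inline assembly
when the builtins are unavailable, as noted above):

  /* Returns 0 when the critical section will run elided inside a
     transaction; nonzero when the caller must take the real lock.  */
  static int
  elide_lock (int *lock)
  {
    if (__builtin_tbegin (0))
      {
        if (*lock == 0)
          return 0;               /* Lock looks free: run elided.  */
        __builtin_tabort (0);     /* Lock busy: abort the transaction.  */
      }
    return 1;                     /* tbegin failed or was aborted.  */
  }

  static void
  elide_unlock (int *lock)
  {
    if (*lock == 0)
      __builtin_tend (0);         /* End the elided critical section.  */
    /* Otherwise release the real lock instead.  */
  }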