[*] Explicit memory order access barrier when reading WOA_SEMAPHORE_MODE-less bAlive under weakly ordered systems. (5b193411 cont: "[*] Improve regressed AuWoA time to wake")

In all other cases, the memory is either thread-local/write-local or followed up by an indirect acquire/release of the processor's pipeline and L1 cache by virtue of the container's dumb spinlock ::Lock, ::Unlock (...release, ...barrier).
Clang doesn't have /volatile:ms anymore, so we can't rely on that.
Assuming MSVC-like semantics or x86 strong ordering isn't good enough.

(And no, volatile is a fine keyword here; spec pedantry aside, I just need to control over-optimization of de facto weakly ordered access between explicit lockless semaphore yields.)
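For illustration, a minimal sketch of the ordering contract this commit makes explicit. This is not Aurora code: AuAtomicLoad's lowering isn't shown in this diff, so std::atomic stands in for it under the assumption that it compiles down to a plain acquire load, with a release store on the signalling side.

    #include <atomic>
    #include <cstdint>

    using AuUInt8 = std::uint8_t;

    // hypothetical stand-in for the wait entry; names mirror the diff only
    struct WaitEntrySketch
    {
        std::atomic<AuUInt8> bAlive { 1 };

        void YieldLoop()
        {
            // acquire: this read cannot be reordered after later accesses,
            // and it synchronizes-with the release store in Invalidate(),
            // so the waiter sees a coherent view once bAlive drops to 0
            while (bAlive.load(std::memory_order_acquire))
            {
                // spin / park between lockless semaphore yields
            }
        }

        void Invalidate()
        {
            // release: everything written before invalidation is published
            // to the thread that observes bAlive == 0
            bAlive.store(0, std::memory_order_release);
        }
    };

Under MSVC's /volatile:ms, the volatile read happened to carry acquire semantics for free; that is exactly the crutch the message above says clang no longer provides, hence the explicit AuAtomicLoad.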
Reece Wilson 2024-06-23 04:08:58 +01:00
parent 114976a71d
commit 035d822ec1
2 changed files with 6 additions and 5 deletions

(first changed file)

@@ -136,7 +136,8 @@ namespace Aurora::Threading
         Win32DropSchedulerResolution();
 #endif
-        if (!this->bAlive)
+        // potentially release/acquire-less by virtue of the lockless semaphore mode
+        if (!AuAtomicLoad(&this->bAlive))
         {
 #if !defined(WOA_SEMAPHORE_MODE)
             this->mutex.Unlock();
@@ -165,7 +166,7 @@ namespace Aurora::Threading
     {
         while (WaitBuffer::Compare2<eMethod, true>(this->pAddress, this->uSize, state.compare.buffer, state.uDownsizeMask))
         {
-            if (!this->bAlive)
+            if (!AuAtomicLoad(&this->bAlive))
             {
 #if !defined(WOA_SEMAPHORE_MODE)
                 this->mutex.Unlock();
@@ -646,7 +647,7 @@ namespace Aurora::Threading
     void ProcessWaitNodeContainer::Unlock()
     {
-        this->uAtomic = 0;
+        AuAtomicClearU8Lock(&this->uAtomic);
     }
 #define AddressToIndex AuHashCode(pAddress) & (AuArraySize(this->list) - 1)

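For context on the ProcessWaitNodeContainer::Unlock() change above: a minimal sketch of the dumb spinlock pair the message refers to, assuming AuAtomicClearU8Lock is a release-store of zero (the names and shapes here are stand-ins, not Aurora's actual API). A plain `this->uAtomic = 0` leaves the compiler and a weakly ordered CPU free to sink critical-section writes past the point where the lock reads as free.

    #include <atomic>
    #include <cstdint>

    struct DumbSpinLockSketch
    {
        std::atomic<std::uint8_t> uAtomic { 0 };

        void Lock()
        {
            // acquire on the winning exchange: critical-section accesses
            // cannot be hoisted above the point the lock is taken
            while (uAtomic.exchange(1, std::memory_order_acquire))
            {
                // spin
            }
        }

        void Unlock()
        {
            // release (the assumed effect of AuAtomicClearU8Lock): publishes
            // all writes made under the lock before others see it as free
            uAtomic.store(0, std::memory_order_release);
        }
    };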
(second changed file)

@@ -99,8 +99,8 @@ namespace Aurora::Threading
     EWaitMethod eWaitMethod { EWaitMethod::eNotEqual };
     // bookkeeping (parent container)
-    volatile bool bAlive {};    // wait entry validity. must be rechecked for each spurious or expected wake, if the comparison doesn't break the yield loop.
-                                // if false, and we're still yielding under pCompare == pAddress, we must reschedule with inverse order (as to steal the next signal, as opposed to waiting last)
+    volatile AuUInt8 bAlive {}; // wait entry validity. must be rechecked for each spurious or expected wake, if the comparison doesn't break the yield loop.
+                                // if false, and we're still yielding under pCompare == pAddress, we must reschedule with inverse order (as to steal the next signal, as opposed to waiting last)
     void Release();
     template <EWaitMethod eMethod>
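On the `volatile bool` to `volatile AuUInt8` change: a plausible motivation, assumed rather than stated by the diff, is that atomic helper families are usually instantiated over fixed-width integer types only, so the flag is widened to feed AuAtomicLoad directly. The overloads below are hypothetical illustrations, not Aurora's real signatures.

    #include <cstdint>

    using AuUInt8  = std::uint8_t;
    using AuUInt32 = std::uint32_t;

    // hypothetical overload set in the spirit of AuAtomicLoad: fixed-width
    // integers are covered, plain bool is not
    AuUInt8  AuAtomicLoad(const volatile AuUInt8  *pValue);
    AuUInt32 AuAtomicLoad(const volatile AuUInt32 *pValue);
    // no overload for 'volatile bool', hence the storage type change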