* "Wakes the count of currently sleeping threads without guaranteed respect for ordering."
 * "Assuming correctness of your mutex paths, this will wake all threads up to your everyone-be-alert condition."
* [...]
 * "Schedules a single thread for wake-up without guaranteed respect for ordering."
*
 * A simple explanation:
*---------------------------------------------
* Problematic broadcasts:
*
* Under mutex:
* Sleep: [A, B, C]
*
 * Under or out of mutex (it doesn't matter, so long as the write itself was under lock *or* the write condition was fenced against the condition's sleep count update via a later call to mutex::lock()):
* ~awake all? shutdown condition? who knows~
* Broadcast
*
* Out of mutex (!!!):
 * if (~missed or incorrect !work-available check before sleep~)
* Sleep: [D]
* // given that WaitForSignal forces you to unlock and relock a mutex, this illogical branch should never happen
*
* Effect:
* Awake: [B, C, D]
* Sleeping: [A]
*
*---------------------------------------------
* Problematic signals:
*
* Under mutex:
* Sleep: [A, B, C]
*
* Not under mutex:
* Signal
*
* Under mutex:
* Sleep: [D]
*
* Effect:
* Awake: [D]
* Sleeping: [A, B, C]
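 *
 * The same predicate discipline makes signal order irrelevant; a sketch in standard C++ (illustrative names, not the Aurora API). Any waiter that observes available work may take it, so it does not matter whether D or A wakes first:
 *
```cpp
#include <condition_variable>
#include <mutex>

static std::mutex gMutex;
static std::condition_variable gCond;
static int gWorkCount = 0;

// Whichever thread wakes and sees gWorkCount > 0 may consume the item;
// FIFO wake order buys nothing here.
int TakeWork()
{
    std::unique_lock<std::mutex> lock(gMutex);
    gCond.wait(lock, [] { return gWorkCount > 0; });
    return --gWorkCount;
}

// The counter update is under the lock; the notify itself may then be
// issued outside the lock without losing a wakeup.
void SubmitWork()
{
    {
        std::lock_guard<std::mutex> lock(gMutex);
        gWorkCount++;
    }
    gCond.notify_one();
}
```
 *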
*
*---------------------------------------------
* Cause:
 * The abstract condition accurately accounts for the number of threads sleeping, not their order.
 * This is usually a good thing, because ordering under a spinloop generally does not happen in time and/or does not matter.
 * To implement ordering is to implement cache thrashing and increased context switching for an abstract idea of "correctness" that applies to neither real code nor performance goals.
* (spoilers: your work pool of uniform priorities couldn't care less which thread wakes up first, nor does a single waiter pattern, but your end product will certainly bleed performance with yield thrashing)
* ( : the same can be said for semaphores; what part of waiting while an available work count is zero needs ordering?)
 * ( : yield thrashing which, might I add, serves no purpose other than getting the right thread-local context and decoupled-from-parent thread id of a context on a given physical core of a highly limited set)
 * * Ensure the sleep condition is correct, and that the mutex is properly locked, before calling Aurora condition primitives
 * (why the fuck would you be sleeping on a variable-state observer without checking its state, causing an unwanted deficit? this is counter to the purpose of using a condvar.)
 * * Alternatively: increase the size of the condition variable to account for a counter and implement inefficient rescheduling, just to fix buggy code
 * (no thanks; implement ticket primitives yourself, see: the hacky workaround.)
*
*---------------------------------------------
* The hacky workaround:
 * * If you can somehow justify this (I doubt it, but if you can), you can force the slow-path ordered condvar+mutex+semaphore behavior by using AuFutex[Mutex/Semaphore/Cond] with ThreadingConfig::bPreferEmulatedWakeOnAddress = true.
 *    You can further eliminate the fast paths to remove fast-path reordering; but really, if you care that much, you should implement your own ticket primitives over AuThreading WakeOnAddress with bPreferEmulatedWakeOnAddress = true for the guarantee of a FIFO WoA interface.
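 *
 * For reference, a ticket primitive reduces to something like the following standard C++ sketch (illustrative only; a real implementation would park on a WakeOnAddress-style wait instead of yielding):
 *
```cpp
#include <atomic>
#include <thread>

// Minimal FIFO ticket lock: each thread takes a ticket, then waits until
// the now-serving counter reaches it. Wake order is strictly ticket order.
struct TicketLock
{
    std::atomic<unsigned> uNextTicket {0};
    std::atomic<unsigned> uNowServing {0};

    void Lock()
    {
        unsigned uMine = this->uNextTicket.fetch_add(1, std::memory_order_relaxed);
        while (this->uNowServing.load(std::memory_order_acquire) != uMine)
        {
            std::this_thread::yield(); // real impl: WakeOnAddress-style sleep
        }
    }

    void Unlock()
    {
        this->uNowServing.fetch_add(1, std::memory_order_release);
    }
};
```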