Defer to ThreadingConfig::bEnableRWLockWriteBiasOnReadLock

When true:  reader availability is starved as soon as there is a writer waiting
When false: readers can starve pending write waiters
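To make the trade-off concrete, here is a minimal, hypothetical sketch of what such a bias flag controls, written against plain standard C++ primitives rather than Aurora's actual internals (ToyRWLock and all of its members are made-up names for illustration only):

#include <condition_variable>
#include <mutex>

struct ToyRWLock
{
    // Stand-in for ThreadingConfig::bEnableRWLockWriteBiasOnReadLock
    bool bWriteBiasOnReadLock {};

    std::mutex mutex;
    std::condition_variable readersCV; // readers and writers park separately,
    std::condition_variable writersCV; // so waking one group never wakes the other
    int  iActiveReaders {};
    int  iWaitingWriters {};
    bool bWriterActive {};

    void LockRead()
    {
        std::unique_lock<std::mutex> guard(this->mutex);
        this->readersCV.wait(guard, [&]
        {
            if (this->bWriterActive) return false;
            // Biased: a merely *waiting* writer already blocks new readers,
            // hence "reader availability is starved as soon as there is a
            // writer waiting".
            if (this->bWriteBiasOnReadLock && this->iWaitingWriters) return false;
            return true;
        });
        this->iActiveReaders++;
    }

    void UnlockRead()
    {
        std::unique_lock<std::mutex> guard(this->mutex);
        if (--this->iActiveReaders == 0 && this->iWaitingWriters)
        {
            this->writersCV.notify_one();
        }
    }

    void LockWrite()
    {
        std::unique_lock<std::mutex> guard(this->mutex);
        this->iWaitingWriters++;
        // Unbiased: readers keep streaming through LockRead() above, so this
        // wait can starve - hence "readers can starve pending write waiters".
        this->writersCV.wait(guard, [&]
        {
            return !this->bWriterActive && this->iActiveReaders == 0;
        });
        this->iWaitingWriters--;
        this->bWriterActive = true;
    }

    void UnlockWrite()
    {
        std::unique_lock<std::mutex> guard(this->mutex);
        this->bWriterActive = false;
        if (this->iWaitingWriters)
        {
            this->writersCV.notify_one(); // hand off to a pending writer first
        }
        else
        {
            this->readersCV.notify_all(); // otherwise release the readers
        }
    }
};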
Much like other Aurora primitives, there is no guarantee of ordering.

Reader and writer threads are generally kept separate: a wake operation aimed at many readers will not result in a spurious wake of a writer thread, and vice versa.
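(In the toy sketch above, that separation is exactly what the two distinct condition variables model: notifying writersCV can never wake a thread parked on readersCV, and vice versa.)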
Waits/yields are grouped together by identical wake characteristics, as defined by platform-specific quirks, in the order of write-upgrades, then write-locks, then read-locks - an ordering that matches the logical intent of the abstract RWLock primitive.

To reiterate, only WakeOnAddress in emulation mode is guaranteed to have a FIFO wait/wake-up order - that is to say, tickets or a prio-wake-queue are not used to enforce access order.
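As a rough illustration of why FIFO order can fall out of an emulation layer for free, consider this hypothetical wait-on-address emulation (names and structure are invented for this sketch and are not Aurora's WakeOnAddress implementation): the emulator already has to keep a list of waiters, so waking the oldest one first costs nothing extra.

#include <atomic>
#include <condition_variable>
#include <cstdint>
#include <list>
#include <mutex>

struct WaitEntry
{
    std::condition_variable cv;
    bool bWoken {};
};

// One global list for brevity; a real emulation would bucket waiters by
// address. The oldest waiter sits at the front -> FIFO wake order.
static std::mutex gWaitListLock;
static std::list<WaitEntry *> gWaitList;

// Sleeps while *pAddress still holds the expected value, queued in arrival
// order. (Spurious-wake and value-recheck details are omitted for brevity.)
static void EmulatedWaitOnAddress(const std::atomic<std::uint32_t> *pAddress,
                                  std::uint32_t uExpected)
{
    std::unique_lock<std::mutex> guard(gWaitListLock);
    if (pAddress->load() != uExpected)
    {
        return; // the value already changed; nothing to wait for
    }

    WaitEntry entry;
    gWaitList.push_back(&entry);
    entry.cv.wait(guard, [&] { return entry.bWoken; });
}

// Wakes exactly the oldest waiter. No tickets, no priority queue: the FIFO
// guarantee is just a consequence of popping the front of the wait list.
static void EmulatedWakeOnAddressOne()
{
    std::unique_lock<std::mutex> guard(gWaitListLock);
    if (!gWaitList.empty())
    {
        WaitEntry *pEntry = gWaitList.front();
        gWaitList.pop_front();
        pEntry->bWoken = true;
        pEntry->cv.notify_one();
    }
}

A native futex/WaitOnAddress backend, by contrast, leaves wake order up to the kernel, which is presumably why only the emulated path can make this promise.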
Condvar and other reodering of thread primitives.txt implies tickets are a waste of cycles, serving no purpose but to thrash contexts. They serve no logical high-level value, nor do they provide low-level performance gains (*). Ticket locking is just a spinlock algorithm a bunch of braindead normies thought was cute.

*: You are telling the OS to reschedule or perform an SMT yield because your wake wasn't good enough. Why would you context switch again? Oh right, you'd do it to sit on the exact same physical core (0-12? 0-16? out of hundreds/thousands of contexts?), most likely thrashing cache in the switch, to dequeue a high-level context to perform the exact same queued-up work - but with different TLS indices!!!

You would be hard-pressed to make a convincing argument against this perspective...

...that is, unless you're stuck in the bygone era of native fibers, async IO routines, and the like. Oh shit, we're going back in time with meme-langs and runtimes again, aren't we?

They're more akin to a hack to support baby's first "while (!bSocketError) { accept(); CreateThread(); }" server than an answer to any real problem.

A third/fourth level of indirection of a thread context (physical core -> (hyperthread ->) software/kernel-switchable context -> abstract userland scheduler's context) makes very little sense, past making these instances of shit-code suck less.

Arguably, fibers and async routines are at worst a hack for naive developers who don't think about what their program is doing... and at best a hack for a single-threaded interpreter runtime like Python or Lua.

Arguably, optimizing thread primitives with tickets serves no purpose other than to optimize fiber-IQ code, where half-assed FIFO and Monte Carlo guarantees aren't good enough to ensure fiber #45345 doesn't idle or spin a bit more than fiber #346.

Arguably, if you're trying to "optimize" thread primitives with tickets and in-process scheduling, rather than with high-level descriptions of what the synchronization primitive is doing in the context of its use case and the platform it runs on, you're a fucking idiot.

Any other scheduling legwork should be the responsibility of the kernel, and perhaps a chipset vendor driver that is aware of system-wide thread priorities and tasks; you are not faster or smarter than a generic interface and the kernel/OS doing one of its rudimentary responsibilities.