631624dc55
[*] Fix Linux build regressions and shrink Linux RWLocks from 64 bytes to 48
2024-05-07 14:57:19 +01:00
134816e128
[*] Optimize the primitives' SMTYield for Alderlake+ user-space, BIOS-ring mwait, and AArch64
2024-05-03 12:22:38 +01:00
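A minimal sketch of the kind of architecture-aware spin hint this entry alludes to, assuming a hypothetical SpinLoopHint() helper (the real SMTYield, its CPU detection, and the ring-0/BIOS mwait path are not shown here): on WAITPKG-capable parts (Alder Lake and newer) a user-space timed pause can be issued, otherwise a plain pause/yield.

    #include <cstdint>
    #if defined(__x86_64__) || defined(__i386__)
        #include <x86intrin.h>              // _mm_pause, __rdtsc, _tpause (-mwaitpkg)
    #endif

    // Hypothetical helper, not the real SMTYield: emit the cheapest
    // "wait a moment" hint available on this core.
    inline void SpinLoopHint()
    {
    #if defined(__x86_64__) || defined(__i386__)
        #if defined(__WAITPKG__)
            // Alder Lake+ user-space timed pause (C0.2), bounded by a small
            // TSC deadline so the spinner never oversleeps.
            _tpause(0, __rdtsc() + 2000);
        #else
            _mm_pause();                    // classic SMT-friendly spin hint
        #endif
    #elif defined(__aarch64__)
        __asm__ __volatile__("yield");      // AArch64 hint to the sibling thread
    #endif
    }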
c79a709f96
[*] RWLock: "I didn't like what I saw emitted" cont
2024-04-30 22:57:45 +01:00
410a67d842
[*] RWLock: I didn't like what I saw emitted
2024-04-30 20:29:20 +01:00
62b6fa20f8
[*] Update the copyright header of most of the primitives
...
[*] Fix generic mutex abs yield always returning true
2024-01-29 14:48:04 +00:00
6037d14674
[-] Hadn't fully removed the aggressive context-switch behaviour from the old RWLock implementation
2023-12-30 21:58:52 +00:00
0c1c6d7c24
[*] Fix formatting regressions (plus one double-based RNG regression)
2023-12-01 03:43:06 +00:00
43583a1748
[+] IRWLock::CheckSelfThreadIsWriter
2023-12-01 01:15:35 +00:00
c3165de4cf
[*] RWLock: Disable dumb scatter switch for now
2023-09-23 02:50:54 +01:00
76bd36939e
[*] Simplify RWLock some more
2023-09-23 02:40:23 +01:00
9e1655d579
[*] Clean up RWLock
2023-09-19 17:36:21 +01:00
7477bfe56f
[*] An insurance policy for Linux and other OSes: rel/xref 7357764c
2023-09-19 02:05:11 +01:00
7357764cfc
[*] Fix abnormal ::UnlockWrite performance on platforms with heavyweight native WaitOnAddress (Linux, BSD-likes, etc.)
2023-09-18 18:21:46 +01:00
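One common source of this kind of regression, offered purely as a hedged illustration since the entry does not spell out its mechanism, is issuing the futex/WaitOnAddress wake syscall on every unlock even when nobody is parked; tracking sleepers lets the uncontended UnlockWrite stay in user space. All names below are made up.

    #include <atomic>
    #include <cstdint>

    // Illustrative only: skip the kernel wake when no thread is parked.
    struct UnlockWriteSketch
    {
        std::atomic<uint32_t> lockWord  {0};
        std::atomic<uint32_t> uSleeping {0};   // threads currently parked

        void UnlockWrite()
        {
            this->lockWord.store(0, std::memory_order_release);

            if (this->uSleeping.load(std::memory_order_acquire))
            {
                this->WakeAllOnLockWord();     // futex(FUTEX_WAKE) / WakeByAddressAll
            }
            // else: no waiters, no syscall, no kernel transition at all
        }

        void WakeAllOnLockWord()
        {
            // platform wake primitive elided in this sketch
        }
    };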
1a4a4ad863
[*] Added missing this-> qualifiers
2023-09-09 19:46:08 +01:00
88355932c1
[*] Optimize thread configurations to be unpacked from the bitmap once at startup and during reconfigure, rather than ad hoc
2023-09-09 17:37:14 +01:00
4ad70cadb4
[*] Optimization: cea33621 (continued)
2023-09-09 14:38:02 +01:00
cea3362186
[*] Finally fixed an old regression: RWLock is back to being write-biased, preventing forever-read conditions from starving writers
2023-09-09 13:03:02 +01:00
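As a rough illustration of what write bias means here (this is not the actual RWLock implementation, just a sketch using a writersPending counter in the spirit of the writersPending_ field mentioned further down the log): incoming readers refuse the fast path while any writer is queued, so a continuous stream of readers can no longer hold the lock in the read state forever.

    #include <atomic>

    // Sketch only: write-biased read admission.
    struct TinyRWState
    {
        std::atomic<int>  readers        {0};     // active readers
        std::atomic<int>  writersPending {0};     // queued writers (cf. writersPending_)
        std::atomic<bool> writerActive   {false};

        bool TryLockRead()
        {
            if (this->writersPending.load(std::memory_order_acquire) ||
                this->writerActive.load(std::memory_order_acquire))
            {
                return false;                     // back off: let the writer in first
            }

            this->readers.fetch_add(1, std::memory_order_acquire);

            if (this->writerActive.load(std::memory_order_acquire))
            {
                this->readers.fetch_sub(1, std::memory_order_release);
                return false;                     // lost the race to a writer
            }

            return true;
        }

        void UnlockRead()
        {
            this->readers.fetch_sub(1, std::memory_order_release);
        }
    };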
109b0cff3f
[*] ...and the same applies to RWLock
...
(instead, the guidance is: don't use the public API; use the internal NT APIs)
2023-09-09 12:39:47 +01:00
1c80e4910b
[*] RWLock: another optimization for embedded and win7 targets
2023-09-03 13:35:12 +01:00
cc6e0358fa
[*] NT: further optimizations to solve CPU usage regressions
2023-09-02 16:11:06 +01:00
a20e9b4954
[*] Win10/Linux regression in RWLock (again)
2023-08-31 18:41:18 +01:00
affe4cc496
[*] RWLock: simplify writersPending_ guard
...
[*] RWLock: fix the new fast-path timeouts, which I may have broken
[*] RWLock: refactoring/improvements
2023-08-30 16:11:54 +01:00
cf118d0b4b
[*] Minor RW lock tweaks
2023-08-30 14:57:13 +01:00
3e5aa1aff0
[*] Simplified lines of code: shared pointer init
2023-08-26 19:02:14 +01:00
935d1b6ab2
[*] RWLock: added another SignalManyWriter condition to ensure upgrades never get missed
2023-08-24 11:45:15 +01:00
937f123dad
[*] RWLock: Futex waiter path: force read semantics
2023-08-24 10:20:43 +01:00
3898a41198
[*] Adopt new ROXTL atomics
...
...AuAtomicLoad + AuAtomicClearU8Lock
2023-08-23 22:03:00 +01:00
d79cb4f3ca
[*] RWLock: WakeOnAddress optimization on wait to prevent mutex congestion on modern OSes
2023-08-23 15:37:55 +01:00
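The idea, sketched below against the Windows 8+ primitive the name refers to (the runtime goes through its own WakeOnAddress layer, and Linux/BSD use futexes), is that waiters sleep directly on the lock word instead of all funnelling through one shared mutex and condition variable:

    #include <windows.h>
    #include <atomic>
    #include <cstdint>
    #pragma comment(lib, "Synchronization.lib")

    // Sketch: park on the lock word itself; spurious wakes are fine because
    // the caller always re-checks the lock state in a loop.
    inline void WaitWhileEquals(std::atomic<uint32_t> &word, uint32_t observed)
    {
        WaitOnAddress(&word, &observed, sizeof(observed), INFINITE);
    }

    inline void WakeAllWaiters(std::atomic<uint32_t> &word)
    {
        WakeByAddressAll(&word);
    }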
cd362db7af
[*] Deaf, dumb, and blind
2023-08-21 19:20:52 +01:00
5cc811be19
[*] More compact Win32 primitives!
2023-08-21 17:34:24 +01:00
869512e651
[*] Optimization: Remove two unnecessary branches in RWLock
2023-08-21 16:33:32 +01:00
f847ab049a
[+] ThreadingConfig::bPreferRWLockReadLockSpin
2023-08-21 16:25:51 +01:00
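A hedged sketch of what a knob like this usually gates (everything except the flag name is illustrative): when set, the read path burns a small, bounded spin budget before falling back to the parked slow path, which pays off when read sections are short.

    #include <functional>
    #include <thread>

    // Illustrative only: spin briefly before sleeping if the flag asks for it.
    bool AcquireReadSketch(const std::function<bool()> &tryLockRead,
                           const std::function<void()> &parkUntilReadable,
                           bool bPreferRWLockReadLockSpin)
    {
        if (bPreferRWLockReadLockSpin)
        {
            for (int i = 0; i < 64; i++)           // small, bounded spin budget
            {
                if (tryLockRead())
                {
                    return true;
                }
                std::this_thread::yield();         // stand-in for SMTYield
            }
        }

        while (!tryLockRead())
        {
            parkUntilReadable();                   // sleep on the lock word (slow path)
        }
        return true;
    }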
e1f384de2e
[*] RWLock: improper upgrade handshake
...
The switch over to two condvars still doesn't seem right
2023-08-21 16:20:52 +01:00
681c4b9158
[*] RWLock: revert this branch to checking for 0 and 1 remaining readers
...
[*] Formatting
2023-08-21 16:08:30 +01:00
e2909ebe74
[*] RWLock: improve the UpgradeReadToWrite sleep path
2023-08-21 16:02:55 +01:00
68b4fe5f8b
[*] RWLock: not implementing LockAbsMS and LockAbsNS can hurt the hot path
2023-08-21 15:50:45 +01:00
d1b1bfb221
[*] Caught an issue with RWLock: can't re-enter an unlocked reentrance mutex.
2023-08-21 15:39:56 +01:00
9a2e5674e8
[*] RWLock improvements
2023-07-30 11:23:40 +01:00
1a383f8157
[*] Two annoying formatting issues in RWLock
2023-07-25 12:57:47 +01:00
fa90463a73
[*] I'm not sure why this was written like this
2023-06-23 22:36:13 +01:00
0d05fd3d33
[*] Minor mostly unnoticeable primitive improvements
2023-06-23 21:37:04 +01:00
2d6dca4e21
[+] 32bit SOO sizes for sync primitives under x86_32/MSVC
...
[*] Optimize the write-biased reentrant read-write lock down to 88 bytes on MSVC x64
2023-06-17 17:08:58 +01:00
451b9025c0
[*] Fix major recent regressions
...
amend: 48075bfd
amend: 25b933aa
amend: f50067e6 (to be overwritten)
et al
2023-06-17 15:12:16 +01:00
25b933aafa
[*] Fixed regression in RWLock size without sacrificing on features
...
(TODO: I would like to WoA-optimize it for modern OSes at some point)
2023-06-16 00:02:42 +01:00
74b813f051
[*] Bloat RWLock by adding a separate yield queue for writers (we were already writer-biased)
...
This will help us reduce CPU usage and latency at the cost of 32 bytes.
We are now hopelessly oversized: 136 bytes for a single primitive; 104 was barely passable.
2023-06-15 20:54:19 +01:00
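Roughly why the extra bytes buy anything, sketched with made-up field names (the real object also carries reentrancy bookkeeping, which is where much of the 136 bytes goes): a dedicated wait word for writers lets unlock wake exactly one queued writer without also stampeding every sleeping reader.

    #include <atomic>
    #include <cstdint>

    // Sketch of the two-queue layout; not the actual RWLock fields.
    struct RWLockTwoQueueSketch
    {
        std::atomic<int32_t>  state;          // <0: writer held, >0: reader count
        std::atomic<uint32_t> readerWakeWord; // readers park/wake on this word
        std::atomic<uint32_t> writerWakeWord; // writers park/wake on this word
        std::atomic<uint32_t> writersPending; // writers currently queued
        // ...owner TID, reentrancy counters, etc. (the rest of the footprint)...
    };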
36dee459ca
[*] TryLockRead was unaware of RWRenterableLock's specifications
...
(not an issue for regular blocking lock paths)
2023-04-24 19:39:36 +01:00
f74a41e286
[*] Refactor our thread primitives for an SOO change, where the SOO[_t] suffix is no longer required, resulting in a new type-conflict issue
2023-04-23 19:25:37 +01:00
d755a9d651
[*] Massive perf boost by removing atomic and
...
[*] Refactor ambiguous IWaitable::Lock(timeoutMs) to LockMS to prevent final using collisions
2023-04-03 08:21:44 +01:00
6974c713f7
[+] Allocationless thread primitives
...
[*] Rename SMPYield to SMTYield
2023-03-21 03:19:22 +00:00
e5981a5747
[*] Reintroduce the older implementation based on Vista sync primitives for when the best implementation under the NT 5.1 APIs isn't available (Microsoft locks those APIs down under the false guise of sandboxing Xbox and UWP)
2023-03-16 18:25:23 +00:00