Commit Graph

23 Commits

SHA1 Message Date
631624dc55 [*] Linux build regressions, and shrink the size of Linux RWLocks to 48 bytes from 64 2024-05-07 14:57:19 +01:00
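A minimal sketch of how a size win like the one above can be pinned down so it never silently regresses: a compile-time assertion on the primitive's layout. The field names and 48-byte layout below are illustrative assumptions (on mainstream ABIs where a 32-bit atomic is 4 bytes), not the real Aurora type.

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical layout mirroring the commit's 48-byte target; the real
// RWLockImpl fields are not shown in this log.
struct RWLockImplSketch
{
    std::atomic<std::int32_t> iState;           // readers held / writer held
    std::atomic<std::int32_t> iWritersWaiting;
    std::atomic<std::int32_t> iReadersWaiting;
    std::int32_t              uReserved[9];     // illustrative filler
};

// Guard the win: any future member addition breaks the build instead of
// silently growing the lock back to 64 bytes.
static_assert(sizeof(RWLockImplSketch) == 48,
              "Linux RWLock must stay at 48 bytes");
```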
c79a709f96 [*] RWLock: "I didn't like what I saw emitted" cont 2024-04-30 22:57:45 +01:00
410a67d842 [*] RWLock: I didn't like what I saw emitted 2024-04-30 20:29:20 +01:00
62b6fa20f8 [*] Update the copyright header of most of the primitives
[*] Fix generic mutex abs yield always returning true
2024-01-29 14:48:04 +00:00
6037d14674 [-] Hadn't fully removed the aggressive context switch behaviour from the old RWLock implementation 2023-12-30 21:58:52 +00:00
43583a1748 [+] IRWLock::CheckSelfThreadIsWriter 2023-12-01 01:15:35 +00:00
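A hedged sketch of how the `CheckSelfThreadIsWriter` query added above might be used. Only the method name comes from the commit; the surrounding interface, the `bool` return type, and the call site are assumptions for illustration.

```cpp
// Assumed minimal shape of the interface; the real IRWLock has more members.
struct IRWLock
{
    virtual void LockWrite() = 0;
    virtual void UnlockWrite() = 0;
    virtual bool CheckSelfThreadIsWriter() = 0; // assumed signature
    virtual ~IRWLock() = default;
};

void FlushCaches(IRWLock *pLock)
{
    if (pLock->CheckSelfThreadIsWriter())
    {
        // Already inside a write section (e.g., a reentrant caller):
        // mutate shared state directly instead of deadlocking on LockWrite().
        return;
    }

    pLock->LockWrite();
    // ... mutate shared state ...
    pLock->UnlockWrite();
}
```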
9e1655d579 [*] Clean up RWLock 2023-09-19 17:36:21 +01:00
7357764cfc [*] Fix abnormal ::UnlockWrite performance under heavyweight native WaitOnAddress platforms (Linux, BSD-like, etc) 2023-09-18 18:21:46 +01:00
cea3362186 [*] Finally fixed an old regression: RWLock is back to being write-biased to prevent forever-read conditions 2023-09-09 13:03:02 +01:00
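To make the "forever-read" problem concrete: with a reader-biased lock, an unbroken stream of overlapping readers starves a waiting writer indefinitely. A minimal sketch of write bias, assuming a counter-based design where a pending writer gates new readers (field names are illustrative, and writer-vs-writer exclusion is omitted for brevity):

```cpp
#include <atomic>
#include <thread>

struct WriteBiasedSketch
{
    std::atomic<int> iReaders {0};
    std::atomic<int> iWritersPending {0};

    bool TryLockRead()
    {
        if (this->iWritersPending.load(std::memory_order_acquire) > 0)
        {
            return false; // defer to the pending writer
        }
        this->iReaders.fetch_add(1, std::memory_order_acquire);
        // Re-check: a writer may have arrived between the test and the add.
        if (this->iWritersPending.load(std::memory_order_acquire) > 0)
        {
            this->iReaders.fetch_sub(1, std::memory_order_release);
            return false;
        }
        return true;
    }

    void LockWrite()
    {
        // Announcing intent stops new readers; existing ones drain out.
        this->iWritersPending.fetch_add(1, std::memory_order_acquire);
        while (this->iReaders.load(std::memory_order_acquire) != 0)
        {
            std::this_thread::yield(); // a real impl would wait-on-address here
        }
    }

    void UnlockWrite()
    {
        this->iWritersPending.fetch_sub(1, std::memory_order_release);
    }
};
```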
d79cb4f3ca [*] RWLock: WakeOnAddress optimization on the wait path to prevent mutex congestion on modern OSes 2023-08-23 15:37:55 +01:00
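The congestion the commit above targets comes from serializing every slow-path waiter through an internal mutex. With address waiting, threads park directly on the lock word instead. A sketch of the idea using C++20 `std::atomic` wait/notify as a portable stand-in for the `WakeOnAddress`/`WaitOnAddress` pair named in this log (whose exact Aurora signatures are not shown here):

```cpp
#include <atomic>

std::atomic<unsigned> uLockWord {0}; // 0 = free, 1 = held

void LockSlow()
{
    unsigned uExpected = 0;
    while (!uLockWord.compare_exchange_weak(uExpected, 1,
                                            std::memory_order_acquire))
    {
        // Sleep until the word changes value, without touching any mutex.
        uLockWord.wait(uExpected, std::memory_order_relaxed);
        uExpected = 0;
    }
}

void Unlock()
{
    uLockWord.store(0, std::memory_order_release);
    uLockWord.notify_one(); // analogous to WakeOnAddress
}
```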
5cc811be19 [*] More compact Win32 primitives! 2023-08-21 17:34:24 +01:00
68b4fe5f8b [*] RWLock: not implementing LockAbsMS and LockAbsNS can hurt the hotpath 2023-08-21 15:50:45 +01:00
9a2e5674e8 [*] RWLock improvements 2023-07-30 11:23:40 +01:00
2d6dca4e21 [+] 32bit SOO sizes for sync primitives under x86_32/MSVC
[*] Optimize the write-biased reentrant read-write lock down to 88 bytes on MSVC x64
2023-06-17 17:08:58 +01:00
25b933aafa [*] Fixed regression in RWLock size without sacrificing on features
(TODO: I would like to WoA-optimize it for modern OSes at some point)
2023-06-16 00:02:42 +01:00
74b813f051 [*] Bloat RWLock by adding a separate yield queue for writers (we were already writer-biased)
This will help us reduce CPU usage and latency at the cost of 32 bytes.

We are now hopelessly oversized: 136 bytes for a single primitive. 104 was barely passable.
2023-06-15 20:54:19 +01:00
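An illustrative sketch of the two-queue layout the commit above describes, assuming one wait word for readers and a separate one for writers (the extra 32 bytes would cover such bookkeeping). Waking the writer word first keeps the lock writer-biased without broadcasting to every sleeping reader. C++20 atomic notify stands in for the platform wake call:

```cpp
#include <atomic>

struct TwoQueueSketch
{
    std::atomic<unsigned> uReaderQueue {0};
    std::atomic<unsigned> uWriterQueue {0};
    std::atomic<int>      iWritersWaiting {0};

    void WakeNext(bool bWasExclusive)
    {
        if (this->iWritersWaiting.load(std::memory_order_acquire) > 0)
        {
            // A writer is parked: bump its word and wake exactly one writer,
            // leaving the reader queue asleep (no thundering herd).
            this->uWriterQueue.fetch_add(1, std::memory_order_release);
            this->uWriterQueue.notify_one();
        }
        else if (bWasExclusive)
        {
            // No writers pending: release the whole reader cohort at once.
            this->uReaderQueue.fetch_add(1, std::memory_order_release);
            this->uReaderQueue.notify_all();
        }
    }
};
```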
d755a9d651 [*] Massive perf boost by removing atomic and
[*] Refactor ambiguous IWaitable::Lock(timeoutMs) to LockMS to prevent final using collisions
2023-04-03 08:21:44 +01:00
8272959249 [*] Further compress 2023-03-22 13:42:07 +00:00
046b70d7bc [*] [Pre-Win8.1] Optimize for modern NT instead of Windows Vista synch in the legacy path; yes, this is somewhat how Windows 7 and Vista synch is implemented
...on APIs that predate those kernel revisions. So, technically this might be able to run on XP.
[*] GetThreadCookie optimization for all platforms
2023-03-15 00:35:29 +00:00
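A hedged sketch of the thread-cookie idea: cache a cheap per-thread identity once in TLS so lock code can test ownership without a syscall on every check. Only the name `GetThreadCookie` comes from the commit; this body is an assumption.

```cpp
#include <atomic>
#include <cstdint>

static std::atomic<std::uint64_t> uNextCookie {1};

static std::uint64_t GetThreadCookie()
{
    // Assigned on first use per thread, then read for free thereafter.
    static thread_local std::uint64_t uCookie =
        uNextCookie.fetch_add(1, std::memory_order_relaxed);
    return uCookie;
}

bool IsOwnedBySelf(const std::atomic<std::uint64_t> &uOwner)
{
    return uOwner.load(std::memory_order_acquire) == GetThreadCookie();
}
```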
e82ec4a343 [+] IWaitable::LockNS(...)
[+] AuThreading.WakeAllOnAddress
[+] AuThreading.WakeOnAddress
[+] AuThreading.WakeNOnAddress
[+] AuThreading.TryWaitOnAddress
[+] AuThreading.WaitOnAddress
[*] Further optimize synch primitives
[+] AuThreadPrimitives::RWRenterableLock
2023-03-12 15:27:28 +00:00
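A hedged sketch of how the address-wait family added in the commit above might drive a one-shot event. The function names come from the commit; the parameter lists below are assumptions modeled on the usual wait-on-address shape (declarations only, with no bodies), not the real Aurora signatures.

```cpp
#include <cstdint>

namespace AuThreading
{
    // Assumed shape: block while *pAddress still equals the compare value
    // (0 is taken to mean no timeout in this sketch)...
    bool WaitOnAddress(const volatile void *pAddress,
                       const void *pCompare, std::uint8_t uSize,
                       std::uint64_t uTimeoutNs);
    // ...and wake one / N / all sleepers parked on that address.
    void WakeOnAddress(const void *pAddress);
    void WakeNOnAddress(const void *pAddress, std::uint32_t uCount);
    void WakeAllOnAddress(const void *pAddress);
}

static volatile std::uint32_t uEventSignaled = 0;

void WaitForEvent()
{
    std::uint32_t uUnsignaled = 0;
    while (uEventSignaled == uUnsignaled)
    {
        // Returns once the value at &uEventSignaled no longer matches.
        AuThreading::WaitOnAddress(&uEventSignaled, &uUnsignaled,
                                   sizeof(std::uint32_t), 0);
    }
}

void SignalEvent()
{
    uEventSignaled = 1;
    AuThreading::WakeAllOnAddress(const_cast<const std::uint32_t *>(&uEventSignaled));
}
```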
8ff81df129 [*] Fix deadlock involving WaitFor under ThreadPool (shutdown race)
[*] Optimize mutex lock out of RWLockImpl::TryLockWrite
[*] Force all relevant members of RWLockImpl to be volatile purely as a precaution (as far as we know, we can't justify it yet; however, I want to minimize the risk of future issues in this type)
2023-01-30 14:35:48 +00:00
0cdbc34c06 [*] Optimize allocations out of RWLock
[*] Fix linux regression
2022-12-29 09:42:02 +00:00
898c0ced37 [*] Refactoring in progress... 2022-11-17 08:03:20 +00:00