Commit Graph

15 Commits

SHA1 Message Date
de25694416 [*] bonk (use AuAXXX atomics) 2023-09-02 04:55:43 +01:00
fa170c413d [*] More compact Linux primitives 2023-08-21 19:17:05 +01:00
5cc811be19 [*] More compact Win32 primitives! 2023-08-21 17:34:24 +01:00
ab5274b1f6 [*] Fix some blatantly incorrect Linux x86_32 SOO values 2023-08-13 20:21:35 +01:00
04956bedba [*] Shorten the expected overhead of some Linux primitives 2023-08-13 20:09:58 +01:00
6ec2fcc4b6 [*] Added timeout awareness in ConditionEx; returns false on timeout
[*] Updated Linux SOO sizes
2023-08-12 11:18:19 +01:00
1f173a8799 [*] Begin resolving 8 months of Linux neglect 2023-08-11 16:51:42 +01:00
dab6e9caee [*] Refactor: FeaturefulCondition -> FlexibleConditionVariable
[+] SOO for FlexibleConditionVariable
2023-07-25 12:27:08 +01:00
2d6dca4e21 [+] 32bit SOO sizes for sync primitives under x86_32/MSVC
[*] Optimize the write-biased reentrant read-write lock down to 88 bytes on MSVC x64
2023-06-17 17:08:58 +01:00
25b933aafa [*] Fixed regression in RWLock size without sacrificing on features
(TODO: I would like to WoA-optimize it for modern OSes at some point)
2023-06-16 00:02:42 +01:00
74b813f051 [*] Bloat RWLock by adding a separate yield queue for writers (we were already writer-biased)
This will help us reduce CPU usage and latency at the cost of 32 bytes.

We are now hopelessly oversized: 136 bytes for a single primitive. 104 was barely passable.
2023-06-15 20:54:19 +01:00
07ce6d8974 [+] Future-proof IWaitable with Abs waits
(polyfill for now)
2023-06-05 11:47:19 +01:00
d755a9d651 [*] Massive perf boost by removing atomic and
[*] Refactor ambiguous IWaitable::Lock(timeoutMs) to LockMS to prevent final using collisions
2023-04-03 08:21:44 +01:00
8272959249 [*] Further compress 2023-03-22 13:42:07 +00:00
6974c713f7 [+] Allocationless thread primitives
[*] Rename SMPYield to SMTYield
2023-03-21 03:19:22 +00:00