1c80e4910b
[*] RWLock: another optimization for embedded and Win7 targets
2023-09-03 13:35:12 +01:00
1479bcaa22
[*] bonk
2023-09-02 21:24:37 +01:00
c2a6bd92fa
[*] Minor optimization in a shit primitive
2023-09-02 19:09:04 +01:00
0838373410
[*] NT optimization: more aggressive semaphores to prevent atomic failures (perhaps this could be made to account for weak exchanges on different architectures)
2023-09-02 19:05:07 +01:00
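On the "weak exchanges" caveat above: on load-linked/store-conditional architectures (e.g. ARM), compare_exchange_weak may fail spuriously even when the value matches, so it is only safe inside a retry loop. A minimal std::atomic sketch of that idea, illustrative only and not the runtime's semaphore code (TryAcquireOnce is a hypothetical name):

    #include <atomic>
    #include <cstdint>

    // Sketch only: on LL/SC architectures a weak CAS may fail spuriously, so a
    // semaphore-style decrement must retry; a single-shot path would need the
    // strong variant instead.
    static bool TryAcquireOnce(std::atomic<std::uint32_t> &uCounter)
    {
        std::uint32_t uCurrent = uCounter.load(std::memory_order_relaxed);
        while (uCurrent != 0)
        {
            // A spurious weak-CAS failure simply re-runs the loop with the
            // refreshed value of uCurrent.
            if (uCounter.compare_exchange_weak(uCurrent, uCurrent - 1,
                                               std::memory_order_acquire,
                                               std::memory_order_relaxed))
            {
                return true;
            }
        }
        return false;
    }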
cc6e0358fa
[*] NT: further optimizations to solve CPU usage regressions
2023-09-02 16:11:06 +01:00
9fbdafea74
[*] x86_64: use RDTSC for more deterministic back-off durations
...
Well, sort of: it is more likely to be referenced against the frequency stated in the hard-coded CPUID brand string.
2023-09-02 14:37:07 +01:00
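A minimal sketch of the back-off idea above, assuming an invariant TSC and a known tick frequency; this is illustrative only, not the runtime's implementation (SpinForTscTicks is a hypothetical name):

    #include <cstdint>
    #if defined(_MSC_VER)
    #include <intrin.h>      // __rdtsc, _mm_pause
    #else
    #include <x86intrin.h>   // __rdtsc, _mm_pause
    #endif

    // Sketch only: bound the back-off by elapsed TSC ticks rather than by a
    // fixed iteration count, so the duration is less dependent on how fast each
    // pause/loop iteration happens to run on a given core.
    static void SpinForTscTicks(std::uint64_t uTicks)
    {
        const std::uint64_t uStart = __rdtsc();
        while ((__rdtsc() - uStart) < uTicks)
        {
            _mm_pause();
        }
    }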
de25694416
[*] Use AuAXXX atomics
2023-09-02 04:55:43 +01:00
a20e9b4954
[*] Win10/Linux regression in RWLock (again)
2023-08-31 18:41:18 +01:00
affe4cc496
[*] RWLock: simplify writersPending_ guard
...
[*] Fix RWLock: I may have messed up the new fast-path timeouts
[*] RWLock: refactoring/improvements
2023-08-30 16:11:54 +01:00
cf118d0b4b
[*] Minor RW lock tweaks
2023-08-30 14:57:13 +01:00
0d759f85f8
[*] Linux/Clang fixes and improvements
2023-08-28 16:35:32 +01:00
55c02d4aa0
[*] Tweak default thread config
...
[*] Fix regressions
2023-08-28 11:48:13 +01:00
b2e1df8f72
[*] Annoying Linux checks
2023-08-27 21:35:40 +01:00
97296d1fe9
[*] ThreadingConfig::bPreferEnableAdaptiveSpin
2023-08-27 20:26:36 +01:00
8fe2619673
[*] Rework SMT yielding
2023-08-27 19:56:22 +01:00
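A rough sketch of the SMT-yield pattern referred to above, illustrative only (SmtYield is a hypothetical name): a short run of pause instructions hands execution resources to the sibling hardware thread before paying for an OS-level yield.

    #include <thread>
    #if defined(_MSC_VER)
    #include <intrin.h>      // _mm_pause
    #else
    #include <immintrin.h>   // _mm_pause
    #endif

    // Sketch only: spin briefly with pause (cheap, frees the SMT sibling's
    // execution resources), then fall back to yielding the timeslice.
    static void SmtYield(unsigned uSpins)
    {
        for (unsigned i = 0; i < uSpins; i++)
        {
            _mm_pause();
        }
        std::this_thread::yield();
    }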
5cf7533eab
[*] Linux and UNIX QOL
2023-08-27 12:42:10 +01:00
3e5aa1aff0
[*] Simplified lines of code: shared pointer init
2023-08-26 19:02:14 +01:00
7680a86d5a
[*] ...same applies to these
2023-08-26 18:46:00 +01:00
610f2c73a0
[*] Optimize >4 thread wakeups on <= Win7, under Semaphore
2023-08-26 18:08:33 +01:00
346a9f3bde
[*] More aggressively wake up reorder-prone condvars under broadcast (an unlikely edge case)
2023-08-26 15:56:59 +01:00
3ca8de022e
[*] Fix issues caused by inconsistent behaviour across GNU/Linux environments
...
GNU's inconsistencies continue to be a source of grief
2023-08-26 15:04:48 +01:00
4a73f7250f
[*] Another uniproc test
2023-08-25 12:42:31 +01:00
337df8040c
[*] Move uniprocessor check
2023-08-24 15:12:49 +01:00
100964ac87
[*] NT Semaphore optimization
2023-08-24 13:55:31 +01:00
935d1b6ab2
[*] RWLock: added another SignalManyWriter condition to ensure upgrades never get missed
2023-08-24 11:45:15 +01:00
937f123dad
[*] RWLock: Futex waiter path: force read semantics
2023-08-24 10:20:43 +01:00
31319981ba
[*] Two trivial Linux tweaks
2023-08-23 23:45:26 +01:00
3898a41198
[*] Adopt new ROXTL atomics
...
...AuAtomicLoad + AuAtomicClearU8Lock
2023-08-23 22:03:00 +01:00
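For context, AuAtomicLoad and AuAtomicClearU8Lock are the ROXTL helpers named in the commit; in std::atomic terms the pattern is roughly an acquire load plus a release store of zero on a byte-sized lock. A sketch of that analogue only, not the helpers' actual implementation:

    #include <atomic>
    #include <cstdint>

    // Sketch only: acquire-load on the read side, release-store of zero to drop
    // a byte-sized spin lock, mirroring the AuAtomicLoad/AuAtomicClearU8Lock
    // usage pattern.
    static bool IsLockHeld(const std::atomic<std::uint8_t> &uLockByte)
    {
        return uLockByte.load(std::memory_order_acquire) != 0;
    }

    static void ClearU8Lock(std::atomic<std::uint8_t> &uLockByte)
    {
        uLockByte.store(0, std::memory_order_release);
    }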
524365b5da
[*] Handhold non-MSVC compilers
2023-08-23 16:38:22 +01:00
0c5d140bd4
[*] Autoset bPlatformIsSMPProcessorOptimized to false on single-threaded (uniprocessor) systems
2023-08-23 16:03:22 +01:00
d79cb4f3ca
[*] RWLock: WakeOnAddress optimization on wait to prevent mutex congestion on modern OSes
2023-08-23 15:37:55 +01:00
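The gist of the optimization: on modern OSes, waiters can block directly on the lock word (WaitOnAddress on Win8+, futex on Linux) instead of all piling onto one internal mutex. A minimal Win8+ sketch under that assumption; the runtime's own WakeOnAddress layer is more involved, and the helper names here are illustrative:

    #include <windows.h>
    #include <cstdint>
    #pragma comment(lib, "Synchronization.lib") // WaitOnAddress / WakeByAddressSingle

    // Sketch only: block until the lock word changes from the observed value,
    // then the caller retries its acquire; no shared mutex is touched.
    static void WaitUntilChanged(volatile std::uint32_t &uState, std::uint32_t uObserved)
    {
        WaitOnAddress(&uState, &uObserved, sizeof(uObserved), INFINITE);
    }

    static void WakeOneWaiter(volatile std::uint32_t &uState)
    {
        WakeByAddressSingle((PVOID)&uState);
    }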
a4d317a48d
[*] Reoptimize semaphore wait paths
2023-08-22 15:28:09 +01:00
7ad725ca04
[+] Global adaptive spin
2023-08-22 13:01:06 +01:00
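Roughly what an adaptive spin amounts to: a shared spin budget that grows when spinning actually avoided a kernel wait and shrinks when it did not. A sketch of the idea only, with illustrative names and constants, not the runtime's implementation:

    #include <atomic>
    #include <cstdint>

    // Sketch only: a process-wide spin budget tuned by outcome. The racy
    // read-modify-write is tolerable because this is just a heuristic.
    static std::atomic<std::uint32_t> gSpinBudget { 64 };

    static std::uint32_t GetSpinBudget()
    {
        return gSpinBudget.load(std::memory_order_relaxed);
    }

    static void ReportSpinOutcome(bool bAcquiredWhileSpinning)
    {
        std::uint32_t uCurrent = gSpinBudget.load(std::memory_order_relaxed);
        std::uint32_t uNext = bAcquiredWhileSpinning
            ? (uCurrent < 4096 ? uCurrent + 8 : uCurrent)
            : (uCurrent > 16   ? uCurrent - 8 : uCurrent);
        gSpinBudget.store(uNext, std::memory_order_relaxed);
    }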
76ac770674
[*] Update a handful of condvar CASes and account for laziness along the way
2023-08-22 09:44:54 +01:00
cd362db7af
[*] Deaf, dumb, and blind
2023-08-21 19:20:52 +01:00
fa170c413d
[*] More compact Linux primitives
2023-08-21 19:17:05 +01:00
5cc811be19
[*] More compact Win32 primitives!
2023-08-21 17:34:24 +01:00
869512e651
[*] Optimization: remove two unnecessary branches in RWLock
2023-08-21 16:33:32 +01:00
f847ab049a
[+] ThreadingConfig::bPreferRWLockReadLockSpin
2023-08-21 16:25:51 +01:00
e1f384de2e
[*] RWLock: fix an improper upgrade handshake
...
The switch-over to two condvars still doesn't seem right
2023-08-21 16:20:52 +01:00
681c4b9158
[*] RWLock: revert this branch to checking for 0 and 1 remaining readers
...
[*] Formatting
2023-08-21 16:08:30 +01:00
e2909ebe74
[*] RWLock: improve the UpgradeReadToWrite sleep path
2023-08-21 16:02:55 +01:00
68b4fe5f8b
[*] RWLock: not implementing LockAbsMS and LockAbsNS can hurt the hotpath
2023-08-21 15:50:45 +01:00
d1b1bfb221
[*] Caught an issue with RWLock: can't re-enter an unlocked reentrance mutex.
2023-08-21 15:39:56 +01:00
a60a1b3088
[*] Don't assume these condvar paths can't underflow
2023-08-21 00:25:29 +01:00
b8d4e02ab5
[+] Aurora::Threading::GetThreadingConfig
...
[+] Aurora::Threading::SetThreadingConfig
[*] Save a few bytes in Aurora::ThreadingConfig
2023-08-20 16:23:03 +01:00
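A hedged usage sketch for the pair added here: fetch the current config, flip one of the tuning flags that appears elsewhere in this log (bPreferEnableAdaptiveSpin), and apply it. The include path and exact signatures are assumptions, not verified against the runtime's headers:

    #include <Aurora/Aurora.hpp> // assumed include; the real header layout may differ

    void EnableAdaptiveSpin()
    {
        Aurora::ThreadingConfig config {};
        Aurora::Threading::GetThreadingConfig(config);  // assumed: fills the struct by reference
        config.bPreferEnableAdaptiveSpin = true;        // flag named elsewhere in this log
        Aurora::Threading::SetThreadingConfig(config);  // assumed: applies the struct
    }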
71617ca66e
[*] Format SMT spin
2023-08-20 09:50:41 +01:00
f1a08d25e7
[+] AuUInt32 GetTotalSpinCountTimeout()
...
[*] Fixup FutexWaitable
2023-08-20 09:47:31 +01:00
19224d2eed
[*] Default back to zero. Do not throw off other threads if only used once
2023-08-19 18:39:13 +01:00
ab4971ef9c
[+] Linux threading options
2023-08-19 18:33:54 +01:00
8bf6bdd963
[+] More threading options
...
[+] AuThreading::SetSpinCountTimeout
[+] AuThreading::SetThreadLocalAdditionalSpinCountTimeout
2023-08-19 18:16:48 +01:00
7fb8b89def
[*] Some unwanted indirect branching is still bleeding in; mark more primitive classes as final
2023-08-19 11:41:37 +01:00
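The reasoning behind the final markers, as a minimal sketch with made-up type names: once the concrete class is final, a call through a reference of that exact type can be statically bound, removing the indirect branch.

    // Sketch only: IWaitableLike/MutexImpl are illustrative stand-ins.
    struct IWaitableLike
    {
        virtual ~IWaitableLike() = default;
        virtual void Lock() = 0;
    };

    struct MutexImpl final : IWaitableLike
    {
        void Lock() override { /* ... */ }
    };

    static void LockIt(MutexImpl &mutex)
    {
        mutex.Lock(); // MutexImpl is final, so the compiler can devirtualize this call
    }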
92ebafecab
[*] Suppress a number of noisy Clang warnings
2023-08-18 23:32:46 +01:00
4240966512
[*] Two trivial changes (not fixing or improving anything)
2023-08-18 15:53:38 +01:00
7dd6145dc1
[*] Always use unsigned integers under the semaphore classes
2023-08-18 15:26:31 +01:00
2a1556d80c
[*] Optimize Linux semaphore
2023-08-17 23:06:02 +01:00
04956bedba
[*] Shorten the expected overhead of some Linux primitives
2023-08-13 20:09:58 +01:00
6ec2fcc4b6
[*] Added timeout awareness in ConditionEx; returns false on timeout
...
[*] Updated Linux SOO sizes
2023-08-12 11:18:19 +01:00
7962772c62
[+] Added Linux-specific condvars and condmutex
2023-08-12 11:11:12 +01:00
737d3bb4d6
[+] AuProcAddresses.Linux.*
2023-08-12 10:16:20 +01:00
1f173a8799
[*] Begin resolving 8 months of Linux neglect
2023-08-11 16:51:42 +01:00
9a2e5674e8
[*] RWLock improvements
2023-07-30 11:23:40 +01:00
5e94be7487
[*] ConditionEx::WaitForSignalRelativeNanoseconds -> WaitForSignalNS
2023-07-29 09:52:59 +01:00
b411c710d1
[+] IConditionVariable::WaitForSignalNS
2023-07-25 15:59:04 +01:00
76262c2e3e
[*] Trivial Win8+ condvar broadcast improvement (pragmatism)
2023-07-25 15:28:02 +01:00
1a383f8157
[*] Two annoying formatting issues in RWLock
2023-07-25 12:57:47 +01:00
66cfbb5351
[+] FlexibleConditionVariable::WaitForSignalRelativeNanoseconds(AuUInt64 uRelativeNanoseconds)
...
[+] FlexibleConditionVariable::WaitForSignalRelativeNanoseconds(Threading::IWaitable *pWaitable, AuUInt64 uRelativeNanoseconds)
[*] Refactor FlexibleConditionVariable
2023-07-25 12:38:49 +01:00
dab6e9caee
[*] Refactor: FeaturefulCondition -> FlexibleConditionVariable
...
[+] SOO for FlexibleConditionVariable
2023-07-25 12:27:08 +01:00
daf6108902
[*] Removed poorly designed code inspired by late-90s to 2003-era Microsoft engineering
...
This had been bothering me for a while
2023-07-25 11:57:22 +01:00
b48966a39e
[*] Caught uninitialized member
2023-07-25 02:12:19 +01:00
8a4fc0d9c3
[*] Amend runtime config typo: Prefer*
...
[-] Redundant AuTime header (ExtendedTimer.hpp)
2023-07-13 19:50:18 +01:00
8bf351e007
[*] NT Win8+ fix: improper condvar wake up
...
[*] Fix kThreadIdAny regression
2023-07-11 00:54:54 +01:00
c90a13ad95
[*] Minor NT optimization: move branch
2023-07-10 20:06:18 +01:00
a977f0d1b5
[*] NT: backport unix optimization - no spin during spurious wake up
2023-07-10 13:12:17 +01:00
536522743a
[*] Move this branch in NT's condvar
2023-07-10 12:31:06 +01:00
8c84ecf892
[*] Win8+: Experimental primitive improvements by taking notes from Win7 cycle pinching
...
[*] +regression in condvar
2023-07-10 01:13:55 +01:00
355f7db711
[*] Forgot to reintroduce these: 75b71275
2023-07-09 22:34:31 +01:00
75b71275e7
[*] Made past and present NT condvar optional spin steps configurable via the runtime config
2023-07-09 20:52:31 +01:00
99e8c68c62
[*] Update a Win8+ sync branch; can back out earlier
2023-07-05 19:32:01 +01:00
894df69fe0
[*] remove redundant branch from sync primitive
...
[*] optimize event
2023-06-28 02:24:53 +01:00
a454a2d71e
[*] Sync primitive improvements
...
[*] Reverted a change for UNIX: always use the never-spin acquire path under the observational lock
[*] Decreased the common-case number of syscall operations under Linux and UNIX
[*] UNIX signalling: prevent waits during condvar wake-up by unlocking before the signal (see the sketch below)
[*] NT no-wait: semaphores must not spin under lock
2023-06-26 08:59:49 +01:00
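The unlock-before-signal point is the classic pattern sketched below with std primitives (not the runtime's UNIX code): releasing the mutex before notifying means the woken thread can take the mutex immediately rather than waking only to block on it.

    #include <condition_variable>
    #include <mutex>

    static std::mutex gMutex;
    static std::condition_variable gCond;
    static bool gReady = false;

    // Sketch only: publish the state under the lock, drop the lock, then signal.
    static void Publish()
    {
        {
            std::lock_guard<std::mutex> lock { gMutex };
            gReady = true;
        } // unlock first...
        gCond.notify_one(); // ...then wake, so the waiter doesn't immediately contend on gMutex
    }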
fa90463a73
[*] I'm not sure why this was written like this
2023-06-23 22:36:13 +01:00
0d05fd3d33
[*] Minor mostly unnoticeable primitive improvements
2023-06-23 21:37:04 +01:00
2d6dca4e21
[+] 32bit SOO sizes for sync primitives under x86_32/MSVC
...
[*] Optimize the write-biased reentrant read-write lock down to 88 bytes on MSVC x64
2023-06-17 17:08:58 +01:00
451b9025c0
[*] Fix major recent regressions
...
amend: 48075bfd
amend: 25b933aa
amend: f50067e6 (to be overwritten)
et al
2023-06-17 15:12:16 +01:00
48075bfda7
[*] cleanup: added gUseNativeWaitSemapahore
2023-06-16 00:06:32 +01:00
25b933aafa
[*] Fixed regression in RWLock size without sacrificing on features
...
(TODO: I would like to WoA-optimize it for modern OSes at some point)
2023-06-16 00:02:42 +01:00
74b813f051
[*] Bloat RWLock by adding a separate yield queue for writers (we were already writer-biased)
...
This will help us reduce CPU usage and latency at the cost of 32 bytes.
We are now hopelessly oversized: 136 bytes for a single primitive; 104 was barely passable.
2023-06-15 20:54:19 +01:00
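A layout sketch of what the extra 32 bytes roughly buy, with illustrative field names rather than the runtime's actual members: a wait word dedicated to writers lets releases wake pending writers directly instead of churning through the reader queue, which is what reduces the CPU usage and latency mentioned above.

    #include <atomic>
    #include <cstdint>

    // Sketch only: separate park/wake words for readers and writers in a
    // write-biased reader-writer lock.
    struct WriteBiasedRWLockState
    {
        std::atomic<std::int32_t>  iHolders        { 0 }; // >0: reader count, -1: writer
        std::atomic<std::uint32_t> uWritersPending { 0 }; // readers defer to this (write bias)
        std::atomic<std::uint32_t> uReaderWaitWord { 0 }; // readers park/wake here
        std::atomic<std::uint32_t> uWriterWaitWord { 0 }; // writers park/wake here (the new queue)
    };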
d389f9dda3
[*] Re-optimize the primitives for Windows 8+ on top of a Windows XP+ core
2023-06-15 20:52:28 +01:00
17c50eff64
[*] Fix old UNIX sync regressions
...
Do not hold the switching lock while spinning, as originally written and intended
2023-06-13 12:05:55 +01:00
1a8acbdde5
[+] By-raw pointer WOA lists
...
(also they are now fairer)
[+] Steps towards future proofing NT (not the future proofing itself)
2023-06-12 18:31:44 +01:00
50413f36e5
[*] Keyed events should yield indefinitely in their failure path
...
(amended one day later: removed one of the fixes; this is going to apply to just one place for now)
2023-06-12 15:51:54 +01:00
5b495f7fd9
[*] sched: minor clean up
2023-06-11 17:52:50 +01:00
50f25e147a
[*] Improve latency (I think; benchmark pending)
2023-06-07 11:45:14 +01:00
b423ce14b1
[*] Change up the condvars' mutual-exclusion locking
2023-05-31 05:34:36 +01:00
5cb56da924
[*] Missed break [regression in 53df1ee8]
2023-05-31 05:21:05 +01:00
055b149e11
[*] remove verbose "!= 0"
2023-05-31 04:38:05 +01:00
f92a19621a
[*] Adjust undershot millisecond-scale sleeps to SMT-spin, then yield, in an effort to match nanosecond-scale sleeps to within tens of kilo-nanoseconds
2023-05-30 13:12:53 +01:00
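The sleep adjustment above amounts to something like the sketch below (illustrative names, not the runtime's code): sleep away the bulk of the interval at millisecond granularity, then burn the undershot remainder with pause/yield so the total lands within tens of microseconds of the target.

    #include <chrono>
    #include <cstdint>
    #include <thread>
    #if defined(_MSC_VER)
    #include <intrin.h>      // _mm_pause
    #else
    #include <immintrin.h>   // _mm_pause
    #endif

    // Sketch only: coarse sleep first, then a spin/yield tail up to the deadline.
    static void PreciseSleepNs(std::uint64_t uNanoseconds)
    {
        const auto deadline = std::chrono::steady_clock::now()
                            + std::chrono::nanoseconds { uNanoseconds };

        if (uNanoseconds > 2'000'000) // leave ~2 ms for the spin/yield tail
        {
            std::this_thread::sleep_for(std::chrono::nanoseconds { uNanoseconds - 2'000'000 });
        }

        while (std::chrono::steady_clock::now() < deadline)
        {
            _mm_pause();
            std::this_thread::yield();
        }
    }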
53df1ee81d
[*] Work on AuConditionVariable.NT some more
...
[*] Fix high CPU usage regression in 6af9940b
2023-05-30 12:53:26 +01:00
f842573352
[*] copy/pasted function parity
2023-05-08 15:21:15 +01:00