Commit Graph

394 Commits

SHA1 Message Date
a96788623f [*] Fix Linux build warning and a not-quite-complete Linux IPC fix (a4f6db7ec9) 2023-09-17 18:36:20 +01:00
81775a76bf [*] Linux: build regressions 2023-09-15 00:10:40 +01:00
729c9f8508 [*] Ensure WaitFor always respects 0 timeouts, no matter the flags 2023-09-12 22:06:55 +01:00
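The zero-timeout guarantee above amounts to treating a wait with timeout 0 as a single non-blocking poll. A minimal sketch of that idea in portable C++ (the `Waitable` type and names are illustrative, not Aurora's API):

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Hypothetical waitable with a non-blocking try-acquire.
struct Waitable
{
    std::atomic<bool> bSignaled { false };

    bool TryAcquire()
    {
        return this->bSignaled.exchange(false);
    }
};

// uTimeoutMs == 0 is handled up front, so no spin or back-off flag can turn
// a poll into a blocking wait.
bool WaitFor(Waitable &waitable, unsigned uTimeoutMs)
{
    if (uTimeoutMs == 0)
    {
        return waitable.TryAcquire();
    }

    const auto deadline = std::chrono::steady_clock::now() +
                          std::chrono::milliseconds(uTimeoutMs);
    while (!waitable.TryAcquire())
    {
        if (std::chrono::steady_clock::now() >= deadline)
        {
            return false;
        }
        std::this_thread::yield();
    }
    return true;
}
```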
6ff27c6855 [*] This was bothering me - 4b0a7c65 cont 2023-09-12 21:56:58 +01:00
bf03124f92 [+] AuThreading::TryWait 2023-09-12 18:47:25 +01:00
8e54071d60 [-] Remove 2 year old 0.0 WaitFor back-off implementation 2023-09-12 18:30:45 +01:00
4b0a7c651a [*] Guess I should finalize this for Linux. The verbosity of writing a CAS in the wait loops is stupid if we aren't doing anything special with the bits. 2023-09-12 16:12:54 +01:00
403c186f0a [*] Improve NT semaphore: use a different internal API now. Might help uncontested servers with work queues using semaphores 2023-09-12 13:28:46 +01:00
d727859cc2 [*] Linux/Modern NT regression in new optimized event wait path 2023-09-10 23:06:58 +01:00
22efbff12f [*] 74dc6772 cont: this structure is going to be padded to 32 bits anyway; this makes the atomic operations easier 2023-09-10 18:10:36 +01:00
b539cfb353 [*] 74dc6772 cont: improvements 2023-09-10 15:04:32 +01:00
74dc6772b0 [+] Non-mutually exclusive binary semaphore / event wait path
[+] ThreadingConfig::gPreferFutexEvent
2023-09-10 14:50:59 +01:00
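As a rough illustration of the kind of wait path a flag like ThreadingConfig::gPreferFutexEvent could select on Linux, here is a sketch of a futex-backed binary event where Set() wakes every waiter (waiting is not mutually exclusive). The layout, names, and raw syscall usage are assumptions for the sketch, not Aurora's implementation:

```cpp
#include <atomic>
#include <climits>
#include <cstdint>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

struct FutexEvent
{
    std::atomic<std::uint32_t> uState { 0 }; // 0 = unset, 1 = set

    void Set()
    {
        if (this->uState.exchange(1, std::memory_order_release) == 0)
        {
            // Wake every thread parked on the state word.
            ::syscall(SYS_futex, &this->uState, FUTEX_WAKE_PRIVATE, INT_MAX,
                      nullptr, nullptr, 0);
        }
    }

    void Wait()
    {
        while (this->uState.load(std::memory_order_acquire) == 0)
        {
            // Sleep only while the word is still 0; a racing Set() makes the
            // kernel return immediately with EAGAIN.
            ::syscall(SYS_futex, &this->uState, FUTEX_WAIT_PRIVATE, 0,
                      nullptr, nullptr, 0);
        }
    }
};
```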
48dc2e790b [+] IEvent::TrySet()
[+] New atomic logic for AuEvent. With this change, I can stop slandering it as the "shit primitive."
(it's still not the best it could be, but it's an improvement over what I had before)
2023-09-10 14:04:00 +01:00
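One plausible shape for a TrySet() on top of atomic state, shown only as a sketch (the real IEvent semantics may differ): a single compare-exchange tells the caller whether it was the thread that performed the unset-to-set transition.

```cpp
#include <atomic>
#include <cstdint>

struct AtomicEvent
{
    std::atomic<std::uint32_t> uSignaled { 0 };

    // Returns true only for the caller that transitioned the event 0 -> 1.
    bool TrySet()
    {
        std::uint32_t uExpected = 0;
        return this->uSignaled.compare_exchange_strong(
            uExpected, 1, std::memory_order_acq_rel, std::memory_order_acquire);
    }
};
```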
1a4a4ad863 [*] Added missing this->s 2023-09-09 19:46:08 +01:00
dfe44317a0 [*] SMT Yield: minor branch added to SMT Yield 2023-09-09 18:09:22 +01:00
88355932c1 [*] Optimize thread configurations to be unpacked from the bitmap once at startup and during reconfigure, as opposed to ad hoc 2023-09-09 17:37:14 +01:00
ca2f8fea71 [*] Mitigate Kernel32 and Rtl mixing 2023-09-09 15:29:48 +01:00
1fa063e19f [*] Why am I calling libm/CRT from here? How stupid am I? 2023-09-09 15:16:06 +01:00
4ad70cadb4 [*] optimization: cea33621 cont 2023-09-09 14:38:02 +01:00
cea3362186 [*] Finally fixed an old regression: RWLock is back to being write-biased to prevent forever-read conditions 2023-09-09 13:03:02 +01:00
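Write bias here means new readers stand down as soon as a writer is pending, so a continuous stream of readers cannot hold the lock forever. A simplified spin-only sketch of that shape (not Aurora's RWLock, which also has sleep/wake paths):

```cpp
#include <atomic>
#include <thread>

struct WriteBiasedState
{
    std::atomic<int> iReaders { 0 };
    std::atomic<int> iWritersPending { 0 };
    std::atomic<bool> bWriterActive { false };

    void LockRead()
    {
        for (;;)
        {
            // Bias: back off as soon as any writer is pending or active.
            if (this->iWritersPending.load() == 0 && !this->bWriterActive.load())
            {
                this->iReaders.fetch_add(1);
                if (this->iWritersPending.load() == 0 && !this->bWriterActive.load())
                {
                    return;
                }
                this->iReaders.fetch_sub(1); // lost the race against a writer
            }
            std::this_thread::yield();
        }
    }

    void UnlockRead()
    {
        this->iReaders.fetch_sub(1);
    }

    void LockWrite()
    {
        this->iWritersPending.fetch_add(1);
        bool bExpected = false;
        while (!this->bWriterActive.compare_exchange_weak(bExpected, true))
        {
            bExpected = false;
            std::this_thread::yield();
        }
        // Exclusive slot held; wait for in-flight readers to drain.
        while (this->iReaders.load() != 0)
        {
            std::this_thread::yield();
        }
        this->iWritersPending.fetch_sub(1);
    }

    void UnlockWrite()
    {
        this->bWriterActive.store(false);
    }
};
```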
109b0cff3f [*] ...and the same applies to RWLock
(instead, it's: please don't use the public API instead of the internal NT APIs)
2023-09-09 12:39:47 +01:00
9601fcfd39 [*] NT AuThreadPrimitives: stop using these AuProcAddresses directly 2023-09-09 12:29:43 +01:00
8e59300395 [*] Another broken Linux path 2023-09-09 12:17:40 +01:00
36a72228db [*] Cleanup/formatting of SMT yields 2023-09-06 17:01:01 +01:00
f53508baa9 [*] Unify both SMT subloops 2023-09-04 23:03:08 +01:00
1c80e4910b [*] RWLock: another optimization for embedded and win7 targets 2023-09-03 13:35:12 +01:00
1479bcaa22 [*] bonk 2023-09-02 21:24:37 +01:00
c2a6bd92fa [*] Minor optimization in a shit primitive 2023-09-02 19:09:04 +01:00
0838373410 [*] NT Optimization: more aggressive semaphores to prevent atomic failures (perhaps this could be made to account for weak exchanges under different archs) 2023-09-02 19:05:07 +01:00
cc6e0358fa [*] NT: further optimizations to solve CPU usage regressions 2023-09-02 16:11:06 +01:00
9fbdafea74 [*] x86_64: use RDTSC for more deterministic back-off durations
Well, sort of. It's more likely to be referenced against the exact frequency stated in the hard-coded CPUID vendor string.
2023-09-02 14:37:07 +01:00
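For context, a TSC-referenced back-off looks roughly like the following; the 3.0 GHz calibration constant is a placeholder assumption, whereas the commit above ties the reference frequency to what CPUID advertises.

```cpp
#include <cstdint>
#if defined(_MSC_VER)
    #include <intrin.h>
#else
    #include <x86intrin.h>
#endif

// Spin for approximately uNanoseconds using the time-stamp counter.
inline void SpinForApproxNS(std::uint64_t uNanoseconds)
{
    const double dTicksPerNs = 3.0; // assumption: 3.0 GHz invariant TSC
    const std::uint64_t uStart = __rdtsc();
    const auto uTicks = static_cast<std::uint64_t>(dTicksPerNs * uNanoseconds);

    while (__rdtsc() - uStart < uTicks)
    {
        _mm_pause(); // be polite to the sibling SMT thread
    }
}
```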
de25694416 [*] bonk (use AuAXXX atomics) 2023-09-02 04:55:43 +01:00
a20e9b4954 [*] Win10/Linux regression in RWLock (again) 2023-08-31 18:41:18 +01:00
dd655ad3e0 [*] Linux unaligned signal fix 2023-08-31 18:41:10 +01:00
affe4cc496 [*] RWLock: simplify writersPending_ guard
[*] Fix RWLock: I may have messed up the new fast path timeouts in RWLock
[*] RWLock: refactoring/improvements
2023-08-30 16:11:54 +01:00
cf118d0b4b [*] Minor RW lock tweaks 2023-08-30 14:57:13 +01:00
3503d0ec68 [+] Added Linux signal configuration and a separate LinuxConfig type
[*] Fix Linux regressions in previous NT commit
2023-08-29 03:11:28 +01:00
fef6eac859 [*] More Linux tweaks 2023-08-28 19:13:18 +01:00
0d759f85f8 [*] Linux/Clang fixerinos/improvements 2023-08-28 16:35:32 +01:00
55c02d4aa0 [*] Tweak default thread config
[*] Fix regressions
2023-08-28 11:48:13 +01:00
b2e1df8f72 [*] Annoying Linux checks 2023-08-27 21:35:40 +01:00
97296d1fe9 [*] ThreadingConfig::bPreferEnableAdaptiveSpin 2023-08-27 20:26:36 +01:00
8fe2619673 [*] Rework SMT yielding 2023-08-27 19:56:22 +01:00
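The general pattern behind an SMT-friendly yield is to burn a bounded number of pause instructions (handing execution resources to the sibling hardware thread) before deferring to the scheduler. A hedged sketch, with the budget and predicate purely illustrative:

```cpp
#include <cstdint>
#include <thread>
#if defined(_MSC_VER)
    #include <intrin.h>
#else
    #include <immintrin.h>
#endif

template <typename Predicate>
bool SmtYieldUntil(Predicate predicate, std::uint32_t uSpinBudget)
{
    for (std::uint32_t i = 0; i < uSpinBudget; i++)
    {
        if (predicate())
        {
            return true;
        }
        _mm_pause(); // yield execution resources to the SMT sibling
    }

    // Spin budget exhausted: let the OS run something else.
    std::this_thread::yield();
    return predicate();
}
```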
5cf7533eab [*] Linux and UNIX QOL 2023-08-27 12:42:10 +01:00
3e5aa1aff0 [*] Simplified lines of code: shared pointer init 2023-08-26 19:02:14 +01:00
7680a86d5a [*] ...same applies to these 2023-08-26 18:46:00 +01:00
610f2c73a0 [*] Optimize >4 thread wakeups on <= Win7, under Semaphore 2023-08-26 18:08:33 +01:00
346a9f3bde [*] More aggressively wake up reorder prone (unlikely) condvars under broadcast (unlikely) 2023-08-26 15:56:59 +01:00
3ca8de022e [*] Fix issues related to inconsistent behavior across GNU/Linux platforms
2023-08-26 15:04:48 +01:00
4a73f7250f [*] Another uniproc test 2023-08-25 12:42:31 +01:00
337df8040c [*] Move uniprocessor check 2023-08-24 15:12:49 +01:00
100964ac87 [*] NT Semaphore optimization 2023-08-24 13:55:31 +01:00
935d1b6ab2 [*] RWLock: added another SignalManyWriter condition to ensure upgrades never get missed 2023-08-24 11:45:15 +01:00
1ecd46ddbf [*] Ad-hoc system thread signals 2023-08-24 10:43:33 +01:00
937f123dad [*] RWLock: Futex waiter path: force read semantics 2023-08-24 10:20:43 +01:00
49ced3fcc6 [*] Always attach to the main thread context on init 2023-08-24 10:17:53 +01:00
31319981ba [*] Two trivial Linux tweaks 2023-08-23 23:45:26 +01:00
3898a41198 [*] Adopt new ROXTL atomics
...AuAtomicLoad + AuAtomicClearU8Lock
2023-08-23 22:03:00 +01:00
293a8ddd66 [*] Simplify when to call _beginthreadex (we probably shouldn't) 2023-08-23 19:11:21 +01:00
61eaa701b7 [*] MSVC compilation regression
(there's no realistic way Vista or XP will run with >= 32 threads; you can barely expect the ACPI and boot video drivers to work.)
2023-08-23 19:05:32 +01:00
10e95fb5ae [*] Begin reworking Linux/POSIX PRIOs 2023-08-23 18:17:15 +01:00
9c04b31da3 [*] Don't warn on XP/Vista 2023-08-23 17:09:19 +01:00
921fee1b8d [+] IAuroraThread::SetNoUnwindTerminateExitWatchDogTimeoutInMS 2023-08-23 17:01:56 +01:00
412630077d [+] ThreadingConfig::bPreferWaitOnAddressAlwaysSpin 2023-08-23 16:45:08 +01:00
524365b5da [*] Handhold non-MSVC compilers 2023-08-23 16:38:22 +01:00
0c5d140bd4 [*] Autoset bPlatformIsSMPProcessorOptimized to false on singlethreaded systems 2023-08-23 16:03:22 +01:00
d79cb4f3ca [*] RWLock: WakeOnAddress optimization on wait to prevent mutex congestion on modern OSes 2023-08-23 15:37:55 +01:00
a4d317a48d [*] Reoptimize semaphore wait paths 2023-08-22 15:28:09 +01:00
7ad725ca04 [+] Global adaptive spin 2023-08-22 13:01:06 +01:00
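Global adaptive spin generally means one process-wide spin budget that grows when spinning pays off and shrinks when it does not. A toy version under those assumptions (constants and policy are invented for the sketch):

```cpp
#include <algorithm>
#include <atomic>
#include <cstdint>

static std::atomic<std::uint32_t> gSpinBudget { 64 };

template <typename Predicate>
bool AdaptiveSpin(Predicate predicate)
{
    const std::uint32_t uBudget = gSpinBudget.load(std::memory_order_relaxed);

    for (std::uint32_t i = 0; i < uBudget; i++)
    {
        if (predicate())
        {
            // Spinning succeeded: allow slightly longer spins next time.
            gSpinBudget.store(std::min<std::uint32_t>(uBudget + 8, 4096),
                              std::memory_order_relaxed);
            return true;
        }
    }

    // Spinning failed: shrink the shared budget so threads sleep sooner.
    gSpinBudget.store(std::max<std::uint32_t>(uBudget - 8, 16),
                      std::memory_order_relaxed);
    return false;
}
```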
ccfd0fafab [*] Why must all languages be garbage at expressing the life-span of constness?
This is const-correct, as in, we don't expect to modify the pointer; you don't need to be a writer.
This was const-correct, as in, this field had better be a volatile block of memory you expect to update - please don't make any ill-advised assumptions based on it being "const," compiler.
2023-08-22 11:08:56 +01:00
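A tiny illustration of the distinction being vented about above: the waiter's pointer is const because the waiter never writes through it, yet the pointed-to memory is expected to change underneath, so reads must not be assumed stable. The function name is made up for the example.

```cpp
#include <cstdint>

// WaitOnAddress-style comparison: the API only reads through the pointer
// (hence const), but another thread is expected to mutate the memory
// (hence volatile - the compiler must reload it on every call).
bool HasChanged(const volatile std::uint32_t *pAddress, std::uint32_t uOld)
{
    return *pAddress != uOld;
}
```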
3747fb7c6f [+] ThreadingConfig::uUWPNanosecondEmulationMaxYields
[+] ThreadingConfig::bUWPNanosecondEmulationCheckFirst
2023-08-22 09:56:32 +01:00
76ac770674 [*] Update a handful of condvar CASes and account for laziness along the way 2023-08-22 09:44:54 +01:00
cd362db7af [*] Deaf, dumb, and blind 2023-08-21 19:20:52 +01:00
fa170c413d [*] More compact Linux primitives 2023-08-21 19:17:05 +01:00
5cc811be19 [*] More compact Win32 primitives! 2023-08-21 17:34:24 +01:00
869512e651 [*] Optimization: Remove two stupid branches in RWLock 2023-08-21 16:33:32 +01:00
f847ab049a [+] ThreadingConfig::bPreferRWLockReadLockSpin 2023-08-21 16:25:51 +01:00
e1f384de2e [*] RWLock: improper upgrade handshake
The switch over to two condvars still doesn't seem right
2023-08-21 16:20:52 +01:00
681c4b9158 [*] RWLock: revert this branch to checking for 0 and 1 remaining readers
[*] Formatting
2023-08-21 16:08:30 +01:00
e2909ebe74 [*] RWLock: Upgrade UpgradeReadToWrite sleep path 2023-08-21 16:02:55 +01:00
68b4fe5f8b [*] RWLock: not implementing LockAbsMS and LockAbsNS can hurt the hotpath 2023-08-21 15:50:45 +01:00
d1b1bfb221 [*] Caught an issue with RWLock: can't re-enter an unlocked reentrance mutex. 2023-08-21 15:39:56 +01:00
a60a1b3088 [*] Don't assume these condvar paths can't underflow 2023-08-21 00:25:29 +01:00
b8d4e02ab5 [+] Aurora::Threading::GetThreadingConfig
[+] Aurora::Threading::SetThreadingConfig
[*] Save a few bytes in Aurora::ThreadingConfig
2023-08-20 16:23:03 +01:00
08f30017b8 [*] regression: b236469d06 cont 2023-08-20 13:41:53 +01:00
71617ca66e [*] Format SMT spin 2023-08-20 09:50:41 +01:00
f1a08d25e7 [+] AuUInt32 GetTotalSpinCountTimeout()
[*] Fixup FutexWaitable
2023-08-20 09:47:31 +01:00
b236469d06 [*] Made WakeOnAddress trigger pointers always const 2023-08-19 20:37:24 +01:00
2fae266876 [*] Fix WakeOnAddress constness of the comparison argument 2023-08-19 19:48:24 +01:00
8874fd9810 [*] Cache Win8+ check 2023-08-19 18:49:16 +01:00
19224d2eed [*] Default back to zero. Do not throw off other threads if only used once 2023-08-19 18:39:13 +01:00
ab4971ef9c [+] Linux threading options 2023-08-19 18:33:54 +01:00
8bf6bdd963 [+] More threading options
[+] AuThreading::SetSpinCountTimeout
[+] AuThreading::SetThreadLocalAdditionalSpinCountTimeout
2023-08-19 18:16:48 +01:00
7fb8b89def [*] Some unwanted indirect branching is still bleeding in; mark more primitive classes as final 2023-08-19 11:41:37 +01:00
92ebafecab [*] Suppress a number of noisy clang warnings 2023-08-18 23:32:46 +01:00
4240966512 [*] Two trivial changes (not fixing or improving anything) 2023-08-18 15:53:38 +01:00
7dd6145dc1 [*] Always use unsigned integers under the semaphore classes 2023-08-18 15:26:31 +01:00
2a1556d80c [*] Optimize Linux semaphore 2023-08-17 23:06:02 +01:00
04956bedba [*] Shorten the expected overhead of some Linux primitives 2023-08-13 20:09:58 +01:00
6ec2fcc4b6 [*] Added timeout awareness in ConditionEx; returns false on timeout
[*] Updated Linux SOO sizes
2023-08-12 11:18:19 +01:00
7962772c62 [+] Added Linux-specific condvars and condmutex 2023-08-12 11:11:12 +01:00
737d3bb4d6 [+] AuProcAddresses.Linux.* 2023-08-12 10:16:20 +01:00
1f173a8799 [*] Begin resolving 8 months of Linux neglect 2023-08-11 16:51:42 +01:00
9a2e5674e8 [*] RWLock improvements 2023-07-30 11:23:40 +01:00
c889af13e5 [*] bNoThreadNames option wasn't respected 2023-07-30 10:00:54 +01:00
ceb5b2961e [+] FALLBACK_WAKEONADDRESS_SUPPORTS_NONEXACT_MATCHING
[+] ThreadingConfig::bPreferEmulatedWakeOnAddress
2023-07-30 09:52:41 +01:00
c306c12763 [*] Improve WakeOnAddress by hash binning by kDefaultWaitPerProcess, instead of the previous iteration, before the BST or HashTree lookup 2023-07-30 09:34:39 +01:00
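Hash binning here means the emulation layer picks one of kDefaultWaitPerProcess buckets by hashing the wait address, so a wake-up only touches one bucket rather than a global structure. A minimal sketch of the bucket selection (the bucket count, hash, and bucket contents are assumptions):

```cpp
#include <condition_variable>
#include <cstddef>
#include <cstdint>
#include <mutex>

static constexpr std::size_t kDefaultWaitPerProcess = 128; // illustrative

struct WaitBucket
{
    std::mutex lock;
    std::condition_variable cv;
};

static WaitBucket gWaitBuckets[kDefaultWaitPerProcess];

// Waiters and wakers for the same address always agree on the same bucket.
inline WaitBucket &BucketOf(const void *pAddress)
{
    const auto uAddress = reinterpret_cast<std::uintptr_t>(pAddress);
    return gWaitBuckets[(uAddress >> 4) % kDefaultWaitPerProcess];
}
```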
5e94be7487 [*] ConditionEx::WaitForSignalRelativeNanoseconds -> WaitForSignalNS 2023-07-29 09:52:59 +01:00
b411c710d1 [+] IConditionVariable::WaitForSignalNS 2023-07-25 15:59:04 +01:00
76262c2e3e [*] Trivial Win8+ condvar broadcast improvement (pragmatism) 2023-07-25 15:28:02 +01:00
1a383f8157 [*] Two annoying formatting issues in RWLock 2023-07-25 12:57:47 +01:00
66cfbb5351 [+] FlexibleConditionVariable::WaitForSignalRelativeNanoseconds(AuUInt64 uRelativeNanoseconds)
[+] FlexibleConditionVariable::WaitForSignalRelativeNanoseconds(Threading::IWaitable *pWaitable, AuUInt64 uRelativeNanoseconds)
[*] Refactor FlexibleConditionVariable
2023-07-25 12:38:49 +01:00
dab6e9caee [*] Refactor: FeaturefulCondition -> FlexibleConditionVariable
[+] SOO for FlexibleConditionVariable
2023-07-25 12:27:08 +01:00
daf6108902 [*] Removed ill-conceived code inspired by late-'90s-to-2003-era Microsoft engineering
This had been bothering me
2023-07-25 11:57:22 +01:00
b48966a39e [*] Caught uninitialized member 2023-07-25 02:12:19 +01:00
d45dc977d8 [*] NT: Further reduce Win32 link-time requirements cont (1948dd0c) 2023-07-24 12:48:42 +01:00
788dde684b [*] Windows Vista, UWP, and Windows 11: Move Windows 7 and 8 SetThreadGroupAffinity symbol from the IAT to AuProcAddresses.NT.[c/h]pps object
[*] Update the README's support table
2023-07-14 16:33:26 +01:00
8a4fc0d9c3 [*] Amend runtime config typo: Prefer*
[-] Redundant AuTime header (ExtendedTimer.hpp)
2023-07-13 19:50:18 +01:00
8bf351e007 [*] NT Win8+ fix: improper condvar wake up
[*] Fix kThreadIdAny regression
2023-07-11 00:54:54 +01:00
c90a13ad95 [*] Minor NT optimization: move branch 2023-07-10 20:06:18 +01:00
a977f0d1b5 [*] NT: backport unix optimization - no spin during spurious wake up 2023-07-10 13:12:17 +01:00
536522743a [*] Move this branch in NTs condvar 2023-07-10 12:31:06 +01:00
8c84ecf892 [*] Win8+: Experimental primitive improvements by taking notes from Win7 cycle pinching
[*] +regression in condvar
2023-07-10 01:13:55 +01:00
355f7db711 [*] Forgot to reintroduce these: 75b71275 (cont) 2023-07-09 22:34:31 +01:00
75b71275e7 [*] Made past and present NT condvar optional spin steps configurable via the runtime config 2023-07-09 20:52:31 +01:00
03dbfeefe1 [*] Enhance Windows 7 scheduling resolution 2023-07-09 12:56:35 +01:00
627bdddfdc [*] Ensure AuProcAddresses.NT.* is used for all dynamically linked symbols 2023-07-09 10:03:29 +01:00
94e2f7924e [-] More redundant code from WakeOnAddress 2023-07-06 09:47:46 +01:00
b90feae7d0 [-] Remove preemptive POSIX optimization
This'll just get in the way of Linux optimizations for the sake of trying to hit the correct yield period without a spurious wake up - all in one shot.
2023-07-06 09:41:09 +01:00
e2758ea243 [-] Remove unused code from WakeOnAddress 2023-07-06 09:37:58 +01:00
99e8c68c62 [*] Update a Win8+ sync branch; can back out earlier 2023-07-05 19:32:01 +01:00
e2accb900b [*] Begin workaround for use-after-free of thread-locals; WaitOnAddress emulation 2023-07-05 18:25:07 +01:00
894df69fe0 [*] remove redundant branch from sync primitive
[*] optimize event
2023-06-28 02:24:53 +01:00
a454a2d71e [*] Sync primitive improvements
[*] Reverted a change for UNIX: always use the never-spin acquire under the observational lock
[*] Decreased common case of syscall operations under Linux and UNIX
[*] UNIX signaling: prevent waits during condvar wake-up by unlocking before the signal
[*] NT no wait: semaphores must not spin under lock
2023-06-26 08:59:49 +01:00
fa90463a73 [*] I'm not sure why this was written like this 2023-06-23 22:36:13 +01:00
0d05fd3d33 [*] Minor mostly unnoticeable primitive improvements 2023-06-23 21:37:04 +01:00
2d6dca4e21 [+] 32bit SOO sizes for sync primitives under x86_32/MSVC
[*] Optimize the write-biased reentrant read-write lock down to 88 bytes on MSVC x64
2023-06-17 17:08:58 +01:00
451b9025c0 [*] Fix major recent regressions
amend: 48075bfd
amend: 25b933aa
amend: f50067e6 (to be overwritten)
et al
2023-06-17 15:12:16 +01:00
48075bfda7 [*] cleanup: added gUseNativeWaitSemapahore 2023-06-16 00:06:32 +01:00
25b933aafa [*] Fixed regression in RWLock size without sacrificing on features
(TODO: I would like to WoA optimize it for modern oses at some point)
2023-06-16 00:02:42 +01:00
e11028bb03 [*] Timeout division: ensure this never deadlocks 2023-06-15 21:15:58 +01:00
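A guess at the hazard being fixed above, sketched under the assumption that a zero timeout means "wait forever" to the underlying primitive: dividing a small remaining budget must never round down to zero, or a bounded wait silently becomes unbounded.

```cpp
#include <cstdint>

// Split a remaining timeout into uParts slices without ever producing 0,
// which the (assumed) underlying wait would treat as infinite.
// uParts is assumed to be non-zero.
inline std::uint64_t DivideTimeoutNS(std::uint64_t uRemainingNs,
                                     std::uint64_t uParts)
{
    const std::uint64_t uSlice = uRemainingNs / uParts;
    return uSlice ? uSlice : 1;
}
```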
74b813f051 [*] Bloat RWLock by adding a separate yield queue for writers (we were already writer-biased)
This will help us reduce CPU usage and latency at the cost of 32 bytes.

We are now hopelessly oversized: 136 bytes for a single primitive. 104 was barely passable.
2023-06-15 20:54:19 +01:00
d389f9dda3 [*] Re-optimize the primitives for Windows 8+ on top of a Windows XP+ core 2023-06-15 20:52:28 +01:00
28201db2d7 [+] Improve WoA on Windows 8+
[+] AuThreading::WaitOnAddressSteady
2023-06-15 20:44:27 +01:00
17c50eff64 [*] Fix old UNIX sync regressions
Do not hold the switching lock while spinning, as originally written and intended
2023-06-13 12:05:55 +01:00
b91ce52195 [*] Not sure how WOA regressed 2023-06-12 19:35:54 +01:00
1a8acbdde5 [+] By-raw pointer WOA lists
(also they are now fairer)
[+] Steps towards future proofing NT (not the future proofing itself)
2023-06-12 18:31:44 +01:00
50413f36e5 [*] keyed events should yield indefinitely in their failure path
(amended one day later: removed one of the fixes; this is going to apply to just one place for now)
2023-06-12 15:51:54 +01:00
123e34d224 [*] Been meaning to remove this debug preemptive wake-up for a while 2023-06-11 21:35:47 +01:00
1bda1f469f [*] Simplify WakeOnAddress emulation
Windows 7 reports an improved time to wake, but everything still averages about the same.
2023-06-11 19:13:37 +01:00