Commit Graph

28 Commits

SHA1 Message Date
3e77e61914 [*] As I said, blame the clang and gcc developers for this.
// Even though clang (and gcc) ship these intrinsics, you must enable the feature globally, unlike SSE, for some reason.
// We can do runtime branching around SSE4 paths with no problem. Why am I suddenly being gated out of intrinsics I am electing to use by hand?
// No, the compiler may not use these in its baseline feature set (or include them in STL locks). Yes, I still want them.
// If these end up being wrong, blame clang and GNU, not me.

No, I will not raise our requirements above Ivy Bridge; no, I will not expose feature macros to the STL (et al.) that would raise our baseline to modern Intel and AMD parts.
2024-08-19 08:05:01 +01:00
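Editor's note: a minimal sketch of the pattern this commit is arguing about, namely using the WAITPKG intrinsics behind a runtime CPUID check on GCC/Clang without enabling the feature for the whole build. The per-function target attribute shown here is the commonly suggested workaround; whether it satisfies the constraints the author hit (e.g. inside STL lock internals) is not shown in this log. Names such as HasWaitPkg, TPauseUntil, and BackOff are illustrative, not part of this repository.

#include <cpuid.h>
#include <immintrin.h>
#include <x86intrin.h>
#include <cstdint>

static bool HasWaitPkg()
{
    unsigned int eax {}, ebx {}, ecx {}, edx {};
    // WAITPKG is reported in CPUID.(EAX=7, ECX=0):ECX bit 5
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
    {
        return false;
    }
    return (ecx & (1u << 5)) != 0;
}

// The target attribute enables the intrinsic for this one function,
// instead of raising the feature baseline of the whole translation unit.
__attribute__((target("waitpkg")))
static void TPauseUntil(std::uint64_t uAbsTscDeadline)
{
    _tpause(0 /* request C0.2 */, uAbsTscDeadline);
}

static void BackOff()
{
    if (HasWaitPkg())
    {
        TPauseUntil(__rdtsc() + 2000); // ~2k TSC ticks; this budget is a guess
    }
    else
    {
        _mm_pause(); // SSE2-baseline fallback for Ivy Bridge-era parts
    }
}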
67894b399b [*] Revert the clang 'optimization' because the compiler will not honour it.
Worse, I am going to remove all clang-related checks from our global build_scripts (8b00dc69fceea62ecbbf5a21255a41e2f23921a4), because they admit they cause a 2x slowdown.
2024-05-13 23:43:19 +01:00
c3f7e625ba [*] Clang has check_stack; strict_gs_check is MSVC-specific 2024-05-10 22:37:51 +01:00
631624dc55 [*] Fix Linux build regressions and shrink Linux RWLocks from 64 bytes to 48 2024-05-07 14:57:19 +01:00
8e1c74a5df [*] I swore I replaced this with a TPAUSE before
[*] ...and the docs aren't clear on whether this clock value is relative or absolute
2024-05-06 22:47:45 +01:00
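Editor's note on the relative-vs-absolute question: per my reading of the Intel SDM, TPAUSE/UMWAIT take an absolute TSC deadline in EDX:EAX, so a relative back-off has to be added to the current counter first. A hedged sketch (function name and tick budget are illustrative; requires WAITPKG as in the note above):

#include <immintrin.h>
#include <x86intrin.h>
#include <cstdint>

// TPAUSE waits until the TSC reaches the given absolute value (or an
// OS-imposed cap), so convert a relative delay into a deadline first.
__attribute__((target("waitpkg")))
static void RelativeTPause(std::uint64_t uRelativeTicks)
{
    _tpause(0, __rdtsc() + uRelativeTicks);
}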
f3ba901f71 [+] Zen 3 optimizations on top of the Alder Lake ones
[*] Minor Alder Lake adjustments
2024-05-05 19:42:10 +01:00
459a9a789b [*] Switch the C0.2 and C0.1 power states around 2024-05-03 15:52:50 +01:00
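Editor's note, hedged against my reading of the WAITPKG documentation: bit 0 of the TPAUSE/UMWAIT control word selects the requested power state, 0 for C0.2 (deeper, slower wake-up, larger benefit to the SMT sibling) and 1 for C0.1 (lighter, faster wake-up). The constant names below are illustrative, not the repository's own.

static constexpr unsigned int kTpauseC02 = 0; // bit 0 = 0: C0.2, deeper, slower wake-up
static constexpr unsigned int kTpauseC01 = 1; // bit 0 = 1: C0.1, lighter, faster wake-up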
134816e128 [*] Optimize the primitives' SMTYield for Alder Lake+ user-space, BIOS-ring MWAIT, and AArch64 2024-05-03 12:22:38 +01:00
0164919cd9 [+] while_bc 2024-04-13 22:49:05 +01:00
62b6fa20f8 [*] Update the copyright header of most of the primitives
[*] Fix generic mutex abs yield always returning true
2024-01-29 14:48:04 +00:00
0d6d073b85 [*] No way should we be using DWORDs here 2024-01-07 02:26:34 +00:00
49a6173011 [+] Improved SMT yielding
[+] Clocks.aarch64.[h/c]pp
2024-01-02 05:54:22 +00:00
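Editor's note: a plausible shape for the AArch64 clock read that Clocks.aarch64.[h/c]pp suggests, assuming the generic timer (CNTVCT_EL0 / CNTFRQ_EL0) is what's being read; the function names are illustrative only.

#include <cstdint>

// Read the AArch64 virtual counter (monotonic tick count).
static inline std::uint64_t ReadArmCounter()
{
    std::uint64_t uValue;
    asm volatile("mrs %0, cntvct_el0" : "=r"(uValue));
    return uValue;
}

// Read the counter frequency in Hz; divide counter deltas by this to get seconds.
static inline std::uint64_t ReadArmCounterFreq()
{
    std::uint64_t uHz;
    asm volatile("mrs %0, cntfrq_el0" : "=r"(uHz));
    return uHz;
}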
e071b3d509 [+] WaitOnAddress[Steady](..., AuOptional<bool> optAlreadySpun = {}) arguments
[+] ...slight UWP optimization?
[*] Lift WoA limitation
2023-10-30 15:29:20 +00:00
5a9292ad1a [*] ...yes 2023-09-19 01:38:16 +01:00
74dc6772b0 [+] Non-mutually exclusive binary semaphore / event wait path
[+] ThreadingConfig::gPreferFutexEvent
2023-09-10 14:50:59 +01:00
dfe44317a0 [*] SMT Yield: add a minor branch 2023-09-09 18:09:22 +01:00
88355932c1 [*] Optimize thread configurations to be unpacked from the bitmap once at startup and during reconfigure, as opposed to ad hoc 2023-09-09 17:37:14 +01:00
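Editor's note: a minimal sketch of the idea in the commit above, with invented names and an invented bit layout (the real bitmap is not shown in this log): decode the packed configuration into a plain struct once at startup/reconfigure, so hot paths read cached fields instead of re-testing bits ad hoc.

#include <cstdint>

struct UnpackedThreadConfig
{
    bool bPreferFutexEvent;
    bool bEnableAdaptiveSpin;
    std::uint32_t uSpinCount;
};

static UnpackedThreadConfig gUnpackedConfig {};

// Called once at startup and again on reconfigure, not on every lock operation.
static void UnpackThreadConfig(std::uint64_t uBitmap)
{
    gUnpackedConfig.bPreferFutexEvent   = (uBitmap >> 0) & 1;
    gUnpackedConfig.bEnableAdaptiveSpin = (uBitmap >> 1) & 1;
    gUnpackedConfig.uSpinCount          = std::uint32_t((uBitmap >> 2) & 0xFFFF);
}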
36a72228db [*] Cleanup/formatting of SMT yields 2023-09-06 17:01:01 +01:00
f53508baa9 [*] Unify both SMT subloops 2023-09-04 23:03:08 +01:00
9fbdafea74 [*] x86_64: Use RDTSC for more deterministic back-off durations
Well, sort of. It's more likely to be referenced against the exact frequency stated in the hard-coded CPUID brand string.
2023-09-02 14:37:07 +01:00
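Editor's note: a hedged sketch of an RDTSC-bounded back-off of the sort this commit describes; the helper name and tick budget are illustrative, not the repository's code.

#include <immintrin.h>
#include <x86intrin.h>
#include <cstdint>

// Spin for a fixed number of TSC ticks rather than a fixed number of PAUSEs,
// so the back-off duration depends less on per-core PAUSE latency.
static void SpinForTicks(std::uint64_t uTicks)
{
    std::uint64_t const uDeadline = __rdtsc() + uTicks;
    while (__rdtsc() < uDeadline)
    {
        _mm_pause();
    }
}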
55c02d4aa0 [*] Tweak default thread config
[*] Fix regressions
2023-08-28 11:48:13 +01:00
8fe2619673 [*] Rework SMT yielding 2023-08-27 19:56:22 +01:00
d79cb4f3ca [*] RWLock: WakeOnAddress optimization on wait to prevent mutex congestion on modern OSes 2023-08-23 15:37:55 +01:00
7ad725ca04 [+] Global adaptive spin 2023-08-22 13:01:06 +01:00
71617ca66e [*] Format SMT spin 2023-08-20 09:50:41 +01:00
19224d2eed [*] Default back to zero. Do not throw off other threads if only used once 2023-08-19 18:39:13 +01:00
8bf6bdd963 [+] More threading options
[+] AuThreading::SetSpinCountTimeout
[+] AuThreading::SetThreadLocalAdditionalSpinCountTimeout
2023-08-19 18:16:48 +01:00
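Editor's note: the log only gives the names of these setters, not their parameters, so the call shape below is an assumption for illustration (a single spin-count argument, with the relevant Aurora header assumed to be included).

AuThreading::SetSpinCountTimeout(128);                     // process-wide spin budget (assumed unit: iterations)
AuThreading::SetThreadLocalAdditionalSpinCountTimeout(64); // extra budget for the calling thread only (assumed)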
6974c713f7 [+] Allocationless thread primitives
[*] Rename SMPYield to SMTYield
2023-03-21 03:19:22 +00:00