Mirror of https://sourceware.org/git/glibc.git (synced 2024-11-22 04:50:07 +00:00)

Commit 7a25d6a84d
The x86-specific versions of both pthread_cond_wait and pthread_cond_timedwait have (in their fall-back-to-futex-wait slow paths) calls to __pthread_mutex_cond_lock_adjust followed by __pthread_mutex_unlock_usercnt, which load the parameters before the first call but then assume that the first parameter, in %eax, will survive unaffected. This happens to have been true before now, but %eax is a call-clobbered register, and this assumption is not safe: it could change at any time, at GCC's whim, and indeed the stack-protector canary-checking code clobbers %eax while checking that the canary is uncorrupted.

So reload %eax before calling __pthread_mutex_unlock_usercnt. (Do this unconditionally, even when stack protection is not in use, because it's the right thing to do, it's a slow path, and anything else is dicing with death.)

	* sysdeps/unix/sysv/linux/i386/pthread_cond_timedwait.S: Reload
	call-clobbered %eax on retry path.
	* sysdeps/unix/sysv/linux/i386/pthread_cond_wait.S: Likewise.
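The fix amounts to re-materializing the argument register between the two calls instead of assuming it survives the first one. The following is a minimal sketch of that pattern in i386 AT&T assembly, not the actual glibc source: the `dep_mutex` offset, the use of %ebx as the base register, and the %eax/%edx argument-register convention for these internal calls are assumptions made for illustration.

```asm
/* Sketch only (assumed offsets and register convention): %ebx is taken to
   point into the condvar, and dep_mutex(%ebx) to hold the mutex pointer
   that both helper calls expect in %eax.  */
	movl	dep_mutex(%ebx), %eax
	call	__pthread_mutex_cond_lock_adjust

	/* %eax is call-clobbered: the call above (and any stack-protector
	   canary check the compiler emits on its return path) may have
	   overwritten it.  Reload it rather than assuming it survived.  */
	movl	dep_mutex(%ebx), %eax
	xorl	%edx, %edx		/* second argument: zero, per this sketch */
	call	__pthread_mutex_unlock_usercnt
```

The point of the pattern is simply that anything held in a call-clobbered register must be reloaded after an intervening call; doing so unconditionally on this slow path costs one extra load and removes the dependence on the callee's (and the compiler's) behavior.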
Directory listing at this commit: aarch64, alpha, arm, generic, gnu, hppa, i386, ia64, ieee754, init_array, m68k, mach, microblaze, mips, nacl, nios2, nptl, posix, powerpc, pthread, s390, sh, sparc, tile, unix, wordsize-32, wordsize-64, x86, x86_64