The ia64_rse_is_rnat_slot function expects an unsigned pointer, but
we're passing in a signed pointer. The signedness doesn't matter here,
so convert it to unsigned.
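A minimal sketch of the sort of change involved (the helper below and
its name are illustrative only, not the actual glibc source):

    /* ia64_rse_is_rnat_slot() takes an unsigned long *, so cast the
       signed pointer at the call site; only the address matters, not
       the signedness of the pointee.  */
    static int
    slot_is_rnat (long *slot)          /* hypothetical helper */
    {
      return ia64_rse_is_rnat_slot ((unsigned long *) slot);
    }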
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
* sysdeps/unix/sysv/linux/bits/mman-linux.h (MAP_ANONYMOUS):
Allow definition via __MAP_ANONYMOUS.
* sysdeps/unix/sysv/linux/mips/bits/mman.h: Remove all defines
provided by bits/mman-linux.h and include <bits/mman-linux.h>.
(__MAP_ANONYMOUS): Define.
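The override works roughly like this (simplified sketch; the exact
header layout may differ):

    /* sysdeps/unix/sysv/linux/mips/bits/mman.h (sketch): define the
       MIPS-specific value, then pull in the generic definitions.  */
    #define __MAP_ANONYMOUS 0x0800
    #include <bits/mman-linux.h>

    /* bits/mman-linux.h (sketch): honour an architecture override for
       MAP_ANONYMOUS, falling back to the common value otherwise.  */
    #ifdef __MAP_ANONYMOUS
    # define MAP_ANONYMOUS __MAP_ANONYMOUS
    #else
    # define MAP_ANONYMOUS 0x20
    #endif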
For arm this makes no difference: the result is bit-for-bit identical.
For thumb this results in smaller encodings. Perhaps it ought not to,
and this is in fact an assembler bug, but I also think the new form is
clearer.
Factor out the sequence needed to call kuser_get_tls, as we can't
play subtract-into-pc games in thumb mode. Prepare for hard-tp,
pulling the save of LR into the macro.
There are several places in which we access negative offsets from
the thread-pointer, but thumb2 only supports positive offsets in
memory references.
Avoid duplicating the rather large macros in which these references
are embedded by abstracting out the operation.
Some routines are written with complex LDM/STM insns that cannot be
used in thumb mode, or are highly conditional, requiring excessive
IT insns.
When a future patch goes in to enable thumb2 by default, this marker
will be used to override that default.
This feature is specifically for the C++ compiler, to offload calling
thread_local object destructors on thread or program exit to glibc.
This is to overcome the possible complication of destructors of
thread_local objects getting called after the DSO in which they're
defined is unloaded by the dynamic linker. The DSO is marked as
'cannot be unloaded' while it has a constructed thread_local object
and is unmarked again once all the constructed thread_local objects
defined in it have been destroyed.
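Roughly how the hook is used (tls_obj, tls_obj_dtor and
construct_tls_obj are made-up names for illustration; the prototype is
the one this feature adds):

    /* Prototype of the new hook: run DTOR on OBJ at thread exit, and
       keep the DSO identified by DSO_SYMBOL loaded until then.  */
    extern int __cxa_thread_atexit_impl (void (*dtor) (void *), void *obj,
                                         void *dso_symbol);

    extern void *__dso_handle;

    static __thread int tls_obj;

    static void
    tls_obj_dtor (void *obj)
    {
      /* Destroy the object pointed to by OBJ here.  */
      (void) obj;
    }

    /* Roughly what the C++ front end arranges after constructing a
       thread_local object.  */
    void
    construct_tls_obj (void)
    {
      tls_obj = 42;   /* "construct" the object...  */
      /* ...then register its destructor with glibc.  */
      __cxa_thread_atexit_impl (tls_obj_dtor, &tls_obj, &__dso_handle);
    }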
There hasn't been a use for lll_unlock_wake_cb since it was
removed globally on 2007-05-29. This patch removes the
function from hppa's lowlevellock.[ch] implementation.
ARM now supports loading unmarked objects from
the dynamic loader cache. Unmarked objects can
be used with the hard-float or soft-float ABI.
We must support loading unmarked objects during
the transition period from a binutils that does
not mark objects to one that does mark them with
the correct ELF flags.
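The idea, as a hedged sketch rather than the actual dl-cache code
(WANTED_FLOAT_ABI and float_abi_compatible are made-up names):

    /* An object whose ELF header carries neither float-ABI flag is
       "unmarked" and may be used with either ABI; a marked object must
       match the ABI this loader was built for.  */
    #include <elf.h>

    #ifdef __ARM_PCS_VFP
    # define WANTED_FLOAT_ABI EF_ARM_ABI_FLOAT_HARD   /* hard-float build */
    #else
    # define WANTED_FLOAT_ABI EF_ARM_ABI_FLOAT_SOFT   /* soft-float build */
    #endif

    static int
    float_abi_compatible (Elf32_Word e_flags)
    {
      Elf32_Word abi = e_flags & (EF_ARM_ABI_FLOAT_HARD
                                  | EF_ARM_ABI_FLOAT_SOFT);
      return abi == 0 || abi == WANTED_FLOAT_ABI;
    }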
Signed-off-by: Carlos O'Donell <carlos@redhat.com>
That convention requires the instruction immediately preceding SYSCALL
to initialize $v0 with the syscall number. Then if a restart triggers,
$v0 will have been clobbered by the interrupted syscall and needs to be
reinitialized. The kernel decrements the PC by 4 before switching back
to user mode, so that $v0 is reloaded before SYSCALL is
executed again. This implies the place $v0 is loaded from must be
preserved across a syscall, e.g. an immediate, static register, stack
slot, etc.
The restriction was lifted with the Linux 2.6.36 kernel release and no
special requirements are placed around the SYSCALL instruction anymore;
however, we still support older kernel binaries.
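As a hedged illustration of the old convention (simplified from the
real MIPS syscall macros; sketch_getpid is a made-up name and the
clobber list is trimmed):

    /* $v0 is loaded from an immediate by the instruction immediately
       before SYSCALL, so a kernel that backs the PC up by 4 on restart
       re-executes the "li" and gets a fresh syscall number.  4020 is
       __NR_getpid in the o32 ABI.  */
    static inline long
    sketch_getpid (void)
    {
      register long v0 __asm__ ("$2");   /* $v0: syscall number / result */
      register long a3 __asm__ ("$7");   /* $a3: error flag on return */

      __asm__ __volatile__ (".set noreorder\n\t"
                            "li $2, 4020\n\t"
                            "syscall\n\t"
                            ".set reorder"
                            : "=r" (v0), "=r" (a3)
                            :
                            : "$8", "$9", "$10", "$11", "$12", "$13",
                              "$14", "$15", "$24", "$25",
                              "hi", "lo", "memory");
      return a3 == 0 ? v0 : -v0;
    }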
Previously, we would see a bad frame in the gdb backtrace output, e.g.:
(gdb) bt
#0 foo () at foo.c:5
#1 0x000000aaaab68ee8 in start_thread () from /lib/libpthread.so.0
#2 0x000000aaaad01c88 in clone () from /lib/libc.so.6
#3 0x0000000000000000 in ?? ()
With this change the bogus frame #3 is gone and we have the
same output as x86 does for the same program.