We use a callback function into libc.so to get access to the data
structure with the information and have special versions of the test
macros which automatically use this function.
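Schematically the mechanism looks like this; every name below is
hypothetical, not the actual glibc interface:

  /* Opaque to the test; the real layout lives inside libc.so.  */
  struct test_info;

  /* The callback into libc.so that hands the structure back.  */
  extern const struct test_info *__libc_get_test_info (void);

  extern void report_check (const struct test_info *, int, const char *);

  /* Special test macro variant that fetches the structure
     automatically before reporting.  */
  #define TEST_VERIFY_INFO(expr) \
    report_check (__libc_get_test_info (), (expr) != 0, #expr)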
SSE registers are used for passing parameters and must be preserved
in runtime relocations. Inside ld.so this is enforced through the
tests in tst-xmmymm.sh. But the malloc routines used after startup
come from libc.so and can be arbitrarily complex. Saving the SSE
registers all the time because of that would be overkill; these
calls are rare. Instead we save them on demand. The new
infrastructure put in place in this patch makes this possible and
efficient.
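A rough C sketch of the on-demand pattern; the real code is assembly
in dl-trampoline.S plus helpers, and all names below are made up:

  #include <stdbool.h>

  static bool sse_state_saved;

  /* In reality these spill/reload xmm0-xmm7 in assembly.  */
  static void save_sse_argument_registers (void) { }
  static void restore_sse_argument_registers (void) { }

  /* Called only before the rare calls out of ld.so into libc.so
     code such as malloc: the save happens once, on demand, not on
     every runtime relocation.  */
  static void
  prepare_foreign_call (void)
  {
    if (!sse_state_saved)
      {
        save_sse_argument_registers ();
        sse_state_saved = true;
      }
  }

  static void
  finish_foreign_calls (void)
  {
    if (sse_state_saved)
      {
        restore_sse_argument_registers ();
        sse_state_saved = false;
      }
  }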
The test now takes the callgraph into account. Only code called
during runtime relocation is affected by the limitation. We now
determine the affected object files as closely as possible from
the outside. This made it possible to remove the specializations
for some of the string functions, since they are only used in other
code paths.
This patch introduces a test to make sure no function modifies the
xmm/ymm registers, with the exception of the auditing functions.
The test is probably too pessimistic: all code linked into ld.so
is checked. Perhaps at some point the callgraph starting from
_dl_fixup and _dl_profile_fixup will be checked instead, and we can
then start using faster SSE-using functions in parts of ld.so.
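tst-xmmymm.sh itself is a shell script; purely to illustrate what it
checks, a small C filter that flags xmm/ymm uses in a disassembly
(feed it, for example, the output of objdump -d elf/ld.so):

  #include <stdio.h>
  #include <string.h>

  int
  main (void)
  {
    char line[4096];
    int found = 0;

    while (fgets (line, sizeof line, stdin) != NULL)
      if (strstr (line, "%xmm") != NULL || strstr (line, "%ymm") != NULL)
        {
          /* Some code linked into ld.so touches the SSE/AVX
             registers: print the offending line and fail.  */
          fputs (line, stdout);
          found = 1;
        }

    return found;
  }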
There will be more than one function which, in multiarch mode, wants
to use SSSE3. We should not test in each of them for Atoms with
slow SSSE3. Instead, disable the SSSE3 bit in the startup code for
such machines.
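A sketch of the startup-time masking; the feature word here is a
stand-in for glibc's real cpu-features state:

  #include <cpuid.h>

  static unsigned int cpu_features_ecx;

  void
  init_cpu_features (void)
  {
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid (1, &eax, &ebx, &ecx, &edx))
      return;
    cpu_features_ecx = ecx;

    unsigned int family = (eax >> 8) & 0x0f;
    unsigned int model = ((eax >> 4) & 0x0f) | ((eax >> 12) & 0xf0);

    /* Atom (family 6, model 0x1c) implements SSSE3 but slowly:
       clear the cached bit once so every multiarch selector picks
       the non-SSSE3 version without its own Atom check.  */
    if (family == 6 && model == 0x1c)
      cpu_features_ecx &= ~bit_SSSE3;
  }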
The original AVX patch used a function pointer to handle the difference
between machines with and without AVX support. This is insecure: a
well-placed memory exploit could redirect execution through the pointer.
Using a variable and several tests is a bit slower but cannot be
exploited in this way.
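To make the difference concrete, a hedged sketch of the two dispatch
styles; the *_avx/*_sse2 implementations are placeholders:

  extern void *memcpy_avx (void *, const void *, unsigned long);
  extern void *memcpy_sse2 (void *, const void *, unsigned long);

  /* Insecure variant: a writable code pointer.  Overwriting it
     redirects execution anywhere.  */
  void *(*memcpy_ptr) (void *, const void *, unsigned long);

  /* Safer variant: a plain flag plus fixed branches.  */
  int have_avx;

  void *
  dispatch_memcpy (void *dst, const void *src, unsigned long n)
  {
    return have_avx ? memcpy_avx (dst, src, n)
                    : memcpy_sse2 (dst, src, n);
  }

Corrupting have_avx can only flip between the two legitimate code
paths; corrupting memcpy_ptr hands the attacker the program counter.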
Some of the new multi-arch string functions for x86-64 were
not aligned to 16-byte boundaries, possibly creating unnecessary
cache line misses and delays.
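The fix in the .S files is an alignment directive before each entry
point; as a C-level illustration only, the equivalent request for a
16-byte-aligned function entry:

  __attribute__ ((aligned (16)))
  void
  hot_string_stub (void)
  {
    /* ...  */
  }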
This patch adds SSSE3 strcpy/stpcpy. I got up to a 4x speedup on Core 2
and Core i7. I disabled it on Atom since the SSSE3 version is slower
for short (<64 byte) data.
The test to call the indirect function now includes a subtest to
check whether the symbol is defined. When we reach that point this
is almost always the case, while the test for STT_GNU_IFUNC rarely
is true. Moving the STT_GNU_IFUNC test to the front means we don't
have to perform the second test unless really necessary.
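A minimal sketch of the reordered test; the surrounding resolver code
is simplified and hypothetical:

  #include <elf.h>

  Elf64_Addr
  resolve_value (const Elf64_Sym *sym, Elf64_Addr value)
  {
    /* STT_GNU_IFUNC is rarely true, so test it first; the
       definedness test, which is almost always true, now runs only
       when it matters.  */
    if (__builtin_expect (ELF64_ST_TYPE (sym->st_info) == STT_GNU_IFUNC, 0)
        && sym->st_shndx != SHN_UNDEF)
      value = ((Elf64_Addr (*) (void)) value) ();
    return value;
  }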
So far Intel and AMD use exactly the same bits meaning the same
things in CPUID index 1. Simplify the code. Should an architecture
come along which doesn't use the same semantics, it must use a
different index value than COMMON_CPUID_INDEX_1.
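For reference, a standalone program reading those common leaf-1 bits;
it works unchanged on Intel and AMD:

  #include <cpuid.h>
  #include <stdio.h>

  int
  main (void)
  {
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid (1, &eax, &ebx, &ecx, &edx))
      return 1;

    /* bit_SSE2 is in %edx, bit_SSSE3 in %ecx; the positions are
       identical on both vendors.  */
    printf ("SSE2:  %s\n", (edx & bit_SSE2) ? "yes" : "no");
    printf ("SSSE3: %s\n", (ecx & bit_SSSE3) ? "yes" : "no");
    return 0;
  }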
If longjmp restores the stack frame to an address which is beyond
the stack frame at the time of the longjmp call, it would install
an uninitialized stack frame. If compiled with _FORTIFY_SOURCE
defined, longjmp will now bail out in this situation.
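A hedged sketch of the underlying check; glibc's real ____longjmp_chk
is assembly and additionally allows jumps onto a signal altstack:

  #include <stdio.h>
  #include <stdlib.h>

  void
  check_longjmp_target (const void *target_sp)
  {
    /* The stack grows downward: frames still alive sit at or above
       the current stack pointer.  A saved stack pointer below it
       refers to a frame that has already been left.  */
    const void *current_sp = __builtin_frame_address (0);
    if (target_sp < current_sp)
      {
        fputs ("longjmp causes uninitialized stack frame\n", stderr);
        abort ();
      }
  }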
from definition.
* sysdeps/x86_64/dl-machine.h (elf_machine_rela): Don't define
label if it is not used.
* elf/dl-profile.c (_dl_start_profile): Define real-type variant
of gmon_hist_hdr and gmon_hdr structures and use them.
* elf/dl-load.c (open_verify): Add temporary variable to avoid
warning.
* nscd/nscd_helper.c (get_mapping): Avoid casts to avoid warnings.
* sunrpc/clnt_raw.c (clntraw_private_s): Use union in definition
to avoid cast.
* inet/rexec.c (rexec_af): Make sa2 a union to avoid warnings.
* inet/rcmd.c (rcmd_af): Make from a union of the various needed types
to avoid warnings (see the sketch after these entries).
(iruserok_af): Use ss_family instead of casts.
* gmon/gmon.c (write_hist): Define real-type variant of
gmon_hist_hdr structure and use it.
(write_gmon): Likewise for gmon_hdr.
* sysdeps/unix/sysv/linux/readv.c: Avoid declaration of replacement
function if we are not going to define it.
* sysdeps/unix/sysv/linux/writev.c: Likewise.
* inet/inet6_option.c (option_alloc): Add temporary variable to
avoid warning.
* libio/strfile.h (struct _IO_streambuf): Use correct type and
name of VTable element.
* libio/iovsprintf.c: Avoid casts to avoid warnings.
* libio/iovsscanf.c: Likewise.
* libio/vasprintf.c: Likewise.
* libio/vsnprintf.c: Likewise.
* stdio-common/isoc99_vsscanf.c: Likewise.
* stdlib/strfmon_l.c: Likewise.
* debug/vasprintf_chk.c: Likewise.
* debug/vsnprintf_chk.c: Likewise.
* debug/vsprintf_chk.c: Likewise.
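A sketch of the union pattern behind the clntraw_private_s, rexec_af,
and rcmd_af entries above; the names here are illustrative:

  #include <netinet/in.h>
  #include <sys/socket.h>

  union sockaddr_union
  {
    struct sockaddr sa;
    struct sockaddr_in sin;
    struct sockaddr_in6 sin6;
    struct sockaddr_storage ss;
  };

  /* An iruserok_af-style family check, done through the union and
     ss_family instead of a pointer cast.  */
  int
  is_ipv6 (const union sockaddr_union *addr)
  {
    return addr->ss.ss_family == AF_INET6;
  }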
* sysdeps/x86_64/add_n.S: New file.
* sysdeps/x86_64/addmul_1.S: New file.
* sysdeps/x86_64/lshift.S: New file.
* sysdeps/x86_64/mul_1.S: New file.
* sysdeps/x86_64/rshift.S: New file.
* sysdeps/x86_64/sub_n.S: New file.
* sysdeps/x86_64/submul_1.S: New file.
* inet/inet6_rth.c (inet6_rth_add): Add some error checking.
Patch mostly by Yang Hongyang <yanghy@cn.fujitsu.com>.
* inet/Makefile (tests): Add tst-inet6_rth.
* inet/tst-inet6_rth.c: New file.
* sysdeps/x86_64/dl-trampoline.S (_dl_runtime_profile): Fix
alignment of La_x86_64_regs. Store xmm parameters.
* elf/dl-runtime.c (reloc_offset): Define.
(reloc_index): Define.
(_dl_fixup): Rename reloc_offset parameter to reloc_arg.
(_dl_profile_fixup): Likewise. Use reloc_index instead of
computing index from reloc_offset.
(_dl_call_pltexit): Likewise.
* sysdeps/x86_64/dl-trampoline.S (_dl_runtime_resolve): Just pass
the relocation index to _dl_fixup.
(_dl_runtime_profile): Likewise for _dl_profile_fixup and
_dl_call_pltexit.
* sysdeps/x86_64/dl-runtime.c: New file.
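The point of passing the index rather than the byte offset: the
offset follows from the index by a multiplication, while recovering
the index from an offset needs a division on the hot path. A
standalone illustration of the relation (not glibc code):

  #include <assert.h>
  #include <elf.h>

  int
  main (void)
  {
    unsigned long reloc_index = 7;

    /* x86-64 PLT relocations are Elf64_Rela records.  */
    unsigned long reloc_offset = reloc_index * sizeof (Elf64_Rela);

    /* Going the other way requires dividing again.  */
    assert (reloc_offset / sizeof (Elf64_Rela) == reloc_index);
    return 0;
  }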
* sysdeps/x86_64/dl-trampoline.S (_dl_runtime_profile): Fix
alignment of La_x86_64_regs. Store xmm parameters.
Patch mostly by Jiri Olsa <olsajiri@gmail.com>.
* nscd/mem.c (gc): Use alloca_account to get the real stack usage.
* include/alloca.h (alloca_account): Define.
* sysdeps/x86_64/stackinfo.h (stackinfo_get_sp): Define.
(stackinfo_sub_sp): Define.
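A hedged approximation of such accounting macros; the real
definitions live in the files named above:

  #include <alloca.h>
  #include <stddef.h>

  /* Close approximation of the current stack pointer, good enough
     for accounting.  */
  #define stackinfo_get_sp() __builtin_frame_address (0)

  /* Like alloca, but also add the allocated size to the byte
     counter AVAR, so code such as nscd's gc knows its real stack
     usage.  */
  #define alloca_account(size, avar) \
    ({ size_t size__ = (size); (avar) += size__; alloca (size__); })

Typical use: declare size_t used = 0; allocate with
alloca_account (len, used); afterward used holds the bytes taken
from the stack.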
2008-02-26 Harsha Jagasia <harsha.jagasia@amd.com>
* sysdeps/x86_64/cacheinfo.c (NOT_USED_RIGHT_NOW): Remove ifdef guards.
* sysdeps/x86_64/memset.S: Rewrite non-SSE code path as tuned for AMD
Barcelona machine. Make default fall through branch of
__x86_64_preferred_memory_instruction check as the integer code path.
2007-10-15 H.J. Lu <hongjiu.lu@intel.com>
* sysdeps/x86_64/cacheinfo.c
(__x86_64_preferred_memory_instruction): New variable.
(init_cacheinfo): Initialize __x86_64_preferred_memory_instruction.
* sysdeps/x86_64/memset.S: Rewrite.
2008-01-08 Jakub Jelinek <jakub@redhat.com>
* malloc/malloc.c (public_cALLOc): For arenas other than
2007-10-16 Ulrich Drepper <drepper@redhat.com>
* time/tzfile.c (__tzfile_read): Take extra memory requested by caller
into account when copying TZ string.
2007-10-16 Jakub Jelinek <jakub@redhat.com>
* sysdeps/x86_64/memset.S: Jump from bzero to memset using
a local label rather than HIDDEN_JUMPTARGET.
* sysdeps/x86_64/cacheinfo.c (__x86_64_data_cache_size_half): Renamed
from __x86_64_core_cache_size_half.
(init_cacheinfo): Compute shared cache size for AMD processors with
shared L3 correctly.
* sysdeps/x86_64/memcpy.S: Adjust for __x86_64_data_cache_size_half
name change.
Patch in large parts by Evandro Menezes.