We add a new memcpy version that uses unaligned loads, which are fast
on modern processors. This enables a second improvement: avoiding a
computed jump, which is a relatively expensive operation.
Tests available here:
http://kam.mff.cuni.cz/~ondra/memcpy_profile_result27_04_13.tar.bz2
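As a rough illustration of both ideas (a sketch only, not the glibc
implementation; the helper name and the 8-to-16-byte size range are
made up for the example), a tail of known bounded size can be copied
with two possibly overlapping unaligned 8-byte accesses instead of
dispatching through a jump table indexed by the remaining length:

  #include <stdint.h>
  #include <string.h>

  /* Sketch: copy n bytes, 8 <= n <= 16, with two unaligned 8-byte
     accesses that may overlap, avoiding a computed jump on n.  */
  static void
  copy_8_to_16 (unsigned char *dst, const unsigned char *src, size_t n)
  {
    uint64_t head, tail;
    memcpy (&head, src, 8);           /* unaligned load */
    memcpy (&tail, src + n - 8, 8);   /* unaligned load */
    memcpy (dst, &head, 8);           /* unaligned store */
    memcpy (dst + n - 8, &tail, 8);   /* unaligned store */
  }

The fixed-size memcpy calls compile to single unaligned load/store
instructions on targets where those are cheap.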
Resolves: #15424
The compiler would optimize the benchmark function call out of the
loop and call it only once, resulting in blazingly fast times for some
benchmarks (notably atan, sin and cos). Mark the inputs as volatile
so that the code is forced to re-read the input on each iteration.
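A minimal sketch of the idea (the names here are hypothetical, not the
actual benchtests harness): marking the input volatile forces a fresh
load before every call, so the compiler cannot hoist the call out of
the loop.

  #include <math.h>

  /* Hypothetical example: 'input' is volatile, so it is re-read on
     every iteration and atan() is actually called each time.  */
  static volatile double input = 0.75;

  double
  bench_atan (long iterations)
  {
    double sum = 0.0;
    for (long i = 0; i < iterations; i++)
      sum += atan (input);
    return sum;
  }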
[BZ #15442] This adds support for the inverse interpretation of the
quiet bit of IEEE 754 floating-point NaN data that some processors
use. This includes in particular MIPS architecture processors; the
payload used for the canonical qNaN encoding is updated accordingly
so as not to interfere with the quiet bit.
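For illustration only (this is not the glibc code), the two
interpretations differ in what a set quiet bit means for a double NaN:

  #include <stdint.h>

  /* Bit 51 of a double is the NaN quiet/signaling bit.  Under the
     IEEE 754-2008 convention a set bit means quiet; under the inverse
     (legacy MIPS) convention a set bit means signaling, so the
     canonical qNaN payload must not disturb that bit.  */
  #define QUIET_BIT UINT64_C (0x0008000000000000)

  static int
  is_signaling_nan (uint64_t bits, int set_bit_means_quiet)
  {
    int is_nan = (bits & UINT64_C (0x7ff0000000000000))
                   == UINT64_C (0x7ff0000000000000)
                 && (bits & UINT64_C (0x000fffffffffffff)) != 0;
    int quiet_bit = (bits & QUIET_BIT) != 0;
    return is_nan && (set_bit_means_quiet ? !quiet_bit : quiet_bit);
  }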
The EXTRACT_WORDS64 and INSERT_WORDS64 macros use movd for a 64-bit
operation. Somehow gcc manages to turn this into movq, but LLVM won't.
2013-05-15 Peter Collingbourne <pcc@google.com>
* sysdeps/x86_64/fpu/math_private.h (MOVQ): New macro.
(EXTRACT_WORDS64): Use where appropriate.
(INSERT_WORDS64): Likewise.
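A sketch of the shape of such a macro (illustrative; it may not match
the file text exactly): the mnemonic is spelled out once so the 64-bit
transfer between an SSE register and a general-purpose register or
memory is always a movq/vmovq, rather than relying on the compiler to
promote movd.

  /* Illustrative only.  INSERT_WORDS64 would use MOVQ the same way in
     the opposite direction.  */
  #ifdef __AVX__
  # define MOVQ "vmovq"
  #else
  # define MOVQ "movq"
  #endif

  #define EXTRACT_WORDS64(i, d)                                     \
    do                                                              \
      {                                                             \
        long long i_;                                               \
        __asm__ (MOVQ " %1, %0" : "=rm" (i_) : "x" ((double) (d))); \
        (i) = i_;                                                   \
      }                                                             \
    while (0)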
While these instructions accept memory operands, only one operand
may be a memory operand. Giving two operands xm constraints gives
the compiler the option of using memory for both operands, which
would result in invalid assembly code. Using x for all operands is
more appropriate, as most x86_64 calling conventions will pass the
arguments in registers anyway.
2013-05-15 Peter Collingbourne <pcc@google.com>
* sysdeps/x86_64/fpu/multiarch/s_fma.c (__fma_fma4): Replace xm
constraints with x constraints.
* sysdeps/x86_64/fpu/multiarch/s_fmaf.c (__fmaf_fma4): Likewise.
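For illustration, a hedged sketch of the resulting pattern (the
function name is made up; building it needs an FMA4-capable target):
with "x" on every operand the compiler must keep all four values in
SSE registers, so it can never emit vfmaddsd with two memory operands.

  /* Compile with -mfma4; illustrative only.  */
  static double
  fma4_example (double x, double y, double z)
  {
    double r;
    /* r = x * y + z using the FMA4 vfmaddsd instruction; every operand
       is constrained to an SSE register ("x"), never memory.  */
    __asm__ ("vfmaddsd %3, %2, %1, %0"
             : "=x" (r)
             : "x" (x), "x" (y), "x" (z));
    return r;
  }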
It is impossible to create an alias of a common symbol (as
compat_symbol does), because common symbols do not have a section or
an offset until linked. GNU as tolerates aliases of common symbols by
simply creating another common symbol, but other assemblers (notably
LLVM's integrated assembler) are less tolerant.
2013-05-15 Peter Collingbourne <pcc@google.com>
* malloc/obstack.c (_obstack_compat): Add initializer.
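A small illustration of why the initializer helps (the names here are
hypothetical, and the alias attribute merely stands in for the
assembler-level alias that compat_symbol creates):

  /* Without an initializer, a tentative definition is a common symbol
     (when built with -fcommon, the historical default) and has no
     section or offset, so some assemblers reject an alias to it.  */
  int compat_tentative;

  /* With an initializer, even zero, the object gets a real section and
     offset, so an alias can refer to it.  */
  int compat_initialized = 0;

  extern int compat_alias __attribute__ ((alias ("compat_initialized")));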
Loading of the vDSO pseudo-hwcap from the type 2 GNU note is
a rather arcane and poorly documented process. Given that I had
a chance to review this code today, I thought I would add all
of the things I had to look up to verify the validity of the
process.
With a single .note.GNU the vDSO can register up to 64 flags,
though in practice you are limited to 64 - _DL_FIRST_EXTRA
bits, which on x86 is 12 bits.
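To make the arithmetic concrete (a toy sketch; the value 52 is a
hypothetical stand-in for _DL_FIRST_EXTRA on x86 and is only here to
show the calculation):

  #include <stdio.h>

  #define FIRST_EXTRA 52   /* stand-in for _DL_FIRST_EXTRA on x86 */

  int
  main (void)
  {
    /* vDSO note flags occupy the hwcap mask bits above FIRST_EXTRA.  */
    printf ("usable vDSO hwcap bits: %d\n", 64 - FIRST_EXTRA);  /* 12 */
    return 0;
  }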
The only use of this that I know of is in the Xen support
in Linux where they use the 1st bit to indicate "nosegneg".
I see "We use bit 1 to avoid bugs in some versions of glibc
when bit 0 is used; the choice is otherwise arbitrary.", but
no reference to a glibc bug anywhere. The code as-is should
support bit zero, so we still have that free for future use.
The kernel, glibc, and ld.so.cache must coordinate to ensure
that bit values don't go too high and are used consistently.
---
2013-05-13 Carlos O'Donell <carlos@redhat.com>
* elf/dl-hwcaps.c (_dl_important_hwcaps): Comment vDSO hwcap loading.
* elf/ldconfig.c (is_hwcap_platform): Comment each hwcap check.
(main): Comment "tls" pseudo-hwcap.