Commit Graph

707 Commits

Author SHA1 Message Date
Ondřej Bílka
5905e7b3e2 Faster strchr implementation. 2013-09-11 17:07:38 +02:00
Joseph Myers
ffa3cd7f1a Fix lgammaf spurious underflow (bug 15427). 2013-09-03 15:32:54 +00:00
Ondřej Bílka
8f02859f17 Add unaligned strcmp. 2013-09-03 16:27:10 +02:00
Joseph Myers
b7835e3223 Fix spurious jnf underflows (bug 14155). 2013-09-02 14:51:24 +00:00
Ondřej Bílka
382466e04e Fix typos. 2013-08-30 18:08:59 +02:00
Ondřej Bílka
0186c6e97e Fix rawmemchr regression on bulldozer. 2013-08-30 10:14:37 +02:00
Ondřej Bílka
c0c3f78afb Fix typos. 2013-08-21 19:48:48 +02:00
Jeroen Albers
72c90ed01f Update x86 and x86_64 ulps on AMD FX-8350 with GCC 4.8.1. 2013-07-05 12:58:20 +00:00
Markus Trippelsdorf
5314ed1afd Update x86_64 ULPs. 2013-07-02 22:01:13 +00:00
Joseph Myers
67338156ea Regenerate x86 and x86_64 ulps. 2013-07-02 20:01:15 +00:00
Liubov Dmitrieva
6308fd9a46 Skip SSE4.2 versions on Intel Silvermont
SSE2/SSSE3 versions are faster than SSE4.2 versions on Intel Silvermont.
2013-06-28 15:31:40 -07:00
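
How that skip plugs in, as a rough sketch only: the multiarch string routines pick an implementation at load time through an IFUNC-style resolver, and a per-CPU "slow SSE4.2" style flag lets the resolver fall back to the SSE2/SSSE3 path. The flag and function names below are illustrative, not the actual glibc identifiers.

    #include <stdbool.h>
    #include <string.h>

    /* Illustrative flags; the real glibc cpu-feature bit names differ.  */
    static bool cpu_has_sse4_2  = true;
    static bool cpu_slow_sse4_2 = true;   /* true on Silvermont-class CPUs */

    static int strcmp_sse2 (const char *a, const char *b)  { return strcmp (a, b); }
    static int strcmp_sse42 (const char *a, const char *b) { return strcmp (a, b); }

    /* IFUNC-style resolver: use the SSE4.2 variant only when it is present
       and not flagged as slow; otherwise fall back to the SSE2/SSSE3 path.  */
    static int (*pick_strcmp (void)) (const char *, const char *)
    {
      if (cpu_has_sse4_2 && !cpu_slow_sse4_2)
        return strcmp_sse42;
      return strcmp_sse2;
    }
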
Liubov Dmitrieva
11b8a0e1d7 Fix buffer overrun in x86_64 memcmp-ssse3.S 2013-06-26 12:31:51 -07:00
Liubov Dmitrieva
d086fc7ba0 Set fast unaligned load flag for new Intel microarchitecture
I have a small patch for the new Intel Silvermont machines.

http://newsroom.intel.com/community/intel_newsroom/blog/2013/05/06/intel-launches-low-power-high-performance-silvermont-microarchitecture

I checked this on my machine and saw that the strcpy, ... unaligned
versions are faster than the ssse3 versions.
2013-06-14 20:46:15 +02:00
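
A hedged sketch of what setting such a flag looks like; the structure, flag name, and CPU model numbers below are illustrative rather than copied from init-arch.c.

    #include <stdbool.h>

    /* Sketch of model-based feature selection; names and model numbers are
       examples only.  */
    struct cpu_features
    {
      unsigned int family;
      unsigned int model;
      bool fast_unaligned_load;   /* stand-in for a Fast_Unaligned_Load bit */
    };

    static void
    detect_silvermont (struct cpu_features *cf)
    {
      if (cf->family != 6)
        return;
      switch (cf->model)
        {
        case 0x37:   /* example Silvermont-class model numbers */
        case 0x4d:
          /* Unaligned strcpy & friends beat the SSSE3 versions here,
             so prefer them via this flag.  */
          cf->fast_unaligned_load = true;
          break;
        }
    }
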
Siddhesh Poyarekar
747ef469ff Add rtld-memset.S for x86_64
Resolves: BZ #15627

Add an assembler version of rtld-memset to avoid using SSE registers.
2013-06-15 00:09:26 +05:30
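
The commit itself adds hand-written assembly; the C sketch below only illustrates the intent: a memset usable by ld.so that needs nothing but general-purpose registers. A compiler could still vectorize a loop like this, which is exactly why the real version is assembler.

    #include <stddef.h>

    /* C sketch only: a byte loop that touches no SSE state, so the dynamic
       loader can call it before SSE registers may be used.  */
    void *
    rtld_memset_sketch (void *dst, int c, size_t n)
    {
      unsigned char *p = dst;
      while (n-- > 0)
        *p++ = (unsigned char) c;
      return dst;
    }
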
Joseph Myers
9c84384cc1 Remove trailing whitespace. 2013-06-05 20:44:03 +00:00
Siddhesh Poyarekar
b937534868 Avoid crashing in LD_DEBUG when program name is unavailable
Resolves: #15465

The program name may be unavailable if the user application tampers
with argc and argv[].  Some parts of the dynamic linker cater for
this while others don't, so this patch consolidates the check and
fallback into a single macro and updates all users.
2013-05-29 21:34:12 +05:30
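
A hypothetical sketch of such a consolidated macro; the macro name, variable, and fallback string are illustrative, not necessarily the ones the patch introduces.

    #include <stdio.h>

    /* Hypothetical consolidation: every debug message goes through one
       macro that falls back to a fixed string when argv[0] has been
       clobbered.  Names and fallback text are illustrative.  */
    static const char *rtld_progname;   /* normally set from argv[0] */

    #define PROGNAME_OR_FALLBACK \
      (rtld_progname != NULL && rtld_progname[0] != '\0' \
       ? rtld_progname : "<program name unavailable>")

    int
    main (void)
    {
      /* rtld_progname was never set, yet printing it cannot crash.  */
      printf ("calling init: %s\n", PROGNAME_OR_FALLBACK);
      return 0;
    }
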
Joseph Myers
dd4259b9f7 Test drem and pow10 in libm-test.inc. 2013-05-24 20:33:14 +00:00
Joseph Myers
4f8dfe270b Use same tests for isfinite/finite, lgamma/gamma. 2013-05-24 19:21:22 +00:00
Joseph Myers
b50a71810b Don't include expected results in libm-test test names. 2013-05-22 11:49:36 +00:00
Ondrej Bilka
b2b671b677 Faster memset on x64
This implementation speeds up memset in several ways. The first is avoiding an
expensive computed jump. The second is using the fact that the arguments of
memset are most of the time aligned to 8 bytes.

Benchmark results on:
kam.mff.cuni.cz/~ondra/benchmark_string/memset_profile_result27_04_13.tar.bz2
2013-05-20 08:32:45 +02:00
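
A C sketch of those two ideas only (the committed code is SSE2 assembly): no computed jump, a short loop to reach 8-byte alignment, then plain 8-byte stores.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Sketch, not the actual implementation.  */
    static void *
    memset_sketch (void *dst, int c, size_t n)
    {
      unsigned char *p = dst;
      uint64_t pattern = 0x0101010101010101ULL * (unsigned char) c;

      /* Align to 8 bytes; for typical callers this loop runs zero times.  */
      while (n > 0 && ((uintptr_t) p & 7) != 0)
        {
          *p++ = (unsigned char) c;
          n--;
        }

      /* Bulk 8-byte stores on the aligned middle part.  */
      for (; n >= 8; p += 8, n -= 8)
        memcpy (p, &pattern, 8);        /* compiles to a single store */

      /* Remaining tail bytes.  */
      while (n-- > 0)
        *p++ = (unsigned char) c;

      return dst;
    }
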
Ondrej Bilka
2d48b41c8f Faster memcpy on x64.
We add a new memcpy version that uses unaligned loads, which are fast
on modern processors. This allows a second improvement: avoiding a
computed jump, which is a relatively expensive operation.

Tests available here:
http://kam.mff.cuni.cz/~ondra/memcpy_profile_result27_04_13.tar.bz2
2013-05-20 08:24:41 +02:00
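
Again a C sketch of the idea rather than the committed SSE code: unaligned 8-byte loads and stores are cheap on modern x86_64, so one loop covers every alignment and no computed jump over per-alignment variants is needed.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Sketch, not the actual implementation.  */
    static void *
    memcpy_sketch (void *dst, const void *src, size_t n)
    {
      unsigned char *d = dst;
      const unsigned char *s = src;

      for (; n >= 8; d += 8, s += 8, n -= 8)
        {
          uint64_t w;
          memcpy (&w, s, 8);    /* unaligned 8-byte load */
          memcpy (d, &w, 8);    /* unaligned 8-byte store */
        }
      while (n-- > 0)
        *d++ = *s++;
      return dst;
    }
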
Joseph Myers
db62a90753 Handle sincos with generic libm-test logic. 2013-05-19 14:45:41 +00:00
Ryan S. Arnold
e054f49430 Add #include <stdint.h> for uint[32|64]_t usage (except installed headers). 2013-05-16 11:32:54 -05:00
Peter Collingbourne
1deff3dca1 Use movq for 64-bit operations
The EXTRACT_WORDS64 and INSERT_WORDS64 macros use movd for a 64-bit
operation.  Somehow gcc manages to turn this into movq, but LLVM won't.

2013-05-15  Peter Collingbourne  <pcc@google.com>

	* sysdeps/x86_64/fpu/math_private.h (MOVQ): New macro.
	(EXTRACT_WORDS64): Use where appropriate.
	(INSERT_WORDS64): Likewise.
2013-05-15 20:33:45 +02:00
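
The change amounts to roughly the following; details may differ from the actual sysdeps/x86_64/fpu/math_private.h.

    #include <stdint.h>

    /* Spelling out movq (vmovq under AVX) makes both GCC and LLVM emit a
       64-bit move between an SSE register and a GPR or memory.  */
    #ifdef __AVX__
    # define MOVQ "vmovq"
    #else
    # define MOVQ "movq"
    #endif

    #define EXTRACT_WORDS64(i, d)                                 \
      do {                                                        \
        int64_t i_;                                               \
        asm (MOVQ " %1, %0" : "=rm" (i_) : "x" ((double) (d)));   \
        (i) = i_;                                                 \
      } while (0)

    #define INSERT_WORDS64(d, i)                                  \
      do {                                                        \
        int64_t i_ = (i);                                         \
        double d_;                                                \
        asm (MOVQ " %1, %0" : "=x" (d_) : "rm" (i_));             \
        (d) = d_;                                                 \
      } while (0)

    /* Example use: grab the bit pattern of a double.  */
    static int64_t
    double_to_bits (double d)
    {
      int64_t i;
      EXTRACT_WORDS64 (i, d);
      return i;
    }
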
Peter Collingbourne
791f3ba0db Use x constraints for operands to vfmaddss and vfmaddsd
While these instructions accept memory operands, only one operand
may be a memory operand.  Giving two operands xm constraints gives
the compiler the option of using memory for both operands, which
would result in invalid assembly code.  Using x for all operands is
more appropriate, as most x86_64 calling conventions will pass the
arguments in registers anyway.

2013-05-15  Peter Collingbourne  <pcc@google.com>

	* sysdeps/x86_64/fpu/multiarch/s_fma.c (__fma_fma4): Replace xm
	constraints with x constraints.
	* sysdeps/x86_64/fpu/multiarch/s_fmaf.c (__fmaf_fma4): Likewise.
2013-05-15 20:31:53 +02:00
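
The resulting constraint usage looks approximately like this; the actual code is in sysdeps/x86_64/fpu/multiarch/s_fma.c and the sketch executes only on FMA4-capable CPUs.

    /* Every operand uses the "x" (SSE register) constraint, since vfmaddsd
       accepts at most one memory operand.  */
    static double
    fma_fma4_sketch (double x, double y, double z)
    {
      asm ("vfmaddsd %3, %2, %1, %0" : "=x" (x) : "x" (x), "x" (y), "x" (z));
      return x;
    }
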
Joseph Myers
d8cd06db62 Improve tgamma accuracy (bugs 2546, 2560, 5159, 15426). 2013-05-08 11:58:18 +00:00
Joseph Myers
10de07f5fd Fix catan, catanh spurious underflows (bug 15423). 2013-05-01 10:07:00 +00:00
Joseph Myers
caf84319c1 Fix catan, catanh inaccuracy from atan2 denominators near 0 (bug 15416). 2013-04-30 11:27:35 +00:00
Joseph Myers
5b4217d71f Fix catan, catanh spurious overflows (bug 15409). 2013-04-27 14:57:41 +00:00
Markus Trippelsdorf
1b8359836d Update x86_64 ULPs
2013-04-26  Markus Trippelsdorf  <markus@trippelsdorf.de>

	* sysdeps/x86_64/fpu/libm-test-ulps: Update.
2013-04-26 09:30:46 +02:00
Joseph Myers
73709b2611 Move x86_64-specific audit tests to sysdeps/x86_64/. 2013-04-25 19:23:11 +00:00
Joseph Myers
2f38fbfe09 Fix catan, catanh inaccuracy through use of log (bug 15394). 2013-04-24 18:49:13 +00:00
Carlos O'Donell
aba5e333d4 libm-test.inc: Fix tests where cos(PI/2) != 0.
The value of PI is never exactly PI in any floating point representation,
and the value of PI/2 is never PI/2. It is wrong to expect cos(M_PI_2l)
to return 0; instead it will return an answer that is non-zero because
M_PI_2l doesn't round to exactly PI/2 in the type used.

That is to say that the correct answer is to do the following:
* Take PI or PI/2.
* Round to the floating point representation.
* Take the rounded value and compute an infinite precision cos or sin.
* Use the rounded result of the infinite precision cos or sin as the
  answer to the test.

I used printf to do the type rounding, and Wolfram Alpha to do the
infinite precision cos calculations.

The following changes bring x86-64 and x86 to 1/2 ulp for two tests.
This shows that the x86 cos implementation is quite good, and that
our tests are flawed.

Unfortunately, given that the rounding errors are type-dependent, we
need to fix this for each type. No regressions on x86-64 or x86.

---

2013-04-11  Carlos O'Donell  <carlos@redhat.com>

	* math/libm-test.inc (cos_test): Fix PI/2 test.
	(sincos_test): Likewise.
	* sysdeps/x86_64/fpu/libm-test-ulps: Regenerate.
	* sysdeps/i386/fpu/libm-test-ulps: Regenerate.
2013-04-11 08:52:18 -04:00
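
A quick check of that reasoning for double (approximate figures): the nearest double to PI/2 falls short of PI/2 by about 6.12e-17, and a correctly rounded cos of that double returns roughly that same residual, which is the value the fixed test should expect.

    #include <math.h>
    #include <stdio.h>

    int
    main (void)
    {
      /* M_PI_2 is the double nearest to pi/2; cos of it is the rounding
         residual of M_PI_2, not 0.  */
      printf ("cos(M_PI_2) = %.17g\n", cos (M_PI_2));   /* ~6.12e-17 */
      return 0;
    }
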
Joseph Myers
52ce486045 Fix cacosh inaccuracy and spurious exceptions (bug 15327). 2013-04-02 22:54:00 +00:00
Joseph Myers
ccc8cadf75 Fix casinh inaccuracy for imaginary part < 1.0, real part small (bug 10357). 2013-03-30 13:31:53 +00:00
Joseph Myers
3a7182a14b Fix casinh inaccuracy near i, imaginary part > 1 (bug 15307). 2013-03-27 14:38:44 +00:00
Dmitry V. Levin
2e0fb52187 BZ#11120: fix x86_64/strcmp.S NOT_IN_libc safeguards
Due to a typo repeated several times, this bug hasn't been fixed yet,
despite being marked as resolved in glibc 2.12.

* sysdeps/x86_64/strcmp.S: Replace all occurrences of NOT_IN_lib
with NOT_IN_libc.
2013-03-22 03:16:00 +00:00
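
Why the typo defeated the safeguard, in a minimal illustration (the directives below are not lifted from strcmp.S): the misspelled macro is never defined anywhere, so any preprocessor test on it collapses to a constant and the guard has no effect.

    /* NOT_IN_libc is defined when building objects that are not part of
       libc proper; the misspelled NOT_IN_lib is never defined at all.  */
    #ifndef NOT_IN_lib          /* typo: always true; should be NOT_IN_libc */
    # define ASSUME_IN_LIBC 1   /* silently applied even outside libc */
    #endif
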
Joseph Myers
0a1b2ae6f6 Fix casinh inaccuracy for argument with imaginary part 1 (bug 15287). 2013-03-21 10:27:10 +00:00
Joseph Myers
bef0b50749 Move system-specific settings out of toplevel configure.in and config.make.in. 2013-03-20 22:37:06 +00:00
Ondrej Bilka
37bb363f03 Faster strlen on x64. 2013-03-18 07:39:12 +01:00
Joseph Myers
d2f9799e7c Fix y1l spurious overflows for ldbl-96 (bug 15283). 2013-03-16 17:51:48 +00:00
Joseph Myers
06d5adfbda Regenerate sysdeps/x86_64/preconfigure. 2013-03-15 01:18:32 +00:00
Ondrej Bilka
80f844c9d8 Remove Prefer_SSE_for_memop on x64 2013-03-11 15:39:08 +01:00
Ondrej Bilka
87bd9bc4bd Revert " * sysdeps/x86_64/strlen.S: Replace with new SSE2 based implementation"
This reverts commit b79188d717.
2013-03-06 22:27:18 +01:00
Ondrej Bilka
b79188d717 * sysdeps/x86_64/strlen.S: Replace with new SSE2 based implementation
which is faster on all x86_64 architectures.
	Tested on AMD, Intel Nehalem, SNB, IVB.
2013-03-06 21:54:01 +01:00
Joseph Myers
2969121014 Remove bounded-pointers handling from x86_64 assembly sources. 2013-02-17 21:57:26 +00:00
Siddhesh Poyarekar
d6752ccd69 New __sqr function as a faster special case of __mul 2013-02-14 10:31:09 +05:30
Roland McGrath
f1d70dad53 Remove lots of inline keywords. 2013-02-07 14:44:18 -08:00
Joseph Myers
8cf28c5ebe Fix casinh spurious underflows away from [-i,i] (bug 15062). 2013-01-31 22:55:29 +00:00
Joseph Myers
728d7b43fc Fix cacos real-part inaccuracy for result real part near 0 (bug 15023). 2013-01-17 20:25:51 +00:00