Commit Graph

Stefan Liebler
074b0f27d9 S390: Use 64bit instruction to check for copies of > 1MB with mvcle.
The __memcpy_default variant on s390 64bit calculates the number of
256-byte blocks in a 64-bit register and checks whether they exceed 1MB
in order to jump to mvcle; otherwise an mvc-loop is used.  The compare
instruction, however, only checks a 32-bit value.
This patch uses a 64-bit compare instead.

ChangeLog:

	* sysdeps/s390/s390-64/memcpy.S (memcpy):
	Use cghi instead of chi to compare 64bit value.
2016-05-24 10:39:13 +02:00
Stefan Liebler
04bb21ac93 S390: Use mvcle for copies > 1MB on 32bit with default memcpy variant.
If more than 255 bytes are to be copied, the algorithm branches away.
Before this patch, it branched to the mvc-loop (.L_G5_12).
Now it first branches to the "> 1MB" check, which jumps away to
__memcpy_mvcle; otherwise, the mvc-loop (.L_G5_12) copies the bytes.

ChangeLog:

	* sysdeps/s390/s390-32/memcpy.S (memcpy):
	Jump to 1MB check before executing mvc-loop.
2016-05-24 10:39:13 +02:00
Florian Weimer
3375cfafa7 Make padding in struct sockaddr_storage explicit [BZ #20111]
This avoids aliasing issues with GCC 6 in -fno-strict-aliasing
mode.  (With implicit padding, not all data is copied.)

This change makes it explicit that struct sockaddr_storage is
only 126 bytes large on m68k (unlike elsewhere, where we end up
with the requested 128 bytes).  The new test case makes sure that
this does not happen on other architectures.
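
A rough sketch of the resulting layout (hedged: the real definition in
bits/socket.h uses the __SOCKADDR_COMMON macro and computes the padding
size so that the struct normally totals 128 bytes):

    #include <sys/socket.h>        /* for sa_family_t */

    struct sockaddr_storage_sketch
    {
      sa_family_t ss_family;                        /* Address family.  */
      char __ss_padding[128 - sizeof (sa_family_t)
                        - sizeof (unsigned long)];  /* Explicit padding.  */
      unsigned long __ss_align;                     /* Force alignment.  */
    };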
2016-05-23 19:43:09 +02:00
Joseph Myers
f9b437d5ef Update sysdeps/unix/sysv/linux/bits/socket.h for Linux 4.6.
This patch updates sysdeps/unix/sysv/linux/bits/socket.h for new
constants added in Linux 4.6.  AF_KCM / PF_KCM are added.  SOL_KCM is
new, and I added a lot of SOL_* values postdating the last one present
in the header, since I saw no apparent reason for the set in glibc to
stop at SOL_IRDA.  MSG_BATCH is added; Linux also has
MSG_SENDPAGE_NOTLAST which is not in glibc, but given the comment
starts "sendpage() internal" I presume it's correct for it not to be
in glibc.

(Note that this is a case where the Linux kernel header with userspace
relevant values is *not* a uapi header but include/linux/socket.h - I
don't know why, but at least this header, as well as uapi headers,
needs reviewing for glibc-relevant changes each release.)

Tested for x86_64 and x86 (testsuite, and that installed stripped
shared libraries are unchanged by the patch).

	* sysdeps/unix/sysv/linux/bits/socket.h (PF_KCM): New macro.
	(PF_MAX): Update value.
	(AF_KCM): New macro.
	(SOL_NETBEUI): Likewise.
	(SOL_LLC): Likewise.
	(SOL_DCCP): Likewise.
	(SOL_NETLINK): Likewise.
	(SOL_TIPC): Likewise.
	(SOL_RXRPC): Likewise.
	(SOL_PPPOL2TP): Likewise.
	(SOL_BLUETOOTH): Likewise.
	(SOL_PNPIPE): Likewise.
	(SOL_RDS): Likewise.
	(SOL_IUCV): Likewise.
	(SOL_CAIF): Likewise.
	(SOL_ALG): Likewise.
	(SOL_NFC): Likewise.
	(SOL_KCM): Likewise.
	(MSG_BATCH): New enum value and macro.
2016-05-23 13:27:37 +00:00
H.J. Lu
b7598b1b85 Remove special L2 cache case for Knights Landing
L2 cache is shared by 2 cores on Knights Landing, which has 4 threads
per core:

https://en.wikipedia.org/wiki/Xeon_Phi#Knights_Landing

So L2 cache is shared by 8 threads on Knights Landing as reported by
CPUID.  We should remove the special L2 cache case for Knights Landing.

	[BZ #18185]
	* sysdeps/x86/cacheinfo.c (init_cacheinfo): Don't limit threads
	sharing L2 cache to 2 for Knights Landing.
2016-05-20 14:42:00 -07:00
Joseph Myers
ffe9aaf2b9 Implement proper fmal for ldbl-128ibm (bug 13304).
ldbl-128ibm had an implementation of fmal that just did (x * y) + z in
most cases, with no attempt at actually being a fused operation.

This patch replaces it with a genuine fused operation.  It is not
necessarily correctly rounded, but should produce a result at least
as accurate as the long double arithmetic operations in libgcc, which
I think is all that can reasonably be expected for such a non-IEEE
format where arithmetic is approximate rather than rounded according
to any particular rule for determining the exact result.  Like the
libgcc arithmetic, it may produce spurious overflow and underflow
results, and it falls back to the libgcc multiplication in the case of
(finite, finite, zero).

This concludes the fixes for bug 13304; any subsequently found fma
issues should go in separate Bugzilla bugs.  Various other pieces of
bug 13304 were fixed in past releases over the past several years.

Tested for powerpc.

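As a hint at how the new building blocks work, mul_split computes an
exact double-double product; a minimal sketch, assuming a correctly
rounded fma is available (the real code also has a path that does not
rely on fma):

    #include <math.h>

    /* Split x * y exactly into hi + lo: hi is the rounded product,
       lo the rounding error recovered via fma.  */
    static void
    mul_split_sketch (double *hi, double *lo, double x, double y)
    {
      *hi = x * y;
      *lo = fma (x, y, -*hi);
    }
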
	[BZ #13304]
	* sysdeps/ieee754/ldbl-128ibm/s_fmal.c: Include <fenv.h>,
	<float.h>, <math_private.h> and <stdlib.h>.
	(add_split): New function.
	(mul_split): Likewise.
	(ext_val): New typedef.
	(store_ext_val): New function.
	(mul_ext_val): New function.
	(compare): New function.
	(add_split_ext): New function.
	(__fmal): After checking for Inf, NaN and zero, compute result as
	an exact sum of scaled double values in round-to-nearest before
	adding those up and adjusting for other rounding modes.
	* math/auto-libm-test-in: Remove xfail-rounding:ldbl-128ibm from
	tests of fma.
	* math/auto-libm-test-out: Regenerated.
2016-05-19 20:10:56 +00:00
H.J. Lu
de71e0421b Correct Intel processor level type mask from CPUID
Intel CPUID with EAX == 11 returns:

ECX Bits 07 - 00: Level number. Same value in ECX input.
    Bits 15 - 08: Level type.
    ^^^^^^^^^^^^^^^^^^^^^^^^ This is level type.
    Bits 31 - 16: Reserved.

Intel processor level type mask should be 0xff00, not 0xff0.

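A small sketch of the corrected extraction using GCC's <cpuid.h> (the
in-tree code operates on raw register values; the helper name is
illustrative):

    #include <cpuid.h>

    /* Return the level type (ECX bits 15:08) of CPUID leaf 11,
       subleaf `level'.  */
    static unsigned int
    cpuid11_level_type (unsigned int level)
    {
      unsigned int eax, ebx, ecx, edx;
      __cpuid_count (11, level, eax, ebx, ecx, edx);
      return (ecx & 0xff00) >> 8;
    }
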
	[BZ #20119]
	* sysdeps/x86/cacheinfo.c (init_cacheinfo): Correct Intel
	processor level type mask for CPUID with EAX == 11.
2016-05-19 10:02:36 -07:00
H.J. Lu
7c08d791ee Check the HTT bit before counting logical threads
Skip counting logical threads for Intel processors if the HTT bit is 0,
which indicates there is only a single logical processor.

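For illustration, HTT is reported in bit 28 of EDX for CPUID leaf 1; a
minimal check could look like this (a sketch, not the in-tree code):

    #include <cpuid.h>

    /* Return nonzero if CPUID leaf 1 reports HTT (EDX bit 28).  */
    static int
    has_htt (void)
    {
      unsigned int eax, ebx, ecx, edx;
      __cpuid (1, eax, ebx, ecx, edx);
      return (edx & (1u << 28)) != 0;
    }
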
	* sysdeps/x86/cacheinfo.c (init_cacheinfo): Skip counting
	logical threads if the HTT bit is 0.
	* sysdeps/x86/cpu-features.h (bit_cpu_HTT): New.
	(index_cpu_HTT): Likewise.
	(reg_HTT): Likewise.
2016-05-19 09:09:00 -07:00
H.J. Lu
eb2c88c7c8 Remove alignments on jump targets in memset
X86-64 memset-vec-unaligned-erms.S aligns many jump targets, which
increases code size but does not necessarily improve performance.  The
memset benchtest data comparing aligned and unaligned jump targets on
various Intel and AMD processors

https://sourceware.org/bugzilla/attachment.cgi?id=9277

shows that aligning jump targets isn't necessary.

	[BZ #20115]
	* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S (__memset):
	Remove alignments on jump targets.
2016-05-19 08:49:55 -07:00
H.J. Lu
16cd2b35c2 Don't call internal _Unwind_Resume via PLT
There is no need to call the internal function _Unwind_Resume, which
is defined in unwind-forcedunwind.c, via the PLT.

	* sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S
	(__condvar_cleanup2): Remove JUMPTARGET from  _Unwind_Resume
	call.
	* sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S
	(__condvar_cleanup1): Likewise.
2016-05-18 13:43:26 -07:00
H.J. Lu
d29261db22 Don't call internal __pthread_unwind via PLT
Add PTHREAD_UNWIND to replace JUMPTARGET(__pthread_unwind) and define
it to __GI___pthread_unwind within libpthread.

	* sysdeps/unix/sysv/linux/x86_64/cancellation.S (PTHREAD_UNWIND):
	New.
	(__pthread_unwind): Renamed to ...
	(PTHREAD_UNWIND): This.
	(__pthread_enable_asynccancel): Replace
	JUMPTARGET(__pthread_unwind) with PTHREAD_UNWIND.
2016-05-18 13:41:55 -07:00
Joseph Myers
48526672b6 Add CLONE_NEWCGROUP from Linux 4.6 to bits/sched.h.
This patch adds CLONE_NEWCGROUP, new in Linux 4.6, to
sysdeps/unix/sysv/linux/bits/sched.h.

Tested for x86_64 and x86 (testsuite, and that installed stripped
shared libraries are unchanged by the patch).

	* sysdeps/unix/sysv/linux/bits/sched.h [__USE_GNU]
	(CLONE_NEWCGROUP): New macro.
2016-05-18 17:46:52 +00:00
Joseph Myers
2a1aa52824 Add Q_GETNEXTQUOTA from Linux 4.6 to sys/quota.h.
This patch adds Q_GETNEXTQUOTA, new in Linux 4.6, to
sysdeps/unix/sysv/linux/sys/quota.h.

Tested for x86_64 and x86 (testsuite, and that installed shared
libraries are unchanged by the patch).

	* sysdeps/unix/sysv/linux/sys/quota.h [_LINUX_QUOTA_VERSION >= 2]
	(Q_GETNEXTQUOTA): New macro.
2016-05-18 13:15:11 +00:00
H.J. Lu
4facca0b0e Call init_cpu_features only if SHARED is defined
In a static executable, since init_cpu_features is called early from
__libc_start_main, there is no need to call it again in dl_platform_init.

	[BZ #20072]
	* sysdeps/i386/dl-machine.h (dl_platform_init): Call
	init_cpu_features only if SHARED is defined.
	* sysdeps/x86_64/dl-machine.h (dl_platform_init): Likewise.
2016-05-13 08:29:33 -07:00
H.J. Lu
9e4ec3e816 Support non-inclusive caches on Intel processors
* sysdeps/x86/cacheinfo.c (init_cacheinfo): Check and support
	non-inclusive caches on Intel processors.
2016-05-13 07:18:35 -07:00
Wilco Dijkstra
a8c5a2a952 This is an optimized memset for AArch64. Memset is split into 4 main cases:
small sets of up to 16 bytes, medium sets of 16..96 bytes, which are fully
unrolled.  Large memsets of more than 96 bytes align the destination and use an
unrolled loop processing 64 bytes per iteration.  Memsets of zero of more than
256 bytes use the dc zva instruction, and there are faster versions for the
common ZVA sizes 64 and 128.  STP of Q registers is used to reduce code size
without loss of performance.

The speedup on test-memset is 1% on Cortex-A57 and 8% on Cortex-A53.

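For reference, dc zva zeroes one ZVA-sized block (commonly 64 bytes); a
hedged C sketch of zeroing a single aligned block via inline assembly:

    /* Zero the ZVA block containing p; p must be ZVA-aligned.  */
    static inline void
    zva_zero_block (void *p)
    {
      asm volatile ("dc zva, %0" : : "r" (p) : "memory");
    }
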
	* sysdeps/aarch64/memset.S (__memset):
	Rewrite of optimized memset.
2016-05-12 16:44:53 +01:00
Florian Weimer
56290d6e76 Increase fork signal safety for single-threaded processes [BZ #19703]
This provides a band-aid and addresses the scenario where fork is
called from a signal handler while the process is in the malloc
subsystem (or has acquired the libio list lock).  It does not
address the general issue of async-signal-safety of fork;
multi-threaded processes are not covered, and some glibc
subsystems have fork handlers which are not async-signal-safe.
2016-05-12 15:26:55 +02:00
Florian Weimer
cd065b6843 getaddrinfo: Convert from extend_alloca to struct scratch_buffer 2016-05-12 14:07:56 +02:00
Stefan Liebler
c64a10e544 S390: Use fPIC to avoid R_390_GOT12 relocation in gcrt1.o.
If glibc is built with -march=z900 or -march=z990,
the startup file gcrt1.o (used if you link with gcc -pg)
contains R_390_GOT12 or R_390_GOT20 relocations.
Thus, an entry in the GOT can be addressed relative to the GOT pointer
with a 12- or 20-bit displacement value.
The startup files should not contain R_390_GOT12 or
R_390_GOT20 relocations, but R_390_GOTENT ones.

This patch removes the overrides of pic-ccflag and
the default pic-ccflag = -fPIC in Makeconfig
is used instead to get the R_390_GOTENT relocations in gcrt1.o.

ChangeLog:

	* sysdeps/s390/s390-32/Makefile (pic-ccflag): Remove.
	* sysdeps/s390/s390-64/Makefile: Likewise.
2016-05-11 15:51:25 +02:00
H.J. Lu
2a1f15b1a9 Remove x86 ifunc-defines.sym and rtld-global-offsets.sym
Merge x86 ifunc-defines.sym with x86 cpu-features-offsets.sym.  Remove
x86 ifunc-defines.sym and rtld-global-offsets.sym.  No code changes on
i686 and x86-64.

	* sysdeps/i386/i686/multiarch/Makefile (gen-as-const-headers):
	Remove ifunc-defines.sym.
	* sysdeps/x86_64/multiarch/Makefile (gen-as-const-headers):
	Likewise.
	* sysdeps/i386/i686/multiarch/ifunc-defines.sym: Removed.
	* sysdeps/x86/rtld-global-offsets.sym: Likewise.
	* sysdeps/x86_64/multiarch/ifunc-defines.sym: Likewise.
	* sysdeps/x86/Makefile (gen-as-const-headers): Remove
	rtld-global-offsets.sym.
	* sysdeps/x86_64/multiarch/ifunc-defines.sym: Merged with ...
	* sysdeps/x86/cpu-features-offsets.sym: This.
	* sysdeps/x86/cpu-features.h: Include <cpu-features-offsets.h>
	instead of <ifunc-defines.h> and <rtld-global-offsets.h>.
2016-05-11 05:51:39 -07:00
Florian Weimer
8db2cf163e getaddrinfo: Restore RES_USE_INET6 flag on error path [BZ #19994] 2016-05-10 10:09:24 +02:00
Stefan Liebler
b91a333ecb S390: Add support for vdso getcpu symbol.
This patch adds support for the symbol __kernel_getcpu in the vDSO,
which is available with kernel 4.5.
sched_getcpu now uses this symbol, if it is available in the mapped
vDSO, by defining the macro HAVE_GETCPU_VSYSCALL.  If it is not
available at runtime, the syscall is used as before.
2016-05-09 11:05:45 +02:00
H.J. Lu
a9558b49b3 Move sysdeps/x86_64/cacheinfo.c to sysdeps/x86
Move sysdeps/x86_64/cacheinfo.c to sysdeps/x86.  No code changes on x86
and x86_64.

	* sysdeps/i386/cacheinfo.c: Include <sysdeps/x86/cacheinfo.c>
	instead of <sysdeps/x86_64/cacheinfo.c>.
	* sysdeps/x86_64/cacheinfo.c: Moved to ...
	* sysdeps/x86/cacheinfo.c: Here.
2016-05-08 08:49:18 -07:00
Samuel Thibault
04794f3e7e Revert "aio: fix newp->running data race"
This reverts commit fd67a9cf7b.
2016-05-04 15:52:30 +02:00
Samuel Thibault
fd67a9cf7b aio: fix newp->running data race
* sysdeps/pthread/aio_misc.c (__aio_enqueue_request): Do not write
	`running` field of `newp` when a thread was started to process it,
	since that thread will not take `__aio_requests_mutex`, and the field
	already has the proper value.
2016-05-04 15:14:29 +02:00
Gabriel F. T. Gomes
eb3b8a4924 powerpc: Fix operand prefixes
The file sysdeps/powerpc/sysdeps.h defines aliases for condition register
operands.  E.g.: 'cr7' means condition register 7.  On the one hand, this
increases readability, as it makes it easier for readers to know whether an
operand is a condition register, a general purpose register or an immediate.
On the other hand, it permits condition registers to be written as if they
were general purpose registers, and vice versa, thus reducing the
readability of the code.

This commit removes some of these unintentional misuses.

The changes have no effect on the final code.  Checked with objdump.
2016-05-04 09:14:52 -03:00
Florian Weimer
5171f3079f CVE-2016-1234: glob: Do not copy d_name field of struct dirent [BZ #19779]
Instead, we store the data we need from the return value of
readdir in an object of the new type struct readdir_result.
This type is independent of the layout of struct dirent.
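
A minimal sketch of the idea (the real struct carries a little more
state, such as a skip flag):

    /* Only the data glob needs, independent of struct dirent layout.  */
    struct readdir_result
    {
      const char *name;         /* Entry name from the dirent buffer.  */
      unsigned char type;       /* d_type, where the OS provides it.  */
    };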
2016-05-04 12:09:35 +02:00
Paul E. Murphy
cbc06bc486 powerpc: Add missing insn in swapcontext [BZ #20004]
A missing instruction was discovered in the compat version of
swapcontext while running the GCC test suite.
2016-05-03 10:45:51 -05:00
Adhemerval Zanella
230528c467 powerpc: Fix clone CLONE_VM compare
This patch fixes the clone CLONE_VM change from 0cb313f (BZ#19957),
where the commit changed the register that contains the saved flags
argument to compare against (from r28 to r29).  This patch changes it
back to the correct register.

Tested on powerpc32 (thanks to Tulio Magno Quites Machado Filho).

	* sysdeps/unix/sysv/linux/powerpc/powerpc32/clone.S (__clone): Fix
	flags CLONE_VM compare.
2016-05-02 17:44:00 -03:00
Andreas Schwab
8a9ea3ccc5 m68k: use large PIC model for gcrt1.o 2016-04-30 18:51:43 +02:00
Andreas Schwab
4816d802ff m68k: avoid local labels in symbol table 2016-04-30 18:50:39 +02:00
Adhemerval Zanella
0cb313f7cb Fix clone (CLONE_VM) pid/tid reset (BZ#19957)
As discussed in libc-alpha [1], current clone with CLONE_VM (without
CLONE_THREAD set) will reset the pthread pid/tid fields to -1.  The
issue is that, since memory is shared between the parent and child, it
will clobber the parent's cached pid/tid, leading to internal
inconsistencies if the value is not restored.

And even if it is restored, it may lead to race conditions when,
between the set/restore, a thread invokes a pthread function that
validates the pthread with INVALID_TD_P/INVALID_NOT_TERMINATED_TD_P
and thus gets wrong results.

As stated in BZ19957, previous reports of this behaviour were closed
as WONTFIX due to the fact that using clone outside glibc is tricky,
since glibc requires consistent internal pthread state while using
clone directly may not provide it.  However, now that posix_spawn uses
clone (CLONE_VM) to fix various issues related to the previous vfork
usage, this issue requires fixing.

The vfork implementation does something similar, but instead
it negates and restores only the *pid* field, and the functions that
might access its value know how to handle this case (getpid, raise,
and the pthread ones that use the INVALID_TD_P/
INVALID_NOT_TERMINATED_TD_P macros, which check only the *tid* field).
Also, vfork does not call __clone, instead invoking either __NR_vfork
or __NR_clone directly.

So this patch removes this clone behavior by not setting the pthread
pid/tid fields for CLONE_VM.  There is no need to check for
CLONE_THREAD, since the minimum supported kernel on all architectures
implies that CLONE_THREAD must be used together with CLONE_VM;
otherwise clone returns EINVAL.

Instead of the current approach of:

   int clone(int (*fn)(void *), void *child_stack, int flags, ...)
      [...]
      if (flags & CLONE_THREAD)
        goto do_syscall;
      pid_t new_value;
      if (flags & CLONE_VM)
        new_value = -1;
      else
        new_value = getpid ();
      THREAD_SETMEM (THREAD_SELF, pid, new_value);
      THREAD_SETMEM (THREAD_SELF, tid, new_value);

    do_syscall:
      [...]

The new approach uses:

   int clone(int (*fn)(void *), void *child_stack, int flags, ...)
      [...]
      if (flags & CLONE_VM)
        goto do_syscall;
      pid_t new_value = getpid ();
      THREAD_SETMEM (THREAD_SELF, pid, new_value);
      THREAD_SETMEM (THREAD_SELF, tid, new_value);

    do_syscall:
      [...]

It also removes the Linux tst-getpid2.c test, which expects the previous
behavior, and instead adds another clone test.

Tested on x86_64, i686, x32, powerpc64le, aarch64, armhf, s390, and
s390x. I also did limited check on mips32 and sparc64 (using the new
added test).

I also got reviews from the m68k, hppa, and tile maintainers, so I
presume the patch works for these architectures.

The fixes for alpha, microblaze, sh, ia64, and nios2 have not been
tested.

[1] https://sourceware.org/ml/libc-alpha/2016-04/msg00307.html

	* sysdeps/unix/sysv/linux/Makefile [$(subdir) == nptl] (test): Remove
	tst-getpid2.
	(test): Add tst-clone2.
	* sysdeps/unix/sysv/linux/tst-clone2.c: New file.
	* sysdeps/unix/sysv/linux/aarch64/clone.S (__clone): Do not change
	pid/tid fields for CLONE_VM.
	* sysdeps/unix/sysv/linux/arm/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/i386/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/mips/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-32/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/s390/s390-64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc32/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/sparc/sparc64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/x86_64/clone.S: Likewise.
	* sysdeps/unix/sysv/linux/tst-getpid2.c: Remove file.
2016-04-29 18:19:30 -03:00
Gabriel F. T. Gomes
72c11b353e powerpc: Zero pad using memset in strncpy/stpncpy
Call __memset_power8 to pad with zeros the remaining bytes in the
dest string in __strncpy_power8 and __stpncpy_power8.  This improves
performance when n is larger than the input string, giving ~30% gain for
larger strings without noticeably impacting shorter strings.
2016-04-29 10:05:33 -03:00
Florian Weimer
4ab2ab03d4 CVE-2016-3706: getaddrinfo: stack overflow in hostent conversion [BZ #20010]
When converting a struct hostent response to struct gaih_addrtuple, the
gethosts macro (which is called from gaih_inet) used alloca, without
malloc fallback for large responses.  This commit changes this code to
use calloc unconditionally.

This commit also consolidates a second hostent-to-gaih_addrtuple
conversion loop (in gaih_inet) to use the new conversion function.
2016-04-29 10:35:34 +02:00
Stefan Liebler
b06549a5e6 Add missing iucv related defines.
This patch adds the missing SOL_IUCV socket level definition
and the socket options SO_IPRMDATA_MSG, SO_MSGLIMIT, and SO_MSGSIZE,
which can be used with get/setsockopt().
SCM_IUCV_TRGCLS is needed to send/receive ancillary data with send/recvmsg().

The defines are copied from kernel-source:
include/net/iucv/af_iucv.h
include/linux/socket.h
2016-04-27 09:08:29 +02:00
Paul E. Murphy
8f1b841e45 powerpc: Add optimized strcspn for P8
A few minor adjustments to the P8 strspn give us
an almost equally optimized P8 strcspn.
2016-04-25 09:11:02 -05:00
Chung-Lin Tang
a5507dfa60 Fix stdlib/tst-makecontext regression for Nios II 2016-04-25 00:08:17 -07:00
Rajalakshmi Srinivasaraghavan
e413b14e18 powerpc: strcasestr optimization for power8
This patch optimizes the strcasestr function for power >= 8 systems.
The optimized implementation compares 16 bytes at a time using vector
instructions and gives an average improvement of ~40%.  This patch is
tested on powerpc64 and powerpc64le.
2016-04-22 19:23:13 +05:30
Samuel Thibault
6f8222a1c5 Fix gprof timing
* sysdeps/mach/hurd/profil.c (__profile_frequency): Return tick
	frequency instead of tick length in us.
2016-04-19 23:27:27 +02:00
Samuel Thibault
593285ac15 hurd: fix profiling short-living processes
* sysdeps/mach/hurd/profil.c (update_waiter): Initialize
	profil_reply_port.
	(profile_waiter): Do not initialize profil_reply_port.
2016-04-19 00:54:24 +02:00
Carlos Eduardo Seo
1b045ee53e powerpc: Optimization for strlen for POWER8.
This implementation takes advantage of vectorization to improve performance of
the loop over the current strlen implementation for POWER7.
2016-04-15 17:19:19 -03:00
H.J. Lu
2e2d9796da Detect Intel Goldmont and Airmont processors
Updated from the model numbers of Goldmont and Airmont processors in
Intel64 And IA-32 Processor Architectures Software Developer's Manual
Volume 3 Revision 058.

	* sysdeps/x86/cpu-features.c (init_cpu_features): Detect Intel
	Goldmont and Airmont processors.
2016-04-15 05:23:06 -07:00
Adhemerval Zanella
41e77f36d4 Fix pread consolidation on ports that require argument alignment
This patch fixes the __ALIGNMENT_{ARG,COUNT} definition for ports that
define __ASSUME_ALIGNED_REGISTER_PAIRS by including kernel-features.h
(where it is defined when applicable).

This showed up on arm as failing test cases:

FAIL: debug/tst-chk1
FAIL: debug/tst-chk2
FAIL: debug/tst-chk3
FAIL: debug/tst-chk4
FAIL: debug/tst-chk5
FAIL: debug/tst-chk6
FAIL: debug/tst-lfschk1
FAIL: debug/tst-lfschk2
FAIL: debug/tst-lfschk3
FAIL: debug/tst-lfschk4
FAIL: debug/tst-lfschk5
FAIL: debug/tst-lfschk6
FAIL: posix/tst-preadwrite
FAIL: posix/tst-preadwrite64

The patch fixes them.  Tested on armhf.

	* sysdeps/unix/sysv/linux/sysdep.h: Include kernel-features.h.
2016-04-14 16:49:40 -03:00
Florian Weimer
ae9e94e744 malloc: Remove unused definitions of thread_atfork, thread_atfork_static 2016-04-14 09:17:36 +02:00
Florian Weimer
29d794863c malloc: Run fork handler as late as possible [BZ #19431]
Previously, a thread M invoking fork would acquire locks in this order:

  (M1) malloc arena locks (in the registered fork handler)
  (M2) libio list lock

A thread F invoking flush (NULL) would acquire locks in this order:

  (F1) libio list lock
  (F2) individual _IO_FILE locks

A thread G running getdelim would use this order:

  (G1) _IO_FILE lock
  (G2) malloc arena lock

After executing (M1), (F1), (G1), none of the threads can make progress.

This commit changes the fork lock order to:

  (M'1) libio list lock
  (M'2) malloc arena locks

It explicitly encodes the lock order in the implementations of fork,
and does not rely on the registration order, thus avoiding the deadlock.
2016-04-14 09:17:02 +02:00
Florian Weimer
b49ab5f450 Remove union wait [BZ #19613]
The overloading approach in the W* macros was incompatible with
integer expressions of a type different from int.  Applications
using union wait and these macros will have to migrate to the
POSIX-specified int status type.
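
For reference, code using union wait migrates to the POSIX idiom; a
small sketch (the helper name is illustrative):

    #include <sys/types.h>
    #include <sys/wait.h>

    /* Reap `pid' and return its exit status, or -1.  */
    static int
    wait_exit_status (pid_t pid)
    {
      int status;               /* int, not union wait.  */
      if (waitpid (pid, &status, 0) > 0 && WIFEXITED (status))
        return WEXITSTATUS (status);
      return -1;
    }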
2016-04-14 08:54:57 +02:00
Andreas Schwab
b4bcb3aec6 Register extra test objects
This makes sure that the extra test objects are compiled with the correct
MODULE_NAME and dependencies are tracked.
2016-04-13 17:07:13 +02:00
H.J. Lu
a057f5f8cd X86-64: Use non-temporal store in memcpy on large data
The large memcpy micro benchmark in glibc shows that there is a
regression with large data on Haswell machines.  Non-temporal stores in
memcpy can improve performance significantly on large data.  This
patch adds a threshold for using non-temporal stores, set to 6 times
the shared cache size.  When the size is above the threshold,
non-temporal stores are used, unless destination and source overlap,
since the destination may be in cache when the source is loaded.

For sizes below 8 vector register widths, we load all data into
registers and store them together.  Otherwise, only forward and
backward loops, which move 4 vector registers at a time, are used to
support overlapping addresses.  For the forward loop, we load the last
4 vector register widths of data and the first vector register width of
data into vector registers before the loop and store them after the
loop.  For the backward loop, we load the first 4 vector register
widths of data and the last vector register width of data into vector
registers before the loop and store them after the loop.

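A hedged sketch of the dispatch logic (the helper names are made up;
only __x86_shared_non_temporal_threshold comes from this patch, and the
real code is assembly):

    if (size >= __x86_shared_non_temporal_threshold
        && !ranges_overlap (dst, src, size))
      copy_forward_non_temporal (dst, src, size);   /* movntdq stores */
    else
      copy_with_regular_stores (dst, src, size);
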
	[BZ #19928]
	* sysdeps/x86_64/cacheinfo.c (__x86_shared_non_temporal_threshold):
	New.
	(init_cacheinfo): Set __x86_shared_non_temporal_threshold to 6
	times of shared cache size.
	* sysdeps/x86_64/multiarch/memmove-avx-unaligned-erms.S
	(VMOVNT): New.
	* sysdeps/x86_64/multiarch/memmove-avx512-unaligned-erms.S
	(VMOVNT): Likewise.
	* sysdeps/x86_64/multiarch/memmove-sse2-unaligned-erms.S
	(VMOVNT): Likewise.
	(VMOVU): Changed to movups for smaller code sizes.
	(VMOVA): Changed to movaps for smaller code sizes.
	* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S: Update
	comments.
	(PREFETCH): New.
	(PREFETCH_SIZE): Likewise.
	(PREFETCHED_LOAD_SIZE): Likewise.
	(PREFETCH_ONE_SET): Likewise.
	Rewrite to use forward and backward loops, which move 4 vector
	registers at a time, to support overlapping addresses and use
	non temporal store if size is above the threshold and there is
	no overlap between destination and source.
2016-04-12 08:10:47 -07:00
Matthew Fortune
b39d84adff VDSO support for MIPS
This patch adds support for using the implementations of gettimeofday()
and clock_gettime() provided by the kernel in the VDSO. The VDSO will
always provide clock_gettime() as CLOCK_{REALTIME,MONOTONIC}_COARSE can
be implemented regardless of platform. CLOCK_{REALTIME,MONOTONIC}, along
with gettimeofday(), are only implemented on platforms which make use of
either the CP0 count or GIC as their clocksource. On other platforms,
the VDSO does not provide the __vdso_gettimeofday symbol, as it is
never useful.

The VDSO functions return ENOSYS when they encounter an unsupported
request, in which case glibc should fall back to the standard syscall.

Tested with upstream kernel 4.5 and QEMU emulating Malta.

./vdsotest gettimeofday bench
gettimeofday: syscall: 1021 nsec/call
gettimeofday:    libc: 262 nsec/call
gettimeofday:    vdso: 174 nsec/call

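The fallback pattern, as a rough C sketch (the function pointer and
names are illustrative; the real code uses glibc's dl-vdso machinery
and INTERNAL_VSYSCALL_CALL):

    #include <errno.h>
    #include <sys/syscall.h>
    #include <sys/time.h>
    #include <unistd.h>

    static int (*vdso_gettimeofday) (struct timeval *, void *);

    int
    gettimeofday_sketch (struct timeval *tv, void *tz)
    {
      if (vdso_gettimeofday != NULL)
        {
          int r = vdso_gettimeofday (tv, tz);
          if (r != -ENOSYS)
            return r;             /* The vDSO handled the request.  */
        }
      return syscall (SYS_gettimeofday, tv, tz);
    }
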
	* sysdeps/unix/sysv/linux/mips/Makefile (sysdep_routines):
	Include dl-vdso.
	* sysdeps/unix/sysv/linux/mips/Versions: Add
	__vdso_clock_gettime.
	* sysdeps/unix/sysv/linux/mips/init-first.c: New file.
	* sysdeps/unix/sysv/linux/mips/libc-vdso.h: New file.
	* sysdeps/unix/sysv/linux/mips/mips32/sysdep.h:
	(INTERNAL_VSYSCALL_CALL): Define to be compatible with MIPS
	definitions of INTERNAL_SYSCALL_{ERROR_P,ERRNO}.
	(HAVE_CLOCK_GETTIME_VSYSCALL): Define.
	(HAVE_GETTIMEOFDAY_VSYSCALL): Define.
	* sysdeps/unix/sysv/linux/mips/mips64/n32/sysdep.h: Likewise.
	* sysdeps/unix/sysv/linux/mips/mips64/n64/sysdep.h: Likewise.
2016-04-12 11:05:13 +01:00
Adhemerval Zanella
071af4769f Consolidate pwrite/pwrite64 implementations
This patch consolidates all the pwrite/pwrite64 implementations for
Linux into a single one (sysdeps/unix/sysv/linux/pwrite{64}.c).  It also
removes the syscall from the auto-generation using assembly macros.

For the pwrite{64} offset argument placement the new SYSCALL_LL{64}
macro is used.  For pwrite, ports that do not define __NR_pwrite will
use __NR_pwrite64; for pwrite64, ports that do not define __NR_pwrite64
will use __NR_pwrite for the syscall.

Checked on x86_64, x32, i386, aarch64, and ppc64le.

	* sysdeps/unix/sysv/linux/arm/pwrite.c: Remove file.
	* sysdeps/unix/sysv/linux/arm/pwrite64.c: Likewise.
	* sysdeps/unix/sysv/linux/generic/wordsize-32/pwrite.c: Likewise.
	* sysdeps/unix/sysv/linux/generic/wordsize-32/pwrite64.c: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/pwrite.c: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/pwrite64.c: Likewise.
	* sysdeps/unix/sysv/linux/wordsize-64/pwrite64.c: Likewise.
	* sysdeps/unix/sysv/linux/wordsize-64/syscalls.list (pwrite): Remove
	syscalls generation.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/sysdep.h
	[__NR_pwrite64] (__NR_write): Remove define.
	* sysdeps/unix/sysv/linux/powerpc/powerpc64/sysdep.h
	[__NR_pwrite64] (__NR_write): Remove define.
	* sysdeps/unix/sysv/linux/pwrite.c [__NR_pwrite64] (__NR_pwrite):
	Remove define.
	(__libc_pwrite): Use SYSCALL_LL macro on offset argument.
	* sysdeps/unix/sysv/linux/pwrite64.c [__NR_pwrite64] (__NR_pwrite):
	Remove define.
	(__libc_pwrite64): Use SYSCALL_LL64 macro on offset argument.
	* sysdeps/unix/sysv/linux/sh/pwrite.c: Rewrite using default
	Linux implementation as base.
	* sysdeps/unix/sysv/linux/sh/pwrite64.c: Likewise.
	* sysdeps/unix/sysv/linux/mips/pwrite.c: Likewise.
	* sysdeps/unix/sysv/linux/mips/pwrite64.c: Likewise.
2016-04-11 10:08:01 -03:00
Adhemerval Zanella
77a4fbd536 Consolidate pread/pread64 implementations
This patch consolidates all the pread/pread64 implementations for
Linux into a single one (sysdeps/unix/sysv/linux/pread.c).  It also
removes the syscall from the auto-generation using assembly macros.

For the pread{64} offset argument placement the new SYSCALL_LL{64}
macro is used.  For pread, ports that do not define __NR_pread will
use __NR_pread64; for pread64, ports that do not define __NR_pread64
will use __NR_pread for the syscall.

Checked on x86_64, x32, i386, aarch64, and ppc64le.

	* sysdeps/unix/sysv/linux/arm/pread.c: Remove file.
	* sysdeps/unix/sysv/linux/arm/pread64.c: Likewise.
	* sysdeps/unix/sysv/linux/generic/wordsize-32/pread.c: Likewise.
	* sysdeps/unix/sysv/linux/generic/wordsize-32/pread64.c: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/pread.c: Likewise.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/pread64.c: Likewise.
	* sysdeps/unix/sysv/linux/wordsize-64/pread64.c: Likewise.
	* sysdeps/unix/sysv/linux/wordsize-64/syscalls.list (pread): Remove
	syscall generation.
	* sysdeps/unix/sysv/linux/powerpc/powerpc32/sysdep.h
	[__NR_pread64] (__NR_pread): Remove define.
	* sysdeps/unix/sysv/linux/powerpc/powerpc64/sysdep.h:
	[__NR_pread64] (__NR_pread): Likewise.
	* sysdeps/unix/sysv/linux/pread.c [__NR_pread64] (__NR_pread): Remove
	define.
	(__libc_pread): Use SYSCALL_LL macro on offset argument.
	* sysdeps/unix/sysv/linux/pread64.c [__NR_pread64] (__NR_pread):
	Remove define.
	(__libc_pread64): Use SYSCALL_LL64 macro on offset argument.
	* sysdeps/unix/sysv/linux/sh/pread.c: Rewrite using default
	Linux implementation as base.
	* sysdeps/unix/sysv/linux/sh/pread64.c: Likewise.
	* sysdeps/unix/sysv/linux/mips/pread.c: Likewise.
	* sysdeps/unix/sysv/linux/mips/pread64.c: Likewise.
2016-04-11 10:08:01 -03:00
Adhemerval Zanella
eeddfa91cb Consolidate off_t/off64_t syscall argument passing
This patch adds three new macros (SYSCALL_LL, SYSCALL_LL64, and
__ASSUME_WORDSIZE64_ILP32) to use along with off_t and off64_t argument
syscalls.  The rationale for this change is:

1. Remove multiple implementations for the same syscall for different
   architectures (for instance, pread have 6 different implementations).

2. Also remove the requirement to use syscall wrappers for cancellable
   entrypoints.

The macros should be used along with __ALIGNMENT_ARG to follow ABI
constraints on architectures where they apply.  For instance, pread can
be rewritten as:

  return SYSCALL_CANCEL (pread, fd, buf, count,
                         __ALIGNMENT_ARG SYSCALL_LL (offset));

Another macro, SYSCALL_LL64, is provided for off64_t.  The macro
__ASSUME_WORDSIZE64_ILP32 is defined by ABIs that use 64-bit registers
even though the ABI is ILP32 (for instance x32 and mips64-n32).

The changes themselves are not currently used in any implementation, so
no code change is expected.

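A simplified sketch of what the macros might expand to (an assumption
for illustration; the actual definitions live in
sysdeps/unix/sysv/linux/sysdep.h and use __LONG_LONG_PAIR to get the
endianness right):

    #ifdef __ASSUME_WORDSIZE64_ILP32
    # define SYSCALL_LL(val)   (val)    /* One 64-bit register.  */
    # define SYSCALL_LL64(val) (val)
    #else
    # define SYSCALL_LL(val) \
        __LONG_LONG_PAIR ((long) ((val) >> 31), (long) (val))
    # define SYSCALL_LL64(val) \
        __LONG_LONG_PAIR ((long) ((val) >> 32), (long) (val))
    #endif
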
	* sysdeps/unix/sysv/linux/generic/sysdep.h (__ALIGNMENT_ARG): Move
	definition.
	(__ALIGNMENT_COUNT): Likewise.
	* sysdeps/unix/sysv/linux/sysdep.h (__ALIGNMENT_ARG): To here.
	(__ALIGNMENT_COUNT): Likewise.
	(SYSCALL_LL): New define.
	(SYSCALL_LL64): Likewise.
	* sysdeps/unix/sysv/linux/mips/kernel-features.h:
	[_MIPS_SIM == _ABIO32] (__ASSUME_WORDSIZE64_ILP32): Define.
	* sysdeps/unix/sysv/linux/x86_64/kernel-features.h:
	[ILP32] (__ASSUME_WORDSIZE64_ILP32): Likewise.
2016-04-11 10:07:53 -03:00
Adhemerval Zanella
482b2f87a8 Define __ASSUME_ALIGNED_REGISTER_PAIRS for missing ports
This patch defines __ASSUME_ALIGNED_REGISTER_PAIRS for the missing
ports that require 64-bit values (e.g., long long) to be aligned to
an even register pair in argument passing.

No code change is expected, tested with builds for powerpc32,
mips-o32, and armhf.

	* sysdeps/unix/sysv/linux/arm/kernel-features.h
	(__ASSUME_ALIGNED_REGISTER_PAIRS): Define.
	* sysdeps/unix/sysv/linux/mips/kernel-features.h
	[_MIPS_SIM == _ABIO32] (__ASSUME_ALIGNED_REGISTER_PAIRS): Likewise.
	* sysdeps/unix/sysv/linux/powerpc/kernel-features.h
	[!__powerpc64__] (__ASSUME_ALIGNED_REGISTER_PAIRS): Likewise.
2016-04-11 09:15:11 -03:00
Samuel Thibault
e1ef505659 Fix build with HAVE_AUX_VECTOR
* sysdeps/unix/sysv/linux/ldsodefs.h (HAVE_AUX_VECTOR): Define before
	including <ldsodefs.h>.
	* sysdeps/nacl/ldsodefs.h (HAVE_AUX_VECTOR): Likewise.
2016-04-11 10:27:25 +02:00
Samuel Thibault
0cdc5e930a Fix crash on getauxval call without HAVE_AUX_VECTOR
* sysdeps/generic/ldsodefs.h (struct rtld_global_ro)
	[!HAVE_AUX_VECTOR]: Do not define _dl_auxv field.
	* misc/getauxval.c (__getauxval) [!HAVE_AUX_VECTOR]: Do not go through
	GLRO(dl_auxv) list.
2016-04-10 23:58:43 +02:00
Khem Raj
1a5d01e79e When disabling SSE, make sure -fpmath is not set to use SSE either
This fixes errors seen when we inject SSE options through CFLAGS; now
that we have -Werror turned on by default, this warning turns into an
error on x86:

$ gcc -m32 -march=core2 -mtune=core2 -msse3 -mfpmath=sse -x c /dev/null -S -mno-sse -mno-mmx
/dev/null:1:0: warning: SSE instruction set disabled, using 387 arithmetics

Whereas:

$ gcc -m32 -march=core2 -mtune=core2 -msse3 -mfpmath=sse -x c /dev/null -S -mno-sse -mno-mmx -mfpmath=387

Generates no warnings.
2016-04-09 22:14:24 -04:00
Mike Frysinger
b2d4456b33 configure: fix test == usage
POSIX defines the = operator, but not ==.  Fix the few places where we
incorrectly used ==.
2016-04-09 20:05:13 -04:00
Joseph Myers
fb3227b95c Fix termios.h XCASE namespace (bug 19925).
bits/termios.h (various versions under sysdeps/unix/sysv/linux)
defines XCASE if defined __USE_MISC || defined __USE_XOPEN.  This
macro was removed in the 2001 edition of POSIX, and is not otherwise
reserved, so should not be defined for 2001 and later versions of
POSIX.  This patch fixes the conditions accordingly (leaving the macro
defined for __USE_MISC, so still in the default namespace).

Tested for x86_64 and x86 (testsuite, and that installed shared
libraries are unchanged by the patch).

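The resulting guard, roughly (a sketch matching the ChangeLog below;
0000004 is the Linux XCASE value):

    #if defined __USE_MISC || (defined __USE_XOPEN && !defined __USE_XOPEN2K)
    # define XCASE 0000004
    #endif
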
	[BZ #19925]
	* sysdeps/unix/sysv/linux/alpha/bits/termios.h (XCASE): Do not
	define if [!__USE_MISC && __USE_XOPEN2K].
	* sysdeps/unix/sysv/linux/bits/termios.h (XCASE): Likewise.
	* sysdeps/unix/sysv/linux/mips/bits/termios.h (XCASE): Likewise.
	* sysdeps/unix/sysv/linux/powerpc/bits/termios.h (XCASE):
	Likewise.
	* sysdeps/unix/sysv/linux/sparc/bits/termios.h (XCASE): Likewise.
	* conform/Makefile (test-xfail-XOPEN2K/termios.h/conform): Remove
	variable.
	(test-xfail-XOPEN2K8/termios.h/conform): Likewise.
2016-04-08 18:16:09 +00:00
Paul E. Murphy
25dba0ad05 powerpc: Add optimized P8 strspn
This utilizes vectors and bitmasks.  For a small needle and a large
haystack, the performance improvement is up to 8x.  For short
strings (0-4B), the cost of computing the bitmask dominates,
making it a tad slower.
2016-04-07 15:51:28 -05:00
H.J. Lu
a7d1c51482 X86-64: Prepare memmove-vec-unaligned-erms.S
Prepare memmove-vec-unaligned-erms.S to make the SSE2 version as the
default memcpy, mempcpy and memmove.

	* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
	(MEMCPY_SYMBOL): New.
	(MEMPCPY_SYMBOL): Likewise.
	(MEMMOVE_CHK_SYMBOL): Likewise.
	Replace MEMMOVE_SYMBOL with MEMMOVE_CHK_SYMBOL on __mempcpy_chk
	symbols.  Replace MEMMOVE_SYMBOL with MEMPCPY_SYMBOL on
	__mempcpy symbols.  Provide alias for __memcpy_chk in libc.a.
	Provide alias for memcpy in libc.a and ld.so.
2016-04-06 10:19:16 -07:00
H.J. Lu
4af1bb06c5 X86-64: Prepare memset-vec-unaligned-erms.S
Prepare memset-vec-unaligned-erms.S to make the SSE2 version as the
default memset.

	* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
	(MEMSET_CHK_SYMBOL): New.  Define if not defined.
	(__bzero): Check VEC_SIZE == 16 instead of USE_MULTIARCH.
	Disabled for now.
	Replace MEMSET_SYMBOL with MEMSET_CHK_SYMBOL on __memset_chk
	symbols.  Properly check USE_MULTIARCH on __memset symbols.
2016-04-06 09:10:35 -07:00
H.J. Lu
ec0cac9a1f Force 32-bit displacement in memset-vec-unaligned-erms.S
* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S: Force
	32-bit displacement to avoid long nop between instructions.
2016-04-05 05:21:19 -07:00
H.J. Lu
696ac77484 Add a comment in memset-sse2-unaligned-erms.S
* sysdeps/x86_64/multiarch/memset-sse2-unaligned-erms.S: Add
	a comment on VMOVU and VMOVA.
2016-04-05 05:19:18 -07:00
H.J. Lu
5cd7af016d Don't put SSE2/AVX/AVX512 memmove/memset in ld.so
Since memmove and memset in ld.so don't use IFUNC, don't put SSE2, AVX
and AVX512 memmove and memset in ld.so.

	* sysdeps/x86_64/multiarch/memmove-avx-unaligned-erms.S: Skip
	if not in libc.
	* sysdeps/x86_64/multiarch/memmove-avx512-unaligned-erms.S:
	Likewise.
	* sysdeps/x86_64/multiarch/memset-avx2-unaligned-erms.S:
	Likewise.
	* sysdeps/x86_64/multiarch/memset-avx512-unaligned-erms.S:
	Likewise.
2016-04-03 14:35:38 -07:00
H.J. Lu
ea2785e96f Fix memmove-vec-unaligned-erms.S
__mempcpy_erms and __memmove_erms can't be placed between __memmove_chk
and __memmove; doing so breaks __memmove_chk.

Don't check source == destination first since it is less common.

	* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:
	(__mempcpy_erms, __memmove_erms): Moved before __mempcpy_chk
	with unaligned_erms.
	(__memmove_erms): Skip if source == destination.
	(__memmove_unaligned_erms): Don't check source == destination
	first.
2016-04-03 12:38:25 -07:00
H.J. Lu
27d3ce1467 Remove Fast_Copy_Backward from Intel Core processors
Intel Core i3, i5 and i7 processors have fast unaligned copy, and
the copy-backward path is ignored on them.  Remove Fast_Copy_Backward
from Intel Core processors to avoid confusion.

	* sysdeps/x86/cpu-features.c (init_cpu_features): Don't set
	bit_arch_Fast_Copy_Backward for Intel Core processors.
2016-04-01 15:09:14 -07:00
Adhemerval Zanella
528ffb3a04 Remove powerpc64 strspn, strcspn, and strpbrk implementation
This patch removes the powerpc64 optimized strspn, strcspn, and
strpbrk assembly implementation now that the default C one
implements the same strategy.  On internal glibc benchtests
current implementations shows similar performance with -O2.

Tested on powerpc64le (POWER8).

	* sysdeps/powerpc/powerpc64/strcspn.S: Remove file.
	* sysdeps/powerpc/powerpc64/strpbrk.S: Remove file.
	* sysdeps/powerpc/powerpc64/strspn.S: Remove file.
2016-04-01 10:44:45 -03:00
Wilco Dijkstra
d3496c9f4f Improve generic strcspn performance
Improve strcspn performance using a much faster algorithm.  It is kept simple
so it works well on most targets.  It is generally at least 10 times faster
than the existing implementation on bench-strcspn on a few AArch64
implementations, and for some tests 100 times as fast (repeatedly calling
strchr on a small string is extremely slow...).

In fact the string/bits/string2.h inlines no longer make sense, as GCC
already uses strlen if reject is an empty string, strchrnul is 5 times as
fast as __strcspn_c1, while __strcspn_c2 and __strcspn_c3 are slower than
the strcspn main loop for large strings (though reject lengths 2-4 could
be special-cased in the future to gain even more performance).

Tested on x86_64, i686, and aarch64.

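The new strategy is essentially a 256-entry membership table; a
simplified sketch (the in-tree version additionally unrolls the scan
loop):

    #include <stddef.h>

    size_t
    strcspn_sketch (const char *str, const char *reject)
    {
      unsigned char table[256] = { 0 };
      const unsigned char *r = (const unsigned char *) reject;
      do
        table[*r] = 1;      /* Mark reject bytes; '\0' stops the scan.  */
      while (*r++ != '\0');

      const unsigned char *s = (const unsigned char *) str;
      while (table[*s] == 0)
        ++s;
      return (size_t) (s - (const unsigned char *) str);
    }
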
	* string/Versions (libc): Add GLIBC_2.24.
	* string/strcspn.c (strcspn): Rewrite function.
	* string/bits/string2.h (strcspn): Use __builtin_strcspn.
	(__strcspn_c1): Remove inline function.
	(__strcspn_c2): Likewise.
	(__strcspn_c3): Likewise.
	* string/string-inlines.c
	[SHLIB_COMPAT(libc, GLIBC_2_1_1, GLIBC_2_24)] (__strcspn_c1): Add
	compatibility symbol.
	[SHLIB_COMPAT(libc, GLIBC_2_1_1, GLIBC_2_24)] (__strcspn_c2):
	Likewise.
	[SHLIB_COMPAT(libc, GLIBC_2_1_1, GLIBC_2_24)] (__strcspn_c3):
	Likewise.
	* sysdeps/i386/string-inlines.c: Include generic string-inlines.c.
2016-04-01 10:44:40 -03:00
Stefan Liebler
d8a012c5c9 S390: Use ahi instead of aghi in 32bit _dl_runtime_resolve.
This patch uses ahi instead of aghi in 32bit _dl_runtime_resolve
to adjust the stack pointer. This is no functional change,
but a cosmetic one.

ChangeLog:

	* sysdeps/s390/s390-32/dl-trampoline.h (_dl_runtime_resolve):
	Use ahi instead of aghi to adjust stack pointer.
2016-04-01 10:42:54 +02:00
Paul E. Murphy
37a4c70bd4 Increase internal precision of ldbl-128ibm decimal printf [BZ #19853]
When the signs differ, the precision of the conversion sometimes
drops below 106 bits.  This strategy is identical to the
hexadecimal variant.

I've refactored tst-sprintf3 to enable testing a value with more
than 30 significant digits in order to demonstrate this failure
and its solution.

Additionally, this implicitly fixes a typo in the shift
quantities when subtracting from the high mantissa to compute
the difference.
2016-03-31 12:14:33 -05:00
H.J. Lu
830566307f Add x86-64 memset with unaligned store and rep stosb
Implement x86-64 memset with unaligned store and rep stosb.  Support
16-byte, 32-byte and 64-byte vector register sizes.  A single file
provides 2 implementations of memset, one with rep stosb and the other
without rep stosb.  They share the same code when the size is between 2
times the vector register size and REP_STOSB_THRESHOLD, which defaults
to 2KB.

Key features:

1. Use overlapping stores to avoid branches.
2. For size <= 4 times the vector register size, fully unroll the loop.
3. For size > 4 times the vector register size, store 4 times the
vector register size at a time.

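Feature 1 above, as a concrete sketch for 16 < n <= 32 using SSE2
intrinsics (the real code is assembly and handles all sizes):

    #include <emmintrin.h>
    #include <stddef.h>

    static void
    set_16_to_32 (char *dst, int c, size_t n)
    {
      __m128i v = _mm_set1_epi8 ((char) c);
      _mm_storeu_si128 ((__m128i *) dst, v);
      _mm_storeu_si128 ((__m128i *) (dst + n - 16), v);  /* may overlap */
    }
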
	[BZ #19881]
	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	memset-sse2-unaligned-erms, memset-avx2-unaligned-erms and
	memset-avx512-unaligned-erms.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Test __memset_chk_sse2_unaligned,
	__memset_chk_sse2_unaligned_erms, __memset_chk_avx2_unaligned,
	__memset_chk_avx2_unaligned_erms, __memset_chk_avx512_unaligned,
	__memset_chk_avx512_unaligned_erms, __memset_sse2_unaligned,
	__memset_sse2_unaligned_erms, __memset_erms,
	__memset_avx2_unaligned, __memset_avx2_unaligned_erms,
	__memset_avx512_unaligned_erms and __memset_avx512_unaligned.
	* sysdeps/x86_64/multiarch/memset-avx2-unaligned-erms.S: New
	file.
	* sysdeps/x86_64/multiarch/memset-avx512-unaligned-erms.S:
	Likewise.
	* sysdeps/x86_64/multiarch/memset-sse2-unaligned-erms.S:
	Likewise.
	* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S:
	Likewise.
2016-03-31 10:06:07 -07:00
H.J. Lu
88b57b8ed4 Add x86-64 memmove with unaligned load/store and rep movsb
Implement x86-64 memmove with unaligned load/store and rep movsb.
Support 16-byte, 32-byte and 64-byte vector register sizes.  When
size <= 8 times the vector register size, there is no check for
address overlap between source and destination.  Since the overhead of
the overlap check is small when size > 8 times the vector register
size, memcpy is an alias of memmove.

A single file provides 2 implementations of memmove, one with rep movsb
and the other without rep movsb.  They share the same code when the size
is between 2 times the vector register size and REP_MOVSB_THRESHOLD,
which is 2KB for the 16-byte vector register size and scaled up for
larger vector register sizes.

Key features:

1. Use overlapping load and store to avoid branches.
2. For size <= 8 times the vector register size, load all sources into
registers and store them together.
3. If there is no address overlap between source and destination, copy
from both ends with 4 times the vector register size at a time.
4. If address of destination > address of source, copy backward 8 times
the vector register size at a time.
5. Otherwise, copy forward 8 times the vector register size at a time.
6. Use rep movsb only for forward copying.  Avoid slow backward rep
movsb by falling back to backward copying of 8 times the vector
register size at a time.
7. Skip when address of destination == address of source.

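Feature 1 above, as a sketch for 16 < n <= 32 with SSE2 intrinsics (the
real code is assembly and covers all sizes):

    #include <emmintrin.h>
    #include <stddef.h>

    static void
    move_16_to_32 (char *dst, const char *src, size_t n)
    {
      /* Load both ends before storing anything, so an overlap cannot
         clobber not-yet-read source bytes.  */
      __m128i head = _mm_loadu_si128 ((const __m128i *) src);
      __m128i tail = _mm_loadu_si128 ((const __m128i *) (src + n - 16));
      _mm_storeu_si128 ((__m128i *) dst, head);
      _mm_storeu_si128 ((__m128i *) (dst + n - 16), tail);
    }
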
	[BZ #19776]
	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	memmove-sse2-unaligned-erms, memmove-avx-unaligned-erms and
	memmove-avx512-unaligned-erms.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Test
	__memmove_chk_avx512_unaligned_2,
	__memmove_chk_avx512_unaligned_erms,
	__memmove_chk_avx_unaligned_2, __memmove_chk_avx_unaligned_erms,
	__memmove_chk_sse2_unaligned_2,
	__memmove_chk_sse2_unaligned_erms, __memmove_avx_unaligned_2,
	__memmove_avx_unaligned_erms, __memmove_avx512_unaligned_2,
	__memmove_avx512_unaligned_erms, __memmove_erms,
	__memmove_sse2_unaligned_2, __memmove_sse2_unaligned_erms,
	__memcpy_chk_avx512_unaligned_2,
	__memcpy_chk_avx512_unaligned_erms,
	__memcpy_chk_avx_unaligned_2, __memcpy_chk_avx_unaligned_erms,
	__memcpy_chk_sse2_unaligned_2, __memcpy_chk_sse2_unaligned_erms,
	__memcpy_avx_unaligned_2, __memcpy_avx_unaligned_erms,
	__memcpy_avx512_unaligned_2, __memcpy_avx512_unaligned_erms,
	__memcpy_sse2_unaligned_2, __memcpy_sse2_unaligned_erms,
	__memcpy_erms, __mempcpy_chk_avx512_unaligned_2,
	__mempcpy_chk_avx512_unaligned_erms,
	__mempcpy_chk_avx_unaligned_2, __mempcpy_chk_avx_unaligned_erms,
	__mempcpy_chk_sse2_unaligned_2, __mempcpy_chk_sse2_unaligned_erms,
	__mempcpy_avx512_unaligned_2, __mempcpy_avx512_unaligned_erms,
	__mempcpy_avx_unaligned_2, __mempcpy_avx_unaligned_erms,
	__mempcpy_sse2_unaligned_2, __mempcpy_sse2_unaligned_erms and
	__mempcpy_erms.
	* sysdeps/x86_64/multiarch/memmove-avx-unaligned-erms.S: New
	file.
	* sysdeps/x86_64/multiarch/memmove-avx512-unaligned-erms.S:
	Likewise.
	* sysdeps/x86_64/multiarch/memmove-sse2-unaligned-erms.S:
	Likewise.
	* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:
	Likewise.
2016-03-31 10:04:40 -07:00
Stefan Liebler
5cdd1989d1 S390: Extend structs La_s390_regs / La_s390_retval with vector-registers.
Starting with z13, vector registers can also occur as argument registers.
Thus the passed input/output register structs for
la_s390_[32|64]_gnu_plt[enter|exit] functions should reflect those new
registers. This patch extends these structs La_s390_regs and La_s390_retval
and adjusts _dl_runtime_profile() to handle those fields when
running on a z13 machine.

ChangeLog:

	* sysdeps/s390/bits/link.h: (La_s390_vr) New typedef.
	(La_s390_32_regs): Append vector register lr_v24-lr_v31.
	(La_s390_64_regs): Likewise.
	(La_s390_32_retval): Append vector register lrv_v24.
	(La_s390_64_retval): Likewise.
	* sysdeps/s390/s390-32/dl-trampoline.h (_dl_runtime_profile):
	Handle extended structs La_s390_32_regs and La_s390_32_retval.
	* sysdeps/s390/s390-64/dl-trampoline.h (_dl_runtime_profile):
	Handle extended structs La_s390_64_regs and La_s390_64_retval.
2016-03-31 17:37:16 +02:00
Stefan Liebler
4603c51ef7 S390: Save and restore fprs/vrs while resolving symbols.
On s390, no fprs/vrs were saved while resolving a symbol
via _dl_runtime_resolve/_dl_runtime_profile.

According to the ABI, the fpr arguments are defined as call clobbered.
In leaf functions, gcc 4.9 and newer can use fprs for saving/restoring
gprs instead of saving them to the stack.
If gcc does this in one of the resolver functions, then the floating
point arguments of a library function are invalid for the first call to
that library function.
Thus, this patch saves/restores the fprs around the resolving code.

The same could occur for vector registers.  Furthermore, an ifunc
resolver could also clobber the vector/floating point argument
registers.  Thus this patch provides the further variants
_dl_runtime_resolve_vx/_dl_runtime_profile_vx, which are used if the
kernel claims that we run on a machine with vector registers.

Furthermore, if _dl_runtime_profile calls _dl_call_pltexit,
the pointers to the inregs/outregs structs were set up incorrectly.
Now they point to the correct location in the stack frame.
Before branching back to the caller, the return values are now
restored instead of containing the return values of the
_dl_call_pltexit() call.
On s390-32, an endless loop occurred if _dl_call_pltexit() was to be
called.  Now, this code path branches to that function instead of to
just after the preceding basr instruction.

ChangeLog:

	* sysdeps/s390/s390-32/dl-trampoline.S: Include dl-trampoline.h twice
	to create a non-vector/vector version for _dl_runtime_resolve and
	_dl_runtime_profile. Move implementation to ...
	* sysdeps/s390/s390-32/dl-trampoline.h: ... here.
	(_dl_runtime_resolve): Save and restore fprs/vrs.
	(_dl_runtime_profile): Save and restore vrs and fix some issues
	if _dl_call_pltexit is called.
	* sysdeps/s390/s390-32/dl-machine.h (elf_machine_runtime_setup):
	Choose the correct resolver function if running on a machine with vx.
	* sysdeps/s390/s390-64/dl-trampoline.S: Include dl-trampoline.h twice
	to create a non-vector/vector version for _dl_runtime_resolve and
	_dl_runtime_profile. Move implementation to ...
	* sysdeps/s390/s390-64/dl-trampoline.h: ... here.
	(_dl_runtime_resolve): Save and restore fprs/vrs.
	(_dl_runtime_profile): Save and restore vrs and fix some issues
	if _dl_call_pltexit is called.
	* sysdeps/s390/s390-64/dl-machine.h (elf_machine_runtime_setup):
	Choose the correct resolver function if running on a machine with vx.
2016-03-31 17:37:16 +02:00
Joseph Myers
258ec8abc1 [microblaze] Remove __ASSUME_FUTIMESAT.
MicroBlaze has a special version of futimesat.c because it gained the
futimesat syscall later than other non-asm-generic architectures.  Now
the minimum kernel is recent enough that this syscall can always be
assumed to be present for MicroBlaze, so this patch removes the
special version and the __ASSUME_FUTIMESAT macro, resulting in the
sysdeps/unix/sysv/linux/futimesat.c version being used.

Untested.

	* sysdeps/unix/sysv/linux/microblaze/kernel-features.h
	(__ASSUME_FUTIMESAT): Remove macro.
	* sysdeps/unix/sysv/linux/microblaze/futimesat.c: Remove file.
2016-03-29 22:13:36 +00:00
H.J. Lu
0791f91dff Initial Enhanced REP MOVSB/STOSB (ERMS) support
Newer Intel processors support Enhanced REP MOVSB/STOSB (ERMS), which
has a feature bit in CPUID.  This patch adds the Enhanced REP
MOVSB/STOSB (ERMS) bit to the x86 cpu-features.

	* sysdeps/x86/cpu-features.h (bit_cpu_ERMS): New.
	(index_cpu_ERMS): Likewise.
	(reg_ERMS): Likewise.
2016-03-28 19:23:31 -07:00
Aurelien Jarno
9ff9351d02 Synchronize <sys/personality.h> with kernel headers
<sys/personality.h> is out of sync with kernel headers, missing the
UNAME26, FDPIC_FUNCPTRS and PER_LINUX_FDPIC entries. Fix that.

Changelog:
	* sysdeps/unix/sysv/linux/sys/personality.h (UNAME26, FDPIC_FUNCPTRS,
	PER_LINUX_FDPIC): Add.
2016-03-28 22:42:52 +02:00
H.J. Lu
064f01b10b Make __memcpy_avx512_no_vzeroupper an alias
Since x86-64 memcpy-avx512-no-vzeroupper.S implements memmove, make
__memcpy_avx512_no_vzeroupper an alias of __memmove_avx512_no_vzeroupper
to reduce code size of libc.so.

	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Remove
	memcpy-avx512-no-vzeroupper.
	* sysdeps/x86_64/multiarch/memcpy-avx512-no-vzeroupper.S: Renamed
	to ...
	* sysdeps/x86_64/multiarch/memmove-avx512-no-vzeroupper.S: This.
	(MEMCPY): Don't define.
	(MEMCPY_CHK): Likewise.
	(MEMPCPY): Likewise.
	(MEMPCPY_CHK): Likewise.
	(MEMPCPY_CHK): Renamed to ...
	(__mempcpy_chk_avx512_no_vzeroupper): This.
	(MEMPCPY): Renamed to ...
	(__mempcpy_avx512_no_vzeroupper): This.
	(MEMCPY_CHK): Renamed to ...
	(__memmove_chk_avx512_no_vzeroupper): This.
	(MEMCPY): Renamed to ...
	(__memmove_avx512_no_vzeroupper): This.
	(__memcpy_avx512_no_vzeroupper): New alias.
	(__memcpy_chk_avx512_no_vzeroupper): Likewise.
2016-03-28 13:16:22 -07:00
H.J. Lu
c365e615f7 Implement x86-64 multiarch mempcpy in memcpy
Implement x86-64 multiarch mempcpy in memcpy to share most of the code.
This reduces the code size of libc.so.

	[BZ #18858]
	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Remove
	mempcpy-ssse3, mempcpy-ssse3-back, mempcpy-avx-unaligned
	and mempcpy-avx512-no-vzeroupper.
	* sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S (MEMPCPY_CHK):
	New.
	(MEMPCPY): Likewise.
	* sysdeps/x86_64/multiarch/memcpy-avx512-no-vzeroupper.S
	(MEMPCPY_CHK): New.
	(MEMPCPY): Likewise.
	* sysdeps/x86_64/multiarch/memcpy-ssse3-back.S (MEMPCPY_CHK): New.
	(MEMPCPY): Likewise.
	* sysdeps/x86_64/multiarch/memcpy-ssse3.S (MEMPCPY_CHK): New.
	(MEMPCPY): Likewise.
	* sysdeps/x86_64/multiarch/mempcpy-avx-unaligned.S: Removed.
	* sysdeps/x86_64/multiarch/mempcpy-avx512-no-vzeroupper.S:
	Likewise.
	* sysdeps/x86_64/multiarch/mempcpy-ssse3-back.S: Likewise.
	* sysdeps/x86_64/multiarch/mempcpy-ssse3.S: Likewise.
2016-03-28 13:13:51 -07:00
H.J. Lu
e41b395523 [x86] Add a feature bit: Fast_Unaligned_Copy
On AMD processors, memcpy optimized with unaligned SSE loads is
slower than memcpy optimized with aligned SSSE3, while other string
functions are faster with unaligned SSE loads.  A feature bit,
Fast_Unaligned_Copy, is added to select memcpy optimized with
unaligned SSE loads.

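The selector change, roughly (a sketch; the real selector is assembly
in memcpy.S, but the symbol names are the real multiarch variants):

    if (HAS_ARCH_FEATURE (Fast_Unaligned_Copy))
      return __memcpy_sse2_unaligned;
    return __memcpy_ssse3;
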
	[BZ #19583]
	* sysdeps/x86/cpu-features.c (init_cpu_features): Set
	Fast_Unaligned_Copy with Fast_Unaligned_Load for Intel
	processors.  Set Fast_Copy_Backward for AMD Excavator
	processors.
	* sysdeps/x86/cpu-features.h (bit_arch_Fast_Unaligned_Copy):
	New.
	(index_arch_Fast_Unaligned_Copy): Likewise.
	* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Check
	Fast_Unaligned_Copy instead of Fast_Unaligned_Load.
2016-03-28 04:40:03 -07:00
Florian Weimer
f327f5b47b tst-audit10: Fix compilation on compilers without bit_AVX512F [BZ #19860]
	[BZ #19860]
	* sysdeps/x86_64/tst-audit10.c (avx512_enabled): Always return
	zero if the compiler does not provide the AVX512F bit.
2016-03-25 11:11:42 +01:00
Joseph Myers
c898991d8b Fix x86_64 / x86 powl inaccuracy for integer exponents (bug 19848).
Bug 19848 reports cases where powl on x86 / x86_64 has error
accumulation, for small integer exponents, larger than permitted by
glibc's accuracy goals, at least in some rounding modes.  This patch
further restricts the exponent range for which the
small-integer-exponent logic is used to limit the possible error
accumulation.

Tested for x86_64 and x86 and ulps updated accordingly.

	[BZ #19848]
	* sysdeps/i386/fpu/e_powl.S (p3): Rename to p2 and change value
	from 8 to 4.
	(__ieee754_powl): Compare integer exponent against 4 not 8.
	* sysdeps/x86_64/fpu/e_powl.S (p3): Rename to p2 and change value
	from 8 to 4.
	(__ieee754_powl): Compare integer exponent against 4 not 8.
	* math/auto-libm-test-in: Add more tests of pow.
	* math/auto-libm-test-out: Regenerated.
	* sysdeps/i386/i686/fpu/multiarch/libm-test-ulps: Update.
	* sysdeps/x86_64/fpu/libm-test-ulps: Likewise.
2016-03-24 01:32:52 +00:00
Aurelien Jarno
7e1ff08c26 Assume __NR_utimensat is always defined
With the 2.6.32 minimum kernel on x86 and 3.2 on other architectures,
__NR_utimensat is always defined.

Changelog:
	* sysdeps/unix/sysv/linux/futimens.c (futimens) [__NR_utimensat]:
	Make code unconditional.
	[!__NR_utimensat]: Remove conditional code.
	* sysdeps/unix/sysv/linux/lutimes.c (lutimes) [__NR_utimensat]:
	Make code unconditional.
	[!__NR_utimensat]: Remove conditional code.
	* sysdeps/unix/sysv/linux/utimensat.c (utimensat) [__NR_utimensat]:
	Make code unconditional.
	[!__NR_utimensat]: Remove conditional code.
2016-03-23 23:35:08 +01:00
Aurelien Jarno
16d94f67e5 Assume __NR_openat is always defined
With the 2.6.32 minimum kernel on x86 and 3.2 on other architectures,
__NR_openat is always defined.

Changelog:
	* sysdeps/unix/sysv/linux/dl-openat64.c (openat64) [__NR_openat]:
	Make code unconditional.
2016-03-23 23:35:08 +01:00
Nick Alcock
7a25d6a84d x86, pthread_cond_*wait: Do not depend on %eax not being clobbered
The x86-specific versions of both pthread_cond_wait and
pthread_cond_timedwait have (in their fall-back-to-futex-wait slow
paths) calls to __pthread_mutex_cond_lock_adjust followed by
__pthread_mutex_unlock_usercnt, which load the parameters before the
first call but then assume that the first parameter, in %eax, will
survive unaffected.  This happens to have been true before now, but %eax
is a call-clobbered register, and this assumption is not safe: it could
change at any time, at GCC's whim, and indeed the stack-protector canary
checking code clobbers %eax while checking that the canary is
uncorrupted.

So reload %eax before calling __pthread_mutex_unlock_usercnt.  (Do this
unconditionally, even when stack-protection is not in use, because it's
the right thing to do, it's a slow path, and anything else is dicing
with death.)

	* sysdeps/unix/sysv/linux/i386/pthread_cond_timedwait.S: Reload
	call-clobbered %eax on retry path.
	* sysdeps/unix/sysv/linux/i386/pthread_cond_wait.S: Likewise.
2016-03-23 13:40:14 +01:00
H.J. Lu
3c9a4cd16c Don't set %rcx twice before "rep movsb"
* sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S (MEMCPY):
	Don't set %rcx twice before "rep movsb".
2016-03-22 08:36:16 -07:00
H.J. Lu
f781a9e961 Set index_arch_AVX_Fast_Unaligned_Load only for Intel processors
Since only Intel processors with AVX2 have fast unaligned load, we
should set index_arch_AVX_Fast_Unaligned_Load only for Intel processors.

Move AVX, AVX2, AVX512, FMA and FMA4 detection into get_common_indeces
and call get_common_indeces for other processors.

Add CPU_FEATURES_CPU_P and CPU_FEATURES_ARCH_P to avoid loading
GLRO(dl_x86_cpu_features) in cpu-features.c.

	[BZ #19583]
	* sysdeps/x86/cpu-features.c (get_common_indeces): Remove
	inline.  Check family before setting family, model and
	extended_model.  Set AVX, AVX2, AVX512, FMA and FMA4 usable
	bits here.
	(init_cpu_features): Replace HAS_CPU_FEATURE and
	HAS_ARCH_FEATURE with CPU_FEATURES_CPU_P and
	CPU_FEATURES_ARCH_P.  Set index_arch_AVX_Fast_Unaligned_Load
	for Intel processors with usable AVX2.  Call get_common_indeces
	for other processors with family == NULL.
	* sysdeps/x86/cpu-features.h (CPU_FEATURES_CPU_P): New macro.
	(CPU_FEATURES_ARCH_P): Likewise.
	(HAS_CPU_FEATURE): Use CPU_FEATURES_CPU_P.
	(HAS_ARCH_FEATURE): Use CPU_FEATURES_ARCH_P.
2016-03-22 07:47:20 -07:00
Joseph Myers
37ad347359 Remove __ASSUME_GETDENTS64_SYSCALL.
This patch removes the __ASSUME_GETDENTS64_SYSCALL macro, as its
definition is constant given the new kernel version requirements (and
was constant anyway before those requirements except for MIPS n32).

Note that the "#ifdef __NR_getdents64" conditional *is* still needed,
because MIPS n64 only has the getdents syscall (being a 64-bit ABI,
that syscall is 64-bit; the difference between the two on 64-bit
architectures is where d_type goes).  If MIPS n64 were to gain the
getdents64 syscall and we wanted to use it conditionally on the kernel
version at runtime we'd have to revert this patch, but I think that's
unlikely (and in any case, we could follow the simpler approach of
undefining __NR_getdents64 if the syscall can't be assumed, just like
we do for accept4 / recvmmsg / sendmmsg syscalls on architectures
where socketcall support came first).

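Sketched, the only conditional left in getdents.c is the ABI-level one:

	#ifdef __NR_getdents64
	  /* Use getdents64; d_type can be read directly.  */
	#else
	  /* MIPS n64: only the (64-bit) getdents syscall exists.  */
	#endif
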
Most of the getdents.c changes are reindentation.

Tested for x86_64 and x86 that installed stripped shared libraries are
unchanged by the patch.

	* sysdeps/unix/sysv/linux/kernel-features.h
	(__ASSUME_GETDENTS64_SYSCALL): Remove macro.
	* sysdeps/unix/sysv/linux/getdents.c
	[!__ASSUME_GETDENTS64_SYSCALL]: Remove conditional code.
	[!have_no_getdents64_defined]: Likewise.
	(__GETDENTS): Remove __have_no_getdents64 conditional.
2016-03-22 00:32:20 +00:00
Joseph Myers
238d60ac9b Remove __ASSUME_SIGNALFD4.
Current Linux kernel version requirements mean the signalfd4 syscall
can always be assumed to be available.  This patch removes
__ASSUME_SIGNALFD4 and associated conditionals.

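The function body reduces to a single unconditional syscall (a sketch,
assuming the usual INLINE_SYSCALL plumbing):

	int
	signalfd (int fd, const sigset_t *mask, int flags)
	{
	  /* signalfd4 takes the sigset size as its third argument.  */
	  return INLINE_SYSCALL (signalfd4, 4, fd, mask, _NSIG / 8, flags);
	}
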
Tested for x86_64 and x86 that installed stripped shared libraries are
unchanged by the patch.

	* sysdeps/unix/sysv/linux/kernel-features.h (__ASSUME_SIGNALFD4):
	Remove macro.
	* sysdeps/unix/sysv/linux/signalfd.c: Do not include
	<kernel-features.h>.
	(signalfd) [__NR_signalfd4]: Make code unconditional.
	(signalfd) [!__ASSUME_SIGNALFD4]: Remove conditional code.
2016-03-21 16:30:05 +00:00
Adhemerval Zanella
67b23376fb posix: Fix posix_spawn implicit check style
This patch fixes the implicit check style added in 2a69f853c to follow
the general coding convention.

Checked on x86_64.

	* sysdeps/unix/sysv/linux/spawni.c (__spawnix): Fix implicit check
	style.
2016-03-21 12:12:26 -03:00
H.J. Lu
893e371b2f Use JUMPTARGET in x86-64 pthread
When the PLT may be used, JUMPTARGET should be used instead of calling the
function directly.

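The macro itself is a one-line PLT indirection; a simplified sketch of
its definition in sysdep.h (the exact conditional there may differ):

	#ifdef SHARED
	# define JUMPTARGET(name)	name@PLT
	#else
	# define JUMPTARGET(name)	name
	#endif

so "call JUMPTARGET(__pthread_unwind)" assembles to a PLT call when
building the shared library.
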
	* sysdeps/unix/sysv/linux/x86_64/cancellation.S
	(__pthread_enable_asynccancel): Use JUMPTARGET to call
	__pthread_unwind.
	* sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S
	(__condvar_cleanup2): Use JUMPTARGET to call _Unwind_Resume.
	* sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S
	(__condvar_cleanup1): Likewise.
2016-03-21 06:51:05 -07:00
Adhemerval Zanella
2a69f853c0 posix: Fix posix_spawn invalid memory access
The current Linux posix_spawn implementation does not test whether the
pid argument is valid before trying to update it in the success case.
This patch fixes that.

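A minimal sketch of the fix in __spawnix (variable names are
illustrative):

	/* Previously the store was unconditional, so a null pid
	   argument crashed on success.  */
	if (ec == 0 && pid != NULL)
	  *pid = new_pid;
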
Tested on x86_64 and i686.

	* sysdeps/unix/sysv/linux/spawni.c (__spawnix): Fix invalid memory
	access when posix_spawn succeeds and the pid argument is null.
	* posix/tst-spawn.c (do_test): Add a posix_spawn call with a null
	pid argument to the success case.
2016-03-20 18:17:52 -03:00
Samuel Thibault
0e8e593d73 hurd: Add c++-types expected result
* sysdeps/mach/hurd/i386/c++-types.data: New file.
2016-03-20 22:16:34 +01:00
Samuel Thibault
4d10ceb2b2 hurd: Allow inlining IO locks
* sysdeps/mach/hurd/libc-lock.h (_IO_lock_inexpensive): Define to 1.
2016-03-20 22:12:06 +01:00
Samuel Thibault
d2129ad457 hurd: Do not hide rtld symbols which need to be preempted
* sysdeps/generic/dl-fcntl.h: New file, adds attribute_hidden to __open
	and __fcntl.
	* sysdeps/mach/hurd/dl-fcntl.h: New file, adds attribute_hidden to
	__fcntl only.
	* include/fcntl.h [IS_IN (rtld)]: Include <dl-fcntl.h> instead of
	adding attribute_hidden to __open and __fcntl.
2016-03-20 19:51:42 +01:00
Samuel Thibault
fe43d0f464 hurd: Break errnos.d / libc-modules.h dependency loop
Generating errnos.d does not actually need libc-modules.h.

* sysdeps/mach/hurd/Makefile ($(common-objpfx)errnos.d): Strip
"-include $(common-objpfx)libc-modules.h" from CPPFLAGS, and do not
depend on libc-modules.h.
2016-03-20 16:44:44 +01:00
Joseph Myers
a64e3aadbf Remove __ASSUME_EVENTFD2, move eventfd to syscalls.list.
Given current Linux kernel version requirements, we can assume the
presence of the eventfd2 syscall.  This means that __ASSUME_EVENTFD2
can be removed, and a syscalls.list entry suffices for eventfd instead
of needing a .c file.  This patch implements those changes.

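The syscalls.list entry has roughly this shape (sketched; see the patch
for the exact fields):

	eventfd		EXTRA	eventfd2	i:ii	eventfd
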
Tested for x86_64 and x86 (not that that means much, given the lack of
testsuite coverage for eventfd).

	* sysdeps/unix/sysv/linux/kernel-features.h (__ASSUME_EVENTFD2):
	Remove macro.
	* sysdeps/unix/sysv/linux/eventfd.c: Remove file.
	* sysdeps/unix/sysv/linux/syscalls.list (eventfd): New syscall
	entry.
2016-03-17 19:07:39 +00:00
Joseph Myers
4674df40bb Remove __ASSUME_FALLOCATE.
Given current Linux kernel version requirements, we can always assume
the fallocate syscall to be available.  This patch removes
__ASSUME_FALLOCATE and a test for whether __NR_fallocate is defined.

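Sketched (error handling abbreviated; the EOPNOTSUPP emulation fallback
is unchanged), the syscall is now issued unconditionally:

	int
	posix_fallocate (int fd, off_t offset, off_t len)
	{
	  INTERNAL_SYSCALL_DECL (err);
	  int res = INTERNAL_SYSCALL (fallocate, err, 4, fd, 0, offset, len);
	  if (! INTERNAL_SYSCALL_ERROR_P (res, err))
	    return 0;
	  /* POSIX: return the error number, do not set errno.  */
	  return INTERNAL_SYSCALL_ERRNO (res, err);
	}
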
Tested for x86_64 and x86 that installed stripped shared libraries are
unchanged by the patch.

	* sysdeps/unix/sysv/linux/kernel-features.h (__ASSUME_FALLOCATE):
	Remove macro.
	* sysdeps/unix/sysv/linux/wordsize-64/posix_fallocate.c: Do not
	include <kernel-features.h>.
	[!__ASSUME_FALLOCATE]: Remove conditional code.
	(posix_fallocate) [__NR_fallocate]: Make code unconditional.
2016-03-17 12:15:51 +00:00
H.J. Lu
86ed888255 Use JUMPTARGET in x86-64 mathvec
When the PLT may be used, JUMPTARGET should be used instead of calling the
function directly.

	* sysdeps/x86_64/fpu/multiarch/svml_d_cos2_core_sse4.S
	(_ZGVbN2v_cos_sse4): Use JUMPTARGET to call cos.
	* sysdeps/x86_64/fpu/multiarch/svml_d_cos4_core_avx2.S
	(_ZGVdN4v_cos_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_cos8_core_avx512.S
	(_ZGVdN4v_cos): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_exp2_core_sse4.S
	(_ZGVbN2v_exp_sse4): Use JUMPTARGET to call exp.
	* sysdeps/x86_64/fpu/multiarch/svml_d_exp4_core_avx2.S
	(_ZGVdN4v_exp_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_exp8_core_avx512.S
	(_ZGVdN4v_exp): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_log2_core_sse4.S
	(_ZGVbN2v_log_sse4): Use JUMPTARGET to call log.
	* sysdeps/x86_64/fpu/multiarch/svml_d_log4_core_avx2.S
	(_ZGVdN4v_log_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_log8_core_avx512.S
	(_ZGVdN4v_log): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_pow2_core_sse4.S
	(_ZGVbN2vv_pow_sse4): Use JUMPTARGET to call pow.
	* sysdeps/x86_64/fpu/multiarch/svml_d_pow4_core_avx2.S
	(_ZGVdN4vv_pow_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_pow8_core_avx512.S
	(_ZGVdN4vv_pow): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sin2_core_sse4.S
	(_ZGVbN2v_sin_sse4): Use JUMPTARGET to call sin.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sin4_core_avx2.S
	(_ZGVdN4v_sin_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sin8_core_avx512.S
	(_ZGVdN4v_sin): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sincos2_core_sse4.S
	(_ZGVbN2vvv_sincos_sse4): Use JUMPTARGET to call sin and cos.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sincos4_core_avx2.S
	(_ZGVdN4vvv_sincos_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core_avx512.S
	(_ZGVdN4vvv_sincos): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_cosf16_core_avx512.S
	(_ZGVdN8v_cosf): Use JUMPTARGET to call cosf.
	* sysdeps/x86_64/fpu/multiarch/svml_s_cosf4_core_sse4.S
	(_ZGVbN4v_cosf_sse4): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_cosf8_core_avx2.S
	(_ZGVdN8v_cosf_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_expf16_core_avx512.S
	(_ZGVdN8v_expf): Use JUMPTARGET to call expf.
	* sysdeps/x86_64/fpu/multiarch/svml_s_expf4_core_sse4.S
	(_ZGVbN4v_expf_sse4): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_expf8_core_avx2.S
	(_ZGVdN8v_expf_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core_avx512.S
	(_ZGVdN8v_logf): Use JUMPTARGET to call logf.
	* sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core_sse4.S
	(_ZGVbN4v_logf_sse4): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core_avx2.S
	(_ZGVdN8v_logf_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_powf16_core_avx512.S
	(_ZGVdN8vv_powf): Use JUMPTARGET to call powf.
	* sysdeps/x86_64/fpu/multiarch/svml_s_powf4_core_sse4.S
	(_ZGVbN4vv_powf_sse4): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_powf8_core_avx2.S
	(_ZGVdN8vv_powf_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core_avx512.S
	(_ZGVeN16vvv_sincosf): Use JUMPTARGET to call sinf and cosf.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf4_core_sse4.S
	(_ZGVbN4vvv_sincosf_sse4): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf8_core_avx2.S
	(_ZGVdN8vvv_sincosf_avx2): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sinf16_core_avx512.S
	(_ZGVdN8v_sinf): Use JUMPTARGET to call sinf.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sinf4_core_sse4.S
	(_ZGVbN4v_sinf_sse4): Likewise.
	* sysdeps/x86_64/fpu/multiarch/svml_s_sinf8_core_avx2.S
	(_ZGVdN8v_sinf_avx2): Likewise.
	* sysdeps/x86_64/fpu/svml_d_wrapper_impl.h (WRAPPER_IMPL_SSE2):
	Use JUMPTARGET to call callee.
	(WRAPPER_IMPL_SSE2_ff): Likewise.
	(WRAPPER_IMPL_SSE2_fFF): Likewise.
	(WRAPPER_IMPL_AVX): Likewise.
	(WRAPPER_IMPL_AVX_ff): Likewise.
	(WRAPPER_IMPL_AVX_fFF): Likewise.
	(WRAPPER_IMPL_AVX512): Likewise.
	(WRAPPER_IMPL_AVX512_ff): Likewise.
	* sysdeps/x86_64/fpu/svml_s_wrapper_impl.h (WRAPPER_IMPL_SSE2):
	Likewise.
	(WRAPPER_IMPL_SSE2_ff): Likewise.
	(WRAPPER_IMPL_SSE2_fFF): Likewise.
	(WRAPPER_IMPL_AVX): Likewise.
	(WRAPPER_IMPL_AVX_ff): Likewise.
	(WRAPPER_IMPL_AVX_fFF): Likewise.
	(WRAPPER_IMPL_AVX512): Likewise.
	(WRAPPER_IMPL_AVX512_ff): Likewise.
	(WRAPPER_IMPL_AVX512_fFF): Likewise.
2016-03-16 14:24:19 -07:00
Samuel Thibault
35fbb341f8 Fix hurd build
* sysdeps/mach/hurd/openat.c (__openat): Add missing ellipsis.
	* resolv/gai_sigqueue.c (__gai_sigqueue): Add missing internal_function
	qualifier.
	* rt/aio_sigqueue.c (__aio_sigqueue): Add missing attribute_hidden
	internal_function qualifiers.
2016-03-16 13:57:57 +01:00