Clean up calls to malloc_printerr and trim its argument list.
This also removes a few bits of work done before calling
malloc_printerr (such as unlocking operations).
The tunable/environment variable still enables the lightweight
additional malloc checking, but mallopt (M_CHECK_ACTION)
no longer has any effect.
On very large multi-processor systems, creating hundreds of threads
runs into test timeouts. The tests do not seem to benefit from
massive over-scheduling.
[BZ #22038]
* locales/so_DJ (LC_TIME): Fix abday, abmon and
make t_fmt in the comment agree with the value of t_fmt.
* locales/so_ET (LC_TIME): Fix abday (from Axa to Axd).
* locales/so_KE (LC_TIME): Fix abday (from Axa to Axd).
* locales/so_SO (LC_TIME): Fix abday (from Axa to Axd).
As shown by build bot failures
<https://sourceware.org/ml/libc-testresults/2017-q3/msg00349.html> the
m68k bits/mathinline.h is not namespace-clean: it fails to compile if
the user has defined macros f or l before it is included, because of
expansions of those arguments to __inline_functions. This patch
changes the __inline_functions definitions to take not the suffix but
a macro that concatenates it with the function name, to avoid the
spurious macro expansions.
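A minimal sketch of the technique (the macro and function names here are
illustrative, except for __CONCAT_f, which is one of the new macros):

  #define PASTE(a, b) a ## b

  /* Old shape: the suffix parameter is macro-expanded during argument
     prescan, so a prior user '#define f ...' breaks the expansion.  */
  #define __inline_functions_old(type, s) \
    extern type PASTE (__floor, s) (type __x);

  /* New shape: the suffix only ever appears next to ## inside the
     __CONCAT_* macro, so a user macro named 'f' is never expanded.  */
  #define __CONCAT_f(name) name ## f
  #define __inline_functions_new(type, c) \
    extern type c (__floor) (type __x);

  __inline_functions_new (float, __CONCAT_f)
  /* -> extern float __floorf (float __x); */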
Tested for m68k with build-many-glibcs.py.
[BZ #22035]
* sysdeps/m68k/m680x0/fpu/bits/mathinline.h (__inline_functions):
Define to take a second argument that is a macro that
concatenates a suffix, not the suffix itself.
(__CONCAT_d): New macro.
(__CONCAT_f): Likewise.
(__CONCAT_l): Likewise.
Fix a commit cc25c8b4c1 ("New pthread rwlock that is more scalable.")
regression and prevent uncontrolled stack space usage from happening
when a 5-, 6- or 7-argument syscall wrapper is placed in a loop.
The cause of the problem is the use of `alloca' in regular MIPS/Linux
wrappers to force the use of the frame pointer register in any function
using one or more of these wrappers. Using the frame pointer register
is required so as not to break frame unwinding as the stack pointer
is lowered within the inline asm used by these wrappers to make room for
the stack arguments, which 5-, 6- and 7-argument syscalls use with the
o32 ABI.
The regular MIPS/Linux wrappers are macros however, expanded inline, and
stack allocations made with `alloca' are not discarded until the return
of the function they are made in. Consequently if called in a loop,
then virtual memory is wasted, and if the loop goes through enough
iterations, then ultimately available memory can get exhausted causing
the program to crash.
Address the issue by replacing the inline code with standalone assembly
functions, which rely on the compiler arranging syscall arguments
according to the o32 function calling convention, which MIPS/Linux
syscalls also use, except for the syscall number passed and the error
flag returned. This way there is no need to fiddle with the stack
pointer anymore and all that has to be handled in the new standalone
functions is the special handling of the syscall number and the error
flag.
Redirect 5-, 6- or 7-argument MIPS16/Linux syscall wrappers to these new
functions as well, so as to avoid an unnecessary double call the
existing wrappers would cause with the new arrangement.
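A sketch of the new arrangement for the 5-argument case (simplified; the
actual macro in sysdep.h and the assembly in mips-syscall5.S differ in
detail):

  /* Standalone assembly routine: takes the five syscall arguments plus
     the syscall number per the o32 calling convention and returns the
     result and the error flag packed into a 64-bit value.  */
  long long __mips_syscall5 (long arg1, long arg2, long arg3,
                             long arg4, long arg5, long number);

  #define internal_syscall5(v0_init, input, number, err,            \
                            arg1, arg2, arg3, arg4, arg5)           \
  ({                                                                 \
      union __mips_syscall_return _ret;                              \
      _ret.val = __mips_syscall5 ((long) (arg1), (long) (arg2),      \
                                  (long) (arg3), (long) (arg4),      \
                                  (long) (arg5), (long) (number));   \
      err = _ret.reg.v1;        /* error flag, from $a3 */           \
      _ret.reg.v0;              /* syscall result */                 \
  })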
[BZ #21956]
* sysdeps/unix/sysv/linux/mips/mips32/mips16/Makefile
[subdir = misc] (sysdep_routines): Remove `mips16-syscall5',
`mips16-syscall6' and `mips16-syscall7'.
(CFLAGS-mips16-syscall5.c, CFLAGS-mips16-syscall6.c)
(CFLAGS-mips16-syscall7.c): Remove.
* sysdeps/unix/sysv/linux/mips/mips32/mips16/Versions (libc):
Remove `__mips16_syscall5', `__mips16_syscall6' and
`__mips16_syscall7'.
* sysdeps/unix/sysv/linux/mips/mips32/mips16/mips16-syscall0.c
(__mips16_syscall0): Rename `__mips16_syscall_return' to
`__mips_syscall_return'.
* sysdeps/unix/sysv/linux/mips/mips32/mips16/mips16-syscall1.c
(__mips16_syscall1): Likewise.
* sysdeps/unix/sysv/linux/mips/mips32/mips16/mips16-syscall2.c
(__mips16_syscall2): Likewise.
* sysdeps/unix/sysv/linux/mips/mips32/mips16/mips16-syscall3.c
(__mips16_syscall3): Likewise.
* sysdeps/unix/sysv/linux/mips/mips32/mips16/mips16-syscall4.c
(__mips16_syscall4): Likewise.
* sysdeps/unix/sysv/linux/mips/mips32/mips16/mips16-syscall5.c:
Remove.
* sysdeps/unix/sysv/linux/mips/mips32/mips16/mips16-syscall6.c:
Remove.
* sysdeps/unix/sysv/linux/mips/mips32/mips16/mips16-syscall7.c:
Remove.
* sysdeps/unix/sysv/linux/mips/mips32/mips16/mips16-syscall.h
(__mips16_syscall5): Expand to `__mips_syscall5' rather than
`__mips16_syscall5'. Remove prototype.
(__mips16_syscall6): Expand to `__mips_syscall6' rather than
`__mips16_syscall6'. Remove prototype.
(__mips16_syscall7): Expand to `__mips_syscall7' rather than
`__mips16_syscall7'. Remove prototype.
(__nomips16, __mips16_syscall_return): Move to...
* sysdeps/unix/sysv/linux/mips/mips32/sysdep.h
(__nomips16, __mips_syscall_return): ... here.
[__mips16] (INTERNAL_SYSCALL_NCS): Rename
`__mips16_syscall_return' to `__mips_syscall_return'.
[__mips16] (INTERNAL_SYSCALL_MIPS16): Pass `number' to
`internal_syscall##nr'.
[!__mips16] (INTERNAL_SYSCALL): Pass `SYS_ify (name)' to
`internal_syscall##nr'.
(FORCE_FRAME_POINTER): Remove.
(__mips_syscall5): New prototype.
(internal_syscall5): Rewrite to call `__mips_syscall5'.
(__mips_syscall6): New prototype.
(internal_syscall6): Rewrite to call `__mips_syscall6'.
(__mips_syscall7): New prototype.
(internal_syscall7): Rewrite to call `__mips_syscall7'.
* sysdeps/unix/sysv/linux/mips/mips32/mips-syscall5.S: New file.
* sysdeps/unix/sysv/linux/mips/mips32/mips-syscall6.S: New file.
* sysdeps/unix/sysv/linux/mips/mips32/mips-syscall7.S: New file.
* sysdeps/unix/sysv/linux/mips/mips32/Makefile [subdir = misc]
(sysdep_routines): Add libc-do-syscall.
* sysdeps/unix/sysv/linux/mips/mips32/Versions (libc): Add
`__mips_syscall5', `__mips_syscall6' and `__mips_syscall7'.
This patch fixes ia64 failures on thread exit by calling madvise on the
required area, taking into consideration its disjoint stacks
(NEED_SEPARATE_REGISTER_STACK). Also, the snippet that sets up the
madvise call, which advises the kernel that the area will not be used
in the near future, is moved to allocatestack.c (for consistency, to
keep all stack management functions in one place).
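The underlying idea, as a rough sketch (a hypothetical helper, not the
actual advise_stack_range code):

  #include <sys/mman.h>

  /* Give back to the kernel only the part of the stack block that
     neither stack currently reaches.  With NEED_SEPARATE_REGISTER_STACK
     the block holds a memory stack growing down from the top and a
     register backing store growing up from the bottom, so the advised
     range must exclude both live ends; free_start and free_end are
     hypothetical page-aligned bounds computed by the caller.  */
  static void
  advise_unused_stack (char *free_start, char *free_end)
  {
    if (free_end > free_start)
      madvise (free_start, free_end - free_start, MADV_DONTNEED);
  }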
Checked on x86_64-linux-gnu and i686-linux-gnu for sanity (no code
changes are expected for architectures that do not define
NEED_SEPARATE_REGISTER_STACK); Sergei Trofimovich <slyfox@gentoo.org>
also reported that it fixes the ia64-linux-gnu failures.
[BZ #21672]
* nptl/allocatestack.c [_STACK_GROWS_DOWN] (setup_stack_prot):
Set to use !NEED_SEPARATE_REGISTER_STACK as well.
(advise_stack_range): New function.
* nptl/pthread_create.c (START_THREAD_DEFN): Move the logic that
marks the stack as no longer required to advise_stack_range in
allocatestack.c.
Commit 39e7a5a668 added stdint.h to sys/procfs.h, but that header is
included by signal.h by default, and there is code that does not
expect stdint.h to be visible there, so use __uint64_t instead of
uint64_t.
The current bits/math-finite.h approach to defining functions for
different types, involving math.h defining _MSUF_ and _MSUFTO_ for the
function suffixes involved, is not namespace-clean if one of those
suffixes (f, l, f128) is defined as a macro by the user before math.h
is included; too many levels of macro expansion occur. Instead, those
suffixes should appear directly in the expansion of the macro using ##
so they don't get expanded even if defined as macros by the user (that
is, math.h should be defining __REDIRFROM_X and __REDIRTO_X directly
to use those suffixes rather than suffixes being passed as an argument
by macro callers). This patch makes that change.
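The new math/test-finite-macros.c checks this property; in spirit it is
something like the following (a sketch, not the literal test source),
presumably built with -ffinite-math-only so that bits/math-finite.h is
actually used:

  /* math.h must still compile when the function suffixes are defined
     as macros by the user.  */
  #define f ""
  #define l ""
  #include <math.h>

  int
  main (void)
  {
    return 0;
  }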
Tested for x86_64.
[BZ #22028]
* math/math.h [__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0]
(_MSUF_): Remove macro.
[__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0] (_MSUFTO_):
Likewise.
[__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0]
(__REDIRFROM_X): New macro.
[__FINITE_MATH_ONLY__ && __FINITE_MATH_ONLY__ > 0] (__REDIRTO_X):
Likewise.
* math/bits/math-finite.h (__REDIRFROM_X): Remove macro.
(__REDIRTO_X): Likewise.
(__MATH_REDIRCALL): Do not pass _MSUF_ or _MSUFTO_ macro
arguments.
(__MATH_REDIRCALL_2): Likewise.
(__MATH_REDIRCALL_INTERNAL): Likewise.
(__REDIRFROM (lgamma, , _MSUF_)): Likewise.
(__REDIRFROM (gamma, , _MSUF_)): Likewise.
(__REDIRFROM (__gamma, _r_finite, _MSUF_)): Likewise.
(__REDIRFROM (tgamma, , _MSUF_)): Likewise.
* math/test-finite-macros.c: New file.
* math/Makefile (tests): Add test-finite-macros.
(CFLAGS-test-finite-macros.c): New variable.
[BZ #13805]
* locales/ru_RU (LC_MONETARY): Use “,” for mon_decimal_point
(to agree with CLDR).
* locales/ru_RU (LC_NUMERIC): Write decimal_point in ASCII
for readability.
* locales/os_RU (LC_MONETARY): Copy from ru_RU, which
makes it agree with CLDR.
Add locale for “Morisyen” which is also called “Mauritian Creole”
and is spoken in Mauritius.
[BZ #21971]
* localedata/SUPPORTED: Add mfe_MU/UTF-8.
* localedata/locales/mfe_MU: New file.
[BZ #21971]
* locale/iso-639.def: Add Morisyen.
When signaling nans are enabled (with -fsignaling-nans), the C++ version
of iszero uses the fpclassify macro, which is defined with __MATH_TG.
However, when support for float128 is available, __MATH_TG uses the
builtin __builtin_types_compatible_p, which is only available in C mode.
This patch refactors the C++ version of iszero so that it uses function
overloading to select between the floating-point types, instead of
relying on fpclassify and __MATH_TG.
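A sketch of the overload-based approach (simplified; the real math.h
code also has to handle cases such as __NO_LONG_DOUBLE_MATH):

  /* One overload per floating-point type, each calling the per-type
     classification function directly instead of going through the
     type-generic __MATH_TG machinery.  */
  inline bool iszero (float __val)
  { return __fpclassifyf (__val) == FP_ZERO; }
  inline bool iszero (double __val)
  { return __fpclassify (__val) == FP_ZERO; }
  inline bool iszero (long double __val)
  { return __fpclassifyl (__val) == FP_ZERO; }
  #if __HAVE_DISTINCT_FLOAT128
  inline bool iszero (_Float128 __val)
  { return __fpclassifyf128 (__val) == FP_ZERO; }
  #endif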
Tested for powerpc64le, s390x, x86_64, and with build-many-glibcs.py.
[BZ #21930]
* math/math.h [defined __cplusplus && defined __SUPPORT_SNAN__]
(iszero): New C++ implementation that does not use
fpclassify/__MATH_TG/__builtin_types_compatible_p, when
signaling nans are enabled, since __builtin_types_compatible_p
is a C-only feature.
* math/test-math-iszero.cc: When __HAVE_DISTINCT_FLOAT128 is
defined, include ieee754_float128.h for access to the union and
member ieee854_float128.ieee.
[__HAVE_DISTINCT_FLOAT128] (do_test): Call check_float128.
[__HAVE_DISTINCT_FLOAT128] (check_float128): New function.
* sysdeps/powerpc/powerpc64le/Makefile [subdir == math]
(CXXFLAGS-test-math-iszero.cc): Add -mfloat128 to the build
options of test-math-iszero on powerpc64le.
Now that there are no more assembly wrappers using _LIB_VERSION or
__kernel_standard, the math-svid-compat code can be slightly
simplified. math-svid-compat.h no longer needs __ASSEMBLER__
conditionals, and the _LIB_VERSION variable no longer needs to be
built for static libm, since all references are now in C code that
includes math-svid-compat.h and so gets the macro definition of
_LIB_VERSION to _POSIX_ outside the compat case. This patch makes
those cleanups.
Tested for x86_64, and with build-many-glibcs.py.
* math/math-svid-compat.h [!__ASSEMBLER__]: Make code
unconditional.
* sysdeps/ieee754/s_lib_version.c [!defined SHARED]: Remove
conditional code; define contents only for [LIBM_SVID_COMPAT].
This commit changes the way the list of SYS_* system call macros is
created on Linux. glibc now contains a list of all known system
calls, and the generated <bits/syscall.h> file defines the SYS_ macro
only if the corresponding __NR_ macro is defined by the kernel headers.
As a result, glibc does not have to be rebuilt to pick up system calls
if the glibc sources already know about them. This means that glibc
can be built with older kernel headers, and if the installed kernel
headers are upgraded afterwards, additional SYS_ macros become
available as long as glibc has a record for those system calls.
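The generated <bits/syscall.h> therefore consists of guarded definitions
of roughly this shape (illustrative syscall name):

  #ifdef __NR_renameat2
  # define SYS_renameat2 __NR_renameat2
  #endif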
When linked statically, TLS initialization does not happen before
mach_init and the like, so ssp accesses to tcbhead's stack_guard would
crash. We can just avoid using ssp in the few functions needed before
TLS is set up.
* mach/Makefile (CFLAGS-mach_init.o, CFLAGS-RPC_vm_statistics.o,
CFLAGS-RPC_vm_map.o, CFLAGS-RPC_vm_protect.o,
CFLAGS-RPC_i386_set_gdt.o, CFLAGS-RPC_i386_set_ldt.o,
CFLAGS-RPC_task_get_special_port.o): Add $(no-stack-protector).
* hurd/Makefile (CFLAGS-hurdstartup.o,
CFLAGS-RPC_exec_startup_get_info.o): Add $(no-stack-protector).
libmachuser and libhurduser also need stack_chk_fail_local and they do not
link against libc_nonshared.
* mach/stack_chk_fail_local.c: New file.
* hurd/stack_chk_fail_local.c: New file.
* mach/Machrules ($(interface-library)-routines): Add
stack_chk_fail_local.
* mach/Versions (GLIBC_2.4): Add __stack_chk_fail.
* hurd/Versions (GLIBC_2.4): Add __stack_chk_fail.
Since assembly versions of HAS_CPU_FEATURE and HAS_ARCH_FEATURE have
been removed, assembly versions of index_cpu_* and index_arch_* can
also be removed.
Tested on i686 and x86-64 with and without --disable-multi-arch.
* sysdeps/x86/cpu-features.h [__ASSEMBLER__]
(index_cpu_*, index_arch_*): Removed.
When _Float128 is ABI-equivalent to long double, there is no need for
tgmath.h to have any special _Float128 handling: it's always OK to
call the long double versions of functions for _Float128 arguments in
that case, and the logic to determine return types is generic. Thus,
this patch changes the use of __HAVE_FLOAT128 to
__HAVE_DISTINCT_FLOAT128, as a minor optimization to reduce the size
of the macro expansions in the ABI-equivalent case.
Tested for x86_64.
* math/tgmath.h [__HAVE_FLOAT128]: Change conditional to
[__HAVE_DISTINCT_FLOAT128].
This patch cleans up how bits/math-finite.h handles types that are
ABI-aliases of other types.
For such types, no __*_finite functions exist; instead,
bits/math-finite.h must redirect calls to the functions for a
canonical choice of type for each floating-point format. (For the
actual public interfaces, symbols need exporting for each type, even
those that are ABI-aliases, because of standard requirements that
programs can declare the functions themselves without including
<math.h>, but that does not apply to __*_finite.)
At present, there is a special-case conditional in bits/math-finite.h
on __MATH_DECLARING_LDOUBLE && defined __NO_LONG_DOUBLE_MATH to handle
redirecting long double function calls to double __*_finite. This
patch replaces this by a more general mechanism. math.h, before each
inclusion of bits/math-finite.h, defines _MSUFTO_ as the suffix to use
on the target of redirection, in addition to the existing _MSUF_.
This way, __MATH_DECLARING_LDOUBLE can go away, as can the special
conditional in bits/math-finite.h. With this patch, math.h is now
prepared for the case of supporting float128 functions as aliases of
long double ones on platforms where long double is binary128, with
_MSUFTO_ appropriately defined for that case, and appropriate _MSUFTO_
definitions can easily be included when supporting _Float32 / _Float64
/ _Float32x / _Float64x (which will always be ABI-aliases of another
type when supported).
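Schematically, each inclusion of bits/math-finite.h is now bracketed
along these lines (a sketch of the idea, not the literal math.h text);
for long double on a configuration where long double has the same
format as double, the redirect target suffix is empty:

  #define _MSUF_ l       /* suffix of the type being declared */
  #ifdef __NO_LONG_DOUBLE_MATH
  # define _MSUFTO_      /* redirect to the double __*_finite functions */
  #else
  # define _MSUFTO_ l    /* redirect to the long double ones */
  #endif
  #include <bits/math-finite.h>
  #undef _MSUF_
  #undef _MSUFTO_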
Tested for x86_64, and did a compilation test for ARM with
build-many-glibcs.py to cover the long double = double case.
* math/math.h (_MSUFTO_): Define and undefine for each inclusion
of <bits/math-finite.h>.
(__MATH_DECLARING_LDOUBLE): Do not define and undefine for each
inclusion of <bits/math-finite.h>.
* math/bits/math-finite.h (__REDIRTO_X): Do not define
conditionally on [__MATH_DECLARING_LDOUBLE && defined
__NO_LONG_DOUBLE_MATH].
(__MATH_REDIRCALL): Use _MSUFTO_ in __REDIRTO call.
(__MATH_REDIRCALL_2): Likewise.
(__MATH_REDIRCALL_INTERNAL): Likewise.
(__REDIRFROM (lgamma, , _MSUF_)): Likewise.
(__REDIRFROM (gamma, , _MSUF_)): Likewise.
(__REDIRFROM (tgamma, , _MSUF_)): Likewise.
This patch removes the powerpc32-specific wrappers for sqrt and sqrtf.
These wrappers, by adding architecture-specific uses of _LIB_VERSION
and __kernel_standard, unnecessarily complicate cleanups of libm error
handling. They also do not serve a useful optimization purpose. GCC
knows about sqrt as a built-in function, and can generate direct calls
to a hardware square root instruction, either on its own, in the
-fno-math-errno case, or together with an inline check for the
argument being negative and a call to the out-of-line sqrt function
for error handling only in that case (and has been able to do so for a
long time). Thus in practice the wrapper will be called only in
the case of negative arguments, which is not a case it is useful to
optimize for.
Tested with build-many-glibcs.py for powerpc-linux-gnu-power4.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/w_sqrt_compat-power5.S:
Remove file.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/w_sqrt_compat-ppc32.S:
Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/w_sqrt_compat.c:
Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/w_sqrtf_compat-power5.S:
Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/w_sqrtf_compat-ppc32.S:
Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/w_sqrtf_compat.c:
Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/w_sqrt_compat.S: Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/w_sqrtf_compat.S: Likewise.
* sysdeps/powerpc/powerpc32/power5/fpu/w_sqrt_compat.S: Likewise.
* sysdeps/powerpc/powerpc32/power5/fpu/w_sqrtf_compat.S: Likewise.
* sysdeps/powerpc/powerpc32/power4/fpu/multiarch/Makefile
(libm-sysdep-routines): Remove w_sqrt_compat-power5,
w_sqrt_compat-ppc32, w_sqrtf_compat-power5 and
w_sqrtf_compat-ppc32.
When __NO_LONG_DOUBLE_MATH is defined, __issignalingl is not available,
thus issignaling with long double argument should call __issignaling,
instead.
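The resulting C++ declaration is along these lines (sketch):

  inline int issignaling (long double __val)
  {
  #ifdef __NO_LONG_DOUBLE_MATH
    /* __issignalingl does not exist here; the double version also
       covers long double.  */
    return __issignaling (__val);
  #else
    return __issignalingl (__val);
  #endif
  }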
Tested for powerpc64le.
* math/math.h [defined __cplusplus] (issignaling): In the long
double case, call __issignalingl only if __NO_LONG_DOUBLE_MATH
is not defined. Call __issignaling, otherwise.
Use += instead of = to avoid overriding target-specific CFLAGS settings.
Ideally the settings in target Makefiles would have precedence, but the
Makefile inclusion order does not allow that; with this fix at least the
target settings are not dropped.
Update libm-test-ulps for AVX512 mathvec tests by running
“make regen-ulps” on an Intel Xeon processor with AVX512.
* sysdeps/x86_64/fpu/libm-test-ulps: Regenerated.
Fix GCC 7 errors when string/stratcliff.c is compiled with -O3:
stratcliff.c: In function ‘do_test’:
cc1: error: assuming signed overflow does not occur when assuming that (X - c) <= X is always true [-Werror=strict-overflow]
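The essence of the fix described in the entry below is to use unsigned
size_t loop counters and to terminate descending loops explicitly,
roughly like this (illustrative only, not the actual test code):

  /* With size_t, a condition such as 'outer >= 0' is always true, so
     the loop tests for zero explicitly and stops after that
     iteration; 'start' is a placeholder bound.  */
  for (size_t outer = start; ; --outer)
    {
      /* ... test body ... */
      if (outer == 0)
        break;
    }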
[BZ #21982]
* string/stratcliff.c (do_test): Declare size, nchars, inner,
middle and outer with size_t instead of int. Replace %d and
%Zd with %zu in printf. Update "MAX (0, nchars - 128)" and
"MAX (outer, nchars - 64)" to support unsigned outer and
nchars. Also exit the loop when outer == 0.
This patch consolidates the remaining non-cancellable syscall definitions
in the not-cancel.h header. They are:
* __fcntl_nocancel: Moved from fcntl.h to not-cancel.h.
* __sigsuspend_nocancel: Removed; since 988f991b50 it is no longer used
or defined.
* __nanosleep_nocancel: Removed; since 6f33fd046b it is defined in
not-cancel.h.
Now all non-cancellable syscall definitions live in not-cancel.h (the
only exception is the stdio symbol __fxprintf_nocancel, which uses the
non-cancellable open and is used in the getopt implementation).
Checked on x86_64-linux-gnu and with build-many-glibcs.py.
* include/fcntl.h (__fcntl_nocancel): Remove definition.
* include/signal.h (__sigsuspend_nocancel): Likewise.
* include/time.h (__nanosleep_nocancel): Likewise.
* sysdeps/generic/not-cancel.h (__fcntl_nocancel): New macro.
* login/utmp_file.c: Include non cancellable syscall header.
* sysdeps/unix/sysv/linux/not-cancel.h (__fcntl_nocancel): New
prototype.
Since binutils 2.25 or later is required to build glibc, we can replace
AVX512F .byte sequences with AVX512F instructions.
Tested on x86-64 and x32. There are no code differences in libmvec.so
and libmvec.a.
* sysdeps/x86_64/fpu/svml_d_sincos8_core.S: Replace AVX512F
.byte sequences with AVX512F instructions.
* sysdeps/x86_64/fpu/svml_d_wrapper_impl.h: Likewise.
* sysdeps/x86_64/fpu/svml_s_sincosf16_core.S: Likewise.
* sysdeps/x86_64/fpu/svml_s_wrapper_impl.h: Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_d_sincos8_core_avx512.S:
Likewise.
* sysdeps/x86_64/fpu/multiarch/svml_s_sincosf16_core_avx512.S:
Likewise.
Since Martin Sebor's commit
commit ee4e992ebe
Author: Martin Sebor <msebor@redhat.com>
Date: Tue Aug 22 09:35:23 2017 -0600
Declare ifunc resolver to return a pointer to the same type as the target
function to help GCC detect incompatibilities between the two when it's
enhanced to do so.
builds for powerpc64le fail in the declaration of some ifunc resolvers,
because the ifunc resolver is declared with a mismatched return type. One of the
declarations comes from the __ifunc_resolver macro, which was patched by
the aforementioned commit:
/* Helper / base macros for indirect function symbols. */
#define __ifunc_resolver(type_name, name, expr, arg, init, classifier) \
classifier inhibit_stack_protector \
__typeof (type_name) *name##_ifunc (arg) \
whereas the other comes from the unpatched __ifunc macro when
HAVE_GCC_IFUNC is not defined:
# define __ifunc(type_name, name, expr, arg, init) \
extern __typeof (type_name) name; \
void *name##_ifunc (arg) __asm__ (#name); \
This patch changes the return type of the ifunc resolver in the __ifunc
macro, so that it matches the return type of the target function,
similarly to what the aforementioned commit does.
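In other words, in the quoted __ifunc definition the resolver's return
type changes along the lines of:

  -     void *name##_ifunc (arg) __asm__ (#name);                      \
  +     __typeof (type_name) *name##_ifunc (arg) __asm__ (#name);      \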
Tested for powerpc64le and s390x with unpatched GCC.
* include/libc-symbols.h: [!defined HAVE_GCC_IFUNC] (__ifunc):
Change the return type of the ifunc resolver to match the return
type of the target function.
With the {INLINE,INTERNAL}_SYSCALL macros fixed for 64-bit arguments on x32,
we can remove the p{read,write}{v}64 calls from the auto-generation list.
Tested on x86_64 and x32.
* sysdeps/unix/sysv/linux/x86_64/syscalls.list (pread64): Remove.
(preadv64): Likewise.
(pwrite64): Likewise.
(pwritev64): Likewise.
The problem for x32 is that the {INTERNAL,INLINE}_SYSCALL C macros
explicitly cast the arguments to 'long int', thus passing as 32-bit
values arguments that should be passed as 64 bits.
The previous x32 implementation used the auto-generated syscalls from
assembly macros (syscalls.list), so the {INTERNAL,INLINE}_SYSCALL
macros were never used with 64-bit arguments on x32 (for which they
are internally broken).
To fix it I used a strategy similar to MIPS64/n32 (although the two
ABIs differ for some syscalls in how 64-bit arguments are passed),
where the argument types for the kernel call are derived using the GCC
extension 'typeof' together with an arithmetic operation. This allows
64-bit arguments to keep their width while 32-bit arguments are still
passed as 32 bits.
I also cleaned up the {INLINE,INTERNAL}_SYSCALL definitions by
defining 'internal_syscallX' helpers instead of constructing the
argument passing with macros (which improves readability), and removed
the unused INTERNAL_SYSCALL_NCS_TYPES define (since the point of the
patch is exactly to avoid requiring explicit type passing).
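A sketch of the new scheme for the one-argument case (close in spirit
to the glibc code, but simplified):

  /* Derive each argument's register type from the argument itself: the
     '(X) - (X)' trick keeps 64-bit types 64-bit, while pointers and
     small integers come out as an integer type, so no 'long int'
     truncation happens on x32.  */
  #define TYPEFY(X, name) __typeof__ ((X) - (X)) name
  #define ARGIFY(X) ((__typeof__ ((X) - (X))) (X))

  #define internal_syscall1(number, err, arg1)                       \
  ({                                                                 \
      unsigned long int resultvar;                                   \
      TYPEFY (arg1, __arg1) = ARGIFY (arg1);                         \
      register TYPEFY (arg1, _a1) asm ("rdi") = __arg1;              \
      asm volatile ("syscall"                                        \
                    : "=a" (resultvar)                               \
                    : "0" (number), "r" (_a1)                        \
                    : "memory", "cc", "r11", "cx");                  \
      (long int) resultvar;                                          \
  })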
Tested on x86_64 and x32.
* sysdeps/unix/sysv/linux/x86_64/sysdep.h
(INTERNAL_SYSCALL_NCS_TYPES): Remove define.
(LOAD_ARGS_0): Likewise.
(LOAD_ARGS_1): Likewise.
(LOAD_ARGS_2): Likewise.
(LOAD_ARGS_3): Likewise.
(LOAD_ARGS_4): Likewise.
(LOAD_ARGS_5): Likewise.
(LOAD_ARGS_6): Likewise.
(LOAD_REGS_0): Likewise.
(LOAD_REGS_1): Likewise.
(LOAD_REGS_2): Likewise.
(LOAD_REGS_3): Likewise.
(LOAD_REGS_4): Likewise.
(LOAD_REGS_5): Likewise.
(LOAD_REGS_6): Likewise.
(ASM_ARGS_0): Likewise.
(ASM_ARGS_1): Likewise.
(ASM_ARGS_2): Likewise.
(ASM_ARGS_3): Likewise.
(ASM_ARGS_4): Likewise.
(ASM_ARGS_5): Likewise.
(ASM_ARGS_6): Likewise.
(LOAD_ARGS_TYPES_1): Likewise.
(LOAD_ARGS_TYPES_2): Likewise.
(LOAD_ARGS_TYPES_3): Likewise.
(LOAD_ARGS_TYPES_4): Likewise.
(LOAD_ARGS_TYPES_5): Likewise.
(LOAD_ARGS_TYPES_6): Likewise.
(LOAD_REGS_TYPES_1): Likewise.
(LOAD_REGS_TYPES_2): Likewise.
(LOAD_REGS_TYPES_3): Likewise.
(LOAD_REGS_TYPES_4): Likewise.
(LOAD_REGS_TYPES_5): Likewise.
(LOAD_REGS_TYPES_6): Likewise.
(TYPEFY): New define.
(ARGIFY): Likewise.
(internal_syscall0): Likewise.
(internal_syscall1): Likewise.
(internal_syscall2): Likewise.
(internal_syscall3): Likewise.
(internal_syscall4): Likewise.
(internal_syscall5): Likewise.
(internal_syscall6): Likewise.
* sysdeps/unix/sysv/linux/x86_64/x32/times.c
(INTERNAL_SYSCALL_NCS): Remove define.
(internal_syscall1): Add define.