In case the allocator is corrupted and an assert triggers, we shouldn't
allocate any more memory. Use a private assert definition which doesn't
use malloc.
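A minimal sketch of the idea, using a hypothetical my_assert macro (glibc's private macro differs in detail): report the failure with write(2) and abort, never touching the heap.

    #include <unistd.h>
    #include <stdlib.h>

    /* Hypothetical malloc-free assert: the message is a static string and
       is emitted with write(2), so a corrupted heap is never touched.  */
    #define my_assert(expr)                                               \
      do {                                                                \
        if (!(expr)) {                                                    \
          static const char __msg[] = "assertion failed: " #expr "\n";    \
          (void) write (STDERR_FILENO, __msg, sizeof (__msg) - 1);        \
          abort ();                                                       \
        }                                                                 \
      } while (0)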
If a signal arrived during a symbol lookup and the signal handler also
required a symbol lookup, the end of the lookup in the signal handler reset
the flag indicating whether the AVX/SSE registers need to be restored.
Resetting means in this case that the tail part of the outer lookup code
will try to restore the registers, and this can fail miserably. We now
restore the flag to its previous value, which makes nested calls possible.
On 64-bit machines we should not split doubles into two 32-bit
integers and handle the words separately. We have wide registers.
This patch implements a 64-bit ceil version. Ideally all other
functions will be converted over time.
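A sketch of the single-word approach, assuming the usual IEEE-754 bit layout (illustrative, not necessarily the exact glibc code):

    #include <stdint.h>
    #include <string.h>

    /* ceil() operating on the double as one 64-bit word instead of two
       32-bit halves.  j0 is the unbiased exponent.  */
    static double
    ceil64 (double x)
    {
      uint64_t i0;
      memcpy (&i0, &x, sizeof i0);
      int j0 = (int) ((i0 >> 52) & 0x7ff) - 0x3ff;
      if (j0 < 0)
        {
          /* |x| < 1: the result is -0.0, +0.0 or +1.0.  */
          if (i0 >> 63)
            i0 = UINT64_C (0x8000000000000000);
          else if (i0 != 0)
            i0 = UINT64_C (0x3ff0000000000000);
        }
      else if (j0 <= 51)
        {
          uint64_t frac = UINT64_C (0x000fffffffffffff) >> j0;
          if ((i0 & frac) == 0)
            return x;                                   /* already integral */
          if (!(i0 >> 63))
            i0 += UINT64_C (0x0010000000000000) >> j0;  /* add 1 to the integer part */
          i0 &= ~frac;                                  /* drop the fraction */
        }
      else
        return j0 == 0x400 ? x + x : x;                 /* inf/NaN or integral */
      memcpy (&x, &i0, sizeof x);
      return x;
    }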
This patch fixes mixed SSE/AVX audit and checks AVX only once in
_dl_runtime_profile. When an AVX or SSE register value in pltenter is
modified, we have to make sure that the SSE part value is the same in both
lr_xmm and lr_vector fields so that pltexit will get the correct value
from either lr_xmm or lr_vector fields. AVX-enabled pltenter should
update both lr_xmm and lr_vector fields to support stacked AVX/SSE
pltenter functions.
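A hedged sketch of such an AVX-aware auditor; the field layout is assumed from <link.h> (lr_vector[i] starting with the same low 128 bits that lr_xmm[i] holds), and the replacement value is purely hypothetical:

    #include <link.h>
    #include <string.h>

    /* If pltenter modifies a floating-point argument, mirror the SSE lane
       into both fields so a stacked SSE-only or AVX-enabled pltexit reads
       the same value from either lr_xmm or lr_vector.  Illustrative only.  */
    Elf64_Addr
    la_x86_64_gnu_pltenter (Elf64_Sym *sym, unsigned int ndx,
                            uintptr_t *refcook, uintptr_t *defcook,
                            La_x86_64_regs *regs, unsigned int *flags,
                            const char *symname, long int *framesizep)
    {
      double newarg = 1.0;                   /* hypothetical replacement value */
      memcpy (&regs->lr_xmm[0], &newarg, sizeof newarg);
      memcpy (&regs->lr_vector[0], &regs->lr_xmm[0], sizeof regs->lr_xmm[0]);
      return sym->st_value;
    }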
The meaning of bits 25-14 in EAX returned from cpuid with EAX = 4
has changed from "the maximum number of threads sharing the cache"
to "the maximum number of addressable IDs for logical processors sharing
the cache" on CPUs which also support cpuid with EAX = 11. We need to use
results from both EAX = 4 and EAX = 11 to get the number of threads
sharing the cache. Bits 25-14 in EAX on Core i7 give 15 although the
number of logical processors is 8. Here is a white paper on this:
http://software.intel.com/en-us/articles/intel-64-architecture-processor-topology-enumeration/
This patch correctly counts the number of logical processors on Intel CPUs
which support cpuid with EAX = 11. Tested on Dunnington, Core i7 and
Nehalem EX/EP.
It also fixes the Pentium D workaround, since EBX may not have the right
value returned from cpuid with EAX = 1.
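For illustration, a small program (using GCC's <cpuid.h>) that dumps the raw fields from both leaves which the patch combines; the exact combination logic in glibc is more involved:

    #include <cpuid.h>
    #include <stdio.h>

    int
    main (void)
    {
      unsigned int eax, ebx, ecx, edx;

      if (__get_cpuid_max (0, NULL) < 11)
        return 1;                          /* leaves 4/11 not available */

      /* Leaf 4 enumerates the caches; bits 25-14 of EAX hold the maximum
         number of addressable IDs for logical processors sharing the
         cache, minus 1.  */
      for (unsigned int i = 0; ; ++i)
        {
          __cpuid_count (4, i, eax, ebx, ecx, edx);
          if ((eax & 0x1f) == 0)
            break;                         /* no more caches */
          unsigned int level = (eax >> 5) & 0x7;
          unsigned int max_ids = ((eax >> 14) & 0xfff) + 1;
          printf ("L%u cache: up to %u addressable IDs share it\n",
                  level, max_ids);
        }

      /* Leaf 11 enumerates the topology levels needed to translate the
         leaf-4 value into an actual thread count.  */
      for (unsigned int i = 0; ; ++i)
        {
          __cpuid_count (11, i, eax, ebx, ecx, edx);
          unsigned int level_type = (ecx >> 8) & 0xff;
          if (level_type == 0)
            break;                         /* no more topology levels */
          printf ("topology level %u (type %u): %u logical processors\n",
                  i, level_type, ebx & 0xffff);
        }
      return 0;
    }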
This patch adds 32bit SSE4.2 string functions. It uses -16L instead of
0xfffffffffffffff0L, which works for both 32bit and 64bit long. Tested
on 32bit Core i7 and Core 2.
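For illustration, the alignment mask in question:

    #include <stddef.h>

    /* -16L is sign-extended to the width of long, so the low four bits are
       clear and all higher bits are set on both 32-bit and 64-bit targets;
       0xfffffffffffffff0L is only correct when long is 64 bits wide.  */
    static const char *
    align16_down (const char *p)
    {
      return (const char *) ((unsigned long) p & -16L);
    }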
This patch adds multiarch support when configured for i686. I modified
some x86-64 functions to support 32bit. I will contribute 32bit SSE string
and memory functions later.
obstack calls several callbacks, so on i?86 it'd better be compiled
without -mpreferred-stack-boundary=2, otherwise the callbacks are called
with misaligned stack.
We use sigaltstack internally which on some systems is a syscall
and should be used as such. Move the x86-64 version to the Linux
specific directory and create in its place a file which always
causes compile errors.
SSE registers are used for passing parameters and must be preserved
in runtime relocations. Inside ld.so this is enforced through the
tests in tst-xmmymm.sh. But the malloc routines used after startup
come from libc.so and can be arbitrarily complex. It's overkill
to save the SSE registers all the time because of that. These calls
are rare. Instead we save them on demand. The new infrastructure
put in place in this patch makes this possible and efficient.
The test now takes the callgraph into account. Only code called
during runtime relocation is affected by the limitation. We now
determine the affected object files as closely as possible from
the outside. This allowed removing the specializations for some of
the string functions, as they are only used in other code paths.
There were several issues when the initial 31-entry hashtab filled up.
size * 3 <= tab->n_elements is always false, since the table can't have
more elements than its size. Judging from libiberty/hashtab.c, this was
meant to be a check for the table being 3/4 full. Even after fixing that,
_dl_higher_prime_number (31) apparently returns 31; only
_dl_higher_prime_number (32) returns 61. And the size variable wasn't
updated during reallocation, which means that during reallocation the
insertion of the new entry was done into the wrong spot.
All this led to a hang in ld.so, because a search in a table with
n_elements 31 and size 31 would never terminate.
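A minimal, self-contained sketch of the intended behavior (illustrative names, toy hash, not the glibc internals):

    #include <stdlib.h>
    #include <stdint.h>

    struct entry { const char *key; };
    struct table { struct entry *entries; size_t size, n_elements; };

    /* Stand-in for _dl_higher_prime_number: just grow the size.  */
    static size_t
    next_size (size_t n)
    {
      return 2 * n + 1;
    }

    static void
    insert (struct table *tab, const char *key)
    {
      size_t size = tab->size;

      /* Grow when the table becomes 3/4 full (the broken check compared
         n_elements against 3 * size, which can never be reached).  */
      if (tab->n_elements * 4 >= size * 3)
        {
          size_t newsize = next_size (size);
          struct entry *new = calloc (newsize, sizeof *new);
          if (new == NULL)
            return;
          for (size_t i = 0; i < size; ++i)
            if (tab->entries[i].key != NULL)
              {
                size_t idx = (uintptr_t) tab->entries[i].key % newsize;
                while (new[idx].key != NULL)
                  idx = (idx + 1) % newsize;
                new[idx] = tab->entries[i];
              }
          free (tab->entries);
          tab->entries = new;
          /* Keep the local copy in sync so the new entry below is hashed
             with the new size, not the old one.  */
          tab->size = size = newsize;
        }

      size_t idx = (uintptr_t) key % size;
      while (tab->entries[idx].key != NULL)
        idx = (idx + 1) % size;
      tab->entries[idx].key = key;
      ++tab->n_elements;
    }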
This patch introduces a test to make sure no function modifies the
xmm/ymm registers, with the exception of the auditing functions.
The test is probably too pessimistic. All code linked into ld.so
is checked. Perhaps at some point the callgraph starting from
_dl_fixup and _dl_profile_fixup will be checked instead, and then we can
start using faster SSE-using functions in parts of ld.so.
There will be more than one function which, in multiarch mode, wants
to use SSSE3. We should not test in each of them for Atoms with
slow SSSE3. Instead, disable the SSSE3 bit in the startup code for
such machines.
The posix/tst-rfc3484* test cases caused warnings in newer gccs
because the unused but copied sin_zero part of sockaddr_in wasn't
explicitly initialized.
If a locale does not have 8-bit characters with case conversion which
are different from the ASCII conversion (±0x20) then we can perform
some optimizations. These will follow later.
In EDNS0 records the maximum result size is transmitted in a 16
bit value. Large buffer sizes were handled incorrectly by using
only the low 16 bits. Fix this by limiting the size to 0xffff.
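For example, a hedged sketch of the clamping (names illustrative):

    #include <stdint.h>
    #include <stddef.h>

    /* The EDNS0 UDP payload size field is only 16 bits wide; clamp large
       answer buffers to 0xffff instead of letting the size wrap around.  */
    static uint16_t
    edns0_payload_size (size_t anslen)
    {
      return anslen > 0xffff ? 0xffff : (uint16_t) anslen;
    }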
The commit 20e498bd removes the pthread_mutex_rdlock() calls, but not the
corresponding pthread_mutex_unlock() calls. Also, the database lock is never
unlocked in one branch of the mempool_alloc() conditional.
I think the unreproducible random assert(dh->usable) crashes in prune_cache()
were caused by this. The broken locking also provided an easy way to make
nscd threads hang.
With atomic fastbins the checks performed can race with concurrent
modifications of the arena. If we detect a problem, re-do the test
after getting the lock.
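A sketch of the pattern under assumed names (looks_consistent stands in for the real arena checks):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdlib.h>

    static pthread_mutex_t arena_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Stand-in for the real consistency checks on the arena.  */
    static bool
    looks_consistent (void)
    {
      return true;
    }

    /* A check that can race with lock-free fastbin updates is repeated
       under the lock before being treated as real corruption.  */
    static void
    check_or_abort (void)
    {
      if (looks_consistent ())
        return;
      pthread_mutex_lock (&arena_lock);
      bool ok = looks_consistent ();
      pthread_mutex_unlock (&arena_lock);
      if (!ok)
        abort ();
    }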
The following patch fixes catomic_compare_and_exchange_*_rel definitions
(which were never used and weren't correct) and uses
catomic_compare_and_exchange_val_rel in _int_free. Comparing to the
pre-2009-07-02 --enable-experimental-malloc state the generated code should
be identical on all arches other than ppc/ppc64 and on ppc/ppc64 should use
lwsync instead of isync barrier.
The original AVX patch used a function pointer to handle the difference
between machines with and without AVX support. This is insecure. A
well-placed memory exploit could lead to redirection of the execution.
Using a variable and several tests is a bit slower but cannot be
exploited in this way.
Some symbols have to be identified process-wide by their name. This is
particularly important for some C++ features (e.g., class local static data
and static variables in inline functions). So far this cannot be fully
implemented with existing ELF functionality. The STB_GNU_UNIQUE binding
helps by ensuring the dynamic linker will always use the same definition for
all symbols with the same name and this binding.
Some of the new multi-arch string functions for x86-64 were
not aligned to 16-byte boundaries, possibly creating unnecessary
cache line misses and delays.
This patch adds SSSE3 strcpy/stpcpy. I got up to 4X speed up on Core 2
and Core i7. I disabled it on Atom since the SSSE3 version is slower for
shorter (<64 byte) data.
I changed the files NSS backend for networks because I thought the
getent use of getnetbyaddr is correct. But it isn't. Undo parts
of the last change and fix getent.
There were two problems in the getnetbyaddr implementation. The type
argument is pretty much useless since (almost) no input file contains
this information and the NSS backends make up the value they fill in
for the n_addrtype field. Therefore we now declare that passing AF_UNSPEC
is always recognized. Secondly, the files backend didn't compare the network
numbers with the correct endianness.
Also change getent to take advantage of the type parameter change.
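For example, with the change a caller can simply pass AF_UNSPEC:

    #include <netdb.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>
    #include <stdio.h>

    int
    main (void)
    {
      /* AF_UNSPEC is now always recognized as the type argument, so the
         caller need not guess which n_addrtype the backend made up.  */
      struct netent *n = getnetbyaddr (inet_network ("127.0.0.0"), AF_UNSPEC);
      if (n != NULL)
        printf ("%s\n", n->n_name);
      return 0;
    }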
There is some more hardware/software out there which has problems
if two DNS requests are sent using the same tuple
(source addr, source port, dest addr, dest port)
This can range from firewalls to load balancers. Some of the vendors
already fixed it in response to this problem. Still, we need a way
to make glibc work with broken environments. The single-request-reopen
flag can be used or we fall back automatically to this mode.
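For example, the mode can also be requested explicitly in /etc/resolv.conf:

    options single-request-reopen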
The check for the inclusion of a group in the result gave up too early
in case of broken-up NIS groups. We now fall back automatically to
the slow mode of using getgrent_r. As an optimization, if there is
no blacklist we need not perform the check in the first place and
can therefore just accept the results of the initgroups_dyn callback.
The terminal output etc is not visible in a core file. The new
libc-internal variable __abort_msg will point to a string with the
message which has been printed before the abort in case abort is
called from inside libc. BZ #10217
The dl-lookup.c changes are needed for prelink (support in prelink
checked into SVN, tested for both i?86 and x86-64), dl-irel.h just
something I discovered by code inspection.
The kernel from 2.6.31 on supports the rt_tgsigqueueinfo syscall.
Use it to implement the non-standard extension which, like
sigqueue, can pass additional data to the receiving thread.
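Assuming the extension in question is pthread_sigqueue, usage looks like this:

    #define _GNU_SOURCE
    #include <signal.h>
    #include <pthread.h>

    /* Like sigqueue, but targets one thread and carries a payload, built
       on top of rt_tgsigqueueinfo.  */
    static int
    notify_thread (pthread_t target)
    {
      union sigval value;
      value.sival_int = 42;               /* data delivered in si_value */
      return pthread_sigqueue (target, SIGUSR1, value);
    }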
Old binutils don't provide IFUNC and don't generate the section start/end
symbols we expect. At least for now only add the initializer code for
static IFUNC relocations if multi-arch support is requested.
The test to call the indirect function now includes a subtest to
check whether the symbol is defined. When coming to that point
this is almost always the case. The test for STT_GNU_IFUNC, on the
other hand, is rarely true. Moving it to the front means we don't have
to perform the second test unless really necessary.
So far Intel and AMD use exactly the same bits meaning the same
things in CPUID index 1. Simplify the code. Should an architecture
come along which doesn't use the same semantics then it must use a
different index value than COMMON_CPUID_INDEX_1.
The latest stratcliff extension exposed a bug in the IA-64 memchr which
uses non-speculative loads to prefetch data. Change the code to use
speculative loads with appropriate fixup. Fixes BZ 10162.
There are two issues with the forced loop exit in the nscd lookup:
1. the estimate of the entry size isn't pessimistic enough for all
databases, potentially resulting in too-early exits
2. the combination of 64-bit process and 32-bit nscd would lead to
rejecting valid records in the database.
The nscd database mapped in processes can change at any time. We
have to be more vigilant when it comes to using that memory. Test
that the data entries are valid in their entire size, don't read data
again from memory once we verified it, and make sure the trailing
pointer is not going off the deep end.
If longjmp restores the stack frame to an address which is beyond
the stack frame at the time of the longjmp call it would install
an uninitialized stack frame. If compiled with _FORTIFY_SOURCE
defined, longjmp will now bail out in this situation.
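A simplified sketch of the check on a downward-growing stack (the real code also has to allow jumps from a sigaltstack handler):

    #include <stdint.h>
    #include <stdlib.h>

    /* Jumping to a stack pointer below the current one would land in
       unused, uninitialized stack, so refuse it.  */
    static void
    check_longjmp_target (uintptr_t target_sp)
    {
      uintptr_t current_sp = (uintptr_t) __builtin_frame_address (0);
      if (target_sp < current_sp)
        abort ();
    }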
Add a test program, built to run on the host, to check all newly
built DSOs for executable stacks and fail if the stack information
is missing or indicates executable stacks.
2009-05-05 Aurelien Jarno <aurelien@aurel32.net>
[BZ #10128]
* resolv/res_query.c (__libc_res_nquery): If one query returns NOTIMP
or FORMERR and the other NOERROR, don't raise an error.
2009-05-06 Ryan S. Arnold <rsa@us.ibm.com>
[BZ #10118]
* Makeconfig (+asflags): New variable based upon ASFLAG or
asflags-cpu.
(ASFLAGS): Add override to set ASFLAGS to +asflags.
* config.make.in (asflags-cpu): Add variable based upon
@libc_cv_cc_submachine@ to propagate -mcpu=CPU from --with-cpu=CPU to
the assembler.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/power6/fpu/setcontext.S:
Remove unneeded file now that the assembler emits _ARCH_PWR6 and
recognizes power6 instruction set due to passing -mcpu=power6 from
--with-cpu=power6 when compiling .S files.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/power6/fpu/swapcontext.S:
Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/power6/fpu/setcontext.S:
Likewise.
* sysdeps/unix/sysv/linux/powerpc/powerpc64/power6/fpu/swapcontext.S:
Likewise.
to MAP_ANON in PROT_NONE mmap64 call.
(open_archive): Likewise.
(file_data_available_p): Use mmap64 instead of mremap.
(enlarge_archive): Likewise. Update head if ah->addr changed.
Attempt to reserve address space after mmap64 region.
* elf/dl-runtime.c (_dl_fixup): Use DL_FIXUP_VALUE_ADDR to access
result of lookup to make call to implement STT_GNU_IFUNC.
(_dl_profile_fixup): Likewise.
Patch by H.J. Lu <hjl.tools@gmail.com>.
from definition.
* sysdeps/x86_64/dl-machine.h (elf_machine_rela): Don't define
label if it is not used.
* elf/dl-profile.c (_dl_start_profile): Define real-type variant
of gmon_hist_hdr and gmon_hdr structures and use them.
* elf/dl-load.c (open_verify): Add temporary variable to avoid
warning.
* nscd/nscd_helper.c (get_mapping): Avoid casts to avoid warnings.
* sunrpc/clnt_raw.c (clntraw_private_s): Use union in definition
to avoid cast.
* inet/rexec.c (rexec_af): Make sa2 a union to avoid warnings.
* inet/rcmd.c (rcmd_af): Make from a union of the various needed types
to avoid warnings.
(iruserok_af): Use ss_family instead of casts.
* gmon/gmon.c (write_hist): Define real-type variant of
gmon_hist_hdr structure and use it.
(write_gmon): Likewise for gmon_hdr.
* sysdeps/unix/sysv/linux/readv.c: Avoid declaration of replacement
function if we are not going to define it.
* sysdeps/unix/sysv/linux/writev.c: Likewise.
	* inet/inet6_option.c (option_alloc): Add temporary variable to
avoid warning.
* libio/strfile.h (struct _IO_streambuf): Use correct type and
name of VTable element.
* libio/iovsprintf.c: Avoid casts to avoid warnings.
* libio/iovsscanf.c: Likewise.
* libio/vasprintf.c: Likewise.
* libio/vsnprintf.c: Likewise.
* stdio-common/isoc99_vsscanf.c: Likewise.
* stdlib/strfmon_l.c: Likewise.
* debug/vasprintf_chk.c: Likewise.
* debug/vsnprintf_chk.c: Likewise.
* debug/vsprintf_chk.c: Likewise.
mmaped and add new reserved field.
* locale/programs/locarchive.c (RESERVE_MMAP_SIZE): Define.
(create_archive): Reserve address space and then map file into it.
(open_archive): Likewise.
(file_data_available_p): New function.
(compare_from_file): New function.
(close_archive): Adjust to member name changes.
(add_locale): Before comparing locale data, check it is mapped.
Otherwise fall back to reading from the file.