I've noticed that sync_file_range is a stub on ppc/ppc64.
The kernel on these arches provides the sync_file_range2 syscall instead,
which takes the same parameters in a different order.
The following completely untested patch ought to fix this.
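A rough sketch of the idea for the 64-bit case (not the actual patch):
sync_file_range2 takes the flags before the offset, so the wrapper just
reorders the arguments; the 32-bit variant additionally has to split the
64-bit arguments according to the syscall ABI.

  #define _GNU_SOURCE 1
  #include <sys/syscall.h>
  #include <sys/types.h>
  #include <unistd.h>

  #ifdef __NR_sync_file_range2
  /* Illustrative wrapper for kernels that only provide sync_file_range2.  */
  int
  my_sync_file_range (int fd, off64_t offset, off64_t nbytes,
                      unsigned int flags)
  {
    return syscall (__NR_sync_file_range2, fd, flags, offset, nbytes);
  }
  #endif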
This is another bug in computing the fastmap. It was reported by a user
of sed because it usually does not show up with _LIBC; however, the bug
is present in that case too.
The bug is that whenever a range appears at the beginning of the regex,
the fastmap must allow the regex to be tried against any possible
multibyte character. The reason _LIBC masks it is that, in general, there
is a collation symbol for each possible multibyte-character lead byte, so
the lead bytes usually end up in the fastmap anyway.
The tests use Cyrillic characters as an example. With _LIBC they pass
even without the patch, but you can make them fail by removing the
collation symbol handling.
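A minimal check along the lines of the new tests might look like this (the
locale name and strings are illustrative); the pattern starts with a range,
so the fastmap must contain the lead byte of the Cyrillic character or
re_search will never even try to match at the right position:

  #define _GNU_SOURCE 1
  #include <locale.h>
  #include <regex.h>
  #include <string.h>

  int
  main (void)
  {
    if (setlocale (LC_ALL, "ru_RU.UTF-8") == NULL)
      return 77;                        /* locale not available: skip */

    struct re_pattern_buffer re;
    static char fastmap[256];
    memset (&re, 0, sizeof re);
    re.fastmap = fastmap;               /* make re_search use a fastmap */

    re_syntax_options = RE_SYNTAX_POSIX_BASIC;
    const char *pat = "[\xd0\xb0-\xd1\x8f]";    /* "[а-я]" in UTF-8 */
    if (re_compile_pattern (pat, strlen (pat), &re) != NULL)
      return 1;

    /* With the bug (mainly visible in non-_LIBC builds such as sed's) the
       lead byte 0xd0 can be missing from the fastmap.  */
    const char *s = "x\xd0\xb1y";               /* "xбy" */
    int pos = re_search (&re, s, strlen (s), 0, strlen (s), NULL);
    return pos == 1 ? 0 : 1;
  }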
With big DNS answers like the one you get for goodtimesdot.com you can
get a truncated address list if IPv6 mapping is enabled. Instead of
truncating, tell the caller to resize the buffer.
Due to the alignment requirements of 64-bit parameters there is a dummy
second argument, but other than that the syscall arguments map directly
to the function arguments.
I've just committed STT_GNU_IFUNC ppc/ppc64 support into prelink,
and this patch is needed on the glibc side. Without it ld.so segfaults,
because in dl-conflict.c sym_map is always NULL. While dl-machine.h could
use the RESOLVE_CONFLICT_FIND_MAP macro to compute it, that doesn't make
sense, because with prelink we know it is already properly relocated (all
relative relocations are applied by prelink).
As reported in http://bugzilla.redhat.com/533063, the preadv/pwritev
prototypes are wrong on 32-bit arches with -D_FILE_OFFSET_BITS=64, and as
I've just found, fallocate is wrong too.
The problem is that only off_t is transparently remapped to the 64-bit
type; __off_t is not.
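An illustrative pair of declarations (not the actual header text) showing
why the distinction matters on a 32-bit arch:

  #include <sys/types.h>

  /* Broken under -D_FILE_OFFSET_BITS=64: __off_t stays 32 bits, so large
     offsets are silently truncated.  */
  extern int bad_fallocate (int fd, int mode, __off_t offset, __off_t len);

  /* Correct: off_t follows _FILE_OFFSET_BITS and becomes the 64-bit type
     (the LFS variant can use __off64_t explicitly).  */
  extern int good_fallocate (int fd, int mode, off_t offset, off_t len);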
The nscd/*cache.c files contain assert()s and writeall() and
sendfileall() calls that invalidly pair &dataset->resp with total; one of
the two components should be replaced, using either dataset or
dataset->head.recsize instead. In the writeall() and sendfileall() cases
it is unlikely to matter in practice, but the assertions can sometimes
fail without a proper reason.
The implementation of posix_openpt on Linux can fail in a few extra
ways if the appropriate pseudo filesystems are not mounted etc. In
some of these cases we have to explicitly set errno.
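From the caller's point of view nothing changes; a hedged usage sketch,
relying only on the documented posix_openpt interface:

  #include <errno.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int
  main (void)
  {
    int fd = posix_openpt (O_RDWR | O_NOCTTY);
    if (fd == -1)
      {
        /* With the fix errno is meaningful even when, e.g., the pseudo
           terminal filesystem is not mounted.  */
        fprintf (stderr, "posix_openpt: %s\n", strerror (errno));
        return EXIT_FAILURE;
      }
    printf ("master side: fd %d\n", fd);
    return 0;
  }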
If a second call to ttyname was not for the same type of device (e.g.,
serial vs. pty), the prefix of the buffer was wrong. Don't rely on the
previous content; always reinitialize it.
The syscall conventions on some Linux archs prevented F_GETOWN from
working correctly in some situations. This can be rectified by using the
new F_GETOWN_EX command.
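A hedged sketch from the caller's side (F_GETOWN_EX needs _GNU_SOURCE and
Linux 2.6.32 or later): the owner comes back through a structure, so a
negative process-group ID can no longer be mistaken for an error return.

  #define _GNU_SOURCE 1
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int
  main (void)
  {
    struct f_owner_ex owner;

    if (fcntl (STDIN_FILENO, F_GETOWN_EX, &owner) == -1)
      {
        perror ("fcntl (F_GETOWN_EX)");
        return 1;
      }
    printf ("owner type %d, id %d\n", owner.type, (int) owner.pid);
    return 0;
  }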
tst-longjmp_chk passes; tst-longjmp_chk2 fails, but that is because of
some limitations of kernel signal delivery on sparc that I need to fix,
and has nothing to do with the longjmp_chk implementation.
(The problem with tst-longjmp_chk2 is that it tries to take a stack
fault SIGSEGV within a stack fault SIGSEGV, and the Linux kernel will
refuse to set up the signal stack and deliver the signal if the register
windows can't be written out to the stack first.)
localedef got into an endless loop in case order_start was used for
the unnamed_section twice and the first use didn't actually result
in any definition.
In case the allocator is corrupted and an assert triggers, we shouldn't
allocate any more memory. Use a private assert definition which doesn't
use malloc.
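A minimal sketch of such an assert (the names are illustrative, not the
glibc ones): the report is written with write(2) and the process aborted,
so nothing on the failure path calls back into malloc.

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  static void
  alloc_assert_fail (const char *assertion, const char *file, int line)
  {
    char buf[256];
    int len = snprintf (buf, sizeof buf, "%s:%d: assertion `%s' failed\n",
                        file, line, assertion);
    if (len > 0)
      (void) write (STDERR_FILENO, buf, (size_t) len);
    abort ();
  }

  #define alloc_assert(expr) \
    ((expr) ? (void) 0 : alloc_assert_fail (#expr, __FILE__, __LINE__))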
If a signal arrived during a symbol lookup and the signal handler also
required a symbol lookup, the end of the lookup in the signal handler
reset the flag indicating whether the AVX/SSE registers need to be
restored. Resetting means in this case that the tail part of the outer
lookup code will try to restore the registers, and this can fail
miserably. We now restore the previous value, which makes nested calls
possible.
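In outline (all names here are made up; the real state lives in the
thread control block used by the x86-64 trampolines):

  /* Save the previous value of the "restore AVX/SSE registers" flag and put
     it back afterwards instead of resetting it, so a lookup nested inside a
     signal handler does not clobber the interrupted outer lookup's state.  */
  static __thread int restore_vector_regs;      /* hypothetical flag */

  static void *do_lookup (const char *name) { (void) name; return 0; }  /* stub */

  void *
  nested_safe_lookup (const char *name)
  {
    int saved = restore_vector_regs;
    restore_vector_regs = 1;

    void *result = do_lookup (name);

    restore_vector_regs = saved;        /* restore, do not just clear */
    return result;
  }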
On 64-bit machines we should not split doubles into two 32-bit integers
and handle the words separately. We have wide registers. This patch
implements a 64-bit ceil version. Ideally all other functions will be
converted over time.
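A sketch of the word-at-a-time approach (not the committed
implementation): the double is treated as a single 64-bit integer and the
bits below the binary point are masked off, with a carry added first for
positive non-integral values.

  #include <stdint.h>
  #include <string.h>

  double
  ceil64 (double x)
  {
    uint64_t i;
    memcpy (&i, &x, sizeof i);

    int e = (int) ((i >> 52) & 0x7ff) - 1023;   /* unbiased exponent */

    if (e < 0)
      {
        if ((i << 1) == 0)
          return x;                             /* +0.0 or -0.0 */
        return (i >> 63) ? -0.0 : 1.0;          /* 0 < |x| < 1 */
      }

    if (e >= 52)
      return x;                                 /* integral, Inf or NaN */

    uint64_t frac = 0x000fffffffffffffULL >> e; /* bits below the point */
    if ((i & frac) == 0)
      return x;                                 /* already integral */

    if ((i >> 63) == 0)
      i += frac;                                /* positive: carry up */
    i &= ~frac;                                 /* drop the fraction */

    memcpy (&x, &i, sizeof x);
    return x;
  }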
This patch fixes mixed SSE/AVX auditing and checks for AVX only once in
_dl_runtime_profile. When an AVX or SSE register value is modified in
pltenter, we have to make sure that the SSE part of the value is the same
in both the lr_xmm and lr_vector fields so that pltexit will get the
correct value from either field. An AVX-enabled pltenter should update
both lr_xmm and lr_vector to support stacked AVX/SSE pltenter functions.
The meaning of bits 25-14 in EAX returned from cpuid with EAX = 4 has
been changed from "the maximum number of threads sharing the cache" to
"the maximum number of addressable IDs for logical processors sharing
the cache" when cpuid supports EAX = 11. We need to use results from both
EAX = 4 and EAX = 11 to get the number of threads sharing the cache.
On Core i7 that bit field is 15 although the number of logical
processors is 8. Here is a white paper on this:
http://software.intel.com/en-us/articles/intel-64-architecture-processor-topology-enumeration/
This patch correctly counts the number of logical processors on Intel
CPUs whose cpuid supports EAX = 11. Tested on Dinnington, Core i7 and
Nehalem EX/EP.
It also fixes the Pentium D workaround, since EBX may not have the right
value returned from cpuid with EAX = 1.
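A rough sketch of the leaf-11 part (assuming the usual sub-leaf semantics:
ECX bits 15-8 give the level type, 1 = SMT, 2 = core, and EBX bits 15-0
the number of logical processors at that level); the real code combines
this with the leaf-4 data:

  #include <cpuid.h>

  /* Number of logical processors in the package per CPUID leaf 11,
     or 0 if the leaf is not supported (then fall back to leaf 4).  */
  static unsigned int
  logical_processors_from_leaf11 (void)
  {
    unsigned int eax, ebx, ecx, edx;
    unsigned int count = 0;

    __cpuid (0, eax, ebx, ecx, edx);
    if (eax < 11)
      return 0;

    for (unsigned int level = 0; ; ++level)
      {
        __cpuid_count (11, level, eax, ebx, ecx, edx);
        unsigned int type = (ecx >> 8) & 0xff;
        if (type == 0)
          break;                        /* no more topology levels */
        if (type == 2)                  /* core level */
          count = ebx & 0xffff;
      }
    return count;
  }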
This patch adds 32-bit SSE4.2 string functions. It uses -16L instead of
0xfffffffffffffff0L, which works for both 32-bit and 64-bit long. Tested
on 32-bit Core i7 and Core 2.
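For example, aligning a pointer down to 16 bytes works with the same
constant on both word sizes, because -16L sign-extends to all-ones above
the low four bits:

  #include <stdint.h>

  static const char *
  align16_down (const char *p)
  {
    /* -16L is ...fff0 for 32-bit and 64-bit long alike; the spelled-out
       0xfffffffffffffff0L is only right for 64-bit long.  */
    return (const char *) ((uintptr_t) p & -16L);
  }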
This patch adds multiarch support when configured for i686. I modified
some x86-64 functions to support 32-bit. I will contribute 32-bit SSE
string and memory functions later.
obstack calls several callbacks, so on i?86 it had better be compiled
without -mpreferred-stack-boundary=2; otherwise the callbacks are called
with a misaligned stack.
We use sigaltstack internally, which on some systems is a syscall and
should be used as such. Move the x86-64 version to the Linux-specific
directory and in its place create a file which always causes compile
errors.
SSE registers are used for passing parameters and must be preserved in
runtime relocations. Inside ld.so this is enforced through the tests in
tst-xmmymm.sh. But the malloc routines used after startup come from
libc.so and can be arbitrarily complex. It's overkill to save the SSE
registers all the time because of that; these calls are rare. Instead we
save them on demand. The new infrastructure put in place in this patch
makes this possible and efficient.
The test now takes the callgraph into account. Only code called during
runtime relocation is affected by the limitation. We now determine the
affected object files as closely as possible from the outside. This
allowed removing the specializations for some of the string functions,
as they are only used in other code paths.
There were several issues when the initial 31-entry hashtab filled up.
size * 3 <= tab->n_elements is always false; the table can't have more
elements than its size. Judging from libiberty/hashtab.c, this was meant
to be a check for the table being 3/4 full. Even after fixing that,
_dl_higher_prime_number (31) apparently returns 31; only
_dl_higher_prime_number (32) returns 61. And the size variable wasn't
updated during reallocation, which means the insertion of the new entry
during reallocation was done into the wrong spot.
All this led to a hang in ld.so, because a search with n_elements 31 and
size 31 would never terminate.
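A toy open-addressing table illustrating the three fixes (names and
layout are illustrative, not the dl-misc.c code): grow at 3/4 full, ask
for a strictly larger prime, and update the size before inserting into
the reallocated table.

  #include <stdint.h>
  #include <stdlib.h>

  struct hashent { void *key; void *val; };
  struct hashtab { struct hashent *entries; size_t size; size_t n_elements; };

  static size_t
  next_prime (size_t n)         /* stand-in for _dl_higher_prime_number */
  {
    static const size_t primes[] = { 31, 61, 127, 251, 509 };
    for (size_t i = 0; i < sizeof primes / sizeof primes[0]; ++i)
      if (primes[i] >= n)
        return primes[i];
    return n | 1;
  }

  static struct hashent *
  find_slot (struct hashent *entries, size_t size, void *key)
  {
    size_t idx = (uintptr_t) key % size;
    while (entries[idx].key != NULL && entries[idx].key != key)
      idx = (idx + 1) % size;   /* terminates only while free slots exist */
    return &entries[idx];
  }

  int
  hashtab_insert (struct hashtab *tab, void *key, void *val)
  {
    /* Grow when the table is 3/4 full; the old test
       "size * 3 <= n_elements" can never be true.  */
    if (tab->n_elements * 4 >= tab->size * 3)
      {
        /* Ask for a strictly larger prime; otherwise a 31-entry table
           just gets 31 back again.  */
        size_t newsize = next_prime (tab->size + 1);
        struct hashent *newents = calloc (newsize, sizeof *newents);
        if (newents == NULL)
          return -1;
        for (size_t i = 0; i < tab->size; ++i)
          if (tab->entries[i].key != NULL)
            *find_slot (newents, newsize, tab->entries[i].key) = tab->entries[i];
        free (tab->entries);
        tab->entries = newents;
        tab->size = newsize;    /* keep size in step so the insertion below
                                   picks the right slot */
      }

    struct hashent *e = find_slot (tab->entries, tab->size, key);
    if (e->key == NULL)
      {
        e->key = key;
        ++tab->n_elements;
      }
    e->val = val;
    return 0;
  }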
This patch introduces a test to make sure no function modifies the
xmm/ymm registers, with the exception of the auditing functions.
The test is probably too pessimistic: all code linked into ld.so is
checked. Perhaps at some point the callgraph starting from _dl_fixup and
_dl_profile_fixup can be checked instead, and then we can start using
faster SSE-using functions in parts of ld.so.
There will be more than one function which, in multiarch mode, wants
to use SSSE3. We should not test in each of them for Atoms with
slow SSSE3. Instead, disable the SSSE3 bit in the startup code for
such machines.
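An illustrative sketch of the idea (the family/model check is an
assumption about which Atoms are affected; the real code keys off the CPU
identification done at startup): clear the cached SSSE3 feature bit once,
so the individual multiarch selectors only ever look at the bit.

  #include <cpuid.h>

  static unsigned int cpu_features_ecx;         /* cached CPUID.1 ECX bits */

  static void
  init_cpu_features (void)
  {
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid (1, &eax, &ebx, &ecx, &edx))
      return;

    cpu_features_ecx = ecx;

    unsigned int family = (eax >> 8) & 0xf;
    unsigned int model = ((eax >> 4) & 0xf) | ((eax >> 12) & 0xf0);

    /* Assumed example: first-generation Atom (family 6, model 0x1c) has
       SSSE3, but it is slow there, so pretend it is absent.  */
    if (family == 6 && model == 0x1c)
      cpu_features_ecx &= ~bit_SSSE3;
  }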
The posix/tst-rfc3484* test cases caused warnings in newer gccs
because the unused but copied sin_zero part of sockaddr_in wasn't
explicitly initialized.
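Along the lines of the test fix, sketched for an arbitrary address:

  #include <string.h>
  #include <arpa/inet.h>

  static struct sockaddr_in
  make_addr (void)
  {
    struct sockaddr_in sa;

    memset (&sa, '\0', sizeof sa);      /* also clears the sin_zero padding */
    sa.sin_family = AF_INET;
    sa.sin_port = htons (80);
    sa.sin_addr.s_addr = htonl (INADDR_LOOPBACK);
    return sa;
  }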
If a locale does not have 8-bit characters whose case conversion differs
from the ASCII conversion (±0x20), then we can perform some
optimizations. These will follow later.
In EDNS0 records the maximum result size is transmitted in a 16-bit
value. Large buffer sizes were handled incorrectly by using only the low
16 bits. Fix this by limiting the size to 0xffff.
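A sketch of the clamp (the function name is illustrative):

  #include <stddef.h>

  /* The payload size advertised in the EDNS0 OPT record is a 16-bit field;
     larger caller buffers must be clamped, not truncated to the low bits.  */
  static unsigned int
  edns0_payload_size (size_t bufsize)
  {
    return bufsize > 0xffff ? 0xffff : (unsigned int) bufsize;
  }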
The commit 20e498bd removes the pthread_mutex_rdlock() calls, but not
the corresponding pthread_mutex_unlock() calls. Also, the database lock
is never unlocked in one branch of the if statement in mempool_alloc().
I think the unreproducible random assert(dh->usable) crashes in
prune_cache() were caused by this. But there was also an easy way to make
nscd threads hang with the broken locking.
With atomic fastbins the checks performed can race with concurrent
modifications of the arena. If we detect a problem, redo the test after
getting the lock.
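The pattern, sketched with a plain pthread mutex and made-up helpers:

  #include <pthread.h>
  #include <stdbool.h>
  #include <stdlib.h>

  static pthread_mutex_t arena_lock = PTHREAD_MUTEX_INITIALIZER;

  /* Hypothetical stand-ins for the real consistency check and error path.  */
  static bool looks_corrupt (void *p) { return p == NULL; }
  static void report_corruption (const char *msg) { (void) msg; abort (); }

  void
  free_checked (void *p, bool have_lock)
  {
    bool bad = looks_corrupt (p);

    if (bad && !have_lock)
      {
        /* The lock-free check may have raced with a concurrent modification
           of the arena; repeat it with the lock held before declaring the
           heap corrupted.  */
        pthread_mutex_lock (&arena_lock);
        bad = looks_corrupt (p);
        pthread_mutex_unlock (&arena_lock);
      }

    if (bad)
      report_corruption ("invalid pointer");

    /* ... actual free logic ... */
  }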
The following patch fixes the catomic_compare_and_exchange_*_rel
definitions (which were never used and weren't correct) and uses
catomic_compare_and_exchange_val_rel in _int_free. Compared to the
pre-2009-07-02 --enable-experimental-malloc state, the generated code
should be identical on all arches other than ppc/ppc64, and on ppc/ppc64
it should use an lwsync instead of an isync barrier.
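For reference, a rough C11 analogue of a value-returning
compare-and-exchange with release semantics (this is not the glibc macro
itself):

  #include <stdatomic.h>

  static inline int
  cas_val_release (_Atomic int *mem, int newval, int oldval)
  {
    int expected = oldval;

    /* Release ordering on success; on failure 'expected' is updated to the
       current value, so the previous contents are returned either way.  */
    atomic_compare_exchange_strong_explicit (mem, &expected, newval,
                                             memory_order_release,
                                             memory_order_relaxed);
    return expected;
  }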
The original AVX patch used a function pointer to handle the difference
between machines with and without AVX support. This is insecure: a
well-placed memory exploit could lead to redirection of the execution.
Using a variable and several tests is a bit slower but cannot be
exploited in this way.
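In outline (all names made up): branch on a data flag instead of calling
through a writable function pointer.

  #include <string.h>

  static int cpu_has_avx;               /* set once at startup from CPUID */

  static void save_xmm (void *buf) { memset (buf, 0, 16 * 8); }   /* stub */
  static void save_ymm (void *buf) { memset (buf, 0, 32 * 8); }   /* stub */

  static void
  save_vector_regs (void *buf)
  {
    /* Overwriting a function pointer redirects execution anywhere; at worst,
       overwriting this flag only flips between two fixed code paths.  */
    if (cpu_has_avx)
      save_ymm (buf);
    else
      save_xmm (buf);
  }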