There were several issues when the initial 31-entry hashtab filled up.
The check size * 3 <= tab->n_elements is always false, since the table
can't have more elements than its size. Judging from libiberty/hashtab.c,
I assume this was meant to be a check for the table being 3/4 full. Even
after fixing that, _dl_higher_prime_number (31) apparently returns 31;
only _dl_higher_prime_number (32) returns 61. And the size variable
wasn't updated during reallocation, so the insertion of the new entry
went into the wrong spot of the resized table.
All this led to a hang in ld.so, because a search in a table with
n_elements == 31 and size == 31 would never terminate.
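Roughly, the intended resize logic looks like this (a sketch with a
stand-in struct, assuming a layout like libiberty/hashtab.c; only
_dl_higher_prime_number is the real ld.so helper):

  #include <stddef.h>

  struct hashtab
  {
    void **entries;
    size_t size;         /* allocated slots */
    size_t n_elements;   /* used slots */
  };

  extern unsigned long int _dl_higher_prime_number (unsigned long int);

  static void
  maybe_grow (struct hashtab *tab)
  {
    /* 3/4-full check; the broken size * 3 <= n_elements form can never
       be true, since n_elements <= size.  */
    if (tab->n_elements * 4 >= tab->size * 3)
      {
        /* Pass size + 1: _dl_higher_prime_number (31) is 31, while
           _dl_higher_prime_number (32) is 61.  */
        size_t newsize = _dl_higher_prime_number (tab->size + 1);
        /* ... reallocate entries and rehash into the new array ...  */
        /* Updating tab->size is essential: the insertion that
           triggered the resize must compute its slot modulo the new
           size.  */
        tab->size = newsize;
      }
  }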
This patch introduces a test to make sure no function modifies the
xmm/ymm registers, with the exception of the auditing functions.
The test is probably too pessimistic: all code linked into ld.so is
checked. Perhaps at some point only the call graph starting from
_dl_fixup and _dl_profile_fixup will be checked, and then we can start
using faster SSE-using functions in other parts of ld.so.
In multiarch mode there will be more than one function that wants to
use SSSE3. We should not test in each of them for Atoms with slow
SSSE3. Instead, disable the SSSE3 bit in the startup code for such
machines.
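A minimal sketch of the idea, with an assumed feature cache and model
number (the real code lives in the multiarch startup path):

  #define SSSE3_BIT (1u << 9)   /* CPUID.1:ECX bit 9 = SSSE3 */

  struct cpu_features_sketch
  {
    unsigned int family;
    unsigned int model;
    unsigned int cpuid1_ecx;    /* cached CPUID.1 ECX word */
  };

  static void
  maybe_disable_ssse3 (struct cpu_features_sketch *cf)
  {
    /* Family 6, model 28 is assumed here to be the in-order Atom.
       Clearing the cached bit makes every later SSSE3 check select
       the non-SSSE3 implementation, so no per-function Atom test is
       needed.  */
    if (cf->family == 6 && cf->model == 28)
      cf->cpuid1_ecx &= ~SSSE3_BIT;
  }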
The posix/tst-rfc3484* test cases caused warnings with newer versions
of gcc because the unused but copied sin_zero member of struct
sockaddr_in wasn't explicitly initialized.
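The fix amounts to initializing the whole structure before it is
copied, for example (an illustration, not the exact test code; address
and port are arbitrary):

  #include <string.h>
  #include <netinet/in.h>
  #include <arpa/inet.h>

  static struct sockaddr_in
  make_addr (void)
  {
    struct sockaddr_in sin;
    memset (&sin, '\0', sizeof (sin));   /* clears sin_zero too */
    sin.sin_family = AF_INET;
    sin.sin_port = htons (80);
    sin.sin_addr.s_addr = htonl (INADDR_LOOPBACK);
    return sin;
  }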
If a locale has no 8-bit characters whose case conversion differs from
the ASCII conversion (±0x20), then we can perform some optimizations.
These will follow later.
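The property in question can be checked byte-wise; a sketch for the
current locale (illustrative, not an actual locale-file flag):

  #include <ctype.h>
  #include <stdbool.h>

  static bool
  locale_is_ascii_case_only (void)
  {
    for (int c = 0; c < 256; ++c)
      {
        /* The ASCII rule: letters differ from their counterparts by
           0x20, everything else maps to itself.  */
        int up = (c >= 'a' && c <= 'z') ? c - 0x20 : c;
        int lo = (c >= 'A' && c <= 'Z') ? c + 0x20 : c;
        if (toupper (c) != up || tolower (c) != lo)
          return false;
      }
    return true;
  }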
In EDNS0 records the maximum result size is transmitted in a 16-bit
value. Large buffer sizes were handled incorrectly: only the low 16
bits were used. Fix this by limiting the size to 0xffff.
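For illustration (the function name is assumed): a 0x10000-byte buffer
would otherwise be announced as a payload size of 0.

  #include <stdint.h>
  #include <stddef.h>

  static uint16_t
  edns0_payload_size (size_t anslen)
  {
    return anslen > 0xffff ? 0xffff : (uint16_t) anslen;
  }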
The commit 20e498bd removes the pthread_rwlock_rdlock() calls, but not
the corresponding pthread_rwlock_unlock() calls. Also, the database
lock is never unlocked in one branch of the if in mempool_alloc().
I think the unreproducible random assert(dh->usable) crashes in
prune_cache() were caused by this; the broken locking also made it
easy to hang nscd threads.
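The invariant being restored, with hypothetical stand-ins for the nscd
types (not the actual nscd code):

  #include <pthread.h>
  #include <stddef.h>

  struct db { pthread_rwlock_t lock; };
  extern void *mempool_alloc (struct db *db, size_t len);

  static void *
  alloc_entry (struct db *db, size_t len)
  {
    pthread_rwlock_rdlock (&db->lock);
    void *p = mempool_alloc (db, len);
    /* Unlock on every path, including the allocation-failure case.  */
    pthread_rwlock_unlock (&db->lock);
    return p;
  }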
With atomic fastbins the checks performed can race with concurrent
modifications of the arena. If we detect a problem, redo the test
after acquiring the lock.
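The pattern, with hypothetical helper names (a sketch, not the actual
malloc code):

  #include <pthread.h>
  #include <stdbool.h>

  extern bool arena_looks_corrupt (void *av);
  extern void report_corruption (void *av);

  static void
  check_with_retry (void *av, pthread_mutex_t *arena_lock)
  {
    /* The unlocked check may observe a transient state, so treat it
       only as a hint.  */
    if (arena_looks_corrupt (av))
      {
        pthread_mutex_lock (arena_lock);
        bool still_bad = arena_looks_corrupt (av);  /* now race-free */
        pthread_mutex_unlock (arena_lock);
        if (still_bad)
          report_corruption (av);
      }
  }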
The following patch fixes the catomic_compare_and_exchange_*_rel
definitions (which were never used and weren't correct) and uses
catomic_compare_and_exchange_val_rel in _int_free. Compared to the
pre-2009-07-02 --enable-experimental-malloc state, the generated code
should be identical on all arches other than ppc/ppc64, and on
ppc/ppc64 it should use an lwsync instead of an isync barrier.
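The _int_free fastbin push then looks roughly like this (a sketch with
stand-in types; catomic_compare_and_exchange_val_rel is glibc's
internal atomic.h macro and returns the old value of *fb):

  struct chunk { struct chunk *fd; };

  static void
  fastbin_push (struct chunk **fb, struct chunk *p)
  {
    struct chunk *fd;
    do
      {
        fd = *fb;
        p->fd = fd;   /* link the freed chunk onto the current head */
      }
    while (catomic_compare_and_exchange_val_rel (fb, p, fd) != fd);
    /* The release semantics order the p->fd store before publishing
       p, which on ppc/ppc64 needs only lwsync rather than isync.  */
  }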
The original AVX patch used a function pointer to handle the
difference between machines with and without AVX support. This is
insecure: a well-placed memory exploit could redirect execution.
Using a variable and several tests is a bit slower but cannot be
exploited in this way.
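Schematically (all names here are illustrative):

  extern void save_xmm_state (void *buf);
  extern void save_ymm_state (void *buf);

  /* Before: a writable function pointer in the data segment, a
     convenient target for an attacker:
       static void (*save_state) (void *);  */

  static int have_avx;   /* set once during startup */

  static void
  save_vector_state (void *buf)
  {
    /* A flag plus a branch costs a test per call but leaves no code
       pointer to overwrite.  */
    if (have_avx)
      save_ymm_state (buf);
    else
      save_xmm_state (buf);
  }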
Some symbols have to be identified process-wide by their name. This is
particularly important for some C++ features (e.g., class-local static
data and static variables in inline functions). So far this cannot be
implemented completely with ELF functionality. The STB_GNU_UNIQUE
binding helps by ensuring the dynamic linker will always use the same
definition for all symbols with the same name and this binding.
Some of the new multiarch string functions for x86-64 were not aligned
to 16-byte boundaries, possibly creating unnecessary cache line misses
and delays.