In EDNS0 records the maximum result size is transmitted in a 16-bit
value. Large buffer sizes were handled incorrectly: only the low
16 bits were used. Fix this by limiting the size to 0xffff.
The commit 20e498bd removes the pthread_mutex_rdlock() calls, but not the
corresponding pthread_mutex_unlock() calls. Also, the database lock is never
unlocked in one branch of the mempool_alloc() if.
I think the unreproducible random assert(dh->usable) crashes in
prune_cache() were caused by this; the broken locking also provided an
easy way to make nscd threads hang.
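A sketch of the pattern the fix restores, with hypothetical names
standing in for the nscd internals; the point is that every exit path,
including the early-return branch, drops the lock:

  #include <pthread.h>
  #include <stdlib.h>

  static pthread_mutex_t db_lock = PTHREAD_MUTEX_INITIALIZER;

  /* Hypothetical stand-in for the pool allocation step.  */
  static void *
  pool_get (size_t len)
  {
    return malloc (len);
  }

  static void *
  mempool_alloc_sketch (size_t len)
  {
    pthread_mutex_lock (&db_lock);
    void *block = pool_get (len);
    if (block == NULL)
      {
        /* The kind of branch that previously returned while still
           holding the lock.  */
        pthread_mutex_unlock (&db_lock);
        return NULL;
      }
    pthread_mutex_unlock (&db_lock);
    return block;
  }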
With atomic fastbins the checks performed can race with concurrent
modifications of the arena. If we detect a problem, redo the test
after acquiring the lock.
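A minimal sketch of the retry logic, using a placeholder consistency
check (the real check in _int_free inspects the chunk at the top of the
fastbin):

  #include <pthread.h>
  #include <stdbool.h>
  #include <stdlib.h>

  static pthread_mutex_t arena_lock = PTHREAD_MUTEX_INITIALIZER;

  /* Placeholder for the real consistency check.  */
  static bool
  fastbin_top_consistent (void)
  {
    return true;
  }

  static void
  free_check_sketch (void)
  {
    if (!fastbin_top_consistent ())
      {
        /* The lock-free check may have raced with a concurrent
           modification of the arena; repeat it with the lock held
           before reporting corruption.  */
        pthread_mutex_lock (&arena_lock);
        bool ok = fastbin_top_consistent ();
        pthread_mutex_unlock (&arena_lock);
        if (!ok)
          abort ();
      }
  }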
The following patch fixes catomic_compare_and_exchange_*_rel definitions
(which were never used and weren't correct) and uses
catomic_compare_and_exchange_val_rel in _int_free. Compared to the
pre-2009-07-02 --enable-experimental-malloc state the generated code should
be identical on all arches other than ppc/ppc64; on ppc/ppc64 it should use
an lwsync instead of an isync barrier.
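For illustration, the intended release semantics expressed with C11
atomics rather than glibc's internal macros:

  #include <stdatomic.h>

  /* Compare-and-exchange ordered after all earlier writes (a release
     barrier; lwsync on ppc).  Returns the old value, which equals
     oldval iff the exchange succeeded.  */
  static void *
  cas_val_rel (_Atomic (void *) *mem, void *newval, void *oldval)
  {
    void *expected = oldval;
    atomic_compare_exchange_strong_explicit (mem, &expected, newval,
                                             memory_order_release,
                                             memory_order_relaxed);
    return expected;
  }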
The original AVX patch used a function pointer to handle the difference
between machines with and without AVX support. This is insecure: a
well-placed memory exploit could redirect execution.
Using a variable and several tests is a bit slower but cannot be
exploited in this way.
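A simplified illustration with hypothetical names; overwriting the flag
can at worst select the other fixed code path, not divert execution to
an attacker-chosen address:

  #include <stddef.h>

  static void work_avx (double *x, size_t n) { (void) x; (void) n; }
  static void work_sse (double *x, size_t n) { (void) x; (void) n; }

  /* Set once during startup, e.g. from CPUID.  */
  static int cpu_has_avx;

  static void
  work (double *x, size_t n)
  {
    if (cpu_has_avx)
      work_avx (x, n);
    else
      work_sse (x, n);
  }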
Some symbols have to be identified process-wide by their name. This is
particularly important for some C++ features (e.g., class local static data
and static variables in inline functions). So far this cannot be
implemented completely with ELF functionality. The STB_GNU_UNIQUE binding
helps by ensuring the dynamic linker will always use the same definition for
all symbols with the same name and this binding.
Some of the new multi-arch string functions for x86-64 were
not aligned to 16-byte boundaries, possibly creating unnecessary
cache line misses and delays.
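For illustration only: with GCC a function's entry can be forced onto a
16-byte boundary; the assembly implementations get the same effect from
alignment directives such as .p2align:

  /* Align the function entry to a 16-byte boundary.  */
  __attribute__ ((aligned (16))) void
  hot_function (void)
  {
  }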
This patch adds SSSE3 strcpy/stpcpy. I got up to a 4x speed-up on Core 2
and Core i7. I disabled it on Atom since the SSSE3 version is slower
for short (<64 byte) data.
I changed the files NSS backend for networks because I thought getent's
use of getnetbyaddr was correct. But it isn't. Undo parts of the last
change and fix getent.
There were two problems in the getnetbyaddr implementation. First, the
type argument is pretty much useless, since (almost) no input file contains
this information and the NSS backends make up the value they fill in
for the n_addrtype field. Therefore passing AF_UNSPEC is now always
recognized. Secondly, the files backend didn't compare the network
numbers with the correct endianness.
Also change getent to take advantage of the type parameter change.
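For example, with the change a lookup can portably pass AF_UNSPEC:

  #include <arpa/inet.h>
  #include <netdb.h>
  #include <stdio.h>
  #include <sys/socket.h>

  int
  main (void)
  {
    /* Look up the loopback network (127) by its number in host
       byte order.  */
    in_addr_t net = inet_netof (inet_makeaddr (127, 0));
    struct netent *n = getnetbyaddr (net, AF_UNSPEC);
    printf ("%s\n", n != NULL ? n->n_name : "not found");
    return 0;
  }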
There is some more hardware/software out there which has problems
if two DNS requests are sent using the same tuple
(source addr, source port, dest addr, dest port).
This can range from firewalls to load balancers. Some of the vendors
have already fixed it in response to this problem. Still, we need a way
to make glibc work in broken environments. The single-request-reopen
flag can be set explicitly, or we fall back to this mode automatically.
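The mode can also be requested explicitly in /etc/resolv.conf:

  options single-request-reopen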
The check for the inclusion of a group in the result gave up too early
in case of broken-up NIS groups. We now fall back automatically to
the slow mode of using getgrent_r. As an optimization, if there is
no blacklist we need not perform the check in the first place and
can simply accept the results of the initgroups_dyn callback.