1d167b515e
maxNumBuckets() was confusing entry capacity with bucket capacity: the value it returned was the maximum number of entries, not the maximum number of buckets. The issue was harmless: we would merely fail to cap the maximum to an allocatable size, but the array new[] in the Data constructors would have capped it anyway (by throwing std::bad_alloc).

So instead of trying to calculate the maximum bucket count so we can cap at it, simplify the calculation of the next power of 2 while preventing it from overflowing in our calculations. We continue to rely on new[] throwing when we return a count that is larger than the maximum allocatable.

This commit changes the load factor for QHashes containing a number of elements that is exactly a power of two: previously, such a hash would be loaded at 50%; now it is loaded at 25%. For this reason, tst_QSet::squeeze needed to be fixed to depend less on the implementation details.

Pick-to: 6.5
Change-Id: I9671dee8ceb64aa9b9cafffd17415f3856c358a0
Reviewed-by: Mårten Nordheim <marten.nordheim@qt.io>
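Below is a minimal standalone C++ sketch of the approach the commit describes: round a requested bucket count up to the next power of two while guarding against overflow in the calculation, and otherwise rely on operator new[] to throw std::bad_alloc for counts too large to allocate. The helper name nextPowerOfTwoClamped is hypothetical, not Qt's actual API.

```cpp
#include <cstddef>
#include <cstdio>
#include <limits>

// Hypothetical helper, illustrative only: round n up to the next power of
// two, clamping instead of overflowing when n exceeds the largest power of
// two representable in size_t.
static std::size_t nextPowerOfTwoClamped(std::size_t n)
{
    constexpr std::size_t maxPow2 =
        std::size_t(1) << (std::numeric_limits<std::size_t>::digits - 1);
    if (n > maxPow2)
        return maxPow2; // clamp; new[] will throw for unallocatable sizes
    std::size_t p = 1;
    while (p < n)
        p <<= 1;        // safe: p < n <= maxPow2, so the shift cannot overflow
    return p;
}

int main()
{
    // e.g. a request for 96 buckets rounds up to 128
    std::printf("%zu\n", nextPowerOfTwoClamped(96));
}
```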