and deprecate LZ4_decompress_fast(),
with deprecation warnings enabled by default.
Note that, as a consequence of the fix,
LZ4_decompress_fast is now __slower__ than LZ4_decompress_safe().
That's because, since it doesn't know the input buffer size,
it must progress more cautiously through the input buffer
to avoid any out-of-bound read.
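For illustration, migrating a call site to the stable function looks like this (the prototypes are the public ones from lz4.h; the wrapper name is mine):
```
#include "lz4.h"

/* LZ4_decompress_safe() receives the compressed size, so it can bound
   every read; the deprecated LZ4_decompress_fast() only knows the
   decompressed size, and has to guess where the input ends. */
int decompress_block(const char* src, int srcSize, char* dst, int dstCapacity)
{
    return LZ4_decompress_safe(src, dst, srcSize, dstCapacity);  /* <0 on error */
}
```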
The preprocessor directives shared between lz4.c and lz4hc.c
were left in a strange state after the "amalgamation" patch.
Now only 3 directives remain,
with the same names across both implementations,
and a single place of definition.
Might allow some light simplification due to the reduced number of possible states.
when one block was not compressible,
it would tag the context as `dirty`,
resulting in compression automatically bailing out on all future blocks,
leaving the rest of the frame uncompressed.
since Visual 2017, the compiler worries about a potential overflow, which is actually impossible.
Replaced (c * a) by (c ? a : 0).
Will likely replace a `*` by a `cmov`.
Probably harmless for performance.
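Schematically (an illustrative function, not lz4's actual code):
```
/* For c in {0,1}, both forms compute the same value; the conditional
   form avoids the compiler's overflow concern on the multiply, and is
   typically lowered to a cmov instead of an imul. */
unsigned pick(unsigned c, unsigned a) { return c ? a : 0; }  /* was: c * a */
```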
make it possible to generate an LZ4-compressed block
with a controlled maximum offset (necessarily <= 65535).
This could be useful for compatibility with decoders
using a very limited memory budget (<64 KB).
Answers #154.
it fails in x86 32-bit mode:
Visual reports an alignment of 8 bytes (even with alignof()),
but actually only aligns LZ4_stream_t on 4 bytes.
The alignment check then fails, resulting in missed initialization.
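Schematically, the failing check has this shape (a sketch, not the actual source):
```
#include <stddef.h>
#include <stdint.h>

/* On x86 32-bit MSVC, alignof(LZ4_stream_t) evaluates to 8 while the
   object may only be 4-byte aligned in practice, so a check of this
   shape spuriously fails and the initialization path is skipped. */
static int is_aligned(const void* buffer, size_t alignment)
{
    return ((uintptr_t)buffer % alignment) == 0;  /* alignment = alignof(LZ4_stream_t) */
}
```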
- promoted LZ4_resetStream_fast() to stable
- moved LZ4_resetStream() to deprecated status, but without triggering a compiler warning
- updated all sources to no longer rely on LZ4_resetStream()
note: the LZ4_initStream() proposal is slightly different:
it's able to initialize any buffer, provided that it's large enough.
To this end, it accepts a void*, and returns an LZ4_stream_t*.
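A minimal usage sketch of that proposal (the prototype matches the one from lz4.h; the wrapper function is illustrative):
```
#include "lz4.h"

LZ4_stream_t* make_stream(void)
{
    /* any sufficiently large (and properly aligned) buffer will do */
    static char buf[sizeof(LZ4_stream_t)];
    return LZ4_initStream(buf, sizeof(buf));  /* NULL if the buffer is too small */
}
```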
For small offsets of size 1, 2, 4 and 8, we can set a single uint64_t,
and then use it to do a memset() variation. In particular, this makes
the somewhat-common RLE (offset 1) about 2-4x faster than the previous
implementation - we avoid not only the load blocked by store, but also
avoid the loads entirely.
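A minimal sketch of the idea (names and structure are illustrative, not lz4's actual code):
```
#include <stdint.h>
#include <string.h>

/* For offsets 1/2/4/8, replicate the pattern into one uint64_t, then
   store it repeatedly: no loads inside the loop, hence no stalls from
   store-to-load forwarding. May overwrite up to 7 bytes past op+length,
   which the caller must guarantee is safe (wildcopy convention). */
static void pattern_copy(uint8_t* op, const uint8_t* match,
                         size_t offset, size_t length)
{
    uint64_t pattern;
    switch (offset) {
        case 1: pattern = 0x0101010101010101ULL * match[0]; break;
        case 2: { uint16_t v; memcpy(&v, match, 2);
                  pattern = 0x0001000100010001ULL * v; } break;
        case 4: { uint32_t v; memcpy(&v, match, 4);
                  pattern = 0x0000000100000001ULL * v; } break;
        case 8: memcpy(&pattern, match, 8); break;
        default: return;  /* other offsets: use the regular overlap copy */
    }
    for (size_t i = 0; i < length; i += 8)
        memcpy(op + i, &pattern, 8);
}
```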
Generally we want our wildcopy loops to look like the
memcpy loops from our libc, but without the final byte copy checks.
We can unroll a bit to make long copies even faster.
The only catch is that this affects the value of FASTLOOP_SAFE_DISTANCE.
We've already checked that we are more than FASTLOOP_SAFE_DISTANCE
away from the end, so this branch can never be true; we will have
already jumped to the second decode loop.
Use LZ4_wildCopy16 for variable-length literals. For literal counts that
fit in the flag byte, copy directly. We can also omit oend checks for
roughly the same reason as the previous shortcut: We check once that both
match length and literal length fit in FASTLOOP_SAFE_DISTANCE, including
wildcopy distance.
Add an LZ4_wildCopy16, that will wildcopy, potentially smashing up
to 16 bytes, and use it for match copy. On x64, this avoids many
blocked loads due to store forwarding, similar to issue #411.
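A sketch of what such a helper looks like (illustrative; the real implementation differs in details):
```
#include <stdint.h>
#include <string.h>

/* Copy from src to dst in 16-byte steps until dst reaches dstEnd.
   May write up to 15 bytes beyond dstEnd, so the caller must keep
   the output cursor at least that far from the true end of buffer. */
static void wildCopy16(uint8_t* dst, const uint8_t* src, uint8_t* dstEnd)
{
    do {
        memcpy(dst, src, 16);
        dst += 16; src += 16;
    } while (dst < dstEnd);
}
```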
Copy the main loop, and change checks such that op is always less
than oend-SAFE_DISTANCE. Currently these are added for the literal
copy length check, and for the match copy length check.
Otherwise the first loop is exactly the same as the second. Follow-on
diffs will optimize the first copy loop based on this new requirement.
I also tried instead making a separate inlineable function for the copy
loop (similar to the existing partialDecode flags, etc.), but I think the
changes might be significant enough to warrant doubling the code, instead
pulling out common functionality into separate functions.
This is the basic transformation that will allow several following optimisations.
Dictionaries don't need to be > 4 bytes, they need to be >= 4 bytes. This test
was overly conservative.
Also removes the test in `LZ4_attach_dictionary()`.
Fixes a mismatch in behavior between loading into the context (via
`LZ4_loadDict()`) a very small (<= 4 bytes) non-contiguous dictionary, versus
attaching it with `LZ4_attach_dictionary()`.
Before this patch, this divergence could be reproduced by running
```
make -C tests fuzzer MOREFLAGS="-m32"
tests/fuzzer -v -s1239 -t3146
```
Making sure these two paths behave exactly identically is an easy way to test
the correctness of the attach path, so it's desirable that this remain an
unpolluted, high signal test.
so "funny" thing with cppcheck
is that no 2 versions give the same list of warnings.
On Mac, I'm using v1.81, which had all warnings fixed.
On Travis CI, it's v1.61, and it complains about a dozen more/different things.
On Linux, it's v1.72, and it finds a completely different list of a half dozen warnings.
Some of these seem to be bugs/limitations in cppcheck itself.
The Travis CI version v1.61 seems unable to understand %zu correctly, and assumes it means %u.
The error can be reproduced using the following command:
./frametest -v -i100000000 -s1659 -t31096808
It's actually a bug in the streaming LZ4 API:
when starting a new stream
and providing a first chunk to compress with size < MINMATCH,
the chunk becomes a dictionary.
No hash was generated and stored,
but the chunk remains accessible, since the default position 0 points to dictStart,
and position 0 is still within MAX_DISTANCE.
The next attempt to read 32 bits from position 0 then fails.
The issue would have been mitigated by starting from index 64 KB,
effectively eliminating position 0 as too far away.
The proper fix is to eliminate such "dictionary" as too small.
Which is what this patch does.
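The shape of the fix, schematically (HASH_UNIT is lz4.c's name for the minimum hashable span; the surrounding structure here is simplified for illustration):
```
#include <stddef.h>

#define HASH_UNIT sizeof(size_t)  /* lz4.c defines it as sizeof(reg_t) */

typedef struct { const char* dictionary; int dictSize; } stream_internal_t;  /* simplified */

/* sketch of the fix inside LZ4_loadDict(): a chunk too small to hash must
   not be retained as a dictionary, otherwise position 0 stays reachable
   but has no hash entry, and a later 32-bit read at position 0 fails */
static int load_dict_sketch(stream_internal_t* dict, const char* dictionary, int dictSize)
{
    if (dictSize < (int)HASH_UNIT) {
        dict->dictionary = NULL;
        dict->dictSize = 0;
        return 0;
    }
    /* ... regular dictionary loading and hashing ... */
    (void)dictionary;
    return dictSize;
}
```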
The change is very similar to that of the LZ4_decompress_safe_continue
case. The only reason I make this a separate change is to ensure that
the fuzzer, after it's been enhanced, can detect the flaw in
LZ4_decompress_fast_continue, and that the change indeed fixes the flaw.
The previous change broke decoding with a ring buffer. That's because
I didn't realize that the "double dictionary mode" was possible, i.e.
that the decoding routine can look both at the first part of the
dictionary passed as prefix and the second part passed via dictStart+dictSize.
So this change introduces the LZ4_decompress_safe_doubleDict helper,
which handles this "double dictionary" situation. (This is a bit of
a misnomer, there is only one dictionary, but I can't think of a better
name, and perhaps the designation is not all too bad.) The helper is
used only once, in LZ4_decompress_safe_continue, it should be inlined
with LZ4_FORCE_O2_GCC_PPC64LE attached to LZ4_decompress_safe_continue.
(Also, in the helper functions, I change the dictStart parameter type
to "const void*", to avoid a cast when calling helpers. In the helpers,
the upcast to "BYTE*" is still required, for compatibility with C++.)
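For reference, the helper's shape is roughly the following (a sketch against the internal LZ4_decompress_generic, matching the parameter layout of the withPrefix64k call quoted further down; treat names and attributes as approximate):
```
LZ4_FORCE_INLINE int
LZ4_decompress_safe_doubleDict(const char* source, char* dest,
                               int compressedSize, int maxOutputSize,
                               size_t prefixSize,
                               const void* dictStart, size_t dictSize)
{
    /* the newer part of the dictionary sits right before dest (prefix);
       the older part is passed explicitly via dictStart/dictSize */
    return LZ4_decompress_generic(source, dest, compressedSize, maxOutputSize,
                                  endOnInputSize, full, 0, usingExtDict,
                                  (BYTE*)dest - prefixSize,
                                  (const BYTE*)dictStart, dictSize);
}
```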
So this fixes the case of LZ4_decompress_safe_continue, and I'm
surprised by the fact that the fuzzer is now happy and does not detect
a similar problem with LZ4_decompress_fast_continue. So before fixing
LZ4_decompress_fast_continue, the next logical step is to enhance
the fuzzer.
I noticed that LZ4_decompress_generic is sometimes instantiated with
an identical set of parameters, or (what's worse) with subtly different
sets of parameters. For example, LZ4_decompress_fast_withPrefix64k is
instantiated as follows:
return LZ4_decompress_generic(source, dest, 0, originalSize, endOnOutputSize,
full, 0, withPrefix64k, (BYTE*)dest - 64 KB, NULL, 64 KB);
while the equivalent withPrefix64k call in LZ4_decompress_usingDict_generic
passes 0 for the last argument instead of 64 KB. It turns out that there
is no difference in this case: if you change 64 KB to 0 KB in
LZ4_decompress_fast_withPrefix64k, you get the same binary code.
Moreover, because it's been clarified that LZ4_decompress_fast doesn't
check match offsets, it is now obvious that both of these fast/withPrefix64k
instantiations are simply redundant. Exactly because LZ4_decompress_fast
doesn't check offsets, it serves well with any prefixed dictionary.
There's a difference, though, with LZ4_decompress_safe_withPrefix64k.
It also passes 64 KB as the last argument, and if you change that to 0,
as in LZ4_decompress_usingDict_generic, you get a completely different
binary code. It seems that passing 0 enables offset checking:
const int checkOffset = ((safeDecode) && (dictSize < (int)(64 KB)));
However, the resulting code seems to run a bit faster. How come
enabling extra checks can make the code run faster? Curiouser and
curiouser! This needs extra study. Currently I take the view that
the dictSize should be set to non-zero when nothing else will do,
i.e. when passing the external dictionary via dictStart. Otherwise,
lowPrefix betrays just enough information about the dictionary.
* * *
Anyway, with this change, I instantiate all the necessary cases as
functions with distinctive names, which also take fewer arguments and
are therefore less error-prone. I also make the functions non-inline.
(The compiler won't inline the functions because they are used more than
once. Hence I attach LZ4_FORCE_O2_GCC_PPC64LE to the instances while
removing it from the callers.) The number of instances is now reduced
from 18 (safe+fast+partial+4*continue+4*prefix+4*dict+2*prefix64+forceExtDict)
down to 7 (safe+fast+partial+2*prefix+2*dict). The size of the code is
not the only issue here. Separate helper functions are much more
amenable to profile-guided optimization: it is enough to profile only
a few basic functions, while the other less-often used functions, such
as LZ4_decompress_*_continue, will benefit automatically.
This is the list of LZ4_decompress* functions in liblz4.so, sorted by size.
Exported functions are marked with a capital T.
$ nm -S lib/liblz4.so |grep -wi T |grep LZ4_decompress |sort -k2
0000000000016260 0000000000000005 T LZ4_decompress_fast_withPrefix64k
0000000000016dc0 0000000000000025 T LZ4_decompress_fast_usingDict
0000000000016d80 0000000000000040 T LZ4_decompress_safe_usingDict
0000000000016d10 000000000000006b T LZ4_decompress_fast_continue
0000000000016c70 000000000000009f T LZ4_decompress_safe_continue
00000000000156c0 000000000000059c T LZ4_decompress_fast
0000000000014a90 00000000000005fa T LZ4_decompress_safe
0000000000015c60 00000000000005fa T LZ4_decompress_safe_withPrefix64k
0000000000002280 00000000000005fa t LZ4_decompress_safe_withSmallPrefix
0000000000015090 000000000000062f T LZ4_decompress_safe_partial
0000000000002880 00000000000008ea t LZ4_decompress_fast_extDict
0000000000016270 0000000000000993 t LZ4_decompress_safe_forceExtDict
The bug is a read up to 2 bytes past the end of the buffer.
There are three cases for this bug, one for each test case added.
* An empty input causes `token = *ip++` to read one byte too far.
* A one byte input with `(token >> ML_BITS) == RUN_MASK` causes
one extra byte to be read without validation. This could be
combined with the first bug to cause 2 extra bytes to be read.
* The case pointed out in issue #508, where `ip == iend` at the
beginning of the loop after taking the shortcut.
Benchmarks show no regressions on clang or gcc-7 on both my mac
and devserver.
Fixes #508.
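A minimal regression-style check for the first case, using only the public API (an illustrative test, not the project's fuzzer):
```
#include <assert.h>
#include "lz4.h"

int main(void)
{
    char dst[16];
    const char src[1] = {0};  /* valid pointer, zero-length input */
    /* after the fix, an empty input must fail cleanly instead of
       reading `token = *ip++` one byte past the end */
    assert(LZ4_decompress_safe(src, dst, 0, (int)sizeof(dst)) < 0);
    return 0;
}
```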
The simple change from
`matchIndex+MAX_DISTANCE < current`
towards
`current - matchIndex > MAX_DISTANCE`
is enough to generate a 10% performance drop under clang.
Quite massive.
(I missed it, as my eyes were focused on gcc performance at the time.)
The second version is more robust, because it also survives a situation where
`matchIndex > current`
due to overflows.
The first version requires matchIndex to not overflow.
Hence, `assert()` conditions were added.
The only case where this can happen is with dictCtx compression,
in the case where the dictionary context is not initialized before loading the dictionary.
So it's enough to always initialize the context while loading the dictionary.
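To see why the second form is the more robust one, here is a small standalone illustration (the values are contrived):
```
#include <stdint.h>
#include <stdio.h>

#define MAX_DISTANCE 65535u

int main(void)
{
    /* contrived values: matchIndex > current, as can happen after an
       index overflow with an uninitialized dictCtx; the true modular
       distance (current - matchIndex) mod 2^32 is 70000, i.e. too far */
    uint32_t const current = 100;
    uint32_t const matchIndex = current - 70000u;  /* wraps to 0xFFFEEEF4 */
    /* first form: the addition fails to flag the bogus match as too far */
    printf("form 1 rejects: %d\n", matchIndex + MAX_DISTANCE < current);  /* 0 */
    /* second form: the subtraction wraps back to the true distance 70000 */
    printf("form 2 rejects: %d\n", current - matchIndex > MAX_DISTANCE);  /* 1 */
    return 0;
}
```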
Someone found it would be a great idea to define there a global variable under the very generic name "index".
This causes problems with shadow warnings, so no variable can be named "index" now...
Also: automatically update the API manual.