which actively tries to make it write out of bounds.
For this scenario to be possible,
it's necessary to set dstCapacity < LZ4F_compressBound().
When a compression operation fails,
the CCtx context is left in an undefined state,
therefore compression cannot resume.
As a consequence:
- round trip tests must be aborted, since there is nothing valid to decompress
- most users avoid this situation by ensuring that dstCapacity >= LZ4F_compressBound()
For these reasons, this use case was poorly tested up to now.
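For illustration, here is a minimal sketch of the usual calling pattern (not taken from the lz4 sources; everything except the LZ4F_* API names is made up): sizing dst with LZ4F_compressBound() keeps LZ4F_compressUpdate() on the success path, while any smaller dstCapacity makes the error branch, and thus the unusable cctx, reachable.

```c
/* Sketch only: dstCapacity >= LZ4F_compressBound() guarantees enough room,
 * so LZ4F_compressUpdate() cannot fail for lack of space. */
#include <stdio.h>
#include <stdlib.h>
#include <lz4frame.h>

static int compressChunk(LZ4F_cctx* cctx, const void* src, size_t srcSize)
{
    size_t const dstCapacity = LZ4F_compressBound(srcSize, NULL);
    void* const dst = malloc(dstCapacity);
    if (dst == NULL) return -1;

    size_t const r = LZ4F_compressUpdate(cctx, dst, dstCapacity, src, srcSize, NULL);
    if (LZ4F_isError(r)) {
        /* Only reachable when dstCapacity is under-sized; after this point
         * the cctx is in an undefined state and must not be reused. */
        fprintf(stderr, "compression error: %s\n", LZ4F_getErrorName(r));
        free(dst);
        return -1;
    }
    /* ... write the r compressed bytes stored in dst ... */
    free(dst);
    return 0;
}
```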
so "funny" thing with cppcheck
is that no 2 versions give the same list of warnings.
On Mac, I'm using v1.81, which had all warnings fixed.
On Travis CI, it's v1.61, and it complains about a dozen more/different things.
On Linux, it's v1.72, and it finds a completely different list of a half dozen warnings.
Some of these seem to be bugs/limitations in cppcheck itself.
The Travis CI version, v1.61, seems unable to understand %zu correctly, and appears to assume it means %u.
Make a round trip test with an arbitrary input file,
and generate an `abort()` on error,
to work in tandem with `afl`.
Note: currently locked on level 9, to investigate #560.
The error can be reproduced using the following command:
./frametest -v -i100000000 -s1659 -t31096808
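As a rough illustration of that test pattern (hypothetical code, not the actual `frametest` implementation): compress, decompress, compare, and `abort()` on any discrepancy, so that `afl` records the failure as a crash.

```c
/* Hypothetical round-trip check in the spirit described above. */
#include <stdlib.h>
#include <string.h>
#include <lz4frame.h>

static void roundTripOrAbort(const void* src, size_t srcSize)
{
    size_t const cBound = LZ4F_compressFrameBound(srcSize, NULL);
    void* const cBuff = malloc(cBound);
    void* const rBuff = malloc(srcSize ? srcSize : 1);
    if (!cBuff || !rBuff) abort();

    size_t const cSize = LZ4F_compressFrame(cBuff, cBound, src, srcSize, NULL);
    if (LZ4F_isError(cSize)) abort();

    LZ4F_dctx* dctx;
    if (LZ4F_isError(LZ4F_createDecompressionContext(&dctx, LZ4F_VERSION))) abort();
    {   size_t dSize = srcSize;
        size_t consumed = cSize;
        size_t const status = LZ4F_decompress(dctx, rBuff, &dSize, cBuff, &consumed, NULL);
        if (LZ4F_isError(status)) abort();                        /* decoding error */
        if (dSize != srcSize || memcmp(src, rBuff, srcSize)) abort();  /* round trip failed */
    }
    LZ4F_freeDecompressionContext(dctx);
    free(cBuff); free(rBuff);
}
```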
It's actually a bug in the streaming LZ4 API:
when starting a new stream
and providing a first chunk to complete with size < MINMATCH,
the chunk becomes a dictionary.
No hash was generated or stored,
but the chunk is still accessible, as the default position 0 points to dictStart,
and position 0 is still within MAX_DISTANCE.
The next attempt to read 32 bits from position 0 then fails.
The issue would have been mitigated by starting from index 64 KB,
effectively eliminating position 0 as too far away.
The proper fix is to reject such a "dictionary" as too small,
which is what this patch does.
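A hypothetical sketch of the triggering scenario, using the block streaming API (buffer contents and sizes are illustrative, not the original reproducer):

```c
/* Hypothetical sketch: the first chunk is smaller than MINMATCH (4 bytes),
 * so it only ends up acting as a tiny dictionary for the next block. */
#include <lz4.h>

void tinyFirstChunk(void)
{
    LZ4_stream_t stream;
    char dst[64];
    const char chunk1[3] = { 'a', 'b', 'c' };        /* size < MINMATCH */
    const char chunk2[]  = "abcabcabcabcabc";

    LZ4_resetStream(&stream);

    /* First call: chunk1 becomes the stream's dictionary. */
    LZ4_compress_fast_continue(&stream, chunk1, dst, (int)sizeof(chunk1), (int)sizeof(dst), 1);

    /* Second call: before the fix, an empty hash entry (default position 0)
     * resolved to dictStart, and the match check then read 32 bits from a
     * dictionary smaller than 4 bytes. The fix discards such a too-small
     * dictionary up front. */
    LZ4_compress_fast_continue(&stream, chunk2, dst, (int)(sizeof(chunk2) - 1), (int)sizeof(dst), 1);
}
```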
* Uninstall didn't remove the pkg-config file correctly.
* Fix `mandir`
* Allow overriding either upper- or lower-case location variables, but
  always use the lower-case variables.
* Add a test case that ensures overriding both upper- and lower-case
  variables gives the same result, and that the directory is empty after uninstall.
Ring buffer tests were performed only with LZ4_decompress_safe_continue,
leaving my buggy changes to LZ4_decompress_fast_continue undetected.
The tests are now replicated and performed in a similar manner for both
LZ4_decompress_safe_continue and LZ4_decompress_fast_continue (except
for the small buffer case where only one function can be tested,
because part of the dictionary is overwritten with the output).
I also updated function names in the messages (changed them to the
actual ones). The error was reported for LZ4_decompress_safe(),
which I found misleading.
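For context, here is a hedged sketch of the ring-buffer decoding pattern these tests exercise (names and sizes are illustrative, not the actual fuzzer code); the same loop can be written with either *_safe_continue or *_fast_continue:

```c
/* Sketch: successive blocks are decoded at a rotating position inside one
 * buffer, and the previously decoded data serves as the dictionary. */
#include <lz4.h>

#define MAX_BLOCK   1024
/* Illustrative sizing: comfortably larger than the 64 KB history window
 * plus one block, so wrap-around never loses reachable history. */
#define RING_SIZE   (64 * 1024 + 2 * MAX_BLOCK)

int decodeIntoRing(const char* cmpBlocks[], const int cmpSizes[], int nbBlocks)
{
    static char ring[RING_SIZE];
    int pos = 0;
    int i;
    LZ4_streamDecode_t sd;
    LZ4_setStreamDecode(&sd, NULL, 0);

    for (i = 0; i < nbBlocks; i++) {
        int dSize;
        if (pos + MAX_BLOCK > RING_SIZE) pos = 0;   /* wrap around */
        dSize = LZ4_decompress_safe_continue(&sd, cmpBlocks[i],
                                             ring + pos, cmpSizes[i], MAX_BLOCK);
        if (dSize < 0) return -1;   /* decoding error */
        /* ... consume dSize bytes at ring + pos ... */
        pos += dSize;
    }
    return 0;
}
```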
The previous change broke decoding with a ring buffer. That's because
I didn't realize that the "double dictionary mode" was possible, i.e.
that the decoding routine can look both at the first part of the
dictionary passed as prefix and the second part passed via dictStart+dictSize.
So this change introduces the LZ4_decompress_safe_doubleDict helper,
which handles this "double dictionary" situation. (This is a bit of
a misnomer: there is only one dictionary, but I can't think of a better
name, and perhaps the designation is not too bad after all.) The helper is
used only once, in LZ4_decompress_safe_continue; it should be inlined,
with LZ4_FORCE_O2_GCC_PPC64LE attached to LZ4_decompress_safe_continue.
(Also, in the helper functions, I change the dictStart parameter type
to "const void*", to avoid a cast when calling the helpers. Inside the helpers,
the cast to "BYTE*" is still required, for compatibility with C++.)
So this fixes the case of LZ4_decompress_safe_continue, and I'm
surprised by the fact that the fuzzer is now happy and does not detect
a similar problem with LZ4_decompress_fast_continue. So before fixing
LZ4_decompress_fast_continue, the next logical step is to enhance
the fuzzer.
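To make the "double dictionary" situation concrete, here is a hedged caller-side sketch (buffer names are illustrative): after LZ4_setStreamDecode() installs an external dictionary, decoding consecutive blocks into contiguous memory means a block's matches can point either into its prefix (the previous block's output) or into that external dictionary.

```c
/* Sketch of a caller-side setup that leads to the double-dictionary case. */
#include <lz4.h>

int decodeWithDict(const char* dict, int dictSize,
                   const char* block1, int cSize1,
                   const char* block2, int cSize2,
                   char* dst, int dstCapacity)
{
    LZ4_streamDecode_t sd;
    LZ4_setStreamDecode(&sd, dict, dictSize);        /* external dictionary */

    {   int const d1 = LZ4_decompress_safe_continue(&sd, block1, dst, cSize1, dstCapacity);
        if (d1 < 0) return -1;

        /* Block 2 is written right after block 1: its matches may point into
         * the prefix (dst..dst+d1) or into the external dictionary, i.e. the
         * situation handled by the helper described above. */
        {   int const d2 = LZ4_decompress_safe_continue(&sd, block2, dst + d1,
                                                        cSize2, dstCapacity - d1);
            if (d2 < 0) return -1;
            return d1 + d2;
        }
    }
}
```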