Define 0-argument functions like foo(void) instead of foo(), in order
to avoid a warning with -Wold-style-definition. This makes it easier
to embed lz4.c in projects that compile with -Werror
-Wold-style-definition.
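For illustration, a minimal before/after (using LZ4_versionNumber() as the example):

```
#include "lz4.h"   /* for LZ4_VERSION_NUMBER */

/* before (old-style definition, triggers -Wold-style-definition):
 *   int LZ4_versionNumber() { return LZ4_VERSION_NUMBER; }
 * after (prototype-style definition, explicitly zero arguments): */
int LZ4_versionNumber(void) { return LZ4_VERSION_NUMBER; }
```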
when the src ptr is in a very low memory area (< 64K),
the virtual reference to data in the dictionary
might end up at a very high memory address.
Since it's not a "real" memory address,
just a virtual one, this is not a problem:
only the distance matters.
The assert was too restrictive.
Fixed.
by enabling the fast decoder path.
Visual requires a different set of macro constants to detect x86 / x64.
On my laptop, decoding speed on x64 went up from 3.12 to 3.45 GB/s.
32-bit is less impressive, though still favorable,
with speed increasing from 2.55 to 2.60 GB/s.
So both cases are now enabled.
Suggested by Bartosz Taudul (@wolfpld).
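For reference, a sketch of the kind of detection involved (the macro name LZ4_ARCH_X86 is illustrative, not the actual lz4.c identifier):

```
#if defined(_MSC_VER)
   /* Visual Studio: no __i386__ / __x86_64__; use _M_IX86 / _M_X64 instead */
#  if defined(_M_X64) || defined(_M_IX86)
#    define LZ4_ARCH_X86 1
#  endif
#elif defined(__i386__) || defined(__x86_64__)
   /* gcc / clang */
#  define LZ4_ARCH_X86 1
#endif
```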
Otherwise, the output from decoding LZ4-compressed input could be
platform dependent.
Also add a compile-time check to confirm the existing code's assumptions
that, if <stdint.h> isn't used, then sizeof(int) == 4.
Updates #792
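One classic C89-compatible way to phrase such a check (a sketch; the actual check in lz4.c may be written differently) is the negative-array-size trick:

```
/* compilation fails if sizeof(int) != 4 : the array size becomes -1 */
typedef char LZ4_int_is_4_bytes_check[(sizeof(int) == 4) ? 1 : -1];
```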
PR#756 fixed the data corruption bug, but didn't clear `ip`. PR#760
fixed that off-by-one error, but missed the case where `ip == filledIp`,
which is harder for the fuzzers to find (it took 20 days, not 1 day).
Verified this fixed the issue reported by OSS-Fuzz.
Credit to OSS-Fuzz.
When using an empty dictionary, we bail out of loading or attaching it in
ways that leave the working context in potentially slightly different states.
In particular, in some paths, we will cause the currentOffset to be non-zero,
while in others we would allow it to remain 0.
This difference in behavior is perfectly harmless, but in some situations, it
can produce slight differences in the compressed output. For sanity's sake,
we currently try to maintain a strict correspondence between the behavior of
the dict attachment and the dict loading paths. This patch restores them to
behaving identically.
This shouldn't have any negative side-effects, as far as I can tell. When
writing the dict attachment code, I tried to preserve zeroed currentOffsets
when possible, since they benchmarked as very slightly faster. However, the
case of attaching an empty dictionary is probably rare enough that a minuscule
performance degradation in that corner case is acceptable.
When the match is very long and found quickly, we can do
matchLength * nbCompares iterations through the chain
swapping, which can really slow down compression.
It is important to continue looking backwards if the current pattern
reaches `lowPrefixPtr`. If the pattern detection doesn't go all the
way to the beginning of the pattern, or to its end, it
slows the search down instead of speeding it up.
The slow unit in `round_trip_stream_fuzzer` used to take 12 seconds
to run with -O3, now it takes 0.2 seconds.
Credit to OSS-Fuzz
This diff fixes an issue in which we failed to clear the `dictCtx` in HC
compression. The `dictCtx` is not supposed to be used when an `extDict` is
present: matches found in the `dictCtx` do not account for the presence of an
`extDict` segment, and their offsets are therefore miscalculated when one is
present. This can lead to data corruption.
This diff clears the `dictCtx` whenever setting an `extDict`.
This issue was uncovered by @terrelln's fuzzing work.
It's now possible to select a custom LZ4_DISTANCE_MAX at compile time,
provided it's <= 65535.
However, in some cases (when compressing in byU16 mode),
the new distance wasn't respected,
as the code used to assume it was necessarily within range.
Added a distance check for this case.
Also: added a new TravisCI test which ensures that
custom LZ4_DISTANCE_MAX compiles correctly
and compresses correctly (relying on `assert()` to find outsized offsets).
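A sketch of the kind of guard this adds (helper name and form are illustrative, not the exact lz4.c code):

```
#include <stddef.h>

#ifndef LZ4_DISTANCE_MAX
#  define LZ4_DISTANCE_MAX 65535   /* format maximum */
#endif

/* reject match candidates that a smaller, custom LZ4_DISTANCE_MAX
 * puts out of range (byU16 mode used to assume this always held);
 * assumes match <= ip */
static int LZ4_distanceIsValid(const char* ip, const char* match)
{
    return (size_t)(ip - match) <= LZ4_DISTANCE_MAX;
}
```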
While running FAST_DEC_LOOP performance tests, I found that the offset check
is invoked much more often than in v1.8.3.
The difference in condition-check counts is visible via lzbench with the dickens test case:
v1.8.3   34959
v1.9.x   1055885
After investigating the code, the difference becomes clear.
v1.8.3 SKIPS the offset check unless
the condition here is true:
https://github.com/lz4/lz4/blob/v1.8.3/lib/lz4.c#L1463
AND the condition below is also true:
https://github.com/lz4/lz4/blob/v1.8.3/lib/lz4.c#L1478
Only when both hold is the offset check invoked.
In v1.9.x, the offset check is invoked on every loop iteration, which causes the slowdown.
The fix is to move this check under that specific condition,
to avoid useless condition checks.
After this change, the call count is the same as in v1.8.3.
Use the list of prerequisites instead of listing those files manually;
this way they will never fall out of sync.
Also update the amalgamation example to use a single cat command.
Fixes: a7e8d394 ("[amalgamation] add test")
Fixes: b192c86b ("[amalgamation] lz4frame.c")
Moved a few other tests to Makefiles.inc. Other things might need to go there.
Made a test for symlink appropriateness: Windows can NOT handle symlinks the same way Unix-like operating systems do (if at all). This is mostly the same situation as with the Visual C projects.
embed version info into the .dll and .exe files that are redistributed.
and deprecate LZ4_decompress_fast(),
with deprecation warnings enabled by default.
Note that, as a consequence of the fix,
LZ4_decompress_fast is now __slower__ than LZ4_decompress_safe().
That's because, since it doesn't know the input buffer size,
it must progress more cautiously through the input buffer
to avoid any out-of-bound read.
between lz4.c and lz4hc.c.
was left in a strange state after the "amalgamation" patch.
Now only 3 directives remain,
with the same name across both implementations,
and a single place of definition.
Might allow some light simplification due to reduced nb of states possible.
they are classified as deprecated in the API documentation (lz4.h)
but do not yet trigger a warning,
to give existing applications time to move away from them.
Also, the _fast() variants are still ~5% faster than the _safe() ones
after Dave's patch.
when one block was not compressible,
it would tag the context as `dirty`,
resulting in compression automatically bailing out of all future blocks,
making the rest of the frame incompressible.
yet another overly cautious overflow-risk flag,
while overflow is actually impossible, as guaranteed by the test just one line above.
Changed the cast position, just to please the tool.
since Visual 2017,
worries about a potential overflow, which is actually impossible.
Replaced (c * a) by (c ? a : 0).
The compiler will likely replace the multiply with a cmov.
Probably harmless for performance.
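The transformation in isolation (hypothetical function names; equivalent when c is 0 or 1):

```
/* before: the multiply made the analyzer flag a potential overflow */
static unsigned pick_before(unsigned c, unsigned a) { return c * a; }

/* after: same result for c in {0,1}; likely compiled to a cmov */
static unsigned pick_after (unsigned c, unsigned a) { return c ? a : 0; }
```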
make it possible to generate an LZ4-compressed block
with a controlled maximum offset (necessarily <= 65535).
This can be useful for compatibility with decoders
using a very limited memory budget (< 64 KB).
Answers #154
it fails in x86 32-bit mode:
Visual reports an alignment of 8 bytes (even with alignof()),
but actually only aligns LZ4_stream_t on 4 bytes.
The alignment check then fails, resulting in missed initialization.
- promoted LZ4_resetStream_fast() to stable
- moved LZ4_resetStream() to deprecated, but without triggering a compiler warning
- updated all sources to no longer rely on LZ4_resetStream()
note: the LZ4_initStream() proposal is slightly different:
it's able to initialize any buffer, provided that it's large enough.
To this end, it accepts a void*, and returns an LZ4_stream_t*.
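For example (a minimal sketch of the new usage):

```
#include "lz4.h"

void init_example(void)
{
    LZ4_stream_t state;   /* any sufficiently large buffer would do */
    LZ4_stream_t* const s = LZ4_initStream(&state, sizeof(state));
    /* s is NULL if the buffer is too small */
    (void)s;
}
```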
- promoted LZ4_resetStreamHC_fast() to stable
- moved LZ4_resetStreamHC() to deprecated (but do not generate a warning yet)
- Updated doc, to highlight difference between init and reset
- switched all invocations of LZ4_resetStreamHC() onto LZ4_initStreamHC()
- misc: ensure `make all` also builds /tests
which remained undetected so far,
as it requires a fairly large number of conditions to be triggered,
starting with enabling Block checksum, which is disabled by default,
and whose usage is known to be extremely rare.
For small offsets of size 1, 2, 4 and 8, we can set a single uint64_t,
and then use it to do a memset() variation. In particular, this makes
the somewhat-common RLE (offset 1) about 2-4x faster than the previous
implementation - we avoid not only the load blocked by store, but also
avoid the loads entirely.
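A sketch of the idea (simplified; not the exact lz4.c implementation): replicate the pattern into one uint64_t, then store it repeatedly, wildcopy-style:

```
#include <stdint.h>
#include <string.h>

/* assumes offset is 1, 2, 4 or 8, len > 0, and that the output buffer
 * has at least 7 bytes of slack past op+len (wildcopy-style overwrite) */
static void copy_small_offset(uint8_t* op, size_t offset, size_t len)
{
    uint8_t* const oend = op + len;
    uint64_t v;
    switch (offset) {
    case 1:  v = op[-1] * 0x0101010101010101ULL; break;
    case 2:  { uint16_t u; memcpy(&u, op-2, 2); v = u * 0x0001000100010001ULL; } break;
    case 4:  { uint32_t u; memcpy(&u, op-4, 4); v = u * 0x0000000100000001ULL; } break;
    default: memcpy(&v, op-8, 8); break;      /* offset == 8 */
    }
    /* store the replicated pattern: a memset()-like loop, no loads */
    do { memcpy(op, &v, 8); op += 8; } while (op < oend);
}
```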
Generally we want our wildcopy loops to look like the
memcpy loops from our libc, but without the final byte copy checks.
We can unroll a bit to make long copies even faster.
The only catch is that this affects the value of FASTLOOP_SAFE_DISTANCE.
We've already checked that we are more than FASTLOOP_SAFE_DISTANCE
away from the end, so this branch can never be true; we will have
already jumped to the second decode loop.
Use LZ4_wildCopy16 for variable-length literals. For literal counts that
fit in the flag byte, copy directly. We can also omit oend checks for
roughly the same reason as the previous shortcut: We check once that both
match length and literal length fit in FASTLOOP_SAFE_DISTANCE, including
wildcopy distance.
Add an LZ4_wildCopy16 which wildcopies, potentially smashing up
to 16 bytes, and use it for match copies. On x64, this avoids many
blocked loads due to store forwarding, similar to issue #411.
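A sketch of such a wildcopy (simplified relative to lz4.c):

```
#include <stdint.h>
#include <string.h>

/* copies in 16-byte strides; may write up to 15 bytes past dstEnd,
 * so the caller must guarantee that much slack in the output buffer */
static void LZ4_wildCopy16(void* dstPtr, const void* srcPtr, void* dstEnd)
{
    uint8_t* d = (uint8_t*)dstPtr;
    const uint8_t* s = (const uint8_t*)srcPtr;
    uint8_t* const e = (uint8_t*)dstEnd;
    do { memcpy(d, s, 16); d += 16; s += 16; } while (d < e);
}
```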
Copy the main loop, and change checks such that op is always less
than oend-SAFE_DISTANCE. Currently these are added for the literal
copy length check, and for the match copy length check.
Otherwise the first loop is exactly the same as the second. Follow-on
diffs will optimize the first copy loop based on this new requirement.
I also tried making a separate inlineable function for the copy
loop instead (similar to the existing partialDecode flags, etc.), but I think
the changes are significant enough to warrant duplicating the code, while
pulling common functionality out into separate functions.
This is the basic transformation that will allow several following optimisations.
Every 0xff byte in the compressed block corresponds to a length of 255 (not 256) in the input data. For long repeating sequences, using (length >> 8) may generate bad compressed blocks.
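A minimal sketch of correct length emission (the helper name is hypothetical; the point is that the run length comes from division by 255, never from a shift):

```
#include <stdint.h>
#include <stddef.h>

/* emit an LZ4 extended length: a run of 0xff bytes, then a final byte;
 * each 0xff contributes 255, so the run length is len/255, not len>>8 */
static uint8_t* emit_length(uint8_t* op, size_t len)
{
    while (len >= 255) { *op++ = 255; len -= 255; }
    *op++ = (uint8_t)len;
    return op;
}
```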
Dictionaries don't need to be > 4 bytes, they need to be >= 4 bytes. This test
was overly conservative.
Also removes the test in `LZ4_attach_dictionary()`.
Fixes a mismatch in behavior between loading into the context (via
`LZ4_loadDict()`) a very small (<= 4 bytes) non-contiguous dictionary, versus
attaching it with `LZ4_attach_dictionary()`.
Before this patch, this divergence could be reproduced by running
```
make -C tests fuzzer MOREFLAGS="-m32"
tests/fuzzer -v -s1239 -t3146
```
Making sure these two paths behave exactly identically is an easy way to test
the correctness of the attach path, so it's desirable that this remain an
unpolluted, high signal test.
following recommendations by @raggi.
The fix is slightly different, but achieves the same goal,
and is backed by a test tool which proves that it works
(generates the error before the patch, no longer after the patch).
which actively tries to make it write out of bounds.
For this scenario to be possible,
it's necessary to set dstCapacity < LZ4F_compressBound().
When a compression operation fails,
the CCtx context is left in an undefined state,
so compression cannot resume.
As a consequence:
- round trip tests must be aborted, since there is nothing valid to decompress
- most users avoid this situation, by ensuring that dstCapacity >= LZ4F_compressBound()
For these reasons, this use case was poorly tested up to now.
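For reference, sizing the destination with LZ4F_compressBound() avoids this failure mode entirely (a minimal sketch):

```
#include "lz4frame.h"

/* a dstCapacity of at least this value guarantees that
 * compression cannot fail for lack of room */
size_t required_capacity(size_t srcSize, const LZ4F_preferences_t* prefs)
{
    return LZ4F_compressBound(srcSize, prefs);
}
```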
when LZ4F_decompress() decodes an uncompressed block,
it provides an incorrect hint for next block
when frame checksum is enabled and block checksum is not.
Impact is low: the hint is just a hint,
and the decoder works regardless of the amount of input provided.
But this error broke the assumption that each call to LZ4F_decompress()
generates exactly one complete block when the input size hint is respected.