which no longer serves any purpose.
The comment implies that the mere presence of this unused function was affecting performance,
and that this is why it was not removed earlier.
This is likely another side effect of instruction alignment.
Relying on such an effect is obviously unreliable:
the impact may differ, positive or negative,
with any minor code change, any compiler version change, or even a parameter change.
EndMark, the 4-byte value indicating the end of a frame,
must be `0x00000000`.
Previously, it was merely described as a `0-size` block.
But such a definition could also encompass uncompressed blocks of size 0,
which carry a header of value `0x80000000`.
The intention, however, is to also support empty uncompressed blocks.
They could be used as a keep-alive signal.
Note that compressed empty blocks are already supported;
they just have a size of 1 instead of 0 (for the `0` token).
Unfortunately, the decoder implementation was also wrong,
and would interpret a `0x80000000` block header as an EndMark too.
This issue evaded detection so far simply because
the situation never arises in practice: LZ4Frame always issues
a clean `0x00000000` value as EndMark,
and it never flushes empty blocks.
This is fixed in this PR.
The decoder can now deal with empty uncompressed blocks,
and no longer confuses them with EndMark.
The specification is also clarified.
Finally, FrameTest is updated to randomly insert empty blocks during fuzzing.
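For illustration, here is a hypothetical helper (not the actual decoder code) that classifies the 4-byte little-endian block header according to the rules above:

```c
#include <stdint.h>

/* Hypothetical helper, for illustration only: classify a raw 4-byte
 * little-endian block header read from an LZ4 frame. */
typedef enum { BLOCK_ENDMARK, BLOCK_UNCOMPRESSED, BLOCK_COMPRESSED } BlockKind;

static BlockKind classifyBlockHeader(uint32_t header)
{
    if (header == 0x00000000U)
        return BLOCK_ENDMARK;        /* EndMark: terminates the frame */
    if (header & 0x80000000U)
        return BLOCK_UNCOMPRESSED;   /* high bit set: stored block;
                                        0x80000000 is an empty uncompressed block */
    return BLOCK_COMPRESSED;         /* low 31 bits give the compressed size */
}
```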
`LZ4_memcpy()` uses `__builtin_memcpy()` to ensure that clang/gcc
can inline the `memcpy()` calls in freestanding mode.
This is necessary for decompressing the Linux Kernel with LZ4.
Without an analogous patch, decompression ran at 77 MB/s;
with the patch it ran at 884 MB/s.
Similar work in the kernel:
https://patchwork.kernel.org/patch/11351499/
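A minimal sketch of the idea, assuming gcc/clang expose `__builtin_memcpy()` (the exact guards in lz4.c may differ):

```c
/* Route copies through the compiler builtin when available, so gcc/clang can
 * still inline them in freestanding builds; fall back to libc memcpy otherwise.
 * Sketch only; the real guards in lz4.c may differ. */
#if defined(__GNUC__) || defined(__clang__)
#  define LZ4_memcpy(dst, src, size) __builtin_memcpy(dst, src, size)
#else
#  include <string.h>
#  define LZ4_memcpy(dst, src, size) memcpy(dst, src, size)
#endif
```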
UBSan (with clang-10) complains about pointer arithmetic (adding 0)
on a null pointer.
This patch is tested with clang-10 + UBSan.
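The general shape of the problem and of a defensive fix, with purely illustrative names (not the actual lz4.c variables):

```c
#include <stddef.h>

/* Even `NULL + 0` is undefined behaviour in C, which is what UBSan reports.
 * Skipping the arithmetic when the base pointer is NULL avoids it.
 * Names are illustrative only. */
static const char* advanceBy(const char* base, size_t offset)
{
    if (base == NULL) return NULL;   /* avoid arithmetic on a null pointer */
    return base + offset;
}
```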
Define 0-argument functions like foo(void) instead of foo(), in order
to avoid a warning with -Wold-style-definition. This makes it easier
to embed lz4.c in projects that compile with -Werror
-Wold-style-definition.
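For illustration, using the same placeholder name as above:

```c
/* Before: old-style definition with an unspecified parameter list,
 * which triggers -Wold-style-definition. */
/* int foo() { return 0; } */

/* After: prototype form, explicitly taking no arguments. */
int foo(void) { return 0; }
```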
When the src pointer is in a very low memory area (< 64K),
the virtual reference to data in the dictionary
might end up at a very high memory address.
Since it's not a "real" memory address,
just a virtual one used to calculate distance,
this doesn't matter: only the distance matters.
The assert was too restrictive.
Fixed.
by enabling the fast decoder path.
Visual Studio requires a different set of macro constants to detect x86 / x64.
On my laptop, decoding speed on x64 went up from 3.12 to 3.45 GB/s.
32-bit is less impressive, though still favorable,
with speed increasing from 2.55 to 2.60 GB/s.
So both cases are now enabled.
Suggested by Bartosz Taudul (@wolfpld).
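A sketch of the widened detection, assuming `LZ4_FAST_DEC_LOOP` is the switch for the fast decoder path; MSVC predefines `_M_X64` / `_M_IX86` rather than the GCC-style `__x86_64__` / `__i386__`:

```c
/* Enable the fast decoder loop on x86/x64 for gcc/clang and Visual Studio
 * alike. Sketch only; the real condition in lz4.c may include extra checks. */
#if defined(__x86_64__) || defined(__i386__) \
 || defined(_M_X64)     || defined(_M_IX86)
#  define LZ4_FAST_DEC_LOOP 1
#endif
```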
Otherwise, the output from decoding LZ4-compressed input could be
platform dependent.
Also add a compile-time check to confirm the existing code's assumption
that, if <stdint.h> isn't used, then sizeof(int) == 4.
Updates #792
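One C89-compatible way to express that compile-time assumption (a sketch; the actual check may be written differently):

```c
/* Fails to compile (negative array size) when int is not exactly 4 bytes,
 * documenting the assumption made when <stdint.h> is unavailable. */
typedef char LZ4_assume_int_is_32bit[(sizeof(int) == 4) ? 1 : -1];
```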
PR#756 fixed the data corruption bug, but didn't clear `ip`. PR#760
fixed that off-by-one error, but missed the case where `ip == filledIp`,
which is harder for the fuzzers to find (it took 20 days instead of 1 day).
Verified this fixed the issue reported by OSS-Fuzz.
Credit to OSS-Fuzz.
When using an empty dictionary, we bail out of loading or attaching it in
ways that leave the working context in potentially slightly different states.
In particular, in some paths, we will cause the currentOffset to be non-zero,
while in others we would allow it to remain 0.
This difference in behavior is perfectly harmless, but in some situations, it
can produce slight differences in the compressed output. For sanity's sake,
we currently try to maintain a strict correspondence between the behavior of
the dict attachment and the dict loading paths. This patch restores them to
behaving identically.
This shouldn't have any negative side-effects, as far as I can tell. When
writing the dict attachment code, I tried to preserve zeroed currentOffsets
when possible, since they benchmarked as very slightly faster. However, the
case of attaching an empty dictionary is probably rare enough that it's
acceptable to degrade performance minutely in that corner case.
When the match is very long and found quickly, we can end up doing
matchLength * nbCompares iterations of chain swapping,
which can really slow down compression.
It is important to continue looking backwards if the current pattern
reaches `lowPrefixPtr`. If pattern detection doesn't go all the
way to the beginning of the pattern, or to its end, it
slows down the search instead of speeding it up.
The slow unit in `round_trip_stream_fuzzer` used to take 12 seconds
to run with -O3; now it takes 0.2 seconds.
Credit to OSS-Fuzz