Commit Graph

1840 Commits

Author SHA1 Message Date
Yann Collet
9d4eae59f0
Merge pull request #522 from svpv/refactorDec
Refactor dec
2018-04-27 17:22:06 -07:00
Yann Collet
1e6ca25af3
Merge pull request #520 from felixhandte/frame-dict-nits
Minor Fixes to Dictionary Preparation in LZ4 Frame
2018-04-27 13:52:30 -07:00
Yann Collet
de7b274d99 Merge branch 'dev' into BD_deterministic 2018-04-27 12:59:20 -07:00
Yann Collet
19b1267d44 fix lz4hc -BD non-determinism
related to chain table update
2018-04-27 12:46:49 -07:00
Yann Collet
72e99c8939 lz4hc: minor edits for clarity 2018-04-27 12:28:58 -07:00
Yann Collet
47d70e755e
Merge pull request #519 from lz4/fdParser
Faster decoding speed
2018-04-27 11:46:29 -07:00
W. Felix Handte
fefc40fc0a Avoid Possibly Redundant Table Clears When Loading HC Dict 2018-04-27 14:10:27 -04:00
W. Felix Handte
5076aa3e35 Remove Redundant LZ4_resetStream() Call 2018-04-27 13:59:02 -04:00
W. Felix Handte
7d11e34413 Rename LZ4F_applyCDict() -> LZ4F_initStream() 2018-04-27 13:57:10 -04:00
Yann Collet
d294dd7fc6 ensure favorDecSpeed is properly initialized
also:
- fix a potential malloc error
- proper use of ALLOC macro inside lz4hc
- update html API doc
2018-04-27 09:04:09 -07:00
Yann Collet
938e4849ae updated NEWS, in preparation for v1.8.2 2018-04-27 08:43:40 -07:00
Alexey Tourbin
d81a434c3d lz4.c: fixed the LZ4_decompress_fast_continue case
The change is very similar to that of the LZ4_decompress_safe_continue
case.  The only reason I make this a separate change is to ensure that
the fuzzer, after it's been enhanced, can detect the flaw in
LZ4_decompress_fast_continue, and that the change indeed fixes the flaw.
2018-04-27 15:10:12 +03:00
Alexey Tourbin
ce4e1389cc fuzzer.c: enabled ring buffer tests for decompress_fast
Ring buffer tests were performed only with LZ4_decompress_safe_continue,
leaving my buggy changes to LZ4_decompress_fast_continue undetected.
The tests are now replicated and performed in a similar manner for both
LZ4_decompress_safe_continue and LZ4_decompress_fast_continue (except
for the small buffer case where only one function can be tested,
because part of the dictionary is overwritten with the output).

I also updated function names in the messages (changed them to the
actual ones).  The error was reported for LZ4_decompress_safe(),
which I found misleading.
2018-04-27 07:27:10 +03:00
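
For context, here is a minimal sketch of the ring-buffer decoding pattern these tests exercise, using the public streaming API from lz4.h. The block and ring sizes are illustrative, and read_block()/write_out() are hypothetical I/O helpers, not part of liblz4:

    #include "lz4.h"

    #define BLOCK_MAX  4096
    #define RING_SIZE  (4 * BLOCK_MAX + 64 * 1024)   /* recent history stays addressable */

    extern int  read_block(char* cmp, int cmpCapacity);  /* hypothetical: returns compressed size, <= 0 at EOF */
    extern void write_out(const char* data, int size);   /* hypothetical: consumes decoded data */

    int decode_stream(void)
    {
        static char ring[RING_SIZE];
        char cmp[LZ4_COMPRESSBOUND(BLOCK_MAX)];
        int ringPos = 0;

        LZ4_streamDecode_t* const sd = LZ4_createStreamDecode();
        if (sd == NULL) return -1;
        LZ4_setStreamDecode(sd, NULL, 0);   /* start with no dictionary */

        for (;;) {
            int const cmpSize = read_block(cmp, (int)sizeof(cmp));
            if (cmpSize <= 0) break;

            /* wrap before the next block no longer fits; the most recently
               decoded 64 KB further back in the ring remains valid history */
            if (ringPos + BLOCK_MAX > RING_SIZE) ringPos = 0;

            {   int const decSize = LZ4_decompress_safe_continue(sd, cmp, ring + ringPos,
                                                                 cmpSize, BLOCK_MAX);
                if (decSize < 0) { LZ4_freeStreamDecode(sd); return -1; }
                write_out(ring + ringPos, decSize);
                ringPos += decSize;
            }
        }
        LZ4_freeStreamDecode(sd);
        return 0;
    }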
Yann Collet
0fb3a3b199 fixed a number of minor cast warnings 2018-04-26 18:08:28 -07:00
Yann Collet
00909b27b1
Merge pull request #518 from felixhandte/fix-517-dict-size-truncation
Limit Dictionary Size During LZ4F Decompression
2018-04-26 16:47:50 -07:00
Yann Collet
3eb3ed26e1
Merge pull request #516 from felixhandte/merge-dest-size
Merge _destSize Compress Variant into LZ4_compress_generic()
2018-04-26 16:40:33 -07:00
Yann Collet
5c7d3812d9 fasterDecSpeed can be triggered from cli with --favor-decSpeed 2018-04-26 15:49:32 -07:00
Yann Collet
3792d00168 favorDecSpeed feature can be triggered from lz4frame
and lz4hc.
2018-04-26 15:18:44 -07:00
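
As a usage sketch of the new option through lz4frame (assuming the favorDecSpeed field added to LZ4F_preferences_t for v1.8.2, and a dstCapacity of at least LZ4F_compressFrameBound(srcSize, &prefs)):

    #include <string.h>
    #include "lz4frame.h"

    /* Sketch: compress one frame while asking the parser to favor decompression
       speed. The option only has an effect at high compression levels. */
    size_t compress_favor_dec_speed(void* dst, size_t dstCapacity,
                                    const void* src, size_t srcSize)
    {
        LZ4F_preferences_t prefs;
        size_t r;
        memset(&prefs, 0, sizeof(prefs));
        prefs.compressionLevel = 10;   /* high-compression (optimal parser) territory */
        prefs.favorDecSpeed    = 1;    /* trade a little ratio for faster decoding */

        r = LZ4F_compressFrame(dst, dstCapacity, src, srcSize, &prefs);
        return LZ4F_isError(r) ? 0 : r;   /* 0 signals failure in this sketch */
    }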
W. Felix Handte
0858362f28 Merge _destSize Compress Variant into LZ4_compress_generic() 2018-04-26 18:01:08 -04:00
W. Felix Handte
2becd69bb1 Add _destSize() to Fullbench 2018-04-26 17:25:12 -04:00
W. Felix Handte
a2edeac201 Limit Dictionary Size During LZ4F Decompression
Fixes lz4/lz4#517.
2018-04-26 17:18:40 -04:00
Yann Collet
1148173c5d introduced ability to parse for decompression speed
triggered through an enum.

Now, it's still necessary to properly expose this capability
all the way up to the cli.
2018-04-26 13:01:59 -07:00
Alexey Tourbin
5603d30f81 lz4.c: fixed the LZ4_decompress_safe_continue case
The previous change broke decoding with a ring buffer.  That's because
I didn't realize that the "double dictionary mode" was possible, i.e.
that the decoding routine can look both at the first part of the
dictionary passed as prefix and the second part passed via dictStart+dictSize.

So this change introduces the LZ4_decompress_safe_doubleDict helper,
which handles this "double dictionary" situation.  (This is a bit of
a misnomer, there is only one dictionary, but I can't think of a better
name, and perhaps the designation is not all too bad.)  The helper is
used only once, in LZ4_decompress_safe_continue, it should be inlined
with LZ4_FORCE_O2_GCC_PPC64LE attached to LZ4_decompress_safe_continue.

(Also, in the helper functions, I change the dictStart parameter type
to "const void*", to avoid a cast when calling helpers.  In the helpers,
the upcast to "BYTE*" is still required, for compatibility with C++.)

So this fixes the case of LZ4_decompress_safe_continue, and I'm
surprised by the fact that the fuzzer is now happy and does not detect
a similar problem with LZ4_decompress_fast_continue.  So before fixing
LZ4_decompress_fast_continue, the next logical step is to enhance
the fuzzer.
2018-04-26 08:23:54 +03:00
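
At the application level, the "double dictionary" situation arises from a pattern like the following sketch: an explicit dictionary is installed once, and blocks are then decoded back-to-back into one contiguous buffer, so early blocks can reference both the growing prefix and the external dictionary. The inputs (blocks, blockSizes) are assumed to come from elsewhere:

    #include "lz4.h"

    int decode_with_dict(const char* dict, int dictSize,
                         const char* const* blocks, const int* blockSizes, int nbBlocks,
                         char* dst, int dstCapacity)
    {
        LZ4_streamDecode_t sd;
        int pos = 0;
        int i;
        LZ4_setStreamDecode(&sd, dict, dictSize);   /* history starts fully external */

        for (i = 0; i < nbBlocks; i++) {
            int const r = LZ4_decompress_safe_continue(&sd, blocks[i], dst + pos,
                                                       blockSizes[i], dstCapacity - pos);
            if (r < 0) return -1;
            pos += r;   /* decoded data accumulates as a growing prefix */
        }
        return pos;
    }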
Cyan4973
bd92689798 minor edit of block format
clarifying parsing restrictions near end of block.
2018-04-25 06:42:57 -07:00
Yann Collet
c67cc0e8dd
Merge pull request #514 from svpv/clarifyBlockFormat
lz4_Block_format.md: clarify on short inputs and restrictions
2018-04-25 06:13:08 -07:00
Alexey Tourbin
b4eda8d08f lz4.c: refactor the decoding routines
I noticed that LZ4_decompress_generic is sometimes instantiated with
identical set of parameters, or (what's worse) with a subtly different
sets of parameters.  For example, LZ4_decompress_fast_withPrefix64k is
instantiated as follows:

    return LZ4_decompress_generic(source, dest, 0, originalSize, endOnOutputSize,
                                  full, 0, withPrefix64k, (BYTE*)dest - 64 KB, NULL, 64 KB);

while the equivalent withPrefix64k call in LZ4_decompress_usingDict_generic
passes 0 for the last argument instead of 64 KB.  It turns out that there
is no difference in this case: if you change 64 KB to 0 KB in
LZ4_decompress_fast_withPrefix64k, you get the same binary code.

Moreover, because it's been clarified that LZ4_decompress_fast doesn't
check match offsets, it is now obvious that both of these fast/withPrefix64k
instantiations are simply redundant.  Exactly because LZ4_decompress_fast
doesn't check offsets, it serves well with any prefixed dictionary.

There's a difference, though, with LZ4_decompress_safe_withPrefix64k.
It also passes 64 KB as the last argument, and if you change that to 0,
as in LZ4_decompress_usingDict_generic, you get a completely different
binary code.  It seems that passing 0 enables offset checking:

    const int checkOffset = ((safeDecode) && (dictSize < (int)(64 KB)));

However, the resulting code seems to run a bit faster.  How come
enabling extra checks can make the code run faster?  Curiouser and
curiouser!  This needs extra study.  Currently I take the view that
the dictSize should be set to non-zero when nothing else will do,
i.e. when passing the external dictionary via dictStart.  Otherwise,
lowPrefix betrays just enough information about the dictionary.

    * * *

Anyway, with this change, I instantiate all the necessary cases as
functions with distinctive names, which also take fewer arguments and
are therefore less error-prone.  I also make the functions non-inline.
(The compiler won't inline the functions because they are used more than
once.  Hence I attach LZ4_FORCE_O2_GCC_PPC64LE to the instances while
removing it from the callers.)  The number of instances is now reduced
from 18 (safe+fast+partial+4*continue+4*prefix+4*dict+2*prefix64+forceExtDict)
down to 7 (safe+fast+partial+2*prefix+2*dict).  The size of the code is
not the only issue here.  Separate helper functions are much more
amenable to profile-guided optimization: it is enough to profile only
a few basic functions, while the other less-often used functions, such
as LZ4_decompress_*_continue, will benefit automatically.

This is the list of LZ4_decompress* functions in liblz4.so, sorted by size.
Exported functions are marked with a capital T.

$ nm -S lib/liblz4.so |grep -wi T |grep LZ4_decompress |sort -k2
0000000000016260 0000000000000005 T LZ4_decompress_fast_withPrefix64k
0000000000016dc0 0000000000000025 T LZ4_decompress_fast_usingDict
0000000000016d80 0000000000000040 T LZ4_decompress_safe_usingDict
0000000000016d10 000000000000006b T LZ4_decompress_fast_continue
0000000000016c70 000000000000009f T LZ4_decompress_safe_continue
00000000000156c0 000000000000059c T LZ4_decompress_fast
0000000000014a90 00000000000005fa T LZ4_decompress_safe
0000000000015c60 00000000000005fa T LZ4_decompress_safe_withPrefix64k
0000000000002280 00000000000005fa t LZ4_decompress_safe_withSmallPrefix
0000000000015090 000000000000062f T LZ4_decompress_safe_partial
0000000000002880 00000000000008ea t LZ4_decompress_fast_extDict
0000000000016270 0000000000000993 t LZ4_decompress_safe_forceExtDict
2018-04-25 13:18:06 +03:00
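
The pattern can be illustrated with toy names (this mirrors the shape of the refactoring, not lz4.c itself): one parameterized "generic" routine, plus non-inline named instances that the compiler specializes through constant propagation:

    #include <stddef.h>

    typedef enum { endOnOutputSize_e, endOnInputSize_e } endCondition_t;
    typedef enum { noDict_e, withPrefix64k_e, usingExtDict_e } dict_t;

    /* shared body; endCond/dictMode become compile-time constants in each instance */
    static int decode_generic(const char* src, char* dst, int srcSize, int outSize,
                              endCondition_t endCond, dict_t dictMode,
                              const char* dictStart, size_t dictSize)
    {
        (void)src; (void)dst; (void)srcSize; (void)outSize;
        (void)endCond; (void)dictMode; (void)dictStart; (void)dictSize;
        return 0;   /* real decoding loop omitted */
    }

    /* named, non-inline instances: fewer parameters, harder to misuse, and each
       one is a separate target for profiling and optimization attributes */
    int decode_safe(const char* src, char* dst, int srcSize, int dstCapacity)
    {
        return decode_generic(src, dst, srcSize, dstCapacity,
                              endOnInputSize_e, noDict_e, NULL, 0);
    }

    int decode_safe_withPrefix64k(const char* src, char* dst, int srcSize, int dstCapacity)
    {
        /* caller guarantees 64 KB of decoded history immediately precedes dst */
        return decode_generic(src, dst, srcSize, dstCapacity,
                              endOnInputSize_e, withPrefix64k_e, dst - 64*1024, (size_t)64*1024);
    }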
Yann Collet
cadf5cd5f9
Merge pull request #513 from felixhandte/integrate-static-frame-functions
Integrate Contents of `lz4frame_static.h` into `lz4frame.h`
2018-04-24 16:40:13 -07:00
Alexey Tourbin
ff9b4cf826 lz4_Block_format.md: clarify on short inputs and restrictions
It occurred to me that the formula "The last 5 bytes are always
literals", on the list of "assumptions made by the decoder", is
remarkably ambiguous.  Suppose the decoder is presented with 5 bytes.
Are they literals?  It may seem that the decoder degenerates
to memcpy on short inputs.  But of course the answer is no,
so the formula needs some clarification.

Parsing restrictions should be explained as well, otherwise they look
like arbitrary numbers.  The 5-byte restriction has been mentioned
recently in connection with the shortcut in LZ4_decompress_generic,
so I add that.  The second restriction is left to be explained
by the author.

I also took the liberty to explain that empty inputs "are either
unrepresentable or can be represented with a null byte".  This wording
may actually have some merit: it leaves for the implementation,
as opposed to the spec, to decide whether the encoder can compress
empty inputs, and whether the decoder can produce an empty output
(which the implementation should further clarify).
2018-04-25 02:39:28 +03:00
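
To make the end-of-block rule concrete, here is a hand-assembled block (for illustration only, not output of the reference encoder): the final sequence carries literals only, and a lone zero token is one way to represent an empty input.

    #include <stdio.h>
    #include "lz4.h"

    int main(void)
    {
        /* token 0x80: literal length 8, no match part; then exactly 8 literal bytes */
        const unsigned char block[] = { 0x80, 'l','i','t','e','r','a','l','s' };
        /* a single 0x00 token: 0 literals, no match part -> empty output */
        const unsigned char empty_block[] = { 0x00 };
        char out[16];
        int r, e;

        r = LZ4_decompress_safe((const char*)block, out,
                                (int)sizeof(block), (int)sizeof(out));
        printf("decoded %d bytes\n", r);   /* expected: 8 */

        e = LZ4_decompress_safe((const char*)empty_block, out,
                                1, (int)sizeof(out));
        printf("decoded %d bytes\n", e);   /* expected: 0 */
        return 0;
    }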
W. Felix Handte
27c6eec18d Multiply-Include Header to Check Guard Macro Correctness 2018-04-24 18:50:03 -04:00
W. Felix Handte
2dfc7cbe82 Change Over Includes in the Project 2018-04-24 16:22:28 -04:00
W. Felix Handte
2be3905fa4 Integrate lz4frame_static.h Declarations into lz4frame.h 2018-04-24 16:22:28 -04:00
Yann Collet
b2637ab7b2
Merge pull request #512 from lz4/HC_dict
In-place immutable dictionaries for LZ4HC
2018-04-24 13:18:40 -07:00
Yann Collet
8c6ca6283d
Merge pull request #511 from lz4/decFast
Fixed performance issue with LZ4_decompress_fast()
2018-04-24 11:25:57 -07:00
Yann Collet
c92df76361
Merge pull request #488 from felixhandte/hc-dict-ctx
Use Dictionary In-Place in HC Mode
2018-04-24 10:49:41 -07:00
W. Felix Handte
5ed1463bf4 Remove Debug Log Statements 2018-04-24 11:58:51 -04:00
W. Felix Handte
db9deb7b74 Remove the Framebench Tool 2018-04-24 11:58:51 -04:00
W. Felix Handte
13271a88d7 Revert Stream Size Const to Correct Value 2018-04-24 11:55:53 -04:00
Yann Collet
092cb77597
Merge pull request #504 from baruchsiach/static-only-support
lib: allow to disable shared libraries
2018-04-23 23:44:04 -07:00
Cyan4973
44bff3fd3b re-ordered parentheses
to avoid mixing && and &
as suggested by @terrelln
2018-04-23 19:26:02 -07:00
Yann Collet
0c2ae72ba8
Merge pull request #507 from lz4/clangPerf
fixed lz4_fast clang performance
2018-04-23 15:55:56 -07:00
Cyan4973
644b7bd2b6 fixed minor declaration issue with clang on msys 2018-04-23 15:52:44 -07:00
Cyan4973
cd0663456f disable shortcut for LZ4_decompress_fast()
improving speed
2018-04-23 15:47:08 -07:00
Cyan4973
bd06fde104 fullbench compiled without assert()
to better reflect release speed
2018-04-23 15:42:27 -07:00
Yann Collet
57cc7daf22
Merge pull request #510 from terrelln/bug-fix
Fix input size validation edge cases
2018-04-23 15:28:19 -07:00
Nick Terrell
672799e814 Fix compilation error and assert. 2018-04-23 14:21:02 -07:00
Nick Terrell
bb83cad98f Fix input size validation edge cases
The bug is a read up to 2 bytes past the end of the buffer.
There are three cases for this bug, one for each test case added.

* An empty input causes `token = *ip++` to read one byte too far.
* A one byte input with `(token >> ML_BITS) == RUN_MASK` causes
  one extra byte to be read without validation. This could be
  combined with the first bug to cause 2 extra bytes to be read.
* The case pointed out in issue #508, where `ip == iend` at the
  beginning of the loop after taking the shortcut.

Benchmarks show no regressions on clang or gcc-7 on both my mac
and devserver.

Fixes #508.
2018-04-23 13:34:18 -07:00
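
A rough sketch of where the guards for these three cases belong, in a simplified token-parsing loop (schematic only, not the actual LZ4_decompress_generic control flow):

    #include <stddef.h>

    #define RUN_MASK 15

    /* returns 0 on a well-formed block, -1 on truncated input */
    int parse_sequences(const unsigned char* ip, size_t srcSize)
    {
        const unsigned char* const iend = ip + srcSize;

        for (;;) {
            size_t length;
            /* cases 1 and 3: never read a token at or past the end of the input */
            if (ip >= iend) return -1;
            length = (size_t)(*ip++ >> 4);

            if (length == RUN_MASK) {
                unsigned s;
                do {
                    /* case 2: every additional length byte needs its own bound check */
                    if (ip >= iend) return -1;
                    s = *ip++;
                    length += s;
                } while (s == 255);
            }

            /* the literal run must fit inside the remaining input */
            if (length > (size_t)(iend - ip)) return -1;
            ip += length;
            if (ip == iend) return 0;   /* last sequence carries literals only */

            if ((size_t)(iend - ip) < 2) return -1;
            ip += 2;   /* offset bytes; match-length extension and copy omitted */
        }
    }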
Yann Collet
996d211aca
Merge pull request #509 from svpv/clarifyFastRisks
lz4.h: clarify the risks of using LZ4_decompress_fast()
2018-04-22 19:30:24 -07:00
Alexey Tourbin
ab06ef97bb lz4.h: clarify the risks of using LZ4_decompress_fast()
The notes about "security guarantee" and "malicious inputs" seemed
a bit non-technical to me, so I took the liberty to tone them down
and instead describe the actual risks in technical terms.  Namely,
the function never writes past the end of the output buffer, so
a direct hostile takeover (resulting in arbitrary code execution
soon after the return from the function) is not possible.  However,
the application can crash because of reads from unmapped pages.

I also took the liberty to describe what I believe is the only sensible
usage scenario for the function: "This function is only usable if the
originalSize of uncompressed data is known in advance," etc.
2018-04-23 02:13:49 +03:00
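
A minimal sketch of that scenario: the decompressed size is stored out of band by a trusted producer, so the caller can size the destination exactly before calling LZ4_decompress_fast(); hostile or corrupted input remains out of scope for this function:

    #include "lz4.h"

    /* decode a trusted payload whose original size is known in advance;
       dst must hold at least originalSize bytes; returns the number of
       compressed bytes consumed, or < 0 on error (malformed input can
       also crash: there is no read-bound guarantee) */
    int decode_trusted(const char* compressed, char* dst, int originalSize)
    {
        return LZ4_decompress_fast(compressed, dst, originalSize);
    }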
Cyan4973
d1f21883d6 fixed incorrect comment 2018-04-21 00:11:51 -07:00
Yann Collet
a8a5dfd426 fixed clang performance in lz4_fast
The simple change from
`matchIndex+MAX_DISTANCE < current`
towards
`current - matchIndex > MAX_DISTANCE`

is enough to generate a 10% performance drop under clang.
Quite massive.
(I missed it, as my eyes were concentrated on gcc performance at that time.)

The second version is more robust, because it also survives a situation where
`matchIndex > current`
due to overflows.

The first version requires that matchIndex not overflow.
Hence `assert()` conditions were added.

The only case where this can happen is dictCtx compression, when
the dictionary context is not initialized before loading the dictionary.
So it's enough to always initialize the context while loading the dictionary.
2018-04-20 18:09:51 -07:00
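
A small demonstration of the robustness argument, assuming 32-bit unsigned indices as in the match finder (MAX_DISTANCE stands in for the 64 KB window; the values are made up):

    #include <stdio.h>
    #include <stdint.h>

    #define MAX_DISTANCE 65535u   /* stand-in for LZ4's 64 KB match window */

    int main(void)
    {
        uint32_t current    = 1000;
        uint32_t matchIndex = 1010;   /* invalid: "ahead" of current, e.g. a stale dictCtx entry */

        /* addition form: 1010 + 65535 = 66545, not < 1000 -> bogus match NOT filtered out */
        int tooFar_v1 = (matchIndex + MAX_DISTANCE < current);
        /* subtraction form: 1000 - 1010 wraps to a huge value -> correctly flagged as too far */
        int tooFar_v2 = ((uint32_t)(current - matchIndex) > MAX_DISTANCE);

        printf("addition form rejects: %d, subtraction form rejects: %d\n",
               tooFar_v1, tooFar_v2);
        return 0;
    }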