There was a flaw in the formula
which compared literal cost with match cost:
at a given position,
a non-empty run of literals is going to be part of the next sequence,
while if the position ends a previous match and immediately starts another match,
the next sequence will have a litLength of zero.
A litLength of zero still has a non-zero cost.
It follows that the cost of literals should be compared to the match cost plus the cost of litLength==0.
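In sketch form (the cost functions below are illustrative names, not the actual zstd internals):

    /* a match ending at this position forces the next sequence
     * to start with litLength==0, which is not free :
     * account for it on the match side of the comparison */
    U32 const literalsPrice = literalsCost(literals, litLength);
    U32 const matchPrice    = matchCost(matchLength, offset) + litLengthCost(0);
    if (literalsPrice <= matchPrice) { /* the literal path wins */ }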
Not doing so gave a structural advantage to matches, which were selected more often than they should have been.
I believe that's what led to the creation of the strange heuristic which added a complex cost to matches:
the heuristic was actually compensating for this bias.
It was probably created through multiple trials, settling for the best outcome on a given scenario (I suspect silesia.tar).
The problem with this heuristic is that it's hard to understand,
and unfortunately, any future change in the parser would impact how it should be calculated and what effect it has.
The "proper" formula makes it possible to remove this heuristic.
Now, the problem is: in a head-to-head comparison, the new formula is sometimes better, sometimes worse.
Note that all differences are small (< 0.01 ratio).
In general, the newer formula is better for smaller files (for example, calgary.tar and enwik7).
I suspect that's because starting statistics are pretty poor (another area for improvement).
However, for silesia.tar specifically, it's worse at level 22 (while being better at level 17, so even the compression level has an impact...).
It's a pity that zstd -22 gets worse on silesia.tar.
That being said, I like that the new code gets rid of strange variables,
which were introducing complexity for any future evolution (with faster variants in mind).
Therefore, in spite of this detrimental side effect, I tend to be in favor of it.
Fixes issue where, when `zstd --format=lz4` is fed an input larger than 128KB,
the read overruns the input buffer. This changes Zstd to use LZ4 with chained
64KB blocks. This is technically a breaking change in that some third party
LZ4 implementations may not support linked blocks. However, progress should not
be allowed to be stopped by such petty concerns as backwards compatibility!
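For illustration, chained (linked) 64KB blocks with the LZ4 streaming API look roughly like this hedged sketch (framing, i.e. block size headers, is omitted for brevity):

    #include <stdio.h>
    #include <lz4.h>

    /* compress `src` as a chain of 64KB blocks : each block may reference
     * data from the previous block, which is what "linked blocks" means */
    static void compressChained(const char* src, size_t srcSize, FILE* out)
    {
        LZ4_stream_t* const stream = LZ4_createStream();
        char dst[LZ4_COMPRESSBOUND(64 * 1024)];
        size_t pos = 0;
        while (pos < srcSize) {
            size_t const remaining = srcSize - pos;
            int const blockSize = (int)(remaining < 64*1024 ? remaining : 64*1024);
            int const cSize = LZ4_compress_fast_continue(stream,
                                  src + pos, dst, blockSize, (int)sizeof(dst), 1);
            fwrite(dst, 1, (size_t)cSize, out);
            pos += (size_t)blockSize;
        }
        LZ4_freeStream(stream);
    }

Since each block can reference the previous one as a dictionary, a decoder that only supports independent blocks cannot decode such a stream, hence the compatibility caveat above.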
adapt accuracy depending on value:
makes it possible to have higher accuracy for small values,
notably small compression speeds.
This capability is expected to be useful while modifying the optimal parser.
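A minimal illustration of the idea (the actual formatting thresholds in the benchmark code may differ):

    #include <stdio.h>

    /* print with more fractional digits when the value is small */
    static void printSpeed(double mbPerSec)
    {
        if (mbPerSec < 10.0)        printf("%6.3f MB/s", mbPerSec);
        else if (mbPerSec < 100.0)  printf("%6.2f MB/s", mbPerSec);
        else                        printf("%6.1f MB/s", mbPerSec);
    }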
Currently, all files are joined by default:
they are compressed separately but benchmarked together,
providing a single final result.
Benchmarking files separately makes it possible to accurately measure the difference for each file.
This is expected to be useful while tuning the optimal parser.
this version has the same speed as branch `opt`,
which is itself 5-10% slower than branch `dev`
(no identified reason).
It does not compress exactly the same as `opt` or `dev`,
maybe because it doesn't stop the search after repcodes,
leading to sometimes better, sometimes worse compression
(by a small margin).
warning: the _extDict path does not work for the time being.
This means that the benchmark module works,
but the file module will fail with large files (and high compression levels).
The objective is to fuse the _extDict path into the current one,
in order to have a single parser to maintain.
as per documentation on ZSTD_setPledgedSrcSize():
> If all data is provided and consumed in a single round,
> this value (pledgedSrcSize) is overriden by srcSize instead.
This wasn't applied before the compression level was transformed into compression parameters.
As a consequence, small inputs missed compression parameter adaptation.
It seems to work fine now: compression was compared with ZSTD_compress_advanced(),
and results were the same.
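In sketch form, the now-correct behavior, assuming `cctx`, `src`/`srcSize` and `dst`/`dstCapacity` are already set up (names from the then-current advanced API; ZSTD_compress_generic() was later renamed ZSTD_compressStream2()):

    ZSTD_CCtx_setPledgedSrcSize(cctx, pledgedSrcSize);
    ZSTD_inBuffer  input  = { src, srcSize, 0 };
    ZSTD_outBuffer output = { dst, dstCapacity, 0 };
    /* all data provided and consumed in a single ZSTD_e_end round :
     * pledgedSrcSize is overriden by srcSize, so a small input
     * now picks up adapted compression parameters */
    size_t const remaining = ZSTD_compress_generic(cctx, &output, &input, ZSTD_e_end);
    /* remaining == 0 means the frame is fully flushed */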
we lose a warning message:
when a job size is chosen < the minimum job size for multithreading,
it is automatically resized to the minimum size.
If this information is really useful, it should be present in zstd.h now.
removed the other 2 code paths (single-thread and ZSTDMT ones),
keeping only the new advanced API, for easier code coverage.
It should also fix the identified issue with Visual Studio,
which doesn't have ZSTD_NEWAPI defined.
UTIL_getFileSize() used to return zero on failure.
This made it impossible to distinguish a failure from a genuine empty file.
Both cases were coalesced.
Adding the UTIL_FILESIZE_UNKNOWN constant has many consequences on user code,
since in many places, `0` was assumed to mean "error".
This is no longer the case, and the error code must be actively checked.
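The new checking pattern, in sketch form (`filename` is illustrative):

    U64 const fileSize = UTIL_getFileSize(filename);
    if (fileSize == UTIL_FILESIZE_UNKNOWN) {
        /* failure : size could not be determined
         * (no longer conflated with a genuine empty file) */
    } else if (fileSize == 0) {
        /* genuine empty file */
    }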
There were multiple reasons stacked:
- Visual Studio uses a different code path, because ZSTD_NEWAPI is not defined
- fileio.c sends `0` as `pledgedSrcSize` to mean `ZSTD_CONTENTSIZE_UNKNOWN` (fixed)
- ZSTDMT_resetCCtx() interpreted `0` as "empty" instead of "unknown" (fixed)
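The fileio.c side of the fix presumably reduces to a mapping like this sketch (variable names illustrative):

    /* use the explicit constant : `0` would be misread by ZSTDMT as "empty" */
    unsigned long long const pledgedSrcSize =
        (fileSize == UTIL_FILESIZE_UNKNOWN) ? ZSTD_CONTENTSIZE_UNKNOWN : fileSize;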
when determining compression parameters
to compress one file only.
For multiple files, it still "bets" that files are going to be small.
There was also a bug recently introduced in ZSTD_CCtx_loadDictionary_advanced(),
making it unable to use pledgedSrcSize to determine compression parameters.
It's not good to mix the old and new APIs
ZSTD_resetCStream() doesn't just set pledgedSrcSize:
it also sets up the CCtx for single-thread compression.
The problem is, when 2+ threads are defined in cctx->requestedParams,
ZSTD_compress_generic() will want to start MT compression,
since initialization is supposed to have already happened (thanks to ZSTD_resetCStream()),
except that the underlying ZSTDMT_CCtx* object was never created,
resulting in a segfault.
This is an invalid construction
(correct one is to use ZSTD_CCtx_setPledgedSrcSize()).
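In sketch form (buffer setup omitted; `cctx`, `input`, `output` assumed already prepared):

    /* invalid : old-API init followed by new-API compression.
     * ZSTD_resetCStream() only sets up single-thread compression,
     * so with 2+ threads the ZSTDMT_CCtx* object is missing -> segfault */
    ZSTD_resetCStream(cctx, pledgedSrcSize);
    ZSTD_compress_generic(cctx, &output, &input, ZSTD_e_end);

    /* correct : stay within the new API */
    ZSTD_CCtx_setPledgedSrcSize(cctx, pledgedSrcSize);
    ZSTD_compress_generic(cctx, &output, &input, ZSTD_e_end);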
I haven't found a nice way to mitigate this impact if someone makes the same mistake.
At some point, removing the old API to keep only the new API within fileio.c will limit these risks.
srcSize is read and provided at each file, not at resource creation.
This used to be useful with the older API, because it could not re-adapt parameters between sessions.
At some point, it will be better to remove the old code, and only keep the new API.
It works fine for now.
fixes #874:
when a frame is not properly terminated by a "last block" signal,
zstd -d used to detect it immediately and error out.
This version will decode and flush the last block, and only then issue an error.
* Maximum window size in 32-bit mode is 1GB, since allocations for 2GB fail
on my Mac.
* Maximum window size in 64-bit mode is 2GB, since that is the largest
power of 2 that works with the overflow prevention.
* Allow `--long=windowLog` to set the window log, along with
`--zstd=wlog=#`. These options also set the window size during
decompression, but don't override `--memory=#` if it is set
(see the usage example after this list).
* Present a helpful error message when the window size is too large during
decompression.
* The long range matcher defaults to a hash log 7 less than the window log,
which keeps it at 20 for window log 27.
* Keep the default long range matcher window size and the default maximum
window size at 27 for the API and CLI.
* Add tests that use the maximum window size and hash size for compression
and decompression.
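A possible invocation exercising these options (file names illustrative):

# compress with a 512MB window (windowLog 29)
zstd --long=29 large_file
# decompression must be given the same window budget
zstd -d --long=29 large_file.zst
# or raise the decompression memory limit explicitly
zstd -d --memory=512MB large_file.zst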
Simple makefile change + quick typename change
Test:
make clean
make
# successfully produces binary without lz4 support
make clean
# with flags to pick up my lz4 build
make MOREFLAGS="-L/home/felixh/prog/lz4/lib -I/home/felixh/prog/lz4/lib"
# successfully produces binary with lz4 support
echo "TEST TEST TEST THIS IS A TEST STRING PLEASE TEST THIS PLEASE OK THANK YOU" | \
./lz4/lz4 | \
LD_LIBRARY_PATH=/home/felixh/prog/lz4/lib ./zstd/zstd -d
# successfully prints TEST TEST TEST THIS IS A TEST STRING PLEASE TEST THIS PLEASE OK THANK YOU
for easier invocation.
- no longer exposes the frequency timer:
it's either useless, or stored internally in a static variable (init is only necessary once).
- UTIL_getTime() provides its result by function return.
The timer used was only accurate up to 0.01 seconds; this one is accurate up to 1 ns.
It is a monotonic timer that measures real elapsed time, not CPU time.
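Typical usage after this change (a sketch; assumes the UTIL_getSpanTimeNano() helper from programs/util.h, which no longer needs a frequency argument):

    UTIL_time_t const clockStart = UTIL_getTime();
    workload();   /* placeholder for the code being measured */
    U64 const elapsedNs = UTIL_getSpanTimeNano(clockStart, UTIL_getTime());
    /* elapsedNs has nanosecond resolution, from a monotonic clock */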
file-information is dependent on decompression functions:
it should only be enabled when ZSTD_NODECOMPRESS is not set.
also: added a zstd-compress compilation test into `make shortest`
Doesn't speed-optimize this buffer-to-buffer scenario yet:
it still internally defers to the streaming implementation.
Also: fixed a long-standing bug in the ZSTDMT streaming API.
It makes it more difficult to directly cast the result of a function,
requiring the result to be stored in an intermediate variable first.
It does not necessarily help readability,
and this restriction can be difficult to overcome in some constructions,
like some macros.
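This presumably concerns a warning like gcc's -Wbad-function-cast (an assumption on my part); the pattern it enforces looks like:

    /* direct cast of a function result : triggers the warning
     * (someSizeTFunction() is an illustrative name) */
    unsigned const u1 = (unsigned)someSizeTFunction();
    /* accepted form : store into an intermediate variable first */
    size_t const r = someSizeTFunction();
    unsigned const u2 = (unsigned)r;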
also: fixed minor Visual Studio conversion warnings in datagencli.c
* upstream/dev: (305 commits)
added test for ZSTD_estimateCStreamSize()
changed variable name, for clarity
fixed ZSTD_estimateCStreamSize()
shortened ZSTD_createCStream_Advanced()
fixed symbols test
added ZSTD_estimateDStreamSize()
changed name frameParams into frameHeader
regroup memory usage function declarations
separated ZSTD_estimateCStreamSize() from ZSTD_estimateCCtxSize()
bumped version number
added ZSTD_estimateCDictSize() and ZSTD_estimateDDictSize()
Updated ZSTD_freeCCtx()
updated ZSTD_estimateCCtxSize()
Updated ZSTD_sizeof_CCtx()
merged CCtx and CStream as a single same object
cli : -d and -t do not stop after a failed decompression
added dev branch CircleCI badge
added dev branch Appveyor badge
keep dev branch status only
creates a binary archive without the `programs` directory
...
It inflates binary sizes, which is a negative for the Windows build.
It also makes it impossible to check whether 2 different source codes
nonetheless compile to the same binary,
since the checksums will differ, due to the integrated source code.
It now only takes compressionParameters as an argument.
This produces many changes throughout user code,
though hopefully they tend to be simple:
just provide the cParams part of the existing ZSTD_parameters.
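A migration sketch (only the cParams extraction matters; the other arguments are unchanged):

    /* previously : a full ZSTD_parameters was passed.
     * now : provide only its cParams member. */
    ZSTD_parameters const params = ZSTD_getParams(compressionLevel, estimatedSrcSize, dictSize);
    ZSTD_compressionParameters const cParams = params.cParams;  /* pass this to ZSTD_createCDict_advanced() */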
Some programs might depend on ZSTD_createCDict_advanced() to pass frame parameters.
This change will force them to revisit this strategy and fix it,
since frame parameters are effectively silently ignored in the current version.
makes it more explicit that it allocates a buffer
and that it's meant to be used for a dictionary.
Also: simplified the function a bit;
it now only works for dictionaries up to DICTSIZE_MAX.
now works with the `=` variant, which is the recommended one.
The old variant `--dictID #` still works, for compatibility with existing scripts.
The long-term objective is to remove the old variant.
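For example (paths illustrative):

zstd --train samples/* -o dictionary --dictID=12345   # recommended `=` form
zstd --train samples/* -o dictionary --dictID 12345   # legacy spaced form, still accepted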
* If zlib/lzma isn't in the usual spot, it won't be used,
even if `$CFLAGS` and `$LDFLAGS` add the location it is in.
* Update the test code snippets to not trigger any warnings.
zstd-internal was intended to be a helper target, but it doesn't help
at all; what it does in practice is force a useless rebuild of zstd every time
"make zstd" is invoked.
Fixes: 030ac243a0 ("Changed Makefile to generate zstd with .gz support by default")