fix #1379
decodecorpus was generating one extraneous byte when `nbSeq==0`.
This is disallowed by the specification.
The reference decoder was just skipping the extraneous byte.
It is now stricter, and flags such a situation as an error.
Also slightly refactored the affected function.
Honestly, the formula calculating the variance should get a second review round:
it's not clear whether it's correct.
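For reference, a minimal sketch of the textbook two-pass sample variance, as a baseline to review against (`sampleVariance` is a hypothetical helper, not the formula in question):

```c
#include <stddef.h>

/* Hypothetical reference: textbook two-pass sample variance,
 * with Bessel's correction (divide by n-1). */
static double sampleVariance(const double* x, size_t n)
{
    double mean = 0.0, var = 0.0;
    size_t i;
    if (n < 2) return 0.0;
    for (i = 0; i < n; i++) mean += x[i];
    mean /= (double)n;
    for (i = 0; i < n; i++) var += (x[i] - mean) * (x[i] - mean);
    return var / (double)(n - 1);
}
```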
We could allocate up to 2^28 bytes of memory when using 2 threads with
window log = 24. Now, we limit it to 2^26 bytes of memory when not running
big tests.
I chose max window log = 22 since that is the maximum source size when
big tests are disabled. Since memory use scales roughly linearly with the
window size, lowering the window log from 24 to 22 divides the worst case
by 4, from 2^28 down to 2^26 bytes. Hopefully this will be enough to
reduce or eliminate the test failures.
The experimental function ZSTD_compressBlock() is designed with very small data in mind,
for situations where saving the ~12 bytes of frame header can actually make a difference.
Some systems, though, may have to deal with small and large data entangled.
If the data is larger than a block (> 128 KB), compressBlock() cannot compress it in one round.
That's why it's possible to compress in multiple rounds,
producing a chain of compressed blocks.
Some users push this capability to the limit, encoding gigantic chains of blocks.
On crossing the 4 GB limit, an internal overflow occurs.
This fix moves the overflow-correction mechanism higher in the call chain,
so that it is also applied to gigantic chains of blocks.
Added a test case in fuzzer.c, which crashed before the fix and passes now.
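For illustration, a minimal sketch of the multi-round pattern described above (`compressAsBlockChain` is a hypothetical helper; error handling is trimmed):

```c
#define ZSTD_STATIC_LINKING_ONLY   /* the block-level API is experimental */
#include <zstd.h>

/* Hypothetical helper: compress `src` as a chain of blocks.
 * Each round handles at most ZSTD_BLOCKSIZE_MAX (128 KB). */
static size_t compressAsBlockChain(ZSTD_CCtx* cctx,
                                   void* dst, size_t dstCapacity,
                                   const void* src, size_t srcSize)
{
    size_t written = 0, consumed = 0;
    size_t const initErr = ZSTD_compressBegin(cctx, 3);
    if (ZSTD_isError(initErr)) return initErr;
    while (consumed < srcSize) {
        size_t const blockSize = (srcSize - consumed > ZSTD_BLOCKSIZE_MAX)
                               ? ZSTD_BLOCKSIZE_MAX : srcSize - consumed;
        size_t const cSize = ZSTD_compressBlock(cctx,
                                (char*)dst + written, dstCapacity - written,
                                (const char*)src + consumed, blockSize);
        if (ZSTD_isError(cSize)) return cSize;
        /* cSize == 0 means the block is not compressible: a real caller
         * must store it raw and use ZSTD_insertBlock() when decoding. */
        written += cSize;
        consumed += blockSize;
    }
    return written;
}
```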
* Updates CircleCI to use workflows.
We can now specify any number of test jobs to run in parallel.
* Switch the image to `buildpack-deps:trusty`, which is only 500 MB
instead of 7 GB; that saves ~7 minutes of download time when the image
isn't already cached on the host.
* Publish the source tarball and sha256sum as artifacts.
* If the `GITHUB_TOKEN` environment variable is set, we will also
add the tarball + sha256sum to the tagged release, after manual
approval.
Sometimes, it's necessary to test that a certain command fails, as expected.
Such a failure is actually a success, and must not stop the flow of tests.
Several tests were prefixed with `!` to invert their return code.
This does not work: in a `set -e` script, `!` suppresses errexit for that
command entirely, whatever its exit status, so it effectively makes the
tests pass no matter what.
Use the function die() instead, which is meant to trap unexpected successes
and transform them into errors.
The minimum supported compression level can now be probed using the new function ZSTD_minCLevel().
Also: redefined ZSTD_TARGETLENGTH_MIN/MAX for consistency,
and used the opportunity to bump the version number to v1.3.6.
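A minimal usage sketch, pairing the new function with the existing ZSTD_maxCLevel():

```c
#include <stdio.h>
#include <zstd.h>

int main(void)
{
    /* Probe the full range of supported compression levels at runtime. */
    printf("supported zstd levels: %d .. %d\n",
           ZSTD_minCLevel(), ZSTD_maxCLevel());
    return 0;
}
```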
tests/playTests.sh uses 'head -c' in a couple of tests to truncate the
last byte of a file. The '-c' option is non-portable (not in POSIX).
Instead use a wrapper around dd (truncateLastByte).
These bugs were pretty easy to trigger, by the way:
just start an extended paramgrill session
to find a compression table based on any sample,
and one would necessarily show up at some point.
The assert()s in paramgrill are not in the benchmark path.
They should remain active, as they don't impact measurements, and their runtime cost is insignificant.
* Minor fix
* Run non-optimized FASTCOVER 5 times in benchmark
* Merge fastCover into dictBuilder
* Fix mixed declaration issue
* Add fastcover to symbol.c
* Add fastCover.c and cover.h to build
* Change fastCover.c to fastcover.c
* Update benchmark to run FASTCOVER in dictBuilder
* Undo splitting fastcover_param into cover_param and f
* Remove convert param functions
* Assign f to parameter
* Add zdict.h to Makefile in lib
* Add cover.h to BUCK
* Cast 1 to U64 before shifting
* Remove trimming of zero freq head and tail in selectSegment and rebenchmark
* Remove f as a separate parameter of tryParam
* Read 8 bytes when d is 6
* Add trimming of zero-frequency head and tail
* Use best functions from COVER and remove trimming part (which leads to worse compression ratio after previous bugs were fixed)
* Add finalize= argument to FASTCOVER to specify percentage of training samples passed to ZDICT_finalizeDictionary
* Change nbDmer to always read 8 bytes even when d=6
* Add skip=# argument to allow skipping dmers in computeFrequency in FASTCOVER
* Update comments and benchmarking result
* Change default method of ZDICT_trainFromBuffer to ZDICT_optimizeTrainFromBuffer_fastCover
* Add dictType enum and fix bug about passing zParam when converting to coverParam
* Combine finalize and skip into a single parameter
* Update acceleration parameters and benchmark on 3 sample sets
* Change default splitPoint of FASTCOVER to 0.75 and benchmark first 3 sample sets
* Initialize variables outside of for loop in benchmark.c
* Update benchmark result for hg-manifest
* Remove cover.h from install-includes
* Add explanation of f
* Set default compression level for trainFromBuffer to 3
* Add assertion of fastCoverParams in DiB_trainFromFiles
* Add checkTotalCompressedSize function + some minor fixes
* Add test for multithreading fastCover
* Initialize segmentFreqs in every FASTCOVER_selectSegment and move mutex_unlock to end of COVER_best_finish
* Free segmentFreqs
* Initialize segmentFreqs before calling FASTCOVER_buildDictionary instead of in FASTCOVER_selectSegment
* Add FASTCOVER_MEMMULT
* Minor fix
* Update benchmarking result
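For reference, a minimal sketch of invoking the optimizer through the experimental fastCover API declared in zdict.h (`trainFastCover` is a hypothetical helper; buffer preparation is left to the caller, and the parameter values are illustrative):

```c
#define ZDICT_STATIC_LINKING_ONLY   /* the fastCover API is experimental */
#include <string.h>
#include <zdict.h>

/* Train a dictionary with FASTCOVER, letting the optimizer search d and k.
 * `samples` is the concatenation of all training samples, `sampleSizes`
 * their individual sizes. Returns the dictionary size, or an error code. */
static size_t trainFastCover(void* dictBuffer, size_t dictCapacity,
                             const void* samples, const size_t* sampleSizes,
                             unsigned nbSamples)
{
    ZDICT_fastCover_params_t params;
    memset(&params, 0, sizeof(params));   /* d==0, k==0 => optimize both */
    params.f = 20;                        /* log2 of the frequency array size */
    params.steps = 40;                    /* number of candidate k values */
    params.zParams.compressionLevel = 3;  /* default level per this change */
    return ZDICT_optimizeTrainFromBuffer_fastCover(
               dictBuffer, dictCapacity,
               samples, sampleSizes, nbSamples, &params);
}
```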
Per warnings from flawfinder: "Does not check for buffer overflows when
copying to destination [MS-banned] (CWE-120). Consider using snprintf,
strcpy_s, or strlcpy (warning: strncpy easily misused).".
Replaced calls to strcpy() and strcat() in `fileio.c` with calls taking a
specified size (`strncpy()` and `strncat()`).
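A minimal sketch of the bounded-copy pattern, with the usual strncpy() caveat (`safeCopy` is a hypothetical helper, not a function from fileio.c):

```c
#include <string.h>

/* Hypothetical helper illustrating the pattern: pass an explicit size,
 * and null-terminate manually, since strncpy() does not append '\0'
 * when the source fills the whole buffer. */
static void safeCopy(char* dst, size_t dstSize, const char* src)
{
    strncpy(dst, src, dstSize - 1);
    dst[dstSize - 1] = '\0';
}
```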
Tested the changes on OSX, Linux, Windows.
On OSX + Linux, changes were tested with ASAN. The following flags were
used: 'check_initialization_order=1:strict_init_order=1:detect_odr_violation=1:detect_stack_use_after_return=1'
To reproduce warning:
./flawfinder.py ./programs/fileio.c
Add different constraint types (decompression speed, compression memory, parameter constraints)
Separate search space by strategy + strategy selection
Memoize results
Real random restarts
Support multiple files
Support Dictionary inputs
Debug Macro for extra printing
Additional constraint checking
Minor fixes
more param parsing
Add Memory
Change paramVariation
work on feasibility
reformat bench
Changed Paramgrill to use bench.c benchmarking
customlevel macro
Printing Flag
Minor changes
Explicit casting
Makefile fix
casting, type fix
Printing Flag
Minor Changes
comments, helper fn's
The correct parameters are used at first, but once `ZSTD_resetCStream()` is
called, the default parameters (level 3) are used. Fix this by setting
`requestedParams` in the `ZSTD_initCStream*()` functions.
The added tests both fail before this patch and pass after.
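A sketch of the affected usage pattern (streaming loops and error checks elided):

```c
#define ZSTD_STATIC_LINKING_ONLY   /* ZSTD_resetCStream() is in the experimental section */
#include <zstd.h>

void example(void)
{
    ZSTD_CStream* const zcs = ZSTD_createCStream();
    ZSTD_initCStream(zcs, 19);     /* first frame compresses at level 19 */
    /* ... stream first frame ... */
    ZSTD_resetCStream(zcs, 0);     /* 0 == unknown pledged source size */
    /* ... second frame: should still use level 19,
     *     but silently dropped to level 3 before this fix ... */
    ZSTD_freeCStream(zcs);
}
```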
OpenBSD's port building infrastructure is able to build in a privilege
separated mode. It uses a privilege drop model. Regression tests fail in
this mode as xz/lzma is unable to set file group and errors out with:
xz: tmp.xz: Cannot set the file group: Operation not permitted
gmake[1]: *** [Makefile:307: zstd-playTests] Error 2
Actually it is not an xz/lzma error but a warning causing zstd's
regression test to fail. The proposed fix is to have xz/lzma not set the
exit status to 2 even if a condition worth a warning was detected (the -Q
flag).
https://github.com/facebook/zstd/pull/1124 fixes an issue with GNU/Hurd
being unable to write to /dev/zero. The implemented fix was writing to
/dev/random instead.
On OpenBSD, a regular user is unable to write to /dev/random because of
the permissions set on this device. The result is a failing regression test.
The proposed solution should work on all platforms.
[zstdmt] Fix jobsize bugs
* `ZSTDMT_serialState_reset()` should use `targetSectionSize`, not `jobSize` when sizing the seqstore.
Add an assert that checks that we sized the seqstore using the right job size.
* `ZSTDMT_compressionJob()` should check if `rawSeqStore.seq == NULL`.
* `ZSTDMT_initCStream_internal()` should not adjust `mtctx->params.jobSize` (clamping to MIN/MAX is okay).
Streaming decoders, such as ZSTD_decompressStream() or ZSTD_decompress_generic(),
may end up making no forward progress
(i.e., no byte read from input __and__ no byte written to output),
due to unusual parameter conditions,
such as providing an output buffer that is already full.
In such a case, the caller may be caught in an infinite loop,
calling the streaming decompression function again and again,
without making any progress.
This version detects such a situation and generates an error instead:
ZSTD_error_dstSize_tooSmall when the output buffer is full,
ZSTD_error_srcSize_wrong when the input buffer is empty.
The detection tolerates a number of attempts before triggering the error,
controlled by the macro constant ZSTD_NO_FORWARD_PROGRESS_MAX,
which is set to 16 by default, and can be re-defined at compilation time.
This tolerance accommodates potentially existing implementations
where such cases happen sporadically, like once or twice,
which is not dangerous (only infinite loops are),
without generating an error, hence without breaking those implementations.
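A sketch of a streaming-decompression loop affected by this change; with the new detection, a caller that never drains `out` now receives an error after ZSTD_NO_FORWARD_PROGRESS_MAX stalled calls instead of spinning forever (context and buffer setup assumed):

```c
#include <zstd.h>

/* dctx, src/srcSize and dst/dstCapacity are assumed to be set up. */
size_t decompressAll(ZSTD_DStream* dctx,
                     void* dst, size_t dstCapacity,
                     const void* src, size_t srcSize)
{
    ZSTD_inBuffer in = { src, srcSize, 0 };
    ZSTD_outBuffer out = { dst, dstCapacity, 0 };
    while (in.pos < in.size) {
        size_t const ret = ZSTD_decompressStream(dctx, &out, &in);
        if (ZSTD_isError(ret))
            return ret;        /* e.g. dstSize_tooSmall once `out` stays full */
        if (ret == 0) break;   /* frame fully decoded and flushed */
        /* A robust caller flushes `out` here when out.pos == out.size;
         * failing to do so is exactly the stall this patch detects. */
    }
    return out.pos;
}
```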
Separate syntheticTest and fileTableTest (now renamed as benchFiles)
Add incremental display to benchMem
Change to only iterMode for benchFunction
Make Synthetic test's compressibility configurable from cli (using -P#)
Removed a dedicated test from zstreamtest.
This test is now integrated within --newapi,
which dynamically switches between the 2 modes randomly.
The main outcome is reduced testing time.
- Remove global variables
- Remove gv setting functions
- Add advancedParams struct
- Add defaultAdvancedParams()
- Change return type of benchFiles
- Change cli to use new interface
- Changed error returns to own struct value
- Change default compression benchmark to use decompress_generic
- Add CustomBench function
- Add Documentation for new functions
There were 2 competing sets of debug functions
within zstd_internal.h and bitstream.h.
They were mostly duplicates, and required care to avoid messing with each other.
There is now a single implementation, shared by both.
Significant change:
the macro variable ZSTD_DEBUG no longer exists;
it has been replaced by DEBUGLEVEL,
which required modifying several source files.
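A sketch of the resulting usage, assuming the shared implementation lives in a common debug header and keeps the DEBUGLOG() macro:

```c
/* DEBUGLEVEL is set at build time, e.g. CPPFLAGS=-DDEBUGLEVEL=2;
 * statements above the configured level compile away entirely. */
#define DEBUGLEVEL 2
#include "debug.h"   /* assumed shared header, replacing the duplicates */

static void traceExample(void)
{
    DEBUGLOG(2, "emitted when DEBUGLEVEL >= 2");
    DEBUGLOG(5, "compiled out at DEBUGLEVEL 2");
}
```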
The result of ZSTD_compress_advanced()
is different from that of ZSTD_compress_generic()
when using negative compression levels,
because the disabling of Huffman compression is not passed in the parameters.
When parameters are "equivalent",
the context is re-used in continue mode,
hence the needed workspace size is not recalculated.
This incidentally also evades the size-down check and action.
This patch intercepts "continue mode"
so that the size-down check and action are actually triggered.