https://github.com/facebook/zstd/pull/1124 fixed an issue where GNU/Hurd is unable to write to /dev/zero. The implemented fix was to write to /dev/random instead.
On OpenBSD, however, a regular user is unable to write to /dev/random because of the permissions set on that device, so the regression test fails there.
The proposed solution should work on all platforms.
The original conditions only worked when both the static and shared variants were built, resulting in an inconsistency between the programs and the library: the program was built with MT support enabled, but the library was not. That led to error 11 'unsupported parameter' when compressing anything with the command line tool.
Changing the AND condition to `ZSTD_MULTITHREAD_SUPPORT AND (ZSTD_BUILD_SHARED OR ZSTD_BUILD_STATIC)` makes CMake stop complaining that one of the targets wasn't built. This commit works for any case.
[zstdmt] Fix jobsize bugs
* `ZSTDMT_serialState_reset()` should use `targetSectionSize`, not `jobSize`, when sizing the seqstore.
  Add an assert that checks that the seqstore was sized using the right job size.
* `ZSTDMT_compressionJob()` should check if `rawSeqStore.seq == NULL`.
* `ZSTDMT_initCStream_internal()` should not adjust `mtctx->params.jobSize` (clamping to MIN/MAX is okay); see the sketch below.
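As an aside, here is a minimal illustration of that last point, with hypothetical names and bounds (this is not zstdmt source): the requested job size may be clamped into a valid range, but should not otherwise be rewritten.

```c
#include <stddef.h>

/* Hypothetical bounds, for illustration only. */
#define JOB_SIZE_MIN ((size_t)512 * 1024)
#define JOB_SIZE_MAX ((size_t)512 * 1024 * 1024)

/* Clamp the caller's requested job size into [JOB_SIZE_MIN, JOB_SIZE_MAX]
 * without otherwise adjusting it. */
static size_t clampJobSize(size_t requested)
{
    if (requested < JOB_SIZE_MIN) return JOB_SIZE_MIN;
    if (requested > JOB_SIZE_MAX) return JOB_SIZE_MAX;
    return requested;
}
```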
Streaming decoders, such as ZSTD_decompressStream() or ZSTD_decompress_generic(),
may end up making no forward progress
(i.e. no byte read from input __and__ no byte written to output),
due to unusual parameter conditions,
such as providing an output buffer that is already full.
In such a case, the caller may be caught in an infinite loop,
calling the streaming decompression function again and again
without making any progress.
This version detects such a situation and generates an error instead:
ZSTD_error_dstSize_tooSmall when the output buffer is full,
ZSTD_error_srcSize_wrong when the input buffer is empty.
The detection tolerates a number of attempts before triggering an error,
controlled by the ZSTD_NO_FORWARD_PROGRESS_MAX macro constant,
which is set to 16 by default and can be redefined at compilation time.
This tolerance accommodates potentially existing implementations
where such cases happen sporadically, like once or twice,
which is not dangerous (only infinite loops are),
without generating an error, hence without breaking those implementations.
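For illustration, here is a minimal sketch (not taken from the library or its tests) of the scenario described above: the caller hands the decoder an output buffer that is already full (pos == size) and loops on ZSTD_decompressStream(). Previously the loop would never exit, since each call read no input and wrote no output; with this change it terminates with an error after ZSTD_NO_FORWARD_PROGRESS_MAX fruitless calls.

```c
#include <stdio.h>
#include <zstd.h>

/* Sketch: `src`/`srcSize` are assumed to hold a valid zstd frame. */
static void decode_with_full_output(const void* src, size_t srcSize)
{
    ZSTD_DStream* const dstream = ZSTD_createDStream();
    char dstBuffer[64];
    ZSTD_inBuffer  input  = { src, srcSize, 0 };
    /* pos == size : the output buffer is already full,
     * so no forward progress is possible. */
    ZSTD_outBuffer output = { dstBuffer, sizeof(dstBuffer), sizeof(dstBuffer) };

    ZSTD_initDStream(dstream);
    for (;;) {
        size_t const ret = ZSTD_decompressStream(dstream, &output, &input);
        if (ZSTD_isError(ret)) {
            /* New behavior: after ZSTD_NO_FORWARD_PROGRESS_MAX attempts
             * without progress, this reports ZSTD_error_dstSize_tooSmall
             * (or ZSTD_error_srcSize_wrong for an empty input). */
            printf("decompression error: %s\n", ZSTD_getErrorName(ret));
            break;
        }
        if (ret == 0) break;  /* frame fully decoded */
    }
    ZSTD_freeDStream(dstream);
}
```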