* Make arguments optional and add --dict argument
* Remove accidental print statement
* Change to more likely scenario for dictionary compression benchmark
Super blocks must never violate the zstd block bound of input_size + ZSTD_blockHeaderSize. Individual sub-blocks may, but the super block as a whole must not. If the super block violates the block bound, we are liable to violate ZSTD_compressBound(), which we must not do. Whenever a super block would violate the block bound, we instead emit an uncompressed block.
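For illustration, a rough sketch of what the guard amounts to, as a plain size comparison (the helper name and surrounding code are hypothetical, not the actual zstd internals; only the 3-byte block header size comes from zstd):

```c
#include <stddef.h>

/* zstd block headers are 3 bytes; the real constant lives in zstd_internal.h */
#define ZSTD_BLOCKHEADERSIZE 3

/* Hypothetical helper: a super block is acceptable only if its total size
 * stays within srcSize + blockHeaderSize. When it does not, the compressor
 * falls back to a single raw (uncompressed) block, which keeps the frame
 * within ZSTD_compressBound(). */
static int superBlockWithinBound(size_t superBlockSize, size_t srcSize)
{
    return superBlockSize <= srcSize + ZSTD_BLOCKHEADERSIZE;
}
```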
This increases latency, because the decoder has to wait for the single uncompressed block to arrive in full. I fix this by enabling streaming of uncompressed blocks, so the latency of an uncompressed block drops to 1 byte. This doesn't reduce the latency of the buffer-less API, but I don't think we really care.
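As a minimal sketch of what 1-byte decompression latency looks like from the public streaming API (this is not the actual zstreamtest case, and whether output really appears after each input byte depends on the data producing raw blocks and on the zstd version):

```c
#include <stdio.h>
#include <stdlib.h>
#include <zstd.h>

int main(void)
{
    /* Incompressible-looking input, so the compressor is likely to store it
     * as raw (uncompressed) blocks. */
    unsigned char src[1024];
    for (size_t i = 0; i < sizeof(src); ++i) src[i] = (unsigned char)(rand() & 0xFF);

    unsigned char compressed[2048];
    size_t const cSize = ZSTD_compress(compressed, sizeof(compressed), src, sizeof(src), 1);
    if (ZSTD_isError(cSize)) return 1;

    ZSTD_DStream* const dstream = ZSTD_createDStream();
    if (dstream == NULL) return 1;
    ZSTD_initDStream(dstream);

    /* Feed the compressed frame one byte at a time and watch when output
     * first becomes available. With streamed raw blocks the decoder can emit
     * data as each content byte arrives instead of waiting for the whole block. */
    unsigned char out[1024];
    size_t totalOut = 0;
    for (size_t i = 0; i < cSize; ++i) {
        ZSTD_inBuffer input = { compressed + i, 1, 0 };
        ZSTD_outBuffer output = { out, sizeof(out), 0 };
        size_t const ret = ZSTD_decompressStream(dstream, &output, &input);
        if (ZSTD_isError(ret)) return 1;
        if (totalOut == 0 && output.pos > 0)
            printf("first output after %zu input bytes\n", i + 1);
        totalOut += output.pos;
    }
    printf("decompressed %zu of %zu bytes\n", totalOut, sizeof(src));

    ZSTD_freeDStream(dstream);
    return 0;
}
```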
* I added a test case that verifies that the decompression has 1 byte latency.
* I rely on the existing zstreamtest / fuzzer / libfuzzer regression tests for correctness. During development I had several correctness bugs, and these tests caught them easily.
* The added assert that the super block doesn't violate the block bound will help us discover any conditions I missed (though I think I got them all).
Credit to OSS-Fuzz.
When building zstd under Cygwin or MSYS2 with -std=c99, the build would fail because
of an undefined fileno()/_fileno(), which is used by the IS_CONSOLE() macro.
When building with -std=c99, which is the default for the cmake build (gcc otherwise
defaults to a GNU dialect, which implies POSIX), including unistd.h won't define
_POSIX_VERSION, and the other headers won't expose the POSIX API either.
To fix this, define _POSIX_C_SOURCE with the POSIX version we want before including
unistd.h, so that _POSIX_VERSION is set to the version provided by the system.
Since Cygwin/MSYS2 simply follow POSIX, we can also remove their special cases for
defining IS_CONSOLE().
For completeness, also explicitly include stdio.h, which is what actually declares fileno().
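A minimal sketch of the resulting include order (the chosen _POSIX_C_SOURCE value is illustrative; the IS_CONSOLE() definition mirrors the POSIX path zstd already uses):

```c
/* Request the POSIX API before any system header is included; under a strict
 * -std=c99 build nothing is exposed otherwise. The exact version value here
 * is an example, not necessarily the one used in the zstd sources. */
#ifndef _POSIX_C_SOURCE
#  define _POSIX_C_SOURCE 200112L
#endif

#include <unistd.h>   /* now defines _POSIX_VERSION and declares isatty() */
#include <stdio.h>    /* declares fileno() */

#ifdef _POSIX_VERSION
#  define IS_CONSOLE(stdStream) isatty(fileno(stdStream))
#else
#  define IS_CONSOLE(stdStream) 0
#endif
```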
Tested with the normal Makefile and the cmake build under MSYS2 and Cygwin.