* The algorithm would bail out as soon as it found one epoch that
contained no new segments. Change it so it has to fail at least
10 times in a row (10 for fastcover, 10-100 for cover).
* The algorithm uses the `maxDict` size to decide the epoch size.
When this size is absurdly large, it causes tiny epochs. Lower-bound
the epoch size at 10x the segment size, and warn the user that
their training set is too small (see the sketch below).

Fixes #1554
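
A minimal sketch of both changes; all identifiers here (`buildDict`, `runEpoch`, `kMaxFailures`, ...) are hypothetical, not zstd's actual internals:

```c
#include <stddef.h>
#include <stdio.h>

#define kMaxFailures 10  /* assumed: 10 for fastcover, 10-100 for cover */

/* Stub for one training epoch: scans epochSize bytes of samples and
 * returns how many bytes it added to the dictionary (0 = nothing new). */
static size_t runEpoch(size_t epochSize) { (void)epochSize; return 0; }

static size_t buildDict(size_t maxDictSize, size_t segmentSize,
                        size_t totalSamplesSize, size_t nbEpochs)
{
    size_t epochSize = totalSamplesSize / nbEpochs;
    /* Lower-bound the epoch size at 10x the segment size: an absurdly
     * large maxDict would otherwise shrink epochs to near nothing. */
    if (epochSize < 10 * segmentSize) {
        epochSize = 10 * segmentSize;
        fprintf(stderr, "warning: training set too small for maxDict size\n");
    }
    size_t dictSize = 0;
    size_t failures = 0;  /* consecutive epochs with no new segments */
    while (dictSize < maxDictSize) {
        size_t const added = runEpoch(epochSize);
        if (added == 0) {
            if (++failures >= kMaxFailures) break;  /* bail only after a streak */
            continue;
        }
        failures = 0;  /* any progress resets the streak */
        dictSize += added;
    }
    return dictSize;
}
```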
* After loading a dictionary, only create the cdict once we've started the
compression job. This allows the user to pass the dictionary before they
set other settings, and is in line with the rest of the API (see the usage
sketch after this list).
* Add tests that mix the 3 dictionary loading APIs.
* Add extra tests for `ZSTD_CCtx_loadDictionary()`.
* The first 2 tests added fail before this patch.
* Run the regression test suite.
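
A usage sketch of what this enables (`compress_with_dict` is an illustrative helper, not part of the library): the dictionary can now be passed before the other settings.

```c
#include <zstd.h>

/* Error checks are abbreviated; every call below returns a size_t
 * that should be checked with ZSTD_isError(). */
size_t compress_with_dict(void* dst, size_t dstCapacity,
                          const void* src, size_t srcSize,
                          const void* dict, size_t dictSize)
{
    ZSTD_CCtx* const cctx = ZSTD_createCCtx();
    /* Loading the dictionary first no longer pins the parameters... */
    ZSTD_CCtx_loadDictionary(cctx, dict, dictSize);
    /* ...so settings applied afterwards still take effect for this job. */
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 19);
    size_t const result = ZSTD_compress2(cctx, dst, dstCapacity, src, srcSize);
    ZSTD_freeCCtx(cctx);
    return result;
}
```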
The order in which you set parameters in the advanced API is not supposed to
matter. However, once you call `ZSTD_CCtx_refCDict()`, the compression
parameters cannot be changed. Remove that restriction, and document which
parameters are used when using a CDict.
If the CCtx is in dictionary mode, then the CDict's parameters are used.
If the CCtx is not in dictionary mode, then its requested parameters are
used.
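
A sketch of the ordering this now permits, under the rule above (the function name is a placeholder):

```c
#include <zstd.h>

/* Illustrative only; cctx and cdict are assumed to be valid. */
void set_params_after_refcdict(ZSTD_CCtx* cctx, const ZSTD_CDict* cdict)
{
    ZSTD_CCtx_refCDict(cctx, cdict);
    /* Previously an error; now merely recorded. While the CDict is
     * referenced, the CDict's parameters are used for compression. */
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_windowLog, 23);
    /* Drop back to no-dictionary mode: the requested parameters
     * (including the windowLog above) apply again. */
    ZSTD_CCtx_refCDict(cctx, NULL);
}
```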
As documented in `zstd.h`, `ZSTD_decompressBound` returns `ZSTD_CONTENTSIZE_ERROR`
if an error occurs (not `ZSTD_CONTENTSIZE_UNKNOWN`). This is consistent with
the error checking made in `ZSTD_decompressBound`, particularly at line 545.
Introduces a new utility function `ZSTD_findFrameCompressedSize_internal`, which
is equivalent to `ZSTD_findFrameCompressedSize`, but accepts an additional output
parameter `bound` that computes an upper bound for the decompressed size of the frame.
The new API function is named `ZSTD_decompressBound` to be consistent with
`ZSTD_compressBound` (the inverse operation). Clients will now be able to compute an
upper bound on the decompressed size of their compressed payloads instead of
guessing a large size.
Implements https://github.com/facebook/zstd/issues/1536.
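
A usage sketch, assuming `ZSTD_decompressBound` is exposed behind `ZSTD_STATIC_LINKING_ONLY` (`decompress_bounded` is an illustrative helper, not part of the library):

```c
#define ZSTD_STATIC_LINKING_ONLY  /* ZSTD_decompressBound is an advanced API */
#include <zstd.h>
#include <stdlib.h>

/* Allocate the destination from the bound instead of guessing.
 * Returns the decompressed size, or (size_t)-1 on failure. */
size_t decompress_bounded(void** dst, const void* src, size_t srcSize)
{
    unsigned long long const bound = ZSTD_decompressBound(src, srcSize);
    if (bound == ZSTD_CONTENTSIZE_ERROR) return (size_t)-1;  /* invalid frame(s) */
    *dst = malloc((size_t)bound);
    if (*dst == NULL) return (size_t)-1;
    /* Check the result with ZSTD_isError() before using *dst. */
    return ZSTD_decompress(*dst, (size_t)bound, src, srcSize);
}
```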