as suggested in #1441.
generally U32 and unsigned are the same thing,
except when they are not ...
case : 32-bit compilation for MIPS (uint32_t == unsigned long)
The vast majority of transformations consist in converting U32 into unsigned.
In rare cases, it's the other way around (typically for internal code, such as seeds).
Among the few issues this patch solves :
- some parameters were declared with type `unsigned` in *.h,
but with type `U32` in their implementation *.c .
- some parameters have type unsigned*,
but the caller used a pointer to U32 instead.
These fixes are useful.
However, the bulk of changes is about `%u` formatting,
which requires an unsigned type,
but generally receives U32 values instead,
often just for brevity (U32 is shorter than unsigned).
These changes are generally minor, or even annoying.
As a consequence, the amount of code changed is larger than I would expect for such a patch.
Testing is also a pain :
it requires manually modifying `mem.h`,
in order to lie about `U32`
typically forcing it to be an `unsigned long`.
On a 64-bit system, this will break the equivalence unsigned == U32.
Unfortunately, it will also break a few static_assert(), controlling structure sizes.
So it also requires modifying `debug.h` to make `static_assert()` a no-op.
And then reverting these changes.
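For illustration, a hedged sketch of the temporary, throw-away edits described above (the exact typedef and assert macro touched here are assumptions, not committed code) :

```c
/* mem.h (temporary, local testing only) :
 * lie about U32 on a 64-bit platform, breaking the unsigned == U32 equivalence */
typedef unsigned long U32;        /* instead of uint32_t */

/* debug.h (temporary, local testing only) :
 * neutralize the static asserts controlling structure sizes, which the lie above would trip */
#define static_assert(c, msg)     /* no-op while testing */
```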
So it's inconvenient, and as a consequence,
this property is currently not checked during CI tests.
Therefore, these problems can emerge again in the future.
I wonder if it is worth ensuring proper distinction of U32 != unsigned in CI tests.
It's another coding restriction, adding more frustration during merge tests,
since most platforms don't need this distinction (hence contributors will not see it),
and while this can matter in theory, the number of impacted platforms seems minimal.
Thoughts ?
The problem was already masked,
due to no longer accepting tiny blocks for statistics.
But in case it could still happen with not-so-tiny blocks,
there is a stricter control which ensures that
nothing was already loaded prior to statistics collection.
depending on initialization,
the first byte of a new frame was invalidated or not.
As a consequence, one match opportunity was available or not,
resulting in slightly different compressed sizes
(on average, 1 or 2 bytes once every 20 frames).
It impacted ratio comparison between one-shot and streaming modes.
This fix makes the first byte of a new frame always a valid match.
Now compressed size is always the same.
It also improves compressed size by a negligible amount.
* Fix `ZSTD_estimateCCtxSize()` with negative levels.
* Fix `ZSTD_estimateCStreamSize()` with negative levels.
* Add a unit test covering this error.
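A minimal sketch of such a test against the public API (not the actual test added; the estimate functions belong to the advanced, static-linking-only API) :

```c
#define ZSTD_STATIC_LINKING_ONLY   /* ZSTD_estimate*() are advanced-API declarations */
#include <zstd.h>
#include <stdio.h>

int main(void)
{
    /* size estimation should succeed (not return an error code) for a negative level */
    size_t const cctxSize    = ZSTD_estimateCCtxSize(-1);
    size_t const cstreamSize = ZSTD_estimateCStreamSize(-1);
    if (ZSTD_isError(cctxSize) || ZSTD_isError(cstreamSize)) {
        printf("estimation failed for negative compression level\n");
        return 1;
    }
    printf("cctx : %u bytes, cstream : %u bytes\n", (unsigned)cctxSize, (unsigned)cstreamSize);
    return 0;
}
```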
When srcSize is small,
the number of symbols produced is likely too small to warrant dedicated probability tables.
In which case, predefined distribution tables will be used instead.
There is a cheap algorithm in btultra initialization :
it presumes default distribution will be used if srcSize <= 1024.
btultra2 now uses the same threshold to shut down probability estimation,
since measured frequencies won't be used at the entropy stage,
and therefore relying on them to determine sequence cost is misleading,
resulting in worse compression ratios.
This fixes btultra2 performance issue on very small input.
Note that a proper way would be
to determine which symbols are going to use predefined probabilities
and which are going to use dynamic ones.
But the current algorithm is unable to make a "per-symbol" decision.
So this will require significant modifications.
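For illustration only, a rough sketch of the decision with hypothetical names (the real internal helpers and the exact threshold live in the opt code and may differ) :

```c
/* hypothetical helper : below this size, predefined distribution tables are expected
 * at the entropy stage, so measured frequencies should not drive sequence-cost estimation */
#define HYP_PREDEF_THRESHOLD 1024

static int HYP_useDynamicStats(size_t srcSize)
{
    return srcSize > HYP_PREDEF_THRESHOLD;   /* large enough to justify dedicated tables */
}
```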
When we switched `ZSTD_SKIPPABLEHEADERSIZE` to a macro, the places where we do:
MEM_readLE32(ptr) + ZSTD_SKIPPABLEHEADERSIZE
can now overflow : a frame size such as `(unsigned)-8` wraps around to `0`, and we infinite loop.
We now check the frame size and reject sizes that overflow a U32.
Note that this bug never made it into a release, and was only in the dev branch
for a few days.
Credit to OSS-Fuzz
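A self-contained illustration of the wraparound (assuming a skippable header size of 8 bytes; this is not the actual zstd code) :

```c
#include <stdio.h>

int main(void)
{
    unsigned const frameSize  = (unsigned)-8;  /* size field read from a corrupted header */
    unsigned const headerSize = 8;             /* value of ZSTD_SKIPPABLEHEADERSIZE */
    unsigned const advance    = frameSize + headerSize;   /* wraps around to 0 */
    /* with an advance of 0, the skippable-frame loop never makes progress ;
     * the fix rejects frame sizes that would overflow a U32 */
    printf("cursor advance : %u bytes\n", advance);
    return 0;
}
```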
from overlapSizeLog.
Reasoning :
`overlapLog` is already used everywhere, in the code, command line and documentation.
`ZSTD_c_overlapSizeLog` feels unnecessarily different.
ZSTD_compress_generic() is renamed ZSTD_compressStream2().
Note that, for the time being,
the "stable" API and advanced one use different parameter planes :
setting parameters using the advanced API does not influence ZSTD_compressStream()
and using ZSTD_initCStream() does not influence parameters for ZSTD_compressStream2().
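For illustration, a minimal sketch of the advanced-API plane with ZSTD_compressStream2() (parameter names as in the current public header; error handling omitted) :

```c
#include <zstd.h>

size_t compressAllAdvanced(ZSTD_CCtx* cctx, ZSTD_outBuffer* output, ZSTD_inBuffer* input)
{
    /* parameters set through the advanced API only affect ZSTD_compressStream2(),
     * not the "stable" ZSTD_compressStream() / ZSTD_initCStream() plane */
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 19);
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_checksumFlag, 1);
    return ZSTD_compressStream2(cctx, output, input, ZSTD_e_end);
}
```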
constants that *may* change in the future
will be accessed through functions instead
(to be created).
The _simpleArgs() variants do not (yet) have a clear enough added value
to deserve "stable" status.
which now accepts an enum,
to distinguish between resetting the session, the parameters, or both.
removed ZSTD_CCtx_resetParameters(), which is redundant.
started replacing invocations of ZSTD_CCtx_reset*() functions.
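A sketch of the new calling convention (directive names as in the public header) :

```c
#include <zstd.h>

void resetExamples(ZSTD_CCtx* cctx)
{
    /* terminate the current session, keep parameters */
    ZSTD_CCtx_reset(cctx, ZSTD_reset_session_only);
    /* return all parameters to default (covers the removed ZSTD_CCtx_resetParameters()) */
    ZSTD_CCtx_reset(cctx, ZSTD_reset_parameters);
    /* both at once */
    ZSTD_CCtx_reset(cctx, ZSTD_reset_session_and_parameters);
}
```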
Updated advanced API documentation
trimmed down the amount of API staged in the RC,
in particular, all functions related to ZSTD_CCtxParams()
seem too advanced.
answering #1407.
Also : removed obsolete function ZSTD_setDStreamParameter()
which could only be used with one parameter (DStream_p_maxWindowSize).
Now replaced by ZSTD_DCtx_setWindowSize() (which has existed for a few revisions).
by triggering the prefetching decoder path
(which used to be dedicated to long-range offsets only).
Figures on my laptop :
no content prefetch : ~300 MB/s (for reference)
full content prefetch : ~325 MB/s (before this patch)
new prefetch path : ~375 MB/s (after this patch)
The benchmark speedup is already significant,
but another side effect is that this version
prefetches less data into memory,
since it only prefetches what's needed, instead of the full dictionary.
This is supposed to help highly active environments
such as active databases,
which can't be properly measured in a benchmark environment (too clean).
Also :
fixed the largeNbDict test program
which was working improperly when setting nbBlocks > nbFiles.
not sure why it only triggers now,
as this code has been around for a while.
Introduced a new error code : dstBuffer_null.
I couldn't express anything even remotely similar with the existing set of error codes.
changed workspace parameter convention
to always provide workspaceSize,
so that size can be explicitly checked.
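A hedged sketch of the convention with hypothetical names (the real internal signatures differ) :

```c
#include <stddef.h>

/* hypothetical internal helper : the workspace size now travels with the pointer,
 * so the callee can validate it instead of trusting the caller */
static size_t HYP_buildTable_wksp(unsigned* table, unsigned tableLog,
                                  void* workspace, size_t workspaceSize)
{
    size_t const neededSpace = ((size_t)1 << tableLog) * sizeof(unsigned);
    if (workspaceSize < neededSpace) return (size_t)-1;   /* explicit size check */
    /* ... build table inside workspace ... */
    (void)table; (void)workspace;
    return 0;
}
```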
Also, use more enums to make the meaning of some parameters more explicit.
isolate all logic associated with block decompression
into its own module.
zstd_decompress is still in charge
of context creation/destruction,
frames, headers, streaming, special blocks, etc.
Compressed blocks themselves are now handled within zstd_decompress_block.
fix #1385
decompressing into NULL was an automatic error.
It is now allowed, as long as the content of the frame is empty.
Seems to simplify things for `arrow`.
Maybe some other projects rely on this behavior ?
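A small sketch of the newly allowed case (the frame is produced here only for the example) :

```c
#include <stdio.h>
#include <zstd.h>

int main(void)
{
    /* build a frame with empty content, then decompress it into a NULL destination */
    char frame[64];
    char const empty[1] = { 0 };
    size_t const frameSize = ZSTD_compress(frame, sizeof(frame), empty, 0, 1);
    size_t const r = ZSTD_decompress(NULL, 0, frame, frameSize);
    /* previously this was an automatic error just because dst == NULL ;
     * it now succeeds (r == 0) since the regenerated content is empty */
    printf("regenerated size : %u, error : %u\n", (unsigned)r, ZSTD_isError(r));
    return 0;
}
```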
fix #1379
decodecorpus was generating one extraneous byte when `nbSeq==0`.
This is disallowed by the specification.
The reference decoder was just skipping the extraneous byte.
It is now stricter, and flags such a situation as an error.
since corrupted bitstreams can generate values that are too large.
This slightly reduces the benefits from clang on my laptop.
gcc results and code generation are not affected.