/*
 * Copyright (c) 2016-present, Yann Collet, Facebook, Inc.
 * All rights reserved.
 *
 * This source code is licensed under both the BSD-style license (found in the
 * LICENSE file in the root directory of this source tree) and the GPLv2 (found
 * in the COPYING file in the root directory of this source tree).
 * You may select, at your option, one of the above-listed licenses.
 */

/* This header contains definitions
 * that shall **only** be used by modules within lib/compress.
 */

#ifndef ZSTD_COMPRESS_H
#define ZSTD_COMPRESS_H

/*-*************************************
*  Dependencies
***************************************/
#include "zstd_internal.h"
#include "zstd_cwksp.h"
#ifdef ZSTD_MULTITHREAD
#  include "zstdmt_compress.h"
#endif

#if defined (__cplusplus)
extern "C" {
#endif


/*-*************************************
*  Constants
***************************************/
#define kSearchStrength      8
#define HASH_READ_SIZE       8
#define ZSTD_DUBT_UNSORTED_MARK 1   /* For btlazy2 strategy, index ZSTD_DUBT_UNSORTED_MARK==1 means "unsorted".
                                       It could be confused for a real successor at index "1", if sorted as larger than its predecessor.
                                       It's not a big deal though : candidate will just be sorted again.
                                       Additionally, candidate position 1 will be lost.
                                       But candidate 1 cannot hide a large tree of candidates, so it's a minimal loss.
                                       The benefit is that ZSTD_DUBT_UNSORTED_MARK cannot be mishandled after table re-use with a different strategy.
                                       This constant is required by ZSTD_compressBlock_btlazy2() and ZSTD_reduceTable_internal() */


/*-*************************************
*  Context memory management
***************************************/
typedef enum { ZSTDcs_created=0, ZSTDcs_init, ZSTDcs_ongoing, ZSTDcs_ending } ZSTD_compressionStage_e;
typedef enum { zcss_init=0, zcss_load, zcss_flush } ZSTD_cStreamStage;

typedef struct ZSTD_prefixDict_s {
    const void* dict;
    size_t dictSize;
    ZSTD_dictContentType_e dictContentType;
} ZSTD_prefixDict;

typedef struct {
    void* dictBuffer;
    void const* dict;
    size_t dictSize;
    ZSTD_dictContentType_e dictContentType;
    ZSTD_CDict* cdict;
} ZSTD_localDict;

typedef struct {
    U32 CTable[HUF_CTABLE_SIZE_U32(255)];
    HUF_repeat repeatMode;
} ZSTD_hufCTables_t;

typedef struct {
    FSE_CTable offcodeCTable[FSE_CTABLE_SIZE_U32(OffFSELog, MaxOff)];
    FSE_CTable matchlengthCTable[FSE_CTABLE_SIZE_U32(MLFSELog, MaxML)];
    FSE_CTable litlengthCTable[FSE_CTABLE_SIZE_U32(LLFSELog, MaxLL)];
    FSE_repeat offcode_repeatMode;
    FSE_repeat matchlength_repeatMode;
    FSE_repeat litlength_repeatMode;
} ZSTD_fseCTables_t;

typedef struct {
    ZSTD_hufCTables_t huf;
    ZSTD_fseCTables_t fse;
} ZSTD_entropyCTables_t;

typedef struct {
    U32 off;
    U32 len;
} ZSTD_match_t;

typedef struct {
    int price;
    U32 off;
    U32 mlen;
    U32 litlen;
    U32 rep[ZSTD_REP_NUM];
} ZSTD_optimal_t;

typedef enum { zop_dynamic=0, zop_predef } ZSTD_OptPrice_e;

typedef struct {
    /* All tables are allocated inside cctx->workspace by ZSTD_resetCCtx_internal() */
    unsigned* litFreq;                  /* table of literals statistics, of size 256 */
    unsigned* litLengthFreq;            /* table of litLength statistics, of size (MaxLL+1) */
    unsigned* matchLengthFreq;          /* table of matchLength statistics, of size (MaxML+1) */
    unsigned* offCodeFreq;              /* table of offCode statistics, of size (MaxOff+1) */
    ZSTD_match_t* matchTable;           /* list of found matches, of size ZSTD_OPT_NUM+1 */
    ZSTD_optimal_t* priceTable;         /* All positions tracked by optimal parser, of size ZSTD_OPT_NUM+1 */

    U32  litSum;                        /* nb of literals */
    U32  litLengthSum;                  /* nb of litLength codes */
    U32  matchLengthSum;                /* nb of matchLength codes */
    U32  offCodeSum;                    /* nb of offset codes */
    U32  litSumBasePrice;               /* to compare to log2(litfreq) */
    U32  litLengthSumBasePrice;         /* to compare to log2(llfreq)  */
    U32  matchLengthSumBasePrice;       /* to compare to log2(mlfreq)  */
    U32  offCodeSumBasePrice;           /* to compare to log2(offreq) */
    ZSTD_OptPrice_e priceType;          /* prices can be determined dynamically, or follow a pre-defined cost structure */
    const ZSTD_entropyCTables_t* symbolCosts;  /* pre-calculated dictionary statistics */
    ZSTD_literalCompressionMode_e literalCompressionMode;
} optState_t;

typedef struct {
    ZSTD_entropyCTables_t entropy;
    U32 rep[ZSTD_REP_NUM];
} ZSTD_compressedBlockState_t;

typedef struct {
    BYTE const* nextSrc;    /* next block here to continue on current prefix */
    BYTE const* base;       /* All regular indexes relative to this position */
    BYTE const* dictBase;   /* extDict indexes relative to this position */
    U32 dictLimit;          /* below that point, need extDict */
    U32 lowLimit;           /* below that point, no more valid data */
} ZSTD_window_t;

typedef struct ZSTD_matchState_t ZSTD_matchState_t;
struct ZSTD_matchState_t {
    ZSTD_window_t window;   /* State for window round buffer management */
    U32 loadedDictEnd;      /* index of end of dictionary, within context's referential.
                             * When loadedDictEnd != 0, a dictionary is in use, and still valid.
                             * This relies on a mechanism to set loadedDictEnd=0 when dictionary is no longer within distance.
                             * Such mechanism is provided within ZSTD_window_enforceMaxDist() and ZSTD_checkDictValidity().
                             * When dict referential is copied into active context (i.e. not attached),
                             * loadedDictEnd == dictSize, since referential starts from zero.
                             */
    U32 nextToUpdate;       /* index from which to continue table update */
    U32 hashLog3;           /* dispatch table for matches of len==3 : larger == faster, more memory */
    U32* hashTable;
    U32* hashTable3;
    U32* chainTable;
    optState_t opt;         /* optimal parser state */
    const ZSTD_matchState_t* dictMatchState;
    ZSTD_compressionParameters cParams;
};

typedef struct {
    ZSTD_compressedBlockState_t* prevCBlock;
    ZSTD_compressedBlockState_t* nextCBlock;
    ZSTD_matchState_t matchState;
} ZSTD_blockState_t;

typedef struct {
    U32 offset;
    U32 checksum;
} ldmEntry_t;

typedef struct {
    ZSTD_window_t window;   /* State for the window round buffer management */
    ldmEntry_t* hashTable;
    BYTE* bucketOffsets;    /* Next position in bucket to insert entry */
    U64 hashPower;          /* Used to compute the rolling hash.
                             * Depends on ldmParams.minMatchLength */
} ldmState_t;

typedef struct {
    U32 enableLdm;          /* 1 if enable long distance matching */
    U32 hashLog;            /* Log size of hashTable */
    U32 bucketSizeLog;      /* Log bucket size for collision resolution, at most 8 */
    U32 minMatchLength;     /* Minimum match length */
    U32 hashRateLog;        /* Log number of entries to skip */
    U32 windowLog;          /* Window log for the LDM */
} ldmParams_t;

typedef struct {
    U32 offset;
    U32 litLength;
    U32 matchLength;
} rawSeq;

typedef struct {
    rawSeq* seq;     /* The start of the sequences */
    size_t pos;      /* The position where reading stopped. <= size. */
    size_t size;     /* The number of sequences. <= capacity. */
    size_t capacity; /* The capacity starting from `seq` pointer */
} rawSeqStore_t;

typedef struct {
    int collectSequences;
    ZSTD_Sequence* seqStart;
    size_t seqIndex;
    size_t maxSequences;
} SeqCollector;

struct ZSTD_CCtx_params_s {
    ZSTD_format_e format;
    ZSTD_compressionParameters cParams;
    ZSTD_frameParameters fParams;

    int compressionLevel;
    int forceWindow;           /* force back-references to respect limit of
                                * 1<<wLog, even for dictionary */
    size_t targetCBlockSize;   /* Tries to fit compressed block size to be around targetCBlockSize.
                                * No target when targetCBlockSize == 0.
                                * There is no guarantee on compressed block size */
    int srcSizeHint;           /* User's best guess of source size.
                                * Hint is not valid when srcSizeHint == 0.
                                * There is no guarantee that hint is close to actual source size */

    ZSTD_dictAttachPref_e attachDictPref;
    ZSTD_literalCompressionMode_e literalCompressionMode;

    /* Multithreading: used to pass parameters to mtctx */
    int nbWorkers;
    size_t jobSize;
    int overlapLog;
    int rsyncable;

    /* Long distance matching parameters */
    ldmParams_t ldmParams;

    /* Internal use, for createCCtxParams() and freeCCtxParams() only */
    ZSTD_customMem customMem;
};  /* typedef'd to ZSTD_CCtx_params within "zstd.h" */

struct ZSTD_CCtx_s {
    ZSTD_compressionStage_e stage;
    int cParamsChanged;       /* == 1 if cParams(except wlog) or compression level are changed in requestedParams. Triggers transmission of new params to ZSTDMT (if available) then reset to 0. */
    int bmi2;                 /* == 1 if the CPU supports BMI2 and 0 otherwise. CPU support is determined dynamically once per context lifetime. */
    ZSTD_CCtx_params requestedParams;
    ZSTD_CCtx_params appliedParams;
    U32   dictID;

    ZSTD_cwksp workspace;     /* manages buffer for dynamic allocations */
    size_t blockSize;
    unsigned long long pledgedSrcSizePlusOne;  /* this way, 0 (default) == unknown */
    unsigned long long consumedSrcSize;
    unsigned long long producedCSize;
    XXH64_state_t xxhState;
    ZSTD_customMem customMem;
    size_t staticSize;
    SeqCollector seqCollector;
    int isFirstBlock;
    int initialized;

    seqStore_t seqStore;          /* sequences storage ptrs */
    ldmState_t ldmState;          /* long distance matching state */
    rawSeq* ldmSequences;         /* Storage for the ldm output sequences */
    size_t maxNbLdmSequences;
    rawSeqStore_t externSeqStore; /* Mutable reference to external sequences */
    ZSTD_blockState_t blockState;
    U32* entropyWorkspace;        /* entropy workspace of HUF_WORKSPACE_SIZE bytes */

    /* streaming */
    char*  inBuff;
    size_t inBuffSize;
    size_t inToCompress;
    size_t inBuffPos;
    size_t inBuffTarget;
    char*  outBuff;
    size_t outBuffSize;
    size_t outBuffContentSize;
    size_t outBuffFlushedSize;
    ZSTD_cStreamStage streamStage;
    U32    frameEnded;

    /* Dictionary */
    ZSTD_localDict localDict;
    const ZSTD_CDict* cdict;
    ZSTD_prefixDict prefixDict;   /* single-usage dictionary */

    /* Multi-threading */
#ifdef ZSTD_MULTITHREAD
    ZSTDMT_CCtx* mtctx;
#endif
};

typedef enum { ZSTD_dtlm_fast, ZSTD_dtlm_full } ZSTD_dictTableLoadMethod_e;

typedef enum { ZSTD_noDict = 0, ZSTD_extDict = 1, ZSTD_dictMatchState = 2 } ZSTD_dictMode_e;


typedef size_t (*ZSTD_blockCompressor) (
        ZSTD_matchState_t* bs, seqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
        void const* src, size_t srcSize);
ZSTD_blockCompressor ZSTD_selectBlockCompressor(ZSTD_strategy strat, ZSTD_dictMode_e dictMode);


MEM_STATIC U32 ZSTD_LLcode(U32 litLength)
{
    static const BYTE LL_Code[64] = {  0,  1,  2,  3,  4,  5,  6,  7,
                                       8,  9, 10, 11, 12, 13, 14, 15,
                                      16, 16, 17, 17, 18, 18, 19, 19,
                                      20, 20, 20, 20, 21, 21, 21, 21,
                                      22, 22, 22, 22, 22, 22, 22, 22,
                                      23, 23, 23, 23, 23, 23, 23, 23,
                                      24, 24, 24, 24, 24, 24, 24, 24,
                                      24, 24, 24, 24, 24, 24, 24, 24 };
    static const U32 LL_deltaCode = 19;
    return (litLength > 63) ? ZSTD_highbit32(litLength) + LL_deltaCode : LL_Code[litLength];
}
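
/* Worked example (illustrative, values derived from the table above) :
 * ZSTD_LLcode(18) falls in the table range and returns LL_Code[18] == 17 ;
 * ZSTD_LLcode(100) falls in the logarithmic range : ZSTD_highbit32(100) == 6,
 * so the code is 6 + LL_deltaCode == 25. */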

/* ZSTD_MLcode() :
 * note : mlBase = matchLength - MINMATCH;
 *        because it's the format it's stored in seqStore->sequences */
MEM_STATIC U32 ZSTD_MLcode(U32 mlBase)
{
    static const BYTE ML_Code[128] = { 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15,
                                      16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
                                      32, 32, 33, 33, 34, 34, 35, 35, 36, 36, 36, 36, 37, 37, 37, 37,
                                      38, 38, 38, 38, 38, 38, 38, 38, 39, 39, 39, 39, 39, 39, 39, 39,
                                      40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40, 40,
                                      41, 41, 41, 41, 41, 41, 41, 41, 41, 41, 41, 41, 41, 41, 41, 41,
                                      42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42,
                                      42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42, 42 };
    static const U32 ML_deltaCode = 36;
    return (mlBase > 127) ? ZSTD_highbit32(mlBase) + ML_deltaCode : ML_Code[mlBase];
}
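
/* Worked example (illustrative) : a match of length 10 gives mlBase == 7 and
 * code ML_Code[7] == 7 ; a match of length 200 gives mlBase == 197, coded as
 * ZSTD_highbit32(197) + ML_deltaCode == 7 + 36 == 43. */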

/* ZSTD_cParam_withinBounds:
 * @return 1 if value is within cParam bounds,
 * 0 otherwise */
MEM_STATIC int ZSTD_cParam_withinBounds(ZSTD_cParameter cParam, int value)
{
    ZSTD_bounds const bounds = ZSTD_cParam_getBounds(cParam);
    if (ZSTD_isError(bounds.error)) return 0;
    if (value < bounds.lowerBound) return 0;
    if (value > bounds.upperBound) return 0;
    return 1;
}
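
/* Illustration only (hypothetical helper, not part of this library) :
 * the same bounds query can also be used to clamp a user-supplied value into
 * the valid range instead of rejecting it. A minimal sketch : */
MEM_STATIC int ZSTD_cParam_clampBounds_example(ZSTD_cParameter cParam, int value)
{
    ZSTD_bounds const bounds = ZSTD_cParam_getBounds(cParam);
    if (ZSTD_isError(bounds.error)) return value;   /* unknown parameter : leave value unchanged */
    if (value < bounds.lowerBound) return bounds.lowerBound;
    if (value > bounds.upperBound) return bounds.upperBound;
    return value;
}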

/* ZSTD_noCompressBlock() :
 * Writes uncompressed block to dst buffer from given src.
 * Returns the size of the block */
MEM_STATIC size_t ZSTD_noCompressBlock (void* dst, size_t dstCapacity, const void* src, size_t srcSize, U32 lastBlock)
{
    U32 const cBlockHeader24 = lastBlock + (((U32)bt_raw)<<1) + (U32)(srcSize << 3);
    RETURN_ERROR_IF(srcSize + ZSTD_blockHeaderSize > dstCapacity,
                    dstSize_tooSmall);
    MEM_writeLE24(dst, cBlockHeader24);
    memcpy((BYTE*)dst + ZSTD_blockHeaderSize, src, srcSize);
    return ZSTD_blockHeaderSize + srcSize;
}
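
/* Worked example (illustrative, assuming bt_raw == 0 as in zstd_internal.h) :
 * for srcSize == 100 and lastBlock == 1, cBlockHeader24 == 1 + 0 + (100 << 3) == 801 == 0x321,
 * written little-endian as the 3 bytes 0x21 0x03 0x00, followed by the 100 raw bytes,
 * for a total of ZSTD_blockHeaderSize + 100 == 103 bytes. */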

/* ZSTD_minGain() :
 * minimum compression required
 * to generate a compress block or a compressed literals section.
 * note : use same formula for both situations */
MEM_STATIC size_t ZSTD_minGain(size_t srcSize, ZSTD_strategy strat)
{
    U32 const minlog = (strat>=ZSTD_btultra) ? (U32)(strat) - 1 : 6;
    ZSTD_STATIC_ASSERT(ZSTD_btultra == 8);
    assert(ZSTD_cParam_withinBounds(ZSTD_c_strategy, strat));
    return (srcSize >> minlog) + 2;
}
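
/* Worked example (illustrative) : for srcSize == 1000 and a strategy below ZSTD_btultra,
 * minlog == 6, so a block must save at least (1000 >> 6) + 2 == 17 bytes to be emitted
 * as compressed ; with ZSTD_btultra2 (== 9), minlog == 8 and the threshold drops to 5 bytes. */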

/*! ZSTD_safecopyLiterals() :
 *  memcpy() function that won't read beyond more than WILDCOPY_OVERLENGTH bytes past ilimit_w.
 *  Only called when the sequence ends past ilimit_w, so it only needs to be optimized for single
 *  large copies.
 */
static void ZSTD_safecopyLiterals(BYTE* op, BYTE const* ip, BYTE const* const iend, BYTE const* ilimit_w) {
    assert(iend > ilimit_w);
    if (ip <= ilimit_w) {
        ZSTD_wildcopy(op, ip, ilimit_w - ip, ZSTD_no_overlap);
        op += ilimit_w - ip;
        ip = ilimit_w;
    }
    while (ip < iend) *op++ = *ip++;
}

/*! ZSTD_storeSeq() :
 *  Store a sequence (litlen, litPtr, offCode and mlBase) into seqStore_t.
 *  `offCode` : distance to match + ZSTD_REP_MOVE (values <= ZSTD_REP_MOVE are repCodes).
 *  `mlBase` : matchLength - MINMATCH
 *  Allowed to overread literals up to litLimit.
*/
HINT_INLINE UNUSED_ATTR
void ZSTD_storeSeq(seqStore_t* seqStorePtr, size_t litLength, const BYTE* literals, const BYTE* litLimit, U32 offCode, size_t mlBase)
{
    BYTE const* const litLimit_w = litLimit - WILDCOPY_OVERLENGTH;
    BYTE const* const litEnd = literals + litLength;
#if defined(DEBUGLEVEL) && (DEBUGLEVEL >= 6)
    static const BYTE* g_start = NULL;
    if (g_start==NULL) g_start = (const BYTE*)literals;  /* note : index only works for compression within a single segment */
    {   U32 const pos = (U32)((const BYTE*)literals - g_start);
        DEBUGLOG(6, "Cpos%7u :%3u literals, match%4u bytes at offCode%7u",
                   pos, (U32)litLength, (U32)mlBase+MINMATCH, (U32)offCode);
    }
#endif
    assert((size_t)(seqStorePtr->sequences - seqStorePtr->sequencesStart) < seqStorePtr->maxNbSeq);
    /* copy Literals */
    assert(seqStorePtr->maxNbLit <= 128 KB);
    assert(seqStorePtr->lit + litLength <= seqStorePtr->litStart + seqStorePtr->maxNbLit);
    assert(literals + litLength <= litLimit);
    if (litEnd <= litLimit_w) {
        /* Common case we can use wildcopy.
         * First copy 16 bytes, because literals are likely short.
         */
        assert(WILDCOPY_OVERLENGTH >= 16);
        ZSTD_copy16(seqStorePtr->lit, literals);
        if (litLength > 16) {
            ZSTD_wildcopy(seqStorePtr->lit+16, literals+16, (ptrdiff_t)litLength-16, ZSTD_no_overlap);
        }
    } else {
        ZSTD_safecopyLiterals(seqStorePtr->lit, literals, litEnd, litLimit_w);
    }
    seqStorePtr->lit += litLength;

    /* literal Length */
    if (litLength>0xFFFF) {
        assert(seqStorePtr->longLengthID == 0); /* there can only be a single long length */
        seqStorePtr->longLengthID = 1;
        seqStorePtr->longLengthPos = (U32)(seqStorePtr->sequences - seqStorePtr->sequencesStart);
    }
    seqStorePtr->sequences[0].litLength = (U16)litLength;

    /* match offset */
    seqStorePtr->sequences[0].offset = offCode + 1;

    /* match Length */
    if (mlBase>0xFFFF) {
        assert(seqStorePtr->longLengthID == 0); /* there can only be a single long length */
        seqStorePtr->longLengthID = 2;
        seqStorePtr->longLengthPos = (U32)(seqStorePtr->sequences - seqStorePtr->sequencesStart);
    }
    seqStorePtr->sequences[0].matchLength = (U16)mlBase;

    seqStorePtr->sequences++;
}


/*-*************************************
*  Match length counter
***************************************/
static unsigned ZSTD_NbCommonBytes (size_t val)
{
    if (MEM_isLittleEndian()) {
        if (MEM_64bits()) {
#       if defined(_MSC_VER) && defined(_WIN64)
            unsigned long r = 0;
            return _BitScanForward64( &r, (U64)val ) ? (unsigned)(r >> 3) : 0;
#       elif defined(__GNUC__) && (__GNUC__ >= 4)
            return (__builtin_ctzll((U64)val) >> 3);
#       else
            static const int DeBruijnBytePos[64] = { 0, 0, 0, 0, 0, 1, 1, 2,
                                                     0, 3, 1, 3, 1, 4, 2, 7,
                                                     0, 2, 3, 6, 1, 5, 3, 5,
                                                     1, 3, 4, 4, 2, 5, 6, 7,
                                                     7, 0, 1, 2, 3, 3, 4, 6,
                                                     2, 6, 5, 5, 3, 4, 5, 6,
                                                     7, 1, 2, 4, 6, 4, 4, 5,
                                                     7, 2, 6, 5, 7, 6, 7, 7 };
            return DeBruijnBytePos[((U64)((val & -(long long)val) * 0x0218A392CDABBD3FULL)) >> 58];
#       endif
        } else { /* 32 bits */
#       if defined(_MSC_VER)
            unsigned long r = 0;
            return _BitScanForward( &r, (U32)val ) ? (unsigned)(r >> 3) : 0;
#       elif defined(__GNUC__) && (__GNUC__ >= 3)
            return (__builtin_ctz((U32)val) >> 3);
#       else
            static const int DeBruijnBytePos[32] = { 0, 0, 3, 0, 3, 1, 3, 0,
                                                     3, 2, 2, 1, 3, 2, 0, 1,
                                                     3, 3, 1, 2, 2, 2, 2, 0,
                                                     3, 1, 2, 0, 1, 0, 1, 1 };
            return DeBruijnBytePos[((U32)((val & -(S32)val) * 0x077CB531U)) >> 27];
#       endif
        }
    } else {  /* Big Endian CPU */
        if (MEM_64bits()) {
#       if defined(_MSC_VER) && defined(_WIN64)
            unsigned long r = 0;
            return _BitScanReverse64( &r, val ) ? (unsigned)(r >> 3) : 0;
#       elif defined(__GNUC__) && (__GNUC__ >= 4)
            return (__builtin_clzll(val) >> 3);
#       else
            unsigned r;
            const unsigned n32 = sizeof(size_t)*4;   /* calculate this way due to compiler complaining in 32-bits mode */
            if (!(val>>n32)) { r=4; } else { r=0; val>>=n32; }
            if (!(val>>16)) { r+=2; val>>=8; } else { val>>=24; }
            r += (!val);
            return r;
#       endif
        } else { /* 32 bits */
#       if defined(_MSC_VER)
            unsigned long r = 0;
            return _BitScanReverse( &r, (unsigned long)val ) ? (unsigned)(r >> 3) : 0;
#       elif defined(__GNUC__) && (__GNUC__ >= 3)
            return (__builtin_clz((U32)val) >> 3);
#       else
            unsigned r;
            if (!(val>>16)) { r=2; val>>=8; } else { r=0; val>>=24; }
            r += (!val);
            return r;
#       endif
    }   }
}

MEM_STATIC size_t ZSTD_count(const BYTE* pIn, const BYTE* pMatch, const BYTE* const pInLimit)
{
    const BYTE* const pStart = pIn;
    const BYTE* const pInLoopLimit = pInLimit - (sizeof(size_t)-1);

    if (pIn < pInLoopLimit) {
        { size_t const diff = MEM_readST(pMatch) ^ MEM_readST(pIn);
          if (diff) return ZSTD_NbCommonBytes(diff); }
        pIn+=sizeof(size_t); pMatch+=sizeof(size_t);
        while (pIn < pInLoopLimit) {
            size_t const diff = MEM_readST(pMatch) ^ MEM_readST(pIn);
            if (!diff) { pIn+=sizeof(size_t); pMatch+=sizeof(size_t); continue; }
            pIn += ZSTD_NbCommonBytes(diff);
            return (size_t)(pIn - pStart);
    }   }
    if (MEM_64bits() && (pIn<(pInLimit-3)) && (MEM_read32(pMatch) == MEM_read32(pIn))) { pIn+=4; pMatch+=4; }
    if ((pIn<(pInLimit-1)) && (MEM_read16(pMatch) == MEM_read16(pIn))) { pIn+=2; pMatch+=2; }
    if ((pIn<pInLimit) && (*pMatch == *pIn)) pIn++;
    return (size_t)(pIn - pStart);
}
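
/* Illustration : ZSTD_count() returns the length of the common prefix of pIn and pMatch,
 * bounded by pInLimit. For example, comparing "abcdef" against "abcxef" (with pInLimit at +6)
 * returns 3, since the bytes at index 3 differ. */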

/** ZSTD_count_2segments() :
 *  can count match length with `ip` & `match` in 2 different segments.
 *  convention : on reaching mEnd, match count continue starting from iStart
 */
MEM_STATIC size_t
ZSTD_count_2segments(const BYTE* ip, const BYTE* match,
                     const BYTE* iEnd, const BYTE* mEnd, const BYTE* iStart)
{
    const BYTE* const vEnd = MIN( ip + (mEnd - match), iEnd);
    size_t const matchLength = ZSTD_count(ip, match, vEnd);
    if (match + matchLength != mEnd) return matchLength;
    DEBUGLOG(7, "ZSTD_count_2segments: found a 2-parts match (current length==%zu)", matchLength);
    DEBUGLOG(7, "distance from match beginning to end dictionary = %zi", mEnd - match);
    DEBUGLOG(7, "distance from current pos to end buffer = %zi", iEnd - ip);
    DEBUGLOG(7, "next byte : ip==%02X, istart==%02X", ip[matchLength], *iStart);
    DEBUGLOG(7, "final match length = %zu", matchLength + ZSTD_count(ip+matchLength, iStart, iEnd));
    return matchLength + ZSTD_count(ip+matchLength, iStart, iEnd);
}


/*-*************************************
 *  Hashes
 ***************************************/
static const U32 prime3bytes = 506832829U;
static U32    ZSTD_hash3(U32 u, U32 h) { return ((u << (32-24)) * prime3bytes)  >> (32-h) ; }
MEM_STATIC size_t ZSTD_hash3Ptr(const void* ptr, U32 h) { return ZSTD_hash3(MEM_readLE32(ptr), h); } /* only in zstd_opt.h */

static const U32 prime4bytes = 2654435761U;
static U32    ZSTD_hash4(U32 u, U32 h) { return (u * prime4bytes) >> (32-h) ; }
static size_t ZSTD_hash4Ptr(const void* ptr, U32 h) { return ZSTD_hash4(MEM_read32(ptr), h); }

static const U64 prime5bytes = 889523592379ULL;
static size_t ZSTD_hash5(U64 u, U32 h) { return (size_t)(((u  << (64-40)) * prime5bytes) >> (64-h)) ; }
static size_t ZSTD_hash5Ptr(const void* p, U32 h) { return ZSTD_hash5(MEM_readLE64(p), h); }

static const U64 prime6bytes = 227718039650203ULL;
static size_t ZSTD_hash6(U64 u, U32 h) { return (size_t)(((u  << (64-48)) * prime6bytes) >> (64-h)) ; }
static size_t ZSTD_hash6Ptr(const void* p, U32 h) { return ZSTD_hash6(MEM_readLE64(p), h); }

static const U64 prime7bytes = 58295818150454627ULL;
static size_t ZSTD_hash7(U64 u, U32 h) { return (size_t)(((u  << (64-56)) * prime7bytes) >> (64-h)) ; }
static size_t ZSTD_hash7Ptr(const void* p, U32 h) { return ZSTD_hash7(MEM_readLE64(p), h); }

static const U64 prime8bytes = 0xCF1BBCDCB7A56463ULL;
static size_t ZSTD_hash8(U64 u, U32 h) { return (size_t)(((u) * prime8bytes) >> (64-h)) ; }
static size_t ZSTD_hash8Ptr(const void* p, U32 h) { return ZSTD_hash8(MEM_readLE64(p), h); }

MEM_STATIC size_t ZSTD_hashPtr(const void* p, U32 hBits, U32 mls)
{
    switch(mls)
    {
    default:
    case 4: return ZSTD_hash4Ptr(p, hBits);
    case 5: return ZSTD_hash5Ptr(p, hBits);
    case 6: return ZSTD_hash6Ptr(p, hBits);
    case 7: return ZSTD_hash7Ptr(p, hBits);
    case 8: return ZSTD_hash8Ptr(p, hBits);
    }
}

/** ZSTD_ipow() :
 * Return base^exponent.
 */
static U64 ZSTD_ipow(U64 base, U64 exponent)
{
    U64 power = 1;
    while (exponent) {
      if (exponent & 1) power *= base;
      exponent >>= 1;
      base *= base;
    }
    return power;
}
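
/* Worked example (illustrative) : ZSTD_ipow(3, 5) proceeds by square-and-multiply :
 * exponent 5 == 0b101, so power accumulates 3 (bit 0) then 81 (bit 2),
 * giving 3 * 81 == 243 == 3^5. */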

#define ZSTD_ROLL_HASH_CHAR_OFFSET 10

/** ZSTD_rollingHash_append() :
 * Add the buffer to the hash value.
 */
static U64 ZSTD_rollingHash_append(U64 hash, void const* buf, size_t size)
{
    BYTE const* istart = (BYTE const*)buf;
    size_t pos;
    for (pos = 0; pos < size; ++pos) {
        hash *= prime8bytes;
        hash += istart[pos] + ZSTD_ROLL_HASH_CHAR_OFFSET;
    }
    return hash;
}

/** ZSTD_rollingHash_compute() :
 * Compute the rolling hash value of the buffer.
 */
MEM_STATIC U64 ZSTD_rollingHash_compute(void const* buf, size_t size)
{
    return ZSTD_rollingHash_append(0, buf, size);
}

/** ZSTD_rollingHash_primePower() :
 * Compute the primePower to be passed to ZSTD_rollingHash_rotate() for a hash
 * over a window of length bytes.
 */
MEM_STATIC U64 ZSTD_rollingHash_primePower(U32 length)
{
    return ZSTD_ipow(prime8bytes, length - 1);
}

/** ZSTD_rollingHash_rotate() :
 * Rotate the rolling hash by one byte.
 */
MEM_STATIC U64 ZSTD_rollingHash_rotate(U64 hash, BYTE toRemove, BYTE toAdd, U64 primePower)
{
    hash -= (toRemove + ZSTD_ROLL_HASH_CHAR_OFFSET) * primePower;
    hash *= prime8bytes;
    hash += toAdd + ZSTD_ROLL_HASH_CHAR_OFFSET;
    return hash;
}
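
/* Illustration only (hypothetical function, not part of this library) :
 * a minimal sketch of how the rolling hash primitives above are meant to be combined.
 * After each rotation, `hash` equals ZSTD_rollingHash_compute() over the last
 * `windowSize` bytes, without re-hashing the whole window. */
MEM_STATIC U64 ZSTD_rollingHash_scan_example(const BYTE* buf, size_t bufSize, U32 windowSize)
{
    U64 const primePower = ZSTD_rollingHash_primePower(windowSize);
    U64 hash = ZSTD_rollingHash_compute(buf, windowSize);   /* hash of buf[0 .. windowSize-1] */
    size_t pos;
    assert(bufSize >= windowSize);
    for (pos = windowSize; pos < bufSize; ++pos) {
        /* slide the window by one byte : drop buf[pos - windowSize], add buf[pos] */
        hash = ZSTD_rollingHash_rotate(hash, buf[pos - windowSize], buf[pos], primePower);
    }
    return hash;   /* hash of the last windowSize bytes of buf */
}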

/*-*************************************
*  Round buffer management
***************************************/
#if (ZSTD_WINDOWLOG_MAX_64 > 31)
# error "ZSTD_WINDOWLOG_MAX is too large : would overflow ZSTD_CURRENT_MAX"
#endif
/* Max current allowed */
#define ZSTD_CURRENT_MAX ((3U << 29) + (1U << ZSTD_WINDOWLOG_MAX))
/* Maximum chunk size before overflow correction needs to be called again */
#define ZSTD_CHUNKSIZE_MAX                                                     \
    ( ((U32)-1)                  /* Maximum ending current index */            \
    - ZSTD_CURRENT_MAX)          /* Maximum beginning lowLimit */
/**
 * ZSTD_window_clear():
 * Clears the window containing the history by simply setting it to empty.
 */
MEM_STATIC void ZSTD_window_clear(ZSTD_window_t* window)
{
    size_t const endT = (size_t)(window->nextSrc - window->base);
    U32 const end = (U32)endT;

    window->lowLimit = end;
    window->dictLimit = end;
}

/**
 * ZSTD_window_hasExtDict():
 * Returns non-zero if the window has a non-empty extDict.
 */
MEM_STATIC U32 ZSTD_window_hasExtDict(ZSTD_window_t const window)
{
    return window.lowLimit < window.dictLimit;
}

/**
 * ZSTD_matchState_dictMode():
 * Inspects the provided matchState and figures out what dictMode should be
 * passed to the compressor.
 */
MEM_STATIC ZSTD_dictMode_e ZSTD_matchState_dictMode(const ZSTD_matchState_t *ms)
{
    return ZSTD_window_hasExtDict(ms->window) ?
        ZSTD_extDict :
        ms->dictMatchState != NULL ?
            ZSTD_dictMatchState :
            ZSTD_noDict;
}
2018-02-24 00:48:18 +00:00
/**
* ZSTD_window_needOverflowCorrection ( ) :
* Returns non - zero if the indices are getting too large and need overflow
* protection .
*/
2018-03-06 21:07:28 +00:00
MEM_STATIC U32 ZSTD_window_needOverflowCorrection ( ZSTD_window_t const window ,
void const * srcEnd )
2018-02-24 00:48:18 +00:00
{
2018-03-06 21:07:28 +00:00
U32 const current = ( U32 ) ( ( BYTE const * ) srcEnd - window . base ) ;
return current > ZSTD_CURRENT_MAX ;
2018-02-24 00:48:18 +00:00
}
/**
 * ZSTD_window_correctOverflow():
 * Reduces the indices to protect from index overflow.
 * Returns the correction made to the indices, which must be applied to every
 * stored index.
 *
 * The least significant cycleLog bits of the indices must remain the same,
 * which may be 0. Every index up to maxDist in the past must be valid.
 * NOTE: (maxDist & cycleMask) must be zero.
 */
MEM_STATIC U32 ZSTD_window_correctOverflow(ZSTD_window_t* window, U32 cycleLog,
                                           U32 maxDist, void const* src)
{
    /* preemptive overflow correction:
     * 1. correction is large enough:
     *    lowLimit > (3<<29) ==> current > 3<<29 + 1<<windowLog
     *    1<<windowLog <= newCurrent < 1<<chainLog + 1<<windowLog
     *
     *    current - newCurrent
     *    > (3<<29 + 1<<windowLog) - (1<<windowLog + 1<<chainLog)
     *    > (3<<29) - (1<<chainLog)
     *    > (3<<29) - (1<<30)             (NOTE: chainLog <= 30)
     *    > 1<<29
     *
     * 2. (ip+ZSTD_CHUNKSIZE_MAX - cctx->base) doesn't overflow:
     *    After correction, current is less than (1<<chainLog + 1<<windowLog).
     *    In 64-bit mode we are safe, because we have 64-bit ptrdiff_t.
     *    In 32-bit mode we are safe, because (chainLog <= 29), so
     *    ip+ZSTD_CHUNKSIZE_MAX - cctx->base < 1<<32.
     * 3. (cctx->lowLimit + 1<<windowLog) < 1<<32:
     *    windowLog <= 31 ==> 3<<29 + 1<<windowLog < 7<<29 < 1<<32.
     */
    U32 const cycleMask = (1U << cycleLog) - 1;
    U32 const current = (U32)((BYTE const*)src - window->base);
    U32 const currentCycle0 = current & cycleMask;
    /* Exclude zero so that newCurrent - maxDist >= 1. */
    U32 const currentCycle1 = currentCycle0 == 0 ? (1U << cycleLog) : currentCycle0;
    U32 const newCurrent = currentCycle1 + maxDist;
    U32 const correction = current - newCurrent;
    assert((maxDist & cycleMask) == 0);
    assert(current > newCurrent);
    /* Loose bound, should be around 1<<29 (see above) */
    assert(correction > 1<<28);

    window->base += correction;
    window->dictBase += correction;
    if (window->lowLimit <= correction) window->lowLimit = 1;
    else window->lowLimit -= correction;
    if (window->dictLimit <= correction) window->dictLimit = 1;
    else window->dictLimit -= correction;

    /* Ensure we can still reference the full window. */
    assert(newCurrent >= maxDist);
    assert(newCurrent - maxDist >= 1);
    /* Ensure that lowLimit and dictLimit didn't underflow. */
    assert(window->lowLimit <= newCurrent);
    assert(window->dictLimit <= newCurrent);

    DEBUGLOG(4, "Correction of 0x%x bytes to lowLimit=0x%x", correction,
             window->lowLimit);
    return correction;
}
/**
 * ZSTD_window_enforceMaxDist():
 * Updates lowLimit so that:
 *    (srcEnd - base) - lowLimit == maxDist + loadedDictEnd
 *
 * It ensures index is valid as long as index >= lowLimit.
 * This must be called before a block compression call.
 *
 * loadedDictEnd is only defined if a dictionary is in use for current compression.
 * As the name implies, loadedDictEnd represents the index at end of dictionary.
 * The value lies within context's referential, it can be directly compared to blockEndIdx.
 *
 * If loadedDictEndPtr is NULL, no dictionary is in use, and we use loadedDictEnd == 0.
 * If loadedDictEndPtr is not NULL, we set it to zero after updating lowLimit.
 * This is because dictionaries are allowed to be referenced fully
 * as long as the last byte of the dictionary is in the window.
 * Once input has progressed beyond window size, dictionary cannot be referenced anymore.
 *
 * In normal dict mode, the dictionary lies between lowLimit and dictLimit.
 * In dictMatchState mode, lowLimit and dictLimit are the same,
 * and the dictionary is below them.
 * forceWindow and dictMatchState are therefore incompatible.
 */
MEM_STATIC void
ZSTD_window_enforceMaxDist(ZSTD_window_t* window,
                     const void* blockEnd,
                           U32   maxDist,
                           U32*  loadedDictEndPtr,
                     const ZSTD_matchState_t** dictMatchStatePtr)
{
    U32 const blockEndIdx = (U32)((BYTE const*)blockEnd - window->base);
    U32 const loadedDictEnd = (loadedDictEndPtr != NULL) ? *loadedDictEndPtr : 0;
    DEBUGLOG(5, "ZSTD_window_enforceMaxDist: blockEndIdx=%u, maxDist=%u, loadedDictEnd=%u",
                (unsigned)blockEndIdx, (unsigned)maxDist, (unsigned)loadedDictEnd);

    /* - When there is no dictionary : loadedDictEnd == 0.
         In which case, the test (blockEndIdx > maxDist) is merely to avoid
         overflowing next operation `newLowLimit = blockEndIdx - maxDist`.
       - When there is a standard dictionary :
         Index referential is copied from the dictionary,
         which means it starts from 0.
         In which case, loadedDictEnd == dictSize,
         and it makes sense to compare `blockEndIdx > maxDist + dictSize`
         since `blockEndIdx` also starts from zero.
       - When there is an attached dictionary :
         loadedDictEnd is expressed within the referential of the context,
         so it can be directly compared against blockEndIdx.
    */
    if (blockEndIdx > maxDist + loadedDictEnd) {
        U32 const newLowLimit = blockEndIdx - maxDist;
        if (window->lowLimit < newLowLimit) window->lowLimit = newLowLimit;
        if (window->dictLimit < window->lowLimit) {
            DEBUGLOG(5, "Update dictLimit to match lowLimit, from %u to %u",
                        (unsigned)window->dictLimit, (unsigned)window->lowLimit);
            window->dictLimit = window->lowLimit;
        }
        /* On reaching window size, dictionaries are invalidated */
        if (loadedDictEndPtr) *loadedDictEndPtr = 0;
        if (dictMatchStatePtr) *dictMatchStatePtr = NULL;
    }
}

/* Similar to ZSTD_window_enforceMaxDist(),
 * but only invalidates dictionary
 * when input progresses beyond window size.
 * assumption : loadedDictEndPtr and dictMatchStatePtr are valid (non NULL)
 *              loadedDictEnd uses same referential as window->base
 *              maxDist is the window size */
MEM_STATIC void
ZSTD_checkDictValidity(const ZSTD_window_t* window,
                       const void* blockEnd,
                             U32   maxDist,
                             U32*  loadedDictEndPtr,
                       const ZSTD_matchState_t** dictMatchStatePtr)
{
    assert(loadedDictEndPtr != NULL);
    assert(dictMatchStatePtr != NULL);
    {   U32 const blockEndIdx = (U32)((BYTE const*)blockEnd - window->base);
        U32 const loadedDictEnd = *loadedDictEndPtr;
        DEBUGLOG(5, "ZSTD_checkDictValidity: blockEndIdx=%u, maxDist=%u, loadedDictEnd=%u",
                    (unsigned)blockEndIdx, (unsigned)maxDist, (unsigned)loadedDictEnd);
        assert(blockEndIdx >= loadedDictEnd);

        if (blockEndIdx > loadedDictEnd + maxDist) {
            /* On reaching window size, dictionaries are invalidated.
             * For simplification, if window size is reached anywhere within next block,
             * the dictionary is invalidated for the full block.
             */
            DEBUGLOG(6, "invalidating dictionary for current block (distance > windowSize)");
            *loadedDictEndPtr = 0;
            *dictMatchStatePtr = NULL;
        } else {
            if (*loadedDictEndPtr != 0) {
                DEBUGLOG(6, "dictionary considered valid for current block");
    }   }   }
}

MEM_STATIC void ZSTD_window_init(ZSTD_window_t* window) {
    memset(window, 0, sizeof(*window));
    window->base = (BYTE const*)"";
    window->dictBase = (BYTE const*)"";
    window->dictLimit = 1;    /* start from 1, so that 1st position is valid */
    window->lowLimit = 1;     /* it ensures first and later CCtx usages compress the same */
    window->nextSrc = window->base + 1;   /* see issue #1241 */
}

/**
 * ZSTD_window_update():
 * Updates the window by appending [src, src + srcSize) to the window.
 * If it is not contiguous, the current prefix becomes the extDict, and we
 * forget about the extDict. Handles overlap of the prefix and extDict.
 * Returns non-zero if the segment is contiguous.
 */
MEM_STATIC U32 ZSTD_window_update(ZSTD_window_t* window,
                                  void const* src, size_t srcSize)
{
    BYTE const* const ip = (BYTE const*)src;
    U32 contiguous = 1;
    DEBUGLOG(5, "ZSTD_window_update");
    if (srcSize == 0)
        return contiguous;
    assert(window->base != NULL);
    assert(window->dictBase != NULL);
    /* Check if blocks follow each other */
    if (src != window->nextSrc) {
        /* not contiguous */
        size_t const distanceFromBase = (size_t)(window->nextSrc - window->base);
        DEBUGLOG(5, "Non contiguous blocks, new segment starts at %u", window->dictLimit);
        window->lowLimit = window->dictLimit;
        assert(distanceFromBase == (size_t)(U32)distanceFromBase);  /* should never overflow */
        window->dictLimit = (U32)distanceFromBase;
        window->dictBase = window->base;
        window->base = ip - distanceFromBase;
        // ms->nextToUpdate = window->dictLimit;
        if (window->dictLimit - window->lowLimit < HASH_READ_SIZE) window->lowLimit = window->dictLimit;   /* too small extDict */
        contiguous = 0;
    }
    window->nextSrc = ip + srcSize;
    /* if input and dictionary overlap : reduce dictionary (area presumed modified by input) */
    if ( (ip+srcSize > window->dictBase + window->lowLimit)
       & (ip < window->dictBase + window->dictLimit)) {
        ptrdiff_t const highInputIdx = (ip + srcSize) - window->dictBase;
        U32 const lowLimitMax = (highInputIdx > (ptrdiff_t)window->dictLimit) ? window->dictLimit : (U32)highInputIdx;
        window->lowLimit = lowLimitMax;
        DEBUGLOG(5, "Overlapping extDict and input : new lowLimit = %u", window->lowLimit);
    }
    return contiguous;
}

MEM_STATIC U32 ZSTD_getLowestMatchIndex(const ZSTD_matchState_t* ms, U32 current, unsigned windowLog)
{
    U32    const maxDistance = 1U << windowLog;
    U32    const lowestValid = ms->window.lowLimit;
    U32    const withinWindow = (current - lowestValid > maxDistance) ? current - maxDistance : lowestValid;
    U32    const isDictionary = (ms->loadedDictEnd != 0);
    U32    const matchLowest = isDictionary ? lowestValid : withinWindow;
    return matchLowest;
}
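
/* Worked example (illustrative) : with windowLog == 16 (maxDistance == 65536), current == 100000,
 * lowLimit == 1 and no dictionary loaded, the lowest searchable index is 100000 - 65536 == 34464 ;
 * when a dictionary is still valid (loadedDictEnd != 0), the full range down to lowLimit remains usable. */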


/* debug functions */
#if (DEBUGLEVEL>=2)

MEM_STATIC double ZSTD_fWeight(U32 rawStat)
{
    U32 const fp_accuracy = 8;
    U32 const fp_multiplier = (1 << fp_accuracy);
    U32 const newStat = rawStat + 1;
    U32 const hb = ZSTD_highbit32(newStat);
    U32 const BWeight = hb * fp_multiplier;
    U32 const FWeight = (newStat << fp_accuracy) >> hb;
    U32 const weight = BWeight + FWeight;
    assert(hb + fp_accuracy < 31);
    return (double)weight / fp_multiplier;
}
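
/* Worked example (illustrative) : for rawStat == 15, newStat == 16, hb == 4,
 * BWeight == 4 * 256 == 1024 and FWeight == (16 << 8) >> 4 == 256, so the result is
 * 1280 / 256 == 5.0, a fixed-point approximation of log2(newStat) + 1. */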

/* display a table content,
 * listing each element, its frequency, and its predicted bit cost */
MEM_STATIC void ZSTD_debugTable(const U32* table, U32 max)
{
    unsigned u, sum;
    for (u=0, sum=0; u<=max; u++) sum += table[u];
    DEBUGLOG(2, "total nb elts: %u", sum);
    for (u=0; u<=max; u++) {
        DEBUGLOG(2, "%2u: %5u  (%.2f)",
                u, table[u], ZSTD_fWeight(sum) - ZSTD_fWeight(table[u]) );
    }
}

#endif

#if defined (__cplusplus)
}
#endif

/* ===============================================================
 * Shared internal declarations
 * These prototypes may be called from sources not in lib/compress
 * =============================================================== */

/* ZSTD_loadCEntropy() :
 * dict : must point at beginning of a valid zstd dictionary.
 * return : size of dictionary header (size of magic number + dict ID + entropy tables)
 * assumptions : magic number supposed already checked
 *               and dictSize >= 8 */
size_t ZSTD_loadCEntropy(ZSTD_compressedBlockState_t* bs, void* workspace,
                         short* offcodeNCount, unsigned* offcodeMaxValue,
                         const void* const dict, size_t dictSize);

void ZSTD_reset_compressedBlockState(ZSTD_compressedBlockState_t* bs);

/* ==============================================================
 * Private declarations
 * These prototypes shall only be called from within lib/compress
 * ============================================================== */

/* ZSTD_getCParamsFromCCtxParams() :
 * cParams are built depending on compressionLevel, src size hints,
 * LDM and manually set compression parameters.
 * Note: srcSizeHint == 0 means 0!
 */
ZSTD_compressionParameters ZSTD_getCParamsFromCCtxParams(
        const ZSTD_CCtx_params* CCtxParams, U64 srcSizeHint, size_t dictSize);

/*! ZSTD_initCStream_internal() :
 *  Private use only. Init streaming operation.
 *  expects params to be valid.
 *  must receive dict, or cdict, or none, but not both.
 *  @return : 0, or an error code */
size_t ZSTD_initCStream_internal(ZSTD_CStream* zcs,
                     const void* dict, size_t dictSize,
                     const ZSTD_CDict* cdict,
                     const ZSTD_CCtx_params* params, unsigned long long pledgedSrcSize);

void ZSTD_resetSeqStore(seqStore_t* ssPtr);

/*! ZSTD_getCParamsFromCDict() :
 *  as the name implies */
ZSTD_compressionParameters ZSTD_getCParamsFromCDict(const ZSTD_CDict* cdict);

/* ZSTD_compressBegin_advanced_internal() :
 * Private use only. To be called from zstdmt_compress.c. */
size_t ZSTD_compressBegin_advanced_internal(ZSTD_CCtx* cctx,
                                    const void* dict, size_t dictSize,
                                    ZSTD_dictContentType_e dictContentType,
                                    ZSTD_dictTableLoadMethod_e dtlm,
                                    const ZSTD_CDict* cdict,
                                    const ZSTD_CCtx_params* params,
                                    unsigned long long pledgedSrcSize);

/* ZSTD_compress_advanced_internal() :
 * Private use only. To be called from zstdmt_compress.c. */
size_t ZSTD_compress_advanced_internal(ZSTD_CCtx* cctx,
                                       void* dst, size_t dstCapacity,
                                 const void* src, size_t srcSize,
                                 const void* dict, size_t dictSize,
                                 const ZSTD_CCtx_params* params);

/* ZSTD_writeLastEmptyBlock() :
 * output an empty Block with end-of-frame mark to complete a frame
 * @return : size of data written into `dst` (== ZSTD_blockHeaderSize (defined in zstd_internal.h))
 *           or an error code if `dstCapacity` is too small (<ZSTD_blockHeaderSize)
 */
size_t ZSTD_writeLastEmptyBlock(void* dst, size_t dstCapacity);

/* ZSTD_referenceExternalSequences() :
 * Must be called before starting a compression operation.
 * seqs must parse a prefix of the source.
 * This cannot be used when long range matching is enabled.
 * Zstd will use these sequences, and pass the literals to a secondary block
 * compressor.
 * @return : An error code on failure.
 * NOTE: seqs are not verified! Invalid sequences can cause out-of-bounds memory
 * access and data corruption.
 */
size_t ZSTD_referenceExternalSequences(ZSTD_CCtx* cctx, rawSeq* seq, size_t nbSeq);


#endif /* ZSTD_COMPRESS_H */