Commit Graph

217 Commits

Author SHA1 Message Date
George Lu
50d612f4f0 Interleave compression/decompression
Fix Bugs
2018-06-25 15:01:03 -07:00
George Lu
d6121ad0e1 Opaque State
And minor fixups (comments/alignment/checks/fix memory leak)
2018-06-25 08:07:43 -07:00
George Lu
ab26f24c9c benchFunction Timed Wrappers
Add BMK_benchFunctionTimed
Add BMK_init_customResultCont..
Change benchMem to use benchFunctionTimed
Minor Fixes/Adjustments
2018-06-21 16:23:55 -07:00
George Lu
a8eea99ebe Incremental Display + Fn Separations
Separate syntheticTest and fileTableTest (now renamed as benchFiles)
Add incremental display to benchMem
Change to only iterMode for benchFunction
Make Synthetic test's compressibility configurable from cli (using -P#)
2018-06-21 16:23:18 -07:00
George Lu
a3c8b59990 Fix cli no print
Change looping behavior to match old
2018-06-18 15:38:14 -07:00
George Lu
e482e328cd Reorder Arguments
make initFn nullable
2018-06-18 13:21:42 -07:00
George Lu
0d1ee22990 Requested Changes
Add Comment
Simplify Interface (Remove resultSet)
Reorder Arguments
Remove customBench displayLevel
Reorder bench.h
Change benchFiles return type to match advanced
Rename stuff
2018-06-18 12:01:12 -07:00
George Lu
8522346322 Make Fullbench use new function
Rearrange Args
Add nothing function
Use new function, change locals to match
New Display
Comment cleanup
Change builds
2018-06-15 11:37:49 -04:00
George Lu
20f4f32379 Add to bench
-Remove global variables
-Remove gv setting functions
-Add advancedParams struct
-Add defaultAdvancedParams();
-Change return type of bench Files
-Change cli to use new interface
-Changed error returns to own struct value
-Change default compression benchmark to use decompress_generic
-Add CustomBench function
-Add Documentation for new functions
2018-06-14 14:23:24 -04:00
George Lu
01d940b670 Requested changes
-Remove g_displaylevel/setNotificationLevel function
-Add extern "C"
-Remove averaging
-Reorder arguments

More fixes

-Added BMK_return_t (result + possible error)
-Correct comment
-Nullcheck ctx, dctx when allocated
-Remove extra assert
2018-06-12 17:02:44 -04:00
George Lu
0e808d608b Make paramgrill use bench.c benchmarking 2018-06-08 12:01:05 -07:00
Yann Collet
f24566b597 minor bench improvements
- do not test level 0, as it is converted into level 3,
  which feels strange when compressing multiple levels
- Use direct synchronous mode when a single worker is requested.
2018-03-12 04:02:57 -07:00
Yann Collet
6a9b41b731 create command --fast[=#]
access negative compression levels from command line
for both compression and benchmark modes.

also : ensure proper propagation of parameters
through ZSTD_compress_generic() interface.

added relevant cli tests.
2018-03-11 20:01:23 -07:00
Yann Collet
a70f7e10fa Merge branch 'benchDecode' into longOffsetMode 2018-03-05 14:09:00 -08:00
Yann Collet
03e7e14192 fix benchmark issue when measuring only decoding speed
zstd bench module can focus on decompression speed _only_.
This is useful when trying to measure performance
on large input data compressed using a high level
as compression time becomes problematic (too long).

This mode is triggered by command : zstd -b -d

Problem was : in such a mode,
measured decoding speed was > 10% slower
than in nominal mode (compression + decompression),
making decompression benchmark mode much less useful.

This patch fixes the issue.
It's not completely clear why, but
moving the `memcpy()` operation sooner in the pipeline fixed it.

I can still measure some difference, but it is in the < 2% range,
so it's much more tolerable.

also : it doesn't matter anymore in which order are selected
commands `-b` and `-d`.
The combination always triggers bench_decodeOnly mode.
2018-03-05 13:57:41 -08:00
Yann Collet
b91ddf0ae6 Merge branch 'dev' into longOffsetMode 2018-03-05 11:59:54 -08:00
Yann Collet
25d00d10fc fixed minor conversion warning 2018-02-20 16:52:28 -08:00
Yann Collet
3538a535bf use TIMELOOP_NANOSEC
as suggested by @terrelln
2018-02-20 15:33:56 -08:00
Yann Collet
d3364aa39e improve benchmark measurement for small inputs
by invoking time() once per batch, instead of once per compression / decompression.
Batch is dynamically resized so that each round lasts approximately 1 second.

Also : increases time accuracy to nanosecond
2018-02-20 14:58:40 -08:00
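
A minimal sketch of the batching approach described above, not bench.c's actual code: each round times a whole batch of calls with a single pair of clock reads, and the batch doubles until a round lasts roughly one second (a POSIX clock_gettime nanosecond clock is assumed).

    /* Illustrative only, not bench.c's actual code. */
    #include <stddef.h>
    #include <time.h>

    #define TIMELOOP_NANOSEC (1ULL * 1000 * 1000 * 1000)   /* target: ~1 second per round */

    typedef size_t (*benchFn_t)(void* payload);

    static unsigned long long clockNs(void)                /* assumes POSIX clock_gettime */
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (unsigned long long)ts.tv_sec * 1000000000ULL
             + (unsigned long long)ts.tv_nsec;
    }

    /* Returns the average number of nanoseconds per call of fn(payload). */
    unsigned long long benchAvgNsPerCall(benchFn_t fn, void* payload)
    {
        unsigned long long nbCallsPerRound = 1;
        for (;;) {
            unsigned long long const start = clockNs();
            unsigned long long i;
            for (i = 0; i < nbCallsPerRound; i++) (void)fn(payload);
            {   unsigned long long const elapsed = clockNs() - start;
                if (elapsed >= TIMELOOP_NANOSEC)
                    return elapsed / nbCallsPerRound;      /* round long enough: report */
                nbCallsPerRound *= 2;                      /* too short: double the batch */
            }
        }
    }
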
Yann Collet
04a3f85ce7 fixed gcc warning on a switch code path 2018-02-09 16:16:27 -08:00
Yann Collet
209df52ba2 Changed nbThreads for nbWorkers
This makes it easier to explain that nbWorkers=0 --> single-threaded mode,
while nbWorkers=1 --> asynchronous mode (one more thread on top of the "main" caller thread).
No need for an additional asynchronous mode flag.
nbWorkers>=2 works the same as nbThreads>=2 previously.
2018-02-01 19:29:30 -08:00
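
For reference, a hedged sketch of the resulting usage pattern, written against today's stable API names (ZSTD_c_nbWorkers and ZSTD_compress2, which postdate this commit): nbWorkers==0 keeps compression in the calling thread, while nbWorkers>=1 offloads it to worker thread(s) in a ZSTD_MULTITHREAD build.

    /* Sketch using the current public API; not the code touched by this commit. */
    #include <zstd.h>

    size_t compressWithWorkers(void* dst, size_t dstCapacity,
                               const void* src, size_t srcSize,
                               int nbWorkers, int level)
    {
        ZSTD_CCtx* const cctx = ZSTD_createCCtx();
        size_t result;
        if (cctx == NULL) return (size_t)(-1);             /* reads as an error to ZSTD_isError() */
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, level);
        /* nbWorkers==0 : single-threaded; nbWorkers>=1 : asynchronous worker thread(s)
         * (requires a ZSTD_MULTITHREAD build to take effect). */
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_nbWorkers, nbWorkers);
        result = ZSTD_compress2(cctx, dst, dstCapacity, src, srcSize);
        ZSTD_freeCCtx(cctx);
        return result;
    }
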
Yann Collet
c707c6e9f2 fix: bench can accept hlog custom parameter
was ignored during initialization
2017-12-27 13:32:05 +01:00
Yann Collet
a1b24e6262 Merge pull request #938 from terrelln/time
Use util.h for timing
2017-12-01 16:40:38 -08:00
Nick Terrell
dab8cfa3c7 Combine definitions of SEC_TO_MICRO 2017-11-30 19:40:53 -08:00
Nick Terrell
9a2f6f477b Use util.h for timing 2017-11-30 14:57:25 -08:00
Yann Collet
0a0a212934 zstd_opt: changed cost formula
There was a flaw in the formula
which compared literal cost with match cost :
at a given position,
a non-empty run of literals is going to be part of the next sequence,
while if the position ends a previous match to immediately start another one,
the next sequence will have a litlength of zero.
A litlength of zero has a non-zero cost.
It follows that literal cost should be compared to match cost + the cost of litlength==0.

Not doing so gave a structural advantage to matches, which would be selected more often.
I believe that's what led to the creation of the strange heuristic which added a complex cost to matches.
The heuristic was actually compensating.
It was probably created through multiple trials, settling for best outcome on a given scenario (I suspect silesia.tar).
The problem with this heuristic is that it's hard to understand,
and unfortunately, any future change in the parser would impact the way it should be calculated and its effects.

The "proper" formula makes it possible to remove this heuristic.

Now, the problem is : in a head to head comparison, it's sometimes better, sometimes worse.
Note that all differences are small (< 0.01 ratio).
In general, the newer formula is better for smaller files (for example, calgary.tar and enwik7).
I suspect that's because starting statistics are pretty poor (another area of improvement).
However, for silesia.tar specifically, it's worse at level 22 (while being better at level 17, so even compression level has an impact ...).

It's a pity that zstd -22 gets worse on silesia.tar.
That being said, I like that the new code gets rid of strange variables,
which were introducing complexity for any future evolution (faster variants being in mind).
Therefore, in spite of this detrimental side effect, I tend to be in favor of it.
2017-11-28 14:07:03 -08:00
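
In other words, at each position the parser should weigh

    price(pending literals, litLength = L)    versus    price(match) + price(litLength = 0)

rather than comparing against price(match) alone. The price() names here are illustrative shorthand for zstd_opt's internal cost estimators, not its actual identifiers.
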
Yann Collet
daebc7fe26 bench: slightly adjusted display format
adapt accuracy depending on the value.
makes it possible to have higher accuracy for small values,
notably small compression speeds.
This capability is expected to be useful while modifying optimal parser.
2017-11-18 15:54:32 -08:00
Yann Collet
5b957ba899 minor interface adjustments 2017-11-17 01:21:40 -08:00
Yann Collet
d898fb7ba6 bench: added cli command -S to benchmark multiple files separately
Currently, all files are joined by default:
they are compressed separately but benchmarked together,
providing a single final result.

Benchmarking files separately makes it possible to accurately measure the difference for each file.
This is expected to be useful while tuning the optimal parser.
2017-11-17 00:22:55 -08:00
Yann Collet
8accfa7fcc bench: realTime is a global parameter
like most parameters not directly related to compression
2017-11-17 00:02:37 -08:00
Yann Collet
9a11f70dc3 merged repcode search into BT match search
this version has the same speed as branch `opt`
which is itself 5-10% slower than branch `dev`
(no identified reason)

It does not compress exactly the same as `opt` or `dev`,
maybe because it doesn't stop search after repcodes,
leading to sometimes better compression, sometimes worse
(by a small margin).

warning : _extDict path does not work for the time being
This means that the benchmark module works,
but the file module will fail with large files (and high compression levels).
The objective is to fuse the _extDict path into the current one,
in order to have a single parser to maintain.
2017-11-13 02:23:48 -08:00
Yann Collet
6f1dfa8adf removed line with // comment
this is for a different topic
(better parameter adaptation for small files + dictionary and/or custom parameters)
2017-11-01 17:01:45 -07:00
Yann Collet
428e8b3bf4 fix : ZSTD_compress_generic(,,,ZSTD_e_end) automatically sets pledgedSrcSize
as per documentation, on ZSTD_setPledgedSrcSize() :
> If all data is provided and consumed in a single round,
> this value (pledgedSrcSize) is overridden by srcSize instead.

This wasn't applied before the compression level was transformed into compression parameters.
As a consequence, small inputs missed compression-parameter adaptation.

It seems to work fine now : compression was compared with ZSTD_compress_advanced(),
results were the same.
2017-11-01 13:15:23 -07:00
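
A hedged sketch of the single-round pattern this fix targets, using ZSTD_compressStream2, today's name for the ZSTD_compress_generic entry point mentioned above: with all input supplied and consumed in one ZSTD_e_end call, srcSize now drives compression-parameter adaptation as if it had been pledged.

    /* Sketch only; written against the current streaming API names. */
    #include <zstd.h>

    size_t compressOneShot(ZSTD_CCtx* cctx,
                           void* dst, size_t dstCapacity,
                           const void* src, size_t srcSize)
    {
        ZSTD_outBuffer out = { dst, dstCapacity, 0 };
        ZSTD_inBuffer  in  = { src, srcSize, 0 };
        size_t const remaining = ZSTD_compressStream2(cctx, &out, &in, ZSTD_e_end);
        if (ZSTD_isError(remaining)) return remaining;
        /* remaining == 0 : the frame completed within this single round */
        return out.pos;
    }
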
Yann Collet
eac42534fe bench: fixed Visual warning regarding struct initialization
also :
removed dependency on zstdmt_compress.h
removed several unused macros
fileio : small code refactoring to reduce some variable scope
2017-10-19 11:56:14 -07:00
Yann Collet
d3b9547aa4 IO and bench : ZSTD_NEWAPI is the only remaining code path
removed the other 2 code paths (single thread, and ZSTDMT ones)
keeping only the new advanced API, for easier code coverage.

It shall also fix identified issue with Visual Studio
which doesn't have ZSTD_NEWAPI defined.
2017-10-18 17:01:53 -07:00
Yann Collet
18b795374a UTIL_getFileSize() returns UTIL_FILESIZE_UNKNOWN on failure
UTIL_getFileSize() used to return zero on failure.
This made it impossible to distinguish a failure from a genuine empty file.
Both cases were coalesced.

Adding UTIL_FILESIZE_UNKNOWN constant has many consequences on user code,
since in many places, the `0` was assumed to mean "error".
This is no longer the case, and the error code must be actively checked.
2017-10-17 16:14:25 -07:00
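
A short sketch of the new calling convention; the identifiers come from programs/util.h, so this only applies inside the zstd programs/ tree.

    /* Sketch of the convention described above, not code from this commit. */
    #include "util.h"   /* UTIL_getFileSize, UTIL_FILESIZE_UNKNOWN */

    static int getKnownFileSize(const char* fileName, U64* sizePtr)
    {
        U64 const fileSize = UTIL_getFileSize(fileName);
        if (fileSize == UTIL_FILESIZE_UNKNOWN) return 0;   /* failure must now be checked explicitly */
        *sizePtr = fileSize;                               /* 0 now genuinely means "empty file" */
        return 1;
    }
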
Yann Collet
f1571dad8f Merge pull request #838 from stellamplau/ldm-mergeDev
Add long distance matcher
2017-09-13 13:24:08 -07:00
Yann Collet
c95c0c9725 modified util::time API
for easier invocation.
- no longer expose frequency timer :
it's either useless, or stored internally in a static variable (init is only necessary once).
- UTIL_getTime() provides result by function return.
2017-09-12 18:12:46 -07:00
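
An illustration of the simplified call pattern, assuming the UTIL_getTime / UTIL_clockSpanMicro helpers from programs/util.h: the caller no longer has to carry any timer-frequency state.

    /* Sketch only; helper names assumed from programs/util.h. */
    #include "util.h"

    unsigned long long timeOperationMicro(void (*op)(void* arg), void* arg)
    {
        UTIL_time_t const start = UTIL_getTime();   /* result returned by value */
        op(arg);
        return UTIL_clockSpanMicro(start);          /* elapsed microseconds since start */
    }
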
Stella Lau
eb3327c10a Merge branch 'dev' of https://github.com/facebook/zstd into ldm-mergeDev 2017-09-11 15:00:01 -07:00
Yann Collet
3128e03be6 updated license header
to clarify dual-license meaning as "or"
2017-09-08 00:09:23 -07:00
Stella Lau
eeff55dfa8 Merge remote-tracking branch 'upstream/dev' into ldm-mergeDev 2017-09-06 15:56:32 -07:00
Stella Lau
67d4a6161c Add ldmBucketSizeLog param 2017-09-02 21:55:29 -07:00
Stella Lau
a1f04d518d Move hashEveryLog to cctxParams and update cli 2017-09-01 15:05:47 -07:00
Yann Collet
0558850735 bench stops immediately on decoding error 2017-09-01 11:46:15 -07:00
Stella Lau
17d8e0bdcc Merge remote-tracking branch 'upstream/longRangeMatcher' into ldm-integrate 2017-09-01 10:19:38 -07:00
Stella Lau
8081becadc Add long distance matching as a CCtxParam 2017-09-01 09:18:58 -07:00
Yann Collet
d7ad99b2ab Merge branch 'longRangeMatcher' into dev 2017-08-31 18:08:37 -07:00
Stella Lau
6a546efb8c Add long distance matcher
Move last literals section to ZSTD_block_internal
2017-08-31 12:53:19 -07:00
Stella Lau
c88fb9267f Replace 'byReference' with enum 2017-08-29 11:55:02 -07:00
Yann Collet
32fb407c9d updated a bunch of headers
for the new license
2017-08-18 16:52:05 -07:00