Commit Graph

1206 Commits

Author SHA1 Message Date
Yann Collet
174bd3d4a7
Merge pull request #1131 from facebook/zstdcli
minor: control numeric argument overflow
2018-05-14 11:53:58 -07:00
Yann Collet
9cd5c63771 cli: control numeric argument overflow
exit on overflow
backported from paramgrill
added associated test case
2018-05-12 14:29:33 -07:00
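
A minimal sketch of the overflow check described above, assuming a hypothetical parsing helper (the function name, message, and exit policy are illustrative, not necessarily the exact zstdcli code):

```c
#include <limits.h>   /* UINT_MAX */
#include <stdio.h>
#include <stdlib.h>

/* Parse an unsigned decimal value from *stringPtr, advancing the pointer.
 * Exits the program if the value would overflow an unsigned int. */
static unsigned readU32FromChar(const char** stringPtr)
{
    unsigned result = 0;
    while ((**stringPtr >= '0') && (**stringPtr <= '9')) {
        unsigned const digit = (unsigned)(**stringPtr - '0');
        if (result > (UINT_MAX - digit) / 10) {   /* next step would overflow */
            fprintf(stderr, "error: numeric argument too large\n");
            exit(1);
        }
        result = result * 10 + digit;
        (*stringPtr)++;
    }
    return result;
}
```
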
Yann Collet
b824d213cb fix #1115 2018-05-12 10:21:30 -07:00
cyan4973
62487b5e76 fixed decoding bogus lz4 frame
FIO would keep presenting data after an LZ4F decoding error,
resulting in a NULL pointer dereference
when linked against an older liblz4 version (< v1.8.1.2)
2018-04-23 18:50:16 -07:00
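
A hedged sketch of the defensive pattern behind this fix: check the LZ4F return code and stop feeding the decoder on error, rather than continuing with bogus state. Buffer handling is abbreviated and the function name is illustrative:

```c
#include <lz4frame.h>
#include <stdio.h>

/* Decode one chunk; returns 0 on success, 1 on error.
 * On error, the caller must stop presenting further data to the decoder. */
static int decodeChunk(LZ4F_dctx* dctx,
                       void* dstBuf, size_t dstCapacity,
                       const void* srcBuf, size_t srcSize)
{
    size_t dstSize = dstCapacity;   /* in: capacity, out: bytes produced */
    size_t readSize = srcSize;      /* in: available, out: bytes consumed */
    size_t const hint = LZ4F_decompress(dctx, dstBuf, &dstSize,
                                        srcBuf, &readSize, NULL);
    if (LZ4F_isError(hint)) {
        fprintf(stderr, "lz4 decoding error : %s\n", LZ4F_getErrorName(hint));
        return 1;
    }
    /* ... write dstSize output bytes, advance input by readSize ... */
    return 0;
}
```
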
Yann Collet
1da629f2ad
Merge pull request #1104 from terrelln/fast-train
Allow negative compression levels in training
2018-04-09 14:16:20 -07:00
Nick Terrell
569e2abccd Allow negative compression levels in training
* Set `dictCLevel` in `zstdcli.c`.
* Only set to default level if the compression level `== 0`, not `<= 0`.
2018-04-09 12:12:03 -07:00
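
In code terms, the change amounts to the following condition (a sketch: `dictCLevel` is the zstdcli variable mentioned above, and `ZSTDCLI_CLEVEL_DEFAULT` stands in for whatever default constant the CLI uses):

```c
/* Before : negative levels were silently replaced by the default level. */
if (dictCLevel <= 0) dictCLevel = ZSTDCLI_CLEVEL_DEFAULT;

/* After : only an unset level (0) falls back to the default,
 * so negative levels reach the dictionary trainer unchanged. */
if (dictCLevel == 0) dictCLevel = ZSTDCLI_CLEVEL_DEFAULT;
```
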
Björn Ketelaars
e5ea8d272a fix typo in programs/zstd.{1,1.md}
s/nodictID/no-dictID/g
2018-04-05 06:44:46 +02:00
Yann Collet
7188862d32
Merge pull request #1086 from hagemt/hagemt-patch-1
Correct small typo in manual (man file and markdown)
2018-03-30 20:45:10 -06:00
Tor E Hagemann
c7a5e60bc6
Update zstd.1.md 2018-03-30 15:25:32 -07:00
Tor E Hagemann
292d370ab4
Update zstd.1 2018-03-30 14:53:57 -07:00
Yann Collet
525f3fab33 restored ability to manually set overlapLog 2018-03-28 11:33:41 -06:00
Yann Collet
01082a39bd restored simple status line during zstd compression
the more advanced one, featuring the amount of data buffered,
is triggered by `-v`.
2018-03-22 17:49:46 -07:00
Yann Collet
153bc1c004 removed limit ZSTD_TARGETLENGTH_MAX
this makes it possible to specify extremely large negative compression levels,
achieving, as a side effect, "no compression".

It will also be possible to define a larger targetLength for ultra compression mode.

There is no adverse side effect to removing this limit.
2018-03-21 15:50:05 -07:00
Yann Collet
353117c5d7 implemented ZSTD_DCtx_loadDictionary*()
this required updating ZSTD_createDDict_advanced()
to accept a dictContentType parameter (raw, full, auto).
2018-03-20 13:40:29 -07:00
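
A minimal sketch of the decoder-side dictionary API this commit introduces, written with the current public names and with error handling abbreviated:

```c
#include <zstd.h>

/* Decompress `src` using a dictionary loaded by content into the DCtx.
 * Content type detection ("auto") decides between a raw-content dictionary
 * and a fully formatted zstd dictionary. */
size_t decompressWithDict(void* dst, size_t dstCapacity,
                          const void* src, size_t srcSize,
                          const void* dict, size_t dictSize)
{
    ZSTD_DCtx* const dctx = ZSTD_createDCtx();
    size_t result = ZSTD_DCtx_loadDictionary(dctx, dict, dictSize);
    if (!ZSTD_isError(result))
        result = ZSTD_decompressDCtx(dctx, dst, dstCapacity, src, srcSize);
    ZSTD_freeDCtx(dctx);
    return result;   /* decompressed size, or an error code (check with ZSTD_isError) */
}
```
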
Yann Collet
4c5cbac179
Merge pull request #1041 from facebook/fasterFast
Negative compression levels
2018-03-13 21:32:46 -07:00
Yann Collet
bd7bb94361
Merge pull request #1044 from baldurk/remove-utf8-characters
Remove non-ASCII characters in header file comments
2018-03-13 13:22:07 -07:00
Baldur Karlsson
430a2fec19 Remove non-ASCII characters in header file comments
* Replaced a non-breaking space and an en dash with a plain space and
  a hyphen.
* This means the files are simple ASCII and less likely to run into
  codepage issues.
2018-03-13 20:05:53 +00:00
Jesse Talavera-Greenberg
2f70fbf2a3
Made -H's printout specify the semantics of -T0 2018-03-12 20:43:32 -04:00
Yann Collet
a57d43d4d4 updated documentation of targetLength 2018-03-12 11:35:01 -07:00
Yann Collet
f24566b597 minor bench improvements
- Do not test level 0, as it is converted into level 3,
  which is confusing when benchmarking multiple levels.
- Use direct synchronous mode when a single worker is requested.
2018-03-12 04:02:57 -07:00
Yann Collet
6a9b41b731 create command --fast[=#]
access negative compression levels from the command line,
for both compression and benchmark modes.

also : ensure proper propagation of parameters
through the ZSTD_compress_generic() interface.

added relevant cli tests.
2018-03-11 20:01:23 -07:00
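
At the library level, `--fast=5` boils down to setting a negative compression level through the advanced API; a sketch using today's stable names (at the time of this commit, the same path went through ZSTD_compress_generic()):

```c
#include <zstd.h>

size_t compressFast5(void* dst, size_t dstCapacity,
                     const void* src, size_t srcSize)
{
    ZSTD_CCtx* const cctx = ZSTD_createCCtx();
    size_t result;
    /* Negative levels trade compression ratio for speed; --fast=5 maps to -5. */
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, -5);
    result = ZSTD_compress2(cctx, dst, dstCapacity, src, srcSize);
    ZSTD_freeCCtx(cctx);
    return result;
}
```
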
Yann Collet
a70f7e10fa Merge branch 'benchDecode' into longOffsetMode 2018-03-05 14:09:00 -08:00
Yann Collet
03e7e14192 fix benchmark issue when measuring only decoding speed
zstd bench module can focus on decompression speed _only_.
This is useful when trying to measure performance
on large input data compressed using a high level
as compression time becomes problematic (too long).

This mode is triggered by the command : `zstd -b -d`

Problem was : in such a mode,
measured decoding speed was > 10% slower
than in nominal mode (compression + decompression),
making decompression benchmark mode much less useful.

This patch fixes the issue.
It's not completely clear why, but
moving the `memcpy()` operation sooner in the pipeline fixed it.

I can still measure some difference, but it is in the < 2% range,
so it's much more tolerable.

also : it no longer matters in which order the `-b` and `-d` commands are given.
The combination always triggers bench_decodeOnly mode.
2018-03-05 13:57:41 -08:00
Yann Collet
41bd10446e Merge branch 'dev' into longOffsetMode 2018-03-05 13:10:10 -08:00
Yann Collet
b91ddf0ae6 Merge branch 'dev' into longOffsetMode 2018-03-05 11:59:54 -08:00
Conrad Meyer
606374269c FIO_addFInfo: Fully initialize output 'total' struct
Silence a Coverity warning about 'windowSize' being uninitialized.
(Yes, nothing that calls this routine actually uses the windowSize
value.  Still, appeasing Coverity is pretty harmless in this case.)
2018-02-28 15:23:05 -08:00
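
The fix pattern, sketched with an illustrative struct (not the exact fileio.c definition): zero-initialize the whole aggregate so that every field, `windowSize` included, has a defined value.

```c
typedef struct {
    unsigned long long compressedSize;
    unsigned long long decompressedSize;
    unsigned long long windowSize;   /* the field Coverity flagged */
} fileInfo_t;   /* illustrative fields only */

fileInfo_t total = { 0 };   /* zero-initializes every member, not just the first */
```
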
Yann Collet
25d00d10fc fixed minor conversion warning 2018-02-20 16:52:28 -08:00
Yann Collet
3538a535bf use TIMELOOP_NANOSEC
as suggested by @terrelln
2018-02-20 15:33:56 -08:00
Yann Collet
d3364aa39e improve benchmark measurement for small inputs
by invoking time() once per batch, instead of once per compression / decompression call.
The batch is dynamically resized so that each round lasts approximately 1 second.

Also : increases time accuracy to the nanosecond
2018-02-20 14:58:40 -08:00
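
A hedged sketch of the batching idea: time whole batches of calls rather than each call, and grow the batch until one measured round lasts about a second. The `runOnce` callback and nanosecond clock wrapper are illustrative, not bench.c symbols:

```c
#include <stdint.h>

#define TARGET_NANOSEC 1000000000ULL   /* aim for ~1 second per measured round */

/* Returns the average duration of one run, in nanoseconds. */
uint64_t benchNanosecPerRun(void (*runOnce)(void), uint64_t (*clockNanosec)(void))
{
    uint64_t nbRuns = 1;
    for (;;) {
        uint64_t const start = clockNanosec();
        for (uint64_t i = 0; i < nbRuns; i++) runOnce();
        {   uint64_t const elapsed = clockNanosec() - start;
            if (elapsed >= TARGET_NANOSEC) return elapsed / nbRuns;
            /* round too short : enlarge the batch and measure again */
            nbRuns = (elapsed > 0) ? (nbRuns * TARGET_NANOSEC) / elapsed + 1
                                   : nbRuns * 2;
        }
    }
}
```
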
Yann Collet
5cb1144872 fixed --single-thread
it was previously incorrectly mapped to -T0 (use as many cores as possible)
2018-02-13 14:56:35 -08:00
Yann Collet
04a3f85ce7 fixed gcc warning on a switch code path 2018-02-09 16:16:27 -08:00
Yann Collet
75689838e4 specify new command --single-thread 2018-02-09 15:55:41 -08:00
Yann Collet
4beaeaace5 Merge branch 'dev' into flexibleLevel 2018-02-09 09:15:05 -08:00
Yann Collet
4b525af53a zstdmt: applies new parameters on the fly
when invoked from ZSTD_compress_generic()
2018-02-02 15:58:13 -08:00
Yann Collet
90eca318a7 fileio: create dedicated function to generate zstd frames
like other formats
2018-02-02 14:24:56 -08:00
Yann Collet
549d26ae71
Merge pull request #1005 from systemcrash/dev
Update zstd.1
2018-02-02 10:04:40 -08:00
Yann Collet
6c492af284 fixed minor conversion warning 2018-02-01 20:16:00 -08:00
Yann Collet
209df52ba2 Changed nbThreads for nbWorkers
This makes it easier to explain that nbWorkers=0 --> single-threaded mode,
while nbWorkers=1 --> asynchronous mode (one worker thread on top of the "main" caller thread).
No need for an additional asynchronous mode flag.
nbWorkers>=2 works the same as nbThreads>=2 previously.
2018-02-01 19:29:30 -08:00
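
A sketch of the resulting semantics through the advanced API, using the current parameter name (requires a libzstd built with multithreading support):

```c
#include <zstd.h>

void setWorkers(ZSTD_CCtx* cctx, int nbWorkers)
{
    /* 0   : single-threaded, compression runs on the calling thread
     * 1   : asynchronous, one worker thread on top of the calling thread
     * >=2 : multi-threaded compression, same as nbThreads >= 2 previously */
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_nbWorkers, nbWorkers);
}
```
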
Yann Collet
4b6a94f0cc clarified comments on LDM parameters 2018-02-01 17:07:27 -08:00
Yann Collet
2bfc79ab8d removed bitstream.h dependency 2018-02-01 16:13:04 -08:00
Yann Collet
823a28a1f4
Merge pull request #1000 from facebook/progressiveFlush
Progressive flush
2018-01-30 22:49:47 -08:00
systemcrash
d13a75c969
Update zstd.1 2018-01-29 18:38:02 +01:00
Yann Collet
9f8ed23b5b bumped version number to v1.3.4
also added a paragraph on using compression levels with training mode,
as this is a recurring question (see for example #1004)
2018-01-27 22:23:26 -08:00
ne-sted
50aea2f293 cli: fix alignment of defaults 2018-01-24 15:07:22 +02:00
Yann Collet
cb5eba8e20 add zcat symlink support, suggested by @wtarreau
added some tests
also updated relevant documentation

+ fixed a mistake in `lz4` symlink support :
  the lz4 utility doesn't remove source files by default (like zstd, but unlike gzip).
  The symlink must behave the same way.
2018-01-19 11:26:35 -08:00
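
A hedged sketch of how name-based dispatch of this kind typically works: the CLI inspects the name it was invoked under (argv[0]) and picks defaults accordingly. The struct and helper are illustrative, not the actual zstdcli code:

```c
#include <string.h>

typedef struct {
    int decompressOnly;   /* zcat / zstdcat : decompress to stdout */
    int forceStdout;
    int removeSrcFile;    /* gzip removes sources by default; zstd and lz4 do not */
} CliDefaults;

static CliDefaults defaultsFromProgName(const char* progName)
{
    CliDefaults d = { 0, 0, 0 };                 /* plain zstd : keep source files */
    if (strstr(progName, "zcat") != NULL) {
        d.decompressOnly = 1;
        d.forceStdout = 1;
    } else if (strstr(progName, "gzip") != NULL) {
        d.removeSrcFile = 1;                     /* gzip-like behaviour */
    }
    /* an lz4 symlink keeps source files, like zstd but unlike gzip */
    return d;
}
```
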
Yann Collet
70f81d6030 zstdmt uses POOL_tryAdd() to call a new worker
so that it's no longer a blocking call.
This makes it possible to stream out data gradually,
while waiting for a worker to become available.
2018-01-19 10:01:40 -08:00
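
A hedged sketch of the non-blocking pattern, using zstd's internal pool interface (POOL_tryAdd() returns non-zero only when the job could be queued); the surrounding scheduling helper is illustrative:

```c
#include "pool.h"   /* zstd internal header : POOL_ctx, POOL_tryAdd() */

/* Try to hand a job to the worker pool without blocking.
 * Returns 1 if the job was queued, 0 if all workers are busy;
 * in the latter case the caller can flush already-produced data and retry. */
static int tryScheduleJob(POOL_ctx* pool, void (*jobFn)(void*), void* jobArg)
{
    if (!POOL_tryAdd(pool, jobFn, jobArg)) {
        /* No worker available : return to the caller so that data already
         * compressed can be streamed out, instead of blocking here. */
        return 0;
    }
    return 1;
}
```
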
Yann Collet
4d08ba8b77 fileio: READY_FOR_UPDATE() is now a function-like macro
as suggested by @terrelln
2018-01-18 11:27:13 -08:00
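
As a one-line illustration of the style change (the condition shown is a placeholder, not the actual fileio.c expression): a function-like macro requires parentheses at the call site, so usage reads like a call and the bare name never expands by accident.

```c
#define READY_FOR_UPDATE() (clockSpanMicro(g_displayClock) > g_refreshRate)   /* placeholder condition */
```
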
Yann Collet
aa79c18e3f fixed a few access contentions
passes thread sanitizer test
2018-01-17 17:18:19 -08:00
Yann Collet
394eec697b Introduce ZSTD_getFrameProgression()
Produces 3 statistics for ongoing frame compression :
- ingested
- consumed (effectively compressed)
- produced

Ingested can be larger than consumed due to buffering effects.

For the time being, this patch mostly fixes the % ratio issue,
since it computes consumed / produced,
instead of ingested / produced.

That being said, the update is not "smooth",
because on a slow enough setting,
fileio spends most of its time waiting for a worker to complete its job.

This could be improved thanks to more granular flushing,
i.e. starting to flush before the ongoing job is fully completed.
2018-01-17 16:39:02 -08:00
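
A sketch of how a caller can poll these statistics to display a ratio based on data actually compressed (ZSTD_getFrameProgression() is part of the experimental API, hence ZSTD_STATIC_LINKING_ONLY; the formatting is illustrative):

```c
#define ZSTD_STATIC_LINKING_ONLY
#include <zstd.h>
#include <stdio.h>

void printProgress(const ZSTD_CCtx* cctx)
{
    ZSTD_frameProgression const fp = ZSTD_getFrameProgression(cctx);
    /* ingested >= consumed because of buffering; the ratio is computed from
     * consumed (data actually compressed so far), not from ingested */
    if (fp.consumed > 0)
        fprintf(stderr, "\r%llu MB consumed ==> %.2f%%",
                (unsigned long long)(fp.consumed >> 20),
                (double)fp.produced / (double)fp.consumed * 100.0);
}
```
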
Yann Collet
58dd7de640 zstdmt: fixed an endless loop on allocation failure
this happened on 32-bit builds when requesting a too-large input buffer,
typically with wlog=29, creating jobs of 2 GB in size.

also : zstd32 now compiles with multithread support enabled by default
(can be disabled with HAVE_THREAD=0)
2018-01-17 12:10:15 -08:00