This patch returns an error and exits the loop when an error occurs in the
bitmap comparison test.
Bug: skia:None
Change-Id: If7fe9d8cf6b936bf1fb84de69e73c453a6c253de
Reviewed-on: https://skia-review.googlesource.com/57600
Reviewed-by: Leon Scroggins <scroggo@google.com>
Commit-Queue: Leon Scroggins <scroggo@google.com>
Will need guards for Android (at least)
Bug: skia:
Change-Id: I2bb8e656997984489ef1f2e41cd3d301c4e7b947
Reviewed-on: https://skia-review.googlesource.com/26040
Reviewed-by: Mike Klein <mtklein@chromium.org>
Commit-Queue: Mike Reed <reed@google.com>
Update NewFromStream to report an error on failure to create an
SkCodec, so that a client can distinguish between
- not enough data
- invalid data
In Chromium, this will allow blink::ImageDecoder to call SetFailed early when
the stream is invalid and we never create an SkCodec. Without this,
ImageDecoder would keep trying to create an SkCodec each time it receives
more data.
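A minimal sketch of how a caller could use the new result reporting, assuming
the out-parameter takes roughly this shape (the exact NewFromStream signature
may differ between revisions):

#include "SkCodec.h"
#include "SkStream.h"
#include <memory>

// Returns the codec, or nullptr on failure. On failure, *failedPermanently
// tells the caller whether waiting for more data could still help.
std::unique_ptr<SkCodec> try_create_codec(SkStream* stream, bool* failedPermanently) {
    SkCodec::Result result;
    std::unique_ptr<SkCodec> codec(SkCodec::NewFromStream(stream, &result));
    if (!codec) {
        // kIncompleteInput: not enough data yet; retry when more arrives.
        // Anything else (e.g. kInvalidInput): the data is bad; fail now,
        // which is when blink::ImageDecoder would call SetFailed.
        *failedPermanently = (SkCodec::kIncompleteInput != result);
    }
    return codec;
}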
Change-Id: I4f505c56d91c982be36a828fd0f7db17b1596588
Reviewed-on: https://skia-review.googlesource.com/22642
Commit-Queue: Leon Scroggins <scroggo@google.com>
Reviewed-by: Derek Sollenberger <djsollen@google.com>
Reviewed-by: Chris Blume <cblume@chromium.org>
This reverts commit 742a3e298f.
Reason for revert: Breaking Android roll:
frameworks/base/core/jni/android/graphics/BitmapFactory.cpp:453:18: error: no member named 'fColorPtr' in 'SkAndroidCodec::AndroidOptions'
codecOptions.fColorPtr = colorPtr;
~~~~~~~~~~~~ ^
frameworks/base/core/jni/android/graphics/BitmapFactory.cpp:454:18: error: no member named 'fColorCount' in 'SkAndroidCodec::AndroidOptions'
codecOptions.fColorCount = colorCount;
~~~~~~~~~~~~ ^
Original change's description:
> Remove support for decoding to kIndex_8
>
> Fix up callsites, and remove tests that no longer make sense.
>
> Bug: skia:6828
> Change-Id: I2548c4b7528b7b1be7412563156f27b52c9d4295
> Reviewed-on: https://skia-review.googlesource.com/21664
> Reviewed-by: Derek Sollenberger <djsollen@google.com>
> Commit-Queue: Leon Scroggins <scroggo@google.com>
TBR=djsollen@google.com,scroggo@google.com
Change-Id: I1bc669441f250690884e75a9a61427fdf75c6907
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Bug: skia:6828
Reviewed-on: https://skia-review.googlesource.com/22120
Reviewed-by: Leon Scroggins <scroggo@google.com>
Commit-Queue: Leon Scroggins <scroggo@google.com>
Fix up callsites, and remove tests that no longer make sense.
Bug: skia:6828
Change-Id: I2548c4b7528b7b1be7412563156f27b52c9d4295
Reviewed-on: https://skia-review.googlesource.com/21664
Reviewed-by: Derek Sollenberger <djsollen@google.com>
Commit-Queue: Leon Scroggins <scroggo@google.com>
Add a new private method to SkCodec that handles Options.fFrameIndex:
- Check to ensure the index is valid
- Call onGetFrameCount to force parsing the stream
- Recursively call getPixels to decode the prior frame if necessary (that
frame should be complete, so there is no need for incremental decoding)
- Zero fill a RestoreBGColor frame
Call the method in getPixels and startIncrementalDecode, and remove
duplicate code from GIF and WEBP.
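A rough sketch of what such a private helper might look like; the name
handleFrameIndex and the details below are illustrative, not the exact Skia
code:

SkCodec::Result SkCodec::handleFrameIndex(const SkImageInfo& info, void* pixels,
                                          size_t rowBytes, const Options& options) {
    const int index = options.fFrameIndex;
    if (0 == index) {
        return kSuccess;  // The first frame needs no special handling.
    }
    // onGetFrameCount() forces the stream to be parsed far enough to know
    // about this frame, and lets us validate the index.
    if (index < 0 || index >= this->onGetFrameCount()) {
        return kIncompleteInput;
    }
    FrameInfo frameInfo;
    if (!this->getFrameInfo(index, &frameInfo)) {
        return kInvalidParameters;
    }
    const int requiredFrame = frameInfo.fRequiredFrame;
    if (requiredFrame >= 0 && !options.fHasPriorFrame) {
        // Recursively decode the frame this one is blended on top of. That
        // frame is complete, so plain getPixels() suffices; no incremental
        // decoding is needed.
        Options priorOptions(options);
        priorOptions.fFrameIndex = requiredFrame;
        const Result result = this->getPixels(info, pixels, rowBytes, &priorOptions);
        if (kSuccess != result) {
            return result;
        }
    }
    // A frame whose required frame uses kRestoreBGColor disposal would be
    // zero-filled here before decoding (omitted in this sketch).
    return kSuccess;
}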
Remove support for scaling frames beyond the first, which is currently
unused.
Preserve the feature of SkGifCodec that it will only parse to the end
of the first frame if the first frame is asked for. (Also note that
when we continue a partial frame, we won't force parsing the full
stream.) If the client only wants the first frame, parsing the rest
would be unnecessary. But if the client wants the second, we assume
they will want any remaining frames, so we parse the remainder of the
stream. This simplifies the code (so SkCodec does not have to ask its
subclass to parse up to a particular frame).
Update tests that relied on the old behavior:
- Codec_partialAnim now hardcodes the bytes needed. Previously it
relied on the old behavior that GIF only parsed up to the frame being
decoded.
- Codec_skipFullParse now only tests the first frame, since that is the
case where it is important to skip a full parse.
TBR=reed@google.com
No changes to the public API.
Change-Id: Ic2f075452dfeedb4e3e60e6cf4df33ee7bd38495
Reviewed-on: https://skia-review.googlesource.com/19276
Reviewed-by: Leon Scroggins <scroggo@google.com>
Reviewed-by: Matt Sarett <msarett@google.com>
Commit-Queue: Leon Scroggins <scroggo@google.com>
If process_data is unable to read (and therefore process) as many bytes
as it expects, process the bytes read before returning false.
Fixes differences in Gold.
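A hedged sketch of the pattern being described; the helper shape here is
illustrative rather than the exact SkPngCodec code:

#include "SkStream.h"
#include "png.h"
#include <algorithm>

// Reads up to `length` bytes from the stream and hands them to libpng.
// Returns false if the stream ran out early, but only after processing
// whatever was actually read, so those rows still get decoded.
static bool process_data(png_structp png_ptr, png_infop info_ptr,
                         SkStream* stream, void* buffer, size_t length) {
    while (length > 0) {
        const size_t bytesToRead = std::min(length, size_t{4096});
        const size_t bytesRead = stream->read(buffer, bytesToRead);
        // Passing 0 bytes is fine; libpng already handles that case.
        png_process_data(png_ptr, info_ptr, (png_bytep) buffer, bytesRead);
        if (bytesRead < bytesToRead) {
            return false;  // Incomplete input; the caller reports it as such.
        }
        length -= bytesRead;
    }
    return true;
}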
Add a test that verifies that it is okay to call png_process_data with
0 bytes. (We could special case 0, but libpng already checks for 0.)
Change-Id: Id500b9305ee3bb6a1a7e8fc70d4e723cb4742b55
Reviewed-on: https://skia-review.googlesource.com/14144
Commit-Queue: Leon Scroggins <scroggo@google.com>
Reviewed-by: Matt Sarett <msarett@google.com>
(Originally uploaded as 13900.)
Previously, SkPngCodec assumed that the stream only contained one
image, which ended at the end of the stream. It read the stream in
arbitrarily-sized chunks, and then passed that data to libpng for
processing.
If a stream contains more than one image, this may result in reading
beyond the end of the image, making future reads read the wrong data.
Now, SkPngCodec starts by reading 8 bytes at a time. After the
signature, 8 bytes is enough to know which chunk is next and how many
bytes are in the chunk.
When decoding the size, we stop when we reach IDAT, and when decoding
the image, we stop when we reach IEND.
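For reference, this is why 8 bytes is enough: every PNG chunk begins with a
4-byte big-endian length followed by a 4-byte type tag. A small illustrative
parser (not the SkPngCodec code itself):

#include <cstdint>
#include <cstring>

struct ChunkHeader {
    uint32_t length;   // number of bytes in the chunk's data section
    char     type[5];  // chunk tag (e.g. "IDAT", "IEND"), NUL-terminated
};

static ChunkHeader parse_chunk_header(const uint8_t bytes[8]) {
    ChunkHeader header;
    header.length = (uint32_t(bytes[0]) << 24) | (uint32_t(bytes[1]) << 16) |
                    (uint32_t(bytes[2]) <<  8) |  uint32_t(bytes[3]);
    memcpy(header.type, bytes + 4, 4);
    header.type[4] = '\0';
    // The caller can stop at "IDAT" when only the size is needed, or at
    // "IEND" when decoding, and otherwise skip length + 4 CRC bytes ahead.
    return header;
}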
This manual parsing is necessary to support APNG, which is planned in
the future. It also allows us to remove the SK_GOOGLE3_PNG_HACK, which
was a workaround for reading more than necessary at the beginning of
the image.
Add a test that simulates the issue, by decoding a special stream that
reports an error if the codec attempts to read beyond the end.
Temporarily disable the partial decoding tests for png. A larger change
will be necessary to get those working again, and no clients are
currently relying on incrementally decoding PNGs (i.e. decode part of
an image, then decode further with more data).
Include a workaround for older versions of libpng (e.g. 1.2 in
Google3). In older versions, if the row callback is null when the
IDAT header is processed, reading the image will fail. When we see the
IDAT, we save the length and process a recreated IDAT header later,
after the row callback has been set.
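A hedged sketch of that workaround: the saved length is turned back into an
8-byte IDAT chunk header and handed to png_process_data once the progressive
row callback is installed. The function name is illustrative.

#include "png.h"
#include <cstdint>
#include <cstring>

// Rebuild the 8-byte IDAT chunk header (4-byte big-endian length + "IDAT")
// that was seen earlier, and feed it to libpng now that the row callback
// exists. Older libpng (e.g. 1.2) fails if the callback is still null when
// it processes the IDAT header.
static void replay_idat_header(png_structp png_ptr, png_infop info_ptr,
                               uint32_t savedIdatLength) {
    png_byte header[8];
    header[0] = (savedIdatLength >> 24) & 0xFF;
    header[1] = (savedIdatLength >> 16) & 0xFF;
    header[2] = (savedIdatLength >>  8) & 0xFF;
    header[3] =  savedIdatLength        & 0xFF;
    memcpy(header + 4, "IDAT", 4);
    png_process_data(png_ptr, info_ptr, header, sizeof(header));
}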
Bug: skia:5368
Bug:b/34073812
Test: Existing tests, plus a new test in dm.
Change-Id: I293a4ddc013b82669a8b735062228b26d0bce933
Reviewed-on: https://skia-review.googlesource.com/13984
Commit-Queue: Leon Scroggins <scroggo@google.com>
Reviewed-by: Matt Sarett <msarett@google.com>
This reverts commit 2c65d51612.
Reason for revert: Causing failures in Google3 (https://test.corp.google.com/ui#cl=153703311&flags=CAMQAg==&id=OCL:153703311:BASE:153703364:1492695824938:4db2240d&t=//chrome/skia/dm_wrapper:dm_wrapper) and differences in Gold. This change was not intended to change the output.
Original change's description:
> Make SkPngCodec only read as much of the stream as necessary
>
> Previously, SkPngCodec assumed that the stream only contained one
> image, which ended at the end of the stream. It read the stream in
> arbitrarily-sized chunks, and then passed that data to libpng for
> processing.
>
> If a stream contains more than one image, this may result in reading
> beyond the end of the image, making future reads read the wrong data.
>
> Now, SkPngCodec starts by reading 8 bytes at a time. After the
> signature, 8 bytes is enough to know which chunk is next and how many
> bytes are in the chunk.
>
> When decoding the size, we stop when we reach IDAT, and when decoding
> the image, we stop when we reach IEND.
>
> This manual parsing is necessary to support APNG, which is planned in
> the future. It also allows us to remove the SK_GOOGLE3_PNG_HACK, which
> was a workaround for reading more than necessary at the beginning of
> the image.
>
> Add a test that simulates the issue, by decoding a special stream that
> reports an error if the codec attempts to read beyond the end.
>
> Temporarily disable the partial decoding tests for png. A larger change
> will be necessary to get those working again, and no clients are
> currently relying on incrementally decoding PNGs (i.e. decode part of
> an image, then decode further with more data).
>
> Bug: skia:5368
> BUG:34073812
>
> Change-Id: If832f7b20565411226fb5be3c305a4d16bf9269d
> Reviewed-on: https://skia-review.googlesource.com/13900
> Commit-Queue: Leon Scroggins <scroggo@google.com>
> Reviewed-by: Matt Sarett <msarett@google.com>
>
TBR=msarett@google.com,scroggo@google.com,reviews@skia.org
NOPRESUBMIT=true
NOTREECHECKS=true
NOTRY=true
Change-Id: I2f82e9960dda7bf5c646774df84320dadb7b930e
Reviewed-on: https://skia-review.googlesource.com/13971
Reviewed-by: Leon Scroggins <scroggo@google.com>
Commit-Queue: Leon Scroggins <scroggo@google.com>
Previously, SkPngCodec assumed that the stream only contained one
image, which ended at the end of the stream. It read the stream in
arbitrarily-sized chunks, and then passed that data to libpng for
processing.
If a stream contains more than one image, this may result in reading
beyond the end of the image, making future reads read the wrong data.
Now, SkPngCodec starts by reading 8 bytes at a time. After the
signature, 8 bytes is enough to know which chunk is next and how many
bytes are in the chunk.
When decoding the size, we stop when we reach IDAT, and when decoding
the image, we stop when we reach IEND.
This manual parsing is necessary to support APNG, which is planned in
the future. It also allows us to remove the SK_GOOGLE3_PNG_HACK, which
was a workaround for reading more than necessary at the beginning of
the image.
Add a test that simulates the issue, by decoding a special stream that
reports an error if the codec attempts to read beyond the end.
Temporarily disable the partial decoding tests for png. A larger change
will be necessary to get those working again, and no clients are
currently relying on incrementally decoding PNGs (i.e. decode part of
an image, then decode further with more data).
Bug: skia:5368
BUG:34073812
Change-Id: If832f7b20565411226fb5be3c305a4d16bf9269d
Reviewed-on: https://skia-review.googlesource.com/13900
Commit-Queue: Leon Scroggins <scroggo@google.com>
Reviewed-by: Matt Sarett <msarett@google.com>
guarded by SK_SUPPORT_OBSOLETE_LOCKPIXELS
needs https://codereview.chromium.org/2820873002/# to land first
Bug: skia:6481
Change-Id: I1c39902cbf6fe99f622adfa8192733b95f7fea09
Reviewed-on: https://skia-review.googlesource.com/13580
Reviewed-by: Florin Malita <fmalita@chromium.org>
Reviewed-by: Leon Scroggins <scroggo@google.com>
Commit-Queue: Mike Reed <reed@google.com>
Previously, getFrameInfo might report a frame that was truncated prior
to setting its requiredFrame. As a result, fRequiredFrame may be
different depending on how much data has already been received.
If there is a local color table, do not report the frame until the
color table has been received, since that is used to determine
fRequiredFrame. If there is no local color table, set fRequiredFrame
and report the frame after reading the header.
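A hedged sketch of that guard; the frame state and accessor names are
illustrative, not the exact SkGifImageReader API:

// Illustrative frame state; the real SkGIFFrameContext differs.
struct FrameState {
    bool headerDefined;         // image descriptor has been read
    bool hasLocalColorMap;      // frame declares its own color table
    bool localColorMapReceived; // that table's bytes have all arrived
};

// Only report a frame to getFrameInfo once everything that feeds into
// fRequiredFrame is available.
static bool can_report_frame(const FrameState& frame) {
    if (!frame.headerDefined) {
        return false;
    }
    if (frame.hasLocalColorMap) {
        // The local color table determines transparency, which in turn
        // determines fRequiredFrame, so wait until it has been received.
        return frame.localColorMapReceived;
    }
    // No local color table: the header alone is enough.
    return true;
}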
Add a test.
Replace make_from_resource with GetResourceAsData
Change-Id: I1b697f766c1d0e1e12ab2ae1d27167af5193395d
Reviewed-on: https://skia-review.googlesource.com/7756
Reviewed-by: Matt Sarett <msarett@google.com>
Commit-Queue: Leon Scroggins <scroggo@google.com>
If the input SkStream has a length and position, do not copy and store
LZW blocks or ColorMaps. Instead, mark the position and size, and read
from the stream when necessary.
This will save memory in Chromium's use case, which has already
buffered all of its data.
In the case where we *do* need to copy, store it on the SkStreamBuffer.
This allows SkGifImageReader to have simpler code.
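A hedged sketch of the idea; the real SkStreamBuffer and SkGifImageReader
interfaces differ, and the names below are illustrative:

#include "SkData.h"
#include <cstddef>

// Illustrative storage for an LZW block or color map.
class DataBlock {
public:
    // Seekable stream (it has a length and a position): just remember where
    // the bytes live and read them back lazily when they are needed.
    void markPosition(size_t position, size_t size) {
        fPosition = position;
        fSize = size;
    }
    // Non-seekable stream: the bytes must be copied before the stream moves on.
    void copyData(const void* data, size_t size) {
        fCopy = SkData::MakeWithCopy(data, size);
        fSize = size;
    }

private:
    size_t        fPosition = 0;
    size_t        fSize = 0;
    sk_sp<SkData> fCopy;
};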
Add tests.
Change-Id: Ic65fa766328ae2e5974b2084bc2099e19aced731
Reviewed-on: https://skia-review.googlesource.com/6157
Reviewed-by: Matt Sarett <msarett@google.com>
Commit-Queue: Leon Scroggins <scroggo@google.com>
This indicates whether the frame has been fully received, i.e. the
stream contains enough data to decode to the end of the frame.
A client may want to use this to know whether they should attempt to
decode this frame, if they do not want to decode partial frames.
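A hedged sketch of how a client might use it, assuming the flag is exposed on
the per-frame info as fFullyReceived (the exact query API may differ between
revisions):

#include "SkCodec.h"

// Returns true only if frameIndex is known to the codec and all of its data
// has arrived, so decoding it will not produce a partial frame.
static bool should_decode_frame(SkCodec* codec, int frameIndex) {
    SkCodec::FrameInfo info;
    if (!codec->getFrameInfo(frameIndex, &info)) {
        return false;  // The frame is not (yet) known.
    }
    return info.fFullyReceived;
}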
Change-Id: I336c7031b0c0b8c1401ce040f5372aedc87fdc14
Reviewed-on: https://skia-review.googlesource.com/5703
Reviewed-by: Chris Blume <cblume@google.com>
Reviewed-by: Derek Sollenberger <djsollen@google.com>
Commit-Queue: Leon Scroggins <scroggo@google.com>
Prior to this CL, if a GIF file was truncated before reading the local
color map of a frame, incremental decode would do the wrong thing. In
onStartIncrementalDecode, we would either create a color table based on
the global color map, or we would create a dummy one with only one
color (transparent). The dummy color table is correct if there is
neither a global nor a local color map, and allows us to fill the frame
with transparent. But if more data is provided, and it includes an
actual color map and image data, one of the following can happen:
- If the created color table is smaller than the actual one, the
decoded data may include indices outside of the range of the created
color table, resulting in a crash.
- If we get lucky, and the created color table is large enough, it may
still be the wrong colors (and most likely is).
To solve this, make onStartIncrementalDecode fail if there is a local
color map that has not been read yet. A future call may read more data
and read the correct color map.
This is done by returning kIncompleteInput in
SkGifCodec::prepareToDecode if there is a local color map that has not
yet been read. (It is possible that there is no color map at all, in
which case we still need to support decoding that frame. Skip
attempting to decode in that case.)
In onGetPixels, if prepareToDecode returned kIncompleteInput, return
kInvalidInput. Although the input is technically incomplete, no future
call will provide more data (unlike in incremental decoding), and there
is nothing interesting for the client to draw. This also prevents
SkCodec from attempting to fill the data with an SkSwizzler, which has
not been created. (An alternative solution would be create the dummy
color table and an SkSwizzler, which would keep the current behavior.
But I think the new behavior of returning kInvalidInput makes more
sense.)
Add tests to verify the intended behavior:
- getPixels fails.
- startIncrementalDecode fails, but after providing more data it will
succeed and incremental decoding matches the image decoded from the
full stream.
- Both succeed if there is no color table at all.
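A hedged sketch of the retry pattern this enables on the incremental path;
append_more_data_to_stream is a hypothetical stand-in for however the client
feeds bytes into its stream:

#include "SkCodec.h"
#include "SkImageInfo.h"

bool append_more_data_to_stream();  // hypothetical: false when no bytes remain

// Keep retrying startIncrementalDecode while the codec is only missing data
// (e.g. a local color map that has not arrived yet).
static SkCodec::Result start_when_ready(SkCodec* codec, const SkImageInfo& info,
                                        void* pixels, size_t rowBytes) {
    SkCodec::Result result = codec->startIncrementalDecode(info, pixels, rowBytes);
    while (SkCodec::kIncompleteInput == result) {
        if (!append_more_data_to_stream()) {
            break;
        }
        result = codec->startIncrementalDecode(info, pixels, rowBytes);
    }
    return result;
}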
Change-Id: Ifb52fe7f723673406a28e80c8805a552f0ac33b6
Reviewed-on: https://skia-review.googlesource.com/5758
Reviewed-by: Matt Sarett <msarett@google.com>
Commit-Queue: Leon Scroggins <scroggo@google.com>
If there is enough data in the stream to read the reported canvas size,
but not enough to read the first image's header, we do not know the
true canvas size, since we may expand it to fit the first frame. In
that case, return nullptr from NewFromStream.
Add a test.
SkGifCodec.cpp:
Correct a comment - parse returns false if there is a fatal error.
parse() returning true does not guarantee that the size was found.
Instead of checking the width and height, check to see whether the
first frame exists and has its header defined. If not, we do not yet
know the true canvas size. Assert that the canvas size is non-zero,
which is a fatal error from parse.
SkGifImageReader.cpp:
Move the code that marks the header as defined to before the SkGIFSizeQuery
exit condition. This allows SkGifCodec to check the first frame's header to
determine whether the size is known.
GifTest.cpp:
Add a test which truncates the file just before the image header (and
after the global header). Prior to the other changes, this would create
an SkCodec. For an image that needs its canvas size expanded, the
SkCodec would have an incorrect size.
CodecPartialTest.cpp:
randPixels.gif now needs more than half of its data to create an
SkCodec, so set a minimum for test_partial.
Change-Id: I40482f524128b2f1fe59b8f27dd64c7cbe793079
Reviewed-on: https://skia-review.googlesource.com/5701
Reviewed-by: Derek Sollenberger <djsollen@google.com>
Commit-Queue: Leon Scroggins <scroggo@google.com>
Replace with std::unique_ptr.
Change-Id: I5806cfbb30515fcb20e5e66ce13fb5f3b8728176
Reviewed-on: https://skia-review.googlesource.com/4381
Commit-Queue: Ben Wagner <bungeman@google.com>
Reviewed-by: Mike Klein <mtklein@chromium.org>
Add an interface to decode frames beyond the first in SkCodec, and
add an implementation for SkGifCodec.
Add getFrameData to SkCodec. This method reads ahead in the stream
to return a vector containing meta data about each frame in the image.
This is not required in order to decode frames beyond the first, but
it allows a client to learn extra information:
- how long the frame should be displayed
- whether a frame should be blended with a prior frame, allowing the
client to provide the prior frame to speed up decoding
Add two new fields to SkCodec::Options:
- fFrameIndex
- fHasPriorFrame
The API is designed so that SkCodec never caches frames. If a
client wants a frame beyond the first, they specify the frame in
Options.fFrameIndex. If the client does not have the
frame's required frame (the frame that this frame must be blended on
top of) cached, they pass false for
Options.fHasPriorFrame. Unless the frame is
independent, the codec will then recursively decode all frames
necessary to decode fFrameIndex. If the client has the required frame
cached, they can put it in the dst they pass to the codec, and the
codec will only draw fFrameIndex onto it.
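A hedged sketch of the decode flow this design implies, using the Options
fields named above; the caching details and the exact getPixels overload are
illustrative:

#include "SkBitmap.h"
#include "SkCodec.h"

// Decode frameIndex into dst. If the client has the required frame cached,
// it seeds dst with those pixels and sets fHasPriorFrame; otherwise the codec
// recursively decodes whatever frames this one depends on.
static SkCodec::Result decode_frame(SkCodec* codec, const SkImageInfo& info,
                                    SkBitmap* dst, int frameIndex,
                                    const SkBitmap* cachedRequiredFrame) {
    dst->allocPixels(info);
    SkCodec::Options options;
    options.fFrameIndex = frameIndex;
    options.fHasPriorFrame = false;
    if (cachedRequiredFrame) {
        if (!cachedRequiredFrame->readPixels(info, dst->getPixels(),
                                             dst->rowBytes(), 0, 0)) {
            return SkCodec::kInternalError;
        }
        options.fHasPriorFrame = true;  // Codec only draws frameIndex on top.
    }
    return codec->getPixels(info, dst->getPixels(), dst->rowBytes(), &options);
}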
Replace SkGifCodec's scanline decoding support with progressive
decoding, and update the tests accordingly.
Implement new APIs in SkGifCodec. Instead of using gif_lib, use
GIFImageReader, imported from Chromium (along with its copyright
headers) with the following changes:
- SkGifCodec is now the client
- Replace blink types
- Combine GIFColorMap::buildTable and ::getTable into a method that
creates and returns an SkColorTable
- Input comes from an SkStream, instead of a SegmentReader. Add
SkStreamBuffer, which buffers the (potentially partial) stream in
order to decode progressively.
(FIXME: This requires copying data that previously was read directly
from the SegmentReader. Does this hurt performance? If so, can we
fix it?)
- Remove UMA code
- Instead of reporting screen width and height to the client, allow the
client to query for it
- Fail earlier if the first frame AND screen have size of zero
- Compute required previous frame when adding a new one
- Move GIFParseQuery from GIFImageDecoder to GIFImageReader
- Allow parsing up to a specific frame (to skip parsing the rest of the
stream if a client only wants the first frame)
- Compute whether the first frame has alpha and supports index 8, to
create the SkImageInfo. This happens before reporting that the size
has been decoded.
Add GIFImageDecoder::haveDecodedRow to SkGifCodec, imported from
Chromium (along with its copyright header), with the following changes:
- Add support for sampling
- Use the swizzler
- Keep track of the rows decoded
- Do *not* keep track of whether we've seen alpha
Remove SkCodec::kOutOfOrder_SkScanlineOrder, which was only used by GIF
scanline decoding.
Call onRewind even if there is no stream (SkGifCodec needs to clear its
decoded state so it will decode from the beginning).
Add a method to SkSwizzler to access the offset into the dst, taking
subsetting into account.
Add a GM that animates a GIF.
Add tests for the new APIs.
*** Behavior changes:
* Previously, we reported that an image with a subset frame and no transparent
index was opaque and used the background index (if present) to fill the
background. This is necessary in order to support index 8, but it does not
match viewers/browsers I have seen. Examples:
- Chromium and Gimp render the background transparent
- Firefox, Safari, Linux Image Viewer, Safari Preview clip to the frame (for
a single frame image)
This CL matches Chromium's behavior and renders the background transparent.
This allows us to have consistent behavior across products and simplifies
the code (relative to what we would have to do to continue the old behavior
on Android). It also means that we will no longer support index 8 for some
GIFs.
* Stop checking for GIFSTAMP - all GIFs should be either 89a or 87a.
This matches Chromium. I suspect that bugs would have been reported if valid
GIFs started with "GIFVER" instead of "GIF89a" or "GIF87a" (but did not decode
in Chromium).
*** Future work not included in this CL:
* Move some checks out of haveDecodedRow, since they are the same for the
entire frame e.g.
- intersecting the frameRect with the full image size
- whether there is a color table
* Change when we write transparent pixels
- In some cases, Chromium deemed this unnecessary, but I suspect it is slower
than the fallback case. There will continue to be cases where we should
*not* write them, but for e.g. the first pass where we have already
cleared to transparent (which we may also be able to skip) writing the
transparent pixels will not make anything incorrect.
* Report color type and alpha type per frame
- Depending on alpha values, disposal methods, frame rects, etc, subsequent
frames may have different properties than the first.
* Skip copies of the encoded data
- We copy the encoded data in case the stream is one that cannot be rewound,
so we can parse and then decode (possibly not immediately). For some input
streams, this is unnecessary.
- I was concerned this would cause a performance regression, but on average
the new code is faster than the old for the images I tested [1].
- It may cause a performance regression for Chromium, though, where we can
always move back in the stream, so this should be addressed.
Design doc:
https://docs.google.com/a/google.com/document/d/12Qhf9T92MWfdWujQwCIjhCO3sw6pTJB5pJBwDM1T7Kc/
[1] https://docs.google.com/a/google.com/spreadsheets/d/19V-t9BfbFw5eiwBTKA1qOBkZbchjlTC5EIz6HFy-6RI/
GOLD_TRYBOT_URL= https://gold.skia.org/search2?unt=true&query=source_type%3Dgm&master=false&issue=2045293002
Review-Url: https://codereview.chromium.org/2045293002
This is a step towards using SkCodec in Chromium, where progressive
decoding is necessary.
Switch from using png_read_row (which expects all the data to be
available) to png_process_data, which uses callbacks when rows are
available.
Create a new API for SkCodec, which supports progressive decoding and
scanline decoding. Future changes will switch the other clients off of
startScanlineDecode and get/skip-Scanlines to the new API.
Remove SkCodec::kNone_ScanlineOrder, which was only used for interlaced
PNG images. In the new API, interlaced PNG fits kTopDown. Also remove
updateCurrScanline(), which was only used by the old implementation for
interlaced PNG.
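A hedged sketch of the new flow from a client's point of view;
more_data_available is a hypothetical stand-in for however bytes get appended
to the underlying stream:

#include "SkCodec.h"
#include "SkImageInfo.h"

bool more_data_available();  // hypothetical: false once the source is exhausted

// Progressive decoding: start once, then call incrementalDecode() each time
// more of the stream arrives, until it stops returning kIncompleteInput.
static SkCodec::Result decode_progressively(SkCodec* codec, const SkImageInfo& info,
                                            void* pixels, size_t rowBytes) {
    SkCodec::Result result = codec->startIncrementalDecode(info, pixels, rowBytes);
    if (SkCodec::kSuccess != result) {
        return result;
    }
    while (SkCodec::kIncompleteInput == (result = codec->incrementalDecode())) {
        if (!more_data_available()) {
            break;  // Draw what we have; unreached rows keep their initialized values.
        }
    }
    return result;
}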
DMSrcSink:
- In CodecSrc::kScanline_Mode, use the new method for scanline decoding
for the supported formats (just PNG and PNG-in-ICO for now).
fuzz.cpp:
- Remove reference to kNone_ScanlineOrder
SkCodec:
- Add new APIs:
- startIncrementalDecode
- incrementalDecode
- Remove kNone_SkScanlineOrder and updateCurrScanline()
- Set fDstInfo and fOptions in getPixels(). This may not be necessary
for all implementations, but it simplifies things for SkPngCodec.
SkPngCodec:
- Implement new APIs
- Switch from sk_read_fn/png_read_row etc to png_process_data
- Expand AutoCleanPng's role to decode the header and create the
SkPngCodec
- Make the interlaced PNG decoder report how many lines were
initialized during an incomplete decode
SkIcoCodec:
- Implement the new APIs; supported for PNG in ICO
SkSampledCodec:
- Call the new method for decoding scanlines, and fall back to the old
method if the new version is unimplemented
- Remove references to kNone_SkScanlineOrder
tests/CodecPartial:
- Add a test which decodes part of an image, then finishes the decode,
and compares it to the straightforward method
tests/CodecTest:
- Add a test which decodes all scanlines using the new method
- Repurpose the Codec_stripes test to decode using the new method in
sections rather than all at once
- In the method check(), add a parameter for whether the image supports
the new method of scanline decoding, and be explicit about whether an
image supports incomplete
- Test incomplete PNG decodes. We should have been doing it anyway for
non-interlaced (except for an image that is too small - one row), but
the new method supports interlaced incomplete as well
- Make test_invalid_parameters test the new method
- Add a test to ensure that it's safe to fall back to scanline decoding without
rewinding
BUG=skia:4211
The new version was generally faster than the old version (but not significantly so).
Some raw performance differences can be found at https://docs.google.com/a/google.com/spreadsheets/d/1Gis3aRCEa72qBNDRMgGDg3jD-pMgO-FXldlNF9ejo4o/
Design doc can be found at https://docs.google.com/a/google.com/document/d/11Mn8-ePDKwVEMCjs3nWwSjxcSpJ_Cu8DF57KNtUmgLM/
GOLD_TRYBOT_URL= https://gold.skia.org/search2?unt=true&query=source_type%3Dgm&master=false&issue=1997703003
Review-Url: https://codereview.chromium.org/1997703003