[wasm] Syntax- and Type-aware Fuzzer

This is the beginning of a new fuzzer that generates
correct-by-construction Wasm modules. This should allow us to better
exercise the compiler and correctness aspects of fuzzing. It is based on
ahaas' original Wasm fuzzer.

At the moment, it can generate expressions made up of most binops, and
also nested blocks with unconditional breaks. Future CLs will add
additional constructs, such as br_if, loops, memory access, etc.

The fuzzer starts with an array of arbitrary data provided by libFuzzer
and uses it to generate an expression, taking care to consume the entire
input. The generator has a set of grammar-like rules for constructing an
expression of a given type. For example, an i32 can be made by adding
two other i32s, or by wrapping an i64. The process continues recursively
until all the data is consumed.

An expression is generated from a slice of data as follows:
* If the slice is no larger than the size of the type (e.g. 4 bytes for
  i32), the entire slice is emitted as a constant.
* Otherwise, the first 4 bytes of the slice select which rule to apply.
  Each rule then consumes the remainder of the slice in an appropriate
  way:
  * Unary ops use the remainder of the slice to generate their argument.
  * Binary ops consume another four bytes and take this modulo the
    length of the remaining slice to split the slice into two parts;
    each subslice is then used to generate one of the arguments to the
    binop.
  * Blocks behave like a unary op, but a stack of block types is
    maintained to facilitate branches. For blocks that end in a break,
    the first four bytes of the slice select the break depth, and the
    stack determines what type of expression to generate.

The goal is that, once this generator is complete, it will provide a
one-to-one mapping between binary strings and valid Wasm modules.

Review-Url: https://codereview.chromium.org/2658723006
Cr-Commit-Position: refs/heads/master@{#43289}
2017-02-17 17:06:29 +00:00
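The slice-driven recipe above can be sketched in standalone C++. This is an illustrative model only, not code from V8: it emits textual s-expressions instead of Wasm bytecode, covers just one unary and one binary i32 rule, and all names (`Slice`, `TakeU32`, `GenI32`) are invented for this sketch.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <string>
#include <utility>
#include <vector>

// Hypothetical stand-in for the generator's input slice.
using Slice = std::vector<uint8_t>;

// Consume up to 4 bytes from the front of the slice as a native-endian
// integer; missing bytes read as zero, mirroring the real generator's
// indifference to endianness and short slices.
static uint32_t TakeU32(Slice* s) {
  uint32_t v = 0;
  size_t n = std::min<size_t>(4, s->size());
  if (n != 0) std::memcpy(&v, s->data(), n);
  s->erase(s->begin(), s->begin() + n);
  return v;
}

std::string GenI32(Slice slice) {
  // Slice fits in the type's width: emit the whole slice as a constant.
  if (slice.size() <= 4) {
    return "(i32.const " + std::to_string(TakeU32(&slice)) + ")";
  }
  // Otherwise the first 4 bytes select the production rule.
  uint32_t rule = TakeU32(&slice);
  if (rule % 2 == 0) {
    // Unary op: the remainder generates the single argument.
    return "(i32.eqz " + GenI32(std::move(slice)) + ")";
  }
  // Binary op: 4 more bytes, taken mod the remaining length, split the
  // slice into the two argument subslices.
  uint32_t pick = TakeU32(&slice);
  size_t cut = pick % (slice.size() + 1);
  Slice lhs(slice.begin(), slice.begin() + cut);
  Slice rhs(slice.begin() + cut, slice.end());
  return "(i32.add " + GenI32(std::move(lhs)) + " " + GenI32(std::move(rhs)) +
         ")";
}
```

Because every rule consumes at least the bytes it inspects, recursion always bottoms out in the constant case, and every input byte influences the output.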
// Copyright 2017 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

#include <algorithm>

#include "src/base/macros.h"
#include "src/execution/isolate.h"
#include "src/objects/objects-inl.h"
#include "src/objects/objects.h"
#include "src/utils/ostreams.h"
#include "src/wasm/function-body-decoder.h"
#include "src/wasm/wasm-module-builder.h"
#include "src/wasm/wasm-module.h"
#include "src/wasm/wasm-opcodes-inl.h"
#include "test/common/wasm/flag-utils.h"
#include "test/common/wasm/test-signatures.h"
#include "test/common/wasm/wasm-module-runner.h"
#include "test/fuzzer/fuzzer-support.h"
#include "test/fuzzer/wasm-fuzzer-common.h"

namespace v8 {
namespace internal {
namespace wasm {
namespace fuzzer {

namespace {

constexpr int kMaxArrays = 4;
constexpr int kMaxStructs = 4;
constexpr int kMaxStructFields = 4;
constexpr int kMaxFunctions = 4;
constexpr int kMaxGlobals = 64;
constexpr int kMaxParameters = 15;
constexpr int kMaxReturns = 15;
constexpr int kMaxExceptions = 4;
constexpr int kMaxTableSize = 32;
constexpr int kMaxTables = 4;
constexpr int kMaxArraySize = 20;

class DataRange {
  base::Vector<const uint8_t> data_;

 public:
  explicit DataRange(base::Vector<const uint8_t> data) : data_(data) {}
  DataRange(const DataRange&) = delete;
  DataRange& operator=(const DataRange&) = delete;

  // Don't accidentally pass DataRange by value. This will reuse bytes and
  // might lead to OOM because the end might not be reached.
  // Define move constructor and move assignment; copy constructor and copy
  // assignment are disallowed (above).
  DataRange(DataRange&& other) V8_NOEXCEPT : DataRange(other.data_) {
    other.data_ = {};
  }
  DataRange& operator=(DataRange&& other) V8_NOEXCEPT {
    data_ = other.data_;
    other.data_ = {};
    return *this;
  }

  size_t size() const { return data_.size(); }

  DataRange split() {
    uint16_t num_bytes = get<uint16_t>() % std::max(size_t{1}, data_.size());
    DataRange split(data_.SubVector(0, num_bytes));
    data_ += num_bytes;
    return split;
  }
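The split discipline above can be modeled in a few standalone lines, assuming a plain `std::vector` in place of `base::Vector`; `MiniRange`, `TakeU16`, and `Split` are hypothetical names for this sketch, not V8's.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Toy model of DataRange::split(): two bytes pick the child's length (mod the
// size remaining after those bytes), the child takes the front of the buffer,
// and the parent advances past it.
struct MiniRange {
  std::vector<uint8_t> bytes;

  uint16_t TakeU16() {
    uint16_t v = 0;
    size_t n = std::min(sizeof(v), bytes.size());
    if (n != 0) std::memcpy(&v, bytes.data(), n);
    bytes.erase(bytes.begin(), bytes.begin() + n);
    return v;
  }

  MiniRange Split() {
    // The selector bytes are consumed first, so the modulus is taken over
    // the length that remains after them.
    uint16_t pick = TakeU16();
    size_t n = pick % std::max<size_t>(size_t{1}, bytes.size());
    MiniRange child;
    child.bytes.assign(bytes.begin(), bytes.begin() + n);
    bytes.erase(bytes.begin(), bytes.begin() + n);
    return child;
  }
};
```

Splitting rather than sharing the buffer is what lets two independent subexpressions (e.g. the arguments of a binop) each consume their own bytes without reuse.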

  template <typename T, size_t max_bytes = sizeof(T)>
  T get() {
    // Bool needs special handling (see template specialization below).
    static_assert(!std::is_same<T, bool>::value, "bool needs special handling");
    STATIC_ASSERT(max_bytes <= sizeof(T));
    // We want to support the case where we have less than sizeof(T) bytes
    // remaining in the slice. For example, if we emit an i32 constant, it's
    // okay if we don't have a full four bytes available; we'll just use what
    // we have. We aren't concerned about endianness because we are generating
    // arbitrary expressions.
    const size_t num_bytes = std::min(max_bytes, data_.size());
    T result = T();
    memcpy(&result, data_.begin(), num_bytes);
    data_ += num_bytes;
    return result;
  }
};
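The "use whatever bytes are left" behavior of `get<T>()` can be reproduced outside V8 with a minimal stand-in; `ByteReader` and `Get` are made-up names, and `std::vector` replaces `base::Vector`.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <type_traits>
#include <vector>

// Minimal re-creation of DataRange::get<T>()'s partial-read semantics.
struct ByteReader {
  std::vector<uint8_t> data;

  template <typename T>
  T Get() {
    static_assert(!std::is_same<T, bool>::value, "bool needs special handling");
    size_t n = std::min(sizeof(T), data.size());
    T result = T();  // Zero-initialized, so any missing bytes read as zero.
    if (n != 0) std::memcpy(&result, data.data(), n);
    data.erase(data.begin(), data.begin() + n);
    return result;
  }
};
```

Zero-initializing `result` before the `memcpy` is what makes a short read well-defined: a request for an i32 with only two bytes remaining yields a value whose missing high bytes are zero.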

// Explicit specialization must be defined outside of class body.
template <>
bool DataRange::get() {
  // The general implementation above is not instantiable for bool, as that
  // would cause undefined behaviour when memcpy'ing random bytes to the
  // bool. This can result in different observable side effects when invoking
  // get<bool> between debug and release versions, which eventually makes the
  // code output different as well as raising various unrecoverable errors at
  // runtime.
  // Hence we specialize get<bool> to consume a full byte and use the least
  // significant bit only (0 == false, 1 == true).
  return get<uint8_t>() % 2;
}

ValueType GetValueType(DataRange* data, bool liftoff_as_reference,
                       uint32_t num_types, bool include_packed_types = false) {
  constexpr ValueType types[] = {
      kWasmI8,
      kWasmI16,
      kWasmI32,
      kWasmI64,
      kWasmF32,
      kWasmF64,
      kWasmS128,
      kWasmExternRef,
      kWasmFuncRef,
      kWasmEqRef,
      ValueType::Ref(HeapType(HeapType::kI31), kNullable),
      kWasmAnyRef,
      ValueType::Ref(HeapType(HeapType::kData), kNullable)};

  constexpr int kLiftoffOnlyTypeCount = 4;  // At the end of {types}.
  constexpr int kPackedOnlyTypeCount = 2;   // At the beginning of {types}.

  if (liftoff_as_reference) {
    uint32_t id;
    if (include_packed_types) {
      id = data->get<uint8_t>() % (arraysize(types) + num_types);
    } else {
      id = kPackedOnlyTypeCount +
           (data->get<uint8_t>() %
            (arraysize(types) + num_types - kPackedOnlyTypeCount));
    }
    if (id >= arraysize(types)) {
      return ValueType::Ref(id - arraysize(types), kNullable);
    }
    return types[id];
  }
  if (include_packed_types) {
    return types[data->get<uint8_t>() %
                 (arraysize(types) - kLiftoffOnlyTypeCount)];
  }
  return types[kPackedOnlyTypeCount +
               (data->get<uint8_t>() %
                (arraysize(types) - kPackedOnlyTypeCount -
                 kLiftoffOnlyTypeCount))];
}

class WasmGenerator {
  template <WasmOpcode Op, ValueKind... Args>
  void op(DataRange* data) {
    Generate<Args...>(data);
    builder_->Emit(Op);
  }

  class V8_NODISCARD BlockScope {
   public:
    BlockScope(WasmGenerator* gen, WasmOpcode block_type,
               base::Vector<const ValueType> param_types,
               base::Vector<const ValueType> result_types,
               base::Vector<const ValueType> br_types, bool emit_end = true)
        : gen_(gen), emit_end_(emit_end) {
      gen->blocks_.emplace_back(br_types.begin(), br_types.end());
      gen->builder_->EmitByte(block_type);

      if (param_types.size() == 0 && result_types.size() == 0) {
        gen->builder_->EmitValueType(kWasmVoid);
        return;
      }
      if (param_types.size() == 0 && result_types.size() == 1) {
        gen->builder_->EmitValueType(result_types[0]);
        return;
      }
      // Multi-value block.
      Zone* zone = gen->builder_->builder()->zone();
      FunctionSig::Builder builder(zone, result_types.size(),
                                   param_types.size());
      for (auto& type : param_types) {
        DCHECK_NE(type, kWasmVoid);
        builder.AddParam(type);
      }
      for (auto& type : result_types) {
        DCHECK_NE(type, kWasmVoid);
        builder.AddReturn(type);
      }
      FunctionSig* sig = builder.Build();
      int sig_id = gen->builder_->builder()->AddSignature(sig);
      gen->builder_->EmitI32V(sig_id);
    }

    ~BlockScope() {
      if (emit_end_) gen_->builder_->Emit(kExprEnd);
      gen_->blocks_.pop_back();
    }

   private:
    WasmGenerator* const gen_;
    bool emit_end_;
  };

  void block(base::Vector<const ValueType> param_types,
             base::Vector<const ValueType> return_types, DataRange* data) {
    BlockScope block_scope(this, kExprBlock, param_types, return_types,
                           return_types);
    ConsumeAndGenerate(param_types, return_types, data);
  }

  template <ValueKind T>
  void block(DataRange* data) {
    block({}, base::VectorOf({ValueType::Primitive(T)}), data);
  }

  void loop(base::Vector<const ValueType> param_types,
            base::Vector<const ValueType> return_types, DataRange* data) {
    BlockScope block_scope(this, kExprLoop, param_types, return_types,
                           param_types);
    ConsumeAndGenerate(param_types, return_types, data);
  }

  template <ValueKind T>
  void loop(DataRange* data) {
    loop({}, base::VectorOf({ValueType::Primitive(T)}), data);
  }

  enum IfType { kIf, kIfElse };

  void if_(base::Vector<const ValueType> param_types,
           base::Vector<const ValueType> return_types, IfType type,
           DataRange* data) {
    // A one-armed "if" is only valid if the input and output types are the
    // same.
    DCHECK_IMPLIES(type == kIf, param_types == return_types);
    Generate(kWasmI32, data);
    BlockScope block_scope(this, kExprIf, param_types, return_types,
                           return_types);
    ConsumeAndGenerate(param_types, return_types, data);
    if (type == kIfElse) {
      builder_->Emit(kExprElse);
      ConsumeAndGenerate(param_types, return_types, data);
    }
  }

  template <ValueKind T, IfType type>
  void if_(DataRange* data) {
    static_assert(T == kVoid || type == kIfElse,
                  "if without else cannot produce a value");
    if_({},
        T == kVoid ? base::Vector<ValueType>{}
                   : base::VectorOf({ValueType::Primitive(T)}),
        type, data);
  }

  void try_block_helper(ValueType return_type, DataRange* data) {
    bool has_catch_all = data->get<bool>();
    uint8_t num_catch =
        data->get<uint8_t>() % (builder_->builder()->NumExceptions() + 1);
    bool is_delegate = num_catch == 0 && !has_catch_all && data->get<bool>();
    // Allow one more target than there are enclosing try blocks, for
    // delegating to the caller.

    base::Vector<const ValueType> return_type_vec =
        return_type.kind() == kVoid ? base::Vector<ValueType>{}
                                    : base::VectorOf(&return_type, 1);
    BlockScope block_scope(this, kExprTry, {}, return_type_vec, return_type_vec,
                           !is_delegate);
    int control_depth = static_cast<int>(blocks_.size()) - 1;
    Generate(return_type, data);
    catch_blocks_.push_back(control_depth);
    for (int i = 0; i < num_catch; ++i) {
      const FunctionSig* exception_type =
          builder_->builder()->GetExceptionType(i);
      auto exception_type_vec =
          base::VectorOf(exception_type->parameters().begin(),
                         exception_type->parameter_count());
      builder_->EmitWithU32V(kExprCatch, i);
      ConsumeAndGenerate(exception_type_vec, return_type_vec, data);
    }
    if (has_catch_all) {
      builder_->Emit(kExprCatchAll);
      Generate(return_type, data);
    }
    if (is_delegate) {
      // The delegate target depth does not include the current try block,
      // because 'delegate' closes this scope. However it is still in the
      // {blocks_} list, so remove one to get the correct size.
      int delegate_depth = data->get<uint8_t>() % (blocks_.size() - 1);
      builder_->EmitWithU32V(kExprDelegate, delegate_depth);
    }
    catch_blocks_.pop_back();
  }

  template <ValueKind T>
  void try_block(DataRange* data) {
    try_block_helper(ValueType::Primitive(T), data);
  }

  void any_block(base::Vector<const ValueType> param_types,
                 base::Vector<const ValueType> return_types, DataRange* data) {
    uint8_t block_type = data->get<uint8_t>() % 4;
    switch (block_type) {
      case 0:
        block(param_types, return_types, data);
        return;
      case 1:
        loop(param_types, return_types, data);
        return;
      case 2:
        if (param_types == return_types) {
          if_({}, {}, kIf, data);
          return;
        }
        V8_FALLTHROUGH;
      case 3:
        if_(param_types, return_types, kIfElse, data);
        return;
    }
  }

  void br(DataRange* data) {
    // There is always at least the block representing the function body.
    DCHECK(!blocks_.empty());
    const uint32_t target_block = data->get<uint32_t>() % blocks_.size();
    const auto break_types = blocks_[target_block];

    Generate(base::VectorOf(break_types), data);
    builder_->EmitWithI32V(
        kExprBr, static_cast<uint32_t>(blocks_.size()) - 1 - target_block);
  }

  template <ValueKind wanted_kind>
  void br_if(DataRange* data) {
    // There is always at least the block representing the function body.
    DCHECK(!blocks_.empty());
    const uint32_t target_block = data->get<uint32_t>() % blocks_.size();
    const auto break_types = base::VectorOf(blocks_[target_block]);

    Generate(break_types, data);
    Generate(kWasmI32, data);
    builder_->EmitWithI32V(
        kExprBrIf, static_cast<uint32_t>(blocks_.size()) - 1 - target_block);
    ConsumeAndGenerate(
        break_types,
        wanted_kind == kVoid
            ? base::Vector<ValueType>{}
            : base::VectorOf({ValueType::Primitive(wanted_kind)}),
        data);
  }

  template <ValueKind wanted_kind>
  void br_on_null(DataRange* data) {
    DCHECK(!blocks_.empty());
    const uint32_t target_block = data->get<uint32_t>() % blocks_.size();
    const auto break_types = base::VectorOf(blocks_[target_block]);
    if (!liftoff_as_reference_) {
      Generate<wanted_kind>(data);
      return;
    }
    Generate(break_types, data);
    GenerateRef(HeapType(HeapType::kAny), data);
    builder_->EmitWithI32V(
        kExprBrOnNull,
        static_cast<uint32_t>(blocks_.size()) - 1 - target_block);
    builder_->Emit(kExprDrop);
    ConsumeAndGenerate(
        break_types,
        wanted_kind == kVoid
            ? base::Vector<ValueType>{}
            : base::VectorOf({ValueType::Primitive(wanted_kind)}),
        data);
  }

  // TODO(eholk): make this function constexpr once gcc supports it
  static uint8_t max_alignment(WasmOpcode memop) {
    switch (memop) {
      case kExprS128LoadMem:
      case kExprS128StoreMem:
        return 4;
      case kExprI64LoadMem:
      case kExprF64LoadMem:
      case kExprI64StoreMem:
      case kExprF64StoreMem:
      case kExprI64AtomicStore:
      case kExprI64AtomicLoad:
      case kExprI64AtomicAdd:
      case kExprI64AtomicSub:
      case kExprI64AtomicAnd:
      case kExprI64AtomicOr:
      case kExprI64AtomicXor:
      case kExprI64AtomicExchange:
      case kExprI64AtomicCompareExchange:
      case kExprS128Load8x8S:
      case kExprS128Load8x8U:
      case kExprS128Load16x4S:
      case kExprS128Load16x4U:
      case kExprS128Load32x2S:
      case kExprS128Load32x2U:
      case kExprS128Load64Splat:
      case kExprS128Load64Zero:
        return 3;
      case kExprI32LoadMem:
      case kExprI64LoadMem32S:
      case kExprI64LoadMem32U:
      case kExprF32LoadMem:
      case kExprI32StoreMem:
      case kExprI64StoreMem32:
      case kExprF32StoreMem:
      case kExprI32AtomicStore:
      case kExprI64AtomicStore32U:
      case kExprI32AtomicLoad:
      case kExprI64AtomicLoad32U:
      case kExprI32AtomicAdd:
      case kExprI32AtomicSub:
      case kExprI32AtomicAnd:
      case kExprI32AtomicOr:
      case kExprI32AtomicXor:
      case kExprI32AtomicExchange:
      case kExprI32AtomicCompareExchange:
      case kExprI64AtomicAdd32U:
      case kExprI64AtomicSub32U:
      case kExprI64AtomicAnd32U:
      case kExprI64AtomicOr32U:
      case kExprI64AtomicXor32U:
      case kExprI64AtomicExchange32U:
      case kExprI64AtomicCompareExchange32U:
      case kExprS128Load32Splat:
      case kExprS128Load32Zero:
        return 2;
      case kExprI32LoadMem16S:
      case kExprI32LoadMem16U:
      case kExprI64LoadMem16S:
      case kExprI64LoadMem16U:
      case kExprI32StoreMem16:
      case kExprI64StoreMem16:
      case kExprI32AtomicStore16U:
      case kExprI64AtomicStore16U:
      case kExprI32AtomicLoad16U:
      case kExprI64AtomicLoad16U:
      case kExprI32AtomicAdd16U:
      case kExprI32AtomicSub16U:
      case kExprI32AtomicAnd16U:
      case kExprI32AtomicOr16U:
      case kExprI32AtomicXor16U:
      case kExprI32AtomicExchange16U:
      case kExprI32AtomicCompareExchange16U:
      case kExprI64AtomicAdd16U:
      case kExprI64AtomicSub16U:
      case kExprI64AtomicAnd16U:
      case kExprI64AtomicOr16U:
      case kExprI64AtomicXor16U:
      case kExprI64AtomicExchange16U:
      case kExprI64AtomicCompareExchange16U:
      case kExprS128Load16Splat:
        return 1;
      case kExprI32LoadMem8S:
      case kExprI32LoadMem8U:
      case kExprI64LoadMem8S:
      case kExprI64LoadMem8U:
      case kExprI32StoreMem8:
      case kExprI64StoreMem8:
      case kExprI32AtomicStore8U:
      case kExprI64AtomicStore8U:
      case kExprI32AtomicLoad8U:
      case kExprI64AtomicLoad8U:
      case kExprI32AtomicAdd8U:
      case kExprI32AtomicSub8U:
      case kExprI32AtomicAnd8U:
      case kExprI32AtomicOr8U:
      case kExprI32AtomicXor8U:
      case kExprI32AtomicExchange8U:
      case kExprI32AtomicCompareExchange8U:
      case kExprI64AtomicAdd8U:
      case kExprI64AtomicSub8U:
      case kExprI64AtomicAnd8U:
      case kExprI64AtomicOr8U:
      case kExprI64AtomicXor8U:
      case kExprI64AtomicExchange8U:
      case kExprI64AtomicCompareExchange8U:
      case kExprS128Load8Splat:
        return 0;
      default:
        return 0;
    }
  }

  template <WasmOpcode memory_op, ValueKind... arg_kinds>
  void memop(DataRange* data) {
    const uint8_t align = data->get<uint8_t>() % (max_alignment(memory_op) + 1);
    const uint32_t offset = data->get<uint32_t>();

    // Generate the index and the arguments, if any.
    Generate<kI32, arg_kinds...>(data);

    if (WasmOpcodes::IsPrefixOpcode(static_cast<WasmOpcode>(memory_op >> 8))) {
      DCHECK(memory_op >> 8 == kAtomicPrefix || memory_op >> 8 == kSimdPrefix);
      builder_->EmitWithPrefix(memory_op);
    } else {
      builder_->Emit(memory_op);
    }
    builder_->EmitU32V(align);
    builder_->EmitU32V(offset);
  }

  template <WasmOpcode Op, ValueKind... Args>
  void atomic_op(DataRange* data) {
    const uint8_t align = data->get<uint8_t>() % (max_alignment(Op) + 1);
    const uint32_t offset = data->get<uint32_t>();

    Generate<Args...>(data);
    builder_->EmitWithPrefix(Op);

    builder_->EmitU32V(align);
    builder_->EmitU32V(offset);
  }

  template <WasmOpcode Op, ValueKind... Args>
  void op_with_prefix(DataRange* data) {
    Generate<Args...>(data);
    builder_->EmitWithPrefix(Op);
  }

  void simd_const(DataRange* data) {
    builder_->EmitWithPrefix(kExprS128Const);
    for (int i = 0; i < kSimd128Size; i++) {
      builder_->EmitByte(data->get<byte>());
    }
  }

  template <WasmOpcode Op, int lanes, ValueKind... Args>
  void simd_lane_op(DataRange* data) {
    Generate<Args...>(data);
    builder_->EmitWithPrefix(Op);
    builder_->EmitByte(data->get<byte>() % lanes);
  }

  template <WasmOpcode Op, int lanes, ValueKind... Args>
  void simd_lane_memop(DataRange* data) {
    // Simd load/store instructions that have a lane immediate.
    memop<Op, Args...>(data);
    builder_->EmitByte(data->get<byte>() % lanes);
  }

  void simd_shuffle(DataRange* data) {
    Generate<kS128, kS128>(data);
    builder_->EmitWithPrefix(kExprI8x16Shuffle);
    for (int i = 0; i < kSimd128Size; i++) {
      builder_->EmitByte(static_cast<uint8_t>(data->get<byte>() % 32));
    }
  }

  void drop(DataRange* data) {
    Generate(GetValueType(data, liftoff_as_reference_,
                          static_cast<uint32_t>(functions_.size()) +
                              num_structs_ + num_arrays_),
             data);
    builder_->Emit(kExprDrop);
  }

  enum CallKind { kCallDirect, kCallIndirect, kCallRef };

  template <ValueKind wanted_kind>
  void call(DataRange* data) {
    call(data, ValueType::Primitive(wanted_kind), kCallDirect);
  }

  template <ValueKind wanted_kind>
  void call_indirect(DataRange* data) {
    call(data, ValueType::Primitive(wanted_kind), kCallIndirect);
  }

  template <ValueKind wanted_kind>
  void call_ref(DataRange* data) {
    if (liftoff_as_reference_) {
      call(data, ValueType::Primitive(wanted_kind), kCallRef);
    } else {
      Generate<wanted_kind>(data);
    }
  }

  void Convert(ValueType src, ValueType dst) {
    auto idx = [](ValueType t) -> int {
      switch (t.kind()) {
        case kI32:
          return 0;
        case kI64:
          return 1;
        case kF32:
          return 2;
        case kF64:
          return 3;
        default:
          UNREACHABLE();
      }
    };
    static constexpr WasmOpcode kConvertOpcodes[] = {
        // {i32, i64, f32, f64} -> i32
        kExprNop, kExprI32ConvertI64, kExprI32SConvertF32, kExprI32SConvertF64,
        // {i32, i64, f32, f64} -> i64
        kExprI64SConvertI32, kExprNop, kExprI64SConvertF32, kExprI64SConvertF64,
        // {i32, i64, f32, f64} -> f32
        kExprF32SConvertI32, kExprF32SConvertI64, kExprNop, kExprF32ConvertF64,
        // {i32, i64, f32, f64} -> f64
        kExprF64SConvertI32, kExprF64SConvertI64, kExprF64ConvertF32, kExprNop};
    int arr_idx = idx(dst) << 2 | idx(src);
    builder_->Emit(kConvertOpcodes[arr_idx]);
  }

  void call(DataRange* data, ValueType wanted_kind, CallKind call_kind) {
    uint8_t random_byte = data->get<uint8_t>();
    int func_index = random_byte % functions_.size();
    uint32_t sig_index = functions_[func_index];
    FunctionSig* sig = builder_->builder()->GetSignature(sig_index);
    // Generate arguments.
    for (size_t i = 0; i < sig->parameter_count(); ++i) {
      Generate(sig->GetParam(i), data);
    }
    // Emit call.
    // If the return types of the callee happen to match the return types of
    // the caller, generate a tail call.
    bool use_return_call = random_byte > 127;
    if (use_return_call &&
        std::equal(sig->returns().begin(), sig->returns().end(),
                   builder_->signature()->returns().begin(),
                   builder_->signature()->returns().end())) {
      if (call_kind == kCallDirect) {
        builder_->EmitWithU32V(kExprReturnCall, func_index);
      } else if (call_kind == kCallIndirect) {
        // This will not trap because table[func_index] always contains
        // function func_index.
        builder_->EmitI32Const(func_index);
        builder_->EmitWithU32V(kExprReturnCallIndirect, sig_index);
        // TODO(11954): Use other table indices too.
        builder_->EmitByte(0);  // Table index.
      } else {
        GenerateRef(HeapType(sig_index), data);
        builder_->Emit(kExprReturnCallRef);
      }
      return;
    } else {
      if (call_kind == kCallDirect) {
        builder_->EmitWithU32V(kExprCallFunction, func_index);
      } else if (call_kind == kCallIndirect) {
        // This will not trap because table[func_index] always contains
        // function func_index.
        builder_->EmitI32Const(func_index);
        builder_->EmitWithU32V(kExprCallIndirect, sig_index);
        // TODO(11954): Use other table indices too.
        builder_->EmitByte(0);  // Table index.
      } else {
        GenerateRef(HeapType(sig_index), data);
        builder_->Emit(kExprCallRef);
      }
    }
    if (sig->return_count() == 0 && wanted_kind != kWasmVoid) {
      // The call did not generate a value. Thus just generate it here.
      Generate(wanted_kind, data);
      return;
    }
    if (wanted_kind == kWasmVoid) {
      // The call did generate values, but we did not want one.
      for (size_t i = 0; i < sig->return_count(); ++i) {
        builder_->Emit(kExprDrop);
      }
      return;
    }
    auto return_types =
        base::VectorOf(sig->returns().begin(), sig->return_count());
    auto wanted_types =
        base::VectorOf(&wanted_kind, wanted_kind == kWasmVoid ? 0 : 1);
    ConsumeAndGenerate(return_types, wanted_types, data);
  }

  struct Var {
    uint32_t index;
    ValueType type = kWasmVoid;
    Var() = default;
    Var(uint32_t index, ValueType type) : index(index), type(type) {}
    bool is_valid() const { return type != kWasmVoid; }
  };

  Var GetRandomLocal(DataRange* data) {
    uint32_t num_params =
        static_cast<uint32_t>(builder_->signature()->parameter_count());
    uint32_t num_locals = static_cast<uint32_t>(locals_.size());
    if (num_params + num_locals == 0) return {};
    uint32_t index = data->get<uint8_t>() % (num_params + num_locals);
    ValueType type = index < num_params ? builder_->signature()->GetParam(index)
                                        : locals_[index - num_params];
    return {index, type};
  }

  constexpr static bool is_convertible_kind(ValueKind kind) {
    return kind == kI32 || kind == kI64 || kind == kF32 || kind == kF64;
  }

  template <ValueKind wanted_kind>
  void local_op(DataRange* data, WasmOpcode opcode) {
    STATIC_ASSERT(wanted_kind == kVoid || is_convertible_kind(wanted_kind));
    Var local = GetRandomLocal(data);
    // If there are no locals and no parameters, just generate any value (if a
    // value is needed), or do nothing.
    if (!local.is_valid() || !is_convertible_kind(local.type.kind())) {
      if (wanted_kind == kVoid) return;
      return Generate<wanted_kind>(data);
    }

    if (opcode != kExprLocalGet) Generate(local.type, data);
    builder_->EmitWithU32V(opcode, local.index);
    if (wanted_kind != kVoid && local.type.kind() != wanted_kind) {
      Convert(local.type, ValueType::Primitive(wanted_kind));
    }
  }

  template <ValueKind wanted_kind>
  void get_local(DataRange* data) {
    static_assert(wanted_kind != kVoid, "illegal type");
    local_op<wanted_kind>(data, kExprLocalGet);
  }

  void set_local(DataRange* data) { local_op<kVoid>(data, kExprLocalSet); }

  template <ValueKind wanted_kind>
  void tee_local(DataRange* data) {
    local_op<wanted_kind>(data, kExprLocalTee);
  }

  template <size_t num_bytes>
  void i32_const(DataRange* data) {
    builder_->EmitI32Const(data->get<int32_t, num_bytes>());
  }

  template <size_t num_bytes>
  void i64_const(DataRange* data) {
    builder_->EmitI64Const(data->get<int64_t, num_bytes>());
  }

  Var GetRandomGlobal(DataRange* data, bool ensure_mutable) {
    uint32_t index;
    if (ensure_mutable) {
      if (mutable_globals_.empty()) return {};
      index = mutable_globals_[data->get<uint8_t>() % mutable_globals_.size()];
    } else {
      if (globals_.empty()) return {};
      index = data->get<uint8_t>() % globals_.size();
    }
    ValueType type = globals_[index];
    return {index, type};
  }

  template <ValueKind wanted_kind>
  void global_op(DataRange* data) {
    STATIC_ASSERT(wanted_kind == kVoid || is_convertible_kind(wanted_kind));
    constexpr bool is_set = wanted_kind == kVoid;
    Var global = GetRandomGlobal(data, is_set);
    // If there are no globals, just generate any value (if a value is needed),
    // or do nothing.
    if (!global.is_valid() || !is_convertible_kind(global.type.kind())) {
      if (wanted_kind == kVoid) return;
      return Generate<wanted_kind>(data);
    }

    if (is_set) Generate(global.type, data);
    builder_->EmitWithU32V(is_set ? kExprGlobalSet : kExprGlobalGet,
                           global.index);
    if (!is_set && global.type.kind() != wanted_kind) {
      Convert(global.type, ValueType::Primitive(wanted_kind));
    }
  }

  template <ValueKind wanted_kind>
  void get_global(DataRange* data) {
    static_assert(wanted_kind != kVoid, "illegal type");
    global_op<wanted_kind>(data);
  }

  template <ValueKind select_kind>
  void select_with_type(DataRange* data) {
    static_assert(select_kind != kVoid, "illegal kind for select");
    Generate<select_kind, select_kind, kI32>(data);
    // num_types is always 1.
    uint8_t num_types = 1;
    builder_->EmitWithU8U8(kExprSelectWithType, num_types,
                           ValueType::Primitive(select_kind).value_type_code());
  }

  void set_global(DataRange* data) { global_op<kVoid>(data); }

  void throw_or_rethrow(DataRange* data) {
    bool rethrow = data->get<bool>();
    if (rethrow && !catch_blocks_.empty()) {
      int control_depth = static_cast<int>(blocks_.size() - 1);
      int catch_index =
          data->get<uint8_t>() % static_cast<int>(catch_blocks_.size());
      builder_->EmitWithU32V(kExprRethrow,
                             control_depth - catch_blocks_[catch_index]);
    } else {
      int tag = data->get<uint8_t>() % builder_->builder()->NumExceptions();
      FunctionSig* exception_sig = builder_->builder()->GetExceptionType(tag);
      base::Vector<const ValueType> exception_types(
          exception_sig->parameters().begin(),
          exception_sig->parameter_count());
      Generate(exception_types, data);
      builder_->EmitWithU32V(kExprThrow, tag);
    }
  }

  template <ValueKind... Types>
  void sequence(DataRange* data) {
    Generate<Types...>(data);
  }

  void current_memory(DataRange* data) {
    builder_->EmitWithU8(kExprMemorySize, 0);
  }

  void grow_memory(DataRange* data);

  void ref_null(HeapType type, DataRange* data) {
    builder_->EmitWithI32V(kExprRefNull, type.code());
  }

  bool get_local_ref(HeapType type, DataRange* data) {
    Var local = GetRandomLocal(data);
    // TODO(manoskouk): Ideally we would check for subtyping here over type
    // equality, but we don't have a module.
    if (local.is_valid() && local.type.is_object_reference() &&
        type == local.type.heap_type()) {
      builder_->EmitWithU32V(kExprLocalGet, local.index);
      return true;
    }

    return false;
  }

  bool new_object(HeapType type, DataRange* data) {
    DCHECK(liftoff_as_reference_ && type.is_index());
    bool new_default = data->get<bool>();
    uint32_t index = type.ref_index();
    if (builder_->builder()->IsStructType(index)) {
      if (new_default) {
        builder_->EmitWithPrefix(kExprRttCanon);
        builder_->EmitU32V(index);
        builder_->EmitWithPrefix(kExprStructNewDefaultWithRtt);
        builder_->EmitU32V(index);
      } else {
        StructType* struct_gen = builder_->builder()->GetStructType(index);
        int field_count = struct_gen->field_count();
        for (int i = 0; i < field_count; i++) {
          Generate(struct_gen->field(i).Unpacked(), data);
        }
        builder_->EmitWithPrefix(kExprRttCanon);
        builder_->EmitU32V(index);
        builder_->EmitWithPrefix(kExprStructNewWithRtt);
        builder_->EmitU32V(index);
      }
    } else if (builder_->builder()->IsArrayType(index)) {
      if (new_default) {
        Generate(kWasmI32, data);
        builder_->EmitI32Const(kMaxArraySize);
        builder_->Emit(kExprI32RemS);
        builder_->EmitWithPrefix(kExprRttCanon);
        builder_->EmitU32V(index);
        builder_->EmitWithPrefix(kExprArrayNewDefaultWithRtt);
        builder_->EmitU32V(index);
      } else {
        Generate(
            builder_->builder()->GetArrayType(index)->element_type().Unpacked(),
            data);
        Generate(kWasmI32, data);
        builder_->EmitI32Const(kMaxArraySize);
        builder_->Emit(kExprI32RemS);
        builder_->EmitWithPrefix(kExprRttCanon);
        builder_->EmitU32V(index);
        builder_->EmitWithPrefix(kExprArrayNewWithRtt);
        builder_->EmitU32V(index);
      }
    } else {
      DCHECK(builder_->builder()->IsSignature(index));
      int func_size = builder_->builder()->NumFunctions();
      for (int i = 0; i < func_size; i++) {
        WasmFunctionBuilder* func = builder_->builder()->GetFunction(i);
        // TODO(11954): Choose a random function from among those matching the
        // signature (consider function subtyping?).
        if (*(func->signature()) ==
            *(builder_->builder()->GetSignature(index))) {
          builder_->EmitWithU32V(kExprRefFunc, func->func_index());
          return true;
        }
      }
      UNREACHABLE();
    }

    return true;
  }

  template <ValueKind wanted_kind>
  void table_op(std::vector<ValueType> types, DataRange* data,
                WasmOpcode opcode) {
    DCHECK(opcode == kExprTableSet || opcode == kExprTableSize ||
           opcode == kExprTableGrow || opcode == kExprTableFill);
    int num_tables = builder_->builder()->NumTables();
    DCHECK_GT(num_tables, 0);
    int index = data->get<uint8_t>() % num_tables;
    for (size_t i = 0; i < types.size(); i++) {
      // When passing the reftype, by default kWasmFuncRef is used.
      // The type is then changed according to its table type.
      if (types[i] == kWasmFuncRef) {
        types[i] = builder_->builder()->GetTableType(index);
      }
    }
    Generate(base::VectorOf(types), data);
    if (opcode == kExprTableSet) {
      builder_->Emit(opcode);
    } else {
      builder_->EmitWithPrefix(opcode);
    }
    builder_->EmitU32V(index);
  }

  bool table_get(HeapType type, DataRange* data) {
    ValueType needed_type = ValueType::Ref(type, kNullable);
    int table_count = builder_->builder()->NumTables();
    ZoneVector<uint32_t> table(builder_->builder()->zone());
    for (int i = 0; i < table_count; i++) {
      if (builder_->builder()->GetTableType(i) == needed_type) {
        table.push_back(i);
      }
    }
    if (table.empty()) {
      return false;
    }
    int index = data->get<uint8_t>() % static_cast<int>(table.size());
    Generate(kWasmI32, data);
    builder_->Emit(kExprTableGet);
    builder_->EmitU32V(table[index]);
    return true;
  }

  void table_set(DataRange* data) {
    table_op<kVoid>({kWasmI32, kWasmFuncRef}, data, kExprTableSet);
  }
  void table_size(DataRange* data) { table_op<kI32>({}, data, kExprTableSize); }
  void table_grow(DataRange* data) {
    table_op<kI32>({kWasmFuncRef, kWasmI32}, data, kExprTableGrow);
  }
  void table_fill(DataRange* data) {
    table_op<kVoid>({kWasmI32, kWasmFuncRef, kWasmI32}, data, kExprTableFill);
  }
  void table_copy(DataRange* data) {
    ValueType needed_type =
        data->get<bool>()
            ? ValueType::Ref(HeapType(HeapType::kFunc), kNullable)
            : ValueType::Ref(HeapType(HeapType::kExtern), kNullable);
    int table_count = builder_->builder()->NumTables();
    ZoneVector<uint32_t> table(builder_->builder()->zone());
    for (int i = 0; i < table_count; i++) {
      if (builder_->builder()->GetTableType(i) == needed_type) {
        table.push_back(i);
      }
    }
    if (table.empty()) {
      return;
    }
    int first_index = data->get<uint8_t>() % static_cast<int>(table.size());
    int second_index = data->get<uint8_t>() % static_cast<int>(table.size());
    Generate(kWasmI32, data);
    Generate(kWasmI32, data);
    Generate(kWasmI32, data);
    builder_->EmitWithPrefix(kExprTableCopy);
    builder_->EmitU32V(table[first_index]);
    builder_->EmitU32V(table[second_index]);
  }

  bool array_get_helper(ValueType value_type, DataRange* data) {
    WasmModuleBuilder* builder = builder_->builder();
    ZoneVector<uint32_t> array_indices(builder->zone());

    for (uint32_t i = num_structs_; i < num_arrays_ + num_structs_; i++) {
      DCHECK(builder->IsArrayType(i));
      if (builder->GetArrayType(i)->element_type().Unpacked() == value_type) {
        array_indices.push_back(i);
      }
    }

    if (!array_indices.empty()) {
      int index = data->get<uint8_t>() % static_cast<int>(array_indices.size());
      GenerateRef(HeapType(array_indices[index]), data);
      Generate(kWasmI32, data);
      if (builder->GetArrayType(array_indices[index])
              ->element_type()
              .is_packed()) {
        builder_->EmitWithPrefix(data->get<bool>() ? kExprArrayGetS
                                                   : kExprArrayGetU);
      } else {
        builder_->EmitWithPrefix(kExprArrayGet);
      }
      builder_->EmitU32V(array_indices[index]);
      return true;
    }

    return false;
  }

  template <ValueKind wanted_kind>
  void array_get(DataRange* data) {
    bool got_array_value =
        array_get_helper(ValueType::Primitive(wanted_kind), data);
    if (!got_array_value) {
      Generate<wanted_kind>(data);
    }
  }

  bool array_get_ref(HeapType type, DataRange* data) {
    ValueType needed_type = ValueType::Ref(type, kNullable);
    return array_get_helper(needed_type, data);
  }

  void i31_get(DataRange* data) {
    if (!liftoff_as_reference_) {
      Generate(kWasmI32, data);
      return;
    }
    GenerateRef(HeapType(HeapType::kI31), data);
    builder_->Emit(kExprRefAsNonNull);
    if (data->get<bool>()) {
      builder_->EmitWithPrefix(kExprI31GetS);
    } else {
      builder_->EmitWithPrefix(kExprI31GetU);
    }
  }

  void array_len(DataRange* data) {
    if (num_arrays_ > 1) {
      int array_index = (data->get<uint8_t>() % num_arrays_) + num_structs_;
      DCHECK(builder_->builder()->IsArrayType(array_index));
      GenerateRef(HeapType(array_index), data);
      builder_->EmitWithPrefix(kExprArrayLen);
      builder_->EmitU32V(array_index);
    } else {
      Generate(kWasmI32, data);
    }
  }

  void array_set(DataRange* data) {
    WasmModuleBuilder* builder = builder_->builder();
    ZoneVector<uint32_t> array_indices(builder->zone());
    for (uint32_t i = num_structs_; i < num_arrays_ + num_structs_; i++) {
      DCHECK(builder->IsArrayType(i));
      if (builder->GetArrayType(i)->mutability()) {
        array_indices.push_back(i);
      }
    }

    if (array_indices.empty()) {
      return;
    }

    int index = data->get<uint8_t>() % static_cast<int>(array_indices.size());
    GenerateRef(HeapType(array_indices[index]), data);
    Generate(kWasmI32, data);
    Generate(
        builder->GetArrayType(array_indices[index])->element_type().Unpacked(),
        data);
    builder_->EmitWithPrefix(kExprArraySet);
    builder_->EmitU32V(array_indices[index]);
  }

  bool struct_get_helper(ValueType value_type, DataRange* data) {
    WasmModuleBuilder* builder = builder_->builder();
    ZoneVector<uint32_t> field_index(builder->zone());
    ZoneVector<uint32_t> struct_index(builder->zone());
    for (uint32_t i = 0; i < num_structs_; i++) {
      DCHECK(builder->IsStructType(i));
      int field_count = builder->GetStructType(i)->field_count();
      for (int index = 0; index < field_count; index++) {
        if (builder->GetStructType(i)->field(index) == value_type) {
          field_index.push_back(index);
          struct_index.push_back(i);
        }
      }
    }
    if (!field_index.empty()) {
      int index = data->get<uint8_t>() % static_cast<int>(field_index.size());
      GenerateRef(HeapType(struct_index[index]), data);
      if (builder->GetStructType(struct_index[index])
              ->field(field_index[index])
              .is_packed()) {
        builder_->EmitWithPrefix(data->get<bool>() ? kExprStructGetS
                                                   : kExprStructGetU);
      } else {
        builder_->EmitWithPrefix(kExprStructGet);
      }
      builder_->EmitU32V(struct_index[index]);
      builder_->EmitU32V(field_index[index]);
      return true;
    }
    return false;
  }

  template <ValueKind wanted_kind>
  void struct_get(DataRange* data) {
    bool got_struct_value =
        struct_get_helper(ValueType::Primitive(wanted_kind), data);
    if (!got_struct_value) {
      Generate<wanted_kind>(data);
    }
  }

  bool struct_get_ref(HeapType type, DataRange* data) {
    ValueType needed_type = ValueType::Ref(type, kNullable);
    return struct_get_helper(needed_type, data);
  }

  void struct_set(DataRange* data) {
    WasmModuleBuilder* builder = builder_->builder();
    if (num_structs_ > 0) {
      int struct_index = data->get<uint8_t>() % num_structs_;
      DCHECK(builder->IsStructType(struct_index));
      StructType* struct_type = builder->GetStructType(struct_index);
      ZoneVector<uint32_t> field_indices(builder->zone());
      for (uint32_t i = 0; i < struct_type->field_count(); i++) {
        if (struct_type->mutability(i)) {
          field_indices.push_back(i);
        }
      }
      if (field_indices.empty()) {
        return;
      }
      int field_index =
          field_indices[data->get<uint8_t>() % field_indices.size()];
      GenerateRef(HeapType(struct_index), data);
      Generate(struct_type->field(field_index).Unpacked(), data);
      builder_->EmitWithPrefix(kExprStructSet);
      builder_->EmitU32V(struct_index);
      builder_->EmitU32V(field_index);
    }
  }

  template <ValueKind wanted_kind>
  void ref_is_null(DataRange* data) {
    GenerateRef(HeapType(HeapType::kAny), data);
    builder_->Emit(kExprRefIsNull);
  }

  void ref_eq(DataRange* data) {
    if (!liftoff_as_reference_) {
      Generate(kWasmI32, data);
      return;
    }
    GenerateRef(HeapType(HeapType::kEq), data);
    GenerateRef(HeapType(HeapType::kEq), data);
    builder_->Emit(kExprRefEq);
  }

  using GenerateFn = void (WasmGenerator::*const)(DataRange*);
  using GenerateFnWithHeap = bool (WasmGenerator::*const)(HeapType, DataRange*);

  template <size_t N>
  void GenerateOneOf(GenerateFn (&alternatives)[N], DataRange* data) {
    static_assert(N < std::numeric_limits<uint8_t>::max(),
                  "Too many alternatives. Use a bigger type if needed.");
    const auto which = data->get<uint8_t>();

    GenerateFn alternate = alternatives[which % N];
    (this->*alternate)(data);
  }

  template <size_t N>
  void GenerateOneOf(GenerateFnWithHeap (&alternatives)[N], HeapType type,
                     DataRange* data, bool nullability = true) {
    static_assert(N < std::numeric_limits<uint8_t>::max(),
                  "Too many alternatives. Use a bigger type if needed.");

    int index = data->get<uint8_t>() % (N + 1);

    if (nullability && index == N) {
      ref_null(type, data);
      return;
    }

    for (int i = index; i < static_cast<int>(N); i++) {
      if ((this->*alternatives[i])(type, data)) {
        return;
      }
    }

    for (int i = 0; i < index; i++) {
      if ((this->*alternatives[i])(type, data)) {
        return;
      }
    }

    DCHECK(nullability);
    ref_null(type, data);
  }

  struct GeneratorRecursionScope {
    explicit GeneratorRecursionScope(WasmGenerator* gen) : gen(gen) {
      ++gen->recursion_depth;
      DCHECK_LE(gen->recursion_depth, kMaxRecursionDepth);
    }
    ~GeneratorRecursionScope() {
      DCHECK_GT(gen->recursion_depth, 0);
      --gen->recursion_depth;
    }
    WasmGenerator* gen;
  };

public:
|
2020-07-28 15:09:13 +00:00
|
|
|
WasmGenerator(WasmFunctionBuilder* fn, const std::vector<uint32_t>& functions,
|
2018-01-29 16:45:59 +00:00
|
|
|
const std::vector<ValueType>& globals,
|
2021-08-12 21:40:25 +00:00
|
|
|
const std::vector<uint8_t>& mutable_globals,
|
|
|
|
uint32_t num_structs, uint32_t num_arrays, DataRange* data,
|
2021-07-21 11:43:07 +00:00
|
|
|
bool liftoff_as_reference)
|
2018-01-29 16:45:59 +00:00
|
|
|
: builder_(fn),
|
|
|
|
functions_(functions),
|
|
|
|
globals_(globals),
|
2021-07-21 11:43:07 +00:00
|
|
|
mutable_globals_(mutable_globals),
|
2021-08-12 21:40:25 +00:00
|
|
|
num_structs_(num_structs),
|
|
|
|
num_arrays_(num_arrays),
|
2021-07-21 11:43:07 +00:00
|
|
|
liftoff_as_reference_(liftoff_as_reference) {
|
2017-12-21 14:31:42 +00:00
|
|
|
FunctionSig* sig = fn->signature();
|
2020-05-20 09:21:27 +00:00
|
|
|
blocks_.emplace_back();
|
|
|
|
for (size_t i = 0; i < sig->return_count(); ++i) {
|
|
|
|
blocks_.back().push_back(sig->GetReturn(i));
|
2020-05-12 14:39:52 +00:00
|
|
|
}
|
2018-01-02 18:52:49 +00:00
|
|
|
|
|
|
|
constexpr uint32_t kMaxLocals = 32;
|
2019-07-08 11:44:42 +00:00
|
|
|
locals_.resize(data->get<uint8_t>() % kMaxLocals);
|
2018-01-02 18:52:49 +00:00
|
|
|
for (ValueType& local : locals_) {
|
2021-09-28 13:36:22 +00:00
|
|
|
local = GetValueType(data, liftoff_as_reference_,
|
|
|
|
static_cast<uint32_t>(functions_.size()) +
|
|
|
|
num_structs_ + num_arrays_);
|
2018-01-02 18:52:49 +00:00
|
|
|
fn->AddLocal(local);
|
|
|
|
}
|
2017-11-15 11:51:15 +00:00
|
|
|
}

  void Generate(ValueType type, DataRange* data);

  template <ValueKind T>
  void Generate(DataRange* data);

  template <ValueKind T1, ValueKind T2, ValueKind... Ts>
  void Generate(DataRange* data) {
    // TODO(clemensb): Implement a more even split.
    auto first_data = data->split();
    Generate<T1>(&first_data);
    Generate<T2, Ts...>(data);
  }

  void GenerateRef(HeapType type, DataRange* data);

  std::vector<ValueType> GenerateTypes(DataRange* data);
  void Generate(base::Vector<const ValueType> types, DataRange* data);
  void ConsumeAndGenerate(base::Vector<const ValueType> parameter_types,
                          base::Vector<const ValueType> return_types,
                          DataRange* data);
  bool HasSimd() { return has_simd_; }

 private:
  WasmFunctionBuilder* builder_;
  std::vector<std::vector<ValueType>> blocks_;
  const std::vector<uint32_t>& functions_;
  std::vector<ValueType> locals_;
  std::vector<ValueType> globals_;
  std::vector<uint8_t> mutable_globals_;  // indexes into {globals_}.
  uint32_t recursion_depth = 0;
  std::vector<int> catch_blocks_;
  bool has_simd_;
  uint32_t num_structs_;
  uint32_t num_arrays_;
  bool liftoff_as_reference_;

  static constexpr uint32_t kMaxRecursionDepth = 64;

  bool recursion_limit_reached() {
    return recursion_depth >= kMaxRecursionDepth;
  }
};
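`recursion_limit_reached()` pairs with an RAII scope that bumps `recursion_depth` for the duration of one `Generate` call. A self-contained sketch of how such a guard works (hypothetical names; V8's actual `GeneratorRecursionScope` is defined elsewhere in this file):

```cpp
#include <cassert>
#include <cstdint>

// Minimal generator state carrying only the recursion bookkeeping.
struct Gen {
  uint32_t recursion_depth = 0;
  static constexpr uint32_t kMaxRecursionDepth = 64;
  bool recursion_limit_reached() const {
    return recursion_depth >= kMaxRecursionDepth;
  }
};

// RAII guard: increments the depth on entry to a Generate call and restores
// it on scope exit, so the depth always mirrors the live call stack.
class RecursionScope {
 public:
  explicit RecursionScope(Gen* gen) : gen_(gen) { ++gen_->recursion_depth; }
  ~RecursionScope() { --gen_->recursion_depth; }

 private:
  Gen* gen_;
};
```

The guard makes the bound robust against early returns: any exit path from a `Generate` specialization unwinds the scope and decrements the depth.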

template <>
void WasmGenerator::block<kVoid>(DataRange* data) {
  block({}, {}, data);
}

template <>
void WasmGenerator::loop<kVoid>(DataRange* data) {
  loop({}, {}, data);
}

template <>
void WasmGenerator::Generate<kVoid>(DataRange* data) {
  GeneratorRecursionScope rec_scope(this);
  if (recursion_limit_reached() || data->size() == 0) return;

  constexpr GenerateFn alternatives[] = {
      &WasmGenerator::sequence<kVoid, kVoid>,
      &WasmGenerator::sequence<kVoid, kVoid, kVoid, kVoid>,
      &WasmGenerator::sequence<kVoid, kVoid, kVoid, kVoid, kVoid, kVoid, kVoid,
                               kVoid>,
      &WasmGenerator::block<kVoid>,
      &WasmGenerator::loop<kVoid>,
      &WasmGenerator::if_<kVoid, kIf>,
      &WasmGenerator::if_<kVoid, kIfElse>,
      &WasmGenerator::br,
      &WasmGenerator::br_if<kVoid>,
      &WasmGenerator::br_on_null<kVoid>,

      &WasmGenerator::memop<kExprI32StoreMem, kI32>,
      &WasmGenerator::memop<kExprI32StoreMem8, kI32>,
      &WasmGenerator::memop<kExprI32StoreMem16, kI32>,
      &WasmGenerator::memop<kExprI64StoreMem, kI64>,
      &WasmGenerator::memop<kExprI64StoreMem8, kI64>,
      &WasmGenerator::memop<kExprI64StoreMem16, kI64>,
      &WasmGenerator::memop<kExprI64StoreMem32, kI64>,
      &WasmGenerator::memop<kExprF32StoreMem, kF32>,
      &WasmGenerator::memop<kExprF64StoreMem, kF64>,
      &WasmGenerator::memop<kExprI32AtomicStore, kI32>,
      &WasmGenerator::memop<kExprI32AtomicStore8U, kI32>,
      &WasmGenerator::memop<kExprI32AtomicStore16U, kI32>,
      &WasmGenerator::memop<kExprI64AtomicStore, kI64>,
      &WasmGenerator::memop<kExprI64AtomicStore8U, kI64>,
      &WasmGenerator::memop<kExprI64AtomicStore16U, kI64>,
      &WasmGenerator::memop<kExprI64AtomicStore32U, kI64>,
      &WasmGenerator::memop<kExprS128StoreMem, kS128>,
      &WasmGenerator::simd_lane_memop<kExprS128Store8Lane, 16, kS128>,
      &WasmGenerator::simd_lane_memop<kExprS128Store16Lane, 8, kS128>,
      &WasmGenerator::simd_lane_memop<kExprS128Store32Lane, 4, kS128>,
      &WasmGenerator::simd_lane_memop<kExprS128Store64Lane, 2, kS128>,

      &WasmGenerator::drop,

      &WasmGenerator::call<kVoid>,
      &WasmGenerator::call_indirect<kVoid>,
      &WasmGenerator::call_ref<kVoid>,

      &WasmGenerator::set_local,
      &WasmGenerator::set_global,
      &WasmGenerator::throw_or_rethrow,
      &WasmGenerator::try_block<kVoid>,

      &WasmGenerator::struct_set,
      &WasmGenerator::array_set,

      &WasmGenerator::table_set,
      &WasmGenerator::table_fill,
      &WasmGenerator::table_copy};

  GenerateOneOf(alternatives, data);
}
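`GenerateOneOf` consumes fuzzer input to index into the `alternatives` table and invokes the chosen generator. A minimal sketch of that dispatch step using plain function pointers (hypothetical helper; not the exact V8 implementation, which uses pointers to member functions):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Stand-ins for generator rules; each "emits" a distinct tag.
using GenFn = int (*)();

int EmitNop() { return 0; }
int EmitConst() { return 1; }
int EmitBlock() { return 2; }

// Sketch of GenerateOneOf's core: take one input byte modulo the table size
// so every alternative is reachable, then call the selected rule.
int GenerateOneOfSketch(const GenFn* table, size_t count, uint8_t byte) {
  assert(count > 0);
  return table[byte % count]();
}
```

Because selection is `byte % count`, adding or removing an alternative changes which inputs map to which rules, which is why such tables tend to invalidate existing fuzzer corpora.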

template <>
void WasmGenerator::Generate<kI32>(DataRange* data) {
  GeneratorRecursionScope rec_scope(this);
  if (recursion_limit_reached() || data->size() <= 1) {
    builder_->EmitI32Const(data->get<uint32_t>());
    return;
  }
|
2017-11-06 15:48:17 +00:00
|
|
|
|
2019-10-17 08:46:01 +00:00
|
|
|
constexpr GenerateFn alternatives[] = {
|
2018-11-15 15:08:32 +00:00
|
|
|
&WasmGenerator::i32_const<1>,
|
|
|
|
&WasmGenerator::i32_const<2>,
|
|
|
|
&WasmGenerator::i32_const<3>,
|
|
|
|
&WasmGenerator::i32_const<4>,
|
|
|
|
|
2021-03-22 06:56:01 +00:00
|
|
|
&WasmGenerator::sequence<kI32, kVoid>,
|
|
|
|
&WasmGenerator::sequence<kVoid, kI32>,
|
|
|
|
&WasmGenerator::sequence<kVoid, kI32, kVoid>,
|
2021-02-22 09:28:44 +00:00
|
|
|
|
|
|
|
&WasmGenerator::op<kExprI32Eqz, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32Eq, kI32, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32Ne, kI32, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32LtS, kI32, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32LtU, kI32, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32GeS, kI32, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32GeU, kI32, kI32>,
|
|
|
|
|
|
|
|
&WasmGenerator::op<kExprI64Eqz, kI64>,
|
|
|
|
&WasmGenerator::op<kExprI64Eq, kI64, kI64>,
|
|
|
|
&WasmGenerator::op<kExprI64Ne, kI64, kI64>,
|
|
|
|
&WasmGenerator::op<kExprI64LtS, kI64, kI64>,
|
|
|
|
&WasmGenerator::op<kExprI64LtU, kI64, kI64>,
|
|
|
|
&WasmGenerator::op<kExprI64GeS, kI64, kI64>,
|
|
|
|
&WasmGenerator::op<kExprI64GeU, kI64, kI64>,
|
|
|
|
|
|
|
|
&WasmGenerator::op<kExprF32Eq, kF32, kF32>,
|
|
|
|
&WasmGenerator::op<kExprF32Ne, kF32, kF32>,
|
|
|
|
&WasmGenerator::op<kExprF32Lt, kF32, kF32>,
|
|
|
|
&WasmGenerator::op<kExprF32Ge, kF32, kF32>,
|
|
|
|
|
|
|
|
&WasmGenerator::op<kExprF64Eq, kF64, kF64>,
|
|
|
|
&WasmGenerator::op<kExprF64Ne, kF64, kF64>,
|
|
|
|
&WasmGenerator::op<kExprF64Lt, kF64, kF64>,
|
|
|
|
&WasmGenerator::op<kExprF64Ge, kF64, kF64>,
|
|
|
|
|
|
|
|
&WasmGenerator::op<kExprI32Add, kI32, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32Sub, kI32, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32Mul, kI32, kI32>,
|
|
|
|
|
|
|
|
&WasmGenerator::op<kExprI32DivS, kI32, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32DivU, kI32, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32RemS, kI32, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32RemU, kI32, kI32>,
|
|
|
|
|
|
|
|
&WasmGenerator::op<kExprI32And, kI32, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32Ior, kI32, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32Xor, kI32, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32Shl, kI32, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32ShrU, kI32, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32ShrS, kI32, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32Ror, kI32, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32Rol, kI32, kI32>,
|
|
|
|
|
|
|
|
&WasmGenerator::op<kExprI32Clz, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32Ctz, kI32>,
|
|
|
|
&WasmGenerator::op<kExprI32Popcnt, kI32>,
|
|
|
|
|
|
|
|
&WasmGenerator::op<kExprI32ConvertI64, kI64>,
|
|
|
|
&WasmGenerator::op<kExprI32SConvertF32, kF32>,
|
|
|
|
&WasmGenerator::op<kExprI32UConvertF32, kF32>,
|
|
|
|
&WasmGenerator::op<kExprI32SConvertF64, kF64>,
|
|
|
|
&WasmGenerator::op<kExprI32UConvertF64, kF64>,
|
|
|
|
&WasmGenerator::op<kExprI32ReinterpretF32, kF32>,
|
|
|
|
|
|
|
|
&WasmGenerator::op_with_prefix<kExprI32SConvertSatF32, kF32>,
|
|
|
|
&WasmGenerator::op_with_prefix<kExprI32UConvertSatF32, kF32>,
|
|
|
|
&WasmGenerator::op_with_prefix<kExprI32SConvertSatF64, kF64>,
|
|
|
|
&WasmGenerator::op_with_prefix<kExprI32UConvertSatF64, kF64>,
|
|
|
|
|
|
|
|
&WasmGenerator::block<kI32>,
|
|
|
|
&WasmGenerator::loop<kI32>,
|
|
|
|
&WasmGenerator::if_<kI32, kIfElse>,
|
|
|
|
&WasmGenerator::br_if<kI32>,
|
2021-08-27 10:50:03 +00:00
|
|
|
&WasmGenerator::br_on_null<kI32>,
|
2017-11-06 15:48:17 +00:00
|
|
|
|
|
|
|
&WasmGenerator::memop<kExprI32LoadMem>,
|
|
|
|
&WasmGenerator::memop<kExprI32LoadMem8S>,
|
|
|
|
&WasmGenerator::memop<kExprI32LoadMem8U>,
|
|
|
|
&WasmGenerator::memop<kExprI32LoadMem16S>,
|
|
|
|
&WasmGenerator::memop<kExprI32LoadMem16U>,
|
2020-01-14 18:17:34 +00:00
|
|
|
&WasmGenerator::memop<kExprI32AtomicLoad>,
|
|
|
|
&WasmGenerator::memop<kExprI32AtomicLoad8U>,
|
|
|
|
&WasmGenerator::memop<kExprI32AtomicLoad16U>,
|
2017-11-06 15:48:17 +00:00
|
|
|
|
2021-02-22 09:28:44 +00:00
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicAdd, kI32, kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicSub, kI32, kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicAnd, kI32, kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicOr, kI32, kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicXor, kI32, kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicExchange, kI32, kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicCompareExchange, kI32, kI32,
|
|
|
|
kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicAdd8U, kI32, kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicSub8U, kI32, kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicAnd8U, kI32, kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicOr8U, kI32, kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicXor8U, kI32, kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicExchange8U, kI32, kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicCompareExchange8U, kI32, kI32,
|
|
|
|
kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicAdd16U, kI32, kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicSub16U, kI32, kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicAnd16U, kI32, kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicOr16U, kI32, kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicXor16U, kI32, kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicExchange16U, kI32, kI32>,
|
|
|
|
&WasmGenerator::atomic_op<kExprI32AtomicCompareExchange16U, kI32, kI32,
|
|
|
|
kI32>,
|
|
|
|
|
|
|
|
&WasmGenerator::op_with_prefix<kExprV128AnyTrue, kS128>,
|
2021-03-05 23:56:29 +00:00
|
|
|
&WasmGenerator::op_with_prefix<kExprI8x16AllTrue, kS128>,
|
2021-02-22 09:28:44 +00:00
|
|
|
&WasmGenerator::op_with_prefix<kExprI8x16BitMask, kS128>,
|
2021-03-05 23:56:29 +00:00
|
|
|
&WasmGenerator::op_with_prefix<kExprI16x8AllTrue, kS128>,
|
2021-02-22 09:28:44 +00:00
|
|
|
&WasmGenerator::op_with_prefix<kExprI16x8BitMask, kS128>,
|
2021-03-05 23:56:29 +00:00
|
|
|
&WasmGenerator::op_with_prefix<kExprI32x4AllTrue, kS128>,
|
2021-02-22 09:28:44 +00:00
|
|
|
&WasmGenerator::op_with_prefix<kExprI32x4BitMask, kS128>,
|
2021-03-05 23:56:29 +00:00
|
|
|
&WasmGenerator::op_with_prefix<kExprI64x2AllTrue, kS128>,
|
2021-02-22 09:28:44 +00:00
|
|
|
&WasmGenerator::op_with_prefix<kExprI64x2BitMask, kS128>,
|
|
|
|
&WasmGenerator::simd_lane_op<kExprI8x16ExtractLaneS, 16, kS128>,
|
|
|
|
&WasmGenerator::simd_lane_op<kExprI8x16ExtractLaneU, 16, kS128>,
|
|
|
|
&WasmGenerator::simd_lane_op<kExprI16x8ExtractLaneS, 8, kS128>,
|
|
|
|
&WasmGenerator::simd_lane_op<kExprI16x8ExtractLaneU, 8, kS128>,
|
|
|
|
&WasmGenerator::simd_lane_op<kExprI32x4ExtractLane, 4, kS128>,
|
2020-02-06 18:26:29 +00:00
|
|
|
|
2017-11-06 15:48:17 +00:00
|
|
|
&WasmGenerator::current_memory,
|
2017-12-21 14:31:42 +00:00
|
|
|
&WasmGenerator::grow_memory,
|
|
|
|
|
2021-02-22 09:28:44 +00:00
|
|
|
&WasmGenerator::get_local<kI32>,
|
|
|
|
&WasmGenerator::tee_local<kI32>,
|
|
|
|
&WasmGenerator::get_global<kI32>,
|
|
|
|
&WasmGenerator::op<kExprSelect, kI32, kI32, kI32>,
|
|
|
|
&WasmGenerator::select_with_type<kI32>,
|
2018-01-02 18:52:49 +00:00
|
|
|
|
2021-02-22 09:28:44 +00:00
|
|
|
&WasmGenerator::call<kI32>,
|
|
|
|
&WasmGenerator::call_indirect<kI32>,
|
2021-09-08 15:33:51 +00:00
|
|
|
&WasmGenerator::call_ref<kI32>,
|
2021-08-10 08:48:39 +00:00
|
|
|
&WasmGenerator::try_block<kI32>,
|
|
|
|
|
2021-10-12 13:23:23 +00:00
|
|
|
      &WasmGenerator::i31_get,

      &WasmGenerator::struct_get<kI32>,
      &WasmGenerator::array_get<kI32>,
      &WasmGenerator::array_len,

      &WasmGenerator::ref_is_null<kI32>,
      &WasmGenerator::ref_eq,

      &WasmGenerator::table_size,
      &WasmGenerator::table_grow};

  GenerateOneOf(alternatives, data);
}
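// Note (restating the CL description above as a comment; wording mine, not
// original source text): each Generate<T> specialization below bottoms out by
// emitting the remaining fuzzer bytes as a constant of type T when the slice
// is small, and otherwise consumes bytes to pick one of the `alternatives`
// rules, which recursively split and consume the rest of the data.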

template <>
void WasmGenerator::Generate<kI64>(DataRange* data) {
  GeneratorRecursionScope rec_scope(this);
  if (recursion_limit_reached() || data->size() <= 1) {
    builder_->EmitI64Const(data->get<int64_t>());
    return;
  }

  constexpr GenerateFn alternatives[] = {
      &WasmGenerator::i64_const<1>,
      &WasmGenerator::i64_const<2>,
      &WasmGenerator::i64_const<3>,
      &WasmGenerator::i64_const<4>,
      &WasmGenerator::i64_const<5>,
      &WasmGenerator::i64_const<6>,
      &WasmGenerator::i64_const<7>,
      &WasmGenerator::i64_const<8>,

      &WasmGenerator::sequence<kI64, kVoid>,
      &WasmGenerator::sequence<kVoid, kI64>,
      &WasmGenerator::sequence<kVoid, kI64, kVoid>,

      &WasmGenerator::op<kExprI64Add, kI64, kI64>,
      &WasmGenerator::op<kExprI64Sub, kI64, kI64>,
      &WasmGenerator::op<kExprI64Mul, kI64, kI64>,

      &WasmGenerator::op<kExprI64DivS, kI64, kI64>,
      &WasmGenerator::op<kExprI64DivU, kI64, kI64>,
      &WasmGenerator::op<kExprI64RemS, kI64, kI64>,
      &WasmGenerator::op<kExprI64RemU, kI64, kI64>,

      &WasmGenerator::op<kExprI64And, kI64, kI64>,
      &WasmGenerator::op<kExprI64Ior, kI64, kI64>,
      &WasmGenerator::op<kExprI64Xor, kI64, kI64>,
      &WasmGenerator::op<kExprI64Shl, kI64, kI64>,
      &WasmGenerator::op<kExprI64ShrU, kI64, kI64>,
      &WasmGenerator::op<kExprI64ShrS, kI64, kI64>,
      &WasmGenerator::op<kExprI64Ror, kI64, kI64>,
      &WasmGenerator::op<kExprI64Rol, kI64, kI64>,

      &WasmGenerator::op<kExprI64Clz, kI64>,
      &WasmGenerator::op<kExprI64Ctz, kI64>,
      &WasmGenerator::op<kExprI64Popcnt, kI64>,

      &WasmGenerator::op_with_prefix<kExprI64SConvertSatF32, kF32>,
      &WasmGenerator::op_with_prefix<kExprI64UConvertSatF32, kF32>,
      &WasmGenerator::op_with_prefix<kExprI64SConvertSatF64, kF64>,
      &WasmGenerator::op_with_prefix<kExprI64UConvertSatF64, kF64>,

      &WasmGenerator::block<kI64>,
      &WasmGenerator::loop<kI64>,
      &WasmGenerator::if_<kI64, kIfElse>,
      &WasmGenerator::br_if<kI64>,
      &WasmGenerator::br_on_null<kI64>,

      &WasmGenerator::memop<kExprI64LoadMem>,
      &WasmGenerator::memop<kExprI64LoadMem8S>,
      &WasmGenerator::memop<kExprI64LoadMem8U>,
      &WasmGenerator::memop<kExprI64LoadMem16S>,
      &WasmGenerator::memop<kExprI64LoadMem16U>,
      &WasmGenerator::memop<kExprI64LoadMem32S>,
      &WasmGenerator::memop<kExprI64LoadMem32U>,
      &WasmGenerator::memop<kExprI64AtomicLoad>,
      &WasmGenerator::memop<kExprI64AtomicLoad8U>,
      &WasmGenerator::memop<kExprI64AtomicLoad16U>,
      &WasmGenerator::memop<kExprI64AtomicLoad32U>,

      &WasmGenerator::atomic_op<kExprI64AtomicAdd, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicSub, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicAnd, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicOr, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicXor, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicExchange, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicCompareExchange, kI32, kI64,
                                kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicAdd8U, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicSub8U, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicAnd8U, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicOr8U, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicXor8U, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicExchange8U, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicCompareExchange8U, kI32, kI64,
                                kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicAdd16U, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicSub16U, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicAnd16U, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicOr16U, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicXor16U, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicExchange16U, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicCompareExchange16U, kI32, kI64,
                                kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicAdd32U, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicSub32U, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicAnd32U, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicOr32U, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicXor32U, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicExchange32U, kI32, kI64>,
      &WasmGenerator::atomic_op<kExprI64AtomicCompareExchange32U, kI32, kI64,
                                kI64>,

      &WasmGenerator::simd_lane_op<kExprI64x2ExtractLane, 2, kS128>,

      &WasmGenerator::get_local<kI64>,
      &WasmGenerator::tee_local<kI64>,
      &WasmGenerator::get_global<kI64>,
      &WasmGenerator::op<kExprSelect, kI64, kI64, kI32>,
      &WasmGenerator::select_with_type<kI64>,

      &WasmGenerator::call<kI64>,
      &WasmGenerator::call_indirect<kI64>,
      &WasmGenerator::call_ref<kI64>,
      &WasmGenerator::try_block<kI64>,

      &WasmGenerator::struct_get<kI64>,
      &WasmGenerator::array_get<kI64>};

  GenerateOneOf(alternatives, data);
}

template <>
void WasmGenerator::Generate<kF32>(DataRange* data) {
  GeneratorRecursionScope rec_scope(this);
  if (recursion_limit_reached() || data->size() <= sizeof(float)) {
    builder_->EmitF32Const(data->get<float>());
    return;
  }

  constexpr GenerateFn alternatives[] = {
      &WasmGenerator::sequence<kF32, kVoid>,
      &WasmGenerator::sequence<kVoid, kF32>,
      &WasmGenerator::sequence<kVoid, kF32, kVoid>,

      &WasmGenerator::op<kExprF32Abs, kF32>,
      &WasmGenerator::op<kExprF32Neg, kF32>,
      &WasmGenerator::op<kExprF32Ceil, kF32>,
      &WasmGenerator::op<kExprF32Floor, kF32>,
      &WasmGenerator::op<kExprF32Trunc, kF32>,
      &WasmGenerator::op<kExprF32NearestInt, kF32>,
      &WasmGenerator::op<kExprF32Sqrt, kF32>,
      &WasmGenerator::op<kExprF32Add, kF32, kF32>,
      &WasmGenerator::op<kExprF32Sub, kF32, kF32>,
      &WasmGenerator::op<kExprF32Mul, kF32, kF32>,
      &WasmGenerator::op<kExprF32Div, kF32, kF32>,
      &WasmGenerator::op<kExprF32Min, kF32, kF32>,
      &WasmGenerator::op<kExprF32Max, kF32, kF32>,
      &WasmGenerator::op<kExprF32CopySign, kF32, kF32>,

      &WasmGenerator::op<kExprF32SConvertI32, kI32>,
      &WasmGenerator::op<kExprF32UConvertI32, kI32>,
      &WasmGenerator::op<kExprF32SConvertI64, kI64>,
      &WasmGenerator::op<kExprF32UConvertI64, kI64>,
      &WasmGenerator::op<kExprF32ConvertF64, kF64>,
      &WasmGenerator::op<kExprF32ReinterpretI32, kI32>,

      &WasmGenerator::block<kF32>,
      &WasmGenerator::loop<kF32>,
      &WasmGenerator::if_<kF32, kIfElse>,
      &WasmGenerator::br_if<kF32>,
      &WasmGenerator::br_on_null<kF32>,

      &WasmGenerator::memop<kExprF32LoadMem>,

      &WasmGenerator::simd_lane_op<kExprF32x4ExtractLane, 4, kS128>,

      &WasmGenerator::get_local<kF32>,
      &WasmGenerator::tee_local<kF32>,
      &WasmGenerator::get_global<kF32>,
      &WasmGenerator::op<kExprSelect, kF32, kF32, kI32>,
      &WasmGenerator::select_with_type<kF32>,

      &WasmGenerator::call<kF32>,
      &WasmGenerator::call_indirect<kF32>,
      &WasmGenerator::call_ref<kF32>,
      &WasmGenerator::try_block<kF32>,

      &WasmGenerator::struct_get<kF32>,
      &WasmGenerator::array_get<kF32>};

  GenerateOneOf(alternatives, data);
}

template <>
void WasmGenerator::Generate<kF64>(DataRange* data) {
  GeneratorRecursionScope rec_scope(this);
  if (recursion_limit_reached() || data->size() <= sizeof(double)) {
    builder_->EmitF64Const(data->get<double>());
    return;
  }

  constexpr GenerateFn alternatives[] = {
      &WasmGenerator::sequence<kF64, kVoid>,
      &WasmGenerator::sequence<kVoid, kF64>,
      &WasmGenerator::sequence<kVoid, kF64, kVoid>,

      &WasmGenerator::op<kExprF64Abs, kF64>,
      &WasmGenerator::op<kExprF64Neg, kF64>,
      &WasmGenerator::op<kExprF64Ceil, kF64>,
      &WasmGenerator::op<kExprF64Floor, kF64>,
      &WasmGenerator::op<kExprF64Trunc, kF64>,
      &WasmGenerator::op<kExprF64NearestInt, kF64>,
      &WasmGenerator::op<kExprF64Sqrt, kF64>,
      &WasmGenerator::op<kExprF64Add, kF64, kF64>,
      &WasmGenerator::op<kExprF64Sub, kF64, kF64>,
      &WasmGenerator::op<kExprF64Mul, kF64, kF64>,
      &WasmGenerator::op<kExprF64Div, kF64, kF64>,
      &WasmGenerator::op<kExprF64Min, kF64, kF64>,
      &WasmGenerator::op<kExprF64Max, kF64, kF64>,
      &WasmGenerator::op<kExprF64CopySign, kF64, kF64>,

      &WasmGenerator::op<kExprF64SConvertI32, kI32>,
      &WasmGenerator::op<kExprF64UConvertI32, kI32>,
      &WasmGenerator::op<kExprF64SConvertI64, kI64>,
      &WasmGenerator::op<kExprF64UConvertI64, kI64>,
      &WasmGenerator::op<kExprF64ConvertF32, kF32>,
      &WasmGenerator::op<kExprF64ReinterpretI64, kI64>,

      &WasmGenerator::block<kF64>,
      &WasmGenerator::loop<kF64>,
      &WasmGenerator::if_<kF64, kIfElse>,
      &WasmGenerator::br_if<kF64>,
      &WasmGenerator::br_on_null<kF64>,

      &WasmGenerator::memop<kExprF64LoadMem>,

      &WasmGenerator::simd_lane_op<kExprF64x2ExtractLane, 2, kS128>,

      &WasmGenerator::get_local<kF64>,
      &WasmGenerator::tee_local<kF64>,
      &WasmGenerator::get_global<kF64>,
      &WasmGenerator::op<kExprSelect, kF64, kF64, kI32>,
      &WasmGenerator::select_with_type<kF64>,

      &WasmGenerator::call<kF64>,
      &WasmGenerator::call_indirect<kF64>,
      &WasmGenerator::call_ref<kF64>,
      &WasmGenerator::try_block<kF64>,

      &WasmGenerator::struct_get<kF64>,
      &WasmGenerator::array_get<kF64>};

  GenerateOneOf(alternatives, data);
}
|
|
|
|
|
2020-02-06 18:26:29 +00:00
|
|
|
template <>
|
2021-02-22 09:28:44 +00:00
|
|
|
void WasmGenerator::Generate<kS128>(DataRange* data) {
|
2020-02-06 18:26:29 +00:00
|
|
|
GeneratorRecursionScope rec_scope(this);
|
2021-02-23 06:50:08 +00:00
|
|
|
has_simd_ = true;
|
2020-02-06 18:26:29 +00:00
|
|
|
if (recursion_limit_reached() || data->size() <= sizeof(int32_t)) {
|
|
|
|
// TODO(v8:8460): v128.const is not implemented yet, and we need a way to
|
|
|
|
// "bottom-out", so use a splat to generate this.
|
|
|
|
builder_->EmitI32Const(data->get<int32_t>());
|
|
|
|
builder_->EmitWithPrefix(kExprI8x16Splat);
|
|
|
|
return;
|
|
|
|
}

  constexpr GenerateFn alternatives[] = {
      &WasmGenerator::simd_const,
      &WasmGenerator::simd_lane_op<kExprI8x16ReplaceLane, 16, kS128, kI32>,
      &WasmGenerator::simd_lane_op<kExprI16x8ReplaceLane, 8, kS128, kI32>,
      &WasmGenerator::simd_lane_op<kExprI32x4ReplaceLane, 4, kS128, kI32>,
      &WasmGenerator::simd_lane_op<kExprI64x2ReplaceLane, 2, kS128, kI64>,
      &WasmGenerator::simd_lane_op<kExprF32x4ReplaceLane, 4, kS128, kF32>,
      &WasmGenerator::simd_lane_op<kExprF64x2ReplaceLane, 2, kS128, kF64>,

      &WasmGenerator::op_with_prefix<kExprI8x16Splat, kI32>,
      &WasmGenerator::op_with_prefix<kExprI8x16Eq, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16Ne, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16LtS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16LtU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16GtS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16GtU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16LeS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16LeU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16GeS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16GeU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16Abs, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16Neg, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16Shl, kS128, kI32>,
      &WasmGenerator::op_with_prefix<kExprI8x16ShrS, kS128, kI32>,
      &WasmGenerator::op_with_prefix<kExprI8x16ShrU, kS128, kI32>,
      &WasmGenerator::op_with_prefix<kExprI8x16Add, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16AddSatS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16AddSatU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16Sub, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16SubSatS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16SubSatU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16MinS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16MinU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16MaxS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16MaxU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16RoundingAverageU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16Popcnt, kS128>,

      &WasmGenerator::op_with_prefix<kExprI16x8Splat, kI32>,
      &WasmGenerator::op_with_prefix<kExprI16x8Eq, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8Ne, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8LtS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8LtU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8GtS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8GtU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8LeS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8LeU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8GeS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8GeU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8Abs, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8Neg, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8Shl, kS128, kI32>,
      &WasmGenerator::op_with_prefix<kExprI16x8ShrS, kS128, kI32>,
      &WasmGenerator::op_with_prefix<kExprI16x8ShrU, kS128, kI32>,
      &WasmGenerator::op_with_prefix<kExprI16x8Add, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8AddSatS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8AddSatU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8Sub, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8SubSatS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8SubSatU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8Mul, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8MinS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8MinU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8MaxS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8MaxU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8RoundingAverageU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8ExtMulLowI8x16S, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8ExtMulLowI8x16U, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8ExtMulHighI8x16S, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8ExtMulHighI8x16U, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8Q15MulRSatS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8ExtAddPairwiseI8x16S, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8ExtAddPairwiseI8x16U, kS128>,

      &WasmGenerator::op_with_prefix<kExprI32x4Splat, kI32>,
      &WasmGenerator::op_with_prefix<kExprI32x4Eq, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4Ne, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4LtS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4LtU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4GtS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4GtU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4LeS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4LeU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4GeS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4GeU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4Abs, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4Neg, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4Shl, kS128, kI32>,
      &WasmGenerator::op_with_prefix<kExprI32x4ShrS, kS128, kI32>,
      &WasmGenerator::op_with_prefix<kExprI32x4ShrU, kS128, kI32>,
      &WasmGenerator::op_with_prefix<kExprI32x4Add, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4Sub, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4Mul, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4MinS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4MinU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4MaxS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4MaxU, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4DotI16x8S, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4ExtMulLowI16x8S, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4ExtMulLowI16x8U, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4ExtMulHighI16x8S, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4ExtMulHighI16x8U, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4ExtAddPairwiseI16x8S, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4ExtAddPairwiseI16x8U, kS128>,

      &WasmGenerator::op_with_prefix<kExprI64x2Splat, kI64>,
      &WasmGenerator::op_with_prefix<kExprI64x2Eq, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI64x2Ne, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI64x2LtS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI64x2GtS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI64x2LeS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI64x2GeS, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI64x2Abs, kS128>,
      &WasmGenerator::op_with_prefix<kExprI64x2Neg, kS128>,
      &WasmGenerator::op_with_prefix<kExprI64x2Shl, kS128, kI32>,
      &WasmGenerator::op_with_prefix<kExprI64x2ShrS, kS128, kI32>,
      &WasmGenerator::op_with_prefix<kExprI64x2ShrU, kS128, kI32>,
      &WasmGenerator::op_with_prefix<kExprI64x2Add, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI64x2Sub, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI64x2Mul, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI64x2ExtMulLowI32x4S, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI64x2ExtMulLowI32x4U, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI64x2ExtMulHighI32x4S, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI64x2ExtMulHighI32x4U, kS128, kS128>,

      &WasmGenerator::op_with_prefix<kExprF32x4Splat, kF32>,
      &WasmGenerator::op_with_prefix<kExprF32x4Eq, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4Ne, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4Lt, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4Gt, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4Le, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4Ge, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4Abs, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4Neg, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4Sqrt, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4Add, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4Sub, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4Mul, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4Div, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4Min, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4Max, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4Pmin, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4Pmax, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4Ceil, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4Floor, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4Trunc, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4NearestInt, kS128>,

      &WasmGenerator::op_with_prefix<kExprF64x2Splat, kF64>,
      &WasmGenerator::op_with_prefix<kExprF64x2Eq, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2Ne, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2Lt, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2Gt, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2Le, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2Ge, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2Abs, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2Neg, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2Sqrt, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2Add, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2Sub, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2Mul, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2Div, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2Min, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2Max, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2Pmin, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2Pmax, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2Ceil, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2Floor, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2Trunc, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2NearestInt, kS128>,

      &WasmGenerator::op_with_prefix<kExprF64x2PromoteLowF32x4, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2ConvertLowI32x4S, kS128>,
      &WasmGenerator::op_with_prefix<kExprF64x2ConvertLowI32x4U, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4DemoteF64x2Zero, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4TruncSatF64x2SZero, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4TruncSatF64x2UZero, kS128>,

      &WasmGenerator::op_with_prefix<kExprI64x2SConvertI32x4Low, kS128>,
      &WasmGenerator::op_with_prefix<kExprI64x2SConvertI32x4High, kS128>,
      &WasmGenerator::op_with_prefix<kExprI64x2UConvertI32x4Low, kS128>,
      &WasmGenerator::op_with_prefix<kExprI64x2UConvertI32x4High, kS128>,

      &WasmGenerator::op_with_prefix<kExprI32x4SConvertF32x4, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4UConvertF32x4, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4SConvertI32x4, kS128>,
      &WasmGenerator::op_with_prefix<kExprF32x4UConvertI32x4, kS128>,

      &WasmGenerator::op_with_prefix<kExprI8x16SConvertI16x8, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI8x16UConvertI16x8, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8SConvertI32x4, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8UConvertI32x4, kS128, kS128>,

      &WasmGenerator::op_with_prefix<kExprI16x8SConvertI8x16Low, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8SConvertI8x16High, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8UConvertI8x16Low, kS128>,
      &WasmGenerator::op_with_prefix<kExprI16x8UConvertI8x16High, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4SConvertI16x8Low, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4SConvertI16x8High, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4UConvertI16x8Low, kS128>,
      &WasmGenerator::op_with_prefix<kExprI32x4UConvertI16x8High, kS128>,

      &WasmGenerator::op_with_prefix<kExprS128Not, kS128>,
      &WasmGenerator::op_with_prefix<kExprS128And, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprS128AndNot, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprS128Or, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprS128Xor, kS128, kS128>,
      &WasmGenerator::op_with_prefix<kExprS128Select, kS128, kS128, kS128>,

      &WasmGenerator::simd_shuffle,
      &WasmGenerator::op_with_prefix<kExprI8x16Swizzle, kS128, kS128>,

      &WasmGenerator::memop<kExprS128LoadMem>,
      &WasmGenerator::memop<kExprS128Load8x8S>,
      &WasmGenerator::memop<kExprS128Load8x8U>,
      &WasmGenerator::memop<kExprS128Load16x4S>,
      &WasmGenerator::memop<kExprS128Load16x4U>,
      &WasmGenerator::memop<kExprS128Load32x2S>,
      &WasmGenerator::memop<kExprS128Load32x2U>,
      &WasmGenerator::memop<kExprS128Load8Splat>,
      &WasmGenerator::memop<kExprS128Load16Splat>,
      &WasmGenerator::memop<kExprS128Load32Splat>,
      &WasmGenerator::memop<kExprS128Load64Splat>,
      &WasmGenerator::memop<kExprS128Load32Zero>,
      &WasmGenerator::memop<kExprS128Load64Zero>,
      &WasmGenerator::simd_lane_memop<kExprS128Load8Lane, 16, kS128>,
      &WasmGenerator::simd_lane_memop<kExprS128Load16Lane, 8, kS128>,
      &WasmGenerator::simd_lane_memop<kExprS128Load32Lane, 4, kS128>,
      &WasmGenerator::simd_lane_memop<kExprS128Load64Lane, 2, kS128>,
  };

  GenerateOneOf(alternatives, data);
}
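The table above is consumed by `GenerateOneOf`, which picks one rule per input byte. A minimal standalone sketch of that dispatch idea (the `Toy*` names are illustrative, not the fuzzer's real types):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// One byte of fuzzer input, reduced modulo the number of alternatives,
// selects which generator rule runs next. Every rule in the table has the
// same signature, so selection is a single indexed call.
using ToyGenerateFn = int (*)();

int RuleA() { return 1; }
int RuleB() { return 2; }
int RuleC() { return 3; }

int ToyGenerateOneOf(const ToyGenerateFn* alternatives, size_t count,
                     uint8_t byte) {
  return alternatives[byte % count]();
}
```

Because selection is `byte % count`, every possible input byte maps to some valid rule, so no input is rejected.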

void WasmGenerator::grow_memory(DataRange* data) {
  Generate<kI32>(data);
  builder_->EmitWithU8(kExprMemoryGrow, 0);
}

void WasmGenerator::Generate(ValueType type, DataRange* data) {
  switch (type.kind()) {
    case kVoid:
      return Generate<kVoid>(data);
    case kI32:
      return Generate<kI32>(data);
    case kI64:
      return Generate<kI64>(data);
    case kF32:
      return Generate<kF32>(data);
    case kF64:
      return Generate<kF64>(data);
    case kS128:
      return Generate<kS128>(data);
    case kOptRef:
      return GenerateRef(type.heap_type(), data);
    default:
      UNREACHABLE();
  }
}

void WasmGenerator::GenerateRef(HeapType type, DataRange* data) {
  GeneratorRecursionScope rec_scope(this);
  if (recursion_limit_reached() || data->size() == 0) {
    ref_null(type, data);
    return;
  }

  constexpr GenerateFnWithHeap alternatives_indexed_type[] = {
      &WasmGenerator::new_object, &WasmGenerator::get_local_ref,
      &WasmGenerator::array_get_ref, &WasmGenerator::struct_get_ref};

  constexpr GenerateFnWithHeap alternatives_func_extern[] = {
      &WasmGenerator::table_get, &WasmGenerator::get_local_ref,
      &WasmGenerator::array_get_ref, &WasmGenerator::struct_get_ref};

  constexpr GenerateFnWithHeap alternatives_other[] = {
      &WasmGenerator::array_get_ref, &WasmGenerator::get_local_ref,
      &WasmGenerator::struct_get_ref};

  switch (type.representation()) {
    // For abstract types, sometimes generate one of their subtypes.
    case HeapType::kAny: {
      // Note: It is possible we land here even without
      // {liftoff_as_reference_}, because we use anyref as a supertype of all
      // reference types. Therefore, we have to generate the correct subtypes
      // based on the value of {liftoff_as_reference_}.
      // Weighted according to the types in the module: if there are D data
      // types and F function types, the relative frequencies are D for
      // dataref, F for funcref, 1 for externref, 2 for i31ref if
      // {liftoff_as_reference_} (otherwise 0), and 2 (otherwise 0) for
      // falling back to anyref.
      const uint8_t num_data_types = num_structs_ + num_arrays_;
      const uint8_t num_function_types = functions_.size();
      constexpr uint8_t emit_externref = 1;
      const uint8_t emit_i31ref = liftoff_as_reference_ ? 2 : 0;
      const uint8_t fallback_to_anyref = liftoff_as_reference_ ? 2 : 0;
      uint8_t random = data->get<uint8_t>() %
                       (num_data_types + num_function_types + emit_externref +
                        emit_i31ref + fallback_to_anyref);
      if (random < num_data_types) {
        DCHECK(liftoff_as_reference_);
        GenerateRef(HeapType(HeapType::kData), data);
      } else if (random < num_data_types + num_function_types) {
        GenerateRef(HeapType(HeapType::kFunc), data);
      } else if (random <
                 num_data_types + num_function_types + emit_externref) {
        GenerateRef(HeapType(HeapType::kExtern), data);
      } else if (random < num_data_types + num_function_types +
                              emit_externref + emit_i31ref) {
        DCHECK(liftoff_as_reference_);
        GenerateRef(HeapType(HeapType::kI31), data);
      } else {
        DCHECK(liftoff_as_reference_);
        GenerateOneOf(alternatives_other, type, data);
      }
      return;
    }
    case HeapType::kData: {
      DCHECK(liftoff_as_reference_);
      constexpr uint8_t fallback_to_dataref = 2;
      uint8_t random = data->get<uint8_t>() %
                       (num_arrays_ + num_structs_ + fallback_to_dataref);
      if (random < num_arrays_ + num_structs_) {
        GenerateRef(HeapType(random), data);
      } else {
        GenerateOneOf(alternatives_other, type, data);
      }
      return;
    }
    case HeapType::kEq: {
      DCHECK(liftoff_as_reference_);
      const uint8_t num_types = num_arrays_ + num_structs_;
      constexpr uint8_t emit_i31ref = 2;
      constexpr uint8_t fallback_to_eqref = 1;
      uint8_t random =
          data->get<uint8_t>() % (num_types + emit_i31ref + fallback_to_eqref);
      if (random < num_types) {
        GenerateRef(HeapType(random), data);
      } else if (random < num_types + emit_i31ref) {
        GenerateRef(HeapType(HeapType::kI31), data);
      } else {
        GenerateOneOf(alternatives_other, type, data);
      }
      return;
    }
    case HeapType::kExtern: {
      GenerateOneOf(alternatives_func_extern, type, data);
      return;
    }
    case HeapType::kFunc: {
      uint32_t random = data->get<uint32_t>() % (functions_.size() + 1);
      if (random < functions_.size()) {
        if (liftoff_as_reference_) {
          // Only reduce to an indexed type with Liftoff as reference.
          uint32_t signature_index = functions_[random];
          DCHECK(builder_->builder()->IsSignature(signature_index));
          GenerateRef(HeapType(signature_index), data);
        } else {
          // If the interpreter is used as reference, generate a ref.func
          // directly.
          builder_->EmitWithU32V(kExprRefFunc, random);
        }
      } else {
        GenerateOneOf(alternatives_func_extern, type, data);
      }
      return;
    }
    case HeapType::kI31: {
      DCHECK(liftoff_as_reference_);
      if (data->get<bool>()) {
        Generate(kWasmI32, data);
        builder_->EmitWithPrefix(kExprI31New);
      } else {
        GenerateOneOf(alternatives_other, type, data);
      }
      return;
    }
    default:
      // Indexed type.
      DCHECK(liftoff_as_reference_);
      GenerateOneOf(alternatives_indexed_type, type, data);
      return;
  }
  UNREACHABLE();
}
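The abstract-type cases above all use the same cumulative-threshold trick: draw one byte, reduce it modulo the total weight, then compare it against running sums. A hedged standalone sketch of the `kAny` weighting (the enum and function names here are illustrative, not part of the fuzzer):

```cpp
#include <cstdint>

// Which alternative a random byte selects under the kAny weights described
// above: D data types, F function types, 1 externref, 2 i31ref, and 2 for
// falling back to anyref (assuming liftoff_as_reference is on).
enum class Choice { kData, kFunc, kExtern, kI31, kAnyFallback };

Choice SelectAnyAlternative(uint8_t byte, uint8_t num_data_types,
                            uint8_t num_function_types) {
  const uint8_t emit_externref = 1;
  const uint8_t emit_i31ref = 2;
  const uint8_t fallback_to_anyref = 2;
  uint8_t random = byte % (num_data_types + num_function_types +
                           emit_externref + emit_i31ref + fallback_to_anyref);
  if (random < num_data_types) return Choice::kData;
  if (random < num_data_types + num_function_types) return Choice::kFunc;
  if (random < num_data_types + num_function_types + emit_externref)
    return Choice::kExtern;
  if (random < num_data_types + num_function_types + emit_externref +
                   emit_i31ref)
    return Choice::kI31;
  return Choice::kAnyFallback;
}
```

With 3 data types and 2 function types the total weight is 10, so a byte value of 0 to 2 picks dataref, 3 to 4 funcref, 5 externref, 6 to 7 i31ref, and 8 to 9 the anyref fallback.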

std::vector<ValueType> WasmGenerator::GenerateTypes(DataRange* data) {
  std::vector<ValueType> types;
  int num_params = int{data->get<uint8_t>()} % (kMaxParameters + 1);
  for (int i = 0; i < num_params; ++i) {
    types.push_back(GetValueType(
        data, liftoff_as_reference_,
        num_structs_ + num_arrays_ + static_cast<uint32_t>(functions_.size())));
  }
  return types;
}

void WasmGenerator::Generate(base::Vector<const ValueType> types,
                             DataRange* data) {
  // Maybe emit a multi-value block with the expected return type. Use a
  // non-default value to indicate block generation to avoid recursion when we
  // reach the end of the data.
  bool generate_block = data->get<uint8_t>() % 32 == 1;
  if (generate_block) {
    GeneratorRecursionScope rec_scope(this);
    if (!recursion_limit_reached()) {
      const auto param_types = GenerateTypes(data);
      Generate(base::VectorOf(param_types), data);
      any_block(base::VectorOf(param_types), types, data);
      return;
    }
  }

  if (types.size() == 0) {
    Generate(kWasmVoid, data);
    return;
  }
  if (types.size() == 1) {
    Generate(types[0], data);
    return;
  }

  // Split the types in two halves and recursively generate each half.
  // Each half is non-empty to ensure termination.
  size_t split_index = data->get<uint8_t>() % (types.size() - 1) + 1;
  base::Vector<const ValueType> lower_half = types.SubVector(0, split_index);
  base::Vector<const ValueType> upper_half =
      types.SubVector(split_index, types.size());
  DataRange first_range = data->split();
  Generate(lower_half, &first_range);
  Generate(upper_half, data);
}

// Emit code to match an arbitrary signature.
// TODO(manoskouk): Do something which uses the inputs instead of dropping
// them. Possibly generate a function with the correct sig on the fly? Or
// generalize the {Convert} function.
void WasmGenerator::ConsumeAndGenerate(
    base::Vector<const ValueType> param_types,
    base::Vector<const ValueType> return_types, DataRange* data) {
  for (unsigned i = 0; i < param_types.size(); i++) builder_->Emit(kExprDrop);
  Generate(return_types, data);
}

enum SigKind { kFunctionSig, kExceptionSig };

FunctionSig* GenerateSig(Zone* zone, DataRange* data, SigKind sig_kind,
                         bool liftoff_as_reference, int num_types) {
  // Generate enough parameters to spill some to the stack.
  int num_params = int{data->get<uint8_t>()} % (kMaxParameters + 1);
  int num_returns = sig_kind == kFunctionSig
                        ? int{data->get<uint8_t>()} % (kMaxReturns + 1)
                        : 0;

  FunctionSig::Builder builder(zone, num_returns, num_params);
  for (int i = 0; i < num_returns; ++i) {
    builder.AddReturn(GetValueType(data, liftoff_as_reference, num_types));
  }
  for (int i = 0; i < num_params; ++i) {
    builder.AddParam(GetValueType(data, liftoff_as_reference, num_types));
  }
  return builder.Build();
}
|
|
|
|
|
2017-09-04 10:05:10 +00:00
|
|
|
} // namespace
|
[wasm] Syntax- and Type-aware Fuzzer
This is the beginning of a new fuzzer that generates
correct-by-construction Wasm modules. This should allow us to better
exercise the compiler and correctness aspects of fuzzing. It is based off
of ahaas' original Wasm fuzzer.
At the moment, it can generate expressions made up of most binops, and
also nested blocks with unconditional breaks. Future CLs will add
additional constructs, such as br_if, loops, memory access, etc.
The way the fuzzer works is that it starts with an array of arbitrary
data provided by libfuzzer. It uses the data to generate an expression.
Care is taken to make use of the entire string. Basically, the
generator has a bunch of grammar-like rules for how to construct an
expression of a given type. For example, an i32 can be made by adding
two other i32s, or by wrapping an i64. The process then continues
recursively until all the data is consumed.
We generate an expression from a slice of data as follows:
* If the slice is less than or equal to the size of the type (e.g. 4
bytes for i32), then it will emit the entire slice as a constant.
* Otherwise, it will consume the first 4 bytes of the slice and use
this to select which rule to apply. Each rule then consumes the
remainder of the slice in an appropriate way. For example:
* Unary ops use the remainder of the slice to generate the argument.
* Binary ops consume another four bytes and mod this with the length
of the remaining slice to split the slice into two parts. Each of
these subslices are then used to generate one of the arguments to
the binop.
* Blocks are basically like a unary op, but a stack of block types is
maintained to facilitate branches. For blocks that end in a break,
the first four bytes of a slice are used to select the break depth
and the stack determines what type of expression to generate.
The goal is that once this generator is complete, it will provide a one
to one mapping between binary strings and valid Wasm modules.
Review-Url: https://codereview.chromium.org/2658723006
Cr-Commit-Position: refs/heads/master@{#43289}
2017-02-17 17:06:29 +00:00
|
|
|
|
2017-05-08 09:22:54 +00:00
|
|
|
class WasmCompileFuzzer : public WasmExecutionFuzzer {
|
2021-07-05 14:08:41 +00:00
|
|
|
bool GenerateModule(Isolate* isolate, Zone* zone,
|
2021-07-16 14:46:36 +00:00
|
|
|
base::Vector<const uint8_t> data, ZoneBuffer* buffer,
|
|
|
|
bool liftoff_as_reference) override {
|
2017-05-08 09:22:54 +00:00
|
|
|
TestSignatures sigs;
WasmModuleBuilder builder(zone);
DataRange range(data);
std::vector<uint32_t> function_signatures;
// Add struct and array types first so that we get a chance to generate
// these types in function signatures.
// Currently, WasmGenerator assumes this order for struct/array/signature
// definitions.
uint8_t num_structs = 0;
uint8_t num_arrays = 0;
static_assert(kMaxFunctions >= 1, "need min. 1 function");
uint8_t num_functions = 1 + (range.get<uint8_t>() % kMaxFunctions);
uint16_t num_types = num_functions;
if (liftoff_as_reference) {
num_structs = range.get<uint8_t>() % (kMaxStructs + 1);
num_arrays = range.get<uint8_t>() % (kMaxArrays + 1);
num_types += num_structs + num_arrays;
for (int struct_index = 0; struct_index < num_structs; struct_index++) {
uint8_t num_fields = range.get<uint8_t>() % (kMaxStructFields + 1);
StructType::Builder struct_builder(zone, num_fields);
for (int field_index = 0; field_index < num_fields; field_index++) {
ValueType type = GetValueType(&range, true, num_types, true);
bool mutability = range.get<bool>();
struct_builder.AddField(type, mutability);
}
StructType* struct_fuz = struct_builder.Build();
builder.AddStructType(struct_fuz);
}
for (int array_index = 0; array_index < num_arrays; array_index++) {
ValueType type = GetValueType(&range, true, num_types, true);
bool mutability = range.get<bool>();
ArrayType* array_fuz = zone->New<ArrayType>(type, mutability);
builder.AddArrayType(array_fuz);
}
}
function_signatures.push_back(builder.ForceAddSignature(sigs.i_iii()));
for (int i = 1; i < num_functions; ++i) {
FunctionSig* sig = GenerateSig(zone, &range, kFunctionSig,
liftoff_as_reference, num_types);
uint32_t signature_index = builder.ForceAddSignature(sig);
function_signatures.push_back(signature_index);
}
int num_globals = range.get<uint8_t>() % (kMaxGlobals + 1);
std::vector<ValueType> globals;
std::vector<uint8_t> mutable_globals;
globals.reserve(num_globals);
mutable_globals.reserve(num_globals);
int num_exceptions = 1 + (range.get<uint8_t>() % kMaxExceptions);
for (int i = 0; i < num_exceptions; ++i) {
FunctionSig* sig = GenerateSig(zone, &range, kExceptionSig,
liftoff_as_reference, num_types);
builder.AddException(sig);
}
for (int i = 0; i < num_globals; ++i) {
ValueType type = GetValueType(&range, liftoff_as_reference, num_types);
// 1/8 of globals are immutable.
const bool mutability = (range.get<uint8_t>() % 8) != 0;
builder.AddGlobal(type, mutability, WasmInitExpr());
globals.push_back(type);
if (mutability) mutable_globals.push_back(static_cast<uint8_t>(i));
}
// Generate function declarations before tables. This will be needed once we
// have typed-function tables.
std::vector<WasmFunctionBuilder*> functions;
for (int i = 0; i < num_functions; ++i) {
FunctionSig* sig = builder.GetSignature(function_signatures[i]);
functions.push_back(builder.AddFunction(sig));
}
// Generate tables before function bodies, so they are available for table
// operations.
// Always generate at least one table for call_indirect.
int num_tables = range.get<uint8_t>() % kMaxTables + 1;
for (int i = 0; i < num_tables; i++) {
// Table 0 has to reference all functions in the program. This is so that
// all functions count as declared so they can be referenced with
// ref.func.
// TODO(11954): Consider removing this restriction.
uint32_t min_size =
i == 0 ? num_functions : range.get<uint8_t>() % kMaxTableSize;
uint32_t max_size =
range.get<uint8_t>() % (kMaxTableSize - min_size) + min_size;
// Table 0 is always funcref.
// TODO(11954): Remove this requirement once we support call_indirect with
// other table indices.
// TODO(11954): Support typed function tables.
bool use_funcref = i == 0 || range.get<bool>();
ValueType type = use_funcref ? kWasmFuncRef : kWasmExternRef;
uint32_t table_index = builder.AddTable(type, min_size, max_size);
if (type == kWasmFuncRef) {
// For function tables, initialize them with functions from the program.
// Currently, the fuzzer assumes that every function table contains the
// functions in the program in the order they are defined.
// TODO(11954): Consider generalizing this.
WasmModuleBuilder::WasmElemSegment segment(
zone, kWasmFuncRef, table_index, WasmInitExpr(0));
for (int entry_index = 0; entry_index < static_cast<int>(min_size);
entry_index++) {
segment.entries.emplace_back(
WasmModuleBuilder::WasmElemSegment::Entry::kRefFuncEntry,
entry_index % num_functions);
}
builder.AddElementSegment(std::move(segment));
}
}
for (int i = 0; i < num_functions; ++i) {
WasmFunctionBuilder* f = functions[i];
DataRange function_range = range.split();
WasmGenerator gen(f, function_signatures, globals, mutable_globals,
num_structs, num_arrays, &function_range,
liftoff_as_reference);
FunctionSig* sig = f->signature();
base::Vector<const ValueType> return_types(sig->returns().begin(),
sig->return_count());
gen.Generate(return_types, &function_range);
if (!CheckHardwareSupportsSimd() && gen.HasSimd()) return false;
f->Emit(kExprEnd);
if (i == 0) builder.AddExport(base::CStrVector("main"), f);
}
builder.SetMaxMemorySize(32);
// We enable shared memory to be able to test atomics.
builder.SetHasSharedMemory();
builder.WriteTo(buffer);
return true;
}
};
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
constexpr bool require_valid = true;
EXPERIMENTAL_FLAG_SCOPE(reftypes);
EXPERIMENTAL_FLAG_SCOPE(typed_funcref);
EXPERIMENTAL_FLAG_SCOPE(gc);
EXPERIMENTAL_FLAG_SCOPE(simd);
EXPERIMENTAL_FLAG_SCOPE(eh);
WasmCompileFuzzer().FuzzWasmModule({data, size}, require_valid);
return 0;
}
} // namespace fuzzer
} // namespace wasm
} // namespace internal
} // namespace v8