Goal of this CL: for an explicit return from a non-async function, use the
position right after the return expression as the return position (this will
unblock [1]).
BytecodeArrayBuilder has SetStatementPosition and SetExpressionPosition
methods. When one of these is called, the next generated bytecode is tagged
with the passed position. This general treatment works for most cases.
Unfortunately it doesn't work for returns:
- the debugger's stepping implementation requires the source position to sit
exactly on the kReturn bytecode,
- BytecodeGenerator::BuildReturn and BytecodeGenerator::BuildAsyncReturn
generate more than one bytecode, so the general solution would put the return
position on the first generated bytecode,
- it's not easy to split BuildReturn into two parts to allow something
like the following in BytecodeGenerator::VisitReturnStatement, since the
generated bytecodes are actually controlled by execution_control():
..->BuildReturnPrologue();
..->SetReturnPosition(stmt);
..->Return();
In this CL we pass the ReturnStatement through ExecutionControl and use its
position when the return bytecode is emitted right there.
This CL therefore only improves return positions for returns inside non-async
functions; async functions will be addressed later.
[1] https://chromium-review.googlesource.com/c/543161/
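A minimal sketch of the idea, with invented class and method names rather than the actual V8 ones: the control scope carries the statement down to the point where the return bytecode is emitted, so the position lands on kReturn itself.
  #include <optional>
  struct ReturnStatement { int return_position; };
  class BytecodeEmitter {
   public:
    void SetReturnPosition(const ReturnStatement* stmt) {
      pending_position_ = stmt->return_position;
    }
    void Return() {
      // The pending position would be attached to the emitted kReturn bytecode
      // here, which is what the debugger's stepping implementation expects.
    }
   private:
    std::optional<int> pending_position_;
  };
  class ControlScope {
   public:
    // The statement whose position should be used is now available at the
    // point where the return bytecode is actually emitted.
    void ReturnAccumulator(BytecodeEmitter* emitter, const ReturnStatement* stmt) {
      // ...the equivalent of BuildReturnPrologue() would run here...
      if (stmt != nullptr) emitter->SetReturnPosition(stmt);
      emitter->Return();
    }
  };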
Change-Id: Iede512c120b00c209990bf50c20e7d23dc0d65db
Reviewed-on: https://chromium-review.googlesource.com/560738
Commit-Queue: Aleksey Kozyatinskiy <kozyatinskiy@chromium.org>
Reviewed-by: Adam Klein <adamk@chromium.org>
Reviewed-by: Michael Starzinger <mstarzinger@chromium.org>
Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
Reviewed-by: Jakob Gruber <jgruber@chromium.org>
Cr-Commit-Position: refs/heads/master@{#46687}
Instead of counting profiler ticks on the shared function info (which is
shared between native contexts), count them on the feedback vector
(which is not). This allows us to continue pushing optimization
decisions off the SFI, onto the feedback vector.
Note that a side-effect of this is that ICs don't have to walk the stack
to reset profiler ticks, as they can access the feedback vector directly
from their feedback nexus.
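As a rough illustration (invented struct names, not the actual V8 types), the ticks now live on the per-native-context vector and are reachable directly from an IC's nexus:
  struct FeedbackVector { int profiler_ticks = 0; };
  struct FeedbackNexus {
    FeedbackVector* vector;
    // An IC resets the ticks through its nexus; no stack walk to locate the
    // SharedFunctionInfo is needed.
    void ResetProfilerTicks() { vector->profiler_ticks = 0; }
  };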
Change-Id: I232ae9e759fca75cd89d393148a4ff42caa2646f
Reviewed-on: https://chromium-review.googlesource.com/544888
Reviewed-by: Igor Sheludko <ishell@chromium.org>
Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
Commit-Queue: Leszek Swirski <leszeks@chromium.org>
Cr-Commit-Position: refs/heads/master@{#46411}
In some code, flushing the registers was costly: we processed every
register, even though registers that are alone in their equivalence class
need not be processed. We now cheaply overapproximate which classes contain
at least two registers, so as to save many iterations of the Flush() loop in
some cases.
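Roughly, the idea is the following (simplified, invented data structures):
  #include <vector>
  struct RegisterInfo {
    // Overapproximation: may this register's equivalence class contain at
    // least one other register?
    bool maybe_in_nontrivial_class = false;
  };
  void Flush(std::vector<RegisterInfo>& registers) {
    for (RegisterInfo& info : registers) {
      if (!info.maybe_in_nontrivial_class) continue;  // singleton class, skip
      // ...process the register / break up its equivalence class...
    }
  }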
Bug: v8:6432
Change-Id: I945e151736e8a515263ac76312127d930fd20d74
Reviewed-on: https://chromium-review.googlesource.com/525795
Commit-Queue: Alexandre Talon <alexandret@google.com>
Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
Reviewed-by: Leszek Swirski <leszeks@chromium.org>
Cr-Commit-Position: refs/heads/master@{#45805}
Since the feedback vector is itself a native context structure, why
not store optimized code for a function in there rather than in
a map from native context to code? This allows us to get rid of
the optimized code map in the SharedFunctionInfo, saving a pointer,
and making lookup of any optimized code quicker.
Original patch by Michael Stanton <mvstanton@chromium.org>
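Conceptually (a sketch with invented types; the real slot is weak and sits alongside the other vector slots):
  struct Code;
  struct FeedbackVector {
    Code* optimized_code = nullptr;
  };
  struct JSFunction {
    FeedbackVector* feedback_vector;
    // No native-context-keyed optimized code map on the SFI any more.
    Code* LookupOptimizedCode() const { return feedback_vector->optimized_code; }
  };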
BUG=v8:6246,chromium:718891
TBR=yangguo@chromium.org,ulan@chromium.org
Change-Id: I3bb9ec0cfff32e667cca0e1403f964f33a6958a6
Reviewed-on: https://chromium-review.googlesource.com/500134
Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
Reviewed-by: Jaroslav Sevcik <jarin@chromium.org>
Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
Cr-Commit-Position: refs/heads/master@{#45234}
This reverts commit 662aa425ba.
Reason for revert: Crashing on Canary
BUG=chromium:718891
Original change's description:
> Reland: [TypeFeedbackVector] Store optimized code in the vector
>
> Since the feedback vector is itself a native context structure, why
> not store optimized code for a function in there rather than in
> a map from native context to code? This allows us to get rid of
> the optimized code map in the SharedFunctionInfo, saving a pointer,
> and making lookup of any optimized code quicker.
>
> Original patch by Michael Stanton <mvstanton@chromium.org>
>
> BUG=v8:6246
> TBR=yangguo@chromium.org,ulan@chromium.org
>
> Change-Id: Ic83e4011148164ef080c63215a0c77f1dfb7f327
> Reviewed-on: https://chromium-review.googlesource.com/494487
> Reviewed-by: Jaroslav Sevcik <jarin@chromium.org>
> Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
> Cr-Commit-Position: refs/heads/master@{#45084}
TBR=ulan@chromium.org,rmcilroy@chromium.org,yangguo@chromium.org,jarin@chromium.org
# Not skipping CQ checks because original CL landed > 1 day ago.
BUG=v8:6246
Change-Id: Idab648d6fe260862c2a0e35366df19dcecf13a82
Reviewed-on: https://chromium-review.googlesource.com/498633
Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
Cr-Commit-Position: refs/heads/master@{#45174}
Since the feedback vector is itself a native context structure, why
not store optimized code for a function in there rather than in
a map from native context to code? This allows us to get rid of
the optimized code map in the SharedFunctionInfo, saving a pointer,
and making lookup of any optimized code quicker.
Original patch by Michael Stanton <mvstanton@chromium.org>
BUG=v8:6246
TBR=yangguo@chromium.org,ulan@chromium.org
Change-Id: Ic83e4011148164ef080c63215a0c77f1dfb7f327
Reviewed-on: https://chromium-review.googlesource.com/494487
Reviewed-by: Jaroslav Sevcik <jarin@chromium.org>
Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
Cr-Commit-Position: refs/heads/master@{#45084}
This reverts commit c5ad9c6d8e.
Reason for revert: Fails on gc stress:
https://build.chromium.org/p/client.v8/builders/V8%20Linux64%20GC%20Stress%20-%20custom%20snapshot/builds/12661
Original change's description:
> [TypeFeedbackVector] Store optimized code in the vector
>
> Since the feedback vector is itself a native context structure, why
> not store optimized code for a function in there rather than in
> a map from native context to code? This allows us to get rid of
> the optimized code map in the SharedFunctionInfo, saving a pointer,
> and making lookup of any optimized code quicker.
>
> Original patch by Michael Stanton <mvstanton@chromium.org>
>
> BUG=v8:6246
>
> Change-Id: I60ff8c408c3001bc272b4b198c9cbaea2872a9e5
> Reviewed-on: https://chromium-review.googlesource.com/476891
> Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
> Reviewed-by: Michael Stanton <mvstanton@chromium.org>
> Reviewed-by: Yang Guo <yangguo@chromium.org>
> Reviewed-by: Jaroslav Sevcik <jarin@chromium.org>
> Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
> Cr-Commit-Position: refs/heads/master@{#45022}
TBR=ulan@chromium.org,rmcilroy@chromium.org,yangguo@chromium.org,mvstanton@chromium.org,jarin@chromium.org
NOPRESUBMIT=true
NOTREECHECKS=true
NOTRY=true
BUG=v8:6246
Change-Id: I9cd5735b03898cae6ae7adea0f19d32fceb31619
Reviewed-on: https://chromium-review.googlesource.com/493287
Reviewed-by: Michael Achenbach <machenbach@chromium.org>
Commit-Queue: Michael Achenbach <machenbach@chromium.org>
Cr-Commit-Position: refs/heads/master@{#45027}
Since the feedback vector is itself a native context structure, why
not store optimized code for a function in there rather than in
a map from native context to code? This allows us to get rid of
the optimized code map in the SharedFunctionInfo, saving a pointer,
and making lookup of any optimized code quicker.
Original patch by Michael Stanton <mvstanton@chromium.org>
BUG=v8:6246
Change-Id: I60ff8c408c3001bc272b4b198c9cbaea2872a9e5
Reviewed-on: https://chromium-review.googlesource.com/476891
Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
Reviewed-by: Michael Stanton <mvstanton@chromium.org>
Reviewed-by: Yang Guo <yangguo@chromium.org>
Reviewed-by: Jaroslav Sevcik <jarin@chromium.org>
Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
Cr-Commit-Position: refs/heads/master@{#45022}
This relands commit d3e9aade0f. The original CL was reverted speculatively but didn't cause the buildbot failure.
Original change's description:
> [Interpreter] Move BinaryOp Smi transformation into BytecodeGenerator.
>
> Perform the transformation to <BinaryOp>Smi for Binary ops which take Smi
> literals in the BytecodeGenerator. This enables us to perform the
> transformation for literals on either side for commutative operations, and
> Avoids having to do the check on every bytecode in the peephole optimizer.
>
> In the process, adds Smi bytecode variants for all binary operations, adding
> - MulSmi
> - DivSmi
> - ModSmi
> - BitwiseXorSmi
> - ShiftRightLogical
>
> BUG=v8:6194
>
> Change-Id: If1484252f5385c16957004b9cac8bfbb1f209219
> Reviewed-on: https://chromium-review.googlesource.com/466246
> Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
> Reviewed-by: Yang Guo <yangguo@chromium.org>
> Reviewed-by: Michael Starzinger <mstarzinger@chromium.org>
> Reviewed-by: Igor Sheludko <ishell@chromium.org>
> Cr-Commit-Position: refs/heads/master@{#44477}
TBR=rmcilroy@chromium.org,machenbach@chromium.org,yangguo@chromium.org,mstarzinger@chromium.org,mythria@chromium.org,v8-reviews@googlegroups.com,ishell@chromium.org
# Not skipping CQ checks because original CL landed > 1 day ago.
BUG=v8:6194
Change-Id: I2ccaefa1ce58d3885f5c2648755985c06f25c1d8
Reviewed-on: https://chromium-review.googlesource.com/472746
Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
Cr-Commit-Position: refs/heads/master@{#44511}
This reverts commit d3e9aade0f.
Reason for revert: Speculative for:
https://build.chromium.org/p/client.v8.ports/builders/V8%20Linux%20-%20arm64%20-%20sim%20-%20nosnap%20-%20debug/builds/4449
Bisect points to this CL.
Original change's description:
> [Interpreter] Move BinaryOp Smi transformation into BytecodeGenerator.
>
> Perform the transformation to <BinaryOp>Smi for Binary ops which take Smi
> literals in the BytecodeGenerator. This enables us to perform the
> transformation for literals on either side for commutative operations, and
> Avoids having to do the check on every bytecode in the peephole optimizer.
>
> In the process, adds Smi bytecode variants for all binary operations, adding
> - MulSmi
> - DivSmi
> - ModSmi
> - BitwiseXorSmi
> - ShiftRightLogical
>
> BUG=v8:6194
>
> Change-Id: If1484252f5385c16957004b9cac8bfbb1f209219
> Reviewed-on: https://chromium-review.googlesource.com/466246
> Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
> Reviewed-by: Yang Guo <yangguo@chromium.org>
> Reviewed-by: Michael Starzinger <mstarzinger@chromium.org>
> Reviewed-by: Igor Sheludko <ishell@chromium.org>
> Cr-Commit-Position: refs/heads/master@{#44477}
TBR=rmcilroy@chromium.org,yangguo@chromium.org,mstarzinger@chromium.org,mythria@chromium.org,ishell@chromium.org,v8-reviews@googlegroups.com
NOPRESUBMIT=true
NOTREECHECKS=true
NOTRY=true
BUG=v8:6194
Change-Id: If57dbdbe40be77804bf437463b855d3167e2d473
Reviewed-on: https://chromium-review.googlesource.com/471308
Reviewed-by: Michael Achenbach <machenbach@chromium.org>
Commit-Queue: Michael Achenbach <machenbach@chromium.org>
Cr-Commit-Position: refs/heads/master@{#44488}
Perform the transformation to <BinaryOp>Smi in the BytecodeGenerator for
binary ops which take Smi literals. This enables us to perform the
transformation for literals on either side of commutative operations, and
avoids having to do the check on every bytecode in the peephole optimizer.
In the process, adds Smi bytecode variants for all binary operations, adding
- MulSmi
- DivSmi
- ModSmi
- BitwiseXorSmi
- ShiftRightLogical
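A sketch of the generator-side decision (invented helper, simplified to addition):
  enum class Bytecode { kAdd, kAddSmi };
  struct Expression { bool is_smi_literal; };
  // Addition is commutative, so a Smi literal on either side can use the
  // Smi-specialized bytecode; otherwise emit the generic one.
  Bytecode SelectAddBytecode(const Expression& lhs, const Expression& rhs) {
    return (lhs.is_smi_literal || rhs.is_smi_literal) ? Bytecode::kAddSmi
                                                      : Bytecode::kAdd;
  }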
BUG=v8:6194
Change-Id: If1484252f5385c16957004b9cac8bfbb1f209219
Reviewed-on: https://chromium-review.googlesource.com/466246
Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
Reviewed-by: Yang Guo <yangguo@chromium.org>
Reviewed-by: Michael Starzinger <mstarzinger@chromium.org>
Reviewed-by: Igor Sheludko <ishell@chromium.org>
Cr-Commit-Position: refs/heads/master@{#44477}
Since JumpLoop is always backwards, and other jumps are always forwards,
we can store the jump offset as an always positive integer and decide on
the jump direction based on the bytecode. This will save a small amount
of space for large-ish for loops (>128 bytecodes).
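In sketch form (simplified; not the actual accessor):
  enum class Bytecode { kJump, kJumpLoop };
  // The stored operand is always a non-negative distance; the direction is
  // implied by the bytecode itself.
  int JumpTarget(Bytecode bytecode, int current_offset, unsigned distance) {
    return bytecode == Bytecode::kJumpLoop
               ? current_offset - static_cast<int>(distance)
               : current_offset + static_cast<int>(distance);
  }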
Review-Url: https://codereview.chromium.org/2641443002
Cr-Commit-Position: refs/heads/master@{#42638}
This changes the NewClosure interface descriptor, but ignores
the additional vector/slot arguments for now. The feedback vector
gets larger, as it holds a space for each literal array. A follow-on
CL will constructively use this space.
BUG=v8:5456
Review-Url: https://codereview.chromium.org/2614373002
Cr-Commit-Position: refs/heads/master@{#42146}
Reason for revert:
Speculative revert because of blocked roll: https://codereview.chromium.org/2596013002/
Original issue's description:
> [TypeFeedbackVector] Root literal arrays in function literals slots
>
> Literal arrays and feedback vectors for a function can be garbage
> collected if we don't have a rooted closure for the function, which
> happens often. It's expensive to come back from this (recreating
> boilerplates and gathering feedback again), and the cost is
> disproportionate if the function was inlined into optimized code.
>
> To guard against losing these arrays when we need them, we'll now
> create literal arrays when creating the feedback vector for the outer
> closure, and root them strongly in that vector.
>
> BUG=v8:5456
>
> Review-Url: https://codereview.chromium.org/2504153002
> Cr-Commit-Position: refs/heads/master@{#41893}
> Committed: 93df094081
TBR=bmeurer@chromium.org,mlippautz@chromium.org,mvstanton@chromium.org
# Skipping CQ checks because original CL landed less than 1 days ago.
NOPRESUBMIT=true
NOTREECHECKS=true
NOTRY=true
BUG=v8:5456
Review-Url: https://codereview.chromium.org/2597163002
Cr-Commit-Position: refs/heads/master@{#41917}
Literal arrays and feedback vectors for a function can be garbage
collected if we don't have a rooted closure for the function, which
happens often. It's expensive to come back from this (recreating
boilerplates and gathering feedback again), and the cost is
disproportionate if the function was inlined into optimized code.
To guard against losing these arrays when we need them, we'll now
create literal arrays when creating the feedback vector for the outer
closure, and root them strongly in that vector.
BUG=v8:5456
Review-Url: https://codereview.chromium.org/2504153002
Cr-Commit-Position: refs/heads/master@{#41893}
The majority of context slot accesses are to the local context (current context
register and depth 0), so this adds bytecodes to optimise for that case.
This cuts down bytecode size by roughly 1% (measured on Octane and Top25).
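A sketch of the emission decision, using the load as an example (simplified):
  enum class Bytecode { kLdaContextSlot, kLdaCurrentContextSlot };
  Bytecode SelectContextLoad(bool is_current_context_register, int depth) {
    // Depth 0 in the current context register is the common case and gets
    // the shorter, specialized bytecode.
    return (is_current_context_register && depth == 0)
               ? Bytecode::kLdaCurrentContextSlot
               : Bytecode::kLdaContextSlot;
  }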
Review-Url: https://codereview.chromium.org/2459513002
Cr-Commit-Position: refs/heads/master@{#40641}
Move hole check logic from full-codegen into scope analysis, and store the
"needs hole check" bit on VariableProxy. This makes it easy to re-use in
any backend: it will be trivial to extend the use of this logic in, e.g.,
full-codegen variable stores.
While changing the signatures of the variable loading/storing methods in
Ignition, I took the liberty of replacing the verb "Visit" with "Build", since these
are not part of AST visiting.
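A minimal sketch of how a backend consumes the bit (invented types and function, not the actual Ignition code):
  struct VariableProxy {
    // Computed once during scope analysis.
    bool needs_hole_check = false;
  };
  void BuildVariableLoad(const VariableProxy& proxy) {
    // ...emit the load of the variable...
    if (proxy.needs_hole_check) {
      // ...emit the TDZ check: throw a ReferenceError if the value is the hole...
    }
  }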
BUG=v8:5460
Review-Url: https://chromiumcodereview.appspot.com/2411873004
Cr-Commit-Position: refs/heads/master@{#40479}
There are only a few occasions where we allocate a register in an outer
expression allocation scope, which makes the costly free-list approach
of the BytecodeRegisterAllocator unnecessary. This CL replaces all
occurrences with moves to the accumulator and stores to a register
allocated in the correct scope. By doing this, we can simplify the
BytecodeRegisterAllocator to be a simple bump-pointer allocator
with registers released in the same order as allocated.
The following changes are also made:
- Make BytecodeRegisterOptimizer able to use registers which have been
unallocated, but not yet reused
- Remove RegisterExpressionResultScope and rename
AccumulatorExpressionResultScope to ValueExpressionResultScope
- Introduce RegisterList to represent consecutive register
allocations, and use this for operands to call bytecodes.
By avoiding the free-list handling, this gives another couple of
percent on CodeLoad.
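The resulting allocator is essentially this (a sketch, not the real interface):
  class BumpPointerRegisterAllocator {
   public:
    // Registers are handed out contiguously...
    int NewRegister() { return next_register_index_++; }
    // ...and released in allocation order: freeing a register frees
    // everything allocated after it as well.
    void ReleaseRegistersDownTo(int register_index) {
      next_register_index_ = register_index;
    }
   private:
    int next_register_index_ = 0;
  };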
BUG=v8:4280
Review-Url: https://codereview.chromium.org/2369873002
Cr-Commit-Position: refs/heads/master@{#39905}
Add a notion of "invocation count" to the baseline compilers, which
increment a special slot in the TypeFeedbackVector for each invocation
of a given function (the optimized code doesn't currently collect this
information).
Use this invocation count to relativize the call counts on the call
sites within the function, so that the inlining heuristic has a view
of relative importance of a call site rather than some absolute numbers
with unclear meaning for the current function. Also apply the call site
frequency as a factor to all frequencies in the inlinee by passing this
to the graph builders so that the importance of a call site in an
inlinee is relative to the topmost optimized function.
Note that all functions that neither have literals nor need type
feedback slots will share a single invocation count cell in the
canonical empty type feedback vector, so their invocation count is
meaningless, but that doesn't matter since we only use the invocation
count to relativize call counts within the function, which we only have
if we have at least one type feedback vector (the CallIC slot).
See the design document for additional details on this change:
https://docs.google.com/document/d/1VoYBhpDhJC4VlqMXCKvae-8IGuheBGxy32EOgC2LnT8
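A sketch of the relativization described above (invented names; simplified):
  // call_count: count recorded on the call site's feedback slot.
  // caller_invocation_count: invocation count slot of the caller's vector.
  // outer_frequency: frequency of the call site through which the caller was
  // inlined (1.0 for the topmost optimized function).
  float CallSiteFrequency(int call_count, int caller_invocation_count,
                          float outer_frequency) {
    if (caller_invocation_count == 0) return 0.0f;
    return outer_frequency *
           (static_cast<float>(call_count) / caller_invocation_count);
  }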
BUG=v8:5267,v8:5372
R=mvstanton@chromium.org,rmcilroy@chromium.org,mstarzinger@chromium.org
Review-Url: https://codereview.chromium.org/2337123003
Cr-Commit-Position: refs/heads/master@{#39410}
This introduces a new {JumpLoop} bytecode to combine the OSR polling
mechanism modeled by {OsrPoll} with the actual {Jump} performing the
backwards branch. This reduces the overall size and also avoids one
additional dispatch. It also makes sure that OSR polling is only done
within real loops.
R=rmcilroy@chromium.org
BUG=v8:4764
Review-Url: https://codereview.chromium.org/2331033002
Cr-Commit-Position: refs/heads/master@{#39384}
Moves the context chain search loop out of generated bytecode, and into
the (Lda|Ldr|Sta)ContextSlot handlers, by passing the context depth in as
an additional operand. This should decrease the bytecode size and
increase performance for deep context chain searches, at the cost of
slightly increasing bytecode size for shallow context access.
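The handler-side walk amounts to the following (simplified sketch):
  struct Context {
    Context* previous;
    void* slots[8];
  };
  // The depth operand tells the handler how many links of the context chain
  // to follow before loading the slot, instead of emitting that loop as
  // bytecode.
  void* LoadContextSlot(Context* context, int depth, int slot_index) {
    for (int i = 0; i < depth; ++i) context = context->previous;
    return context->slots[slot_index];
  }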
Review-Url: https://codereview.chromium.org/2336643002
Cr-Commit-Position: refs/heads/master@{#39378}
For historical reasons, the interpreter's bytecode expectations tests
required a type for the constant pool. This had two disadvantages:
1. Strings and numbers were not visible in mixed pools, and
2. Mismatches of pool types (e.g. when rebaselining) would cause parser
errors
This removes the pool types, making everything 'mixed', but appending
the values to string and number valued constants. Specifying a pool type
in the *.golden header now prints a warning (for backwards compatibility).
BUG=v8:5350
Review-Url: https://codereview.chromium.org/2310103002
Cr-Commit-Position: refs/heads/master@{#39216}
Drive-by fix: the order of parameters in the BinaryOpWithFeedback TurboFan code stubs now reflects the convention of having the context at the end.
BUG=v8:5273
Review-Url: https://codereview.chromium.org/2263253002
Cr-Commit-Position: refs/heads/master@{#38832}
Allows us to create a corresponding TurboFan node, so TF can
optimize it.
BUG=v8:4280
LOG=n
Review-Url: https://codereview.chromium.org/2248633002
Cr-Commit-Position: refs/heads/master@{#38651}
Assign feedback slots in the type feedback vector for binary operations.
Update bytecode-generator to use these slots and add them as an operand
to binary operations.
BUG=v8:4280
LOG=N
Review-Url: https://codereview.chromium.org/2209633002
Cr-Commit-Position: refs/heads/master@{#38408}
The old code was using VariableMode, but that signal is both
over-pessimistic (some CONST and LET variables need no hole-initialization)
and inconsistent with other uses of the InitializationFlag enum (such
as %LoadLookupSlot).
This changes no observable behavior, but removes unnecessary hole
initialization and hole checks in a few places, including
block-scoped function declarations, super property lookups,
and new.target.
R=bmeurer@chromium.org, neis@chromium.org
Review-Url: https://codereview.chromium.org/2201193004
Cr-Commit-Position: refs/heads/master@{#38395}
Add a new bytecode to create a function context. The handler inlines
FastNewFunctionContextStub.
BUG=v8:4280
LOG=n
Review-Url: https://codereview.chromium.org/2187523002
Cr-Commit-Position: refs/heads/master@{#38301}
Introduces fused bytecodes that combine an LdaSmi with the binary op bytecode that follows it.
The chosen bytecodes are used frequently in Octane: AddSmi, SubSmi,
BitwiseOrSmi, BitwiseAndSmi, ShiftLeftSmi, ShiftRightSmi.
There are additional code stubs for these operations that are biased towards
both the left hand and right hand operands being Smis.
BUG=v8:4280
LOG=N
Review-Url: https://codereview.chromium.org/2111923002
Cr-Commit-Position: refs/heads/master@{#37531}
With this change the bytecode array builder only emits expression
positions for bytecodes that can throw. This allows more peephole
optimization opportunities and results in smaller code.
BUG=v8:4280,chromium:615979
LOG=N
Review-Url: https://codereview.chromium.org/2038323002
Cr-Commit-Position: refs/heads/master@{#36863}
The original peephole optimizer logic in the BytecodeArrayBuilder did
not respect source positions as it was written before there were
bytecode source positions. This led to some minor differences from
FCG and was problematic when combined with pending bytecode
optimizations. This change makes the new peephole optimizer fully
respect source positions.
BUG=v8:4280
LOG=N
Review-Url: https://codereview.chromium.org/1998203002
Cr-Commit-Position: refs/heads/master@{#36439}
Prints source position information alongside bytecode.
BUG=v8:4280
LOG=N
Review-Url: https://codereview.chromium.org/1963663002
Cr-Commit-Position: refs/heads/master@{#36171}
Adds IncStub and DecStub TurboFan code stubs and hooks them up to the
interpreter's Inc and Dec bytecodes (which are used for count
operations, e.g. i++).
BUG=v8:4280
LOG=N
Review URL: https://codereview.chromium.org/1901083002
Cr-Commit-Position: refs/heads/master@{#35720}
This change introduces wide prefix bytecodes to support wide (16-bit)
and extra-wide (32-bit) operands. It retires the previous
wide-bytecodes and reduces the number of operand types.
Operands are now either scalable or fixed size. Scalable operands
increase in width when a bytecode is prefixed with wide or extra-wide.
The bytecode handler table is extended to 256*3 entries. The
first 256 entries are used for bytecodes with 8-bit operands,
the second 256 entries are used for bytecodes with operands that
scale to 16-bits, and the third group of 256 entries are used for
bytecodes with operands that scale to 32-bits.
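The dispatch-table indexing this implies is, in sketch form (simplified):
  enum class OperandScale { kSingle = 0, kDouble = 1, kQuadruple = 2 };
  // 256 handlers per scale: [0, 256) for 8-bit operands, [256, 512) after a
  // wide prefix, [512, 768) after an extra-wide prefix.
  int DispatchTableIndex(unsigned char bytecode, OperandScale operand_scale) {
    return static_cast<int>(operand_scale) * 256 + bytecode;
  }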
LOG=N
BUG=v8:4747,v8:4280
Review URL: https://codereview.chromium.org/1783483002
Cr-Commit-Position: refs/heads/master@{#34955}
Bytecode expectations have been moved to external (.golden) files,
one per test. Each test in the suite builds a representation of the
compiled bytecode using BytecodeExpectationsPrinter. The output is
then compared to the golden file. If the comparison fails, a textual
diff can be used to identify the discrepancies.
Only the test snippets are left in the cc file, which also allows
making it more compact and meaningful. Leaving the snippets in the cc
file was a deliberate choice to allow keeping the "truth" about the
tests in the cc file, which will rarely change, as opposed to golden
files.
Golden files can be generated and kept up to date using
generate-bytecode-expectations, which also means that the test suite
can be batch updated whenever the bytecode or golden format changes.
The golden format has been slightly amended (no more comments about
`void*`, and the bytecode array size is now included) following the considerations
made while converting the tests.
There is also a fix: BytecodeExpectationsPrinter::top_level_ was left
uninitialized, leading to undefined behaviour.
BUG=v8:4280
LOG=N
Review URL: https://codereview.chromium.org/1717293002
Cr-Commit-Position: refs/heads/master@{#34285}