Commit Graph

40 Commits

Author SHA1 Message Date
Alexey Kozyatinskiy
0896586083 [inspector] improve return position of explicit return in non-async function
Goal of this CL: an explicit return from a non-async function gets the position
after the return expression as its return position (this will unblock [1]).

BytecodeArrayBuilder has SetStatementPosition and SetExpressionPosition methods.
If one of these methods is called, the next generated bytecode gets the passed
position. This is the general treatment for most cases.
Unfortunately this doesn't work for returns:
- the debugger's stepping implementation requires the source position to sit
  exactly on the kReturn bytecode,
- BytecodeGenerator::BuildReturn and BytecodeGenerator::BuildAsyncReturn
  generate more than one bytecode, and the general solution would put the return
  position on the first generated bytecode,
- it's not easy to split BuildReturn into two parts to allow something
  like the following in BytecodeGenerator::VisitReturnStatement, since the
  generated bytecodes are actually controlled by execution_control():
..->BuildReturnPrologue();
..->SetReturnPosition(stmt);
..->Return();

In this CL we pass the ReturnStatement through ExecutionControl and use it for
the position when we emit the return bytecode there.
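
A minimal, self-contained sketch of the idea (all class and field names below are simplified stand-ins for illustration, not the actual V8 classes): the control scope remembers the ReturnStatement, so the bytecode that performs the return can be tagged with the position after the return expression instead of the statement start.

  #include <cstdio>
  #include <vector>

  struct Bytecode { const char* name; int source_position; };
  struct ReturnStatement { int end_position; };

  class Generator {
   public:
    // Remember the statement while control flow unwinds through the scopes.
    void RecordReturnStatement(const ReturnStatement* stmt) { return_stmt_ = stmt; }
    // Emit the actual return; its position comes from the remembered statement,
    // so it points after the return expression rather than at the keyword.
    void EmitReturn() {
      int pos = return_stmt_ ? return_stmt_->end_position : -1;
      code_.push_back({"Return", pos});
    }
    void Dump() const {
      for (const Bytecode& b : code_)
        std::printf("%s @ %d\n", b.name, b.source_position);
    }
   private:
    const ReturnStatement* return_stmt_ = nullptr;
    std::vector<Bytecode> code_;
  };

  int main() {
    Generator gen;
    ReturnStatement stmt{/*end_position=*/42};
    gen.RecordReturnStatement(&stmt);
    gen.EmitReturn();
    gen.Dump();  // prints: Return @ 42
  }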

So this CL only improves the return position for returns inside non-async
functions; async functions will be addressed later.

[1] https://chromium-review.googlesource.com/c/543161/

Change-Id: Iede512c120b00c209990bf50c20e7d23dc0d65db
Reviewed-on: https://chromium-review.googlesource.com/560738
Commit-Queue: Aleksey Kozyatinskiy <kozyatinskiy@chromium.org>
Reviewed-by: Adam Klein <adamk@chromium.org>
Reviewed-by: Michael Starzinger <mstarzinger@chromium.org>
Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
Reviewed-by: Jakob Gruber <jgruber@chromium.org>
Cr-Commit-Position: refs/heads/master@{#46687}
2017-07-14 19:10:13 +00:00
Leszek Swirski
a2fcdc7cc8 [runtime] Move profiler ticks from SFI to feedback vector
Instead of counting profiler ticks on the shared function info (which is
shared between native contexts), count them on the feedback vector
(which is not). This allows us to continue pushing optimization
decisions off the SFI, onto the feedback vector.

Note that a side-effect of this is that ICs don't have to walk the stack
to reset profiler ticks, as they can access the feedback vector directly
from their feedback nexus.
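
A rough sketch of the difference, with simplified stand-in types (not the real V8 classes): one SharedFunctionInfo can be reached from closures in several native contexts, but each context has its own feedback vector, so a counter on the vector is naturally per-context.

  #include <cstdio>

  struct SharedFunctionInfo { int profiler_ticks = 0; };  // shared across contexts
  struct FeedbackVector {
    SharedFunctionInfo* shared;
    int profiler_ticks;  // counted here after this change
  };

  void RuntimeProfilerTick(FeedbackVector* vector) {
    ++vector->profiler_ticks;  // previously something like ++vector->shared->profiler_ticks
  }

  int main() {
    SharedFunctionInfo sfi;
    FeedbackVector context_a{&sfi, 0};
    FeedbackVector context_b{&sfi, 0};
    RuntimeProfilerTick(&context_a);
    RuntimeProfilerTick(&context_a);
    RuntimeProfilerTick(&context_b);
    // Ticks are tracked per native context instead of being pooled on the SFI.
    std::printf("a=%d b=%d sfi=%d\n", context_a.profiler_ticks,
                context_b.profiler_ticks, sfi.profiler_ticks);  // a=2 b=1 sfi=0
  }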

Change-Id: I232ae9e759fca75cd89d393148a4ff42caa2646f
Reviewed-on: https://chromium-review.googlesource.com/544888
Reviewed-by: Igor Sheludko <ishell@chromium.org>
Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
Commit-Queue: Leszek Swirski <leszeks@chromium.org>
Cr-Commit-Position: refs/heads/master@{#46411}
2017-07-05 12:04:50 +00:00
Alexandre Talon
8edef78d4d [ignition] Fix register flushing performance issue
In some code, flushing the registers was costly: we processed every register,
even though registers that are alone in their equivalence class need not be
processed. We now cheaply over-approximate which equivalence classes have more
than one register, saving many iterations of the Flush() loop in some cases.
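
A self-contained sketch of the over-approximation (names are invented for illustration; this is not the real register optimizer code): a flag records whether a register's equivalence class may ever have grown past one element, so Flush() can skip registers whose flag was never set.

  #include <cstdio>
  #include <vector>

  struct RegisterInfo {
    int equivalence_id;
    bool may_have_peers;  // conservative: set once the class may reach size 2
  };

  // Joining two registers into one equivalence class marks both of them.
  void Join(RegisterInfo& a, RegisterInfo& b) {
    b.equivalence_id = a.equivalence_id;
    a.may_have_peers = true;
    b.may_have_peers = true;
  }

  void Flush(std::vector<RegisterInfo>& registers) {
    for (RegisterInfo& reg : registers) {
      if (!reg.may_have_peers) continue;  // singletons are skipped cheaply
      // ... the expensive per-class work only happens here ...
      std::printf("processing class %d\n", reg.equivalence_id);
    }
  }

  int main() {
    std::vector<RegisterInfo> registers = {{0, false}, {1, false}, {2, false}};
    Join(registers[0], registers[1]);  // r0 and r1 now share a class
    Flush(registers);                  // r2 is never touched
  }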

Bug: v8:6432
Change-Id: I945e151736e8a515263ac76312127d930fd20d74
Reviewed-on: https://chromium-review.googlesource.com/525795
Commit-Queue: Alexandre Talon <alexandret@google.com>
Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
Reviewed-by: Leszek Swirski <leszeks@chromium.org>
Cr-Commit-Position: refs/heads/master@{#45805}
2017-06-09 09:58:15 +00:00
Ross McIlroy
11a211ff1b Reland: [TypeFeedbackVector] Store optimized code in the vector
Since the feedback vector is itself a native context structure, why
not store optimized code for a function in there rather than in
a map from native context to code? This allows us to get rid of
the optimized code map in the SharedFunctionInfo, saving a pointer,
and making lookup of any optimized code quicker.
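
A simplified stand-in sketch (not the actual V8 data structures) of what the lookup becomes: the feedback vector is already per native context, so it can hold the optimized code directly instead of the SharedFunctionInfo keeping a map from native context to code.

  #include <cstdio>

  struct Code { const char* id; };

  struct FeedbackVector {
    Code* optimized_code = nullptr;  // single slot, no context-keyed map needed
  };

  struct JSFunction {
    FeedbackVector* feedback_vector;
    Code* LookupOptimizedCode() const { return feedback_vector->optimized_code; }
  };

  int main() {
    FeedbackVector vector;
    JSFunction function{&vector};
    Code turbofan_code{"optimized"};
    vector.optimized_code = &turbofan_code;  // install on the vector
    std::printf("%s\n", function.LookupOptimizedCode()->id);  // direct lookup
  }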

Original patch by Michael Stanton <mvstanton@chromium.org>

BUG=v8:6246,chromium:718891
TBR=yangguo@chromium.org,ulan@chromium.org

Change-Id: I3bb9ec0cfff32e667cca0e1403f964f33a6958a6
Reviewed-on: https://chromium-review.googlesource.com/500134
Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
Reviewed-by: Jaroslav Sevcik <jarin@chromium.org>
Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
Cr-Commit-Position: refs/heads/master@{#45234}
2017-05-10 15:04:35 +00:00
Ross McIlroy
fd749344bf Revert "Reland: [TypeFeedbackVector] Store optimized code in the vector"
This reverts commit 662aa425ba.

Reason for revert: Crashing on Canary
BUG=chromium:718891

Original change's description:
> Reland: [TypeFeedbackVector] Store optimized code in the vector
> 
> Since the feedback vector is itself a native context structure, why
> not store optimized code for a function in there rather than in
> a map from native context to code? This allows us to get rid of
> the optimized code map in the SharedFunctionInfo, saving a pointer,
> and making lookup of any optimized code quicker.
> 
> Original patch by Michael Stanton <mvstanton@chromium.org>
> 
> BUG=v8:6246
> TBR=yangguo@chromium.org,ulan@chromium.org
> 
> Change-Id: Ic83e4011148164ef080c63215a0c77f1dfb7f327
> Reviewed-on: https://chromium-review.googlesource.com/494487
> Reviewed-by: Jaroslav Sevcik <jarin@chromium.org>
> Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
> Cr-Commit-Position: refs/heads/master@{#45084}

TBR=ulan@chromium.org,rmcilroy@chromium.org,yangguo@chromium.org,jarin@chromium.org
# Not skipping CQ checks because original CL landed > 1 day ago.
BUG=v8:6246

Change-Id: Idab648d6fe260862c2a0e35366df19dcecf13a82
Reviewed-on: https://chromium-review.googlesource.com/498633
Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
Cr-Commit-Position: refs/heads/master@{#45174}
2017-05-08 20:57:30 +00:00
Ross McIlroy
662aa425ba Reland: [TypeFeedbackVector] Store optimized code in the vector
Since the feedback vector is itself a native context structure, why
not store optimized code for a function in there rather than in
a map from native context to code? This allows us to get rid of
the optimized code map in the SharedFunctionInfo, saving a pointer,
and making lookup of any optimized code quicker.

Original patch by Michael Stanton <mvstanton@chromium.org>

BUG=v8:6246
TBR=yangguo@chromium.org,ulan@chromium.org

Change-Id: Ic83e4011148164ef080c63215a0c77f1dfb7f327
Reviewed-on: https://chromium-review.googlesource.com/494487
Reviewed-by: Jaroslav Sevcik <jarin@chromium.org>
Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
Cr-Commit-Position: refs/heads/master@{#45084}
2017-05-04 11:21:59 +00:00
Michael Achenbach
5fcf508e07 Revert "[TypeFeedbackVector] Store optimized code in the vector"
This reverts commit c5ad9c6d8e.

Reason for revert: Fails on gc stress:
https://build.chromium.org/p/client.v8/builders/V8%20Linux64%20GC%20Stress%20-%20custom%20snapshot/builds/12661

Original change's description:
> [TypeFeedbackVector] Store optimized code in the vector
> 
> Since the feedback vector is itself a native context structure, why
> not store optimized code for a function in there rather than in
> a map from native context to code? This allows us to get rid of
> the optimized code map in the SharedFunctionInfo, saving a pointer,
> and making lookup of any optimized code quicker.
> 
> Original patch by Michael Stanton <mvstanton@chromium.org>
> 
> BUG=v8:6246
> 
> Change-Id: I60ff8c408c3001bc272b4b198c9cbaea2872a9e5
> Reviewed-on: https://chromium-review.googlesource.com/476891
> Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
> Reviewed-by: Michael Stanton <mvstanton@chromium.org>
> Reviewed-by: Yang Guo <yangguo@chromium.org>
> Reviewed-by: Jaroslav Sevcik <jarin@chromium.org>
> Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
> Cr-Commit-Position: refs/heads/master@{#45022}

TBR=ulan@chromium.org,rmcilroy@chromium.org,yangguo@chromium.org,mvstanton@chromium.org,jarin@chromium.org
NOPRESUBMIT=true
NOTREECHECKS=true
NOTRY=true
BUG=v8:6246

Change-Id: I9cd5735b03898cae6ae7adea0f19d32fceb31619
Reviewed-on: https://chromium-review.googlesource.com/493287
Reviewed-by: Michael Achenbach <machenbach@chromium.org>
Commit-Queue: Michael Achenbach <machenbach@chromium.org>
Cr-Commit-Position: refs/heads/master@{#45027}
2017-05-02 11:51:01 +00:00
Ross McIlroy
c5ad9c6d8e [TypeFeedbackVector] Store optimized code in the vector
Since the feedback vector is itself a native context structure, why
not store optimized code for a function in there rather than in
a map from native context to code? This allows us to get rid of
the optimized code map in the SharedFunctionInfo, saving a pointer,
and making lookup of any optimized code quicker.

Original patch by Michael Stanton <mvstanton@chromium.org>

BUG=v8:6246

Change-Id: I60ff8c408c3001bc272b4b198c9cbaea2872a9e5
Reviewed-on: https://chromium-review.googlesource.com/476891
Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
Reviewed-by: Michael Stanton <mvstanton@chromium.org>
Reviewed-by: Yang Guo <yangguo@chromium.org>
Reviewed-by: Jaroslav Sevcik <jarin@chromium.org>
Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
Cr-Commit-Position: refs/heads/master@{#45022}
2017-05-02 11:20:23 +00:00
Ross McIlroy
496864f8af Reland: [Interpreter] Move BinaryOp Smi transformation into BytecodeGenerator.""
This relands commit d3e9aade0f. The original CL was reverted speculatively but didn't cause the buildbot failure.

Original change's description:
> [Interpreter] Move BinaryOp Smi transformation into BytecodeGenerator.
> 
> Perform the transformation to <BinaryOp>Smi for Binary ops which take Smi
> literals in the BytecodeGenerator. This enables us to perform the
> transformation for literals on either side for commutative operations, and
> Avoids having to do the check on every bytecode in the peephole optimizer.
> 
> In the process, adds Smi bytecode variants for all binary operations, adding
>  - MulSmi
>  - DivSmi
>  - ModSmi
>  - BitwiseXorSmi
>  - ShiftRightLogical
> 
> BUG=v8:6194
> 
> Change-Id: If1484252f5385c16957004b9cac8bfbb1f209219
> Reviewed-on: https://chromium-review.googlesource.com/466246
> Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
> Reviewed-by: Yang Guo <yangguo@chromium.org>
> Reviewed-by: Michael Starzinger <mstarzinger@chromium.org>
> Reviewed-by: Igor Sheludko <ishell@chromium.org>
> Cr-Commit-Position: refs/heads/master@{#44477}

TBR=rmcilroy@chromium.org,machenbach@chromium.org,yangguo@chromium.org,mstarzinger@chromium.org,mythria@chromium.org,v8-reviews@googlegroups.com,ishell@chromium.org
# Not skipping CQ checks because original CL landed > 1 day ago.
BUG=v8:6194

Change-Id: I2ccaefa1ce58d3885f5c2648755985c06f25c1d8
Reviewed-on: https://chromium-review.googlesource.com/472746
Reviewed-by: Ross McIlroy <rmcilroy@chromium.org>
Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
Cr-Commit-Position: refs/heads/master@{#44511}
2017-04-10 09:58:18 +00:00
Michael Achenbach
084471ce6b Revert "[Interpreter] Move BinaryOp Smi transformation into BytecodeGenerator."
This reverts commit d3e9aade0f.

Reason for revert: Speculative for:
https://build.chromium.org/p/client.v8.ports/builders/V8%20Linux%20-%20arm64%20-%20sim%20-%20nosnap%20-%20debug/builds/4449

Bisect points to this CL.

Original change's description:
> [Interpreter] Move BinaryOp Smi transformation into BytecodeGenerator.
> 
> Perform the transformation to <BinaryOp>Smi for Binary ops which take Smi
> literals in the BytecodeGenerator. This enables us to perform the
> transformation for literals on either side for commutative operations, and
> Avoids having to do the check on every bytecode in the peephole optimizer.
> 
> In the process, adds Smi bytecode variants for all binary operations, adding
>  - MulSmi
>  - DivSmi
>  - ModSmi
>  - BitwiseXorSmi
>  - ShiftRightLogical
> 
> BUG=v8:6194
> 
> Change-Id: If1484252f5385c16957004b9cac8bfbb1f209219
> Reviewed-on: https://chromium-review.googlesource.com/466246
> Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
> Reviewed-by: Yang Guo <yangguo@chromium.org>
> Reviewed-by: Michael Starzinger <mstarzinger@chromium.org>
> Reviewed-by: Igor Sheludko <ishell@chromium.org>
> Cr-Commit-Position: refs/heads/master@{#44477}

TBR=rmcilroy@chromium.org,yangguo@chromium.org,mstarzinger@chromium.org,mythria@chromium.org,ishell@chromium.org,v8-reviews@googlegroups.com
NOPRESUBMIT=true
NOTREECHECKS=true
NOTRY=true
BUG=v8:6194

Change-Id: If57dbdbe40be77804bf437463b855d3167e2d473
Reviewed-on: https://chromium-review.googlesource.com/471308
Reviewed-by: Michael Achenbach <machenbach@chromium.org>
Commit-Queue: Michael Achenbach <machenbach@chromium.org>
Cr-Commit-Position: refs/heads/master@{#44488}
2017-04-07 13:17:52 +00:00
Ross McIlroy
d3e9aade0f [Interpreter] Move BinaryOp Smi transformation into BytecodeGenerator.
Perform the transformation to <BinaryOp>Smi for binary ops which take Smi
literals in the BytecodeGenerator. This enables us to perform the
transformation for literals on either side of commutative operations, and
avoids having to do the check on every bytecode in the peephole optimizer.
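
A hedged sketch of the generator-side rewrite (hypothetical helper names, not the real BytecodeGenerator API): if either operand of a commutative binary op is a Smi literal, emit the Smi variant with the literal as an immediate; otherwise fall back to the generic bytecode.

  #include <cstdio>

  enum class Op { kAdd, kMul };
  struct Expr { bool is_smi_literal; int smi_value; };

  void EmitBinaryOp(Op op, const Expr& lhs, const Expr& rhs) {
    // For commutative ops the Smi literal may sit on either side.
    const Expr* literal =
        rhs.is_smi_literal ? &rhs : (lhs.is_smi_literal ? &lhs : nullptr);
    if (literal != nullptr) {
      std::printf("%s [%d]\n", op == Op::kAdd ? "AddSmi" : "MulSmi",
                  literal->smi_value);
    } else {
      std::printf("%s r0\n", op == Op::kAdd ? "Add" : "Mul");
    }
  }

  int main() {
    EmitBinaryOp(Op::kAdd, {false, 0}, {true, 1});   // x + 1  ->  AddSmi [1]
    EmitBinaryOp(Op::kMul, {true, 2}, {false, 0});   // 2 * x  ->  MulSmi [2]
    EmitBinaryOp(Op::kAdd, {false, 0}, {false, 0});  // x + y  ->  Add r0
  }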

In the process, this adds Smi bytecode variants for all binary operations:
 - MulSmi
 - DivSmi
 - ModSmi
 - BitwiseXorSmi
 - ShiftRightLogical

BUG=v8:6194

Change-Id: If1484252f5385c16957004b9cac8bfbb1f209219
Reviewed-on: https://chromium-review.googlesource.com/466246
Commit-Queue: Ross McIlroy <rmcilroy@chromium.org>
Reviewed-by: Yang Guo <yangguo@chromium.org>
Reviewed-by: Michael Starzinger <mstarzinger@chromium.org>
Reviewed-by: Igor Sheludko <ishell@chromium.org>
Cr-Commit-Position: refs/heads/master@{#44477}
2017-04-07 09:44:57 +00:00
leszeks
03a2b3a1a3 [ignition] Expect 'I' for signed bytecode operands
Because it was confusing seeing U8(negative value).

Review-Url: https://codereview.chromium.org/2640273002
Cr-Commit-Position: refs/heads/master@{#42662}
2017-01-25 17:39:24 +00:00
leszeks
e56437b630 [ignition] Use absolute values for jump offsets
Since JumpLoop is always backwards, and other jumps are always forwards,
we can store the jump offset as an always positive integer and decide on
the jump direction based on the bytecode. This will save a small amount
of space for large-ish for loops (>128 bytecodes).
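
A tiny sketch of the decoding rule, assuming a simplified bytecode enum (illustrative only): the stored operand is always non-negative, and the bytecode itself decides whether to jump backwards (JumpLoop) or forwards (all other jumps).

  #include <cstdio>

  enum class Bytecode { kJump, kJumpLoop };

  int JumpTarget(Bytecode bytecode, int current_offset, unsigned operand) {
    return bytecode == Bytecode::kJumpLoop
               ? current_offset - static_cast<int>(operand)   // always backwards
               : current_offset + static_cast<int>(operand);  // always forwards
  }

  int main() {
    // A backwards branch of 130 bytecodes fits in a single unsigned byte here,
    // where a signed one-byte operand would have overflowed at -130.
    std::printf("%d\n", JumpTarget(Bytecode::kJumpLoop, 200, 130));  // 70
    std::printf("%d\n", JumpTarget(Bytecode::kJump, 200, 7));        // 207
  }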

Review-Url: https://codereview.chromium.org/2641443002
Cr-Commit-Position: refs/heads/master@{#42638}
2017-01-24 22:09:02 +00:00
mvstanton
38602f1ff5 [FeedbackVector] Infrastructure for literal arrays in the vector.
This changes the NewClosure interface descriptor, but ignores
the additional vector/slot arguments for now. The feedback vector
gets larger, as it holds a space for each literal array. A follow-on
CL will constructively use this space.

BUG=v8:5456

Review-Url: https://codereview.chromium.org/2614373002
Cr-Commit-Position: refs/heads/master@{#42146}
2017-01-09 15:31:00 +00:00
hablich
aa8a208a47 Revert of [TypeFeedbackVector] Root literal arrays in function literals slots (patchset #11 id:370001 of https://codereview.chromium.org/2504153002/ )
Reason for revert:
Speculative revert because of blocked roll: https://codereview.chromium.org/2596013002/

Original issue's description:
> [TypeFeedbackVector] Root literal arrays in function literals slots
>
> Literal arrays and feedback vectors for a function can be garbage
> collected if we don't have a rooted closure for the function, which
> happens often. It's expensive to come back from this (recreating
> boilerplates and gathering feedback again), and the cost is
> disproportionate if the function was inlined into optimized code.
>
> To guard against losing these arrays when we need them, we'll now
> create literal arrays when creating the feedback vector for the outer
> closure, and root them strongly in that vector.
>
> BUG=v8:5456
>
> Review-Url: https://codereview.chromium.org/2504153002
> Cr-Commit-Position: refs/heads/master@{#41893}
> Committed: 93df094081

TBR=bmeurer@chromium.org,mlippautz@chromium.org,mvstanton@chromium.org
# Skipping CQ checks because original CL landed less than 1 days ago.
NOPRESUBMIT=true
NOTREECHECKS=true
NOTRY=true
BUG=v8:5456

Review-Url: https://codereview.chromium.org/2597163002
Cr-Commit-Position: refs/heads/master@{#41917}
2016-12-22 10:26:36 +00:00
mvstanton
93df094081 [TypeFeedbackVector] Root literal arrays in function literals slots
Literal arrays and feedback vectors for a function can be garbage
collected if we don't have a rooted closure for the function, which
happens often. It's expensive to come back from this (recreating
boilerplates and gathering feedback again), and the cost is
disproportionate if the function was inlined into optimized code.

To guard against losing these arrays when we need them, we'll now
create literal arrays when creating the feedback vector for the outer
closure, and root them strongly in that vector.

BUG=v8:5456

Review-Url: https://codereview.chromium.org/2504153002
Cr-Commit-Position: refs/heads/master@{#41893}
2016-12-21 14:06:29 +00:00
rmcilroy
bfc53f6ed0 [Interpreter] Add expression positions to BinaryOps.
BUG=v8:5723

Review-Url: https://codereview.chromium.org/2555263002
Cr-Commit-Position: refs/heads/master@{#41583}
2016-12-08 10:11:17 +00:00
leszeks
d2caa302a7 [ignition] Add bytecodes for loads/stores in the current context
The majority of context slot accesses are to the local context (current context
register and depth 0), so this adds bytecodes to optimise for that case.

This cuts down bytecode size by roughly 1% (measured on Octane and Top25).
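
A hedged sketch of the emitter-side decision (operand shapes are illustrative, not the exact bytecode signatures): the common case, the current context register at depth 0, gets a dedicated shorter bytecode, while the general case keeps the context and depth operands.

  #include <cstdio>

  void EmitLoadContextSlot(bool is_current_context_register, int depth,
                           int slot_index) {
    if (is_current_context_register && depth == 0) {
      std::printf("LdaCurrentContextSlot [%d]\n", slot_index);
    } else {
      std::printf("LdaContextSlot r_context, [%d], [%d]\n", slot_index, depth);
    }
  }

  int main() {
    EmitLoadContextSlot(true, 0, 3);  // local context access, the common case
    EmitLoadContextSlot(true, 2, 3);  // outer context, keeps the depth operand
  }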

Review-Url: https://codereview.chromium.org/2459513002
Cr-Commit-Position: refs/heads/master@{#40641}
2016-10-28 10:11:06 +00:00
adamk
35a3ccbfac [ignition] Eliminate hole checks where statically possible for loads and stores
Move hole check logic from full-codegen into scope analysis, and store the
"needs hole check" bit on VariableProxy. This makes it easy to re-use in
any backend: it will be trivial to extend the use of this logic in, e.g.,
full-codegen variable stores.

While changing the signatures of the variable loading/storing methods in
Ignition, I took the liberty of replacing the verb "Visit" with "Build", since these
are not part of AST visiting.

BUG=v8:5460

Review-Url: https://chromiumcodereview.appspot.com/2411873004
Cr-Commit-Position: refs/heads/master@{#40479}
2016-10-20 17:32:08 +00:00
neis
99cfa5f620 [interpreter] Remove redundant flag from bytecode cctest suite.
This removes the execute_ flag, which was always the negation of top_level_.

R=rmcilroy@chromium.org
BUG=

Review-Url: https://codereview.chromium.org/2390163003
Cr-Commit-Position: refs/heads/master@{#39961}
2016-10-04 16:30:15 +00:00
rmcilroy
27fe988b85 [Interpreter] Replace BytecodeRegisterAllocator with a simple bump pointer.
There are only a few occasions where we allocate a register in an outer
expression allocation scope, which makes the costly free-list approach
of the BytecodeRegisterAllocator unnecessary. This CL replaces all
occurrences with moves to the accumulator and stores to a register
allocated in the correct scope. By doing this, we can simplify the
BytecodeRegisterAllocator to be a simple bump-pointer allocator
with registers released in the same order as allocated.

The following changes are also made:
 - Make BytecodeRegisterOptimizer able to use registers which have been
   unallocated, but not yet reused
 - Remove RegisterExpressionResultScope and rename
   AccumulatorExpressionResultScope to ValueExpressionResultScope
 - Introduce RegisterList to represent consecutive register
   allocations, and use this for operands to call bytecodes.

By avoiding the free-list handling, this gives another couple of
percent on CodeLoad.
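
As a rough, self-contained illustration of the bump-pointer scheme described above (simplified names, not the real BytecodeRegisterAllocator interface): allocation just increments an index, and releasing a register also releases everything allocated after it, so no free list is needed.

  #include <cassert>
  #include <cstdio>

  class BumpPointerRegisterAllocator {
   public:
    int NewRegister() { return next_register_index_++; }
    // Release every register allocated at or after |register_index|.
    void ReleaseRegisters(int register_index) {
      assert(register_index <= next_register_index_);
      next_register_index_ = register_index;
    }
    int next_register_index() const { return next_register_index_; }
   private:
    int next_register_index_ = 0;
  };

  int main() {
    BumpPointerRegisterAllocator allocator;
    int receiver = allocator.NewRegister();  // r0
    int arg0 = allocator.NewRegister();      // r1 \ consecutive registers, e.g.
    int arg1 = allocator.NewRegister();      // r2 / a RegisterList for a call
    std::printf("allocated r%d..r%d\n", receiver, arg1);
    allocator.ReleaseRegisters(arg0);        // drop the call arguments again
    std::printf("next free: r%d\n", allocator.next_register_index());  // r1
  }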

BUG=v8:4280

Review-Url: https://codereview.chromium.org/2369873002
Cr-Commit-Position: refs/heads/master@{#39905}
2016-09-30 09:03:25 +00:00
bmeurer
c7d7ca361d [turbofan] Collect invocation counts and compute relative call frequencies.
Add a notion of "invocation count" to the baseline compilers, which
increment a special slot in the TypeFeedbackVector for each invocation
of a given function (the optimized code doesn't currently collect this
information).

Use this invocation count to relativize the call counts on the call
sites within the function, so that the inlining heuristic has a view
of relative importance of a call site rather than some absolute numbers
with unclear meaning for the current function. Also apply the call site
frequency as a factor to all frequencies in the inlinee by passing this
to the graph builders so that the importance of a call site in an
inlinee is relative to the topmost optimized function.
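
The arithmetic, as a small hedged sketch (function name invented for illustration): a call site's frequency is its call count divided by the invocation count of the function containing it, and call sites inside an inlinee are further scaled by the frequency of the call site through which the inlinee was inlined.

  #include <cstdio>

  float CallFrequency(int call_count, int caller_invocation_count,
                      float caller_frequency) {
    if (caller_invocation_count == 0) return 0.0f;
    return caller_frequency *
           (static_cast<float>(call_count) / caller_invocation_count);
  }

  int main() {
    // Outer function invoked 100 times; a site inside it was called 50 times.
    float site = CallFrequency(50, 100, 1.0f);    // 0.5
    // A site in that inlinee: 200 calls over 50 invocations of the inlinee.
    float nested = CallFrequency(200, 50, site);  // 2.0, relative to the top
    std::printf("site=%.2f nested=%.2f\n", site, nested);
  }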

Note that all functions that neither have literals nor need type
feedback slots will share a single invocation count cell in the
canonical empty type feedback vector, so their invocation count is
meaningless, but that doesn't matter since we only use the invocation
count to relativize call counts within the function, which we only have
if we have at least one type feedback vector (the CallIC slot).

See the design document for additional details on this change:
https://docs.google.com/document/d/1VoYBhpDhJC4VlqMXCKvae-8IGuheBGxy32EOgC2LnT8

BUG=v8:5267,v8:5372
R=mvstanton@chromium.org,rmcilroy@chromium.org,mstarzinger@chromium.org

Review-Url: https://codereview.chromium.org/2337123003
Cr-Commit-Position: refs/heads/master@{#39410}
2016-09-14 10:20:48 +00:00
mstarzinger
c9864173f1 [interpreter] Merge {OsrPoll} with {Jump} bytecode.
This introduces a new {JumpLoop} bytecode to combine the OSR polling
mechanism modeled by {OsrPoll} with the actual {Jump} performing the
backwards branch. This reduces the overall size and also avoids one
additional dispatch. It also makes sure that OSR polling is only done
within real loops.

R=rmcilroy@chromium.org
BUG=v8:4764

Review-Url: https://codereview.chromium.org/2331033002
Cr-Commit-Position: refs/heads/master@{#39384}
2016-09-13 13:07:36 +00:00
leszeks
1c0c5fda26 [Interpreter] Move context chain search loop to handler
Moves the context chain search loop out of generated bytecode, and into
the (Lda|Ldr|Sta)ContextSlot handler, by passing the context depth in as
an additional operand. This should decrease the bytecode size and
increase performance for deep context chain searches, at the cost of
slightly increasing bytecode size for shallow context access.
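
A minimal sketch of the handler-side loop, using a toy context type (not V8's Context layout): the handler walks up |depth| links at runtime, so the generated bytecode needs a single context-slot access instead of one load per hop.

  #include <cstdio>

  struct Context {
    Context* previous;
    int slots[4];
  };

  int LoadContextSlot(Context* context, int slot_index, int depth) {
    for (int i = 0; i < depth; ++i) context = context->previous;  // chain walk
    return context->slots[slot_index];
  }

  int main() {
    Context outer{nullptr, {10, 11, 12, 13}};
    Context inner{&outer, {20, 21, 22, 23}};
    std::printf("%d\n", LoadContextSlot(&inner, 1, 0));  // 21: current context
    std::printf("%d\n", LoadContextSlot(&inner, 1, 1));  // 11: one context up
  }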

Review-Url: https://codereview.chromium.org/2336643002
Cr-Commit-Position: refs/heads/master@{#39378}
2016-09-13 11:09:33 +00:00
leszeks
b28b7e1328 [Interpreter] Remove constant pool type in tests
For historical reasons, the interpreter's bytecode expectations tests
required a type for the constant pool. This had two disadvantages:

 1. Strings and numbers were not visible in mixed pools, and
 2. Mismatches of pool types (e.g. when rebaselining) would cause parser
    errors

This removes the pool types, making everything 'mixed', and appends the
values to string- and number-valued constants. Specifying a pool type
in the *.golden header now prints a warning (for backwards compatibility).

BUG=v8:5350

Review-Url: https://codereview.chromium.org/2310103002
Cr-Commit-Position: refs/heads/master@{#39216}
2016-09-06 16:11:23 +00:00
epertoso
708f80d243 [interpreter] Make the comparison bytecode handlers collect type feedback.
BUG=v8:5273

Review-Url: https://codereview.chromium.org/2286273002
Cr-Commit-Position: refs/heads/master@{#39006}
2016-08-30 10:21:39 +00:00
epertoso
b305c7dfcb [interpreter] Make the binary op with Smi bytecode handlers collect type feedback.
Drive-by fix: the order of parameters in the BinaryOpWithFeedback TurboFan code stubs now reflects the convention of having the context at the end.

BUG=v8:5273

Review-Url: https://codereview.chromium.org/2263253002
Cr-Commit-Position: refs/heads/master@{#38832}
2016-08-23 14:59:33 +00:00
klaasb
b07444b16f [interpreter] Add CreateBlockContext bytecode
Allows us to create a corresponding TurboFan node, so TF can
optimize it.

BUG=v8:4280
LOG=n

Review-Url: https://codereview.chromium.org/2248633002
Cr-Commit-Position: refs/heads/master@{#38651}
2016-08-16 11:07:43 +00:00
mythria
9e3e2ee2dd [Interpreter] Assign feedback slots for binary operations and use them in ignition.
Assign feedback slots in the type feedback vector for binary operations.
Update bytecode-generator to use these slots and add them as an operand
to binary operations.

BUG=v8:4280
LOG=N

Review-Url: https://codereview.chromium.org/2209633002
Cr-Commit-Position: refs/heads/master@{#38408}
2016-08-08 01:16:40 +00:00
adamk
6768456db5 Use Variable::binding_needs_init() to determine hole initialization
The old code was using VariableMode, but that signal is both
over-pessimistic (some CONST and LET variables need no hole-initialization)
and inconsistent with other uses of the InitializationFlag enum (such
as %LoadLookupSlot).

This changes no observable behavior, but removes unnecessary hole
initialization and hole checks in a few places, including
block-scoped function declarations, super property lookups,
and new.target.

R=bmeurer@chromium.org, neis@chromium.org

Review-Url: https://codereview.chromium.org/2201193004
Cr-Commit-Position: refs/heads/master@{#38395}
2016-08-05 17:51:17 +00:00
klaasb
8097eeb9f2 [interpreter] Add CreateFunctionContext bytecode
Add a new bytecode to create a function context. The handler inlines
FastNewFunctionContextStub.

BUG=v8:4280
LOG=n

Review-Url: https://codereview.chromium.org/2187523002
Cr-Commit-Position: refs/heads/master@{#38301}
2016-08-03 14:43:26 +00:00
oth
40511877eb [interpreter] Introduce binary op bytecodes for Smi operand.
Introduces fused bytecodes that combine LdaSmi followed by a binary op bytecode.
The chosen bytecodes are used frequently in Octane: AddSmi, SubSmi,
BitwiseOrSmi, BitwiseAndSmi, ShiftLeftSmi, ShiftRightSmi.

There are additional code stubs for these operations that are biased towards
both the left hand and right hand operands being Smis.

BUG=v8:4280
LOG=N

Review-Url: https://codereview.chromium.org/2111923002
Cr-Commit-Position: refs/heads/master@{#37531}
2016-07-05 13:46:11 +00:00
rmcilroy
02c3414d62 [Interpreter] Inline FastNewClosure into CreateClosure bytecode handler
BUG=v8:4280

Review-Url: https://codereview.chromium.org/2113613002
Cr-Commit-Position: refs/heads/master@{#37453}
2016-06-30 15:32:59 +00:00
oth
769d332619 [interpreter] Filter expression positions at source.
With this change the bytecode array builder only emits expression
positions for bytecodes that can throw. This allows more peephole
optimization opportunities and results in smaller code.
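
A toy sketch of the filtering (the bytecode set and the can-throw predicate here are hypothetical; in the real builder this is a property of the bytecode table): a pending expression position is only attached when the next bytecode can actually throw.

  #include <cstdio>

  enum class Bytecode { kLdaSmi, kLdaGlobal };

  bool CanThrow(Bytecode bytecode) {
    return bytecode != Bytecode::kLdaSmi;  // loading a constant cannot throw
  }

  void Emit(Bytecode bytecode, int pending_expression_position) {
    if (pending_expression_position >= 0 && CanThrow(bytecode)) {
      std::printf("bytecode %d @ position %d\n", static_cast<int>(bytecode),
                  pending_expression_position);
    } else {
      std::printf("bytecode %d (no expression position)\n",
                  static_cast<int>(bytecode));
    }
  }

  int main() {
    Emit(Bytecode::kLdaSmi, 7);     // position dropped: cannot throw
    Emit(Bytecode::kLdaGlobal, 7);  // position kept: may throw
  }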

BUG=v8:4280,chromium:615979
LOG=N

Review-Url: https://codereview.chromium.org/2038323002
Cr-Commit-Position: refs/heads/master@{#36863}
2016-06-09 13:33:29 +00:00
oth
5e8f8d4e8c [interpreter] Bytecode register optimizer.
Online optimization stage for reducing redundant transfers between registers.

BUG=V8:4280
LOG=N

Review-Url: https://codereview.chromium.org/1997653002
Cr-Commit-Position: refs/heads/master@{#36551}
2016-05-27 15:59:16 +00:00
oth
e43fbde72b [Interpreter] Preserve source positions in peephole optimizer.
The original peephole optimizer logic in the BytecodeArrayBuilder did
not respect source positions as it was written before there were
bytecode source positions. This led to some minor differences to
FCG and was problematic when combined with pending bytecode
optimizations. This change makes the new peephole optimizer fully
respect source positions.

BUG=v8:4280
LOG=N

Review-Url: https://codereview.chromium.org/1998203002
Cr-Commit-Position: refs/heads/master@{#36439}
2016-05-23 13:33:20 +00:00
oth
52600c6b1c [interpreter] Add checks for source position to test-bytecode-generator.
Prints source position information alongside bytecode.

BUG=v8:4280
LOG=N

Review-Url: https://codereview.chromium.org/1963663002
Cr-Commit-Position: refs/heads/master@{#36171}
2016-05-11 12:22:17 +00:00
rmcilroy
c58f328581 [Interpreter] Introduce IncStub and DecStub.
Adds IncStub and DecStub TurboFan code stubs and hooks them up to the
interpreter's Inc and Dec bytecodes (which are used for count
operations, e.g. i++).

BUG=v8:4280
LOG=N

Review URL: https://codereview.chromium.org/1901083002

Cr-Commit-Position: refs/heads/master@{#35720}
2016-04-22 10:36:33 +00:00
oth
48d082af38 [interpreter] Add support for scalable operands.
This change introduces wide prefix bytecodes to support wide (16-bit)
and extra-wide (32-bit) operands. It retires the previous
wide-bytecodes and reduces the number of operand types.

Operands are now either scalable or fixed size. Scalable operands
increase in width when a bytecode is prefixed with wide or extra-wide.

The bytecode handler table is extended to 256*3 entries. The
first 256 entries are used for bytecodes with 8-bit operands,
the second 256 entries are used for bytecodes with operands that
scale to 16-bits, and the third group of 256 entries are used for
bytecodes with operands that scale to 32-bits.
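
A small sketch of how the dispatch index can combine the operand scale with the bytecode (the table layout follows the description above; the code itself is an illustrative stand-in, not the interpreter's dispatch code): the Wide and ExtraWide prefixes select the second and third block of 256 handlers.

  #include <cstdio>

  enum class OperandScale { kSingle = 1, kDouble = 2, kQuadruple = 4 };

  int DispatchIndex(unsigned char bytecode, OperandScale scale) {
    int block = 0;
    switch (scale) {
      case OperandScale::kSingle:    block = 0; break;  // plain bytecode
      case OperandScale::kDouble:    block = 1; break;  // Wide prefix
      case OperandScale::kQuadruple: block = 2; break;  // ExtraWide prefix
    }
    return block * 256 + bytecode;  // index into the 256*3 entry handler table
  }

  int main() {
    std::printf("%d\n", DispatchIndex(0x34, OperandScale::kSingle));     // 52
    std::printf("%d\n", DispatchIndex(0x34, OperandScale::kDouble));     // 308
    std::printf("%d\n", DispatchIndex(0x34, OperandScale::kQuadruple));  // 564
  }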

LOG=N
BUG=v8:4747,v8:4280

Review URL: https://codereview.chromium.org/1783483002

Cr-Commit-Position: refs/heads/master@{#34955}
2016-03-21 17:09:49 +00:00
ssanfilippo
6ae030590d [Interpreter] Refactor bytecode generator test suite.
Bytecode expectations have been moved to external (.golden) files,
one per test. Each test in the suite builds a representation of the
compiled bytecode using BytecodeExpectationsPrinter. The output is
then compared to the golden file. If the comparison fails, a textual
diff can be used to identify the discrepancies.

Only the test snippets are left in the cc file, which also makes it
more compact and meaningful. Leaving the snippets in the cc file was
a deliberate choice to keep the "truth" about the tests in the cc
file, which will rarely change, as opposed to the golden files.

Golden files can be generated and kept up to date using
generate-bytecode-expectations, which also means that the test suite
can be batch updated whenever the bytecode or golden format changes.

The golden format has been slightly amended (no more comments about
`void*`, and the size of the bytecode array is now included), following
considerations made while converting the tests.

There is also a fix: BytecodeExpectationsPrinter::top_level_ was left
uninitialized, leading to undefined behaviour.

BUG=v8:4280
LOG=N

Review URL: https://codereview.chromium.org/1717293002

Cr-Commit-Position: refs/heads/master@{#34285}
2016-02-25 12:07:19 +00:00