v8/test/cctest/interpreter/bytecode_expectations/ForOf.golden

#
# Autogenerated by generate-bytecode-expectations.
#

---
wrap: yes

---
snippet: "
for (var p of [0, 1, 2]) {}
"
frame size: 15
parameter count: 1
bytecode array length: 272
bytecodes: [
/* 30 E> */ B(StackCheck),
B(LdaZero),
B(Star), R(4),
B(Mov), R(context), R(11),
B(Mov), R(context), R(12),
/* 48 S> */ B(CreateArrayLiteral), U8(0), U8(0), U8(9),
B(Star), R(13),
B(LdaNamedProperty), R(13), U8(1), U8(2),
B(Star), R(14),
B(CallProperty), R(14), R(13), U8(1), U8(4),
B(JumpIfJSReceiver), U8(7),
B(CallRuntime), U16(Runtime::kThrowSymbolIteratorInvalid), R(0), U8(0),
B(Star), R(2),
/* 45 S> */ B(LdaNamedProperty), R(2), U8(2), U8(8),
B(Star), R(14),
/* 45 E> */ B(CallProperty), R(14), R(2), U8(1), U8(6),
B(Star), R(3),
/* 45 E> */ B(InvokeIntrinsic), U8(Runtime::k_IsJSReceiver), R(3), U8(1),
B(ToBooleanLogicalNot),
B(JumpIfFalse), U8(7),
B(CallRuntime), U16(Runtime::kThrowIteratorResultNotAnObject), R(3), U8(1),
B(LdaNamedProperty), R(3), U8(3), U8(10),
B(JumpIfToBooleanTrue), U8(25),
B(LdaNamedProperty), R(3), U8(4), U8(12),
B(Star), R(5),
B(LdaSmi), U8(2),
B(Star), R(4),
B(Mov), R(5), R(0),
/* 34 E> */ B(StackCheck),
B(Mov), R(0), R(1),
B(LdaZero),
B(Star), R(4),
B(JumpLoop), U8(-51), U8(0),
B(Jump), U8(36),
B(Star), R(13),
B(Ldar), R(closure),
B(CreateCatchContext), R(13), U8(5), U8(6),
B(Star), R(12),
B(PushContext), R(8),
B(LdaSmi), U8(2),
B(TestEqualStrict), R(4), U8(14),
B(JumpIfFalse), U8(6),
B(LdaSmi), U8(1),
B(Star), R(4),
B(LdaCurrentContextSlot), U8(4),
B(Star), R(13),
B(CallRuntime), U16(Runtime::kReThrow), R(13), U8(1),
B(PopContext), R(8),
B(LdaSmi), U8(-1),
B(Star), R(9),
B(Jump), U8(7),
B(Star), R(10),
B(LdaZero),
B(Star), R(9),
B(LdaTheHole),
B(SetPendingMessage),
B(Star), R(11),
B(LdaZero),
B(TestEqualStrict), R(4), U8(15),
B(JumpIfTrue), U8(111),
B(LdaNamedProperty), R(2), U8(7), U8(16),
B(Star), R(6),
B(TestUndetectable), R(6),
B(JumpIfFalse), U8(4),
B(Jump), U8(99),
B(LdaSmi), U8(1),
B(TestEqualStrict), R(4), U8(19),
B(JumpIfFalse), U8(67),
B(Ldar), R(6),
B(TypeOf),
B(Star), R(12),
B(LdaConstant), U8(8),
B(TestEqualStrict), R(12), U8(20),
B(JumpIfFalse), U8(4),
B(Jump), U8(18),
B(Wide), B(LdaSmi), U16(130),
B(Star), R(12),
B(LdaConstant), U8(9),
B(Star), R(13),
B(CallRuntime), U16(Runtime::kNewTypeError), R(12), U8(2),
B(Throw),
B(Mov), R(context), R(12),
B(Mov), R(6), R(13),
B(Mov), R(2), R(14),
B(InvokeIntrinsic), U8(Runtime::k_Call), R(13), U8(2),
B(Jump), U8(20),
B(Star), R(13),
B(Ldar), R(closure),
B(CreateCatchContext), R(13), U8(5), U8(10),
B(Star), R(12),
B(LdaTheHole),
B(SetPendingMessage),
B(Ldar), R(12),
B(PushContext), R(8),
B(PopContext), R(8),
B(Jump), U8(27),
B(Mov), R(6), R(12),
B(Mov), R(2), R(13),
B(InvokeIntrinsic), U8(Runtime::k_Call), R(12), U8(2),
B(Star), R(7),
B(InvokeIntrinsic), U8(Runtime::k_IsJSReceiver), R(7), U8(1),
B(JumpIfToBooleanFalse), U8(4),
B(Jump), U8(7),
B(CallRuntime), U16(Runtime::kThrowIteratorResultNotAnObject), R(7), U8(1),
B(Ldar), R(11),
B(SetPendingMessage),
B(LdaZero),
B(TestEqualStrict), R(9), U8(0),
B(JumpIfTrue), U8(4),
B(Jump), U8(5),
B(Ldar), R(10),
B(ReThrow),
B(LdaUndefined),
/* 62 S> */ B(Return),
]
constant pool: [
FIXED_ARRAY_TYPE,
SYMBOL_TYPE,
ONE_BYTE_INTERNALIZED_STRING_TYPE ["next"],
ONE_BYTE_INTERNALIZED_STRING_TYPE ["done"],
ONE_BYTE_INTERNALIZED_STRING_TYPE ["value"],
ONE_BYTE_INTERNALIZED_STRING_TYPE [".catch"],
FIXED_ARRAY_TYPE,
ONE_BYTE_INTERNALIZED_STRING_TYPE ["return"],
ONE_BYTE_INTERNALIZED_STRING_TYPE ["function"],
ONE_BYTE_INTERNALIZED_STRING_TYPE [""],
FIXED_ARRAY_TYPE,
]
handlers: [
[7, 126, 132],
[10, 90, 92],
[199, 209, 211],
]

---
snippet: "
var x = 'potatoes';
for (var p of x) { return p; }
"
frame size: 16
parameter count: 1
bytecode array length: 286
bytecodes: [
/* 30 E> */ B(StackCheck),
/* 42 S> */ B(LdaConstant), U8(0),
B(Star), R(0),
B(LdaZero),
B(Star), R(5),
B(Mov), R(context), R(12),
B(Mov), R(context), R(13),
/* 68 S> */ B(LdaNamedProperty), R(0), U8(1), U8(2),
B(Star), R(15),
B(CallProperty), R(15), R(0), U8(1), U8(4),
B(Mov), R(0), R(14),
B(JumpIfJSReceiver), U8(7),
B(CallRuntime), U16(Runtime::kThrowSymbolIteratorInvalid), R(0), U8(0),
B(Star), R(3),
/* 65 S> */ B(LdaNamedProperty), R(3), U8(2), U8(8),
B(Star), R(15),
/* 65 E> */ B(CallProperty), R(15), R(3), U8(1), U8(6),
B(Star), R(4),
/* 65 E> */ B(InvokeIntrinsic), U8(Runtime::k_IsJSReceiver), R(4), U8(1),
B(ToBooleanLogicalNot),
B(JumpIfFalse), U8(7),
B(CallRuntime), U16(Runtime::kThrowIteratorResultNotAnObject), R(4), U8(1),
B(LdaNamedProperty), R(4), U8(3), U8(10),
B(JumpIfToBooleanTrue), U8(27),
B(LdaNamedProperty), R(4), U8(4), U8(12),
B(Star), R(6),
B(LdaSmi), U8(2),
B(Star), R(5),
B(Mov), R(6), R(1),
/* 54 E> */ B(StackCheck),
B(Mov), R(1), R(2),
/* 73 S> */ B(LdaZero),
B(Star), R(10),
B(Mov), R(1), R(11),
B(Jump), U8(50),
B(Jump), U8(36),
B(Star), R(14),
B(Ldar), R(closure),
B(CreateCatchContext), R(14), U8(5), U8(6),
B(Star), R(13),
B(PushContext), R(9),
B(LdaSmi), U8(2),
B(TestEqualStrict), R(5), U8(14),
B(JumpIfFalse), U8(6),
B(LdaSmi), U8(1),
B(Star), R(5),
B(LdaCurrentContextSlot), U8(4),
B(Star), R(14),
B(CallRuntime), U16(Runtime::kReThrow), R(14), U8(1),
B(PopContext), R(9),
B(LdaSmi), U8(-1),
B(Star), R(10),
B(Jump), U8(8),
B(Star), R(11),
B(LdaSmi), U8(1),
B(Star), R(10),
B(LdaTheHole),
B(SetPendingMessage),
B(Star), R(12),
B(LdaZero),
B(TestEqualStrict), R(5), U8(15),
B(JumpIfTrue), U8(111),
B(LdaNamedProperty), R(3), U8(7), U8(16),
B(Star), R(7),
B(TestUndetectable), R(7),
B(JumpIfFalse), U8(4),
B(Jump), U8(99),
B(LdaSmi), U8(1),
B(TestEqualStrict), R(5), U8(19),
B(JumpIfFalse), U8(67),
B(Ldar), R(7),
B(TypeOf),
B(Star), R(13),
B(LdaConstant), U8(8),
B(TestEqualStrict), R(13), U8(20),
B(JumpIfFalse), U8(4),
B(Jump), U8(18),
B(Wide), B(LdaSmi), U16(130),
B(Star), R(13),
B(LdaConstant), U8(9),
B(Star), R(14),
B(CallRuntime), U16(Runtime::kNewTypeError), R(13), U8(2),
B(Throw),
B(Mov), R(context), R(13),
B(Mov), R(7), R(14),
B(Mov), R(3), R(15),
B(InvokeIntrinsic), U8(Runtime::k_Call), R(14), U8(2),
B(Jump), U8(20),
B(Star), R(14),
B(Ldar), R(closure),
B(CreateCatchContext), R(14), U8(5), U8(10),
B(Star), R(13),
B(LdaTheHole),
B(SetPendingMessage),
B(Ldar), R(13),
B(PushContext), R(9),
B(PopContext), R(9),
B(Jump), U8(27),
B(Mov), R(7), R(13),
B(Mov), R(3), R(14),
B(InvokeIntrinsic), U8(Runtime::k_Call), R(13), U8(2),
B(Star), R(8),
B(InvokeIntrinsic), U8(Runtime::k_IsJSReceiver), R(8), U8(1),
B(JumpIfToBooleanFalse), U8(4),
B(Jump), U8(7),
B(CallRuntime), U16(Runtime::kThrowIteratorResultNotAnObject), R(8), U8(1),
B(Ldar), R(12),
B(SetPendingMessage),
B(LdaZero),
B(TestEqualStrict), R(10), U8(0),
B(JumpIfTrue), U8(11),
B(LdaSmi), U8(1),
B(TestEqualStrict), R(10), U8(0),
B(JumpIfTrue), U8(7),
B(Jump), U8(8),
B(Ldar), R(11),
/* 85 S> */ B(Return),
B(Ldar), R(11),
B(ReThrow),
B(LdaUndefined),
/* 85 S> */ B(Return),
]
constant pool: [
ONE_BYTE_INTERNALIZED_STRING_TYPE ["potatoes"],
SYMBOL_TYPE,
ONE_BYTE_INTERNALIZED_STRING_TYPE ["next"],
ONE_BYTE_INTERNALIZED_STRING_TYPE ["done"],
ONE_BYTE_INTERNALIZED_STRING_TYPE ["value"],
ONE_BYTE_INTERNALIZED_STRING_TYPE [".catch"],
FIXED_ARRAY_TYPE,
ONE_BYTE_INTERNALIZED_STRING_TYPE ["return"],
ONE_BYTE_INTERNALIZED_STRING_TYPE ["function"],
ONE_BYTE_INTERNALIZED_STRING_TYPE [""],
FIXED_ARRAY_TYPE,
]
handlers: [
[11, 129, 135],
[14, 93, 95],
[203, 213, 215],
]

---
snippet: "
for (var x of [10, 20, 30]) {
if (x == 10) continue;
if (x == 20) break;
}
"
frame size: 15
parameter count: 1
bytecode array length: 290
bytecodes: [
/* 30 E> */ B(StackCheck),
B(LdaZero),
B(Star), R(4),
B(Mov), R(context), R(11),
B(Mov), R(context), R(12),
/* 48 S> */ B(CreateArrayLiteral), U8(0), U8(0), U8(9),
B(Star), R(13),
B(LdaNamedProperty), R(13), U8(1), U8(2),
B(Star), R(14),
B(CallProperty), R(14), R(13), U8(1), U8(4),
B(JumpIfJSReceiver), U8(7),
B(CallRuntime), U16(Runtime::kThrowSymbolIteratorInvalid), R(0), U8(0),
B(Star), R(2),
/* 45 S> */ B(LdaNamedProperty), R(2), U8(2), U8(8),
B(Star), R(14),
/* 45 E> */ B(CallProperty), R(14), R(2), U8(1), U8(6),
B(Star), R(3),
/* 45 E> */ B(InvokeIntrinsic), U8(Runtime::k_IsJSReceiver), R(3), U8(1),
B(ToBooleanLogicalNot),
B(JumpIfFalse), U8(7),
B(CallRuntime), U16(Runtime::kThrowIteratorResultNotAnObject), R(3), U8(1),
B(LdaNamedProperty), R(3), U8(3), U8(10),
B(JumpIfToBooleanTrue), U8(43),
B(LdaNamedProperty), R(3), U8(4), U8(12),
B(Star), R(5),
B(LdaSmi), U8(2),
B(Star), R(4),
B(Mov), R(5), R(0),
/* 34 E> */ B(StackCheck),
B(Mov), R(0), R(1),
/* 66 S> */ B(LdaSmi), U8(10),
/* 72 E> */ B(TestEqual), R(1), U8(14),
B(JumpIfFalse), U8(4),
/* 79 S> */ B(Jump), U8(14),
/* 91 S> */ B(LdaSmi), U8(20),
/* 97 E> */ B(TestEqual), R(1), U8(15),
B(JumpIfFalse), U8(4),
/* 104 S> */ B(Jump), U8(8),
B(LdaZero),
B(Star), R(4),
B(JumpLoop), U8(-69), U8(0),
B(Jump), U8(36),
B(Star), R(13),
B(Ldar), R(closure),
B(CreateCatchContext), R(13), U8(5), U8(6),
B(Star), R(12),
B(PushContext), R(8),
B(LdaSmi), U8(2),
B(TestEqualStrict), R(4), U8(16),
B(JumpIfFalse), U8(6),
B(LdaSmi), U8(1),
B(Star), R(4),
B(LdaCurrentContextSlot), U8(4),
B(Star), R(13),
B(CallRuntime), U16(Runtime::kReThrow), R(13), U8(1),
B(PopContext), R(8),
B(LdaSmi), U8(-1),
B(Star), R(9),
B(Jump), U8(7),
B(Star), R(10),
B(LdaZero),
B(Star), R(9),
B(LdaTheHole),
B(SetPendingMessage),
B(Star), R(11),
B(LdaZero),
B(TestEqualStrict), R(4), U8(17),
B(JumpIfTrue), U8(111),
B(LdaNamedProperty), R(2), U8(7), U8(18),
B(Star), R(6),
B(TestUndetectable), R(6),
B(JumpIfFalse), U8(4),
B(Jump), U8(99),
B(LdaSmi), U8(1),
B(TestEqualStrict), R(4), U8(21),
B(JumpIfFalse), U8(67),
B(Ldar), R(6),
B(TypeOf),
B(Star), R(12),
B(LdaConstant), U8(8),
B(TestEqualStrict), R(12), U8(22),
B(JumpIfFalse), U8(4),
B(Jump), U8(18),
B(Wide), B(LdaSmi), U16(130),
B(Star), R(12),
B(LdaConstant), U8(9),
B(Star), R(13),
B(CallRuntime), U16(Runtime::kNewTypeError), R(12), U8(2),
B(Throw),
B(Mov), R(context), R(12),
B(Mov), R(6), R(13),
B(Mov), R(2), R(14),
B(InvokeIntrinsic), U8(Runtime::k_Call), R(13), U8(2),
B(Jump), U8(20),
B(Star), R(13),
B(Ldar), R(closure),
B(CreateCatchContext), R(13), U8(5), U8(10),
B(Star), R(12),
B(LdaTheHole),
B(SetPendingMessage),
B(Ldar), R(12),
B(PushContext), R(8),
B(PopContext), R(8),
B(Jump), U8(27),
B(Mov), R(6), R(12),
B(Mov), R(2), R(13),
B(InvokeIntrinsic), U8(Runtime::k_Call), R(12), U8(2),
B(Star), R(7),
B(InvokeIntrinsic), U8(Runtime::k_IsJSReceiver), R(7), U8(1),
B(JumpIfToBooleanFalse), U8(4),
B(Jump), U8(7),
B(CallRuntime), U16(Runtime::kThrowIteratorResultNotAnObject), R(7), U8(1),
B(Ldar), R(11),
B(SetPendingMessage),
B(LdaZero),
B(TestEqualStrict), R(9), U8(0),
B(JumpIfTrue), U8(4),
B(Jump), U8(5),
B(Ldar), R(10),
B(ReThrow),
B(LdaUndefined),
/* 113 S> */ B(Return),
]
constant pool: [
FIXED_ARRAY_TYPE,
SYMBOL_TYPE,
ONE_BYTE_INTERNALIZED_STRING_TYPE ["next"],
ONE_BYTE_INTERNALIZED_STRING_TYPE ["done"],
ONE_BYTE_INTERNALIZED_STRING_TYPE ["value"],
ONE_BYTE_INTERNALIZED_STRING_TYPE [".catch"],
FIXED_ARRAY_TYPE,
ONE_BYTE_INTERNALIZED_STRING_TYPE ["return"],
ONE_BYTE_INTERNALIZED_STRING_TYPE ["function"],
ONE_BYTE_INTERNALIZED_STRING_TYPE [""],
FIXED_ARRAY_TYPE,
]
handlers: [
[7, 144, 150],
[10, 108, 110],
[217, 227, 229],
]

---
snippet: "
var x = { 'a': 1, 'b': 2 };
for (x['a'] of [1,2,3]) { return x['a']; }
"
frame size: 14
parameter count: 1
bytecode array length: 297
bytecodes: [
/* 30 E> */ B(StackCheck),
/* 42 S> */ B(CreateObjectLiteral), U8(0), U8(0), U8(1), R(8),
B(Mov), R(8), R(0),
B(LdaZero),
B(Star), R(3),
B(Mov), R(context), R(10),
B(Mov), R(context), R(11),
/* 77 S> */ B(CreateArrayLiteral), U8(1), U8(1), U8(9),
B(Star), R(12),
B(LdaNamedProperty), R(12), U8(2), U8(2),
B(Star), R(13),
B(CallProperty), R(13), R(12), U8(1), U8(4),
B(JumpIfJSReceiver), U8(7),
B(CallRuntime), U16(Runtime::kThrowSymbolIteratorInvalid), R(0), U8(0),
B(Star), R(1),
/* 74 S> */ B(LdaNamedProperty), R(1), U8(3), U8(8),
B(Star), R(13),
/* 74 E> */ B(CallProperty), R(13), R(1), U8(1), U8(6),
B(Star), R(2),
/* 74 E> */ B(InvokeIntrinsic), U8(Runtime::k_IsJSReceiver), R(2), U8(1),
B(ToBooleanLogicalNot),
B(JumpIfFalse), U8(7),
B(CallRuntime), U16(Runtime::kThrowIteratorResultNotAnObject), R(2), U8(1),
B(LdaNamedProperty), R(2), U8(4), U8(10),
B(JumpIfToBooleanTrue), U8(31),
/* 67 E> */ B(LdaNamedProperty), R(2), U8(5), U8(12),
B(Star), R(4),
B(LdaSmi), U8(2),
B(Star), R(3),
B(Ldar), R(4),
B(StaNamedPropertySloppy), R(0), U8(6), U8(14),
/* 62 E> */ B(StackCheck),
/* 88 S> */ B(Nop),
/* 96 E> */ B(LdaNamedProperty), R(0), U8(6), U8(16),
B(Star), R(9),
B(LdaZero),
B(Star), R(8),
B(Jump), U8(50),
B(Jump), U8(36),
B(Star), R(12),
B(Ldar), R(closure),
B(CreateCatchContext), R(12), U8(7), U8(8),
B(Star), R(11),
B(PushContext), R(7),
B(LdaSmi), U8(2),
B(TestEqualStrict), R(3), U8(18),
B(JumpIfFalse), U8(6),
B(LdaSmi), U8(1),
B(Star), R(3),
B(LdaCurrentContextSlot), U8(4),
B(Star), R(12),
B(CallRuntime), U16(Runtime::kReThrow), R(12), U8(1),
B(PopContext), R(7),
B(LdaSmi), U8(-1),
B(Star), R(8),
B(Jump), U8(8),
B(Star), R(9),
B(LdaSmi), U8(1),
B(Star), R(8),
B(LdaTheHole),
B(SetPendingMessage),
B(Star), R(10),
B(LdaZero),
B(TestEqualStrict), R(3), U8(19),
B(JumpIfTrue), U8(111),
B(LdaNamedProperty), R(1), U8(9), U8(20),
B(Star), R(5),
B(TestUndetectable), R(5),
B(JumpIfFalse), U8(4),
B(Jump), U8(99),
B(LdaSmi), U8(1),
B(TestEqualStrict), R(3), U8(23),
B(JumpIfFalse), U8(67),
B(Ldar), R(5),
B(TypeOf),
B(Star), R(11),
B(LdaConstant), U8(10),
B(TestEqualStrict), R(11), U8(24),
B(JumpIfFalse), U8(4),
B(Jump), U8(18),
B(Wide), B(LdaSmi), U16(130),
B(Star), R(11),
B(LdaConstant), U8(11),
B(Star), R(12),
B(CallRuntime), U16(Runtime::kNewTypeError), R(11), U8(2),
B(Throw),
B(Mov), R(context), R(11),
B(Mov), R(5), R(12),
B(Mov), R(1), R(13),
B(InvokeIntrinsic), U8(Runtime::k_Call), R(12), U8(2),
B(Jump), U8(20),
B(Star), R(12),
B(Ldar), R(closure),
B(CreateCatchContext), R(12), U8(7), U8(12),
B(Star), R(11),
B(LdaTheHole),
B(SetPendingMessage),
B(Ldar), R(11),
B(PushContext), R(7),
B(PopContext), R(7),
B(Jump), U8(27),
B(Mov), R(5), R(11),
B(Mov), R(1), R(12),
B(InvokeIntrinsic), U8(Runtime::k_Call), R(11), U8(2),
B(Star), R(6),
B(InvokeIntrinsic), U8(Runtime::k_IsJSReceiver), R(6), U8(1),
B(JumpIfToBooleanFalse), U8(4),
B(Jump), U8(7),
B(CallRuntime), U16(Runtime::kThrowIteratorResultNotAnObject), R(6), U8(1),
B(Ldar), R(10),
B(SetPendingMessage),
B(LdaZero),
B(TestEqualStrict), R(8), U8(0),
B(JumpIfTrue), U8(11),
B(LdaSmi), U8(1),
B(TestEqualStrict), R(8), U8(0),
B(JumpIfTrue), U8(7),
B(Jump), U8(8),
B(Ldar), R(9),
/* 105 S> */ B(Return),
B(Ldar), R(9),
B(ReThrow),
B(LdaUndefined),
/* 105 S> */ B(Return),
]
constant pool: [
FIXED_ARRAY_TYPE,
FIXED_ARRAY_TYPE,
SYMBOL_TYPE,
ONE_BYTE_INTERNALIZED_STRING_TYPE ["next"],
ONE_BYTE_INTERNALIZED_STRING_TYPE ["done"],
ONE_BYTE_INTERNALIZED_STRING_TYPE ["value"],
ONE_BYTE_INTERNALIZED_STRING_TYPE ["a"],
ONE_BYTE_INTERNALIZED_STRING_TYPE [".catch"],
FIXED_ARRAY_TYPE,
ONE_BYTE_INTERNALIZED_STRING_TYPE ["return"],
ONE_BYTE_INTERNALIZED_STRING_TYPE ["function"],
ONE_BYTE_INTERNALIZED_STRING_TYPE [""],
FIXED_ARRAY_TYPE,
]
handlers: [
[15, 140, 146],
[18, 104, 106],
[214, 224, 226],
]