[mips][lite] Allocate feedback vectors lazily

Port: 7629afd https://crrev.com/c/1520719

Original Commit Message:

    Allocate feedback vectors lazily when the function's interrupt budget has
    reached a specified threshold. This CL introduces a new field in the
    ClosureFeedbackCellArray to track the interrupt budget for allocating
    feedback vectors. Using the interrupt budget on the bytecode array could
    cause problems when there are closures across native contexts: we may
    delay allocating feedback vectors in one of them, causing unexpected
    performance cliffs. In the long term we may want to remove the interrupt
    budget from the bytecode array and use a context-specific budget for
    tiering-up decisions as well.

Change-Id: Icddceec22df3dad7861a30f0190397db130db10d
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/1669116
Reviewed-by: Mythri Alle <mythria@chromium.org>
Commit-Queue: Yu Yin <xwafish@gmail.com>
Cr-Commit-Position: refs/heads/master@{#62301}
Authored by Yu Yin on 2019-06-20 15:22:50 +08:00, committed by Commit Bot
parent 696eae3fc2
commit 82622c52f5
2 changed files with 16 additions and 8 deletions
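
As a rough sketch of the mechanism described in the original commit message: the interrupt budget lives on the ClosureFeedbackCellArray (created per function per native context) rather than on the BytecodeArray shared by all of a function's closures, and the feedback vector is allocated only once that budget runs out. The C++ below is purely illustrative; the struct, field, and function names are assumptions, not V8's actual API.

struct ClosureFeedbackCellArray {
  // New budget field introduced by the original CL (name here is illustrative).
  int interrupt_budget_for_feedback_allocation;
};

struct JSFunctionLike {
  ClosureFeedbackCellArray* feedback_cells;
  bool has_feedback_vector = false;
};

// Hypothetical hook run on an interrupt-budget check in the interpreter:
// decrement the closure-local budget and allocate the feedback vector lazily
// once it is exhausted.
void OnInterruptBudgetCheck(JSFunctionLike* function, int weight) {
  ClosureFeedbackCellArray* cells = function->feedback_cells;
  cells->interrupt_budget_for_feedback_allocation -= weight;
  if (cells->interrupt_budget_for_feedback_allocation <= 0 &&
      !function->has_feedback_vector) {
    // AllocateFeedbackVector(function);  // actual allocation elided
    function->has_feedback_vector = true;
  }
}

Keeping the budget on the closure's feedback cell array rather than on the shared bytecode array means closures in one native context cannot delay feedback-vector allocation for closures of the same function in another context, which is the performance cliff the original commit message warns about.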


@@ -1056,10 +1056,14 @@ void Builtins::Generate_InterpreterEntryTrampoline(MacroAssembler* masm) {
   FrameScope frame_scope(masm, StackFrame::MANUAL);
   __ PushStandardFrame(closure);
 
-  // Reset code age.
-  DCHECK_EQ(0, BytecodeArray::kNoAgeBytecodeAge);
-  __ sb(zero_reg, FieldMemOperand(kInterpreterBytecodeArrayRegister,
-                                  BytecodeArray::kBytecodeAgeOffset));
+  // Reset code age and the OSR arming. The OSR field and BytecodeAgeOffset are
+  // 8-bit fields next to each other, so we could just optimize by writing a
+  // 16-bit. These static asserts guard our assumption is valid.
+  STATIC_ASSERT(BytecodeArray::kBytecodeAgeOffset ==
+                BytecodeArray::kOsrNestingLevelOffset + kCharSize);
+  STATIC_ASSERT(BytecodeArray::kNoAgeBytecodeAge == 0);
+  __ sh(zero_reg, FieldMemOperand(kInterpreterBytecodeArrayRegister,
+                                  BytecodeArray::kOsrNestingLevelOffset));
 
   // Load initial bytecode offset.
   __ li(kInterpreterBytecodeOffsetRegister,


@@ -1073,10 +1073,14 @@ void Builtins::Generate_InterpreterEntryTrampoline(MacroAssembler* masm) {
   FrameScope frame_scope(masm, StackFrame::MANUAL);
   __ PushStandardFrame(closure);
 
-  // Reset code age.
-  DCHECK_EQ(0, BytecodeArray::kNoAgeBytecodeAge);
-  __ sb(zero_reg, FieldMemOperand(kInterpreterBytecodeArrayRegister,
-                                  BytecodeArray::kBytecodeAgeOffset));
+  // Reset code age and the OSR arming. The OSR field and BytecodeAgeOffset are
+  // 8-bit fields next to each other, so we could just optimize by writing a
+  // 16-bit. These static asserts guard our assumption is valid.
+  STATIC_ASSERT(BytecodeArray::kBytecodeAgeOffset ==
+                BytecodeArray::kOsrNestingLevelOffset + kCharSize);
+  STATIC_ASSERT(BytecodeArray::kNoAgeBytecodeAge == 0);
+  __ sh(zero_reg, FieldMemOperand(kInterpreterBytecodeArrayRegister,
+                                  BytecodeArray::kOsrNestingLevelOffset));
 
   // Load initial bytecode offset.
   __ li(kInterpreterBytecodeOffsetRegister,
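
For reference, a minimal C++ sketch of the trick both hunks rely on: the OSR nesting level and the bytecode age are adjacent 8-bit fields in the BytecodeArray header, so a single 16-bit store (the MIPS sh in the new code, replacing the old byte-wide sb) clears both at once. The struct and function names below are illustrative stand-ins under that assumption, not V8's actual layout or API.

#include <cstddef>
#include <cstdint>
#include <cstring>

// Illustrative stand-in for the two adjacent 8-bit BytecodeArray header fields.
struct BytecodeHeaderFields {
  uint8_t osr_nesting_level;  // plays the role of kOsrNestingLevelOffset
  uint8_t bytecode_age;       // plays the role of kBytecodeAgeOffset
};

// Mirrors the STATIC_ASSERTs in the diff: the 16-bit store is only valid if
// the age field directly follows the OSR field.
static_assert(offsetof(BytecodeHeaderFields, bytecode_age) ==
                  offsetof(BytecodeHeaderFields, osr_nesting_level) + sizeof(char),
              "bytecode age must directly follow the OSR nesting level");

// Clears both fields with one 16-bit write, the C++ analogue of
//   sh zero_reg, kOsrNestingLevelOffset(bytecode_array)
// (kNoAgeBytecodeAge == 0, so storing zero also resets the age).
void ResetBytecodeAgeAndOsrState(BytecodeHeaderFields* fields) {
  const uint16_t zero = 0;
  std::memcpy(&fields->osr_nesting_level, &zero, sizeof(zero));
}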