Reland: [csa] verify skipped write-barriers in MemoryOptimizer

With very few exceptions, this verifies all skipped write barriers in
CSA and Torque, showing that the MemoryOptimizer, together with some
type information on the stored value, is enough to avoid unsafe skipped
write barriers.

Changes to CSA:
- SKIP_WRITE_BARRIER and Store*NoWriteBarrier are now verified by the
  MemoryOptimizer by default; the new UNSAFE_SKIP_WRITE_BARRIER and
  UnsafeStore*NoWriteBarrier variants opt out of verification.
- Type information about the stored value (TNode<Smi>) is exploited to
  safely skip write barriers for stored Smi values (sketched below).
- In some cases, the code is restructured to make it easier for the
  MemoryOptimizer to consume (manual branch and load elimination).
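
For illustration, a minimal CSA-style fragment (not taken from the patch;
MyCsaAssembler and StoreSmiAt are made-up names, while the call in the body
is one of the real overloads introduced below):

  // Hypothetical helper inside a CodeStubAssembler subclass.
  void MyCsaAssembler::StoreSmiAt(TNode<FixedArray> array,
                                  TNode<IntPtrT> index, TNode<Smi> value) {
    // The stored value is statically typed as Smi, so skipping the write
    // barrier is always safe; the Smi-typed overload encodes this. Only
    // SKIP_WRITE_BARRIER stores of possibly-pointer values are left for the
    // MemoryOptimizer to verify.
    StoreFixedArrayElement(array, index, value, SKIP_WRITE_BARRIER);
  }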

Changes to the MemoryOptimizer:
The MemoryOptimizer now removes write barriers in the following additional
cases (a condensed sketch of the new logic follows this list):
- When the store targets a CSA-generated InnerAllocate, recognized by
  looking through Bitcasts and additions.
- When the stored value is a HeapConstant of an immortal immovable root.
- When the stored value is a SmiConstant (recognized via
  BitcastWordToTaggedSigned).
- Fast C-calls are treated as non-allocating, so they do not invalidate the
  allocation state.
- Runtime calls can be white-listed as non-allocating (Runtime::MayAllocate).
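
Condensed, this is the decision the patch adds to
MemoryOptimizer::ComputeWriteBarrierKind (shown in full in the
memory-optimizer.cc hunks below); the signature is simplified and the
detailed error reporting is omitted. The stored-value check is also where
the reland's pointer compression operands (Change*ToCompressed* nodes) are
handled.

  WriteBarrierKind ComputeKind(Node* node, Node* object, Node* value,
                               AllocationState const* state,
                               WriteBarrierKind kind, Isolate* isolate) {
    // The store targets an object that was just allocated in the young
    // generation (including CSA InnerAllocates reached through bitcasts and
    // additions), so no barrier is needed.
    if (state->IsYoungGenerationAllocation() &&
        state->group()->Contains(object)) {
      kind = kNoWriteBarrier;
    }
    // The stored value never needs a barrier, e.g. a Smi (recognized via
    // BitcastWordToTaggedSigned) or an immortal immovable root constant.
    if (!ValueNeedsWriteBarrier(value, isolate)) {
      kind = kNoWriteBarrier;
    }
    // A barrier that CSA promised to skip but that could not be removed is
    // a verification failure and aborts mksnapshot with a diagnostic.
    if (kind == kAssertNoWriteBarrier) {
      FATAL("MemoryOptimizer could not remove write barrier for node #%d",
            static_cast<int>(node->id()));
    }
    return kind;
  }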

Remaining missing cases:
- C++-style iterator loops with inner pointers.
- Inner allocates that are reloaded from a field where they were just stored
  (for example an elements backing store). Load elimination would fix that.
- Safe stored value types that cannot be expressed in CSA (e.g., Smi|Hole).
  We could handle that in Torque.
- Double-aligned allocations, which are lowered in CSA rather than in the
  MemoryOptimizer.

Drive-by change: Drop the Smi suffix from StoreFixedArrayElement, since the
right variant can be selected by overload resolution (in both Torque and
C++); see the before/after fragment below.
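
A before/after fragment mirroring the Torque and C++ hunks below (the
variable names are illustrative only):

  // Before: the value type is spelled out in the macro name.
  StoreFixedArrayElementSmi(elements, index, value, SKIP_WRITE_BARRIER);
  // After: overload resolution picks the Smi-typed variant because value
  // is a Smi (TNode<Smi> in CSA, Smi in Torque).
  StoreFixedArrayElement(elements, index, value, SKIP_WRITE_BARRIER);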

Reland change: Support pointer compression operands (the
Change*ToCompressed* nodes) in the stored-value check.

R=jarin@chromium.org
TBR=mvstanton@chromium.org

Bug: v8:7793
Change-Id: I84e1831eb6bf9be14f36db3f8b485ee4fab6b22e
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/1612904
Auto-Submit: Tobias Tebbi <tebbi@chromium.org>
Reviewed-by: Michael Stanton <mvstanton@chromium.org>
Commit-Queue: Tobias Tebbi <tebbi@chromium.org>
Cr-Commit-Position: refs/heads/master@{#61522}
Author: Tobias Tebbi <tebbi@chromium.org>
Date:   2019-05-15 10:45:36 +02:00 (committed by Commit Bot)
Parent: 444f83d937
Commit: a19c3ffb8f
24 changed files with 355 additions and 106 deletions

@ -35,7 +35,7 @@ namespace array_reverse {
StoreElement<array::FastPackedSmiElements, Smi>(implicit context: Context)(
elements: FixedArrayBase, index: Smi, value: Smi) {
const elems: FixedArray = UnsafeCast<FixedArray>(elements);
StoreFixedArrayElementSmi(elems, index, value, SKIP_WRITE_BARRIER);
StoreFixedArrayElement(elems, index, value, SKIP_WRITE_BARRIER);
}
StoreElement<array::FastPackedObjectElements, Object>(

@ -66,8 +66,12 @@ namespace array_slice {
const newElement: Object = e != Hole ?
argumentsContext[UnsafeCast<Smi>(e)] :
unmappedElements.objects[current];
StoreFixedArrayElementSmi(
resultElements, indexOut++, newElement, SKIP_WRITE_BARRIER);
// It is safe to skip the write barrier here because resultElements was
// allocated together with result in a folded allocation.
// TODO(tebbi): The verification of this fails at the moment due to
// missing load elimination.
StoreFixedArrayElement(
resultElements, indexOut++, newElement, UNSAFE_SKIP_WRITE_BARRIER);
}
// Fill in the rest of the result that contains the unmapped parameters

@ -1001,6 +1001,8 @@ const INTPTR_PARAMETERS: constexpr ParameterMode
const SKIP_WRITE_BARRIER:
constexpr WriteBarrierMode generates 'SKIP_WRITE_BARRIER';
const UNSAFE_SKIP_WRITE_BARRIER:
constexpr WriteBarrierMode generates 'UNSAFE_SKIP_WRITE_BARRIER';
extern class AsyncGeneratorRequest extends Struct {
next: AsyncGeneratorRequest | Undefined;
@ -2186,12 +2188,20 @@ extern operator '.objects[]=' macro StoreFixedArrayElement(
FixedArray, constexpr int31, Smi): void;
extern operator '.objects[]=' macro StoreFixedArrayElement(
FixedArray, constexpr int31, HeapObject): void;
extern operator '.objects[]=' macro StoreFixedArrayElementSmi(
extern operator '.objects[]=' macro StoreFixedArrayElement(
FixedArray, Smi, Object): void;
extern operator '.objects[]=' macro StoreFixedArrayElementSmi(
extern macro StoreFixedArrayElement(
FixedArray, Smi, Object, constexpr WriteBarrierMode): void;
extern macro StoreFixedArrayElement(
FixedArray, Smi, Smi, constexpr WriteBarrierMode): void;
extern macro StoreFixedArrayElement(
FixedArray, constexpr int31, Object, constexpr WriteBarrierMode): void;
extern macro StoreFixedArrayElement(
FixedArray, constexpr int31, Smi, constexpr WriteBarrierMode): void;
extern macro StoreFixedArrayElement(
FixedArray, intptr, Object, constexpr WriteBarrierMode): void;
extern macro StoreFixedArrayElement(
FixedArray, intptr, Smi, constexpr WriteBarrierMode): void;
extern operator '.floats[]=' macro StoreFixedDoubleArrayElement(
FixedDoubleArray, intptr, float64): void;
extern operator '.floats[]=' macro StoreFixedDoubleArrayElementSmi(

@ -1578,8 +1578,8 @@ void CollectionsBuiltinsAssembler::StoreOrderedHashMapNewEntry(
Node* const hash, Node* const number_of_buckets, Node* const occupancy) {
Node* const bucket =
WordAnd(hash, IntPtrSub(number_of_buckets, IntPtrConstant(1)));
Node* const bucket_entry = UnsafeLoadFixedArrayElement(
table, bucket, OrderedHashMap::HashTableStartIndex() * kTaggedSize);
TNode<Smi> bucket_entry = CAST(UnsafeLoadFixedArrayElement(
table, bucket, OrderedHashMap::HashTableStartIndex() * kTaggedSize));
// Store the entry elements.
Node* const entry_start = IntPtrAdd(
@ -1752,8 +1752,8 @@ void CollectionsBuiltinsAssembler::StoreOrderedHashSetNewEntry(
Node* const number_of_buckets, Node* const occupancy) {
Node* const bucket =
WordAnd(hash, IntPtrSub(number_of_buckets, IntPtrConstant(1)));
Node* const bucket_entry = UnsafeLoadFixedArrayElement(
table, bucket, OrderedHashSet::HashTableStartIndex() * kTaggedSize);
TNode<Smi> bucket_entry = CAST(UnsafeLoadFixedArrayElement(
table, bucket, OrderedHashSet::HashTableStartIndex() * kTaggedSize));
// Store the entry elements.
Node* const entry_start = IntPtrAdd(

@ -35,7 +35,7 @@ TNode<IntPtrT> RegExpBuiltinsAssembler::IntPtrZero() {
TNode<JSRegExpResult> RegExpBuiltinsAssembler::AllocateRegExpResult(
TNode<Context> context, TNode<Smi> length, TNode<Smi> index,
TNode<String> input) {
TNode<String> input, TNode<FixedArray>* elements_out) {
#ifdef DEBUG
TNode<Smi> max_length = SmiConstant(JSArray::kInitialMaxFastElementArray);
CSA_ASSERT(this, SmiLessThanOrEqual(length, max_length));
@ -89,6 +89,7 @@ TNode<JSRegExpResult> RegExpBuiltinsAssembler::AllocateRegExpResult(
FillFixedArrayWithValue(elements_kind, elements, IntPtrZero(), length_intptr,
RootIndex::kUndefinedValue);
if (elements_out) *elements_out = CAST(elements);
return CAST(result);
}
@ -177,9 +178,9 @@ TNode<JSRegExpResult> RegExpBuiltinsAssembler::ConstructNewResultFromMatchInfo(
TNode<String> first =
CAST(CallBuiltin(Builtins::kSubString, context, string, start, end));
TNode<JSRegExpResult> result =
AllocateRegExpResult(context, num_results, start, string);
TNode<FixedArray> result_elements = CAST(LoadElements(result));
TNode<FixedArray> result_elements;
TNode<JSRegExpResult> result = AllocateRegExpResult(
context, num_results, start, string, &result_elements);
UnsafeStoreFixedArrayElement(result_elements, 0, first, SKIP_WRITE_BARRIER);

@ -38,10 +38,9 @@ class RegExpBuiltinsAssembler : public CodeStubAssembler {
// Allocate a RegExpResult with the given length (the number of captures,
// including the match itself), index (the index where the match starts),
// and input string.
TNode<JSRegExpResult> AllocateRegExpResult(TNode<Context> context,
TNode<Smi> length,
TNode<Smi> index,
TNode<String> input);
TNode<JSRegExpResult> AllocateRegExpResult(
TNode<Context> context, TNode<Smi> length, TNode<Smi> index,
TNode<String> input, TNode<FixedArray>* elements_out = nullptr);
TNode<Object> FastLoadLastIndex(TNode<JSRegExp> regexp);
TNode<Object> SlowLoadLastIndex(TNode<Context> context, TNode<Object> regexp);

@ -127,7 +127,7 @@ TNode<FixedTypedArrayBase> TypedArrayBuiltinsAssembler::AllocateOnHeapElements(
// pointer.
CSA_ASSERT(this, IsRegularHeapObjectSize(total_size));
TNode<Object> elements;
TNode<HeapObject> elements;
if (UnalignedLoadSupported(MachineRepresentation::kFloat64) &&
UnalignedStoreSupported(MachineRepresentation::kFloat64)) {
@ -136,9 +136,13 @@ TNode<FixedTypedArrayBase> TypedArrayBuiltinsAssembler::AllocateOnHeapElements(
elements = AllocateInNewSpace(total_size, kDoubleAlignment);
}
StoreMapNoWriteBarrier(elements, map);
StoreObjectFieldNoWriteBarrier(elements, FixedArray::kLengthOffset, length);
StoreObjectFieldNoWriteBarrier(
// These skipped write barriers are marked unsafe because the MemoryOptimizer
// currently doesn't handle double alignment, so it fails at verifying them.
UnsafeStoreObjectFieldNoWriteBarrier(elements,
FixedTypedArrayBase::kMapOffset, map);
UnsafeStoreObjectFieldNoWriteBarrier(
elements, FixedTypedArrayBase::kLengthOffset, length);
UnsafeStoreObjectFieldNoWriteBarrier(
elements, FixedTypedArrayBase::kBasePointerOffset, elements);
StoreObjectFieldNoWriteBarrier(
elements, FixedTypedArrayBase::kExternalPointerOffset,

@ -2738,12 +2738,24 @@ void CodeStubAssembler::StoreObjectField(Node* object, Node* offset,
void CodeStubAssembler::StoreObjectFieldNoWriteBarrier(
Node* object, int offset, Node* value, MachineRepresentation rep) {
OptimizedStoreFieldNoWriteBarrier(rep, UncheckedCast<HeapObject>(object),
offset, value);
if (CanBeTaggedPointer(rep)) {
OptimizedStoreFieldAssertNoWriteBarrier(
rep, UncheckedCast<HeapObject>(object), offset, value);
} else {
OptimizedStoreFieldUnsafeNoWriteBarrier(
rep, UncheckedCast<HeapObject>(object), offset, value);
}
}
void CodeStubAssembler::UnsafeStoreObjectFieldNoWriteBarrier(
TNode<HeapObject> object, int offset, TNode<Object> value) {
OptimizedStoreFieldUnsafeNoWriteBarrier(MachineRepresentation::kTagged,
object, offset, value);
}
void CodeStubAssembler::StoreObjectFieldNoWriteBarrier(
Node* object, Node* offset, Node* value, MachineRepresentation rep) {
Node* object, SloppyTNode<IntPtrT> offset, Node* value,
MachineRepresentation rep) {
int const_offset;
if (ToInt32Constant(offset, const_offset)) {
return StoreObjectFieldNoWriteBarrier(object, const_offset, value, rep);
@ -2763,9 +2775,9 @@ void CodeStubAssembler::StoreMapNoWriteBarrier(Node* object,
void CodeStubAssembler::StoreMapNoWriteBarrier(Node* object, Node* map) {
CSA_SLOW_ASSERT(this, IsMap(map));
OptimizedStoreFieldNoWriteBarrier(MachineRepresentation::kTaggedPointer,
UncheckedCast<HeapObject>(object),
HeapObject::kMapOffset, map);
OptimizedStoreFieldAssertNoWriteBarrier(MachineRepresentation::kTaggedPointer,
UncheckedCast<HeapObject>(object),
HeapObject::kMapOffset, map);
}
void CodeStubAssembler::StoreObjectFieldRoot(Node* object, int offset,
@ -2794,6 +2806,7 @@ void CodeStubAssembler::StoreFixedArrayOrPropertyArrayElement(
this, Word32Or(IsFixedArraySubclass(object), IsPropertyArray(object)));
CSA_SLOW_ASSERT(this, MatchesParameterMode(index_node, parameter_mode));
DCHECK(barrier_mode == SKIP_WRITE_BARRIER ||
barrier_mode == UNSAFE_SKIP_WRITE_BARRIER ||
barrier_mode == UPDATE_WRITE_BARRIER ||
barrier_mode == UPDATE_EPHEMERON_KEY_WRITE_BARRIER);
DCHECK(IsAligned(additional_offset, kTaggedSize));
@ -2828,6 +2841,9 @@ void CodeStubAssembler::StoreFixedArrayOrPropertyArrayElement(
FixedArray::kHeaderSize));
if (barrier_mode == SKIP_WRITE_BARRIER) {
StoreNoWriteBarrier(MachineRepresentation::kTagged, object, offset, value);
} else if (barrier_mode == UNSAFE_SKIP_WRITE_BARRIER) {
UnsafeStoreNoWriteBarrier(MachineRepresentation::kTagged, object, offset,
value);
} else if (barrier_mode == UPDATE_EPHEMERON_KEY_WRITE_BARRIER) {
StoreEphemeronKey(object, offset, value);
} else {
@ -2862,6 +2878,7 @@ void CodeStubAssembler::StoreFeedbackVectorSlot(Node* object,
CSA_SLOW_ASSERT(this, MatchesParameterMode(slot_index_node, parameter_mode));
DCHECK(IsAligned(additional_offset, kTaggedSize));
DCHECK(barrier_mode == SKIP_WRITE_BARRIER ||
barrier_mode == UNSAFE_SKIP_WRITE_BARRIER ||
barrier_mode == UPDATE_WRITE_BARRIER);
int header_size =
FeedbackVector::kFeedbackSlotsOffset + additional_offset - kHeapObjectTag;
@ -2873,6 +2890,9 @@ void CodeStubAssembler::StoreFeedbackVectorSlot(Node* object,
FeedbackVector::kHeaderSize));
if (barrier_mode == SKIP_WRITE_BARRIER) {
StoreNoWriteBarrier(MachineRepresentation::kTagged, object, offset, value);
} else if (barrier_mode == UNSAFE_SKIP_WRITE_BARRIER) {
UnsafeStoreNoWriteBarrier(MachineRepresentation::kTagged, object, offset,
value);
} else {
Store(object, offset, value);
}
@ -3808,7 +3828,8 @@ void CodeStubAssembler::StoreFieldsNoWriteBarrier(Node* start_address,
BuildFastLoop(
start_address, end_address,
[this, value](Node* current) {
StoreNoWriteBarrier(MachineRepresentation::kTagged, current, value);
UnsafeStoreNoWriteBarrier(MachineRepresentation::kTagged, current,
value);
},
kTaggedSize, INTPTR_PARAMETERS, IndexAdvanceMode::kPost);
}
@ -3850,23 +3871,28 @@ CodeStubAssembler::AllocateUninitializedJSArrayWithElements(
TVARIABLE(JSArray, array);
TVARIABLE(FixedArrayBase, elements);
if (IsIntPtrOrSmiConstantZero(capacity, capacity_mode)) {
TNode<FixedArrayBase> empty_array = EmptyFixedArrayConstant();
array = AllocateJSArray(array_map, empty_array, length, allocation_site);
return {array.value(), empty_array};
}
Label out(this), empty(this), nonempty(this);
Branch(SmiEqual(ParameterToTagged(capacity, capacity_mode), SmiConstant(0)),
&empty, &nonempty);
int capacity_int;
if (TryGetIntPtrOrSmiConstantValue(capacity, &capacity_int, capacity_mode)) {
if (capacity_int == 0) {
TNode<FixedArrayBase> empty_array = EmptyFixedArrayConstant();
array = AllocateJSArray(array_map, empty_array, length, allocation_site);
return {array.value(), empty_array};
} else {
Goto(&nonempty);
}
} else {
Branch(SmiEqual(ParameterToTagged(capacity, capacity_mode), SmiConstant(0)),
&empty, &nonempty);
BIND(&empty);
{
TNode<FixedArrayBase> empty_array = EmptyFixedArrayConstant();
array = AllocateJSArray(array_map, empty_array, length, allocation_site);
elements = empty_array;
Goto(&out);
BIND(&empty);
{
TNode<FixedArrayBase> empty_array = EmptyFixedArrayConstant();
array = AllocateJSArray(array_map, empty_array, length, allocation_site);
elements = empty_array;
Goto(&out);
}
}
BIND(&nonempty);
@ -4961,8 +4987,8 @@ void CodeStubAssembler::CopyFixedArrayElements(
StoreNoWriteBarrier(MachineRepresentation::kFloat64, to_array_adjusted,
to_offset, value);
} else {
StoreNoWriteBarrier(MachineRepresentation::kTagged, to_array_adjusted,
to_offset, value);
UnsafeStoreNoWriteBarrier(MachineRepresentation::kTagged,
to_array_adjusted, to_offset, value);
}
Goto(&next_iter);
@ -10314,8 +10340,9 @@ void CodeStubAssembler::StoreElement(Node* elements, ElementsKind kind,
TNode<Float64T> value_float64 = UncheckedCast<Float64T>(value);
StoreFixedDoubleArrayElement(CAST(elements), index, value_float64, mode);
} else {
WriteBarrierMode barrier_mode =
IsSmiElementsKind(kind) ? SKIP_WRITE_BARRIER : UPDATE_WRITE_BARRIER;
WriteBarrierMode barrier_mode = IsSmiElementsKind(kind)
? UNSAFE_SKIP_WRITE_BARRIER
: UPDATE_WRITE_BARRIER;
StoreFixedArrayElement(CAST(elements), index, value, barrier_mode, 0, mode);
}
}

@ -1304,18 +1304,20 @@ class V8_EXPORT_PRIVATE CodeStubAssembler
void StoreObjectFieldNoWriteBarrier(
Node* object, int offset, Node* value,
MachineRepresentation rep = MachineRepresentation::kTagged);
void UnsafeStoreObjectFieldNoWriteBarrier(TNode<HeapObject> object,
int offset, TNode<Object> value);
void StoreObjectFieldNoWriteBarrier(
Node* object, Node* offset, Node* value,
Node* object, SloppyTNode<IntPtrT> offset, Node* value,
MachineRepresentation rep = MachineRepresentation::kTagged);
template <class T = Object>
void StoreObjectFieldNoWriteBarrier(TNode<HeapObject> object,
TNode<IntPtrT> offset, TNode<T> value) {
void StoreObjectFieldNoWriteBarrier(Node* object, SloppyTNode<IntPtrT> offset,
TNode<T> value) {
StoreObjectFieldNoWriteBarrier(object, offset, value,
MachineRepresentationOf<T>::value);
}
template <class T = Object>
void StoreObjectFieldNoWriteBarrier(TNode<HeapObject> object, int offset,
void StoreObjectFieldNoWriteBarrier(Node* object, int offset,
TNode<T> value) {
StoreObjectFieldNoWriteBarrier(object, offset, value,
MachineRepresentationOf<T>::value);
@ -1343,12 +1345,20 @@ class V8_EXPORT_PRIVATE CodeStubAssembler
return StoreFixedArrayElement(object, index, value, barrier_mode,
CheckBounds::kDebugOnly);
}
void UnsafeStoreFixedArrayElement(
TNode<FixedArray> object, int index, TNode<Smi> value,
WriteBarrierMode barrier_mode = SKIP_WRITE_BARRIER) {
DCHECK_EQ(SKIP_WRITE_BARRIER, barrier_mode);
return StoreFixedArrayElement(object, index, value,
UNSAFE_SKIP_WRITE_BARRIER,
CheckBounds::kDebugOnly);
}
void StoreFixedArrayElement(TNode<FixedArray> object, int index,
TNode<Smi> value,
CheckBounds check_bounds = CheckBounds::kAlways) {
return StoreFixedArrayElement(object, IntPtrConstant(index), value,
SKIP_WRITE_BARRIER, 0, INTPTR_PARAMETERS,
check_bounds);
UNSAFE_SKIP_WRITE_BARRIER, 0,
INTPTR_PARAMETERS, check_bounds);
}
// This doesn't emit a bounds-check. As part of the security-performance
// tradeoff, only use it if it is performance critical.
@ -1391,6 +1401,16 @@ class V8_EXPORT_PRIVATE CodeStubAssembler
additional_offset, parameter_mode,
CheckBounds::kDebugOnly);
}
void UnsafeStoreFixedArrayElement(
TNode<FixedArray> array, Node* index, TNode<Smi> value,
WriteBarrierMode barrier_mode = SKIP_WRITE_BARRIER,
int additional_offset = 0,
ParameterMode parameter_mode = INTPTR_PARAMETERS) {
DCHECK_EQ(SKIP_WRITE_BARRIER, barrier_mode);
return StoreFixedArrayElement(array, index, value,
UNSAFE_SKIP_WRITE_BARRIER, additional_offset,
parameter_mode, CheckBounds::kDebugOnly);
}
void StorePropertyArrayElement(
TNode<PropertyArray> array, Node* index, SloppyTNode<Object> value,
@ -1401,19 +1421,27 @@ class V8_EXPORT_PRIVATE CodeStubAssembler
additional_offset, parameter_mode);
}
void StoreFixedArrayElementSmi(
void StoreFixedArrayElement(
TNode<FixedArray> array, TNode<Smi> index, TNode<Object> value,
WriteBarrierMode barrier_mode = UPDATE_WRITE_BARRIER) {
StoreFixedArrayElement(array, index, value, barrier_mode, 0,
SMI_PARAMETERS);
}
void StoreFixedArrayElement(TNode<FixedArray> array, TNode<IntPtrT> index,
TNode<Smi> value) {
StoreFixedArrayElement(array, index, value, SKIP_WRITE_BARRIER, 0);
void StoreFixedArrayElement(
TNode<FixedArray> array, TNode<IntPtrT> index, TNode<Smi> value,
WriteBarrierMode barrier_mode = SKIP_WRITE_BARRIER,
int additional_offset = 0) {
DCHECK_EQ(SKIP_WRITE_BARRIER, barrier_mode);
StoreFixedArrayElement(array, index, TNode<Object>{value},
UNSAFE_SKIP_WRITE_BARRIER, additional_offset);
}
void StoreFixedArrayElement(TNode<FixedArray> array, TNode<Smi> index,
TNode<Smi> value) {
StoreFixedArrayElement(array, index, value, SKIP_WRITE_BARRIER, 0,
void StoreFixedArrayElement(
TNode<FixedArray> array, TNode<Smi> index, TNode<Smi> value,
WriteBarrierMode barrier_mode = SKIP_WRITE_BARRIER,
int additional_offset = 0) {
DCHECK_EQ(SKIP_WRITE_BARRIER, barrier_mode);
StoreFixedArrayElement(array, index, TNode<Object>{value},
UNSAFE_SKIP_WRITE_BARRIER, additional_offset,
SMI_PARAMETERS);
}

@ -220,7 +220,7 @@ CallDescriptor* Linkage::GetSimplifiedCDescriptor(
// The target for C calls is always an address (i.e. machine pointer).
MachineType target_type = MachineType::Pointer();
LinkageLocation target_loc = LinkageLocation::ForAnyRegister(target_type);
CallDescriptor::Flags flags = CallDescriptor::kNoFlags;
CallDescriptor::Flags flags = CallDescriptor::kNoAllocate;
if (set_initialize_root_flag) {
flags |= CallDescriptor::kInitializeRootRegister;
}

@ -999,9 +999,16 @@ void CodeAssembler::OptimizedStoreField(MachineRepresentation rep,
WriteBarrierKind::kFullWriteBarrier);
}
void CodeAssembler::OptimizedStoreFieldNoWriteBarrier(MachineRepresentation rep,
TNode<HeapObject> object,
int offset, Node* value) {
void CodeAssembler::OptimizedStoreFieldAssertNoWriteBarrier(
MachineRepresentation rep, TNode<HeapObject> object, int offset,
Node* value) {
raw_assembler()->OptimizedStoreField(rep, object, offset, value,
WriteBarrierKind::kAssertNoWriteBarrier);
}
void CodeAssembler::OptimizedStoreFieldUnsafeNoWriteBarrier(
MachineRepresentation rep, TNode<HeapObject> object, int offset,
Node* value) {
raw_assembler()->OptimizedStoreField(rep, object, offset, value,
WriteBarrierKind::kNoWriteBarrier);
}
@ -1023,11 +1030,26 @@ Node* CodeAssembler::StoreEphemeronKey(Node* base, Node* offset, Node* value) {
Node* CodeAssembler::StoreNoWriteBarrier(MachineRepresentation rep, Node* base,
Node* value) {
return raw_assembler()->Store(rep, base, value, kNoWriteBarrier);
return raw_assembler()->Store(
rep, base, value,
CanBeTaggedPointer(rep) ? kAssertNoWriteBarrier : kNoWriteBarrier);
}
Node* CodeAssembler::StoreNoWriteBarrier(MachineRepresentation rep, Node* base,
Node* offset, Node* value) {
return raw_assembler()->Store(
rep, base, offset, value,
CanBeTaggedPointer(rep) ? kAssertNoWriteBarrier : kNoWriteBarrier);
}
Node* CodeAssembler::UnsafeStoreNoWriteBarrier(MachineRepresentation rep,
Node* base, Node* value) {
return raw_assembler()->Store(rep, base, value, kNoWriteBarrier);
}
Node* CodeAssembler::UnsafeStoreNoWriteBarrier(MachineRepresentation rep,
Node* base, Node* offset,
Node* value) {
return raw_assembler()->Store(rep, base, offset, value, kNoWriteBarrier);
}
@ -1194,7 +1216,8 @@ TNode<Object> CodeAssembler::CallRuntimeWithCEntryImpl(
int argc = static_cast<int>(args.size());
auto call_descriptor = Linkage::GetRuntimeCallDescriptor(
zone(), function, argc, Operator::kNoProperties,
CallDescriptor::kNoFlags);
Runtime::MayAllocate(function) ? CallDescriptor::kNoFlags
: CallDescriptor::kNoAllocate);
Node* ref = ExternalConstant(ExternalReference::Create(function));
Node* arity = Int32Constant(argc);

@ -965,6 +965,11 @@ class V8_EXPORT_PRIVATE CodeAssembler {
Node* StoreNoWriteBarrier(MachineRepresentation rep, Node* base, Node* value);
Node* StoreNoWriteBarrier(MachineRepresentation rep, Node* base, Node* offset,
Node* value);
Node* UnsafeStoreNoWriteBarrier(MachineRepresentation rep, Node* base,
Node* value);
Node* UnsafeStoreNoWriteBarrier(MachineRepresentation rep, Node* base,
Node* offset, Node* value);
// Stores uncompressed tagged value to (most likely off JS heap) memory
// location without write barrier.
Node* StoreFullTaggedNoWriteBarrier(Node* base, Node* tagged_value);
@ -977,9 +982,12 @@ class V8_EXPORT_PRIVATE CodeAssembler {
AllowLargeObjects allow_large_objects);
void OptimizedStoreField(MachineRepresentation rep, TNode<HeapObject> object,
int offset, Node* value);
void OptimizedStoreFieldNoWriteBarrier(MachineRepresentation rep,
TNode<HeapObject> object, int offset,
Node* value);
void OptimizedStoreFieldAssertNoWriteBarrier(MachineRepresentation rep,
TNode<HeapObject> object,
int offset, Node* value);
void OptimizedStoreFieldUnsafeNoWriteBarrier(MachineRepresentation rep,
TNode<HeapObject> object,
int offset, Node* value);
void OptimizedStoreMap(TNode<HeapObject> object, TNode<Map>);
// {value_high} is used for 64-bit stores on 32-bit platforms, must be
// nullptr in other cases.

@ -550,6 +550,11 @@ struct MachineOperatorGlobalCache {
Store##Type##NoWriteBarrier##Operator() \
: Store##Type##Operator(kNoWriteBarrier) {} \
}; \
struct Store##Type##AssertNoWriteBarrier##Operator final \
: public Store##Type##Operator { \
Store##Type##AssertNoWriteBarrier##Operator() \
: Store##Type##Operator(kAssertNoWriteBarrier) {} \
}; \
struct Store##Type##MapWriteBarrier##Operator final \
: public Store##Type##Operator { \
Store##Type##MapWriteBarrier##Operator() \
@ -590,6 +595,8 @@ struct MachineOperatorGlobalCache {
kNoWriteBarrier)) {} \
}; \
Store##Type##NoWriteBarrier##Operator kStore##Type##NoWriteBarrier; \
Store##Type##AssertNoWriteBarrier##Operator \
kStore##Type##AssertNoWriteBarrier; \
Store##Type##MapWriteBarrier##Operator kStore##Type##MapWriteBarrier; \
Store##Type##PointerWriteBarrier##Operator \
kStore##Type##PointerWriteBarrier; \
@ -945,6 +952,8 @@ const Operator* MachineOperatorBuilder::Store(StoreRepresentation store_rep) {
switch (store_rep.write_barrier_kind()) { \
case kNoWriteBarrier: \
return &cache_.k##Store##kRep##NoWriteBarrier; \
case kAssertNoWriteBarrier: \
return &cache_.k##Store##kRep##AssertNoWriteBarrier; \
case kMapWriteBarrier: \
return &cache_.k##Store##kRep##MapWriteBarrier; \
case kPointerWriteBarrier: \

@ -11,6 +11,7 @@
#include "src/compiler/node.h"
#include "src/compiler/simplified-operator.h"
#include "src/interface-descriptors.h"
#include "src/roots-inl.h"
namespace v8 {
namespace internal {
@ -18,7 +19,8 @@ namespace compiler {
MemoryOptimizer::MemoryOptimizer(JSGraph* jsgraph, Zone* zone,
PoisoningMitigationLevel poisoning_level,
AllocationFolding allocation_folding)
AllocationFolding allocation_folding,
const char* function_debug_name)
: jsgraph_(jsgraph),
empty_state_(AllocationState::Empty(zone)),
pending_(zone),
@ -26,7 +28,8 @@ MemoryOptimizer::MemoryOptimizer(JSGraph* jsgraph, Zone* zone,
zone_(zone),
graph_assembler_(jsgraph, nullptr, nullptr, zone),
poisoning_level_(poisoning_level),
allocation_folding_(allocation_folding) {}
allocation_folding_(allocation_folding),
function_debug_name_(function_debug_name) {}
void MemoryOptimizer::Optimize() {
EnqueueUses(graph()->start(), empty_state());
@ -58,7 +61,21 @@ void MemoryOptimizer::AllocationGroup::Add(Node* node) {
}
bool MemoryOptimizer::AllocationGroup::Contains(Node* node) const {
return node_ids_.find(node->id()) != node_ids_.end();
// Additions should stay within the same allocated object, so it's safe to
// ignore them.
while (node_ids_.find(node->id()) == node_ids_.end()) {
switch (node->opcode()) {
case IrOpcode::kBitcastTaggedToWord:
case IrOpcode::kBitcastWordToTagged:
case IrOpcode::kInt32Add:
case IrOpcode::kInt64Add:
node = NodeProperties::GetValueInput(node, 0);
break;
default:
return false;
}
}
return true;
}
MemoryOptimizer::AllocationState::AllocationState()
@ -86,6 +103,7 @@ bool CanAllocate(const Node* node) {
case IrOpcode::kDebugBreak:
case IrOpcode::kDeoptimizeIf:
case IrOpcode::kDeoptimizeUnless:
case IrOpcode::kEffectPhi:
case IrOpcode::kIfException:
case IrOpcode::kLoad:
case IrOpcode::kLoadElement:
@ -94,6 +112,10 @@ bool CanAllocate(const Node* node) {
case IrOpcode::kProtectedLoad:
case IrOpcode::kProtectedStore:
case IrOpcode::kRetain:
// TODO(tebbi): Store nodes might do a bump-pointer allocation.
// We should introduce a special bump-pointer store node to
// differentiate that.
case IrOpcode::kStore:
case IrOpcode::kStoreElement:
case IrOpcode::kStoreField:
case IrOpcode::kTaggedPoisonOnSpeculation:
@ -137,29 +159,17 @@ bool CanAllocate(const Node* node) {
case IrOpcode::kCallWithCallerSavedRegisters:
return !(CallDescriptorOf(node->op())->flags() &
CallDescriptor::kNoAllocate);
case IrOpcode::kStore:
// Store is not safe because it could be part of CSA's bump pointer
// allocation(?).
return true;
default:
break;
}
return true;
}
bool CanLoopAllocate(Node* loop_effect_phi, Zone* temp_zone) {
Node* const control = NodeProperties::GetControlInput(loop_effect_phi);
Node* SearchAllocatingNode(Node* start, Node* limit, Zone* temp_zone) {
ZoneQueue<Node*> queue(temp_zone);
ZoneSet<Node*> visited(temp_zone);
visited.insert(loop_effect_phi);
// Start the effect chain walk from the loop back edges.
for (int i = 1; i < control->InputCount(); ++i) {
queue.push(loop_effect_phi->InputAt(i));
}
visited.insert(limit);
queue.push(start);
while (!queue.empty()) {
Node* const current = queue.front();
@ -167,16 +177,40 @@ bool CanLoopAllocate(Node* loop_effect_phi, Zone* temp_zone) {
if (visited.find(current) == visited.end()) {
visited.insert(current);
if (CanAllocate(current)) return true;
if (CanAllocate(current)) {
return current;
}
for (int i = 0; i < current->op()->EffectInputCount(); ++i) {
queue.push(NodeProperties::GetEffectInput(current, i));
}
}
}
return nullptr;
}
bool CanLoopAllocate(Node* loop_effect_phi, Zone* temp_zone) {
Node* const control = NodeProperties::GetControlInput(loop_effect_phi);
// Start the effect chain walk from the loop back edges.
for (int i = 1; i < control->InputCount(); ++i) {
if (SearchAllocatingNode(loop_effect_phi->InputAt(i), loop_effect_phi,
temp_zone) != nullptr) {
return true;
}
}
return false;
}
Node* EffectPhiForPhi(Node* phi) {
Node* control = NodeProperties::GetControlInput(phi);
for (Node* use : control->uses()) {
if (use->opcode() == IrOpcode::kEffectPhi) {
return use;
}
}
return nullptr;
}
} // namespace
void MemoryOptimizer::VisitNode(Node* node, AllocationState const* state) {
@ -501,8 +535,9 @@ void MemoryOptimizer::VisitStoreElement(Node* node,
ElementAccess const& access = ElementAccessOf(node->op());
Node* object = node->InputAt(0);
Node* index = node->InputAt(1);
WriteBarrierKind write_barrier_kind =
ComputeWriteBarrierKind(object, state, access.write_barrier_kind);
Node* value = node->InputAt(2);
WriteBarrierKind write_barrier_kind = ComputeWriteBarrierKind(
node, object, value, state, access.write_barrier_kind);
node->ReplaceInput(1, ComputeIndex(access, index));
NodeProperties::ChangeOp(
node, machine()->Store(StoreRepresentation(
@ -515,8 +550,9 @@ void MemoryOptimizer::VisitStoreField(Node* node,
DCHECK_EQ(IrOpcode::kStoreField, node->opcode());
FieldAccess const& access = FieldAccessOf(node->op());
Node* object = node->InputAt(0);
WriteBarrierKind write_barrier_kind =
ComputeWriteBarrierKind(object, state, access.write_barrier_kind);
Node* value = node->InputAt(1);
WriteBarrierKind write_barrier_kind = ComputeWriteBarrierKind(
node, object, value, state, access.write_barrier_kind);
Node* offset = jsgraph()->IntPtrConstant(access.offset - access.tag());
node->InsertInput(graph()->zone(), 1, offset);
NodeProperties::ChangeOp(
@ -529,8 +565,9 @@ void MemoryOptimizer::VisitStore(Node* node, AllocationState const* state) {
DCHECK_EQ(IrOpcode::kStore, node->opcode());
StoreRepresentation representation = StoreRepresentationOf(node->op());
Node* object = node->InputAt(0);
Node* value = node->InputAt(2);
WriteBarrierKind write_barrier_kind = ComputeWriteBarrierKind(
object, state, representation.write_barrier_kind());
node, object, value, state, representation.write_barrier_kind());
if (write_barrier_kind != representation.write_barrier_kind()) {
NodeProperties::ChangeOp(
node, machine()->Store(StoreRepresentation(
@ -559,13 +596,85 @@ Node* MemoryOptimizer::ComputeIndex(ElementAccess const& access, Node* index) {
return index;
}
namespace {
bool ValueNeedsWriteBarrier(Node* value, Isolate* isolate) {
while (true) {
switch (value->opcode()) {
case IrOpcode::kBitcastWordToTaggedSigned:
case IrOpcode::kChangeTaggedSignedToCompressedSigned:
case IrOpcode::kChangeTaggedToCompressedSigned:
return false;
case IrOpcode::kChangeTaggedPointerToCompressedPointer:
case IrOpcode::kChangeTaggedToCompressed:
value = NodeProperties::GetValueInput(value, 0);
continue;
case IrOpcode::kHeapConstant: {
RootIndex root_index;
if (isolate->roots_table().IsRootHandle(HeapConstantOf(value->op()),
&root_index) &&
RootsTable::IsImmortalImmovable(root_index)) {
return false;
}
break;
}
default:
break;
}
return true;
}
}
void WriteBarrierAssertFailed(Node* node, Node* object, const char* name,
Zone* temp_zone) {
std::stringstream str;
str << "MemoryOptimizer could not remove write barrier for node #"
<< node->id() << "\n";
str << " Run mksnapshot with --csa-trap-on-node=" << name << ","
<< node->id() << " to break in CSA code.\n";
Node* object_position = object;
if (object_position->opcode() == IrOpcode::kPhi) {
object_position = EffectPhiForPhi(object_position);
}
Node* allocating_node = nullptr;
if (object_position && object_position->op()->EffectOutputCount() > 0) {
allocating_node = SearchAllocatingNode(node, object_position, temp_zone);
}
if (allocating_node) {
str << "\n There is a potentially allocating node in between:\n";
str << " " << *allocating_node << "\n";
str << " Run mksnapshot with --csa-trap-on-node=" << name << ","
<< allocating_node->id() << " to break there.\n";
if (allocating_node->opcode() == IrOpcode::kCall) {
str << " If this is a never-allocating runtime call, you can add an "
"exception to Runtime::MayAllocate.\n";
}
} else {
str << "\n It seems the store happened to something different than a "
"direct "
"allocation:\n";
str << " " << *object << "\n";
str << " Run mksnapshot with --csa-trap-on-node=" << name << ","
<< object->id() << " to break there.\n";
}
FATAL("%s", str.str().c_str());
}
} // namespace
WriteBarrierKind MemoryOptimizer::ComputeWriteBarrierKind(
Node* object, AllocationState const* state,
Node* node, Node* object, Node* value, AllocationState const* state,
WriteBarrierKind write_barrier_kind) {
if (state->IsYoungGenerationAllocation() &&
state->group()->Contains(object)) {
write_barrier_kind = kNoWriteBarrier;
}
if (!ValueNeedsWriteBarrier(value, isolate())) {
write_barrier_kind = kNoWriteBarrier;
}
if (write_barrier_kind == WriteBarrierKind::kAssertNoWriteBarrier) {
WriteBarrierAssertFailed(node, object, function_debug_name_, zone());
}
return write_barrier_kind;
}

@ -35,7 +35,8 @@ class MemoryOptimizer final {
MemoryOptimizer(JSGraph* jsgraph, Zone* zone,
PoisoningMitigationLevel poisoning_level,
AllocationFolding allocation_folding);
AllocationFolding allocation_folding,
const char* function_debug_name);
~MemoryOptimizer() = default;
void Optimize();
@ -123,7 +124,8 @@ class MemoryOptimizer final {
void VisitOtherEffect(Node*, AllocationState const*);
Node* ComputeIndex(ElementAccess const&, Node*);
WriteBarrierKind ComputeWriteBarrierKind(Node* object,
WriteBarrierKind ComputeWriteBarrierKind(Node* node, Node* object,
Node* value,
AllocationState const* state,
WriteBarrierKind);
@ -153,6 +155,7 @@ class MemoryOptimizer final {
GraphAssembler graph_assembler_;
PoisoningMitigationLevel poisoning_level_;
AllocationFolding allocation_folding_;
const char* function_debug_name_;
DISALLOW_IMPLICIT_CONSTRUCTORS(MemoryOptimizer);
};

@ -1516,7 +1516,8 @@ struct MemoryOptimizationPhase {
data->jsgraph(), temp_zone, data->info()->GetPoisoningMitigationLevel(),
data->info()->is_allocation_folding_enabled()
? MemoryOptimizer::AllocationFolding::kDoAllocationFolding
: MemoryOptimizer::AllocationFolding::kDontAllocationFolding);
: MemoryOptimizer::AllocationFolding::kDontAllocationFolding,
data->debug_name());
optimizer.Optimize();
}
};

@ -16,6 +16,7 @@ namespace compiler {
// Write barrier kinds supported by compiler.
enum WriteBarrierKind : uint8_t {
kNoWriteBarrier,
kAssertNoWriteBarrier,
kMapWriteBarrier,
kPointerWriteBarrier,
kEphemeronKeyWriteBarrier,
@ -30,6 +31,8 @@ inline std::ostream& operator<<(std::ostream& os, WriteBarrierKind kind) {
switch (kind) {
case kNoWriteBarrier:
return os << "NoWriteBarrier";
case kAssertNoWriteBarrier:
return os << "AssertNoWriteBarrier";
case kMapWriteBarrier:
return os << "MapWriteBarrier";
case kPointerWriteBarrier:

@ -1839,7 +1839,8 @@ void AccessorAssembler::StoreNamedField(Node* handler_word, Node* object,
StoreObjectFieldNoWriteBarrier(property_storage, offset, value,
MachineRepresentation::kFloat64);
} else if (representation.IsSmi()) {
StoreObjectFieldNoWriteBarrier(property_storage, offset, value);
TNode<Smi> value_smi = CAST(value);
StoreObjectFieldNoWriteBarrier(property_storage, offset, value_smi);
} else {
StoreObjectField(property_storage, offset, value);
}

@ -352,8 +352,8 @@ void KeyedStoreGenericAssembler::StoreElementWithCapacity(
TryChangeToHoleyMapMulti(receiver, receiver_map, elements_kind, context,
PACKED_SMI_ELEMENTS, PACKED_ELEMENTS, slow);
}
StoreNoWriteBarrier(MachineRepresentation::kTagged, elements, offset,
value);
StoreNoWriteBarrier(MachineRepresentation::kTaggedSigned, elements,
offset, value);
MaybeUpdateLengthAndReturn(receiver, intptr_index, value, update_length);
BIND(&non_smi_value);

@ -188,12 +188,15 @@ namespace internal {
struct InliningPosition;
class PropertyDescriptorObject;
// SKIP_WRITE_BARRIER skips the write barrier.
// UNSAFE_SKIP_WRITE_BARRIER skips the write barrier.
// SKIP_WRITE_BARRIER skips the write barrier and asserts that this is safe in
// the MemoryOptimizer
// UPDATE_WEAK_WRITE_BARRIER skips the marking part of the write barrier and
// only performs the generational part.
// UPDATE_WRITE_BARRIER is doing the full barrier, marking and generational.
enum WriteBarrierMode {
SKIP_WRITE_BARRIER,
UNSAFE_SKIP_WRITE_BARRIER,
UPDATE_WEAK_WRITE_BARRIER,
UPDATE_EPHEMERON_KEY_WRITE_BARRIER,
UPDATE_WRITE_BARRIER

@ -178,6 +178,16 @@ bool Runtime::IsNonReturning(FunctionId id) {
}
}
bool Runtime::MayAllocate(FunctionId id) {
switch (id) {
case Runtime::kCompleteInobjectSlackTracking:
case Runtime::kCompleteInobjectSlackTrackingForMap:
return false;
default:
return true;
}
}
const Runtime::Function* Runtime::FunctionForName(const unsigned char* name,
int length) {
base::CallOnce(&initialize_function_name_map_once,

@ -694,6 +694,10 @@ class Runtime : public AllStatic {
// sentinel, always.
static bool IsNonReturning(FunctionId id);
// Check if a runtime function with the given {id} may trigger a heap
// allocation.
static bool MayAllocate(FunctionId id);
// Get the intrinsic function with the given name.
static const Function* FunctionForName(const unsigned char* name, int length);

@ -192,9 +192,9 @@ Handle<Code> BuildSetupFunction(Isolate* isolate,
//
// Finally, it is important that this function does not call `RecordWrite` which
// is why "setup" is in charge of all allocations and we are using
// SKIP_WRITE_BARRIER. The reason for this is that `RecordWrite` may clobber the
// top 64 bits of Simd128 registers. This is the case on x64, ia32 and Arm64 for
// example.
// UNSAFE_SKIP_WRITE_BARRIER. The reason for this is that `RecordWrite` may
// clobber the top 64 bits of Simd128 registers. This is the case on x64, ia32
// and Arm64 for example.
Handle<Code> BuildTeardownFunction(Isolate* isolate,
CallDescriptor* call_descriptor,
std::vector<AllocatedOperand> parameters) {
@ -206,7 +206,8 @@ Handle<Code> BuildTeardownFunction(Isolate* isolate,
Node* param = __ Parameter(i + 2);
switch (parameters[i].representation()) {
case MachineRepresentation::kTagged:
__ StoreFixedArrayElement(result_array, i, param, SKIP_WRITE_BARRIER);
__ StoreFixedArrayElement(result_array, i, param,
UNSAFE_SKIP_WRITE_BARRIER);
break;
// Box FP values into HeapNumbers.
case MachineRepresentation::kFloat32:
@ -229,7 +230,7 @@ Handle<Code> BuildTeardownFunction(Isolate* isolate,
->I32x4ExtractLane(lane),
param));
__ StoreFixedArrayElement(vector, lane, lane_value,
SKIP_WRITE_BARRIER);
UNSAFE_SKIP_WRITE_BARRIER);
}
break;
}

@ -247,7 +247,8 @@ namespace array {
context: Context, sortState: SortState, index: Smi, value: Object): Smi {
const object = UnsafeCast<JSObject>(sortState.receiver);
const elements = UnsafeCast<FixedArray>(object.elements);
StoreFixedArrayElementSmi(elements, index, value, SKIP_WRITE_BARRIER);
const value = UnsafeCast<Smi>(value);
StoreFixedArrayElement(elements, index, value, SKIP_WRITE_BARRIER);
return kSuccess;
}