Reland^2 "[serializer] Allocate during deserialization"

This is a reland of 28a30c578c
which was a reland of 5d7a29c90e

The crashes were from calling RegisterDeserializerFinished on a null
Isolate pointer, for a deserializer that was never initialised
(specifically, ReadOnlyDeserializer when ROHeap is shared).
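
A minimal sketch of the guard this reland adds, assuming the Isolate
pointer is only set once Initialize() runs; apart from the
RegisterDeserializerStarted/Finished pair (added in the isolate.h diff
below), the types here are simplified stand-ins:

#include <atomic>
#include <cassert>

// Stand-in Isolate; the counter mirrors the num_active_deserializers_
// member added in the isolate.h diff below.
class Isolate {
 public:
  void RegisterDeserializerStarted() { ++num_active_deserializers_; }
  void RegisterDeserializerFinished() {
    int remaining = --num_active_deserializers_;
    assert(remaining >= 0);
    (void)remaining;
  }

 private:
  std::atomic<int> num_active_deserializers_{0};
};

class Deserializer {
 public:
  void Initialize(Isolate* isolate) {
    isolate_ = isolate;
    isolate_->RegisterDeserializerStarted();
  }
  ~Deserializer() {
    // The fix: a ReadOnlyDeserializer whose Initialize() never ran
    // (shared ROHeap) still has a null isolate_, so the call is guarded.
    if (isolate_ != nullptr) isolate_->RegisterDeserializerFinished();
  }

 private:
  Isolate* isolate_ = nullptr;
};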

Original change's description:
> Reland "[serializer] Allocate during deserialization"
>
> This is a reland of 5d7a29c90e
>
> This reland shuffles around the order of checks in Heap::AllocateRawWith
> to not check the new space addresses until it's known that this is a new
> space allocation. This fixes an UBSan failure during read-only space
> deserialization, which happens before the new space is initialized.
>
> It also fixes some issues discovered by --stress-snapshot, around
> serializing ThinStrings (which are now elided as part of serialization),
> handle counts (I bumped the maximum handle count in that check), and
> clearing map transitions (the map backpointer field needed a Smi
> uninitialized value check).
>
> Original change's description:
> > [serializer] Allocate during deserialization
> >
> > This patch removes the concept of reservations and a specialized
> > deserializer allocator, and instead makes the deserializer allocate
> > directly with the Heap's Allocate method.
> >
> > The major consequence of this is that the GC can now run during
> > deserialization, which means that:
> >
> >   a) Deserialized objects are visible to the GC, and
> >   b) Objects that the deserializer/deserialized objects point to can
> >      move.
> >
> > Point a) is mostly not a problem due to previous work in making
> > deserialized objects "GC valid", i.e. making sure that they have a valid
> > size before any subsequent allocation/safepoint. We now additionally
> > have to initialize the allocated space with a valid tagged value -- this
> > is a magic Smi value to keep "uninitialized" checks simple.
> >
> > Point b) is solved by Handlifying the deserializer. This involves
> > changing any vectors of objects into vectors of Handles, and any
> > object-keyed map into an IdentityMap (we can't use Handles as keys because
> > the object's address is no longer a stable hash).
> >
> > Back-references can no longer be direct chunk offsets, so instead the
> > deserializer stores a Handle to each deserialized object, and the
> > backreference is an index into this handle array. This encoding could
> > be optimized in the future with e.g. a second pass over the serialized
> > array which emits a different bytecode for objects that are and aren't
> > back-referenced.
> >
> > Additionally, the slot-walk over objects to initialize them can no
> > longer use absolute slot offsets, as again an object may move and its
> > slot address would become invalid. Now, slots are walked as relative
> > offsets to a Handle to the object, or as absolute slots for the case of
> > root pointers. A concept of "slot accessor" is introduced to share the
> > code between these two modes, and writing the slot (including write
> > barriers) is abstracted into this accessor.
> >
> > Finally, the Code body walk is modified to deserialize all objects
> > referred to by RelocInfos before doing the RelocInfo walk itself. This
> > is because RelocInfoIterator uses raw pointers, so we cannot allocate
> > during a RelocInfo walk.
> >
> > As a drive-by, the VariableRawData bytecode is tweaked to use tagged
> > size rather than byte size -- the size is expected to be tagged-aligned
> > anyway, so now we get an extra few bits in the size encoding.
> >
> > Bug: chromium:1075999
> > Change-Id: I672c42f553f2669888cc5e35d692c1b8ece1845e
> > Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2404451
> > Commit-Queue: Leszek Swirski <leszeks@chromium.org>
> > Reviewed-by: Jakob Gruber <jgruber@chromium.org>
> > Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
> > Cr-Commit-Position: refs/heads/master@{#70229}
>
> Bug: chromium:1075999
> Change-Id: Ibc77cc48b3440b4a28b09746cfc47e50c340ce54
> Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2440828
> Commit-Queue: Leszek Swirski <leszeks@chromium.org>
> Auto-Submit: Leszek Swirski <leszeks@chromium.org>
> Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
> Reviewed-by: Jakob Gruber <jgruber@chromium.org>
> Cr-Commit-Position: refs/heads/master@{#70267}

Tbr: jgruber@chromium.org,ulan@chromium.org
Bug: chromium:1075999
Change-Id: Iaa8dc54895866ada0e34a7c9e8fff9ae1cb13f2d
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2444991
Reviewed-by: Ulan Degenbaev <ulan@chromium.org>
Commit-Queue: Leszek Swirski <leszeks@chromium.org>
Cr-Commit-Position: refs/heads/master@{#70279}
Leszek Swirski 2020-10-02 09:53:35 +02:00 committed by Commit Bot
parent aaf8d462c8
commit c4a062a958
66 changed files with 1775 additions and 2291 deletions
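
One piece of the quoted description that the excerpts below don't show is
the "slot accessor" abstraction: relative slots for movable objects, absolute
slots for roots. A minimal sketch of the idea, using simplified stand-in
types rather than V8's real classes:

#include <cstdint>

using Address = uintptr_t;

// Stand-in for v8::internal::Handle: one extra indirection through a slot
// that the GC updates whenever the object moves.
struct Handle {
  Address* location;
};

// Writes slots as offsets relative to a handle, re-deriving the object
// address on every access, because a GC between writes may move the object.
class SlotAccessorForHandle {
 public:
  SlotAccessorForHandle(Handle object, int offset)
      : object_(object), offset_(offset) {}
  void Write(Address value) {
    Address slot_address = *object_.location + offset_;
    *reinterpret_cast<Address*>(slot_address) = value;
    // A full implementation would also emit the write barrier here.
  }

 private:
  Handle object_;
  int offset_;
};

// Writes absolute slots directly; only used for root pointers, which never
// move.
class SlotAccessorForRootSlots {
 public:
  explicit SlotAccessorForRootSlots(Address* slot) : slot_(slot) {}
  void Write(Address value) { *slot_ = value; }

 private:
  Address* slot_;
};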

View File

@ -3186,8 +3186,6 @@ v8_source_set("v8_base_without_compiler") {
"src/snapshot/context-deserializer.h",
"src/snapshot/context-serializer.cc",
"src/snapshot/context-serializer.h",
"src/snapshot/deserializer-allocator.cc",
"src/snapshot/deserializer-allocator.h",
"src/snapshot/deserializer.cc",
"src/snapshot/deserializer.h",
"src/snapshot/embedded/embedded-data.cc",
@ -3201,8 +3199,6 @@ v8_source_set("v8_base_without_compiler") {
"src/snapshot/references.h",
"src/snapshot/roots-serializer.cc",
"src/snapshot/roots-serializer.h",
"src/snapshot/serializer-allocator.cc",
"src/snapshot/serializer-allocator.h",
"src/snapshot/serializer-deserializer.cc",
"src/snapshot/serializer-deserializer.h",
"src/snapshot/serializer.cc",

View File

@ -130,6 +130,8 @@ template class PerThreadAssertScope<HANDLE_DEREFERENCE_ASSERT, false>;
template class PerThreadAssertScope<HANDLE_DEREFERENCE_ASSERT, true>;
template class PerThreadAssertScope<CODE_DEPENDENCY_CHANGE_ASSERT, false>;
template class PerThreadAssertScope<CODE_DEPENDENCY_CHANGE_ASSERT, true>;
template class PerThreadAssertScope<CODE_ALLOCATION_ASSERT, false>;
template class PerThreadAssertScope<CODE_ALLOCATION_ASSERT, true>;
template class PerIsolateAssertScope<JAVASCRIPT_EXECUTION_ASSERT, false>;
template class PerIsolateAssertScope<JAVASCRIPT_EXECUTION_ASSERT, true>;

View File

@ -33,6 +33,7 @@ enum PerThreadAssertType {
HANDLE_ALLOCATION_ASSERT,
HANDLE_DEREFERENCE_ASSERT,
CODE_DEPENDENCY_CHANGE_ASSERT,
CODE_ALLOCATION_ASSERT,
LAST_PER_THREAD_ASSERT_TYPE
};
@ -128,9 +129,17 @@ using AllowHandleAllocation =
PerThreadAssertScopeDebugOnly<HANDLE_ALLOCATION_ASSERT, true>;
// Scope to document where we do not expect garbage collections. It differs from
// DisallowHeapAllocation by also forbiding safepoints.
// DisallowHeapAllocation by also forbidding safepoints.
using DisallowGarbageCollection =
PerThreadAssertScopeDebugOnly<GARBAGE_COLLECTION_ASSERT, false>;
// The DISALLOW_GARBAGE_COLLECTION macro can be used to define a
// DisallowGarbageCollection field in classes that isn't present in release
// builds.
#ifdef DEBUG
#define DISALLOW_GARBAGE_COLLECTION(name) DisallowGarbageCollection name;
#else
#define DISALLOW_GARBAGE_COLLECTION(name)
#endif
// Scope to introduce an exception to DisallowGarbageCollection.
using AllowGarbageCollection =
@ -140,6 +149,9 @@ using AllowGarbageCollection =
// and will eventually be removed, use DisallowGarbageCollection instead.
using DisallowHeapAllocation =
PerThreadAssertScopeDebugOnly<HEAP_ALLOCATION_ASSERT, false>;
// The DISALLOW_HEAP_ALLOCATION macro can be used to define a
// DisallowHeapAllocation field in classes that isn't present in release
// builds.
#ifdef DEBUG
#define DISALLOW_HEAP_ALLOCATION(name) DisallowHeapAllocation name;
#else
@ -166,6 +178,14 @@ using DisallowCodeDependencyChange =
using AllowCodeDependencyChange =
PerThreadAssertScopeDebugOnly<CODE_DEPENDENCY_CHANGE_ASSERT, true>;
// Scope to document where we do not expect code to be allocated.
using DisallowCodeAllocation =
PerThreadAssertScopeDebugOnly<CODE_ALLOCATION_ASSERT, false>;
// Scope to introduce an exception to DisallowCodeAllocation.
using AllowCodeAllocation =
PerThreadAssertScopeDebugOnly<CODE_ALLOCATION_ASSERT, true>;
class DisallowHeapAccess {
DisallowCodeDependencyChange no_dependency_change_;
DisallowHandleAllocation no_handle_allocation_;
@ -273,6 +293,8 @@ extern template class PerThreadAssertScope<HANDLE_DEREFERENCE_ASSERT, true>;
extern template class PerThreadAssertScope<CODE_DEPENDENCY_CHANGE_ASSERT,
false>;
extern template class PerThreadAssertScope<CODE_DEPENDENCY_CHANGE_ASSERT, true>;
extern template class PerThreadAssertScope<CODE_ALLOCATION_ASSERT, false>;
extern template class PerThreadAssertScope<CODE_ALLOCATION_ASSERT, true>;
extern template class PerIsolateAssertScope<JAVASCRIPT_EXECUTION_ASSERT, false>;
extern template class PerIsolateAssertScope<JAVASCRIPT_EXECUTION_ASSERT, true>;
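
A hypothetical usage sketch of the two additions above (the debug-only field
macro and the code-allocation scope); compare the real uses further down, the
DISALLOW_HEAP_ALLOCATION(no_gc_) member in code-serializer.h and the
DisallowCodeAllocation scope in context-deserializer.cc:

#include "src/common/assert-scope.h"  // Assumed include path.

// Hypothetical class: the macro expands to a DisallowGarbageCollection
// member in DEBUG builds and to nothing in release builds, so shipping
// binaries pay no size cost.
class SomeSnapshotPass {
  DISALLOW_GARBAGE_COLLECTION(no_gc_)
};

// While the scope is alive, Heap::AllocateRaw DCHECKs that no
// AllocationType::kCode allocation happens (see the heap-inl.h diff below).
void DeserializeNonCodeObjects() {
  DisallowCodeAllocation no_code_allocation;
  // ... deserialize objects that must not include Code ...
}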

View File

@ -780,12 +780,7 @@ inline std::ostream& operator<<(std::ostream& os, AllocationType kind) {
}
// TODO(ishell): review and rename kWordAligned to kTaggedAligned.
enum AllocationAlignment {
kWordAligned,
kDoubleAligned,
kDoubleUnaligned,
kCodeAligned
};
enum AllocationAlignment { kWordAligned, kDoubleAligned, kDoubleUnaligned };
enum class AccessMode { ATOMIC, NON_ATOMIC };

View File

@ -295,6 +295,7 @@ Type::bitset BitsetType::Lub(const MapRefLike& map) {
case OBJECT_BOILERPLATE_DESCRIPTION_TYPE:
case ARRAY_BOILERPLATE_DESCRIPTION_TYPE:
case DESCRIPTOR_ARRAY_TYPE:
case STRONG_DESCRIPTOR_ARRAY_TYPE:
case TRANSITION_ARRAY_TYPE:
case FEEDBACK_CELL_TYPE:
case CLOSURE_FEEDBACK_CELL_ARRAY_TYPE:

View File

@ -27,6 +27,7 @@
#include "src/objects/free-space-inl.h"
#include "src/objects/function-kind.h"
#include "src/objects/hash-table-inl.h"
#include "src/objects/instance-type.h"
#include "src/objects/js-array-inl.h"
#include "src/objects/layout-descriptor.h"
#include "src/objects/objects-inl.h"
@ -250,6 +251,11 @@ void HeapObject::HeapObjectVerify(Isolate* isolate) {
TORQUE_INSTANCE_CHECKERS_MULTIPLE_FULLY_DEFINED(MAKE_TORQUE_CASE)
#undef MAKE_TORQUE_CASE
case DESCRIPTOR_ARRAY_TYPE:
case STRONG_DESCRIPTOR_ARRAY_TYPE:
DescriptorArray::cast(*this).DescriptorArrayVerify(isolate);
break;
case FOREIGN_TYPE:
break; // No interesting fields.

View File

@ -220,6 +220,10 @@ void HeapObject::HeapObjectPrint(std::ostream& os) { // NOLINT
TORQUE_INSTANCE_CHECKERS_MULTIPLE_FULLY_DEFINED(MAKE_TORQUE_CASE)
#undef MAKE_TORQUE_CASE
case DESCRIPTOR_ARRAY_TYPE:
case STRONG_DESCRIPTOR_ARRAY_TYPE:
DescriptorArray::cast(*this).DescriptorArrayPrint(os);
break;
case FOREIGN_TYPE:
Foreign::cast(*this).ForeignPrint(os);
break;

View File

@ -2932,6 +2932,9 @@ Isolate::Isolate(std::unique_ptr<i::IsolateAllocator> isolate_allocator)
id_(isolate_counter.fetch_add(1, std::memory_order_relaxed)),
allocator_(new TracingAccountingAllocator(this)),
builtins_(this),
#if defined(DEBUG) || defined(VERIFY_HEAP)
num_active_deserializers_(0),
#endif
rail_mode_(PERFORMANCE_ANIMATION),
code_event_dispatcher_(new CodeEventDispatcher()),
persistent_handles_list_(new PersistentHandlesList()),

View File

@ -694,6 +694,27 @@ class V8_EXPORT_PRIVATE Isolate final : private HiddenFactory {
return &thread_local_top()->c_function_;
}
#if defined(DEBUG) || defined(VERIFY_HEAP)
// Count the number of active deserializers, so that the heap verifier knows
// whether there is currently an active deserialization happening.
//
// This is needed as the verifier currently doesn't support verifying objects
// which are partially deserialized.
//
// TODO(leszeks): Make the verifier a bit more deserialization compatible.
void RegisterDeserializerStarted() { ++num_active_deserializers_; }
void RegisterDeserializerFinished() {
CHECK_GE(--num_active_deserializers_, 0);
}
bool has_active_deserializer() const {
return num_active_deserializers_.load(std::memory_order_acquire) > 0;
}
#else
void RegisterDeserializerStarted() {}
void RegisterDeserializerFinished() {}
bool has_active_deserializer() const { UNREACHABLE(); }
#endif
// Bottom JS entry.
Address js_entry_sp() { return thread_local_top()->js_entry_sp_; }
inline Address* js_entry_sp_address() {
@ -1715,6 +1736,9 @@ class V8_EXPORT_PRIVATE Isolate final : private HiddenFactory {
RuntimeState runtime_state_;
Builtins builtins_;
SetupIsolateDelegate* setup_delegate_ = nullptr;
#if defined(DEBUG) || defined(VERIFY_HEAP)
std::atomic<int> num_active_deserializers_;
#endif
#ifndef V8_INTL_SUPPORT
unibrow::Mapping<unibrow::Ecma262UnCanonicalize> jsregexp_uncanonicalize_;
unibrow::Mapping<unibrow::CanonicalizationRange> jsregexp_canonrange_;

View File

@ -1444,13 +1444,6 @@ DEFINE_BOOL(profile_deserialization, false,
"Print the time it takes to deserialize the snapshot.")
DEFINE_BOOL(serialization_statistics, false,
"Collect statistics on serialized objects.")
#ifdef V8_ENABLE_THIRD_PARTY_HEAP
DEFINE_UINT_READONLY(serialization_chunk_size, 1,
"Custom size for serialization chunks")
#else
DEFINE_UINT(serialization_chunk_size, 4096,
"Custom size for serialization chunks")
#endif
// Regexp
DEFINE_BOOL(regexp_optimization, true, "generate optimized regexp code")
DEFINE_BOOL(regexp_mode_modifiers, false, "enable inline flags in regexp.")

View File

@ -248,7 +248,7 @@ class HandleScope {
// Limit for number of handles with --check-handle-count. This is
// large enough to compile natives and pass unit tests with some
// slack for future changes to natives.
static const int kCheckHandleThreshold = 30 * 1024;
static const int kCheckHandleThreshold = 42 * 1024;
private:
Isolate* isolate_;

View File

@ -147,15 +147,12 @@ MaybeHandle<Code> Factory::CodeBuilder::BuildInternal(
HeapObject result;
AllocationType allocation_type =
is_executable_ ? AllocationType::kCode : AllocationType::kReadOnly;
AllocationAlignment alignment = is_executable_
? AllocationAlignment::kCodeAligned
: AllocationAlignment::kWordAligned;
if (retry_allocation_or_fail) {
result = heap->AllocateRawWith<Heap::kRetryOrFail>(
object_size, allocation_type, AllocationOrigin::kRuntime, alignment);
object_size, allocation_type, AllocationOrigin::kRuntime);
} else {
result = heap->AllocateRawWith<Heap::kLightRetry>(
object_size, allocation_type, AllocationOrigin::kRuntime, alignment);
object_size, allocation_type, AllocationOrigin::kRuntime);
// Return an empty handle if we cannot allocate the code object.
if (result.is_null()) return MaybeHandle<Code>();
}
@ -2126,8 +2123,7 @@ Handle<Code> Factory::CopyCode(Handle<Code> code) {
int obj_size = code->Size();
CodePageCollectionMemoryModificationScope code_allocation(heap);
HeapObject result = heap->AllocateRawWith<Heap::kRetryOrFail>(
obj_size, AllocationType::kCode, AllocationOrigin::kRuntime,
AllocationAlignment::kCodeAligned);
obj_size, AllocationType::kCode, AllocationOrigin::kRuntime);
// Copy code object.
Address old_addr = code->address();

View File

@ -171,8 +171,8 @@ AllocationResult Heap::AllocateRaw(int size_in_bytes, AllocationType type,
DCHECK(AllowHandleAllocation::IsAllowed());
DCHECK(AllowHeapAllocation::IsAllowed());
DCHECK(AllowGarbageCollection::IsAllowed());
DCHECK_IMPLIES(type == AllocationType::kCode,
alignment == AllocationAlignment::kCodeAligned);
DCHECK_IMPLIES(type == AllocationType::kCode || type == AllocationType::kMap,
alignment == AllocationAlignment::kWordAligned);
DCHECK_EQ(gc_state(), NOT_IN_GC);
#ifdef V8_ENABLE_ALLOCATION_TIMEOUT
if (FLAG_random_gc_interval > 0 || FLAG_gc_interval >= 0) {
@ -223,6 +223,7 @@ AllocationResult Heap::AllocateRaw(int size_in_bytes, AllocationType type,
allocation = old_space_->AllocateRaw(size_in_bytes, alignment, origin);
}
} else if (AllocationType::kCode == type) {
DCHECK(AllowCodeAllocation::IsAllowed());
if (large_object) {
allocation = code_lo_space_->AllocateRaw(size_in_bytes);
} else {
@ -231,7 +232,6 @@ AllocationResult Heap::AllocateRaw(int size_in_bytes, AllocationType type,
} else if (AllocationType::kMap == type) {
allocation = map_space_->AllocateRawUnaligned(size_in_bytes);
} else if (AllocationType::kReadOnly == type) {
DCHECK(isolate_->serializer_enabled());
DCHECK(!large_object);
DCHECK(CanAllocateInReadOnlySpace());
DCHECK_EQ(AllocationOrigin::kRuntime, origin);
@ -282,20 +282,21 @@ HeapObject Heap::AllocateRawWith(int size, AllocationType allocation,
}
DCHECK_EQ(gc_state(), NOT_IN_GC);
Heap* heap = isolate()->heap();
Address* top = heap->NewSpaceAllocationTopAddress();
Address* limit = heap->NewSpaceAllocationLimitAddress();
if (allocation == AllocationType::kYoung &&
alignment == AllocationAlignment::kWordAligned &&
size <= kMaxRegularHeapObjectSize &&
(*limit - *top >= static_cast<unsigned>(size)) &&
V8_LIKELY(!FLAG_single_generation && FLAG_inline_new &&
FLAG_gc_interval == 0)) {
DCHECK(IsAligned(size, kTaggedSize));
HeapObject obj = HeapObject::FromAddress(*top);
*top += size;
heap->CreateFillerObjectAt(obj.address(), size, ClearRecordedSlots::kNo);
MSAN_ALLOCATED_UNINITIALIZED_MEMORY(obj.address(), size);
return obj;
size <= kMaxRegularHeapObjectSize) {
Address* top = heap->NewSpaceAllocationTopAddress();
Address* limit = heap->NewSpaceAllocationLimitAddress();
if ((*limit - *top >= static_cast<unsigned>(size)) &&
V8_LIKELY(!FLAG_single_generation && FLAG_inline_new &&
FLAG_gc_interval == 0)) {
DCHECK(IsAligned(size, kTaggedSize));
HeapObject obj = HeapObject::FromAddress(*top);
*top += size;
heap->CreateFillerObjectAt(obj.address(), size, ClearRecordedSlots::kNo);
MSAN_ALLOCATED_UNINITIALIZED_MEMORY(obj.address(), size);
return obj;
}
}
switch (mode) {
case kLightRetry:
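
This reshuffle is the fix described in the reland notes at the top of the
commit message: new-space top/limit state is only read once the allocation
is known to be young. A toy illustration of the hazard (V8's real top/limit
addresses are not null; the actual failure was UBSan flagging pointer
arithmetic on a not-yet-initialized space):

#include <cstdint>

enum class AllocationType { kYoung, kOld, kReadOnly };
using Address = uintptr_t;

struct ToyHeap {
  // Unset until the new space exists; read-only-space deserialization
  // happens before that point.
  Address* new_space_top = nullptr;
  Address* new_space_limit = nullptr;

  Address TryFastAllocate(int size, AllocationType type) {
    // Check the allocation type *before* touching new-space state. With
    // the old order, *new_space_limit - *new_space_top was evaluated even
    // for read-only allocations made before new-space initialization.
    if (type == AllocationType::kYoung && new_space_top != nullptr &&
        *new_space_limit - *new_space_top >= static_cast<Address>(size)) {
      Address result = *new_space_top;
      *new_space_top += size;
      return result;
    }
    return 0;  // Caller falls back to the slow path.
  }
};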

View File

@ -1882,125 +1882,6 @@ static void VerifyStringTable(Isolate* isolate) {
}
#endif // VERIFY_HEAP
bool Heap::ReserveSpace(Reservation* reservations, std::vector<Address>* maps) {
bool gc_performed = true;
int counter = 0;
static const int kThreshold = 20;
while (gc_performed && counter++ < kThreshold) {
gc_performed = false;
for (int space = FIRST_SPACE;
space < static_cast<int>(SnapshotSpace::kNumberOfHeapSpaces);
space++) {
DCHECK_NE(space, NEW_SPACE);
DCHECK_NE(space, NEW_LO_SPACE);
Reservation* reservation = &reservations[space];
DCHECK_LE(1, reservation->size());
if (reservation->at(0).size == 0) {
DCHECK_EQ(1, reservation->size());
continue;
}
bool perform_gc = false;
if (space == MAP_SPACE) {
// We allocate each map individually to avoid fragmentation.
maps->clear();
DCHECK_LE(reservation->size(), 2);
int reserved_size = 0;
for (const Chunk& c : *reservation) reserved_size += c.size;
DCHECK_EQ(0, reserved_size % Map::kSize);
int num_maps = reserved_size / Map::kSize;
for (int i = 0; i < num_maps; i++) {
AllocationResult allocation;
#if V8_ENABLE_THIRD_PARTY_HEAP_BOOL
allocation = AllocateRaw(Map::kSize, AllocationType::kMap,
AllocationOrigin::kRuntime, kWordAligned);
#else
allocation = map_space()->AllocateRawUnaligned(Map::kSize);
#endif
HeapObject free_space;
if (allocation.To(&free_space)) {
// Mark with a free list node, in case we have a GC before
// deserializing.
Address free_space_address = free_space.address();
CreateFillerObjectAt(free_space_address, Map::kSize,
ClearRecordedSlots::kNo);
maps->push_back(free_space_address);
} else {
perform_gc = true;
break;
}
}
} else if (space == LO_SPACE) {
// Just check that we can allocate during deserialization.
DCHECK_LE(reservation->size(), 2);
int reserved_size = 0;
for (const Chunk& c : *reservation) reserved_size += c.size;
perform_gc = !CanExpandOldGeneration(reserved_size);
} else {
for (auto& chunk : *reservation) {
AllocationResult allocation;
int size = chunk.size;
DCHECK_LE(static_cast<size_t>(size),
MemoryChunkLayout::AllocatableMemoryInMemoryChunk(
static_cast<AllocationSpace>(space)));
#if V8_ENABLE_THIRD_PARTY_HEAP_BOOL
AllocationType type = (space == CODE_SPACE)
? AllocationType::kCode
: (space == RO_SPACE)
? AllocationType::kReadOnly
: AllocationType::kYoung;
AllocationAlignment align =
(space == CODE_SPACE) ? kCodeAligned : kWordAligned;
allocation =
AllocateRaw(size, type, AllocationOrigin::kRuntime, align);
#else
if (space == RO_SPACE) {
allocation = read_only_space()->AllocateRaw(
size, AllocationAlignment::kWordAligned);
} else {
// The deserializer will update the skip list.
allocation = paged_space(space)->AllocateRawUnaligned(size);
}
#endif
HeapObject free_space;
if (allocation.To(&free_space)) {
// Mark with a free list node, in case we have a GC before
// deserializing.
Address free_space_address = free_space.address();
CreateFillerObjectAt(free_space_address, size,
ClearRecordedSlots::kNo);
DCHECK(IsPreAllocatedSpace(static_cast<SnapshotSpace>(space)));
chunk.start = free_space_address;
chunk.end = free_space_address + size;
} else {
perform_gc = true;
break;
}
}
}
if (perform_gc) {
// We cannot perform a GC with an uninitialized isolate. This check
// fails for example if the max old space size is chosen unwisely,
// so that we cannot allocate space to deserialize the initial heap.
if (!deserialization_complete_) {
V8::FatalProcessOutOfMemory(
isolate(), "insufficient memory to create an Isolate");
}
if (counter > 1) {
CollectAllGarbage(kReduceMemoryFootprintMask,
GarbageCollectionReason::kDeserializer);
} else {
CollectAllGarbage(kNoGCFlags, GarbageCollectionReason::kDeserializer);
}
gc_performed = true;
break; // Abort for-loop over spaces and retry.
}
}
}
return !gc_performed;
}
void Heap::EnsureFromSpaceIsCommitted() {
if (new_space_->CommitFromSpaceIfNeeded()) return;
@ -3580,47 +3461,6 @@ void Heap::FinalizeIncrementalMarkingIncrementally(
InvokeIncrementalMarkingEpilogueCallbacks();
}
void Heap::RegisterDeserializedObjectsForBlackAllocation(
Reservation* reservations, const std::vector<HeapObject>& large_objects,
const std::vector<Address>& maps) {
// TODO(ulan): pause black allocation during deserialization to avoid
// iterating all these objects in one go.
if (!incremental_marking()->black_allocation()) return;
// Iterate black objects in old space, code space, map space, and large
// object space for side effects.
IncrementalMarking::MarkingState* marking_state =
incremental_marking()->marking_state();
for (int i = OLD_SPACE;
i < static_cast<int>(SnapshotSpace::kNumberOfHeapSpaces); i++) {
const Heap::Reservation& res = reservations[i];
for (auto& chunk : res) {
Address addr = chunk.start;
while (addr < chunk.end) {
HeapObject obj = HeapObject::FromAddress(addr);
// Objects can have any color because incremental marking can
// start in the middle of Heap::ReserveSpace().
if (marking_state->IsBlack(obj)) {
incremental_marking()->ProcessBlackAllocatedObject(obj);
}
addr += obj.Size();
}
}
}
// Large object space doesn't use reservations, so it needs custom handling.
for (HeapObject object : large_objects) {
incremental_marking()->ProcessBlackAllocatedObject(object);
}
// Map space doesn't use reservations, so it needs custom handling.
for (Address addr : maps) {
incremental_marking()->ProcessBlackAllocatedObject(
HeapObject::FromAddress(addr));
}
}
void Heap::NotifyObjectLayoutChange(
HeapObject object, const DisallowHeapAllocation&,
InvalidateRecordedSlots invalidate_recorded_slots) {
@ -4193,6 +4033,7 @@ void Heap::Verify() {
// We have to wait here for the sweeper threads to have an iterable heap.
mark_compact_collector()->EnsureSweepingCompleted();
array_buffer_sweeper()->EnsureFinished();
VerifyPointersVisitor visitor(this);
@ -4204,6 +4045,12 @@ void Heap::Verify() {
.NormalizedMapCacheVerify(isolate());
}
// The heap verifier can't deal with partially deserialized objects, so
// disable it if a deserializer is active.
// TODO(leszeks): Enable verification during deserialization, e.g. by only
// blocklisting objects that are in a partially deserialized state.
if (isolate()->has_active_deserializer()) return;
VerifySmisVisitor smis_visitor;
IterateSmiRoots(&smis_visitor);
@ -5193,7 +5040,14 @@ HeapObject Heap::AllocateRawWithLightRetrySlowPath(
HeapObject result;
AllocationResult alloc = AllocateRaw(size, allocation, origin, alignment);
if (alloc.To(&result)) {
DCHECK(result != ReadOnlyRoots(this).exception());
// DCHECK that the successful allocation is not "exception". The one
// exception to this is when allocating the "exception" object itself, in
// which case this must be an ROSpace allocation and the exception object
// in the roots has to be unset.
DCHECK((CanAllocateInReadOnlySpace() &&
allocation == AllocationType::kReadOnly &&
ReadOnlyRoots(this).unchecked_exception() == Smi::zero()) ||
result != ReadOnlyRoots(this).exception());
return result;
}
// Two GCs before panicking. In newspace will almost always succeed.

View File

@ -667,9 +667,6 @@ class Heap {
template <FindMementoMode mode>
inline AllocationMemento FindAllocationMemento(Map map, HeapObject object);
// Returns false if not able to reserve.
bool ReserveSpace(Reservation* reservations, std::vector<Address>* maps);
// Requests collection and blocks until GC is finished.
void RequestCollectionBackground();
@ -1070,10 +1067,6 @@ class Heap {
V8_EXPORT_PRIVATE void FinalizeIncrementalMarkingAtomically(
GarbageCollectionReason gc_reason);
void RegisterDeserializedObjectsForBlackAllocation(
Reservation* reservations, const std::vector<HeapObject>& large_objects,
const std::vector<Address>& maps);
IncrementalMarking* incremental_marking() {
return incremental_marking_.get();
}
@ -2129,7 +2122,7 @@ class Heap {
// and reset by a mark-compact garbage collection.
std::atomic<MemoryPressureLevel> memory_pressure_level_;
std::vector<std::pair<v8::NearHeapLimitCallback, void*> >
std::vector<std::pair<v8::NearHeapLimitCallback, void*>>
near_heap_limit_callbacks_;
// For keeping track of context disposals.
@ -2404,6 +2397,7 @@ class Heap {
// The allocator interface.
friend class Factory;
friend class Deserializer;
// The Isolate constructs us.
friend class Isolate;

View File

@ -20,8 +20,8 @@ AllocationResult LocalHeap::AllocateRaw(int size_in_bytes, AllocationType type,
DCHECK(AllowHandleAllocation::IsAllowed());
DCHECK(AllowHeapAllocation::IsAllowed());
DCHECK(AllowGarbageCollection::IsAllowed());
DCHECK_IMPLIES(type == AllocationType::kCode,
alignment == AllocationAlignment::kCodeAligned);
DCHECK_IMPLIES(type == AllocationType::kCode || type == AllocationType::kMap,
alignment == AllocationAlignment::kWordAligned);
Heap::HeapState state = heap()->gc_state();
DCHECK(state == Heap::TEAR_DOWN || state == Heap::NOT_IN_GC);
#endif

View File

@ -2266,6 +2266,13 @@ void MarkCompactCollector::ClearFullMapTransitions() {
// filled. Allow it.
if (array.GetTargetIfExists(0, isolate(), &map)) {
DCHECK(!map.is_null()); // Weak pointers aren't cleared yet.
Object constructor_or_backpointer = map.constructor_or_backpointer();
if (constructor_or_backpointer.IsSmi()) {
DCHECK(isolate()->has_active_deserializer());
DCHECK_EQ(constructor_or_backpointer,
Deserializer::uninitialized_field_value());
continue;
}
Map parent = Map::cast(map.constructor_or_backpointer());
bool parent_is_alive =
non_atomic_marking_state()->IsBlackOrGrey(parent);
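
The Smi check above leans on the placeholder convention from the quoted
commit message: freshly allocated space is pre-filled with a single magic
Smi. A self-contained sketch of the convention (the tag layout matches V8;
the placeholder payload is an assumed value):

#include <cstdint>

using Tagged = intptr_t;

// V8-style tagging: Smis carry a low bit of 0, heap pointers a low bit of 1.
constexpr Tagged Smi(intptr_t value) { return value << 1; }
constexpr Tagged kUninitializedFieldValue = Smi(0x7777);  // Payload assumed.

inline bool IsSmi(Tagged value) { return (value & 1) == 0; }

// GC code walking a partially deserialized object can cheaply detect fields
// that are not usable yet: a pointer-typed field still holding a Smi can
// only contain the deserializer's placeholder (cf. the DCHECK_EQ above).
inline bool FieldIsInitialized(Tagged field) { return !IsSmi(field); }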

View File

@ -9,6 +9,8 @@
#include "src/heap/objects-visiting-inl.h"
#include "src/heap/objects-visiting.h"
#include "src/heap/spaces.h"
#include "src/objects/objects.h"
#include "src/snapshot/deserializer.h"
namespace v8 {
namespace internal {
@ -349,8 +351,7 @@ int MarkingVisitorBase<ConcreteVisitor, MarkingState>::VisitWeakCell(
// ===========================================================================
template <typename ConcreteVisitor, typename MarkingState>
size_t
MarkingVisitorBase<ConcreteVisitor, MarkingState>::MarkDescriptorArrayBlack(
int MarkingVisitorBase<ConcreteVisitor, MarkingState>::MarkDescriptorArrayBlack(
DescriptorArray descriptors) {
concrete_visitor()->marking_state()->WhiteToGrey(descriptors);
if (concrete_visitor()->marking_state()->GreyToBlack(descriptors)) {
@ -388,37 +389,65 @@ int MarkingVisitorBase<ConcreteVisitor, MarkingState>::VisitDescriptorArray(
return size;
}
template <typename ConcreteVisitor, typename MarkingState>
int MarkingVisitorBase<ConcreteVisitor, MarkingState>::VisitDescriptorsForMap(
Map map) {
if (!map.CanTransition()) return 0;
// Maps that can transition share their descriptor arrays and require
// special visiting logic to avoid memory leaks.
// Since descriptor arrays are potentially shared, ensure that only the
// descriptors that belong to this map are marked. The first time a
// non-empty descriptor array is marked, its header is also visited. The
// slot holding the descriptor array will be implicitly recorded when the
// pointer fields of this map are visited.
Object maybe_descriptors =
TaggedField<Object, Map::kInstanceDescriptorsOffset>::Acquire_Load(
heap_->isolate(), map);
// If the descriptors are a Smi, then this Map is in the process of being
// deserialized, and doesn't yet have an initialized descriptor field.
if (maybe_descriptors.IsSmi()) {
DCHECK_EQ(maybe_descriptors, Deserializer::uninitialized_field_value());
return 0;
}
DescriptorArray descriptors = DescriptorArray::cast(maybe_descriptors);
// Don't do any special processing of strong descriptor arrays, let them get
// marked through the normal visitor mechanism.
if (descriptors.IsStrongDescriptorArray()) {
return 0;
}
int size = MarkDescriptorArrayBlack(descriptors);
int number_of_own_descriptors = map.NumberOfOwnDescriptors();
if (number_of_own_descriptors) {
// It is possible that the concurrent marker observes the
// number_of_own_descriptors out of sync with the descriptors. In that
// case the marking write barrier for the descriptor array will ensure
// that all required descriptors are marked. The concurrent marker
// just should avoid crashing in that case. That's why we need the
// std::min<int>() below.
VisitDescriptors(descriptors,
std::min<int>(number_of_own_descriptors,
descriptors.number_of_descriptors()));
}
return size;
}
template <typename ConcreteVisitor, typename MarkingState>
int MarkingVisitorBase<ConcreteVisitor, MarkingState>::VisitMap(Map meta_map,
Map map) {
if (!concrete_visitor()->ShouldVisit(map)) return 0;
int size = Map::BodyDescriptor::SizeOf(meta_map, map);
if (map.CanTransition()) {
// Maps that can transition share their descriptor arrays and require
// special visiting logic to avoid memory leaks.
// Since descriptor arrays are potentially shared, ensure that only the
// descriptors that belong to this map are marked. The first time a
// non-empty descriptor array is marked, its header is also visited. The
// slot holding the descriptor array will be implicitly recorded when the
// pointer fields of this map are visited.
DescriptorArray descriptors = map.synchronized_instance_descriptors();
size += MarkDescriptorArrayBlack(descriptors);
int number_of_own_descriptors = map.NumberOfOwnDescriptors();
if (number_of_own_descriptors) {
// It is possible that the concurrent marker observes the
// number_of_own_descriptors out of sync with the descriptors. In that
// case the marking write barrier for the descriptor array will ensure
// that all required descriptors are marked. The concurrent marker
// just should avoid crashing in that case. That's why we need the
// std::min<int>() below.
VisitDescriptors(descriptors,
std::min<int>(number_of_own_descriptors,
descriptors.number_of_descriptors()));
}
// Mark the pointer fields of the Map. Since the transitions array has
// been marked already, it is fine that one of these fields contains a
// pointer to it.
}
size += VisitDescriptorsForMap(map);
// Mark the pointer fields of the Map. If there is a transitions array, it has
// been marked already, so it is fine that one of these fields contains a
// pointer to it.
Map::BodyDescriptor::IterateBody(meta_map, map, size, this);
return size;
}
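
The acquire-load plus Smi check in VisitDescriptorsForMap is one half of a
publication protocol with the deserializer. A sketch of both halves; the
release store on the deserializer side is an interpretation, not code from
this CL:

#include <atomic>
#include <cstdint>

using Tagged = intptr_t;
constexpr Tagged kPlaceholderSmi = 0x7777 << 1;  // Smi-tagged: low bit 0.

struct ToyMap {
  std::atomic<Tagged> instance_descriptors{kPlaceholderSmi};
};

// Deserializer side: publish the real descriptor array pointer with release
// semantics once it is fully initialized.
inline void PublishDescriptors(ToyMap* map, Tagged descriptor_array_ptr) {
  map->instance_descriptors.store(descriptor_array_ptr,
                                  std::memory_order_release);
}

// Concurrent-marker side: acquire-load, and treat any Smi as "not yet
// deserialized", exactly as VisitDescriptorsForMap does above.
inline bool TryLoadDescriptors(const ToyMap& map, Tagged* out) {
  Tagged value = map.instance_descriptors.load(std::memory_order_acquire);
  if ((value & 1) == 0) return false;  // Still the placeholder Smi.
  *out = value;  // Heap pointer (low bit 1): safe to visit now.
  return true;
}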

View File

@ -220,6 +220,9 @@ class MarkingVisitorBase : public HeapVisitor<int, ConcreteVisitor> {
V8_INLINE void VisitDescriptors(DescriptorArray descriptors,
int number_of_own_descriptors);
V8_INLINE int VisitDescriptorsForMap(Map map);
template <typename T>
int VisitEmbedderTracingSubclass(Map map, T object);
V8_INLINE int VisitFixedArrayWithProgressBar(Map map, FixedArray object,
@ -227,7 +230,7 @@ class MarkingVisitorBase : public HeapVisitor<int, ConcreteVisitor> {
// Marks the descriptor array black without pushing it on the marking work
// list and visits its header. Returns the size of the descriptor array
// if it was successfully marked as black.
V8_INLINE size_t MarkDescriptorArrayBlack(DescriptorArray descriptors);
V8_INLINE int MarkDescriptorArrayBlack(DescriptorArray descriptors);
// Marks the object grey and pushes it on the marking work list.
V8_INLINE void MarkObject(HeapObject host, HeapObject obj);

View File

@ -386,6 +386,7 @@ bool Heap::CreateInitialMaps() {
ALLOCATE_PRIMITIVE_MAP(SYMBOL_TYPE, Symbol::kSize, symbol,
Context::SYMBOL_FUNCTION_INDEX)
ALLOCATE_MAP(FOREIGN_TYPE, Foreign::kSize, foreign)
ALLOCATE_VARSIZE_MAP(STRONG_DESCRIPTOR_ARRAY_TYPE, strong_descriptor_array)
ALLOCATE_PRIMITIVE_MAP(ODDBALL_TYPE, Oddball::kSize, boolean,
Context::BOOLEAN_FUNCTION_INDEX);

View File

@ -25,3 +25,7 @@ extern class DescriptorArray extends HeapObject {
enum_cache: EnumCache;
descriptors[number_of_all_descriptors]: DescriptorEntry;
}
// A descriptor array where all values are held strongly.
extern class StrongDescriptorArray extends DescriptorArray
generates 'TNode<DescriptorArray>';

View File

@ -20,6 +20,7 @@
#include "src/objects/shared-function-info.h"
#include "src/objects/templates-inl.h"
#include "src/objects/transitions-inl.h"
#include "src/objects/transitions.h"
#include "src/wasm/wasm-objects-inl.h"
// Has to be the last include (doesn't have include guards):

View File

@ -204,6 +204,7 @@ VisitorId Map::GetVisitorId(Map map) {
return kVisitPropertyCell;
case DESCRIPTOR_ARRAY_TYPE:
case STRONG_DESCRIPTOR_ARRAY_TYPE:
return kVisitDescriptorArray;
case TRANSITION_ARRAY_TYPE:

View File

@ -281,6 +281,7 @@ class ZoneForwardList;
V(ModuleContext) \
V(NonNullForeign) \
V(ScriptContext) \
V(StrongDescriptorArray) \
V(WithContext)
#define HEAP_OBJECT_TYPE_LIST(V) \

View File

@ -947,6 +947,7 @@ ReturnType BodyDescriptorApply(InstanceType type, T1 p1, T2 p2, T3 p3, T4 p4) {
case PROPERTY_ARRAY_TYPE:
return Op::template apply<PropertyArray::BodyDescriptor>(p1, p2, p3, p4);
case DESCRIPTOR_ARRAY_TYPE:
case STRONG_DESCRIPTOR_ARRAY_TYPE:
return Op::template apply<DescriptorArray::BodyDescriptor>(p1, p2, p3,
p4);
case TRANSITION_ARRAY_TYPE:

View File

@ -63,6 +63,7 @@
#include "src/objects/free-space-inl.h"
#include "src/objects/function-kind.h"
#include "src/objects/hash-table-inl.h"
#include "src/objects/instance-type.h"
#include "src/objects/js-array-inl.h"
#include "src/objects/keys.h"
#include "src/objects/lookup-inl.h"
@ -2241,7 +2242,8 @@ int HeapObject::SizeFromMap(Map map) const {
return FeedbackMetadata::SizeFor(
FeedbackMetadata::unchecked_cast(*this).synchronized_slot_count());
}
if (instance_type == DESCRIPTOR_ARRAY_TYPE) {
if (base::IsInRange(instance_type, FIRST_DESCRIPTOR_ARRAY_TYPE,
LAST_DESCRIPTOR_ARRAY_TYPE)) {
return DescriptorArray::SizeFor(
DescriptorArray::unchecked_cast(*this).number_of_all_descriptors());
}
@ -2306,6 +2308,7 @@ int HeapObject::SizeFromMap(Map map) const {
bool HeapObject::NeedsRehashing() const {
switch (map().instance_type()) {
case DESCRIPTOR_ARRAY_TYPE:
case STRONG_DESCRIPTOR_ARRAY_TYPE:
return DescriptorArray::cast(*this).number_of_descriptors() > 1;
case TRANSITION_ARRAY_TYPE:
return TransitionArray::cast(*this).number_of_entries() > 1;
@ -2345,6 +2348,7 @@ bool HeapObject::CanBeRehashed() const {
case SIMPLE_NUMBER_DICTIONARY_TYPE:
return true;
case DESCRIPTOR_ARRAY_TYPE:
case STRONG_DESCRIPTOR_ARRAY_TYPE:
return true;
case TRANSITION_ARRAY_TYPE:
return true;

View File

@ -5,13 +5,13 @@
#ifndef V8_OBJECTS_TRANSITIONS_INL_H_
#define V8_OBJECTS_TRANSITIONS_INL_H_
#include "src/objects/transitions.h"
#include "src/ic/handler-configuration-inl.h"
#include "src/objects/fixed-array-inl.h"
#include "src/objects/maybe-object-inl.h"
#include "src/objects/slots.h"
#include "src/objects/smi.h"
#include "src/objects/transitions.h"
#include "src/snapshot/deserializer.h"
// Has to be the last include (doesn't have include guards):
#include "src/objects/object-macros.h"
@ -157,6 +157,14 @@ bool TransitionArray::GetTargetIfExists(int transition_number, Isolate* isolate,
Map* target) {
MaybeObject raw = GetRawTarget(transition_number);
HeapObject heap_object;
// If the raw target is a Smi, then this TransitionArray is in the process of
// being deserialized, and doesn't yet have an initialized entry for this
// transition.
if (raw.IsSmi()) {
DCHECK(isolate->has_active_deserializer());
DCHECK_EQ(raw.ToSmi(), Deserializer::uninitialized_field_value());
return false;
}
if (raw->GetHeapObjectIfStrong(&heap_object) &&
heap_object.IsUndefined(isolate)) {
return false;

View File

@ -86,6 +86,7 @@ class Symbol;
V(Map, code_data_container_map, CodeDataContainerMap) \
V(Map, coverage_info_map, CoverageInfoMap) \
V(Map, descriptor_array_map, DescriptorArrayMap) \
V(Map, strong_descriptor_array_map, StrongDescriptorArrayMap) \
V(Map, fixed_double_array_map, FixedDoubleArrayMap) \
V(Map, global_dictionary_map, GlobalDictionaryMap) \
V(Map, many_closures_cell_map, ManyClosuresCellMap) \

View File

@ -36,9 +36,7 @@ ScriptData::ScriptData(const byte* data, int length)
CodeSerializer::CodeSerializer(Isolate* isolate, uint32_t source_hash)
: Serializer(isolate, Snapshot::kDefaultSerializerFlags),
source_hash_(source_hash) {
allocator()->UseCustomChunkSize(FLAG_serialization_chunk_size);
}
source_hash_(source_hash) {}
// static
ScriptCompiler::CachedData* CodeSerializer::Serialize(
@ -64,11 +62,11 @@ ScriptCompiler::CachedData* CodeSerializer::Serialize(
// Serialize code object.
Handle<String> source(String::cast(script->source()), isolate);
HandleScope scope(isolate);
CodeSerializer cs(isolate, SerializedCodeData::SourceHash(
source, script->origin_options()));
DisallowGarbageCollection no_gc;
cs.reference_map()->AddAttachedReference(
reinterpret_cast<void*>(source->ptr()));
cs.reference_map()->AddAttachedReference(*source);
ScriptData* script_data = cs.SerializeSharedFunctionInfo(info);
if (FLAG_profile_deserialization) {
@ -100,13 +98,13 @@ ScriptData* CodeSerializer::SerializeSharedFunctionInfo(
return data.GetScriptData();
}
bool CodeSerializer::SerializeReadOnlyObject(HeapObject obj) {
if (!ReadOnlyHeap::Contains(obj)) return false;
bool CodeSerializer::SerializeReadOnlyObject(Handle<HeapObject> obj) {
if (!ReadOnlyHeap::Contains(*obj)) return false;
// For objects on the read-only heap, never serialize the object, but instead
// create a back reference that encodes the page number as the chunk_index and
// the offset within the page as the chunk_offset.
Address address = obj.address();
Address address = obj->address();
BasicMemoryChunk* chunk = BasicMemoryChunk::FromAddress(address);
uint32_t chunk_index = 0;
ReadOnlySpace* const read_only_space = isolate()->heap()->read_only_space();
@ -115,14 +113,13 @@ bool CodeSerializer::SerializeReadOnlyObject(HeapObject obj) {
++chunk_index;
}
uint32_t chunk_offset = static_cast<uint32_t>(chunk->Offset(address));
SerializerReference back_reference = SerializerReference::BackReference(
SnapshotSpace::kReadOnlyHeap, chunk_index, chunk_offset);
reference_map()->Add(reinterpret_cast<void*>(obj.ptr()), back_reference);
CHECK(SerializeBackReference(obj));
sink_.Put(kReadOnlyHeapRef, "ReadOnlyHeapRef");
sink_.PutInt(chunk_index, "ReadOnlyHeapRefChunkIndex");
sink_.PutInt(chunk_offset, "ReadOnlyHeapRefChunkOffset");
return true;
}
void CodeSerializer::SerializeObject(HeapObject obj) {
void CodeSerializer::SerializeObjectImpl(Handle<HeapObject> obj) {
if (SerializeHotObject(obj)) return;
if (SerializeRoot(obj)) return;
@ -131,60 +128,60 @@ void CodeSerializer::SerializeObject(HeapObject obj) {
if (SerializeReadOnlyObject(obj)) return;
CHECK(!obj.IsCode());
CHECK(!obj->IsCode());
ReadOnlyRoots roots(isolate());
if (ElideObject(obj)) {
return SerializeObject(roots.undefined_value());
if (ElideObject(*obj)) {
return SerializeObject(roots.undefined_value_handle());
}
if (obj.IsScript()) {
Script script_obj = Script::cast(obj);
DCHECK_NE(script_obj.compilation_type(), Script::COMPILATION_TYPE_EVAL);
if (obj->IsScript()) {
Handle<Script> script_obj = Handle<Script>::cast(obj);
DCHECK_NE(script_obj->compilation_type(), Script::COMPILATION_TYPE_EVAL);
// We want to differentiate between undefined and uninitialized_symbol for
// context_data for now. It is a hack to allow debugging for scripts that are
// included as a part of custom snapshot. (see debug::Script::IsEmbedded())
Object context_data = script_obj.context_data();
Object context_data = script_obj->context_data();
if (context_data != roots.undefined_value() &&
context_data != roots.uninitialized_symbol()) {
script_obj.set_context_data(roots.undefined_value());
script_obj->set_context_data(roots.undefined_value());
}
// We don't want to serialize host options to avoid serializing unnecessary
// object graph.
FixedArray host_options = script_obj.host_defined_options();
script_obj.set_host_defined_options(roots.empty_fixed_array());
FixedArray host_options = script_obj->host_defined_options();
script_obj->set_host_defined_options(roots.empty_fixed_array());
SerializeGeneric(obj);
script_obj.set_host_defined_options(host_options);
script_obj.set_context_data(context_data);
script_obj->set_host_defined_options(host_options);
script_obj->set_context_data(context_data);
return;
}
if (obj.IsSharedFunctionInfo()) {
SharedFunctionInfo sfi = SharedFunctionInfo::cast(obj);
if (obj->IsSharedFunctionInfo()) {
Handle<SharedFunctionInfo> sfi = Handle<SharedFunctionInfo>::cast(obj);
// TODO(7110): Enable serializing of Asm modules once the AsmWasmData
// is context independent.
DCHECK(!sfi.IsApiFunction() && !sfi.HasAsmWasmData());
DCHECK(!sfi->IsApiFunction() && !sfi->HasAsmWasmData());
DebugInfo debug_info;
BytecodeArray debug_bytecode_array;
if (sfi.HasDebugInfo()) {
if (sfi->HasDebugInfo()) {
// Clear debug info.
debug_info = sfi.GetDebugInfo();
debug_info = sfi->GetDebugInfo();
if (debug_info.HasInstrumentedBytecodeArray()) {
debug_bytecode_array = debug_info.DebugBytecodeArray();
sfi.SetDebugBytecodeArray(debug_info.OriginalBytecodeArray());
sfi->SetDebugBytecodeArray(debug_info.OriginalBytecodeArray());
}
sfi.set_script_or_debug_info(debug_info.script());
sfi->set_script_or_debug_info(debug_info.script());
}
DCHECK(!sfi.HasDebugInfo());
DCHECK(!sfi->HasDebugInfo());
SerializeGeneric(obj);
// Restore debug info
if (!debug_info.is_null()) {
sfi.set_script_or_debug_info(debug_info);
sfi->set_script_or_debug_info(debug_info);
if (!debug_bytecode_array.is_null()) {
sfi.SetDebugBytecodeArray(debug_bytecode_array);
sfi->SetDebugBytecodeArray(debug_bytecode_array);
}
}
return;
@ -197,24 +194,24 @@ void CodeSerializer::SerializeObject(HeapObject obj) {
// --interpreted-frames-native-stack is on. See v8:9122 for more context
#ifndef V8_TARGET_ARCH_ARM
if (V8_UNLIKELY(FLAG_interpreted_frames_native_stack) &&
obj.IsInterpreterData()) {
obj = InterpreterData::cast(obj).bytecode_array();
obj->IsInterpreterData()) {
obj = handle(InterpreterData::cast(*obj).bytecode_array(), isolate());
}
#endif // V8_TARGET_ARCH_ARM
// Past this point we should not see any (context-specific) maps anymore.
CHECK(!obj.IsMap());
CHECK(!obj->IsMap());
// There should be no references to the global object embedded.
CHECK(!obj.IsJSGlobalProxy() && !obj.IsJSGlobalObject());
CHECK(!obj->IsJSGlobalProxy() && !obj->IsJSGlobalObject());
// Embedded FixedArrays that need rehashing must support rehashing.
CHECK_IMPLIES(obj.NeedsRehashing(), obj.CanBeRehashed());
CHECK_IMPLIES(obj->NeedsRehashing(), obj->CanBeRehashed());
// We expect no instantiated function objects or contexts.
CHECK(!obj.IsJSFunction() && !obj.IsContext());
CHECK(!obj->IsJSFunction() && !obj->IsContext());
SerializeGeneric(obj);
}
void CodeSerializer::SerializeGeneric(HeapObject heap_object) {
void CodeSerializer::SerializeGeneric(Handle<HeapObject> heap_object) {
// Object has not yet been serialized. Serialize it here.
ObjectSerializer serializer(this, heap_object, &sink_);
serializer.Serialize();
@ -408,44 +405,29 @@ MaybeHandle<SharedFunctionInfo> CodeSerializer::Deserialize(
SerializedCodeData::SerializedCodeData(const std::vector<byte>* payload,
const CodeSerializer* cs) {
DisallowGarbageCollection no_gc;
std::vector<Reservation> reservations = cs->EncodeReservations();
// Calculate sizes.
uint32_t reservation_size =
static_cast<uint32_t>(reservations.size()) * kUInt32Size;
uint32_t num_stub_keys = 0; // TODO(jgruber): Remove.
uint32_t stub_keys_size = num_stub_keys * kUInt32Size;
uint32_t payload_offset = kHeaderSize + reservation_size + stub_keys_size;
uint32_t padded_payload_offset = POINTER_SIZE_ALIGN(payload_offset);
uint32_t size =
padded_payload_offset + static_cast<uint32_t>(payload->size());
uint32_t size = kHeaderSize + static_cast<uint32_t>(payload->size());
DCHECK(IsAligned(size, kPointerAlignment));
// Allocate backing store and create result data.
AllocateData(size);
// Zero out pre-payload data. Part of that is only used for padding.
memset(data_, 0, padded_payload_offset);
memset(data_, 0, kHeaderSize);
// Set header values.
SetMagicNumber();
SetHeaderValue(kVersionHashOffset, Version::Hash());
SetHeaderValue(kSourceHashOffset, cs->source_hash());
SetHeaderValue(kFlagHashOffset, FlagList::Hash());
SetHeaderValue(kNumReservationsOffset,
static_cast<uint32_t>(reservations.size()));
SetHeaderValue(kPayloadLengthOffset, static_cast<uint32_t>(payload->size()));
// Zero out any padding in the header.
memset(data_ + kUnalignedHeaderSize, 0, kHeaderSize - kUnalignedHeaderSize);
// Copy reservation chunk sizes.
CopyBytes(data_ + kHeaderSize,
reinterpret_cast<const byte*>(reservations.data()),
reservation_size);
// Copy serialized data.
CopyBytes(data_ + padded_payload_offset, payload->data(),
CopyBytes(data_ + kHeaderSize, payload->data(),
static_cast<size_t>(payload->size()));
SetHeaderValue(kChecksumOffset, Checksum(ChecksummedContent()));
@ -464,10 +446,7 @@ SerializedCodeData::SanityCheckResult SerializedCodeData::SanityCheck(
if (version_hash != Version::Hash()) return VERSION_MISMATCH;
if (source_hash != expected_source_hash) return SOURCE_MISMATCH;
if (flags_hash != FlagList::Hash()) return FLAGS_MISMATCH;
uint32_t max_payload_length =
this->size_ -
POINTER_SIZE_ALIGN(kHeaderSize +
GetHeaderValue(kNumReservationsOffset) * kInt32Size);
uint32_t max_payload_length = this->size_ - kHeaderSize;
if (payload_length > max_payload_length) return LENGTH_MISMATCH;
if (Checksum(ChecksummedContent()) != c) return CHECKSUM_MISMATCH;
return CHECK_SUCCESS;
@ -494,20 +473,8 @@ ScriptData* SerializedCodeData::GetScriptData() {
return result;
}
std::vector<SerializedData::Reservation> SerializedCodeData::Reservations()
const {
uint32_t size = GetHeaderValue(kNumReservationsOffset);
std::vector<Reservation> reservations(size);
memcpy(reservations.data(), data_ + kHeaderSize,
size * sizeof(SerializedData::Reservation));
return reservations;
}
Vector<const byte> SerializedCodeData::Payload() const {
int reservations_size = GetHeaderValue(kNumReservationsOffset) * kInt32Size;
int payload_offset = kHeaderSize + reservations_size;
int padded_payload_offset = POINTER_SIZE_ALIGN(payload_offset);
const byte* payload = data_ + padded_payload_offset;
const byte* payload = data_ + kHeaderSize;
DCHECK(IsAligned(reinterpret_cast<intptr_t>(payload), kPointerAlignment));
int length = GetHeaderValue(kPayloadLengthOffset);
DCHECK_EQ(data_ + size_, payload + length);
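
With the reservation fields gone from the header above, back references
are, per the quoted description, plain indices into the deserializer's
growing array of handles. A minimal sketch of that encoding with stand-in
types:

#include <vector>

struct HeapObject {};

// Stand-in for v8::internal::Handle: indirection that survives object moves.
struct Handle {
  HeapObject** location;
};

class ToyDeserializer {
 public:
  // Every newly materialized object is appended in bytecode order, so its
  // position doubles as its back-reference index.
  int RegisterObject(Handle obj) {
    back_refs_.push_back(obj);
    return static_cast<int>(back_refs_.size()) - 1;
  }

  // A back-reference bytecode carries only this index; the handle
  // indirection keeps it valid even if a GC moved the object afterwards.
  Handle GetBackReferencedObject(int index) const { return back_refs_[index]; }

 private:
  std::vector<Handle> back_refs_;
};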

View File

@ -7,6 +7,7 @@
#include "src/base/macros.h"
#include "src/snapshot/serializer.h"
#include "src/snapshot/snapshot-data.h"
namespace v8 {
namespace internal {
@ -61,12 +62,12 @@ class CodeSerializer : public Serializer {
~CodeSerializer() override { OutputStatistics("CodeSerializer"); }
virtual bool ElideObject(Object obj) { return false; }
void SerializeGeneric(HeapObject heap_object);
void SerializeGeneric(Handle<HeapObject> heap_object);
private:
void SerializeObject(HeapObject o) override;
void SerializeObjectImpl(Handle<HeapObject> o) override;
bool SerializeReadOnlyObject(HeapObject obj);
bool SerializeReadOnlyObject(Handle<HeapObject> obj);
DISALLOW_HEAP_ALLOCATION(no_gc_)
uint32_t source_hash_;
@ -92,18 +93,13 @@ class SerializedCodeData : public SerializedData {
// [1] version hash
// [2] source hash
// [3] flag hash
// [4] number of reservation size entries
// [5] payload length
// [6] payload checksum
// ... reservations
// ... code stub keys
// [4] payload length
// [5] payload checksum
// ... serialized payload
static const uint32_t kVersionHashOffset = kMagicNumberOffset + kUInt32Size;
static const uint32_t kSourceHashOffset = kVersionHashOffset + kUInt32Size;
static const uint32_t kFlagHashOffset = kSourceHashOffset + kUInt32Size;
static const uint32_t kNumReservationsOffset = kFlagHashOffset + kUInt32Size;
static const uint32_t kPayloadLengthOffset =
kNumReservationsOffset + kUInt32Size;
static const uint32_t kPayloadLengthOffset = kFlagHashOffset + kUInt32Size;
static const uint32_t kChecksumOffset = kPayloadLengthOffset + kUInt32Size;
static const uint32_t kUnalignedHeaderSize = kChecksumOffset + kUInt32Size;
static const uint32_t kHeaderSize = POINTER_SIZE_ALIGN(kUnalignedHeaderSize);
@ -120,7 +116,6 @@ class SerializedCodeData : public SerializedData {
// Return ScriptData object and relinquish ownership over it to the caller.
ScriptData* GetScriptData();
std::vector<Reservation> Reservations() const;
Vector<const byte> Payload() const;
static uint32_t SourceHash(Handle<String> source,

View File

@ -5,6 +5,7 @@
#include "src/snapshot/context-deserializer.h"
#include "src/api/api-inl.h"
#include "src/common/assert-scope.h"
#include "src/heap/heap-inl.h"
#include "src/objects/slots.h"
#include "src/snapshot/snapshot.h"
@ -31,9 +32,6 @@ MaybeHandle<Object> ContextDeserializer::Deserialize(
Isolate* isolate, Handle<JSGlobalProxy> global_proxy,
v8::DeserializeEmbedderFieldsCallback embedder_fields_deserializer) {
Initialize(isolate);
if (!allocator()->ReserveSpace()) {
V8::FatalProcessOutOfMemory(isolate, "ContextDeserializer");
}
// Replace serialized references to the global proxy and its map with the
// given global proxy and its map.
@ -42,26 +40,17 @@ MaybeHandle<Object> ContextDeserializer::Deserialize(
Handle<Object> result;
{
DisallowGarbageCollection no_gc;
// Keep track of the code space start and end pointers in case new
// code objects were unserialized
CodeSpace* code_space = isolate->heap()->code_space();
Address start_address = code_space->top();
Object root;
VisitRootPointer(Root::kStartupObjectCache, nullptr, FullObjectSlot(&root));
// There's no code deserialized here. If this assert fires then that's
// changed and logging should be added to notify the profiler et al. of
// the new code, which also has to be flushed from instruction cache.
DisallowCodeAllocation no_code_allocation;
result = ReadObject();
DeserializeDeferredObjects();
DeserializeEmbedderFields(embedder_fields_deserializer);
allocator()->RegisterDeserializedObjectsForBlackAllocation();
// There's no code deserialized here. If this assert fires then that's
// changed and logging should be added to notify the profiler et al of the
// new code, which also has to be flushed from instruction cache.
CHECK_EQ(start_address, code_space->top());
LogNewMapEvents();
result = handle(root, isolate);
WeakenDescriptorArrays();
}
if (FLAG_rehash_snapshot && can_rehash()) Rehash();
@ -91,9 +80,7 @@ void ContextDeserializer::DeserializeEmbedderFields(
for (int code = source()->Get(); code != kSynchronize;
code = source()->Get()) {
HandleScope scope(isolate());
SnapshotSpace space = NewObject::Decode(code);
Handle<JSObject> obj(JSObject::cast(GetBackReferencedObject(space)),
isolate());
Handle<JSObject> obj = Handle<JSObject>::cast(GetBackReferencedObject());
int index = source()->GetInt();
int size = source()->GetInt();
// TODO(yangguo,jgruber): Turn this into a reusable shared buffer.

View File

@ -6,6 +6,7 @@
#define V8_SNAPSHOT_CONTEXT_DESERIALIZER_H_
#include "src/snapshot/deserializer.h"
#include "src/snapshot/snapshot-data.h"
#include "src/snapshot/snapshot.h"
namespace v8 {

View File

@ -74,7 +74,6 @@ ContextSerializer::ContextSerializer(
serialize_embedder_fields_(callback),
can_be_rehashed_(true) {
InitializeCodeAddressMap();
allocator()->UseCustomChunkSize(FLAG_serialization_chunk_size);
}
ContextSerializer::~ContextSerializer() {
@ -88,10 +87,8 @@ void ContextSerializer::Serialize(Context* o,
// Upon deserialization, references to the global proxy and its map will be
// replaced.
reference_map()->AddAttachedReference(
reinterpret_cast<void*>(context_.global_proxy().ptr()));
reference_map()->AddAttachedReference(
reinterpret_cast<void*>(context_.global_proxy().map().ptr()));
reference_map()->AddAttachedReference(context_.global_proxy());
reference_map()->AddAttachedReference(context_.global_proxy().map());
// The bootstrap snapshot has a code-stub context. When serializing the
// context snapshot, it is chained into the weak context list on the isolate
@ -123,7 +120,7 @@ void ContextSerializer::Serialize(Context* o,
Pad();
}
void ContextSerializer::SerializeObject(HeapObject obj) {
void ContextSerializer::SerializeObjectImpl(Handle<HeapObject> obj) {
DCHECK(!ObjectIsBytecodeHandler(obj)); // Only referenced in dispatch table.
if (!allow_active_isolate_for_testing()) {
@ -132,7 +129,7 @@ void ContextSerializer::SerializeObject(HeapObject obj) {
// But in test scenarios there is no way to avoid this. Since we only
// serialize a single context in these cases, and this context does not
// have to be executable, we can simply ignore this.
DCHECK_IMPLIES(obj.IsNativeContext(), obj == context_);
DCHECK_IMPLIES(obj->IsNativeContext(), *obj == context_);
}
if (SerializeHotObject(obj)) return;
@ -145,7 +142,7 @@ void ContextSerializer::SerializeObject(HeapObject obj) {
return;
}
if (ShouldBeInTheStartupObjectCache(obj)) {
if (ShouldBeInTheStartupObjectCache(*obj)) {
startup_serializer_->SerializeUsingStartupObjectCache(&sink_, obj);
return;
}
@ -156,31 +153,33 @@ void ContextSerializer::SerializeObject(HeapObject obj) {
DCHECK(!startup_serializer_->ReferenceMapContains(obj));
// All the internalized strings that the context snapshot needs should be
// either in the root table or in the startup object cache.
DCHECK(!obj.IsInternalizedString());
DCHECK(!obj->IsInternalizedString());
// Function and object templates are not context specific.
DCHECK(!obj.IsTemplateInfo());
DCHECK(!obj->IsTemplateInfo());
// Clear literal boilerplates and feedback.
if (obj.IsFeedbackVector()) FeedbackVector::cast(obj).ClearSlots(isolate());
if (obj->IsFeedbackVector()) {
Handle<FeedbackVector>::cast(obj)->ClearSlots(isolate());
}
// Clear InterruptBudget when serializing FeedbackCell.
if (obj.IsFeedbackCell()) {
FeedbackCell::cast(obj).SetInitialInterruptBudget();
if (obj->IsFeedbackCell()) {
Handle<FeedbackCell>::cast(obj)->SetInitialInterruptBudget();
}
if (SerializeJSObjectWithEmbedderFields(obj)) {
return;
}
if (obj.IsJSFunction()) {
if (obj->IsJSFunction()) {
// Unconditionally reset the JSFunction to its SFI's code, since we can't
// serialize optimized code anyway.
JSFunction closure = JSFunction::cast(obj);
closure.ResetIfBytecodeFlushed();
if (closure.is_compiled()) closure.set_code(closure.shared().GetCode());
Handle<JSFunction> closure = Handle<JSFunction>::cast(obj);
closure->ResetIfBytecodeFlushed();
if (closure->is_compiled()) closure->set_code(closure->shared().GetCode());
}
CheckRehashability(obj);
CheckRehashability(*obj);
// Object has not yet been serialized. Serialize it here.
ObjectSerializer serializer(this, obj, &sink_);
@ -204,21 +203,20 @@ namespace {
bool DataIsEmpty(const StartupData& data) { return data.raw_size == 0; }
} // anonymous namespace
bool ContextSerializer::SerializeJSObjectWithEmbedderFields(Object obj) {
if (!obj.IsJSObject()) return false;
JSObject js_obj = JSObject::cast(obj);
int embedder_fields_count = js_obj.GetEmbedderFieldCount();
bool ContextSerializer::SerializeJSObjectWithEmbedderFields(
Handle<HeapObject> obj) {
if (!obj->IsJSObject()) return false;
Handle<JSObject> js_obj = Handle<JSObject>::cast(obj);
int embedder_fields_count = js_obj->GetEmbedderFieldCount();
if (embedder_fields_count == 0) return false;
CHECK_GT(embedder_fields_count, 0);
DCHECK(!js_obj.NeedsRehashing());
DCHECK(!js_obj->NeedsRehashing());
DisallowGarbageCollection no_gc;
DisallowJavascriptExecution no_js(isolate());
DisallowCompilation no_compile(isolate());
HandleScope scope(isolate());
Handle<JSObject> obj_handle(js_obj, isolate());
v8::Local<v8::Object> api_obj = v8::Utils::ToLocal(obj_handle);
v8::Local<v8::Object> api_obj = v8::Utils::ToLocal(js_obj);
std::vector<EmbedderDataSlot::RawData> original_embedder_values;
std::vector<StartupData> serialized_data;
@ -228,7 +226,7 @@ bool ContextSerializer::SerializeJSObjectWithEmbedderFields(Object obj) {
// serializer. For aligned pointers, call the serialize callback. Hold
// onto the result.
for (int i = 0; i < embedder_fields_count; i++) {
EmbedderDataSlot embedder_data_slot(js_obj, i);
EmbedderDataSlot embedder_data_slot(*js_obj, i);
original_embedder_values.emplace_back(
embedder_data_slot.load_raw(isolate(), no_gc));
Object object = embedder_data_slot.load_tagged();
@@ -257,7 +255,7 @@ bool ContextSerializer::SerializeJSObjectWithEmbedderFields(Object obj) {
// with embedder callbacks.
for (int i = 0; i < embedder_fields_count; i++) {
if (!DataIsEmpty(serialized_data[i])) {
EmbedderDataSlot(js_obj, i).store_raw(isolate(), kNullAddress, no_gc);
EmbedderDataSlot(*js_obj, i).store_raw(isolate(), kNullAddress, no_gc);
}
}
@@ -266,9 +264,10 @@ bool ContextSerializer::SerializeJSObjectWithEmbedderFields(Object obj) {
ObjectSerializer(this, js_obj, &sink_).Serialize();
// 4) Obtain back reference for the serialized object.
SerializerReference reference =
reference_map()->LookupReference(reinterpret_cast<void*>(js_obj.ptr()));
DCHECK(reference.is_back_reference());
const SerializerReference* reference =
reference_map()->LookupReference(js_obj);
DCHECK_NOT_NULL(reference);
DCHECK(reference->is_back_reference());
// 5) Write data returned by the embedder callbacks into a separate sink,
// headed by the back reference. Restore the original embedder fields.
@@ -276,13 +275,10 @@ bool ContextSerializer::SerializeJSObjectWithEmbedderFields(Object obj) {
StartupData data = serialized_data[i];
if (DataIsEmpty(data)) continue;
// Restore original values from cleared fields.
EmbedderDataSlot(js_obj, i).store_raw(isolate(),
original_embedder_values[i], no_gc);
embedder_fields_sink_.Put(kNewObject + static_cast<int>(reference.space()),
"embedder field holder");
embedder_fields_sink_.PutInt(reference.chunk_index(), "BackRefChunkIndex");
embedder_fields_sink_.PutInt(reference.chunk_offset(),
"BackRefChunkOffset");
EmbedderDataSlot(*js_obj, i)
.store_raw(isolate(), original_embedder_values[i], no_gc);
embedder_fields_sink_.Put(kNewObject, "embedder field holder");
embedder_fields_sink_.PutInt(reference->back_ref_index(), "BackRefIndex");
embedder_fields_sink_.PutInt(i, "embedder field index");
embedder_fields_sink_.PutInt(data.raw_size, "embedder fields data size");
embedder_fields_sink_.PutRaw(reinterpret_cast<const byte*>(data.data),
                             data.raw_size, "embedder fields data");
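For orientation, a minimal sketch of the per-field record the loop above emits under the new index-based scheme; the struct and field names are illustrative (the real sink writes variable-length ints, not a packed struct):

// Illustrative layout only -- not V8's actual wire format.
struct EmbedderFieldRecord {
  uint8_t bytecode;         // kNewObject; no space tag is needed any more
  uint32_t back_ref_index;  // index into the deserializer's back_refs_ vector
  uint32_t field_index;     // which embedder field of the holder object
  uint32_t data_size;       // length of the embedder-provided payload
  // ...followed by data_size raw bytes of callback data
};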


@@ -28,9 +28,9 @@ class V8_EXPORT_PRIVATE ContextSerializer : public Serializer {
bool can_be_rehashed() const { return can_be_rehashed_; }
private:
void SerializeObject(HeapObject o) override;
void SerializeObjectImpl(Handle<HeapObject> o) override;
bool ShouldBeInTheStartupObjectCache(HeapObject o);
bool SerializeJSObjectWithEmbedderFields(Object obj);
bool SerializeJSObjectWithEmbedderFields(Handle<HeapObject> obj);
void CheckRehashability(HeapObject obj);
StartupSerializer* startup_serializer_;


@@ -1,217 +0,0 @@
// Copyright 2017 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "src/snapshot/deserializer-allocator.h"
#include "src/heap/heap-inl.h" // crbug.com/v8/8499
#include "src/heap/memory-chunk.h"
#include "src/roots/roots.h"
namespace v8 {
namespace internal {
void DeserializerAllocator::Initialize(Heap* heap) {
heap_ = heap;
roots_ = ReadOnlyRoots(heap);
}
// We know the space requirements before deserialization and can
// pre-allocate that reserved space. During deserialization, all we need
// to do is to bump up the pointer for each space in the reserved
// space. This is also used for fixing back references.
// We may have to split up the pre-allocation into several chunks
// because it would not fit onto a single page. We do not have to keep
// track of when to move to the next chunk. An opcode will signal this.
// Since multiple large objects cannot be folded into one large object
// space allocation, we have to do an actual allocation when deserializing
// each large object. Instead of tracking offset for back references, we
// reference large objects by index.
Address DeserializerAllocator::AllocateRaw(SnapshotSpace space, int size) {
const int space_number = static_cast<int>(space);
if (space == SnapshotSpace::kLargeObject) {
// Note that we currently do not support deserialization of large code
// objects.
HeapObject obj;
AlwaysAllocateScope scope(heap_);
OldLargeObjectSpace* lo_space = heap_->lo_space();
AllocationResult result = lo_space->AllocateRaw(size);
obj = result.ToObjectChecked();
deserialized_large_objects_.push_back(obj);
return obj.address();
} else if (space == SnapshotSpace::kMap) {
DCHECK_EQ(Map::kSize, size);
return allocated_maps_[next_map_index_++];
} else {
DCHECK(IsPreAllocatedSpace(space));
Address address = high_water_[space_number];
DCHECK_NE(address, kNullAddress);
high_water_[space_number] += size;
#ifdef DEBUG
// Assert that the current reserved chunk is still big enough.
const Heap::Reservation& reservation = reservations_[space_number];
int chunk_index = current_chunk_[space_number];
DCHECK_LE(high_water_[space_number], reservation[chunk_index].end);
#endif
#ifndef V8_ENABLE_THIRD_PARTY_HEAP
if (space == SnapshotSpace::kCode)
MemoryChunk::FromAddress(address)
->GetCodeObjectRegistry()
->RegisterNewlyAllocatedCodeObject(address);
#endif
return address;
}
}
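The pre-allocated-space branch above is plain bump-pointer allocation. A stripped-down sketch of the same idea, detached from V8's types:

// Advance the space's high-water mark; reservations plus the kNextChunk
// opcode guarantee the current chunk has room for |size| bytes.
uintptr_t BumpAllocate(uintptr_t* high_water, int size) {
  uintptr_t result = *high_water;
  *high_water += size;
  return result;
}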
Address DeserializerAllocator::Allocate(SnapshotSpace space, int size) {
#ifdef DEBUG
if (previous_allocation_start_ != kNullAddress) {
// Make sure that the previous allocation is initialized sufficiently to
// be iterated over by the GC.
Address object_address = previous_allocation_start_;
Address previous_allocation_end =
previous_allocation_start_ + previous_allocation_size_;
while (object_address != previous_allocation_end) {
int object_size = HeapObject::FromAddress(object_address).Size();
DCHECK_GT(object_size, 0);
DCHECK_LE(object_address + object_size, previous_allocation_end);
object_address += object_size;
}
}
#endif
Address address;
HeapObject obj;
// TODO(steveblackburn) Note that the third party heap allocates objects
// at reservation time, which means alignment must be acted on at
// reservation time, not here. Since the current encoding does not
// inform the reservation of the alignment, it must be conservatively
// aligned.
//
// A more general approach will be to avoid reservation altogether, and
// instead of chunk index/offset encoding, simply encode backreferences
// by index (this can be optimized by applying something like register
// allocation to keep the metadata needed to record the in-flight
// backreferences minimal). This has the significant advantage of
// abstracting away the details of the memory allocator from this code.
// At each allocation, the regular allocator performs allocation,
// and a fixed-sized table is used to track and fix all back references.
if (V8_ENABLE_THIRD_PARTY_HEAP_BOOL) {
address = AllocateRaw(space, size);
} else if (next_alignment_ != kWordAligned) {
const int reserved = size + Heap::GetMaximumFillToAlign(next_alignment_);
address = AllocateRaw(space, reserved);
obj = HeapObject::FromAddress(address);
// If one of the following assertions fails, then we are deserializing an
// aligned object when the filler maps have not been deserialized yet.
// We require filler maps as padding to align the object.
DCHECK(roots_.free_space_map().IsMap());
DCHECK(roots_.one_pointer_filler_map().IsMap());
DCHECK(roots_.two_pointer_filler_map().IsMap());
obj = Heap::AlignWithFiller(roots_, obj, size, reserved, next_alignment_);
address = obj.address();
next_alignment_ = kWordAligned;
} else {
address = AllocateRaw(space, size);
}
#ifdef DEBUG
previous_allocation_start_ = address;
previous_allocation_size_ = size;
#endif
return address;
}
void DeserializerAllocator::MoveToNextChunk(SnapshotSpace space) {
DCHECK(IsPreAllocatedSpace(space));
const int space_number = static_cast<int>(space);
uint32_t chunk_index = current_chunk_[space_number];
const Heap::Reservation& reservation = reservations_[space_number];
// Make sure the current chunk is indeed exhausted.
CHECK_EQ(reservation[chunk_index].end, high_water_[space_number]);
// Move to next reserved chunk.
chunk_index = ++current_chunk_[space_number];
CHECK_LT(chunk_index, reservation.size());
high_water_[space_number] = reservation[chunk_index].start;
}
HeapObject DeserializerAllocator::GetMap(uint32_t index) {
DCHECK_LT(index, next_map_index_);
return HeapObject::FromAddress(allocated_maps_[index]);
}
HeapObject DeserializerAllocator::GetLargeObject(uint32_t index) {
DCHECK_LT(index, deserialized_large_objects_.size());
return deserialized_large_objects_[index];
}
HeapObject DeserializerAllocator::GetObject(SnapshotSpace space,
uint32_t chunk_index,
uint32_t chunk_offset) {
DCHECK(IsPreAllocatedSpace(space));
const int space_number = static_cast<int>(space);
DCHECK_LE(chunk_index, current_chunk_[space_number]);
Address address =
reservations_[space_number][chunk_index].start + chunk_offset;
if (next_alignment_ != kWordAligned) {
int padding = Heap::GetFillToAlign(address, next_alignment_);
next_alignment_ = kWordAligned;
DCHECK(padding == 0 ||
HeapObject::FromAddress(address).IsFreeSpaceOrFiller());
address += padding;
}
return HeapObject::FromAddress(address);
}
void DeserializerAllocator::DecodeReservation(
const std::vector<SerializedData::Reservation>& res) {
DCHECK_EQ(0, reservations_[0].size());
int current_space = 0;
for (auto& r : res) {
reservations_[current_space].push_back(
{r.chunk_size(), kNullAddress, kNullAddress});
if (r.is_last()) current_space++;
}
DCHECK_EQ(kNumberOfSpaces, current_space);
for (int i = 0; i < kNumberOfPreallocatedSpaces; i++) current_chunk_[i] = 0;
}
bool DeserializerAllocator::ReserveSpace() {
#ifdef DEBUG
for (int i = 0; i < kNumberOfSpaces; ++i) {
DCHECK_GT(reservations_[i].size(), 0);
}
#endif // DEBUG
DCHECK(allocated_maps_.empty());
// TODO(v8:7464): Allocate using the off-heap ReadOnlySpace here once
// implemented.
if (!heap_->ReserveSpace(reservations_, &allocated_maps_)) {
return false;
}
for (int i = 0; i < kNumberOfPreallocatedSpaces; i++) {
high_water_[i] = reservations_[i][0].start;
}
return true;
}
bool DeserializerAllocator::ReservationsAreFullyUsed() const {
for (int space = 0; space < kNumberOfPreallocatedSpaces; space++) {
const uint32_t chunk_index = current_chunk_[space];
if (reservations_[space].size() != chunk_index + 1) {
return false;
}
if (reservations_[space][chunk_index].end != high_water_[space]) {
return false;
}
}
return (allocated_maps_.size() == next_map_index_);
}
void DeserializerAllocator::RegisterDeserializedObjectsForBlackAllocation() {
heap_->RegisterDeserializedObjectsForBlackAllocation(
reservations_, deserialized_large_objects_, allocated_maps_);
}
} // namespace internal
} // namespace v8


@@ -1,104 +0,0 @@
// Copyright 2017 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef V8_SNAPSHOT_DESERIALIZER_ALLOCATOR_H_
#define V8_SNAPSHOT_DESERIALIZER_ALLOCATOR_H_
#include "src/common/globals.h"
#include "src/heap/heap.h"
#include "src/objects/heap-object.h"
#include "src/roots/roots.h"
#include "src/snapshot/references.h"
#include "src/snapshot/snapshot-data.h"
namespace v8 {
namespace internal {
class Deserializer;
class StartupDeserializer;
class DeserializerAllocator final {
public:
DeserializerAllocator() = default;
void Initialize(Heap* heap);
// ------- Allocation Methods -------
// Methods related to memory allocation during deserialization.
Address Allocate(SnapshotSpace space, int size);
void MoveToNextChunk(SnapshotSpace space);
void SetAlignment(AllocationAlignment alignment) {
DCHECK_EQ(kWordAligned, next_alignment_);
DCHECK_LE(kWordAligned, alignment);
DCHECK_LE(alignment, kDoubleUnaligned);
next_alignment_ = static_cast<AllocationAlignment>(alignment);
}
HeapObject GetMap(uint32_t index);
HeapObject GetLargeObject(uint32_t index);
HeapObject GetObject(SnapshotSpace space, uint32_t chunk_index,
uint32_t chunk_offset);
// ------- Reservation Methods -------
// Methods related to memory reservations (prior to deserialization).
V8_EXPORT_PRIVATE void DecodeReservation(
const std::vector<SerializedData::Reservation>& res);
bool ReserveSpace();
bool ReservationsAreFullyUsed() const;
// ------- Misc Utility Methods -------
void RegisterDeserializedObjectsForBlackAllocation();
private:
// Raw allocation without considering alignment.
Address AllocateRaw(SnapshotSpace space, int size);
private:
static constexpr int kNumberOfPreallocatedSpaces =
static_cast<int>(SnapshotSpace::kNumberOfPreallocatedSpaces);
static constexpr int kNumberOfSpaces =
static_cast<int>(SnapshotSpace::kNumberOfSpaces);
// The address of the next object that will be allocated in each space.
// Each space has a number of chunks reserved by the GC, with each chunk
// fitting into a page. Deserialized objects are allocated into the
// current chunk of the target space by bumping up high water mark.
Heap::Reservation reservations_[kNumberOfSpaces];
uint32_t current_chunk_[kNumberOfPreallocatedSpaces];
Address high_water_[kNumberOfPreallocatedSpaces];
#ifdef DEBUG
// Record the previous object allocated for DCHECKs.
Address previous_allocation_start_ = kNullAddress;
int previous_allocation_size_ = 0;
#endif
// The alignment of the next allocation.
AllocationAlignment next_alignment_ = kWordAligned;
// All required maps are pre-allocated during reservation. {next_map_index_}
// stores the index of the next map to return from allocation.
uint32_t next_map_index_ = 0;
std::vector<Address> allocated_maps_;
// Allocated large objects are kept in this map and may be fetched later as
// back-references.
std::vector<HeapObject> deserialized_large_objects_;
// ReadOnlyRoots and heap are null until Initialize is called.
Heap* heap_ = nullptr;
ReadOnlyRoots roots_ = ReadOnlyRoots(static_cast<Address*>(nullptr));
DISALLOW_COPY_AND_ASSIGN(DeserializerAllocator);
};
} // namespace internal
} // namespace v8
#endif // V8_SNAPSHOT_DESERIALIZER_ALLOCATOR_H_

(File diff suppressed because it is too large.)


@@ -17,7 +17,6 @@
#include "src/objects/map.h"
#include "src/objects/string-table.h"
#include "src/objects/string.h"
#include "src/snapshot/deserializer-allocator.h"
#include "src/snapshot/serializer-deserializer.h"
#include "src/snapshot/snapshot-source-sink.h"
@@ -40,6 +39,12 @@ class Object;
// A Deserializer reads a snapshot and reconstructs the Object graph it defines.
class V8_EXPORT_PRIVATE Deserializer : public SerializerDeserializer {
public:
// Smi value for filling in not-yet initialized tagged field values with a
// valid tagged pointer. A field value equal to this doesn't necessarily
// indicate that a field is uninitialized, but an uninitialized field should
// definitely equal this value.
static constexpr Smi uninitialized_field_value() { return Smi(0xdeadbed0); }
~Deserializer() override;
void SetRehashability(bool v) { can_rehash_ = v; }
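A hedged sketch of how the magic filler value above can be tested; the helper is assumed, not part of the patch, and per the comment a match only means the field may be uninitialized:

// Assumed helper for illustration.
inline bool PossiblyUninitializedField(Object value) {
  return value == Deserializer::uninitialized_field_value();
}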
@@ -54,7 +59,6 @@ class V8_EXPORT_PRIVATE Deserializer : public SerializerDeserializer {
magic_number_(data->GetMagicNumber()),
deserializing_user_code_(deserializing_user_code),
can_rehash_(false) {
allocator()->DecodeReservation(data->Reservations());
// We start the indices here at 1, so that we can distinguish between an
// actual index and a nullptr (serialized as kNullRefSentinel) in a
// deserialized object requiring fix-up.
@@ -70,9 +74,14 @@ class V8_EXPORT_PRIVATE Deserializer : public SerializerDeserializer {
void LogScriptEvents(Script script);
void LogNewMapEvents();
// Descriptor arrays are deserialized as "strong", so that there is no risk of
// them getting trimmed during a partial deserialization. This method makes
// them "weak" again after deserialization completes.
void WeakenDescriptorArrays();
// This returns the address of an object that has been described in the
// snapshot by chunk index and offset.
HeapObject GetBackReferencedObject(SnapshotSpace space);
// snapshot by object vector index.
Handle<HeapObject> GetBackReferencedObject();
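Resolving such a back-reference is then a plain vector lookup, roughly (illustrative shape, not the exact method body):

// back_refs_ holds a Handle for every object materialized so far, in
// deserialization order; the bytecode just carries an index into it.
Handle<HeapObject> LookupBackRef(
    const std::vector<Handle<HeapObject>>& back_refs, uint32_t index) {
  DCHECK_LT(index, back_refs.size());
  return back_refs[index];
}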
// Add an object to back an attached reference. Objects must be added in the
// same order as they were added on the serializer side.
@@ -87,17 +96,17 @@ class V8_EXPORT_PRIVATE Deserializer : public SerializerDeserializer {
Isolate* isolate() const { return isolate_; }
SnapshotByteSource* source() { return &source_; }
const std::vector<AllocationSite>& new_allocation_sites() const {
const std::vector<Handle<AllocationSite>>& new_allocation_sites() const {
return new_allocation_sites_;
}
const std::vector<Code>& new_code_objects() const {
const std::vector<Handle<Code>>& new_code_objects() const {
return new_code_objects_;
}
const std::vector<Map>& new_maps() const { return new_maps_; }
const std::vector<AccessorInfo>& accessor_infos() const {
const std::vector<Handle<Map>>& new_maps() const { return new_maps_; }
const std::vector<Handle<AccessorInfo>>& accessor_infos() const {
return accessor_infos_;
}
const std::vector<CallHandlerInfo>& call_handler_infos() const {
const std::vector<Handle<CallHandlerInfo>>& call_handler_infos() const {
return call_handler_infos_;
}
const std::vector<Handle<Script>>& new_scripts() const {
@@ -108,76 +117,67 @@ class V8_EXPORT_PRIVATE Deserializer : public SerializerDeserializer {
return new_off_heap_array_buffers_;
}
const std::vector<Handle<DescriptorArray>>& new_descriptor_arrays() const {
return new_descriptor_arrays_;
}
std::shared_ptr<BackingStore> backing_store(size_t i) {
DCHECK_LT(i, backing_stores_.size());
return backing_stores_[i];
}
DeserializerAllocator* allocator() { return &allocator_; }
bool deserializing_user_code() const { return deserializing_user_code_; }
bool can_rehash() const { return can_rehash_; }
void Rehash();
Handle<HeapObject> ReadObject();
private:
class RelocInfoVisitor;
void VisitRootPointers(Root root, const char* description,
FullObjectSlot start, FullObjectSlot end) override;
void Synchronize(VisitorSynchronization::SyncTag tag) override;
template <typename TSlot>
inline TSlot Write(TSlot dest, MaybeObject value);
inline int WriteAddress(TSlot dest, Address value);
template <typename TSlot>
inline TSlot Write(TSlot dest, HeapObject value,
HeapObjectReferenceType type);
inline int WriteExternalPointer(TSlot dest, Address value);
template <typename TSlot>
inline TSlot WriteAddress(TSlot dest, Address value);
// Fills in a heap object's data from start to end (exclusive). Start and end
// are slot indices within the object.
void ReadData(Handle<HeapObject> object, int start_slot_index,
int end_slot_index);
template <typename TSlot>
inline TSlot WriteExternalPointer(TSlot dest, Address value);
// Fills in some heap data in an area from start to end (non-inclusive). The
// object_address is the address of the object we are writing into, or nullptr
// if we are not writing into an object, i.e. if we are writing a series of
// tagged values that are not on the heap.
template <typename TSlot>
void ReadData(TSlot start, TSlot end, Address object_address);
// Fills in a contiguous range of full object slots (e.g. root pointers) from
// start to end (exclusive).
void ReadData(FullMaybeObjectSlot start, FullMaybeObjectSlot end);
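Because slots are now addressed relative to a Handle rather than as raw pointers, each access recomputes the slot address, along these lines (illustrative helper built on HeapObject::RawField):

// Recompute the slot from the handle on every use, so writes stay valid
// even if the object moved since the previous slot was filled.
inline ObjectSlot SlotAt(Handle<HeapObject> object, int slot_index) {
  return object->RawField(slot_index * kTaggedSize);
}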
// Helper for ReadData which reads the given bytecode and fills in some heap
// data into the given slot. May fill in zero or multiple slots, so it returns
// the next unfilled slot.
template <typename TSlot>
TSlot ReadSingleBytecodeData(byte data, TSlot current,
Address object_address);
// the number of slots filled.
template <typename SlotAccessor>
int ReadSingleBytecodeData(byte data, SlotAccessor slot_accessor);
// A helper function for ReadData for reading external references.
inline Address ReadExternalReferenceCase();
HeapObject ReadObject(SnapshotSpace space_number);
HeapObject ReadMetaMap();
void ReadCodeObjectBody(Address code_object_address);
Handle<HeapObject> ReadObject(SnapshotSpace space_number);
Handle<HeapObject> ReadMetaMap();
HeapObjectReferenceType GetAndResetNextReferenceType();
protected:
HeapObject ReadObject();
public:
void VisitCodeTarget(Code host, RelocInfo* rinfo);
void VisitEmbeddedPointer(Code host, RelocInfo* rinfo);
void VisitRuntimeEntry(Code host, RelocInfo* rinfo);
void VisitExternalReference(Code host, RelocInfo* rinfo);
void VisitInternalReference(Code host, RelocInfo* rinfo);
void VisitOffHeapTarget(Code host, RelocInfo* rinfo);
private:
template <typename TSlot>
TSlot ReadRepeatedObject(TSlot current, int repeat_count);
template <typename SlotGetter>
int ReadRepeatedObject(SlotGetter slot_getter, int repeat_count);
// Special handling for serialized code like hooking up internalized strings.
HeapObject PostProcessNewObject(HeapObject obj, SnapshotSpace space);
void PostProcessNewObject(Handle<HeapObject> obj, SnapshotSpace space);
HeapObject Allocate(SnapshotSpace space, int size,
AllocationAlignment alignment);
// Cached current isolate.
Isolate* isolate_;
@@ -188,15 +188,19 @@ class V8_EXPORT_PRIVATE Deserializer : public SerializerDeserializer {
SnapshotByteSource source_;
uint32_t magic_number_;
std::vector<Map> new_maps_;
std::vector<AllocationSite> new_allocation_sites_;
std::vector<Code> new_code_objects_;
std::vector<AccessorInfo> accessor_infos_;
std::vector<CallHandlerInfo> call_handler_infos_;
std::vector<Handle<Map>> new_maps_;
std::vector<Handle<AllocationSite>> new_allocation_sites_;
std::vector<Handle<Code>> new_code_objects_;
std::vector<Handle<AccessorInfo>> accessor_infos_;
std::vector<Handle<CallHandlerInfo>> call_handler_infos_;
std::vector<Handle<Script>> new_scripts_;
std::vector<Handle<JSArrayBuffer>> new_off_heap_array_buffers_;
std::vector<Handle<DescriptorArray>> new_descriptor_arrays_;
std::vector<std::shared_ptr<BackingStore>> backing_stores_;
// Vector of allocated objects that can be accessed by a backref, by index.
std::vector<Handle<HeapObject>> back_refs_;
// Unresolved forward references (registered with kRegisterPendingForwardRef)
// are collected in order as (object, field offset) pairs. The subsequent
// forward ref resolution (with kResolvePendingForwardRef) accesses this
@@ -204,32 +208,32 @@ class V8_EXPORT_PRIVATE Deserializer : public SerializerDeserializer {
//
// The vector is cleared when there are no more unresolved forward refs.
struct UnresolvedForwardRef {
UnresolvedForwardRef(HeapObject object, int offset,
UnresolvedForwardRef(Handle<HeapObject> object, int offset,
HeapObjectReferenceType ref_type)
: object(object), offset(offset), ref_type(ref_type) {}
HeapObject object;
Handle<HeapObject> object;
int offset;
HeapObjectReferenceType ref_type;
};
std::vector<UnresolvedForwardRef> unresolved_forward_refs_;
int num_unresolved_forward_refs_ = 0;
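A hedged sketch of the resolution step this describes; the real code also honours ref_type and emits write barriers:

// Illustrative only: patch every recorded (object, offset) slot with the
// now-available target, then drop the pending entries.
void ResolveForwardRefs(std::vector<UnresolvedForwardRef>* refs,
                        HeapObject target) {
  for (const UnresolvedForwardRef& ref : *refs) {
    ObjectSlot slot(ref.object->address() + ref.offset);
    slot.store(target);  // real code: weak/strong tagging + write barrier
  }
  refs->clear();
}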
DeserializerAllocator allocator_;
const bool deserializing_user_code_;
bool next_reference_is_weak_ = false;
// TODO(6593): generalize rehashing, and remove this flag.
bool can_rehash_;
std::vector<HeapObject> to_rehash_;
std::vector<Handle<HeapObject>> to_rehash_;
#ifdef DEBUG
uint32_t num_api_references_;
#endif // DEBUG
// For source(), isolate(), and allocator().
friend class DeserializerAllocator;
// Record the previous object allocated for DCHECKs.
Handle<HeapObject> previous_allocation_obj_;
int previous_allocation_size_ = 0;
#endif // DEBUG
DISALLOW_COPY_AND_ASSIGN(Deserializer);
};


@@ -41,21 +41,17 @@ ObjectDeserializer::DeserializeSharedFunctionInfoOffThread(
MaybeHandle<HeapObject> ObjectDeserializer::Deserialize(Isolate* isolate) {
Initialize(isolate);
if (!allocator()->ReserveSpace()) return MaybeHandle<HeapObject>();
DCHECK(deserializing_user_code());
HandleScope scope(isolate);
Handle<HeapObject> result;
{
DisallowGarbageCollection no_gc;
Object root;
VisitRootPointer(Root::kStartupObjectCache, nullptr, FullObjectSlot(&root));
result = ReadObject();
DeserializeDeferredObjects();
CHECK(new_code_objects().empty());
LinkAllocationSites();
LogNewMapEvents();
result = handle(HeapObject::cast(root), isolate);
allocator()->RegisterDeserializedObjectsForBlackAllocation();
CHECK(new_maps().empty());
WeakenDescriptorArrays();
}
Rehash();
@@ -77,10 +73,10 @@ void ObjectDeserializer::CommitPostProcessedObjects() {
script->set_id(isolate()->GetNextScriptId());
LogScriptEvents(*script);
// Add script to list.
Handle<WeakArrayList> list = isolate()->factory()->script_list();
list = WeakArrayList::AddToEnd(isolate(), list,
MaybeObjectHandle::Weak(script));
isolate()->heap()->SetRootScriptList(*list);
Handle<WeakArrayList> list = isolate()->factory()->script_list();
list = WeakArrayList::AddToEnd(isolate(), list,
MaybeObjectHandle::Weak(script));
isolate()->heap()->SetRootScriptList(*list);
}
}
@@ -89,17 +85,17 @@ void ObjectDeserializer::LinkAllocationSites() {
Heap* heap = isolate()->heap();
// Allocation sites are present in the snapshot, and must be linked into
// a list at deserialization time.
for (AllocationSite site : new_allocation_sites()) {
if (!site.HasWeakNext()) continue;
for (Handle<AllocationSite> site : new_allocation_sites()) {
if (!site->HasWeakNext()) continue;
// TODO(mvstanton): consider treating the heap()->allocation_sites_list()
// as a (weak) root. If this root is relocated correctly, this becomes
// unnecessary.
if (heap->allocation_sites_list() == Smi::zero()) {
site.set_weak_next(ReadOnlyRoots(heap).undefined_value());
site->set_weak_next(ReadOnlyRoots(heap).undefined_value());
} else {
site.set_weak_next(heap->allocation_sites_list());
site->set_weak_next(heap->allocation_sites_list());
}
heap->set_allocation_sites_list(site);
heap->set_allocation_sites_list(*site);
}
}


@@ -16,10 +16,7 @@ namespace internal {
void ReadOnlyDeserializer::DeserializeInto(Isolate* isolate) {
Initialize(isolate);
if (!allocator()->ReserveSpace()) {
V8::FatalProcessOutOfMemory(isolate, "ReadOnlyDeserializer");
}
HandleScope scope(isolate);
ReadOnlyHeap* ro_heap = isolate->read_only_heap();
@@ -35,7 +32,6 @@ void ReadOnlyDeserializer::DeserializeInto(Isolate* isolate) {
DCHECK(!isolate->builtins()->is_initialized());
{
DisallowGarbageCollection no_gc;
ReadOnlyRoots roots(isolate);
roots.Iterate(this);


@@ -6,6 +6,7 @@
#define V8_SNAPSHOT_READ_ONLY_DESERIALIZER_H_
#include "src/snapshot/deserializer.h"
#include "src/snapshot/snapshot-data.h"
#include "src/snapshot/snapshot.h"
namespace v8 {


@@ -18,32 +18,51 @@ namespace internal {
ReadOnlySerializer::ReadOnlySerializer(Isolate* isolate,
Snapshot::SerializerFlags flags)
: RootsSerializer(isolate, flags, RootIndex::kFirstReadOnlyRoot) {
: RootsSerializer(isolate, flags, RootIndex::kFirstReadOnlyRoot)
#ifdef DEBUG
,
serialized_objects_(isolate->heap()),
did_serialize_not_mapped_symbol_(false)
#endif
{
STATIC_ASSERT(RootIndex::kFirstReadOnlyRoot == RootIndex::kFirstRoot);
allocator()->UseCustomChunkSize(FLAG_serialization_chunk_size);
}
ReadOnlySerializer::~ReadOnlySerializer() {
OutputStatistics("ReadOnlySerializer");
}
void ReadOnlySerializer::SerializeObject(HeapObject obj) {
CHECK(ReadOnlyHeap::Contains(obj));
CHECK_IMPLIES(obj.IsString(), obj.IsInternalizedString());
void ReadOnlySerializer::SerializeObjectImpl(Handle<HeapObject> obj) {
CHECK(ReadOnlyHeap::Contains(*obj));
CHECK_IMPLIES(obj->IsString(), obj->IsInternalizedString());
if (SerializeHotObject(obj)) return;
if (IsRootAndHasBeenSerialized(obj) && SerializeRoot(obj)) {
return;
// There should be no references to the not_mapped_symbol except for the entry
// in the root table, so don't try to serialize a reference and rely on the
// below CHECK(!did_serialize_not_mapped_symbol_) to make sure it doesn't
// serialize twice.
if (*obj != ReadOnlyRoots(isolate()).not_mapped_symbol()) {
if (SerializeHotObject(obj)) return;
if (IsRootAndHasBeenSerialized(*obj) && SerializeRoot(obj)) {
return;
}
if (SerializeBackReference(obj)) return;
}
if (SerializeBackReference(obj)) return;
CheckRehashability(obj);
CheckRehashability(*obj);
// Object has not yet been serialized. Serialize it here.
ObjectSerializer object_serializer(this, obj, &sink_);
object_serializer.Serialize();
#ifdef DEBUG
serialized_objects_.insert(obj);
if (*obj == ReadOnlyRoots(isolate()).not_mapped_symbol()) {
CHECK(!did_serialize_not_mapped_symbol_);
did_serialize_not_mapped_symbol_ = true;
} else {
CHECK_NULL(serialized_objects_.Find(obj));
// There's no "IdentitySet", so use an IdentityMap with a value that is
// later ignored.
serialized_objects_.Set(obj, 0);
}
#endif
}
@@ -73,7 +92,11 @@ void ReadOnlySerializer::FinalizeSerialization() {
ReadOnlyHeapObjectIterator iterator(isolate()->read_only_heap());
for (HeapObject object = iterator.Next(); !object.is_null();
object = iterator.Next()) {
CHECK(serialized_objects_.count(object));
if (object == ReadOnlyRoots(isolate()).not_mapped_symbol()) {
CHECK(did_serialize_not_mapped_symbol_);
} else {
CHECK_NOT_NULL(serialized_objects_.Find(object));
}
}
#endif
}
@@ -92,8 +115,8 @@ bool ReadOnlySerializer::MustBeDeferred(HeapObject object) {
}
bool ReadOnlySerializer::SerializeUsingReadOnlyObjectCache(
SnapshotByteSink* sink, HeapObject obj) {
if (!ReadOnlyHeap::Contains(obj)) return false;
SnapshotByteSink* sink, Handle<HeapObject> obj) {
if (!ReadOnlyHeap::Contains(*obj)) return false;
// Get the cache index and serialize it into the read-only snapshot if
// necessary.


@@ -7,6 +7,7 @@
#include <unordered_set>
#include "src/base/hashmap.h"
#include "src/snapshot/roots-serializer.h"
namespace v8 {
@@ -31,14 +32,15 @@ class V8_EXPORT_PRIVATE ReadOnlySerializer : public RootsSerializer {
// ReadOnlyObjectCache bytecode into |sink|. Returns whether this was
// successful.
bool SerializeUsingReadOnlyObjectCache(SnapshotByteSink* sink,
HeapObject obj);
Handle<HeapObject> obj);
private:
void SerializeObject(HeapObject o) override;
void SerializeObjectImpl(Handle<HeapObject> o) override;
bool MustBeDeferred(HeapObject object) override;
#ifdef DEBUG
std::unordered_set<HeapObject, Object::Hasher> serialized_objects_;
IdentityMap<int, base::DefaultAllocationPolicy> serialized_objects_;
bool did_serialize_not_mapped_symbol_;
#endif
DISALLOW_COPY_AND_ASSIGN(ReadOnlySerializer);
};


@@ -8,78 +8,42 @@
#include "src/base/bit-field.h"
#include "src/base/hashmap.h"
#include "src/common/assert-scope.h"
#include "src/execution/isolate.h"
#include "src/utils/identity-map.h"
namespace v8 {
namespace internal {
// TODO(goszczycki): Move this somewhere every file in src/snapshot can use it.
// The spaces suported by the serializer. Spaces after LO_SPACE (NEW_LO_SPACE
// and CODE_LO_SPACE) are not supported.
enum class SnapshotSpace : byte {
kReadOnlyHeap = RO_SPACE,
kOld = OLD_SPACE,
kCode = CODE_SPACE,
kMap = MAP_SPACE,
kLargeObject = LO_SPACE,
kNumberOfPreallocatedSpaces = kCode + 1,
kNumberOfSpaces = kLargeObject + 1,
kSpecialValueSpace = kNumberOfSpaces,
// Number of spaces which should be allocated by the heap. Eventually
// kReadOnlyHeap will move to the end of this enum and this will be equal to
// it.
kNumberOfHeapSpaces = kNumberOfSpaces,
kReadOnlyHeap,
kOld,
kCode,
kMap,
};
constexpr bool IsPreAllocatedSpace(SnapshotSpace space) {
return static_cast<int>(space) <
static_cast<int>(SnapshotSpace::kNumberOfPreallocatedSpaces);
}
static constexpr int kNumberOfSnapshotSpaces =
static_cast<int>(SnapshotSpace::kMap) + 1;
class SerializerReference {
private:
enum SpecialValueType {
kInvalidValue,
kBackReference,
kAttachedReference,
kOffHeapBackingStore,
kBuiltinReference,
};
STATIC_ASSERT(static_cast<int>(SnapshotSpace::kSpecialValueSpace) <
(1 << kSpaceTagSize));
SerializerReference(SpecialValueType type, uint32_t value)
: bitfield_(SpaceBits::encode(SnapshotSpace::kSpecialValueSpace) |
SpecialValueTypeBits::encode(type)),
value_(value) {}
: bit_field_(TypeBits::encode(type) | ValueBits::encode(value)) {}
public:
SerializerReference() : SerializerReference(kInvalidValue, 0) {}
SerializerReference(SnapshotSpace space, uint32_t chunk_index,
uint32_t chunk_offset)
: bitfield_(SpaceBits::encode(space) |
ChunkIndexBits::encode(chunk_index)),
value_(chunk_offset) {}
static SerializerReference BackReference(SnapshotSpace space,
uint32_t chunk_index,
uint32_t chunk_offset) {
DCHECK(IsAligned(chunk_offset, kObjectAlignment));
return SerializerReference(space, chunk_index, chunk_offset);
}
static SerializerReference MapReference(uint32_t index) {
return SerializerReference(SnapshotSpace::kMap, 0, index);
static SerializerReference BackReference(uint32_t index) {
return SerializerReference(kBackReference, index);
}
static SerializerReference OffHeapBackingStoreReference(uint32_t index) {
return SerializerReference(kOffHeapBackingStore, index);
}
static SerializerReference LargeObjectReference(uint32_t index) {
return SerializerReference(SnapshotSpace::kLargeObject, 0, index);
}
static SerializerReference AttachedReference(uint32_t index) {
return SerializerReference(kAttachedReference, index);
}
@@ -88,127 +52,94 @@ class SerializerReference {
return SerializerReference(kBuiltinReference, index);
}
bool is_valid() const {
return SpaceBits::decode(bitfield_) != SnapshotSpace::kSpecialValueSpace ||
SpecialValueTypeBits::decode(bitfield_) != kInvalidValue;
}
bool is_back_reference() const {
return SpaceBits::decode(bitfield_) != SnapshotSpace::kSpecialValueSpace;
return TypeBits::decode(bit_field_) == kBackReference;
}
SnapshotSpace space() const {
uint32_t back_ref_index() const {
DCHECK(is_back_reference());
return SpaceBits::decode(bitfield_);
}
uint32_t chunk_offset() const {
DCHECK(is_back_reference());
return value_;
}
uint32_t chunk_index() const {
DCHECK(IsPreAllocatedSpace(space()));
return ChunkIndexBits::decode(bitfield_);
}
uint32_t map_index() const {
DCHECK_EQ(SnapshotSpace::kMap, SpaceBits::decode(bitfield_));
return value_;
return ValueBits::decode(bit_field_);
}
bool is_off_heap_backing_store_reference() const {
return SpaceBits::decode(bitfield_) == SnapshotSpace::kSpecialValueSpace &&
SpecialValueTypeBits::decode(bitfield_) == kOffHeapBackingStore;
return TypeBits::decode(bit_field_) == kOffHeapBackingStore;
}
uint32_t off_heap_backing_store_index() const {
DCHECK(is_off_heap_backing_store_reference());
return value_;
}
uint32_t large_object_index() const {
DCHECK_EQ(SnapshotSpace::kLargeObject, SpaceBits::decode(bitfield_));
return value_;
return ValueBits::decode(bit_field_);
}
bool is_attached_reference() const {
return SpaceBits::decode(bitfield_) == SnapshotSpace::kSpecialValueSpace &&
SpecialValueTypeBits::decode(bitfield_) == kAttachedReference;
return TypeBits::decode(bit_field_) == kAttachedReference;
}
uint32_t attached_reference_index() const {
DCHECK(is_attached_reference());
return value_;
return ValueBits::decode(bit_field_);
}
bool is_builtin_reference() const {
return SpaceBits::decode(bitfield_) == SnapshotSpace::kSpecialValueSpace &&
SpecialValueTypeBits::decode(bitfield_) == kBuiltinReference;
return TypeBits::decode(bit_field_) == kBuiltinReference;
}
uint32_t builtin_index() const {
DCHECK(is_builtin_reference());
return value_;
return ValueBits::decode(bit_field_);
}
private:
using SpaceBits = base::BitField<SnapshotSpace, 0, kSpaceTagSize>;
using ChunkIndexBits = SpaceBits::Next<uint32_t, 32 - kSpaceTagSize>;
using SpecialValueTypeBits =
SpaceBits::Next<SpecialValueType, 32 - kSpaceTagSize>;
using TypeBits = base::BitField<SpecialValueType, 0, 2>;
using ValueBits = TypeBits::Next<uint32_t, 32 - TypeBits::kSize>;
// We use two fields to store a reference.
// In case of a normal back reference, the bitfield_ stores the space and
// the chunk index. In case of special references, it uses a special value
// for space and stores the special value type.
uint32_t bitfield_;
// value_ stores either chunk offset or special value.
uint32_t value_;
uint32_t bit_field_;
friend class SerializerReferenceMap;
};
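The whole reference now fits in one 32-bit word; with TypeBits occupying the low two bits as defined above, the encoding is equivalent to this sketch:

// Equivalent hand-rolled encoding: type in bits 0..1, value in bits 2..31.
uint32_t EncodeReference(uint32_t type, uint32_t value) {
  return (type & 0x3u) | (value << 2);
}
uint32_t DecodeValue(uint32_t bit_field) { return bit_field >> 2; }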
class SerializerReferenceMap
: public base::TemplateHashMapImpl<uintptr_t, SerializerReference,
base::KeyEqualityMatcher<intptr_t>,
base::DefaultAllocationPolicy> {
// SerializerReference has to fit in an IdentityMap value field.
STATIC_ASSERT(sizeof(SerializerReference) <= sizeof(void*));
class SerializerReferenceMap {
public:
using Entry = base::TemplateHashMapEntry<uintptr_t, SerializerReference>;
explicit SerializerReferenceMap(Isolate* isolate)
: map_(isolate->heap()), attached_reference_index_(0) {}
SerializerReferenceMap() : attached_reference_index_(0) {}
SerializerReference LookupReference(void* value) const {
uintptr_t key = Key(value);
Entry* entry = Lookup(key, Hash(key));
if (entry == nullptr) return SerializerReference();
return entry->value;
const SerializerReference* LookupReference(HeapObject object) const {
return map_.Find(object);
}
void Add(void* obj, SerializerReference reference) {
DCHECK(reference.is_valid());
DCHECK(!LookupReference(obj).is_valid());
uintptr_t key = Key(obj);
LookupOrInsert(key, Hash(key))->value = reference;
const SerializerReference* LookupReference(Handle<HeapObject> object) const {
return map_.Find(object);
}
SerializerReference AddAttachedReference(void* attached_reference) {
const SerializerReference* LookupBackingStore(void* backing_store) const {
auto it = backing_store_map_.find(backing_store);
if (it == backing_store_map_.end()) return nullptr;
return &it->second;
}
void Add(HeapObject object, SerializerReference reference) {
DCHECK_NULL(LookupReference(object));
map_.Set(object, reference);
}
void AddBackingStore(void* backing_store, SerializerReference reference) {
DCHECK(backing_store_map_.find(backing_store) == backing_store_map_.end());
backing_store_map_.emplace(backing_store, reference);
}
SerializerReference AddAttachedReference(HeapObject object) {
SerializerReference reference =
SerializerReference::AttachedReference(attached_reference_index_++);
Add(attached_reference, reference);
map_.Set(object, reference);
return reference;
}
private:
static inline uintptr_t Key(void* value) {
return reinterpret_cast<uintptr_t>(value);
}
static uint32_t Hash(uintptr_t key) { return static_cast<uint32_t>(key); }
DISALLOW_HEAP_ALLOCATION(no_allocation_)
IdentityMap<SerializerReference, base::DefaultAllocationPolicy> map_;
std::unordered_map<void*, SerializerReference> backing_store_map_;
int attached_reference_index_;
DISALLOW_COPY_AND_ASSIGN(SerializerReferenceMap);
};
} // namespace internal


@@ -17,6 +17,7 @@ RootsSerializer::RootsSerializer(Isolate* isolate,
RootIndex first_root_to_be_serialized)
: Serializer(isolate, flags),
first_root_to_be_serialized_(first_root_to_be_serialized),
object_cache_index_map_(isolate->heap()),
can_be_rehashed_(true) {
for (size_t i = 0; i < static_cast<size_t>(first_root_to_be_serialized);
++i) {
@@ -24,7 +25,7 @@ RootsSerializer::RootsSerializer(Isolate* isolate,
}
}
int RootsSerializer::SerializeInObjectCache(HeapObject heap_object) {
int RootsSerializer::SerializeInObjectCache(Handle<HeapObject> heap_object) {
int index;
if (!object_cache_index_map_.LookupOrInsert(heap_object, &index)) {
// This object is not part of the object cache yet. Add it to the cache so


@@ -42,7 +42,7 @@ class RootsSerializer : public Serializer {
void CheckRehashability(HeapObject obj);
// Serializes |object| if not previously seen and returns its cache index.
int SerializeInObjectCache(HeapObject object);
int SerializeInObjectCache(Handle<HeapObject> object);
private:
void VisitRootPointers(Root root, const char* description,


@@ -1,173 +0,0 @@
// Copyright 2017 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "src/snapshot/serializer-allocator.h"
#include "src/heap/heap-inl.h" // crbug.com/v8/8499
#include "src/snapshot/references.h"
#include "src/snapshot/serializer.h"
#include "src/snapshot/snapshot-source-sink.h"
namespace v8 {
namespace internal {
SerializerAllocator::SerializerAllocator(Serializer* serializer)
: serializer_(serializer) {
// This assert should have been added close to seen_backing_stores_index_
// field definition, but the SerializerDeserializer header is not available
// there.
STATIC_ASSERT(Serializer::kNullRefSentinel == 0);
DCHECK_EQ(seen_backing_stores_index_, 1);
for (int i = 0; i < kNumberOfPreallocatedSpaces; i++) {
pending_chunk_[i] = 0;
}
}
void SerializerAllocator::UseCustomChunkSize(uint32_t chunk_size) {
custom_chunk_size_ = chunk_size;
}
static uint32_t PageSizeOfSpace(SnapshotSpace space) {
return static_cast<uint32_t>(
MemoryChunkLayout::AllocatableMemoryInMemoryChunk(
static_cast<AllocationSpace>(space)));
}
uint32_t SerializerAllocator::TargetChunkSize(SnapshotSpace space) {
if (custom_chunk_size_ == 0) return PageSizeOfSpace(space);
DCHECK_LE(custom_chunk_size_, PageSizeOfSpace(space));
return custom_chunk_size_;
}
SerializerReference SerializerAllocator::Allocate(SnapshotSpace space,
uint32_t size) {
const int space_number = static_cast<int>(space);
DCHECK(IsPreAllocatedSpace(space));
DCHECK(size > 0 && size <= PageSizeOfSpace(space));
// Maps are allocated through AllocateMap.
DCHECK_NE(SnapshotSpace::kMap, space);
uint32_t old_chunk_size = pending_chunk_[space_number];
uint32_t new_chunk_size = old_chunk_size + size;
// Start a new chunk if the new size exceeds the target chunk size.
// We may exceed the target chunk size if the single object size does.
if (new_chunk_size > TargetChunkSize(space) && old_chunk_size != 0) {
serializer_->PutNextChunk(space);
completed_chunks_[space_number].push_back(pending_chunk_[space_number]);
pending_chunk_[space_number] = 0;
new_chunk_size = size;
}
uint32_t offset = pending_chunk_[space_number];
pending_chunk_[space_number] = new_chunk_size;
return SerializerReference::BackReference(
space, static_cast<uint32_t>(completed_chunks_[space_number].size()),
offset);
}
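A compact restatement of the (now-removed) chunk-overflow rule in plain C++:

// Close the pending chunk when |size| would overflow the target, unless the
// chunk is still empty (one oversized object then gets a chunk to itself).
uint32_t AllocateInChunk(uint32_t* pending_size, uint32_t target_size,
                         uint32_t size) {
  if (*pending_size + size > target_size && *pending_size != 0) {
    // Real code: emit kNextChunk and record the completed chunk size here.
    *pending_size = 0;
  }
  uint32_t offset = *pending_size;
  *pending_size += size;
  return offset;
}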
SerializerReference SerializerAllocator::AllocateMap() {
// Maps are allocated one-by-one when deserializing.
return SerializerReference::MapReference(num_maps_++);
}
SerializerReference SerializerAllocator::AllocateLargeObject(uint32_t size) {
// Large objects are allocated one-by-one when deserializing. We do not
// have to keep track of multiple chunks.
large_objects_total_size_ += size;
return SerializerReference::LargeObjectReference(seen_large_objects_index_++);
}
SerializerReference SerializerAllocator::AllocateOffHeapBackingStore() {
DCHECK_NE(0, seen_backing_stores_index_);
return SerializerReference::OffHeapBackingStoreReference(
seen_backing_stores_index_++);
}
#ifdef DEBUG
bool SerializerAllocator::BackReferenceIsAlreadyAllocated(
SerializerReference reference) const {
DCHECK(reference.is_back_reference());
SnapshotSpace space = reference.space();
if (space == SnapshotSpace::kLargeObject) {
return reference.large_object_index() < seen_large_objects_index_;
} else if (space == SnapshotSpace::kMap) {
return reference.map_index() < num_maps_;
} else if (space == SnapshotSpace::kReadOnlyHeap &&
serializer_->isolate()->heap()->deserialization_complete()) {
// If not deserializing the isolate itself, then we create BackReferences
// for all read-only heap objects without ever allocating.
return true;
} else {
const int space_number = static_cast<int>(space);
size_t chunk_index = reference.chunk_index();
if (chunk_index == completed_chunks_[space_number].size()) {
return reference.chunk_offset() < pending_chunk_[space_number];
} else {
return chunk_index < completed_chunks_[space_number].size() &&
reference.chunk_offset() <
completed_chunks_[space_number][chunk_index];
}
}
}
#endif
std::vector<SerializedData::Reservation>
SerializerAllocator::EncodeReservations() const {
std::vector<SerializedData::Reservation> out;
for (int i = 0; i < kNumberOfPreallocatedSpaces; i++) {
for (size_t j = 0; j < completed_chunks_[i].size(); j++) {
out.emplace_back(completed_chunks_[i][j]);
}
if (pending_chunk_[i] > 0 || completed_chunks_[i].size() == 0) {
out.emplace_back(pending_chunk_[i]);
}
out.back().mark_as_last();
}
STATIC_ASSERT(SnapshotSpace::kMap ==
SnapshotSpace::kNumberOfPreallocatedSpaces);
out.emplace_back(num_maps_ * Map::kSize);
out.back().mark_as_last();
STATIC_ASSERT(static_cast<int>(SnapshotSpace::kLargeObject) ==
static_cast<int>(SnapshotSpace::kNumberOfPreallocatedSpaces) +
1);
out.emplace_back(large_objects_total_size_);
out.back().mark_as_last();
return out;
}
void SerializerAllocator::OutputStatistics() {
DCHECK(FLAG_serialization_statistics);
PrintF(" Spaces (bytes):\n");
for (int space = 0; space < kNumberOfSpaces; space++) {
PrintF("%16s",
BaseSpace::GetSpaceName(static_cast<AllocationSpace>(space)));
}
PrintF("\n");
for (int space = 0; space < kNumberOfPreallocatedSpaces; space++) {
size_t s = pending_chunk_[space];
for (uint32_t chunk_size : completed_chunks_[space]) s += chunk_size;
PrintF("%16zu", s);
}
STATIC_ASSERT(SnapshotSpace::kMap ==
SnapshotSpace::kNumberOfPreallocatedSpaces);
PrintF("%16d", num_maps_ * Map::kSize);
STATIC_ASSERT(static_cast<int>(SnapshotSpace::kLargeObject) ==
static_cast<int>(SnapshotSpace::kNumberOfPreallocatedSpaces) +
1);
PrintF("%16d\n", large_objects_total_size_);
}
} // namespace internal
} // namespace v8


@@ -1,77 +0,0 @@
// Copyright 2017 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef V8_SNAPSHOT_SERIALIZER_ALLOCATOR_H_
#define V8_SNAPSHOT_SERIALIZER_ALLOCATOR_H_
#include "src/snapshot/references.h"
#include "src/snapshot/snapshot-data.h"
namespace v8 {
namespace internal {
class Serializer;
class SerializerAllocator final {
public:
explicit SerializerAllocator(Serializer* serializer);
SerializerReference Allocate(SnapshotSpace space, uint32_t size);
SerializerReference AllocateMap();
SerializerReference AllocateLargeObject(uint32_t size);
SerializerReference AllocateOffHeapBackingStore();
void UseCustomChunkSize(uint32_t chunk_size);
#ifdef DEBUG
bool BackReferenceIsAlreadyAllocated(
SerializerReference back_reference) const;
#endif
std::vector<SerializedData::Reservation> EncodeReservations() const;
void OutputStatistics();
private:
// We try to not exceed this size for every chunk. We will not succeed for
// larger objects though.
uint32_t TargetChunkSize(SnapshotSpace space);
static constexpr int kNumberOfPreallocatedSpaces =
static_cast<int>(SnapshotSpace::kNumberOfPreallocatedSpaces);
static constexpr int kNumberOfSpaces =
static_cast<int>(SnapshotSpace::kNumberOfSpaces);
// Objects from the same space are put into chunks for bulk-allocation
// when deserializing. We have to make sure that each chunk fits into a
// page. So we track the chunk size in pending_chunk_ of a space, but
// when it exceeds a page, we complete the current chunk and start a new one.
uint32_t pending_chunk_[kNumberOfPreallocatedSpaces];
std::vector<uint32_t> completed_chunks_[kNumberOfPreallocatedSpaces];
// Number of maps that we need to allocate.
uint32_t num_maps_ = 0;
// We map serialized large objects to indexes for back-referencing.
uint32_t large_objects_total_size_ = 0;
uint32_t seen_large_objects_index_ = 0;
// Used to keep track of the off-heap backing stores used by TypedArrays/
// ArrayBuffers. Note that the index begins at 1 and not 0, because the latter
// value represents null backing store pointer (see usages of
// SerializerDeserializer::kNullRefSentinel).
uint32_t seen_backing_stores_index_ = 1;
uint32_t custom_chunk_size_ = 0;
// The current serializer.
Serializer* const serializer_;
DISALLOW_COPY_AND_ASSIGN(SerializerAllocator);
};
} // namespace internal
} // namespace v8
#endif // V8_SNAPSHOT_SERIALIZER_ALLOCATOR_H_


@@ -40,19 +40,20 @@ bool SerializerDeserializer::CanBeDeferred(HeapObject o) {
}
void SerializerDeserializer::RestoreExternalReferenceRedirectors(
Isolate* isolate, const std::vector<AccessorInfo>& accessor_infos) {
Isolate* isolate, const std::vector<Handle<AccessorInfo>>& accessor_infos) {
// Restore wiped accessor infos.
for (AccessorInfo info : accessor_infos) {
Foreign::cast(info.js_getter())
.set_foreign_address(isolate, info.redirected_getter());
for (Handle<AccessorInfo> info : accessor_infos) {
Foreign::cast(info->js_getter())
.set_foreign_address(isolate, info->redirected_getter());
}
}
void SerializerDeserializer::RestoreExternalReferenceRedirectors(
Isolate* isolate, const std::vector<CallHandlerInfo>& call_handler_infos) {
for (CallHandlerInfo info : call_handler_infos) {
Foreign::cast(info.js_callback())
.set_foreign_address(isolate, info.redirected_callback());
Isolate* isolate,
const std::vector<Handle<CallHandlerInfo>>& call_handler_infos) {
for (Handle<CallHandlerInfo> info : call_handler_infos) {
Foreign::cast(info->js_callback())
.set_foreign_address(isolate, info->redirected_callback());
}
}


@@ -27,14 +27,12 @@ class SerializerDeserializer : public RootVisitor {
public:
HotObjectsList() = default;
void Add(HeapObject object) {
DCHECK(!AllowGarbageCollection::IsAllowed());
void Add(Handle<HeapObject> object) {
circular_queue_[index_] = object;
index_ = (index_ + 1) & kSizeMask;
}
HeapObject Get(int index) {
DCHECK(!AllowGarbageCollection::IsAllowed());
Handle<HeapObject> Get(int index) {
DCHECK(!circular_queue_[index].is_null());
return circular_queue_[index];
}
@@ -44,7 +42,9 @@ class SerializerDeserializer : public RootVisitor {
int Find(HeapObject object) {
DCHECK(!AllowGarbageCollection::IsAllowed());
for (int i = 0; i < kSize; i++) {
if (circular_queue_[i] == object) return i;
if (!circular_queue_[i].is_null() && *circular_queue_[i] == object) {
return i;
}
}
return kNotFound;
}
@@ -54,7 +54,7 @@ class SerializerDeserializer : public RootVisitor {
private:
STATIC_ASSERT(base::bits::IsPowerOfTwo(kSize));
static const int kSizeMask = kSize - 1;
HeapObject circular_queue_[kSize];
Handle<HeapObject> circular_queue_[kSize];
int index_ = 0;
DISALLOW_COPY_AND_ASSIGN(HotObjectsList);
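A standalone sketch of the power-of-two circular indexing HotObjectsList relies on:

// kSize must be a power of two so that masking wraps the index:
// with kSize == 8, index 7 + 1 wraps back to 0.
constexpr int kQueueSize = 8;
constexpr int kQueueMask = kQueueSize - 1;
inline int NextQueueIndex(int index) { return (index + 1) & kQueueMask; }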
@@ -63,18 +63,19 @@ class SerializerDeserializer : public RootVisitor {
static bool CanBeDeferred(HeapObject o);
void RestoreExternalReferenceRedirectors(
Isolate* isolate, const std::vector<AccessorInfo>& accessor_infos);
Isolate* isolate,
const std::vector<Handle<AccessorInfo>>& accessor_infos);
void RestoreExternalReferenceRedirectors(
Isolate* isolate, const std::vector<CallHandlerInfo>& call_handler_infos);
static const int kNumberOfSpaces =
static_cast<int>(SnapshotSpace::kNumberOfSpaces);
Isolate* isolate,
const std::vector<Handle<CallHandlerInfo>>& call_handler_infos);
// clang-format off
#define UNUSED_SERIALIZER_BYTE_CODES(V) \
V(0x05) V(0x06) V(0x07) V(0x0d) V(0x0e) V(0x0f) \
/* Free range 0x2a..0x2f */ \
V(0x2a) V(0x2b) V(0x2c) V(0x2d) V(0x2e) V(0x2f) \
/* Free range 0x1c..0x1f */ \
V(0x1c) V(0x1d) V(0x1e) V(0x1f) \
/* Free range 0x20..0x2f */ \
V(0x20) V(0x21) V(0x22) V(0x23) V(0x24) V(0x25) V(0x26) V(0x27) \
V(0x28) V(0x29) V(0x2a) V(0x2b) V(0x2c) V(0x2d) V(0x2e) V(0x2f) \
/* Free range 0x30..0x3f */ \
V(0x30) V(0x31) V(0x32) V(0x33) V(0x34) V(0x35) V(0x36) V(0x37) \
V(0x38) V(0x39) V(0x3a) V(0x3b) V(0x3c) V(0x3d) V(0x3e) V(0x3f) \
@@ -103,7 +104,7 @@ class SerializerDeserializer : public RootVisitor {
// The static assert below will trigger when the number of preallocated spaces
// changed. If that happens, update the kNewObject and kBackref bytecode
// ranges in the comments below.
STATIC_ASSERT(5 == kNumberOfSpaces);
STATIC_ASSERT(4 == kNumberOfSnapshotSpaces);
// First 32 root array items.
static const int kRootArrayConstantsCount = 0x20;
@@ -117,25 +118,19 @@ class SerializerDeserializer : public RootVisitor {
static const int kHotObjectCount = 8;
STATIC_ASSERT(kHotObjectCount == HotObjectsList::kSize);
// 3 alignment prefixes
static const int kAlignmentPrefixCount = 3;
enum Bytecode : byte {
//
// ---------- byte code range 0x00..0x0f ----------
// ---------- byte code range 0x00..0x1b ----------
//
// 0x00..0x04 Allocate new object, in specified space.
// 0x00..0x03 Allocate new object, in specified space.
kNewObject = 0x00,
// 0x08..0x0c Reference to previous object from specified space.
kBackref = 0x08,
//
// ---------- byte code range 0x10..0x27 ----------
//
// Reference to previously allocated object.
kBackref = 0x04,
// Reference to an object in the read only heap.
kReadOnlyHeapRef,
// Object in the startup object cache.
kStartupObjectCache = 0x10,
kStartupObjectCache,
// Root array item.
kRootArray,
// Object provided in the attached list.
@@ -144,16 +139,12 @@ class SerializerDeserializer : public RootVisitor {
kReadOnlyObjectCache,
// Do nothing, used for padding.
kNop,
// Move to next reserved chunk.
kNextChunk,
// 3 alignment prefixes 0x16..0x18
kAlignmentPrefix = 0x16,
// A tag emitted at strategic points in the snapshot to delineate sections.
// If the deserializer does not find these at the expected moments then it
// is an indication that the snapshot and the VM do not fit together.
// Examine the build process for architecture, version or configuration
// mismatches.
kSynchronize = 0x19,
kSynchronize,
// Repeats of variable length.
kVariableRepeat,
// Used for embedder-allocated backing stores for TypedArrays.
@@ -161,7 +152,6 @@ class SerializerDeserializer : public RootVisitor {
// Used for embedder-provided serialization data for embedder fields.
kEmbedderFieldsData,
// Raw data of variable length.
kVariableRawCode,
kVariableRawData,
// Used to encode external references provided through the API.
kApiReference,
@@ -193,6 +183,9 @@ class SerializerDeserializer : public RootVisitor {
// register as the pending field. We could either hack around this, or
// simply introduce this new bytecode.
kNewMetaMap,
// Special construction bytecode for Code object bodies, which have a more
// complex deserialization ordering and RelocInfo processing.
kCodeBody,
//
// ---------- byte code range 0x40..0x7f ----------
@@ -248,15 +241,14 @@ class SerializerDeserializer : public RootVisitor {
template <Bytecode bytecode>
using SpaceEncoder =
BytecodeValueEncoder<bytecode, 0, kNumberOfSpaces - 1, SnapshotSpace>;
BytecodeValueEncoder<bytecode, 0, kNumberOfSnapshotSpaces - 1,
SnapshotSpace>;
using NewObject = SpaceEncoder<kNewObject>;
using BackRef = SpaceEncoder<kBackref>;
//
// Some other constants.
//
static const SnapshotSpace kAnyOldSpace = SnapshotSpace::kNumberOfSpaces;
// Sentinel after a new object to indicate that double alignment is needed.
static const int kDoubleAlignmentSentinel = 0;

(File diff suppressed because it is too large.)


@@ -8,14 +8,15 @@
#include <map>
#include "src/codegen/external-reference-encoder.h"
#include "src/common/assert-scope.h"
#include "src/execution/isolate.h"
#include "src/logging/log.h"
#include "src/objects/objects.h"
#include "src/snapshot/embedded/embedded-data.h"
#include "src/snapshot/serializer-allocator.h"
#include "src/snapshot/serializer-deserializer.h"
#include "src/snapshot/snapshot-source-sink.h"
#include "src/snapshot/snapshot.h"
#include "src/utils/identity-map.h"
namespace v8 {
namespace internal {
@@ -132,15 +133,15 @@ class CodeAddressMap : public CodeEventLogger {
class ObjectCacheIndexMap {
public:
ObjectCacheIndexMap() : map_(), next_index_(0) {}
explicit ObjectCacheIndexMap(Heap* heap) : map_(heap), next_index_(0) {}
// If |obj| is in the map, immediately return true. Otherwise add it to the
// map and return false. In either case set |*index_out| to the index
// associated with the map.
bool LookupOrInsert(HeapObject obj, int* index_out) {
Maybe<uint32_t> maybe_index = map_.Get(obj);
if (maybe_index.IsJust()) {
*index_out = maybe_index.FromJust();
bool LookupOrInsert(Handle<HeapObject> obj, int* index_out) {
int* maybe_index = map_.Find(obj);
if (maybe_index) {
*index_out = *maybe_index;
return true;
}
*index_out = next_index_;
@@ -151,7 +152,7 @@ class ObjectCacheIndexMap {
private:
DisallowHeapAllocation no_allocation_;
HeapObjectToIndexHashMap map_;
IdentityMap<int, base::DefaultAllocationPolicy> map_;
int next_index_;
DISALLOW_COPY_AND_ASSIGN(ObjectCacheIndexMap);
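A hedged usage sketch of LookupOrInsert (the surrounding serializer calls are assumed for illustration):

int index;
if (!cache_index_map.LookupOrInsert(obj, &index)) {
  // First occurrence: serialize the object now so the deserializer can
  // populate cache slot |index| when it replays the stream.
  SerializeObject(obj);
}
sink.PutInt(index, "object cache index");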
@@ -160,24 +161,18 @@ class ObjectCacheIndexMap {
class Serializer : public SerializerDeserializer {
public:
Serializer(Isolate* isolate, Snapshot::SerializerFlags flags);
std::vector<SerializedData::Reservation> EncodeReservations() const {
return allocator_.EncodeReservations();
}
~Serializer() override { DCHECK_EQ(unresolved_forward_refs_, 0); }
const std::vector<byte>* Payload() const { return sink_.data(); }
bool ReferenceMapContains(HeapObject o) {
return reference_map()
->LookupReference(reinterpret_cast<void*>(o.ptr()))
.is_valid();
bool ReferenceMapContains(Handle<HeapObject> o) {
return reference_map()->LookupReference(o) != nullptr;
}
Isolate* isolate() const { return isolate_; }
protected:
using PendingObjectReference =
std::map<HeapObject, std::vector<int>>::iterator;
using PendingObjectReferences = std::vector<int>*;
class ObjectSerializer;
class RecursionScope {
@@ -196,7 +191,8 @@ class Serializer : public SerializerDeserializer {
};
void SerializeDeferredObjects();
virtual void SerializeObject(HeapObject o) = 0;
void SerializeObject(Handle<HeapObject> o);
virtual void SerializeObjectImpl(Handle<HeapObject> o) = 0;
virtual bool MustBeDeferred(HeapObject object);
@@ -204,36 +200,35 @@ class Serializer : public SerializerDeserializer {
FullObjectSlot start, FullObjectSlot end) override;
void SerializeRootObject(FullObjectSlot slot);
void PutRoot(RootIndex root_index, HeapObject object);
void PutRoot(RootIndex root_index);
void PutSmiRoot(FullObjectSlot slot);
void PutBackReference(HeapObject object, SerializerReference reference);
void PutBackReference(Handle<HeapObject> object,
SerializerReference reference);
void PutAttachedReference(SerializerReference reference);
// Emit alignment prefix if necessary, return required padding space in bytes.
int PutAlignmentPrefix(HeapObject object);
void PutNextChunk(SnapshotSpace space);
void PutRepeat(int repeat_count);
// Emit a marker noting that this slot is a forward reference to an
// object which has not yet been serialized; a standalone sketch of this
// pattern appears just below.
void PutPendingForwardReferenceTo(PendingObjectReference reference);
void PutPendingForwardReference(PendingObjectReferences& ref);
// Resolve the given previously registered forward reference to the current
// object.
void ResolvePendingForwardReference(int obj);
// Returns true if the object was successfully serialized as a root.
bool SerializeRoot(HeapObject obj);
bool SerializeRoot(Handle<HeapObject> obj);
// Returns true if the object was successfully serialized as hot object.
bool SerializeHotObject(HeapObject obj);
bool SerializeHotObject(Handle<HeapObject> obj);
// Returns true if the object was successfully serialized as back reference.
bool SerializeBackReference(HeapObject obj);
bool SerializeBackReference(Handle<HeapObject> obj);
// Returns true if the object was successfully serialized as pending object.
bool SerializePendingObject(HeapObject obj);
bool SerializePendingObject(Handle<HeapObject> obj);
// Returns true if the given heap object is a bytecode handler code object.
bool ObjectIsBytecodeHandler(HeapObject obj) const;
bool ObjectIsBytecodeHandler(Handle<HeapObject> obj) const;
ExternalReferenceEncoder::Value EncodeExternalReference(Address addr) {
return external_reference_encoder_.Encode(addr);
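
To make the pending-object protocol above concrete: every reference to an object that has started but not yet finished serializing is emitted as a forward reference, and the ids are resolved once the object is done. A standalone sketch of that bookkeeping (an invented class, mirroring the next_forward_ref_id_, unresolved_forward_refs_ and forward_refs_per_pending_object_ members declared further down):

    #include <cstdio>
    #include <map>
    #include <vector>

    class ForwardRefs {
     public:
      // A slot points at an object that has started but not finished
      // serializing: emit a placeholder and remember its id.
      int EmitPendingReferenceTo(const void* pending_object) {
        int id = next_id_++;
        ++unresolved_;
        refs_per_object_[pending_object].push_back(id);
        std::printf("emit: forward-ref #%d (placeholder)\n", id);
        return id;
      }

      // |object| finished serializing at back-reference |backref|: resolve
      // every forward ref that was pointing at it.
      void ResolveObject(const void* object, int backref) {
        for (int id : refs_per_object_[object]) {
          std::printf("emit: resolve #%d -> backref %d\n", id, backref);
          --unresolved_;
        }
        refs_per_object_.erase(object);
      }

      // Mirrors the DCHECK_EQ(unresolved_forward_refs_, 0) in ~Serializer.
      bool AllResolved() const { return unresolved_ == 0; }

     private:
      std::map<const void*, std::vector<int>> refs_per_object_;
      int next_id_ = 0;
      int unresolved_ = 0;
    };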
@@ -253,28 +248,25 @@ class Serializer : public SerializerDeserializer {
Code CopyCode(Code code);
void QueueDeferredObject(HeapObject obj) {
DCHECK(!reference_map_.LookupReference(reinterpret_cast<void*>(obj.ptr()))
.is_valid());
void QueueDeferredObject(Handle<HeapObject> obj) {
DCHECK_NULL(reference_map_.LookupReference(obj));
deferred_objects_.push_back(obj);
}
// Register that the given object shouldn't be immediately serialized, but
// will be serialized later and any references to it should be pending forward
// references.
PendingObjectReference RegisterObjectIsPending(HeapObject obj);
void RegisterObjectIsPending(Handle<HeapObject> obj);
// Resolve the given pending object reference with the current object.
void ResolvePendingObject(PendingObjectReference ref);
void ResolvePendingObject(Handle<HeapObject> obj);
void OutputStatistics(const char* name);
#ifdef OBJECT_PRINT
void CountInstanceType(Map map, int size, SnapshotSpace space);
#endif // OBJECT_PRINT
void CountAllocation(Map map, int size, SnapshotSpace space);
#ifdef DEBUG
void PushStack(HeapObject o) { stack_.push_back(o); }
void PushStack(Handle<HeapObject> o) { stack_.push_back(o); }
void PopStack() { stack_.pop_back(); }
void PrintStack();
void PrintStack(std::ostream&);
@@ -282,7 +274,6 @@ class Serializer : public SerializerDeserializer {
SerializerReferenceMap* reference_map() { return &reference_map_; }
const RootIndexMap* root_index_map() const { return &root_index_map_; }
SerializerAllocator* allocator() { return &allocator_; }
SnapshotByteSink sink_; // Used directly by subclasses.
@@ -293,10 +284,12 @@ class Serializer : public SerializerDeserializer {
return (flags_ & Snapshot::kAllowActiveIsolateForTesting) != 0;
}
std::vector<Handle<HeapObject>> back_refs_;
private:
// Disallow GC during serialization.
// TODO(leszeks, v8:10815): Remove this constraint.
DisallowHeapAllocation no_gc;
DISALLOW_HEAP_ALLOCATION(no_gc)
Isolate* isolate_;
SerializerReferenceMap reference_map_;
@@ -304,7 +297,8 @@ class Serializer : public SerializerDeserializer {
RootIndexMap root_index_map_;
std::unique_ptr<CodeAddressMap> code_address_map_;
std::vector<byte> code_buffer_;
std::vector<HeapObject> deferred_objects_; // To handle stack overflow.
std::vector<Handle<HeapObject>>
deferred_objects_; // To handle stack overflow.
// Objects which have started being serialized, but haven't yet been allocated
// with the allocator, are considered "pending". References to them don't have
@@ -319,24 +313,30 @@ class Serializer : public SerializerDeserializer {
// forward refs remaining.
int next_forward_ref_id_ = 0;
int unresolved_forward_refs_ = 0;
std::map<HeapObject, std::vector<int>> forward_refs_per_pending_object_;
IdentityMap<PendingObjectReferences, base::DefaultAllocationPolicy>
forward_refs_per_pending_object_;
// Used to keep track of the off-heap backing stores used by TypedArrays/
// ArrayBuffers. Note that the index begins at 1 and not 0, because when a
// TypedArray has an on-heap backing store, the backing_store pointer in the
// corresponding ArrayBuffer will be null, which makes it indistinguishable
// from index 0. (This 1-based scheme is sketched just after the class.)
uint32_t seen_backing_stores_index_ = 1;
int recursion_depth_ = 0;
const Snapshot::SerializerFlags flags_;
SerializerAllocator allocator_;
size_t allocation_size_[kNumberOfSnapshotSpaces];
#ifdef OBJECT_PRINT
static constexpr int kInstanceTypes = LAST_TYPE + 1;
std::unique_ptr<int[]> instance_type_count_[kNumberOfSpaces];
std::unique_ptr<size_t[]> instance_type_size_[kNumberOfSpaces];
std::unique_ptr<int[]> instance_type_count_[kNumberOfSnapshotSpaces];
std::unique_ptr<size_t[]> instance_type_size_[kNumberOfSnapshotSpaces];
#endif // OBJECT_PRINT
#ifdef DEBUG
std::vector<HeapObject> stack_;
std::vector<Handle<HeapObject>> stack_;
#endif // DEBUG
friend class SerializerAllocator;
DISALLOW_COPY_AND_ASSIGN(Serializer);
};
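
The seen_backing_stores_index_ comment above is the whole motivation for this sketch: index 0 is reserved so that a null backing-store pointer (the on-heap case) can never alias a real off-heap store. An invented, self-contained version of the same 1-based scheme:

    #include <cstdint>
    #include <unordered_map>

    class BackingStoreRegistry {
     public:
      uint32_t Register(void* backing_store) {
        if (backing_store == nullptr) return 0;  // On-heap backing store.
        auto it = seen_.find(backing_store);
        if (it != seen_.end()) return it->second;
        uint32_t index = next_index_++;
        seen_.emplace(backing_store, index);
        return index;
      }

     private:
      std::unordered_map<void*, uint32_t> seen_;
      uint32_t next_index_ = 1;  // 0 is reserved for "no backing store".
    };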
@@ -344,7 +344,7 @@ class RelocInfoIterator;
class Serializer::ObjectSerializer : public ObjectVisitor {
public:
ObjectSerializer(Serializer* serializer, HeapObject obj,
ObjectSerializer(Serializer* serializer, Handle<HeapObject> obj,
SnapshotByteSink* sink)
: serializer_(serializer),
object_(obj),
@@ -376,6 +376,8 @@ class Serializer::ObjectSerializer : public ObjectVisitor {
void VisitOffHeapTarget(Code host, RelocInfo* target) override;
private:
class RelocInfoObjectPreSerializer;
void SerializePrologue(SnapshotSpace space, int size, Map map);
// This function outputs or skips the raw data between the last pointer and
@@ -384,7 +386,7 @@ class Serializer::ObjectSerializer : public ObjectVisitor {
void OutputExternalReference(Address target, int target_size,
bool sandboxify);
void OutputRawData(Address up_to);
void OutputCode(int size);
void SerializeCode(Map map, int size);
uint32_t SerializeBackingStore(void* backing_store, int32_t byte_length);
void SerializeJSTypedArray();
void SerializeJSArrayBuffer();
@@ -392,7 +394,7 @@ class Serializer::ObjectSerializer : public ObjectVisitor {
void SerializeExternalStringAsSequentialString();
Serializer* serializer_;
HeapObject object_;
Handle<HeapObject> object_;
SnapshotByteSink* sink_;
int bytes_processed_so_far_;
};


@@ -26,51 +26,28 @@ constexpr uint32_t SerializedData::kMagicNumber;
SnapshotData::SnapshotData(const Serializer* serializer) {
DisallowGarbageCollection no_gc;
std::vector<Reservation> reservations = serializer->EncodeReservations();
const std::vector<byte>* payload = serializer->Payload();
// Calculate sizes.
uint32_t reservation_size =
static_cast<uint32_t>(reservations.size()) * kUInt32Size;
uint32_t payload_offset = kHeaderSize + reservation_size;
uint32_t padded_payload_offset = POINTER_SIZE_ALIGN(payload_offset);
uint32_t size =
padded_payload_offset + static_cast<uint32_t>(payload->size());
uint32_t size = kHeaderSize + static_cast<uint32_t>(payload->size());
// Allocate backing store and create result data.
AllocateData(size);
// Zero out pre-payload data. Part of that is only used for padding.
memset(data_, 0, padded_payload_offset);
memset(data_, 0, kHeaderSize);
// Set header values.
SetMagicNumber();
SetHeaderValue(kNumReservationsOffset, static_cast<int>(reservations.size()));
SetHeaderValue(kPayloadLengthOffset, static_cast<int>(payload->size()));
// Copy reservation chunk sizes.
CopyBytes(data_ + kHeaderSize, reinterpret_cast<byte*>(reservations.data()),
reservation_size);
// Copy serialized data.
CopyBytes(data_ + padded_payload_offset, payload->data(),
CopyBytes(data_ + kHeaderSize, payload->data(),
static_cast<size_t>(payload->size()));
}
std::vector<SerializedData::Reservation> SnapshotData::Reservations() const {
uint32_t size = GetHeaderValue(kNumReservationsOffset);
std::vector<SerializedData::Reservation> reservations(size);
memcpy(reservations.data(), data_ + kHeaderSize,
size * sizeof(SerializedData::Reservation));
return reservations;
}
Vector<const byte> SnapshotData::Payload() const {
uint32_t reservations_size =
GetHeaderValue(kNumReservationsOffset) * kUInt32Size;
uint32_t padded_payload_offset =
POINTER_SIZE_ALIGN(kHeaderSize + reservations_size);
const byte* payload = data_ + padded_payload_offset;
const byte* payload = data_ + kHeaderSize;
uint32_t length = GetHeaderValue(kPayloadLengthOffset);
DCHECK_EQ(data_ + size_, payload + length);
return Vector<const byte>(payload, length);
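
With reservations gone, a SnapshotData blob is simply a fixed two-word header followed directly by the payload, with no reservation table and no alignment padding. A hedged sketch of that layout in plain C++ (WriteBlob is an invented helper, not a V8 function):

    #include <cstdint>
    #include <cstring>
    #include <vector>

    constexpr uint32_t kMagicNumberOffset = 0;
    constexpr uint32_t kPayloadLengthOffset = kMagicNumberOffset + sizeof(uint32_t);
    constexpr uint32_t kHeaderSize = kPayloadLengthOffset + sizeof(uint32_t);

    std::vector<uint8_t> WriteBlob(uint32_t magic,
                                   const std::vector<uint8_t>& payload) {
      std::vector<uint8_t> blob(kHeaderSize + payload.size(), 0);
      std::memcpy(blob.data() + kMagicNumberOffset, &magic, sizeof(magic));
      const uint32_t length = static_cast<uint32_t>(payload.size());
      std::memcpy(blob.data() + kPayloadLengthOffset, &length, sizeof(length));
      if (!payload.empty()) {
        // Payload starts right after the header; no padding in between.
        std::memcpy(blob.data() + kHeaderSize, payload.data(), payload.size());
      }
      return blob;
    }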


@@ -20,21 +20,6 @@ class Serializer;
class SerializedData {
public:
class Reservation {
public:
Reservation() : reservation_(0) {}
explicit Reservation(uint32_t size)
: reservation_(ChunkSizeBits::encode(size)) {}
uint32_t chunk_size() const { return ChunkSizeBits::decode(reservation_); }
bool is_last() const { return IsLastChunkBits::decode(reservation_); }
void mark_as_last() { reservation_ |= IsLastChunkBits::encode(true); }
private:
uint32_t reservation_;
};
SerializedData(byte* data, int size)
: data_(data), size_(size), owns_data_(false) {}
SerializedData() : data_(nullptr), size_(0), owns_data_(false) {}
@@ -93,7 +78,6 @@ class V8_EXPORT_PRIVATE SnapshotData : public SerializedData {
: SerializedData(const_cast<byte*>(snapshot.begin()), snapshot.length()) {
}
std::vector<Reservation> Reservations() const;
virtual Vector<const byte> Payload() const;
Vector<const byte> RawData() const {
@@ -112,14 +96,9 @@ class V8_EXPORT_PRIVATE SnapshotData : public SerializedData {
// The data header consists of uint32_t-sized entries:
// [0] magic number and (internal) external reference count
// [1] number of reservation size entries
// [2] payload length
// ... reservations
// [1] payload length
// ... serialized payload
static const uint32_t kNumReservationsOffset =
kMagicNumberOffset + kUInt32Size;
static const uint32_t kPayloadLengthOffset =
kNumReservationsOffset + kUInt32Size;
static const uint32_t kPayloadLengthOffset = kMagicNumberOffset + kUInt32Size;
static const uint32_t kHeaderSize = kPayloadLengthOffset + kUInt32Size;
};


@@ -317,30 +317,6 @@ void Snapshot::SerializeDeserializeAndVerifyForTesting(
Isolate::Delete(new_isolate);
}
void ProfileDeserialization(
const SnapshotData* read_only_snapshot,
const SnapshotData* startup_snapshot,
const std::vector<SnapshotData*>& context_snapshots) {
if (FLAG_profile_deserialization) {
int startup_total = 0;
PrintF("Deserialization will reserve:\n");
for (const auto& reservation : read_only_snapshot->Reservations()) {
startup_total += reservation.chunk_size();
}
for (const auto& reservation : startup_snapshot->Reservations()) {
startup_total += reservation.chunk_size();
}
PrintF("%10d bytes per isolate\n", startup_total);
for (size_t i = 0; i < context_snapshots.size(); i++) {
int context_total = 0;
for (const auto& reservation : context_snapshots[i]->Reservations()) {
context_total += reservation.chunk_size();
}
PrintF("%10d bytes per context #%zu\n", context_total, i);
}
}
}
// static
constexpr Snapshot::SerializerFlags Snapshot::kDefaultSerializerFlags;
@@ -352,6 +328,7 @@ v8::StartupData Snapshot::Create(
const DisallowGarbageCollection& no_gc, SerializerFlags flags) {
DCHECK_EQ(contexts->size(), embedder_fields_serializers.size());
DCHECK_GT(contexts->size(), 0);
HandleScope scope(isolate);
// Enter a safepoint so that the heap is safe to iterate.
// TODO(leszeks): This safepoint's scope could be tightened to just string
@@ -454,9 +431,6 @@ v8::StartupData SnapshotImpl::CreateSnapshotBlob(
total_length += static_cast<uint32_t>(context_snapshot->RawData().length());
}
ProfileDeserialization(read_only_snapshot_in, startup_snapshot_in,
context_snapshots_in);
char* data = new char[total_length];
// Zero out pre-payload data. Part of that is only used for padding.
memset(data, 0, SnapshotImpl::StartupSnapshotOffset(num_contexts));
@@ -480,9 +454,8 @@ v8::StartupData SnapshotImpl::CreateSnapshotBlob(
reinterpret_cast<const char*>(startup_snapshot->RawData().begin()),
payload_length);
if (FLAG_profile_deserialization) {
PrintF("Snapshot blob consists of:\n%10d bytes in %d chunks for startup\n",
payload_length,
static_cast<uint32_t>(startup_snapshot_in->Reservations().size()));
PrintF("Snapshot blob consists of:\n%10d bytes for startup\n",
payload_length);
}
payload_offset += payload_length;
@@ -510,10 +483,7 @@ v8::StartupData SnapshotImpl::CreateSnapshotBlob(
reinterpret_cast<const char*>(context_snapshot->RawData().begin()),
payload_length);
if (FLAG_profile_deserialization) {
PrintF(
"%10d bytes in %d chunks for context #%d\n", payload_length,
static_cast<uint32_t>(context_snapshots_in[i]->Reservations().size()),
i);
PrintF("%10d bytes for context #%d\n", payload_length, i);
}
payload_offset += payload_length;
}


@@ -16,10 +16,7 @@ namespace internal {
void StartupDeserializer::DeserializeInto(Isolate* isolate) {
Initialize(isolate);
if (!allocator()->ReserveSpace()) {
V8::FatalProcessOutOfMemory(isolate, "StartupDeserializer");
}
HandleScope scope(isolate);
// No active threads.
DCHECK_NULL(isolate->thread_manager()->FirstThreadStateInUse());
@@ -31,7 +28,6 @@ void StartupDeserializer::DeserializeInto(Isolate* isolate) {
DCHECK(!isolate->builtins()->is_initialized());
{
DisallowGarbageCollection no_gc;
isolate->heap()->IterateSmiRoots(this);
isolate->heap()->IterateRoots(
this,
@@ -68,6 +64,7 @@ void StartupDeserializer::DeserializeInto(Isolate* isolate) {
isolate->builtins()->MarkInitialized();
LogNewMapEvents();
WeakenDescriptorArrays();
if (FLAG_rehash_snapshot && can_rehash()) {
// Hash seed was initialized in ReadOnlyDeserializer.
@@ -84,16 +81,15 @@ void StartupDeserializer::DeserializeStringTable() {
// Add each string to the Isolate's string table.
// TODO(leszeks): Consider pre-sizing the string table.
for (int i = 0; i < string_table_size; ++i) {
String string = String::cast(ReadObject());
Address handle_storage = string.ptr();
Handle<String> handle(&handle_storage);
StringTableInsertionKey key(handle);
String result = *isolate()->string_table()->LookupKey(isolate(), &key);
Handle<String> string = Handle<String>::cast(ReadObject());
StringTableInsertionKey key(string);
Handle<String> result =
isolate()->string_table()->LookupKey(isolate(), &key);
USE(result);
// This is startup, so there should be no duplicate entries in the string
// table, and the lookup should unconditionally add the given string.
DCHECK_EQ(result, string);
DCHECK_EQ(*result, *string);
}
DCHECK_EQ(string_table_size, isolate()->string_table()->NumberOfElements());
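
The DCHECKs above encode a simple invariant: at startup the string table starts empty and every deserialized string is distinct, so each lookup must actually insert. The same invariant, reduced to a standalone sketch over std::unordered_set:

    #include <cassert>
    #include <string>
    #include <unordered_set>
    #include <vector>

    void PopulateStringTable(const std::vector<std::string>& deserialized) {
      std::unordered_set<std::string> table;
      for (const std::string& s : deserialized) {
        auto result = table.insert(s);  // Find-or-insert, like LookupKey.
        assert(result.second);          // Startup: never a duplicate entry.
        (void)result;
      }
      assert(table.size() == deserialized.size());
    }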


@@ -6,6 +6,7 @@
#define V8_SNAPSHOT_STARTUP_DESERIALIZER_H_
#include "src/snapshot/deserializer.h"
#include "src/snapshot/snapshot-data.h"
#include "src/snapshot/snapshot.h"
namespace v8 {


@@ -67,7 +67,6 @@ StartupSerializer::StartupSerializer(Isolate* isolate,
ReadOnlySerializer* read_only_serializer)
: RootsSerializer(isolate, flags, RootIndex::kFirstStrongRoot),
read_only_serializer_(read_only_serializer) {
allocator()->UseCustomChunkSize(FLAG_serialization_chunk_size);
InitializeCodeAddressMap();
}
@@ -115,21 +114,21 @@ bool IsUnexpectedCodeObject(Isolate* isolate, HeapObject obj) {
} // namespace
#endif // DEBUG
void StartupSerializer::SerializeObject(HeapObject obj) {
void StartupSerializer::SerializeObjectImpl(Handle<HeapObject> obj) {
#ifdef DEBUG
if (obj.IsJSFunction()) {
if (obj->IsJSFunction()) {
v8::base::OS::PrintError("Reference stack:\n");
PrintStack(std::cerr);
obj.Print(std::cerr);
obj->Print(std::cerr);
FATAL(
"JSFunction should be added through the context snapshot instead of "
"the isolate snapshot");
}
#endif // DEBUG
DCHECK(!IsUnexpectedCodeObject(isolate(), obj));
DCHECK(!IsUnexpectedCodeObject(isolate(), *obj));
if (SerializeHotObject(obj)) return;
if (IsRootAndHasBeenSerialized(obj) && SerializeRoot(obj)) return;
if (IsRootAndHasBeenSerialized(*obj) && SerializeRoot(obj)) return;
if (SerializeUsingReadOnlyObjectCache(&sink_, obj)) return;
if (SerializeBackReference(obj)) return;
@@ -138,37 +137,37 @@ void StartupSerializer::SerializeObject(HeapObject obj) {
use_simulator = true;
#endif
if (use_simulator && obj.IsAccessorInfo()) {
if (use_simulator && obj->IsAccessorInfo()) {
// Wipe external reference redirects in the accessor info.
AccessorInfo info = AccessorInfo::cast(obj);
Handle<AccessorInfo> info = Handle<AccessorInfo>::cast(obj);
Address original_address =
Foreign::cast(info.getter()).foreign_address(isolate());
Foreign::cast(info.js_getter())
Foreign::cast(info->getter()).foreign_address(isolate());
Foreign::cast(info->js_getter())
.set_foreign_address(isolate(), original_address);
accessor_infos_.push_back(info);
} else if (use_simulator && obj.IsCallHandlerInfo()) {
CallHandlerInfo info = CallHandlerInfo::cast(obj);
} else if (use_simulator && obj->IsCallHandlerInfo()) {
Handle<CallHandlerInfo> info = Handle<CallHandlerInfo>::cast(obj);
Address original_address =
Foreign::cast(info.callback()).foreign_address(isolate());
Foreign::cast(info.js_callback())
Foreign::cast(info->callback()).foreign_address(isolate());
Foreign::cast(info->js_callback())
.set_foreign_address(isolate(), original_address);
call_handler_infos_.push_back(info);
} else if (obj.IsScript() && Script::cast(obj).IsUserJavaScript()) {
Script::cast(obj).set_context_data(
} else if (obj->IsScript() && Handle<Script>::cast(obj)->IsUserJavaScript()) {
Handle<Script>::cast(obj)->set_context_data(
ReadOnlyRoots(isolate()).uninitialized_symbol());
} else if (obj.IsSharedFunctionInfo()) {
} else if (obj->IsSharedFunctionInfo()) {
// Clear inferred name for native functions.
SharedFunctionInfo shared = SharedFunctionInfo::cast(obj);
if (!shared.IsSubjectToDebugging() && shared.HasUncompiledData()) {
shared.uncompiled_data().set_inferred_name(
Handle<SharedFunctionInfo> shared = Handle<SharedFunctionInfo>::cast(obj);
if (!shared->IsSubjectToDebugging() && shared->HasUncompiledData()) {
shared->uncompiled_data().set_inferred_name(
ReadOnlyRoots(isolate()).empty_string());
}
}
CheckRehashability(obj);
CheckRehashability(*obj);
// Object has not yet been serialized. Serialize it here.
DCHECK(!ReadOnlyHeap::Contains(obj));
DCHECK(!ReadOnlyHeap::Contains(*obj));
ObjectSerializer object_serializer(this, obj, &sink_);
object_serializer.Serialize();
}
@@ -226,7 +225,7 @@ void StartupSerializer::SerializeStringTable(StringTable* string_table) {
Object obj = current.load(isolate);
if (obj.IsHeapObject()) {
DCHECK(obj.IsInternalizedString());
serializer_->SerializeObject(HeapObject::cast(obj));
serializer_->SerializeObject(handle(HeapObject::cast(obj), isolate));
}
}
}
@@ -244,9 +243,6 @@ void StartupSerializer::SerializeStrongReferences(
Isolate* isolate = this->isolate();
// No active threads.
CHECK_NULL(isolate->thread_manager()->FirstThreadStateInUse());
// No active or weak handles.
CHECK_IMPLIES(!allow_active_isolate_for_testing(),
isolate->handle_scope_implementer()->blocks()->empty());
SanitizeIsolateScope sanitize_isolate(
isolate, allow_active_isolate_for_testing(), no_gc);
@@ -269,12 +265,12 @@ SerializedHandleChecker::SerializedHandleChecker(Isolate* isolate,
}
bool StartupSerializer::SerializeUsingReadOnlyObjectCache(
SnapshotByteSink* sink, HeapObject obj) {
SnapshotByteSink* sink, Handle<HeapObject> obj) {
return read_only_serializer_->SerializeUsingReadOnlyObjectCache(sink, obj);
}
void StartupSerializer::SerializeUsingStartupObjectCache(SnapshotByteSink* sink,
HeapObject obj) {
void StartupSerializer::SerializeUsingStartupObjectCache(
SnapshotByteSink* sink, Handle<HeapObject> obj) {
int cache_index = SerializeInObjectCache(obj);
sink->Put(kStartupObjectCache, "StartupObjectCache");
sink->PutInt(cache_index, "startup_object_cache_index");


@@ -35,23 +35,24 @@ class V8_EXPORT_PRIVATE StartupSerializer : public RootsSerializer {
// ReadOnlyObjectCache bytecode into |sink|. Returns whether this was
// successful.
bool SerializeUsingReadOnlyObjectCache(SnapshotByteSink* sink,
HeapObject obj);
Handle<HeapObject> obj);
// Adds |obj| to the startup object object cache if not already present and
// emits a StartupObjectCache bytecode into |sink|.
void SerializeUsingStartupObjectCache(SnapshotByteSink* sink, HeapObject obj);
void SerializeUsingStartupObjectCache(SnapshotByteSink* sink,
Handle<HeapObject> obj);
// The per-heap dirty FinalizationRegistry list is weak and not serialized. No
// JSFinalizationRegistries should be used during startup.
void CheckNoDirtyFinalizationRegistries();
private:
void SerializeObject(HeapObject o) override;
void SerializeObjectImpl(Handle<HeapObject> o) override;
void SerializeStringTable(StringTable* string_table);
ReadOnlySerializer* read_only_serializer_;
std::vector<AccessorInfo> accessor_infos_;
std::vector<CallHandlerInfo> call_handler_infos_;
std::vector<Handle<AccessorInfo>> accessor_infos_;
std::vector<Handle<CallHandlerInfo>> call_handler_infos_;
DISALLOW_COPY_AND_ASSIGN(StartupSerializer);
};


@@ -26,7 +26,7 @@ void IdentityMapBase::Clear() {
DCHECK(!is_iterable());
DCHECK_NOT_NULL(strong_roots_entry_);
heap_->UnregisterStrongRoots(strong_roots_entry_);
DeletePointerArray(reinterpret_cast<void**>(keys_), capacity_);
DeletePointerArray(reinterpret_cast<uintptr_t*>(keys_), capacity_);
DeletePointerArray(values_, capacity_);
keys_ = nullptr;
strong_roots_entry_ = nullptr;
@@ -82,12 +82,12 @@ int IdentityMapBase::InsertKey(Address address) {
UNREACHABLE();
}
bool IdentityMapBase::DeleteIndex(int index, void** deleted_value) {
bool IdentityMapBase::DeleteIndex(int index, uintptr_t* deleted_value) {
if (deleted_value != nullptr) *deleted_value = values_[index];
Address not_mapped = ReadOnlyRoots(heap_).not_mapped_symbol().ptr();
DCHECK_NE(keys_[index], not_mapped);
keys_[index] = not_mapped;
values_[index] = nullptr;
values_[index] = 0;
size_--;
DCHECK_GE(size_, 0);
@@ -113,7 +113,7 @@ bool IdentityMapBase::DeleteIndex(int index, void** deleted_value) {
}
DCHECK_EQ(not_mapped, keys_[index]);
DCHECK_NULL(values_[index]);
DCHECK_EQ(values_[index], 0);
std::swap(keys_[index], keys_[next_index]);
std::swap(values_[index], values_[next_index]);
index = next_index;
@@ -165,7 +165,7 @@ IdentityMapBase::RawEntry IdentityMapBase::GetEntry(Address key) {
Address not_mapped = ReadOnlyRoots(heap_).not_mapped_symbol().ptr();
for (int i = 0; i < capacity_; i++) keys_[i] = not_mapped;
values_ = NewPointerArray(capacity_);
memset(values_, 0, sizeof(void*) * capacity_);
memset(values_, 0, sizeof(uintptr_t) * capacity_);
strong_roots_entry_ = heap_->RegisterStrongRoots(
FullObjectSlot(keys_), FullObjectSlot(keys_ + capacity_));
@@ -190,7 +190,7 @@ IdentityMapBase::RawEntry IdentityMapBase::FindEntry(Address key) const {
// Deletes the given key from the map using the object's address as the
// identity, returning true iff the key was found (in which case, the value
// argument will be set to the deleted entry's value).
bool IdentityMapBase::DeleteEntry(Address key, void** deleted_value) {
bool IdentityMapBase::DeleteEntry(Address key, uintptr_t* deleted_value) {
CHECK(!is_iterable()); // Don't allow deletion by key while iterable.
if (size_ == 0) return false;
int index = Lookup(key);
@@ -232,7 +232,7 @@ void IdentityMapBase::Rehash() {
// Record the current GC counter.
gc_counter_ = heap_->gc_count();
// Assume that most objects won't be moved.
std::vector<std::pair<Address, void*>> reinsert;
std::vector<std::pair<Address, uintptr_t>> reinsert;
// Search the table looking for keys that wouldn't be found with their
// current hashcode and evacuate them.
int last_empty = -1;
@@ -244,9 +244,9 @@ void IdentityMapBase::Rehash() {
int pos = Hash(keys_[i]) & mask_;
if (pos <= last_empty || pos > i) {
// Evacuate an entry that is in the wrong place.
reinsert.push_back(std::pair<Address, void*>(keys_[i], values_[i]));
reinsert.push_back(std::pair<Address, uintptr_t>(keys_[i], values_[i]));
keys_[i] = not_mapped;
values_[i] = nullptr;
values_[i] = 0;
last_empty = i;
size_--;
}
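
Keys are hashed by their address, so a moving GC silently invalidates bucket positions; Rehash above evacuates entries that are no longer findable and reinserts them under their new hash. A simplified sketch of the idea, which conservatively evacuates every entry not sitting at its home slot (the real Rehash above uses the finer pos <= last_empty || pos > i criterion):

    #include <cstdint>
    #include <utility>
    #include <vector>

    struct TinyIdentityMap {
      // Open-addressed table; 0 marks an empty slot, capacity is a power of
      // two, and the table is assumed never completely full.
      std::vector<uintptr_t> keys;
      std::vector<uintptr_t> values;
      size_t mask;  // capacity - 1

      size_t Hash(uintptr_t key) const { return (key >> 3) & mask; }

      void Insert(uintptr_t key, uintptr_t value) {
        size_t i = Hash(key);
        while (keys[i] != 0) i = (i + 1) & mask;  // Linear probing.
        keys[i] = key;
        values[i] = value;
      }

      // After a moving GC the keys hold new addresses, hence new hashes:
      // pull out every displaced entry, then reinsert it where it now hashes.
      void Rehash() {
        std::vector<std::pair<uintptr_t, uintptr_t>> reinsert;
        for (size_t i = 0; i <= mask; i++) {
          if (keys[i] != 0 && Hash(keys[i]) != i) {
            reinsert.emplace_back(keys[i], values[i]);
            keys[i] = 0;
            values[i] = 0;
          }
        }
        for (const auto& kv : reinsert) Insert(kv.first, kv.second);
      }
    };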
@@ -266,7 +266,7 @@ void IdentityMapBase::Resize(int new_capacity) {
DCHECK_GT(new_capacity, size_);
int old_capacity = capacity_;
Address* old_keys = keys_;
void** old_values = values_;
uintptr_t* old_values = values_;
capacity_ = new_capacity;
mask_ = capacity_ - 1;
@@ -277,7 +277,7 @@ void IdentityMapBase::Resize(int new_capacity) {
Address not_mapped = ReadOnlyRoots(heap_).not_mapped_symbol().ptr();
for (int i = 0; i < capacity_; i++) keys_[i] = not_mapped;
values_ = NewPointerArray(capacity_);
memset(values_, 0, sizeof(void*) * capacity_);
memset(values_, 0, sizeof(uintptr_t) * capacity_);
for (int i = 0; i < old_capacity; i++) {
if (old_keys[i] == not_mapped) continue;
@@ -292,7 +292,7 @@ void IdentityMapBase::Resize(int new_capacity) {
FullObjectSlot(keys_ + capacity_));
// Delete old storage.
DeletePointerArray(reinterpret_cast<void**>(old_keys), old_capacity);
DeletePointerArray(reinterpret_cast<uintptr_t*>(old_keys), old_capacity);
DeletePointerArray(old_values, old_capacity);
}


@@ -5,6 +5,8 @@
#ifndef V8_UTILS_IDENTITY_MAP_H_
#define V8_UTILS_IDENTITY_MAP_H_
#include <type_traits>
#include "src/base/functional.h"
#include "src/handles/handles.h"
#include "src/objects/heap-object.h"
@@ -30,7 +32,7 @@ class V8_EXPORT_PRIVATE IdentityMapBase {
// within the {keys_} array in order to simulate a moving GC.
friend class IdentityMapTester;
using RawEntry = void**;
using RawEntry = uintptr_t*;
explicit IdentityMapBase(Heap* heap)
: heap_(heap),
@@ -46,7 +48,7 @@ class V8_EXPORT_PRIVATE IdentityMapBase {
RawEntry GetEntry(Address key);
RawEntry FindEntry(Address key) const;
bool DeleteEntry(Address key, void** deleted_value);
bool DeleteEntry(Address key, uintptr_t* deleted_value);
void Clear();
Address KeyAtIndex(int index) const;
@@ -57,8 +59,8 @@ class V8_EXPORT_PRIVATE IdentityMapBase {
void EnableIteration();
void DisableIteration();
virtual void** NewPointerArray(size_t length) = 0;
virtual void DeletePointerArray(void** array, size_t length) = 0;
virtual uintptr_t* NewPointerArray(size_t length) = 0;
virtual void DeletePointerArray(uintptr_t* array, size_t length) = 0;
private:
// Internal implementation should not be called directly by subclasses.
@@ -66,7 +68,7 @@ class V8_EXPORT_PRIVATE IdentityMapBase {
int InsertKey(Address address);
int Lookup(Address key) const;
int LookupOrInsert(Address key);
bool DeleteIndex(int index, void** deleted_value);
bool DeleteIndex(int index, uintptr_t* deleted_value);
void Rehash();
void Resize(int new_capacity);
int Hash(Address address) const;
@@ -79,7 +81,7 @@ class V8_EXPORT_PRIVATE IdentityMapBase {
int mask_;
Address* keys_;
StrongRootsEntry* strong_roots_entry_;
void** values_;
uintptr_t* values_;
bool is_iterable_;
DISALLOW_COPY_AND_ASSIGN(IdentityMapBase);
@@ -89,11 +91,15 @@ class V8_EXPORT_PRIVATE IdentityMapBase {
// The map is robust w.r.t. garbage collection by synchronization with the
// supplied {heap}.
// * Keys are treated as strong roots.
// * The value type {V} must be reinterpret_cast'able to {void*}
// * The value type {V} must be reinterpret_cast'able to {uintptr_t}
// * The value type {V} must not be a heap type.
template <typename V, class AllocationPolicy>
class IdentityMap : public IdentityMapBase {
public:
STATIC_ASSERT(sizeof(V) <= sizeof(uintptr_t));
STATIC_ASSERT(std::is_trivially_copyable<V>::value);
STATIC_ASSERT(std::is_trivially_destructible<V>::value);
explicit IdentityMap(Heap* heap,
AllocationPolicy allocator = AllocationPolicy())
: IdentityMapBase(heap), allocator_(allocator) {}
@@ -125,7 +131,7 @@ class IdentityMap : public IdentityMapBase {
return Delete(*key, deleted_value);
}
bool Delete(Object key, V* deleted_value) {
void* v = nullptr;
uintptr_t v;
bool deleted_something = DeleteEntry(key.ptr(), &v);
if (deleted_value != nullptr && deleted_something) {
*deleted_value = *reinterpret_cast<V*>(&v);
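
The STATIC_ASSERTs introduced above pin down why this cast is safe: a value of type V travels through the map by being bit-copied into a uintptr_t slot and bit-copied back out. A standalone sketch of that round trip (invented helpers, using memcpy rather than reinterpret_cast):

    #include <cstdint>
    #include <cstring>
    #include <type_traits>

    template <typename V>
    uintptr_t StoreValue(V value) {
      static_assert(sizeof(V) <= sizeof(uintptr_t), "value must fit in a slot");
      static_assert(std::is_trivially_copyable<V>::value, "bitwise copy only");
      uintptr_t slot = 0;
      std::memcpy(&slot, &value, sizeof(V));  // Low bytes carry the value.
      return slot;
    }

    template <typename V>
    V LoadValue(uintptr_t slot) {
      V value{};  // Scalar/pointer value types make this zero-init.
      std::memcpy(&value, &slot, sizeof(V));
      return value;
    }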
@@ -188,12 +194,12 @@ class IdentityMap : public IdentityMapBase {
// TODO(ishell): consider removing virtual methods in favor of combining
// IdentityMapBase and IdentityMap into one class. This would also save
// space when sizeof(V) is less than sizeof(void*).
void** NewPointerArray(size_t length) override {
return allocator_.template NewArray<void*, Buffer>(length);
// space when sizeof(V) is less than sizeof(uintptr_t).
uintptr_t* NewPointerArray(size_t length) override {
return allocator_.template NewArray<uintptr_t, Buffer>(length);
}
void DeletePointerArray(void** array, size_t length) override {
allocator_.template DeleteArray<void*, Buffer>(array, length);
void DeletePointerArray(uintptr_t* array, size_t length) override {
allocator_.template DeleteArray<uintptr_t, Buffer>(array, length);
}
private:


@@ -87,8 +87,7 @@ Handle<Object> HeapTester::TestAllocateAfterFailures() {
heap::SimulateFullSpace(heap->code_space());
size = CcTest::i_isolate()->builtins()->builtin(Builtins::kIllegal).Size();
obj =
heap->AllocateRaw(size, AllocationType::kCode, AllocationOrigin::kRuntime,
AllocationAlignment::kCodeAligned)
heap->AllocateRaw(size, AllocationType::kCode, AllocationOrigin::kRuntime)
.ToObjectChecked();
heap->CreateFillerObjectAt(obj.address(), size, ClearRecordedSlots::kNo);
return CcTest::i_isolate()->factory()->true_value();


@@ -7240,8 +7240,7 @@ HEAP_TEST(CodeLargeObjectSpace) {
heap->AddHeapObjectAllocationTracker(&allocation_tracker);
AllocationResult allocation = heap->AllocateRaw(
size_in_bytes, AllocationType::kCode, AllocationOrigin::kGeneratedCode,
AllocationAlignment::kCodeAligned);
size_in_bytes, AllocationType::kCode, AllocationOrigin::kGeneratedCode);
CHECK(allocation.ToAddress() == allocation_tracker.address());
heap->CreateFillerObjectAt(allocation.ToAddress(), size_in_bytes,


@@ -172,6 +172,7 @@ static StartupBlobs Serialize(v8::Isolate* isolate) {
i::GarbageCollectionReason::kTesting);
SafepointScope safepoint(internal_isolate->heap());
HandleScope scope(internal_isolate);
DisallowGarbageCollection no_gc;
ReadOnlySerializer read_only_serializer(internal_isolate,
@@ -246,36 +247,6 @@ UNINITIALIZED_TEST(StartupSerializerOnce) {
TestStartupSerializerOnceImpl();
}
UNINITIALIZED_TEST(StartupSerializerOnce1) {
DisableAlwaysOpt();
FLAG_serialization_chunk_size = 1;
TestStartupSerializerOnceImpl();
}
UNINITIALIZED_TEST(StartupSerializerOnce32) {
DisableAlwaysOpt();
FLAG_serialization_chunk_size = 32;
TestStartupSerializerOnceImpl();
}
UNINITIALIZED_TEST(StartupSerializerOnce1K) {
DisableAlwaysOpt();
FLAG_serialization_chunk_size = 1 * KB;
TestStartupSerializerOnceImpl();
}
UNINITIALIZED_TEST(StartupSerializerOnce4K) {
DisableAlwaysOpt();
FLAG_serialization_chunk_size = 4 * KB;
TestStartupSerializerOnceImpl();
}
UNINITIALIZED_TEST(StartupSerializerOnce32K) {
DisableAlwaysOpt();
FLAG_serialization_chunk_size = 32 * KB;
TestStartupSerializerOnceImpl();
}
UNINITIALIZED_TEST(StartupSerializerTwice) {
DisableAlwaysOpt();
v8::Isolate* isolate = TestSerializer::NewIsolateInitialized();
@@ -308,7 +279,6 @@ UNINITIALIZED_TEST(StartupSerializerOnceRunScript) {
v8::Isolate::Scope isolate_scope(isolate);
v8::HandleScope handle_scope(isolate);
v8::Local<v8::Context> env = v8::Context::New(isolate);
env->Enter();
@@ -380,6 +350,7 @@ static void SerializeContext(Vector<const byte>* startup_blob_out,
v8::Local<v8::Context>::New(v8_isolate, env)->Exit();
}
HandleScope scope(isolate);
i::Context raw_context = i::Context::cast(*v8::Utils::OpenPersistent(env));
env.Reset();
@@ -532,6 +503,7 @@ static void SerializeCustomContext(Vector<const byte>* startup_blob_out,
v8::Local<v8::Context>::New(v8_isolate, env)->Exit();
}
HandleScope scope(isolate);
i::Context raw_context = i::Context::cast(*v8::Utils::OpenPersistent(env));
env.Reset();
@@ -1010,14 +982,17 @@ UNINITIALIZED_TEST(CustomSnapshotDataBlobArrayBufferWithOffset) {
"var x = new Int32Array([12, 24, 48, 96]);"
"var y = new Int32Array(x.buffer, 4, 2)";
Int32Expectations expectations = {
std::make_tuple("x[1]", 24), std::make_tuple("x[2]", 48),
std::make_tuple("y[0]", 24), std::make_tuple("y[1]", 48),
std::make_tuple("x[1]", 24),
std::make_tuple("x[2]", 48),
std::make_tuple("y[0]", 24),
std::make_tuple("y[1]", 48),
};
// Verify that the typed arrays use the same buffer (not independent copies).
const char* code_to_run_after_restore = "x[2] = 57; y[0] = 42;";
Int32Expectations after_restore_expectations = {
std::make_tuple("x[1]", 42), std::make_tuple("y[1]", 57),
std::make_tuple("x[1]", 42),
std::make_tuple("y[1]", 57),
};
TypedArrayTestHelper(code, expectations, code_to_run_after_restore,
@@ -1574,16 +1549,13 @@ UNINITIALIZED_TEST(CustomSnapshotDataBlobImmortalImmovableRoots) {
FreeCurrentEmbeddedBlob();
}
TEST(TestThatAlwaysSucceeds) {
}
TEST(TestThatAlwaysSucceeds) {}
TEST(TestThatAlwaysFails) {
bool ArtificialFailure = false;
CHECK(ArtificialFailure);
}
int CountBuiltins() {
// Check that we have not deserialized any additional builtin.
HeapObjectIterator iterator(CcTest::heap());
@@ -1738,21 +1710,6 @@ TEST(CodeSerializerOnePlusOneWithDebugger) {
TestCodeSerializerOnePlusOneImpl();
}
TEST(CodeSerializerOnePlusOne1) {
FLAG_serialization_chunk_size = 1;
TestCodeSerializerOnePlusOneImpl();
}
TEST(CodeSerializerOnePlusOne32) {
FLAG_serialization_chunk_size = 32;
TestCodeSerializerOnePlusOneImpl();
}
TEST(CodeSerializerOnePlusOne4K) {
FLAG_serialization_chunk_size = 4 * KB;
TestCodeSerializerOnePlusOneImpl();
}
TEST(CodeSerializerPromotedToCompilationCache) {
LocalContext context;
Isolate* isolate = CcTest::i_isolate();
@@ -2063,8 +2020,9 @@ TEST(CodeSerializerThreeBigStrings) {
Handle<String> source_str =
f->NewConsString(
f->NewConsString(source_a_str, source_b_str).ToHandleChecked(),
source_c_str).ToHandleChecked();
f->NewConsString(source_a_str, source_b_str).ToHandleChecked(),
source_c_str)
.ToHandleChecked();
Handle<JSObject> global(isolate->context().global_object(), isolate);
ScriptData* cache = nullptr;
@@ -2116,7 +2074,6 @@ TEST(CodeSerializerThreeBigStrings) {
source_c.Dispose();
}
class SerializerOneByteResource
: public v8::String::ExternalOneByteStringResource {
public:
@@ -2133,7 +2090,6 @@ class SerializerOneByteResource
int dispose_count_;
};
class SerializerTwoByteResource : public v8::String::ExternalStringResource {
public:
SerializerTwoByteResource(const char* data, size_t length)
@@ -2244,10 +2200,11 @@ TEST(CodeSerializerLargeExternalString) {
// Create the source, which is "var <literal> = 42; <literal>".
Handle<String> source_str =
f->NewConsString(
f->NewConsString(f->NewStringFromAsciiChecked("var "), name)
.ToHandleChecked(),
f->NewConsString(f->NewStringFromAsciiChecked(" = 42; "), name)
.ToHandleChecked()).ToHandleChecked();
f->NewConsString(f->NewStringFromAsciiChecked("var "), name)
.ToHandleChecked(),
f->NewConsString(f->NewStringFromAsciiChecked(" = 42; "), name)
.ToHandleChecked())
.ToHandleChecked();
Handle<JSObject> global(isolate->context().global_object(), isolate);
ScriptData* cache = nullptr;
@@ -2331,10 +2288,8 @@ TEST(CodeSerializerExternalScriptName) {
delete cache;
}
static bool toplevel_test_code_event_found = false;
static void SerializerCodeEventListener(const v8::JitCodeEvent* event) {
if (event->type == v8::JitCodeEvent::CODE_ADDED &&
(memcmp(event->name.str, "Script:~ test", 13) == 0 ||
@@ -2568,7 +2523,7 @@ TEST(CodeSerializerBitFlip) {
v8::ScriptCompiler::CachedData* cache = CompileRunAndProduceCache(source);
// Arbitrary bit flip.
int arbitrary_spot = 337;
int arbitrary_spot = 237;
CHECK_LT(arbitrary_spot, cache->length);
const_cast<uint8_t*>(cache->data)[arbitrary_spot] ^= 0x40;
@@ -2783,7 +2738,6 @@ static void AccessorForSerialization(
info.GetReturnValue().Set(v8_num(2017));
}
static SerializerOneByteResource serializable_one_byte_resource("one_byte", 8);
static SerializerTwoByteResource serializable_two_byte_resource("two_byte", 8);
@@ -3438,11 +3392,11 @@ UNINITIALIZED_TEST(SnapshotCreatorAddData) {
v8::Private::ForApi(isolate, v8_str("private_symbol"));
v8::Local<v8::Signature> signature =
v8::Signature::New(isolate, v8::FunctionTemplate::New(isolate));
v8::Signature::New(isolate, v8::FunctionTemplate::New(isolate));
v8::Local<v8::AccessorSignature> accessor_signature =
v8::AccessorSignature::New(isolate,
v8::FunctionTemplate::New(isolate));
v8::AccessorSignature::New(isolate,
v8::FunctionTemplate::New(isolate));
CHECK_EQ(0u, creator.AddData(context, object));
CHECK_EQ(1u, creator.AddData(context, v8_str("context-dependent")));
@@ -3529,8 +3483,7 @@ UNINITIALIZED_TEST(SnapshotCreatorAddData) {
isolate->GetDataFromSnapshotOnce<v8::FunctionTemplate>(3).IsEmpty());
isolate->GetDataFromSnapshotOnce<v8::Private>(4).ToLocalChecked();
CHECK(
isolate->GetDataFromSnapshotOnce<v8::Private>(4).IsEmpty());
CHECK(isolate->GetDataFromSnapshotOnce<v8::Private>(4).IsEmpty());
isolate->GetDataFromSnapshotOnce<v8::Signature>(5).ToLocalChecked();
CHECK(isolate->GetDataFromSnapshotOnce<v8::Signature>(5).IsEmpty());


@@ -18,7 +18,7 @@
},
{
"name": "SnapshotSizeStartup",
"results_regexp": "(\\d+) bytes in \\d+ chunks for startup$"
"results_regexp": "(\\d+) bytes for startup$"
},
{
"name": "SnapshotSizeReadOnly",
@@ -26,7 +26,7 @@
},
{
"name": "SnapshotSizeContext",
"results_regexp": "(\\d+) bytes in \\d+ chunks for context #0$"
"results_regexp": "(\\d+) bytes for context #0$"
},
{
"name": "DeserializationTimeIsolate",


@@ -113,42 +113,43 @@ INSTANCE_TYPES = {
149: "SMALL_ORDERED_HASH_MAP_TYPE",
150: "SMALL_ORDERED_HASH_SET_TYPE",
151: "SMALL_ORDERED_NAME_DICTIONARY_TYPE",
152: "SOURCE_TEXT_MODULE_TYPE",
153: "SYNTHETIC_MODULE_TYPE",
154: "UNCOMPILED_DATA_WITH_PREPARSE_DATA_TYPE",
155: "UNCOMPILED_DATA_WITHOUT_PREPARSE_DATA_TYPE",
156: "WEAK_FIXED_ARRAY_TYPE",
157: "TRANSITION_ARRAY_TYPE",
158: "CELL_TYPE",
159: "CODE_TYPE",
160: "CODE_DATA_CONTAINER_TYPE",
161: "COVERAGE_INFO_TYPE",
162: "DESCRIPTOR_ARRAY_TYPE",
163: "EMBEDDER_DATA_ARRAY_TYPE",
164: "FEEDBACK_METADATA_TYPE",
165: "FEEDBACK_VECTOR_TYPE",
166: "FILLER_TYPE",
167: "FREE_SPACE_TYPE",
168: "INTERNAL_CLASS_TYPE",
169: "INTERNAL_CLASS_WITH_STRUCT_ELEMENTS_TYPE",
170: "MAP_TYPE",
171: "ON_HEAP_BASIC_BLOCK_PROFILER_DATA_TYPE",
172: "PREPARSE_DATA_TYPE",
173: "PROPERTY_ARRAY_TYPE",
174: "PROPERTY_CELL_TYPE",
175: "SHARED_FUNCTION_INFO_TYPE",
176: "SMI_BOX_TYPE",
177: "SMI_PAIR_TYPE",
178: "SORT_STATE_TYPE",
179: "WASM_ARRAY_TYPE",
180: "WASM_STRUCT_TYPE",
181: "WEAK_ARRAY_LIST_TYPE",
182: "WEAK_CELL_TYPE",
183: "JS_PROXY_TYPE",
152: "DESCRIPTOR_ARRAY_TYPE",
153: "STRONG_DESCRIPTOR_ARRAY_TYPE",
154: "SOURCE_TEXT_MODULE_TYPE",
155: "SYNTHETIC_MODULE_TYPE",
156: "UNCOMPILED_DATA_WITH_PREPARSE_DATA_TYPE",
157: "UNCOMPILED_DATA_WITHOUT_PREPARSE_DATA_TYPE",
158: "WEAK_FIXED_ARRAY_TYPE",
159: "TRANSITION_ARRAY_TYPE",
160: "CELL_TYPE",
161: "CODE_TYPE",
162: "CODE_DATA_CONTAINER_TYPE",
163: "COVERAGE_INFO_TYPE",
164: "EMBEDDER_DATA_ARRAY_TYPE",
165: "FEEDBACK_METADATA_TYPE",
166: "FEEDBACK_VECTOR_TYPE",
167: "FILLER_TYPE",
168: "FREE_SPACE_TYPE",
169: "INTERNAL_CLASS_TYPE",
170: "INTERNAL_CLASS_WITH_STRUCT_ELEMENTS_TYPE",
171: "MAP_TYPE",
172: "ON_HEAP_BASIC_BLOCK_PROFILER_DATA_TYPE",
173: "PREPARSE_DATA_TYPE",
174: "PROPERTY_ARRAY_TYPE",
175: "PROPERTY_CELL_TYPE",
176: "SHARED_FUNCTION_INFO_TYPE",
177: "SMI_BOX_TYPE",
178: "SMI_PAIR_TYPE",
179: "SORT_STATE_TYPE",
180: "WASM_ARRAY_TYPE",
181: "WASM_STRUCT_TYPE",
182: "WEAK_ARRAY_LIST_TYPE",
183: "WEAK_CELL_TYPE",
184: "JS_PROXY_TYPE",
1057: "JS_OBJECT_TYPE",
184: "JS_GLOBAL_OBJECT_TYPE",
185: "JS_GLOBAL_PROXY_TYPE",
186: "JS_MODULE_NAMESPACE_TYPE",
185: "JS_GLOBAL_OBJECT_TYPE",
186: "JS_GLOBAL_PROXY_TYPE",
187: "JS_MODULE_NAMESPACE_TYPE",
1040: "JS_SPECIAL_API_OBJECT_TYPE",
1041: "JS_PRIMITIVE_WRAPPER_TYPE",
1042: "JS_MAP_KEY_ITERATOR_TYPE",
@@ -205,16 +206,16 @@ INSTANCE_TYPES = {
# List of known V8 maps.
KNOWN_MAPS = {
("read_only_space", 0x02115): (170, "MetaMap"),
("read_only_space", 0x02115): (171, "MetaMap"),
("read_only_space", 0x0213d): (67, "NullMap"),
("read_only_space", 0x02165): (162, "DescriptorArrayMap"),
("read_only_space", 0x0218d): (156, "WeakFixedArrayMap"),
("read_only_space", 0x02165): (153, "StrongDescriptorArrayMap"),
("read_only_space", 0x0218d): (158, "WeakFixedArrayMap"),
("read_only_space", 0x021cd): (96, "EnumCacheMap"),
("read_only_space", 0x02201): (117, "FixedArrayMap"),
("read_only_space", 0x0224d): (8, "OneByteInternalizedStringMap"),
("read_only_space", 0x02299): (167, "FreeSpaceMap"),
("read_only_space", 0x022c1): (166, "OnePointerFillerMap"),
("read_only_space", 0x022e9): (166, "TwoPointerFillerMap"),
("read_only_space", 0x02299): (168, "FreeSpaceMap"),
("read_only_space", 0x022c1): (167, "OnePointerFillerMap"),
("read_only_space", 0x022e9): (167, "TwoPointerFillerMap"),
("read_only_space", 0x02311): (67, "UninitializedMap"),
("read_only_space", 0x02389): (67, "UndefinedMap"),
("read_only_space", 0x023cd): (66, "HeapNumberMap"),
@@ -226,14 +227,14 @@ KNOWN_MAPS = {
("read_only_space", 0x0257d): (64, "SymbolMap"),
("read_only_space", 0x025a5): (40, "OneByteStringMap"),
("read_only_space", 0x025cd): (129, "ScopeInfoMap"),
("read_only_space", 0x025f5): (175, "SharedFunctionInfoMap"),
("read_only_space", 0x0261d): (159, "CodeMap"),
("read_only_space", 0x02645): (158, "CellMap"),
("read_only_space", 0x0266d): (174, "GlobalPropertyCellMap"),
("read_only_space", 0x025f5): (176, "SharedFunctionInfoMap"),
("read_only_space", 0x0261d): (161, "CodeMap"),
("read_only_space", 0x02645): (160, "CellMap"),
("read_only_space", 0x0266d): (175, "GlobalPropertyCellMap"),
("read_only_space", 0x02695): (70, "ForeignMap"),
("read_only_space", 0x026bd): (157, "TransitionArrayMap"),
("read_only_space", 0x026bd): (159, "TransitionArrayMap"),
("read_only_space", 0x026e5): (45, "ThinOneByteStringMap"),
("read_only_space", 0x0270d): (165, "FeedbackVectorMap"),
("read_only_space", 0x0270d): (166, "FeedbackVectorMap"),
("read_only_space", 0x0273d): (67, "ArgumentsMarkerMap"),
("read_only_space", 0x0279d): (67, "ExceptionMap"),
("read_only_space", 0x027f9): (67, "TerminationExceptionMap"),
@@ -241,126 +242,127 @@ KNOWN_MAPS = {
("read_only_space", 0x028c1): (67, "StaleRegisterMap"),
("read_only_space", 0x02921): (130, "ScriptContextTableMap"),
("read_only_space", 0x02949): (127, "ClosureFeedbackCellArrayMap"),
("read_only_space", 0x02971): (164, "FeedbackMetadataArrayMap"),
("read_only_space", 0x02971): (165, "FeedbackMetadataArrayMap"),
("read_only_space", 0x02999): (117, "ArrayListMap"),
("read_only_space", 0x029c1): (65, "BigIntMap"),
("read_only_space", 0x029e9): (128, "ObjectBoilerplateDescriptionMap"),
("read_only_space", 0x02a11): (132, "BytecodeArrayMap"),
("read_only_space", 0x02a39): (160, "CodeDataContainerMap"),
("read_only_space", 0x02a61): (161, "CoverageInfoMap"),
("read_only_space", 0x02a89): (133, "FixedDoubleArrayMap"),
("read_only_space", 0x02ab1): (120, "GlobalDictionaryMap"),
("read_only_space", 0x02ad9): (97, "ManyClosuresCellMap"),
("read_only_space", 0x02b01): (117, "ModuleInfoMap"),
("read_only_space", 0x02b29): (121, "NameDictionaryMap"),
("read_only_space", 0x02b51): (97, "NoClosuresCellMap"),
("read_only_space", 0x02b79): (122, "NumberDictionaryMap"),
("read_only_space", 0x02ba1): (97, "OneClosureCellMap"),
("read_only_space", 0x02bc9): (123, "OrderedHashMapMap"),
("read_only_space", 0x02bf1): (124, "OrderedHashSetMap"),
("read_only_space", 0x02c19): (125, "OrderedNameDictionaryMap"),
("read_only_space", 0x02c41): (172, "PreparseDataMap"),
("read_only_space", 0x02c69): (173, "PropertyArrayMap"),
("read_only_space", 0x02c91): (93, "SideEffectCallHandlerInfoMap"),
("read_only_space", 0x02cb9): (93, "SideEffectFreeCallHandlerInfoMap"),
("read_only_space", 0x02ce1): (93, "NextCallSideEffectFreeCallHandlerInfoMap"),
("read_only_space", 0x02d09): (126, "SimpleNumberDictionaryMap"),
("read_only_space", 0x02d31): (149, "SmallOrderedHashMapMap"),
("read_only_space", 0x02d59): (150, "SmallOrderedHashSetMap"),
("read_only_space", 0x02d81): (151, "SmallOrderedNameDictionaryMap"),
("read_only_space", 0x02da9): (152, "SourceTextModuleMap"),
("read_only_space", 0x02dd1): (153, "SyntheticModuleMap"),
("read_only_space", 0x02df9): (155, "UncompiledDataWithoutPreparseDataMap"),
("read_only_space", 0x02e21): (154, "UncompiledDataWithPreparseDataMap"),
("read_only_space", 0x02e49): (71, "WasmTypeInfoMap"),
("read_only_space", 0x02e71): (181, "WeakArrayListMap"),
("read_only_space", 0x02e99): (119, "EphemeronHashTableMap"),
("read_only_space", 0x02ec1): (163, "EmbedderDataArrayMap"),
("read_only_space", 0x02ee9): (182, "WeakCellMap"),
("read_only_space", 0x02f11): (32, "StringMap"),
("read_only_space", 0x02f39): (41, "ConsOneByteStringMap"),
("read_only_space", 0x02f61): (33, "ConsStringMap"),
("read_only_space", 0x02f89): (37, "ThinStringMap"),
("read_only_space", 0x02fb1): (35, "SlicedStringMap"),
("read_only_space", 0x02fd9): (43, "SlicedOneByteStringMap"),
("read_only_space", 0x03001): (34, "ExternalStringMap"),
("read_only_space", 0x03029): (42, "ExternalOneByteStringMap"),
("read_only_space", 0x03051): (50, "UncachedExternalStringMap"),
("read_only_space", 0x03079): (0, "InternalizedStringMap"),
("read_only_space", 0x030a1): (2, "ExternalInternalizedStringMap"),
("read_only_space", 0x030c9): (10, "ExternalOneByteInternalizedStringMap"),
("read_only_space", 0x030f1): (18, "UncachedExternalInternalizedStringMap"),
("read_only_space", 0x03119): (26, "UncachedExternalOneByteInternalizedStringMap"),
("read_only_space", 0x03141): (58, "UncachedExternalOneByteStringMap"),
("read_only_space", 0x03169): (67, "SelfReferenceMarkerMap"),
("read_only_space", 0x03191): (67, "BasicBlockCountersMarkerMap"),
("read_only_space", 0x031d5): (87, "ArrayBoilerplateDescriptionMap"),
("read_only_space", 0x032a5): (99, "InterceptorInfoMap"),
("read_only_space", 0x05399): (72, "PromiseFulfillReactionJobTaskMap"),
("read_only_space", 0x053c1): (73, "PromiseRejectReactionJobTaskMap"),
("read_only_space", 0x053e9): (74, "CallableTaskMap"),
("read_only_space", 0x05411): (75, "CallbackTaskMap"),
("read_only_space", 0x05439): (76, "PromiseResolveThenableJobTaskMap"),
("read_only_space", 0x05461): (79, "FunctionTemplateInfoMap"),
("read_only_space", 0x05489): (80, "ObjectTemplateInfoMap"),
("read_only_space", 0x054b1): (81, "AccessCheckInfoMap"),
("read_only_space", 0x054d9): (82, "AccessorInfoMap"),
("read_only_space", 0x05501): (83, "AccessorPairMap"),
("read_only_space", 0x05529): (84, "AliasedArgumentsEntryMap"),
("read_only_space", 0x05551): (85, "AllocationMementoMap"),
("read_only_space", 0x05579): (88, "AsmWasmDataMap"),
("read_only_space", 0x055a1): (89, "AsyncGeneratorRequestMap"),
("read_only_space", 0x055c9): (90, "BreakPointMap"),
("read_only_space", 0x055f1): (91, "BreakPointInfoMap"),
("read_only_space", 0x05619): (92, "CachedTemplateObjectMap"),
("read_only_space", 0x05641): (94, "ClassPositionsMap"),
("read_only_space", 0x05669): (95, "DebugInfoMap"),
("read_only_space", 0x05691): (98, "FunctionTemplateRareDataMap"),
("read_only_space", 0x056b9): (100, "InterpreterDataMap"),
("read_only_space", 0x056e1): (101, "PromiseCapabilityMap"),
("read_only_space", 0x05709): (102, "PromiseReactionMap"),
("read_only_space", 0x05731): (103, "PropertyDescriptorObjectMap"),
("read_only_space", 0x05759): (104, "PrototypeInfoMap"),
("read_only_space", 0x05781): (105, "ScriptMap"),
("read_only_space", 0x057a9): (106, "SourceTextModuleInfoEntryMap"),
("read_only_space", 0x057d1): (107, "StackFrameInfoMap"),
("read_only_space", 0x057f9): (108, "StackTraceFrameMap"),
("read_only_space", 0x05821): (109, "TemplateObjectDescriptionMap"),
("read_only_space", 0x05849): (110, "Tuple2Map"),
("read_only_space", 0x05871): (111, "WasmCapiFunctionDataMap"),
("read_only_space", 0x05899): (112, "WasmExceptionTagMap"),
("read_only_space", 0x058c1): (113, "WasmExportedFunctionDataMap"),
("read_only_space", 0x058e9): (114, "WasmIndirectFunctionTableMap"),
("read_only_space", 0x05911): (115, "WasmJSFunctionDataMap"),
("read_only_space", 0x05939): (116, "WasmValueMap"),
("read_only_space", 0x05961): (135, "SloppyArgumentsElementsMap"),
("read_only_space", 0x05989): (171, "OnHeapBasicBlockProfilerDataMap"),
("read_only_space", 0x059b1): (168, "InternalClassMap"),
("read_only_space", 0x059d9): (177, "SmiPairMap"),
("read_only_space", 0x05a01): (176, "SmiBoxMap"),
("read_only_space", 0x05a29): (146, "ExportedSubClassBaseMap"),
("read_only_space", 0x05a51): (147, "ExportedSubClassMap"),
("read_only_space", 0x05a79): (68, "AbstractInternalClassSubclass1Map"),
("read_only_space", 0x05aa1): (69, "AbstractInternalClassSubclass2Map"),
("read_only_space", 0x05ac9): (134, "InternalClassWithSmiElementsMap"),
("read_only_space", 0x05af1): (169, "InternalClassWithStructElementsMap"),
("read_only_space", 0x05b19): (148, "ExportedSubClass2Map"),
("read_only_space", 0x05b41): (178, "SortStateMap"),
("read_only_space", 0x05b69): (86, "AllocationSiteWithWeakNextMap"),
("read_only_space", 0x05b91): (86, "AllocationSiteWithoutWeakNextMap"),
("read_only_space", 0x05bb9): (77, "LoadHandler1Map"),
("read_only_space", 0x05be1): (77, "LoadHandler2Map"),
("read_only_space", 0x05c09): (77, "LoadHandler3Map"),
("read_only_space", 0x05c31): (78, "StoreHandler0Map"),
("read_only_space", 0x05c59): (78, "StoreHandler1Map"),
("read_only_space", 0x05c81): (78, "StoreHandler2Map"),
("read_only_space", 0x05ca9): (78, "StoreHandler3Map"),
("read_only_space", 0x02a39): (162, "CodeDataContainerMap"),
("read_only_space", 0x02a61): (163, "CoverageInfoMap"),
("read_only_space", 0x02a89): (152, "DescriptorArrayMap"),
("read_only_space", 0x02ab1): (133, "FixedDoubleArrayMap"),
("read_only_space", 0x02ad9): (120, "GlobalDictionaryMap"),
("read_only_space", 0x02b01): (97, "ManyClosuresCellMap"),
("read_only_space", 0x02b29): (117, "ModuleInfoMap"),
("read_only_space", 0x02b51): (121, "NameDictionaryMap"),
("read_only_space", 0x02b79): (97, "NoClosuresCellMap"),
("read_only_space", 0x02ba1): (122, "NumberDictionaryMap"),
("read_only_space", 0x02bc9): (97, "OneClosureCellMap"),
("read_only_space", 0x02bf1): (123, "OrderedHashMapMap"),
("read_only_space", 0x02c19): (124, "OrderedHashSetMap"),
("read_only_space", 0x02c41): (125, "OrderedNameDictionaryMap"),
("read_only_space", 0x02c69): (173, "PreparseDataMap"),
("read_only_space", 0x02c91): (174, "PropertyArrayMap"),
("read_only_space", 0x02cb9): (93, "SideEffectCallHandlerInfoMap"),
("read_only_space", 0x02ce1): (93, "SideEffectFreeCallHandlerInfoMap"),
("read_only_space", 0x02d09): (93, "NextCallSideEffectFreeCallHandlerInfoMap"),
("read_only_space", 0x02d31): (126, "SimpleNumberDictionaryMap"),
("read_only_space", 0x02d59): (149, "SmallOrderedHashMapMap"),
("read_only_space", 0x02d81): (150, "SmallOrderedHashSetMap"),
("read_only_space", 0x02da9): (151, "SmallOrderedNameDictionaryMap"),
("read_only_space", 0x02dd1): (154, "SourceTextModuleMap"),
("read_only_space", 0x02df9): (155, "SyntheticModuleMap"),
("read_only_space", 0x02e21): (157, "UncompiledDataWithoutPreparseDataMap"),
("read_only_space", 0x02e49): (156, "UncompiledDataWithPreparseDataMap"),
("read_only_space", 0x02e71): (71, "WasmTypeInfoMap"),
("read_only_space", 0x02e99): (182, "WeakArrayListMap"),
("read_only_space", 0x02ec1): (119, "EphemeronHashTableMap"),
("read_only_space", 0x02ee9): (164, "EmbedderDataArrayMap"),
("read_only_space", 0x02f11): (183, "WeakCellMap"),
("read_only_space", 0x02f39): (32, "StringMap"),
("read_only_space", 0x02f61): (41, "ConsOneByteStringMap"),
("read_only_space", 0x02f89): (33, "ConsStringMap"),
("read_only_space", 0x02fb1): (37, "ThinStringMap"),
("read_only_space", 0x02fd9): (35, "SlicedStringMap"),
("read_only_space", 0x03001): (43, "SlicedOneByteStringMap"),
("read_only_space", 0x03029): (34, "ExternalStringMap"),
("read_only_space", 0x03051): (42, "ExternalOneByteStringMap"),
("read_only_space", 0x03079): (50, "UncachedExternalStringMap"),
("read_only_space", 0x030a1): (0, "InternalizedStringMap"),
("read_only_space", 0x030c9): (2, "ExternalInternalizedStringMap"),
("read_only_space", 0x030f1): (10, "ExternalOneByteInternalizedStringMap"),
("read_only_space", 0x03119): (18, "UncachedExternalInternalizedStringMap"),
("read_only_space", 0x03141): (26, "UncachedExternalOneByteInternalizedStringMap"),
("read_only_space", 0x03169): (58, "UncachedExternalOneByteStringMap"),
("read_only_space", 0x03191): (67, "SelfReferenceMarkerMap"),
("read_only_space", 0x031b9): (67, "BasicBlockCountersMarkerMap"),
("read_only_space", 0x031fd): (87, "ArrayBoilerplateDescriptionMap"),
("read_only_space", 0x032cd): (99, "InterceptorInfoMap"),
("read_only_space", 0x053c1): (72, "PromiseFulfillReactionJobTaskMap"),
("read_only_space", 0x053e9): (73, "PromiseRejectReactionJobTaskMap"),
("read_only_space", 0x05411): (74, "CallableTaskMap"),
("read_only_space", 0x05439): (75, "CallbackTaskMap"),
("read_only_space", 0x05461): (76, "PromiseResolveThenableJobTaskMap"),
("read_only_space", 0x05489): (79, "FunctionTemplateInfoMap"),
("read_only_space", 0x054b1): (80, "ObjectTemplateInfoMap"),
("read_only_space", 0x054d9): (81, "AccessCheckInfoMap"),
("read_only_space", 0x05501): (82, "AccessorInfoMap"),
("read_only_space", 0x05529): (83, "AccessorPairMap"),
("read_only_space", 0x05551): (84, "AliasedArgumentsEntryMap"),
("read_only_space", 0x05579): (85, "AllocationMementoMap"),
("read_only_space", 0x055a1): (88, "AsmWasmDataMap"),
("read_only_space", 0x055c9): (89, "AsyncGeneratorRequestMap"),
("read_only_space", 0x055f1): (90, "BreakPointMap"),
("read_only_space", 0x05619): (91, "BreakPointInfoMap"),
("read_only_space", 0x05641): (92, "CachedTemplateObjectMap"),
("read_only_space", 0x05669): (94, "ClassPositionsMap"),
("read_only_space", 0x05691): (95, "DebugInfoMap"),
("read_only_space", 0x056b9): (98, "FunctionTemplateRareDataMap"),
("read_only_space", 0x056e1): (100, "InterpreterDataMap"),
("read_only_space", 0x05709): (101, "PromiseCapabilityMap"),
("read_only_space", 0x05731): (102, "PromiseReactionMap"),
("read_only_space", 0x05759): (103, "PropertyDescriptorObjectMap"),
("read_only_space", 0x05781): (104, "PrototypeInfoMap"),
("read_only_space", 0x057a9): (105, "ScriptMap"),
("read_only_space", 0x057d1): (106, "SourceTextModuleInfoEntryMap"),
("read_only_space", 0x057f9): (107, "StackFrameInfoMap"),
("read_only_space", 0x05821): (108, "StackTraceFrameMap"),
("read_only_space", 0x05849): (109, "TemplateObjectDescriptionMap"),
("read_only_space", 0x05871): (110, "Tuple2Map"),
("read_only_space", 0x05899): (111, "WasmCapiFunctionDataMap"),
("read_only_space", 0x058c1): (112, "WasmExceptionTagMap"),
("read_only_space", 0x058e9): (113, "WasmExportedFunctionDataMap"),
("read_only_space", 0x05911): (114, "WasmIndirectFunctionTableMap"),
("read_only_space", 0x05939): (115, "WasmJSFunctionDataMap"),
("read_only_space", 0x05961): (116, "WasmValueMap"),
("read_only_space", 0x05989): (135, "SloppyArgumentsElementsMap"),
("read_only_space", 0x059b1): (172, "OnHeapBasicBlockProfilerDataMap"),
("read_only_space", 0x059d9): (169, "InternalClassMap"),
("read_only_space", 0x05a01): (178, "SmiPairMap"),
("read_only_space", 0x05a29): (177, "SmiBoxMap"),
("read_only_space", 0x05a51): (146, "ExportedSubClassBaseMap"),
("read_only_space", 0x05a79): (147, "ExportedSubClassMap"),
("read_only_space", 0x05aa1): (68, "AbstractInternalClassSubclass1Map"),
("read_only_space", 0x05ac9): (69, "AbstractInternalClassSubclass2Map"),
("read_only_space", 0x05af1): (134, "InternalClassWithSmiElementsMap"),
("read_only_space", 0x05b19): (170, "InternalClassWithStructElementsMap"),
("read_only_space", 0x05b41): (148, "ExportedSubClass2Map"),
("read_only_space", 0x05b69): (179, "SortStateMap"),
("read_only_space", 0x05b91): (86, "AllocationSiteWithWeakNextMap"),
("read_only_space", 0x05bb9): (86, "AllocationSiteWithoutWeakNextMap"),
("read_only_space", 0x05be1): (77, "LoadHandler1Map"),
("read_only_space", 0x05c09): (77, "LoadHandler2Map"),
("read_only_space", 0x05c31): (77, "LoadHandler3Map"),
("read_only_space", 0x05c59): (78, "StoreHandler0Map"),
("read_only_space", 0x05c81): (78, "StoreHandler1Map"),
("read_only_space", 0x05ca9): (78, "StoreHandler2Map"),
("read_only_space", 0x05cd1): (78, "StoreHandler3Map"),
("map_space", 0x02115): (1057, "ExternalMap"),
("map_space", 0x0213d): (1072, "JSMessageObjectMap"),
("map_space", 0x02165): (180, "WasmRttEqrefMap"),
("map_space", 0x0218d): (180, "WasmRttExternrefMap"),
("map_space", 0x021b5): (180, "WasmRttFuncrefMap"),
("map_space", 0x021dd): (180, "WasmRttI31refMap"),
("map_space", 0x02165): (181, "WasmRttEqrefMap"),
("map_space", 0x0218d): (181, "WasmRttExternrefMap"),
("map_space", 0x021b5): (181, "WasmRttFuncrefMap"),
("map_space", 0x021dd): (181, "WasmRttI31refMap"),
}
# List of known V8 objects.
@@ -384,31 +386,31 @@ KNOWN_OBJECTS = {
("read_only_space", 0x02821): "TerminationException",
("read_only_space", 0x02889): "OptimizedOut",
("read_only_space", 0x028e9): "StaleRegister",
("read_only_space", 0x031b9): "EmptyPropertyArray",
("read_only_space", 0x031c1): "EmptyByteArray",
("read_only_space", 0x031c9): "EmptyObjectBoilerplateDescription",
("read_only_space", 0x031fd): "EmptyArrayBoilerplateDescription",
("read_only_space", 0x03209): "EmptyClosureFeedbackCellArray",
("read_only_space", 0x03211): "EmptySlowElementDictionary",
("read_only_space", 0x03235): "EmptyOrderedHashMap",
("read_only_space", 0x03249): "EmptyOrderedHashSet",
("read_only_space", 0x0325d): "EmptyFeedbackMetadata",
("read_only_space", 0x03269): "EmptyPropertyCell",
("read_only_space", 0x0327d): "EmptyPropertyDictionary",
("read_only_space", 0x032cd): "NoOpInterceptorInfo",
("read_only_space", 0x032f5): "EmptyWeakArrayList",
("read_only_space", 0x03301): "InfinityValue",
("read_only_space", 0x0330d): "MinusZeroValue",
("read_only_space", 0x03319): "MinusInfinityValue",
("read_only_space", 0x03325): "SelfReferenceMarker",
("read_only_space", 0x03365): "BasicBlockCountersMarker",
("read_only_space", 0x033a9): "OffHeapTrampolineRelocationInfo",
("read_only_space", 0x033b5): "TrampolineTrivialCodeDataContainer",
("read_only_space", 0x033c1): "TrampolinePromiseRejectionCodeDataContainer",
("read_only_space", 0x033cd): "GlobalThisBindingScopeInfo",
("read_only_space", 0x03405): "EmptyFunctionScopeInfo",
("read_only_space", 0x0342d): "NativeScopeInfo",
("read_only_space", 0x03449): "HashSeed",
("read_only_space", 0x031e1): "EmptyPropertyArray",
("read_only_space", 0x031e9): "EmptyByteArray",
("read_only_space", 0x031f1): "EmptyObjectBoilerplateDescription",
("read_only_space", 0x03225): "EmptyArrayBoilerplateDescription",
("read_only_space", 0x03231): "EmptyClosureFeedbackCellArray",
("read_only_space", 0x03239): "EmptySlowElementDictionary",
("read_only_space", 0x0325d): "EmptyOrderedHashMap",
("read_only_space", 0x03271): "EmptyOrderedHashSet",
("read_only_space", 0x03285): "EmptyFeedbackMetadata",
("read_only_space", 0x03291): "EmptyPropertyCell",
("read_only_space", 0x032a5): "EmptyPropertyDictionary",
("read_only_space", 0x032f5): "NoOpInterceptorInfo",
("read_only_space", 0x0331d): "EmptyWeakArrayList",
("read_only_space", 0x03329): "InfinityValue",
("read_only_space", 0x03335): "MinusZeroValue",
("read_only_space", 0x03341): "MinusInfinityValue",
("read_only_space", 0x0334d): "SelfReferenceMarker",
("read_only_space", 0x0338d): "BasicBlockCountersMarker",
("read_only_space", 0x033d1): "OffHeapTrampolineRelocationInfo",
("read_only_space", 0x033dd): "TrampolineTrivialCodeDataContainer",
("read_only_space", 0x033e9): "TrampolinePromiseRejectionCodeDataContainer",
("read_only_space", 0x033f5): "GlobalThisBindingScopeInfo",
("read_only_space", 0x0342d): "EmptyFunctionScopeInfo",
("read_only_space", 0x03455): "NativeScopeInfo",
("read_only_space", 0x03471): "HashSeed",
("old_space", 0x02115): "ArgumentsIteratorAccessor",
("old_space", 0x02159): "ArrayLengthAccessor",
("old_space", 0x0219d): "BoundFunctionLengthAccessor",