the speed of deserializing code. The current startup
time improvement for V8 is around 6%, but code deserialization
is sped up disproportionately, and we will soon have more
code in the snapshot.
* Removed support for deserializing into large object space.
The regular pages are 1MB now and that is plenty. This
is a big simplification.
* Instead of reserving space for the snapshot, we now actually
allocate it. This removes some special-casing from the memory
management and simplifies deserialization, since we just bump
a pointer rather than calling the normal allocation routines.
* Record in the snapshot how much memory we need to boot up,
and allocate that amount instead of just assuming that
allocations in a new VM will always be linear.
* In the snapshot we always address an object as a negative
offset from the current allocation point. We used to
sometimes address from the start of the deserialized data,
but this is less useful now that we have good support for
roots and repetitions in the deserialization data.
* Code objects were previously deserialized (like other
objects) by alternating raw data (deserialized with memcpy)
and pointers (to external references, other objects, etc.).
Now we deserialize code objects with a single memcpy,
followed by a series of skips and pointers that partially
overwrite the code we memcopied out of the snapshot (see the
sketch below).
The skips are sometimes merged into the following
instruction in the deserialization data to reduce dispatch
time.
* Integers in the snapshot were stored in a variable-length
format that gives a compact representation for small positive
integers. This is still the case, but the new encoding can
be decoded without branches or conditional instructions,
which is faster on a modern CPU.
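To illustrate the code-object scheme from the bullet above, here is a
minimal sketch; the record type, the stream layout, and the helper names
are invented for illustration and do not match the real deserializer:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Invented record type for this sketch: how far to skip over raw
    // instruction bytes before the next pointer slot, and the resolved
    // address to write into that slot.
    struct PatchRecord {
      size_t skip;
      uintptr_t target;
    };

    // Copy the whole code body with one memcpy, then walk the patch
    // records, skipping raw instructions and overwriting the pointer
    // slots (external references, other objects, etc.) in place.
    void DeserializeCodeBody(const uint8_t* snapshot, size_t body_size,
                             const PatchRecord* patches, size_t patch_count,
                             uint8_t* dest) {
      std::memcpy(dest, snapshot, body_size);
      uint8_t* cursor = dest;
      for (size_t i = 0; i < patch_count; ++i) {
        cursor += patches[i].skip;
        std::memcpy(cursor, &patches[i].target, sizeof(uintptr_t));
        cursor += sizeof(uintptr_t);
      }
    }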
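And a sketch of the branch-free integer decode from the last bullet,
assuming an encoding where the low two bits of the first byte hold the
length in bytes minus one (the real snapshot format may differ in detail):

    #include <cstdint>
    #include <cstring>

    // Read four bytes unconditionally (the snapshot buffer is assumed to
    // be padded so this is safe, and the host is assumed little-endian),
    // then mask off the bytes that belong to this integer and drop the
    // two length bits.  There is no per-byte "is there more?" branch,
    // and small positive integers still fit in a single byte.
    uint32_t DecodeInt(const uint8_t* data, int* bytes_consumed) {
      uint32_t raw;
      std::memcpy(&raw, data, sizeof(raw));
      int length = (raw & 3) + 1;                        // 1..4 bytes
      uint32_t mask = 0xFFFFFFFFu >> (32 - (length * 8));
      *bytes_consumed = length;
      return (raw & mask) >> 2;
    }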
Review URL: https://chromiumcodereview.appspot.com/10918067
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@12505 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
This is basically r11496, with the following changes:
* Set back pointers in maps (cherry-picked from r11528)
* Fixed size calculation in CopyInsert, as proposed by mstarzinger/rossberg
* DefineFastAccessor uses GetCallbackObject instead of GetValue (for __proto__)
* Put the code under a new flag, which is disabled by default
* Cut down the corresponding regression test
* Adapted bootup memory test; we actually only need a bit more memory on 64-bit without snapshots, which can easily be explained by more live maps lying around. Note that the snapshot variants are back to their previous limits.
Next steps: Investigate any performance degradations with the flag enabled, and finally remove the flag when things are OK. Furthermore, GetCallbackObject should be merged into GetValue; the distinction is confusing and error-prone.
Review URL: https://chromiumcodereview.appspot.com/10445009
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@11651 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
a mark-sweep. We have a soft limit on old space size, which is designed to
trigger an old-space collection when we hit it. Unfortunately, although the
soft limit had already triggered an old-space collection, it was also
preventing objects in new space from being promoted. For every promotion
candidate we were checking three different ways to allocate in old space before
giving up and putting the object in the other semispace. This change allows
the promoted objects to go to old space and also makes us more eager to
sweep a page before trying other ways to find space for an object.
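Roughly, the promotion path now looks like the sketch below; the helper
names are made up for illustration and are not the real scavenger interface:

    #include <cstddef>

    // Hypothetical allocation hooks standing in for the real ones.
    void* OldSpaceLinearAllocate(size_t size);
    void* SweepOnePageAndRetry(size_t size);
    void* OldSpaceFreeListAllocate(size_t size);

    // A promotion candidate is allowed into old space even when the soft
    // limit has been hit, and a page is swept eagerly before the more
    // expensive fallbacks.  Only if everything fails does the object get
    // copied into the other semispace by the caller.
    void* AllocateForPromotion(size_t size) {
      if (void* slot = OldSpaceLinearAllocate(size)) return slot;
      if (void* slot = SweepOnePageAndRetry(size)) return slot;
      if (void* slot = OldSpaceFreeListAllocate(size)) return slot;
      return nullptr;  // fall back to the other semispace
    }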
Review URL: http://codereview.chromium.org/8748005
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@10092 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
We only need to call malloc/free once per group, and we can avoid scanning
through a list of NULLs by keeping unprocessed groups at the beginning.
I also changed the internal representation of implicit references to
hold a handle to the parent (instead of a direct pointer). The
prologue callback must not trigger a GC, but it's better to be safe.
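A rough sketch of the bookkeeping, with made-up types (the real ObjectGroup
representation differs): each group's backing store costs one malloc/free,
and processed groups are swapped out of the way so the next scan only walks
unprocessed entries and never steps over NULLs:

    #include <cstdlib>

    // Made-up group representation for this sketch.
    struct ObjectGroup {
      void** objects;   // one malloc per group
      size_t length;
    };

    // Process groups from the front; a finished group is freed with a
    // single free() and replaced by the last unprocessed group, so the
    // unprocessed groups stay contiguous at the beginning of the list.
    void ProcessObjectGroups(ObjectGroup** groups, size_t* count,
                             bool (*process)(ObjectGroup*)) {
      size_t i = 0;
      while (i < *count) {
        if (process(groups[i])) {
          std::free(groups[i]->objects);
          std::free(groups[i]);
          groups[i] = groups[--(*count)];
        } else {
          ++i;
        }
      }
    }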
Review URL: http://codereview.chromium.org/6800003
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@7521 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
If an object enters the NEAR_DEATH state, the handle must be explicitly cleared and/or
disposed; otherwise it will retain the JS object forever. Note as well that the parameter
is reset to NULL on the first invocation, so a weak handle callback that is invoked again
would be in a difficult situation.
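For illustration, a weak-reference callback under the Persistent/MakeWeak
API of this era disposes the handle itself, roughly as sketched below
(the native-state names are arbitrary):

    #include <v8.h>

    // The callback runs when the object is NEAR_DEATH.  If it neither
    // clears nor disposes the handle, the handle keeps the JS object
    // alive forever.
    static void OnNearDeath(v8::Persistent<v8::Value> object,
                            void* parameter) {
      // ... release the native state that 'parameter' points to ...
      object.Dispose();
      object.Clear();
    }

    // Registration: make the handle weak, passing the native state and
    // the callback above.
    void MakeHandleWeak(v8::Persistent<v8::Object> handle, void* state) {
      handle.MakeWeak(state, OnNearDeath);
    }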
Review URL: http://codereview.chromium.org/3011009
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@5096 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
when using snapshots.
The alignment of new space has to match the alignment in the snapshot,
but the max committed amount of memory does not.
For now, we assume that the default semispace size is always used in a
snapshot.
Review URL: http://codereview.chromium.org/300036
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@3106 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
The main goal was to improve O(n^2) behavior when there are many object groups. The old API required the grouping to be done on the v8 side, along with a linear search. The new interface requires the caller to do the grouping, passing V8 entire groups at a time. This removes the group id concept on the v8 side.
- Changed AddObjectToGroup to AddObjectGroup.
- Removed the group id concept from the V8 side.
- Removed a static constructor while I was here; the object groups list
is now lazily initialized.
- Cleaned up functions that returned by non-const reference to return
pointers instead.
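Sketched usage of the new interface (the signature is approximated from
the description above; see v8.h at this revision for the exact form):

    #include <cstddef>
    #include <v8.h>

    // The embedder builds each group itself and hands the whole array to
    // V8 in one call; there is no group id any more.
    void RegisterWrapperGroup(v8::Persistent<v8::Value>* members,
                              size_t count) {
      v8::V8::AddObjectGroup(members, count);
    }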
Review URL: http://codereview.chromium.org/13341
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@965 ce2b1a6d-e550-0410-aec6-3dcde31c8c00