The change removes one unused multiply and reschedules
the shift, multiply and jump instructions to reduce
stalls. Experiments show a performance improvement of about
20% on x64 for exponents from about 100 to 2000.
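For illustration, here is a C++ analogue of the square-and-multiply loop
such a pow stub computes; the code and names are hypothetical and only
meant to show the shift/multiply/jump pattern that the rescheduling
targets.

  // Illustrative sketch only (hypothetical names): exponentiation by
  // squaring. Each iteration does a shift of the exponent, a multiply
  // to square the base, a conditional multiply into the result, and a
  // jump back to the loop head.
  static double PowPositiveInteger(double base, unsigned exponent) {
    double result = 1.0;
    while (exponent != 0) {
      if (exponent & 1) result *= base;  // conditional multiply
      exponent >>= 1;                    // shift
      base *= base;                      // multiply
    }
    return result;
  }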
Review URL: https://chromiumcodereview.appspot.com/10939013
Patch from Xi Qian <xi.qian@intel.com>.
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@12535 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
In WebKit, we would like to store a void* to a data structure that contains
lots of exciting per-context data. The current API restricts us to storing only
Strings, which is less useful.
I've also cleaned up the implementation of GetData to be less convoluted.
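As a sketch of the intended embedder usage (the exact accessor names are
not spelled out in this message, so the ones below are hypothetical):

  // Hypothetical embedder-side structure holding per-context state.
  struct PerContextData {
    int world_id;
    // ... lots of exciting per-context data ...
  };

  // With a void*-based API the embedder can attach the structure
  // directly instead of encoding it through a String:
  void AttachData(v8::Handle<v8::Context> context, PerContextData* data) {
    context->SetData(data);  // hypothetical void* setter
  }

  PerContextData* ReadData(v8::Handle<v8::Context> context) {
    return static_cast<PerContextData*>(context->GetData());  // hypothetical
  }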
Review URL: https://codereview.chromium.org/10907189
Patch from Adam Barth <abarth@chromium.org>.
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@12520 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
With this CL we clearly distinguish two different views of Lithium
instructions: for register allocation, the actual instruction/operand
is irrelevant, so the allocator only gets an iterator/indexed view of
the instruction operands. All other places, most importantly code
generation, now use named getters for the operands, making it easy to
see where each one is used.
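A simplified sketch of the two views (not the actual V8 classes, just
the shape of the interfaces):

  class LOperand;

  // Indexed view: all the register allocator needs to walk the operands.
  class LInstruction {
   public:
    virtual ~LInstruction() {}
    virtual int InputCount() = 0;
    virtual LOperand* InputAt(int i) = 0;
  };

  // A concrete instruction additionally exposes named getters, which is
  // what the code generator now uses, so each operand's role is obvious
  // at the use site.
  class LStoreKeyedSketch : public LInstruction {
   public:
    LOperand* elements() { return inputs_[0]; }
    LOperand* key() { return inputs_[1]; }
    LOperand* value() { return inputs_[2]; }

    virtual int InputCount() { return 3; }
    virtual LOperand* InputAt(int i) { return inputs_[i]; }

   private:
    LOperand* inputs_[3];
  };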
Review URL: https://codereview.chromium.org/10919261
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@12510 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
This change improves the speed of deserializing code. The current
startup time improvement for V8 is around 6%, but code deserialization
is speeded up disproportionately, and we will soon have more
code in the snapshot.
* Removed support for deserializing into large object space.
The regular pages are 1Mbyte now and that is plenty. This
is a big simplification.
* Instead of reserving space for the snapshot we actually
allocate it now. This removes some special casing from
the memory management and simplifies deserialization since
we are just bumping a pointer rather than calling the
normal allocation routines during deserialization.
* Record in the snapshot how much memory we need to boot up
  and allocate it up front, instead of just assuming that
  allocations in a new VM will always be linear.
* In the snapshot we always address an object as a negative
offset from the current allocation point. We used to
sometimes address from the start of the deserialized data,
but this is less useful now that we have good support for
roots and repetitions in the deserialization data.
* Code objects were previously deserialized (like other
objects) by alternating raw data (deserialized with memcpy)
and pointers (to external references, other objects, etc.).
Now we deserialize code objects with a single memcpy,
followed by a series of skips and pointers that partially
overwrite the code we memcopied out of the snapshot.
The skips are sometimes merged into the following
instruction in the deserialization data to reduce dispatch
time.
* Integers in the snapshot were stored in a variable length
format that gives a compact representation for small positive
integers. This is still the case, but the new encoding can
be decoded without branches or conditional instructions,
which is faster on a modern CPU.
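For illustration, one way such a branch-free decode can work is sketched
below; the actual snapshot encoding is not spelled out here, so the byte
layout (a 2-bit length tag in the low bits of a little-endian value) is
an assumption.

  #include <stdint.h>
  #include <string.h>

  // Assumed encoding (illustrative only): the value is stored as
  // (value << 2) | extra_bytes in little-endian order, using
  // extra_bytes + 1 bytes. Decoding always loads four bytes and masks
  // off the unused ones, so there is no branch per length. Assumes a
  // little-endian host and that reading a few bytes past the encoded
  // value is safe (the snapshot buffer has slack).
  static inline uint32_t DecodeSmallInt(const uint8_t* p, int* length) {
    uint32_t raw;
    memcpy(&raw, p, sizeof(raw));            // unconditional 4-byte load
    uint32_t extra_bytes = raw & 3;          // 0..3 extra bytes follow
    uint32_t bits = 8 * (extra_bytes + 1);   // total encoded bits
    uint32_t mask =
        static_cast<uint32_t>((static_cast<uint64_t>(1) << bits) - 1);
    *length = static_cast<int>(extra_bytes) + 1;
    return (raw & mask) >> 2;                // drop the 2-bit length tag
  }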
Review URL: https://chromiumcodereview.appspot.com/10918067
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@12505 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
This CL adds multiple things:
Transition arrays do not directly point at their descriptor array anymore, but rather do so via an indirect pointer (a JSGlobalPropertyCell).
An ownership bit is added to maps, indicating whether a map owns its own descriptor array or not.
Maps owning a descriptor array can pass on ownership if a transition from that map is generated, but only if the descriptor array stays exactly the same, or if a descriptor is added.
Maps that don't have ownership get ownership back if their direct child to which ownership was passed is cleared in ClearNonLiveTransitions.
To detect which descriptors in an array are valid, each map knows its own NumberOfOwnDescriptors. Since the descriptors are sorted in order of addition, if we search and find a descriptor with index bigger than this number, it is not valid for the given map.
We currently still build up an enumeration cache (although this may disappear). The enumeration cache is always built for the entire descriptor array, even if not all descriptors are owned by the map. Once a descriptor array has an enumeration cache for a given map, this invariant will always be true, even if the descriptor array was extended. The extended array will inherit the enumeration cache from the smaller descriptor array. If a map with more descriptors needs an enumeration cache, its EnumLength will still be set to invalid, so it will have to recompute the enumeration cache. This new cache will also be valid for smaller maps, since they have their own EnumLength and use this to loop over the cache. If the EnumLength is still invalid but there is already a cache present that is big enough, we just initialize the EnumLength field for the map.
When we apply ClearNonLiveTransitions and descriptor ownership is passed back to a parent map, the descriptor array is trimmed in-place and resorted. At the same time, the enumeration cache is trimmed in-place.
Only transition arrays contain descriptor arrays. If we transition to a map and pass ownership of the descriptor array along, the child map will not store the descriptor array it owns. Rather, its parent will keep the pointer. So for every leaf map, we find the descriptor array by following the back pointer, reading out the transition array, and fetching the descriptor array from the JSGlobalPropertyCell. If a map has a transition array, we fetch it from there. If a map has undefined as its back pointer and has no transition array, it is considered to have an empty descriptor array.
When we modify properties, we cannot share the descriptor array. To accommodate this, the child map will get its own transition array, even if there are not necessarily any transitions leaving from the child map. This is necessary since it's the only way to store its own descriptor array.
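A simplified sketch of the resulting descriptor lookup and validity
check (not the real V8 classes; the field names are illustrative):

  // Where a map finds its descriptor array after this change, and how
  // NumberOfOwnDescriptors limits which entries are valid for it.
  struct DescriptorArray;         // descriptors sorted in order of addition
  struct TransitionArray {
    DescriptorArray* descriptors; // held via a JSGlobalPropertyCell
  };

  struct Map {
    Map* back_pointer;             // parent map, or NULL
    TransitionArray* transitions;  // may be NULL
    int number_of_own_descriptors;

    DescriptorArray* FindDescriptors() {
      // A map with a transition array fetches the descriptors from it.
      if (transitions != NULL) return transitions->descriptors;
      // Otherwise the parent we transitioned from keeps the pointer; a
      // parent that has a child necessarily has a transition array.
      if (back_pointer != NULL) return back_pointer->transitions->descriptors;
      // No back pointer and no transition array: empty descriptor array.
      return NULL;
    }

    // An entry found in the (possibly shared, possibly larger)
    // descriptor array only belongs to this map if its index is below
    // the map's own descriptor count.
    bool IsOwnDescriptor(int index) {
      return index < number_of_own_descriptors;
    }
  };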
Review URL: https://chromiumcodereview.appspot.com/10909007
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@12492 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
This fixes materialization of arguments objects for strict mode functions during
deoptimization. We materialize arguments from the stack area where optimized
code pushes the arguments when entering the inlined environment. For adapted
invocations we use the arguments adaptor frame for materialization.
R=svenpanne@chromium.org
BUG=v8:2261
TEST=mjsunit/regress/regress-2261,mjsunit/compiler/inline-arguments
Review URL: https://chromiumcodereview.appspot.com/10908194
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@12489 ce2b1a6d-e550-0410-aec6-3dcde31c8c00