The object space in a Page starts after the Page header and is aligned to kMapAlignment, which is 32 bytes on 32-bit targets and 8 bytes on 64-bit targets.
On 64-bit targets the current Page header size is exactly 32 bytes, so code happens to be aligned at 32 bytes by accident, but it is better to have a separate CODE_POINTER_ALIGN macro to make sure the object space in a Page is aligned properly for both maps and code.
A few bytes may occasionally be wasted (since the Page header and Code header sizes are aligned separately), but an optimal scheme would involve cross-dependencies between .h files, and it is not clear that it would be worth it.
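A minimal sketch of such a macro, following the usual round-up-to-power-of-two pattern; the constant definitions below are assumptions for illustration, not the actual ones in globals.h:

    // Assumed 32-byte code alignment, per the description above.
    static const intptr_t kCodeAlignment = 32;
    static const intptr_t kCodeAlignmentMask = kCodeAlignment - 1;

    // Round a size or address up to the next code-pointer boundary.
    #define CODE_POINTER_ALIGN(value) \
      (((value) + kCodeAlignmentMask) & ~kCodeAlignmentMask)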
This is a back-port from Isolates branch.
Review URL: http://codereview.chromium.org/3461021
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@5526 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
The cause of the missing functions is that some of them are created
from compiled code (see FastNewClosureStub), and thus do not get
registered in the profiler's code map.
My solution is to hook into the GC visitor to provide JS function
addresses to the profiler, but only when profiling is enabled.
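A hedged sketch of the idea; the visitor shape and the AddCode call are illustrative names, not V8's actual API:

    // During a GC heap visit, report each live JSFunction to the
    // profiler's code map so closures created by FastNewClosureStub
    // are not missed. Does nothing when profiling is disabled.
    void ReportFunctionsVisitor(HeapObject* obj, Profiler* profiler) {
      if (!profiler->is_enabled()) return;
      if (obj->IsJSFunction()) {
        JSFunction* function = JSFunction::cast(obj);
        profiler->AddCode(function->address(), function->shared());
      }
    }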
BUG=858
TEST=
Review URL: http://codereview.chromium.org/3417019
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@5523 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
Finally solves the problem that r5342 attempted to solve.
When adding a stub to a map's code cache we need to make
sure that this map is not used by objects that do not need
this stub.
The existing solution had two flaws:
1. It checked that the map is cached by asking the current context.
If the object escaped into another context, then NormalizedMapCache::Contains
returns a false negative.
2. If a map gets evicted from the cache, we should not try to modify it
even though Contains returns false.
This patch implements a much less fragile solution to the same problem:
A map now has a flag (is_shared) that is set once the map is added
to a cache, stays set even after the cache eviction, and is cleared
if the object goes back to fast mode.
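A hedged sketch of the protocol; only the is_shared bit comes from this patch, the method names around it are illustrative:

    // Set once when a map enters the normalized-map cache. The bit
    // survives cache eviction, so a later Contains() returning false
    // is not taken as proof that the map is private to one object.
    void NormalizedMapCache::Add(Map* map) {
      map->set_is_shared(true);
      // ... insert the map into the cache ...
    }

    // Before adding a stub to a map's code cache, require that the
    // map was never shared through the cache.
    bool Map::SafeToAddStub() { return !is_shared(); }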
Added a regression test.
Review URL: http://codereview.chromium.org/3472006
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@5518 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
For some reason, the scope's arguments and arguments shadow were
variable proxies, which resulted in all references to the arguments
shadow being shared in the AST. This made it hard to put per-node
state on the AST nodes.
I took the opportunity to remove Variable::AsVariable, which has
confused people in the past, and to rename Variable::slot to the more
accurate Variable::AsSlot.
Review URL: http://codereview.chromium.org/3432022
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@5517 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
The number of inobject properties used to be derived from the number
of this-property assignments in the constructor (increased by 2 to
allow for properties added later). This very often led to wasted
inobject slots.
This patch reclaims some of the unused inobject space by the following method:
- for each constructor function, the first several objects are allocated using the initial
("generous") instance size estimate (this is called the 'tracking phase').
- during the tracking phase, map transitions are tracked and actual property counts are collected.
- at the end of the tracking phase, instance sizes in the maps are decreased if necessary
(starting with the function's initial map and traversing the transition tree).
- all further allocations use a more realistic instance size estimate.
Shrinking generously allocated objects without a costly heap traversal is made possible
by initializing their inobject properties with one_pointer_filler_map (instead of undefined).
The initial slack for the generous allocation is increased from 2 to 6, which really helps some tests.
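A hedged sketch of the shrinking step at the end of the tracking phase; the function and the transition iteration below are illustrative, not the patch's actual code:

    // Walk the transition tree rooted at the constructor's initial
    // map and shrink each map's instance size by the observed slack.
    // Unused slots were pre-filled with one_pointer_filler_map, so
    // the freed tail of every already-allocated object is valid heap
    // filler and no traversal of existing objects is needed.
    void ShrinkInstanceSize(Map* map, int slack) {
      map->set_instance_size(map->instance_size() -
                             slack * kPointerSize);
      for (Map* child : map->transitions()) {  // illustrative iteration
        ShrinkInstanceSize(child, slack);
      }
    }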
Review URL: http://codereview.chromium.org/3329019
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@5510 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
A contextual load requires only a map check followed by a cell hole
check, so we can generate pretty compact code for it. The presence of
inlined code is marked by a 'mov ecx, offset' instruction after
the IC call. Inlining is only enabled inside loops and in non-builtin
functions.
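A hedged C++ model of the inlined fast path; the real code is emitted assembly, and the names and the CallLoadIC fallback here are illustrative:

    // Map check followed by a cell hole check; any failure falls
    // back to the full IC call.
    Object* ContextualLoadFastPath(JSObject* global, Map* expected_map,
                                   JSGlobalPropertyCell* cell) {
      if (global->map() != expected_map) return CallLoadIC();  // map check
      Object* value = cell->value();
      if (value->IsTheHole()) return CallLoadIC();             // hole check
      return value;                                            // inlined hit
    }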
The generated code size increase is about 3%. This decreased the
pc-to-code cache hit rate in some of the benchmarks that trigger
GC. To compensate, the cache now has 4 times as many entries.
Review URL: http://codereview.chromium.org/3402014
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@5497 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
It turns out they were filtered out. But when I unfiltered them, I
discovered another issue: when DevTools is running, regexp literals get
recompiled each time they are called (this looks related to
switching to the full compiler), so I ended up having multiple entries for
the same regexp. To fix this, I changed how code entry
equivalence is determined.
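A hedged sketch of address-independent equivalence; the field names below are assumptions about the CodeEntry layout:

    // Two code entries are considered the same if they describe the
    // same source code, even when recompilation (e.g. switching to
    // the full compiler) produced a code object at a new address.
    bool CodeEntry::IsSameAs(const CodeEntry* other) const {
      return this == other
          || (tag_ == other->tag_
              && strcmp(name_, other->name_) == 0
              && strcmp(resource_name_, other->resource_name_) == 0
              && line_number_ == other->line_number_);
    }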
BUG=crbug/55999
TEST=cctest/test-profile-generator/ProfileNodeFindOrAddChildForSameFunction
(the test isn't for the whole issue, but rather for equivalence testing)
Review URL: http://codereview.chromium.org/3426008
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@5492 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
When running profiling in debug mode, several assertions in the frame
iterators, while undoubtedly useful when the iterator is started from a
VM thread in a known "good" state, may fail when running over the stack
of a suspended VM thread. This patch makes SafeStackFrameIterator
proactively check addresses and bail out of iteration early,
before an assertion is triggered.
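A hedged sketch of the kind of check added; the bounds fields are illustrative:

    // Reject any candidate frame or stack pointer that does not lie
    // within the suspended thread's known stack bounds, instead of
    // letting a later ASSERT fire while dereferencing it.
    bool SafeStackFrameIterator::IsValidStackAddress(Address addr) const {
      return low_bound_ <= addr && addr < high_bound_;
    }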
BUG=crbug/55565
Review URL: http://codereview.chromium.org/3436006
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@5467 ce2b1a6d-e550-0410-aec6-3dcde31c8c00