objects in the startup heap from a partial snapshot. This happens
through the partial snapshot cache. A startup snapshot and a
partial snapshot are created together so that the startup snapshot
contains the partial snapshot cache entries needed.
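A minimal sketch of how a partial snapshot entry refers back to the startup
heap (illustrative only; the function and field names here are assumptions,
not the actual serializer code):

    // During serialization of the partial snapshot, a reference to an object
    // that lives in the startup heap is encoded as an index into the partial
    // snapshot cache.  During deserialization the index is resolved against
    // the cache, which was filled in while the startup snapshot was
    // deserialized.
    Object* Deserializer::ResolveFromPartialSnapshotCache(int cache_index) {
      return partial_snapshot_cache_[cache_index];
    }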
Review URL: http://codereview.chromium.org/548149
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@3713 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
on ia32.
1. Operate on the values in edx,eax when possible (all operations
except DIV and MOD). This saves moving them on entry and when falling
out to the non-smi code.
2. Do not perform ADD and SUB before the smi check of their inputs.
This saves undoing the operation when we fall through to the non-smi
case because of non-smi inputs (probably a common case), and it avoids
emitting the smi check code twice (a code size reduction).
3. Don't perform OR twice (once to smi check the inputs and once to
smi check the result); a sketch of the combined input check follows below.
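A minimal sketch of the combined smi check in ia32 macro-assembler style
(illustrative only; the register choices, labels, and surrounding code are
assumptions based on the description above, not the actual stub code):

    // Inputs are expected in edx and eax.  A single OR of the two values
    // checks both smi tags at once; if the combined tag bit is set, at least
    // one input is not a smi and we jump to the non-smi code without having
    // modified the inputs.
    __ mov(ecx, Operand(edx));
    __ or_(ecx, Operand(eax));
    __ test(ecx, Immediate(kSmiTagMask));
    __ j(not_zero, &non_smi_operands);
    // Both inputs are smis, so it is now safe to operate on edx/eax directly,
    // e.g. for ADD (overflow is handled separately):
    __ add(edx, Operand(eax));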
Review URL: http://codereview.chromium.org/556019
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@3712 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
As the start index is already passed, it is easy to calculate the "at start" boolean in generated code. Also, since direct entry has been implemented, this needs to be done in generated code anyway, so it might as well be moved into the generated RegExp code. The "at start" value is now calculated as a local variable on the native RegExp frame, based on the value of the start index argument.
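A minimal sketch of the computation now done in the generated code
(illustrative C-style pseudocode; the real code is emitted by the RegExp
macro assembler for each platform):

    // The start index is passed as an argument to the native RegExp code;
    // "at start" simply records whether matching begins at the very start of
    // the subject string, and is kept as a local on the native RegExp frame.
    int at_start = (start_index == 0);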
The x64 version has been tested on both Linux and 64-bit Windows Vista.
For ARM I have tested cctest/test-regexp on ARM hardware, but the rest of the tests have only been run on the ARM simulator.
Review URL: http://codereview.chromium.org/554078
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@3709 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
Currently arguments are never passed in registers (due to the way ArgsInRegistersSupported is written), and
if they were, the stub would break in several places because registers are not preserved properly in the
course of execution. This CL makes use of registers more often (than never) and makes sure that registers are
handled properly.
The performance gain is small (0.2-0.3%) but stable.
This CL was extracted from the one sent out earlier (http://codereview.chromium.org/551093).
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@3692 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
As an afterthought, I realized that I had put the reporting of function
object moves into a method that deals only with code object moves. I
checked and found that function objects are allocated in old pointer space
and new space, so I moved the logging to the corresponding VM methods.
BUG=553
Review URL: http://codereview.chromium.org/552089
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@3679 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
The problem appeared because stubs don't create a stack frame; they reuse
the stack frame of the caller function. When building stack traces, the
current function is retrieved from the PC, and its callers are retrieved by
traversing the stack backwards. Thus, for stubs, the stub itself was
discovered via the PC, but then the stub's caller's caller was retrieved
from the stack, skipping the immediate caller.
To fix this problem, a pointer to the JSFunction object is now captured
from the topmost stack frame and saved into the stack trace log record.
Then a simple heuristic is applied to decide whether the referred function
should be added to the decoded stack or not, to avoid reporting the same
function twice (once from the PC and once from the pointer).
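A minimal sketch of the heuristic used while decoding a tick (illustrative
only; the entry lookup and names such as FindEntry and pc_entry are
assumptions, not the actual tick processor code):

    // func_address comes from the JSFunction pointer captured in the log
    // record; pc_entry is the code entry already resolved from the tick's PC.
    CodeEntry* func_entry = FindEntry(func_address);
    if (func_entry != NULL && func_entry != pc_entry) {
      // The pointer refers to a different function than the PC resolved to
      // (e.g. the PC was in a stub), so add it to the decoded stack;
      // otherwise skip it to avoid reporting the same function twice.
      decoded_stack.push_back(func_entry);
    }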
BUG=553
TEST=added to mjsunit/tools/tickprocessor
Review URL: http://codereview.chromium.org/546089
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@3673 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
Always invoke HeapObjectIterator::has_next() before invoking HeapObjectIterator::next().
This is necessary because ::has_next() has an important side effect: it advances
to the next page when the current page is exhausted.
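For example, iteration over a space should look like this (a sketch, assuming
the iterator is constructed for the space being scanned):

    HeapObjectIterator it(space);
    // has_next() must be called before every next(): besides answering the
    // question, it advances the iterator to the next page once the current
    // page is exhausted.
    while (it.has_next()) {
      HeapObject* object = it.next();
      // ... process object ...
    }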
And to find out whether pointers are encodable, use more precise data (the top
of the map space rather than the number of pages), as pages might remain in the
map space due to chunking.
Review URL: http://codereview.chromium.org/552066
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@3672 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
non-optimizing compiler can cope with. By default it bails out
to the old compiler on encountering a for loop (for performance reasons),
but with this change the --always-fast-compiler flag will enable
functions with for loops to be compiled by the non-optimizing
compiler. The change also enables the non-optimizing compiler for functions
that can be lazily compiled (again, only with the flag).
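A minimal sketch of the intended behaviour (illustrative only; the class,
macro, and flag spellings here are assumptions, not the actual code):

    // In the pass that decides whether the non-optimizing compiler supports
    // a given function:
    void SyntaxChecker::VisitForStatement(ForStatement* stmt) {
      if (!FLAG_always_fast_compiler) {
        // By default a for loop forces a bailout to the old compiler.
        BAILOUT("ForStatement");
        return;
      }
      // With --always-fast-compiler the loop is accepted and compiled by the
      // non-optimizing compiler.
      Visit(stmt->body());
    }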
Review URL: http://codereview.chromium.org/552065
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@3667 ce2b1a6d-e550-0410-aec6-3dcde31c8c00