Redirection of code traces is controlled by two flags:
--redirect_code_traces
--redirect_code_traces_to=<filename>
When redirection is enabled but --redirect_code_traces_to is not specified, traces are written to a file named code-<pid>-<isolate>.asm. This mangling scheme matches hydrogen.cfg and allows easy discovery of compilation artifacts in a multi-V8 environment (e.g. when compilation is traced from inside Chromium).
D8 defaults --redirect_code_traces_to to code.asm, similar to the hydrogen.cfg redirection.
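For illustration, a minimal sketch of the default name mangling, assuming the
pid and an isolate identifier are at hand (names here are stand-ins, not the
actual V8 code):

  #include <cinttypes>
  #include <cstddef>
  #include <cstdio>

  // Produces names like code-1234-0xdeadbeef.asm; the exact textual
  // form of the isolate part is an assumption.
  static void MakeCodeTraceFileName(char* buffer, size_t size,
                                    int pid, uintptr_t isolate_id) {
    snprintf(buffer, size, "code-%d-0x%" PRIxPTR ".asm", pid, isolate_id);
  }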
BUG=
R=danno@chromium.org
Review URL: https://codereview.chromium.org/43273004
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@17571 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
Changes the way we do lazy deoptimization:
1. For side-effect instructions, we insert the lazy-deopt call at
the following LLazyBailout instruction.
CALL
GAP
LAZY-BAILOUT ==> lazy-deopt-call
2. For other instructions (StackCheck), we insert it right after the
instruction, since the deopt targets an earlier deoptimization environment.
STACK-CHECK
GAP ==> lazy-deopt-call
The pc of the lazy-deopt call that will be patched in is recorded in the
deoptimization input data. Each Lithium instruction can have 0..n safepoints.
All safepoints get the deoptimization index of the associated LAZY-BAILOUT
instruction. On lazy deoptimization we use the return pc to find the
safepoint. The safepoint yields the deoptimization index, which in turn gives
us the pc at which to patch in the lazy-deopt call.
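The lookup chain can be sketched as follows; the types and names below are
stand-ins chosen for illustration, not V8's actual API:

  #include <cstdint>
  #include <map>

  typedef uintptr_t Address;

  struct SafepointEntry { int deopt_index; };

  // return pc -> safepoint -> deopt index -> pc of the lazy-deopt call.
  Address GetLazyDeoptCallPc(
      const std::map<Address, SafepointEntry>& safepoints,
      const std::map<int, Address>& lazy_deopt_pcs,  // from the input data
      Address return_pc) {
    const SafepointEntry& entry = safepoints.at(return_pc);
    return lazy_deopt_pcs.at(entry.deopt_index);
  }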
Additional changes:
* RegExpLiteral is marked as having side effects so that it
gets an explicit lazy-bailout instruction (instead of
being treated specially like stack checks)
* Enable target recording in CallFunctionStub to achieve
more inlining in optimized code.
BUG=v8:1789
TEST=jslint and uglify run without crashing, mjsunit/compiler/regress-lazy-deopt.js
Review URL: http://codereview.chromium.org/8492004
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@10006 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
Use the BitField helper class for the code flags, so that we do not have to
define both a shift and a mask explicitly. This makes changing the flags
layout simpler.
Also, make the 'mask' and 'max' members of BitField into constants: they are
in fact constant, and as constants they can be used in constant expressions,
e.g. when declaring other const members or in static asserts.
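For reference, a minimal sketch of such a helper; the exact V8 definition may
differ:

  #include <cstdint>

  template <class T, int shift, int size>
  class BitField {
   public:
    // Constants rather than accessor functions, so they are usable
    // in constant expressions and static asserts.
    static const uint32_t kMask = ((1U << size) - 1) << shift;
    static const uint32_t kMax = (1U << size) - 1;

    static uint32_t encode(T value) {
      return static_cast<uint32_t>(value) << shift;
    }
    static T decode(uint32_t value) {
      return static_cast<T>((value & kMask) >> shift);
    }
  };

A flags word is then described by one BitField instantiation per field, and
the layout follows from the template arguments instead of hand-maintained
shift/mask pairs.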
R=fschneider@chromium.org
BUG=
TEST=
Review URL: http://codereview.chromium.org/7787028
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@9232 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
Record a safepoint with a deoptimization id for throw in optimized code. The
AST ID does not matter much, because it will not be used for lazy
deoptimization (throw doesn't return to the point of the throw), but for
hygiene we use the actual ID of the throw expression. Throw is no longer a
control-flow instruction; instead it is followed by an unconditional abnormal
exit. This is required so that a simulate can be inserted between the throw
and the exit.
Make our optimized treatment of Function.prototype.apply act like a call and
have side effects. This ensures that it will get a lazy deoptimization
environment. Use that deoptimization ID in the safepoint for the call.
Deleting a property was also missing a deoptimization ID, though there was a
deoptimization environment assigned to the instruction. Record the
environment and use the deoptimization ID at the safepoint.
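The recurring pattern in all three fixes can be sketched as follows, with
stand-in names rather than V8's actual signatures:

  // An instruction that has a lazy deoptimization environment records
  // that environment's deoptimization index at its safepoint.
  struct LEnvironment { int deoptimization_index; };

  struct SafepointRecord {
    int pc_offset;    // position of the safepoint in the generated code
    int deopt_index;  // index of the associated environment
  };

  SafepointRecord RecordSafepointWithDeoptId(int pc_offset,
                                             const LEnvironment& env) {
    SafepointRecord record = { pc_offset, env.deoptimization_index };
    return record;
  }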
Review URL: http://codereview.chromium.org/6250105
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@6576 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
Refactor SafepointTableBuilder::DefineSafepoint and ARM LCodeGen::RecordSafepoint to use an enum for the different kinds of safepoints. This change removes a lot of duplicated code and makes it easier to add new kinds of safepoints in the future. The other variants of LCodeGen::RecordSafepoint are kept as a convenient way to record common safepoint kinds.
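A self-contained sketch of the enum-based shape; the kind names and signature
are assumptions modeled on the description above, not the exact V8 interface:

  #include <vector>

  enum SafepointKind {
    kSimple,                  // no registers saved
    kWithRegisters,           // general-purpose registers saved
    kWithRegistersAndDoubles  // general-purpose and double registers saved
  };

  struct Safepoint {
    int pc_offset;
    SafepointKind kind;
    int deopt_index;
  };

  class SafepointTableBuilder {
   public:
    // One entry point parameterized by kind replaces the previously
    // duplicated per-kind methods.
    void DefineSafepoint(int pc_offset, SafepointKind kind, int deopt_index) {
      Safepoint sp = { pc_offset, kind, deopt_index };
      safepoints_.push_back(sp);
    }
   private:
    std::vector<Safepoint> safepoints_;
  };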
BUG=http://code.google.com/p/v8/issues/detail?id=1043
TEST=none
Review URL: http://codereview.chromium.org/6341008
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@6505 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
This change provides fast code for a few special cases and calls the GenericBinaryOpStub for the rest.
It also changes the register allocation in the generation of Lithium instructions to use fixed registers that are compatible with the generic stub. This allocation can be changed once we use a more flexible implementation.
Finally, this change provides infrastructure to save double registers at safepoints, which is needed to call the stub in deferred code.
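The fixed-register constraint can be sketched as follows; all names and
register assignments are illustrative, not V8's:

  // Pin the operands to specific registers so they line up with the
  // generic stub's calling convention, instead of letting the
  // allocator choose freely.
  enum Register { r0, r1 };

  struct Operand { Register fixed; };

  Operand UseFixed(Register reg) {
    Operand op = { reg };
    return op;
  }

  // Hypothetical convention: left in r1, right in r0, result in r0.
  void AllocateForGenericStub(Operand* left, Operand* right,
                              Operand* result) {
    *left = UseFixed(r1);
    *right = UseFixed(r0);
    *result = UseFixed(r0);
  }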
BUG=
TEST=
Review URL: http://codereview.chromium.org/6164005
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@6304 ce2b1a6d-e550-0410-aec6-3dcde31c8c00