* src/third_party/valgrind/valgrind.h: Update from upstream valgrind
r11899, so as to get around some unused value warnings. Also adds
support for darwin.
This version of valgrind.h differs from the original in that all
instances of "unsigned long long int" have been replaced with
"uint64_t", as the former is not allowed in ISO C++ 89.
See https://bugs.kde.org/show_bug.cgi?id=211926 for the upstream bug
report.
* src/x64/cpu-x64.cc:
* src/builtins.cc:
* src/conversions-inl.h:
* src/debug.cc:
* src/frames.cc:
* src/full-codegen.cc:
* src/jsregexp.cc:
* src/objects.cc:
* src/parser.cc:
* src/platform-linux.cc:
* src/x64/code-stubs-x64.cc:
* src/x64/deoptimizer-x64.cc:
* src/x64/full-codegen-x64.cc:
* src/x64/lithium-codegen-x64.cc:
* src/x64/regexp-macro-assembler-x64.cc:
* src/x64/stub-cache-x64.cc: Remove a number of assigned but
unreferenced variables.
* SConstruct (CCTEST_EXTRA_FLAGS): Punt on -Wunused-but-set-variable for
the test suite.
BUG=1291
TEST=A build and tools/test.py passes.
Review URL: http://codereview.chromium.org/7400023
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@8688 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
When creating a CompilationInfo we always have the script and can
determine if it is a natives script.
Now that all natives functions are recognized as such, many of them
are called with undefined as the receiver. We have to use different
filtering for builtin functions when printing stack traces.
Also, fixed one call of CALL_NON_FUNCTION to be correctly marked as a
method call (with a fixed receiver); now that CALL_NON_FUNCTION is
marked as a native function, the old marking caused the receiver to be
undefined.
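For illustration, a toy sketch of the filtering idea (this is not V8's stack
trace code; the Frame struct and the is_builtin flag are made up for the
example): once builtins run as native functions with an undefined receiver,
frames have to be filtered on an explicit builtin property rather than on the
receiver.

  #include <iostream>
  #include <string>
  #include <vector>

  // Toy sketch only -- V8's real frame iteration works differently.
  struct Frame {
    std::string name;
    bool is_builtin;  // hypothetical flag standing in for V8's builtin check
  };

  void PrintStackTrace(const std::vector<Frame>& frames) {
    for (const Frame& frame : frames) {
      if (frame.is_builtin) continue;  // hide builtin frames from the user
      std::cout << "  at " << frame.name << "\n";
    }
  }

  int main() {
    PrintStackTrace({{"String.prototype.replace", true},
                     {"userCallback", false}});
  }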
R=svenpanne@chromium.org
BUG=
TEST=
Review URL: http://codereview.chromium.org/7395030
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@8680 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
Using a C++-style PrintName method (a.k.a. << ;-), things get a lot easier once
two unrelated concerns are separated: stubs don't need a name cache anymore,
the code that generates the stub name is simpler, memory allocation is
centralized, etc.
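As a standalone illustration of the approach (a toy, not the actual stub
classes): streaming the name parts through an operator<< keeps the formatting
logic in one place and lets the caller decide where the resulting string lives.

  #include <iostream>
  #include <sstream>
  #include <string>

  // Toy stand-in for a code stub; the real stubs expose their own key parts.
  struct ToyStub {
    const char* major;
    int minor_key;
  };

  // The << overload plays the role of PrintName: it writes the name parts
  // directly to a stream instead of filling a per-stub cached char buffer.
  std::ostream& operator<<(std::ostream& os, const ToyStub& stub) {
    return os << stub.major << "_" << stub.minor_key;
  }

  int main() {
    ToyStub stub = {"BinaryOp", 7};
    std::ostringstream name;
    name << stub;                     // memory allocation happens here, centrally
    std::cout << name.str() << "\n";  // prints "BinaryOp_7"
  }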
Review URL: http://codereview.chromium.org/7342042
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@8627 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
This patch just adds a nop after the call to the binary operation stub in optimized code, to avoid the patching for the inlined smi case (used by the full code generator) kicking in if the next instruction generated by the lithium code generator should accidentally enable it. For calls generated by CallCodeGeneric this was already handled on Intel platforms, but it was missing on ARM.
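A self-contained sketch of the marker idea (the byte values and names below are illustrative, not the real instruction encodings or V8 declarations): the patching routine only acts when the byte after the call site is the marker emitted by the full code generator, so a nop emitted by the lithium code generator keeps optimized call sites from ever being patched.

  #include <cassert>
  #include <cstdint>

  const uint8_t kInlinedSmiMarker = 0xA8;  // e.g. a "test al, imm8" opcode
  const uint8_t kNop = 0x90;

  // Toy version of the patch decision; the real PatchInlinedSmiCode also
  // rewrites the inlined smi check when the marker is present.
  bool WouldPatchInlinedSmiCode(const uint8_t* byte_after_call) {
    return *byte_after_call == kInlinedSmiMarker;
  }

  int main() {
    uint8_t full_codegen_site = kInlinedSmiMarker;  // emitted by full codegen
    uint8_t optimized_site = kNop;                  // emitted by lithium codegen
    assert(WouldPatchInlinedSmiCode(&full_codegen_site));
    assert(!WouldPatchInlinedSmiCode(&optimized_site));
  }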
On IA-32 I also tried checking whether the code containing the call was optimized (patch below), but that caused regressions on some benchmarks.
diff --git src/ia32/ic-ia32.cc src/ia32/ic-ia32.cc
index 5f143b1..f70e208 100644
--- src/ia32/ic-ia32.cc
+++ src/ia32/ic-ia32.cc
@@ -1603,12 +1603,18 @@ void CompareIC::UpdateCaches(Handle<Object> x, Handle<Object> y) {
   // Activate inlined smi code.
   if (previous_state == UNINITIALIZED) {
-    PatchInlinedSmiCode(address());
+    PatchInlinedSmiCode(address(), isolate());
   }
 }
-void PatchInlinedSmiCode(Address address) {
+void PatchInlinedSmiCode(Address address, Isolate* isolate) {
+  // Never patch in optimized code.
+  Code* code = isolate->pc_to_code_cache()->GetCacheEntry(address)->code;
+  if (code->kind() == Code::OPTIMIZED_FUNCTION) {
+    return;
+  }
+
   // The address of the instruction following the call.
   Address test_instruction_address =
       address + Assembler::kCallTargetAddressOffset;
diff --git src/ic.cc src/ic.cc
index f70f75a..62e79da 100644
--- src/ic.cc
+++ src/ic.cc
@@ -2384,7 +2384,7 @@ RUNTIME_FUNCTION(MaybeObject*, BinaryOp_Patch) {
   // Activate inlined smi code.
   if (previous_type == BinaryOpIC::UNINITIALIZED) {
-    PatchInlinedSmiCode(ic.address());
+    PatchInlinedSmiCode(ic.address(), isolate);
   }
 }
diff --git src/ic.h src/ic.h
index 11c2e3a..9ef4b20 100644
--- src/ic.h
+++ src/ic.h
@@ -721,7 +721,7 @@ class CompareIC: public IC {
 };
 // Helper for BinaryOpIC and CompareIC.
-void PatchInlinedSmiCode(Address address);
+void PatchInlinedSmiCode(Address address, Isolate* isolate);
 } } // namespace v8::internal
R=danno@chromium.org
BUG=none
TEST=none
Review URL: http://codereview.chromium.org//7350015
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@8623 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
The preprocessor defines ENABLE_LOGGING_AND_PROFILING and ENABLE_VMSTATE_TRACKING have been removed, as these were required to be turned on for Crankshaft to work. To re-enable reducing the binary size by leaving out the heap and CPU profilers, a new set of defines needs to be created.
R=ager@chromium.org
BUG=v8:1271
TEST=all
Review URL: http://codereview.chromium.org//7350014
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@8622 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
Made jsregexp.h architecture-independent.
jsregexp.h is itself included transitively quite a lot, and by getting rid of 19
of its dependencies (which even included things like src/cpu.h, the various
assemblers, etc.), the recompilation behaviour is a bit less funny than it was.
Review URL: http://codereview.chromium.org/7331014
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@8589 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
The debugger can be entered from the deferred stack check in optimized code. This can cause both lazy deoptimization and debugger deoptimization (setting the first break point and inspecting the stack for optimized code, respectively). This required deoptimization support from the deferred stack check.
The lazy deoptimization call is inserted after the deferred code is done, including restoring the registers. The bailout to the full code is to the beginning of the loop body, as that is where the stack check sits in the optimized code. The bailout is not to the stack check in the full code, as that sits at the end of the loop.
R=kmillikin@chromium.org
BUG=none
TEST=none
Review URL: http://codereview.chromium.org//7212025
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@8535 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
Due to issues relating mostly to Chrome extensions we have lately been
running into OOMs that are caused by our executable space running
out. This change introduces flushing of code from regexps if we have
not used the code for 5 mark-sweeps.
The approach is different from the normal function code flushing. Here
we make a copy of the code inside the data array and exchange the
original code with a smi determined by the sweep_generation (a new
heap variable increased every time we do a mark-sweep/compact). If we
encounter a smi in EnsureCompiled we simply reinstate the code
object. If, in the marking phase of mark-sweep, we find a regexp that
already has a smi in the code field, and this is more than 5
generations old, we flush the code from the saved index.
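A toy model of the mechanism (this is not V8's heap layout; the struct, the
field names, and the generation arithmetic are simplified for illustration):

  #include <cassert>
  #include <cstdint>

  struct ToyRegExpData {
    const void* code = nullptr;        // compiled code, or null once set aside
    const void* saved_code = nullptr;  // the copy kept in the data array
    int64_t generation_tag = 0;        // stands in for the smi in the code field
  };

  static int64_t sweep_generation = 0;     // bumped on every mark-sweep
  static const int64_t kCodeAgeLimit = 5;  // flush after 5 generations

  // Called during marking for a regexp whose code has not been used.
  void ProcessDuringMarking(ToyRegExpData* re) {
    if (re->code != nullptr) {
      // First sweep that sees the code unused: save a copy, leave a tag.
      re->saved_code = re->code;
      re->code = nullptr;
      re->generation_tag = sweep_generation;
    } else if (re->saved_code != nullptr &&
               sweep_generation - re->generation_tag > kCodeAgeLimit) {
      re->saved_code = nullptr;  // old enough: flush the code for real
    }
  }

  void EnsureCompiled(ToyRegExpData* re) {
    if (re->code == nullptr && re->saved_code != nullptr) {
      re->code = re->saved_code;  // tag found: cheaply reinstate the code
    }
    // otherwise recompile (omitted)
  }

  int main() {
    int dummy_code = 0;
    ToyRegExpData re;
    re.code = &dummy_code;
    ++sweep_generation;         // one mark-sweep happens
    ProcessDuringMarking(&re);  // code set aside, generation tag recorded
    EnsureCompiled(&re);        // reinstated without recompiling
    assert(re.code == &dummy_code);
  }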
Review URL: http://codereview.chromium.org/7282026
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@8532 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
Eliminates the enum flags RESTORE_CONTEXT and CONTEXT_ADJUSTED, and adds a context HValue and LOperand to many hydrogen and lithium instructions.
The context is still used from the stack frame in CallKnownFunction (this seems safe), and in CallRuntimeFromDeferred in lithium-codegen-ia32.cc, which needs to be fixed.
BUG=
TEST=
Review URL: http://codereview.chromium.org/7132002
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@8529 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
The catch variable is bound in the catch scope. For simplicity in this
initial implementation, it is always allocated even if unused and always
allocated to a catch context even if it doesn't escape. The presence of
catch is no longer treated as a with.
In this change, care must be taken to distinguish between the scope that a
var declaration is hoisted to and the scope where its initialization occurs.
R=ager@chromium.org
BUG=
TEST=
Review URL: http://codereview.chromium.org/7280012
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@8496 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
Optimized frames are now handled by the debugger. When optimized frames are discovered during stack inspection in the debugger, they are "deoptimized" using the normal deoptimization code, and the deoptimizer output information is used to provide frame information to the debugger.
Before this change the debugger reported each optimized frame as one frame, no matter how many inlined functions might have been called inside of it. Also, all locals were reported as undefined. Locals can still be reported as undefined when their value is not "known" by the optimized frame.
As the structures used to calculate the output frames when deoptimizing are not GC safe, the information for the debugger is copied to another structure (DeoptimizedFrameInfo), which is registered with the global deoptimizer data and processed during GC.
R=fschneider@chromium.org
BUG=v8:1140
TEST=test/mjsunit/debug-evaluate-locals-optimized*
Review URL: http://codereview.chromium.org//7230045
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@8464 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
Before: every context cached the nearest enclosing function context. This
assumed that for nested contexts (i.e., with and catch contexts) the
enclosing function had a materialized link in the context chain.
Now: when necessary, we walk up the context chain to find such a context.
This enables catch contexts without forcing the enclosing function to
allocate its own context.
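A minimal sketch of the lookup (toy types, not V8's Context class): nested
with/catch contexts just link to their previous context, and the nearest
function context is found by walking the chain.

  #include <cassert>

  enum ContextKind { FUNCTION_CONTEXT, WITH_CONTEXT, CATCH_CONTEXT };

  struct ToyContext {
    ContextKind kind;
    ToyContext* previous;  // enclosing context, or null at the top
  };

  // Walk up the chain; it always ends in a function (or global) context.
  ToyContext* NearestFunctionContext(ToyContext* context) {
    while (context->kind != FUNCTION_CONTEXT) {
      context = context->previous;
    }
    return context;
  }

  int main() {
    ToyContext function_context = {FUNCTION_CONTEXT, nullptr};
    ToyContext catch_context = {CATCH_CONTEXT, &function_context};
    ToyContext with_context = {WITH_CONTEXT, &catch_context};
    assert(NearestFunctionContext(&with_context) == &function_context);
  }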
R=ager@chromium.org
BUG=
TEST=
Review URL: http://codereview.chromium.org/7230047
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@8452 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
* src/hydrogen.cc (HEnvironment::CopyForInlining): As the code for both
the ::HYDROGEN and ::LITHIUM compilation phases is the same, just use
one code path and remove the arg.
* src/hydrogen.h (HEnvironment): Remove now-unused CompilationPhase
enum type and arg to CopyForInlining.
* src/arm/lithium-arm.cc (LChunkBuilder::DoEnterInlined):
* src/ia32/lithium-ia32.cc (LChunkBuilder::DoEnterInlined):
* src/x64/lithium-x64.cc (LChunkBuilder::DoEnterInlined): Adapt
callers.
* AUTHORS: Add Igalia.
BUG=
TEST=I ran tools/test.py.
Review URL: http://codereview.chromium.org/7272002
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@8442 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
The hydrogen stack check instruction is now added to each loop, and the stack check handling on the back edge has been removed.
This change causes a regression on small tight loops, as the stack check is now at the top of the loop instead of at the bottom, and that requires one additional unconditional jump per loop iteration. However, the reason for this change is to avoid worse regressions from upcoming changes to correctly support debugger break in optimized code.
R=fschneider@chromium.org
BUG=none
TEST=none
Review URL: http://codereview.chromium.org//7216009
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@8428 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
Detect the pattern in both the full compiler and Crankshaft, and generate direct pointer
comparisons. Along the way I cleaned up 'typeof <expression> == <string literal>' comparisons
as well, by lifting platform-independent code and checking the symmetric case.
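To illustrate why a pointer comparison suffices (a toy, assuming an
interned-string table along the lines of V8's symbol table): the result of
typeof is always an interned string, so comparing it against an interned
literal never needs a character-wise comparison.

  #include <cassert>
  #include <string>
  #include <unordered_set>

  // Toy symbol table: interning guarantees one canonical object per string.
  const std::string* Intern(const std::string& s) {
    static std::unordered_set<std::string> table;
    return &*table.insert(s).first;
  }

  int main() {
    const std::string* typeof_result = Intern("undefined");  // what typeof yields
    const std::string* literal = Intern("undefined");        // the string literal
    assert(typeof_result == literal);  // direct pointer comparison is enough
  }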
BUG=v8:1440
TEST=cctest/test-api.cc
Review URL: http://codereview.chromium.org/7216008
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@8420 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
The declaration of the ToBoolean class moved to the platform-independent part
and its implementations are now structurally very similar. This is just an
intermediate cleanup step to add type recording at the call site.
Note that the MIPS implementation has not really been touched, so it should
continue to work, too.
Review URL: http://codereview.chromium.org/7218012
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@8359 ce2b1a6d-e550-0410-aec6-3dcde31c8c00