v8/src/assembler.h

// Copyright (c) 1994-2006 Sun Microsystems Inc.
// All Rights Reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// - Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// - Redistribution in binary form must reproduce the above copyright
// notice, this list of conditions and the following disclaimer in the
// documentation and/or other materials provided with the distribution.
//
// - Neither the name of Sun Microsystems or the names of contributors may
// be used to endorse or promote products derived from this software without
// specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
// IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
// PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
// CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
// EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
// PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
// LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
// NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// The original source code covered by the above license has been
// modified significantly by Google Inc.
// Copyright 2012 the V8 project authors. All rights reserved.
#ifndef V8_ASSEMBLER_H_
#define V8_ASSEMBLER_H_
#include <forward_list>
#include "src/allocation.h"
#include "src/builtins/builtins.h"
#include "src/deoptimize-reason.h"
#include "src/double.h"
#include "src/globals.h"
#include "src/label.h"
#include "src/log.h"
#include "src/register-configuration.h"
#include "src/reglist.h"
#include "src/runtime/runtime.h"
namespace v8 {
// Forward declarations.
class ApiFunction;
namespace internal {
// Forward declarations.
class Isolate;
class SourcePosition;
class StatsCounter;
void SetUpJSCallerSavedCodeData();
// Return the code of the n-th saved register available to JavaScript.
int JSCallerSavedCode(int n);
// -----------------------------------------------------------------------------
// Optimization for far-jmp-like instructions that can be replaced by shorter ones.
class JumpOptimizationInfo {
public:
bool is_collecting() const { return stage_ == kCollection; }
bool is_optimizing() const { return stage_ == kOptimization; }
void set_optimizing() { stage_ = kOptimization; }
bool is_optimizable() const { return optimizable_; }
void set_optimizable() { optimizable_ = true; }
std::vector<uint32_t>& farjmp_bitmap() { return farjmp_bitmap_; }
private:
enum { kCollection, kOptimization } stage_ = kCollection;
bool optimizable_ = false;
std::vector<uint32_t> farjmp_bitmap_;
};
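// Usage sketch (assumes a two-pass code generation driver; the actual driver
// lives elsewhere and may differ):
//
//   JumpOptimizationInfo jump_opt;
//   // Pass 1: assemble with jump_opt in the collection stage; the assembler
//   // records in farjmp_bitmap() which far jumps could use a short encoding
//   // and calls set_optimizable() if any were found.
//   if (jump_opt.is_optimizable()) {
//     jump_opt.set_optimizing();
//     // Pass 2: reassemble; the assembler now emits the shorter encodings
//     // recorded during the collection stage.
//   }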
// -----------------------------------------------------------------------------
// Platform independent assembler base class.
enum class CodeObjectRequired { kNo, kYes };
class AssemblerBase: public Malloced {
public:
struct IsolateData {
explicit IsolateData(Isolate* isolate);
IsolateData(const IsolateData&) = default;
bool serializer_enabled_;
#if V8_TARGET_ARCH_X64
Address code_range_start_;
#endif
};
AssemblerBase(IsolateData isolate_data, void* buffer, int buffer_size);
virtual ~AssemblerBase();
IsolateData isolate_data() const { return isolate_data_; }
bool serializer_enabled() const { return isolate_data_.serializer_enabled_; }
void enable_serializer() { isolate_data_.serializer_enabled_ = true; }
bool emit_debug_code() const { return emit_debug_code_; }
void set_emit_debug_code(bool value) { emit_debug_code_ = value; }
bool predictable_code_size() const { return predictable_code_size_; }
void set_predictable_code_size(bool value) { predictable_code_size_ = value; }
uint64_t enabled_cpu_features() const { return enabled_cpu_features_; }
void set_enabled_cpu_features(uint64_t features) {
enabled_cpu_features_ = features;
}
// Features are usually enabled by CpuFeatureScope, which also asserts that
// the features are supported before they are enabled.
bool IsEnabled(CpuFeature f) {
return (enabled_cpu_features_ & (static_cast<uint64_t>(1) << f)) != 0;
}
void EnableCpuFeature(CpuFeature f) {
enabled_cpu_features_ |= (static_cast<uint64_t>(1) << f);
}
bool is_constant_pool_available() const {
if (FLAG_enable_embedded_constant_pool) {
return constant_pool_available_;
} else {
// Embedded constant pool not supported on this architecture.
UNREACHABLE();
}
}
JumpOptimizationInfo* jump_optimization_info() {
return jump_optimization_info_;
}
void set_jump_optimization_info(JumpOptimizationInfo* jump_opt) {
jump_optimization_info_ = jump_opt;
}
// Overwrite a host NaN with a quiet target NaN. Used by mksnapshot for
// cross-snapshotting.
static void QuietNaN(HeapObject* nan) { }
int pc_offset() const { return static_cast<int>(pc_ - buffer_); }
// This function is called when code generation is aborted, so that
// the assembler can clean up internal data structures.
virtual void AbortedCodeGeneration() { }
// Debugging
void Print(Isolate* isolate);
static const int kMinimalBufferSize = 4*KB;
static void FlushICache(Isolate* isolate, void* start, size_t size);
protected:
// The buffer into which code and relocation info are generated. It could
// either be owned by the assembler or be provided externally.
byte* buffer_;
int buffer_size_;
bool own_buffer_;
void set_constant_pool_available(bool available) {
if (FLAG_enable_embedded_constant_pool) {
constant_pool_available_ = available;
} else {
// Embedded constant pool not supported on this architecture.
UNREACHABLE();
}
}
// The program counter, which points into the buffer above and moves forward.
byte* pc_;
private:
IsolateData isolate_data_;
uint64_t enabled_cpu_features_;
bool emit_debug_code_;
bool predictable_code_size_;
// Indicates whether the constant pool can be accessed, which is only possible
// if the pp register points to the current code object's constant pool.
bool constant_pool_available_;
JumpOptimizationInfo* jump_optimization_info_;
// Constant pool.
friend class FrameAndConstantPoolScope;
friend class ConstantPoolUnavailableScope;
};
// Avoids emitting debug code during the lifetime of this scope object.
class DontEmitDebugCodeScope BASE_EMBEDDED {
public:
explicit DontEmitDebugCodeScope(AssemblerBase* assembler)
: assembler_(assembler), old_value_(assembler->emit_debug_code()) {
assembler_->set_emit_debug_code(false);
}
~DontEmitDebugCodeScope() {
assembler_->set_emit_debug_code(old_value_);
}
private:
AssemblerBase* assembler_;
bool old_value_;
};
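// Usage sketch:
//
//   {
//     DontEmitDebugCodeScope no_debug_code(assembler);
//     // Code generated here skips the extra --debug-code checks.
//   }
//   // The previous emit_debug_code() value is restored on scope exit.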
// Avoids using instructions that vary in size in unpredictable ways between the
// snapshot and the running VM.
class PredictableCodeSizeScope {
public:
explicit PredictableCodeSizeScope(AssemblerBase* assembler);
PredictableCodeSizeScope(AssemblerBase* assembler, int expected_size);
~PredictableCodeSizeScope();
void ExpectSize(int expected_size) { expected_size_ = expected_size; }
private:
AssemblerBase* assembler_;
int expected_size_;
int start_offset_;
bool old_value_;
};
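// Usage sketch (kExpectedSizeInBytes is a hypothetical constant; the exact
// size is architecture-specific):
//
//   {
//     PredictableCodeSizeScope predictable(assembler, kExpectedSizeInBytes);
//     // Code generated here must occupy exactly kExpectedSizeInBytes bytes,
//     // so it stays identical between the snapshot and the running VM.
//   }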
// Enable a specified feature within a scope.
class CpuFeatureScope BASE_EMBEDDED {
public:
enum CheckPolicy {
kCheckSupported,
kDontCheckSupported,
};
#ifdef DEBUG
CpuFeatureScope(AssemblerBase* assembler, CpuFeature f,
CheckPolicy check = kCheckSupported);
~CpuFeatureScope();
private:
AssemblerBase* assembler_;
uint64_t old_enabled_;
#else
CpuFeatureScope(AssemblerBase* assembler, CpuFeature f,
CheckPolicy check = kCheckSupported) {}
#endif
};
// CpuFeatures keeps track of which features are supported by the target CPU.
// Supported features must be enabled by a CpuFeatureScope before use.
// Example:
// if (CpuFeatures::IsSupported(SSE3)) {
// CpuFeatureScope fscope(assembler, SSE3);
// // Generate code containing SSE3 instructions.
// } else {
// // Generate alternative code.
// }
class CpuFeatures : public AllStatic {
public:
static void Probe(bool cross_compile) {
STATIC_ASSERT(NUMBER_OF_CPU_FEATURES <= kBitsPerInt);
if (initialized_) return;
initialized_ = true;
ProbeImpl(cross_compile);
}
static unsigned SupportedFeatures() {
Probe(false);
return supported_;
}
static bool IsSupported(CpuFeature f) {
return (supported_ & (1u << f)) != 0;
}
static inline bool SupportsCrankshaft();
static inline bool SupportsWasmSimd128();
static inline unsigned icache_line_size() {
DCHECK_NE(icache_line_size_, 0);
return icache_line_size_;
}
static inline unsigned dcache_line_size() {
DCHECK_NE(dcache_line_size_, 0);
return dcache_line_size_;
}
static void PrintTarget();
static void PrintFeatures();
private:
friend class ExternalReference;
friend class AssemblerBase;
// Flush instruction cache.
static void FlushICache(void* start, size_t size);
// Platform-dependent implementation.
static void ProbeImpl(bool cross_compile);
static unsigned supported_;
static unsigned icache_line_size_;
static unsigned dcache_line_size_;
static bool initialized_;
DISALLOW_COPY_AND_ASSIGN(CpuFeatures);
};
enum SaveFPRegsMode { kDontSaveFPRegs, kSaveFPRegs };
enum ArgvMode { kArgvOnStack, kArgvInRegister };
// Specifies whether to perform icache flush operations on RelocInfo updates.
// If FLUSH_ICACHE_IF_NEEDED, the icache will always be flushed if an
// instruction was modified. If SKIP_ICACHE_FLUSH, the flush will always be
// skipped (only use this if you will flush the icache manually before it is
// executed).
enum ICacheFlushMode { FLUSH_ICACHE_IF_NEEDED, SKIP_ICACHE_FLUSH };
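// For example, when patching several targets in a batch it can be cheaper to
// skip the per-update flush and flush once at the end (sketch; code_start and
// code_size are hypothetical variables):
//
//   reloc_info->set_target_address(isolate, target, UPDATE_WRITE_BARRIER,
//                                  SKIP_ICACHE_FLUSH);
//   ...
//   Assembler::FlushICache(isolate, code_start, code_size);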
// -----------------------------------------------------------------------------
// Relocation information
// Relocation information consists of the address (pc) of the datum
// to which the relocation information applies, the relocation mode
// (rmode), and an optional data field. The relocation mode may be
// "descriptive" and not indicate a need for relocation, but simply
// describe a property of the datum. Such rmodes are useful for GC
// and nice disassembly output.
class RelocInfo {
public:
// This string is used to add padding comments to the reloc info in cases
// where we are not sure we will have enough space for patching during
// lazy deoptimization. This is the case if we have indirect calls for which
// we do not normally record relocation info.
static const char* const kFillerCommentString;
// The minimum size of a comment is equal to two bytes for the extra tagged
// pc and kPointerSize for the actual pointer to the comment.
static const int kMinRelocCommentSize = 2 + kPointerSize;
// The maximum size for a call instruction including pc-jump.
static const int kMaxCallSize = 6;
// The maximum pc delta that will use the short encoding.
static const int kMaxSmallPCDelta;
enum Mode {
// Please note the order is important (see IsCodeTarget, IsGCRelocMode).
CODE_TARGET,
EMBEDDED_OBJECT,
// Wasm entries are used to relocate pointers into the wasm memory embedded in
// wasm code. Everything after WASM_CONTEXT_REFERENCE (inclusive) is not
// GC'ed.
WASM_CONTEXT_REFERENCE,
WASM_FUNCTION_TABLE_SIZE_REFERENCE,
Revert "Revert "[wasm] Reference indirect tables as addresses of global handles"" This reverts commit af37f6b970e58254088b7cd58c3583e55b79e044. Reason for revert: Reverted dependency fixed. Original change's description: > Revert "[wasm] Reference indirect tables as addresses of global handles" > > This reverts commit 186099d49f57b3eea23412acff2366891b96c144. > > Reason for revert: Need to revert: > https://chromium-review.googlesource.com/c/613880 > > Original change's description: > > [wasm] Reference indirect tables as addresses of global handles > > > > This sets us up for getting the wasm code generation off the GC heap. > > We reference tables as global handles, which have a stable address. This > > requires an extra instruction when attempting to make an indirect call, > > per table (i.e. one for the signature table and one for the function > > table). > > > > Bug: > > Change-Id: I83743ba0f1dfdeba9aee5d27232f8823981288f8 > > Reviewed-on: https://chromium-review.googlesource.com/612322 > > Commit-Queue: Mircea Trofin <mtrofin@chromium.org> > > Reviewed-by: Brad Nelson <bradnelson@chromium.org> > > Cr-Commit-Position: refs/heads/master@{#47444} > > TBR=bradnelson@chromium.org,titzer@chromium.org,mtrofin@chromium.org > > Change-Id: Ic3dff87410a51a2072ddc16cfc83a230526d4c56 > No-Presubmit: true > No-Tree-Checks: true > No-Try: true > Reviewed-on: https://chromium-review.googlesource.com/622568 > Reviewed-by: Michael Achenbach <machenbach@chromium.org> > Commit-Queue: Michael Achenbach <machenbach@chromium.org> > Cr-Commit-Position: refs/heads/master@{#47450} TBR=bradnelson@chromium.org,machenbach@chromium.org,titzer@chromium.org,mtrofin@chromium.org Change-Id: I3dc5dc8be26b5462703edac954cbedbb8f504c1e No-Presubmit: true No-Tree-Checks: true No-Try: true Reviewed-on: https://chromium-review.googlesource.com/622035 Reviewed-by: Mircea Trofin <mtrofin@chromium.org> Commit-Queue: Mircea Trofin <mtrofin@chromium.org> Cr-Commit-Position: refs/heads/master@{#47455}
2017-08-19 16:35:05 +00:00
WASM_GLOBAL_HANDLE,
WASM_CALL,
JS_TO_WASM_CALL,
RUNTIME_ENTRY,
COMMENT,
EXTERNAL_REFERENCE, // The address of an external C++ function.
INTERNAL_REFERENCE, // An address inside the same function.
// Encoded internal reference, used only on MIPS, MIPS64 and PPC.
INTERNAL_REFERENCE_ENCODED,
// Marks constant and veneer pools. Only used on ARM and ARM64.
// They use a custom noncompact encoding.
CONST_POOL,
VENEER_POOL,
DEOPT_SCRIPT_OFFSET, // Deoptimization source position: script offset.
DEOPT_INLINING_ID, // Deoptimization source position: inlining id.
DEOPT_REASON, // Deoptimization reason index.
DEOPT_ID, // Deoptimization id.
// This is not an actual reloc mode, but used to encode a long pc jump that
// cannot be encoded as part of another record.
PC_JUMP,
// Pseudo-types
NUMBER_OF_MODES,
NONE32, // never recorded 32-bit value
NONE64, // never recorded 64-bit value
FIRST_REAL_RELOC_MODE = CODE_TARGET,
LAST_REAL_RELOC_MODE = VENEER_POOL,
LAST_CODE_ENUM = CODE_TARGET,
LAST_GCED_ENUM = EMBEDDED_OBJECT,
FIRST_SHAREABLE_RELOC_MODE = RUNTIME_ENTRY,
};
STATIC_ASSERT(NUMBER_OF_MODES <= kBitsPerInt);
RelocInfo() = default;
RelocInfo(byte* pc, Mode rmode, intptr_t data, Code* host)
: pc_(pc), rmode_(rmode), data_(data), host_(host) {}
static inline bool IsRealRelocMode(Mode mode) {
return mode >= FIRST_REAL_RELOC_MODE && mode <= LAST_REAL_RELOC_MODE;
}
static inline bool IsCodeTarget(Mode mode) {
return mode <= LAST_CODE_ENUM;
}
static inline bool IsEmbeddedObject(Mode mode) {
return mode == EMBEDDED_OBJECT;
}
static inline bool IsRuntimeEntry(Mode mode) {
return mode == RUNTIME_ENTRY;
}
static inline bool IsWasmCall(Mode mode) { return mode == WASM_CALL; }
// Is the relocation mode affected by GC?
static inline bool IsGCRelocMode(Mode mode) {
return mode <= LAST_GCED_ENUM;
}
static inline bool IsComment(Mode mode) {
return mode == COMMENT;
}
static inline bool IsConstPool(Mode mode) {
return mode == CONST_POOL;
}
static inline bool IsVeneerPool(Mode mode) {
return mode == VENEER_POOL;
}
static inline bool IsDeoptPosition(Mode mode) {
return mode == DEOPT_SCRIPT_OFFSET || mode == DEOPT_INLINING_ID;
}
static inline bool IsDeoptReason(Mode mode) {
return mode == DEOPT_REASON;
}
static inline bool IsDeoptId(Mode mode) {
return mode == DEOPT_ID;
}
static inline bool IsExternalReference(Mode mode) {
return mode == EXTERNAL_REFERENCE;
}
static inline bool IsInternalReference(Mode mode) {
return mode == INTERNAL_REFERENCE;
}
static inline bool IsInternalReferenceEncoded(Mode mode) {
return mode == INTERNAL_REFERENCE_ENCODED;
}
static inline bool IsNone(Mode mode) {
return mode == NONE32 || mode == NONE64;
}
static inline bool IsWasmContextReference(Mode mode) {
return mode == WASM_CONTEXT_REFERENCE;
}
static inline bool IsWasmFunctionTableSizeReference(Mode mode) {
return mode == WASM_FUNCTION_TABLE_SIZE_REFERENCE;
}
static inline bool IsWasmReference(Mode mode) {
Revert "Revert "[wasm] Reference indirect tables as addresses of global handles"" This reverts commit af37f6b970e58254088b7cd58c3583e55b79e044. Reason for revert: Reverted dependency fixed. Original change's description: > Revert "[wasm] Reference indirect tables as addresses of global handles" > > This reverts commit 186099d49f57b3eea23412acff2366891b96c144. > > Reason for revert: Need to revert: > https://chromium-review.googlesource.com/c/613880 > > Original change's description: > > [wasm] Reference indirect tables as addresses of global handles > > > > This sets us up for getting the wasm code generation off the GC heap. > > We reference tables as global handles, which have a stable address. This > > requires an extra instruction when attempting to make an indirect call, > > per table (i.e. one for the signature table and one for the function > > table). > > > > Bug: > > Change-Id: I83743ba0f1dfdeba9aee5d27232f8823981288f8 > > Reviewed-on: https://chromium-review.googlesource.com/612322 > > Commit-Queue: Mircea Trofin <mtrofin@chromium.org> > > Reviewed-by: Brad Nelson <bradnelson@chromium.org> > > Cr-Commit-Position: refs/heads/master@{#47444} > > TBR=bradnelson@chromium.org,titzer@chromium.org,mtrofin@chromium.org > > Change-Id: Ic3dff87410a51a2072ddc16cfc83a230526d4c56 > No-Presubmit: true > No-Tree-Checks: true > No-Try: true > Reviewed-on: https://chromium-review.googlesource.com/622568 > Reviewed-by: Michael Achenbach <machenbach@chromium.org> > Commit-Queue: Michael Achenbach <machenbach@chromium.org> > Cr-Commit-Position: refs/heads/master@{#47450} TBR=bradnelson@chromium.org,machenbach@chromium.org,titzer@chromium.org,mtrofin@chromium.org Change-Id: I3dc5dc8be26b5462703edac954cbedbb8f504c1e No-Presubmit: true No-Tree-Checks: true No-Try: true Reviewed-on: https://chromium-review.googlesource.com/622035 Reviewed-by: Mircea Trofin <mtrofin@chromium.org> Commit-Queue: Mircea Trofin <mtrofin@chromium.org> Cr-Commit-Position: refs/heads/master@{#47455}
2017-08-19 16:35:05 +00:00
return IsWasmPtrReference(mode) || IsWasmSizeReference(mode);
}
static inline bool IsWasmSizeReference(Mode mode) {
return IsWasmFunctionTableSizeReference(mode);
}
static inline bool IsWasmPtrReference(Mode mode) {
return mode == WASM_CONTEXT_REFERENCE || mode == WASM_GLOBAL_HANDLE ||
mode == WASM_CALL || mode == JS_TO_WASM_CALL;
}
static inline int ModeMask(Mode mode) { return 1 << mode; }
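// ModeMask() values can be or'ed together into a bit mask selecting several
// modes at once, e.g. when iterating relocation info (sketch):
//
//   int mask = RelocInfo::ModeMask(RelocInfo::CODE_TARGET) |
//              RelocInfo::ModeMask(RelocInfo::EMBEDDED_OBJECT);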
// Accessors
byte* pc() const { return pc_; }
void set_pc(byte* pc) { pc_ = pc; }
Mode rmode() const { return rmode_; }
intptr_t data() const { return data_; }
Code* host() const { return host_; }
// Apply a relocation by delta bytes. When the code object is moved, PC
// relative addresses have to be updated as well as absolute addresses
// inside the code (internal references).
// Do not forget to flush the icache afterwards!
INLINE(void apply(intptr_t delta));
// Is the pointer this relocation info refers to coded like a plain pointer
// or is it strange in some way (e.g. relative or patched into a series of
// instructions).
bool IsCodedSpecially();
// If true, the pointer this relocation info refers to is an entry in the
// constant pool, otherwise the pointer is embedded in the instruction stream.
bool IsInConstantPool();
Address wasm_context_reference() const;
uint32_t wasm_function_table_size_reference() const;
Revert "Revert "[wasm] Reference indirect tables as addresses of global handles"" This reverts commit af37f6b970e58254088b7cd58c3583e55b79e044. Reason for revert: Reverted dependency fixed. Original change's description: > Revert "[wasm] Reference indirect tables as addresses of global handles" > > This reverts commit 186099d49f57b3eea23412acff2366891b96c144. > > Reason for revert: Need to revert: > https://chromium-review.googlesource.com/c/613880 > > Original change's description: > > [wasm] Reference indirect tables as addresses of global handles > > > > This sets us up for getting the wasm code generation off the GC heap. > > We reference tables as global handles, which have a stable address. This > > requires an extra instruction when attempting to make an indirect call, > > per table (i.e. one for the signature table and one for the function > > table). > > > > Bug: > > Change-Id: I83743ba0f1dfdeba9aee5d27232f8823981288f8 > > Reviewed-on: https://chromium-review.googlesource.com/612322 > > Commit-Queue: Mircea Trofin <mtrofin@chromium.org> > > Reviewed-by: Brad Nelson <bradnelson@chromium.org> > > Cr-Commit-Position: refs/heads/master@{#47444} > > TBR=bradnelson@chromium.org,titzer@chromium.org,mtrofin@chromium.org > > Change-Id: Ic3dff87410a51a2072ddc16cfc83a230526d4c56 > No-Presubmit: true > No-Tree-Checks: true > No-Try: true > Reviewed-on: https://chromium-review.googlesource.com/622568 > Reviewed-by: Michael Achenbach <machenbach@chromium.org> > Commit-Queue: Michael Achenbach <machenbach@chromium.org> > Cr-Commit-Position: refs/heads/master@{#47450} TBR=bradnelson@chromium.org,machenbach@chromium.org,titzer@chromium.org,mtrofin@chromium.org Change-Id: I3dc5dc8be26b5462703edac954cbedbb8f504c1e No-Presubmit: true No-Tree-Checks: true No-Try: true Reviewed-on: https://chromium-review.googlesource.com/622035 Reviewed-by: Mircea Trofin <mtrofin@chromium.org> Commit-Queue: Mircea Trofin <mtrofin@chromium.org> Cr-Commit-Position: refs/heads/master@{#47455}
2017-08-19 16:35:05 +00:00
Address global_handle() const;
Address js_to_wasm_address() const;
Address wasm_call_address() const;
Revert "Revert "[wasm] Reference indirect tables as addresses of global handles"" This reverts commit af37f6b970e58254088b7cd58c3583e55b79e044. Reason for revert: Reverted dependency fixed. Original change's description: > Revert "[wasm] Reference indirect tables as addresses of global handles" > > This reverts commit 186099d49f57b3eea23412acff2366891b96c144. > > Reason for revert: Need to revert: > https://chromium-review.googlesource.com/c/613880 > > Original change's description: > > [wasm] Reference indirect tables as addresses of global handles > > > > This sets us up for getting the wasm code generation off the GC heap. > > We reference tables as global handles, which have a stable address. This > > requires an extra instruction when attempting to make an indirect call, > > per table (i.e. one for the signature table and one for the function > > table). > > > > Bug: > > Change-Id: I83743ba0f1dfdeba9aee5d27232f8823981288f8 > > Reviewed-on: https://chromium-review.googlesource.com/612322 > > Commit-Queue: Mircea Trofin <mtrofin@chromium.org> > > Reviewed-by: Brad Nelson <bradnelson@chromium.org> > > Cr-Commit-Position: refs/heads/master@{#47444} > > TBR=bradnelson@chromium.org,titzer@chromium.org,mtrofin@chromium.org > > Change-Id: Ic3dff87410a51a2072ddc16cfc83a230526d4c56 > No-Presubmit: true > No-Tree-Checks: true > No-Try: true > Reviewed-on: https://chromium-review.googlesource.com/622568 > Reviewed-by: Michael Achenbach <machenbach@chromium.org> > Commit-Queue: Michael Achenbach <machenbach@chromium.org> > Cr-Commit-Position: refs/heads/master@{#47450} TBR=bradnelson@chromium.org,machenbach@chromium.org,titzer@chromium.org,mtrofin@chromium.org Change-Id: I3dc5dc8be26b5462703edac954cbedbb8f504c1e No-Presubmit: true No-Tree-Checks: true No-Try: true Reviewed-on: https://chromium-review.googlesource.com/622035 Reviewed-by: Mircea Trofin <mtrofin@chromium.org> Commit-Queue: Mircea Trofin <mtrofin@chromium.org> Cr-Commit-Position: refs/heads/master@{#47455}
2017-08-19 16:35:05 +00:00
[wasm] Introduce the WasmContext The WasmContext struct introduced in this CL is used to store the mem_size and mem_start address of the wasm memory. These variables can be accessed at C++ level at graph build time (e.g., initialized during instance building). When the GrowMemory runtime is invoked, the context variables can be changed in the WasmContext at C++ level so that the generated code will load the correct values. This requires to insert a relocatable pointer only in the JSToWasmWrapper (and in the other wasm entry points), the value is then passed from function to function as an automatically added additional parameter. The WasmContext is then dropped when creating an Interpreter Entry or when invoking a JavaScript function. This removes the need of patching the generated code at runtime (i.e., when the memory grows) with respect to WASM_MEMORY_REFERENCE and WASM_MEMORY_SIZE_REFERENCE. However, we still need to patch the code at instance build time to patch the JSToWasmWrappers; in fact the address of the WasmContext is not known during compilation, but only when the instance is built. The WasmContext address is passed as the first parameter. This has the advantage of not having to move the WasmContext around if the function does not use many registers. This CL also changes the wasm calling convention so that the first parameter register is different from the return value register. The WasmContext is attached to every WasmMemoryObject, to share the same context with multiple instances sharing the same memory. Moreover, the nodes representing the WasmContext variables are cached in the SSA environment, similarly to other local variables that might change during execution. The nodes are created when initializing the SSA environment and refreshed every time a grow_memory or a function call happens, so that we are sure that they always represent the correct mem_size and mem_start variables. This CL also removes the WasmMemorySize runtime (since it's now possible to directly retrieve mem_size from the context) and simplifies the GrowMemory runtime (since every instance now has a memory_object). R=ahaas@chromium.org,clemensh@chromium.org CC=gdeepti@chromium.org Change-Id: I3f058e641284f5a1bbbfc35a64c88da6ff08e240 Reviewed-on: https://chromium-review.googlesource.com/671008 Commit-Queue: Enrico Bacis <enricobacis@google.com> Reviewed-by: Clemens Hammacher <clemensh@chromium.org> Reviewed-by: Andreas Haas <ahaas@chromium.org> Cr-Commit-Position: refs/heads/master@{#48209}
2017-09-28 14:59:37 +00:00
void set_wasm_context_reference(
Isolate* isolate, Address address,
ICacheFlushMode icache_flush_mode = FLUSH_ICACHE_IF_NEEDED);
void update_wasm_function_table_size_reference(
Isolate* isolate, uint32_t old_base, uint32_t new_base,
ICacheFlushMode icache_flush_mode = FLUSH_ICACHE_IF_NEEDED);
void set_target_address(
Isolate* isolate, Address target,
WriteBarrierMode write_barrier_mode = UPDATE_WRITE_BARRIER,
ICacheFlushMode icache_flush_mode = FLUSH_ICACHE_IF_NEEDED);
Revert "Revert "[wasm] Reference indirect tables as addresses of global handles"" This reverts commit af37f6b970e58254088b7cd58c3583e55b79e044. Reason for revert: Reverted dependency fixed. Original change's description: > Revert "[wasm] Reference indirect tables as addresses of global handles" > > This reverts commit 186099d49f57b3eea23412acff2366891b96c144. > > Reason for revert: Need to revert: > https://chromium-review.googlesource.com/c/613880 > > Original change's description: > > [wasm] Reference indirect tables as addresses of global handles > > > > This sets us up for getting the wasm code generation off the GC heap. > > We reference tables as global handles, which have a stable address. This > > requires an extra instruction when attempting to make an indirect call, > > per table (i.e. one for the signature table and one for the function > > table). > > > > Bug: > > Change-Id: I83743ba0f1dfdeba9aee5d27232f8823981288f8 > > Reviewed-on: https://chromium-review.googlesource.com/612322 > > Commit-Queue: Mircea Trofin <mtrofin@chromium.org> > > Reviewed-by: Brad Nelson <bradnelson@chromium.org> > > Cr-Commit-Position: refs/heads/master@{#47444} > > TBR=bradnelson@chromium.org,titzer@chromium.org,mtrofin@chromium.org > > Change-Id: Ic3dff87410a51a2072ddc16cfc83a230526d4c56 > No-Presubmit: true > No-Tree-Checks: true > No-Try: true > Reviewed-on: https://chromium-review.googlesource.com/622568 > Reviewed-by: Michael Achenbach <machenbach@chromium.org> > Commit-Queue: Michael Achenbach <machenbach@chromium.org> > Cr-Commit-Position: refs/heads/master@{#47450} TBR=bradnelson@chromium.org,machenbach@chromium.org,titzer@chromium.org,mtrofin@chromium.org Change-Id: I3dc5dc8be26b5462703edac954cbedbb8f504c1e No-Presubmit: true No-Tree-Checks: true No-Try: true Reviewed-on: https://chromium-review.googlesource.com/622035 Reviewed-by: Mircea Trofin <mtrofin@chromium.org> Commit-Queue: Mircea Trofin <mtrofin@chromium.org> Cr-Commit-Position: refs/heads/master@{#47455}
2017-08-19 16:35:05 +00:00
void set_global_handle(
Isolate* isolate, Address address,
ICacheFlushMode icache_flush_mode = FLUSH_ICACHE_IF_NEEDED);
void set_wasm_call_address(
Isolate*, Address,
ICacheFlushMode icache_flush_mode = FLUSH_ICACHE_IF_NEEDED);
void set_js_to_wasm_address(
Isolate*, Address,
ICacheFlushMode icache_flush_mode = FLUSH_ICACHE_IF_NEEDED);
Revert "Revert "[wasm] Reference indirect tables as addresses of global handles"" This reverts commit af37f6b970e58254088b7cd58c3583e55b79e044. Reason for revert: Reverted dependency fixed. Original change's description: > Revert "[wasm] Reference indirect tables as addresses of global handles" > > This reverts commit 186099d49f57b3eea23412acff2366891b96c144. > > Reason for revert: Need to revert: > https://chromium-review.googlesource.com/c/613880 > > Original change's description: > > [wasm] Reference indirect tables as addresses of global handles > > > > This sets us up for getting the wasm code generation off the GC heap. > > We reference tables as global handles, which have a stable address. This > > requires an extra instruction when attempting to make an indirect call, > > per table (i.e. one for the signature table and one for the function > > table). > > > > Bug: > > Change-Id: I83743ba0f1dfdeba9aee5d27232f8823981288f8 > > Reviewed-on: https://chromium-review.googlesource.com/612322 > > Commit-Queue: Mircea Trofin <mtrofin@chromium.org> > > Reviewed-by: Brad Nelson <bradnelson@chromium.org> > > Cr-Commit-Position: refs/heads/master@{#47444} > > TBR=bradnelson@chromium.org,titzer@chromium.org,mtrofin@chromium.org > > Change-Id: Ic3dff87410a51a2072ddc16cfc83a230526d4c56 > No-Presubmit: true > No-Tree-Checks: true > No-Try: true > Reviewed-on: https://chromium-review.googlesource.com/622568 > Reviewed-by: Michael Achenbach <machenbach@chromium.org> > Commit-Queue: Michael Achenbach <machenbach@chromium.org> > Cr-Commit-Position: refs/heads/master@{#47450} TBR=bradnelson@chromium.org,machenbach@chromium.org,titzer@chromium.org,mtrofin@chromium.org Change-Id: I3dc5dc8be26b5462703edac954cbedbb8f504c1e No-Presubmit: true No-Tree-Checks: true No-Try: true Reviewed-on: https://chromium-review.googlesource.com/622035 Reviewed-by: Mircea Trofin <mtrofin@chromium.org> Commit-Queue: Mircea Trofin <mtrofin@chromium.org> Cr-Commit-Position: refs/heads/master@{#47455}
2017-08-19 16:35:05 +00:00
// Read/modify the code target in the branch/call instruction
// this relocation applies to;
// can only be called if IsCodeTarget(rmode_) || IsRuntimeEntry(rmode_)
INLINE(Address target_address());
INLINE(HeapObject* target_object());
INLINE(Handle<HeapObject> target_object_handle(Assembler* origin));
INLINE(void set_target_object(
HeapObject* target,
WriteBarrierMode write_barrier_mode = UPDATE_WRITE_BARRIER,
ICacheFlushMode icache_flush_mode = FLUSH_ICACHE_IF_NEEDED));
INLINE(Address target_runtime_entry(Assembler* origin));
INLINE(void set_target_runtime_entry(
Isolate* isolate, Address target,
WriteBarrierMode write_barrier_mode = UPDATE_WRITE_BARRIER,
ICacheFlushMode icache_flush_mode = FLUSH_ICACHE_IF_NEEDED));
INLINE(Cell* target_cell());
INLINE(Handle<Cell> target_cell_handle());
INLINE(void set_target_cell(
Cell* cell, WriteBarrierMode write_barrier_mode = UPDATE_WRITE_BARRIER,
ICacheFlushMode icache_flush_mode = FLUSH_ICACHE_IF_NEEDED));
// Returns the address of the constant pool entry where the target address
// is held. This should only be called if IsInConstantPool returns true.
INLINE(Address constant_pool_entry_address());
// Read the address of the word containing the target_address in an
// instruction stream. What this means exactly is architecture-dependent.
// The only architecture-independent user of this function is the serializer.
// The serializer uses it to find out how many raw bytes of instruction to
// output before the next target. Architecture-independent code shouldn't
// dereference the pointer it gets back from this.
INLINE(Address target_address_address());
// This indicates how much space a target takes up when deserializing a code
// stream. For most architectures this is just the size of a pointer. For
// an instruction like movw/movt where the target bits are mixed into the
// instruction bits the size of the target will be zero, indicating that the
// serializer should not step forwards in memory after a target is resolved
// and written. In this case the target_address_address function above
// should return the end of the instructions to be patched, allowing the
// deserializer to deserialize the instructions as raw bytes and put them in
// place, ready to be patched with the target.
INLINE(int target_address_size());
// Read the reference in the instruction this relocation
// applies to; can only be called if rmode_ is EXTERNAL_REFERENCE.
INLINE(Address target_external_reference());
// Read the reference in the instruction this relocation
// applies to; can only be called if rmode_ is INTERNAL_REFERENCE.
INLINE(Address target_internal_reference());
// Return the reference address this relocation applies to;
// can only be called if rmode_ is INTERNAL_REFERENCE.
INLINE(Address target_internal_reference_address());
// Wipe out a relocation to a fixed value, used for making snapshots
// reproducible.
INLINE(void WipeOut(Isolate* isolate));
template <typename ObjectVisitor>
inline void Visit(Isolate* isolate, ObjectVisitor* v);
#ifdef DEBUG
// Check whether the given code contains relocation information that
// either is position-relative or movable by the garbage collector.
static bool RequiresRelocation(Isolate* isolate, const CodeDesc& desc);
#endif
#ifdef ENABLE_DISASSEMBLER
// Printing
static const char* RelocModeName(Mode rmode);
void Print(Isolate* isolate, std::ostream& os); // NOLINT
#endif // ENABLE_DISASSEMBLER
#ifdef VERIFY_HEAP
void Verify(Isolate* isolate);
#endif
static const int kCodeTargetMask = (1 << (LAST_CODE_ENUM + 1)) - 1;
static const int kApplyMask; // Modes affected by apply. Depends on arch.
private:
void set_embedded_address(Isolate* isolate, Address address,
ICacheFlushMode flush_mode);
void set_embedded_size(Isolate* isolate, uint32_t size,
ICacheFlushMode flush_mode);
uint32_t embedded_size() const;
Address embedded_address() const;
// On ARM, note that pc_ is the address of the constant pool entry
// to be relocated and not the address of the instruction
// referencing the constant pool entry (except when rmode_ ==
// comment).
byte* pc_;
Mode rmode_;
intptr_t data_;
Code* host_;
Address constant_pool_ = nullptr;
friend class RelocIterator;
};
// RelocInfoWriter serializes a stream of relocation info. It writes towards
// lower addresses.
class RelocInfoWriter BASE_EMBEDDED {
public:
RelocInfoWriter() : pos_(nullptr), last_pc_(nullptr) {}
RelocInfoWriter(byte* pos, byte* pc) : pos_(pos), last_pc_(pc) {}
byte* pos() const { return pos_; }
byte* last_pc() const { return last_pc_; }
void Write(const RelocInfo* rinfo);
// Update the state of the stream after reloc info buffer
// and/or code is moved while the stream is active.
void Reposition(byte* pos, byte* pc) {
pos_ = pos;
last_pc_ = pc;
}
// Max size (bytes) of a written RelocInfo. Longest encoding is
// ExtraTag, VariableLengthPCJump, ExtraTag, pc_delta, data_delta.
// On ia32 and arm this is 1 + 4 + 1 + 1 + 4 = 11.
// On x64 this is 1 + 4 + 1 + 1 + 8 = 15.
// Here we use the maximum of the two.
static const int kMaxSize = 15;
private:
inline uint32_t WriteLongPCJump(uint32_t pc_delta);
inline void WriteShortTaggedPC(uint32_t pc_delta, int tag);
inline void WriteShortData(intptr_t data_delta);
inline void WriteMode(RelocInfo::Mode rmode);
inline void WriteModeAndPC(uint32_t pc_delta, RelocInfo::Mode rmode);
inline void WriteIntData(int data_delta);
inline void WriteData(intptr_t data_delta);
byte* pos_;
byte* last_pc_;
RelocInfo::Mode last_mode_;
DISALLOW_COPY_AND_ASSIGN(RelocInfoWriter);
};
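// A minimal usage sketch (illustrative only; names like |buffer| and
// |bytes_written| are hypothetical, and the real Assembler owns the buffers
// and tracks how much has been written):
//
//   // Reloc info is written backwards from the end of the buffer.
//   RelocInfoWriter writer(buffer + buffer_size, code_start);
//   writer.Write(&rinfo);  // prepends one encoded entry; pos() moves down
//   ...
//   // After the buffers are reallocated or moved, re-anchor the writer:
//   writer.Reposition(new_buffer + new_size - bytes_written, new_code_start);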
// A RelocIterator iterates over relocation information.
// Typical use:
//
// for (RelocIterator it(code); !it.done(); it.next()) {
// // do something with it.rinfo() here
// }
//
// A mask can be specified to skip unwanted modes.
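// For example, to visit only code targets and embedded objects, the mask can
// be built from the individual modes (illustrative sketch):
//
//   int mask = RelocInfo::ModeMask(RelocInfo::CODE_TARGET) |
//              RelocInfo::ModeMask(RelocInfo::EMBEDDED_OBJECT);
//   for (RelocIterator it(code, mask); !it.done(); it.next()) {
//     // it.rinfo()->rmode() is one of the two requested modes.
//   }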
class RelocIterator: public Malloced {
public:
// Create a new iterator positioned at
// the beginning of the reloc info.
// Relocation information with mode k is included in the
// iteration iff bit k of mode_mask is set.
explicit RelocIterator(Code* code, int mode_mask = -1);
explicit RelocIterator(const CodeDesc& desc, int mode_mask = -1);
explicit RelocIterator(Vector<byte> instructions,
Vector<const byte> reloc_info, Address const_pool,
int mode_mask = -1);
RelocIterator(RelocIterator&&) = default;
RelocIterator& operator=(RelocIterator&&) = default;
// Iteration
bool done() const { return done_; }
void next();
// Returns a pointer that remains valid until the next call to next().
RelocInfo* rinfo() {
DCHECK(!done());
return &rinfo_;
}
private:
// Advance* moves the position before/after reading.
// *Read* reads from current byte(s) into rinfo_.
// *Get* just reads and returns info on current byte.
void Advance(int bytes = 1) { pos_ -= bytes; }
int AdvanceGetTag();
RelocInfo::Mode GetMode();
void AdvanceReadLongPCJump();
void ReadShortTaggedPC();
void ReadShortData();
void AdvanceReadPC();
void AdvanceReadInt();
void AdvanceReadData();
// If the given mode is wanted, set it in rinfo_ and return true.
// Else return false. Used for efficiently skipping unwanted modes.
bool SetMode(RelocInfo::Mode mode) {
return (mode_mask_ & (1 << mode)) ? (rinfo_.rmode_ = mode, true) : false;
}
const byte* pos_;
const byte* end_;
RelocInfo rinfo_;
bool done_;
int mode_mask_;
DISALLOW_COPY_AND_ASSIGN(RelocIterator);
};
// -----------------------------------------------------------------------------
// External function
class SCTableReference;
class Debug_Address;
// An ExternalReference represents a C++ address used in the generated
// code. All references to C++ functions and variables must be encapsulated in
// an ExternalReference instance. This is done in order to track the origin of
// all external references in the code so that they can be bound to the correct
// addresses when deserializing a heap.
class ExternalReference BASE_EMBEDDED {
public:
// Used in the simulator to support different native API calls.
enum Type {
// Builtin call.
// Object* f(v8::internal::Arguments).
BUILTIN_CALL, // default
// Builtin call returning object pair.
// ObjectPair f(v8::internal::Arguments).
BUILTIN_CALL_PAIR,
// Builtin that takes float arguments and returns an int.
// int f(double, double).
BUILTIN_COMPARE_CALL,
// Builtin call that returns floating point.
// double f(double, double).
BUILTIN_FP_FP_CALL,
// Builtin call that returns floating point.
// double f(double).
BUILTIN_FP_CALL,
// Builtin call that returns floating point.
// double f(double, int).
BUILTIN_FP_INT_CALL,
// Direct call to API function callback.
// void f(v8::FunctionCallbackInfo&)
DIRECT_API_CALL,
// Call to function callback via InvokeFunctionCallback.
// void f(v8::FunctionCallbackInfo&, v8::FunctionCallback)
PROFILING_API_CALL,
// Direct call to accessor getter callback.
// void f(Local<Name> property, PropertyCallbackInfo& info)
DIRECT_GETTER_CALL,
// Call to accessor getter callback via InvokeAccessorGetterCallback.
// void f(Local<Name> property, PropertyCallbackInfo& info,
// AccessorNameGetterCallback callback)
PROFILING_GETTER_CALL
};
static void SetUp();
// These functions must use the isolate in a thread-safe way.
typedef void* ExternalReferenceRedirector(Isolate* isolate, void* original,
Type type);
ExternalReference() : address_(nullptr) {}
ExternalReference(Address address, Isolate* isolate);
ExternalReference(ApiFunction* ptr, Type type, Isolate* isolate);
ExternalReference(Runtime::FunctionId id, Isolate* isolate);
ExternalReference(const Runtime::Function* f, Isolate* isolate);
explicit ExternalReference(StatsCounter* counter);
ExternalReference(IsolateAddressId id, Isolate* isolate);
explicit ExternalReference(const SCTableReference& table_ref);
// Isolate as an external reference.
static ExternalReference isolate_address(Isolate* isolate);
// The builtins table as an external reference, used by lazy deserialization.
static ExternalReference builtins_address(Isolate* isolate);
// One-of-a-kind references. These references are not part of a general
// pattern. This means that they have to be added to the
// ExternalReferenceTable in serialize.cc manually.
static ExternalReference interpreter_dispatch_table_address(Isolate* isolate);
static ExternalReference interpreter_dispatch_counters(Isolate* isolate);
static ExternalReference bytecode_size_table_address(Isolate* isolate);
static ExternalReference incremental_marking_record_write_function(
Isolate* isolate);
static ExternalReference store_buffer_overflow_function(
Isolate* isolate);
static ExternalReference delete_handle_scope_extensions(Isolate* isolate);
static ExternalReference get_date_field_function(Isolate* isolate);
static ExternalReference date_cache_stamp(Isolate* isolate);
// Deoptimization support.
static ExternalReference new_deoptimizer_function(Isolate* isolate);
static ExternalReference compute_output_frames_function(Isolate* isolate);
static ExternalReference wasm_f32_trunc(Isolate* isolate);
static ExternalReference wasm_f32_floor(Isolate* isolate);
static ExternalReference wasm_f32_ceil(Isolate* isolate);
static ExternalReference wasm_f32_nearest_int(Isolate* isolate);
static ExternalReference wasm_f64_trunc(Isolate* isolate);
static ExternalReference wasm_f64_floor(Isolate* isolate);
static ExternalReference wasm_f64_ceil(Isolate* isolate);
static ExternalReference wasm_f64_nearest_int(Isolate* isolate);
static ExternalReference wasm_int64_to_float32(Isolate* isolate);
static ExternalReference wasm_uint64_to_float32(Isolate* isolate);
static ExternalReference wasm_int64_to_float64(Isolate* isolate);
static ExternalReference wasm_uint64_to_float64(Isolate* isolate);
static ExternalReference wasm_float32_to_int64(Isolate* isolate);
static ExternalReference wasm_float32_to_uint64(Isolate* isolate);
static ExternalReference wasm_float64_to_int64(Isolate* isolate);
static ExternalReference wasm_float64_to_uint64(Isolate* isolate);
static ExternalReference wasm_int64_div(Isolate* isolate);
static ExternalReference wasm_int64_mod(Isolate* isolate);
static ExternalReference wasm_uint64_div(Isolate* isolate);
static ExternalReference wasm_uint64_mod(Isolate* isolate);
static ExternalReference wasm_word32_ctz(Isolate* isolate);
static ExternalReference wasm_word64_ctz(Isolate* isolate);
static ExternalReference wasm_word32_popcnt(Isolate* isolate);
static ExternalReference wasm_word64_popcnt(Isolate* isolate);
static ExternalReference wasm_float64_pow(Isolate* isolate);
static ExternalReference wasm_set_thread_in_wasm_flag(Isolate* isolate);
static ExternalReference wasm_clear_thread_in_wasm_flag(Isolate* isolate);
static ExternalReference f64_acos_wrapper_function(Isolate* isolate);
static ExternalReference f64_asin_wrapper_function(Isolate* isolate);
static ExternalReference f64_mod_wrapper_function(Isolate* isolate);
// Trap callback function for cctest/wasm/wasm-run-utils.h
static ExternalReference wasm_call_trap_callback_for_testing(
Isolate* isolate);
// Log support.
static ExternalReference log_enter_external_function(Isolate* isolate);
static ExternalReference log_leave_external_function(Isolate* isolate);
// Static variable Heap::roots_array_start()
static ExternalReference roots_array_start(Isolate* isolate);
// Static variable Heap::allocation_sites_list_address()
static ExternalReference allocation_sites_list_address(Isolate* isolate);
// Static variable StackGuard::address_of_jslimit()
V8_EXPORT_PRIVATE static ExternalReference address_of_stack_limit(
Isolate* isolate);
// Static variable StackGuard::address_of_real_jslimit()
static ExternalReference address_of_real_stack_limit(Isolate* isolate);
// Static variable RegExpStack::limit_address()
static ExternalReference address_of_regexp_stack_limit(Isolate* isolate);
// Static variables for RegExp.
static ExternalReference address_of_static_offsets_vector(Isolate* isolate);
static ExternalReference address_of_regexp_stack_memory_address(
Isolate* isolate);
static ExternalReference address_of_regexp_stack_memory_size(
Isolate* isolate);
// Write barrier.
static ExternalReference store_buffer_top(Isolate* isolate);
static ExternalReference heap_is_marking_flag_address(Isolate* isolate);
// Used for fast allocation in generated code.
static ExternalReference new_space_allocation_top_address(Isolate* isolate);
static ExternalReference new_space_allocation_limit_address(Isolate* isolate);
static ExternalReference old_space_allocation_top_address(Isolate* isolate);
static ExternalReference old_space_allocation_limit_address(Isolate* isolate);
static ExternalReference mod_two_doubles_operation(Isolate* isolate);
static ExternalReference power_double_double_function(Isolate* isolate);
static ExternalReference handle_scope_next_address(Isolate* isolate);
static ExternalReference handle_scope_limit_address(Isolate* isolate);
static ExternalReference handle_scope_level_address(Isolate* isolate);
static ExternalReference scheduled_exception_address(Isolate* isolate);
static ExternalReference address_of_pending_message_obj(Isolate* isolate);
// Static variables containing common double constants.
static ExternalReference address_of_min_int();
static ExternalReference address_of_one_half();
static ExternalReference address_of_minus_one_half();
static ExternalReference address_of_negative_infinity();
static ExternalReference address_of_the_hole_nan();
static ExternalReference address_of_uint32_bias();
// Static variables containing simd constants.
static ExternalReference address_of_float_abs_constant();
static ExternalReference address_of_float_neg_constant();
static ExternalReference address_of_double_abs_constant();
static ExternalReference address_of_double_neg_constant();
// IEEE 754 functions.
static ExternalReference ieee754_acos_function(Isolate* isolate);
static ExternalReference ieee754_acosh_function(Isolate* isolate);
static ExternalReference ieee754_asin_function(Isolate* isolate);
static ExternalReference ieee754_asinh_function(Isolate* isolate);
static ExternalReference ieee754_atan_function(Isolate* isolate);
static ExternalReference ieee754_atanh_function(Isolate* isolate);
static ExternalReference ieee754_atan2_function(Isolate* isolate);
static ExternalReference ieee754_cbrt_function(Isolate* isolate);
static ExternalReference ieee754_cos_function(Isolate* isolate);
static ExternalReference ieee754_cosh_function(Isolate* isolate);
static ExternalReference ieee754_exp_function(Isolate* isolate);
static ExternalReference ieee754_expm1_function(Isolate* isolate);
static ExternalReference ieee754_log_function(Isolate* isolate);
static ExternalReference ieee754_log1p_function(Isolate* isolate);
static ExternalReference ieee754_log10_function(Isolate* isolate);
static ExternalReference ieee754_log2_function(Isolate* isolate);
static ExternalReference ieee754_sin_function(Isolate* isolate);
static ExternalReference ieee754_sinh_function(Isolate* isolate);
static ExternalReference ieee754_tan_function(Isolate* isolate);
static ExternalReference ieee754_tanh_function(Isolate* isolate);
static ExternalReference libc_memchr_function(Isolate* isolate);
static ExternalReference libc_memcpy_function(Isolate* isolate);
static ExternalReference libc_memmove_function(Isolate* isolate);
static ExternalReference libc_memset_function(Isolate* isolate);
static ExternalReference printf_function(Isolate* isolate);
static ExternalReference try_internalize_string_function(Isolate* isolate);
static ExternalReference check_object_type(Isolate* isolate);
#ifdef V8_INTL_SUPPORT
static ExternalReference intl_convert_one_byte_to_lower(Isolate* isolate);
static ExternalReference intl_to_latin1_lower_table(Isolate* isolate);
#endif // V8_INTL_SUPPORT
template <typename SubjectChar, typename PatternChar>
static ExternalReference search_string_raw(Isolate* isolate);
static ExternalReference orderedhashmap_gethash_raw(Isolate* isolate);
static ExternalReference get_or_create_hash_raw(Isolate* isolate);
static ExternalReference jsreceiver_create_identity_hash(Isolate* isolate);
static ExternalReference copy_fast_number_jsarray_elements_to_typed_array(
Isolate* isolate);
static ExternalReference page_flags(Page* page);
static ExternalReference ForDeoptEntry(Address entry);
static ExternalReference cpu_features();
static ExternalReference debug_is_active_address(Isolate* isolate);
static ExternalReference debug_hook_on_function_call_address(
Isolate* isolate);
static ExternalReference is_profiling_address(Isolate* isolate);
static ExternalReference invoke_function_callback(Isolate* isolate);
static ExternalReference invoke_accessor_getter_callback(Isolate* isolate);
static ExternalReference promise_hook_or_debug_is_active_address(
Isolate* isolate);
V8_EXPORT_PRIVATE static ExternalReference runtime_function_table_address(
Isolate* isolate);
Address address() const { return reinterpret_cast<Address>(address_); }
// Used to read out the last step action of the debugger.
static ExternalReference debug_last_step_action_address(Isolate* isolate);
// Used to check for a suspended generator when stepping across an await call.
static ExternalReference debug_suspended_generator_address(Isolate* isolate);
// Used to store the frame pointer to drop to when restarting a frame.
static ExternalReference debug_restart_fp_address(Isolate* isolate);
#ifndef V8_INTERPRETED_REGEXP
// C functions called from RegExp generated code.
// Function NativeRegExpMacroAssembler::CaseInsensitiveCompareUC16()
static ExternalReference re_case_insensitive_compare_uc16(Isolate* isolate);
// Function RegExpMacroAssembler*::CheckStackGuardState()
static ExternalReference re_check_stack_guard_state(Isolate* isolate);
// Function NativeRegExpMacroAssembler::GrowStack()
static ExternalReference re_grow_stack(Isolate* isolate);
// byte NativeRegExpMacroAssembler::word_character_bitmap
static ExternalReference re_word_character_map();
#endif
// This lets you register a function that rewrites all external references.
// Used by the ARM simulator to catch calls to external references.
static void set_redirector(Isolate* isolate,
ExternalReferenceRedirector* redirector);
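// A simulator build would typically install a redirector right after isolate
// setup, roughly like this (illustrative sketch; MyRedirector and
// MakeTrampolineFor are hypothetical names matching the
// ExternalReferenceRedirector typedef above):
//
//   void* MyRedirector(Isolate* isolate, void* original, Type type) {
//     return MakeTrampolineFor(original, type);  // hypothetical helper
//   }
//   ...
//   ExternalReference::set_redirector(isolate, &MyRedirector);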
static ExternalReference stress_deopt_count(Isolate* isolate);
static ExternalReference fixed_typed_array_base_data_offset();
private:
explicit ExternalReference(void* address)
: address_(address) {}
static void* Redirect(Isolate* isolate,
Address address_arg,
Type type = ExternalReference::BUILTIN_CALL) {
ExternalReferenceRedirector* redirector =
reinterpret_cast<ExternalReferenceRedirector*>(
isolate->external_reference_redirector());
void* address = reinterpret_cast<void*>(address_arg);
void* answer = (redirector == nullptr)
? address
: (*redirector)(isolate, address, type);
return answer;
}
void* address_;
};
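// A minimal usage sketch (illustrative only): a code generator usually asks
// for one of the named references above and embeds its address() in the
// instruction stream as an external reference, e.g.
//
//   ExternalReference ref = ExternalReference::ieee754_sin_function(isolate);
//   Address entry = ref.address();  // already redirected for simulator builds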
V8_EXPORT_PRIVATE bool operator==(ExternalReference, ExternalReference);
bool operator!=(ExternalReference, ExternalReference);
size_t hash_value(ExternalReference);
V8_EXPORT_PRIVATE std::ostream& operator<<(std::ostream&, ExternalReference);
// -----------------------------------------------------------------------------
// Utility functions
// Computes pow(x, y) with the special cases in the spec for Math.pow.
double power_helper(Isolate* isolate, double x, double y);
double power_double_int(double x, int y);
double power_double_double(double x, double y);
// -----------------------------------------------------------------------------
// Constant pool support
class ConstantPoolEntry {
public:
ConstantPoolEntry() {}
ConstantPoolEntry(int position, intptr_t value, bool sharing_ok)
: position_(position),
merged_index_(sharing_ok ? SHARING_ALLOWED : SHARING_PROHIBITED),
value_(value) {}
ConstantPoolEntry(int position, Double value)
: position_(position),
merged_index_(SHARING_ALLOWED),
value64_(value.AsUint64()) {}
int position() const { return position_; }
bool sharing_ok() const { return merged_index_ != SHARING_PROHIBITED; }
bool is_merged() const { return merged_index_ >= 0; }
int merged_index() const {
DCHECK(is_merged());
return merged_index_;
}
void set_merged_index(int index) {
DCHECK(sharing_ok());
merged_index_ = index;
DCHECK(is_merged());
}
int offset() const {
DCHECK_GE(merged_index_, 0);
return merged_index_;
}
void set_offset(int offset) {
DCHECK_GE(offset, 0);
merged_index_ = offset;
}
intptr_t value() const { return value_; }
uint64_t value64() const { return value64_; }
enum Type { INTPTR, DOUBLE, NUMBER_OF_TYPES };
static int size(Type type) {
return (type == INTPTR) ? kPointerSize : kDoubleSize;
}
enum Access { REGULAR, OVERFLOWED };
private:
int position_;
int merged_index_;
union {
intptr_t value_;
uint64_t value64_;
};
enum { SHARING_PROHIBITED = -2, SHARING_ALLOWED = -1 };
};
// -----------------------------------------------------------------------------
// Embedded constant pool support
class ConstantPoolBuilder BASE_EMBEDDED {
public:
ConstantPoolBuilder(int ptr_reach_bits, int double_reach_bits);
// Add pointer-sized constant to the embedded constant pool
ConstantPoolEntry::Access AddEntry(int position, intptr_t value,
bool sharing_ok) {
ConstantPoolEntry entry(position, value, sharing_ok);
return AddEntry(entry, ConstantPoolEntry::INTPTR);
}
// Add double constant to the embedded constant pool
ConstantPoolEntry::Access AddEntry(int position, Double value) {
ConstantPoolEntry entry(position, value);
return AddEntry(entry, ConstantPoolEntry::DOUBLE);
}
// Add double constant to the embedded constant pool
ConstantPoolEntry::Access AddEntry(int position, double value) {
return AddEntry(position, Double(value));
}
// Previews the access type required for the next new entry to be added.
ConstantPoolEntry::Access NextAccess(ConstantPoolEntry::Type type) const;
bool IsEmpty() {
return info_[ConstantPoolEntry::INTPTR].entries.empty() &&
info_[ConstantPoolEntry::INTPTR].shared_entries.empty() &&
info_[ConstantPoolEntry::DOUBLE].entries.empty() &&
info_[ConstantPoolEntry::DOUBLE].shared_entries.empty();
}
// Emit the constant pool. Invoke only after all entries have been
// added and all instructions have been emitted.
// Returns position of the emitted pool (zero implies no constant pool).
int Emit(Assembler* assm);
// Returns the label associated with the start of the constant pool.
// Linking to this label in the function prologue may provide an
// efficient means of constant pool pointer register initialization
// on some architectures.
inline Label* EmittedPosition() { return &emitted_label_; }
private:
ConstantPoolEntry::Access AddEntry(ConstantPoolEntry& entry,
ConstantPoolEntry::Type type);
void EmitSharedEntries(Assembler* assm, ConstantPoolEntry::Type type);
void EmitGroup(Assembler* assm, ConstantPoolEntry::Access access,
ConstantPoolEntry::Type type);
struct PerTypeEntryInfo {
PerTypeEntryInfo() : regular_count(0), overflow_start(-1) {}
bool overflow() const {
return (overflow_start >= 0 &&
overflow_start < static_cast<int>(entries.size()));
}
int regular_reach_bits;
int regular_count;
int overflow_start;
std::vector<ConstantPoolEntry> entries;
std::vector<ConstantPoolEntry> shared_entries;
};
Label emitted_label_; // Records pc_offset of emitted pool
PerTypeEntryInfo info_[ConstantPoolEntry::NUMBER_OF_TYPES];
};
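// A minimal usage sketch (illustrative only; the names below are hypothetical
// and, in practice, the assemblers of architectures with an embedded constant
// pool drive this during code generation):
//
//   ConstantPoolBuilder builder(ptr_reach_bits, double_reach_bits);
//   ConstantPoolEntry::Access a1 = builder.AddEntry(pc_offset, imm, true);
//   ConstantPoolEntry::Access a2 = builder.AddEntry(pc_offset, 1.5);
//   ...                                    // emit the rest of the function
//   int pool_position = builder.Emit(&assm);  // after all instructions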
class HeapObjectRequest {
public:
explicit HeapObjectRequest(double heap_number, int offset = -1);
explicit HeapObjectRequest(CodeStub* code_stub, int offset = -1);
enum Kind { kHeapNumber, kCodeStub };
Kind kind() const { return kind_; }
double heap_number() const {
DCHECK_EQ(kind(), kHeapNumber);
return value_.heap_number;
}
CodeStub* code_stub() const {
DCHECK_EQ(kind(), kCodeStub);
return value_.code_stub;
}
// The code buffer offset at the time of the request.
int offset() const {
DCHECK_GE(offset_, 0);
return offset_;
}
void set_offset(int offset) {
DCHECK_LT(offset_, 0);
offset_ = offset;
DCHECK_GE(offset_, 0);
}
private:
Kind kind_;
union {
double heap_number;
CodeStub* code_stub;
} value_;
int offset_;
};
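// An assembler records a HeapObjectRequest when it needs a heap object that
// cannot be allocated while code is being generated; the stored offset lets
// the object be patched into the code later. A rough sketch (the recording
// method name is hypothetical):
//
//   RequestHeapObject(HeapObjectRequest(3.1415, pc_offset()));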
// Base type for CPU Registers.
//
// 1) We would prefer to use an enum for registers, but enum values are
// assignment-compatible with int, which has caused code-generation bugs.
//
// 2) By not using an enum, we are possibly preventing the compiler from
// doing certain constant folds, which may significantly reduce the
// code generated for some assembly instructions (because they boil down
// to a few constants). If this is a problem, we could change the code
// such that we use an enum in optimized mode, and the class in debug
// mode. This way we get the compile-time error checking in debug mode
// and best performance in optimized code.
template <typename SubType, int kAfterLastRegister>
class RegisterBase {
// Internal enum class; used for calling constexpr methods, where we need to
// pass an integral type as template parameter.
enum class RegisterCode : int { kFirst = 0, kAfterLast = kAfterLastRegister };
public:
static constexpr int kCode_no_reg = -1;
static constexpr int kNumRegisters = kAfterLastRegister;
static constexpr SubType no_reg() { return SubType{kCode_no_reg}; }
template <int code>
static constexpr SubType from_code() {
static_assert(code >= 0 && code < kNumRegisters, "must be valid reg code");
return SubType{code};
}
constexpr operator RegisterCode() const {
return static_cast<RegisterCode>(reg_code_);
}
template <RegisterCode reg_code>
static constexpr int code() {
static_assert(
reg_code >= RegisterCode::kFirst && reg_code < RegisterCode::kAfterLast,
"must be valid reg");
return static_cast<int>(reg_code);
}
template <RegisterCode reg_code>
static constexpr int bit() {
return 1 << code<reg_code>();
}
static SubType from_code(int code) {
DCHECK_LE(0, code);
DCHECK_GT(kNumRegisters, code);
return SubType{code};
}
template <RegisterCode... reg_codes>
static constexpr RegList ListOf() {
return CombineRegLists(RegisterBase::bit<reg_codes>()...);
}
bool is_valid() const { return reg_code_ != kCode_no_reg; }
int code() const {
DCHECK(is_valid());
return reg_code_;
}
int bit() const { return 1 << code(); }
inline bool operator==(SubType other) const {
return reg_code_ == other.reg_code_;
}
inline bool operator!=(SubType other) const { return !(*this == other); }
protected:
explicit constexpr RegisterBase(int code) : reg_code_(code) {}
int reg_code_;
};
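// A port defines its concrete register type on top of RegisterBase roughly
// like this (illustrative sketch; the enumerator and register names are
// hypothetical):
//
//   enum RegisterCode { kRegCode_r0, kRegCode_r1, /* ... */ kRegAfterLast };
//   class Register : public RegisterBase<Register, kRegAfterLast> {
//     friend class RegisterBase<Register, kRegAfterLast>;
//     explicit constexpr Register(int code) : RegisterBase(code) {}
//   };
//   constexpr Register r0 = Register::from_code<kRegCode_r0>();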
} // namespace internal
} // namespace v8
#endif // V8_ASSEMBLER_H_