On 32-bit the maps are now aligned on a 32-byte boundary in order to encode more maps during compacting GC. The actual size of a map on 32-bit is 28 bytes, so this change wastes 4 bytes per map.
On 64-bit the encoding for compacting GC now uses more than 32 bits, and the maps there are still pointer-size aligned. The actual size of a map on 64-bit is 48 bytes, so this change does not introduce any waste.
My choice of 16 bits for kMapPageIndexBits for 64-bit should give the same maximum number of pages (8K) for map space. As maps on 64-bit are larger than on 32-bit, the total number of maps on 64-bit will be smaller than on 32-bit. We could consider raising this to 17 or 18.
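To illustrate the encoding idea, here is a minimal sketch (the constant names
and bit widths are assumptions for illustration, not the actual values in
objects.h): because maps are aligned to 32 bytes, the low 5 bits of a map's
offset within its page are always zero and can be dropped, which is what
frees up bits to address more maps.

    // Illustrative only; not the real constants or field layout.
    #include <stdint.h>

    const int kPageSizeBits = 13;     // assumed 8K pages
    const int kMapAlignmentBits = 5;  // maps aligned on a 32-byte boundary
    const int kMapPageOffsetBits = kPageSizeBits - kMapAlignmentBits;

    // Pack a map's location into (page index, aligned offset). The always-zero
    // low bits of the offset are dropped before packing.
    uint32_t EncodeMapLocation(uint32_t page_index, uint32_t offset_in_page) {
      return (page_index << kMapPageOffsetBits) |
             (offset_in_page >> kMapAlignmentBits);
    }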
I moved kPageSizeBits to globals.h as the calculation of the encoding really depends on it.
There is still an #ifdef/#endif in objects.h, and this constant could be moved to globals.h as well, but I kept it together with the related constants.
All the tests also pass in debug mode with the additional options --gc-global --always-compact (except for a few tests which also fail before this change when run with --gc-global --always-compact).
BUG=http://code.google.com/p/v8/issues/detail?id=524
BUG=http://crbug.com/29428
TEST=test/mjsunit/regress/regress-524.js
Review URL: http://codereview.chromium.org/504026
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@3481 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
Instead of weak handles, external strings now use a separate
table. This table uses 5 times less memory than weak handles.
Moreover, since we don't have to follow the weak handle callback
protocol, we can collect the strings faster, and even on scavenge
collections.
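A minimal sketch of the idea (the class shape and names below are
assumptions, not the actual V8 table): newly created external strings are
recorded in a plain list that the GC visits directly, so no per-string weak
handle or callback is needed and dead entries can be dropped even on
scavenges.

    // Illustrative only; the element type and member names are placeholders.
    #include <vector>

    class ExternalStringTable {
     public:
      void Add(void* external_string) { strings_.push_back(external_string); }

      // Called after every collection, including scavenges: keep survivors
      // and release the external resources of dead strings right away.
      template <typename IsAlive, typename Release>
      void ProcessAfterGC(IsAlive is_alive, Release release) {
        std::vector<void*> survivors;
        for (void* s : strings_) {
          if (is_alive(s)) survivors.push_back(s);
          else release(s);
        }
        strings_.swap(survivors);
      }

     private:
      std::vector<void*> strings_;
    };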
Review URL: http://codereview.chromium.org/467037
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@3439 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
The different length string types were used to encode the string length and the hash in one field. This is now split into two fields: one for the length and one for the hash. The hash field still encodes the array index of the string if it has one. If an array index is encoded in the hash field, the string length is added to the top bits of the hash field to avoid a hash value of zero.
On 32-bit this causes an additional 4 bytes to be used for all string objects. On 64-bit the overhead will on average be half of that due to pointer alignment.
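A rough sketch of the hash-field encoding described above (the shift value
and helper name are made up for illustration; the real layout is in
objects.h):

    // Illustrative only.
    #include <stdint.h>

    const int kArrayIndexLengthShift = 24;  // assumed position of the length bits

    // For a string that is a valid array index, the hash field stores the
    // index itself plus the string length in the top bits, so the field is
    // never zero even when the index is 0.
    uint32_t MakeArrayIndexHashField(uint32_t index, uint32_t length) {
      return index | (length << kArrayIndexLengthShift);
    }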
Review URL: http://codereview.chromium.org/436001
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@3350 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
potentially leading to bogus FatalProcessOutOfMemory situations. Also
fixed a few cases where callers relied on getting a NewSpace object
back (to avoid write barrier overhead), which they can't do when
always_allocate is in effect.
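For context, a simplified sketch of why callers care about getting a new
space object back (the function names are illustrative, not the actual Heap
interface): stores into objects known to be in new space can skip the write
barrier, which is no longer guaranteed once always_allocate may place the
object in old space.

    // Illustrative only; not the real write barrier interface.
    void RecordWrite(void** field) { /* record old-to-new pointer */ }

    void StoreField(void** field, void* value, bool object_in_new_space) {
      *field = value;
      // Objects in new space don't need the write barrier; with
      // always_allocate the object may live in old space, so the barrier
      // has to run.
      if (!object_in_new_space) RecordWrite(field);
    }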
Review URL: http://codereview.chromium.org/391018
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@3285 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
In the generated code for function.apply there was a loop checking the stack limit for interruption. This loop would call into the runtime system to handle the interruption and keep running until there was no interruption. However, if the interruption was a debug break, the runtime system would never clear it, as debug break is prevented in builtins, and the assumption was that returning with the debug break flag set would move execution forward.
Renamed initial_jslimit and initial_climit to real_jslimit and real_climit. Renamed a few external references related to the stack limit as well.
Exposed the real stack limit to generated code so that the stack check when entering function.apply uses the real stack limit and not the limit that is changed to signal interruption.
Added the real stack limit to the roots array.
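A sketch of the two-limit scheme (member and constant names are illustrative,
not the actual StackGuard code): the ordinary limit can be overwritten to
signal an interruption, while the real limit always marks the true end of the
stack, so checks that must detect genuine overflow, such as the one on entry
to function.apply, compare against the real limit.

    // Illustrative only.
    #include <stdint.h>

    class StackLimits {
     public:
      void SetStackLimit(uintptr_t limit) { real_jslimit_ = jslimit_ = limit; }
      // Makes every ordinary stack check fail, forcing a call into the runtime.
      void RequestInterrupt() { jslimit_ = kInterruptLimit; }
      void ClearInterrupt() { jslimit_ = real_jslimit_; }
      // Ordinary check used by generated code.
      bool CheckFailed(uintptr_t sp) const { return sp < jslimit_; }
      // Check against the real limit, e.g. on entry to function.apply.
      bool ReallyOverflowed(uintptr_t sp) const { return sp < real_jslimit_; }

     private:
      static const uintptr_t kInterruptLimit = ~static_cast<uintptr_t>(0);
      uintptr_t jslimit_ = 0;
      uintptr_t real_jslimit_ = 0;
    };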
BUG=http://code.google.com/p/v8/issues/detail?id=493
TEST=cctest/test-debug/DebugBreakFunctionApply
Review URL: http://codereview.chromium.org/345048
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@3229 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
As the list of code stubs is used in two places, it is now handled through a macro to keep the two in sync. As some code stubs are only used on ARM, the list has been split into two parts to indicate this and to get rid of dummy implementations on the ia32 and x64 platforms.
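The pattern is the usual list macro (the names below are invented for
illustration, not necessarily the ones in the source): the list is defined
once and expanded at each use site, and the ARM-only part expands to nothing
on the other platforms.

    // Sketch of the macro pattern; stub names are placeholders.
    #define CODE_STUB_LIST_ALL(V) \
      V(CallFunction)             \
      V(StackCheck)

    #ifdef TARGET_ARCH_ARM
    #define CODE_STUB_LIST_ARM(V) \
      V(RegExpCEntry)
    #else
    #define CODE_STUB_LIST_ARM(V)
    #endif

    #define CODE_STUB_LIST(V) CODE_STUB_LIST_ALL(V) CODE_STUB_LIST_ARM(V)

    // One of the use sites: generate an enum value per stub.
    enum StubKind {
    #define DEF_ENUM(name) k##name,
      CODE_STUB_LIST(DEF_ENUM)
    #undef DEF_ENUM
      kNumberOfStubs
    };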
BUG=484
Review URL: http://codereview.chromium.org/335025
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@3127 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
when using snapshots.
The alignment of new space has to match the alignment in the snapshot,
but the max committed amount of memory does not.
For now, we assume that the default semispace size is always used in a
snapshot.
Review URL: http://codereview.chromium.org/300036
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@3106 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
specification under development. The optimizations are patterned after
those previously done for CanvasPixelArray. This CL adds all of the
necessary framework but continues to use the generic KeyedLoadIC and
KeyedStoreIC code, to create a baseline for benchmarking purposes. The
next CL will add the optimized ICs to ic-ia32.cc and ic-x64.cc.
These new CanvasArray types have different semantics than
CanvasPixelArray; out-of-range values are converted via C cast
semantics, which is cheaper than the clamping behavior specified by
CanvasPixelArray. Out-of-range indices raise exceptions instead of
being silently ignored.
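For illustration, the difference between the two store conversions for a
byte-sized element type (the function names are made up; this is not the IC
code itself):

    // Illustrative only.
    #include <stdint.h>

    // CanvasPixelArray-style store: out-of-range values are clamped to [0, 255].
    uint8_t StoreClamped(double value) {
      if (value < 0) return 0;
      if (value > 255) return 255;
      return static_cast<uint8_t>(value);
    }

    // CanvasArray-style store for a byte element type: plain C cast semantics,
    // i.e. convert to an integer and keep the low 8 bits, which is cheaper
    // than clamping.
    uint8_t StoreWithCastSemantics(double value) {
      return static_cast<uint8_t>(static_cast<int32_t>(value));
    }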
As part of this work, pulled FloatingPointHelper::AllocateHeapNumber
up to MacroAssembler on ia32 and x64 platforms. Slightly refactored
KeyedLoadIC and KeyedStoreIC. Fixed encoding for fistp_d on x64 and
added a few more instructions that are needed for the new ICs. The
test cases in test-api.cc have been verified by hand to exercise all
of the generated code paths in the forthcoming specialized ICs.
Review URL: http://codereview.chromium.org/293023
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@3096 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
Turned on with the '--log-producers' flag; it also needs '--noinline-new' (this is temporary), '--log-code', and '--log-gc'. Not all allocations are traced (I'm investigating).
Stacks are stored using weak handles. Thus, when an object is collected, its allocation stack is deleted.
Review URL: http://codereview.chromium.org/267077
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@3069 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
The profile is taken together with the constructors profile. In theory, it
should represent a complete heap graph. However, this takes a lot of memory,
so it is reduced to a more compact, but still useful form. Namely:
- objects are aggregated by their constructors, except for Array and Object
instances, which are too heterogeneous;
- for Arrays and Objects, initially every instance is considered, but then
they are grouped together based on the similarity of their retainer graph
paths (e.g. if two objects have the same retainer, they are considered
equal), as sketched below.
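A toy sketch of the grouping step from the last point (the data types are
made up, not the profiler's internal representation):

    // Illustrative only.
    #include <map>
    #include <string>
    #include <vector>

    // Count how many Array/Object instances share each retainer-path
    // signature; instances with the same signature are reported as a single
    // aggregated entry.
    std::map<std::string, int> GroupByRetainerPath(
        const std::vector<std::string>& retainer_paths) {
      std::map<std::string, int> groups;
      for (const std::string& path : retainer_paths) ++groups[path];
      return groups;
    }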
Review URL: http://codereview.chromium.org/200132
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@2903 ce2b1a6d-e550-0410-aec6-3dcde31c8c00
This only wins us around 1% in performance, but it makes the code more
compact. We don't currently have a way to represent in the virtual
frame that a slot contains a value from the root array. Adding this
would probably make the code more compact.
Review URL: http://codereview.chromium.org/174639
git-svn-id: http://v8.googlecode.com/svn/branches/bleeding_edge@2783 ce2b1a6d-e550-0410-aec6-3dcde31c8c00