Revert "[vm, gc] Mark through new-space."

This reverts commit 3fb88e4c66.

Reason for revert: b/297175670

Original change's description:
> [vm, gc] Mark through new-space.
>
>  - Initial and final marking no longer visit all of new-space, reducing the STW pause for major GC.
>  - A scavenge during concurrent marking must forward / filter objects in the marking worklist that are moved / collected, increasing the STW pause for minor GC.
>  - Unreachable intergenerational cycles and weak references are collected in the next mark-sweep instead of first requiring enough scavenges to promote the whole cycle or weak target into old-space.
>  - Artificial minor GCs are no longer needed to avoid memory leaks from back-to-back major GCs.
>  - reachabilityBarrier is now just a count of major GCs.
>
> TEST=ci
> Change-Id: I6362802cd93ba5ba9c39f116ddff82e4feb4c312
> Reviewed-on: https://dart-review.googlesource.com/c/sdk/+/321304
> Commit-Queue: Ryan Macnak <rmacnak@google.com>
> Reviewed-by: Siva Annamalai <asiva@google.com>

Change-Id: I33075156160dc35861355d738a5776b74dce88b9
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Reviewed-on: https://dart-review.googlesource.com/c/sdk/+/322344
Reviewed-by: Martin Kustermann <kustermann@google.com>
Auto-Submit: Ivan Inozemtsev <iinozemtsev@google.com>
Commit-Queue: Martin Kustermann <kustermann@google.com>
Ivan Inozemtsev 2023-08-23 13:13:28 +00:00 committed by Commit Queue
parent 6d8ef05196
commit 3176cfcd21
44 changed files with 506 additions and 875 deletions


@ -57,9 +57,17 @@ FLAG_scavenger_tasks (default 2) workers are started on separate threads. Each w
All objects have a bit in their header called the mark bit. At the start of a collection cycle, all objects have this bit clear.
During the marking phase, the collector visits each of the root pointers. If the target object has its mark bit clear, the mark bit is set and the target added to the marking stack (grey set). The collector then removes and visits objects in the marking stack, marking more objects and adding them to the marking stack, until the marking stack is empty. At this point, all reachable objects have their mark bits set and all unreachable objects have their mark bits clear.
During the marking phase, the collector visits each of the root pointers. If the target object is an old-space object and its mark bit is clear, the mark bit is set and the target added to the marking stack (grey set). The collector then removes and visits objects in the marking stack, marking more old-space objects and adding them to the marking stack, until the marking stack is empty. At this point, all reachable objects have their mark bits set and all unreachable objects have their mark bits clear.
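A toy sketch of the marking loop just described (illustrative names only; the VM's actual marker is the `MarkingVisitorBase` code that appears later in this diff):
```c++
#include <vector>

// Toy object: a mark bit, a generation flag, and its outgoing pointers.
struct Obj {
  bool marked = false;
  bool in_old_space = true;
  std::vector<Obj*> fields;
};

// Mark everything transitively reachable from the root pointers. Only
// old-space objects are marked and pushed; new-space targets are ignored
// because new-space is treated as part of the root set (next section).
void Mark(const std::vector<Obj*>& roots) {
  std::vector<Obj*> marking_stack;  // the grey set
  auto mark_and_push = [&](Obj* target) {
    if (target != nullptr && target->in_old_space && !target->marked) {
      target->marked = true;
      marking_stack.push_back(target);
    }
  };
  for (Obj* root : roots) mark_and_push(root);
  while (!marking_stack.empty()) {  // drain until no grey objects remain
    Obj* obj = marking_stack.back();
    marking_stack.pop_back();
    for (Obj* field : obj->fields) mark_and_push(field);
  }
  // Every reachable old-space object now has its mark bit set.
}
```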
During the sweeping phase, the collector visits each object. If the mark bit is clear, the object's memory is added to a [free list](https://github.com/dart-lang/sdk/blob/main/runtime/vm/heap/freelist.h) to be used for future allocations. Otherwise the object's mark bit is cleared. If every object on some page is unreachable, the page is released to the OS.
During the sweeping phase, the collector visits each old-space object. If the mark bit is clear, the object's memory is added to a [free list](https://github.com/dart-lang/sdk/blob/main/runtime/vm/heap/freelist.h) to be used for future allocations. Otherwise the object's mark bit is cleared. If every object on some page is unreachable, the page is released to the OS.
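The sweep can be sketched in the same toy model (reusing the `Obj` type from the sketch above; the real sweeper walks heap pages and a size-classed free list):
```c++
#include <cstddef>
#include <vector>

// Sweep one page of toy objects. Unmarked objects are returned to a free
// list (represented here just by their sizes); marked objects have their
// mark bit cleared for the next cycle. Returns true if the whole page was
// garbage and could be released to the OS.
bool SweepPage(std::vector<Obj*>& page, std::vector<std::size_t>& free_list) {
  bool any_live = false;
  for (Obj* obj : page) {
    if (obj->marked) {
      obj->marked = false;
      any_live = true;
    } else {
      free_list.push_back(sizeof(Obj));  // stand-in for freeing the object
    }
  }
  return !any_live;
}
```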
### New-Space as Roots
We do not mark new-space objects, and pointers to new-space objects are ignored; instead all objects in new-space are treated as part of the root set.
This has the advantage of making collections of the two spaces more independent. In particular, the concurrent marker never needs to dereference any memory in new-space, avoiding several data race issues, and avoiding the need to pause or otherwise synchronize with the concurrent marker when starting a scavenge.
It has the disadvantage that no single collection will collect all garbage. An unreachable old-space object that is referenced by an unreachable new-space object will not be collected until a scavenge first collects the new-space object, and unreachable objects that have a generation-crossing cycle will not be collected until the whole subgraph is promoted into old-space. The growth policy must be careful to ensure it doesn't perform old-space collections without interleaving new-space collections, such as when the program performs mostly large allocations that go directly to old-space; otherwise old-space can accumulate such floating garbage and grow without bound.
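The guard the revert restores for exactly this situation appears in the `heap.cc` hunks further down; condensed here with the member and function names used in that file:
```c++
// Condensed from the heap.cc hunks below: before starting another old-space
// GC, interleave a scavenge if the previous GC was also an old-space one, so
// floating garbage kept alive only by dead new-space objects gets collected.
if (old_space_.ReachedHardThreshold()) {
  if (last_gc_was_old_space_) {
    CollectNewSpaceGarbage(thread, GCType::kScavenge, GCReason::kFull);
  }
  CollectGarbage(thread, GCType::kMarkSweep, GCReason::kExternal);
}
```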
## Mark-Compact
@ -95,17 +103,17 @@ But we combine the generational and incremental checks with a shift-and-mask.
```c++
enum HeaderBits {
...
kNotMarkedBit, // Incremental barrier target.
kOldAndNotMarkedBit, // Incremental barrier target.
kNewBit, // Generational barrier target.
kAlwaysSetBit, // Incremental barrier source.
kOldBit, // Incremental barrier source.
kOldAndNotRememberedBit, // Generational barrier source.
...
};
static constexpr intptr_t kGenerationalBarrierMask = 1 << kNewBit;
static constexpr intptr_t kIncrementalBarrierMask = 1 << kNotMarkedBit;
static constexpr intptr_t kIncrementalBarrierMask = 1 << kOldAndNotMarkedBit;
static constexpr intptr_t kBarrierOverlapShift = 2;
COMPILE_ASSERT(kNotMarkedBit + kBarrierOverlapShift == kAlwaysSetBit);
COMPILE_ASSERT(kOldAndNotMarkedBit + kBarrierOverlapShift == kOldBit);
COMPILE_ASSERT(kNewBit + kBarrierOverlapShift == kOldAndNotRememberedBit);
StorePointer(ObjectPtr source, ObjectPtr* slot, ObjectPtr target) {
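  *slot = target;
  // Sketch of the rest of the body (hedged: accessor and slow-path names are
  // illustrative, not the VM's verbatim code). Shifting the source's header
  // right by kBarrierOverlapShift lines the source-role bits up with the
  // target-role bits (see the COMPILE_ASSERTs above), so a single AND against
  // the thread's barrier mask decides whether either barrier needs the slow
  // path; the incremental half of the mask is only set while marking.
  if (target->IsImmediateObject()) return;  // Smis never need a barrier.
  if ((source->header() >> kBarrierOverlapShift) & target->header() &
      Thread::Current()->barrier_mask()) {
    StorePointerSlowPath(source, slot, target);  // hypothetical helper
  }
}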


@ -146,27 +146,6 @@ class AcqRelAtomic {
std::atomic<T> value_;
};
template <typename T>
static inline T LoadRelaxed(const T* ptr) {
static_assert(sizeof(std::atomic<T>) == sizeof(T));
return reinterpret_cast<const std::atomic<T>*>(ptr)->load(
std::memory_order_relaxed);
}
template <typename T>
static inline T LoadAcquire(const T* ptr) {
static_assert(sizeof(std::atomic<T>) == sizeof(T));
return reinterpret_cast<const std::atomic<T>*>(ptr)->load(
std::memory_order_acquire);
}
template <typename T>
static inline void StoreRelease(T* ptr, T value) {
static_assert(sizeof(std::atomic<T>) == sizeof(T));
reinterpret_cast<std::atomic<T>*>(ptr)->store(value,
std::memory_order_release);
}
} // namespace dart
#endif // RUNTIME_PLATFORM_ATOMIC_H_


@ -875,8 +875,8 @@ void Deserializer::InitializeHeader(ObjectPtr raw,
tags = UntaggedObject::ClassIdTag::update(class_id, tags);
tags = UntaggedObject::SizeTag::update(size, tags);
tags = UntaggedObject::CanonicalBit::update(is_canonical, tags);
tags = UntaggedObject::AlwaysSetBit::update(true, tags);
tags = UntaggedObject::NotMarkedBit::update(true, tags);
tags = UntaggedObject::OldBit::update(true, tags);
tags = UntaggedObject::OldAndNotMarkedBit::update(true, tags);
tags = UntaggedObject::OldAndNotRememberedBit::update(true, tags);
tags = UntaggedObject::NewBit::update(false, tags);
raw->untag()->tags_ = tags;


@ -360,8 +360,6 @@ uword MakeTagWordForNewSpaceObject(classid_t cid, uword instance_size) {
TranslateOffsetInWordsToHost(instance_size)) |
dart::UntaggedObject::ClassIdTag::encode(cid) |
dart::UntaggedObject::NewBit::encode(true) |
dart::UntaggedObject::AlwaysSetBit::encode(true) |
dart::UntaggedObject::NotMarkedBit::encode(true) |
dart::UntaggedObject::ImmutableBit::encode(
ShouldHaveImmutabilityBitSet(cid));
}
@ -380,7 +378,8 @@ const word UntaggedObject::kNewBit = dart::UntaggedObject::kNewBit;
const word UntaggedObject::kOldAndNotRememberedBit =
dart::UntaggedObject::kOldAndNotRememberedBit;
const word UntaggedObject::kNotMarkedBit = dart::UntaggedObject::kNotMarkedBit;
const word UntaggedObject::kOldAndNotMarkedBit =
dart::UntaggedObject::kOldAndNotMarkedBit;
const word UntaggedObject::kImmutableBit = dart::UntaggedObject::kImmutableBit;


@ -418,7 +418,7 @@ class UntaggedObject : public AllStatic {
static const word kCanonicalBit;
static const word kNewBit;
static const word kOldAndNotRememberedBit;
static const word kNotMarkedBit;
static const word kOldAndNotMarkedBit;
static const word kImmutableBit;
static const word kSizeTagPos;
static const word kSizeTagSize;
@ -1493,8 +1493,6 @@ class Page : public AllStatic {
static const word kBytesPerCardLog2;
static word card_table_offset();
static word original_top_offset();
static word original_end_offset();
};
class Heap : public AllStatic {


@ -199,8 +199,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
GrowableObjectArray_type_arguments_offset = 0x4;
static constexpr dart::compiler::target::word Page_card_table_offset = 0x10;
static constexpr dart::compiler::target::word Page_original_top_offset = 0x1c;
static constexpr dart::compiler::target::word Page_original_end_offset = 0x20;
static constexpr dart::compiler::target::word
CallSiteData_arguments_descriptor_offset = 0x8;
static constexpr dart::compiler::target::word ICData_NumArgsTestedMask = 0x3;
@ -914,8 +912,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word Page_original_top_offset = 0x38;
static constexpr dart::compiler::target::word Page_original_end_offset = 0x40;
static constexpr dart::compiler::target::word
CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word ICData_NumArgsTestedMask = 0x3;
@ -1631,8 +1627,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
GrowableObjectArray_type_arguments_offset = 0x4;
static constexpr dart::compiler::target::word Page_card_table_offset = 0x10;
static constexpr dart::compiler::target::word Page_original_top_offset = 0x1c;
static constexpr dart::compiler::target::word Page_original_end_offset = 0x20;
static constexpr dart::compiler::target::word
CallSiteData_arguments_descriptor_offset = 0x8;
static constexpr dart::compiler::target::word ICData_NumArgsTestedMask = 0x3;
@ -2345,8 +2339,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word Page_original_top_offset = 0x38;
static constexpr dart::compiler::target::word Page_original_end_offset = 0x40;
static constexpr dart::compiler::target::word
CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word ICData_NumArgsTestedMask = 0x3;
@ -3065,8 +3057,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word Page_original_top_offset = 0x38;
static constexpr dart::compiler::target::word Page_original_end_offset = 0x40;
static constexpr dart::compiler::target::word
CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word ICData_NumArgsTestedMask = 0x3;
@ -3782,8 +3772,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word Page_original_top_offset = 0x38;
static constexpr dart::compiler::target::word Page_original_end_offset = 0x40;
static constexpr dart::compiler::target::word
CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word ICData_NumArgsTestedMask = 0x3;
@ -4500,8 +4488,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
GrowableObjectArray_type_arguments_offset = 0x4;
static constexpr dart::compiler::target::word Page_card_table_offset = 0x10;
static constexpr dart::compiler::target::word Page_original_top_offset = 0x1c;
static constexpr dart::compiler::target::word Page_original_end_offset = 0x20;
static constexpr dart::compiler::target::word
CallSiteData_arguments_descriptor_offset = 0x8;
static constexpr dart::compiler::target::word ICData_NumArgsTestedMask = 0x3;
@ -5216,8 +5202,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word Page_original_top_offset = 0x38;
static constexpr dart::compiler::target::word Page_original_end_offset = 0x40;
static constexpr dart::compiler::target::word
CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word ICData_NumArgsTestedMask = 0x3;
@ -5929,8 +5913,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
GrowableObjectArray_type_arguments_offset = 0x4;
static constexpr dart::compiler::target::word Page_card_table_offset = 0x10;
static constexpr dart::compiler::target::word Page_original_top_offset = 0x1c;
static constexpr dart::compiler::target::word Page_original_end_offset = 0x20;
static constexpr dart::compiler::target::word
CallSiteData_arguments_descriptor_offset = 0x8;
static constexpr dart::compiler::target::word ICData_NumArgsTestedMask = 0x3;
@ -6636,8 +6618,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word Page_original_top_offset = 0x38;
static constexpr dart::compiler::target::word Page_original_end_offset = 0x40;
static constexpr dart::compiler::target::word
CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word ICData_NumArgsTestedMask = 0x3;
@ -7345,8 +7325,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
GrowableObjectArray_type_arguments_offset = 0x4;
static constexpr dart::compiler::target::word Page_card_table_offset = 0x10;
static constexpr dart::compiler::target::word Page_original_top_offset = 0x1c;
static constexpr dart::compiler::target::word Page_original_end_offset = 0x20;
static constexpr dart::compiler::target::word
CallSiteData_arguments_descriptor_offset = 0x8;
static constexpr dart::compiler::target::word ICData_NumArgsTestedMask = 0x3;
@ -8051,8 +8029,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word Page_original_top_offset = 0x38;
static constexpr dart::compiler::target::word Page_original_end_offset = 0x40;
static constexpr dart::compiler::target::word
CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word ICData_NumArgsTestedMask = 0x3;
@ -8763,8 +8739,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word Page_original_top_offset = 0x38;
static constexpr dart::compiler::target::word Page_original_end_offset = 0x40;
static constexpr dart::compiler::target::word
CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word ICData_NumArgsTestedMask = 0x3;
@ -9472,8 +9446,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word Page_original_top_offset = 0x38;
static constexpr dart::compiler::target::word Page_original_end_offset = 0x40;
static constexpr dart::compiler::target::word
CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word ICData_NumArgsTestedMask = 0x3;
@ -10182,8 +10154,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
GrowableObjectArray_type_arguments_offset = 0x4;
static constexpr dart::compiler::target::word Page_card_table_offset = 0x10;
static constexpr dart::compiler::target::word Page_original_top_offset = 0x1c;
static constexpr dart::compiler::target::word Page_original_end_offset = 0x20;
static constexpr dart::compiler::target::word
CallSiteData_arguments_descriptor_offset = 0x8;
static constexpr dart::compiler::target::word ICData_NumArgsTestedMask = 0x3;
@ -10890,8 +10860,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word Page_original_top_offset = 0x38;
static constexpr dart::compiler::target::word Page_original_end_offset = 0x40;
static constexpr dart::compiler::target::word
CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word ICData_NumArgsTestedMask = 0x3;
@ -11620,10 +11588,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_GrowableObjectArray_type_arguments_offset = 0x4;
static constexpr dart::compiler::target::word AOT_Page_card_table_offset = 0x10;
static constexpr dart::compiler::target::word AOT_Page_original_top_offset =
0x1c;
static constexpr dart::compiler::target::word AOT_Page_original_end_offset =
0x20;
static constexpr dart::compiler::target::word
AOT_CallSiteData_arguments_descriptor_offset = 0x8;
static constexpr dart::compiler::target::word AOT_ICData_NumArgsTestedMask =
@ -12413,10 +12377,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word AOT_Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word AOT_Page_original_top_offset =
0x38;
static constexpr dart::compiler::target::word AOT_Page_original_end_offset =
0x40;
static constexpr dart::compiler::target::word
AOT_CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word AOT_ICData_NumArgsTestedMask =
@ -13213,10 +13173,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word AOT_Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word AOT_Page_original_top_offset =
0x38;
static constexpr dart::compiler::target::word AOT_Page_original_end_offset =
0x40;
static constexpr dart::compiler::target::word
AOT_CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word AOT_ICData_NumArgsTestedMask =
@ -14009,10 +13965,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word AOT_Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word AOT_Page_original_top_offset =
0x38;
static constexpr dart::compiler::target::word AOT_Page_original_end_offset =
0x40;
static constexpr dart::compiler::target::word
AOT_CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word AOT_ICData_NumArgsTestedMask =
@ -14805,10 +14757,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word AOT_Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word AOT_Page_original_top_offset =
0x38;
static constexpr dart::compiler::target::word AOT_Page_original_end_offset =
0x40;
static constexpr dart::compiler::target::word
AOT_CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word AOT_ICData_NumArgsTestedMask =
@ -15603,10 +15551,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_GrowableObjectArray_type_arguments_offset = 0x4;
static constexpr dart::compiler::target::word AOT_Page_card_table_offset = 0x10;
static constexpr dart::compiler::target::word AOT_Page_original_top_offset =
0x1c;
static constexpr dart::compiler::target::word AOT_Page_original_end_offset =
0x20;
static constexpr dart::compiler::target::word
AOT_CallSiteData_arguments_descriptor_offset = 0x8;
static constexpr dart::compiler::target::word AOT_ICData_NumArgsTestedMask =
@ -16397,10 +16341,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word AOT_Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word AOT_Page_original_top_offset =
0x38;
static constexpr dart::compiler::target::word AOT_Page_original_end_offset =
0x40;
static constexpr dart::compiler::target::word
AOT_CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word AOT_ICData_NumArgsTestedMask =
@ -17187,10 +17127,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_GrowableObjectArray_type_arguments_offset = 0x4;
static constexpr dart::compiler::target::word AOT_Page_card_table_offset = 0x10;
static constexpr dart::compiler::target::word AOT_Page_original_top_offset =
0x1c;
static constexpr dart::compiler::target::word AOT_Page_original_end_offset =
0x20;
static constexpr dart::compiler::target::word
AOT_CallSiteData_arguments_descriptor_offset = 0x8;
static constexpr dart::compiler::target::word AOT_ICData_NumArgsTestedMask =
@ -17971,10 +17907,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word AOT_Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word AOT_Page_original_top_offset =
0x38;
static constexpr dart::compiler::target::word AOT_Page_original_end_offset =
0x40;
static constexpr dart::compiler::target::word
AOT_CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word AOT_ICData_NumArgsTestedMask =
@ -18762,10 +18694,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word AOT_Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word AOT_Page_original_top_offset =
0x38;
static constexpr dart::compiler::target::word AOT_Page_original_end_offset =
0x40;
static constexpr dart::compiler::target::word
AOT_CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word AOT_ICData_NumArgsTestedMask =
@ -19549,10 +19477,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word AOT_Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word AOT_Page_original_top_offset =
0x38;
static constexpr dart::compiler::target::word AOT_Page_original_end_offset =
0x40;
static constexpr dart::compiler::target::word
AOT_CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word AOT_ICData_NumArgsTestedMask =
@ -20336,10 +20260,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word AOT_Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word AOT_Page_original_top_offset =
0x38;
static constexpr dart::compiler::target::word AOT_Page_original_end_offset =
0x40;
static constexpr dart::compiler::target::word
AOT_CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word AOT_ICData_NumArgsTestedMask =
@ -21125,10 +21045,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_GrowableObjectArray_type_arguments_offset = 0x4;
static constexpr dart::compiler::target::word AOT_Page_card_table_offset = 0x10;
static constexpr dart::compiler::target::word AOT_Page_original_top_offset =
0x1c;
static constexpr dart::compiler::target::word AOT_Page_original_end_offset =
0x20;
static constexpr dart::compiler::target::word
AOT_CallSiteData_arguments_descriptor_offset = 0x8;
static constexpr dart::compiler::target::word AOT_ICData_NumArgsTestedMask =
@ -21910,10 +21826,6 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_GrowableObjectArray_type_arguments_offset = 0x8;
static constexpr dart::compiler::target::word AOT_Page_card_table_offset = 0x20;
static constexpr dart::compiler::target::word AOT_Page_original_top_offset =
0x38;
static constexpr dart::compiler::target::word AOT_Page_original_end_offset =
0x40;
static constexpr dart::compiler::target::word
AOT_CallSiteData_arguments_descriptor_offset = 0x10;
static constexpr dart::compiler::target::word AOT_ICData_NumArgsTestedMask =


@ -160,8 +160,6 @@
FIELD(GrowableObjectArray, length_offset) \
FIELD(GrowableObjectArray, type_arguments_offset) \
FIELD(Page, card_table_offset) \
FIELD(Page, original_top_offset) \
FIELD(Page, original_end_offset) \
FIELD(CallSiteData, arguments_descriptor_offset) \
FIELD(ICData, NumArgsTestedMask) \
FIELD(ICData, NumArgsTestedShift) \
@ -311,6 +309,7 @@
FIELD(Thread, return_async_not_future_stub_offset) \
FIELD(Thread, return_async_star_stub_offset) \
FIELD(Thread, return_async_stub_offset) \
\
FIELD(Thread, object_null_offset) \
FIELD(Thread, predefined_symbols_address_offset) \
FIELD(Thread, resume_pc_offset) \
@ -324,6 +323,7 @@
FIELD(Thread, stack_overflow_shared_with_fpu_regs_entry_point_offset) \
FIELD(Thread, stack_overflow_shared_with_fpu_regs_stub_offset) \
FIELD(Thread, stack_overflow_shared_without_fpu_regs_entry_point_offset) \
\
FIELD(Thread, stack_overflow_shared_without_fpu_regs_stub_offset) \
FIELD(Thread, store_buffer_block_offset) \
FIELD(Thread, suspend_state_await_entry_point_offset) \
@ -405,6 +405,7 @@
kNumberOfCpuRegisters - 1, [](Register reg) { \
return (kDartAvailableCpuRegs & (1 << reg)) != 0; \
}) \
\
SIZEOF(AbstractType, InstanceSize, UntaggedAbstractType) \
SIZEOF(ApiError, InstanceSize, UntaggedApiError) \
SIZEOF(Array, header_size, UntaggedArray) \


@ -1867,7 +1867,7 @@ void StubCodeCompiler::GenerateSuspendStub(
const Register kSrcFrame = SuspendStubABI::kSrcFrameReg;
const Register kDstFrame = SuspendStubABI::kDstFrameReg;
Label alloc_slow_case, alloc_done, init_done, resize_suspend_state,
remember_object, call_dart;
old_gen_object, call_dart;
#if defined(TARGET_ARCH_ARM) || defined(TARGET_ARCH_ARM64)
SPILLS_LR_TO_FRAME({}); // Simulate entering the caller (Dart) frame.
@ -2005,13 +2005,8 @@ void StubCodeCompiler::GenerateSuspendStub(
}
// Write barrier.
__ AndImmediate(kTemp, kSuspendState, target::kPageMask);
__ LoadFromOffset(kTemp, Address(kTemp, target::Page::original_top_offset()));
__ CompareRegisters(kSuspendState, kTemp);
__ BranchIf(UNSIGNED_LESS, &remember_object);
// Assumption: SuspendStates are always on non-image pages.
// TODO(rmacnak): Also check original_end if we bound TLABs to smaller than a
// heap page.
__ BranchIfBit(kSuspendState, target::ObjectAlignment::kNewObjectBitPosition,
ZERO, &old_gen_object);
__ Bind(&call_dart);
if (call_suspend_function) {
@ -2087,7 +2082,7 @@ void StubCodeCompiler::GenerateSuspendStub(
__ PopRegister(kArgument); // Restore argument.
__ Jump(&alloc_done);
__ Bind(&remember_object);
__ Bind(&old_gen_object);
__ Comment("Old gen SuspendState slow case");
if (!call_suspend_function) {
// Save kArgument which contains the return value


@ -1678,16 +1678,16 @@ static void GenerateWriteBarrierStubHelper(Assembler* assembler,
__ b(&skip_marking, ZERO);
{
// Atomically clear kNotMarkedBit.
// Atomically clear kOldAndNotMarkedBit.
Label retry, done;
__ PushList((1 << R2) | (1 << R3) | (1 << R4)); // Spill.
__ AddImmediate(R3, R0, target::Object::tags_offset() - kHeapObjectTag);
// R3: Untagged address of header word (ldrex/strex do not support offsets).
__ Bind(&retry);
__ ldrex(R2, R3);
__ tst(R2, Operand(1 << target::UntaggedObject::kNotMarkedBit));
__ tst(R2, Operand(1 << target::UntaggedObject::kOldAndNotMarkedBit));
__ b(&done, ZERO); // Marked by another thread.
__ bic(R2, R2, Operand(1 << target::UntaggedObject::kNotMarkedBit));
__ bic(R2, R2, Operand(1 << target::UntaggedObject::kOldAndNotMarkedBit));
__ strex(R4, R2, R3);
__ cmp(R4, Operand(1));
__ b(&retry, EQ);
@ -1711,6 +1711,7 @@ static void GenerateWriteBarrierStubHelper(Assembler* assembler,
__ Bind(&done);
__ clrex();
__ PopList((1 << R2) | (1 << R3) | (1 << R4)); // Unspill.
__ Ret();
}
Label add_to_remembered_set, remember_card;


@ -1979,20 +1979,21 @@ static void GenerateWriteBarrierStubHelper(Assembler* assembler,
__ b(&skip_marking, ZERO);
{
// Atomically clear kNotMarkedBit.
// Atomically clear kOldAndNotMarkedBit.
Label retry, done;
__ PushRegisters(spill_set);
__ AddImmediate(R3, R0, target::Object::tags_offset() - kHeapObjectTag);
// R3: Untagged address of header word (atomics do not support offsets).
if (TargetCPUFeatures::atomic_memory_supported()) {
__ LoadImmediate(TMP, 1 << target::UntaggedObject::kNotMarkedBit);
__ LoadImmediate(TMP, 1 << target::UntaggedObject::kOldAndNotMarkedBit);
__ ldclr(TMP, TMP, R3);
__ tbz(&done, TMP, target::UntaggedObject::kNotMarkedBit);
__ tbz(&done, TMP, target::UntaggedObject::kOldAndNotMarkedBit);
} else {
__ Bind(&retry);
__ ldxr(R2, R3, kEightBytes);
__ tbz(&done, R2, target::UntaggedObject::kNotMarkedBit);
__ AndImmediate(R2, R2, ~(1 << target::UntaggedObject::kNotMarkedBit));
__ tbz(&done, R2, target::UntaggedObject::kOldAndNotMarkedBit);
__ AndImmediate(R2, R2,
~(1 << target::UntaggedObject::kOldAndNotMarkedBit));
__ stxr(R4, R2, R3, kEightBytes);
__ cbnz(&retry, R4);
}


@ -1411,14 +1411,15 @@ static void GenerateWriteBarrierStubHelper(Assembler* assembler, bool cards) {
__ j(ZERO, &skip_marking);
{
// Atomically clear kNotMarkedBit.
// Atomically clear kOldAndNotMarkedBit.
Label retry, done;
__ movl(EAX, FieldAddress(EBX, target::Object::tags_offset()));
__ Bind(&retry);
__ movl(ECX, EAX);
__ testl(ECX, Immediate(1 << target::UntaggedObject::kNotMarkedBit));
__ testl(ECX, Immediate(1 << target::UntaggedObject::kOldAndNotMarkedBit));
__ j(ZERO, &done); // Marked by another thread.
__ andl(ECX, Immediate(~(1 << target::UntaggedObject::kNotMarkedBit)));
__ andl(ECX,
Immediate(~(1 << target::UntaggedObject::kOldAndNotMarkedBit)));
// Cmpxchgq: compare value = implicit operand EAX, new value = ECX.
// On failure, EAX is updated with the current value.
__ LockCmpxchgl(FieldAddress(EBX, target::Object::tags_offset()), ECX);


@ -1797,18 +1797,18 @@ static void GenerateWriteBarrierStubHelper(Assembler* assembler,
__ beqz(TMP, &skip_marking);
{
// Atomically clear kNotMarkedBit.
// Atomically clear kOldAndNotMarkedBit.
Label done;
__ PushRegisters(spill_set);
__ addi(T3, A1, target::Object::tags_offset() - kHeapObjectTag);
// T3: Untagged address of header word (amo's do not support offsets).
__ li(TMP2, ~(1 << target::UntaggedObject::kNotMarkedBit));
__ li(TMP2, ~(1 << target::UntaggedObject::kOldAndNotMarkedBit));
#if XLEN == 32
__ amoandw(TMP2, TMP2, Address(T3, 0));
#else
__ amoandd(TMP2, TMP2, Address(T3, 0));
#endif
__ andi(TMP2, TMP2, 1 << target::UntaggedObject::kNotMarkedBit);
__ andi(TMP2, TMP2, 1 << target::UntaggedObject::kOldAndNotMarkedBit);
__ beqz(TMP2, &done); // Was already clear -> lost race.
__ lx(T4, Address(THR, target::Thread::marking_stack_block_offset()));


@ -1906,7 +1906,7 @@ static void GenerateWriteBarrierStubHelper(Assembler* assembler, bool cards) {
__ j(ZERO, &skip_marking);
{
// Atomically clear kNotMarkedBit.
// Atomically clear kOldAndNotMarkedBit.
Label retry, done;
__ pushq(RAX); // Spill.
__ pushq(RCX); // Spill.
@ -1915,10 +1915,11 @@ static void GenerateWriteBarrierStubHelper(Assembler* assembler, bool cards) {
__ Bind(&retry);
__ movq(RCX, RAX);
__ testq(RCX, Immediate(1 << target::UntaggedObject::kNotMarkedBit));
__ testq(RCX, Immediate(1 << target::UntaggedObject::kOldAndNotMarkedBit));
__ j(ZERO, &done); // Marked by another thread.
__ andq(RCX, Immediate(~(1 << target::UntaggedObject::kNotMarkedBit)));
__ andq(RCX,
Immediate(~(1 << target::UntaggedObject::kOldAndNotMarkedBit)));
// Cmpxchgq: compare value = implicit operand RAX, new value = RCX.
// On failure, RAX is updated with the current value.
__ LockCmpxchgq(FieldAddress(TMP, target::Object::tags_offset()), RCX);


@ -1741,6 +1741,9 @@ TEST_CASE(DartAPI_ExternalStringCallback) {
TransitionNativeToVM transition(thread);
EXPECT_EQ(40, peer8);
EXPECT_EQ(41, peer16);
GCTestHelper::CollectOldSpace();
EXPECT_EQ(40, peer8);
EXPECT_EQ(41, peer16);
GCTestHelper::CollectNewSpace();
EXPECT_EQ(80, peer8);
EXPECT_EQ(82, peer16);
@ -3255,6 +3258,8 @@ TEST_CASE(DartAPI_ExternalTypedDataCallback) {
{
TransitionNativeToVM transition(thread);
EXPECT(peer == 0);
GCTestHelper::CollectOldSpace();
EXPECT(peer == 0);
GCTestHelper::CollectNewSpace();
EXPECT(peer == 42);
}
@ -4032,6 +4037,8 @@ TEST_CASE(DartAPI_WeakPersistentHandleCallback) {
}
{
TransitionNativeToVM transition(thread);
GCTestHelper::CollectOldSpace();
EXPECT(peer == 0);
GCTestHelper::CollectNewSpace();
EXPECT(peer == 42);
}
@ -4055,6 +4062,8 @@ TEST_CASE(DartAPI_FinalizableHandleCallback) {
}
{
TransitionNativeToVM transition(thread);
GCTestHelper::CollectOldSpace();
EXPECT(peer == 0);
GCTestHelper::CollectNewSpace();
EXPECT(peer == 42);
}
@ -4142,6 +4151,8 @@ TEST_CASE(DartAPI_WeakPersistentHandleCallbackSelfDelete) {
}
{
TransitionNativeToVM transition(thread);
GCTestHelper::CollectOldSpace();
EXPECT(peer == 0);
GCTestHelper::CollectNewSpace();
EXPECT(peer == 42);
ASSERT(delete_on_finalization == nullptr);
@ -4176,8 +4187,8 @@ VM_UNIT_TEST_CASE(DartAPI_FinalizableHandlesCallbackShutdown) {
TEST_CASE(DartAPI_WeakPersistentHandleExternalAllocationSize) {
Heap* heap = IsolateGroup::Current()->heap();
EXPECT_EQ(heap->ExternalInWords(Heap::kNew), 0);
EXPECT_EQ(heap->ExternalInWords(Heap::kOld), 0);
EXPECT(heap->ExternalInWords(Heap::kNew) == 0);
EXPECT(heap->ExternalInWords(Heap::kOld) == 0);
Dart_WeakPersistentHandle weak1 = nullptr;
const intptr_t kWeak1ExternalSize = 1 * KB;
{
@ -4205,26 +4216,20 @@ TEST_CASE(DartAPI_WeakPersistentHandleExternalAllocationSize) {
}
{
TransitionNativeToVM transition(thread);
EXPECT_EQ(heap->ExternalInWords(Heap::kNew),
(kWeak1ExternalSize + kWeak2ExternalSize) / kWordSize);
EXPECT_EQ(heap->ExternalInWords(Heap::kOld), 0);
// Collect weakly referenced string.
GCTestHelper::CollectOldSpace();
EXPECT(heap->ExternalInWords(Heap::kNew) ==
(kWeak1ExternalSize + kWeak2ExternalSize) / kWordSize);
// Collect weakly referenced string, and promote strongly referenced string.
GCTestHelper::CollectNewSpace();
EXPECT_EQ(heap->ExternalInWords(Heap::kNew),
kWeak2ExternalSize / kWordSize);
EXPECT_EQ(heap->ExternalInWords(Heap::kOld), 0);
// Promote strongly referenced string.
GCTestHelper::CollectNewSpace();
EXPECT_EQ(heap->ExternalInWords(Heap::kNew), 0);
EXPECT_EQ(heap->ExternalInWords(Heap::kOld),
kWeak2ExternalSize / kWordSize);
EXPECT(heap->ExternalInWords(Heap::kNew) == 0);
EXPECT(heap->ExternalInWords(Heap::kOld) == kWeak2ExternalSize / kWordSize);
}
Dart_DeletePersistentHandle(strong_ref);
{
TransitionNativeToVM transition(thread);
GCTestHelper::CollectOldSpace();
EXPECT_EQ(heap->ExternalInWords(Heap::kNew), 0);
EXPECT_EQ(heap->ExternalInWords(Heap::kOld), 0);
EXPECT(heap->ExternalInWords(Heap::kOld) == 0);
}
Dart_DeleteWeakPersistentHandle(weak1);
Dart_DeleteWeakPersistentHandle(weak2);
@ -4232,8 +4237,8 @@ TEST_CASE(DartAPI_WeakPersistentHandleExternalAllocationSize) {
TEST_CASE(DartAPI_FinalizableHandleExternalAllocationSize) {
Heap* heap = IsolateGroup::Current()->heap();
EXPECT_EQ(heap->ExternalInWords(Heap::kNew), 0);
EXPECT_EQ(heap->ExternalInWords(Heap::kOld), 0);
EXPECT(heap->ExternalInWords(Heap::kNew) == 0);
EXPECT(heap->ExternalInWords(Heap::kOld) == 0);
const intptr_t kWeak1ExternalSize = 1 * KB;
{
Dart_EnterScope();
@ -4255,26 +4260,20 @@ TEST_CASE(DartAPI_FinalizableHandleExternalAllocationSize) {
}
{
TransitionNativeToVM transition(thread);
EXPECT_EQ(heap->ExternalInWords(Heap::kNew),
(kWeak1ExternalSize + kWeak2ExternalSize) / kWordSize);
EXPECT_EQ(heap->ExternalInWords(Heap::kOld), 0);
// Collect weakly referenced string.
GCTestHelper::CollectOldSpace();
EXPECT(heap->ExternalInWords(Heap::kNew) ==
(kWeak1ExternalSize + kWeak2ExternalSize) / kWordSize);
// Collect weakly referenced string, and promote strongly referenced string.
GCTestHelper::CollectNewSpace();
EXPECT_EQ(heap->ExternalInWords(Heap::kNew),
kWeak2ExternalSize / kWordSize);
EXPECT_EQ(heap->ExternalInWords(Heap::kOld), 0);
// Promote strongly referenced string.
GCTestHelper::CollectNewSpace();
EXPECT_EQ(heap->ExternalInWords(Heap::kNew), 0);
EXPECT_EQ(heap->ExternalInWords(Heap::kOld),
kWeak2ExternalSize / kWordSize);
EXPECT(heap->ExternalInWords(Heap::kNew) == 0);
EXPECT(heap->ExternalInWords(Heap::kOld) == kWeak2ExternalSize / kWordSize);
}
Dart_DeletePersistentHandle(strong_ref);
{
TransitionNativeToVM transition(thread);
GCTestHelper::CollectOldSpace();
EXPECT_EQ(heap->ExternalInWords(Heap::kNew), 0);
EXPECT_EQ(heap->ExternalInWords(Heap::kOld), 0);
EXPECT(heap->ExternalInWords(Heap::kOld) == 0);
}
}
@ -4729,6 +4728,20 @@ TEST_CASE(DartAPI_ImplicitReferencesNewSpace) {
Dart_ExitScope();
}
{
TransitionNativeToVM transition(thread);
GCTestHelper::CollectOldSpace();
}
{
Dart_EnterScope();
// Old space collection should not affect old space objects.
EXPECT(!Dart_IsNull(AsHandle(weak1)));
EXPECT(!Dart_IsNull(AsHandle(weak2)));
EXPECT(!Dart_IsNull(AsHandle(weak3)));
Dart_ExitScope();
}
Dart_DeleteWeakPersistentHandle(strong_weak);
Dart_DeleteWeakPersistentHandle(weak1);
Dart_DeleteWeakPersistentHandle(weak2);


@ -27,7 +27,8 @@ ForwardingCorpse* ForwardingCorpse::AsForwarder(uword addr, intptr_t size) {
tags = UntaggedObject::SizeTag::update(size, tags);
tags = UntaggedObject::ClassIdTag::update(kForwardingCorpse, tags);
bool is_old = (addr & kNewObjectAlignmentOffset) == kOldObjectAlignmentOffset;
tags = UntaggedObject::NotMarkedBit::update(true, tags);
tags = UntaggedObject::OldBit::update(is_old, tags);
tags = UntaggedObject::OldAndNotMarkedBit::update(is_old, tags);
tags = UntaggedObject::OldAndNotRememberedBit::update(is_old, tags);
tags = UntaggedObject::NewBit::update(!is_old, tags);


@ -25,8 +25,8 @@ FreeListElement* FreeListElement::AsElement(uword addr, intptr_t size) {
tags = UntaggedObject::SizeTag::update(size, tags);
tags = UntaggedObject::ClassIdTag::update(kFreeListElement, tags);
ASSERT((addr & kNewObjectAlignmentOffset) == kOldObjectAlignmentOffset);
tags = UntaggedObject::AlwaysSetBit::update(true, tags);
tags = UntaggedObject::NotMarkedBit::update(true, tags);
tags = UntaggedObject::OldBit::update(true, tags);
tags = UntaggedObject::OldAndNotMarkedBit::update(true, tags);
tags = UntaggedObject::OldAndNotRememberedBit::update(true, tags);
tags = UntaggedObject::NewBit::update(false, tags);
result->tags_ = tags;
@ -40,29 +40,6 @@ FreeListElement* FreeListElement::AsElement(uword addr, intptr_t size) {
// writable.
}
FreeListElement* FreeListElement::AsElementNew(uword addr, intptr_t size) {
ASSERT(size >= kObjectAlignment);
ASSERT(Utils::IsAligned(size, kObjectAlignment));
FreeListElement* result = reinterpret_cast<FreeListElement*>(addr);
uword tags = 0;
tags = UntaggedObject::SizeTag::update(size, tags);
tags = UntaggedObject::ClassIdTag::update(kFreeListElement, tags);
ASSERT((addr & kNewObjectAlignmentOffset) == kNewObjectAlignmentOffset);
tags = UntaggedObject::AlwaysSetBit::update(true, tags);
tags = UntaggedObject::NotMarkedBit::update(true, tags);
tags = UntaggedObject::OldAndNotRememberedBit::update(false, tags);
tags = UntaggedObject::NewBit::update(true, tags);
result->tags_ = tags;
if (size > UntaggedObject::SizeTag::kMaxSizeTag) {
*result->SizeAddress() = size;
}
result->set_next(nullptr);
return result;
}
void FreeListElement::Init() {
ASSERT(sizeof(FreeListElement) == kObjectAlignment);
ASSERT(OFFSET_OF(FreeListElement, tags_) == Object::tags_offset());


@ -36,7 +36,6 @@ class FreeListElement {
}
static FreeListElement* AsElement(uword addr, intptr_t size);
static FreeListElement* AsElementNew(uword addr, intptr_t size);
static void Init();


@ -49,6 +49,7 @@ Heap::Heap(IsolateGroup* isolate_group,
new_space_(this, max_new_gen_semi_words),
old_space_(this, max_old_gen_words),
read_only_(false),
last_gc_was_old_space_(false),
assume_scavenge_will_fail_(false),
gc_on_nth_allocation_(kNoForcedGarbageCollection) {
UpdateGlobalMaxUsed();
@ -57,6 +58,8 @@ Heap::Heap(IsolateGroup* isolate_group,
old_weak_tables_[sel] = new WeakTable();
}
stats_.num_ = 0;
stats_.state_ = kInitial;
stats_.reachability_barrier_ = 0;
}
Heap::~Heap() {
@ -137,7 +140,7 @@ uword Heap::AllocateOld(Thread* thread, intptr_t size, bool is_exec) {
}
// All GC tasks finished without allocating successfully. Collect both
// generations.
CollectOldSpaceGarbage(thread, GCType::kMarkSweep, GCReason::kOldSpace);
CollectMostGarbage(GCReason::kOldSpace, /*compact=*/false);
addr = old_space_.TryAllocate(size, is_exec);
if (addr != 0) {
return addr;
@ -154,7 +157,7 @@ uword Heap::AllocateOld(Thread* thread, intptr_t size, bool is_exec) {
return addr;
}
// Before throwing an out-of-memory error try a synchronous GC.
CollectOldSpaceGarbage(thread, GCType::kMarkCompact, GCReason::kOldSpace);
CollectAllGarbage(GCReason::kOldSpace, /*compact=*/true);
WaitForSweeperTasksAtSafepoint(thread);
}
uword addr = old_space_.TryAllocate(size, is_exec, PageSpace::kForceGrowth);
@ -229,6 +232,9 @@ void Heap::CheckExternalGC(Thread* thread) {
}
if (old_space_.ReachedHardThreshold()) {
if (last_gc_was_old_space_) {
CollectNewSpaceGarbage(thread, GCType::kScavenge, GCReason::kFull);
}
CollectGarbage(thread, GCType::kMarkSweep, GCReason::kExternal);
} else {
CheckConcurrentMarking(thread, GCReason::kExternal, 0);
@ -474,6 +480,7 @@ void Heap::CollectNewSpaceGarbage(Thread* thread,
#if defined(SUPPORT_TIMELINE)
PrintStatsToTimeline(&tbes, reason);
#endif
last_gc_was_old_space_ = false;
}
if (type == GCType::kScavenge && reason == GCReason::kNewSpace) {
if (old_space_.ReachedHardThreshold()) {
@ -540,6 +547,7 @@ void Heap::CollectOldSpaceGarbage(Thread* thread,
isolate->catch_entry_moves_cache()->Clear();
},
/*at_safepoint=*/true);
last_gc_was_old_space_ = true;
assume_scavenge_will_fail_ = false;
}
}
@ -559,8 +567,19 @@ void Heap::CollectGarbage(Thread* thread, GCType type, GCReason reason) {
}
}
void Heap::CollectMostGarbage(GCReason reason, bool compact) {
Thread* thread = Thread::Current();
CollectNewSpaceGarbage(thread, GCType::kScavenge, reason);
CollectOldSpaceGarbage(
thread, compact ? GCType::kMarkCompact : GCType::kMarkSweep, reason);
}
void Heap::CollectAllGarbage(GCReason reason, bool compact) {
Thread* thread = Thread::Current();
// New space is evacuated so this GC will collect all dead objects
// kept alive by a cross-generational pointer.
CollectNewSpaceGarbage(thread, GCType::kEvacuate, reason);
if (thread->is_marking()) {
// If incremental marking is happening, we need to finish the GC cycle
// and perform a follow-up GC to purge any "floating garbage" that may be
@ -605,6 +624,19 @@ void Heap::CheckConcurrentMarking(Thread* thread,
return;
case PageSpace::kDone:
if (old_space_.ReachedSoftThreshold()) {
// New-space objects are roots during old-space GC. This means that even
// unreachable new-space objects prevent old-space objects they
// reference from being collected during an old-space GC. Normally this
// is not an issue because new-space GCs run much more frequently than
// old-space GCs. If new-space allocation is low and direct old-space
// allocation is high, which can happen in a program that allocates
// large objects and little else, old-space can fill up with unreachable
// objects until the next new-space GC. This check is the
// concurrent-marking equivalent of the new-space GC before
// synchronous marking in CollectMostGarbage.
if (last_gc_was_old_space_) {
CollectNewSpaceGarbage(thread, GCType::kScavenge, GCReason::kFull);
}
StartConcurrentMarking(thread, reason);
}
return;


@ -54,6 +54,20 @@ class Heap {
kNumWeakSelectors
};
// States for a state machine that represents the worst-case set of GCs
// that an unreachable object could survive before being collected:
// a new-space object that is involved in a cycle with an old-space object
// is copied to survivor space, then promoted during concurrent marking,
// and finally proven unreachable in the next round of old-gen marking.
// We ignore the case of unreachable-but-not-yet-collected objects being
// made reachable again by allInstances.
enum LeakCountState {
kInitial = 0,
kFirstScavenge,
kSecondScavenge,
kMarkingStart,
};
// Pattern for unused new space and swept old space.
static constexpr uint8_t kZapByte = 0xf3;
@ -106,10 +120,18 @@ class Heap {
// Collect a single generation.
void CollectGarbage(Thread* thread, GCType type, GCReason reason);
// Collect both generations by performing a mark-sweep. If incremental marking
// was in progress, perform another mark-sweep. This function will collect all
// unreachable objects, including those in inter-generational cycles or stored
// during incremental marking.
// Collect both generations by performing a scavenge followed by a
// mark-sweep. This function may not collect all unreachable objects. Because
// mark-sweep treats new space as roots, a cycle between unreachable old and
// new objects will not be collected until the new objects are promoted.
// Verification based on heap iteration should instead use CollectAllGarbage.
void CollectMostGarbage(GCReason reason = GCReason::kFull,
bool compact = false);
// Collect both generations by performing an evacuation followed by a
// mark-sweep. If incremental marking was in progress, perform another
// mark-sweep. This function will collect all unreachable objects, including
// those in inter-generational cycles or stored during incremental marking.
void CollectAllGarbage(GCReason reason = GCReason::kFull,
bool compact = false);
@ -267,7 +289,7 @@ class Heap {
}
#endif // PRODUCT
intptr_t ReachabilityBarrier() { return old_space_.collections(); }
intptr_t ReachabilityBarrier() { return stats_.reachability_barrier_; }
IsolateGroup* isolate_group() const { return isolate_group_; }
bool is_vm_isolate() const { return is_vm_isolate_; }
@ -287,6 +309,8 @@ class Heap {
intptr_t num_;
GCType type_;
GCReason reason_;
LeakCountState state_; // State to track finalization of GCed object.
intptr_t reachability_barrier_; // Tracks reachability of GCed objects.
class Data : public ValueObject {
public:
@ -363,6 +387,7 @@ class Heap {
// This heap is in read-only mode: No allocation is allowed.
bool read_only_;
bool last_gc_was_old_space_;
bool assume_scavenge_will_fail_;
static constexpr intptr_t kNoForcedGarbageCollection = -1;


@ -955,9 +955,9 @@ ISOLATE_UNIT_TEST_CASE(WeakProperty_Generations) {
WeakProperty_Generations(kNew, kNew, kNew, true, true, true);
WeakProperty_Generations(kNew, kNew, kOld, true, true, true);
WeakProperty_Generations(kNew, kNew, kImm, true, true, true);
WeakProperty_Generations(kNew, kOld, kNew, false, true, true);
WeakProperty_Generations(kNew, kOld, kOld, false, true, true);
WeakProperty_Generations(kNew, kOld, kImm, false, true, true);
WeakProperty_Generations(kNew, kOld, kNew, false, false, true);
WeakProperty_Generations(kNew, kOld, kOld, false, false, true);
WeakProperty_Generations(kNew, kOld, kImm, false, false, true);
WeakProperty_Generations(kNew, kImm, kNew, false, false, false);
WeakProperty_Generations(kNew, kImm, kOld, false, false, false);
WeakProperty_Generations(kNew, kImm, kImm, false, false, false);
@ -1036,7 +1036,7 @@ ISOLATE_UNIT_TEST_CASE(WeakReference_Generations) {
FLAG_early_tenuring_threshold = 100; // I.e., off.
WeakReference_Generations(kNew, kNew, true, true, true);
WeakReference_Generations(kNew, kOld, false, true, true);
WeakReference_Generations(kNew, kOld, false, false, true);
WeakReference_Generations(kNew, kImm, false, false, false);
WeakReference_Generations(kOld, kNew, true, true, true);
WeakReference_Generations(kOld, kOld, false, true, true);
@ -1107,7 +1107,7 @@ ISOLATE_UNIT_TEST_CASE(WeakArray_Generations) {
FLAG_early_tenuring_threshold = 100; // I.e., off.
WeakArray_Generations(kNew, kNew, true, true, true);
WeakArray_Generations(kNew, kOld, false, true, true);
WeakArray_Generations(kNew, kOld, false, false, true);
WeakArray_Generations(kNew, kImm, false, false, false);
WeakArray_Generations(kOld, kNew, true, true, true);
WeakArray_Generations(kOld, kOld, false, true, true);
@ -1178,7 +1178,7 @@ ISOLATE_UNIT_TEST_CASE(FinalizerEntry_Generations) {
FLAG_early_tenuring_threshold = 100; // I.e., off.
FinalizerEntry_Generations(kNew, kNew, true, true, true);
FinalizerEntry_Generations(kNew, kOld, false, true, true);
FinalizerEntry_Generations(kNew, kOld, false, false, true);
FinalizerEntry_Generations(kNew, kImm, false, false, false);
FinalizerEntry_Generations(kOld, kNew, true, true, true);
FinalizerEntry_Generations(kOld, kOld, false, true, true);


@ -30,22 +30,18 @@ class MarkingVisitorBase : public ObjectPointerVisitor {
MarkingVisitorBase(IsolateGroup* isolate_group,
PageSpace* page_space,
MarkingStack* marking_stack,
MarkingStack* new_marking_stack,
MarkingStack* deferred_marking_stack)
: ObjectPointerVisitor(isolate_group),
page_space_(page_space),
work_list_(marking_stack),
new_work_list_(new_marking_stack),
deferred_work_list_(deferred_marking_stack),
marked_bytes_(0),
marked_micros_(0),
concurrent_(true) {}
marked_micros_(0) {}
~MarkingVisitorBase() { ASSERT(delayed_.IsEmpty()); }
uintptr_t marked_bytes() const { return marked_bytes_; }
int64_t marked_micros() const { return marked_micros_; }
void AddMicros(int64_t micros) { marked_micros_ += micros; }
void set_concurrent(bool value) { concurrent_ = value; }
#ifdef DEBUG
constexpr static const char* const kName = "Marker";
@ -53,6 +49,7 @@ class MarkingVisitorBase : public ObjectPointerVisitor {
static bool IsMarked(ObjectPtr raw) {
ASSERT(raw->IsHeapObject());
ASSERT(raw->IsOldObject());
return raw->untag()->IsMarked();
}
@ -65,9 +62,10 @@ class MarkingVisitorBase : public ObjectPointerVisitor {
ObjectPtr raw_key = cur_weak->untag()->key();
// Reset the next pointer in the weak property.
cur_weak->untag()->next_seen_by_gc_ = WeakProperty::null();
if (raw_key->IsImmediateObject() || raw_key->untag()->IsMarked()) {
if (raw_key->IsImmediateOrNewObject() || raw_key->untag()->IsMarked()) {
ObjectPtr raw_val = cur_weak->untag()->value();
if (!raw_val->IsImmediateObject() && !raw_val->untag()->IsMarked()) {
if (!raw_val->IsImmediateOrNewObject() &&
!raw_val->untag()->IsMarked()) {
more_to_mark = true;
}
@ -87,59 +85,29 @@ class MarkingVisitorBase : public ObjectPointerVisitor {
void DrainMarkingStackWithPauseChecks() {
do {
ObjectPtr obj;
while (work_list_.Pop(&obj)) {
if (obj->IsNewObject()) {
Page* page = Page::Of(obj);
uword top = page->original_top();
uword end = page->original_end();
uword addr = static_cast<uword>(obj);
if (top <= addr && addr < end) {
new_work_list_.Push(obj);
if (UNLIKELY(page_space_->pause_concurrent_marking())) {
work_list_.Flush();
new_work_list_.Flush();
deferred_work_list_.Flush();
page_space_->YieldConcurrentMarking();
}
continue;
}
}
const intptr_t class_id = obj->GetClassId();
ASSERT(class_id != kIllegalCid);
ASSERT(class_id != kFreeListElement);
ASSERT(class_id != kForwardingCorpse);
ObjectPtr raw_obj;
while (work_list_.Pop(&raw_obj)) {
const intptr_t class_id = raw_obj->GetClassId();
intptr_t size;
if (class_id == kWeakPropertyCid) {
size = ProcessWeakProperty(static_cast<WeakPropertyPtr>(obj));
size = ProcessWeakProperty(static_cast<WeakPropertyPtr>(raw_obj));
} else if (class_id == kWeakReferenceCid) {
size = ProcessWeakReference(static_cast<WeakReferencePtr>(obj));
size = ProcessWeakReference(static_cast<WeakReferencePtr>(raw_obj));
} else if (class_id == kWeakArrayCid) {
size = ProcessWeakArray(static_cast<WeakArrayPtr>(obj));
size = ProcessWeakArray(static_cast<WeakArrayPtr>(raw_obj));
} else if (class_id == kFinalizerEntryCid) {
size = ProcessFinalizerEntry(static_cast<FinalizerEntryPtr>(obj));
size = ProcessFinalizerEntry(static_cast<FinalizerEntryPtr>(raw_obj));
} else {
size = obj->untag()->VisitPointersNonvirtual(this);
}
if (!obj->IsNewObject()) {
marked_bytes_ += size;
size = raw_obj->untag()->VisitPointersNonvirtual(this);
}
marked_bytes_ += size;
if (UNLIKELY(page_space_->pause_concurrent_marking())) {
work_list_.Flush();
new_work_list_.Flush();
deferred_work_list_.Flush();
page_space_->YieldConcurrentMarking();
}
}
} while (ProcessPendingWeakProperties());
ASSERT(work_list_.IsLocalEmpty());
// In case of scavenge before final marking.
new_work_list_.Flush();
deferred_work_list_.Flush();
}
void DrainMarkingStack() {
@ -168,53 +136,30 @@ class MarkingVisitorBase : public ObjectPointerVisitor {
bool ProcessMarkingStack(intptr_t remaining_budget) {
do {
// First drain the marking stacks.
ObjectPtr obj;
while (work_list_.Pop(&obj)) {
if (sync && concurrent_ && obj->IsNewObject()) {
Page* page = Page::Of(obj);
uword top = page->original_top();
uword end = page->original_end();
uword addr = static_cast<uword>(obj);
if (top <= addr && addr < end) {
new_work_list_.Push(obj);
// We did some work routing this object, but didn't look at any of
// its slots.
intptr_t size = kObjectAlignment;
remaining_budget -= size;
if (remaining_budget < 0) {
return true; // More to mark.
}
continue;
}
}
const intptr_t class_id = obj->GetClassId();
ASSERT(class_id != kIllegalCid);
ASSERT(class_id != kFreeListElement);
ASSERT(class_id != kForwardingCorpse);
ObjectPtr raw_obj;
while (work_list_.Pop(&raw_obj)) {
const intptr_t class_id = raw_obj->GetClassId();
intptr_t size;
if (class_id == kWeakPropertyCid) {
size = ProcessWeakProperty(static_cast<WeakPropertyPtr>(obj));
size = ProcessWeakProperty(static_cast<WeakPropertyPtr>(raw_obj));
} else if (class_id == kWeakReferenceCid) {
size = ProcessWeakReference(static_cast<WeakReferencePtr>(obj));
size = ProcessWeakReference(static_cast<WeakReferencePtr>(raw_obj));
} else if (class_id == kWeakArrayCid) {
size = ProcessWeakArray(static_cast<WeakArrayPtr>(obj));
size = ProcessWeakArray(static_cast<WeakArrayPtr>(raw_obj));
} else if (class_id == kFinalizerEntryCid) {
size = ProcessFinalizerEntry(static_cast<FinalizerEntryPtr>(obj));
size = ProcessFinalizerEntry(static_cast<FinalizerEntryPtr>(raw_obj));
} else {
if ((class_id == kArrayCid) || (class_id == kImmutableArrayCid)) {
size = obj->untag()->HeapSize();
size = raw_obj->untag()->HeapSize();
if (size > remaining_budget) {
work_list_.Push(obj);
work_list_.Push(raw_obj);
return true; // More to mark.
}
}
size = obj->untag()->VisitPointersNonvirtual(this);
}
if (!obj->IsNewObject()) {
marked_bytes_ += size;
size = raw_obj->untag()->VisitPointersNonvirtual(this);
}
marked_bytes_ += size;
remaining_budget -= size;
if (remaining_budget < 0) {
return true; // More to mark.
@ -267,7 +212,8 @@ class MarkingVisitorBase : public ObjectPointerVisitor {
ObjectPtr raw_key =
LoadCompressedPointerIgnoreRace(&raw_weak->untag()->key_)
.Decompress(raw_weak->heap_base());
if (raw_key->IsHeapObject() && !raw_key->untag()->IsMarked()) {
if (raw_key->IsHeapObject() && raw_key->IsOldObject() &&
!raw_key->untag()->IsMarked()) {
// Key was white. Enqueue the weak property.
ASSERT(IsMarked(raw_weak));
delayed_.weak_properties.Enqueue(raw_weak);
@ -283,7 +229,8 @@ class MarkingVisitorBase : public ObjectPointerVisitor {
ObjectPtr raw_target =
LoadCompressedPointerIgnoreRace(&raw_weak->untag()->target_)
.Decompress(raw_weak->heap_base());
if (raw_target->IsHeapObject() && !raw_target->untag()->IsMarked()) {
if (raw_target->IsHeapObject() && raw_target->IsOldObject() &&
!raw_target->untag()->IsMarked()) {
// Target was white. Enqueue the weak reference. It is potentially dead.
// It might still be made alive by weak properties in next rounds.
ASSERT(IsMarked(raw_weak));
@ -314,11 +261,9 @@ class MarkingVisitorBase : public ObjectPointerVisitor {
}
void ProcessDeferredMarking() {
TIMELINE_FUNCTION_GC_DURATION(Thread::Current(), "ProcessDeferredMarking");
ObjectPtr obj;
while (deferred_work_list_.Pop(&obj)) {
ASSERT(obj->IsHeapObject());
ObjectPtr raw_obj;
while (deferred_work_list_.Pop(&raw_obj)) {
ASSERT(raw_obj->IsHeapObject() && raw_obj->IsOldObject());
// We need to scan objects even if they were already scanned via ordinary
// marking. An object may have changed since its ordinary scan and been
// added to deferred marking stack to compensate for write-barrier
@ -335,13 +280,11 @@ class MarkingVisitorBase : public ObjectPointerVisitor {
// encounters it during ordinary marking. This is in the same spirit as
// the eliminated write barrier, which would have added the newly written
// key and value to the ordinary marking stack.
intptr_t size = obj->untag()->VisitPointersNonvirtual(this);
intptr_t size = raw_obj->untag()->VisitPointersNonvirtual(this);
// Add the size only if we win the marking race to prevent
// double-counting.
if (TryAcquireMarkBit(obj)) {
if (!obj->IsNewObject()) {
marked_bytes_ += size;
}
if (TryAcquireMarkBit(raw_obj)) {
marked_bytes_ += size;
}
}
}
@ -350,7 +293,6 @@ class MarkingVisitorBase : public ObjectPointerVisitor {
// after this will trigger an error.
void FinalizeMarking() {
work_list_.Finalize();
new_work_list_.Finalize();
deferred_work_list_.Finalize();
MournFinalizerEntries();
// MournFinalizerEntries inserts newly discovered dead entries into the
@ -410,7 +352,7 @@ class MarkingVisitorBase : public ObjectPointerVisitor {
static bool ForwardOrSetNullIfCollected(ObjectPtr parent,
CompressedObjectPtr* slot) {
ObjectPtr target = slot->Decompress(parent->heap_base());
if (target->IsImmediateObject()) {
if (target->IsImmediateOrNewObject()) {
// Object not touched during this GC.
return false;
}
@ -429,7 +371,6 @@ class MarkingVisitorBase : public ObjectPointerVisitor {
void Flush(GCLinkedLists* global_list) {
work_list_.Flush();
new_work_list_.Flush();
deferred_work_list_.Flush();
delayed_.FlushInto(global_list);
}
@ -441,7 +382,6 @@ class MarkingVisitorBase : public ObjectPointerVisitor {
void AbandonWork() {
work_list_.AbandonWork();
new_work_list_.AbandonWork();
deferred_work_list_.AbandonWork();
delayed_.Release();
}
@ -449,47 +389,39 @@ class MarkingVisitorBase : public ObjectPointerVisitor {
void FinalizeIncremental(GCLinkedLists* global_list) {
work_list_.Flush();
work_list_.Finalize();
new_work_list_.Flush();
new_work_list_.Finalize();
deferred_work_list_.Flush();
deferred_work_list_.Finalize();
delayed_.FlushInto(global_list);
}
GCLinkedLists* delayed() { return &delayed_; }
private:
void PushMarked(ObjectPtr obj) {
ASSERT(obj->IsHeapObject());
void PushMarked(ObjectPtr raw_obj) {
ASSERT(raw_obj->IsHeapObject());
ASSERT(raw_obj->IsOldObject());
// Push the marked object on the marking stack.
ASSERT(obj->untag()->IsMarked());
work_list_.Push(obj);
ASSERT(raw_obj->untag()->IsMarked());
work_list_.Push(raw_obj);
}
static bool TryAcquireMarkBit(ObjectPtr obj) {
if (FLAG_write_protect_code && obj->IsInstructions()) {
static bool TryAcquireMarkBit(ObjectPtr raw_obj) {
if (FLAG_write_protect_code && raw_obj->IsInstructions()) {
// A non-writable alias mapping may exist for instruction pages.
obj = Page::ToWritable(obj);
raw_obj = Page::ToWritable(raw_obj);
}
if (!sync) {
obj->untag()->SetMarkBitUnsynchronized();
raw_obj->untag()->SetMarkBitUnsynchronized();
return true;
} else {
return obj->untag()->TryAcquireMarkBit();
return raw_obj->untag()->TryAcquireMarkBit();
}
}
DART_FORCE_INLINE
void MarkObject(ObjectPtr obj) {
if (obj->IsImmediateObject()) {
return;
}
if (sync && concurrent_ && obj->IsNewObject()) {
if (TryAcquireMarkBit(obj)) {
PushMarked(obj);
}
void MarkObject(ObjectPtr raw_obj) {
// Fast exit if the raw object is immediate or in new space. No memory
// access.
if (raw_obj->IsImmediateOrNewObject()) {
return;
}
@ -503,36 +435,34 @@ class MarkingVisitorBase : public ObjectPointerVisitor {
// was allocated after the concurrent marker started. It can read either a
// zero or the header of an object allocated black, both of which appear
// marked.
if (obj->untag()->IsMarkedIgnoreRace()) {
if (raw_obj->untag()->IsMarkedIgnoreRace()) {
return;
}
intptr_t class_id = obj->GetClassId();
intptr_t class_id = raw_obj->GetClassId();
ASSERT(class_id != kFreeListElement);
if (sync && UNLIKELY(class_id == kInstructionsCid)) {
// If this is the concurrent marker, this object may be non-writable due
// to W^X (--write-protect-code).
deferred_work_list_.Push(obj);
deferred_work_list_.Push(raw_obj);
return;
}
if (!TryAcquireMarkBit(obj)) {
if (!TryAcquireMarkBit(raw_obj)) {
// Already marked.
return;
}
PushMarked(obj);
PushMarked(raw_obj);
}
PageSpace* page_space_;
MarkerWorkList work_list_;
MarkerWorkList new_work_list_;
MarkerWorkList deferred_work_list_;
GCLinkedLists delayed_;
uintptr_t marked_bytes_;
int64_t marked_micros_;
bool concurrent_;
DISALLOW_IMPLICIT_CONSTRUCTORS(MarkingVisitorBase);
};
@ -540,11 +470,11 @@ class MarkingVisitorBase : public ObjectPointerVisitor {
typedef MarkingVisitorBase<false> UnsyncMarkingVisitor;
typedef MarkingVisitorBase<true> SyncMarkingVisitor;
static bool IsUnreachable(const ObjectPtr obj) {
if (obj->IsImmediateObject()) {
static bool IsUnreachable(const ObjectPtr raw_obj) {
if (raw_obj->IsImmediateOrNewObject()) {
return false;
}
return !obj->untag()->IsMarked();
return !raw_obj->untag()->IsMarked();
}
class MarkingWeakVisitor : public HandleVisitor {
@ -554,8 +484,8 @@ class MarkingWeakVisitor : public HandleVisitor {
void VisitHandle(uword addr) override {
FinalizablePersistentHandle* handle =
reinterpret_cast<FinalizablePersistentHandle*>(addr);
ObjectPtr obj = handle->ptr();
if (IsUnreachable(obj)) {
ObjectPtr raw_obj = handle->ptr();
if (IsUnreachable(raw_obj)) {
handle->UpdateUnreachable(thread()->isolate_group());
}
}
@ -566,10 +496,17 @@ class MarkingWeakVisitor : public HandleVisitor {
void GCMarker::Prologue() {
isolate_group_->ReleaseStoreBuffers();
marking_stack_.PushAll(new_marking_stack_.PopAll());
if (heap_->stats_.state_ == Heap::kSecondScavenge) {
heap_->stats_.state_ = Heap::kMarkingStart;
}
}
void GCMarker::Epilogue() {}
void GCMarker::Epilogue() {
if (heap_->stats_.state_ == Heap::kMarkingStart) {
heap_->stats_.state_ = Heap::kInitial;
heap_->stats_.reachability_barrier_ += 1;
}
}
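
Prologue and Epilogue above, together with the state updates added to Scavenge later in this diff, form a small state machine that only advances the reachability barrier once a full scavenge/scavenge/mark round has completed. A compact sketch of that machine (the evacuation shortcut is omitted; enum names mirror the diff, but the harness itself is illustrative, not VM code):

#include <cstdint>

enum HeapState { kInitial, kFirstScavenge, kSecondScavenge, kMarkingStart };

struct Stats {
  HeapState state = kInitial;
  int64_t reachability_barrier = 0;
};

void OnScavenge(Stats* s) {
  if (s->state == kInitial) s->state = kFirstScavenge;
  else if (s->state == kFirstScavenge) s->state = kSecondScavenge;
}

void OnMarkStart(Stats* s) {      // Corresponds to GCMarker::Prologue.
  if (s->state == kSecondScavenge) s->state = kMarkingStart;
}

void OnMarkEnd(Stats* s) {        // Corresponds to GCMarker::Epilogue.
  if (s->state == kMarkingStart) {
    s->state = kInitial;
    s->reachability_barrier++;    // One full round of GC work has happened.
  }
}
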
enum RootSlices {
kIsolate = 0,
@ -582,6 +519,10 @@ void GCMarker::ResetSlices() {
root_slices_started_ = 0;
root_slices_finished_ = 0;
root_slices_count_ = kNumFixedRootSlices;
new_page_ = heap_->new_space()->head();
for (Page* p = new_page_; p != nullptr; p = p->next()) {
root_slices_count_++;
}
weak_slices_started_ = 0;
}
@ -601,6 +542,17 @@ void GCMarker::IterateRoots(ObjectPointerVisitor* visitor) {
visitor, ValidationPolicy::kDontValidateFrames);
break;
}
default: {
Page* page;
{
MonitorLocker ml(&root_slices_monitor_);
page = new_page_;
ASSERT(page != nullptr);
new_page_ = page->next();
}
TIMELINE_FUNCTION_GC_DURATION(Thread::Current(), "ProcessNewSpace");
page->VisitObjectPointers(visitor);
}
}
MonitorLocker ml(&root_slices_monitor_);
@ -668,23 +620,8 @@ void GCMarker::ProcessWeakTables(Thread* thread) {
for (intptr_t i = 0; i < size; i++) {
if (table->IsValidEntryAtExclusive(i)) {
// The object has been collected.
ObjectPtr obj = table->ObjectAtExclusive(i);
if (obj->IsHeapObject() && !obj->untag()->IsMarked()) {
if (cleanup != nullptr) {
cleanup(reinterpret_cast<void*>(table->ValueAtExclusive(i)));
}
table->InvalidateAtExclusive(i);
}
}
}
table =
heap_->GetWeakTable(Heap::kNew, static_cast<Heap::WeakSelector>(sel));
size = table->size();
for (intptr_t i = 0; i < size; i++) {
if (table->IsValidEntryAtExclusive(i)) {
// The object has been collected.
ObjectPtr obj = table->ObjectAtExclusive(i);
if (obj->IsHeapObject() && !obj->untag()->IsMarked()) {
ObjectPtr raw_obj = table->ObjectAtExclusive(i);
if (raw_obj->IsHeapObject() && !raw_obj->untag()->IsMarked()) {
if (cleanup != nullptr) {
cleanup(reinterpret_cast<void*>(table->ValueAtExclusive(i)));
}
@ -699,18 +636,18 @@ void GCMarker::ProcessRememberedSet(Thread* thread) {
TIMELINE_FUNCTION_GC_DURATION(thread, "ProcessRememberedSet");
// Filter collected objects from the remembered set.
StoreBuffer* store_buffer = isolate_group_->store_buffer();
StoreBufferBlock* reading = store_buffer->PopAll();
StoreBufferBlock* reading = store_buffer->TakeBlocks();
StoreBufferBlock* writing = store_buffer->PopNonFullBlock();
while (reading != nullptr) {
StoreBufferBlock* next = reading->next();
// Generated code appends to store buffers; tell MemorySanitizer.
MSAN_UNPOISON(reading, sizeof(*reading));
while (!reading->IsEmpty()) {
ObjectPtr obj = reading->Pop();
ASSERT(!obj->IsForwardingCorpse());
ASSERT(obj->untag()->IsRemembered());
if (obj->untag()->IsMarked()) {
writing->Push(obj);
ObjectPtr raw_object = reading->Pop();
ASSERT(!raw_object->IsForwardingCorpse());
ASSERT(raw_object->untag()->IsRemembered());
if (raw_object->untag()->IsMarked()) {
writing->Push(raw_object);
if (writing->IsFull()) {
store_buffer->PushBlock(writing, StoreBuffer::kIgnoreThreshold);
writing = store_buffer->PopNonFullBlock();
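
The loop above rebuilds the remembered set, keeping only entries whose objects survived the mark phase. Reduced to its essence, with a plain vector standing in for the chained StoreBufferBlocks (illustrative only):

#include <vector>

struct Obj { bool marked = false; };

// Keep only remembered-set entries whose objects are still live (marked).
void FilterRememberedSet(std::vector<Obj*>* store_buffer) {
  std::vector<Obj*> kept;
  for (Obj* obj : *store_buffer) {
    if (obj->marked) {
      kept.push_back(obj);  // Survivor: its entry stays in the new buffer.
    }
    // Unmarked objects are garbage; their entries are simply dropped.
  }
  store_buffer->swap(kept);
}
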
@ -732,9 +669,9 @@ class ObjectIdRingClearPointerVisitor : public ObjectPointerVisitor {
void VisitPointers(ObjectPtr* first, ObjectPtr* last) override {
for (ObjectPtr* current = first; current <= last; current++) {
ObjectPtr obj = *current;
ASSERT(obj->IsHeapObject());
if (!obj->untag()->IsMarked()) {
ObjectPtr raw_obj = *current;
ASSERT(raw_obj->IsHeapObject());
if (raw_obj->IsOldObject() && !raw_obj->untag()->IsMarked()) {
// Object has become garbage. Replace it with null.
*current = Object::null();
}
@ -799,7 +736,6 @@ class ParallelMarkTask : public ThreadPool::Task {
// Phase 1: Iterate over roots and drain marking stack in tasks.
num_busy_->fetch_add(1u);
visitor_->set_concurrent(false);
marker_->IterateRoots(visitor_);
visitor_->ProcessDeferredMarking();
@ -955,7 +891,6 @@ GCMarker::GCMarker(IsolateGroup* isolate_group, Heap* heap)
: isolate_group_(isolate_group),
heap_(heap),
marking_stack_(),
new_marking_stack_(),
deferred_marking_stack_(),
global_list_(),
visitors_(),
@ -1003,9 +938,8 @@ void GCMarker::StartConcurrentMark(PageSpace* page_space) {
ResetSlices();
for (intptr_t i = 0; i < num_tasks; i++) {
ASSERT(visitors_[i] == nullptr);
SyncMarkingVisitor* visitor =
new SyncMarkingVisitor(isolate_group_, page_space, &marking_stack_,
&new_marking_stack_, &deferred_marking_stack_);
SyncMarkingVisitor* visitor = new SyncMarkingVisitor(
isolate_group_, page_space, &marking_stack_, &deferred_marking_stack_);
visitors_[i] = visitor;
if (i < (num_tasks - 1)) {
@ -1045,7 +979,7 @@ void GCMarker::IncrementalMarkWithUnlimitedBudget(PageSpace* page_space) {
"IncrementalMarkWithUnlimitedBudget");
SyncMarkingVisitor visitor(isolate_group_, page_space, &marking_stack_,
&new_marking_stack_, &deferred_marking_stack_);
&deferred_marking_stack_);
int64_t start = OS::GetCurrentMonotonicMicros();
visitor.DrainMarkingStack();
int64_t stop = OS::GetCurrentMonotonicMicros();
@ -1069,7 +1003,7 @@ void GCMarker::IncrementalMarkWithSizeBudget(PageSpace* page_space,
"IncrementalMarkWithSizeBudget");
SyncMarkingVisitor visitor(isolate_group_, page_space, &marking_stack_,
&new_marking_stack_, &deferred_marking_stack_);
&deferred_marking_stack_);
int64_t start = OS::GetCurrentMonotonicMicros();
visitor.ProcessMarkingStack(size);
int64_t stop = OS::GetCurrentMonotonicMicros();
@ -1088,7 +1022,7 @@ void GCMarker::IncrementalMarkWithTimeBudget(PageSpace* page_space,
"IncrementalMarkWithTimeBudget");
SyncMarkingVisitor visitor(isolate_group_, page_space, &marking_stack_,
&new_marking_stack_, &deferred_marking_stack_);
&deferred_marking_stack_);
int64_t start = OS::GetCurrentMonotonicMicros();
visitor.ProcessMarkingStackUntil(deadline);
int64_t stop = OS::GetCurrentMonotonicMicros();
@ -1108,8 +1042,7 @@ class VerifyAfterMarkingVisitor : public ObjectVisitor,
: ObjectVisitor(), ObjectPointerVisitor(IsolateGroup::Current()) {}
void VisitObject(ObjectPtr obj) override {
if (obj->untag()->IsMarked()) {
current_ = obj;
if (obj->IsNewObject() || obj->untag()->IsMarked()) {
obj->untag()->VisitPointers(this);
}
}
@ -1117,11 +1050,10 @@ class VerifyAfterMarkingVisitor : public ObjectVisitor,
void VisitPointers(ObjectPtr* from, ObjectPtr* to) override {
for (ObjectPtr* ptr = from; ptr <= to; ptr++) {
ObjectPtr obj = *ptr;
if (obj->IsHeapObject() && !obj->untag()->IsMarked()) {
OS::PrintErr("object=0x%" Px ", slot=0x%" Px ", value=0x%" Px "\n",
static_cast<uword>(current_), reinterpret_cast<uword>(ptr),
static_cast<uword>(obj));
failed_ = true;
if (obj->IsHeapObject() && obj->IsOldObject() &&
!obj->untag()->IsMarked()) {
FATAL("Verifying after marking: Not marked: *0x%" Px " = 0x%" Px "\n",
reinterpret_cast<uword>(ptr), static_cast<uword>(obj));
}
}
}
@ -1132,21 +1064,14 @@ class VerifyAfterMarkingVisitor : public ObjectVisitor,
CompressedObjectPtr* to) override {
for (CompressedObjectPtr* ptr = from; ptr <= to; ptr++) {
ObjectPtr obj = ptr->Decompress(heap_base);
if (obj->IsHeapObject() && !obj->untag()->IsMarked()) {
OS::PrintErr("object=0x%" Px ", slot=0x%" Px ", value=0x%" Px "\n",
static_cast<uword>(current_), reinterpret_cast<uword>(ptr),
static_cast<uword>(obj));
failed_ = true;
if (obj->IsHeapObject() && obj->IsOldObject() &&
!obj->untag()->IsMarked()) {
FATAL("Verifying after marking: Not marked: *0x%" Px " = 0x%" Px "\n",
reinterpret_cast<uword>(ptr), static_cast<uword>(obj));
}
}
}
#endif
bool failed() const { return failed_; }
private:
ObjectPtr current_;
bool failed_ = false;
};
void GCMarker::MarkObjects(PageSpace* page_space) {
@ -1163,9 +1088,7 @@ void GCMarker::MarkObjects(PageSpace* page_space) {
int64_t start = OS::GetCurrentMonotonicMicros();
// Mark everything on main thread.
UnsyncMarkingVisitor visitor(isolate_group_, page_space, &marking_stack_,
&new_marking_stack_,
&deferred_marking_stack_);
visitor.set_concurrent(false);
ResetSlices();
IterateRoots(&visitor);
visitor.ProcessDeferredMarking();
@ -1195,9 +1118,9 @@ void GCMarker::MarkObjects(PageSpace* page_space) {
// Visitors may or may not have already been created depending on
// whether we did some concurrent marking.
if (visitor == nullptr) {
visitor = new SyncMarkingVisitor(isolate_group_, page_space,
&marking_stack_, &new_marking_stack_,
&deferred_marking_stack_);
visitor =
new SyncMarkingVisitor(isolate_group_, page_space,
&marking_stack_, &deferred_marking_stack_);
visitors_[i] = visitor;
}
@ -1243,19 +1166,9 @@ void GCMarker::MarkObjects(PageSpace* page_space) {
if (FLAG_verify_after_marking) {
VerifyAfterMarkingVisitor visitor;
heap_->VisitObjects(&visitor);
if (visitor.failed()) {
FATAL("verify after marking");
}
}
Epilogue();
}
void GCMarker::PruneWeak(Scavenger* scavenger) {
scavenger->PruneWeak(&global_list_);
for (intptr_t i = 0, n = FLAG_marker_tasks; i < n; i++) {
scavenger->PruneWeak(visitors_[i]->delayed());
}
}
} // namespace dart


@ -51,8 +51,6 @@ class GCMarker {
intptr_t marked_words() const { return marked_bytes_ >> kWordSizeLog2; }
intptr_t MarkedWordsPerMicro() const;
void PruneWeak(Scavenger* scavenger);
private:
void Prologue();
void Epilogue();
@ -71,11 +69,11 @@ class GCMarker {
IsolateGroup* const isolate_group_;
Heap* const heap_;
MarkingStack marking_stack_;
MarkingStack new_marking_stack_;
MarkingStack deferred_marking_stack_;
GCLinkedLists global_list_;
MarkingVisitorBase<true>** visitors_;
Page* new_page_;
Monitor root_slices_monitor_;
RelaxedAtomic<intptr_t> root_slices_started_;
intptr_t root_slices_finished_;
@ -87,9 +85,6 @@ class GCMarker {
friend class ConcurrentMarkTask;
friend class ParallelMarkTask;
friend class Scavenger;
template <bool sync>
friend class MarkingVisitorBase;
DISALLOW_IMPLICIT_CONSTRUCTORS(GCMarker);
};


@ -126,14 +126,6 @@ class Page {
return Utils::RoundUp(sizeof(Page), kObjectAlignment,
kNewObjectAlignmentOffset);
}
// These are "original" in the sense that they reflect TLAB boundaries when
// the TLAB was acquired, not the current boundaries. An object between
// original_top and top may still be in use by Dart code that has eliminated
// write barriers.
uword original_top() const { return LoadAcquire(&top_); }
uword original_end() const { return LoadRelaxed(&end_); }
static intptr_t original_top_offset() { return OFFSET_OF(Page, top_); }
static intptr_t original_end_offset() { return OFFSET_OF(Page, end_); }
// Warning: This does not work for objects on image pages because image pages
// are not aligned. However, it works for objects on large pages, because
@ -232,8 +224,6 @@ class Page {
void Acquire(Thread* thread) {
ASSERT(owner_ == nullptr);
owner_ = thread;
ASSERT(thread->top() == 0);
ASSERT(thread->end() == 0);
thread->set_top(top_);
thread->set_end(end_);
thread->set_true_end(end_);
@ -243,7 +233,7 @@ class Page {
owner_ = nullptr;
uword old_top = top_;
uword new_top = thread->top();
StoreRelease(&top_, new_top);
top_ = new_top;
thread->set_top(0);
thread->set_end(0);
thread->set_true_end(0);
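
Acquire and Release above hand a page's top/end range to a thread as its TLAB; allocation in between is a simple bump of the thread-local top, which Release writes back to the page. A minimal sketch of that bump-pointer allocation (names and units are illustrative, not part of this change):

#include <cstdint>

struct Tlab {
  uintptr_t top = 0;
  uintptr_t end = 0;
};

// Returns the address of the new object, or 0 if the TLAB is exhausted and
// the caller must acquire a fresh page.
uintptr_t TryAllocate(Tlab* tlab, intptr_t size) {
  uintptr_t addr = tlab->top;
  if (addr + size > tlab->end) return 0;
  tlab->top = addr + size;  // Bump the allocation pointer.
  return addr;
}
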


@ -1096,7 +1096,6 @@ void PageSpace::CollectGarbageHelper(Thread* thread,
}
bool can_verify;
SweepNew();
if (compact) {
Compact(thread);
set_phase(kDone);
@ -1139,20 +1138,6 @@ void PageSpace::CollectGarbageHelper(Thread* thread,
}
}
void PageSpace::SweepNew() {
// TODO(rmacnak): Run in parallel with SweepExecutable.
TIMELINE_FUNCTION_GC_DURATION(Thread::Current(), "SweepNew");
GCSweeper sweeper;
intptr_t free = 0;
for (Page* page = heap_->new_space()->head(); page != nullptr;
page = page->next()) {
page->Release();
free += sweeper.SweepNewPage(page);
}
heap_->new_space()->set_freed_in_words(free >> kWordSizeLog2);
}
void PageSpace::SweepLarge() {
TIMELINE_FUNCTION_GC_DURATION(Thread::Current(), "SweepLarge");


@ -344,8 +344,6 @@ class PageSpace {
bool IsObjectFromImagePages(ObjectPtr object);
GCMarker* marker() const { return marker_; }
private:
// Ids for time and data records in Heap::GCStats.
enum {
@ -406,7 +404,6 @@ class PageSpace {
void FreePages(Page* pages);
void CollectGarbageHelper(Thread* thread, bool compact, bool finalize);
void SweepNew();
void SweepLarge();
void Sweep(bool exclusive);
void ConcurrentSweep(IsolateGroup* isolate_group);


@ -69,7 +69,7 @@ void BlockStack<BlockSize>::Reset() {
}
template <int BlockSize>
typename BlockStack<BlockSize>::Block* BlockStack<BlockSize>::PopAll() {
typename BlockStack<BlockSize>::Block* BlockStack<BlockSize>::TakeBlocks() {
MonitorLocker ml(&monitor_);
while (!partial_.IsEmpty()) {
full_.Push(partial_.Pop());
@ -77,16 +77,6 @@ typename BlockStack<BlockSize>::Block* BlockStack<BlockSize>::PopAll() {
return full_.PopAll();
}
template <int BlockSize>
void BlockStack<BlockSize>::PushAll(Block* block) {
while (block != nullptr) {
Block* next = block->next();
block->set_next(nullptr);
PushBlockImpl(block);
block = next;
}
}
template <int BlockSize>
void BlockStack<BlockSize>::PushBlockImpl(Block* block) {
ASSERT(block->next() == nullptr); // Should be just a single block.
@ -247,8 +237,7 @@ intptr_t StoreBuffer::Size() {
return full_.length() + partial_.length();
}
template <int BlockSize>
void BlockStack<BlockSize>::VisitObjectPointers(ObjectPointerVisitor* visitor) {
void StoreBuffer::VisitObjectPointers(ObjectPointerVisitor* visitor) {
for (Block* block = full_.Peek(); block != nullptr; block = block->next()) {
block->VisitObjectPointers(visitor);
}


@ -103,8 +103,7 @@ class BlockStack {
Block* PopNonEmptyBlock();
// Pops and returns all non-empty blocks as a linked list (owned by caller).
Block* PopAll();
void PushAll(Block* blocks);
Block* TakeBlocks();
// Discards the contents of all non-empty blocks.
void Reset();
@ -113,8 +112,6 @@ class BlockStack {
Block* WaitForWork(RelaxedAtomic<uintptr_t>* num_busy, bool abort);
void VisitObjectPointers(ObjectPointerVisitor* visitor);
protected:
class List {
public:
@ -277,6 +274,8 @@ class StoreBuffer : public BlockStack<kStoreBufferBlockSize> {
// action).
bool Overflowed();
intptr_t Size();
void VisitObjectPointers(ObjectPointerVisitor* visitor);
};
typedef StoreBuffer::Block StoreBufferBlock;


@ -14,7 +14,6 @@
#include "vm/flags.h"
#include "vm/heap/become.h"
#include "vm/heap/gc_shared.h"
#include "vm/heap/marker.h"
#include "vm/heap/pages.h"
#include "vm/heap/pointer_block.h"
#include "vm/heap/safepoint.h"
@ -115,14 +114,14 @@ static void objcpy(void* dst, const void* src, size_t size) {
}
DART_FORCE_INLINE
static uword ReadHeaderRelaxed(ObjectPtr obj) {
return reinterpret_cast<std::atomic<uword>*>(UntaggedObject::ToAddr(obj))
static uword ReadHeaderRelaxed(ObjectPtr raw_obj) {
return reinterpret_cast<std::atomic<uword>*>(UntaggedObject::ToAddr(raw_obj))
->load(std::memory_order_relaxed);
}
DART_FORCE_INLINE
static void WriteHeaderRelaxed(ObjectPtr obj, uword header) {
reinterpret_cast<std::atomic<uword>*>(UntaggedObject::ToAddr(obj))
static void WriteHeaderRelaxed(ObjectPtr raw_obj, uword header) {
reinterpret_cast<std::atomic<uword>*>(UntaggedObject::ToAddr(raw_obj))
->store(header, std::memory_order_relaxed);
}
@ -359,17 +358,24 @@ class ScavengerVisitorBase : public ObjectPointerVisitor {
DART_FORCE_INLINE
void ScavengePointer(ObjectPtr* p) {
// ScavengePointer cannot be called recursively.
ObjectPtr obj = *p;
ObjectPtr raw_obj = *p;
if (obj->IsImmediateOrOldObject()) {
if (raw_obj->IsImmediateOrOldObject()) {
return;
}
ObjectPtr new_obj = ScavengeObject(obj);
ObjectPtr new_obj = ScavengeObject(raw_obj);
// Update the reference.
*p = new_obj;
if (new_obj->IsNewObject()) {
if (!new_obj->IsNewObject()) {
// Setting the mark bit above must not be ordered after a publishing store
// of this object. Note this could be a publishing store even if the
// object was promoted by an early invocation of ScavengePointer. Compare
// Object::Allocate.
reinterpret_cast<std::atomic<ObjectPtr>*>(p)->store(
static_cast<ObjectPtr>(new_obj), std::memory_order_release);
} else {
*p = new_obj;
// Update the store buffer as needed.
ObjectPtr visiting_object = visiting_old_object_;
if (visiting_object != nullptr &&
@ -382,18 +388,25 @@ class ScavengerVisitorBase : public ObjectPointerVisitor {
DART_FORCE_INLINE
void ScavengeCompressedPointer(uword heap_base, CompressedObjectPtr* p) {
// ScavengePointer cannot be called recursively.
ObjectPtr obj = p->Decompress(heap_base);
ObjectPtr raw_obj = p->Decompress(heap_base);
// Could be tested without decompression.
if (obj->IsImmediateOrOldObject()) {
if (raw_obj->IsImmediateOrOldObject()) {
return;
}
ObjectPtr new_obj = ScavengeObject(obj);
ObjectPtr new_obj = ScavengeObject(raw_obj);
// Update the reference.
*p = new_obj;
if (new_obj->IsNewObject()) {
if (!new_obj->IsNewObject()) {
// Setting the mark bit above must not be ordered after a publishing store
// of this object. Note this could be a publishing store even if the
// object was promoted by an early invocation of ScavengePointer. Compare
// Object::Allocate.
reinterpret_cast<std::atomic<CompressedObjectPtr>*>(p)->store(
static_cast<CompressedObjectPtr>(new_obj), std::memory_order_release);
} else {
*p = new_obj;
// Update the store buffer as needed.
ObjectPtr visiting_object = visiting_old_object_;
if (visiting_object != nullptr &&
@ -404,27 +417,27 @@ class ScavengerVisitorBase : public ObjectPointerVisitor {
}
DART_FORCE_INLINE
ObjectPtr ScavengeObject(ObjectPtr obj) {
ObjectPtr ScavengeObject(ObjectPtr raw_obj) {
// Fragmentation might cause the scavenge to fail. Ensure we always have
// somewhere to bail out to.
ASSERT(thread_->long_jump_base() != nullptr);
uword raw_addr = UntaggedObject::ToAddr(obj);
uword raw_addr = UntaggedObject::ToAddr(raw_obj);
// The scavenger only expects objects located in the from space.
ASSERT(from_->Contains(raw_addr));
// Read the header word of the object and determine if the object has
// already been copied.
uword header = ReadHeaderRelaxed(obj);
uword header = ReadHeaderRelaxed(raw_obj);
ObjectPtr new_obj;
if (IsForwarding(header)) {
// Get the new location of the object.
new_obj = ForwardedObj(header);
} else {
intptr_t size = obj->untag()->HeapSize(header);
intptr_t size = raw_obj->untag()->HeapSize(header);
ASSERT(IsAllocatableInNewSpace(size));
uword new_addr = 0;
// Check whether object should be promoted.
if (!Page::Of(obj)->IsSurvivor(raw_addr)) {
if (!Page::Of(raw_obj)->IsSurvivor(raw_addr)) {
// Not a survivor of a previous scavenge. Just copy the object into the
// to space.
new_addr = TryAllocateCopy(size);
@ -433,7 +446,12 @@ class ScavengerVisitorBase : public ObjectPointerVisitor {
// This object is a survivor of a previous scavenge. Attempt to promote
// the object. (Or, unlikely, to-space was exhausted by fragmentation.)
new_addr = page_space_->TryAllocatePromoLocked(freelist_, size);
if (UNLIKELY(new_addr == 0)) {
if (LIKELY(new_addr != 0)) {
// If promotion succeeded then we need to remember it so that it can
// be traversed later.
promoted_list_.Push(UntaggedObject::FromAddr(new_addr));
bytes_promoted_ += size;
} else {
// Promotion did not succeed. Copy into the to space instead.
scavenger_->failed_to_promote_ = true;
new_addr = TryAllocateCopy(size);
@ -453,9 +471,19 @@ class ScavengerVisitorBase : public ObjectPointerVisitor {
if (new_obj->IsOldObject()) {
// Promoted: update age/barrier tags.
uword tags = static_cast<uword>(header);
tags = UntaggedObject::OldBit::update(true, tags);
tags = UntaggedObject::OldAndNotRememberedBit::update(true, tags);
tags = UntaggedObject::NewBit::update(false, tags);
new_obj->untag()->tags_.store(tags, std::memory_order_relaxed);
// Setting the forwarding pointer below will make this tenured object
// visible to the concurrent marker, but we haven't visited its slots
// yet. We mark the object here to prevent the concurrent marker from
// adding it to the mark stack and visiting its unprocessed slots. We
// push it to the mark stack after forwarding its slots.
tags = UntaggedObject::OldAndNotMarkedBit::update(
!thread_->is_marking(), tags);
// release: Setting the mark bit above must not be ordered after a
// publishing store of this object. Compare Object::Allocate.
new_obj->untag()->tags_.store(tags, std::memory_order_release);
}
intptr_t cid = UntaggedObject::ClassIdTag::decode(header);
@ -465,14 +493,7 @@ class ScavengerVisitorBase : public ObjectPointerVisitor {
// Try to install forwarding address.
uword forwarding_header = ForwardingHeader(new_obj);
if (InstallForwardingPointer(raw_addr, &header, forwarding_header)) {
if (new_obj->IsOldObject()) {
// If promotion succeeded then we need to remember it so that it can
// be traversed later.
promoted_list_.Push(new_obj);
bytes_promoted_ += size;
}
} else {
if (!InstallForwardingPointer(raw_addr, &header, forwarding_header)) {
ASSERT(IsForwarding(header));
if (new_obj->IsOldObject()) {
// Abandon as a free list element.
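
The ScavengeObject hunks above implement the usual copy-and-forward protocol: a forwarded header means another visitor already copied the object, otherwise we copy it ourselves and install a forwarding pointer. A single-threaded toy version of that protocol, with fake headers and no promotion or race handling (illustrative only, not VM code):

#include <cstddef>
#include <cstdint>

struct FakeObject {
  uintptr_t header;  // Either normal tags or a tagged forwarding address.
  long payload;
};

constexpr uintptr_t kForwardingTag = 1;  // Low bit marks a forwarded header.

FakeObject* Scavenge(FakeObject* obj, FakeObject* to_space, size_t* to_top) {
  if (obj->header & kForwardingTag) {
    // Already copied: follow the forwarding pointer stored in the header.
    return reinterpret_cast<FakeObject*>(obj->header & ~kForwardingTag);
  }
  // Copy into to-space and install the forwarding pointer.
  FakeObject* copy = &to_space[(*to_top)++];
  *copy = *obj;
  obj->header = reinterpret_cast<uintptr_t>(copy) | kForwardingTag;
  return copy;
}

The real code additionally races on InstallForwardingPointer and must abandon the losing copy as a free-list element, as the surrounding hunk shows.
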
@ -590,11 +611,11 @@ typedef ScavengerVisitorBase<false> SerialScavengerVisitor;
typedef ScavengerVisitorBase<true> ParallelScavengerVisitor;
static bool IsUnreachable(ObjectPtr* ptr) {
ObjectPtr obj = *ptr;
if (obj->IsImmediateOrOldObject()) {
ObjectPtr raw_obj = *ptr;
if (raw_obj->IsImmediateOrOldObject()) {
return false;
}
uword raw_addr = UntaggedObject::ToAddr(obj);
uword raw_addr = UntaggedObject::ToAddr(raw_obj);
uword header = *reinterpret_cast<uword*>(raw_addr);
if (IsForwarding(header)) {
*ptr = ForwardedObj(header);
@ -777,7 +798,14 @@ static constexpr intptr_t kConservativeInitialScavengeSpeed = 40;
Scavenger::Scavenger(Heap* heap, intptr_t max_semi_capacity_in_words)
: heap_(heap),
max_semi_capacity_in_words_(max_semi_capacity_in_words),
scavenge_words_per_micro_(kConservativeInitialScavengeSpeed) {
scavenging_(false),
gc_time_micros_(0),
collections_(0),
scavenge_words_per_micro_(kConservativeInitialScavengeSpeed),
idle_scavenge_threshold_in_words_(0),
external_size_(0),
failed_to_promote_(false),
abort_(false) {
ASSERT(heap != nullptr);
// Verify assumptions about the first word in objects which the scavenger is
@ -841,18 +869,18 @@ class CollectStoreBufferVisitor : public ObjectPointerVisitor {
void VisitPointers(ObjectPtr* from, ObjectPtr* to) override {
for (ObjectPtr* ptr = from; ptr <= to; ptr++) {
ObjectPtr obj = *ptr;
RELEASE_ASSERT_WITH_MSG(obj->untag()->IsRemembered(), msg_);
RELEASE_ASSERT_WITH_MSG(obj->IsOldObject(), msg_);
ObjectPtr raw_obj = *ptr;
RELEASE_ASSERT_WITH_MSG(raw_obj->untag()->IsRemembered(), msg_);
RELEASE_ASSERT_WITH_MSG(raw_obj->IsOldObject(), msg_);
RELEASE_ASSERT_WITH_MSG(!obj->untag()->IsCardRemembered(), msg_);
if (obj.GetClassId() == kArrayCid) {
RELEASE_ASSERT_WITH_MSG(!raw_obj->untag()->IsCardRemembered(), msg_);
if (raw_obj.GetClassId() == kArrayCid) {
const uword length =
Smi::Value(static_cast<UntaggedArray*>(obj.untag())->length());
Smi::Value(static_cast<UntaggedArray*>(raw_obj.untag())->length());
RELEASE_ASSERT_WITH_MSG(!Array::UseCardMarkingForAllocation(length),
msg_);
}
in_store_buffer_->Add(obj);
in_store_buffer_->Add(raw_obj);
}
}
@ -881,27 +909,28 @@ class CheckStoreBufferVisitor : public ObjectVisitor,
to_(to),
msg_(msg) {}
void VisitObject(ObjectPtr obj) override {
if (obj->IsPseudoObject()) return;
RELEASE_ASSERT_WITH_MSG(obj->IsOldObject(), msg_);
void VisitObject(ObjectPtr raw_obj) override {
if (raw_obj->IsPseudoObject()) return;
RELEASE_ASSERT_WITH_MSG(raw_obj->IsOldObject(), msg_);
RELEASE_ASSERT_WITH_MSG(
obj->untag()->IsRemembered() == in_store_buffer_->Contains(obj), msg_);
raw_obj->untag()->IsRemembered() == in_store_buffer_->Contains(raw_obj),
msg_);
visiting_ = obj;
is_remembered_ = obj->untag()->IsRemembered();
is_card_remembered_ = obj->untag()->IsCardRemembered();
visiting_ = raw_obj;
is_remembered_ = raw_obj->untag()->IsRemembered();
is_card_remembered_ = raw_obj->untag()->IsCardRemembered();
if (is_card_remembered_) {
RELEASE_ASSERT_WITH_MSG(!is_remembered_, msg_);
RELEASE_ASSERT_WITH_MSG(Page::Of(obj)->progress_bar_ == 0, msg_);
RELEASE_ASSERT_WITH_MSG(Page::Of(raw_obj)->progress_bar_ == 0, msg_);
}
obj->untag()->VisitPointers(this);
raw_obj->untag()->VisitPointers(this);
}
void VisitPointers(ObjectPtr* from, ObjectPtr* to) override {
for (ObjectPtr* ptr = from; ptr <= to; ptr++) {
ObjectPtr obj = *ptr;
if (obj->IsHeapObject() && obj->IsNewObject()) {
ObjectPtr raw_obj = *ptr;
if (raw_obj->IsHeapObject() && raw_obj->IsNewObject()) {
if (is_card_remembered_) {
if (!Page::Of(visiting_)->IsCardRemembered(ptr)) {
FATAL(
@ -910,8 +939,8 @@ class CheckStoreBufferVisitor : public ObjectVisitor,
"slot's card is not remembered. Consider using rr to watch the "
"slot %p and reverse-continue to find the store with a missing "
"barrier.\n",
msg_, static_cast<uword>(visiting_), static_cast<uword>(obj),
ptr);
msg_, static_cast<uword>(visiting_),
static_cast<uword>(raw_obj), ptr);
}
} else if (!is_remembered_) {
FATAL("%s: Old object %#" Px " references new object %#" Px
@ -919,10 +948,10 @@ class CheckStoreBufferVisitor : public ObjectVisitor,
"not in any store buffer. Consider using rr to watch the "
"slot %p and reverse-continue to find the store with a missing "
"barrier.\n",
msg_, static_cast<uword>(visiting_), static_cast<uword>(obj),
ptr);
msg_, static_cast<uword>(visiting_),
static_cast<uword>(raw_obj), ptr);
}
RELEASE_ASSERT_WITH_MSG(to_->Contains(UntaggedObject::ToAddr(obj)),
RELEASE_ASSERT_WITH_MSG(to_->Contains(UntaggedObject::ToAddr(raw_obj)),
msg_);
}
}
@ -933,8 +962,8 @@ class CheckStoreBufferVisitor : public ObjectVisitor,
CompressedObjectPtr* from,
CompressedObjectPtr* to) override {
for (CompressedObjectPtr* ptr = from; ptr <= to; ptr++) {
ObjectPtr obj = ptr->Decompress(heap_base);
if (obj->IsHeapObject() && obj->IsNewObject()) {
ObjectPtr raw_obj = ptr->Decompress(heap_base);
if (raw_obj->IsHeapObject() && raw_obj->IsNewObject()) {
if (is_card_remembered_) {
if (!Page::Of(visiting_)->IsCardRemembered(ptr)) {
FATAL(
@ -943,8 +972,8 @@ class CheckStoreBufferVisitor : public ObjectVisitor,
"slot's card is not remembered. Consider using rr to watch the "
"slot %p and reverse-continue to find the store with a missing "
"barrier.\n",
msg_, static_cast<uword>(visiting_), static_cast<uword>(obj),
ptr);
msg_, static_cast<uword>(visiting_),
static_cast<uword>(raw_obj), ptr);
}
} else if (!is_remembered_) {
FATAL("%s: Old object %#" Px " references new object %#" Px
@ -952,10 +981,10 @@ class CheckStoreBufferVisitor : public ObjectVisitor,
"not in any store buffer. Consider using rr to watch the "
"slot %p and reverse-continue to find the store with a missing "
"barrier.\n",
msg_, static_cast<uword>(visiting_), static_cast<uword>(obj),
ptr);
msg_, static_cast<uword>(visiting_),
static_cast<uword>(raw_obj), ptr);
}
RELEASE_ASSERT_WITH_MSG(to_->Contains(UntaggedObject::ToAddr(obj)),
RELEASE_ASSERT_WITH_MSG(to_->Contains(UntaggedObject::ToAddr(raw_obj)),
msg_);
}
}
@ -995,7 +1024,6 @@ SemiSpace* Scavenger::Prologue(GCReason reason) {
TIMELINE_FUNCTION_GC_DURATION(Thread::Current(), "Prologue");
heap_->isolate_group()->ReleaseStoreBuffers();
heap_->isolate_group()->FlushMarkingStacks();
if (FLAG_verify_store_buffer) {
heap_->WaitForSweeperTasksAtSafepoint(Thread::Current());
@ -1004,13 +1032,7 @@ SemiSpace* Scavenger::Prologue(GCReason reason) {
// Need to stash the old remembered set before any worker begins adding to the
// new remembered set.
blocks_ = heap_->isolate_group()->store_buffer()->PopAll();
GCMarker* marker = heap_->old_space()->marker();
if (marker != nullptr) {
mark_blocks_ = marker->marking_stack_.PopAll();
new_blocks_ = marker->new_marking_stack_.PopAll();
deferred_blocks_ = marker->deferred_marking_stack_.PopAll();
}
blocks_ = heap_->isolate_group()->store_buffer()->TakeBlocks();
// Flip the two semi-spaces so that to_ is always the space for allocating
// objects.
@ -1116,12 +1138,23 @@ bool Scavenger::ShouldPerformIdleScavenge(int64_t deadline) {
NoSafepointScope no_safepoint;
// TODO(rmacnak): Investigate collecting a history of idle period durations.
intptr_t used_in_words = UsedInWords() + freed_in_words_;
intptr_t used_in_words = UsedInWords();
intptr_t external_in_words = ExternalInWords();
// Normal reason: new space is getting full.
bool for_new_space = (used_in_words >= idle_scavenge_threshold_in_words_) ||
(external_in_words >= idle_scavenge_threshold_in_words_);
if (!for_new_space) {
// New-space objects are roots during old-space GC. This means that even
// unreachable new-space objects prevent old-space objects they reference
// from being collected during an old-space GC. Normally this is not an
// issue because new-space GCs run much more frequently than old-space GCs.
// If new-space allocation is low and direct old-space allocation is high,
// which can happen in a program that allocates large objects and little
// else, old-space can fill up with unreachable objects until the next
// new-space GC. This check is the idle equivalent to the
// new-space GC before synchronous-marking in CollectMostGarbage.
bool for_old_space = heap_->last_gc_was_old_space_ &&
heap_->old_space()->ReachedIdleThreshold();
if (!for_new_space && !for_old_space) {
return false;
}
@ -1153,7 +1186,7 @@ void Scavenger::IterateStoreBuffers(ScavengerVisitorBase<parallel>* visitor) {
ObjectPtr obj = pending->Pop();
ASSERT(!obj->IsForwardingCorpse());
ASSERT(obj->untag()->IsRemembered());
obj->untag()->ClearRememberedBitUnsynchronized();
obj->untag()->ClearRememberedBit();
visitor->VisitingOldObject(obj);
visitor->ProcessObject(obj);
}
@ -1217,7 +1250,6 @@ enum WeakSlices {
kWeakTables,
kProgressBars,
kRememberLiveTemporaries,
kPruneWeak,
kNumWeakSlices,
};
@ -1242,23 +1274,10 @@ void Scavenger::IterateWeak() {
// Restore write-barrier assumptions.
heap_->isolate_group()->RememberLiveTemporaries();
break;
case kPruneWeak: {
GCMarker* marker = heap_->old_space()->marker();
if (marker != nullptr) {
marker->PruneWeak(this);
}
} break;
default:
UNREACHABLE();
}
}
GCMarker* marker = heap_->old_space()->marker();
if (marker != nullptr) {
Prune(&mark_blocks_, &marker->marking_stack_);
Prune(&new_blocks_, &marker->marking_stack_);
Prune(&deferred_blocks_, &marker->deferred_marking_stack_);
}
}
void Scavenger::MournWeakHandles() {
@ -1274,8 +1293,8 @@ void ScavengerVisitorBase<parallel>::ProcessToSpace() {
while (scan_ != nullptr) {
uword resolved_top = scan_->resolved_top_;
while (resolved_top < scan_->top_) {
ObjectPtr obj = UntaggedObject::FromAddr(resolved_top);
resolved_top += ProcessObject(obj);
ObjectPtr raw_obj = UntaggedObject::FromAddr(resolved_top);
resolved_top += ProcessObject(raw_obj);
}
scan_->resolved_top_ = resolved_top;
@ -1294,8 +1313,12 @@ void ScavengerVisitorBase<parallel>::ProcessPromotedList() {
while (promoted_list_.Pop(&obj)) {
VisitingOldObject(obj);
ProcessObject(obj);
// Black allocation.
if (thread_->is_marking() && obj->untag()->TryAcquireMarkBit()) {
if (obj->untag()->IsMarked()) {
// Complete our promise from ScavengePointer. Note that marker cannot
// visit this object until it pops a block from the mark stack, which
// involves a memory fence from the mutex, so even on architectures
// with a relaxed memory model, the marker will see the fully
// forwarded contents of this object.
thread_->MarkingStackAddObject(obj);
}
}
@ -1406,16 +1429,16 @@ void Scavenger::MournWeakTables() {
intptr_t size = table->size();
for (intptr_t i = 0; i < size; i++) {
if (table->IsValidEntryAtExclusive(i)) {
ObjectPtr obj = table->ObjectAtExclusive(i);
ASSERT(obj->IsHeapObject());
uword raw_addr = UntaggedObject::ToAddr(obj);
ObjectPtr raw_obj = table->ObjectAtExclusive(i);
ASSERT(raw_obj->IsHeapObject());
uword raw_addr = UntaggedObject::ToAddr(raw_obj);
uword header = *reinterpret_cast<uword*>(raw_addr);
if (IsForwarding(header)) {
// The object has survived. Preserve its record.
obj = ForwardedObj(header);
raw_obj = ForwardedObj(header);
auto replacement =
obj->IsNewObject() ? replacement_new : replacement_old;
replacement->SetValueExclusive(obj, table->ValueAtExclusive(i));
raw_obj->IsNewObject() ? replacement_new : replacement_old;
replacement->SetValueExclusive(raw_obj, table->ValueAtExclusive(i));
} else {
// The object has been collected.
if (cleanup != nullptr) {
@ -1464,117 +1487,6 @@ void Scavenger::MournWeakTables() {
/*at_safepoint=*/true);
}
void Scavenger::Forward(MarkingStack* marking_stack) {
ASSERT(abort_);
class ReverseMarkStack : public ObjectPointerVisitor {
public:
explicit ReverseMarkStack(IsolateGroup* group)
: ObjectPointerVisitor(group) {}
void VisitPointers(ObjectPtr* first, ObjectPtr* last) override {
for (ObjectPtr* p = first; p <= last; p++) {
ObjectPtr obj = *p;
#if defined(DEBUG)
if (obj->IsNewObject()) {
uword header = ReadHeaderRelaxed(obj);
ASSERT(!IsForwarding(header));
}
#endif
if (obj->IsForwardingCorpse()) {
// Promoted object was pushed to the mark list, but the promotion was reversed.
*p = reinterpret_cast<ForwardingCorpse*>(UntaggedObject::ToAddr(obj))
->target();
}
}
}
#if defined(DART_COMPRESSED_POINTERS)
void VisitCompressedPointers(uword heap_base,
CompressedObjectPtr* first,
CompressedObjectPtr* last) override {
UNREACHABLE();
}
#endif
};
ReverseMarkStack visitor(heap_->isolate_group());
marking_stack->VisitObjectPointers(&visitor);
}
void Scavenger::Prune(MarkingStackBlock** source, MarkingStack* marking_stack) {
ASSERT(!abort_);
TIMELINE_FUNCTION_GC_DURATION(Thread::Current(), "PruneMarkingStack");
MarkingStackBlock* reading;
MarkingStackBlock* writing = marking_stack->PopNonFullBlock();
for (;;) {
{
MutexLocker ml(&space_lock_);
reading = *source;
if (reading == nullptr) break;
*source = reading->next();
}
// Generated code appends to marking stacks; tell MemorySanitizer.
MSAN_UNPOISON(reading, sizeof(*reading));
while (!reading->IsEmpty()) {
ObjectPtr obj = reading->Pop();
ASSERT(obj->IsHeapObject());
if (obj->IsNewObject()) {
uword header = ReadHeaderRelaxed(obj);
if (!IsForwarding(header)) continue;
obj = ForwardedObj(header);
}
ASSERT(!obj->IsForwardingCorpse());
ASSERT(!obj->IsFreeListElement());
writing->Push(obj);
if (writing->IsFull()) {
marking_stack->PushBlock(writing);
writing = marking_stack->PopNonFullBlock();
}
}
reading->Reset();
marking_stack->PushBlock(reading);
}
marking_stack->PushBlock(writing);
}
void Scavenger::PruneWeak(GCLinkedLists* deferred) {
ASSERT(!abort_);
TIMELINE_FUNCTION_GC_DURATION(Thread::Current(), "PruneWeak");
PruneWeak(&deferred->weak_properties);
PruneWeak(&deferred->weak_references);
PruneWeak(&deferred->weak_arrays);
PruneWeak(&deferred->finalizer_entries);
}
template <typename Type, typename PtrType>
void Scavenger::PruneWeak(GCLinkedList<Type, PtrType>* list) {
PtrType weak = list->Release();
while (weak != Object::null()) {
PtrType next;
if (weak->IsOldObject()) {
ASSERT(weak->GetClassId() == Type::kClassId);
next = weak->untag()->next_seen_by_gc_.Decompress(weak->heap_base());
weak->untag()->next_seen_by_gc_ = Type::null();
list->Enqueue(weak);
} else {
uword header = ReadHeaderRelaxed(weak);
if (IsForwarding(header)) {
weak = static_cast<PtrType>(ForwardedObj(header));
ASSERT(weak->GetClassId() == Type::kClassId);
next = weak->untag()->next_seen_by_gc_.Decompress(weak->heap_base());
weak->untag()->next_seen_by_gc_ = Type::null();
list->Enqueue(weak);
} else {
// Collected in this scavenge.
ASSERT(weak->GetClassId() == Type::kClassId);
next = weak->untag()->next_seen_by_gc_.Decompress(weak->heap_base());
}
}
weak = next;
}
}
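
Prune above exists because a scavenge moves or collects new-space objects that may already sit in the marking worklist; their entries must be forwarded to the objects' new addresses or dropped before marking resumes. The idea in miniature, with simplified stand-ins for the VM's block-based stacks (illustrative only):

#include <vector>

struct HeapObj {
  bool is_new = false;
  bool forwarded = false;
  HeapObj* new_address = nullptr;
};

std::vector<HeapObj*> PruneWorklist(const std::vector<HeapObj*>& worklist) {
  std::vector<HeapObj*> pruned;
  for (HeapObj* obj : worklist) {
    if (!obj->is_new) {
      pruned.push_back(obj);               // Old-space entries are unaffected.
    } else if (obj->forwarded) {
      pruned.push_back(obj->new_address);  // Survivor: use the new location.
    }
    // Otherwise the object was collected this scavenge; drop the entry.
  }
  return pruned;
}
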
// Returns whether the object referred to in `slot` was GCed this GC.
template <bool parallel>
bool ScavengerVisitorBase<parallel>::ForwardOrSetNullIfCollected(
@ -1693,9 +1605,6 @@ intptr_t Scavenger::AbandonRemainingTLAB(Thread* thread) {
Page* page = Page::Of(thread->top() - 1);
intptr_t allocated;
{
if (thread->is_marking()) {
thread->DeferredMarkLiveTemporaries();
}
MutexLocker ml(&space_lock_);
allocated = page->Release(thread);
}
@ -1750,7 +1659,6 @@ void Scavenger::Scavenge(Thread* thread, GCType type, GCReason reason) {
abort_ = false;
root_slices_started_ = 0;
weak_slices_started_ = 0;
freed_in_words_ = 0;
intptr_t abandoned_bytes = 0; // TODO(rmacnak): Count fragmentation?
SpaceUsage usage_before = GetCurrentUsage();
intptr_t promo_candidate_words = 0;
@ -1774,6 +1682,18 @@ void Scavenger::Scavenge(Thread* thread, GCType type, GCReason reason) {
ReverseScavenge(&from);
bytes_promoted = 0;
} else {
if (type == GCType::kEvacuate) {
if (heap_->stats_.state_ == Heap::kInitial ||
heap_->stats_.state_ == Heap::kFirstScavenge) {
heap_->stats_.state_ = Heap::kSecondScavenge;
}
} else {
if (heap_->stats_.state_ == Heap::kInitial) {
heap_->stats_.state_ = Heap::kFirstScavenge;
} else if (heap_->stats_.state_ == Heap::kFirstScavenge) {
heap_->stats_.state_ = Heap::kSecondScavenge;
}
}
if ((CapacityInWords() - UsedInWords()) < KBInWords) {
// Don't scavenge again until the next old-space GC has occurred. Prevents
// performing one scavenge per allocation as the heap limit is approached.
@ -1873,9 +1793,12 @@ void Scavenger::ReverseScavenge(SemiSpace** from) {
// Reset the ages bits in case this was a promotion.
uword from_header = static_cast<uword>(to_header);
from_header = UntaggedObject::OldBit::update(false, from_header);
from_header =
UntaggedObject::OldAndNotRememberedBit::update(false, from_header);
from_header = UntaggedObject::NewBit::update(true, from_header);
from_header =
UntaggedObject::OldAndNotMarkedBit::update(false, from_header);
WriteHeaderRelaxed(from_obj, from_header);
@ -1921,26 +1844,6 @@ void Scavenger::ReverseScavenge(SemiSpace** from) {
heap_->old_space()->ResetProgressBars();
GCMarker* marker = heap_->old_space()->marker();
if (marker != nullptr) {
marker->marking_stack_.PushAll(mark_blocks_);
mark_blocks_ = nullptr;
marker->marking_stack_.PushAll(new_blocks_);
new_blocks_ = nullptr;
marker->deferred_marking_stack_.PushAll(deferred_blocks_);
deferred_blocks_ = nullptr;
// Not redundant with the flush at the beginning of the scavenge because
// the scavenge workers may add promoted objects to the mark stack.
heap_->isolate_group()->FlushMarkingStacks();
Forward(&marker->marking_stack_);
ASSERT(marker->new_marking_stack_.IsEmpty());
Forward(&marker->deferred_marking_stack_);
}
// Restore write-barrier assumptions. Must occur after mark list fixups.
heap_->isolate_group()->RememberLiveTemporaries();
// Don't scavenge again until the next old-space GC has occurred. Prevents
// performing one scavenge per allocation as the heap limit is approached.
heap_->assume_scavenge_will_fail_ = true;


@ -29,10 +29,6 @@ class JSONObject;
class ObjectSet;
template <bool parallel>
class ScavengerVisitorBase;
class GCMarker;
template <typename Type, typename PtrType>
class GCLinkedList;
struct GCLinkedLists;
class SemiSpace {
public:
@ -158,7 +154,7 @@ class Scavenger {
intptr_t UsedInWords() const {
MutexLocker ml(&space_lock_);
return to_->used_in_words() - freed_in_words_;
return to_->used_in_words();
}
intptr_t CapacityInWords() const { return to_->max_capacity_in_words(); }
intptr_t ExternalInWords() const { return external_size_ >> kWordSizeLog2; }
@ -216,7 +212,7 @@ class Scavenger {
ASSERT(external_size_ >= 0);
}
void set_freed_in_words(intptr_t value) { freed_in_words_ = value; }
int64_t FreeSpaceInWords(Isolate* isolate) const;
// The maximum number of Dart mutator threads we allow to execute at the same
// time.
@ -233,12 +229,6 @@ class Scavenger {
return to_->head();
}
void Prune(MarkingStackBlock** from, MarkingStack* to);
void Forward(MarkingStack* stack);
void PruneWeak(GCLinkedLists* delayed);
template <typename Type, typename PtrType>
void PruneWeak(GCLinkedList<Type, PtrType>* list);
private:
// Ids for time and data records in Heap::GCStats.
enum {
@ -301,29 +291,25 @@ class Scavenger {
intptr_t max_semi_capacity_in_words_;
// Keep track whether a scavenge is currently running.
bool scavenging_ = false;
bool scavenging_;
bool early_tenure_ = false;
RelaxedAtomic<intptr_t> root_slices_started_ = {0};
RelaxedAtomic<intptr_t> weak_slices_started_ = {0};
RelaxedAtomic<intptr_t> root_slices_started_;
RelaxedAtomic<intptr_t> weak_slices_started_;
StoreBufferBlock* blocks_ = nullptr;
MarkingStackBlock* mark_blocks_ = nullptr;
MarkingStackBlock* new_blocks_ = nullptr;
MarkingStackBlock* deferred_blocks_ = nullptr;
int64_t gc_time_micros_ = 0;
intptr_t collections_ = 0;
int64_t gc_time_micros_;
intptr_t collections_;
static constexpr int kStatsHistoryCapacity = 4;
RingBuffer<ScavengeStats, kStatsHistoryCapacity> stats_history_;
intptr_t scavenge_words_per_micro_;
intptr_t idle_scavenge_threshold_in_words_ = 0;
intptr_t idle_scavenge_threshold_in_words_;
// The total size of external data associated with objects in this scavenger.
RelaxedAtomic<intptr_t> external_size_ = {0};
intptr_t freed_in_words_ = 0;
RelaxedAtomic<intptr_t> external_size_;
RelaxedAtomic<bool> failed_to_promote_ = {false};
RelaxedAtomic<bool> abort_ = {false};
RelaxedAtomic<bool> failed_to_promote_;
RelaxedAtomic<bool> abort_;
// Protects new space during the allocation of new TLABs
mutable Mutex space_lock_;


@ -15,48 +15,6 @@
namespace dart {
intptr_t GCSweeper::SweepNewPage(Page* page) {
ASSERT(!page->is_image());
ASSERT(!page->is_old());
ASSERT(!page->is_executable());
uword start = page->object_start();
uword end = page->object_end();
uword current = start;
intptr_t free = 0;
while (current < end) {
ObjectPtr raw_obj = UntaggedObject::FromAddr(current);
ASSERT(Page::Of(raw_obj) == page);
uword tags = raw_obj->untag()->tags_.load(std::memory_order_relaxed);
intptr_t obj_size = raw_obj->untag()->HeapSize(tags);
if (UntaggedObject::IsMarked(tags)) {
// Found marked object. Clear the mark bit and update swept bytes.
raw_obj->untag()->ClearMarkBitUnsynchronized();
ASSERT(IsAllocatableInNewSpace(obj_size));
} else {
uword free_end = current + obj_size;
while (free_end < end) {
ObjectPtr next_obj = UntaggedObject::FromAddr(free_end);
tags = next_obj->untag()->tags_.load(std::memory_order_relaxed);
if (UntaggedObject::IsMarked(tags)) {
// Reached the end of the free block.
break;
}
// Expand the free block by the size of this object.
free_end += next_obj->untag()->HeapSize(tags);
}
obj_size = free_end - current;
#if defined(DEBUG)
memset(reinterpret_cast<void*>(current), Heap::kZapByte, obj_size);
#endif // DEBUG
FreeListElement::AsElementNew(current, obj_size);
free += obj_size;
}
current += obj_size;
}
return free;
}
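
The SweepNewPage body shown above (dropped by this revert) walks a page linearly, unmarking survivors and coalescing each run of dead objects into one free block. The core loop, reduced to abstract sizes with toy types (illustrative only, not VM code):

#include <cstddef>
#include <vector>

struct SweepObj {
  size_t size = 0;
  bool marked = false;
};

// Returns the total number of freed units on the page.
size_t Sweep(std::vector<SweepObj>& page, std::vector<size_t>* free_blocks) {
  size_t freed = 0;
  size_t i = 0;
  while (i < page.size()) {
    if (page[i].marked) {
      page[i].marked = false;  // Live: clear the mark bit for the next cycle.
      i++;
    } else {
      size_t run = 0;          // Coalesce a run of consecutive dead objects.
      while (i < page.size() && !page[i].marked) {
        run += page[i].size;
        i++;
      }
      free_blocks->push_back(run);
      freed += run;
    }
  }
  return freed;
}
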
bool GCSweeper::SweepPage(Page* page, FreeList* freelist) {
ASSERT(!page->is_image());
// Large executable pages are handled here. We never truncate Instructions


@ -34,8 +34,6 @@ class GCSweeper {
// last marked object.
intptr_t SweepLargePage(Page* page);
intptr_t SweepNewPage(Page* page);
// Sweep the large and regular sized data pages.
static void SweepConcurrent(IsolateGroup* isolate_group);
};


@ -596,8 +596,8 @@ void ImageWriter::WriteROData(NonStreamingWriteStream* stream, bool vm) {
}
static constexpr uword kReadOnlyGCBits =
UntaggedObject::AlwaysSetBit::encode(true) |
UntaggedObject::NotMarkedBit::encode(false) |
UntaggedObject::OldBit::encode(true) |
UntaggedObject::OldAndNotMarkedBit::encode(false) |
UntaggedObject::OldAndNotRememberedBit::encode(true) |
UntaggedObject::NewBit::encode(false);


@ -2695,10 +2695,6 @@ void IsolateGroup::ReleaseStoreBuffers() {
thread_registry()->ReleaseStoreBuffers();
}
void IsolateGroup::FlushMarkingStacks() {
thread_registry()->FlushMarkingStacks();
}
void Isolate::RememberLiveTemporaries() {
if (mutator_thread_ != nullptr) {
mutator_thread_->RememberLiveTemporaries();


@ -591,7 +591,6 @@ class IsolateGroup : public IntrusiveDListEntry<IsolateGroup> {
// Prepares all threads in an isolate for Garbage Collection.
void ReleaseStoreBuffers();
void FlushMarkingStacks();
void EnableIncrementalBarrier(MarkingStack* marking_stack,
MarkingStack* deferred_marking_stack);
void DisableIncrementalBarrier();


@ -1637,8 +1637,8 @@ void Object::MakeUnusedSpaceTraversable(const Object& obj,
UntaggedObject::ClassIdTag::update(kTypedDataInt8ArrayCid, 0);
new_tags = UntaggedObject::SizeTag::update(leftover_size, new_tags);
const bool is_old = obj.ptr()->IsOldObject();
new_tags = UntaggedObject::AlwaysSetBit::update(true, new_tags);
new_tags = UntaggedObject::NotMarkedBit::update(true, new_tags);
new_tags = UntaggedObject::OldBit::update(is_old, new_tags);
new_tags = UntaggedObject::OldAndNotMarkedBit::update(is_old, new_tags);
new_tags =
UntaggedObject::OldAndNotRememberedBit::update(is_old, new_tags);
new_tags = UntaggedObject::NewBit::update(!is_old, new_tags);
@ -1660,8 +1660,8 @@ void Object::MakeUnusedSpaceTraversable(const Object& obj,
uword new_tags = UntaggedObject::ClassIdTag::update(kInstanceCid, 0);
new_tags = UntaggedObject::SizeTag::update(leftover_size, new_tags);
const bool is_old = obj.ptr()->IsOldObject();
new_tags = UntaggedObject::AlwaysSetBit::update(true, new_tags);
new_tags = UntaggedObject::NotMarkedBit::update(true, new_tags);
new_tags = UntaggedObject::OldBit::update(is_old, new_tags);
new_tags = UntaggedObject::OldAndNotMarkedBit::update(is_old, new_tags);
new_tags =
UntaggedObject::OldAndNotRememberedBit::update(is_old, new_tags);
new_tags = UntaggedObject::NewBit::update(!is_old, new_tags);
@ -2799,8 +2799,8 @@ void Object::InitializeObject(uword address,
tags = UntaggedObject::SizeTag::update(size, tags);
const bool is_old =
(address & kNewObjectAlignmentOffset) == kOldObjectAlignmentOffset;
tags = UntaggedObject::AlwaysSetBit::update(true, tags);
tags = UntaggedObject::NotMarkedBit::update(true, tags);
tags = UntaggedObject::OldBit::update(is_old, tags);
tags = UntaggedObject::OldAndNotMarkedBit::update(is_old, tags);
tags = UntaggedObject::OldAndNotRememberedBit::update(is_old, tags);
tags = UntaggedObject::NewBit::update(!is_old, tags);
tags = UntaggedObject::ImmutableBit::update(
@ -26948,10 +26948,12 @@ SuspendStatePtr SuspendState::Clone(Thread* thread,
dst.set_pc(src.pc());
// Trigger write barrier if needed.
if (dst.ptr()->IsOldObject()) {
dst.untag()->EnsureInRememberedSet(thread);
}
if (thread->is_marking()) {
thread->DeferredMarkingStackAddObject(dst.ptr());
if (!dst.untag()->IsRemembered()) {
dst.untag()->EnsureInRememberedSet(thread);
}
if (thread->is_marking()) {
thread->DeferredMarkingStackAddObject(dst.ptr());
}
}
}
return dst.ptr();


@ -230,8 +230,8 @@ void SetNewSpaceTaggingWord(ObjectPtr to, classid_t cid, uint32_t size) {
tags = UntaggedObject::SizeTag::update(size, tags);
tags = UntaggedObject::ClassIdTag::update(cid, tags);
tags = UntaggedObject::AlwaysSetBit::update(true, tags);
tags = UntaggedObject::NotMarkedBit::update(true, tags);
tags = UntaggedObject::OldBit::update(false, tags);
tags = UntaggedObject::OldAndNotMarkedBit::update(false, tags);
tags = UntaggedObject::OldAndNotRememberedBit::update(false, tags);
tags = UntaggedObject::CanonicalBit::update(false, tags);
tags = UntaggedObject::NewBit::update(true, tags);


@ -51,6 +51,12 @@ void UntaggedObject::Validate(IsolateGroup* isolate_group) const {
if (!NewBit::decode(tags)) {
FATAL("New object missing kNewBit: %" Px "\n", tags);
}
if (OldBit::decode(tags)) {
FATAL("New object has kOldBit: %" Px "\n", tags);
}
if (OldAndNotMarkedBit::decode(tags)) {
FATAL("New object has kOldAndNotMarkedBit: %" Px "\n", tags);
}
if (OldAndNotRememberedBit::decode(tags)) {
FATAL("New object has kOldAndNotRememberedBit: %" Px "\n", tags);
}
@ -58,6 +64,9 @@ void UntaggedObject::Validate(IsolateGroup* isolate_group) const {
if (NewBit::decode(tags)) {
FATAL("Old object has kNewBit: %" Px "\n", tags);
}
if (!OldBit::decode(tags)) {
FATAL("Old object missing kOldBit: %" Px "\n", tags);
}
}
const intptr_t class_id = ClassIdTag::decode(tags);
if (!isolate_group->class_table()->IsValidIndex(class_id)) {


@ -162,9 +162,9 @@ class UntaggedObject {
enum TagBits {
kCardRememberedBit = 0,
kCanonicalBit = 1,
kNotMarkedBit = 2, // Incremental barrier target.
kOldAndNotMarkedBit = 2, // Incremental barrier target.
kNewBit = 3, // Generational barrier target.
kAlwaysSetBit = 4, // Incremental barrier source.
kOldBit = 4, // Incremental barrier source.
kOldAndNotRememberedBit = 5, // Generational barrier source.
kImmutableBit = 6,
kReservedBit = 7,
@ -178,9 +178,9 @@ class UntaggedObject {
};
static constexpr intptr_t kGenerationalBarrierMask = 1 << kNewBit;
static constexpr intptr_t kIncrementalBarrierMask = 1 << kNotMarkedBit;
static constexpr intptr_t kIncrementalBarrierMask = 1 << kOldAndNotMarkedBit;
static constexpr intptr_t kBarrierOverlapShift = 2;
COMPILE_ASSERT(kNotMarkedBit + kBarrierOverlapShift == kAlwaysSetBit);
COMPILE_ASSERT(kOldAndNotMarkedBit + kBarrierOverlapShift == kOldBit);
COMPILE_ASSERT(kNewBit + kBarrierOverlapShift == kOldAndNotRememberedBit);
// The bit in the Smi tag position must be something that can be set to 0
@ -244,13 +244,14 @@ class UntaggedObject {
class CardRememberedBit
: public BitField<uword, bool, kCardRememberedBit, 1> {};
class NotMarkedBit : public BitField<uword, bool, kNotMarkedBit, 1> {};
class OldAndNotMarkedBit
: public BitField<uword, bool, kOldAndNotMarkedBit, 1> {};
class NewBit : public BitField<uword, bool, kNewBit, 1> {};
class CanonicalBit : public BitField<uword, bool, kCanonicalBit, 1> {};
class AlwaysSetBit : public BitField<uword, bool, kAlwaysSetBit, 1> {};
class OldBit : public BitField<uword, bool, kOldBit, 1> {};
class OldAndNotRememberedBit
: public BitField<uword, bool, kOldAndNotRememberedBit, 1> {};
@ -277,34 +278,41 @@ class UntaggedObject {
// Support for GC marking bit. Marked objects are either grey (not yet
// visited) or black (already visited).
static bool IsMarked(uword tags) { return !NotMarkedBit::decode(tags); }
bool IsMarked() const { return !tags_.Read<NotMarkedBit>(); }
static bool IsMarked(uword tags) { return !OldAndNotMarkedBit::decode(tags); }
bool IsMarked() const {
ASSERT(IsOldObject());
return !tags_.Read<OldAndNotMarkedBit>();
}
bool IsMarkedIgnoreRace() const {
return !tags_.ReadIgnoreRace<NotMarkedBit>();
ASSERT(IsOldObject());
return !tags_.ReadIgnoreRace<OldAndNotMarkedBit>();
}
void SetMarkBit() {
ASSERT(IsOldObject());
ASSERT(!IsMarked());
tags_.UpdateBool<NotMarkedBit>(false);
tags_.UpdateBool<OldAndNotMarkedBit>(false);
}
void SetMarkBitUnsynchronized() {
ASSERT(IsOldObject());
ASSERT(!IsMarked());
tags_.UpdateUnsynchronized<NotMarkedBit>(false);
tags_.UpdateUnsynchronized<OldAndNotMarkedBit>(false);
}
void SetMarkBitRelease() {
ASSERT(IsOldObject());
ASSERT(!IsMarked());
tags_.UpdateBool<NotMarkedBit, std::memory_order_release>(false);
tags_.UpdateBool<OldAndNotMarkedBit, std::memory_order_release>(false);
}
void ClearMarkBit() {
ASSERT(IsOldObject());
ASSERT(IsMarked());
tags_.UpdateBool<NotMarkedBit>(true);
}
void ClearMarkBitUnsynchronized() {
ASSERT(IsMarked());
tags_.UpdateUnsynchronized<NotMarkedBit>(true);
tags_.UpdateBool<OldAndNotMarkedBit>(true);
}
// Returns false if the bit was already set.
DART_WARN_UNUSED_RESULT
bool TryAcquireMarkBit() { return tags_.TryClear<NotMarkedBit>(); }
bool TryAcquireMarkBit() {
ASSERT(IsOldObject());
return tags_.TryClear<OldAndNotMarkedBit>();
}
// Canonical objects have the property that two canonical objects are
// logically equal iff they are the same object (pointer equal).
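
Every accessor above asserts IsOldObject() because, with this encoding restored, the mark bit only exists on old-space objects. TryAcquireMarkBit is what lets parallel markers race on the same object safely: only the thread that actually clears kOldAndNotMarkedBit pushes the object onto the grey stack. A simplified standalone sketch of that claim-then-push step (FakeObject, MarkPointer, and the std::atomic modelling are all invented for illustration) might be:

```c++
// Sketch only: a race-safe "claim then push" marking step modelled with
// std::atomic rather than the VM's tag container.
#include <atomic>
#include <cstdint>
#include <vector>

constexpr uint64_t kOldAndNotMarkedBit = uint64_t{1} << 2;

struct FakeObject {
  std::atomic<uint64_t> tags;
  bool is_old;
};

// Mirrors TryAcquireMarkBit: returns true only for the thread that actually
// transitions the object from not-marked to marked.
bool TryAcquireMarkBit(FakeObject* obj) {
  uint64_t old_tags = obj->tags.load(std::memory_order_relaxed);
  do {
    if ((old_tags & kOldAndNotMarkedBit) == 0) return false;  // already marked
  } while (!obj->tags.compare_exchange_weak(
      old_tags, old_tags & ~kOldAndNotMarkedBit, std::memory_order_relaxed));
  return true;
}

// One marker's visit of a pointer: new-space objects are never marked under
// this scheme; old-space objects are claimed at most once.
void MarkPointer(FakeObject* target, std::vector<FakeObject*>* grey_stack) {
  if (!target->is_old) return;
  if (TryAcquireMarkBit(target)) {
    grey_stack->push_back(target);  // only the winning thread greys it
  }
}

int main() {
  FakeObject obj;
  obj.tags.store(kOldAndNotMarkedBit, std::memory_order_relaxed);
  obj.is_old = true;
  std::vector<FakeObject*> grey;
  MarkPointer(&obj, &grey);  // claims the object
  MarkPointer(&obj, &grey);  // second visit is a no-op
  return grey.size() == 1 ? 0 : 1;
}
```
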
@ -331,10 +339,6 @@ class UntaggedObject {
ASSERT(IsOldObject());
tags_.UpdateBool<OldAndNotRememberedBit>(true);
}
void ClearRememberedBitUnsynchronized() {
ASSERT(IsOldObject());
tags_.UpdateUnsynchronized<OldAndNotRememberedBit>(true);
}
DART_FORCE_INLINE
void EnsureInRememberedSet(Thread* thread) {
@ -701,17 +705,16 @@ class UntaggedObject {
void CheckHeapPointerStore(ObjectPtr value, Thread* thread) {
uword source_tags = this->tags_;
uword target_tags = value->untag()->tags_;
uword overlap = (source_tags >> kBarrierOverlapShift) & target_tags &
thread->write_barrier_mask();
if (overlap != 0) {
if ((overlap & kGenerationalBarrierMask) != 0) {
if (((source_tags >> kBarrierOverlapShift) & target_tags &
thread->write_barrier_mask()) != 0) {
if (value->IsNewObject()) {
// Generational barrier: record when a store creates an
// old-and-not-remembered -> new reference.
EnsureInRememberedSet(thread);
}
if ((overlap & kIncrementalBarrierMask) != 0) {
} else {
// Incremental barrier: record when a store creates an
// any -> not-marked reference.
// old -> old-and-not-marked reference.
ASSERT(value->IsOldObject());
if (ClassIdTag::decode(target_tags) == kInstructionsCid) {
// Instruction pages may be non-writable. Defer marking.
thread->DeferredMarkingStackAddObject(value);
@ -730,10 +733,9 @@ class UntaggedObject {
Thread* thread) {
uword source_tags = this->tags_;
uword target_tags = value->untag()->tags_;
uword overlap = (source_tags >> kBarrierOverlapShift) & target_tags &
thread->write_barrier_mask();
if (overlap != 0) {
if ((overlap & kGenerationalBarrierMask) != 0) {
if (((source_tags >> kBarrierOverlapShift) & target_tags &
thread->write_barrier_mask()) != 0) {
if (value->IsNewObject()) {
// Generational barrier: record when a store creates an
// old-and-not-remembered -> new reference.
if (this->IsCardRemembered()) {
@ -741,10 +743,10 @@ class UntaggedObject {
} else if (this->TryAcquireRememberedBit()) {
thread->StoreBufferAddObject(static_cast<ObjectPtr>(this));
}
}
if ((overlap & kIncrementalBarrierMask) != 0) {
} else {
// Incremental barrier: record when a store creates an
// old -> old-and-not-marked reference.
ASSERT(value->IsOldObject());
if (ClassIdTag::decode(target_tags) == kInstructionsCid) {
// Instruction pages may be non-writable. Defer marking.
thread->DeferredMarkingStackAddObject(value);
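
In the restored barrier the slow path can branch on value->IsNewObject(): a new target can only need the generational barrier and an old target can only need the incremental one, so the two cases are mutually exclusive. For card-remembered objects the generational side marks a card instead of taking the per-object remembered bit. A condensed standalone sketch of that dispatch (toy types throughout; the mark-bit claim that presumably follows the kInstructionsCid check is cut off in the hunk above and only hinted at here) is:

```c++
// Sketch only: the slow-path dispatch behind the combined mask test, with toy
// stand-ins for the VM's thread, store buffer, and marking stacks.
#include <cstdio>
#include <vector>

struct ToyObject {
  bool is_new = false;
  bool card_remembered = false;  // e.g. a large array backed by a card table
  bool remembered = false;       // already in the store buffer
  bool is_instructions = false;  // possibly non-writable page: defer marking
  bool marked = false;
};

struct ToyThread {
  std::vector<ToyObject*> store_buffer;
  std::vector<ToyObject*> marking_stack;
  std::vector<ToyObject*> deferred_marking_stack;
};

// Reached only after the fast-path AND decided some barrier work is due.
void BarrierSlowPath(ToyObject* source, ToyObject* value, ToyThread* thread) {
  if (value->is_new) {
    // Generational barrier: remember the old-and-not-remembered source.
    if (source->card_remembered) {
      // The real code marks the card covering the slot instead of the object.
    } else if (!source->remembered) {
      source->remembered = true;
      thread->store_buffer.push_back(source);
    }
  } else {
    // Incremental barrier: grey the old-and-not-marked target.
    if (value->is_instructions) {
      thread->deferred_marking_stack.push_back(value);
    } else if (!value->marked) {  // stands in for the mark-bit claim
      value->marked = true;
      thread->marking_stack.push_back(value);
    }
  }
}

int main() {
  ToyThread thread;
  ToyObject old_source, new_value;
  new_value.is_new = true;
  BarrierSlowPath(&old_source, &new_value, &thread);
  std::printf("store buffer entries: %zu\n", thread.store_buffer.size());
  return 0;
}
```
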
@ -1809,7 +1811,6 @@ class UntaggedWeakArray : public UntaggedObject {
friend class MarkingVisitorBase;
template <bool>
friend class ScavengerVisitorBase;
friend class Scavenger;
};
// WeakArray is special in that it has a pointer field which is not
@ -3557,7 +3558,6 @@ class UntaggedWeakProperty : public UntaggedInstance {
friend class MarkingVisitorBase;
template <bool>
friend class ScavengerVisitorBase;
friend class Scavenger;
friend class FastObjectCopy; // For OFFSET_OF
friend class SlowObjectCopy; // For OFFSET_OF
};
@ -3590,7 +3590,6 @@ class UntaggedWeakReference : public UntaggedInstance {
friend class MarkingVisitorBase;
template <bool>
friend class ScavengerVisitorBase;
friend class Scavenger;
friend class ObjectGraph;
friend class FastObjectCopy; // For OFFSET_OF
friend class SlowObjectCopy; // For OFFSET_OF
@ -3704,7 +3703,6 @@ class UntaggedFinalizerEntry : public UntaggedInstance {
friend class MarkingVisitorBase;
template <bool>
friend class ScavengerVisitorBase;
friend class Scavenger;
friend class ObjectGraph;
};


@ -494,18 +494,25 @@ DEFINE_LEAF_RUNTIME_ENTRY(uword /*ObjectPtr*/,
uword /*ObjectPtr*/ object_in,
Thread* thread) {
ObjectPtr object = static_cast<ObjectPtr>(object_in);
// The allocation stubs will call this leaf method for newly allocated
// old space objects.
RELEASE_ASSERT(object->IsOldObject());
// If we eliminate a generational write barriers on allocations of an object
// we need to ensure it's either a new-space object or it has been added to
// the remembered set.
//
// NOTE: We use static_cast<>() instead of ::RawCast() to avoid handle
// NOTE: We use reinterpret_cast<>() instead of ::RawCast() to avoid handle
// allocations in debug mode. Handle allocations in leaf runtimes can cause
// memory leaks because they will allocate into a handle scope from the next
// outermost runtime code (to which the generated Dart code might not return
// in a long time).
bool add_to_remembered_set = true;
if (object->IsNewObject()) {
if (object->untag()->IsRemembered()) {
// Objects must not be added to the remembered set twice because the
// scavenger's visitor is not idempotent.
// Might already be remembered because of type argument store in
// AllocateArray or any field in CloneContext.
add_to_remembered_set = false;
} else if (object->IsArray()) {
const intptr_t length = Array::LengthOf(static_cast<ArrayPtr>(object));
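
This leaf entry backs write-barrier elimination in the allocation stubs: after a stub's unbarriered initializing stores, the object must either live in new-space or end up in the remembered set, and, while marking is running, be revisited by the marker. A deliberately small sketch of that decision (invented names; the Array/Context length special cases and the marking half are not shown in the hunk and are only approximated) might read:

```c++
// Sketch only: the invariant this leaf entry restores after barrier-eliminated
// initializing stores. Array/Context size special cases are omitted.
#include <cassert>

struct ToyObject {
  bool is_new = false;
  bool remembered = false;       // already in the store buffer
  bool deferred_marked = false;
};

struct ToyThread {
  bool is_marking = false;
  void AddToRememberedSet(ToyObject* obj) { obj->remembered = true; }
  void DeferredMarkingStackAdd(ToyObject* obj) { obj->deferred_marked = true; }
};

void EnsureRememberedAndMarkingDeferredSketch(ToyObject* obj, ToyThread* t) {
  // Generational half: an old object whose initializing stores skipped the
  // barrier must be rescanned by the next scavenge, i.e. be remembered.
  // Objects that are already remembered are not added twice.
  if (!obj->is_new && !obj->remembered) {
    t->AddToRememberedSet(obj);
  }
  // Incremental half: while marking runs, the object must also be revisited
  // by the marker. (Assumed from the entry's name and the deferred marking
  // stack used elsewhere in this diff; that part of the hunk is cut off.)
  if (t->is_marking) {
    t->DeferredMarkingStackAdd(obj);
  }
}

int main() {
  ToyThread thread;
  thread.is_marking = true;
  ToyObject obj;  // old-space, not yet remembered
  EnsureRememberedAndMarkingDeferredSketch(&obj, &thread);
  assert(obj.remembered && obj.deferred_marked);
  return 0;
}
```
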


@ -311,12 +311,6 @@ struct base_ptr_type<
#define DEFINE_COMPRESSED_POINTER(klass, base) \
class Compressed##klass##Ptr : public Compressed##base##Ptr { \
public: \
Compressed##klass##Ptr* operator->() { \
return this; \
} \
const Compressed##klass##Ptr* operator->() const { \
return this; \
} \
explicit Compressed##klass##Ptr(klass##Ptr uncompressed) \
: Compressed##base##Ptr(uncompressed) {} \
const klass##Ptr& operator=(const klass##Ptr& other) { \


@ -849,11 +849,6 @@ void Thread::MarkingStackAcquire() {
UntaggedObject::kIncrementalBarrierMask;
}
void Thread::MarkingStackFlush() {
isolate_group()->marking_stack()->PushBlock(marking_stack_block_);
marking_stack_block_ = isolate_group()->marking_stack()->PopEmptyBlock();
}
void Thread::DeferredMarkingStackRelease() {
MarkingStackBlock* block = deferred_marking_stack_block_;
deferred_marking_stack_block_ = nullptr;
@ -865,13 +860,6 @@ void Thread::DeferredMarkingStackAcquire() {
isolate_group()->deferred_marking_stack()->PopEmptyBlock();
}
void Thread::DeferredMarkingStackFlush() {
isolate_group()->deferred_marking_stack()->PushBlock(
deferred_marking_stack_block_);
deferred_marking_stack_block_ =
isolate_group()->deferred_marking_stack()->PopEmptyBlock();
}
Heap* Thread::heap() const {
return isolate_group_->heap();
}
@ -965,7 +953,7 @@ class RestoreWriteBarrierInvariantVisitor : public ObjectPointerVisitor {
for (; first != last + 1; first++) {
ObjectPtr obj = *first;
// Stores into new-space objects don't need a write barrier.
if (obj->IsImmediateObject()) continue;
if (obj->IsImmediateOrNewObject()) continue;
// To avoid adding too much work into the remembered set, skip large
// arrays. Write barrier elimination will not remove the barrier
@ -994,16 +982,13 @@ class RestoreWriteBarrierInvariantVisitor : public ObjectPointerVisitor {
switch (op_) {
case Thread::RestoreWriteBarrierInvariantOp::kAddToRememberedSet:
if (obj->IsOldObject()) {
obj->untag()->EnsureInRememberedSet(current_);
}
obj->untag()->EnsureInRememberedSet(current_);
if (current_->is_marking()) {
current_->DeferredMarkingStackAddObject(obj);
}
break;
case Thread::RestoreWriteBarrierInvariantOp::kAddToDeferredMarkingStack:
// Re-scan obj when finalizing marking.
ASSERT(current_->is_marking());
current_->DeferredMarkingStackAddObject(obj);
break;
}
@ -1036,7 +1021,8 @@ class RestoreWriteBarrierInvariantVisitor : public ObjectPointerVisitor {
// Dart frames preceding an exit frame to the store buffer or deferred
// marking stack.
void Thread::RestoreWriteBarrierInvariant(RestoreWriteBarrierInvariantOp op) {
ASSERT(IsAtSafepoint() || OwnsGCSafepoint() || this == Thread::Current());
ASSERT(IsAtSafepoint() || OwnsGCSafepoint());
ASSERT(IsDartMutatorThread());
const StackFrameIterator::CrossThreadPolicy cross_thread_policy =
StackFrameIterator::kAllowCrossThreadIteration;
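
RestoreWriteBarrierInvariantVisitor applies the same conservatism to objects referenced from Dart frames that compiled code may store into without barriers: immediates and new-space objects are skipped, and old objects are either remembered plus deferred-marked or only deferred-marked, depending on the op. A compact standalone sketch of the per-pointer decision (toy types; the "skip large arrays" refinement mentioned above is not modelled) could be:

```c++
// Sketch only: the per-pointer decision the visitor makes for objects
// referenced from Dart frames.
#include <vector>

enum class Op { kAddToRememberedSet, kAddToDeferredMarkingStack };

struct ToyObject {
  bool is_immediate_or_new = false;
  bool remembered = false;
};

struct ToyThread {
  bool is_marking = false;
  std::vector<ToyObject*> store_buffer;
  std::vector<ToyObject*> deferred_marking_stack;
};

void VisitFramePointer(ToyObject* obj, ToyThread* thread, Op op) {
  // Immediates have no fields, and stores into new-space objects need no
  // write barrier, so both are skipped.
  if (obj->is_immediate_or_new) return;
  switch (op) {
    case Op::kAddToRememberedSet:
      if (!obj->remembered) {
        obj->remembered = true;
        thread->store_buffer.push_back(obj);
      }
      if (thread->is_marking) {
        thread->deferred_marking_stack.push_back(obj);
      }
      break;
    case Op::kAddToDeferredMarkingStack:
      // Re-scan obj when finalizing marking.
      thread->deferred_marking_stack.push_back(obj);
      break;
  }
}

int main() {
  ToyThread thread;
  thread.is_marking = true;
  ToyObject old_obj;
  VisitFramePointer(&old_obj, &thread, Op::kAddToRememberedSet);
  return thread.deferred_marking_stack.size() == 1 ? 0 : 1;
}
```
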


@ -1396,10 +1396,8 @@ class Thread : public ThreadState {
void MarkingStackRelease();
void MarkingStackAcquire();
void MarkingStackFlush();
void DeferredMarkingStackRelease();
void DeferredMarkingStackAcquire();
void DeferredMarkingStackFlush();
void set_safepoint_state(uint32_t value) { safepoint_state_ = value; }
void EnterSafepointUsingLock();


@ -105,19 +105,6 @@ void ThreadRegistry::ReleaseMarkingStacks() {
}
}
void ThreadRegistry::FlushMarkingStacks() {
MonitorLocker ml(threads_lock());
Thread* thread = active_list_;
while (thread != nullptr) {
if (!thread->BypassSafepoints() && thread->is_marking()) {
thread->MarkingStackFlush();
thread->DeferredMarkingStackFlush();
ASSERT(thread->is_marking());
}
thread = thread->next_;
}
}
void ThreadRegistry::AddToActiveListLocked(Thread* thread) {
ASSERT(thread != nullptr);
ASSERT(threads_lock()->IsOwnedByCurrentThread());


@ -37,7 +37,6 @@ class ThreadRegistry {
void ReleaseStoreBuffers();
void AcquireMarkingStacks();
void ReleaseMarkingStacks();
void FlushMarkingStacks();
// Concurrent-approximate number of active isolates in the active_list
intptr_t active_isolates_count() { return active_isolates_count_.load(); }