[vm] Make reloading of isolate groups use new safepoint-level mechanism

The current hot-reload implementation [0] performs a reload by first
sending OOB messages to all isolates and waiting until those OOB
messages are handled. The handler of the OOB message blocks the thread
(and unschedules the isolate) and notifies the thread performing the
reload that it's ready.

This requires that all isolates within a group can actually run & block.
This is the case for the VM implementation of isolates (as they run on a
thread pool of unlimited size).

Flutter, however, multiplexes several engine isolates on the same OS
thread. Reloading can then result in one engine isolate performing the
reload while waiting for another to act on the OOB message (which it
never will, as it's multiplexed on the same thread as the former).

Now that we have a more flexible safepointing mechanism (introduced in
[1]) we can utilize it for hot reloading by introducing a new
"reloading" safepoint level.

Reload safepoints
-----------------------

We introduce a new safepoint level (SafepointLevel::kGCAndDeoptAndReload).

Being at a "reload safepoint" implies being at a "deopt safepoint"
which implies being at a "gc safepoint".
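
For reference, the levels form a hierarchy where each level subsumes the
previous one (this is the `SafepointLevel` enum in thread.h, as changed
in the diff below):

  enum SafepointLevel {
    // Safe to GC.
    kGC,
    // Safe to GC as well as Deopt.
    kGCAndDeopt,
    // Safe to GC, Deopt as well as Reload.
    kGCAndDeoptAndReload,
    // Number of levels.
    kNumLevels,
  };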

Code has to explicitly opt into making safepoint checks participate in /
check into "reload safepoints" using [ReloadParticipationScope]. We do
that at certain well-defined places where reload is possible (e.g. event
loop boundaries, descheduling of isolates, OOB message processing, ...).
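
For example, the handler of the kCheckForReload OOB message is such a
place; condensed from the isolate.cc change in the diff below:

  case Isolate::kCheckForReload: {
    // [ OOB, kCheckForReload, ignored ]
    ReloadParticipationScope allow_reload(T);  // Opt into reload safepoints.
    T->CheckForSafepoint();  // May now check into a pending reload safepoint.
    break;
  }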

While running under [NoReloadScope] we disable checking into "reload
safepoints".

Initiator of hot-reload
-----------------------

When a mutator initiates a reload operation (e.g. as part of a
`ReloadSources` `vm-service` API call) it will use a
[ReloadSafepointOperationScope] to get all other mutators to a
safepoint.
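
A condensed view of the corresponding [ReloadOperationScope] (see the
isolate_reload.h change in the diff below) shows the pieces involved:

  // Ensures all other mutators are stopped at a well-defined place where
  // reload is allowed.
  class ReloadOperationScope : public StackResource {
    // The BG compiler won't check into reload safepoints, so stop it first.
    NoBackgroundCompilerScope stop_bg_compiler_;
    // Enables this thread to perform (and check into) reload operations.
    ReloadParticipationScope allow_reload_;
    // The actual safepoint operation parking all other mutators.
    ReloadSafepointOperationScope safepoint_operation_;
  };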

For mutators that aren't already at a "reload safepoint", we'll
notify them via an OOB message (instead of scheduling kVMInterrupt).

While waiting for all mutators to check into a "reload safepoint", the
thread is itself at a safepoint (as other mutators may perform lower
level safepoint operations - e.g. GC, Deopt, ...).

Once all mutators are at a "reload safepoint" the thread will take
ownership of all safepoint levels.

Other mutators
-----------------------

Mutators can be at a "reload safepoint" already (e.g. isolate is not
scheduled). If they try to exit safepoint they will block until the
reload operation is finished.
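
For example, when the isolate is scheduled again, [Thread::EnterIsolate]
exits the safepoint under a participation scope and thus blocks right
there (from the thread.cc change in the diff below):

  {
    // Descheduled isolates are reloadable (if nothing else prevents it).
    RawReloadParticipationScope enable_reload(thread);
    thread->ExitSafepoint();  // Blocks while a reload operation is ongoing.
  }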

Mutators that are not at a "reload safepoint" (e.g. executing Dart or VM
code) will be sent an OOB message indicating they should check into a
"reload safepoint". We assume mutators make progress until they can
process the OOB message.

Mutators may run under a [NoReloadScope] when handling the OOB message.
In that case they will not check into the "reload safepoint" and simply
ignore the message. To ensure the thread will eventually check in, the
[~NoReloadScope] destructor checks for a pending request & sends itself
a new OOB message indicating reload should happen, eventually getting
the mutator to process the OOB message (which is a well-defined place
where we can check into the reload safepoint).
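
Condensed from the [~NoReloadScope] destructor in thread.cc (diff below):

  NoReloadScope::~NoReloadScope() {
    thread()->no_reload_scope_depth_ -= 1;
    auto isolate = thread()->isolate();
    const intptr_t state = thread()->safepoint_state();
    if (thread()->no_reload_scope_depth_ == 0) {
      // We may have ignored a reload OOB message while inside this scope:
      // re-send it so it gets handled at a well-defined place.
      if (isolate != nullptr &&
          Thread::IsSafepointLevelRequested(
              state, SafepointLevel::kGCAndDeoptAndReload)) {
        isolate->SendInternalLibMessage(Isolate::kCheckForReload,
                                        /*ignored=*/-1);
      }
    }
  }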

Non-isolate mutators such as the background compiler do not react to OOB
messages. This means that such mutators either have to be stopped before
initiating a reload safepoint operation (e.g. the bg compiler), have to
explicitly opt into participating in reload safepoints, or have to
deschedule themselves eventually.

Misc
----

Owning a reload safepoint operation implies also owning the deopt & gc
safepoint operation. Yet some code would like to ensure it actually runs
under a [DeoptSafepointOperationScope]/[GCSafepointOperationScope].
=> `Thread::OwnsGCSafepoint()` handles that.

While performing hot-reload we may exercise common code (e.g. the kernel
loader, ...) that acquires safepoint locks. Normally it's disallowed to
acquire safepoint locks while holding a safepoint operation (since
mutators may be stopped at places where they hold locks, creating
deadlock scenarios).
=> We explicitly opt code into participating in reload safepointing
requests. Those well-defined places don't hold safepoint locks.
=> `Thread::CanAcquireSafepointLocks()` will return `true` despite
owning a reload operation. (But if one also holds a deopt/gc safepoint
operation it will return `false`.)
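
The resulting check is a one-liner (from the thread.cc change in the
diff below):

  bool Thread::CanAcquireSafepointLocks() const {
    // True when we own no safepoint operation or only the reload level;
    // false when we own a GC or Deopt safepoint operation.
    return isolate_group()->safepoint_handler()->InnermostSafepointOperation(
               this) >= SafepointLevel::kGCAndDeoptAndReload;
  }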

Example where this matters: As part of hot-reload, we load kernel which
may create new symbols. The symbol creation code may acquire the symbol
lock and `InsertNewOrGet()` a symbol. This is safe as other mutators
don't hold the symbol lock at reload safepoints. The same cannot be said
for Deopt/GC safepoint operations - as they can interrupt code at many
more places where there's no guarantee that no locks are held.

[0] https://dart-review.googlesource.com/c/sdk/+/187461
[1] https://dart-review.googlesource.com/c/sdk/+/196927

Issue https://github.com/flutter/flutter/issues/124546

TEST=Newly added Reload_* tests.

Change-Id: I6842d7d2b284d043cc047fd702b7c5c7dd1fa3c5
Reviewed-on: https://dart-review.googlesource.com/c/sdk/+/296183
Commit-Queue: Martin Kustermann <kustermann@google.com>
Reviewed-by: Slava Egorov <vegorov@google.com>
Martin Kustermann 2023-04-21 13:56:49 +00:00 committed by Commit Queue
parent 742612ddac
commit 2ea92acba5
13 changed files with 621 additions and 283 deletions

View file

@ -65,6 +65,8 @@ SafepointHandler::SafepointHandler(IsolateGroup* isolate_group)
new LevelHandler(isolate_group, SafepointLevel::kGC);
handlers_[SafepointLevel::kGCAndDeopt] =
new LevelHandler(isolate_group, SafepointLevel::kGCAndDeopt);
handlers_[SafepointLevel::kGCAndDeoptAndReload] =
new LevelHandler(isolate_group, SafepointLevel::kGCAndDeoptAndReload);
}
SafepointHandler::~SafepointHandler() {
@ -79,6 +81,7 @@ void SafepointHandler::SafepointThreads(Thread* T, SafepointLevel level) {
ASSERT(T->execution_state() == Thread::kThreadInVM);
ASSERT(T->current_safepoint_level() >= level);
MallocGrowableArray<Dart_Port> oob_isolates;
{
MonitorLocker tl(threads_lock());
@ -116,7 +119,12 @@ void SafepointHandler::SafepointThreads(Thread* T, SafepointLevel level) {
handlers_[level]->SetSafepointInProgress(T);
// Ensure a thread is at a safepoint or notify it to get to one.
handlers_[level]->NotifyThreadsToGetToSafepointLevel(T);
handlers_[level]->NotifyThreadsToGetToSafepointLevel(T, &oob_isolates);
}
for (auto main_port : oob_isolates) {
Isolate::SendInternalLibMessage(main_port, Isolate::kCheckForReload,
/*ignored=*/-1);
}
// Now wait for all threads that are not already at a safepoint to check-in.
@ -152,7 +160,8 @@ void SafepointHandler::AssertWeDoNotOwnLowerLevelSafepoints(
}
void SafepointHandler::LevelHandler::NotifyThreadsToGetToSafepointLevel(
Thread* T) {
Thread* T,
MallocGrowableArray<Dart_Port>* oob_isolates) {
ASSERT(num_threads_not_parked_ == 0);
for (auto current = isolate_group()->thread_registry()->active_list();
current != nullptr; current = current->next()) {
@ -160,9 +169,25 @@ void SafepointHandler::LevelHandler::NotifyThreadsToGetToSafepointLevel(
if (!current->BypassSafepoints() && current != T) {
const uint32_t state = current->SetSafepointRequested(level_, true);
if (!Thread::IsAtSafepoint(level_, state)) {
// Send OOB message to get it to safepoint.
if (current->IsDartMutatorThread()) {
current->ScheduleInterrupts(Thread::kVMInterrupt);
if (level_ == SafepointLevel::kGCAndDeoptAndReload) {
// Interrupt the mutator by sending a reload OOB message. The
// mutator will only check in once it's handling the reload OOB
// message.
//
// If there's no isolate, it may be a helper thread that has entered
// via `Thread::EnterIsolateGroupAsHelper()`. In that case we cannot
// send an OOB message. Instead we'll have to wait until that thread
// de-schedules itself.
auto isolate = current->isolate();
if (isolate != nullptr) {
oob_isolates->Add(isolate->main_port());
}
} else {
// Interrupt the mutator and ask it to block at any interruption
// point.
if (current->IsDartMutatorThread()) {
current->ScheduleInterrupts(Thread::kVMInterrupt);
}
}
MonitorLocker sl(&parked_lock_);
num_threads_not_parked_++;

View file

@ -50,6 +50,17 @@ class DeoptSafepointOperationScope : public SafepointOperationScope {
DISALLOW_COPY_AND_ASSIGN(DeoptSafepointOperationScope);
};
// Gets all mutators to a safepoint where GC, Deopt and Reload is allowed.
class ReloadSafepointOperationScope : public SafepointOperationScope {
public:
explicit ReloadSafepointOperationScope(Thread* T)
: SafepointOperationScope(T, SafepointLevel::kGCAndDeoptAndReload) {}
~ReloadSafepointOperationScope() {}
private:
DISALLOW_COPY_AND_ASSIGN(ReloadSafepointOperationScope);
};
// A stack based scope that can be used to perform an operation after getting
// all threads to a safepoint. At the end of the operation all the threads are
// resumed. Allocations in the scope will force heap growth.
@ -140,7 +151,9 @@ class SafepointHandler {
friend class SafepointHandler;
// Helper methods for [SafepointThreads]
void NotifyThreadsToGetToSafepointLevel(Thread* T);
void NotifyThreadsToGetToSafepointLevel(
Thread* T,
MallocGrowableArray<Dart_Port>* oob_isolates);
void WaitUntilThreadsReachedSafepointLevel();
// Helper methods for [ResumeThreads]
@ -314,7 +327,7 @@ class TransitionGeneratedToNative : public TransitionSafepointState {
class TransitionVMToBlocked : public TransitionSafepointState {
public:
explicit TransitionVMToBlocked(Thread* T) : TransitionSafepointState(T) {
ASSERT(!T->OwnsSafepoint());
ASSERT(T->CanAcquireSafepointLocks());
// A thread blocked on a monitor is considered to be at a safepoint.
ASSERT(T->execution_state() == Thread::kThreadInVM);
T->set_execution_state(Thread::kThreadInBlockedState);

View file

@ -10,7 +10,10 @@
#include "vm/heap/safepoint.h"
#include "vm/isolate.h"
#include "vm/isolate_reload.h"
#include "vm/lockers.h"
#include "vm/message_handler.h"
#include "vm/message_snapshot.h"
#include "vm/random.h"
#include "vm/thread_pool.h"
#include "vm/unit_test.h"
@ -296,16 +299,19 @@ class CheckinTask : public StateMachineTask {
SafepointLevel level,
std::atomic<intptr_t>* gc_only_checkins,
std::atomic<intptr_t>* deopt_checkin,
std::atomic<intptr_t>* reload_checkin,
std::atomic<intptr_t>* timeout_checkin)
: StateMachineTask::Data(isolate_group),
level(level),
gc_only_checkins(gc_only_checkins),
deopt_checkin(deopt_checkin),
reload_checkin(reload_checkin),
timeout_checkin(timeout_checkin) {}
SafepointLevel level;
std::atomic<intptr_t>* gc_only_checkins;
std::atomic<intptr_t>* deopt_checkin;
std::atomic<intptr_t>* reload_checkin;
std::atomic<intptr_t>* timeout_checkin;
};
@ -331,12 +337,20 @@ class CheckinTask : public StateMachineTask {
break;
}
case SafepointLevel::kGCAndDeopt: {
// This thread should join any safepoint operations.
// This thread should join only GC and Deopt safepoint operations.
if (SafepointIfRequested(thread_, data()->deopt_checkin)) {
last_sync = OS::GetCurrentTimeMillis();
}
break;
}
case SafepointLevel::kGCAndDeoptAndReload: {
// This thread should join any safepoint operations.
ReloadParticipationScope allow_reload(thread_);
if (SafepointIfRequested(thread_, data()->reload_checkin)) {
last_sync = OS::GetCurrentTimeMillis();
}
break;
}
case SafepointLevel::kNumLevels:
case SafepointLevel::kNoSafepoint:
UNREACHABLE();
@ -352,6 +366,7 @@ class CheckinTask : public StateMachineTask {
// thread continue.
const auto now = OS::GetCurrentTimeMillis();
if ((now - last_sync) > 1000) {
ReloadParticipationScope allow_reload(thread_);
if (SafepointIfRequested(thread_, data()->timeout_checkin)) {
last_sync = now;
}
@ -378,11 +393,13 @@ ISOLATE_UNIT_TEST_CASE(SafepointOperation_SafepointPointTest) {
const intptr_t kTaskCount = 6;
std::atomic<intptr_t> gc_only_checkins[kTaskCount];
std::atomic<intptr_t> deopt_checkin[kTaskCount];
std::atomic<intptr_t> deopt_checkins[kTaskCount];
std::atomic<intptr_t> reload_checkins[kTaskCount];
std::atomic<intptr_t> timeout_checkins[kTaskCount];
for (intptr_t i = 0; i < kTaskCount; ++i) {
gc_only_checkins[i] = 0;
deopt_checkin[i] = 0;
deopt_checkins[i] = 0;
reload_checkins[i] = 0;
timeout_checkins[i] = 0;
}
@ -390,12 +407,13 @@ ISOLATE_UNIT_TEST_CASE(SafepointOperation_SafepointPointTest) {
switch (task_id) {
case 0:
case 1:
case 2:
return SafepointLevel::kGC;
case 2:
case 3:
return SafepointLevel::kGCAndDeopt;
case 4:
case 5:
return SafepointLevel::kGCAndDeopt;
return SafepointLevel::kGCAndDeoptAndReload;
default:
UNREACHABLE();
return SafepointLevel::kGC;
@ -405,8 +423,8 @@ ISOLATE_UNIT_TEST_CASE(SafepointOperation_SafepointPointTest) {
while (true) {
bool ready = true;
for (intptr_t i = 0; i < kTaskCount; ++i) {
const intptr_t all =
gc_only_checkins[i] + deopt_checkin[i] + timeout_checkins[i];
const intptr_t all = gc_only_checkins[i] + deopt_checkins[i] +
reload_checkins[i] + timeout_checkins[i];
if (all != syncs) {
ready = false;
break;
@ -422,9 +440,9 @@ ISOLATE_UNIT_TEST_CASE(SafepointOperation_SafepointPointTest) {
std::vector<std::shared_ptr<CheckinTask::Data>> threads;
for (intptr_t i = 0; i < kTaskCount; ++i) {
const auto level = task_to_level(i);
std::unique_ptr<CheckinTask::Data> data(
new CheckinTask::Data(isolate_group, level, &gc_only_checkins[i],
&deopt_checkin[i], &timeout_checkins[i]));
std::unique_ptr<CheckinTask::Data> data(new CheckinTask::Data(
isolate_group, level, &gc_only_checkins[i], &deopt_checkins[i],
&reload_checkins[i], &timeout_checkins[i]));
threads.push_back(std::move(data));
}
@ -446,9 +464,13 @@ ISOLATE_UNIT_TEST_CASE(SafepointOperation_SafepointPointTest) {
wait_for_sync(1); // Wait for threads to exit safepoint
{ DeoptSafepointOperationScope safepoint_operation(thread); }
wait_for_sync(2); // Wait for threads to exit safepoint
{ GcSafepointOperationScope safepoint_operation(thread); }
{ ReloadOperationScope reload(thread); }
wait_for_sync(3); // Wait for threads to exit safepoint
{ GcSafepointOperationScope safepoint_operation(thread); }
wait_for_sync(4); // Wait for threads to exit safepoint
{ DeoptSafepointOperationScope safepoint_operation(thread); }
wait_for_sync(5); // Wait for threads to exit safepoint
{ ReloadOperationScope reload(thread); }
}
for (intptr_t i = 0; i < kTaskCount; i++) {
threads[i]->MarkAndNotify(CheckinTask::kPleaseExit);
@ -459,13 +481,21 @@ ISOLATE_UNIT_TEST_CASE(SafepointOperation_SafepointPointTest) {
for (intptr_t i = 0; i < kTaskCount; ++i) {
switch (task_to_level(i)) {
case SafepointLevel::kGC:
EXPECT_EQ(0, deopt_checkin[i]);
EXPECT_EQ(2, gc_only_checkins[i]);
EXPECT_EQ(2, timeout_checkins[i]);
EXPECT_EQ(0, deopt_checkins[i]);
EXPECT_EQ(0, reload_checkins[i]);
EXPECT_EQ(4, timeout_checkins[i]);
break;
case SafepointLevel::kGCAndDeopt:
EXPECT_EQ(4, deopt_checkin[i]);
EXPECT_EQ(0, gc_only_checkins[i]);
EXPECT_EQ(4, deopt_checkins[i]);
EXPECT_EQ(0, reload_checkins[i]);
EXPECT_EQ(2, timeout_checkins[i]);
break;
case SafepointLevel::kGCAndDeoptAndReload:
EXPECT_EQ(0, gc_only_checkins[i]);
EXPECT_EQ(0, deopt_checkins[i]);
EXPECT_EQ(6, reload_checkins[i]);
EXPECT_EQ(0, timeout_checkins[i]);
break;
case SafepointLevel::kNumLevels:
@ -607,4 +637,200 @@ ISOLATE_UNIT_TEST_CASE_WITH_EXPECTATION(
DeoptSafepointOperationScope safepoint_scope2(thread);
}
class IsolateExitScope {
public:
IsolateExitScope() : saved_isolate_(Dart_CurrentIsolate()) {
Dart_ExitIsolate();
}
~IsolateExitScope() { Dart_EnterIsolate(saved_isolate_); }
private:
Dart_Isolate saved_isolate_;
};
ISOLATE_UNIT_TEST_CASE(ReloadScopes_Test) {
// Unscheduling an isolate will enter a safepoint that is reloadable.
{
TransitionVMToNative transition(thread);
IsolateExitScope isolate_leave_scope;
EXPECT(thread->IsAtSafepoint(SafepointLevel::kGCAndDeoptAndReload));
}
// Unscheduling an isolate with active [NoReloadScope] will enter a safepoint
// that is not reloadable.
{
// [NoReloadScope] only works if reload is supported.
#if !defined(PRODUCT)
NoReloadScope no_reload_scope(thread);
TransitionVMToNative transition(thread);
IsolateExitScope isolate_leave_scope;
EXPECT(!thread->IsAtSafepoint(SafepointLevel::kGCAndDeoptAndReload));
EXPECT(thread->IsAtSafepoint(SafepointLevel::kGCAndDeopt));
#endif // !defined(PRODUCT)
}
// Transitioning to native doesn't mean we enter a safepoint that is
// reloadable.
// => We may want to allow this in the future (so e.g. isolates that perform
// blocking FFI call can be reloaded while being blocked).
{
TransitionVMToNative transition(thread);
EXPECT(!thread->IsAtSafepoint(SafepointLevel::kGCAndDeoptAndReload));
EXPECT(thread->IsAtSafepoint(SafepointLevel::kGCAndDeopt));
}
// Transitioning to native with explicit [ReloadParticipationScope] will
// enter a safepoint that is reloadable.
{
ReloadParticipationScope enable_reload(thread);
TransitionVMToNative transition(thread);
EXPECT(thread->IsAtSafepoint(SafepointLevel::kGCAndDeoptAndReload));
}
}
class ReloadTask : public StateMachineTask {
public:
using Data = StateMachineTask::Data;
explicit ReloadTask(std::shared_ptr<Data> data) : StateMachineTask(data) {}
protected:
virtual void RunInternal() {
ReloadOperationScope reload_operation_scope(thread_);
}
};
ISOLATE_UNIT_TEST_CASE(Reload_AtReloadSafepoint) {
auto isolate = thread->isolate();
auto messages = isolate->message_handler();
ThreadPool pool;
{
ReloadParticipationScope allow_reload(thread);
// We are not at a safepoint.
ASSERT(!thread->IsAtSafepoint());
// Enter a reload safepoint.
thread->EnterSafepoint();
{
// The [ReloadTask] will trigger a reload safepoint operation, sees that
// we are at a reload safepoint & finishes without sending an OOB message.
std::shared_ptr<ReloadTask::Data> task(
new ReloadTask::Data(isolate->group()));
pool.Run<ReloadTask>(task);
task->WaitUntil(ReloadTask::kEntered);
task->MarkAndNotify(ReloadTask::kPleaseExit);
task->WaitUntil(ReloadTask::kExited);
}
thread->ExitSafepoint();
EXPECT(!messages->HasOOBMessages());
}
}
static void EnsureValidOOBMessage(Thread* thread,
Isolate* isolate,
std::unique_ptr<Message> message) {
EXPECT(message->IsOOB());
EXPECT(message->dest_port() == isolate->main_port());
const auto& msg = Object::Handle(ReadMessage(thread, message.get()));
EXPECT(msg.IsArray());
const auto& array = Array::Cast(msg);
EXPECT(array.Length() == 3);
EXPECT(Smi::Value(Smi::RawCast(array.At(0))) == Message::kIsolateLibOOBMsg);
EXPECT(Smi::Value(Smi::RawCast(array.At(1))) == Isolate::kCheckForReload);
// 3rd value is ignored.
}
ISOLATE_UNIT_TEST_CASE(Reload_NotAtSafepoint) {
auto isolate = thread->isolate();
auto messages = isolate->message_handler();
ThreadPool pool;
std::shared_ptr<ReloadTask::Data> task(
new ReloadTask::Data(isolate->group()));
pool.Run<ReloadTask>(task);
task->WaitUntil(ReloadTask::kEntered);
// We are not at a safepoint. The [ReloadTask] will trigger a reload
// safepoint operation, sees that we are not at a reload safepoint and
// instead sends us an OOB message.
ASSERT(!thread->IsAtSafepoint());
while (!messages->HasOOBMessages()) {
OS::Sleep(1000);
}
// Examine the OOB message for its content.
std::unique_ptr<Message> message = messages->StealOOBMessage();
EnsureValidOOBMessage(thread, isolate, std::move(message));
// Finally participate in the reload safepoint and finish.
{
ReloadParticipationScope allow_reload(thread);
thread->BlockForSafepoint();
}
task->MarkAndNotify(ReloadTask::kPleaseExit);
task->WaitUntil(ReloadTask::kExited);
}
ISOLATE_UNIT_TEST_CASE(Reload_AtNonReloadSafepoint) {
auto isolate = thread->isolate();
auto messages = isolate->message_handler();
ThreadPool pool;
// The [ReloadTask] will trigger a reload safepoint operation, sees that
// we are not at a reload safepoint & sends us an OOB message and waits
// for us to check in.
std::shared_ptr<ReloadTask::Data> task(
new ReloadTask::Data(isolate->group()));
pool.Run<ReloadTask>(task);
task->WaitUntil(ReloadTask::kEntered);
{
NoReloadScope no_reload(thread);
// We are not at a safepoint.
ASSERT(!thread->IsAtSafepoint());
// Enter a non-reload safepoint.
thread->EnterSafepoint();
{
// We are at a safepoint but not a reload safepoint. So we'll get an OOB
// message.
ASSERT(thread->IsAtSafepoint());
while (!messages->HasOOBMessages()) {
OS::Sleep(1000);
}
// Ensure we got a valid OOB message.
std::unique_ptr<Message> message = messages->StealOOBMessage();
EnsureValidOOBMessage(thread, isolate, std::move(message));
}
thread->ExitSafepoint();
EXPECT(!messages->HasOOBMessages());
}
// We left the [NoReloadScope], whose destructor should detect that a
// reload safepoint operation is requested and re-send the OOB message to
// the current isolate.
EXPECT(messages->HasOOBMessages());
std::unique_ptr<Message> message = messages->StealOOBMessage();
EnsureValidOOBMessage(thread, isolate, std::move(message));
// Finally participate in the reload safepoint and finish.
{
ReloadParticipationScope allow_reload(thread);
thread->BlockForSafepoint();
}
task->MarkAndNotify(ReloadTask::kPleaseExit);
task->WaitUntil(ReloadTask::kExited);
}
} // namespace dart

View file

@ -343,9 +343,6 @@ IsolateGroup::IsolateGroup(std::shared_ptr<IsolateGroupSource> source,
api_state_(new ApiState()),
thread_registry_(new ThreadRegistry()),
safepoint_handler_(new SafepointHandler(this)),
#if !defined(PRODUCT) && !defined(DART_PRECOMPILED_RUNTIME)
reload_handler_(new ReloadHandler()),
#endif
store_buffer_(new StoreBuffer()),
heap_(nullptr),
saved_unlinked_calls_(Array::null()),
@ -434,13 +431,10 @@ IsolateGroup::~IsolateGroup() {
}
void IsolateGroup::RegisterIsolate(Isolate* isolate) {
{
SafepointWriteRwLocker ml(Thread::Current(), isolates_lock_.get());
ASSERT(isolates_lock_->IsCurrentThreadWriter());
isolates_.Append(isolate);
isolate_count_++;
}
NOT_IN_PRODUCT(NOT_IN_PRECOMPILED(reload_handler()->RegisterIsolate()));
SafepointWriteRwLocker ml(Thread::Current(), isolates_lock_.get());
ASSERT(isolates_lock_->IsCurrentThreadWriter());
isolates_.Append(isolate);
isolate_count_++;
}
bool IsolateGroup::ContainsOnlyOneIsolate() {
@ -457,11 +451,8 @@ void IsolateGroup::RunWithLockedGroup(std::function<void()> fun) {
}
void IsolateGroup::UnregisterIsolate(Isolate* isolate) {
NOT_IN_PRODUCT(NOT_IN_PRECOMPILED(reload_handler()->UnregisterIsolate()));
{
SafepointWriteRwLocker ml(Thread::Current(), isolates_lock_.get());
isolates_.Remove(isolate);
}
SafepointWriteRwLocker ml(Thread::Current(), isolates_lock_.get());
isolates_.Remove(isolate);
}
bool IsolateGroup::UnregisterIsolateDecrementCount() {
@ -577,7 +568,6 @@ void IsolateGroup::set_saved_unlinked_calls(const Array& saved_unlinked_calls) {
saved_unlinked_calls_ = saved_unlinked_calls.ptr();
}
void IsolateGroup::IncreaseMutatorCount(Isolate* mutator,
bool is_nested_reenter) {
ASSERT(mutator->group() == this);
@ -879,18 +869,42 @@ void IsolateGroup::ValidateConstants() {
#endif // DEBUG
void Isolate::SendInternalLibMessage(LibMsgId msg_id, uint64_t capability) {
const Array& msg = Array::Handle(Array::New(3));
Object& element = Object::Handle();
const bool ok = SendInternalLibMessage(main_port(), msg_id, capability);
if (!ok) UNREACHABLE();
}
element = Smi::New(Message::kIsolateLibOOBMsg);
msg.SetAt(0, element);
element = Smi::New(msg_id);
msg.SetAt(1, element);
element = Capability::New(capability);
msg.SetAt(2, element);
bool Isolate::SendInternalLibMessage(Dart_Port main_port,
LibMsgId msg_id,
uint64_t capability) {
Dart_CObject array_entry_msg_kind;
array_entry_msg_kind.type = Dart_CObject_kInt64;
array_entry_msg_kind.value.as_int64 = Message::kIsolateLibOOBMsg;
PortMap::PostMessage(WriteMessage(/* same_group */ false, msg, main_port(),
Message::kOOBPriority));
Dart_CObject array_entry_msg_id;
array_entry_msg_id.type = Dart_CObject_kInt64;
array_entry_msg_id.value.as_int64 = msg_id;
Dart_CObject array_entry_capability;
array_entry_capability.type = Dart_CObject_kCapability;
array_entry_capability.value.as_capability.id = capability;
Dart_CObject* array_entries[3] = {
&array_entry_msg_kind,
&array_entry_msg_id,
&array_entry_capability,
};
Dart_CObject message;
message.type = Dart_CObject_kArray;
message.value.as_array.values = array_entries;
message.value.as_array.length = ARRAY_SIZE(array_entries);
AllocOnlyStackZone zone;
std::unique_ptr<Message> msg = WriteApiMessage(
zone.GetZone(), &message, main_port, Message::kOOBPriority);
if (msg == nullptr) UNREACHABLE();
return PortMap::PostMessage(std::move(msg));
}
void IsolateGroup::set_object_store(ObjectStore* object_store) {
@ -1138,7 +1152,10 @@ ErrorPtr IsolateMessageHandler::HandleLibMessage(const Array& message) {
case Isolate::kCheckForReload: {
// [ OOB, kCheckForReload, ignored ]
#if !defined(PRODUCT) && !defined(DART_PRECOMPILED_RUNTIME)
IG->reload_handler()->CheckForReload();
{
ReloadParticipationScope allow_reload(T);
T->CheckForSafepoint();
}
#else
UNREACHABLE();
#endif
@ -1854,7 +1871,7 @@ bool IsolateGroup::CanReload() {
// During reload itself we don't process OOB messages and don't execute Dart
// code, so the caller should implicitly have a guarantee we're not reloading
// already.
RELEASE_ASSERT(!IsReloading());
RELEASE_ASSERT(!Thread::Current()->OwnsReloadSafepoint());
// We only allow reload to take place from the point on where the first
// isolate within an isolate group has setup it's root library. From that
@ -1881,12 +1898,12 @@ bool IsolateGroup::ReloadSources(JSONStream* js,
const char* root_script_url,
const char* packages_url,
bool dont_delete_reload_context) {
ASSERT(!IsReloading());
// Ensure all isolates inside the isolate group are paused at a place where we
// can safely do a reload.
ReloadOperationScope reload_operation(Thread::Current());
ASSERT(!IsReloading());
auto class_table = IsolateGroup::Current()->class_table();
std::shared_ptr<IsolateGroupReloadContext> group_reload_context(
new IsolateGroupReloadContext(this, class_table, js));
@ -1910,12 +1927,12 @@ bool IsolateGroup::ReloadKernel(JSONStream* js,
const uint8_t* kernel_buffer,
intptr_t kernel_buffer_size,
bool dont_delete_reload_context) {
ASSERT(!IsReloading());
// Ensure all isolates inside the isolate group are paused at a place where we
// can safely do a reload.
ReloadOperationScope reload_operation(Thread::Current());
ASSERT(!IsReloading());
auto class_table = IsolateGroup::Current()->class_table();
std::shared_ptr<IsolateGroupReloadContext> group_reload_context(
new IsolateGroupReloadContext(this, class_table, js));

View file

@ -61,7 +61,6 @@ class IsolateGroupReloadContext;
class IsolateObjectStore;
class IsolateProfilerData;
class ProgramReloadContext;
class ReloadHandler;
class Log;
class Message;
class MessageHandler;
@ -341,9 +340,6 @@ class IsolateGroup : public IntrusiveDListEntry<IsolateGroup> {
ThreadRegistry* thread_registry() const { return thread_registry_.get(); }
SafepointHandler* safepoint_handler() { return safepoint_handler_.get(); }
#if !defined(PRODUCT) && !defined(DART_PRECOMPILED_RUNTIME)
ReloadHandler* reload_handler() { return reload_handler_.get(); }
#endif
void CreateHeap(bool is_vm_isolate, bool is_service_or_kernel_isolate);
void SetupImagePage(const uint8_t* snapshot_buffer, bool is_executable);
@ -863,9 +859,6 @@ class IsolateGroup : public IntrusiveDListEntry<IsolateGroup> {
std::unique_ptr<ThreadRegistry> thread_registry_;
std::unique_ptr<SafepointHandler> safepoint_handler_;
NOT_IN_PRODUCT(
NOT_IN_PRECOMPILED(std::unique_ptr<ReloadHandler> reload_handler_));
static RwLock* isolate_groups_rwlock_;
static IntrusiveDList<IsolateGroup>* isolate_groups_;
static Random* isolate_group_random_;
@ -1054,6 +1047,9 @@ class Isolate : public BaseIsolate, public IntrusiveDListEntry<Isolate> {
uint64_t terminate_capability() const { return terminate_capability_; }
void SendInternalLibMessage(LibMsgId msg_id, uint64_t capability);
static bool SendInternalLibMessage(Dart_Port main_port,
LibMsgId msg_id,
uint64_t capability);
void set_init_callback_data(void* value) { init_callback_data_ = value; }
void* init_callback_data() const { return init_callback_data_; }

View file

@ -787,6 +787,7 @@ bool IsolateGroupReloadContext::Reload(bool force_reload,
TIMELINE_SCOPE(Reload);
Thread* thread = Thread::Current();
ASSERT(thread->OwnsReloadSafepoint());
Heap* heap = IG->heap();
num_old_libs_ =
@ -897,9 +898,6 @@ bool IsolateGroupReloadContext::Reload(bool force_reload,
isolate_group_->ForEachIsolate(
[&](Isolate* isolate) { number_of_isolates++; });
// Disable the background compiler while we are performing the reload.
NoBackgroundCompilerScope stop_bg_compiler(thread);
// Wait for any concurrent marking tasks to finish and turn off the
// concurrent marker during reload as we might be allocating new instances
// (constants) when loading the new kernel file and this could cause
@ -2826,113 +2824,14 @@ void ProgramReloadContext::RebuildDirectSubclasses() {
}
}
void ReloadHandler::RegisterIsolate() {
SafepointMonitorLocker ml(&monitor_);
ParticipateIfReloadRequested(&ml, /*is_registered=*/false,
/*allow_later_retry=*/false);
ASSERT(reloading_thread_ == nullptr);
++registered_isolate_count_;
}
void ReloadHandler::UnregisterIsolate() {
SafepointMonitorLocker ml(&monitor_);
ParticipateIfReloadRequested(&ml, /*is_registered=*/true,
/*allow_later_retry=*/false);
ASSERT(reloading_thread_ == nullptr);
--registered_isolate_count_;
}
void ReloadHandler::CheckForReload() {
SafepointMonitorLocker ml(&monitor_);
ParticipateIfReloadRequested(&ml, /*is_registered=*/true,
/*allow_later_retry=*/true);
}
void ReloadHandler::ParticipateIfReloadRequested(SafepointMonitorLocker* ml,
bool is_registered,
bool allow_later_retry) {
if (reloading_thread_ != nullptr) {
auto thread = Thread::Current();
auto isolate = thread->isolate();
// If the current thread is in a no reload scope, we'll not participate here
// and instead delay to a point (further up the stack, namely in the main
// message handling loop) where this isolate can participate.
if (thread->IsInNoReloadScope()) {
RELEASE_ASSERT(allow_later_retry);
isolate->SendInternalLibMessage(Isolate::kCheckForReload, /*ignored=*/-1);
return;
}
if (is_registered) {
SafepointMonitorLocker ml(&checkin_monitor_);
++isolates_checked_in_;
ml.NotifyAll();
}
// While we're waiting for the reload to be performed, we'll exit the
// isolate. That will transition into a safepoint - which a blocking `Wait`
// would also do - but it does something in addition: It will release it's
// current TLAB and decrease the mutator count. We want this in order to let
// all isolates in the group participate in the reload, despite our parallel
// mutator limit.
while (reloading_thread_ != nullptr) {
SafepointMonitorUnlockScope ml_unlocker(ml);
Thread::ExitIsolate(/*isolate_shutdown=*/false,
/*treat_as_nested_exit=*/true);
{
MonitorLocker ml(&monitor_);
while (reloading_thread_ != nullptr) {
ml.Wait();
}
}
Thread::EnterIsolate(isolate, /*treat_as_nested_enter=*/true);
}
if (is_registered) {
SafepointMonitorLocker ml(&checkin_monitor_);
--isolates_checked_in_;
}
}
}
void ReloadHandler::PauseIsolatesForReloadLocked() {
intptr_t registered = -1;
{
SafepointMonitorLocker ml(&monitor_);
// Maybe participate in existing reload requested by another isolate.
ParticipateIfReloadRequested(&ml, /*registered=*/true,
/*allow_later_retry=*/false);
// Now it's our turn to request reload.
ASSERT(reloading_thread_ == nullptr);
reloading_thread_ = Thread::Current();
// At this point no isolate register/unregister, so we save the current
// number of registered isolates.
registered = registered_isolate_count_;
}
// Send OOB to a superset of all registered isolates and make them participate
// in this reload.
reloading_thread_->isolate_group()->ForEachIsolate([](Isolate* isolate) {
isolate->SendInternalLibMessage(Isolate::kCheckForReload, /*ignored=*/-1);
});
{
SafepointMonitorLocker ml(&checkin_monitor_);
while (isolates_checked_in_ < (registered - /*reload_requester=*/1)) {
ml.Wait();
}
}
}
void ReloadHandler::ResumeIsolatesLocked() {
{
SafepointMonitorLocker ml(&monitor_);
ASSERT(reloading_thread_ == Thread::Current());
reloading_thread_ = nullptr;
ml.NotifyAll();
}
ReloadOperationScope::ReloadOperationScope(Thread* thread)
: StackResource(thread),
stop_bg_compiler_(thread),
allow_reload_(thread),
safepoint_operation_(thread) {
ASSERT(!thread->IsInNoReloadScope());
ASSERT(thread->current_safepoint_level() ==
SafepointLevel::kGCAndDeoptAndReload);
}
#endif // !defined(PRODUCT) && !defined(DART_PRECOMPILED_RUNTIME)

View file

@ -10,10 +10,12 @@
#include "include/dart_tools_api.h"
#include "vm/compiler/jit/compiler.h"
#include "vm/globals.h"
#include "vm/growable_array.h"
#include "vm/hash_map.h"
#include "vm/heap/become.h"
#include "vm/heap/safepoint.h"
#include "vm/log.h"
#include "vm/object.h"
@ -436,46 +438,28 @@ class CallSiteResetter : public ValueObject {
ICData& ic_data_;
};
class ReloadHandler {
// Ensures all other mutators are stopped at a well-defined place where reload
// is allowed.
class ReloadOperationScope : public StackResource {
public:
ReloadHandler() {}
~ReloadHandler() {}
void RegisterIsolate();
void UnregisterIsolate();
void CheckForReload();
explicit ReloadOperationScope(Thread* thread);
~ReloadOperationScope() {}
private:
friend class ReloadOperationScope;
// As the background compiler is a mutator it participates in safepoint
// operations. Though the BG compiler won't check into reload safepoint
// requests - as it's never at a well-defined place to do a reload.
// So we ensure the background compiler is stopped before we get all other
// mutators to reload safepoints.
NoBackgroundCompilerScope stop_bg_compiler_;
void PauseIsolatesForReloadLocked();
void ResumeIsolatesLocked();
void ParticipateIfReloadRequested(SafepointMonitorLocker* ml,
bool is_registered,
bool allow_later_retry);
// This will enable the current thread to perform reload operations (as well
// as check-in with other thread's reload operations).
ReloadParticipationScope allow_reload_;
intptr_t registered_isolate_count_ = 0;
Monitor monitor_;
Thread* reloading_thread_ = nullptr;
Monitor checkin_monitor_;
intptr_t isolates_checked_in_ = 0;
};
class ReloadOperationScope : public ThreadStackResource {
public:
explicit ReloadOperationScope(Thread* thread)
: ThreadStackResource(thread), isolate_group_(thread->isolate_group()) {
isolate_group_->reload_handler()->PauseIsolatesForReloadLocked();
}
~ReloadOperationScope() {
isolate_group_->reload_handler()->ResumeIsolatesLocked();
}
private:
IsolateGroup* isolate_group_;
// The actual reload operation that will ensure all other mutators are stopped
// at well-defined places where reload can happen.
ReloadSafepointOperationScope safepoint_operation_;
};
} // namespace dart

View file

@ -375,6 +375,14 @@ bool MessageHandler::HasOOBMessages() {
return !oob_queue_->IsEmpty();
}
#if defined(TESTING)
std::unique_ptr<Message> MessageHandler::StealOOBMessage() {
MonitorLocker ml(&monitor_);
ASSERT(!oob_queue_->IsEmpty());
return oob_queue_->Dequeue();
}
#endif
bool MessageHandler::HasMessages() {
MonitorLocker ml(&monitor_);
return !queue_->IsEmpty();

View file

@ -76,6 +76,10 @@ class MessageHandler {
// handler.
bool HasOOBMessages();
#if defined(TESTING)
std::unique_ptr<Message> StealOOBMessage();
#endif
// Returns true if there are pending normal messages for this message
// handler.
bool HasMessages();

View file

@ -18205,7 +18205,7 @@ CodePtr Code::FinalizeCode(FlowGraphCompiler* compiler,
void Code::NotifyCodeObservers(const Code& code, bool optimized) {
#if !defined(PRODUCT)
ASSERT(!Thread::Current()->OwnsSafepoint());
ASSERT(!Thread::Current()->OwnsGCSafepoint());
if (CodeObservers::AreActive()) {
if (code.IsFunctionCode()) {
const auto& function = Function::Handle(code.function());
@ -18223,7 +18223,7 @@ void Code::NotifyCodeObservers(const Function& function,
bool optimized) {
#if !defined(PRODUCT)
ASSERT(!function.IsNull());
ASSERT(!Thread::Current()->OwnsSafepoint());
ASSERT(!Thread::Current()->OwnsGCSafepoint());
// Calling ToLibNamePrefixedQualifiedCString is very expensive,
// try to avoid it.
if (CodeObservers::AreActive()) {
@ -18239,7 +18239,7 @@ void Code::NotifyCodeObservers(const char* name,
#if !defined(PRODUCT)
ASSERT(name != nullptr);
ASSERT(!code.IsNull());
ASSERT(!Thread::Current()->OwnsSafepoint());
ASSERT(!Thread::Current()->OwnsGCSafepoint());
if (CodeObservers::AreActive()) {
const auto& instrs = Instructions::Handle(code.instructions());
CodeObservers::NotifyAll(name, instrs.PayloadStart(),

View file

@ -368,13 +368,12 @@ bool Thread::HasActiveState() {
return false;
}
bool Thread::EnterIsolate(Isolate* isolate, bool treat_as_nested_enter) {
bool Thread::EnterIsolate(Isolate* isolate) {
const bool is_resumable = isolate->mutator_thread() != nullptr;
// To let VM's thread pool (if we run on it) know that this thread is
// occupying a mutator again (decreases its max size).
const bool is_nested_reenter =
treat_as_nested_enter ||
(is_resumable && isolate->mutator_thread()->top_exit_frame_info() != 0);
auto group = isolate->group();
@ -391,7 +390,11 @@ bool Thread::EnterIsolate(Isolate* isolate, bool treat_as_nested_enter) {
ASSERT(thread->is_dart_mutator_);
ASSERT(thread->isolate() == isolate);
ASSERT(thread->isolate_group() == isolate->group());
thread->ExitSafepoint();
{
// Descheduled isolates are reloadable (if nothing else prevents it).
RawReloadParticipationScope enable_reload(thread);
thread->ExitSafepoint();
}
} else {
thread = AddActiveThread(group, isolate, /*is_dart_mutator*/ true,
/*bypass_safepoint=*/false);
@ -406,7 +409,7 @@ bool Thread::EnterIsolate(Isolate* isolate, bool treat_as_nested_enter) {
return true;
}
void Thread::ExitIsolate(bool isolate_shutdown, bool treat_as_nested_exit) {
void Thread::ExitIsolate(bool isolate_shutdown) {
Thread* thread = Thread::Current();
ASSERT(thread != nullptr);
ASSERT(thread->IsDartMutatorThread());
@ -440,7 +443,11 @@ void Thread::ExitIsolate(bool isolate_shutdown, bool treat_as_nested_exit) {
const auto tag =
isolate->is_runnable() ? VMTag::kIdleTagId : VMTag::kLoadWaitTagId;
SuspendThreadInternal(thread, tag);
thread->EnterSafepoint();
{
// Descheduled isolates are reloadable (if nothing else prevents it).
RawReloadParticipationScope enable_reload(thread);
thread->EnterSafepoint();
}
thread->set_execution_state(Thread::kThreadInNative);
} else {
thread->ResetDartMutatorState(isolate);
@ -452,8 +459,7 @@ void Thread::ExitIsolate(bool isolate_shutdown, bool treat_as_nested_exit) {
// To let VM's thread pool (if we run on it) know that this thread is
// occupying a mutator again (decreases its max size).
const bool is_nested_exit =
treat_as_nested_exit || thread->top_exit_frame_info() != 0;
const bool is_nested_exit = thread->top_exit_frame_info() != 0;
ASSERT(!(isolate_shutdown && is_nested_exit));
group->DecreaseMutatorCount(isolate, is_nested_exit);
}
@ -607,6 +613,15 @@ void Thread::FreeActiveThread(Thread* thread, bool bypass_safepoint) {
// means if this other thread runs code as part of a safepoint operation it
// will still wait for us to finish here before it tries to iterate the
// active mutators (e.g. when GC starts/stops incremental marking).
//
// The thread is empty and the corresponding isolate (if any) is therefore
// at event-loop boundary (or shutting down). We participate in reload in
// those scenarios.
//
// (It may be that an active [ReloadOperationScope] sent an OOB message to
// this isolate but it didn't handle the OOB due to shutting down, so we'll
// still have to update the reloading thread that it's ok to continue)
RawReloadParticipationScope enable_reload(thread);
thread->EnterSafepoint();
}
@ -1233,6 +1248,11 @@ bool Thread::OwnsDeoptSafepoint() const {
this) == SafepointLevel::kGCAndDeopt;
}
bool Thread::OwnsReloadSafepoint() const {
return isolate_group()->safepoint_handler()->InnermostSafepointOperation(
this) <= SafepointLevel::kGCAndDeoptAndReload;
}
bool Thread::OwnsSafepoint() const {
return isolate_group()->safepoint_handler()->InnermostSafepointOperation(
this) != SafepointLevel::kNoSafepoint;
@ -1242,9 +1262,19 @@ bool Thread::CanAcquireSafepointLocks() const {
// A thread may acquire locks and then enter a safepoint operation (e.g.
// holding program lock, allocating objects which triggers GC).
//
// So it's generally not safe to acquire locks while we are inside a GC
// safepoint operation scope.
return !OwnsGCSafepoint();
// So if this code is called inside safepoint operation, we generally have to
// assume other threads may hold locks and are blocked on the safepoint,
// meaning we cannot hold safepoint and acquire locks (deadlock!).
//
// Though if we own a reload safepoint operation it means all other mutators
// are blocked in very specific places, where we know no locks are held. As
// such we allow the current thread to acquire locks.
//
// Example: We own reload safepoint operation, load kernel, which allocates
// symbols, where the symbol implementation acquires the symbol lock (we know
// other mutators at reload safepoint do not hold symbol lock).
return isolate_group()->safepoint_handler()->InnermostSafepointOperation(
this) >= SafepointLevel::kGCAndDeoptAndReload;
}
void Thread::SetupState(TaskKind kind) {
@ -1350,6 +1380,38 @@ DisableThreadInterruptsScope::~DisableThreadInterruptsScope() {
}
}
NoReloadScope::NoReloadScope(Thread* thread) : ThreadStackResource(thread) {
#if !defined(PRODUCT) && !defined(DART_PRECOMPILED_RUNTIME)
thread->no_reload_scope_depth_++;
ASSERT(thread->no_reload_scope_depth_ >= 0);
#endif // !defined(PRODUCT) && !defined(DART_PRECOMPILED_RUNTIME)
}
NoReloadScope::~NoReloadScope() {
#if !defined(PRODUCT) && !defined(DART_PRECOMPILED_RUNTIME)
thread()->no_reload_scope_depth_ -= 1;
ASSERT(thread()->no_reload_scope_depth_ >= 0);
auto isolate = thread()->isolate();
const intptr_t state = thread()->safepoint_state();
if (thread()->no_reload_scope_depth_ == 0) {
// If we were asked to go to a reload safepoint & block for a reload
// safepoint operation on another thread - *while* being inside
// [NoReloadScope] - we may have handled & ignored the OOB message telling
// us to reload.
//
// Since we're exiting now the [NoReloadScope], we'll make another OOB
// reload request message to ourselves, which will be handled in
// well-defined place where we can perform reload.
if (isolate != nullptr &&
Thread::IsSafepointLevelRequested(
state, SafepointLevel::kGCAndDeoptAndReload)) {
isolate->SendInternalLibMessage(Isolate::kCheckForReload, /*ignored=*/-1);
}
}
#endif // !defined(PRODUCT) && !defined(DART_PRECOMPILED_RUNTIME)
}
void Thread::EnsureFfiCallbackMetadata(intptr_t callback_id) {
static constexpr intptr_t kInitialCallbackIdsReserved = 16;

View file

@ -289,6 +289,8 @@ enum SafepointLevel {
kGC,
// Safe to GC as well as Deopt.
kGCAndDeopt,
// Safe to GC, Deopt as well as Reload.
kGCAndDeoptAndReload,
// Number of levels.
kNumLevels,
@ -367,17 +369,9 @@ class Thread : public ThreadState {
void AssertEmptyThreadInvariants();
// Makes the current thread enter 'isolate'.
//
// TODO(kustermann): Remove [treat_as_nested_exit] once safepoint-based reload
// implementation landed.
static bool EnterIsolate(Isolate* isolate,
bool treat_as_nested_enter = false);
static bool EnterIsolate(Isolate* isolate);
// Makes the current thread exit its isolate.
//
// TODO(kustermann): Remove [treat_as_nested_exit] once safepoint-based reload
// implementation landed.
static void ExitIsolate(bool isolate_shutdown = false,
bool treat_as_nested_exit = false);
static void ExitIsolate(bool isolate_shutdown = false);
static bool EnterIsolateGroupAsHelper(IsolateGroup* isolate_group,
TaskKind kind,
@ -888,32 +882,6 @@ class Thread : public ThreadState {
REUSABLE_HANDLE_LIST(REUSABLE_HANDLE)
#undef REUSABLE_HANDLE
/*
* Fields used to support safepointing a thread.
*
* - Bit 0 of the safepoint_state_ field is used to indicate if the thread is
* already at a safepoint,
* - Bit 1 of the safepoint_state_ field is used to indicate if a safepoint
* is requested for this thread.
* - Bit 2 of the safepoint_state_ field is used to indicate if the thread is
* already at a deopt safepoint,
* - Bit 3 of the safepoint_state_ field is used to indicate if a deopt
* safepoint is requested for this thread.
* - Bit 4 of the safepoint_state_ field is used to indicate that the thread
* is blocked at a (deopt)safepoint and has to be woken up once the
* (deopt)safepoint operation is complete.
* - Bit 6 of the safepoint_state_ field is used to indicate that the isolate
* running on this thread has triggered unwind error, which requires
* enforced exit on a transition from native back to generated.
*
* The safepoint execution state (described above) for a thread is stored in
* in the execution_state_ field.
* Potential execution states a thread could be in:
* kThreadInGenerated - The thread is running jitted dart/stub code.
* kThreadInVM - The thread is running VM code.
* kThreadInNative - The thread is running native code.
* kThreadInBlockedState - The thread is blocked waiting for a resource.
*/
static bool IsAtSafepoint(SafepointLevel level, uword state) {
const uword mask = AtSafepointBits(level);
return (state & mask) == mask;
@ -964,18 +932,32 @@ class Thread : public ThreadState {
return (state & SafepointRequestedField::mask_in_place()) != 0;
case SafepointLevel::kGCAndDeopt:
return (state & DeoptSafepointRequestedField::mask_in_place()) != 0;
case SafepointLevel::kGCAndDeoptAndReload:
return (state & ReloadSafepointRequestedField::mask_in_place()) != 0;
default:
UNREACHABLE();
}
}
void BlockForSafepoint();
uword SetSafepointRequested(SafepointLevel level, bool value) {
ASSERT(thread_lock()->IsOwnedByCurrentThread());
const uword mask = level == SafepointLevel::kGC
? SafepointRequestedField::mask_in_place()
: DeoptSafepointRequestedField::mask_in_place();
uword mask = 0;
switch (level) {
case SafepointLevel::kGC:
mask = SafepointRequestedField::mask_in_place();
break;
case SafepointLevel::kGCAndDeopt:
mask = DeoptSafepointRequestedField::mask_in_place();
break;
case SafepointLevel::kGCAndDeoptAndReload:
mask = ReloadSafepointRequestedField::mask_in_place();
break;
default:
UNREACHABLE();
}
if (value) {
// acquire pulls from the release in TryEnterSafepoint.
@ -1015,6 +997,7 @@ class Thread : public ThreadState {
}
bool OwnsGCSafepoint() const;
bool OwnsReloadSafepoint() const;
bool OwnsDeoptSafepoint() const;
bool OwnsSafepoint() const;
bool CanAcquireSafepointLocks() const;
@ -1059,10 +1042,7 @@ class Thread : public ThreadState {
bool TryEnterSafepoint() {
uword old_state = 0;
uword new_state = AtSafepointField::encode(true);
if (current_safepoint_level() == SafepointLevel::kGCAndDeopt) {
new_state |= AtDeoptSafepointField::encode(true);
}
uword new_state = AtSafepointBits(current_safepoint_level());
return safepoint_state_.compare_exchange_strong(old_state, new_state,
std::memory_order_release);
}
@ -1079,10 +1059,7 @@ class Thread : public ThreadState {
}
bool TryExitSafepoint() {
uword old_state = AtSafepointField::encode(true);
if (current_safepoint_level() == SafepointLevel::kGCAndDeopt) {
old_state |= AtDeoptSafepointField::encode(true);
}
uword old_state = AtSafepointBits(current_safepoint_level());
uword new_state = 0;
return safepoint_state_.compare_exchange_strong(old_state, new_state,
std::memory_order_acquire);
@ -1157,10 +1134,14 @@ class Thread : public ThreadState {
PendingDeopts& pending_deopts() { return pending_deopts_; }
SafepointLevel current_safepoint_level() const {
return runtime_call_deopt_ability_ ==
RuntimeCallDeoptAbility::kCannotLazyDeopt
? SafepointLevel::kGC
: SafepointLevel::kGCAndDeopt;
if (runtime_call_deopt_ability_ ==
RuntimeCallDeoptAbility::kCannotLazyDeopt) {
return SafepointLevel::kGC;
}
if (no_reload_scope_depth_ > 0 || allow_reload_scope_depth_ <= 0) {
return SafepointLevel::kGCAndDeopt;
}
return SafepointLevel::kGCAndDeoptAndReload;
}
private:
@ -1225,6 +1206,9 @@ class Thread : public ThreadState {
IsolateGroup* isolate_group_ = nullptr;
uword saved_stack_limit_ = 0;
// The mutator uses this to indicate it wants to OSR (by
// setting [Thread::kOsrRequest]) before going to runtime which will see this
// bit.
uword stack_overflow_flags_ = 0;
uword volatile top_exit_frame_info_ = 0;
StoreBufferBlock* store_buffer_block_ = nullptr;
@ -1244,7 +1228,47 @@ class Thread : public ThreadState {
ObjectPoolPtr global_object_pool_;
uword resume_pc_;
uword saved_shadow_call_stack_ = 0;
/*
* The execution state for a thread.
*
* Potential execution states a thread could be in:
* kThreadInGenerated - The thread is running jitted dart/stub code.
* kThreadInVM - The thread is running VM code.
* kThreadInNative - The thread is running native code.
* kThreadInBlockedState - The thread is blocked waiting for a resource.
*
* Warning: Execution state doesn't imply the safepoint state. It's possible
* to be in [kThreadInNative] and still not be at-safepoint (e.g. due to a
* pending Dart_TypedDataAcquire() that increases no-callback-scope)
*/
uword execution_state_;
/*
* Stores
*
* - whether the thread is at a safepoint (current thread sets these)
* [AtSafepointField]
* [AtDeoptSafepointField]
* [AtReloadSafepointField]
*
* - whether the thread is requested to safepoint (other thread sets these)
* [SafepointRequestedField]
* [DeoptSafepointRequestedField]
* [ReloadSafepointRequestedField]
*
* - whether the thread is blocked due to safepoint request and needs to
* be resumed after safepoint is done (current thread sets this)
* [BlockedForSafepointField]
*
* - whether the thread should be ignored for safepointing purposes
* [BypassSafepointsField]
*
* - whether the isolate running this thread has triggered an unwind error,
* which requires enforced exit on a transition from native back to
* generated.
* [UnwindErrorInProgressField]
*/
std::atomic<uword> safepoint_state_;
GrowableObjectArrayPtr ffi_callback_code_;
TypedDataPtr ffi_callback_stack_return_;
@ -1272,6 +1296,7 @@ class Thread : public ThreadState {
int32_t no_callback_scope_depth_;
int32_t force_growth_scope_depth_ = 0;
intptr_t no_reload_scope_depth_ = 0;
intptr_t allow_reload_scope_depth_ = 0;
intptr_t stopped_mutators_scope_depth_ = 0;
#if defined(DEBUG)
int32_t no_safepoint_scope_depth_;
@ -1312,15 +1337,25 @@ class Thread : public ThreadState {
class AtSafepointField : public BitField<uword, bool, 0, 1> {};
class SafepointRequestedField
: public BitField<uword, bool, AtSafepointField::kNextBit, 1> {};
class AtDeoptSafepointField
: public BitField<uword, bool, SafepointRequestedField::kNextBit, 1> {};
class DeoptSafepointRequestedField
: public BitField<uword, bool, AtDeoptSafepointField::kNextBit, 1> {};
class BlockedForSafepointField
class AtReloadSafepointField
: public BitField<uword,
bool,
DeoptSafepointRequestedField::kNextBit,
1> {};
class ReloadSafepointRequestedField
: public BitField<uword, bool, AtReloadSafepointField::kNextBit, 1> {};
class BlockedForSafepointField
: public BitField<uword,
bool,
ReloadSafepointRequestedField::kNextBit,
1> {};
class BypassSafepointsField
: public BitField<uword, bool, BlockedForSafepointField::kNextBit, 1> {};
class UnwindErrorInProgressField
@ -1333,6 +1368,10 @@ class Thread : public ThreadState {
case SafepointLevel::kGCAndDeopt:
return AtSafepointField::mask_in_place() |
AtDeoptSafepointField::mask_in_place();
case SafepointLevel::kGCAndDeoptAndReload:
return AtSafepointField::mask_in_place() |
AtDeoptSafepointField::mask_in_place() |
AtReloadSafepointField::mask_in_place();
default:
UNREACHABLE();
}
@ -1419,6 +1458,7 @@ class Thread : public ThreadState {
friend class IsolateGroup;
friend class NoActiveIsolateScope;
friend class NoReloadScope;
friend class RawReloadParticipationScope;
friend class Simulator;
friend class StackZone;
friend class StoppedMutatorsScope;
@ -1495,26 +1535,76 @@ class NoSafepointScope : public ValueObject {
};
#endif // defined(DEBUG)
// Disables initiating a reload operation as well as participating in another
// thread's reload operation.
//
// Reload triggered by a mutator thread happens by sending all other mutator
// threads (that are running) OOB messages to check into a safepoint. The thread
// initiating the reload operation will block until all mutators are at a reload
// safepoint.
//
// When running under this scope, the processing of those OOB messages will
// ignore reload safepoint checkin requests. Yet we'll have to ensure that the
// dropped message is still acted upon.
//
// => To solve this we make the [~NoReloadScope] destructor resend a new reload
// OOB request to itself (the [~NoReloadScope] destructor is not necessarily
// at a well-defined place where reload can happen - those places explicitly
// opt in via [ReloadParticipationScope]).
//
class NoReloadScope : public ThreadStackResource {
public:
explicit NoReloadScope(Thread* thread) : ThreadStackResource(thread) {
#if !defined(PRODUCT) && !defined(DART_PRECOMPILED_RUNTIME)
thread->no_reload_scope_depth_++;
ASSERT(thread->no_reload_scope_depth_ >= 0);
#endif // !defined(PRODUCT) && !defined(DART_PRECOMPILED_RUNTIME)
}
~NoReloadScope() {
#if !defined(PRODUCT) && !defined(DART_PRECOMPILED_RUNTIME)
thread()->no_reload_scope_depth_ -= 1;
ASSERT(thread()->no_reload_scope_depth_ >= 0);
#endif // !defined(PRODUCT) && !defined(DART_PRECOMPILED_RUNTIME)
}
explicit NoReloadScope(Thread* thread);
~NoReloadScope();
private:
DISALLOW_COPY_AND_ASSIGN(NoReloadScope);
};
// Allows triggering reload safepoint operations as well as participating in
// reload operations (at safepoint checks).
//
// By default, safepoint checkins will not participate in reload operations, as
// reload has to happen at very well-defined places. This scope is intended
// for those places where we explicitly want to allow safepoint checkins to
// participate in reload operations (triggered by other threads).
//
// If there is any [NoReloadScope] active we will still disable the safepoint
// checkins to participate in reload.
//
// We also require the thread initiating a reload operation to explicitly
// opt in via this scope.
class RawReloadParticipationScope {
public:
explicit RawReloadParticipationScope(Thread* thread) : thread_(thread) {
#if !defined(PRODUCT) && !defined(DART_PRECOMPILED_RUNTIME)
if (thread->allow_reload_scope_depth_ == 0) {
ASSERT(thread->current_safepoint_level() == SafepointLevel::kGCAndDeopt);
}
thread->allow_reload_scope_depth_++;
ASSERT(thread->allow_reload_scope_depth_ >= 0);
#endif // !defined(PRODUCT) && !defined(DART_PRECOMPILED_RUNTIME)
}
~RawReloadParticipationScope() {
#if !defined(PRODUCT) && !defined(DART_PRECOMPILED_RUNTIME)
thread_->allow_reload_scope_depth_ -= 1;
ASSERT(thread_->allow_reload_scope_depth_ >= 0);
if (thread_->allow_reload_scope_depth_ == 0) {
ASSERT(thread_->current_safepoint_level() == SafepointLevel::kGCAndDeopt);
}
#endif // !defined(PRODUCT) && !defined(DART_PRECOMPILED_RUNTIME)
}
private:
Thread* thread_;
DISALLOW_COPY_AND_ASSIGN(RawReloadParticipationScope);
};
using ReloadParticipationScope =
AsThreadStackResource<RawReloadParticipationScope>;
class StoppedMutatorsScope : public ThreadStackResource {
public:
explicit StoppedMutatorsScope(Thread* thread) : ThreadStackResource(thread) {

View file

@ -5,6 +5,8 @@
#ifndef RUNTIME_VM_THREAD_STACK_RESOURCE_H_
#define RUNTIME_VM_THREAD_STACK_RESOURCE_H_
#include <utility>
#include "vm/allocation.h"
#include "vm/globals.h"
@ -29,6 +31,18 @@ class ThreadStackResource : public StackResource {
IsolateGroup* isolate_group() const;
};
template <typename T, typename... Args>
class AsThreadStackResource : public ThreadStackResource {
public:
AsThreadStackResource(Thread* thread, Args&&... args)
: ThreadStackResource(thread),
member_(thread, std::forward<Args>(args)...) {}
~AsThreadStackResource() {}
private:
T member_;
};
} // namespace dart
#endif // RUNTIME_VM_THREAD_STACK_RESOURCE_H_