ART: Make mterp jit profiling race tolerant
The JIT profiling mechanism is intentionally imprecise in order to
minimize performance overhead. In general this is not a problem.
However, the on-stack replacement mechanism assumes an ordering of
method compilation that can occasionally be violated when conditions
are just right.
This change allows compilation requests that were dropped due to
a race condition to eventually be re-issued. It does this by allowing
the 16-bit hotness counter to wrap around.
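As an illustrative sketch only (not part of this change; the names
kHotThreshold and AddSampleAndCheck are made up): a saturating counter
stops crossing the threshold once it pegs at 0xFFFF, so a dropped
request is never retried, whereas a wrapping 16-bit counter crosses
the threshold again on a later pass.

  #include <cstdint>

  constexpr uint16_t kHotThreshold = 10000;  // hypothetical threshold

  // Returns true each time the wrapping counter hits the threshold,
  // so a previously dropped compile request gets another chance.
  bool AddSampleAndCheck(uint16_t& counter) {
    counter = static_cast<uint16_t>(counter + 1);  // wraps at 2^16, no saturation
    return counter == kHotThreshold;
  }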
Change-Id: I2ac8056af8c4f7f8cef3f2c3db70b0394c26a566
diff --git a/runtime/interpreter/mterp/mterp.cc b/runtime/interpreter/mterp/mterp.cc
index de9041b..1da1181 100644
--- a/runtime/interpreter/mterp/mterp.cc
+++ b/runtime/interpreter/mterp/mterp.cc
@@ -718,6 +718,11 @@
ArtMethod* method = shadow_frame->GetMethod();
JValue* result = shadow_frame->GetResultRegister();
uint32_t dex_pc = shadow_frame->GetDexPC();
+ jit::Jit* jit = Runtime::Current()->GetJit();
+ if (offset <= 0) {
+ // Keep updating hotness in case a compilation request was dropped. Eventually it will retry.
+ jit->GetInstrumentationCache()->AddSamples(self, method, 1);
+ }
// Assumes caller has already determined that an OSR check is appropriate.
return jit::Jit::MaybeDoOnStackReplacement(self, method, dex_pc, offset, result);
}
diff --git a/runtime/jit/jit_instrumentation.cc b/runtime/jit/jit_instrumentation.cc
index b18d6a2..d2180c7 100644
--- a/runtime/jit/jit_instrumentation.cc
+++ b/runtime/jit/jit_instrumentation.cc
@@ -183,9 +183,8 @@
thread_pool_->AddTask(self, new JitCompileTask(method, JitCompileTask::kCompileOsr));
}
}
- // Update hotness counter, but avoid wrap around.
- method->SetCounter(
- std::min(new_count, static_cast<int32_t>(std::numeric_limits<uint16_t>::max())));
+ // Update hotness counter; wrap-around is allowed so that dropped
+ // compilation requests are eventually re-issued.
+ method->SetCounter(new_count);
}
JitInstrumentationListener::JitInstrumentationListener(JitInstrumentationCache* cache)