MIPS: Implement read barriers.
This implements the core functionality; further improvements will be
done separately.

It also adds/moves memory barriers where they belong, and removes the
UnsafeGetLongVolatile and UnsafePutLongVolatile MIPS32 intrinsics, as
they would need to load/store a pair of registers atomically, which
the MIPS32 CPU does not support directly.
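For context, a minimal, hypothetical C++ illustration of the
constraint (not part of this change): on a 32-bit MIPS target a
64-bit value cannot be loaded or stored with a single atomic
instruction, so the compiler falls back to library/lock-based support
for 8-byte atomics, which is why the dedicated long-volatile
intrinsics are dropped in favor of the generic runtime path.

  #include <atomic>
  #include <cstdint>

  std::atomic<int64_t> counter{0};

  int64_t LoadCounter() {
    // On MIPS32 this is typically not lock-free: there is no single
    // instruction that atomically loads a 64-bit register pair, so
    // the compiler emits a call into runtime atomic support (e.g.
    // libatomic) rather than a plain pair of loads.
    return counter.load(std::memory_order_seq_cst);
  }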
Test: booted MIPS32R2 in QEMU
Test: test-art-target-run-test
Test: booted MIPS64 (with 2nd arch MIPS32R6) in QEMU
Test: "testrunner.py --target --optimizing -j1"
Test: same MIPS64 boot/test with ART_READ_BARRIER_TYPE=TABLELOOKUP
Test: "testrunner.py --target --optimizing --32 -j2" on CI20
Test: same CI20 test with ART_READ_BARRIER_TYPE=TABLELOOKUP
Change-Id: I0ff91525fefba3ec1cc019f50316478a888acced
diff --git a/runtime/mirror/object-inl.h b/runtime/mirror/object-inl.h
index 8e591e4..811f1ea 100644
--- a/runtime/mirror/object-inl.h
+++ b/runtime/mirror/object-inl.h
@@ -187,10 +187,12 @@
 uint32_t rb_state = lw.ReadBarrierState();
 return rb_state;
 #else
- // mips/mips64
- LOG(FATAL) << "Unreachable";
- UNREACHABLE();
- UNUSED(fake_address_dependency);
+ // MIPS32/MIPS64: use a memory barrier to prevent load-load reordering.
+ LockWord lw = GetLockWord(false);
+ *fake_address_dependency = 0;
+ std::atomic_thread_fence(std::memory_order_acquire);
+ uint32_t rb_state = lw.ReadBarrierState();
+ return rb_state;
 #endif
 }
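Note on fake_address_dependency: on ARM/AArch64 the returned value is
folded into the address of the subsequent reference load, so the
load-load ordering comes from an address dependency rather than a
fence. A simplified sketch of the caller side, based on the general
Baker read barrier pattern (illustrative, not verbatim from this
change):

  uintptr_t fake_address_dependency;
  uint32_t rb_state = obj->GetReadBarrierState(&fake_address_dependency);
  // OR-ing in the dependency makes the reference address
  // data-dependent on the lock word load on ARM; on MIPS the value
  // is simply 0 and the acquire fence above provides the load-load
  // ordering instead.
  ref_addr = reinterpret_cast<MirrorType**>(
      reinterpret_cast<uintptr_t>(ref_addr) | fake_address_dependency);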