Implement TLAB fast paths in artAllocObjectFromCode.
GSS/TLAB GC speedup on N4 (ms):
MemAllocTest 2963 -> 2792
BinaryTrees 2205 -> 2113
Also measured with -XX:IgnoreMaxFootprint, which invokes GC less often
(only when the bump pointer space is filled rather than based on the
target utilization):
MemAllocTest 2707 -> 2590
BinaryTrees 2023 -> 1906
TODO: implement fast paths for array allocations.
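For context, a rough sketch of the general shape of a TLAB bump-pointer
fast path follows. All names in it (TlabState, AllocObjectTlab,
AllocSlowPath) are illustrative only and are not the ART runtime API;
the real entrypoint also does more work, such as class checks, before
reaching the bump allocation.

  // Minimal sketch of a thread-local allocation buffer (TLAB) fast path.
  // Illustrative names only; this is not the ART implementation.
  #include <cstddef>
  #include <cstdint>

  struct TlabState {
    std::uint8_t* pos;  // Current bump pointer within the thread-local buffer.
    std::uint8_t* end;  // End of the thread-local buffer.
  };

  // Hypothetical slow path: a real runtime would refill the TLAB from the
  // shared bump pointer space or trigger a GC; stubbed out here.
  inline void* AllocSlowPath(TlabState* tlab, std::size_t byte_count) {
    (void)tlab;
    (void)byte_count;
    return nullptr;  // Treat as out of memory in this simplified sketch.
  }

  // Fast path: a single bounds check plus a pointer bump, with no locking
  // and no heap-wide accounting, which is what makes inlining it into the
  // allocation entrypoint profitable.
  inline void* AllocObjectTlab(TlabState* tlab, std::size_t byte_count) {
    std::uint8_t* const pos = tlab->pos;
    if (static_cast<std::size_t>(tlab->end - pos) >= byte_count) {
      tlab->pos = pos + byte_count;  // Object starts at the old position.
      return pos;
    }
    return AllocSlowPath(tlab, byte_count);  // Buffer exhausted.
  }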
Bug: 9986565
Change-Id: I73ff6327b229704f8ae5924ae9b747443c229841
diff --git a/runtime/gc/heap.h b/runtime/gc/heap.h
index 8ffadd5..a82392a 100644
--- a/runtime/gc/heap.h
+++ b/runtime/gc/heap.h
@@ -662,7 +662,7 @@
SHARED_LOCKS_REQUIRED(Locks::mutator_lock_);
template <bool kGrow>
- bool IsOutOfMemoryOnAllocation(AllocatorType allocator_type, size_t alloc_size);
+ ALWAYS_INLINE bool IsOutOfMemoryOnAllocation(AllocatorType allocator_type, size_t alloc_size);
// Returns true if the address passed in is within the address range of a continuous space.
bool IsValidContinuousSpaceObjectAddress(const mirror::Object* obj) const
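The ALWAYS_INLINE annotation added above asks the compiler to inline
IsOutOfMemoryOnAllocation into the allocation path rather than emitting a
call. As a point of reference, such a macro is typically defined along these
lines (an assumption about the common pattern, not a quote of ART's own
macros header):

  // Typical definition pattern for an ALWAYS_INLINE macro (illustrative).
  #if defined(__GNUC__) || defined(__clang__)
  #define ALWAYS_INLINE __attribute__((always_inline))
  #else
  #define ALWAYS_INLINE
  #endif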