Suspend check reworking (ready for review)
I hate burning a register, but the cost of suspend checks was just too high
in our current environment. There are things that can be done in future
releases to avoid the register burn, but for now it's worthwhile.
The general strategy is to reserve r4 as a suspend check counter.
Rather than polling the thread's suspendPending counter, we simply
decrement the counter register; when it reaches zero, we do the full
check. For now the counter scheme applies only to backward branches;
we still poll on returns (which are already heavyweight enough that the
extra cost isn't especially noticeable).
I've also added an optimization hint to the MIR in case we have enough
time to test and enable the existing loop analysis code that omits the
suspend check on smallish counted loops.
Change-Id: I82d8bad5882a4cf2ccff590942e2d1520d58969d
diff --git a/src/compiler/CompilerIR.h b/src/compiler/CompilerIR.h
index b697292..0965c14 100644
--- a/src/compiler/CompilerIR.h
+++ b/src/compiler/CompilerIR.h
@@ -87,6 +87,7 @@
kMIRInlined, // Invoke is inlined (ie dead)
kMIRInlinedPred, // Invoke is inlined via prediction
kMIRCallee, // Instruction is inlined from callee
+ kMIRIgnoreSuspendCheck,
} MIROptimizationFlagPositons;
#define MIR_IGNORE_NULL_CHECK (1 << kMIRIgnoreNullCheck)
@@ -96,6 +97,7 @@
#define MIR_INLINED (1 << kMIRInlined)
#define MIR_INLINED_PRED (1 << kMIRInlinedPred)
#define MIR_CALLEE (1 << kMIRCallee)
+#define MIR_IGNORE_SUSPEND_CHECK (1 << kMIRIgnoreSuspendCheck)
typedef struct CallsiteInfo {
const char* classDescriptor;
@@ -239,6 +241,7 @@
GrowableList dfsOrder;
GrowableList domPostOrderTraversal;
GrowableList throwLaunchpads;
+ GrowableList suspendLaunchpads;
ArenaBitVector* tryBlockAddr;
ArenaBitVector** defBlockMatrix; // numDalvikRegister x numBlocks
ArenaBitVector* tempBlockV;