Add "select" detection to common frontend
dx produces a somewhat ugly code pattern for selects:
foo = (condition) ? true : false;
There is no select Dex opcode, so this turns into:
  IF_EQ   v0, L1
  CONST_4 v2, #0
L2:
  <rejoin>
  .
  .
L1:
  CONST_4 v2, #1
  GOTO    L2
... or ...
foo = (condition) ? bar1 : bar2;
  IF_EQ v0, L1
  MOVE  v2, v3
L2:
  <rejoin>
  .
  .
L1:
  MOVE  v2, v4
  GOTO  L2
Not only do we end up with excessive branching (and, unless we do
something special, really poor code layout), but the compilers
generally drop a suspend check on backwards branches, which is
completely unnecessary in the "GOTO L2" case above. There are ~2100
instances of the simplest variants of this pattern in the framework.
With this new optimization, boot.oat size is reduced by 90K bytes,
and one of our standard benchmarks improves by 8%.
This CL adds a select detection operation to the common frontend's
BasicBlock optimization pass, and introduces a new extended MIR
opcode: kMirOpSelect.
Change-Id: I06249956ba21afb0ed5cdd35019ac87cd063a17b
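
For illustration only (none of this is the code in the CL): a minimal,
self-contained C++ sketch of the diamond-shaped pattern the BasicBlock
optimization pass has to recognize before it can fuse the IF/CONST/GOTO
sequence into a single kMirOpSelect. The Op/MIR/BasicBlock types and the
ArmConst/RejoinOf/RewriteSelect helpers are simplified stand-ins invented
for this sketch, not the compiler's real MIR structures, and only the
const/const form from the first example above is matched.

  #include <vector>

  // Toy MIR model: just enough structure to show the shape of the match.
  enum class Op { kIfEq, kConst, kMove, kGoto, kSelect /* ~ kMirOpSelect */ };

  struct MIR {
    Op op;
    int dest = -1;     // destination vreg (const/move/select)
    int src = -1;      // condition vreg for if/select, source vreg for move
    int lit = 0;       // constant payload (const; taken-arm value for select)
    int lit_alt = 0;   // fall-through-arm value (select only)
  };

  struct BasicBlock {
    std::vector<MIR> insns;
    BasicBlock* taken = nullptr;         // branch-taken successor
    BasicBlock* fall_through = nullptr;  // straight-line successor
  };

  // An arm qualifies if it is exactly one CONST, optionally followed by a
  // GOTO back to the rejoin point.
  static const MIR* ArmConst(const BasicBlock* arm) {
    if (arm == nullptr || arm->insns.empty()) return nullptr;
    if (arm->insns[0].op != Op::kConst) return nullptr;
    if (arm->insns.size() == 1) return &arm->insns[0];
    if (arm->insns.size() == 2 && arm->insns[1].op == Op::kGoto) {
      return &arm->insns[0];
    }
    return nullptr;
  }

  static BasicBlock* RejoinOf(const BasicBlock* arm) {
    // A trailing GOTO rejoins through the taken edge; otherwise fall through.
    return (arm->insns.back().op == Op::kGoto) ? arm->taken
                                               : arm->fall_through;
  }

  // If bb terminates the two-armed diamond produced by "x = cond ? a : b",
  // replace its IF with one select and route bb straight to the rejoin
  // block: no branches left, so no backwards-branch suspend check either.
  static bool RewriteSelect(BasicBlock* bb) {
    if (bb->insns.empty() || bb->insns.back().op != Op::kIfEq) return false;
    const MIR* t = ArmConst(bb->taken);
    const MIR* f = ArmConst(bb->fall_through);
    if (t == nullptr || f == nullptr) return false;
    if (t->dest != f->dest) return false;          // both arms set one vreg
    if (RejoinOf(bb->taken) != RejoinOf(bb->fall_through)) return false;
    MIR& select = bb->insns.back();                // src already holds cond
    select.op = Op::kSelect;
    select.dest = t->dest;
    select.lit = t->lit;                           // taken-arm value
    select.lit_alt = f->lit;                       // fall-through value
    bb->fall_through = RejoinOf(bb->taken);
    bb->taken = nullptr;                           // arms become dead code
    return true;
  }

The real pass also has to handle the MOVE/MOVE form and respect register
liveness and use counts; the sketch only shows the control-flow shape
being matched.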
diff --git a/src/compiler/codegen/codegen.h b/src/compiler/codegen/codegen.h
index 372e842..4085a41 100644
--- a/src/compiler/codegen/codegen.h
+++ b/src/compiler/codegen/codegen.h
@@ -335,6 +335,7 @@
virtual void GenFusedFPCmpBranch(CompilationUnit* cu, BasicBlock* bb, MIR* mir, bool gt_bias,
bool is_double) = 0;
virtual void GenFusedLongCmpBranch(CompilationUnit* cu, BasicBlock* bb, MIR* mir) = 0;
+ virtual void GenSelect(CompilationUnit* cu, BasicBlock* bb, MIR* mir) = 0;
virtual void GenMemBarrier(CompilationUnit* cu, MemBarrierKind barrier_kind) = 0;
virtual void GenMonitorEnter(CompilationUnit* cu, int opt_flags, RegLocation rl_src) = 0;
virtual void GenMonitorExit(CompilationUnit* cu, int opt_flags, RegLocation rl_src) = 0;
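
Also for illustration only: the point of routing kMirOpSelect to the new
GenSelect() hook is that a target backend can lower the whole diamond
without any branch. The sketch below fakes the assembler with printf
stand-ins (EmitCmpZero/EmitMovImm/EmitMovImmEq are invented for this
example and are not the Quick compiler's codegen interface); it emits a
compare, an unconditional move and a conditional move in place of the
IF/GOTO pair.

  #include <cstdio>

  // Fake "assembler": prints what a real target backend would emit.
  static void EmitCmpZero(int cond_reg) {
    std::printf("cmp    r%d, #0\n", cond_reg);
  }
  static void EmitMovImm(int dest_reg, int value) {
    std::printf("mov    r%d, #%d\n", dest_reg, value);
  }
  static void EmitMovImmEq(int dest_reg, int value) {
    std::printf("mov.eq r%d, #%d\n", dest_reg, value);
  }

  // Branch-free lowering of "dest = (cond == 0) ? taken_val : fall_val",
  // i.e. the fused form of the IF_EQ/CONST/GOTO diamond above. No branch
  // means no backwards edge, and therefore no suspend check.
  static void GenSelectSketch(int cond_reg, int dest_reg,
                              int taken_val, int fall_val) {
    EmitCmpZero(cond_reg);             // set flags from the condition vreg
    EmitMovImm(dest_reg, fall_val);    // assume the fall-through value...
    EmitMovImmEq(dest_reg, taken_val); // ...overwrite it if IF_EQ was taken
  }

  int main() {
    // Mirrors the first example: foo = (condition) ? true : false;
    GenSelectSketch(/*cond_reg=*/0, /*dest_reg=*/2,
                    /*taken_val=*/1, /*fall_val=*/0);
    return 0;
  }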