| commit | f98b9a50ec24388f765cf2a5777d1594a23d242f | |
|---|---|---|
| author | Sanjay Patel <spatel@rotateright.com> | Thu Mar 31 17:30:06 2016 +0000 |
| committer | Sanjay Patel <spatel@rotateright.com> | Thu Mar 31 17:30:06 2016 +0000 |
| tree | 476766de2f098f716583f84cfcd7095675716a11 | |
| parent | b098ec7d06ed24ef40548b16a1560972071984db | |
[x86] use SSE/AVX ops for non-zero memsets (PR27100)

Move the memset check down to the CPU-with-slow-SSE-unaligned-memops case: this allows fast targets to take advantage of SSE/AVX instructions and prevents slow targets from stepping into a codegen sinkhole while trying to splat a byte into an XMM reg.

Follow-on bugs exposed by the current codegen are:
https://llvm.org/bugs/show_bug.cgi?id=27141
https://llvm.org/bugs/show_bug.cgi?id=27143

Differential Revision: http://reviews.llvm.org/D18566

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@265029 91177308-0d34-0410-b5e6-96231b3b80d8
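To illustrate the shape of the change described above, here is a minimal standalone C++ sketch of the decision being moved. It is not the actual X86 lowering code from this commit; the `Target` struct, `pickMemOpType`, and all field and enum names are hypothetical stand-ins, and the real logic in the backend considers more inputs (source alignment, string sources, 64-bit mode, AVX-512, etc.). The point it models is that the "is this a non-zero memset?" test no longer blocks the wide vector path up front; it only gates the scalar fallback taken when unaligned 16-byte SSE accesses are slow.

```cpp
// Standalone sketch of the restructured type-selection logic.
// Hypothetical names throughout; this is not the LLVM source.
#include <cstdint>
#include <cstdio>

enum class MemOpVT { v8f32, v4i32, f64, i32 };

struct Target {              // stand-in for the x86 subtarget queries
  bool slowUnalignedMem16;   // "CPU with slow SSE unaligned memops"
  bool hasSSE2;
  bool hasAVX;
};

// Wide vector ops are now considered for non-zero memsets as long as
// unaligned accesses are fast (or the destination is aligned enough).
MemOpVT pickMemOpType(const Target &T, uint64_t Size, unsigned DstAlign,
                      bool IsMemset, bool ZeroMemset) {
  bool alignedEnough = (DstAlign == 0 || DstAlign >= 16);
  if (Size >= 16 && (!T.slowUnalignedMem16 || alignedEnough)) {
    if (T.hasAVX && Size >= 32)
      return MemOpVT::v8f32;   // 256-bit YMM stores
    if (T.hasSSE2)
      return MemOpVT::v4i32;   // 128-bit XMM stores
  } else if (!IsMemset || ZeroMemset) {
    // The memset check now lives only on this slow-unaligned scalar path,
    // instead of guarding (and disabling) the vector path above as well.
    if (T.hasSSE2 && Size >= 8)
      return MemOpVT::f64;
  }
  return MemOpVT::i32;         // fall back to generic scalar lowering
}

int main() {
  Target fast{/*slowUnalignedMem16=*/false, /*hasSSE2=*/true, /*hasAVX=*/true};
  // A 64-byte memset of a non-zero byte on a fast-unaligned target now
  // maps to vector stores rather than being forced down the scalar path.
  MemOpVT VT = pickMemOpType(fast, /*Size=*/64, /*DstAlign=*/1,
                             /*IsMemset=*/true, /*ZeroMemset=*/false);
  std::printf("chosen type id: %d\n", static_cast<int>(VT));
  return 0;
}
```

Under this restructuring, only targets that actually report slow unaligned SSE memory ops fall back to the scalar splat path the commit message calls a "codegen sinkhole"; everything else gets the SSE/AVX stores.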