Fix a double unmap issue in MemMap::UnMapAtEnd().
MemMap::UnMapAtEnd() unmaps the unused tail of the alloc space during
a zygote fork. But it can cause the same tail region of memory to be
unmapped twice (once in UnMapAtEnd() and once more in ~MemMap() during
a shutdown).
I encountered a crash because of this issue in SpaceTest.ZygoteTest
(which happens to reproduce only on a device in a branch with the
rosalloc change, probably due to some randomness in the mmap address
choice, etc.).
Here's what happens (a minimal sketch of this sequence follows the
list of steps):
1) CreateZygoteSpace() calls UnMapAtEnd() and unmaps the unused tail
of the alloc space.
2) In the same function, after UnMapAtEnd(), several libc new/malloc
allocations, including a new DlMallocSpace object, take place. This
happens to cause libc to map a new memory region that overlaps with
the region that was just unmapped in 1) and to place those allocations
there (that is, the new DlMallocSpace object is allocated in that
memory region). This second DlMallocSpace becomes the new alloc space
after the zygote fork, while the first DlMallocSpace becomes the
zygote space. Note that libc maps that memory region before the
underlying memory of the second DlMallocSpace is mapped.
3) During a Runtime shutdown (which happens once for a normal VM
shutdown or at the end of each test run), all the spaces, including
the two DlMallocSpaces, get destructed one by one. When the first
DlMallocSpace is destructed (note the space list is sorted by
address), the ~MemMap() destructor of its underlying map unmaps the
original memory region, which was already partially unmapped in 1).
That memory region now includes the libc region that holds the second
DlMallocSpace object.
4) When destruction of the second DlMallocSpace object is attempted,
the memory in which the object resides has already been unmapped in
3), which causes a SIGSEGV.
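To make the failure mode concrete, here is a minimal, simplified
sketch (an invented SketchMemMap struct, not the actual MemMap code)
of why the old scheme double unmaps: UnMapAtEnd() munmaps the tail but
never shrinks base_size_, so the munmap in ~MemMap() covers the tail a
second time.

  // Minimal sketch of the old behavior (simplified; not the real MemMap).
  #include <sys/mman.h>
  #include <stddef.h>
  typedef unsigned char byte;

  struct SketchMemMap {
    void* const base_begin_;  // Page-aligned base address.
    const size_t base_size_;  // Length of mapping; const, never shrunk.

    void UnMapAtEnd(byte* new_end) {
      size_t unmap_size =
          static_cast<byte*>(base_begin_) + base_size_ - new_end;
      munmap(new_end, unmap_size);  // Tail unmapped here (step 1)...
      // ...but base_size_ still covers it.
    }

    ~SketchMemMap() {
      // ...so this munmap spans the already-unmapped tail again (step 3),
      // tearing down whatever libc has mapped there in the meantime
      // (step 2), e.g. the second DlMallocSpace object.
      munmap(base_begin_, base_size_);
    }
  };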
This change replaces UnMapAtEnd() with a new function, RemapAtEnd(),
which combines unmapping the tail region with remapping it to achieve
the following two things (a rough sketch of the approach follows the
list):
1) Fixes this double unmap issue by updating the base_size_ member
variable to exclude the already-unmapped tail region so that ~MemMap()
will not unmap the tail region again.
2) Improves on the non-atomicity of the unmap/map sequence in
CreateZygoteSpace(). That is, once the unused tail of the original
alloc space's memory region is unmapped, something like libc could
come along and take that region before it is mapped again for the new
alloc space. That would leave a hole between the old alloc (new
zygote) space and the new alloc space and make the two spaces
non-contiguous. RemapAtEnd() eliminates new/malloc allocations between
the unmap and the map calls. Note this still isn't perfect, as other
threads could in theory take the memory region between the munmap and
the mmap calls.
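As a rough sketch of the RemapAtEnd() approach (illustrative only; it
assumes the tail is remapped in place as an anonymous mapping, and the
real implementation may differ in flags, bookkeeping and error
handling), the key points are that base_size_ is shrunk before the
destructor can ever see the tail, and that no heap allocation happens
between the munmap and mmap calls:

  // Illustrative sketch only; assumes the MemMap declaration below plus
  // <sys/mman.h> and <string>. Not the actual implementation.
  MemMap* MemMap::RemapAtEnd(byte* new_end, const char* tail_name,
                             int tail_prot, std::string* error_msg) {
    byte* old_end = static_cast<byte*>(base_begin_) + base_size_;
    size_t tail_size = old_end - new_end;

    // Shrink this mapping's bookkeeping first so that ~MemMap() will only
    // ever unmap the head, never the tail.
    base_size_ = new_end - static_cast<byte*>(base_begin_);

    // Unmap the tail and immediately remap it at the same address. No
    // new/malloc calls happen between these two syscalls, so libc cannot
    // grab the hole (though another thread still could, in theory).
    if (munmap(new_end, tail_size) == -1) {
      *error_msg = "munmap of tail failed";
      return NULL;
    }
    void* tail_addr = mmap(new_end, tail_size, tail_prot,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (tail_addr == MAP_FAILED) {
      *error_msg = "mmap of tail failed";
      return NULL;
    }
    byte* tail = static_cast<byte*>(tail_addr);
    // Hand the tail back as its own MemMap. The private constructor's full
    // argument list is truncated in the header excerpt below, so this call
    // is only indicative.
    return new MemMap(tail_name, tail, tail_size, tail, tail_size, tail_prot);
  }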
Added tests.
Change-Id: I43bc3a33a2cbfc7a092890312e34aa5285384589
diff --git a/runtime/mem_map.h b/runtime/mem_map.h
index 919463c..2c65833 100644
--- a/runtime/mem_map.h
+++ b/runtime/mem_map.h
@@ -84,8 +84,9 @@
return Begin() <= addr && addr < End();
}
- // Trim by unmapping pages at the end of the map.
- void UnMapAtEnd(byte* new_end);
+ // Unmap the pages at end and remap them to create another memory map.
+ MemMap* RemapAtEnd(byte* new_end, const char* tail_name, int tail_prot,
+ std::string* error_msg);
private:
MemMap(const std::string& name, byte* begin, size_t size, void* base_begin, size_t base_size,
@@ -96,8 +97,10 @@
size_t size_; // Length of data.
void* const base_begin_; // Page-aligned base address.
- const size_t base_size_; // Length of mapping.
+ size_t base_size_; // Length of mapping. May be changed by RemapAtEnd (ie Zygote).
int prot_; // Protection of the map.
+
+ friend class MemMapTest; // To allow access to base_begin_ and base_size_.
};
} // namespace art
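For reference, a hypothetical caller-side sketch of the new API; only
the RemapAtEnd() signature comes from the header above, and the
variable names, the space name string, and the surrounding logic are
invented for illustration:

  // Hypothetical usage, loosely modeled on a zygote-fork path such as
  // CreateZygoteSpace(); names other than RemapAtEnd() are invented.
  std::string error_msg;
  MemMap* tail_map = mem_map->RemapAtEnd(new_end, "new alloc space",
                                         PROT_READ | PROT_WRITE, &error_msg);
  if (tail_map == NULL) {
    LOG(ERROR) << "Failed to remap the alloc space tail: " << error_msg;
    return NULL;
  }
  // tail_map now owns the tail pages and the original map's base_size_
  // has been shrunk, so the two ~MemMap() calls unmap disjoint regions
  // and the double unmap can no longer happen.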