1/*
2 * Copyright (C) 2008 The Android Open Source Project
3 * All rights reserved.
4 *
5 * Redistribution and use in source and binary forms, with or without
6 * modification, are permitted provided that the following conditions
7 * are met:
8 * * Redistributions of source code must retain the above copyright
9 * notice, this list of conditions and the following disclaimer.
10 * * Redistributions in binary form must reproduce the above copyright
11 * notice, this list of conditions and the following disclaimer in
12 * the documentation and/or other materials provided with the
13 * distribution.
14 *
15 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
16 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
17 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
18 * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
19 * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
20 * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
21 * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
22 * OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
23 * AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
24 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
25 * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
26 * SUCH DAMAGE.
27 */
28/*
29 This is a version (aka dlmalloc) of malloc/free/realloc written by
30 Doug Lea and released to the public domain, as explained at
31 http://creativecommons.org/licenses/publicdomain. Send questions,
32 comments, complaints, performance data, etc to dl@cs.oswego.edu
33
34* Version 2.8.3 Thu Sep 22 11:16:15 2005 Doug Lea (dl at gee)
35
36 Note: There may be an updated version of this malloc obtainable at
37 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
38 Check before installing!
39
40* Quickstart
41
42 This library is all in one file to simplify the most common usage:
43 ftp it, compile it (-O3), and link it into another program. All of
44 the compile-time options default to reasonable values for use on
45 most platforms. You might later want to step through various
46 compile-time and dynamic tuning options.
47
48 For convenience, an include file for code using this malloc is at:
49 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
50 You don't really need this .h file unless you call functions not
51 defined in your system include files. The .h file contains only the
52 excerpts from this file needed for using this malloc on ANSI C/C++
53 systems, so long as you haven't changed compile-time options about
54 naming and tuning parameters. If you do, then you can create your
55 own malloc.h that does include all settings by cutting at the point
56 indicated below. Note that you may already by default be using a C
57 library containing a malloc that is based on some version of this
58 malloc (for example in linux). You might still want to use the one
59 in this file to customize settings or to avoid overheads associated
60 with library versions.
61
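 As a hedged sketch of this setup (the wrapper file and its name are
 hypothetical; the option macros are the ones documented under
 "Compile-time options" below):

   // mymalloc.c -- wrapper translation unit, compiled with -O3
   #define USE_LOCKS 1                    // protect public calls with a lock
   #define DEFAULT_TRIM_THRESHOLD ((size_t)1024U * (size_t)1024U)
   #include "malloc.c"                    // assuming this file is saved as malloc.c

 The same macros can equally be passed on the compiler command line
 (e.g. -DUSE_LOCKS=1) when compiling this file directly.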
62* Vital statistics:
63
64 Supported pointer/size_t representation: 4 or 8 bytes
65 size_t MUST be an unsigned type of the same width as
66 pointers. (If you are using an ancient system that declares
67 size_t as a signed type, or need it to be a different width
68 than pointers, you can use a previous release of this malloc
69 (e.g. 2.7.2) supporting these.)
70
71 Alignment: 8 bytes (default)
72 This suffices for nearly all current machines and C compilers.
73 However, you can define MALLOC_ALIGNMENT to be wider than this
 74 if necessary (up to 128 bytes), at the expense of using more space.
75
76 Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
77 8 or 16 bytes (if 8byte sizes)
78 Each malloced chunk has a hidden word of overhead holding size
79 and status information, and additional cross-check word
80 if FOOTERS is defined.
81
82 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
83 8-byte ptrs: 32 bytes (including overhead)
84
85 Even a request for zero bytes (i.e., malloc(0)) returns a
86 pointer to something of the minimum allocatable size.
 87 The maximum overhead wastage (i.e., the number of extra bytes
 88 allocated beyond those requested in malloc) is less than or equal
89 to the minimum size, except for requests >= mmap_threshold that
90 are serviced via mmap(), where the worst case wastage is about
91 32 bytes plus the remainder from a system page (the minimal
92 mmap unit); typically 4096 or 8192 bytes.
93
94 Security: static-safe; optionally more or less
95 The "security" of malloc refers to the ability of malicious
96 code to accentuate the effects of errors (for example, freeing
97 space that is not currently malloc'ed or overwriting past the
98 ends of chunks) in code that calls malloc. This malloc
99 guarantees not to modify any memory locations below the base of
100 heap, i.e., static variables, even in the presence of usage
101 errors. The routines additionally detect most improper frees
102 and reallocs. All this holds as long as the static bookkeeping
103 for malloc itself is not corrupted by some other means. This
104 is only one aspect of security -- these checks do not, and
105 cannot, detect all possible programming errors.
106
107 If FOOTERS is defined nonzero, then each allocated chunk
108 carries an additional check word to verify that it was malloced
109 from its space. These check words are the same within each
110 execution of a program using malloc, but differ across
111 executions, so externally crafted fake chunks cannot be
112 freed. This improves security by rejecting frees/reallocs that
113 could corrupt heap memory, in addition to the checks preventing
114 writes to statics that are always on. This may further improve
115 security at the expense of time and space overhead. (Note that
116 FOOTERS may also be worth using with MSPACES.)
117
118 By default detected errors cause the program to abort (calling
119 "abort()"). You can override this to instead proceed past
120 errors by defining PROCEED_ON_ERROR. In this case, a bad free
121 has no effect, and a malloc that encounters a bad address
122 caused by user overwrites will ignore the bad address by
123 dropping pointers and indices to all known memory. This may
124 be appropriate for programs that should continue if at all
125 possible in the face of programming errors, although they may
126 run out of memory because dropped memory is never reclaimed.
127
128 If you don't like either of these options, you can define
129 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
 130 else. And if you are sure that your program using malloc has
131 no errors or vulnerabilities, you can define INSECURE to 1,
132 which might (or might not) provide a small performance improvement.
133
134 Thread-safety: NOT thread-safe unless USE_LOCKS defined
135 When USE_LOCKS is defined, each public call to malloc, free,
136 etc is surrounded with either a pthread mutex or a win32
137 spinlock (depending on WIN32). This is not especially fast, and
138 can be a major bottleneck. It is designed only to provide
139 minimal protection in concurrent environments, and to provide a
140 basis for extensions. If you are using malloc in a concurrent
141 program, consider instead using ptmalloc, which is derived from
142 a version of this malloc. (See http://www.malloc.de).
143
144 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
145 This malloc can use unix sbrk or any emulation (invoked using
146 the CALL_MORECORE macro) and/or mmap/munmap or any emulation
147 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
148 memory. On most unix systems, it tends to work best if both
149 MORECORE and MMAP are enabled. On Win32, it uses emulations
150 based on VirtualAlloc. It also uses common C library functions
151 like memset.
152
153 Compliance: I believe it is compliant with the Single Unix Specification
154 (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
155 others as well.
156
157* Overview of algorithms
158
159 This is not the fastest, most space-conserving, most portable, or
160 most tunable malloc ever written. However it is among the fastest
161 while also being among the most space-conserving, portable and
162 tunable. Consistent balance across these factors results in a good
163 general-purpose allocator for malloc-intensive programs.
164
165 In most ways, this malloc is a best-fit allocator. Generally, it
166 chooses the best-fitting existing chunk for a request, with ties
167 broken in approximately least-recently-used order. (This strategy
168 normally maintains low fragmentation.) However, for requests less
 169 than 256 bytes, it deviates from best-fit when there is not an
170 exactly fitting available chunk by preferring to use space adjacent
171 to that used for the previous small request, as well as by breaking
172 ties in approximately most-recently-used order. (These enhance
173 locality of series of small allocations.) And for very large requests
174 (>= 256Kb by default), it relies on system memory mapping
175 facilities, if supported. (This helps avoid carrying around and
176 possibly fragmenting memory used only for large chunks.)
177
178 All operations (except malloc_stats and mallinfo) have execution
179 times that are bounded by a constant factor of the number of bits in
180 a size_t, not counting any clearing in calloc or copying in realloc,
181 or actions surrounding MORECORE and MMAP that have times
182 proportional to the number of non-contiguous regions returned by
183 system allocation routines, which is often just 1.
184
185 The implementation is not very modular and seriously overuses
186 macros. Perhaps someday all C compilers will do as good a job
187 inlining modular code as can now be done by brute-force expansion,
188 but now, enough of them seem not to.
189
190 Some compilers issue a lot of warnings about code that is
191 dead/unreachable only on some platforms, and also about intentional
192 uses of negation on unsigned types. All known cases of each can be
193 ignored.
194
195 For a longer but out of date high-level description, see
196 http://gee.cs.oswego.edu/dl/html/malloc.html
197
198* MSPACES
199 If MSPACES is defined, then in addition to malloc, free, etc.,
200 this file also defines mspace_malloc, mspace_free, etc. These
201 are versions of malloc routines that take an "mspace" argument
202 obtained using create_mspace, to control all internal bookkeeping.
203 If ONLY_MSPACES is defined, only these versions are compiled.
204 So if you would like to use this allocator for only some allocations,
205 and your system malloc for others, you can compile with
206 ONLY_MSPACES and then do something like...
207 static mspace mymspace = create_mspace(0,0); // for example
208 #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
209
210 (Note: If you only need one instance of an mspace, you can instead
211 use "USE_DL_PREFIX" to relabel the global malloc.)
212
213 You can similarly create thread-local allocators by storing
214 mspaces as thread-locals. For example:
215 static __thread mspace tlms = 0;
216 void* tlmalloc(size_t bytes) {
217 if (tlms == 0) tlms = create_mspace(0, 0);
218 return mspace_malloc(tlms, bytes);
219 }
220 void tlfree(void* mem) { mspace_free(tlms, mem); }
221
222 Unless FOOTERS is defined, each mspace is completely independent.
223 You cannot allocate from one and free to another (although
224 conformance is only weakly checked, so usage errors are not always
225 caught). If FOOTERS is defined, then each chunk carries around a tag
226 indicating its originating mspace, and frees are directed to their
227 originating spaces.
228
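 A minimal sketch of the full mspace life cycle (the variable names and
 sizes are illustrative; the routines are the ones declared below):

   mspace mymsp = create_mspace(0, 0);        // default capacity, no locking
   if (mymsp != 0) {
     void* p = mspace_malloc(mymsp, 128);
     void* q = mspace_calloc(mymsp, 10, sizeof(int));
     mspace_free(mymsp, p);
     mspace_free(mymsp, q);
     destroy_mspace(mymsp);                   // returns all memory to the system
   }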
229 ------------------------- Compile-time options ---------------------------
230
231Be careful in setting #define values for numerical constants of type
232size_t. On some systems, literal values are not automatically extended
233to size_t precision unless they are explicitly cast.
234
235WIN32 default: defined if _WIN32 defined
236 Defining WIN32 sets up defaults for MS environment and compilers.
237 Otherwise defaults are for unix.
238
239MALLOC_ALIGNMENT default: (size_t)8
240 Controls the minimum alignment for malloc'ed chunks. It must be a
241 power of two and at least 8, even on machines for which smaller
242 alignments would suffice. It may be defined as larger than this
243 though. Note however that code and data structures are optimized for
244 the case of 8-byte alignment.
245
246MSPACES default: 0 (false)
247 If true, compile in support for independent allocation spaces.
248 This is only supported if HAVE_MMAP is true.
249
250ONLY_MSPACES default: 0 (false)
251 If true, only compile in mspace versions, not regular versions.
252
253USE_LOCKS default: 0 (false)
254 Causes each call to each public routine to be surrounded with
255 pthread or WIN32 mutex lock/unlock. (If set true, this can be
256 overridden on a per-mspace basis for mspace versions.)
257
258FOOTERS default: 0
259 If true, provide extra checking and dispatching by placing
260 information in the footers of allocated chunks. This adds
261 space and time overhead.
262
263INSECURE default: 0
264 If true, omit checks for usage errors and heap space overwrites.
265
266USE_DL_PREFIX default: NOT defined
267 Causes compiler to prefix all public routines with the string 'dl'.
268 This can be useful when you only want to use this malloc in one part
269 of a program, using your regular system malloc elsewhere.
270
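 For example, a sketch of mixing the two allocators in one program
 (the calling code is hypothetical):

   // compiled with -DUSE_DL_PREFIX, this file exports dlmalloc, dlfree, etc.
   void* scratch = dlmalloc(4096);   // served by this allocator
   char* name = malloc(64);          // served by the regular system malloc
   free(name);                       // each pointer goes back to its own allocator
   dlfree(scratch);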
271ABORT default: defined as abort()
272 Defines how to abort on failed checks. On most systems, a failed
273 check cannot die with an "assert" or even print an informative
274 message, because the underlying print routines in turn call malloc,
275 which will fail again. Generally, the best policy is to simply call
276 abort(). It's not very useful to do more than this because many
277 errors due to overwriting will show up as address faults (null, odd
278 addresses etc) rather than malloc-triggered checks, so will also
279 abort. Also, most compilers know that abort() does not return, so
280 can better optimize code conditionally calling it.
281
282PROCEED_ON_ERROR default: defined as 0 (false)
283 Controls whether detected bad addresses are bypassed rather than
284 causing an abort. If set, detected bad arguments to free and
285 realloc are ignored. And all bookkeeping information is zeroed out
286 upon a detected overwrite of freed heap space, thus losing the
287 ability to ever return it from malloc again, but enabling the
288 application to proceed. If PROCEED_ON_ERROR is defined, the
289 static variable malloc_corruption_error_count is compiled in
290 and can be examined to see if errors have occurred. This option
291 generates slower code than the default abort policy.
292
293DEBUG default: NOT defined
294 The DEBUG setting is mainly intended for people trying to modify
295 this code or diagnose problems when porting to new platforms.
296 However, it may also be able to better isolate user errors than just
297 using runtime checks. The assertions in the check routines spell
298 out in more detail the assumptions and invariants underlying the
299 algorithms. The checking is fairly extensive, and will slow down
300 execution noticeably. Calling malloc_stats or mallinfo with DEBUG
301 set will attempt to check every non-mmapped allocated and free chunk
302 in the course of computing the summaries.
303
304ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
305 Debugging assertion failures can be nearly impossible if your
306 version of the assert macro causes malloc to be called, which will
307 lead to a cascade of further failures, blowing the runtime stack.
308 ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
309 which will usually make debugging easier.
310
311MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
312 The action to take before "return 0" when malloc fails because
313 no memory is available.
314
315HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
316 True if this system supports sbrk or an emulation of it.
317
318MORECORE default: sbrk
319 The name of the sbrk-style system routine to call to obtain more
320 memory. See below for guidance on writing custom MORECORE
321 functions. The type of the argument to sbrk/MORECORE varies across
322 systems. It cannot be size_t, because it supports negative
323 arguments, so it is normally the signed type of the same width as
324 size_t (sometimes declared as "intptr_t"). It doesn't much matter
325 though. Internally, we only call it with arguments less than half
326 the max value of a size_t, which should work across all reasonable
327 possibilities, although sometimes generating compiler warnings. See
328 near the end of this file for guidelines for creating a custom
329 version of MORECORE.
330
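 A heavily hedged sketch of a hand-crafted MORECORE over a fixed static
 arena (names and sizes are hypothetical; see the guidelines near the end
 of this file for the full requirements):

   // compile with -DMORECORE=my_morecore -DHAVE_MMAP=0 -DMORECORE_CANNOT_TRIM
   static char my_arena[4 << 20];             // 4MB backing store
   static size_t my_used = 0;
   void* my_morecore(ptrdiff_t incr) {
     char* p = my_arena + my_used;
     if (incr < 0 || my_used + (size_t)incr > sizeof(my_arena))
       return MFAIL;                          // the failure value this malloc expects
     my_used += (size_t)incr;
     return p;                                // consecutive calls return contiguous
   }                                          //   increasing addresses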
331MORECORE_CONTIGUOUS default: 1 (true)
332 If true, take advantage of fact that consecutive calls to MORECORE
333 with positive arguments always return contiguous increasing
334 addresses. This is true of unix sbrk. It does not hurt too much to
335 set it true anyway, since malloc copes with non-contiguities.
336 Setting it false when MORECORE is definitely non-contiguous saves the
337 time and possibly wasted space it would otherwise take to discover this.
338
339MORECORE_CANNOT_TRIM default: NOT defined
340 True if MORECORE cannot release space back to the system when given
341 negative arguments. This is generally necessary only if you are
342 using a hand-crafted MORECORE function that cannot handle negative
343 arguments.
344
345HAVE_MMAP default: 1 (true)
346 True if this system supports mmap or an emulation of it. If so, and
347 HAVE_MORECORE is not true, MMAP is used for all system
348 allocation. If set and HAVE_MORECORE is true as well, MMAP is
349 primarily used to directly allocate very large blocks. It is also
350 used as a backup strategy in cases where MORECORE fails to provide
351 space from system. Note: A single call to MUNMAP is assumed to be
352 able to unmap memory that may have been allocated using multiple calls
353 to MMAP, so long as they are adjacent.
354
355HAVE_MREMAP default: 1 on linux, else 0
356 If true realloc() uses mremap() to re-allocate large blocks and
357 extend or shrink allocation spaces.
358
359MMAP_CLEARS default: 1 on unix
360 True if mmap clears memory so calloc doesn't need to. This is true
361 for standard unix mmap using /dev/zero.
362
363USE_BUILTIN_FFS default: 0 (i.e., not used)
364 Causes malloc to use the builtin ffs() function to compute indices.
365 Some compilers may recognize and intrinsify ffs to be faster than the
366 supplied C version. Also, the case of x86 using gcc is special-cased
367 to an asm instruction, so is already as fast as it can be, and so
368 this setting has no effect. (On most x86s, the asm version is only
369 slightly faster than the C version.)
370
371malloc_getpagesize default: derive from system includes, or 4096.
372 The system page size. To the extent possible, this malloc manages
373 memory from the system in page-size units. This may be (and
374 usually is) a function rather than a constant. This is ignored
375 if WIN32, where page size is determined using getSystemInfo during
376 initialization.
377
378USE_DEV_RANDOM default: 0 (i.e., not used)
379 Causes malloc to use /dev/random to initialize secure magic seed for
380 stamping footers. Otherwise, the current time is used.
381
382NO_MALLINFO default: 0
383 If defined, don't compile "mallinfo". This can be a simple way
384 of dealing with mismatches between system declarations and
385 those in this file.
386
387MALLINFO_FIELD_TYPE default: size_t
388 The type of the fields in the mallinfo struct. This was originally
389 defined as "int" in SVID etc, but is more usefully defined as
390 size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
391
392REALLOC_ZERO_BYTES_FREES default: not defined
393 This should be set if a call to realloc with zero bytes should
394 be the same as a call to free. Some people think it should. Otherwise,
395 since this malloc returns a unique pointer for malloc(0), so does
396 realloc(p, 0).
397
398LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
399LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
400LACKS_STDLIB_H default: NOT defined unless on WIN32
401 Define these if your system does not have these header files.
402 You might need to manually insert some of the declarations they provide.
403
404DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
405 system_info.dwAllocationGranularity in WIN32,
406 otherwise 64K.
407 Also settable using mallopt(M_GRANULARITY, x)
408 The unit for allocating and deallocating memory from the system. On
409 most systems with contiguous MORECORE, there is no reason to
410 make this more than a page. However, systems with MMAP tend to
411 either require or encourage larger granularities. You can increase
412 this value to prevent system allocation functions from being called so
413 often, especially if they are slow. The value must be at least one
414 page and must be a power of two. Setting to 0 causes initialization
415 to either page size or win32 region size. (Note: In previous
416 versions of malloc, the equivalent of this option was called
417 "TOP_PAD")
418
419DEFAULT_TRIM_THRESHOLD default: 2MB
420 Also settable using mallopt(M_TRIM_THRESHOLD, x)
421 The maximum amount of unused top-most memory to keep before
422 releasing via malloc_trim in free(). Automatic trimming is mainly
423 useful in long-lived programs using contiguous MORECORE. Because
424 trimming via sbrk can be slow on some systems, and can sometimes be
425 wasteful (in cases where programs immediately afterward allocate
426 more large chunks) the value should be high enough so that your
427 overall system performance would improve by releasing this much
428 memory. As a rough guide, you might set to a value close to the
429 average size of a process (program) running on your system.
430 Releasing this much memory would allow such a process to run in
431 memory. Generally, it is worth tuning trim thresholds when a
432 program undergoes phases where several large chunks are allocated
433 and released in ways that can reuse each other's storage, perhaps
434 mixed with phases where there are no such chunks at all. The trim
435 value must be greater than page size to have any useful effect. To
436 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
437 some people use of mallocing a huge space and then freeing it at
438 program startup, in an attempt to reserve system memory, doesn't
439 have the intended effect under automatic trimming, since that memory
440 will immediately be returned to the system.
441
442DEFAULT_MMAP_THRESHOLD default: 256K
443 Also settable using mallopt(M_MMAP_THRESHOLD, x)
444 The request size threshold for using MMAP to directly service a
445 request. Requests of at least this size that cannot be allocated
446 using already-existing space will be serviced via mmap. (If enough
447 normal freed space already exists it is used instead.) Using mmap
448 segregates relatively large chunks of memory so that they can be
449 individually obtained and released from the host system. A request
450 serviced through mmap is never reused by any other request (at least
451 not directly; the system may just so happen to remap successive
452 requests to the same locations). Segregating space in this way has
453 the benefits that: Mmapped space can always be individually released
454 back to the system, which helps keep the system level memory demands
455 of a long-lived program low. Also, mapped memory doesn't become
456 `locked' between other chunks, as can happen with normally allocated
457 chunks, which means that even trimming via malloc_trim would not
458 release them. However, it has the disadvantage that the space
459 cannot be reclaimed, consolidated, and then used to service later
460 requests, as happens with normal chunks. The advantages of mmap
461 nearly always outweigh disadvantages for "large" chunks, but the
462 value of "large" may vary across systems. The default is an
463 empirically derived value that works well in most systems. You can
464 disable mmap by setting to MAX_SIZE_T.
465
466*/
467
468#ifndef WIN32
469#ifdef _WIN32
470#define WIN32 1
471#endif /* _WIN32 */
472#endif /* WIN32 */
473#ifdef WIN32
474#define WIN32_LEAN_AND_MEAN
475#include <windows.h>
476#define HAVE_MMAP 1
477#define HAVE_MORECORE 0
478#define LACKS_UNISTD_H
479#define LACKS_SYS_PARAM_H
480#define LACKS_SYS_MMAN_H
481#define LACKS_STRING_H
482#define LACKS_STRINGS_H
483#define LACKS_SYS_TYPES_H
484#define LACKS_ERRNO_H
485#define MALLOC_FAILURE_ACTION
486#define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
487#endif /* WIN32 */
488
489#if defined(DARWIN) || defined(_DARWIN)
490/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
491#ifndef HAVE_MORECORE
492#define HAVE_MORECORE 0
493#define HAVE_MMAP 1
494#endif /* HAVE_MORECORE */
495#endif /* DARWIN */
496
497#ifndef LACKS_SYS_TYPES_H
498#include <sys/types.h> /* For size_t */
499#endif /* LACKS_SYS_TYPES_H */
500
501/* The maximum possible size_t value has all bits set */
502#define MAX_SIZE_T (~(size_t)0)
503
504#ifndef ONLY_MSPACES
505#define ONLY_MSPACES 0
506#endif /* ONLY_MSPACES */
507#ifndef MSPACES
508#if ONLY_MSPACES
509#define MSPACES 1
510#else /* ONLY_MSPACES */
511#define MSPACES 0
512#endif /* ONLY_MSPACES */
513#endif /* MSPACES */
514#ifndef MALLOC_ALIGNMENT
515#define MALLOC_ALIGNMENT ((size_t)8U)
516#endif /* MALLOC_ALIGNMENT */
517#ifndef FOOTERS
518#define FOOTERS 0
519#endif /* FOOTERS */
520#ifndef USE_MAX_ALLOWED_FOOTPRINT
521#define USE_MAX_ALLOWED_FOOTPRINT 0
522#endif
523#ifndef ABORT
524#define ABORT abort()
525#endif /* ABORT */
526#ifndef ABORT_ON_ASSERT_FAILURE
527#define ABORT_ON_ASSERT_FAILURE 1
528#endif /* ABORT_ON_ASSERT_FAILURE */
529#ifndef PROCEED_ON_ERROR
530#define PROCEED_ON_ERROR 0
531#endif /* PROCEED_ON_ERROR */
532#ifndef USE_LOCKS
533#define USE_LOCKS 0
534#endif /* USE_LOCKS */
535#ifndef INSECURE
536#define INSECURE 0
537#endif /* INSECURE */
538#ifndef HAVE_MMAP
539#define HAVE_MMAP 1
540#endif /* HAVE_MMAP */
541#ifndef MMAP_CLEARS
542#define MMAP_CLEARS 1
543#endif /* MMAP_CLEARS */
544#ifndef HAVE_MREMAP
545#ifdef linux
546#define HAVE_MREMAP 1
547#else /* linux */
548#define HAVE_MREMAP 0
549#endif /* linux */
550#endif /* HAVE_MREMAP */
551#ifndef MALLOC_FAILURE_ACTION
552#define MALLOC_FAILURE_ACTION errno = ENOMEM;
553#endif /* MALLOC_FAILURE_ACTION */
554#ifndef HAVE_MORECORE
555#if ONLY_MSPACES
556#define HAVE_MORECORE 0
557#else /* ONLY_MSPACES */
558#define HAVE_MORECORE 1
559#endif /* ONLY_MSPACES */
560#endif /* HAVE_MORECORE */
561#if !HAVE_MORECORE
562#define MORECORE_CONTIGUOUS 0
563#else /* !HAVE_MORECORE */
564#ifndef MORECORE
565#define MORECORE sbrk
566#endif /* MORECORE */
567#ifndef MORECORE_CONTIGUOUS
568#define MORECORE_CONTIGUOUS 1
569#endif /* MORECORE_CONTIGUOUS */
570#endif /* HAVE_MORECORE */
571#ifndef DEFAULT_GRANULARITY
572#if MORECORE_CONTIGUOUS
573#define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
574#else /* MORECORE_CONTIGUOUS */
575#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
576#endif /* MORECORE_CONTIGUOUS */
577#endif /* DEFAULT_GRANULARITY */
578#ifndef DEFAULT_TRIM_THRESHOLD
579#ifndef MORECORE_CANNOT_TRIM
580#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
581#else /* MORECORE_CANNOT_TRIM */
582#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
583#endif /* MORECORE_CANNOT_TRIM */
584#endif /* DEFAULT_TRIM_THRESHOLD */
585#ifndef DEFAULT_MMAP_THRESHOLD
586#if HAVE_MMAP
587#define DEFAULT_MMAP_THRESHOLD ((size_t)64U * (size_t)1024U)
588#else /* HAVE_MMAP */
589#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
590#endif /* HAVE_MMAP */
591#endif /* DEFAULT_MMAP_THRESHOLD */
592#ifndef USE_BUILTIN_FFS
593#define USE_BUILTIN_FFS 0
594#endif /* USE_BUILTIN_FFS */
595#ifndef USE_DEV_RANDOM
596#define USE_DEV_RANDOM 0
597#endif /* USE_DEV_RANDOM */
598#ifndef NO_MALLINFO
599#define NO_MALLINFO 0
600#endif /* NO_MALLINFO */
601#ifndef MALLINFO_FIELD_TYPE
602#define MALLINFO_FIELD_TYPE size_t
603#endif /* MALLINFO_FIELD_TYPE */
604
605/*
606 mallopt tuning options. SVID/XPG defines four standard parameter
607 numbers for mallopt, normally defined in malloc.h. None of these
608 are used in this malloc, so setting them has no effect. But this
609 malloc does support the following options.
610*/
611
612#define M_TRIM_THRESHOLD (-1)
613#define M_GRANULARITY (-2)
614#define M_MMAP_THRESHOLD (-3)
615
616/* ------------------------ Mallinfo declarations ------------------------ */
617
618#if !NO_MALLINFO
619/*
620 This version of malloc supports the standard SVID/XPG mallinfo
621 routine that returns a struct containing usage properties and
622 statistics. It should work on any system that has a
623 /usr/include/malloc.h defining struct mallinfo. The main
624 declaration needed is the mallinfo struct that is returned (by-copy)
625 by mallinfo(). The mallinfo struct contains a bunch of fields that
626 are not even meaningful in this version of malloc. These fields
627 are instead filled by mallinfo() with other numbers that might be of
628 interest.
629
630 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
631 /usr/include/malloc.h file that includes a declaration of struct
632 mallinfo. If so, it is included; else a compliant version is
633 declared below. These must be precisely the same for mallinfo() to
634 work. The original SVID version of this struct, defined on most
635 systems with mallinfo, declares all fields as ints. But some others
636 define them as unsigned long. If your system defines the fields using a
637 type of different width than listed here, you MUST #include your
638 system version and #define HAVE_USR_INCLUDE_MALLOC_H.
639*/
640
641/* #define HAVE_USR_INCLUDE_MALLOC_H */
642
643#if !ANDROID
644#ifdef HAVE_USR_INCLUDE_MALLOC_H
645#include "/usr/include/malloc.h"
646#else /* HAVE_USR_INCLUDE_MALLOC_H */
647
648struct mallinfo {
649 MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
650 MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
651 MALLINFO_FIELD_TYPE smblks; /* always 0 */
652 MALLINFO_FIELD_TYPE hblks; /* always 0 */
653 MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
654 MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
655 MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
656 MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
657 MALLINFO_FIELD_TYPE fordblks; /* total free space */
658 MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
659};
660
661#endif /* HAVE_USR_INCLUDE_MALLOC_H */
662#endif /* ANDROID */
663#endif /* NO_MALLINFO */
664
665#ifdef __cplusplus
666extern "C" {
667#endif /* __cplusplus */
668
669#if !ONLY_MSPACES
670
671/* ------------------- Declarations of public routines ------------------- */
672
673/* Check an additional macro for the five primary functions */
674#ifndef USE_DL_PREFIX
675#define dlcalloc calloc
676#define dlfree free
677#define dlmalloc malloc
678#define dlmemalign memalign
679#define dlrealloc realloc
680#endif
681
682#ifndef USE_DL_PREFIX
683#define dlvalloc valloc
684#define dlpvalloc pvalloc
685#define dlmallinfo mallinfo
686#define dlmallopt mallopt
687#define dlmalloc_trim malloc_trim
688#define dlmalloc_walk_free_pages \
689 malloc_walk_free_pages
690#define dlmalloc_walk_heap \
691 malloc_walk_heap
692#define dlmalloc_stats malloc_stats
693#define dlmalloc_usable_size malloc_usable_size
694#define dlmalloc_footprint malloc_footprint
695#define dlmalloc_max_allowed_footprint \
696 malloc_max_allowed_footprint
697#define dlmalloc_set_max_allowed_footprint \
698 malloc_set_max_allowed_footprint
699#define dlmalloc_max_footprint malloc_max_footprint
700#define dlindependent_calloc independent_calloc
701#define dlindependent_comalloc independent_comalloc
702#endif /* USE_DL_PREFIX */
703
704
705/*
706 malloc(size_t n)
707 Returns a pointer to a newly allocated chunk of at least n bytes, or
708 null if no space is available, in which case errno is set to ENOMEM
709 on ANSI C systems.
710
711 If n is zero, malloc returns a minimum-sized chunk. (The minimum
712 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
713 systems.) Note that size_t is an unsigned type, so calls with
714 arguments that would be negative if signed are interpreted as
715 requests for huge amounts of space, which will often fail. The
716 maximum supported value of n differs across systems, but is in all
717 cases less than the maximum representable value of a size_t.
718*/
719void* dlmalloc(size_t);
720
721/*
722 free(void* p)
723 Releases the chunk of memory pointed to by p, that had been previously
724 allocated using malloc or a related routine such as realloc.
725 It has no effect if p is null. If p was not malloced or already
726 freed, free(p) will by default cause the current program to abort.
727*/
728void dlfree(void*);
729
730/*
731 calloc(size_t n_elements, size_t element_size);
732 Returns a pointer to n_elements * element_size bytes, with all locations
733 set to zero.
734*/
735void* dlcalloc(size_t, size_t);
736
737/*
738 realloc(void* p, size_t n)
739 Returns a pointer to a chunk of size n that contains the same data
740 as does chunk p up to the minimum of (n, p's size) bytes, or null
741 if no space is available.
742
743 The returned pointer may or may not be the same as p. The algorithm
744 prefers extending p in most cases when possible, otherwise it
745 employs the equivalent of a malloc-copy-free sequence.
746
747 If p is null, realloc is equivalent to malloc.
748
749 If space is not available, realloc returns null, errno is set (if on
750 ANSI) and p is NOT freed.
751
752 if n is for fewer bytes than already held by p, the newly unused
753 space is lopped off and freed if possible. realloc with a size
754 argument of zero (re)allocates a minimum-sized chunk.
755
756 The old unix realloc convention of allowing the last-free'd chunk
757 to be used as an argument to realloc is not supported.
758*/
759
760void* dlrealloc(void*, size_t);
761
762/*
763 memalign(size_t alignment, size_t n);
764 Returns a pointer to a newly allocated chunk of n bytes, aligned
765 in accord with the alignment argument.
766
767 The alignment argument should be a power of two. If the argument is
768 not a power of two, the nearest greater power is used.
769 8-byte alignment is guaranteed by normal malloc calls, so don't
770 bother calling memalign with an argument of 8 or less.
771
772 Overreliance on memalign is a sure way to fragment space.
773*/
774void* dlmemalign(size_t, size_t);
775
776/*
777 int posix_memalign(void **memptr, size_t alignment, size_t size);
778 Places a pointer to a newly allocated chunk of size bytes, aligned
779 in accord with the alignment argument, in *memptr.
780
781 The return value is 0 on success, and ENOMEM on failure.
782
783 The alignment argument should be a power of two. If the argument is
784 not a power of two, the nearest greater power is used.
785 8-byte alignment is guaranteed by normal malloc calls, so don't
786 bother calling memalign with an argument of 8 or less.
787
788 Overreliance on posix_memalign is a sure way to fragment space.
789*/
790int posix_memalign(void **memptr, size_t alignment, size_t size);
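/*
  A brief usage sketch, assuming the unprefixed entry points (the buffer
  name and sizes are illustrative):

    void* buf = 0;
    if (posix_memalign(&buf, 64, 1024) == 0) {
      // buf is 64-byte aligned and holds at least 1024 usable bytes
      free(buf);
    }
    // a nonzero return value means ENOMEM
*/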
791
792/*
793 valloc(size_t n);
794 Equivalent to memalign(pagesize, n), where pagesize is the page
795 size of the system. If the pagesize is unknown, 4096 is used.
796*/
797void* dlvalloc(size_t);
798
799/*
800 mallopt(int parameter_number, int parameter_value)
801 Sets tunable parameters. The format is to provide a
802 (parameter-number, parameter-value) pair. mallopt then sets the
803 corresponding parameter to the argument value if it can (i.e., so
804 long as the value is meaningful), and returns 1 if successful else
805 0. SVID/XPG/ANSI defines four standard param numbers for mallopt,
806 normally defined in malloc.h. None of these are used in this malloc,
807 so setting them has no effect. But this malloc also supports other
808 options in mallopt. See below for details. Briefly, supported
809 parameters are as follows (listed defaults are for "typical"
810 configurations).
811
812 Symbol param # default allowed param values
813 M_TRIM_THRESHOLD -1 2*1024*1024 any (MAX_SIZE_T disables)
814 M_GRANULARITY -2 page size any power of 2 >= page size
815 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
816*/
817int dlmallopt(int, int);
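/*
  A short usage sketch (the values are illustrative; each call returns 1
  on success and 0 if the value was rejected):

    mallopt(M_TRIM_THRESHOLD, 1024 * 1024);   // trim when >= 1MB of slack at top
    mallopt(M_MMAP_THRESHOLD, 512 * 1024);    // mmap requests of 512KB and up
    mallopt(M_GRANULARITY,    64 * 1024);     // must be a power of 2 >= page size
*/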
818
819/*
820 malloc_footprint();
821 Returns the number of bytes obtained from the system. The total
822 number of bytes allocated by malloc, realloc etc., is less than this
823 value. Unlike mallinfo, this function returns only a precomputed
824 result, so can be called frequently to monitor memory consumption.
825 Even if locks are otherwise defined, this function does not use them,
826 so results might not be up to date.
827*/
828size_t dlmalloc_footprint(void);
829
830#if USE_MAX_ALLOWED_FOOTPRINT
831/*
832 malloc_max_allowed_footprint();
833 Returns the number of bytes that the heap is allowed to obtain
834 from the system. malloc_footprint() should always return a
835 size less than or equal to max_allowed_footprint, unless the
836 max_allowed_footprint was set to a value smaller than the
837 footprint at the time.
838*/
839size_t dlmalloc_max_allowed_footprint();
840
841/*
842 malloc_set_max_allowed_footprint();
843 Set the maximum number of bytes that the heap is allowed to
844 obtain from the system. The size will be rounded up to a whole
845 page, and the rounded number will be returned from future calls
846 to malloc_max_allowed_footprint(). If the new max_allowed_footprint
847 is larger than the current footprint, the heap will never grow
848 larger than max_allowed_footprint. If the new max_allowed_footprint
849 is smaller than the current footprint, the heap will not grow
850 further.
851
852 TODO: try to force the heap to give up memory in the shrink case,
853 and update this comment once that happens.
854*/
855void dlmalloc_set_max_allowed_footprint(size_t bytes);
856#endif /* USE_MAX_ALLOWED_FOOTPRINT */
857
858/*
859 malloc_max_footprint();
860 Returns the maximum number of bytes obtained from the system. This
861 value will be greater than current footprint if deallocated space
862 has been reclaimed by the system. The peak number of bytes allocated
863 by malloc, realloc etc., is less than this value. Unlike mallinfo,
864 this function returns only a precomputed result, so can be called
865 frequently to monitor memory consumption. Even if locks are
866 otherwise defined, this function does not use them, so results might
867 not be up to date.
868*/
869size_t dlmalloc_max_footprint(void);
870
871#if !NO_MALLINFO
872/*
873 mallinfo()
874 Returns (by copy) a struct containing various summary statistics:
875
876 arena: current total non-mmapped bytes allocated from system
877 ordblks: the number of free chunks
878 smblks: always zero.
879 hblks: current number of mmapped regions
880 hblkhd: total bytes held in mmapped regions
881 usmblks: the maximum total allocated space. This will be greater
882 than current total if trimming has occurred.
883 fsmblks: always zero
884 uordblks: current total allocated space (normal or mmapped)
885 fordblks: total free space
886 keepcost: the maximum number of bytes that could ideally be released
887 back to system via malloc_trim. ("ideally" means that
888 it ignores page restrictions etc.)
889
890 Because these fields are ints, but internal bookkeeping may
891 be kept as longs, the reported values may wrap around zero and
892 thus be inaccurate.
893*/
894struct mallinfo dlmallinfo(void);
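/*
  A small sketch of reading the summary statistics, assuming the unprefixed
  entry point (the reporting code is illustrative):

    struct mallinfo mi = mallinfo();
    fprintf(stderr, "from system: %zu  in use: %zu  free: %zu\n",
            (size_t)mi.arena + (size_t)mi.hblkhd,
            (size_t)mi.uordblks, (size_t)mi.fordblks);
*/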
895#endif /* NO_MALLINFO */
896
897/*
898 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
899
900 independent_calloc is similar to calloc, but instead of returning a
901 single cleared space, it returns an array of pointers to n_elements
902 independent elements that can hold contents of size elem_size, each
903 of which starts out cleared, and can be independently freed,
904 realloc'ed etc. The elements are guaranteed to be adjacently
905 allocated (this is not guaranteed to occur with multiple callocs or
906 mallocs), which may also improve cache locality in some
907 applications.
908
909 The "chunks" argument is optional (i.e., may be null, which is
910 probably the most typical usage). If it is null, the returned array
911 is itself dynamically allocated and should also be freed when it is
912 no longer needed. Otherwise, the chunks array must be of at least
913 n_elements in length. It is filled in with the pointers to the
914 chunks.
915
916 In either case, independent_calloc returns this pointer array, or
917 null if the allocation failed. If n_elements is zero and "chunks"
918 is null, it returns a chunk representing an array with zero elements
919 (which should be freed if not wanted).
920
921 Each element must be individually freed when it is no longer
922 needed. If you'd like to instead be able to free all at once, you
923 should instead use regular calloc and assign pointers into this
924 space to represent elements. (In this case though, you cannot
925 independently free elements.)
926
927 independent_calloc simplifies and speeds up implementations of many
928 kinds of pools. It may also be useful when constructing large data
929 structures that initially have a fixed number of fixed-sized nodes,
930 but the number is not known at compile time, and some of the nodes
931 may later need to be freed. For example:
932
933 struct Node { int item; struct Node* next; };
934
935 struct Node* build_list() {
936 struct Node** pool;
 937 int i, n = read_number_of_nodes_needed();
 938 if (n <= 0) return 0;
 939 pool = (struct Node**)independent_calloc(n, sizeof(struct Node), 0);
940 if (pool == 0) die();
941 // organize into a linked list...
942 struct Node* first = pool[0];
943 for (i = 0; i < n-1; ++i)
944 pool[i]->next = pool[i+1];
945 free(pool); // Can now free the array (or not, if it is needed later)
946 return first;
947 }
948*/
949void** dlindependent_calloc(size_t, size_t, void**);
950
951/*
952 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
953
954 independent_comalloc allocates, all at once, a set of n_elements
955 chunks with sizes indicated in the "sizes" array. It returns
956 an array of pointers to these elements, each of which can be
957 independently freed, realloc'ed etc. The elements are guaranteed to
958 be adjacently allocated (this is not guaranteed to occur with
959 multiple callocs or mallocs), which may also improve cache locality
960 in some applications.
961
962 The "chunks" argument is optional (i.e., may be null). If it is null
963 the returned array is itself dynamically allocated and should also
964 be freed when it is no longer needed. Otherwise, the chunks array
965 must be of at least n_elements in length. It is filled in with the
966 pointers to the chunks.
967
968 In either case, independent_comalloc returns this pointer array, or
969 null if the allocation failed. If n_elements is zero and chunks is
970 null, it returns a chunk representing an array with zero elements
971 (which should be freed if not wanted).
972
973 Each element must be individually freed when it is no longer
974 needed. If you'd like to instead be able to free all at once, you
975 should instead use a single regular malloc, and assign pointers at
976 particular offsets in the aggregate space. (In this case though, you
977 cannot independently free elements.)
978
979 independent_comalloc differs from independent_calloc in that each
980 element may have a different size, and also that it does not
981 automatically clear elements.
982
983 independent_comalloc can be used to speed up allocation in cases
984 where several structs or objects must always be allocated at the
985 same time. For example:
986
987 struct Head { ... }
988 struct Foot { ... }
989
990 void send_message(char* msg) {
991 int msglen = strlen(msg);
992 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
993 void* chunks[3];
994 if (independent_comalloc(3, sizes, chunks) == 0)
995 die();
996 struct Head* head = (struct Head*)(chunks[0]);
997 char* body = (char*)(chunks[1]);
998 struct Foot* foot = (struct Foot*)(chunks[2]);
999 // ...
1000 }
1001
1002 In general though, independent_comalloc is worth using only for
1003 larger values of n_elements. For small values, you probably won't
1004 detect enough difference from a series of malloc calls to bother.
1005
1006 Overuse of independent_comalloc can increase overall memory usage,
1007 since it cannot reuse existing noncontiguous small chunks that
1008 might be available for some of the elements.
1009*/
1010void** dlindependent_comalloc(size_t, size_t*, void**);
1011
1012
1013/*
1014 pvalloc(size_t n);
1015 Equivalent to valloc(minimum-page-that-holds(n)), that is,
1016 round up n to nearest pagesize.
1017 */
1018void* dlpvalloc(size_t);
1019
1020/*
1021 malloc_trim(size_t pad);
1022
1023 If possible, gives memory back to the system (via negative arguments
1024 to sbrk) if there is unused memory at the `high' end of the malloc
1025 pool or in unused MMAP segments. You can call this after freeing
1026 large blocks of memory to potentially reduce the system-level memory
1027 requirements of a program. However, it cannot guarantee to reduce
1028 memory. Under some allocation patterns, some large free blocks of
1029 memory will be locked between two used chunks, so they cannot be
1030 given back to the system.
1031
1032 The `pad' argument to malloc_trim represents the amount of free
1033 trailing space to leave untrimmed. If this argument is zero, only
1034 the minimum amount of memory to maintain internal data structures
1035 will be left. Non-zero arguments can be supplied to maintain enough
1036 trailing space to service future expected allocations without having
1037 to re-obtain memory from the system.
1038
1039 Malloc_trim returns 1 if it actually released any memory, else 0.
1040*/
1041int dlmalloc_trim(size_t);
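/*
  For example, a sketch assuming the unprefixed entry point (big_buffer is
  hypothetical):

    free(big_buffer);                        // release some large allocations first
    int released = malloc_trim(64 * 1024);   // keep 64KB of slack for future requests
    // released == 1 only if memory actually went back to the system
*/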
1042
1043/*
1044 malloc_walk_free_pages(handler, harg)
1045
1046 Calls the provided handler on each free region in the heap. The
1047 memory between start and end is guaranteed not to contain any
1048 important data, so the handler is free to alter the contents
1049 in any way. This can be used to advise the OS that large free
1050 regions may be swapped out.
1051
1052 The value in harg will be passed to each call of the handler.
1053 */
1054void dlmalloc_walk_free_pages(void(*)(void*, void*, void*), void*);
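/*
  A hedged sketch of one possible handler; the (start, end, harg) parameter
  order is assumed from the description above, and madvise is just one way
  to use it:

    #include <sys/mman.h>                // for madvise / MADV_DONTNEED
    static void release_region(void* start, void* end, void* harg) {
      size_t pg = 4096;                  // or sysconf(_SC_PAGESIZE)
      char* s = (char*)(((size_t)start + pg - 1) & ~(pg - 1));
      char* e = (char*)((size_t)end & ~(pg - 1));
      (void)harg;
      if (e > s)
        madvise(s, (size_t)(e - s), MADV_DONTNEED);  // let the OS drop these pages
    }
    // ... malloc_walk_free_pages(release_region, 0);
*/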
1055
1056/*
1057 malloc_walk_heap(handler, harg)
1058
1059 Calls the provided handler on each object or free region in the
1060 heap. The handler will receive the chunk pointer and length, the
1061 object pointer and length, and the value in harg on each call.
1062 */
1063void dlmalloc_walk_heap(void(*)(const void*, size_t,
1064 const void*, size_t, void*),
1065 void*);
1066
1067/*
1068 malloc_usable_size(void* p);
1069
1070 Returns the number of bytes you can actually use in
1071 an allocated chunk, which may be more than you requested (although
1072 often not) due to alignment and minimum size constraints.
1073 You can use this many bytes without worrying about
1074 overwriting other allocated objects. This is not a particularly great
1075 programming practice. malloc_usable_size can be more useful in
1076 debugging and assertions, for example:
1077
1078 p = malloc(n);
1079 assert(malloc_usable_size(p) >= 256);
1080*/
1081size_t dlmalloc_usable_size(void*);
1082
1083/*
1084 malloc_stats();
1085 Prints on stderr the amount of space obtained from the system (both
1086 via sbrk and mmap), the maximum amount (which may be more than
1087 current if malloc_trim and/or munmap got called), and the current
1088 number of bytes allocated via malloc (or realloc, etc) but not yet
1089 freed. Note that this is the number of bytes allocated, not the
1090 number requested. It will be larger than the number requested
1091 because of alignment and bookkeeping overhead. Because it includes
1092 alignment wastage as being in use, this figure may be greater than
1093 zero even when no user-level chunks are allocated.
1094
1095 The reported current and maximum system memory can be inaccurate if
1096 a program makes other calls to system memory allocation functions
1097 (normally sbrk) outside of malloc.
1098
1099 malloc_stats prints only the most commonly interesting statistics.
1100 More information can be obtained by calling mallinfo.
1101*/
1102void dlmalloc_stats(void);
1103
1104#endif /* ONLY_MSPACES */
1105
1106#if MSPACES
1107
1108/*
1109 mspace is an opaque type representing an independent
1110 region of space that supports mspace_malloc, etc.
1111*/
1112typedef void* mspace;
1113
1114/*
1115 create_mspace creates and returns a new independent space with the
1116 given initial capacity, or, if 0, the default granularity size. It
1117 returns null if there is no system memory available to create the
1118 space. If argument locked is non-zero, the space uses a separate
1119 lock to control access. The capacity of the space will grow
1120 dynamically as needed to service mspace_malloc requests. You can
1121 control the sizes of incremental increases of this space by
1122 compiling with a different DEFAULT_GRANULARITY or dynamically
1123 setting with mallopt(M_GRANULARITY, value).
1124*/
1125mspace create_mspace(size_t capacity, int locked);
1126
1127/*
1128 destroy_mspace destroys the given space, and attempts to return all
1129 of its memory back to the system, returning the total number of
1130 bytes freed. After destruction, the results of access to all memory
1131 used by the space become undefined.
1132*/
1133size_t destroy_mspace(mspace msp);
1134
1135/*
1136 create_mspace_with_base uses the memory supplied as the initial base
1137 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1138 space is used for bookkeeping, so the capacity must be at least this
1139 large. (Otherwise 0 is returned.) When this initial space is
1140 exhausted, additional memory will be obtained from the system.
1141 Destroying this space will deallocate all additionally allocated
1142 space (if possible) but not the initial base.
1143*/
1144mspace create_mspace_with_base(void* base, size_t capacity, int locked);
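/*
  A brief sketch (the static buffer and sizes are illustrative):

    static char arena[1 << 20];                     // 1MB of caller-supplied space
    mspace m = create_mspace_with_base(arena, sizeof(arena), 0);
    void* p = (m != 0) ? mspace_malloc(m, 256) : 0;
    // ... later: destroy_mspace(m); the arena itself is not freed
*/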
1145
1146/*
1147 mspace_malloc behaves as malloc, but operates within
1148 the given space.
1149*/
1150void* mspace_malloc(mspace msp, size_t bytes);
1151
1152/*
1153 mspace_free behaves as free, but operates within
1154 the given space.
1155
1156 If compiled with FOOTERS==1, mspace_free is not actually needed.
1157 free may be called instead of mspace_free because freed chunks from
1158 any space are handled by their originating spaces.
1159*/
1160void mspace_free(mspace msp, void* mem);
1161
1162/*
1163 mspace_realloc behaves as realloc, but operates within
1164 the given space.
1165
1166 If compiled with FOOTERS==1, mspace_realloc is not actually
1167 needed. realloc may be called instead of mspace_realloc because
1168 realloced chunks from any space are handled by their originating
1169 spaces.
1170*/
1171void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1172
1173#if ANDROID /* Added for Android, not part of dlmalloc as released */
1174/*
1175 mspace_merge_objects will merge allocated memory mema and memb
1176 together, provided memb immediately follows mema. It is roughly as
1177 if memb has been freed and mema has been realloced to a larger size.
1178 On successfully merging, mema will be returned. If either argument
1179 is null or memb does not immediately follow mema, null will be
1180 returned.
1181
1182 Both mema and memb should have been previously allocated using
1183 malloc or a related routine such as realloc. If either mema or memb
1184 was not malloced or was previously freed, the result is undefined,
1185 but like mspace_free, the default is to abort the program.
1186*/
1187void* mspace_merge_objects(mspace msp, void* mema, void* memb);
1188#endif
1189
1190/*
1191 mspace_calloc behaves as calloc, but operates within
1192 the given space.
1193*/
1194void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1195
1196/*
1197 mspace_memalign behaves as memalign, but operates within
1198 the given space.
1199*/
1200void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1201
1202/*
1203 mspace_independent_calloc behaves as independent_calloc, but
1204 operates within the given space.
1205*/
1206void** mspace_independent_calloc(mspace msp, size_t n_elements,
1207 size_t elem_size, void* chunks[]);
1208
1209/*
1210 mspace_independent_comalloc behaves as independent_comalloc, but
1211 operates within the given space.
1212*/
1213void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1214 size_t sizes[], void* chunks[]);
1215
1216/*
1217 mspace_footprint() returns the number of bytes obtained from the
1218 system for this space.
1219*/
1220size_t mspace_footprint(mspace msp);
1221
1222/*
1223 mspace_max_footprint() returns the peak number of bytes obtained from the
1224 system for this space.
1225*/
1226size_t mspace_max_footprint(mspace msp);
1227
1228
1229#if !NO_MALLINFO
1230/*
1231 mspace_mallinfo behaves as mallinfo, but reports properties of
1232 the given space.
1233*/
1234struct mallinfo mspace_mallinfo(mspace msp);
1235#endif /* NO_MALLINFO */
1236
1237/*
1238 mspace_malloc_stats behaves as malloc_stats, but reports
1239 properties of the given space.
1240*/
1241void mspace_malloc_stats(mspace msp);
1242
1243/*
1244 mspace_trim behaves as malloc_trim, but
1245 operates within the given space.
1246*/
1247int mspace_trim(mspace msp, size_t pad);
1248
1249/*
1250 An alias for mallopt.
1251*/
1252int mspace_mallopt(int, int);
1253
1254#endif /* MSPACES */
1255
1256#ifdef __cplusplus
1257} /* end of extern "C" */
1258#endif /* __cplusplus */
1259
1260/*
1261 ========================================================================
1262 To make a fully customizable malloc.h header file, cut everything
1263 above this line, put into file malloc.h, edit to suit, and #include it
1264 on the next line, as well as in programs that use this malloc.
1265 ========================================================================
1266*/
1267
1268/* #include "malloc.h" */
1269
1270/*------------------------------ internal #includes ---------------------- */
1271
1272#ifdef WIN32
1273#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1274#endif /* WIN32 */
1275
1276#include <stdio.h> /* for printing in malloc_stats */
1277
1278#ifndef LACKS_ERRNO_H
1279#include <errno.h> /* for MALLOC_FAILURE_ACTION */
1280#endif /* LACKS_ERRNO_H */
1281#if FOOTERS
1282#include <time.h> /* for magic initialization */
1283#endif /* FOOTERS */
1284#ifndef LACKS_STDLIB_H
1285#include <stdlib.h> /* for abort() */
1286#endif /* LACKS_STDLIB_H */
1287#ifdef DEBUG
1288#if ABORT_ON_ASSERT_FAILURE
1289#define assert(x) if(!(x)) ABORT
1290#else /* ABORT_ON_ASSERT_FAILURE */
1291#include <assert.h>
1292#endif /* ABORT_ON_ASSERT_FAILURE */
1293#else /* DEBUG */
1294#define assert(x)
1295#endif /* DEBUG */
1296#ifndef LACKS_STRING_H
1297#include <string.h> /* for memset etc */
1298#endif /* LACKS_STRING_H */
1299#if USE_BUILTIN_FFS
1300#ifndef LACKS_STRINGS_H
1301#include <strings.h> /* for ffs */
1302#endif /* LACKS_STRINGS_H */
1303#endif /* USE_BUILTIN_FFS */
1304#if HAVE_MMAP
1305#ifndef LACKS_SYS_MMAN_H
1306#include <sys/mman.h> /* for mmap */
1307#endif /* LACKS_SYS_MMAN_H */
1308#ifndef LACKS_FCNTL_H
1309#include <fcntl.h>
1310#endif /* LACKS_FCNTL_H */
1311#endif /* HAVE_MMAP */
1312#if HAVE_MORECORE
1313#ifndef LACKS_UNISTD_H
1314#include <unistd.h> /* for sbrk */
1315#else /* LACKS_UNISTD_H */
1316#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1317extern void* sbrk(ptrdiff_t);
1318#endif /* FreeBSD etc */
1319#endif /* LACKS_UNISTD_H */
1320#endif /* HAVE_MORECORE */
1321
1322#ifndef WIN32
1323#ifndef malloc_getpagesize
1324# ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1325# ifndef _SC_PAGE_SIZE
1326# define _SC_PAGE_SIZE _SC_PAGESIZE
1327# endif
1328# endif
1329# ifdef _SC_PAGE_SIZE
1330# define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1331# else
1332# if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1333 extern size_t getpagesize();
1334# define malloc_getpagesize getpagesize()
1335# else
1336# ifdef WIN32 /* use supplied emulation of getpagesize */
1337# define malloc_getpagesize getpagesize()
1338# else
1339# ifndef LACKS_SYS_PARAM_H
1340# include <sys/param.h>
1341# endif
1342# ifdef EXEC_PAGESIZE
1343# define malloc_getpagesize EXEC_PAGESIZE
1344# else
1345# ifdef NBPG
1346# ifndef CLSIZE
1347# define malloc_getpagesize NBPG
1348# else
1349# define malloc_getpagesize (NBPG * CLSIZE)
1350# endif
1351# else
1352# ifdef NBPC
1353# define malloc_getpagesize NBPC
1354# else
1355# ifdef PAGESIZE
1356# define malloc_getpagesize PAGESIZE
1357# else /* just guess */
1358# define malloc_getpagesize ((size_t)4096U)
1359# endif
1360# endif
1361# endif
1362# endif
1363# endif
1364# endif
1365# endif
1366#endif
1367#endif
1368
1369/* ------------------- size_t and alignment properties -------------------- */
1370
1371/* The byte and bit size of a size_t */
1372#define SIZE_T_SIZE (sizeof(size_t))
1373#define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1374
1375/* Some constants coerced to size_t */
1376/* Annoying but necessary to avoid errors on some platforms */
1377#define SIZE_T_ZERO ((size_t)0)
1378#define SIZE_T_ONE ((size_t)1)
1379#define SIZE_T_TWO ((size_t)2)
1380#define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1381#define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1382#define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1383#define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1384
1385/* The bit mask value corresponding to MALLOC_ALIGNMENT */
1386#define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1387
1388/* True if address a has acceptable alignment */
1389#define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1390
1391/* the number of bytes to offset an address to align it */
1392#define align_offset(A)\
1393 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1394 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
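/*
  Worked illustration (commentary only), assuming MALLOC_ALIGNMENT == 8 so
  that CHUNK_ALIGN_MASK == 7:

    A == 0x1000:  (A & 7) == 0                ->  align_offset(A) == 0
    A == 0x100c:  (A & 7) == 4, (8 - 4) & 7   ->  align_offset(A) == 4
    A == 0x100f:  (A & 7) == 7, (8 - 7) & 7   ->  align_offset(A) == 1

  That is, align_offset(A) is the number of bytes to add to A to reach the
  next MALLOC_ALIGNMENT boundary, or 0 if A is already aligned.
*/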
1395
1396/* -------------------------- MMAP preliminaries ------------------------- */
1397
1398/*
1399 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1400 checks to fail so compiler optimizer can delete code rather than
1401 using so many "#if"s.
1402*/
1403
1404
1405/* MORECORE and MMAP must return MFAIL on failure */
1406#define MFAIL ((void*)(MAX_SIZE_T))
1407#define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1408
1409#if !HAVE_MMAP
1410#define IS_MMAPPED_BIT (SIZE_T_ZERO)
1411#define USE_MMAP_BIT (SIZE_T_ZERO)
1412#define CALL_MMAP(s) MFAIL
1413#define CALL_MUNMAP(a, s) (-1)
1414#define DIRECT_MMAP(s) MFAIL
1415
1416#else /* HAVE_MMAP */
1417#define IS_MMAPPED_BIT (SIZE_T_ONE)
1418#define USE_MMAP_BIT (SIZE_T_ONE)
1419
1420#ifndef WIN32
1421#define CALL_MUNMAP(a, s) munmap((a), (s))
1422#define MMAP_PROT (PROT_READ|PROT_WRITE)
1423#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1424#define MAP_ANONYMOUS MAP_ANON
1425#endif /* MAP_ANON */
1426#ifdef MAP_ANONYMOUS
1427#define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1428#define CALL_MMAP(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1429#else /* MAP_ANONYMOUS */
1430/*
1431 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1432 is unlikely to be needed, but is supplied just in case.
1433*/
1434#define MMAP_FLAGS (MAP_PRIVATE)
1435static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1436#define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
1437 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1438 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1439 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1440#endif /* MAP_ANONYMOUS */
1441
1442#define DIRECT_MMAP(s) CALL_MMAP(s)
1443#else /* WIN32 */
1444
1445/* Win32 MMAP via VirtualAlloc */
1446static void* win32mmap(size_t size) {
1447 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1448 return (ptr != 0)? ptr: MFAIL;
1449}
1450
1451/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1452static void* win32direct_mmap(size_t size) {
1453 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1454 PAGE_READWRITE);
1455 return (ptr != 0)? ptr: MFAIL;
1456}
1457
1458/* This function supports releasing coalesced segments */
1459static int win32munmap(void* ptr, size_t size) {
1460 MEMORY_BASIC_INFORMATION minfo;
1461 char* cptr = ptr;
1462 while (size) {
1463 if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1464 return -1;
1465 if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1466 minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1467 return -1;
1468 if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1469 return -1;
1470 cptr += minfo.RegionSize;
1471 size -= minfo.RegionSize;
1472 }
1473 return 0;
1474}
1475
1476#define CALL_MMAP(s) win32mmap(s)
1477#define CALL_MUNMAP(a, s) win32munmap((a), (s))
1478#define DIRECT_MMAP(s) win32direct_mmap(s)
1479#endif /* WIN32 */
1480#endif /* HAVE_MMAP */
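/*
  For reference, a standalone sketch of what CALL_MMAP/CALL_MUNMAP amount
  to on a typical POSIX system with MAP_ANONYMOUS (illustrative only; the
  allocator itself always goes through the macros above):

    #include <sys/mman.h>

    static void* example_mmap(size_t s) {
      void* p = mmap(0, s, PROT_READ|PROT_WRITE,
                     MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
      return (p == MAP_FAILED)? MFAIL : p;  // callers expect MFAIL on failure
    }

    static int example_munmap(void* p, size_t s) {
      return munmap(p, s);                  // 0 on success, -1 on failure
    }
*/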
1481
1482#if HAVE_MMAP && HAVE_MREMAP
1483#define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1484#else /* HAVE_MMAP && HAVE_MREMAP */
1485#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1486#endif /* HAVE_MMAP && HAVE_MREMAP */
1487
1488#if HAVE_MORECORE
1489#define CALL_MORECORE(S) MORECORE(S)
1490#else /* HAVE_MORECORE */
1491#define CALL_MORECORE(S) MFAIL
1492#endif /* HAVE_MORECORE */
1493
1494/* mstate bit set if contiguous morecore disabled or failed */
1495#define USE_NONCONTIGUOUS_BIT (4U)
1496
1497/* segment bit set in create_mspace_with_base */
1498#define EXTERN_BIT (8U)
1499
1500
1501/* --------------------------- Lock preliminaries ------------------------ */
1502
1503#if USE_LOCKS
1504
1505/*
1506 When locks are defined, there are up to two global locks:
1507
1508 * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
1509 MORECORE. In many cases sys_alloc requires two calls, that should
1510 not be interleaved with calls by other threads. This does not
1511 protect against direct calls to MORECORE by other threads not
1512 using this lock, so there is still code to cope as best we can with
1513 interference.
1514
1515 * magic_init_mutex ensures that mparams.magic and other
1516 unique mparams values are initialized only once.
1517*/
1518
1519#ifndef WIN32
1520/* By default use posix locks */
1521#include <pthread.h>
1522#define MLOCK_T pthread_mutex_t
1523#define INITIAL_LOCK(l) pthread_mutex_init(l, NULL)
1524#define ACQUIRE_LOCK(l) pthread_mutex_lock(l)
1525#define RELEASE_LOCK(l) pthread_mutex_unlock(l)
1526
1527#if HAVE_MORECORE
1528static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
1529#endif /* HAVE_MORECORE */
1530
1531static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
1532
1533#else /* WIN32 */
1534/*
1535 Because lock-protected regions have bounded times, and there
1536 are no recursive lock calls, we can use simple spinlocks.
1537*/
1538
1539#define MLOCK_T long
1540static int win32_acquire_lock (MLOCK_T *sl) {
1541 for (;;) {
1542#ifdef InterlockedCompareExchangePointer
1543 if (!InterlockedCompareExchange(sl, 1, 0))
1544 return 0;
1545#else /* Use older void* version */
1546 if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
1547 return 0;
1548#endif /* InterlockedCompareExchangePointer */
1549 Sleep (0);
1550 }
1551}
1552
1553static void win32_release_lock (MLOCK_T *sl) {
1554 InterlockedExchange (sl, 0);
1555}
1556
1557#define INITIAL_LOCK(l) *(l)=0
1558#define ACQUIRE_LOCK(l) win32_acquire_lock(l)
1559#define RELEASE_LOCK(l) win32_release_lock(l)
1560#if HAVE_MORECORE
1561static MLOCK_T morecore_mutex;
1562#endif /* HAVE_MORECORE */
1563static MLOCK_T magic_init_mutex;
1564#endif /* WIN32 */
1565
1566#define USE_LOCK_BIT (2U)
1567#else /* USE_LOCKS */
1568#define USE_LOCK_BIT (0U)
1569#define INITIAL_LOCK(l)
1570#endif /* USE_LOCKS */
1571
1572#if USE_LOCKS && HAVE_MORECORE
1573#define ACQUIRE_MORECORE_LOCK() ACQUIRE_LOCK(&morecore_mutex);
1574#define RELEASE_MORECORE_LOCK() RELEASE_LOCK(&morecore_mutex);
1575#else /* USE_LOCKS && HAVE_MORECORE */
1576#define ACQUIRE_MORECORE_LOCK()
1577#define RELEASE_MORECORE_LOCK()
1578#endif /* USE_LOCKS && HAVE_MORECORE */
1579
1580#if USE_LOCKS
1581#define ACQUIRE_MAGIC_INIT_LOCK() ACQUIRE_LOCK(&magic_init_mutex);
1582#define RELEASE_MAGIC_INIT_LOCK() RELEASE_LOCK(&magic_init_mutex);
1583#else /* USE_LOCKS */
1584#define ACQUIRE_MAGIC_INIT_LOCK()
1585#define RELEASE_MAGIC_INIT_LOCK()
1586#endif /* USE_LOCKS */
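/*
  Schematic of how the morecore lock is meant to be used (illustrative
  only; sys_alloc later in this file follows this shape, and asize here is
  just a placeholder request size):

    ACQUIRE_MORECORE_LOCK();
    char* br  = (char*)CALL_MORECORE(asize);  // extend the break
    char* end = (char*)CALL_MORECORE(0);      // probe the current break
    RELEASE_MORECORE_LOCK();

  When USE_LOCKS (or HAVE_MORECORE) is off, both macros expand to nothing
  and the sequence reduces to the bare MORECORE calls.
*/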
1587
1588
1589/* ----------------------- Chunk representations ------------------------ */
1590
1591/*
1592 (The following includes lightly edited explanations by Colin Plumb.)
1593
1594 The malloc_chunk declaration below is misleading (but accurate and
1595 necessary). It declares a "view" into memory allowing access to
1596 necessary fields at known offsets from a given base.
1597
1598 Chunks of memory are maintained using a `boundary tag' method as
1599 originally described by Knuth. (See the paper by Paul Wilson
1600 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1601 techniques.) Sizes of free chunks are stored both in the front of
1602 each chunk and at the end. This makes consolidating fragmented
1603 chunks into bigger chunks fast. The head fields also hold bits
1604 representing whether chunks are free or in use.
1605
1606 Here are some pictures to make it clearer. They are "exploded" to
1607 show that the state of a chunk can be thought of as extending from
1608 the high 31 bits of the head field of its header through the
1609 prev_foot and PINUSE_BIT bit of the following chunk header.
1610
1611 A chunk that's in use looks like:
1612
1613 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1614 | Size of previous chunk (if P = 1) |
1615 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1616 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1617 | Size of this chunk 1| +-+
1618 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1619 | |
1620 +- -+
1621 | |
1622 +- -+
1623 | :
1624 +- size - sizeof(size_t) available payload bytes -+
1625 : |
1626 chunk-> +- -+
1627 | |
1628 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1629 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
1630 | Size of next chunk (may or may not be in use) | +-+
1631 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1632
1633 And if it's free, it looks like this:
1634
1635 chunk-> +- -+
1636 | User payload (must be in use, or we would have merged!) |
1637 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1638 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1639 | Size of this chunk 0| +-+
1640 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1641 | Next pointer |
1642 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1643 | Prev pointer |
1644 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1645 | :
1646 +- size - sizeof(struct chunk) unused bytes -+
1647 : |
1648 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1649 | Size of this chunk |
1650 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1651 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
1652 | Size of next chunk (must be in use, or we would have merged)| +-+
1653 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1654 | :
1655 +- User payload -+
1656 : |
1657 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1658 |0|
1659 +-+
1660 Note that since we always merge adjacent free chunks, the chunks
1661 adjacent to a free chunk must be in use.
1662
1663 Given a pointer to a chunk (which can be derived trivially from the
1664 payload pointer) we can, in O(1) time, find out whether the adjacent
1665 chunks are free, and if so, unlink them from the lists that they
1666 are on and merge them with the current chunk.
1667
1668 Chunks always begin on even word boundaries, so the mem portion
1669 (which is returned to the user) is also on an even word boundary, and
1670 thus at least double-word aligned.
1671
1672 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
1673 chunk size (which is always a multiple of two words), is an in-use
1674 bit for the *previous* chunk. If that bit is *clear*, then the
1675 word before the current chunk size contains the previous chunk
1676 size, and can be used to find the front of the previous chunk.
1677 The very first chunk allocated always has this bit set, preventing
1678 access to non-existent (or non-owned) memory. If pinuse is set for
1679 any given chunk, then you CANNOT determine the size of the
1680 previous chunk, and might even get a memory addressing fault when
1681 trying to do so.
1682
1683 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
1684 the chunk size redundantly records whether the current chunk is
1685 inuse. This redundancy enables usage checks within free and realloc,
1686 and reduces indirection when freeing and consolidating chunks.
1687
1688 Each freshly allocated chunk must have both cinuse and pinuse set.
1689 That is, each allocated chunk borders either a previously allocated
1690 and still in-use chunk, or the base of its memory arena. This is
1691 ensured by making all allocations from the `lowest' part of any
1692 found chunk. Further, no free chunk physically borders another one,
1693 so each free chunk is known to be preceded and followed by either
1694 inuse chunks or the ends of memory.
1695
1696 Note that the `foot' of the current chunk is actually represented
1697 as the prev_foot of the NEXT chunk. This makes it easier to
1698 deal with alignments etc but can be very confusing when trying
1699 to extend or adapt this code.
1700
1701 The exceptions to all this are
1702
1703 1. The special chunk `top' is the top-most available chunk (i.e.,
1704 the one bordering the end of available memory). It is treated
1705 specially. Top is never included in any bin, is used only if
1706 no other chunk is available, and is released back to the
1707 system if it is very large (see M_TRIM_THRESHOLD). In effect,
1708 the top chunk is treated as larger (and thus less well
1709 fitting) than any other available chunk. The top chunk
1710 doesn't update its trailing size field since there is no next
1711 contiguous chunk that would have to index off it. However,
1712 space is still allocated for it (TOP_FOOT_SIZE) to enable
1713 separation or merging when space is extended.
1714
1715 2. Chunks allocated via mmap, which have the lowest-order bit
1716 (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
1717 PINUSE_BIT in their head fields. Because they are allocated
1718 one-by-one, each must carry its own prev_foot field, which is
1719 also used to hold the offset this chunk has within its mmapped
1720 region, which is needed to preserve alignment. Each mmapped
1721 chunk is trailed by the first two fields of a fake next-chunk
1722 for sake of usage checks.
1723
1724*/
1725
1726struct malloc_chunk {
1727 size_t prev_foot; /* Size of previous chunk (if free). */
1728 size_t head; /* Size and inuse bits. */
1729 struct malloc_chunk* fd; /* double links -- used only if free. */
1730 struct malloc_chunk* bk;
1731};
1732
1733typedef struct malloc_chunk mchunk;
1734typedef struct malloc_chunk* mchunkptr;
1735typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
1736typedef unsigned int bindex_t; /* Described below */
1737typedef unsigned int binmap_t; /* Described below */
1738typedef unsigned int flag_t; /* The type of various bit flag sets */
1739
1740/* ------------------- Chunks sizes and alignments ----------------------- */
1741
1742#define MCHUNK_SIZE (sizeof(mchunk))
1743
1744#if FOOTERS
1745#define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1746#else /* FOOTERS */
1747#define CHUNK_OVERHEAD (SIZE_T_SIZE)
1748#endif /* FOOTERS */
1749
1750/* MMapped chunks need a second word of overhead ... */
1751#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1752/* ... and additional padding for fake next-chunk at foot */
1753#define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
1754
1755/* The smallest size we can malloc is an aligned minimal chunk */
1756#define MIN_CHUNK_SIZE\
1757 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1758
1759/* conversion from malloc headers to user pointers, and back */
1760#define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
1761#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
1762/* chunk associated with aligned address A */
1763#define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
1764
1765/* Bounds on request (not chunk) sizes. */
1766#define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
1767#define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1768
1769/* pad request bytes into a usable size */
1770#define pad_request(req) \
1771 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1772
1773/* pad request, checking for minimum (but not maximum) */
1774#define request2size(req) \
1775 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
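/*
  Worked example (commentary only), assuming a 32-bit build with
  SIZE_T_SIZE == 4, MALLOC_ALIGNMENT == 8 and FOOTERS disabled, so that
  CHUNK_OVERHEAD == 4, MIN_CHUNK_SIZE == 16 and MIN_REQUEST == 11:

    request2size(1)  == 16                    (rounded up to MIN_CHUNK_SIZE)
    request2size(13) == (13 + 4 + 7) & ~7 == 24
    request2size(24) == (24 + 4 + 7) & ~7 == 32

  Every resulting chunk size is a multiple of MALLOC_ALIGNMENT and leaves
  room for the size/status head word.
*/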
1776
1777
1778/* ------------------ Operations on head and foot fields ----------------- */
1779
1780/*
1781 The head field of a chunk is or'ed with PINUSE_BIT when the previous
1782 adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in
1783 use. If the chunk was obtained with mmap, the prev_foot field has
1784 IS_MMAPPED_BIT set and also holds the offset of the base of this
1785 chunk from the base of its mmapped region.
1786*/
1787
1788#define PINUSE_BIT (SIZE_T_ONE)
1789#define CINUSE_BIT (SIZE_T_TWO)
1790#define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
1791
1792/* Head value for fenceposts */
1793#define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
1794
1795/* extraction of fields from head words */
1796#define cinuse(p) ((p)->head & CINUSE_BIT)
1797#define pinuse(p) ((p)->head & PINUSE_BIT)
1798#define chunksize(p) ((p)->head & ~(INUSE_BITS))
1799
1800#define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
1801#define clear_cinuse(p) ((p)->head &= ~CINUSE_BIT)
1802
1803/* Treat space at ptr +/- offset as a chunk */
1804#define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1805#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1806
1807/* Ptr to next or previous physical malloc_chunk. */
1808#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1809#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1810
1811/* extract next chunk's pinuse bit */
1812#define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
1813
1814/* Get/set size at footer */
1815#define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1816#define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1817
1818/* Set size, pinuse bit, and foot */
1819#define set_size_and_pinuse_of_free_chunk(p, s)\
1820 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1821
1822/* Set size, pinuse bit, foot, and clear next pinuse */
1823#define set_free_with_pinuse(p, s, n)\
1824 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1825
1826#define is_mmapped(p)\
1827 (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1828
1829/* Get the internal overhead associated with chunk p */
1830#define overhead_for(p)\
1831 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1832
1833/* Return true if malloced space is not necessarily cleared */
1834#if MMAP_CLEARS
1835#define calloc_must_clear(p) (!is_mmapped(p))
1836#else /* MMAP_CLEARS */
1837#define calloc_must_clear(p) (1)
1838#endif /* MMAP_CLEARS */
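/*
  A small self-contained sketch of how a head word is decoded; it simply
  restates the chunksize/cinuse/pinuse macros above as a function for
  illustration and is not used by the allocator:

    static void example_decode_head(mchunkptr p) {
      size_t sz  = p->head & ~INUSE_BITS;        // chunk size, flag bits masked
      int    cin = (p->head & CINUSE_BIT) != 0;  // this chunk in use?
      int    pin = (p->head & PINUSE_BIT) != 0;  // previous chunk in use?
      (void)sz; (void)cin; (void)pin;
    }
*/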
1839
1840/* ---------------------- Overlaid data structures ----------------------- */
1841
1842/*
1843 When chunks are not in use, they are treated as nodes of either
1844 lists or trees.
1845
1846 "Small" chunks are stored in circular doubly-linked lists, and look
1847 like this:
1848
1849 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1850 | Size of previous chunk |
1851 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1852 `head:' | Size of chunk, in bytes |P|
1853 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1854 | Forward pointer to next chunk in list |
1855 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1856 | Back pointer to previous chunk in list |
1857 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1858 | Unused space (may be 0 bytes long) .
1859 . .
1860 . |
1861nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1862 `foot:' | Size of chunk, in bytes |
1863 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1864
1865 Larger chunks are kept in a form of bitwise digital trees (aka
1866 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
1867 free chunks greater than 256 bytes, their size doesn't impose any
1868 constraints on user chunk sizes. Each node looks like:
1869
1870 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1871 | Size of previous chunk |
1872 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1873 `head:' | Size of chunk, in bytes |P|
1874 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1875 | Forward pointer to next chunk of same size |
1876 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1877 | Back pointer to previous chunk of same size |
1878 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1879 | Pointer to left child (child[0]) |
1880 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1881 | Pointer to right child (child[1]) |
1882 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1883 | Pointer to parent |
1884 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1885 | bin index of this chunk |
1886 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1887 | Unused space .
1888 . |
1889nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1890 `foot:' | Size of chunk, in bytes |
1891 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1892
1893 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
1894 of the same size are arranged in a circularly-linked list, with only
1895 the oldest chunk (the next to be used, in our FIFO ordering)
1896 actually in the tree. (Tree members are distinguished by a non-null
1897 parent pointer.) If a chunk with the same size as an existing node
1898 is inserted, it is linked off the existing node using pointers that
1899 work in the same way as fd/bk pointers of small chunks.
1900
1901 Each tree contains a power of 2 sized range of chunk sizes (the
1902 smallest is 0x100 <= x < 0x180), which is divided in half at each
1903 tree level, with the chunks in the smaller half of the range (0x100
1904 <= x < 0x140 for the top node) in the left subtree and the larger
1905 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
1906 done by inspecting individual bits.
1907
1908 Using these rules, each node's left subtree contains all smaller
1909 sizes than its right subtree. However, the node at the root of each
1910 subtree has no particular ordering relationship to either. (The
1911 dividing line between the subtree sizes is based on trie relation.)
1912 If we remove the last chunk of a given size from the interior of the
1913 tree, we need to replace it with a leaf node. The tree ordering
1914 rules permit a node to be replaced by any leaf below it.
1915
1916 The smallest chunk in a tree (a common operation in a best-fit
1917 allocator) can be found by walking a path to the leftmost leaf in
1918 the tree. Unlike a usual binary tree, where we follow left child
1919 pointers until we reach a null, here we follow the right child
1920 pointer any time the left one is null, until we reach a leaf with
1921 both child pointers null. The smallest chunk in the tree will be
1922 somewhere along that path.
1923
1924 The worst case number of steps to add, find, or remove a node is
1925 bounded by the number of bits differentiating chunks within
1926 bins. Under current bin calculations, this ranges from 6 up to 21
1927 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1928 is of course much better.
1929*/
1930
1931struct malloc_tree_chunk {
1932 /* The first four fields must be compatible with malloc_chunk */
1933 size_t prev_foot;
1934 size_t head;
1935 struct malloc_tree_chunk* fd;
1936 struct malloc_tree_chunk* bk;
1937
1938 struct malloc_tree_chunk* child[2];
1939 struct malloc_tree_chunk* parent;
1940 bindex_t index;
1941};
1942
1943typedef struct malloc_tree_chunk tchunk;
1944typedef struct malloc_tree_chunk* tchunkptr;
1945typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1946
1947/* A little helper macro for trees */
1948#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
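/*
  A sketch of the "leftmost leaf" walk described above (illustrative only;
  the tree-based malloc paths later in this file inline equivalent logic):

    static tchunkptr example_leftmost_leaf(tchunkptr t) {
      tchunkptr next;
      while ((next = leftmost_child(t)) != 0)  // prefer child[0], else child[1]
        t = next;
      return t;   // a leaf: both children null, usable as a replacement node
    }
*/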
1949
1950/* ----------------------------- Segments -------------------------------- */
1951
1952/*
1953 Each malloc space may include non-contiguous segments, held in a
1954 list headed by an embedded malloc_segment record representing the
1955 top-most space. Segments also include flags holding properties of
1956 the space. Large chunks that are directly allocated by mmap are not
1957 included in this list. They are instead independently created and
1958 destroyed without otherwise keeping track of them.
1959
1960 Segment management mainly comes into play for spaces allocated by
1961 MMAP. Any call to MMAP might or might not return memory that is
1962 adjacent to an existing segment. MORECORE normally contiguously
1963 extends the current space, so this space is almost always adjacent,
1964 which is simpler and faster to deal with. (This is why MORECORE is
1965 used preferentially to MMAP when both are available -- see
1966 sys_alloc.) When allocating using MMAP, we don't use any of the
1967 hinting mechanisms (inconsistently) supported in various
1968 implementations of unix mmap, or distinguish reserving from
1969 committing memory. Instead, we just ask for space, and exploit
1970 contiguity when we get it. It is probably possible to do
1971 better than this on some systems, but no general scheme seems
1972 to be significantly better.
1973
1974 Management entails a simpler variant of the consolidation scheme
1975 used for chunks to reduce fragmentation -- new adjacent memory is
1976 normally prepended or appended to an existing segment. However,
1977 there are limitations compared to chunk consolidation that mostly
1978 reflect the fact that segment processing is relatively infrequent
1979 (occurring only when getting memory from system) and that we
1980 don't expect to have huge numbers of segments:
1981
1982 * Segments are not indexed, so traversal requires linear scans. (It
1983 would be possible to index these, but is not worth the extra
1984 overhead and complexity for most programs on most platforms.)
1985 * New segments are only appended to old ones when holding top-most
1986 memory; if they cannot be prepended to others, they are held in
1987 different segments.
1988
1989 Except for the top-most segment of an mstate, each segment record
1990 is kept at the tail of its segment. Segments are added by pushing
1991 segment records onto the list headed by &mstate.seg for the
1992 containing mstate.
1993
1994 Segment flags control allocation/merge/deallocation policies:
1995 * If EXTERN_BIT set, then we did not allocate this segment,
1996 and so should not try to deallocate or merge with others.
1997 (This currently holds only for the initial segment passed
1998 into create_mspace_with_base.)
1999 * If IS_MMAPPED_BIT set, the segment may be merged with
2000 other surrounding mmapped segments and trimmed/de-allocated
2001 using munmap.
2002 * If neither bit is set, then the segment was obtained using
2003 MORECORE so can be merged with surrounding MORECORE'd segments
2004 and deallocated/trimmed using MORECORE with negative arguments.
2005*/
2006
2007struct malloc_segment {
2008 char* base; /* base address */
2009 size_t size; /* allocated size */
2010 struct malloc_segment* next; /* ptr to next segment */
2011 flag_t sflags; /* mmap and extern flag */
2012};
2013
2014#define is_mmapped_segment(S) ((S)->sflags & IS_MMAPPED_BIT)
2015#define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
2016
2017typedef struct malloc_segment msegment;
2018typedef struct malloc_segment* msegmentptr;
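/*
  Because segments are kept in a plain singly-linked list, traversal is a
  linear scan. For illustration only (the allocator tracks this total in
  its footprint field rather than recomputing it):

    static size_t example_sum_segment_sizes(msegmentptr s) {
      size_t total = 0;
      for (; s != 0; s = s->next)
        total += s->size;
      return total;
    }
*/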
2019
2020/* ---------------------------- malloc_state ----------------------------- */
2021
2022/*
2023 A malloc_state holds all of the bookkeeping for a space.
2024 The main fields are:
2025
2026 Top
2027 The topmost chunk of the currently active segment. Its size is
2028 cached in topsize. The actual size of topmost space is
2029 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2030 fenceposts and segment records if necessary when getting more
2031 space from the system. The size at which to autotrim top is
2032 cached from mparams in trim_check, except that it is disabled if
2033 an autotrim fails.
2034
2035 Designated victim (dv)
2036 This is the preferred chunk for servicing small requests that
2037 don't have exact fits. It is normally the chunk split off most
2038 recently to service another small request. Its size is cached in
2039 dvsize. The link fields of this chunk are not maintained since it
2040 is not kept in a bin.
2041
2042 SmallBins
2043 An array of bin headers for free chunks. These bins hold chunks
2044 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2045 chunks of all the same size, spaced 8 bytes apart. To simplify
2046 use in double-linked lists, each bin header acts as a malloc_chunk
2047 pointing to the real first node, if it exists (else pointing to
2048 itself). This avoids special-casing for headers. But to avoid
2049 waste, we allocate only the fd/bk pointers of bins, and then use
2050 repositioning tricks to treat these as the fields of a chunk.
2051
2052 TreeBins
2053 Treebins are pointers to the roots of trees holding a range of
2054 sizes. There are 2 equally spaced treebins for each power of two
2055 from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
2056 larger.
2057
2058 Bin maps
2059 There is one bit map for small bins ("smallmap") and one for
2060 treebins ("treemap). Each bin sets its bit when non-empty, and
2061 clears the bit when empty. Bit operations are then used to avoid
2062 bin-by-bin searching -- nearly all "search" is done without ever
2063 looking at bins that won't be selected. The bit maps
2064 conservatively use 32 bits per map word, even on a 64-bit system.
2065 For a good description of some of the bit-based techniques used
2066 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2067 supplement at http://hackersdelight.org/). Many of these are
2068 intended to reduce the branchiness of paths through malloc etc, as
2069 well as to reduce the number of memory locations read or written.
2070
2071 Segments
2072 A list of segments headed by an embedded malloc_segment record
2073 representing the initial space.
2074
2075 Address check support
2076 The least_addr field is the least address ever obtained from
2077 MORECORE or MMAP. Attempted frees and reallocs of any address less
2078 than this are trapped (unless INSECURE is defined).
2079
2080 Magic tag
2081 A cross-check field that should always hold the same value as mparams.magic.
2082
2083 Flags
2084 Bits recording whether to use MMAP, locks, or contiguous MORECORE
2085
2086 Statistics
2087 Each space keeps track of current and maximum system memory
2088 obtained via MORECORE or MMAP.
2089
2090 Locking
2091 If USE_LOCKS is defined, the "mutex" lock is acquired and released
2092 around every public call using this mspace.
2093*/
2094
2095/* Bin types, widths and sizes */
2096#define NSMALLBINS (32U)
2097#define NTREEBINS (32U)
2098#define SMALLBIN_SHIFT (3U)
2099#define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
2100#define TREEBIN_SHIFT (8U)
2101#define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
2102#define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
2103#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
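/*
  With the constants above (and assuming a 32-bit build with 8-byte
  alignment and no footers): smallbin i holds free chunks of exactly
  i << SMALLBIN_SHIFT bytes, so the usable bins cover sizes 16, 24, ...,
  248; chunks of MIN_LARGE_SIZE (256) bytes or more go to a treebin; and
  MAX_SMALL_REQUEST == 255 - 7 - 4 == 244 is the largest user request
  that still pads to a smallbin size.
*/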
2104
2105struct malloc_state {
2106 binmap_t smallmap;
2107 binmap_t treemap;
2108 size_t dvsize;
2109 size_t topsize;
2110 char* least_addr;
2111 mchunkptr dv;
2112 mchunkptr top;
2113 size_t trim_check;
2114 size_t magic;
2115 mchunkptr smallbins[(NSMALLBINS+1)*2];
2116 tbinptr treebins[NTREEBINS];
2117 size_t footprint;
2118#if USE_MAX_ALLOWED_FOOTPRINT
2119 size_t max_allowed_footprint;
2120#endif
2121 size_t max_footprint;
2122 flag_t mflags;
2123#if USE_LOCKS
2124 MLOCK_T mutex; /* locate lock among fields that rarely change */
2125#endif /* USE_LOCKS */
2126 msegment seg;
2127};
2128
2129typedef struct malloc_state* mstate;
2130
2131/* ------------- Global malloc_state and malloc_params ------------------- */
2132
2133/*
2134 malloc_params holds global properties, including those that can be
2135 dynamically set using mallopt. There is a single instance, mparams,
2136 initialized in init_mparams.
2137*/
2138
2139struct malloc_params {
2140 size_t magic;
2141 size_t page_size;
2142 size_t granularity;
2143 size_t mmap_threshold;
2144 size_t trim_threshold;
2145 flag_t default_mflags;
2146};
2147
2148static struct malloc_params mparams;
2149
2150/* The global malloc_state used for all non-"mspace" calls */
2151static struct malloc_state _gm_
2152#if USE_MAX_ALLOWED_FOOTPRINT
2153 = { .max_allowed_footprint = MAX_SIZE_T };
2154#else
2155 ;
2156#endif
2157
2158#define gm (&_gm_)
2159#define is_global(M) ((M) == &_gm_)
2160#define is_initialized(M) ((M)->top != 0)
2161
2162/* -------------------------- system alloc setup ------------------------- */
2163
2164/* Operations on mflags */
2165
2166#define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2167#define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2168#define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2169
2170#define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2171#define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2172#define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2173
2174#define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2175#define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2176
2177#define set_lock(M,L)\
2178 ((M)->mflags = (L)?\
2179 ((M)->mflags | USE_LOCK_BIT) :\
2180 ((M)->mflags & ~USE_LOCK_BIT))
2181
2182/* page-align a size */
2183#define page_align(S)\
2184 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
2185
2186/* granularity-align a size */
2187#define granularity_align(S)\
2188 (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
2189
2190#define is_page_aligned(S)\
2191 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2192#define is_granularity_aligned(S)\
2193 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2194
2195/* True if segment S holds address A */
2196#define segment_holds(S, A)\
2197 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2198
2199/* Return segment holding given address */
2200static msegmentptr segment_holding(mstate m, char* addr) {
2201 msegmentptr sp = &m->seg;
2202 for (;;) {
2203 if (addr >= sp->base && addr < sp->base + sp->size)
2204 return sp;
2205 if ((sp = sp->next) == 0)
2206 return 0;
2207 }
2208}
2209
2210/* Return true if segment contains a segment link */
2211static int has_segment_link(mstate m, msegmentptr ss) {
2212 msegmentptr sp = &m->seg;
2213 for (;;) {
2214 if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2215 return 1;
2216 if ((sp = sp->next) == 0)
2217 return 0;
2218 }
2219}
2220
2221#ifndef MORECORE_CANNOT_TRIM
2222#define should_trim(M,s) ((s) > (M)->trim_check)
2223#else /* MORECORE_CANNOT_TRIM */
2224#define should_trim(M,s) (0)
2225#endif /* MORECORE_CANNOT_TRIM */
2226
2227/*
2228 TOP_FOOT_SIZE is padding at the end of a segment, including space
2229 that may be needed to place segment records and fenceposts when new
2230 noncontiguous segments are added.
2231*/
2232#define TOP_FOOT_SIZE\
2233 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2234
2235
2236/* ------------------------------- Hooks -------------------------------- */
2237
2238/*
2239 PREACTION should be defined to return 0 on success, and nonzero on
2240 failure. If you are not using locking, you can redefine these to do
2241 anything you like.
2242*/
2243
2244#if USE_LOCKS
2245
2246/* Ensure locks are initialized */
2247#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
2248
2249#define PREACTION(M) ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2250#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2251#else /* USE_LOCKS */
2252
2253#ifndef PREACTION
2254#define PREACTION(M) (0)
2255#endif /* PREACTION */
2256
2257#ifndef POSTACTION
2258#define POSTACTION(M)
2259#endif /* POSTACTION */
2260
2261#endif /* USE_LOCKS */
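/*
  Each public entry point later in this file brackets its work with these
  hooks; the canonical shape (shown only as a schematic) is:

    if (!PREACTION(m)) {    // 0 means the lock was taken (or locking is off)
      ... operate on malloc_state m ...
      POSTACTION(m);        // release the lock if one was taken
    }
*/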
2262
2263/*
2264 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2265 USAGE_ERROR_ACTION is triggered on detected bad frees and
2266 reallocs. The argument p is an address that might have triggered the
2267 fault. It is ignored by the two predefined actions, but might be
2268 useful in custom actions that try to help diagnose errors.
2269*/
2270
2271#if PROCEED_ON_ERROR
2272
2273/* A count of the number of corruption errors causing resets */
2274int malloc_corruption_error_count;
2275
2276/* default corruption action */
2277static void reset_on_error(mstate m);
2278
2279#define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2280#define USAGE_ERROR_ACTION(m, p)
2281
2282#else /* PROCEED_ON_ERROR */
2283
2284/* The following Android-specific code is used to print an informative
2285 * fatal error message to the log when heap corruption is detected.
2286 * We need to be careful about not using a log function
2287 * that may require an allocation here!
2288 */
2289#ifdef LOG_ON_HEAP_ERROR
2290
2291# include <private/logd.h>
2292
2293/* Convert a pointer into a hex string */
2294static void __bionic_itox(char* hex, void* ptr)
2295{
2296 intptr_t val = (intptr_t) ptr;
2297 /* Terminate with NULL */
2298 hex[8] = 0;
2299 int i;
2300
2301 for (i = 7; i >= 0; i--) {
2302 int digit = val & 15;
2303 hex[i] = (digit <= 9) ? digit + '0' : digit - 10 + 'a';
2304 val >>= 4;
2305 }
2306}
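/*
  For example (commentary only): __bionic_itox(buf, (void*)0xdeadbaad)
  writes the eight characters "deadbaad" plus a terminating zero into buf,
  so buf must provide at least 9 bytes, as hexbuffer[9] below does. The
  fixed width assumes 32-bit pointers.
*/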
2307
2308static void __bionic_heap_error(const char* msg, const char* function, void* p)
2309{
2310 /* We format the buffer explicitly, i.e. without using snprintf()
2311 * which may use malloc() internally. Not something we can trust
2312 * if we just detected a corrupted heap.
2313 */
2314 char buffer[256];
2315 strlcpy(buffer, "@@@ ABORTING: ", sizeof(buffer));
2316 strlcat(buffer, msg, sizeof(buffer));
2317 if (function != NULL) {
2318 strlcat(buffer, " IN ", sizeof(buffer));
2319 strlcat(buffer, function, sizeof(buffer));
2320 }
2321
2322 if (p != NULL) {
2323 char hexbuffer[9];
2324 __bionic_itox(hexbuffer, p);
2325 strlcat(buffer, " addr=0x", sizeof(buffer));
2326 strlcat(buffer, hexbuffer, sizeof(buffer));
2327 }
2328
2329 __libc_android_log_write(ANDROID_LOG_FATAL,"libc",buffer);
2330
2331 /* So that we can get a memory dump around p */
2332 *((int **) 0xdeadbaad) = (int *) p;
2333}
2334
2335# ifndef CORRUPTION_ERROR_ACTION
2336# define CORRUPTION_ERROR_ACTION(m,p) \
2337 __bionic_heap_error("HEAP MEMORY CORRUPTION", __FUNCTION__, p)
2338# endif
2339# ifndef USAGE_ERROR_ACTION
2340# define USAGE_ERROR_ACTION(m,p) \
2341 __bionic_heap_error("INVALID HEAP ADDRESS", __FUNCTION__, p)
2342# endif
2343
2344#else /* !LOG_ON_HEAP_ERROR */
2345
2346# ifndef CORRUPTION_ERROR_ACTION
2347# define CORRUPTION_ERROR_ACTION(m,p) ABORT
2348# endif /* CORRUPTION_ERROR_ACTION */
2349
2350# ifndef USAGE_ERROR_ACTION
2351# define USAGE_ERROR_ACTION(m,p) ABORT
2352# endif /* USAGE_ERROR_ACTION */
2353
2354#endif /* !LOG_ON_HEAP_ERROR */
2355
2356
2357#endif /* PROCEED_ON_ERROR */
2358
2359/* -------------------------- Debugging setup ---------------------------- */
2360
2361#if ! DEBUG
2362
2363#define check_free_chunk(M,P)
2364#define check_inuse_chunk(M,P)
2365#define check_malloced_chunk(M,P,N)
2366#define check_mmapped_chunk(M,P)
2367#define check_malloc_state(M)
2368#define check_top_chunk(M,P)
2369
2370#else /* DEBUG */
2371#define check_free_chunk(M,P) do_check_free_chunk(M,P)
2372#define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2373#define check_top_chunk(M,P) do_check_top_chunk(M,P)
2374#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2375#define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2376#define check_malloc_state(M) do_check_malloc_state(M)
2377
2378static void do_check_any_chunk(mstate m, mchunkptr p);
2379static void do_check_top_chunk(mstate m, mchunkptr p);
2380static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2381static void do_check_inuse_chunk(mstate m, mchunkptr p);
2382static void do_check_free_chunk(mstate m, mchunkptr p);
2383static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
2384static void do_check_tree(mstate m, tchunkptr t);
2385static void do_check_treebin(mstate m, bindex_t i);
2386static void do_check_smallbin(mstate m, bindex_t i);
2387static void do_check_malloc_state(mstate m);
2388static int bin_find(mstate m, mchunkptr x);
2389static size_t traverse_and_check(mstate m);
2390#endif /* DEBUG */
2391
2392/* ---------------------------- Indexing Bins ---------------------------- */
2393
2394#define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2395#define small_index(s) ((s) >> SMALLBIN_SHIFT)
2396#define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2397#define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
2398
2399/* addressing by index. See above about smallbin repositioning */
2400#define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2401#define treebin_at(M,i) (&((M)->treebins[i]))
2402
2403/* assign tree index for size S to variable I */
2404#if defined(__GNUC__) && defined(i386)
2405#define compute_tree_index(S, I)\
2406{\
2407 size_t X = S >> TREEBIN_SHIFT;\
2408 if (X == 0)\
2409 I = 0;\
2410 else if (X > 0xFFFF)\
2411 I = NTREEBINS-1;\
2412 else {\
2413 unsigned int K;\
2414 __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm" (X));\
2415 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2416 }\
2417}
2418#else /* GNUC */
2419#define compute_tree_index(S, I)\
2420{\
2421 size_t X = S >> TREEBIN_SHIFT;\
2422 if (X == 0)\
2423 I = 0;\
2424 else if (X > 0xFFFF)\
2425 I = NTREEBINS-1;\
2426 else {\
2427 unsigned int Y = (unsigned int)X;\
2428 unsigned int N = ((Y - 0x100) >> 16) & 8;\
2429 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2430 N += K;\
2431 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2432 K = 14 - N + ((Y <<= K) >> 15);\
2433 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2434 }\
2435}
2436#endif /* GNUC */
2437
2438/* Bit representing maximum resolved size in a treebin at i */
2439#define bit_for_tree_index(i) \
2440 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2441
2442/* Shift placing maximum resolved bit in a treebin at i as sign bit */
2443#define leftshift_for_tree_index(i) \
2444 ((i == NTREEBINS-1)? 0 : \
2445 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2446
2447/* The size of the smallest chunk held in bin with index i */
2448#define minsize_for_tree_index(i) \
2449 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2450 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
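/*
  Worked example (commentary only), with TREEBIN_SHIFT == 8: for S == 512,
  X = S >> 8 == 2, whose highest set bit gives K == 1, so
  I = (K << 1) + ((S >> (K + 7)) & 1) == 2. Consistently,
  minsize_for_tree_index(2) == 512 and minsize_for_tree_index(3) == 768,
  so treebin 2 holds chunks of 512..767 bytes, while sizes 384..511 land
  in treebin 1.
*/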
2451
2452
2453/* ------------------------ Operations on bin maps ----------------------- */
2454
2455/* bit corresponding to given index */
2456#define idx2bit(i) ((binmap_t)(1) << (i))
2457
2458/* Mark/Clear bits with given index */
2459#define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2460#define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2461#define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2462
2463#define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2464#define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2465#define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2466
2467/* index corresponding to given bit */
2468
2469#if defined(__GNUC__) && defined(i386)
2470#define compute_bit2idx(X, I)\
2471{\
2472 unsigned int J;\
2473 __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
2474 I = (bindex_t)J;\
2475}
2476
2477#else /* GNUC */
2478#if USE_BUILTIN_FFS
2479#define compute_bit2idx(X, I) I = ffs(X)-1
2480
2481#else /* USE_BUILTIN_FFS */
2482#define compute_bit2idx(X, I)\
2483{\
2484 unsigned int Y = X - 1;\
2485 unsigned int K = Y >> (16-4) & 16;\
2486 unsigned int N = K; Y >>= K;\
2487 N += K = Y >> (8-3) & 8; Y >>= K;\
2488 N += K = Y >> (4-2) & 4; Y >>= K;\
2489 N += K = Y >> (2-1) & 2; Y >>= K;\
2490 N += K = Y >> (1-0) & 1; Y >>= K;\
2491 I = (bindex_t)(N + Y);\
2492}
2493#endif /* USE_BUILTIN_FFS */
2494#endif /* GNUC */
2495
2496/* isolate the least set bit of a bitmap */
2497#define least_bit(x) ((x) & -(x))
2498
2499/* mask with all bits to left of least bit of x on */
2500#define left_bits(x) ((x<<1) | -(x<<1))
2501
2502/* mask with all bits to left of or equal to least bit of x on */
2503#define same_or_left_bits(x) ((x) | -(x))
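/*
  Worked example (commentary only) for a binmap x == 0x28 (bits 3 and 5
  set), on a 32-bit map word:

    least_bit(x)         == 0x00000008   (isolates bit 3)
    left_bits(x)         == 0xfffffff0   (all bits strictly above bit 3)
    same_or_left_bits(x) == 0xfffffff8   (bit 3 and everything above it)

  These masks narrow smallmap/treemap to the candidate bins at or above a
  requested size before compute_bit2idx selects the lowest one.
*/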
2504
2505
2506/* ----------------------- Runtime Check Support ------------------------- */
2507
2508/*
2509 For security, the main invariant is that malloc/free/etc never
2510 writes to a static address other than malloc_state, unless static
2511 malloc_state itself has been corrupted, which cannot occur via
2512 malloc (because of these checks). In essence this means that we
2513 believe all pointers, sizes, maps etc held in malloc_state, but
2514 check all of those linked or offsetted from other embedded data
2515 structures. These checks are interspersed with main code in a way
2516 that tends to minimize their run-time cost.
2517
2518 When FOOTERS is defined, in addition to range checking, we also
2519 verify footer fields of inuse chunks, which can be used to guarantee
2520 that the mstate controlling malloc/free is intact. This is a
2521 streamlined version of the approach described by William Robertson
2522 et al in "Run-time Detection of Heap-based Overflows" LISA'03
2523 http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2524 of an inuse chunk holds the xor of its mstate and a random seed,
2525 that is checked upon calls to free() and realloc(). This is
2526 (probabilistically) unguessable from outside the program, but can be
2527 computed by any code successfully malloc'ing any chunk, so does not
2528 itself provide protection against code that has already broken
2529 security through some other means. Unlike Robertson et al, we
2530 always dynamically check addresses of all offset chunks (previous,
2531 next, etc). This turns out to be cheaper than relying on hashes.
2532*/
2533
2534#if !INSECURE
2535/* Check if address a is at least as high as any from MORECORE or MMAP */
2536#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2537/* Check if address of next chunk n is higher than base chunk p */
2538#define ok_next(p, n) ((char*)(p) < (char*)(n))
2539/* Check if p has its cinuse bit on */
2540#define ok_cinuse(p) cinuse(p)
2541/* Check if p has its pinuse bit on */
2542#define ok_pinuse(p) pinuse(p)
2543
2544#else /* !INSECURE */
2545#define ok_address(M, a) (1)
2546#define ok_next(b, n) (1)
2547#define ok_cinuse(p) (1)
2548#define ok_pinuse(p) (1)
2549#endif /* !INSECURE */
2550
2551#if (FOOTERS && !INSECURE)
2552/* Check if (alleged) mstate m has expected magic field */
2553#define ok_magic(M) ((M)->magic == mparams.magic)
2554#else /* (FOOTERS && !INSECURE) */
2555#define ok_magic(M) (1)
2556#endif /* (FOOTERS && !INSECURE) */
2557
2558
2559/* In gcc, use __builtin_expect to minimize impact of checks */
2560#if !INSECURE
2561#if defined(__GNUC__) && __GNUC__ >= 3
2562#define RTCHECK(e) __builtin_expect(e, 1)
2563#else /* GNUC */
2564#define RTCHECK(e) (e)
2565#endif /* GNUC */
2566#else /* !INSECURE */
2567#define RTCHECK(e) (1)
2568#endif /* !INSECURE */
2569
2570/* macros to set up inuse chunks with or without footers */
2571
2572#if !FOOTERS
2573
2574#define mark_inuse_foot(M,p,s)
2575
2576/* Set cinuse bit and pinuse bit of next chunk */
2577#define set_inuse(M,p,s)\
2578 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2579 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2580
2581/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2582#define set_inuse_and_pinuse(M,p,s)\
2583 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2584 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2585
2586/* Set size, cinuse and pinuse bit of this chunk */
2587#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2588 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2589
2590#else /* FOOTERS */
2591
2592/* Set foot of inuse chunk to be xor of mstate and seed */
2593#define mark_inuse_foot(M,p,s)\
2594 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2595
2596#define get_mstate_for(p)\
2597 ((mstate)(((mchunkptr)((char*)(p) +\
2598 (chunksize(p))))->prev_foot ^ mparams.magic))
2599
2600#define set_inuse(M,p,s)\
2601 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2602 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2603 mark_inuse_foot(M,p,s))
2604
2605#define set_inuse_and_pinuse(M,p,s)\
2606 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2607 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
2608 mark_inuse_foot(M,p,s))
2609
2610#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2611 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2612 mark_inuse_foot(M, p, s))
2613
2614#endif /* !FOOTERS */
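/*
  The FOOTERS scheme is a plain xor check; schematically (commentary only):

    size_t stored = (size_t)m ^ mparams.magic;         // mark_inuse_foot
    mstate back   = (mstate)(stored ^ mparams.magic);  // get_mstate_for
    // back == m; a forged or corrupted footer yields a bogus mstate that
    // then fails the ok_magic() check

  so free()/realloc() can cheaply verify that a chunk really belongs to the
  mstate about to operate on it.
*/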
2615
2616/* ---------------------------- setting mparams -------------------------- */
2617
2618/* Initialize mparams */
2619static int init_mparams(void) {
2620 if (mparams.page_size == 0) {
2621 size_t s;
2622
2623 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2624 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
2625#if MORECORE_CONTIGUOUS
2626 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
2627#else /* MORECORE_CONTIGUOUS */
2628 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
2629#endif /* MORECORE_CONTIGUOUS */
2630
2631#if (FOOTERS && !INSECURE)
2632 {
2633#if USE_DEV_RANDOM
2634 int fd;
2635 unsigned char buf[sizeof(size_t)];
2636 /* Try to use /dev/urandom, else fall back on using time */
2637 if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
2638 read(fd, buf, sizeof(buf)) == sizeof(buf)) {
2639 s = *((size_t *) buf);
2640 close(fd);
2641 }
2642 else
2643#endif /* USE_DEV_RANDOM */
2644 s = (size_t)(time(0) ^ (size_t)0x55555555U);
2645
2646 s |= (size_t)8U; /* ensure nonzero */
2647 s &= ~(size_t)7U; /* improve chances of fault for bad values */
2648
2649 }
2650#else /* (FOOTERS && !INSECURE) */
2651 s = (size_t)0x58585858U;
2652#endif /* (FOOTERS && !INSECURE) */
2653 ACQUIRE_MAGIC_INIT_LOCK();
2654 if (mparams.magic == 0) {
2655 mparams.magic = s;
2656 /* Set up lock for main malloc area */
2657 INITIAL_LOCK(&gm->mutex);
2658 gm->mflags = mparams.default_mflags;
2659 }
2660 RELEASE_MAGIC_INIT_LOCK();
2661
2662#ifndef WIN32
2663 mparams.page_size = malloc_getpagesize;
2664 mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
2665 DEFAULT_GRANULARITY : mparams.page_size);
2666#else /* WIN32 */
2667 {
2668 SYSTEM_INFO system_info;
2669 GetSystemInfo(&system_info);
2670 mparams.page_size = system_info.dwPageSize;
2671 mparams.granularity = system_info.dwAllocationGranularity;
2672 }
2673#endif /* WIN32 */
2674
2675 /* Sanity-check configuration:
2676 size_t must be unsigned and as wide as pointer type.
2677 ints must be at least 4 bytes.
2678 alignment must be at least 8.
2679 Alignment, min chunk size, and page size must all be powers of 2.
2680 */
2681 if ((sizeof(size_t) != sizeof(char*)) ||
2682 (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
2683 (sizeof(int) < 4) ||
2684 (MALLOC_ALIGNMENT < (size_t)8U) ||
2685 ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
2686 ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
2687 ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
2688 ((mparams.page_size & (mparams.page_size-SIZE_T_ONE)) != 0))
2689 ABORT;
2690 }
2691 return 0;
2692}
2693
2694/* support for mallopt */
2695static int change_mparam(int param_number, int value) {
2696 size_t val = (size_t)value;
2697 init_mparams();
2698 switch(param_number) {
2699 case M_TRIM_THRESHOLD:
2700 mparams.trim_threshold = val;
2701 return 1;
2702 case M_GRANULARITY:
2703 if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
2704 mparams.granularity = val;
2705 return 1;
2706 }
2707 else
2708 return 0;
2709 case M_MMAP_THRESHOLD:
2710 mparams.mmap_threshold = val;
2711 return 1;
2712 default:
2713 return 0;
2714 }
2715}
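/*
  Usage sketch (illustrative only): the three tunables handled above are
  reached through the public mallopt entry point (dlmallopt when the dl
  prefix is in use), e.g.

    mallopt(M_TRIM_THRESHOLD, 256 * 1024);  // trim top when this much is free
    mallopt(M_GRANULARITY,    64 * 1024);   // power of 2, >= page size
    mallopt(M_MMAP_THRESHOLD, 512 * 1024);  // mmap requests at least this big

  Each call returns 1 if the value was accepted and 0 otherwise.
*/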
2716
2717#if DEBUG
2718/* ------------------------- Debugging Support --------------------------- */
2719
2720/* Check properties of any chunk, whether free, inuse, mmapped etc */
2721static void do_check_any_chunk(mstate m, mchunkptr p) {
2722 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2723 assert(ok_address(m, p));
2724}
2725
2726/* Check properties of top chunk */
2727static void do_check_top_chunk(mstate m, mchunkptr p) {
2728 msegmentptr sp = segment_holding(m, (char*)p);
2729 size_t sz = chunksize(p);
2730 assert(sp != 0);
2731 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2732 assert(ok_address(m, p));
2733 assert(sz == m->topsize);
2734 assert(sz > 0);
2735 assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
2736 assert(pinuse(p));
2737 assert(!next_pinuse(p));
2738}
2739
2740/* Check properties of (inuse) mmapped chunks */
2741static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
2742 size_t sz = chunksize(p);
2743 size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
2744 assert(is_mmapped(p));
2745 assert(use_mmap(m));
2746 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2747 assert(ok_address(m, p));
2748 assert(!is_small(sz));
2749 assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
2750 assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
2751 assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
2752}
2753
2754/* Check properties of inuse chunks */
2755static void do_check_inuse_chunk(mstate m, mchunkptr p) {
2756 do_check_any_chunk(m, p);
2757 assert(cinuse(p));
2758 assert(next_pinuse(p));
2759 /* If not pinuse and not mmapped, previous chunk has OK offset */
2760 assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
2761 if (is_mmapped(p))
2762 do_check_mmapped_chunk(m, p);
2763}
2764
2765/* Check properties of free chunks */
2766static void do_check_free_chunk(mstate m, mchunkptr p) {
2767 size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2768 mchunkptr next = chunk_plus_offset(p, sz);
2769 do_check_any_chunk(m, p);
2770 assert(!cinuse(p));
2771 assert(!next_pinuse(p));
2772 assert (!is_mmapped(p));
2773 if (p != m->dv && p != m->top) {
2774 if (sz >= MIN_CHUNK_SIZE) {
2775 assert((sz & CHUNK_ALIGN_MASK) == 0);
2776 assert(is_aligned(chunk2mem(p)));
2777 assert(next->prev_foot == sz);
2778 assert(pinuse(p));
2779 assert (next == m->top || cinuse(next));
2780 assert(p->fd->bk == p);
2781 assert(p->bk->fd == p);
2782 }
2783 else /* markers are always of size SIZE_T_SIZE */
2784 assert(sz == SIZE_T_SIZE);
2785 }
2786}
2787
2788/* Check properties of malloced chunks at the point they are malloced */
2789static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
2790 if (mem != 0) {
2791 mchunkptr p = mem2chunk(mem);
2792 size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2793 do_check_inuse_chunk(m, p);
2794 assert((sz & CHUNK_ALIGN_MASK) == 0);
2795 assert(sz >= MIN_CHUNK_SIZE);
2796 assert(sz >= s);
2797 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
2798 assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
2799 }
2800}
2801
2802/* Check a tree and its subtrees. */
2803static void do_check_tree(mstate m, tchunkptr t) {
2804 tchunkptr head = 0;
2805 tchunkptr u = t;
2806 bindex_t tindex = t->index;
2807 size_t tsize = chunksize(t);
2808 bindex_t idx;
2809 compute_tree_index(tsize, idx);
2810 assert(tindex == idx);
2811 assert(tsize >= MIN_LARGE_SIZE);
2812 assert(tsize >= minsize_for_tree_index(idx));
2813 assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
2814
2815 do { /* traverse through chain of same-sized nodes */
2816 do_check_any_chunk(m, ((mchunkptr)u));
2817 assert(u->index == tindex);
2818 assert(chunksize(u) == tsize);
2819 assert(!cinuse(u));
2820 assert(!next_pinuse(u));
2821 assert(u->fd->bk == u);
2822 assert(u->bk->fd == u);
2823 if (u->parent == 0) {
2824 assert(u->child[0] == 0);
2825 assert(u->child[1] == 0);
2826 }
2827 else {
2828 assert(head == 0); /* only one node on chain has parent */
2829 head = u;
2830 assert(u->parent != u);
2831 assert (u->parent->child[0] == u ||
2832 u->parent->child[1] == u ||
2833 *((tbinptr*)(u->parent)) == u);
2834 if (u->child[0] != 0) {
2835 assert(u->child[0]->parent == u);
2836 assert(u->child[0] != u);
2837 do_check_tree(m, u->child[0]);
2838 }
2839 if (u->child[1] != 0) {
2840 assert(u->child[1]->parent == u);
2841 assert(u->child[1] != u);
2842 do_check_tree(m, u->child[1]);
2843 }
2844 if (u->child[0] != 0 && u->child[1] != 0) {
2845 assert(chunksize(u->child[0]) < chunksize(u->child[1]));
2846 }
2847 }
2848 u = u->fd;
2849 } while (u != t);
2850 assert(head != 0);
2851}
2852
2853/* Check all the chunks in a treebin. */
2854static void do_check_treebin(mstate m, bindex_t i) {
2855 tbinptr* tb = treebin_at(m, i);
2856 tchunkptr t = *tb;
2857 int empty = (m->treemap & (1U << i)) == 0;
2858 if (t == 0)
2859 assert(empty);
2860 if (!empty)
2861 do_check_tree(m, t);
2862}
2863
2864/* Check all the chunks in a smallbin. */
2865static void do_check_smallbin(mstate m, bindex_t i) {
2866 sbinptr b = smallbin_at(m, i);
2867 mchunkptr p = b->bk;
2868 unsigned int empty = (m->smallmap & (1U << i)) == 0;
2869 if (p == b)
2870 assert(empty);
2871 if (!empty) {
2872 for (; p != b; p = p->bk) {
2873 size_t size = chunksize(p);
2874 mchunkptr q;
2875 /* each chunk claims to be free */
2876 do_check_free_chunk(m, p);
2877 /* chunk belongs in bin */
2878 assert(small_index(size) == i);
2879 assert(p->bk == b || chunksize(p->bk) == chunksize(p));
2880 /* chunk is followed by an inuse chunk */
2881 q = next_chunk(p);
2882 if (q->head != FENCEPOST_HEAD)
2883 do_check_inuse_chunk(m, q);
2884 }
2885 }
2886}
2887
2888/* Find x in a bin. Used in other check functions. */
2889static int bin_find(mstate m, mchunkptr x) {
2890 size_t size = chunksize(x);
2891 if (is_small(size)) {
2892 bindex_t sidx = small_index(size);
2893 sbinptr b = smallbin_at(m, sidx);
2894 if (smallmap_is_marked(m, sidx)) {
2895 mchunkptr p = b;
2896 do {
2897 if (p == x)
2898 return 1;
2899 } while ((p = p->fd) != b);
2900 }
2901 }
2902 else {
2903 bindex_t tidx;
2904 compute_tree_index(size, tidx);
2905 if (treemap_is_marked(m, tidx)) {
2906 tchunkptr t = *treebin_at(m, tidx);
2907 size_t sizebits = size << leftshift_for_tree_index(tidx);
2908 while (t != 0 && chunksize(t) != size) {
2909 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2910 sizebits <<= 1;
2911 }
2912 if (t != 0) {
2913 tchunkptr u = t;
2914 do {
2915 if (u == (tchunkptr)x)
2916 return 1;
2917 } while ((u = u->fd) != t);
2918 }
2919 }
2920 }
2921 return 0;
2922}
2923
2924/* Traverse each chunk and check it; return total */
2925static size_t traverse_and_check(mstate m) {
2926 size_t sum = 0;
2927 if (is_initialized(m)) {
2928 msegmentptr s = &m->seg;
2929 sum += m->topsize + TOP_FOOT_SIZE;
2930 while (s != 0) {
2931 mchunkptr q = align_as_chunk(s->base);
2932 mchunkptr lastq = 0;
2933 assert(pinuse(q));
2934 while (segment_holds(s, q) &&
2935 q != m->top && q->head != FENCEPOST_HEAD) {
2936 sum += chunksize(q);
2937 if (cinuse(q)) {
2938 assert(!bin_find(m, q));
2939 do_check_inuse_chunk(m, q);
2940 }
2941 else {
2942 assert(q == m->dv || bin_find(m, q));
2943 assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
2944 do_check_free_chunk(m, q);
2945 }
2946 lastq = q;
2947 q = next_chunk(q);
2948 }
2949 s = s->next;
2950 }
2951 }
2952 return sum;
2953}
2954
2955/* Check all properties of malloc_state. */
2956static void do_check_malloc_state(mstate m) {
2957 bindex_t i;
2958 size_t total;
2959 /* check bins */
2960 for (i = 0; i < NSMALLBINS; ++i)
2961 do_check_smallbin(m, i);
2962 for (i = 0; i < NTREEBINS; ++i)
2963 do_check_treebin(m, i);
2964
2965 if (m->dvsize != 0) { /* check dv chunk */
2966 do_check_any_chunk(m, m->dv);
2967 assert(m->dvsize == chunksize(m->dv));
2968 assert(m->dvsize >= MIN_CHUNK_SIZE);
2969 assert(bin_find(m, m->dv) == 0);
2970 }
2971
2972 if (m->top != 0) { /* check top chunk */
2973 do_check_top_chunk(m, m->top);
2974 assert(m->topsize == chunksize(m->top));
2975 assert(m->topsize > 0);
2976 assert(bin_find(m, m->top) == 0);
2977 }
2978
2979 total = traverse_and_check(m);
2980 assert(total <= m->footprint);
2981 assert(m->footprint <= m->max_footprint);
2982#if USE_MAX_ALLOWED_FOOTPRINT
2983 //TODO: change these assertions if we allow for shrinking.
2984 assert(m->footprint <= m->max_allowed_footprint);
2985 assert(m->max_footprint <= m->max_allowed_footprint);
2986#endif
2987}
2988#endif /* DEBUG */
2989
2990/* ----------------------------- statistics ------------------------------ */
2991
2992#if !NO_MALLINFO
2993static struct mallinfo internal_mallinfo(mstate m) {
2994 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
2995 if (!PREACTION(m)) {
2996 check_malloc_state(m);
2997 if (is_initialized(m)) {
2998 size_t nfree = SIZE_T_ONE; /* top always free */
2999 size_t mfree = m->topsize + TOP_FOOT_SIZE;
3000 size_t sum = mfree;
3001 msegmentptr s = &m->seg;
3002 while (s != 0) {
3003 mchunkptr q = align_as_chunk(s->base);
3004 while (segment_holds(s, q) &&
3005 q != m->top && q->head != FENCEPOST_HEAD) {
3006 size_t sz = chunksize(q);
3007 sum += sz;
3008 if (!cinuse(q)) {
3009 mfree += sz;
3010 ++nfree;
3011 }
3012 q = next_chunk(q);
3013 }
3014 s = s->next;
3015 }
3016
3017 nm.arena = sum;
3018 nm.ordblks = nfree;
3019 nm.hblkhd = m->footprint - sum;
3020 nm.usmblks = m->max_footprint;
3021 nm.uordblks = m->footprint - mfree;
3022 nm.fordblks = mfree;
3023 nm.keepcost = m->topsize;
3024 }
3025
3026 POSTACTION(m);
3027 }
3028 return nm;
3029}
3030#endif /* !NO_MALLINFO */
3031
3032static void internal_malloc_stats(mstate m) {
3033 if (!PREACTION(m)) {
3034 size_t maxfp = 0;
3035 size_t fp = 0;
3036 size_t used = 0;
3037 check_malloc_state(m);
3038 if (is_initialized(m)) {
3039 msegmentptr s = &m->seg;
3040 maxfp = m->max_footprint;
3041 fp = m->footprint;
3042 used = fp - (m->topsize + TOP_FOOT_SIZE);
3043
3044 while (s != 0) {
3045 mchunkptr q = align_as_chunk(s->base);
3046 while (segment_holds(s, q) &&
3047 q != m->top && q->head != FENCEPOST_HEAD) {
3048 if (!cinuse(q))
3049 used -= chunksize(q);
3050 q = next_chunk(q);
3051 }
3052 s = s->next;
3053 }
3054 }
3055
3056 fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
3057 fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
3058 fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
3059
3060 POSTACTION(m);
3061 }
3062}
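/*
 * For reference, dlmalloc_stats() (further below) relays here and prints
 * three lines on stderr in the format above; the numbers are illustrative
 * only:
 *
 *   max system bytes =    1310720
 *   system bytes =    1310720
 *   in use bytes =     524288
 *
 * "in use" is the footprint minus the top chunk and all free chunks, per
 * the traversal above.
 */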
3063
3064/* ----------------------- Operations on smallbins ----------------------- */
3065
3066/*
3067  Various forms of linking and unlinking are defined as macros, even
3068  the ones for trees, which are very long but have very short typical
3069  paths. This is ugly, but it reduces reliance on compilers' inlining
3070  support.
3071*/
3072
3073/* Link a free chunk into a smallbin */
3074#define insert_small_chunk(M, P, S) {\
3075 bindex_t I = small_index(S);\
3076 mchunkptr B = smallbin_at(M, I);\
3077 mchunkptr F = B;\
3078 assert(S >= MIN_CHUNK_SIZE);\
3079 if (!smallmap_is_marked(M, I))\
3080 mark_smallmap(M, I);\
3081 else if (RTCHECK(ok_address(M, B->fd)))\
3082 F = B->fd;\
3083 else {\
3084    CORRUPTION_ERROR_ACTION(M, B);\
3085  }\
3086 B->fd = P;\
3087 F->bk = P;\
3088 P->fd = F;\
3089 P->bk = B;\
3090}
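/*
 * A sketch of the list manipulation above, assuming the bin for size S
 * already holds chunks F1 <-> F2 (B is the bin header):
 *
 *   before:  B <-> F1 <-> F2 <-> (back to B)
 *   after:   B <-> P  <-> F1 <-> F2 <-> (back to B)
 *
 * i.e. the newly freed chunk P is linked in directly after the bin header,
 * so same-sized chunks are reused roughly LIFO.
 */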
3091
3092/* Unlink a chunk from a smallbin
3093 * Added check: if F->bk != P or B->fd != P, we have doubly-linked list
3094 * corruption, and abort.
3095 */
3096#define unlink_small_chunk(M, P, S) {\
3097 mchunkptr F = P->fd;\
3098 mchunkptr B = P->bk;\
3099 bindex_t I = small_index(S);\
3100 if (__builtin_expect (F->bk != P || B->fd != P, 0))\
3101    CORRUPTION_ERROR_ACTION(M, P);\
3102  assert(P != B);\
3103 assert(P != F);\
3104 assert(chunksize(P) == small_index2size(I));\
3105 if (F == B)\
3106 clear_smallmap(M, I);\
3107 else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
3108 (B == smallbin_at(M,I) || ok_address(M, B)))) {\
3109 F->bk = B;\
3110 B->fd = F;\
3111 }\
3112 else {\
3113    CORRUPTION_ERROR_ACTION(M, P);\
3114  }\
3115}
3116
3117/* Unlink the first chunk from a smallbin
3118 * Added check: if F->bk != P or B->fd != P, we have doubly-linked list
3119 * corruption, and abort.
3120 */
3121#define unlink_first_small_chunk(M, B, P, I) {\
3122 mchunkptr F = P->fd;\
3123 if (__builtin_expect (F->bk != P || B->fd != P, 0))\
3124    CORRUPTION_ERROR_ACTION(M, P);\
3125  assert(P != B);\
3126 assert(P != F);\
3127 assert(chunksize(P) == small_index2size(I));\
3128 if (B == F)\
3129 clear_smallmap(M, I);\
3130 else if (RTCHECK(ok_address(M, F))) {\
3131 B->fd = F;\
3132 F->bk = B;\
3133 }\
3134 else {\
3135    CORRUPTION_ERROR_ACTION(M, P);\
3136  }\
3137}
3138
3139/* Replace dv node, binning the old one */
3140/* Used only when dvsize known to be small */
3141#define replace_dv(M, P, S) {\
3142 size_t DVS = M->dvsize;\
3143 if (DVS != 0) {\
3144 mchunkptr DV = M->dv;\
3145 assert(is_small(DVS));\
3146 insert_small_chunk(M, DV, DVS);\
3147 }\
3148 M->dvsize = S;\
3149 M->dv = P;\
3150}
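/*
 * ("dv" is dlmalloc's designated victim: the chunk preferred for servicing
 *  small requests that have no exact-fit bin, normally the remainder of the
 *  most recent split.  replace_dv retires the old victim into a smallbin
 *  and installs P in its place.)
 */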
3151
3152/* ------------------------- Operations on trees ------------------------- */
3153
3154/* Insert chunk into tree */
3155#define insert_large_chunk(M, X, S) {\
3156 tbinptr* H;\
3157 bindex_t I;\
3158 compute_tree_index(S, I);\
3159 H = treebin_at(M, I);\
3160 X->index = I;\
3161 X->child[0] = X->child[1] = 0;\
3162 if (!treemap_is_marked(M, I)) {\
3163 mark_treemap(M, I);\
3164 *H = X;\
3165 X->parent = (tchunkptr)H;\
3166 X->fd = X->bk = X;\
3167 }\
3168 else {\
3169 tchunkptr T = *H;\
3170 size_t K = S << leftshift_for_tree_index(I);\
3171 for (;;) {\
3172 if (chunksize(T) != S) {\
3173 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3174 K <<= 1;\
3175 if (*C != 0)\
3176 T = *C;\
3177 else if (RTCHECK(ok_address(M, C))) {\
3178 *C = X;\
3179 X->parent = T;\
3180 X->fd = X->bk = X;\
3181 break;\
3182 }\
3183 else {\
3184          CORRUPTION_ERROR_ACTION(M, C);\
3185          break;\
3186 }\
3187 }\
3188 else {\
3189 tchunkptr F = T->fd;\
3190 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3191 T->fd = F->bk = X;\
3192 X->fd = F;\
3193 X->bk = T;\
3194 X->parent = 0;\
3195 break;\
3196 }\
3197 else {\
3198          CORRUPTION_ERROR_ACTION(M, F);\
3199          break;\
3200 }\
3201 }\
3202 }\
3203 }\
3204}
3205
3206/*
3207 Unlink steps:
3208
3209 1. If x is a chained node, unlink it from its same-sized fd/bk links
3210 and choose its bk node as its replacement.
3211 2. If x was the last node of its size, but not a leaf node, it must
3212 be replaced with a leaf node (not merely one with an open left or
3213     right), to make sure that lefts and rights of descendants
3214     correspond properly to bit masks. We use the rightmost descendant
3215 of x. We could use any other leaf, but this is easy to locate and
3216 tends to counteract removal of leftmosts elsewhere, and so keeps
3217 paths shorter than minimally guaranteed. This doesn't loop much
3218 because on average a node in a tree is near the bottom.
3219 3. If x is the base of a chain (i.e., has parent links) relink
3220 x's parent and children to x's replacement (or null if none).
3221
3222  Added check: if F->bk != X or R->fd != X, we have doubly-linked list
3223 corruption, and abort.
3224*/
3225
3226#define unlink_large_chunk(M, X) {\
3227 tchunkptr XP = X->parent;\
3228 tchunkptr R;\
3229 if (X->bk != X) {\
3230 tchunkptr F = X->fd;\
3231 R = X->bk;\
3232 if (__builtin_expect (F->bk != X || R->fd != X, 0))\
3233      CORRUPTION_ERROR_ACTION(M, X);\
3234    if (RTCHECK(ok_address(M, F))) {\
3235 F->bk = R;\
3236 R->fd = F;\
3237 }\
3238 else {\
3239      CORRUPTION_ERROR_ACTION(M, F);\
3240    }\
3241 }\
3242 else {\
3243 tchunkptr* RP;\
3244 if (((R = *(RP = &(X->child[1]))) != 0) ||\
3245 ((R = *(RP = &(X->child[0]))) != 0)) {\
3246 tchunkptr* CP;\
3247 while ((*(CP = &(R->child[1])) != 0) ||\
3248 (*(CP = &(R->child[0])) != 0)) {\
3249 R = *(RP = CP);\
3250 }\
3251 if (RTCHECK(ok_address(M, RP)))\
3252 *RP = 0;\
3253 else {\
3254        CORRUPTION_ERROR_ACTION(M, RP);\
3255      }\
3256 }\
3257 }\
3258 if (XP != 0) {\
3259 tbinptr* H = treebin_at(M, X->index);\
3260 if (X == *H) {\
3261 if ((*H = R) == 0) \
3262 clear_treemap(M, X->index);\
3263 }\
3264 else if (RTCHECK(ok_address(M, XP))) {\
3265 if (XP->child[0] == X) \
3266 XP->child[0] = R;\
3267 else \
3268 XP->child[1] = R;\
3269 }\
3270 else\
3271      CORRUPTION_ERROR_ACTION(M, XP);\
3272    if (R != 0) {\
3273 if (RTCHECK(ok_address(M, R))) {\
3274 tchunkptr C0, C1;\
3275 R->parent = XP;\
3276 if ((C0 = X->child[0]) != 0) {\
3277 if (RTCHECK(ok_address(M, C0))) {\
3278 R->child[0] = C0;\
3279 C0->parent = R;\
3280 }\
3281 else\
3282            CORRUPTION_ERROR_ACTION(M, C0);\
3283        }\
3284 if ((C1 = X->child[1]) != 0) {\
3285 if (RTCHECK(ok_address(M, C1))) {\
3286 R->child[1] = C1;\
3287 C1->parent = R;\
3288 }\
3289 else\
3290            CORRUPTION_ERROR_ACTION(M, C1);\
3291        }\
3292 }\
3293 else\
3294        CORRUPTION_ERROR_ACTION(M, R);\
3295    }\
3296 }\
3297}
3298
3299/* Relays to large vs small bin operations */
3300
3301#define insert_chunk(M, P, S)\
3302 if (is_small(S)) insert_small_chunk(M, P, S)\
3303 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3304
3305#define unlink_chunk(M, P, S)\
3306 if (is_small(S)) unlink_small_chunk(M, P, S)\
3307 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3308
3309
3310/* Relays to internal calls to malloc/free from realloc, memalign etc */
3311
3312#if ONLY_MSPACES
3313#define internal_malloc(m, b) mspace_malloc(m, b)
3314#define internal_free(m, mem) mspace_free(m,mem);
3315#else /* ONLY_MSPACES */
3316#if MSPACES
3317#define internal_malloc(m, b)\
3318 (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3319#define internal_free(m, mem)\
3320 if (m == gm) dlfree(mem); else mspace_free(m,mem);
3321#else /* MSPACES */
3322#define internal_malloc(m, b) dlmalloc(b)
3323#define internal_free(m, mem) dlfree(mem)
3324#endif /* MSPACES */
3325#endif /* ONLY_MSPACES */
3326
3327/* ----------------------- Direct-mmapping chunks ----------------------- */
3328
3329/*
3330 Directly mmapped chunks are set up with an offset to the start of
3331 the mmapped region stored in the prev_foot field of the chunk. This
3332 allows reconstruction of the required argument to MUNMAP when freed,
3333 and also allows adjustment of the returned chunk to meet alignment
3334 requirements (especially in memalign). There is also enough space
3335 allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
3336 the PINUSE bit so frees can be checked.
3337*/
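/*
 * Worked sketch of that bookkeeping: for a directly mmapped chunk p,
 *
 *   offset       = p->prev_foot & ~IS_MMAPPED_BIT
 *   region start = (char*)p - offset
 *   region size  = chunksize(p) + offset + MMAP_FOOT_PAD
 *
 * which is exactly the (address, length) pair dlfree later hands to
 * CALL_MUNMAP, and which do_check_mmapped_chunk asserts is a multiple of
 * the page size.
 */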
3338
3339/* Malloc using mmap */
3340static void* mmap_alloc(mstate m, size_t nb) {
3341 size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3342#if USE_MAX_ALLOWED_FOOTPRINT
3343 size_t new_footprint = m->footprint + mmsize;
3344 if (new_footprint <= m->footprint || /* Check for wrap around 0 */
3345 new_footprint > m->max_allowed_footprint)
3346 return 0;
3347#endif
3348 if (mmsize > nb) { /* Check for wrap around 0 */
3349 char* mm = (char*)(DIRECT_MMAP(mmsize));
3350 if (mm != CMFAIL) {
3351 size_t offset = align_offset(chunk2mem(mm));
3352 size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3353 mchunkptr p = (mchunkptr)(mm + offset);
3354 p->prev_foot = offset | IS_MMAPPED_BIT;
3355 (p)->head = (psize|CINUSE_BIT);
3356 mark_inuse_foot(m, p, psize);
3357 chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3358 chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3359
3360      if (m->least_addr == 0 || mm < m->least_addr)
3361        m->least_addr = mm;
3362 if ((m->footprint += mmsize) > m->max_footprint)
3363 m->max_footprint = m->footprint;
3364 assert(is_aligned(chunk2mem(p)));
3365 check_mmapped_chunk(m, p);
3366 return chunk2mem(p);
3367 }
3368 }
3369 return 0;
3370}
3371
3372/* Realloc using mmap */
3373static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3374 size_t oldsize = chunksize(oldp);
3375 if (is_small(nb)) /* Can't shrink mmap regions below small size */
3376 return 0;
3377 /* Keep old chunk if big enough but not too big */
3378 if (oldsize >= nb + SIZE_T_SIZE &&
3379 (oldsize - nb) <= (mparams.granularity << 1))
3380 return oldp;
3381 else {
3382 size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
3383 size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3384 size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
3385 CHUNK_ALIGN_MASK);
3386 char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3387 oldmmsize, newmmsize, 1);
3388 if (cp != CMFAIL) {
3389 mchunkptr newp = (mchunkptr)(cp + offset);
3390 size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3391 newp->head = (psize|CINUSE_BIT);
3392 mark_inuse_foot(m, newp, psize);
3393 chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3394 chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3395
3396 if (cp < m->least_addr)
3397 m->least_addr = cp;
3398 if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3399 m->max_footprint = m->footprint;
3400 check_mmapped_chunk(m, newp);
3401 return newp;
3402 }
3403 }
3404 return 0;
3405}
3406
3407/* -------------------------- mspace management -------------------------- */
3408
3409/* Initialize top chunk and its size */
3410static void init_top(mstate m, mchunkptr p, size_t psize) {
3411 /* Ensure alignment */
3412 size_t offset = align_offset(chunk2mem(p));
3413 p = (mchunkptr)((char*)p + offset);
3414 psize -= offset;
3415
3416 m->top = p;
3417 m->topsize = psize;
3418 p->head = psize | PINUSE_BIT;
3419 /* set size of fake trailing chunk holding overhead space only once */
3420 chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3421 m->trim_check = mparams.trim_threshold; /* reset on each update */
3422}
3423
3424/* Initialize bins for a new mstate that is otherwise zeroed out */
3425static void init_bins(mstate m) {
3426 /* Establish circular links for smallbins */
3427 bindex_t i;
3428 for (i = 0; i < NSMALLBINS; ++i) {
3429 sbinptr bin = smallbin_at(m,i);
3430 bin->fd = bin->bk = bin;
3431 }
3432}
3433
3434#if PROCEED_ON_ERROR
3435
3436/* default corruption action */
3437static void reset_on_error(mstate m) {
3438 int i;
3439 ++malloc_corruption_error_count;
3440 /* Reinitialize fields to forget about all memory */
3441 m->smallbins = m->treebins = 0;
3442 m->dvsize = m->topsize = 0;
3443 m->seg.base = 0;
3444 m->seg.size = 0;
3445 m->seg.next = 0;
3446 m->top = m->dv = 0;
3447 for (i = 0; i < NTREEBINS; ++i)
3448 *treebin_at(m, i) = 0;
3449 init_bins(m);
3450}
3451#endif /* PROCEED_ON_ERROR */
3452
3453/* Allocate chunk and prepend remainder with chunk in successor base. */
3454static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3455 size_t nb) {
3456 mchunkptr p = align_as_chunk(newbase);
3457 mchunkptr oldfirst = align_as_chunk(oldbase);
3458 size_t psize = (char*)oldfirst - (char*)p;
3459 mchunkptr q = chunk_plus_offset(p, nb);
3460 size_t qsize = psize - nb;
3461 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3462
3463 assert((char*)oldfirst > (char*)q);
3464 assert(pinuse(oldfirst));
3465 assert(qsize >= MIN_CHUNK_SIZE);
3466
3467 /* consolidate remainder with first chunk of old base */
3468 if (oldfirst == m->top) {
3469 size_t tsize = m->topsize += qsize;
3470 m->top = q;
3471 q->head = tsize | PINUSE_BIT;
3472 check_top_chunk(m, q);
3473 }
3474 else if (oldfirst == m->dv) {
3475 size_t dsize = m->dvsize += qsize;
3476 m->dv = q;
3477 set_size_and_pinuse_of_free_chunk(q, dsize);
3478 }
3479 else {
3480 if (!cinuse(oldfirst)) {
3481 size_t nsize = chunksize(oldfirst);
3482 unlink_chunk(m, oldfirst, nsize);
3483 oldfirst = chunk_plus_offset(oldfirst, nsize);
3484 qsize += nsize;
3485 }
3486 set_free_with_pinuse(q, qsize, oldfirst);
3487 insert_chunk(m, q, qsize);
3488 check_free_chunk(m, q);
3489 }
3490
3491 check_malloced_chunk(m, chunk2mem(p), nb);
3492 return chunk2mem(p);
3493}
3494
3495
3496/* Add a segment to hold a new noncontiguous region */
3497static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3498 /* Determine locations and sizes of segment, fenceposts, old top */
3499 char* old_top = (char*)m->top;
3500 msegmentptr oldsp = segment_holding(m, old_top);
3501 char* old_end = oldsp->base + oldsp->size;
3502 size_t ssize = pad_request(sizeof(struct malloc_segment));
3503 char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3504 size_t offset = align_offset(chunk2mem(rawsp));
3505 char* asp = rawsp + offset;
3506 char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3507 mchunkptr sp = (mchunkptr)csp;
3508 msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3509 mchunkptr tnext = chunk_plus_offset(sp, ssize);
3510 mchunkptr p = tnext;
3511 int nfences = 0;
3512
3513 /* reset top to new space */
3514 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3515
3516 /* Set up segment record */
3517 assert(is_aligned(ss));
3518 set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3519 *ss = m->seg; /* Push current record */
3520 m->seg.base = tbase;
3521 m->seg.size = tsize;
3522 m->seg.sflags = mmapped;
3523 m->seg.next = ss;
3524
3525 /* Insert trailing fenceposts */
3526 for (;;) {
3527 mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3528 p->head = FENCEPOST_HEAD;
3529 ++nfences;
3530 if ((char*)(&(nextp->head)) < old_end)
3531 p = nextp;
3532 else
3533 break;
3534 }
3535 assert(nfences >= 2);
3536
3537 /* Insert the rest of old top into a bin as an ordinary free chunk */
3538 if (csp != old_top) {
3539 mchunkptr q = (mchunkptr)old_top;
3540 size_t psize = csp - old_top;
3541 mchunkptr tn = chunk_plus_offset(q, psize);
3542 set_free_with_pinuse(q, psize, tn);
3543 insert_chunk(m, q, psize);
3544 }
3545
3546 check_top_chunk(m, m->top);
3547}
3548
3549/* -------------------------- System allocation -------------------------- */
3550
3551/* Get memory from system using MORECORE or MMAP */
3552static void* sys_alloc(mstate m, size_t nb) {
3553 char* tbase = CMFAIL;
3554 size_t tsize = 0;
3555 flag_t mmap_flag = 0;
3556
3557 init_mparams();
3558
3559 /* Directly map large chunks */
3560 if (use_mmap(m) && nb >= mparams.mmap_threshold) {
3561 void* mem = mmap_alloc(m, nb);
3562 if (mem != 0)
3563 return mem;
3564 }
3565
3566#if USE_MAX_ALLOWED_FOOTPRINT
3567 /* Make sure the footprint doesn't grow past max_allowed_footprint.
3568 * This covers all cases except for where we need to page align, below.
3569 */
3570 {
3571 size_t new_footprint = m->footprint +
3572 granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3573 if (new_footprint <= m->footprint || /* Check for wrap around 0 */
3574 new_footprint > m->max_allowed_footprint)
3575 return 0;
3576 }
3577#endif
3578
3579 /*
3580 Try getting memory in any of three ways (in most-preferred to
3581 least-preferred order):
3582 1. A call to MORECORE that can normally contiguously extend memory.
3583      (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
3584       main space is mmapped or a previous contiguous call failed)
3585 2. A call to MMAP new space (disabled if not HAVE_MMAP).
3586 Note that under the default settings, if MORECORE is unable to
3587 fulfill a request, and HAVE_MMAP is true, then mmap is
3588 used as a noncontiguous system allocator. This is a useful backup
3589 strategy for systems with holes in address spaces -- in this case
3590 sbrk cannot contiguously expand the heap, but mmap may be able to
3591 find space.
3592 3. A call to MORECORE that cannot usually contiguously extend memory.
3593 (disabled if not HAVE_MORECORE)
3594 */
3595
3596 if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
3597 char* br = CMFAIL;
3598 msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
3599 size_t asize = 0;
3600 ACQUIRE_MORECORE_LOCK();
3601
3602 if (ss == 0) { /* First time through or recovery */
3603 char* base = (char*)CALL_MORECORE(0);
3604 if (base != CMFAIL) {
3605 asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3606 /* Adjust to end on a page boundary */
3607 if (!is_page_aligned(base)) {
3608 asize += (page_align((size_t)base) - (size_t)base);
3609#if USE_MAX_ALLOWED_FOOTPRINT
3610 /* If the alignment pushes us over max_allowed_footprint,
3611 * poison the upcoming call to MORECORE and continue.
3612 */
3613 {
3614 size_t new_footprint = m->footprint + asize;
3615 if (new_footprint <= m->footprint || /* Check for wrap around 0 */
3616 new_footprint > m->max_allowed_footprint) {
3617 asize = HALF_MAX_SIZE_T;
3618 }
3619 }
3620#endif
3621 }
3622 /* Can't call MORECORE if size is negative when treated as signed */
3623 if (asize < HALF_MAX_SIZE_T &&
3624 (br = (char*)(CALL_MORECORE(asize))) == base) {
3625 tbase = base;
3626 tsize = asize;
3627 }
3628 }
3629 }
3630 else {
3631 /* Subtract out existing available top space from MORECORE request. */
3632 asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
3633      /* Use mem here only if it did contiguously extend old space */
3634 if (asize < HALF_MAX_SIZE_T &&
3635 (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
3636 tbase = br;
3637 tsize = asize;
3638 }
3639 }
3640
3641 if (tbase == CMFAIL) { /* Cope with partial failure */
3642 if (br != CMFAIL) { /* Try to use/extend the space we did get */
3643 if (asize < HALF_MAX_SIZE_T &&
3644 asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
3645 size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
3646 if (esize < HALF_MAX_SIZE_T) {
3647 char* end = (char*)CALL_MORECORE(esize);
3648 if (end != CMFAIL)
3649 asize += esize;
3650 else { /* Can't use; try to release */
3651 CALL_MORECORE(-asize);
3652 br = CMFAIL;
3653 }
3654 }
3655 }
3656 }
3657 if (br != CMFAIL) { /* Use the space we did get */
3658 tbase = br;
3659 tsize = asize;
3660 }
3661 else
3662 disable_contiguous(m); /* Don't try contiguous path in the future */
3663 }
3664
3665 RELEASE_MORECORE_LOCK();
3666 }
3667
3668 if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
3669 size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
3670 size_t rsize = granularity_align(req);
3671 if (rsize > nb) { /* Fail if wraps around zero */
3672 char* mp = (char*)(CALL_MMAP(rsize));
3673 if (mp != CMFAIL) {
3674 tbase = mp;
3675 tsize = rsize;
3676 mmap_flag = IS_MMAPPED_BIT;
3677 }
3678 }
3679 }
3680
3681 if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
3682 size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3683 if (asize < HALF_MAX_SIZE_T) {
3684 char* br = CMFAIL;
3685 char* end = CMFAIL;
3686 ACQUIRE_MORECORE_LOCK();
3687 br = (char*)(CALL_MORECORE(asize));
3688 end = (char*)(CALL_MORECORE(0));
3689 RELEASE_MORECORE_LOCK();
3690 if (br != CMFAIL && end != CMFAIL && br < end) {
3691 size_t ssize = end - br;
3692 if (ssize > nb + TOP_FOOT_SIZE) {
3693 tbase = br;
3694 tsize = ssize;
3695 }
3696 }
3697 }
3698 }
3699
3700 if (tbase != CMFAIL) {
3701
3702 if ((m->footprint += tsize) > m->max_footprint)
3703 m->max_footprint = m->footprint;
3704
3705 if (!is_initialized(m)) { /* first-time initialization */
3706      if (m->least_addr == 0 || tbase < m->least_addr)
3707 m->least_addr = tbase;
3708 m->seg.base = tbase;
3709      m->seg.size = tsize;
3710 m->seg.sflags = mmap_flag;
3711 m->magic = mparams.magic;
3712 init_bins(m);
3713      if (is_global(m))
3714        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3715 else {
3716 /* Offset top by embedded malloc_state */
3717 mchunkptr mn = next_chunk(mem2chunk(m));
3718 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
3719 }
3720 }
3721
3722 else {
3723 /* Try to merge with an existing segment */
3724 msegmentptr sp = &m->seg;
3725 while (sp != 0 && tbase != sp->base + sp->size)
3726 sp = sp->next;
3727 if (sp != 0 &&
3728 !is_extern_segment(sp) &&
3729 (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
3730 segment_holds(sp, m->top)) { /* append */
3731 sp->size += tsize;
3732 init_top(m, m->top, m->topsize + tsize);
3733 }
3734 else {
3735 if (tbase < m->least_addr)
3736 m->least_addr = tbase;
3737 sp = &m->seg;
3738 while (sp != 0 && sp->base != tbase + tsize)
3739 sp = sp->next;
3740 if (sp != 0 &&
3741 !is_extern_segment(sp) &&
3742 (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
3743 char* oldbase = sp->base;
3744 sp->base = tbase;
3745 sp->size += tsize;
3746 return prepend_alloc(m, tbase, oldbase, nb);
3747 }
3748 else
3749 add_segment(m, tbase, tsize, mmap_flag);
3750 }
3751 }
3752
3753 if (nb < m->topsize) { /* Allocate from new or extended top space */
3754 size_t rsize = m->topsize -= nb;
3755 mchunkptr p = m->top;
3756 mchunkptr r = m->top = chunk_plus_offset(p, nb);
3757 r->head = rsize | PINUSE_BIT;
3758 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3759 check_top_chunk(m, m->top);
3760 check_malloced_chunk(m, chunk2mem(p), nb);
3761 return chunk2mem(p);
3762 }
3763 }
3764
3765 MALLOC_FAILURE_ACTION;
3766 return 0;
3767}
3768
3769/* ----------------------- system deallocation -------------------------- */
3770
3771/* Unmap and unlink any mmapped segments that don't contain used chunks */
3772static size_t release_unused_segments(mstate m) {
3773 size_t released = 0;
3774 msegmentptr pred = &m->seg;
3775 msegmentptr sp = pred->next;
3776 while (sp != 0) {
3777 char* base = sp->base;
3778 size_t size = sp->size;
3779 msegmentptr next = sp->next;
3780 if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
3781 mchunkptr p = align_as_chunk(base);
3782 size_t psize = chunksize(p);
3783 /* Can unmap if first chunk holds entire segment and not pinned */
3784 if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
3785 tchunkptr tp = (tchunkptr)p;
3786 assert(segment_holds(sp, (char*)sp));
3787 if (p == m->dv) {
3788 m->dv = 0;
3789 m->dvsize = 0;
3790 }
3791 else {
3792 unlink_large_chunk(m, tp);
3793 }
3794 if (CALL_MUNMAP(base, size) == 0) {
3795 released += size;
3796 m->footprint -= size;
3797 /* unlink obsoleted record */
3798 sp = pred;
3799 sp->next = next;
3800 }
3801 else { /* back out if cannot unmap */
3802 insert_large_chunk(m, tp, psize);
3803 }
3804 }
3805 }
3806 pred = sp;
3807 sp = next;
3808 }
3809 return released;
3810}
3811
3812static int sys_trim(mstate m, size_t pad) {
3813 size_t released = 0;
3814 if (pad < MAX_REQUEST && is_initialized(m)) {
3815 pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
3816
3817 if (m->topsize > pad) {
3818 /* Shrink top space in granularity-size units, keeping at least one */
3819 size_t unit = mparams.granularity;
3820 size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
3821 SIZE_T_ONE) * unit;
3822 msegmentptr sp = segment_holding(m, (char*)m->top);
3823
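      /*
       * Example of the `extra` arithmetic above (illustrative numbers): with
       * a 64 KB granularity, pad of roughly TOP_FOOT_SIZE and 200 KB of top
       * space, extra = ((200 KB - pad + 64 KB - 1) / 64 KB - 1) * 64 KB
       * = 3 * 64 KB, so trimming proceeds in whole granularity units and
       * always leaves more than `pad` bytes in the top chunk.
       */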
3824 if (!is_extern_segment(sp)) {
3825 if (is_mmapped_segment(sp)) {
3826 if (HAVE_MMAP &&
3827 sp->size >= extra &&
3828 !has_segment_link(m, sp)) { /* can't shrink if pinned */
3829 size_t newsize = sp->size - extra;
3830 /* Prefer mremap, fall back to munmap */
3831 if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
3832 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
3833 released = extra;
3834 }
3835 }
3836 }
3837 else if (HAVE_MORECORE) {
3838 if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
3839 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
3840 ACQUIRE_MORECORE_LOCK();
3841 {
3842 /* Make sure end of memory is where we last set it. */
3843 char* old_br = (char*)(CALL_MORECORE(0));
3844 if (old_br == sp->base + sp->size) {
3845 char* rel_br = (char*)(CALL_MORECORE(-extra));
3846 char* new_br = (char*)(CALL_MORECORE(0));
3847 if (rel_br != CMFAIL && new_br < old_br)
3848 released = old_br - new_br;
3849 }
3850 }
3851 RELEASE_MORECORE_LOCK();
3852 }
3853 }
3854
3855 if (released != 0) {
3856 sp->size -= released;
3857 m->footprint -= released;
3858 init_top(m, m->top, m->topsize - released);
3859 check_top_chunk(m, m->top);
3860 }
3861 }
3862
3863 /* Unmap any unused mmapped segments */
3864    if (HAVE_MMAP)
3865      released += release_unused_segments(m);
3866
3867 /* On failure, disable autotrim to avoid repeated failed future calls */
3868 if (released == 0)
3869 m->trim_check = MAX_SIZE_T;
3870 }
3871
3872 return (released != 0)? 1 : 0;
3873}
3874
3875/* ---------------------------- malloc support --------------------------- */
3876
3877/* allocate a large request from the best fitting chunk in a treebin */
3878static void* tmalloc_large(mstate m, size_t nb) {
3879 tchunkptr v = 0;
3880 size_t rsize = -nb; /* Unsigned negation */
3881 tchunkptr t;
3882 bindex_t idx;
3883 compute_tree_index(nb, idx);
3884
3885 if ((t = *treebin_at(m, idx)) != 0) {
3886 /* Traverse tree for this bin looking for node with size == nb */
3887 size_t sizebits = nb << leftshift_for_tree_index(idx);
3888 tchunkptr rst = 0; /* The deepest untaken right subtree */
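    /* (Navigation note: `sizebits` presents the request size's remaining
       bits most-significant first; at each level the top bit selects
       child[0] or child[1] and sizebits is shifted left, so the loop walks
       the bitwise trie toward chunks whose sizes share an ever longer
       prefix with nb.) */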
3889 for (;;) {
3890 tchunkptr rt;
3891 size_t trem = chunksize(t) - nb;
3892 if (trem < rsize) {
3893 v = t;
3894 if ((rsize = trem) == 0)
3895 break;
3896 }
3897 rt = t->child[1];
3898 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3899 if (rt != 0 && rt != t)
3900 rst = rt;
3901 if (t == 0) {
3902 t = rst; /* set t to least subtree holding sizes > nb */
3903 break;
3904 }
3905 sizebits <<= 1;
3906 }
3907 }
3908
3909 if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
3910 binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3911 if (leftbits != 0) {
3912 bindex_t i;
3913 binmap_t leastbit = least_bit(leftbits);
3914 compute_bit2idx(leastbit, i);
3915 t = *treebin_at(m, i);
3916 }
3917 }
3918
3919 while (t != 0) { /* find smallest of tree or subtree */
3920 size_t trem = chunksize(t) - nb;
3921 if (trem < rsize) {
3922 rsize = trem;
3923 v = t;
3924 }
3925 t = leftmost_child(t);
3926 }
3927
3928 /* If dv is a better fit, return 0 so malloc will use it */
3929 if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3930 if (RTCHECK(ok_address(m, v))) { /* split */
3931 mchunkptr r = chunk_plus_offset(v, nb);
3932 assert(chunksize(v) == rsize + nb);
3933 if (RTCHECK(ok_next(v, r))) {
3934 unlink_large_chunk(m, v);
3935 if (rsize < MIN_CHUNK_SIZE)
3936 set_inuse_and_pinuse(m, v, (rsize + nb));
3937 else {
3938 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3939 set_size_and_pinuse_of_free_chunk(r, rsize);
3940 insert_chunk(m, r, rsize);
3941 }
3942 return chunk2mem(v);
3943 }
3944 }
3945    CORRUPTION_ERROR_ACTION(m, v);
3946  }
3947 return 0;
3948}
3949
3950/* allocate a small request from the best fitting chunk in a treebin */
3951static void* tmalloc_small(mstate m, size_t nb) {
3952 tchunkptr t, v;
3953 size_t rsize;
3954 bindex_t i;
3955 binmap_t leastbit = least_bit(m->treemap);
3956 compute_bit2idx(leastbit, i);
3957
3958 v = t = *treebin_at(m, i);
3959 rsize = chunksize(t) - nb;
3960
3961 while ((t = leftmost_child(t)) != 0) {
3962 size_t trem = chunksize(t) - nb;
3963 if (trem < rsize) {
3964 rsize = trem;
3965 v = t;
3966 }
3967 }
3968
3969 if (RTCHECK(ok_address(m, v))) {
3970 mchunkptr r = chunk_plus_offset(v, nb);
3971 assert(chunksize(v) == rsize + nb);
3972 if (RTCHECK(ok_next(v, r))) {
3973 unlink_large_chunk(m, v);
3974 if (rsize < MIN_CHUNK_SIZE)
3975 set_inuse_and_pinuse(m, v, (rsize + nb));
3976 else {
3977 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3978 set_size_and_pinuse_of_free_chunk(r, rsize);
3979 replace_dv(m, r, rsize);
3980 }
3981 return chunk2mem(v);
3982 }
3983 }
3984
3985  CORRUPTION_ERROR_ACTION(m, v);
3986  return 0;
3987}
3988
3989/* --------------------------- realloc support --------------------------- */
3990
3991static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3992 if (bytes >= MAX_REQUEST) {
3993 MALLOC_FAILURE_ACTION;
3994 return 0;
3995 }
3996 if (!PREACTION(m)) {
3997 mchunkptr oldp = mem2chunk(oldmem);
3998 size_t oldsize = chunksize(oldp);
3999 mchunkptr next = chunk_plus_offset(oldp, oldsize);
4000 mchunkptr newp = 0;
4001 void* extra = 0;
4002
4003 /* Try to either shrink or extend into top. Else malloc-copy-free */
4004
4005 if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
4006 ok_next(oldp, next) && ok_pinuse(next))) {
4007 size_t nb = request2size(bytes);
4008 if (is_mmapped(oldp))
4009 newp = mmap_resize(m, oldp, nb);
4010 else if (oldsize >= nb) { /* already big enough */
4011 size_t rsize = oldsize - nb;
4012 newp = oldp;
4013 if (rsize >= MIN_CHUNK_SIZE) {
4014 mchunkptr remainder = chunk_plus_offset(newp, nb);
4015 set_inuse(m, newp, nb);
4016 set_inuse(m, remainder, rsize);
4017 extra = chunk2mem(remainder);
4018 }
4019 }
4020 else if (next == m->top && oldsize + m->topsize > nb) {
4021 /* Expand into top */
4022 size_t newsize = oldsize + m->topsize;
4023 size_t newtopsize = newsize - nb;
4024 mchunkptr newtop = chunk_plus_offset(oldp, nb);
4025 set_inuse(m, oldp, nb);
4026 newtop->head = newtopsize |PINUSE_BIT;
4027 m->top = newtop;
4028 m->topsize = newtopsize;
4029 newp = oldp;
4030 }
4031 }
4032 else {
4033 USAGE_ERROR_ACTION(m, oldmem);
4034 POSTACTION(m);
4035 return 0;
4036 }
4037
4038 POSTACTION(m);
4039
4040 if (newp != 0) {
4041 if (extra != 0) {
4042 internal_free(m, extra);
4043 }
4044 check_inuse_chunk(m, newp);
4045 return chunk2mem(newp);
4046 }
4047 else {
4048 void* newmem = internal_malloc(m, bytes);
4049 if (newmem != 0) {
4050 size_t oc = oldsize - overhead_for(oldp);
4051 memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
4052 internal_free(m, oldmem);
4053 }
4054 return newmem;
4055 }
4056 }
4057 return 0;
4058}
4059
4060/* --------------------------- memalign support -------------------------- */
4061
4062static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
4063 if (alignment <= MALLOC_ALIGNMENT) /* Can just use malloc */
4064 return internal_malloc(m, bytes);
4065 if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
4066 alignment = MIN_CHUNK_SIZE;
4067 if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
4068 size_t a = MALLOC_ALIGNMENT << 1;
4069 while (a < alignment) a <<= 1;
4070 alignment = a;
4071 }
4072
4073  if (bytes >= MAX_REQUEST - alignment) {
4074 if (m != 0) { /* Test isn't needed but avoids compiler warning */
4075 MALLOC_FAILURE_ACTION;
4076 }
4077 }
4078 else {
4079 size_t nb = request2size(bytes);
4080 size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
4081 char* mem = (char*)internal_malloc(m, req);
4082 if (mem != 0) {
4083 void* leader = 0;
4084 void* trailer = 0;
4085 mchunkptr p = mem2chunk(mem);
4086
4087 if (PREACTION(m)) return 0;
4088 if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
4089 /*
4090 Find an aligned spot inside chunk. Since we need to give
4091 back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4092 the first calculation places us at a spot with less than
4093 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4094 We've allocated enough total room so that this is always
4095 possible.
4096 */
4097 char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
4098 alignment -
4099 SIZE_T_ONE)) &
4100 -alignment));
4101 char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
4102 br : br+alignment;
4103 mchunkptr newp = (mchunkptr)pos;
4104 size_t leadsize = pos - (char*)(p);
4105 size_t newsize = chunksize(p) - leadsize;
4106
4107 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4108 newp->prev_foot = p->prev_foot + leadsize;
4109 newp->head = (newsize|CINUSE_BIT);
4110 }
4111 else { /* Otherwise, give back leader, use the rest */
4112 set_inuse(m, newp, newsize);
4113 set_inuse(m, p, leadsize);
4114 leader = chunk2mem(p);
4115 }
4116 p = newp;
4117 }
4118
4119 /* Give back spare room at the end */
4120 if (!is_mmapped(p)) {
4121 size_t size = chunksize(p);
4122 if (size > nb + MIN_CHUNK_SIZE) {
4123 size_t remainder_size = size - nb;
4124 mchunkptr remainder = chunk_plus_offset(p, nb);
4125 set_inuse(m, p, nb);
4126 set_inuse(m, remainder, remainder_size);
4127 trailer = chunk2mem(remainder);
4128 }
4129 }
4130
4131 assert (chunksize(p) >= nb);
4132 assert((((size_t)(chunk2mem(p))) % alignment) == 0);
4133 check_inuse_chunk(m, p);
4134 POSTACTION(m);
4135 if (leader != 0) {
4136 internal_free(m, leader);
4137 }
4138 if (trailer != 0) {
4139 internal_free(m, trailer);
4140 }
4141 return chunk2mem(p);
4142 }
4143 }
4144 return 0;
4145}
4146
4147/* ------------------------ comalloc/coalloc support --------------------- */
4148
4149static void** ialloc(mstate m,
4150 size_t n_elements,
4151 size_t* sizes,
4152 int opts,
4153 void* chunks[]) {
4154 /*
4155 This provides common support for independent_X routines, handling
4156 all of the combinations that can result.
4157
4158 The opts arg has:
4159 bit 0 set if all elements are same size (using sizes[0])
4160 bit 1 set if elements should be zeroed
4161 */
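  /* (For reference: dlindependent_calloc below passes opts == 3, i.e.
      same-size and cleared elements, while dlindependent_comalloc passes
      opts == 0.) */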
4162
4163 size_t element_size; /* chunksize of each element, if all same */
4164 size_t contents_size; /* total size of elements */
4165 size_t array_size; /* request size of pointer array */
4166 void* mem; /* malloced aggregate space */
4167 mchunkptr p; /* corresponding chunk */
4168 size_t remainder_size; /* remaining bytes while splitting */
4169 void** marray; /* either "chunks" or malloced ptr array */
4170 mchunkptr array_chunk; /* chunk for malloced ptr array */
4171 flag_t was_enabled; /* to disable mmap */
4172 size_t size;
4173 size_t i;
4174
4175 /* compute array length, if needed */
4176 if (chunks != 0) {
4177 if (n_elements == 0)
4178 return chunks; /* nothing to do */
4179 marray = chunks;
4180 array_size = 0;
4181 }
4182 else {
4183 /* if empty req, must still return chunk representing empty array */
4184 if (n_elements == 0)
4185 return (void**)internal_malloc(m, 0);
4186 marray = 0;
4187 array_size = request2size(n_elements * (sizeof(void*)));
4188 }
4189
4190 /* compute total element size */
4191 if (opts & 0x1) { /* all-same-size */
4192 element_size = request2size(*sizes);
4193 contents_size = n_elements * element_size;
4194 }
4195 else { /* add up all the sizes */
4196 element_size = 0;
4197 contents_size = 0;
4198 for (i = 0; i != n_elements; ++i)
4199 contents_size += request2size(sizes[i]);
4200 }
4201
4202 size = contents_size + array_size;
4203
4204 /*
4205 Allocate the aggregate chunk. First disable direct-mmapping so
4206 malloc won't use it, since we would not be able to later
4207 free/realloc space internal to a segregated mmap region.
4208 */
4209 was_enabled = use_mmap(m);
4210 disable_mmap(m);
4211 mem = internal_malloc(m, size - CHUNK_OVERHEAD);
4212 if (was_enabled)
4213 enable_mmap(m);
4214 if (mem == 0)
4215 return 0;
4216
4217 if (PREACTION(m)) return 0;
4218 p = mem2chunk(mem);
4219 remainder_size = chunksize(p);
4220
4221 assert(!is_mmapped(p));
4222
4223 if (opts & 0x2) { /* optionally clear the elements */
4224 memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
4225 }
4226
4227 /* If not provided, allocate the pointer array as final part of chunk */
4228 if (marray == 0) {
4229 size_t array_chunk_size;
4230 array_chunk = chunk_plus_offset(p, contents_size);
4231 array_chunk_size = remainder_size - contents_size;
4232 marray = (void**) (chunk2mem(array_chunk));
4233 set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
4234 remainder_size = contents_size;
4235 }
4236
4237 /* split out elements */
4238 for (i = 0; ; ++i) {
4239 marray[i] = chunk2mem(p);
4240 if (i != n_elements-1) {
4241 if (element_size != 0)
4242 size = element_size;
4243 else
4244 size = request2size(sizes[i]);
4245 remainder_size -= size;
4246 set_size_and_pinuse_of_inuse_chunk(m, p, size);
4247 p = chunk_plus_offset(p, size);
4248 }
4249 else { /* the final element absorbs any overallocation slop */
4250 set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
4251 break;
4252 }
4253 }
4254
4255#if DEBUG
4256 if (marray != chunks) {
4257 /* final element must have exactly exhausted chunk */
4258 if (element_size != 0) {
4259 assert(remainder_size == element_size);
4260 }
4261 else {
4262 assert(remainder_size == request2size(sizes[i]));
4263 }
4264 check_inuse_chunk(m, mem2chunk(marray));
4265 }
4266 for (i = 0; i != n_elements; ++i)
4267 check_inuse_chunk(m, mem2chunk(marray[i]));
4268
4269#endif /* DEBUG */
4270
4271 POSTACTION(m);
4272 return marray;
4273}
4274
4275
4276/* -------------------------- public routines ---------------------------- */
4277
4278#if !ONLY_MSPACES
4279
4280void* dlmalloc(size_t bytes) {
4281 /*
4282 Basic algorithm:
4283 If a small request (< 256 bytes minus per-chunk overhead):
4284 1. If one exists, use a remainderless chunk in associated smallbin.
4285 (Remainderless means that there are too few excess bytes to
4286 represent as a chunk.)
4287 2. If it is big enough, use the dv chunk, which is normally the
4288 chunk adjacent to the one used for the most recent small request.
4289 3. If one exists, split the smallest available chunk in a bin,
4290 saving remainder in dv.
4291 4. If it is big enough, use the top chunk.
4292 5. If available, get memory from system and use it
4293 Otherwise, for a large request:
4294 1. Find the smallest available binned chunk that fits, and use it
4295 if it is better fitting than dv chunk, splitting if necessary.
4296 2. If better fitting than any binned chunk, use the dv chunk.
4297 3. If it is big enough, use the top chunk.
4298 4. If request size >= mmap threshold, try to directly mmap this chunk.
4299 5. If available, get memory from system and use it
4300
4301 The ugly goto's here ensure that postaction occurs along all paths.
4302 */
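  /*
   * Worked example of the padding step, assuming 4-byte size_t, 8-byte
   * MALLOC_ALIGNMENT and the default !FOOTERS overhead of one size_t per
   * chunk: a 20-byte request pads to nb = (20 + 4 + 7) & ~7 = 24, which
   * maps to smallbin index 24 >> 3 = 3, so the code below first looks for
   * a remainderless fit in bin 3 or 4 before trying the later steps listed
   * above.
   */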
4303
4304 if (!PREACTION(gm)) {
4305 void* mem;
4306 size_t nb;
4307 if (bytes <= MAX_SMALL_REQUEST) {
4308 bindex_t idx;
4309 binmap_t smallbits;
4310 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4311 idx = small_index(nb);
4312 smallbits = gm->smallmap >> idx;
4313
4314 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4315 mchunkptr b, p;
4316 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4317 b = smallbin_at(gm, idx);
4318 p = b->fd;
4319 assert(chunksize(p) == small_index2size(idx));
4320 unlink_first_small_chunk(gm, b, p, idx);
4321 set_inuse_and_pinuse(gm, p, small_index2size(idx));
4322 mem = chunk2mem(p);
4323 check_malloced_chunk(gm, mem, nb);
4324 goto postaction;
4325 }
4326
4327 else if (nb > gm->dvsize) {
4328 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4329 mchunkptr b, p, r;
4330 size_t rsize;
4331 bindex_t i;
4332 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4333 binmap_t leastbit = least_bit(leftbits);
4334 compute_bit2idx(leastbit, i);
4335 b = smallbin_at(gm, i);
4336 p = b->fd;
4337 assert(chunksize(p) == small_index2size(i));
4338 unlink_first_small_chunk(gm, b, p, i);
4339 rsize = small_index2size(i) - nb;
4340 /* Fit here cannot be remainderless if 4byte sizes */
4341 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4342 set_inuse_and_pinuse(gm, p, small_index2size(i));
4343 else {
4344 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4345 r = chunk_plus_offset(p, nb);
4346 set_size_and_pinuse_of_free_chunk(r, rsize);
4347 replace_dv(gm, r, rsize);
4348 }
4349 mem = chunk2mem(p);
4350 check_malloced_chunk(gm, mem, nb);
4351 goto postaction;
4352 }
4353
4354 else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4355 check_malloced_chunk(gm, mem, nb);
4356 goto postaction;
4357 }
4358 }
4359 }
4360 else if (bytes >= MAX_REQUEST)
4361 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4362 else {
4363 nb = pad_request(bytes);
4364 if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4365 check_malloced_chunk(gm, mem, nb);
4366 goto postaction;
4367 }
4368 }
4369
4370 if (nb <= gm->dvsize) {
4371 size_t rsize = gm->dvsize - nb;
4372 mchunkptr p = gm->dv;
4373 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4374 mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4375 gm->dvsize = rsize;
4376 set_size_and_pinuse_of_free_chunk(r, rsize);
4377 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4378 }
4379 else { /* exhaust dv */
4380 size_t dvs = gm->dvsize;
4381 gm->dvsize = 0;
4382 gm->dv = 0;
4383 set_inuse_and_pinuse(gm, p, dvs);
4384 }
4385 mem = chunk2mem(p);
4386 check_malloced_chunk(gm, mem, nb);
4387 goto postaction;
4388 }
4389
4390 else if (nb < gm->topsize) { /* Split top */
4391 size_t rsize = gm->topsize -= nb;
4392 mchunkptr p = gm->top;
4393 mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4394 r->head = rsize | PINUSE_BIT;
4395 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4396 mem = chunk2mem(p);
4397 check_top_chunk(gm, gm->top);
4398 check_malloced_chunk(gm, mem, nb);
4399 goto postaction;
4400 }
4401
4402 mem = sys_alloc(gm, nb);
4403
4404 postaction:
4405 POSTACTION(gm);
4406 return mem;
4407 }
4408
4409 return 0;
4410}
4411
4412void dlfree(void* mem) {
4413 /*
4414     Consolidate freed chunks with preceding or succeeding bordering
4415 free chunks, if they exist, and then place in a bin. Intermixed
4416 with special cases for top, dv, mmapped chunks, and usage errors.
4417 */
4418
4419 if (mem != 0) {
4420 mchunkptr p = mem2chunk(mem);
4421#if FOOTERS
4422 mstate fm = get_mstate_for(p);
4423 if (!ok_magic(fm)) {
4424 USAGE_ERROR_ACTION(fm, p);
4425 return;
4426 }
4427#else /* FOOTERS */
4428#define fm gm
4429#endif /* FOOTERS */
4430 if (!PREACTION(fm)) {
4431 check_inuse_chunk(fm, p);
4432 if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4433 size_t psize = chunksize(p);
4434 mchunkptr next = chunk_plus_offset(p, psize);
4435 if (!pinuse(p)) {
4436 size_t prevsize = p->prev_foot;
4437 if ((prevsize & IS_MMAPPED_BIT) != 0) {
4438 prevsize &= ~IS_MMAPPED_BIT;
4439 psize += prevsize + MMAP_FOOT_PAD;
4440 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4441 fm->footprint -= psize;
4442 goto postaction;
4443 }
4444 else {
4445 mchunkptr prev = chunk_minus_offset(p, prevsize);
4446 psize += prevsize;
4447 p = prev;
4448 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4449 if (p != fm->dv) {
4450 unlink_chunk(fm, p, prevsize);
4451 }
4452 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4453 fm->dvsize = psize;
4454 set_free_with_pinuse(p, psize, next);
4455 goto postaction;
4456 }
4457 }
4458 else
4459 goto erroraction;
4460 }
4461 }
4462
4463 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4464 if (!cinuse(next)) { /* consolidate forward */
4465 if (next == fm->top) {
4466 size_t tsize = fm->topsize += psize;
4467 fm->top = p;
4468 p->head = tsize | PINUSE_BIT;
4469 if (p == fm->dv) {
4470 fm->dv = 0;
4471 fm->dvsize = 0;
4472 }
4473 if (should_trim(fm, tsize))
4474 sys_trim(fm, 0);
4475 goto postaction;
4476 }
4477 else if (next == fm->dv) {
4478 size_t dsize = fm->dvsize += psize;
4479 fm->dv = p;
4480 set_size_and_pinuse_of_free_chunk(p, dsize);
4481 goto postaction;
4482 }
4483 else {
4484 size_t nsize = chunksize(next);
4485 psize += nsize;
4486 unlink_chunk(fm, next, nsize);
4487 set_size_and_pinuse_of_free_chunk(p, psize);
4488 if (p == fm->dv) {
4489 fm->dvsize = psize;
4490 goto postaction;
4491 }
4492 }
4493 }
4494 else
4495 set_free_with_pinuse(p, psize, next);
4496 insert_chunk(fm, p, psize);
4497 check_free_chunk(fm, p);
4498 goto postaction;
4499 }
4500 }
4501 erroraction:
4502 USAGE_ERROR_ACTION(fm, p);
4503 postaction:
4504 POSTACTION(fm);
4505 }
4506 }
4507#if !FOOTERS
4508#undef fm
4509#endif /* FOOTERS */
4510}
4511
4512void* dlcalloc(size_t n_elements, size_t elem_size) {
4513 void *mem;
4514 if (n_elements && MAX_SIZE_T / n_elements < elem_size) {
4515 /* Fail on overflow */
4516 MALLOC_FAILURE_ACTION;
4517 return NULL;
4518 }
4519 elem_size *= n_elements;
4520 mem = dlmalloc(elem_size);
4521 if (mem && calloc_must_clear(mem2chunk(mem)))
4522 memset(mem, 0, elem_size);
4523 return mem;
4524}
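/*
 * Sketch of the overflow guard above on a 32-bit size_t (illustrative
 * numbers): dlcalloc(0x20000000, 8) would multiply to 0x100000000, which
 * truncates to 0; since MAX_SIZE_T / 0x20000000 == 7 < 8, the guard fires
 * and the call fails via MALLOC_FAILURE_ACTION instead of returning an
 * undersized block.
 */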
4525
4526void* dlrealloc(void* oldmem, size_t bytes) {
4527 if (oldmem == 0)
4528 return dlmalloc(bytes);
4529#ifdef REALLOC_ZERO_BYTES_FREES
4530 if (bytes == 0) {
4531 dlfree(oldmem);
4532 return 0;
4533 }
4534#endif /* REALLOC_ZERO_BYTES_FREES */
4535 else {
4536#if ! FOOTERS
4537 mstate m = gm;
4538#else /* FOOTERS */
4539 mstate m = get_mstate_for(mem2chunk(oldmem));
4540 if (!ok_magic(m)) {
4541 USAGE_ERROR_ACTION(m, oldmem);
4542 return 0;
4543 }
4544#endif /* FOOTERS */
4545 return internal_realloc(m, oldmem, bytes);
4546 }
4547}
4548
4549void* dlmemalign(size_t alignment, size_t bytes) {
4550 return internal_memalign(gm, alignment, bytes);
4551}
4552
4553int posix_memalign(void **memptr, size_t alignment, size_t size) {
4554 int ret = 0;
4555
4556 *memptr = dlmemalign(alignment, size);
4557
4558 if (*memptr == 0) {
4559 ret = ENOMEM;
4560 }
4561
4562 return ret;
4563}
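/*
 * Illustrative use of the wrapper above.  Alignment handling follows
 * internal_memalign: values <= MALLOC_ALIGNMENT degrade to plain malloc
 * and non-powers-of-two are rounded up to the next power of two, so
 * unlike strict POSIX no EINVAL is ever returned here.
 *
 *   void *buf;
 *   if (posix_memalign(&buf, 64, 1024) == 0) {
 *       // buf is at least 64-byte aligned; release with the matching free
 *   }
 */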
4564
4565void** dlindependent_calloc(size_t n_elements, size_t elem_size,
4566 void* chunks[]) {
4567 size_t sz = elem_size; /* serves as 1-element array */
4568 return ialloc(gm, n_elements, &sz, 3, chunks);
4569}
4570
4571void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
4572 void* chunks[]) {
4573 return ialloc(gm, n_elements, sizes, 0, chunks);
4574}
4575
4576void* dlvalloc(size_t bytes) {
4577 size_t pagesz;
4578 init_mparams();
4579 pagesz = mparams.page_size;
4580 return dlmemalign(pagesz, bytes);
4581}
4582
4583void* dlpvalloc(size_t bytes) {
4584 size_t pagesz;
4585 init_mparams();
4586 pagesz = mparams.page_size;
4587 return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
4588}
4589
4590int dlmalloc_trim(size_t pad) {
4591 int result = 0;
4592 if (!PREACTION(gm)) {
4593 result = sys_trim(gm, pad);
4594 POSTACTION(gm);
4595 }
4596 return result;
4597}
4598
4599size_t dlmalloc_footprint(void) {
4600 return gm->footprint;
4601}
4602
4603#if USE_MAX_ALLOWED_FOOTPRINT
4604size_t dlmalloc_max_allowed_footprint(void) {
4605 return gm->max_allowed_footprint;
4606}
4607
4608void dlmalloc_set_max_allowed_footprint(size_t bytes) {
4609 if (bytes > gm->footprint) {
4610 /* Increase the size in multiples of the granularity,
4611 * which is the smallest unit we request from the system.
4612 */
4613 gm->max_allowed_footprint = gm->footprint +
4614 granularity_align(bytes - gm->footprint);
4615 }
4616 else {
4617 //TODO: allow for reducing the max footprint
4618 gm->max_allowed_footprint = gm->footprint;
4619 }
4620}
4621#endif
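/*
  Example (not part of this file): a brief sketch of how the optional
  footprint cap above might be used, assuming the library was built with
  USE_MAX_ALLOWED_FOOTPRINT. The 8MB figure is illustrative; the cap is
  rounded up in granularity units relative to the current footprint, so
  the effective limit may be somewhat larger than requested.

  void cap_heap_growth(void)
  {
    // Ask the allocator not to request more than roughly 8MB from the system.
    dlmalloc_set_max_allowed_footprint(8 * 1024 * 1024);

    // Read back the (granularity-aligned) cap actually in effect.
    size_t cap = dlmalloc_max_allowed_footprint();
    (void)cap;
  }
*/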
4622
4623size_t dlmalloc_max_footprint(void) {
4624 return gm->max_footprint;
4625}
4626
4627#if !NO_MALLINFO
4628struct mallinfo dlmallinfo(void) {
4629 return internal_mallinfo(gm);
4630}
4631#endif /* NO_MALLINFO */
4632
4633void dlmalloc_stats() {
4634 internal_malloc_stats(gm);
4635}
4636
4637size_t dlmalloc_usable_size(void* mem) {
4638 if (mem != 0) {
4639 mchunkptr p = mem2chunk(mem);
4640 if (cinuse(p))
4641 return chunksize(p) - overhead_for(p);
4642 }
4643 return 0;
4644}
4645
4646int dlmallopt(int param_number, int value) {
4647 return change_mparam(param_number, value);
4648}
4649
4650#endif /* !ONLY_MSPACES */
4651
4652/* ----------------------------- user mspaces ---------------------------- */
4653
4654#if MSPACES
4655
4656static mstate init_user_mstate(char* tbase, size_t tsize) {
4657 size_t msize = pad_request(sizeof(struct malloc_state));
4658 mchunkptr mn;
4659 mchunkptr msp = align_as_chunk(tbase);
4660 mstate m = (mstate)(chunk2mem(msp));
4661 memset(m, 0, msize);
4662 INITIAL_LOCK(&m->mutex);
4663 msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
4664 m->seg.base = m->least_addr = tbase;
4665 m->seg.size = m->footprint = m->max_footprint = tsize;
4666#if USE_MAX_ALLOWED_FOOTPRINT
4667 m->max_allowed_footprint = MAX_SIZE_T;
4668#endif
4669 m->magic = mparams.magic;
4670 m->mflags = mparams.default_mflags;
4671 disable_contiguous(m);
4672 init_bins(m);
4673 mn = next_chunk(mem2chunk(m));
4674 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
4675 check_top_chunk(m, m->top);
4676 return m;
4677}
4678
4679mspace create_mspace(size_t capacity, int locked) {
4680 mstate m = 0;
4681 size_t msize = pad_request(sizeof(struct malloc_state));
4682 init_mparams(); /* Ensure pagesize etc initialized */
4683
4684 if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4685 size_t rs = ((capacity == 0)? mparams.granularity :
4686 (capacity + TOP_FOOT_SIZE + msize));
4687 size_t tsize = granularity_align(rs);
4688 char* tbase = (char*)(CALL_MMAP(tsize));
4689 if (tbase != CMFAIL) {
4690 m = init_user_mstate(tbase, tsize);
4691 m->seg.sflags = IS_MMAPPED_BIT;
4692 set_lock(m, locked);
4693 }
4694 }
4695 return (mspace)m;
4696}
4697
4698mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
4699 mstate m = 0;
4700 size_t msize = pad_request(sizeof(struct malloc_state));
4701 init_mparams(); /* Ensure pagesize etc initialized */
4702
4703 if (capacity > msize + TOP_FOOT_SIZE &&
4704 capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4705 m = init_user_mstate((char*)base, capacity);
4706 m->seg.sflags = EXTERN_BIT;
4707 set_lock(m, locked);
4708 }
4709 return (mspace)m;
4710}
4711
4712size_t destroy_mspace(mspace msp) {
4713 size_t freed = 0;
4714 mstate ms = (mstate)msp;
4715 if (ok_magic(ms)) {
4716 msegmentptr sp = &ms->seg;
4717 while (sp != 0) {
4718 char* base = sp->base;
4719 size_t size = sp->size;
4720 flag_t flag = sp->sflags;
4721 sp = sp->next;
4722 if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
4723 CALL_MUNMAP(base, size) == 0)
4724 freed += size;
4725 }
4726 }
4727 else {
4728 USAGE_ERROR_ACTION(ms,ms);
4729 }
4730 return freed;
4731}
4732
4733/*
4734 mspace versions of routines are near-clones of the global
4735 versions. This is not so nice but better than the alternatives.
4736*/
4737
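/*
  Example (not part of this file): a minimal sketch of typical mspace
  usage, assuming this malloc is built with MSPACES enabled. It creates
  a private, locked mspace, allocates from it, and tears it down. The
  1MB capacity and the function name are illustrative only.

  void mspace_example(void)
  {
    // Carve out a private heap; a capacity of 0 would use the default granularity.
    mspace msp = create_mspace(1024 * 1024, 1);   // second arg nonzero: use locks

    // Allocate and free within that heap, never touching the global state gm.
    void* p = mspace_malloc(msp, 128);
    void* q = mspace_calloc(msp, 16, sizeof(int));
    mspace_free(msp, p);
    mspace_free(msp, q);

    // Releases all mmapped segments backing the space in one call.
    destroy_mspace(msp);
  }
*/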
4738
4739void* mspace_malloc(mspace msp, size_t bytes) {
4740 mstate ms = (mstate)msp;
4741 if (!ok_magic(ms)) {
4742 USAGE_ERROR_ACTION(ms,ms);
4743 return 0;
4744 }
4745 if (!PREACTION(ms)) {
4746 void* mem;
4747 size_t nb;
4748 if (bytes <= MAX_SMALL_REQUEST) {
4749 bindex_t idx;
4750 binmap_t smallbits;
4751 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4752 idx = small_index(nb);
4753 smallbits = ms->smallmap >> idx;
4754
4755 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4756 mchunkptr b, p;
4757 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4758 b = smallbin_at(ms, idx);
4759 p = b->fd;
4760 assert(chunksize(p) == small_index2size(idx));
4761 unlink_first_small_chunk(ms, b, p, idx);
4762 set_inuse_and_pinuse(ms, p, small_index2size(idx));
4763 mem = chunk2mem(p);
4764 check_malloced_chunk(ms, mem, nb);
4765 goto postaction;
4766 }
4767
4768 else if (nb > ms->dvsize) {
4769 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4770 mchunkptr b, p, r;
4771 size_t rsize;
4772 bindex_t i;
4773 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4774 binmap_t leastbit = least_bit(leftbits);
4775 compute_bit2idx(leastbit, i);
4776 b = smallbin_at(ms, i);
4777 p = b->fd;
4778 assert(chunksize(p) == small_index2size(i));
4779 unlink_first_small_chunk(ms, b, p, i);
4780 rsize = small_index2size(i) - nb;
4781 /* Fit here cannot be remainderless if 4byte sizes */
4782 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4783 set_inuse_and_pinuse(ms, p, small_index2size(i));
4784 else {
4785 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4786 r = chunk_plus_offset(p, nb);
4787 set_size_and_pinuse_of_free_chunk(r, rsize);
4788 replace_dv(ms, r, rsize);
4789 }
4790 mem = chunk2mem(p);
4791 check_malloced_chunk(ms, mem, nb);
4792 goto postaction;
4793 }
4794
4795 else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
4796 check_malloced_chunk(ms, mem, nb);
4797 goto postaction;
4798 }
4799 }
4800 }
4801 else if (bytes >= MAX_REQUEST)
4802 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4803 else {
4804 nb = pad_request(bytes);
4805 if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
4806 check_malloced_chunk(ms, mem, nb);
4807 goto postaction;
4808 }
4809 }
4810
4811 if (nb <= ms->dvsize) {
4812 size_t rsize = ms->dvsize - nb;
4813 mchunkptr p = ms->dv;
4814 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4815 mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
4816 ms->dvsize = rsize;
4817 set_size_and_pinuse_of_free_chunk(r, rsize);
4818 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4819 }
4820 else { /* exhaust dv */
4821 size_t dvs = ms->dvsize;
4822 ms->dvsize = 0;
4823 ms->dv = 0;
4824 set_inuse_and_pinuse(ms, p, dvs);
4825 }
4826 mem = chunk2mem(p);
4827 check_malloced_chunk(ms, mem, nb);
4828 goto postaction;
4829 }
4830
4831 else if (nb < ms->topsize) { /* Split top */
4832 size_t rsize = ms->topsize -= nb;
4833 mchunkptr p = ms->top;
4834 mchunkptr r = ms->top = chunk_plus_offset(p, nb);
4835 r->head = rsize | PINUSE_BIT;
4836 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4837 mem = chunk2mem(p);
4838 check_top_chunk(ms, ms->top);
4839 check_malloced_chunk(ms, mem, nb);
4840 goto postaction;
4841 }
4842
4843 mem = sys_alloc(ms, nb);
4844
4845 postaction:
4846 POSTACTION(ms);
4847 return mem;
4848 }
4849
4850 return 0;
4851}
4852
4853void mspace_free(mspace msp, void* mem) {
4854 if (mem != 0) {
4855 mchunkptr p = mem2chunk(mem);
4856#if FOOTERS
4857 mstate fm = get_mstate_for(p);
4858#else /* FOOTERS */
4859 mstate fm = (mstate)msp;
4860#endif /* FOOTERS */
4861 if (!ok_magic(fm)) {
4862 USAGE_ERROR_ACTION(fm, p);
4863 return;
4864 }
4865 if (!PREACTION(fm)) {
4866 check_inuse_chunk(fm, p);
4867 if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4868 size_t psize = chunksize(p);
4869 mchunkptr next = chunk_plus_offset(p, psize);
4870 if (!pinuse(p)) {
4871 size_t prevsize = p->prev_foot;
4872 if ((prevsize & IS_MMAPPED_BIT) != 0) {
4873 prevsize &= ~IS_MMAPPED_BIT;
4874 psize += prevsize + MMAP_FOOT_PAD;
4875 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4876 fm->footprint -= psize;
4877 goto postaction;
4878 }
4879 else {
4880 mchunkptr prev = chunk_minus_offset(p, prevsize);
4881 psize += prevsize;
4882 p = prev;
4883 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4884 if (p != fm->dv) {
4885 unlink_chunk(fm, p, prevsize);
4886 }
4887 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4888 fm->dvsize = psize;
4889 set_free_with_pinuse(p, psize, next);
4890 goto postaction;
4891 }
4892 }
4893 else
4894 goto erroraction;
4895 }
4896 }
4897
4898 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4899 if (!cinuse(next)) { /* consolidate forward */
4900 if (next == fm->top) {
4901 size_t tsize = fm->topsize += psize;
4902 fm->top = p;
4903 p->head = tsize | PINUSE_BIT;
4904 if (p == fm->dv) {
4905 fm->dv = 0;
4906 fm->dvsize = 0;
4907 }
4908 if (should_trim(fm, tsize))
4909 sys_trim(fm, 0);
4910 goto postaction;
4911 }
4912 else if (next == fm->dv) {
4913 size_t dsize = fm->dvsize += psize;
4914 fm->dv = p;
4915 set_size_and_pinuse_of_free_chunk(p, dsize);
4916 goto postaction;
4917 }
4918 else {
4919 size_t nsize = chunksize(next);
4920 psize += nsize;
4921 unlink_chunk(fm, next, nsize);
4922 set_size_and_pinuse_of_free_chunk(p, psize);
4923 if (p == fm->dv) {
4924 fm->dvsize = psize;
4925 goto postaction;
4926 }
4927 }
4928 }
4929 else
4930 set_free_with_pinuse(p, psize, next);
4931 insert_chunk(fm, p, psize);
4932 check_free_chunk(fm, p);
4933 goto postaction;
4934 }
4935 }
4936 erroraction:
4937 USAGE_ERROR_ACTION(fm, p);
4938 postaction:
4939 POSTACTION(fm);
4940 }
4941 }
4942}
4943
4944void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
4945 void *mem;
4946 mstate ms = (mstate)msp;
4947 if (!ok_magic(ms)) {
4948 USAGE_ERROR_ACTION(ms,ms);
4949 return 0;
4950 }
4951 if (n_elements && MAX_SIZE_T / n_elements < elem_size) {
4952 /* Fail on overflow */
4953 MALLOC_FAILURE_ACTION;
4954 return NULL;
4955 }
4956 elem_size *= n_elements;
4957 mem = internal_malloc(ms, elem_size);
4958 if (mem && calloc_must_clear(mem2chunk(mem)))
4959 memset(mem, 0, elem_size);
4960 return mem;
4961}
4962
4963void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
4964 if (oldmem == 0)
4965 return mspace_malloc(msp, bytes);
4966#ifdef REALLOC_ZERO_BYTES_FREES
4967 if (bytes == 0) {
4968 mspace_free(msp, oldmem);
4969 return 0;
4970 }
4971#endif /* REALLOC_ZERO_BYTES_FREES */
4972 else {
4973#if FOOTERS
4974 mchunkptr p = mem2chunk(oldmem);
4975 mstate ms = get_mstate_for(p);
4976#else /* FOOTERS */
4977 mstate ms = (mstate)msp;
4978#endif /* FOOTERS */
4979 if (!ok_magic(ms)) {
4980 USAGE_ERROR_ACTION(ms,ms);
4981 return 0;
4982 }
4983 return internal_realloc(ms, oldmem, bytes);
4984 }
4985}
4986
4987#if ANDROID
4988void* mspace_merge_objects(mspace msp, void* mema, void* memb)
4989{
4990 /* PREACTION/POSTACTION aren't necessary because we are only
4991 modifying fields of inuse chunks owned by the current thread, in
4992 which case no other malloc operations can touch them.
4993 */
4994 if (mema == NULL || memb == NULL) {
4995 return NULL;
4996 }
4997 mchunkptr pa = mem2chunk(mema);
4998 mchunkptr pb = mem2chunk(memb);
4999
5000#if FOOTERS
5001 mstate fm = get_mstate_for(pa);
5002#else /* FOOTERS */
5003 mstate fm = (mstate)msp;
5004#endif /* FOOTERS */
5005 if (!ok_magic(fm)) {
5006 USAGE_ERROR_ACTION(fm, pa);
5007 return NULL;
5008 }
5009 check_inuse_chunk(fm, pa);
5010 if (RTCHECK(ok_address(fm, pa) && ok_cinuse(pa))) {
5011 if (next_chunk(pa) != pb) {
5012 /* Since pb may not be in fm, we can't check ok_address(fm, pb);
5013 since ok_cinuse(pb) would be unsafe before an address check,
5014 return NULL rather than invoke USAGE_ERROR_ACTION if pb is not
5015 in use or is a bogus address.
5016 */
5017 return NULL;
5018 }
5019 /* Since b follows a, they share the mspace. */
5020#if FOOTERS
5021 assert(fm == get_mstate_for(pb));
5022#endif /* FOOTERS */
5023 check_inuse_chunk(fm, pb);
5024 if (RTCHECK(ok_address(fm, pb) && ok_cinuse(pb))) {
5025 size_t sz = chunksize(pb);
5026 pa->head += sz;
5027 /* Make sure pa still passes. */
5028 check_inuse_chunk(fm, pa);
5029 return mema;
5030 }
5031 else {
5032 USAGE_ERROR_ACTION(fm, pb);
5033 return NULL;
5034 }
5035 }
5036 else {
5037 USAGE_ERROR_ACTION(fm, pa);
5038 return NULL;
5039 }
5040}
5041#endif /* ANDROID */
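/*
  Example (not part of this file): a hedged sketch of how
  mspace_merge_objects might be called. The merge only succeeds when
  memb's chunk immediately follows mema's chunk in the same mspace,
  which the caller has to arrange; the sizes below are illustrative and
  two consecutive allocations may or may not end up adjacent.

  void merge_example(mspace msp)
  {
    void* a = mspace_malloc(msp, 64);
    void* b = mspace_malloc(msp, 64);   // may or may not directly follow a

    // Returns a with the combined usable size if b directly follows a,
    // or NULL if the chunks are not adjacent (b then remains valid).
    void* merged = mspace_merge_objects(msp, a, b);
    if (merged == NULL) {
      mspace_free(msp, b);              // merge failed: b is still separate
    }
    mspace_free(msp, a);                // frees the combined chunk if merged
  }
*/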
5042
5043void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
5044 mstate ms = (mstate)msp;
5045 if (!ok_magic(ms)) {
5046 USAGE_ERROR_ACTION(ms,ms);
5047 return 0;
5048 }
5049 return internal_memalign(ms, alignment, bytes);
5050}
5051
5052void** mspace_independent_calloc(mspace msp, size_t n_elements,
5053 size_t elem_size, void* chunks[]) {
5054 size_t sz = elem_size; /* serves as 1-element array */
5055 mstate ms = (mstate)msp;
5056 if (!ok_magic(ms)) {
5057 USAGE_ERROR_ACTION(ms,ms);
5058 return 0;
5059 }
5060 return ialloc(ms, n_elements, &sz, 3, chunks);
5061}
5062
5063void** mspace_independent_comalloc(mspace msp, size_t n_elements,
5064 size_t sizes[], void* chunks[]) {
5065 mstate ms = (mstate)msp;
5066 if (!ok_magic(ms)) {
5067 USAGE_ERROR_ACTION(ms,ms);
5068 return 0;
5069 }
5070 return ialloc(ms, n_elements, sizes, 0, chunks);
5071}
5072
5073int mspace_trim(mspace msp, size_t pad) {
5074 int result = 0;
5075 mstate ms = (mstate)msp;
5076 if (ok_magic(ms)) {
5077 if (!PREACTION(ms)) {
5078 result = sys_trim(ms, pad);
5079 POSTACTION(ms);
5080 }
5081 }
5082 else {
5083 USAGE_ERROR_ACTION(ms,ms);
5084 }
5085 return result;
5086}
5087
5088void mspace_malloc_stats(mspace msp) {
5089 mstate ms = (mstate)msp;
5090 if (ok_magic(ms)) {
5091 internal_malloc_stats(ms);
5092 }
5093 else {
5094 USAGE_ERROR_ACTION(ms,ms);
5095 }
5096}
5097
5098size_t mspace_footprint(mspace msp) {
5099 size_t result = 0;
5100 mstate ms = (mstate)msp;
5101 if (ok_magic(ms)) {
5102 result = ms->footprint;
5103 }
5104 else {
5105 USAGE_ERROR_ACTION(ms,ms);
5106 }
5107 return result;
5108}
5109
5110#if USE_MAX_ALLOWED_FOOTPRINT
5111size_t mspace_max_allowed_footprint(mspace msp) {
5112 size_t result = 0;
5113 mstate ms = (mstate)msp;
5114 if (ok_magic(ms)) {
5115 result = ms->max_allowed_footprint;
5116 }
5117 else {
5118 USAGE_ERROR_ACTION(ms,ms);
5119 }
5120 return result;
5121}
5122
5123void mspace_set_max_allowed_footprint(mspace msp, size_t bytes) {
5124 mstate ms = (mstate)msp;
5125 if (ok_magic(ms)) {
5126 if (bytes > ms->footprint) {
5127 /* Increase the size in multiples of the granularity,
5128 * which is the smallest unit we request from the system.
5129 */
5130 ms->max_allowed_footprint = ms->footprint +
5131 granularity_align(bytes - ms->footprint);
5132 }
5133 else {
5134 //TODO: allow for reducing the max footprint
5135 ms->max_allowed_footprint = ms->footprint;
5136 }
5137 }
5138 else {
5139 USAGE_ERROR_ACTION(ms,ms);
5140 }
5141}
5142#endif
5143
5144size_t mspace_max_footprint(mspace msp) {
5145 size_t result = 0;
5146 mstate ms = (mstate)msp;
5147 if (ok_magic(ms)) {
5148 result = ms->max_footprint;
5149 }
5150 else {
5151 USAGE_ERROR_ACTION(ms,ms);
5152 }
5153 return result;
5154}
5155
5156
5157#if !NO_MALLINFO
5158struct mallinfo mspace_mallinfo(mspace msp) {
5159 mstate ms = (mstate)msp;
5160 if (!ok_magic(ms)) {
5161 USAGE_ERROR_ACTION(ms,ms);
5162 }
5163 return internal_mallinfo(ms);
5164}
5165#endif /* NO_MALLINFO */
5166
5167int mspace_mallopt(int param_number, int value) {
5168 return change_mparam(param_number, value);
5169}
5170
5171#endif /* MSPACES */
5172
5173#if MSPACES && ONLY_MSPACES
5174void mspace_walk_free_pages(mspace msp,
5175 void(*handler)(void *start, void *end, void *arg), void *harg)
5176{
5177 mstate m = (mstate)msp;
5178 if (!ok_magic(m)) {
5179 USAGE_ERROR_ACTION(m,m);
5180 return;
5181 }
5182#else
5183void dlmalloc_walk_free_pages(void(*handler)(void *start, void *end, void *arg),
5184 void *harg)
5185{
5186 mstate m = (mstate)gm;
5187#endif
5188 if (!PREACTION(m)) {
5189 if (is_initialized(m)) {
5190 msegmentptr s = &m->seg;
5191 while (s != 0) {
5192 mchunkptr p = align_as_chunk(s->base);
5193 while (segment_holds(s, p) &&
5194 p != m->top && p->head != FENCEPOST_HEAD) {
5195 void *chunkptr, *userptr;
5196 size_t chunklen, userlen;
5197 chunkptr = p;
5198 chunklen = chunksize(p);
5199 if (!cinuse(p)) {
5200 void *start;
5201 if (is_small(chunklen)) {
5202 start = (void *)(p + 1);
5203 }
5204 else {
5205 start = (void *)((tchunkptr)p + 1);
5206 }
5207 handler(start, next_chunk(p), harg);
5208 }
5209 p = next_chunk(p);
5210 }
5211 if (p == m->top) {
5212 handler((void *)(p + 1), next_chunk(p), harg);
5213 }
5214 s = s->next;
5215 }
5216 }
5217 POSTACTION(m);
5218 }
5219}
5220
5221
5222#if MSPACES && ONLY_MSPACES
5223void mspace_walk_heap(mspace msp,
5224 void(*handler)(const void *chunkptr, size_t chunklen,
5225 const void *userptr, size_t userlen,
5226 void *arg),
5227 void *harg)
5228{
5229 msegmentptr s;
5230 mstate m = (mstate)msp;
5231 if (!ok_magic(m)) {
5232 USAGE_ERROR_ACTION(m,m);
5233 return;
5234 }
5235#else
5236void dlmalloc_walk_heap(void(*handler)(const void *chunkptr, size_t chunklen,
5237 const void *userptr, size_t userlen,
5238 void *arg),
5239 void *harg)
5240{
5241 msegmentptr s;
5242 mstate m = (mstate)gm;
5243#endif
5244
5245 s = &m->seg;
5246 while (s != 0) {
5247 mchunkptr p = align_as_chunk(s->base);
5248 while (segment_holds(s, p) &&
5249 p != m->top && p->head != FENCEPOST_HEAD) {
5250 void *chunkptr, *userptr;
5251 size_t chunklen, userlen;
5252 chunkptr = p;
5253 chunklen = chunksize(p);
5254 if (cinuse(p)) {
5255 userptr = chunk2mem(p);
5256 userlen = chunklen - overhead_for(p);
5257 }
5258 else {
5259 userptr = NULL;
5260 userlen = 0;
5261 }
5262 handler(chunkptr, chunklen, userptr, userlen, harg);
5263 p = next_chunk(p);
5264 }
5265 if (p == m->top) {
5266 /* The top chunk is just a big free chunk for our purposes.
5267 */
5268 handler(m->top, m->topsize, NULL, 0, harg);
5269 }
5270 s = s->next;
5271 }
5272}
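/*
  Example (not part of this file): a minimal sketch of a heap-walk
  handler matching the callback signature above. It tallies in-use
  versus free bytes; the struct and function names are illustrative,
  and with ONLY_MSPACES the call would be mspace_walk_heap(msp, ...)
  instead of dlmalloc_walk_heap.

  #include <stdio.h>

  struct walk_totals { size_t used; size_t free; };

  static void count_chunks(const void* chunkptr, size_t chunklen,
                           const void* userptr, size_t userlen, void* arg)
  {
    struct walk_totals* t = (struct walk_totals*)arg;
    (void)chunkptr; (void)userlen;
    if (userptr != NULL)      // in-use chunk: userptr/userlen describe the payload
      t->used += chunklen;
    else                      // free chunk (including the top chunk)
      t->free += chunklen;
  }

  void print_heap_totals(void)
  {
    struct walk_totals t = { 0, 0 };
    dlmalloc_walk_heap(count_chunks, &t);
    printf("in use: %zu bytes, free: %zu bytes\n", t.used, t.free);
  }
*/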
5273
5274/* -------------------- Alternative MORECORE functions ------------------- */
5275
5276/*
5277 Guidelines for creating a custom version of MORECORE:
5278
5279 * For best performance, MORECORE should allocate in multiples of pagesize.
5280 * MORECORE may allocate more memory than requested. (Or even less,
5281 but this will usually result in a malloc failure.)
5282 * MORECORE must not allocate memory when given argument zero, but
5283 instead return one past the end address of memory from previous
5284 nonzero call.
5285 * For best performance, consecutive calls to MORECORE with positive
5286 arguments should return increasing addresses, indicating that
5287 space has been contiguously extended.
5288 * Even though consecutive calls to MORECORE need not return contiguous
5289 addresses, it must be OK for malloc'ed chunks to span multiple
5290 regions in those cases where they do happen to be contiguous.
5291 * MORECORE need not handle negative arguments -- it may instead
5292 just return MFAIL when given negative arguments.
5293 Negative arguments are always multiples of pagesize. MORECORE
5294 must not misinterpret negative args as large positive unsigned
5295 args. You can suppress all such calls from even occurring by defining
5296 MORECORE_CANNOT_TRIM.
5297
5298 As an example alternative MORECORE, here is a custom allocator
5299 kindly contributed for pre-OSX macOS. It uses virtually but not
5300 necessarily physically contiguous non-paged memory (locked in,
5301 present and won't get swapped out). You can use it by uncommenting
5302 this section, adding some #includes, and setting up the appropriate
5303 defines above:
5304
5305 #define MORECORE osMoreCore
5306
5307 There is also a shutdown routine that should somehow be called for
5308 cleanup upon program exit.
5309
5310 #define MAX_POOL_ENTRIES 100
5311 #define MINIMUM_MORECORE_SIZE (64 * 1024U)
5312 static int next_os_pool;
5313 void *our_os_pools[MAX_POOL_ENTRIES];
5314
5315 void *osMoreCore(int size)
5316 {
5317 void *ptr = 0;
5318 static void *sbrk_top = 0;
5319
5320 if (size > 0)
5321 {
5322 if (size < MINIMUM_MORECORE_SIZE)
5323 size = MINIMUM_MORECORE_SIZE;
5324 if (CurrentExecutionLevel() == kTaskLevel)
5325 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
5326 if (ptr == 0)
5327 {
5328 return (void *) MFAIL;
5329 }
5330 // save ptrs so they can be freed during cleanup
5331 our_os_pools[next_os_pool] = ptr;
5332 next_os_pool++;
5333 ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
5334 sbrk_top = (char *) ptr + size;
5335 return ptr;
5336 }
5337 else if (size < 0)
5338 {
5339 // we don't currently support shrink behavior
5340 return (void *) MFAIL;
5341 }
5342 else
5343 {
5344 return sbrk_top;
5345 }
5346 }
5347
5348 // cleanup any allocated memory pools
5349 // called as last thing before shutting down driver
5350
5351 void osCleanupMem(void)
5352 {
5353 void **ptr;
5354
5355 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
5356 if (*ptr)
5357 {
5358 PoolDeallocate(*ptr);
5359 *ptr = 0;
5360 }
5361 }
5362
5363*/
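/*
  Another example (not part of this file): a minimal MORECORE sketch that
  follows the guidelines above, handing out space from one static arena.
  It assumes HAVE_MORECORE is enabled and MORECORE_CANNOT_TRIM is defined
  so negative arguments never occur; the 4MB size and the names are
  illustrative, and the int argument type matches the osMoreCore example.

  #define MORECORE arena_morecore
  #define MORECORE_CANNOT_TRIM
  #define MORECORE_CONTIGUOUS 1

  #define ARENA_SIZE (4 * 1024 * 1024)
  static char arena[ARENA_SIZE];
  static size_t arena_used;

  void* arena_morecore(int size)
  {
    char* prev_end = arena + arena_used;
    if (size == 0)
      return prev_end;                  // report current break, allocate nothing
    if (size < 0)
      return (void*)MFAIL;              // no shrink support
    if ((size_t)size > ARENA_SIZE - arena_used)
      return (void*)MFAIL;              // arena exhausted
    arena_used += (size_t)size;
    return prev_end;                    // contiguous extension of previous call
  }
*/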
5364
5365
5366/* -----------------------------------------------------------------------
5367History:
5368 V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
5369 * Add max_footprint functions
5370 * Ensure all appropriate literals are size_t
5371 * Fix conditional compilation problem for some #define settings
5372 * Avoid concatenating segments with the one provided
5373 in create_mspace_with_base
5374 * Rename some variables to avoid compiler shadowing warnings
5375 * Use explicit lock initialization.
5376 * Better handling of sbrk interference.
5377 * Simplify and fix segment insertion, trimming and mspace_destroy
5378 * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
5379 * Thanks especially to Dennis Flanagan for help on these.
5380
5381 V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
5382 * Fix memalign brace error.
5383
5384 V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
5385 * Fix improper #endif nesting in C++
5386 * Add explicit casts needed for C++
5387
5388 V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
5389 * Use trees for large bins
5390 * Support mspaces
5391 * Use segments to unify sbrk-based and mmap-based system allocation,
5392 removing need for emulation on most platforms without sbrk.
5393 * Default safety checks
5394 * Optional footer checks. Thanks to William Robertson for the idea.
5395 * Internal code refactoring
5396 * Incorporate suggestions and platform-specific changes.
5397 Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
5398 Aaron Bachmann, Emery Berger, and others.
5399 * Speed up non-fastbin processing enough to remove fastbins.
5400 * Remove useless cfree() to avoid conflicts with other apps.
5401 * Remove internal memcpy, memset. Compilers handle builtins better.
5402 * Remove some options that no one ever used and rename others.
5403
5404 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
5405 * Fix malloc_state bitmap array misdeclaration
5406
5407 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
5408 * Allow tuning of FIRST_SORTED_BIN_SIZE
5409 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
5410 * Better detection and support for non-contiguousness of MORECORE.
5411 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
5412 * Bypass most of malloc if no frees. Thanks To Emery Berger.
5413 * Fix freeing of old top non-contiguous chunk in sysmalloc.
5414 * Raised default trim and map thresholds to 256K.
5415 * Fix mmap-related #defines. Thanks to Lubos Lunak.
5416 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
5417 * Branch-free bin calculation
5418 * Default trim and mmap thresholds now 256K.
5419
5420 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
5421 * Introduce independent_comalloc and independent_calloc.
5422 Thanks to Michael Pachos for motivation and help.
5423 * Make optional .h file available
5424 * Allow > 2GB requests on 32bit systems.
5425 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
5426 Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
5427 and Anonymous.
5428 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
5429 helping test this.)
5430 * memalign: check alignment arg
5431 * realloc: don't try to shift chunks backwards, since this
5432 leads to more fragmentation in some programs and doesn't
5433 seem to help in any others.
5434 * Collect all cases in malloc requiring system memory into sysmalloc
5435 * Use mmap as backup to sbrk
5436 * Place all internal state in malloc_state
5437 * Introduce fastbins (although similar to 2.5.1)
5438 * Many minor tunings and cosmetic improvements
5439 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
5440 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
5441 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
5442 * Include errno.h to support default failure action.
5443
5444 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
5445 * return null for negative arguments
5446 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
5447 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
5448 (e.g. WIN32 platforms)
5449 * Cleanup header file inclusion for WIN32 platforms
5450 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
5451 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
5452 memory allocation routines
5453 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
5454 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
5455 usage of 'assert' in non-WIN32 code
5456 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
5457 avoid infinite loop
5458 * Always call 'fREe()' rather than 'free()'
5459
5460 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
5461 * Fixed ordering problem with boundary-stamping
5462
5463 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
5464 * Added pvalloc, as recommended by H.J. Liu
5465 * Added 64bit pointer support mainly from Wolfram Gloger
5466 * Added anonymously donated WIN32 sbrk emulation
5467 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
5468 * malloc_extend_top: fix mask error that caused wastage after
5469 foreign sbrks
5470 * Add linux mremap support code from HJ Liu
5471
5472 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
5473 * Integrated most documentation with the code.
5474 * Add support for mmap, with help from
5475 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5476 * Use last_remainder in more cases.
5477 * Pack bins using idea from colin@nyx10.cs.du.edu
5478 * Use ordered bins instead of best-fit threshold
5479 * Eliminate block-local decls to simplify tracing and debugging.
5480 * Support another case of realloc via move into top
5481 * Fix error occurring when initial sbrk_base not word-aligned.
5482 * Rely on page size for units instead of SBRK_UNIT to
5483 avoid surprises about sbrk alignment conventions.
5484 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
5485 (raymond@es.ele.tue.nl) for the suggestion.
5486 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
5487 * More precautions for cases where other routines call sbrk,
5488 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5489 * Added macros etc., allowing use in linux libc from
5490 H.J. Lu (hjl@gnu.ai.mit.edu)
5491 * Inverted this history list
5492
5493 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
5494 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
5495 * Removed all preallocation code since under current scheme
5496 the work required to undo bad preallocations exceeds
5497 the work saved in good cases for most test programs.
5498 * No longer use return list or unconsolidated bins since
5499 no scheme using them consistently outperforms those that don't
5500 given above changes.
5501 * Use best fit for very large chunks to prevent some worst-cases.
5502 * Added some support for debugging
5503
5504 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
5505 * Removed footers when chunks are in use. Thanks to
5506 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
5507
5508 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
5509 * Added malloc_trim, with help from Wolfram Gloger
5510 (wmglo@Dent.MED.Uni-Muenchen.DE).
5511
5512 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
5513
5514 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
5515 * realloc: try to expand in both directions
5516 * malloc: swap order of clean-bin strategy;
5517 * realloc: only conditionally expand backwards
5518 * Try not to scavenge used bins
5519 * Use bin counts as a guide to preallocation
5520 * Occasionally bin return list chunks in first scan
5521 * Add a few optimizations from colin@nyx10.cs.du.edu
5522
5523 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
5524 * faster bin computation & slightly different binning
5525 * merged all consolidations to one part of malloc proper
5526 (eliminating old malloc_find_space & malloc_clean_bin)
5527 * Scan 2 returns chunks (not just 1)
5528 * Propagate failure in realloc if malloc returns 0
5529 * Add stuff to allow compilation on non-ANSI compilers
5530 from kpv@research.att.com
5531
5532 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
5533 * removed potential for odd address access in prev_chunk
5534 * removed dependency on getpagesize.h
5535 * misc cosmetics and a bit more internal documentation
5536 * anticosmetics: mangled names in macros to evade debugger strangeness
5537 * tested on sparc, hp-700, dec-mips, rs6000
5538 with gcc & native cc (hp, dec only) allowing
5539 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
5540
5541 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
5542 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
5543 structure of old version, but most details differ.)
5544
5545*/