1/*
2 * Copyright (C) 2008 The Android Open Source Project
3 * All rights reserved.
4 *
5 * Redistribution and use in source and binary forms, with or without
6 * modification, are permitted provided that the following conditions
7 * are met:
8 * * Redistributions of source code must retain the above copyright
9 * notice, this list of conditions and the following disclaimer.
10 * * Redistributions in binary form must reproduce the above copyright
11 * notice, this list of conditions and the following disclaimer in
12 * the documentation and/or other materials provided with the
13 * distribution.
14 *
15 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
16 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
17 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
18 * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
19 * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
20 * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
21 * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
22 * OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
23 * AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
24 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
25 * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
26 * SUCH DAMAGE.
27 */
28/*
29 This is a version (aka dlmalloc) of malloc/free/realloc written by
30 Doug Lea and released to the public domain, as explained at
31 http://creativecommons.org/licenses/publicdomain. Send questions,
32 comments, complaints, performance data, etc to dl@cs.oswego.edu
33
34* Version 2.8.3 Thu Sep 22 11:16:15 2005 Doug Lea (dl at gee)
35
36 Note: There may be an updated version of this malloc obtainable at
37 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
38 Check before installing!
39
40* Quickstart
41
42 This library is all in one file to simplify the most common usage:
43 ftp it, compile it (-O3), and link it into another program. All of
44 the compile-time options default to reasonable values for use on
45 most platforms. You might later want to step through various
46 compile-time and dynamic tuning options.
47
48 For convenience, an include file for code using this malloc is at:
49 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
50 You don't really need this .h file unless you call functions not
51 defined in your system include files. The .h file contains only the
52 excerpts from this file needed for using this malloc on ANSI C/C++
53 systems, so long as you haven't changed compile-time options about
54 naming and tuning parameters. If you do, then you can create your
55 own malloc.h that does include all settings by cutting at the point
56 indicated below. Note that you may already by default be using a C
57 library containing a malloc that is based on some version of this
58 malloc (for example in linux). You might still want to use the one
59 in this file to customize settings or to avoid overheads associated
60 with library versions.
61
62* Vital statistics:
63
64 Supported pointer/size_t representation: 4 or 8 bytes
65 size_t MUST be an unsigned type of the same width as
66 pointers. (If you are using an ancient system that declares
67 size_t as a signed type, or need it to be a different width
68 than pointers, you can use a previous release of this malloc
69 (e.g. 2.7.2) supporting these.)
70
71 Alignment: 8 bytes (default)
72 This suffices for nearly all current machines and C compilers.
73 However, you can define MALLOC_ALIGNMENT to be wider than this
74 if necessary (up to 128bytes), at the expense of using more space.
75
76 Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
77 8 or 16 bytes (if 8byte sizes)
78 Each malloced chunk has a hidden word of overhead holding size
79 and status information, and additional cross-check word
80 if FOOTERS is defined.
81
82 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
83 8-byte ptrs: 32 bytes (including overhead)
84
85 Even a request for zero bytes (i.e., malloc(0)) returns a
86 pointer to something of the minimum allocatable size.
87 The maximum overhead wastage (i.e., number of extra bytes
88 allocated than were requested in malloc) is less than or equal
89 to the minimum size, except for requests >= mmap_threshold that
90 are serviced via mmap(), where the worst case wastage is about
91 32 bytes plus the remainder from a system page (the minimal
92 mmap unit); typically 4096 or 8192 bytes.
93
94 Security: static-safe; optionally more or less
95 The "security" of malloc refers to the ability of malicious
96 code to accentuate the effects of errors (for example, freeing
97 space that is not currently malloc'ed or overwriting past the
98 ends of chunks) in code that calls malloc. This malloc
99 guarantees not to modify any memory locations below the base of
100 heap, i.e., static variables, even in the presence of usage
101 errors. The routines additionally detect most improper frees
102 and reallocs. All this holds as long as the static bookkeeping
103 for malloc itself is not corrupted by some other means. This
104 is only one aspect of security -- these checks do not, and
105 cannot, detect all possible programming errors.
106
107 If FOOTERS is defined nonzero, then each allocated chunk
108 carries an additional check word to verify that it was malloced
109 from its space. These check words are the same within each
110 execution of a program using malloc, but differ across
111 executions, so externally crafted fake chunks cannot be
112 freed. This improves security by rejecting frees/reallocs that
113 could corrupt heap memory, in addition to the checks preventing
114 writes to statics that are always on. This may further improve
115 security at the expense of time and space overhead. (Note that
116 FOOTERS may also be worth using with MSPACES.)
117
118 By default detected errors cause the program to abort (calling
119 "abort()"). You can override this to instead proceed past
120 errors by defining PROCEED_ON_ERROR. In this case, a bad free
121 has no effect, and a malloc that encounters a bad address
122 caused by user overwrites will ignore the bad address by
123 dropping pointers and indices to all known memory. This may
124 be appropriate for programs that should continue if at all
125 possible in the face of programming errors, although they may
126 run out of memory because dropped memory is never reclaimed.
127
128 If you don't like either of these options, you can define
129 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
130 else. And if you are sure that your program using malloc has
131 no errors or vulnerabilities, you can define INSECURE to 1,
132 which might (or might not) provide a small performance improvement.
133
134 Thread-safety: NOT thread-safe unless USE_LOCKS defined
135 When USE_LOCKS is defined, each public call to malloc, free,
136 etc is surrounded with either a pthread mutex or a win32
137 spinlock (depending on WIN32). This is not especially fast, and
138 can be a major bottleneck. It is designed only to provide
139 minimal protection in concurrent environments, and to provide a
140 basis for extensions. If you are using malloc in a concurrent
141 program, consider instead using ptmalloc, which is derived from
142 a version of this malloc. (See http://www.malloc.de).
143
144 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
145 This malloc can use unix sbrk or any emulation (invoked using
146 the CALL_MORECORE macro) and/or mmap/munmap or any emulation
147 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
148 memory. On most unix systems, it tends to work best if both
149 MORECORE and MMAP are enabled. On Win32, it uses emulations
150 based on VirtualAlloc. It also uses common C library functions
151 like memset.
152
153 Compliance: I believe it is compliant with the Single Unix Specification
154 (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
155 others as well.
156
157* Overview of algorithms
158
159 This is not the fastest, most space-conserving, most portable, or
160 most tunable malloc ever written. However it is among the fastest
161 while also being among the most space-conserving, portable and
162 tunable. Consistent balance across these factors results in a good
163 general-purpose allocator for malloc-intensive programs.
164
165 In most ways, this malloc is a best-fit allocator. Generally, it
166 chooses the best-fitting existing chunk for a request, with ties
167 broken in approximately least-recently-used order. (This strategy
168 normally maintains low fragmentation.) However, for requests less
169 than 256bytes, it deviates from best-fit when there is not an
170 exactly fitting available chunk by preferring to use space adjacent
171 to that used for the previous small request, as well as by breaking
172 ties in approximately most-recently-used order. (These enhance
173 locality of series of small allocations.) And for very large requests
174 (>= 256Kb by default), it relies on system memory mapping
175 facilities, if supported. (This helps avoid carrying around and
176 possibly fragmenting memory used only for large chunks.)
177
178 All operations (except malloc_stats and mallinfo) have execution
179 times that are bounded by a constant factor of the number of bits in
180 a size_t, not counting any clearing in calloc or copying in realloc,
181 or actions surrounding MORECORE and MMAP that have times
182 proportional to the number of non-contiguous regions returned by
183 system allocation routines, which is often just 1.
184
185 The implementation is not very modular and seriously overuses
186 macros. Perhaps someday all C compilers will do as good a job
187 inlining modular code as can now be done by brute-force expansion,
188 but now, enough of them seem not to.
189
190 Some compilers issue a lot of warnings about code that is
191 dead/unreachable only on some platforms, and also about intentional
192 uses of negation on unsigned types. All known cases of each can be
193 ignored.
194
195 For a longer but out of date high-level description, see
196 http://gee.cs.oswego.edu/dl/html/malloc.html
197
198* MSPACES
199 If MSPACES is defined, then in addition to malloc, free, etc.,
200 this file also defines mspace_malloc, mspace_free, etc. These
201 are versions of malloc routines that take an "mspace" argument
202 obtained using create_mspace, to control all internal bookkeeping.
203 If ONLY_MSPACES is defined, only these versions are compiled.
204 So if you would like to use this allocator for only some allocations,
205 and your system malloc for others, you can compile with
206 ONLY_MSPACES and then do something like...
207 static mspace mymspace = create_mspace(0,0); // for example
208 #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
209
210 (Note: If you only need one instance of an mspace, you can instead
211 use "USE_DL_PREFIX" to relabel the global malloc.)
212
213 You can similarly create thread-local allocators by storing
214 mspaces as thread-locals. For example:
215 static __thread mspace tlms = 0;
216 void* tlmalloc(size_t bytes) {
217 if (tlms == 0) tlms = create_mspace(0, 0);
218 return mspace_malloc(tlms, bytes);
219 }
220 void tlfree(void* mem) { mspace_free(tlms, mem); }
221
222 Unless FOOTERS is defined, each mspace is completely independent.
223 You cannot allocate from one and free to another (although
224 conformance is only weakly checked, so usage errors are not always
225 caught). If FOOTERS is defined, then each chunk carries around a tag
226 indicating its originating mspace, and frees are directed to their
227 originating spaces.
228
229 ------------------------- Compile-time options ---------------------------
230
231Be careful in setting #define values for numerical constants of type
232size_t. On some systems, literal values are not automatically extended
233to size_t precision unless they are explicitly casted.
234
235WIN32 default: defined if _WIN32 defined
236 Defining WIN32 sets up defaults for MS environment and compilers.
237 Otherwise defaults are for unix.
238
239MALLOC_ALIGNMENT default: (size_t)8
240 Controls the minimum alignment for malloc'ed chunks. It must be a
241 power of two and at least 8, even on machines for which smaller
242 alignments would suffice. It may be defined as larger than this
243 though. Note however that code and data structures are optimized for
244 the case of 8-byte alignment.
245
246MSPACES default: 0 (false)
247 If true, compile in support for independent allocation spaces.
248 This is only supported if HAVE_MMAP is true.
249
250ONLY_MSPACES default: 0 (false)
251 If true, only compile in mspace versions, not regular versions.
252
253USE_LOCKS default: 0 (false)
254 Causes each call to each public routine to be surrounded with
255 pthread or WIN32 mutex lock/unlock. (If set true, this can be
256 overridden on a per-mspace basis for mspace versions.)
257
258FOOTERS default: 0
259 If true, provide extra checking and dispatching by placing
260 information in the footers of allocated chunks. This adds
261 space and time overhead.
262
263INSECURE default: 0
264 If true, omit checks for usage errors and heap space overwrites.
265
266USE_DL_PREFIX default: NOT defined
267 Causes compiler to prefix all public routines with the string 'dl'.
268 This can be useful when you only want to use this malloc in one part
269 of a program, using your regular system malloc elsewhere.
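  As an illustrative sketch only (assuming the file was built with
  -DUSE_DL_PREFIX), one part of a program could then do:
    void* buf = dlmalloc(128);   // served by this allocator
    dlfree(buf);
    void* other = malloc(64);    // still the system malloc
    free(other);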
270
271ABORT default: defined as abort()
272 Defines how to abort on failed checks. On most systems, a failed
273 check cannot die with an "assert" or even print an informative
274 message, because the underlying print routines in turn call malloc,
275 which will fail again. Generally, the best policy is to simply call
276 abort(). It's not very useful to do more than this because many
277 errors due to overwriting will show up as address faults (null, odd
278 addresses etc) rather than malloc-triggered checks, so will also
279 abort. Also, most compilers know that abort() does not return, so
280 can better optimize code conditionally calling it.
281
282PROCEED_ON_ERROR default: defined as 0 (false)
283 Controls whether detected bad addresses cause them to be bypassed
284 rather than aborting. If set, detected bad arguments to free and
285 realloc are ignored. And all bookkeeping information is zeroed out
286 upon a detected overwrite of freed heap space, thus losing the
287 ability to ever return it from malloc again, but enabling the
288 application to proceed. If PROCEED_ON_ERROR is defined, the
289 static variable malloc_corruption_error_count is compiled in
290 and can be examined to see if errors have occurred. This option
291 generates slower code than the default abort policy.
292
293DEBUG default: NOT defined
294 The DEBUG setting is mainly intended for people trying to modify
295 this code or diagnose problems when porting to new platforms.
296 However, it may also be able to better isolate user errors than just
297 using runtime checks. The assertions in the check routines spell
298 out in more detail the assumptions and invariants underlying the
299 algorithms. The checking is fairly extensive, and will slow down
300 execution noticeably. Calling malloc_stats or mallinfo with DEBUG
301 set will attempt to check every non-mmapped allocated and free chunk
302 in the course of computing the summaries.
303
304ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
305 Debugging assertion failures can be nearly impossible if your
306 version of the assert macro causes malloc to be called, which will
307 lead to a cascade of further failures, blowing the runtime stack.
308 ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
309 which will usually make debugging easier.
310
311MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
312 The action to take before "return 0" when malloc is unable to
313 return memory because none is available.
314
315HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
316 True if this system supports sbrk or an emulation of it.
317
318MORECORE default: sbrk
319 The name of the sbrk-style system routine to call to obtain more
320 memory. See below for guidance on writing custom MORECORE
321 functions. The type of the argument to sbrk/MORECORE varies across
322 systems. It cannot be size_t, because it supports negative
323 arguments, so it is normally the signed type of the same width as
324 size_t (sometimes declared as "intptr_t"). It doesn't much matter
325 though. Internally, we only call it with arguments less than half
326 the max value of a size_t, which should work across all reasonable
327 possibilities, although sometimes generating compiler warnings. See
328 near the end of this file for guidelines for creating a custom
329 version of MORECORE.
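  As a rough sketch only (the fuller guidelines are near the end of this
  file), a custom MORECORE over a fixed static arena might look like the
  following; the names are hypothetical, and because it rejects negative
  arguments it should be paired with MORECORE_CANNOT_TRIM:
    static char my_arena[1 << 20];          // hypothetical backing store
    static size_t my_brk = 0;
    void* my_morecore(intptr_t increment) {
      if (increment >= 0 && my_brk + (size_t)increment <= sizeof(my_arena)) {
        void* prev = my_arena + my_brk;     // old break, like sbrk
        my_brk += (size_t)increment;
        return prev;
      }
      return (void*)(-1);                   // conventional failure value
    }
  selected at build time with something like -DMORECORE=my_morecore.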
330
331MORECORE_CONTIGUOUS default: 1 (true)
332 If true, take advantage of fact that consecutive calls to MORECORE
333 with positive arguments always return contiguous increasing
334 addresses. This is true of unix sbrk. It does not hurt too much to
335 set it true anyway, since malloc copes with non-contiguities.
336 Setting it false when definitely non-contiguous saves time
337 and possibly wasted space it would take to discover this though.
338
339MORECORE_CANNOT_TRIM default: NOT defined
340 True if MORECORE cannot release space back to the system when given
341 negative arguments. This is generally necessary only if you are
342 using a hand-crafted MORECORE function that cannot handle negative
343 arguments.
344
345HAVE_MMAP default: 1 (true)
346 True if this system supports mmap or an emulation of it. If so, and
347 HAVE_MORECORE is not true, MMAP is used for all system
348 allocation. If set and HAVE_MORECORE is true as well, MMAP is
349 primarily used to directly allocate very large blocks. It is also
350 used as a backup strategy in cases where MORECORE fails to provide
351 space from system. Note: A single call to MUNMAP is assumed to be
352 able to unmap memory that may have been allocated using multiple calls
353 to MMAP, so long as they are adjacent.
354
355HAVE_MREMAP default: 1 on linux, else 0
356 If true realloc() uses mremap() to re-allocate large blocks and
357 extend or shrink allocation spaces.
358
359MMAP_CLEARS default: 1 on unix
360 True if mmap clears memory so calloc doesn't need to. This is true
361 for standard unix mmap using /dev/zero.
362
363USE_BUILTIN_FFS default: 0 (i.e., not used)
364 Causes malloc to use the builtin ffs() function to compute indices.
365 Some compilers may recognize and intrinsify ffs to be faster than the
366 supplied C version. Also, the case of x86 using gcc is special-cased
367 to an asm instruction, so is already as fast as it can be, and so
368 this setting has no effect. (On most x86s, the asm version is only
369 slightly faster than the C version.)
370
371malloc_getpagesize default: derive from system includes, or 4096.
372 The system page size. To the extent possible, this malloc manages
373 memory from the system in page-size units. This may be (and
374 usually is) a function rather than a constant. This is ignored
375 if WIN32, where page size is determined using getSystemInfo during
376 initialization.
377
378USE_DEV_RANDOM default: 0 (i.e., not used)
379 Causes malloc to use /dev/random to initialize secure magic seed for
380 stamping footers. Otherwise, the current time is used.
381
382NO_MALLINFO default: 0
383 If defined, don't compile "mallinfo". This can be a simple way
384 of dealing with mismatches between system declarations and
385 those in this file.
386
387MALLINFO_FIELD_TYPE default: size_t
388 The type of the fields in the mallinfo struct. This was originally
389 defined as "int" in SVID etc, but is more usefully defined as
390 size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set
391
392REALLOC_ZERO_BYTES_FREES default: not defined
393 This should be set if a call to realloc with zero bytes should
394 be the same as a call to free. Some people think it should. Otherwise,
395 since this malloc returns a unique pointer for malloc(0), so does
396 realloc(p, 0).
397
398LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
399LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
400LACKS_STDLIB_H default: NOT defined unless on WIN32
401 Define these if your system does not have these header files.
402 You might need to manually insert some of the declarations they provide.
403
404DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
405 system_info.dwAllocationGranularity in WIN32,
406 otherwise 64K.
407 Also settable using mallopt(M_GRANULARITY, x)
408 The unit for allocating and deallocating memory from the system. On
409 most systems with contiguous MORECORE, there is no reason to
410 make this more than a page. However, systems with MMAP tend to
411 either require or encourage larger granularities. You can increase
412 this value to prevent system allocation functions from being called so
413 often, especially if they are slow. The value must be at least one
414 page and must be a power of two. Setting to 0 causes initialization
415 to either page size or win32 region size. (Note: In previous
416 versions of malloc, the equivalent of this option was called
417 "TOP_PAD")
418
419DEFAULT_TRIM_THRESHOLD default: 2MB
420 Also settable using mallopt(M_TRIM_THRESHOLD, x)
421 The maximum amount of unused top-most memory to keep before
422 releasing via malloc_trim in free(). Automatic trimming is mainly
423 useful in long-lived programs using contiguous MORECORE. Because
424 trimming via sbrk can be slow on some systems, and can sometimes be
425 wasteful (in cases where programs immediately afterward allocate
426 more large chunks) the value should be high enough so that your
427 overall system performance would improve by releasing this much
428 memory. As a rough guide, you might set to a value close to the
429 average size of a process (program) running on your system.
430 Releasing this much memory would allow such a process to run in
431 memory. Generally, it is worth tuning trim thresholds when a
432 program undergoes phases where several large chunks are allocated
433 and released in ways that can reuse each other's storage, perhaps
434 mixed with phases where there are no such chunks at all. The trim
435 value must be greater than page size to have any useful effect. To
436 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
437 some people use of mallocing a huge space and then freeing it at
438 program startup, in an attempt to reserve system memory, doesn't
439 have the intended effect under automatic trimming, since that memory
440 will immediately be returned to the system.
441
442DEFAULT_MMAP_THRESHOLD default: 256K
443 Also settable using mallopt(M_MMAP_THRESHOLD, x)
444 The request size threshold for using MMAP to directly service a
445 request. Requests of at least this size that cannot be allocated
446 using already-existing space will be serviced via mmap. (If enough
447 normal freed space already exists it is used instead.) Using mmap
448 segregates relatively large chunks of memory so that they can be
449 individually obtained and released from the host system. A request
450 serviced through mmap is never reused by any other request (at least
451 not directly; the system may just so happen to remap successive
452 requests to the same locations). Segregating space in this way has
453 the benefits that: Mmapped space can always be individually released
454 back to the system, which helps keep the system level memory demands
455 of a long-lived program low. Also, mapped memory doesn't become
456 `locked' between other chunks, as can happen with normally allocated
457 chunks, which means that even trimming via malloc_trim would not
458 release them. However, it has the disadvantage that the space
459 cannot be reclaimed, consolidated, and then used to service later
460 requests, as happens with normal chunks. The advantages of mmap
461 nearly always outweigh disadvantages for "large" chunks, but the
462 value of "large" may vary across systems. The default is an
463 empirically derived value that works well in most systems. You can
464 disable mmap by setting to MAX_SIZE_T.
465
466*/
467
468#ifndef WIN32
469#ifdef _WIN32
470#define WIN32 1
471#endif /* _WIN32 */
472#endif /* WIN32 */
473#ifdef WIN32
474#define WIN32_LEAN_AND_MEAN
475#include <windows.h>
476#define HAVE_MMAP 1
477#define HAVE_MORECORE 0
478#define LACKS_UNISTD_H
479#define LACKS_SYS_PARAM_H
480#define LACKS_SYS_MMAN_H
481#define LACKS_STRING_H
482#define LACKS_STRINGS_H
483#define LACKS_SYS_TYPES_H
484#define LACKS_ERRNO_H
485#define MALLOC_FAILURE_ACTION
486#define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
487#endif /* WIN32 */
488
489#if defined(DARWIN) || defined(_DARWIN)
490/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
491#ifndef HAVE_MORECORE
492#define HAVE_MORECORE 0
493#define HAVE_MMAP 1
494#endif /* HAVE_MORECORE */
495#endif /* DARWIN */
496
497#ifndef LACKS_SYS_TYPES_H
498#include <sys/types.h> /* For size_t */
499#endif /* LACKS_SYS_TYPES_H */
500
501/* The maximum possible size_t value has all bits set */
502#define MAX_SIZE_T (~(size_t)0)
503
504#ifndef ONLY_MSPACES
505#define ONLY_MSPACES 0
506#endif /* ONLY_MSPACES */
507#ifndef MSPACES
508#if ONLY_MSPACES
509#define MSPACES 1
510#else /* ONLY_MSPACES */
511#define MSPACES 0
512#endif /* ONLY_MSPACES */
513#endif /* MSPACES */
514#ifndef MALLOC_ALIGNMENT
515#define MALLOC_ALIGNMENT ((size_t)8U)
516#endif /* MALLOC_ALIGNMENT */
517#ifndef FOOTERS
518#define FOOTERS 0
519#endif /* FOOTERS */
520#ifndef USE_MAX_ALLOWED_FOOTPRINT
521#define USE_MAX_ALLOWED_FOOTPRINT 0
522#endif
523#ifndef ABORT
524#define ABORT abort()
525#endif /* ABORT */
526#ifndef ABORT_ON_ASSERT_FAILURE
527#define ABORT_ON_ASSERT_FAILURE 1
528#endif /* ABORT_ON_ASSERT_FAILURE */
529#ifndef PROCEED_ON_ERROR
530#define PROCEED_ON_ERROR 0
531#endif /* PROCEED_ON_ERROR */
532#ifndef USE_LOCKS
533#define USE_LOCKS 0
534#endif /* USE_LOCKS */
535#ifndef INSECURE
536#define INSECURE 0
537#endif /* INSECURE */
538#ifndef HAVE_MMAP
539#define HAVE_MMAP 1
540#endif /* HAVE_MMAP */
541#ifndef MMAP_CLEARS
542#define MMAP_CLEARS 1
543#endif /* MMAP_CLEARS */
544#ifndef HAVE_MREMAP
545#ifdef linux
546#define HAVE_MREMAP 1
547#else /* linux */
548#define HAVE_MREMAP 0
549#endif /* linux */
550#endif /* HAVE_MREMAP */
551#ifndef MALLOC_FAILURE_ACTION
552#define MALLOC_FAILURE_ACTION errno = ENOMEM;
553#endif /* MALLOC_FAILURE_ACTION */
554#ifndef HAVE_MORECORE
555#if ONLY_MSPACES
556#define HAVE_MORECORE 0
557#else /* ONLY_MSPACES */
558#define HAVE_MORECORE 1
559#endif /* ONLY_MSPACES */
560#endif /* HAVE_MORECORE */
561#if !HAVE_MORECORE
562#define MORECORE_CONTIGUOUS 0
563#else /* !HAVE_MORECORE */
564#ifndef MORECORE
565#define MORECORE sbrk
566#endif /* MORECORE */
567#ifndef MORECORE_CONTIGUOUS
568#define MORECORE_CONTIGUOUS 1
569#endif /* MORECORE_CONTIGUOUS */
570#endif /* HAVE_MORECORE */
571#ifndef DEFAULT_GRANULARITY
572#if MORECORE_CONTIGUOUS
573#define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
574#else /* MORECORE_CONTIGUOUS */
575#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
576#endif /* MORECORE_CONTIGUOUS */
577#endif /* DEFAULT_GRANULARITY */
578#ifndef DEFAULT_TRIM_THRESHOLD
579#ifndef MORECORE_CANNOT_TRIM
580#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
581#else /* MORECORE_CANNOT_TRIM */
582#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
583#endif /* MORECORE_CANNOT_TRIM */
584#endif /* DEFAULT_TRIM_THRESHOLD */
585#ifndef DEFAULT_MMAP_THRESHOLD
586#if HAVE_MMAP
587#define DEFAULT_MMAP_THRESHOLD ((size_t)64U * (size_t)1024U)
588#else /* HAVE_MMAP */
589#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
590#endif /* HAVE_MMAP */
591#endif /* DEFAULT_MMAP_THRESHOLD */
592#ifndef USE_BUILTIN_FFS
593#define USE_BUILTIN_FFS 0
594#endif /* USE_BUILTIN_FFS */
595#ifndef USE_DEV_RANDOM
596#define USE_DEV_RANDOM 0
597#endif /* USE_DEV_RANDOM */
598#ifndef NO_MALLINFO
599#define NO_MALLINFO 0
600#endif /* NO_MALLINFO */
601#ifndef MALLINFO_FIELD_TYPE
602#define MALLINFO_FIELD_TYPE size_t
603#endif /* MALLINFO_FIELD_TYPE */
604
605/*
606 mallopt tuning options. SVID/XPG defines four standard parameter
607 numbers for mallopt, normally defined in malloc.h. None of these
608 are used in this malloc, so setting them has no effect. But this
609 malloc does support the following options.
610*/
611
612#define M_TRIM_THRESHOLD (-1)
613#define M_GRANULARITY (-2)
614#define M_MMAP_THRESHOLD (-3)
615
616/* ------------------------ Mallinfo declarations ------------------------ */
617
618#if !NO_MALLINFO
619/*
620 This version of malloc supports the standard SVID/XPG mallinfo
621 routine that returns a struct containing usage properties and
622 statistics. It should work on any system that has a
623 /usr/include/malloc.h defining struct mallinfo. The main
624 declaration needed is the mallinfo struct that is returned (by-copy)
625 by mallinfo(). The mallinfo struct contains a bunch of fields that
626 are not even meaningful in this version of malloc. These fields are
627 instead filled by mallinfo() with other numbers that might be of
628 interest.
629
630 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
631 /usr/include/malloc.h file that includes a declaration of struct
632 mallinfo. If so, it is included; else a compliant version is
633 declared below. These must be precisely the same for mallinfo() to
634 work. The original SVID version of this struct, defined on most
635 systems with mallinfo, declares all fields as ints. But some others
636 define as unsigned long. If your system defines the fields using a
637 type of different width than listed here, you MUST #include your
638 system version and #define HAVE_USR_INCLUDE_MALLOC_H.
639*/
640
641/* #define HAVE_USR_INCLUDE_MALLOC_H */
642
643#if !ANDROID
644#ifdef HAVE_USR_INCLUDE_MALLOC_H
645#include "/usr/include/malloc.h"
646#else /* HAVE_USR_INCLUDE_MALLOC_H */
647
648struct mallinfo {
649 MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
650 MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
651 MALLINFO_FIELD_TYPE smblks; /* always 0 */
652 MALLINFO_FIELD_TYPE hblks; /* always 0 */
653 MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
654 MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
655 MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
656 MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
657 MALLINFO_FIELD_TYPE fordblks; /* total free space */
658 MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
659};
660
661#endif /* HAVE_USR_INCLUDE_MALLOC_H */
662#endif /* ANDROID */
663#endif /* NO_MALLINFO */
664
665#ifdef __cplusplus
666extern "C" {
667#endif /* __cplusplus */
668
669#if !ONLY_MSPACES
670
671/* ------------------- Declarations of public routines ------------------- */
672
673/* Check an additional macro for the five primary functions */
674#ifndef USE_DL_PREFIX
675#define dlcalloc calloc
676#define dlfree free
677#define dlmalloc malloc
678#define dlmemalign memalign
679#define dlrealloc realloc
680#endif
681
682#ifndef USE_DL_PREFIX
683#define dlvalloc valloc
684#define dlpvalloc pvalloc
685#define dlmallinfo mallinfo
686#define dlmallopt mallopt
687#define dlmalloc_trim malloc_trim
688#define dlmalloc_walk_free_pages \
689 malloc_walk_free_pages
690#define dlmalloc_walk_heap \
691 malloc_walk_heap
692#define dlmalloc_stats malloc_stats
693#define dlmalloc_usable_size malloc_usable_size
694#define dlmalloc_footprint malloc_footprint
695#define dlmalloc_max_allowed_footprint \
696 malloc_max_allowed_footprint
697#define dlmalloc_set_max_allowed_footprint \
698 malloc_set_max_allowed_footprint
699#define dlmalloc_max_footprint malloc_max_footprint
700#define dlindependent_calloc independent_calloc
701#define dlindependent_comalloc independent_comalloc
702#endif /* USE_DL_PREFIX */
703
704
705/*
706 malloc(size_t n)
707 Returns a pointer to a newly allocated chunk of at least n bytes, or
708 null if no space is available, in which case errno is set to ENOMEM
709 on ANSI C systems.
710
711 If n is zero, malloc returns a minimum-sized chunk. (The minimum
712 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
713 systems.) Note that size_t is an unsigned type, so calls with
714 arguments that would be negative if signed are interpreted as
715 requests for huge amounts of space, which will often fail. The
716 maximum supported value of n differs across systems, but is in all
717 cases less than the maximum representable value of a size_t.
718*/
719void* dlmalloc(size_t);
720
721/*
722 free(void* p)
723 Releases the chunk of memory pointed to by p, that had been previously
724 allocated using malloc or a related routine such as realloc.
725 It has no effect if p is null. If p was not malloced or already
726 freed, free(p) will by default cause the current program to abort.
727*/
728void dlfree(void*);
729
730/*
731 calloc(size_t n_elements, size_t element_size);
732 Returns a pointer to n_elements * element_size bytes, with all locations
733 set to zero.
734*/
735void* dlcalloc(size_t, size_t);
736
737/*
738 realloc(void* p, size_t n)
739 Returns a pointer to a chunk of size n that contains the same data
740 as does chunk p up to the minimum of (n, p's size) bytes, or null
741 if no space is available.
742
743 The returned pointer may or may not be the same as p. The algorithm
744 prefers extending p in most cases when possible, otherwise it
745 employs the equivalent of a malloc-copy-free sequence.
746
747 If p is null, realloc is equivalent to malloc.
748
749 If space is not available, realloc returns null, errno is set (if on
750 ANSI) and p is NOT freed.
751
752 if n is for fewer bytes than already held by p, the newly unused
753 space is lopped off and freed if possible. realloc with a size
754 argument of zero (re)allocates a minimum-sized chunk.
755
756 The old unix realloc convention of allowing the last-free'd chunk
757 to be used as an argument to realloc is not supported.
758*/
759
760void* dlrealloc(void*, size_t);
761
762/*
763 memalign(size_t alignment, size_t n);
764 Returns a pointer to a newly allocated chunk of n bytes, aligned
765 in accord with the alignment argument.
766
767 The alignment argument should be a power of two. If the argument is
768 not a power of two, the nearest greater power is used.
769 8-byte alignment is guaranteed by normal malloc calls, so don't
770 bother calling memalign with an argument of 8 or less.
771
772 Overreliance on memalign is a sure way to fragment space.
773*/
774void* dlmemalign(size_t, size_t);
775
776/*
777 valloc(size_t n);
778 Equivalent to memalign(pagesize, n), where pagesize is the page
779 size of the system. If the pagesize is unknown, 4096 is used.
780*/
781void* dlvalloc(size_t);
782
783/*
784 mallopt(int parameter_number, int parameter_value)
785 Sets tunable parameters. The format is to provide a
786 (parameter-number, parameter-value) pair. mallopt then sets the
787 corresponding parameter to the argument value if it can (i.e., so
788 long as the value is meaningful), and returns 1 if successful else
789 0. SVID/XPG/ANSI defines four standard param numbers for mallopt,
790 normally defined in malloc.h. None of these are used in this malloc,
791 so setting them has no effect. But this malloc also supports other
792 options in mallopt. See below for details. Briefly, supported
793 parameters are as follows (listed defaults are for "typical"
794 configurations).
795
796 Symbol param # default allowed param values
797 M_TRIM_THRESHOLD -1 2*1024*1024 any (MAX_SIZE_T disables)
798 M_GRANULARITY -2 page size any power of 2 >= page size
799 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
800*/
801int dlmallopt(int, int);
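/*
  Illustrative calls only (default, unprefixed build; whether they
  succeed depends on the compile-time configuration):

    mallopt(M_MMAP_THRESHOLD, 1024 * 1024);  // direct-mmap requests >= 1MB
    mallopt(M_GRANULARITY,    64 * 1024);    // grow the heap in 64K steps

  Each call returns 1 if the parameter was accepted, else 0.
*/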
802
803/*
804 malloc_footprint();
805 Returns the number of bytes obtained from the system. The total
806 number of bytes allocated by malloc, realloc etc., is less than this
807 value. Unlike mallinfo, this function returns only a precomputed
808 result, so can be called frequently to monitor memory consumption.
809 Even if locks are otherwise defined, this function does not use them,
810 so results might not be up to date.
811*/
812size_t dlmalloc_footprint(void);
813
814#if USE_MAX_ALLOWED_FOOTPRINT
815/*
816 malloc_max_allowed_footprint();
817 Returns the number of bytes that the heap is allowed to obtain
818 from the system. malloc_footprint() should always return a
819 size less than or equal to max_allowed_footprint, unless the
820 max_allowed_footprint was set to a value smaller than the
821 footprint at the time.
822*/
823size_t dlmalloc_max_allowed_footprint();
824
825/*
826 malloc_set_max_allowed_footprint();
827 Set the maximum number of bytes that the heap is allowed to
828 obtain from the system. The size will be rounded up to a whole
829 page, and the rounded number will be returned from future calls
830 to malloc_max_allowed_footprint(). If the new max_allowed_footprint
831 is larger than the current footprint, the heap will never grow
832 larger than max_allowed_footprint. If the new max_allowed_footprint
833 is smaller than the current footprint, the heap will not grow
834 further.
835
836 TODO: try to force the heap to give up memory in the shrink case,
837 and update this comment once that happens.
838*/
839void dlmalloc_set_max_allowed_footprint(size_t bytes);
840#endif /* USE_MAX_ALLOWED_FOOTPRINT */
841
842/*
843 malloc_max_footprint();
844 Returns the maximum number of bytes obtained from the system. This
845 value will be greater than current footprint if deallocated space
846 has been reclaimed by the system. The peak number of bytes allocated
847 by malloc, realloc etc., is less than this value. Unlike mallinfo,
848 this function returns only a precomputed result, so can be called
849 frequently to monitor memory consumption. Even if locks are
850 otherwise defined, this function does not use them, so results might
851 not be up to date.
852*/
853size_t dlmalloc_max_footprint(void);
854
855#if !NO_MALLINFO
856/*
857 mallinfo()
858 Returns (by copy) a struct containing various summary statistics:
859
860 arena: current total non-mmapped bytes allocated from system
861 ordblks: the number of free chunks
862 smblks: always zero.
863 hblks: current number of mmapped regions
864 hblkhd: total bytes held in mmapped regions
865 usmblks: the maximum total allocated space. This will be greater
866 than current total if trimming has occurred.
867 fsmblks: always zero
868 uordblks: current total allocated space (normal or mmapped)
869 fordblks: total free space
870 keepcost: the maximum number of bytes that could ideally be released
871 back to system via malloc_trim. ("ideally" means that
872 it ignores page restrictions etc.)
873
874 Because these fields are ints, but internal bookkeeping may
875 be kept as longs, the reported values may wrap around zero and
876 thus be inaccurate.
877*/
878struct mallinfo dlmallinfo(void);
879#endif /* NO_MALLINFO */
880
881/*
882 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
883
884 independent_calloc is similar to calloc, but instead of returning a
885 single cleared space, it returns an array of pointers to n_elements
886 independent elements that can hold contents of size elem_size, each
887 of which starts out cleared, and can be independently freed,
888 realloc'ed etc. The elements are guaranteed to be adjacently
889 allocated (this is not guaranteed to occur with multiple callocs or
890 mallocs), which may also improve cache locality in some
891 applications.
892
893 The "chunks" argument is optional (i.e., may be null, which is
894 probably the most typical usage). If it is null, the returned array
895 is itself dynamically allocated and should also be freed when it is
896 no longer needed. Otherwise, the chunks array must be of at least
897 n_elements in length. It is filled in with the pointers to the
898 chunks.
899
900 In either case, independent_calloc returns this pointer array, or
901 null if the allocation failed. If n_elements is zero and "chunks"
902 is null, it returns a chunk representing an array with zero elements
903 (which should be freed if not wanted).
904
905 Each element must be individually freed when it is no longer
906 needed. If you'd like to instead be able to free all at once, you
907 should instead use regular calloc and assign pointers into this
908 space to represent elements. (In this case though, you cannot
909 independently free elements.)
910
911 independent_calloc simplifies and speeds up implementations of many
912 kinds of pools. It may also be useful when constructing large data
913 structures that initially have a fixed number of fixed-sized nodes,
914 but the number is not known at compile time, and some of the nodes
915 may later need to be freed. For example:
916
917 struct Node { int item; struct Node* next; };
918
919 struct Node* build_list() {
920 struct Node** pool;
921 int n = read_number_of_nodes_needed();
922 if (n <= 0) return 0;
923 pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
924 if (pool == 0) die();
925 // organize into a linked list...
926 struct Node* first = pool[0];
927 for (int i = 0; i < n-1; ++i)
928 pool[i]->next = pool[i+1];
929 free(pool); // Can now free the array (or not, if it is needed later)
930 return first;
931 }
932*/
933void** dlindependent_calloc(size_t, size_t, void**);
934
935/*
936 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
937
938 independent_comalloc allocates, all at once, a set of n_elements
939 chunks with sizes indicated in the "sizes" array. It returns
940 an array of pointers to these elements, each of which can be
941 independently freed, realloc'ed etc. The elements are guaranteed to
942 be adjacently allocated (this is not guaranteed to occur with
943 multiple callocs or mallocs), which may also improve cache locality
944 in some applications.
945
946 The "chunks" argument is optional (i.e., may be null). If it is null
947 the returned array is itself dynamically allocated and should also
948 be freed when it is no longer needed. Otherwise, the chunks array
949 must be of at least n_elements in length. It is filled in with the
950 pointers to the chunks.
951
952 In either case, independent_comalloc returns this pointer array, or
953 null if the allocation failed. If n_elements is zero and chunks is
954 null, it returns a chunk representing an array with zero elements
955 (which should be freed if not wanted).
956
957 Each element must be individually freed when it is no longer
958 needed. If you'd like to instead be able to free all at once, you
959 should instead use a single regular malloc, and assign pointers at
960 particular offsets in the aggregate space. (In this case though, you
961 cannot independently free elements.)
962
963 independent_comalloc differs from independent_calloc in that each
964 element may have a different size, and also that it does not
965 automatically clear elements.
966
967 independent_comalloc can be used to speed up allocation in cases
968 where several structs or objects must always be allocated at the
969 same time. For example:
970
971 struct Head { ... }
972 struct Foot { ... }
973
974 void send_message(char* msg) {
975 int msglen = strlen(msg);
976 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
977 void* chunks[3];
978 if (independent_comalloc(3, sizes, chunks) == 0)
979 die();
980 struct Head* head = (struct Head*)(chunks[0]);
981 char* body = (char*)(chunks[1]);
982 struct Foot* foot = (struct Foot*)(chunks[2]);
983 // ...
984 }
985
986 In general though, independent_comalloc is worth using only for
987 larger values of n_elements. For small values, you probably won't
988 detect enough difference from series of malloc calls to bother.
989
990 Overuse of independent_comalloc can increase overall memory usage,
991 since it cannot reuse existing noncontiguous small chunks that
992 might be available for some of the elements.
993*/
994void** dlindependent_comalloc(size_t, size_t*, void**);
995
996
997/*
998 pvalloc(size_t n);
999 Equivalent to valloc(minimum-page-that-holds(n)), that is,
1000 round up n to nearest pagesize.
1001 */
1002void* dlpvalloc(size_t);
1003
1004/*
1005 malloc_trim(size_t pad);
1006
1007 If possible, gives memory back to the system (via negative arguments
1008 to sbrk) if there is unused memory at the `high' end of the malloc
1009 pool or in unused MMAP segments. You can call this after freeing
1010 large blocks of memory to potentially reduce the system-level memory
1011 requirements of a program. However, it cannot guarantee to reduce
1012 memory. Under some allocation patterns, some large free blocks of
1013 memory will be locked between two used chunks, so they cannot be
1014 given back to the system.
1015
1016 The `pad' argument to malloc_trim represents the amount of free
1017 trailing space to leave untrimmed. If this argument is zero, only
1018 the minimum amount of memory to maintain internal data structures
1019 will be left. Non-zero arguments can be supplied to maintain enough
1020 trailing space to service future expected allocations without having
1021 to re-obtain memory from the system.
1022
1023 Malloc_trim returns 1 if it actually released any memory, else 0.
1024*/
1025int dlmalloc_trim(size_t);
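/*
  Illustrative use only: after freeing a large working set, a program
  might ask for unused trailing memory back while keeping some slack
  for upcoming allocations:

    int released = dlmalloc_trim(64 * 1024);  // 1 if any memory returned
*/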
1026
1027/*
1028 malloc_walk_free_pages(handler, harg)
1029
1030 Calls the provided handler on each free region in the heap. The
1031 memory between start and end is guaranteed not to contain any
1032 important data, so the handler is free to alter the contents
1033 in any way. This can be used to advise the OS that large free
1034 regions may be swapped out.
1035
1036 The value in harg will be passed to each call of the handler.
1037 */
1038void dlmalloc_walk_free_pages(void(*)(void*, void*, void*), void*);
1039
1040/*
1041 malloc_walk_heap(handler, harg)
1042
1043 Calls the provided handler on each object or free region in the
1044 heap. The handler will receive the chunk pointer and length, the
1045 object pointer and length, and the value in harg on each call.
1046 */
1047void dlmalloc_walk_heap(void(*)(const void*, size_t,
1048 const void*, size_t, void*),
1049 void*);
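/*
  A sketch of the callback shapes these walkers expect; the handler
  name and the byte-counting logic are illustrative, not part of the
  API:

    static void on_free_region(void* start, void* end, void* harg) {
      size_t* total = (size_t*)harg;
      *total += (size_t)((char*)end - (char*)start);
    }

    size_t free_bytes = 0;
    dlmalloc_walk_free_pages(on_free_region, &free_bytes);
*/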
1050
1051/*
1052 malloc_usable_size(void* p);
1053
1054 Returns the number of bytes you can actually use in
1055 an allocated chunk, which may be more than you requested (although
1056 often not) due to alignment and minimum size constraints.
1057 You can use this many bytes without worrying about
1058 overwriting other allocated objects. This is not a particularly great
1059 programming practice. malloc_usable_size can be more useful in
1060 debugging and assertions, for example:
1061
1062 p = malloc(n);
1063 assert(malloc_usable_size(p) >= 256);
1064*/
1065size_t dlmalloc_usable_size(void*);
1066
1067/*
1068 malloc_stats();
1069 Prints on stderr the amount of space obtained from the system (both
1070 via sbrk and mmap), the maximum amount (which may be more than
1071 current if malloc_trim and/or munmap got called), and the current
1072 number of bytes allocated via malloc (or realloc, etc) but not yet
1073 freed. Note that this is the number of bytes allocated, not the
1074 number requested. It will be larger than the number requested
1075 because of alignment and bookkeeping overhead. Because it includes
1076 alignment wastage as being in use, this figure may be greater than
1077 zero even when no user-level chunks are allocated.
1078
1079 The reported current and maximum system memory can be inaccurate if
1080 a program makes other calls to system memory allocation functions
1081 (normally sbrk) outside of malloc.
1082
1083 malloc_stats prints only the most commonly interesting statistics.
1084 More information can be obtained by calling mallinfo.
1085*/
1086void dlmalloc_stats(void);
1087
1088#endif /* ONLY_MSPACES */
1089
1090#if MSPACES
1091
1092/*
1093 mspace is an opaque type representing an independent
1094 region of space that supports mspace_malloc, etc.
1095*/
1096typedef void* mspace;
1097
1098/*
1099 create_mspace creates and returns a new independent space with the
1100 given initial capacity, or, if 0, the default granularity size. It
1101 returns null if there is no system memory available to create the
1102 space. If argument locked is non-zero, the space uses a separate
1103 lock to control access. The capacity of the space will grow
1104 dynamically as needed to service mspace_malloc requests. You can
1105 control the sizes of incremental increases of this space by
1106 compiling with a different DEFAULT_GRANULARITY or dynamically
1107 setting with mallopt(M_GRANULARITY, value).
1108*/
1109mspace create_mspace(size_t capacity, int locked);
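/*
  A minimal lifecycle sketch (illustrative only): create a private
  space, allocate from it, and tear the whole space down in one call.

    mspace arena = create_mspace(0, 0);      // default capacity, no lock
    if (arena != 0) {
      void* p = mspace_malloc(arena, 128);
      mspace_free(arena, p);
      destroy_mspace(arena);                 // returns all memory at once
    }
*/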
1110
1111/*
1112 destroy_mspace destroys the given space, and attempts to return all
1113 of its memory back to the system, returning the total number of
1114 bytes freed. After destruction, the results of access to all memory
1115 used by the space become undefined.
1116*/
1117size_t destroy_mspace(mspace msp);
1118
1119/*
1120 create_mspace_with_base uses the memory supplied as the initial base
1121 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1122 space is used for bookkeeping, so the capacity must be at least this
1123 large. (Otherwise 0 is returned.) When this initial space is
1124 exhausted, additional memory will be obtained from the system.
1125 Destroying this space will deallocate all additionally allocated
1126 space (if possible) but not the initial base.
1127*/
1128mspace create_mspace_with_base(void* base, size_t capacity, int locked);
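/*
  Sketch with a caller-supplied buffer (sizes are illustrative): the
  base must be big enough to cover the bookkeeping overhead noted
  above, and destroy_mspace will not free the base itself.

    static char backing[64 * 1024];
    mspace ms = create_mspace_with_base(backing, sizeof(backing), 0);
    void* p = (ms != 0) ? mspace_malloc(ms, 256) : 0;
*/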
1129
1130/*
1131 mspace_malloc behaves as malloc, but operates within
1132 the given space.
1133*/
1134void* mspace_malloc(mspace msp, size_t bytes);
1135
1136/*
1137 mspace_free behaves as free, but operates within
1138 the given space.
1139
1140 If compiled with FOOTERS==1, mspace_free is not actually needed.
1141 free may be called instead of mspace_free because freed chunks from
1142 any space are handled by their originating spaces.
1143*/
1144void mspace_free(mspace msp, void* mem);
1145
1146/*
1147 mspace_realloc behaves as realloc, but operates within
1148 the given space.
1149
1150 If compiled with FOOTERS==1, mspace_realloc is not actually
1151 needed. realloc may be called instead of mspace_realloc because
1152 realloced chunks from any space are handled by their originating
1153 spaces.
1154*/
1155void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1156
1157#if ANDROID /* Added for Android, not part of dlmalloc as released */
1158/*
1159 mspace_merge_objects will merge allocated memory mema and memb
1160 together, provided memb immediately follows mema. It is roughly as
1161 if memb has been freed and mema has been realloced to a larger size.
1162 On successfully merging, mema will be returned. If either argument
1163 is null or memb does not immediately follow mema, null will be
1164 returned.
1165
1166 Both mema and memb should have been previously allocated using
1167 malloc or a related routine such as realloc. If either mema or memb
1168 was not malloced or was previously freed, the result is undefined,
1169 but like mspace_free, the default is to abort the program.
1170*/
1171void* mspace_merge_objects(mspace msp, void* mema, void* memb);
1172#endif
1173
1174/*
1175 mspace_calloc behaves as calloc, but operates within
1176 the given space.
1177*/
1178void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1179
1180/*
1181 mspace_memalign behaves as memalign, but operates within
1182 the given space.
1183*/
1184void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1185
1186/*
1187 mspace_independent_calloc behaves as independent_calloc, but
1188 operates within the given space.
1189*/
1190void** mspace_independent_calloc(mspace msp, size_t n_elements,
1191 size_t elem_size, void* chunks[]);
1192
1193/*
1194 mspace_independent_comalloc behaves as independent_comalloc, but
1195 operates within the given space.
1196*/
1197void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1198 size_t sizes[], void* chunks[]);
1199
1200/*
1201 mspace_footprint() returns the number of bytes obtained from the
1202 system for this space.
1203*/
1204size_t mspace_footprint(mspace msp);
1205
1206/*
1207 mspace_max_footprint() returns the peak number of bytes obtained from the
1208 system for this space.
1209*/
1210size_t mspace_max_footprint(mspace msp);
1211
1212
1213#if !NO_MALLINFO
1214/*
1215 mspace_mallinfo behaves as mallinfo, but reports properties of
1216 the given space.
1217*/
1218struct mallinfo mspace_mallinfo(mspace msp);
1219#endif /* NO_MALLINFO */
1220
1221/*
1222 mspace_malloc_stats behaves as malloc_stats, but reports
1223 properties of the given space.
1224*/
1225void mspace_malloc_stats(mspace msp);
1226
1227/*
1228 mspace_trim behaves as malloc_trim, but
1229 operates within the given space.
1230*/
1231int mspace_trim(mspace msp, size_t pad);
1232
1233/*
1234 An alias for mallopt.
1235*/
1236int mspace_mallopt(int, int);
1237
1238#endif /* MSPACES */
1239
1240#ifdef __cplusplus
1241}; /* end of extern "C" */
1242#endif /* __cplusplus */
1243
1244/*
1245 ========================================================================
1246 To make a fully customizable malloc.h header file, cut everything
1247 above this line, put into file malloc.h, edit to suit, and #include it
1248 on the next line, as well as in programs that use this malloc.
1249 ========================================================================
1250*/
1251
1252/* #include "malloc.h" */
1253
1254/*------------------------------ internal #includes ---------------------- */
1255
1256#ifdef WIN32
1257#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1258#endif /* WIN32 */
1259
1260#include <stdio.h> /* for printing in malloc_stats */
1261
1262#ifndef LACKS_ERRNO_H
1263#include <errno.h> /* for MALLOC_FAILURE_ACTION */
1264#endif /* LACKS_ERRNO_H */
1265#if FOOTERS
1266#include <time.h> /* for magic initialization */
1267#endif /* FOOTERS */
1268#ifndef LACKS_STDLIB_H
1269#include <stdlib.h> /* for abort() */
1270#endif /* LACKS_STDLIB_H */
1271#ifdef DEBUG
1272#if ABORT_ON_ASSERT_FAILURE
1273#define assert(x) if(!(x)) ABORT
1274#else /* ABORT_ON_ASSERT_FAILURE */
1275#include <assert.h>
1276#endif /* ABORT_ON_ASSERT_FAILURE */
1277#else /* DEBUG */
1278#define assert(x)
1279#endif /* DEBUG */
1280#ifndef LACKS_STRING_H
1281#include <string.h> /* for memset etc */
1282#endif /* LACKS_STRING_H */
1283#if USE_BUILTIN_FFS
1284#ifndef LACKS_STRINGS_H
1285#include <strings.h> /* for ffs */
1286#endif /* LACKS_STRINGS_H */
1287#endif /* USE_BUILTIN_FFS */
1288#if HAVE_MMAP
1289#ifndef LACKS_SYS_MMAN_H
1290#include <sys/mman.h> /* for mmap */
1291#endif /* LACKS_SYS_MMAN_H */
1292#ifndef LACKS_FCNTL_H
1293#include <fcntl.h>
1294#endif /* LACKS_FCNTL_H */
1295#endif /* HAVE_MMAP */
1296#if HAVE_MORECORE
1297#ifndef LACKS_UNISTD_H
1298#include <unistd.h> /* for sbrk */
1299#else /* LACKS_UNISTD_H */
1300#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1301extern void* sbrk(ptrdiff_t);
1302#endif /* FreeBSD etc */
1303#endif /* LACKS_UNISTD_H */
1304#endif /* HAVE_MORECORE */
1305
1306#ifndef WIN32
1307#ifndef malloc_getpagesize
1308# ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1309# ifndef _SC_PAGE_SIZE
1310# define _SC_PAGE_SIZE _SC_PAGESIZE
1311# endif
1312# endif
1313# ifdef _SC_PAGE_SIZE
1314# define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1315# else
1316# if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1317 extern size_t getpagesize();
1318# define malloc_getpagesize getpagesize()
1319# else
1320# ifdef WIN32 /* use supplied emulation of getpagesize */
1321# define malloc_getpagesize getpagesize()
1322# else
1323# ifndef LACKS_SYS_PARAM_H
1324# include <sys/param.h>
1325# endif
1326# ifdef EXEC_PAGESIZE
1327# define malloc_getpagesize EXEC_PAGESIZE
1328# else
1329# ifdef NBPG
1330# ifndef CLSIZE
1331# define malloc_getpagesize NBPG
1332# else
1333# define malloc_getpagesize (NBPG * CLSIZE)
1334# endif
1335# else
1336# ifdef NBPC
1337# define malloc_getpagesize NBPC
1338# else
1339# ifdef PAGESIZE
1340# define malloc_getpagesize PAGESIZE
1341# else /* just guess */
1342# define malloc_getpagesize ((size_t)4096U)
1343# endif
1344# endif
1345# endif
1346# endif
1347# endif
1348# endif
1349# endif
1350#endif
1351#endif
1352
1353/* ------------------- size_t and alignment properties -------------------- */
1354
1355/* The byte and bit size of a size_t */
1356#define SIZE_T_SIZE (sizeof(size_t))
1357#define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1358
1359/* Some constants coerced to size_t */
1360/* Annoying but necessary to avoid errors on some platforms */
1361#define SIZE_T_ZERO ((size_t)0)
1362#define SIZE_T_ONE ((size_t)1)
1363#define SIZE_T_TWO ((size_t)2)
1364#define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1365#define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1366#define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1367#define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1368
1369/* The bit mask value corresponding to MALLOC_ALIGNMENT */
1370#define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1371
1372/* True if address a has acceptable alignment */
1373#define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1374
1375/* the number of bytes to offset an address to align it */
1376#define align_offset(A)\
1377 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1378 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
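
/*
  A worked sketch of the two macros above (not compiled), assuming the
  common case MALLOC_ALIGNMENT == 8, so CHUNK_ALIGN_MASK == 7.
*/
#if 0
static void example_alignment(void) {
  assert(is_aligned((void*)0x1000));        /* 0x1000 & 7 == 0 */
  assert(align_offset((void*)0x1000) == 0); /* already aligned */
  assert(align_offset((void*)0x1005) == 3); /* (8 - (0x1005 & 7)) & 7 == 3 */
}
#endif
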
1379
1380/* -------------------------- MMAP preliminaries ------------------------- */
1381
1382/*
1383  If HAVE_MORECORE or HAVE_MMAP is false, we just define the corresponding
1384  calls and checks to fail so the compiler optimizer can delete code
1385  rather than using so many "#if"s.
1386*/
1387
1388
1389/* MORECORE and MMAP must return MFAIL on failure */
1390#define MFAIL ((void*)(MAX_SIZE_T))
1391#define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1392
1393#if !HAVE_MMAP
1394#define IS_MMAPPED_BIT (SIZE_T_ZERO)
1395#define USE_MMAP_BIT (SIZE_T_ZERO)
1396#define CALL_MMAP(s) MFAIL
1397#define CALL_MUNMAP(a, s) (-1)
1398#define DIRECT_MMAP(s) MFAIL
1399
1400#else /* HAVE_MMAP */
1401#define IS_MMAPPED_BIT (SIZE_T_ONE)
1402#define USE_MMAP_BIT (SIZE_T_ONE)
1403
1404#ifndef WIN32
1405#define CALL_MUNMAP(a, s) munmap((a), (s))
1406#define MMAP_PROT (PROT_READ|PROT_WRITE)
1407#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1408#define MAP_ANONYMOUS MAP_ANON
1409#endif /* MAP_ANON */
1410#ifdef MAP_ANONYMOUS
1411#define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1412#define CALL_MMAP(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1413#else /* MAP_ANONYMOUS */
1414/*
1415 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1416 is unlikely to be needed, but is supplied just in case.
1417*/
1418#define MMAP_FLAGS (MAP_PRIVATE)
1419static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1420#define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
1421 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1422 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1423 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1424#endif /* MAP_ANONYMOUS */
1425
1426#define DIRECT_MMAP(s) CALL_MMAP(s)
1427#else /* WIN32 */
1428
1429/* Win32 MMAP via VirtualAlloc */
1430static void* win32mmap(size_t size) {
1431 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1432 return (ptr != 0)? ptr: MFAIL;
1433}
1434
1435/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1436static void* win32direct_mmap(size_t size) {
1437 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1438 PAGE_READWRITE);
1439 return (ptr != 0)? ptr: MFAIL;
1440}
1441
1442/* This function supports releasing coalesced segments */
1443static int win32munmap(void* ptr, size_t size) {
1444 MEMORY_BASIC_INFORMATION minfo;
1445 char* cptr = ptr;
1446 while (size) {
1447 if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1448 return -1;
1449 if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1450 minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1451 return -1;
1452 if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1453 return -1;
1454 cptr += minfo.RegionSize;
1455 size -= minfo.RegionSize;
1456 }
1457 return 0;
1458}
1459
1460#define CALL_MMAP(s) win32mmap(s)
1461#define CALL_MUNMAP(a, s) win32munmap((a), (s))
1462#define DIRECT_MMAP(s) win32direct_mmap(s)
1463#endif /* WIN32 */
1464#endif /* HAVE_MMAP */
1465
1466#if HAVE_MMAP && HAVE_MREMAP
1467#define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1468#else /* HAVE_MMAP && HAVE_MREMAP */
1469#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1470#endif /* HAVE_MMAP && HAVE_MREMAP */
1471
1472#if HAVE_MORECORE
1473#define CALL_MORECORE(S) MORECORE(S)
1474#else /* HAVE_MORECORE */
1475#define CALL_MORECORE(S) MFAIL
1476#endif /* HAVE_MORECORE */
1477
1478/* mstate bit set if contiguous morecore disabled or failed */
1479#define USE_NONCONTIGUOUS_BIT (4U)
1480
1481/* segment bit set in create_mspace_with_base */
1482#define EXTERN_BIT (8U)
1483
1484
1485/* --------------------------- Lock preliminaries ------------------------ */
1486
1487#if USE_LOCKS
1488
1489/*
1490 When locks are defined, there are up to two global locks:
1491
1492 * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
1493    MORECORE. In many cases sys_alloc requires two calls that should
1494    not be interleaved with calls by other threads. This does not
1495    protect against direct calls to MORECORE by other threads not
1496    using this lock, so there is still code to cope as best we can
1497    with interference.
1498
1499 * magic_init_mutex ensures that mparams.magic and other
1500 unique mparams values are initialized only once.
1501*/
1502
1503#ifndef WIN32
1504/* By default use posix locks */
1505#include <pthread.h>
1506#define MLOCK_T pthread_mutex_t
1507#define INITIAL_LOCK(l) pthread_mutex_init(l, NULL)
1508#define ACQUIRE_LOCK(l) pthread_mutex_lock(l)
1509#define RELEASE_LOCK(l) pthread_mutex_unlock(l)
1510
1511#if HAVE_MORECORE
1512static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
1513#endif /* HAVE_MORECORE */
1514
1515static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
1516
1517#else /* WIN32 */
1518/*
1519 Because lock-protected regions have bounded times, and there
1520 are no recursive lock calls, we can use simple spinlocks.
1521*/
1522
1523#define MLOCK_T long
1524static int win32_acquire_lock (MLOCK_T *sl) {
1525 for (;;) {
1526#ifdef InterlockedCompareExchangePointer
1527 if (!InterlockedCompareExchange(sl, 1, 0))
1528 return 0;
1529#else /* Use older void* version */
1530 if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
1531 return 0;
1532#endif /* InterlockedCompareExchangePointer */
1533 Sleep (0);
1534 }
1535}
1536
1537static void win32_release_lock (MLOCK_T *sl) {
1538 InterlockedExchange (sl, 0);
1539}
1540
1541#define INITIAL_LOCK(l) *(l)=0
1542#define ACQUIRE_LOCK(l) win32_acquire_lock(l)
1543#define RELEASE_LOCK(l) win32_release_lock(l)
1544#if HAVE_MORECORE
1545static MLOCK_T morecore_mutex;
1546#endif /* HAVE_MORECORE */
1547static MLOCK_T magic_init_mutex;
1548#endif /* WIN32 */
1549
1550#define USE_LOCK_BIT (2U)
1551#else /* USE_LOCKS */
1552#define USE_LOCK_BIT (0U)
1553#define INITIAL_LOCK(l)
1554#endif /* USE_LOCKS */
1555
1556#if USE_LOCKS && HAVE_MORECORE
1557#define ACQUIRE_MORECORE_LOCK() ACQUIRE_LOCK(&morecore_mutex);
1558#define RELEASE_MORECORE_LOCK() RELEASE_LOCK(&morecore_mutex);
1559#else /* USE_LOCKS && HAVE_MORECORE */
1560#define ACQUIRE_MORECORE_LOCK()
1561#define RELEASE_MORECORE_LOCK()
1562#endif /* USE_LOCKS && HAVE_MORECORE */
1563
1564#if USE_LOCKS
1565#define ACQUIRE_MAGIC_INIT_LOCK() ACQUIRE_LOCK(&magic_init_mutex);
1566#define RELEASE_MAGIC_INIT_LOCK() RELEASE_LOCK(&magic_init_mutex);
1567#else /* USE_LOCKS */
1568#define ACQUIRE_MAGIC_INIT_LOCK()
1569#define RELEASE_MAGIC_INIT_LOCK()
1570#endif /* USE_LOCKS */
1571
1572
1573/* ----------------------- Chunk representations ------------------------ */
1574
1575/*
1576 (The following includes lightly edited explanations by Colin Plumb.)
1577
1578 The malloc_chunk declaration below is misleading (but accurate and
1579 necessary). It declares a "view" into memory allowing access to
1580 necessary fields at known offsets from a given base.
1581
1582 Chunks of memory are maintained using a `boundary tag' method as
1583 originally described by Knuth. (See the paper by Paul Wilson
1584 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1585 techniques.) Sizes of free chunks are stored both in the front of
1586 each chunk and at the end. This makes consolidating fragmented
1587 chunks into bigger chunks fast. The head fields also hold bits
1588 representing whether chunks are free or in use.
1589
1590 Here are some pictures to make it clearer. They are "exploded" to
1591 show that the state of a chunk can be thought of as extending from
1592 the high 31 bits of the head field of its header through the
1593 prev_foot and PINUSE_BIT bit of the following chunk header.
1594
1595 A chunk that's in use looks like:
1596
1597 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1598 | Size of previous chunk (if P = 1) |
1599 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1600 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1601 | Size of this chunk 1| +-+
1602 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1603 | |
1604 +- -+
1605 | |
1606 +- -+
1607 | :
1608 +- size - sizeof(size_t) available payload bytes -+
1609 : |
1610 chunk-> +- -+
1611 | |
1612 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1613 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
1614 | Size of next chunk (may or may not be in use) | +-+
1615 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1616
1617 And if it's free, it looks like this:
1618
1619 chunk-> +- -+
1620 | User payload (must be in use, or we would have merged!) |
1621 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1622 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1623 | Size of this chunk 0| +-+
1624 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1625 | Next pointer |
1626 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1627 | Prev pointer |
1628 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1629 | :
1630 +- size - sizeof(struct chunk) unused bytes -+
1631 : |
1632 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1633 | Size of this chunk |
1634 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1635 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
1636 | Size of next chunk (must be in use, or we would have merged)| +-+
1637 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1638 | :
1639 +- User payload -+
1640 : |
1641 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1642 |0|
1643 +-+
1644 Note that since we always merge adjacent free chunks, the chunks
1645 adjacent to a free chunk must be in use.
1646
1647 Given a pointer to a chunk (which can be derived trivially from the
1648 payload pointer) we can, in O(1) time, find out whether the adjacent
1649 chunks are free, and if so, unlink them from the lists that they
1650 are on and merge them with the current chunk.
1651
1652 Chunks always begin on even word boundaries, so the mem portion
1653 (which is returned to the user) is also on an even word boundary, and
1654 thus at least double-word aligned.
1655
1656 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
1657 chunk size (which is always a multiple of two words), is an in-use
1658 bit for the *previous* chunk. If that bit is *clear*, then the
1659 word before the current chunk size contains the previous chunk
1660 size, and can be used to find the front of the previous chunk.
1661 The very first chunk allocated always has this bit set, preventing
1662 access to non-existent (or non-owned) memory. If pinuse is set for
1663 any given chunk, then you CANNOT determine the size of the
1664 previous chunk, and might even get a memory addressing fault when
1665 trying to do so.
1666
1667 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
1668 the chunk size redundantly records whether the current chunk is
1669 inuse. This redundancy enables usage checks within free and realloc,
1670 and reduces indirection when freeing and consolidating chunks.
1671
1672 Each freshly allocated chunk must have both cinuse and pinuse set.
1673 That is, each allocated chunk borders either a previously allocated
1674 and still in-use chunk, or the base of its memory arena. This is
1675  ensured by making all allocations from the `lowest' part of any
1676 found chunk. Further, no free chunk physically borders another one,
1677 so each free chunk is known to be preceded and followed by either
1678 inuse chunks or the ends of memory.
1679
1680 Note that the `foot' of the current chunk is actually represented
1681 as the prev_foot of the NEXT chunk. This makes it easier to
1682 deal with alignments etc but can be very confusing when trying
1683 to extend or adapt this code.
1684
1685 The exceptions to all this are
1686
1687 1. The special chunk `top' is the top-most available chunk (i.e.,
1688 the one bordering the end of available memory). It is treated
1689 specially. Top is never included in any bin, is used only if
1690 no other chunk is available, and is released back to the
1691 system if it is very large (see M_TRIM_THRESHOLD). In effect,
1692 the top chunk is treated as larger (and thus less well
1693 fitting) than any other available chunk. The top chunk
1694 doesn't update its trailing size field since there is no next
1695 contiguous chunk that would have to index off it. However,
1696 space is still allocated for it (TOP_FOOT_SIZE) to enable
1697 separation or merging when space is extended.
1698
1699     2. Chunks allocated via mmap, which have the lowest-order bit
1700 (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
1701 PINUSE_BIT in their head fields. Because they are allocated
1702 one-by-one, each must carry its own prev_foot field, which is
1703 also used to hold the offset this chunk has within its mmapped
1704 region, which is needed to preserve alignment. Each mmapped
1705 chunk is trailed by the first two fields of a fake next-chunk
1706 for sake of usage checks.
1707
1708*/
1709
1710struct malloc_chunk {
1711 size_t prev_foot; /* Size of previous chunk (if free). */
1712 size_t head; /* Size and inuse bits. */
1713 struct malloc_chunk* fd; /* double links -- used only if free. */
1714 struct malloc_chunk* bk;
1715};
1716
1717typedef struct malloc_chunk mchunk;
1718typedef struct malloc_chunk* mchunkptr;
1719typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
1720typedef unsigned int bindex_t; /* Described below */
1721typedef unsigned int binmap_t; /* Described below */
1722typedef unsigned int flag_t; /* The type of various bit flag sets */
1723
1724/* ------------------- Chunks sizes and alignments ----------------------- */
1725
1726#define MCHUNK_SIZE (sizeof(mchunk))
1727
1728#if FOOTERS
1729#define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1730#else /* FOOTERS */
1731#define CHUNK_OVERHEAD (SIZE_T_SIZE)
1732#endif /* FOOTERS */
1733
1734/* MMapped chunks need a second word of overhead ... */
1735#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1736/* ... and additional padding for fake next-chunk at foot */
1737#define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
1738
1739/* The smallest size we can malloc is an aligned minimal chunk */
1740#define MIN_CHUNK_SIZE\
1741 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1742
1743/* conversion from malloc headers to user pointers, and back */
1744#define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
1745#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
1746/* chunk associated with aligned address A */
1747#define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
1748
1749/* Bounds on request (not chunk) sizes. */
1750#define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
1751#define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1752
1753/* pad request bytes into a usable size */
1754#define pad_request(req) \
1755 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1756
1757/* pad request, checking for minimum (but not maximum) */
1758#define request2size(req) \
1759 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
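
/*
  Worked example of the size macros above (not compiled). Assumes a
  32-bit build with 4-byte size_t, 8-byte alignment and FOOTERS off,
  so CHUNK_OVERHEAD == 4, MIN_CHUNK_SIZE == 16 and MIN_REQUEST == 11.
*/
#if 0
static void example_request2size(void) {
  assert(request2size(1)  == 16);  /* below MIN_REQUEST -> MIN_CHUNK_SIZE */
  assert(request2size(12) == 16);  /* (12 + 4 + 7) & ~7 == 16 */
  assert(request2size(13) == 24);  /* (13 + 4 + 7) & ~7 == 24 */
  assert(request2size(24) == 32);  /* (24 + 4 + 7) & ~7 == 32 */
}
#endif
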
1760
1761
1762/* ------------------ Operations on head and foot fields ----------------- */
1763
1764/*
1765  The head field of a chunk is or'ed with PINUSE_BIT when the previous
1766  adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is
1767  in use. If the chunk was obtained with mmap, the prev_foot field has
1768  IS_MMAPPED_BIT set; its remaining bits hold the offset of the chunk
1769  within its mmapped region.
1770*/
1771
1772#define PINUSE_BIT (SIZE_T_ONE)
1773#define CINUSE_BIT (SIZE_T_TWO)
1774#define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
1775
1776/* Head value for fenceposts */
1777#define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
1778
1779/* extraction of fields from head words */
1780#define cinuse(p) ((p)->head & CINUSE_BIT)
1781#define pinuse(p) ((p)->head & PINUSE_BIT)
1782#define chunksize(p) ((p)->head & ~(INUSE_BITS))
1783
1784#define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
1785#define clear_cinuse(p) ((p)->head &= ~CINUSE_BIT)
1786
1787/* Treat space at ptr +/- offset as a chunk */
1788#define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1789#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1790
1791/* Ptr to next or previous physical malloc_chunk. */
1792#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1793#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1794
1795/* extract next chunk's pinuse bit */
1796#define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
1797
1798/* Get/set size at footer */
1799#define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1800#define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1801
1802/* Set size, pinuse bit, and foot */
1803#define set_size_and_pinuse_of_free_chunk(p, s)\
1804 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1805
1806/* Set size, pinuse bit, foot, and clear next pinuse */
1807#define set_free_with_pinuse(p, s, n)\
1808 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1809
1810#define is_mmapped(p)\
1811 (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1812
1813/* Get the internal overhead associated with chunk p */
1814#define overhead_for(p)\
1815 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1816
1817/* Return true if malloced space is not necessarily cleared */
1818#if MMAP_CLEARS
1819#define calloc_must_clear(p) (!is_mmapped(p))
1820#else /* MMAP_CLEARS */
1821#define calloc_must_clear(p) (1)
1822#endif /* MMAP_CLEARS */
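
/*
  Illustrative sketch of reading the head bits back (not compiled;
  the values are hypothetical and chosen only to show the macros).
*/
#if 0
static void example_head_bits(void) {
  struct malloc_chunk c;
  c.head = (size_t)48 | PINUSE_BIT | CINUSE_BIT;  /* 48-byte chunk, both bits on */
  assert(chunksize(&c) == 48);
  assert(cinuse(&c) && pinuse(&c));
  /* next_chunk() lands exactly chunksize() bytes past the chunk header */
  assert((char*)next_chunk(&c) == (char*)&c + 48);
}
#endif
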
1823
1824/* ---------------------- Overlaid data structures ----------------------- */
1825
1826/*
1827 When chunks are not in use, they are treated as nodes of either
1828 lists or trees.
1829
1830 "Small" chunks are stored in circular doubly-linked lists, and look
1831 like this:
1832
1833 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1834 | Size of previous chunk |
1835 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1836 `head:' | Size of chunk, in bytes |P|
1837 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1838 | Forward pointer to next chunk in list |
1839 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1840 | Back pointer to previous chunk in list |
1841 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1842 | Unused space (may be 0 bytes long) .
1843 . .
1844 . |
1845nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1846 `foot:' | Size of chunk, in bytes |
1847 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1848
1849 Larger chunks are kept in a form of bitwise digital trees (aka
1850 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
1851 free chunks greater than 256 bytes, their size doesn't impose any
1852 constraints on user chunk sizes. Each node looks like:
1853
1854 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1855 | Size of previous chunk |
1856 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1857 `head:' | Size of chunk, in bytes |P|
1858 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1859 | Forward pointer to next chunk of same size |
1860 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1861 | Back pointer to previous chunk of same size |
1862 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1863 | Pointer to left child (child[0]) |
1864 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1865 | Pointer to right child (child[1]) |
1866 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1867 | Pointer to parent |
1868 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1869 | bin index of this chunk |
1870 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1871 | Unused space .
1872 . |
1873nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1874 `foot:' | Size of chunk, in bytes |
1875 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1876
1877 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
1878 of the same size are arranged in a circularly-linked list, with only
1879 the oldest chunk (the next to be used, in our FIFO ordering)
1880 actually in the tree. (Tree members are distinguished by a non-null
1881  parent pointer.) If a chunk with the same size as an existing node
1882 is inserted, it is linked off the existing node using pointers that
1883 work in the same way as fd/bk pointers of small chunks.
1884
1885 Each tree contains a power of 2 sized range of chunk sizes (the
1886  smallest is 0x100 <= x < 0x180), which is divided in half at each
1887  tree level, with the chunks in the smaller half of the range (0x100
1888  <= x < 0x140 for the top node) in the left subtree and the larger
1889 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
1890 done by inspecting individual bits.
1891
1892 Using these rules, each node's left subtree contains all smaller
1893 sizes than its right subtree. However, the node at the root of each
1894 subtree has no particular ordering relationship to either. (The
1895 dividing line between the subtree sizes is based on trie relation.)
1896 If we remove the last chunk of a given size from the interior of the
1897 tree, we need to replace it with a leaf node. The tree ordering
1898 rules permit a node to be replaced by any leaf below it.
1899
1900 The smallest chunk in a tree (a common operation in a best-fit
1901 allocator) can be found by walking a path to the leftmost leaf in
1902 the tree. Unlike a usual binary tree, where we follow left child
1903 pointers until we reach a null, here we follow the right child
1904 pointer any time the left one is null, until we reach a leaf with
1905 both child pointers null. The smallest chunk in the tree will be
1906 somewhere along that path.
1907
1908 The worst case number of steps to add, find, or remove a node is
1909 bounded by the number of bits differentiating chunks within
1910 bins. Under current bin calculations, this ranges from 6 up to 21
1911 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1912 is of course much better.
1913*/
1914
1915struct malloc_tree_chunk {
1916 /* The first four fields must be compatible with malloc_chunk */
1917 size_t prev_foot;
1918 size_t head;
1919 struct malloc_tree_chunk* fd;
1920 struct malloc_tree_chunk* bk;
1921
1922 struct malloc_tree_chunk* child[2];
1923 struct malloc_tree_chunk* parent;
1924 bindex_t index;
1925};
1926
1927typedef struct malloc_tree_chunk tchunk;
1928typedef struct malloc_tree_chunk* tchunkptr;
1929typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1930
1931/* A little helper macro for trees */
1932#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
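
/*
  A sketch (not compiled) of the leftmost-path walk described above:
  prefer the left child, fall back to the right child when the left is
  null, and remember the smallest chunk seen along the path. This mirrors
  in spirit the traversal the allocator performs when looking for a best
  fit; the helper name is hypothetical.
*/
#if 0
static tchunkptr example_smallest_in_tree(tchunkptr t) {
  tchunkptr v = t;   /* best (smallest) candidate so far */
  tchunkptr u;
  while ((u = leftmost_child(t)) != 0) {
    if (chunksize(u) < chunksize(v))
      v = u;
    t = u;
  }
  return v;
}
#endif
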
1933
1934/* ----------------------------- Segments -------------------------------- */
1935
1936/*
1937 Each malloc space may include non-contiguous segments, held in a
1938 list headed by an embedded malloc_segment record representing the
1939 top-most space. Segments also include flags holding properties of
1940 the space. Large chunks that are directly allocated by mmap are not
1941 included in this list. They are instead independently created and
1942 destroyed without otherwise keeping track of them.
1943
1944 Segment management mainly comes into play for spaces allocated by
1945 MMAP. Any call to MMAP might or might not return memory that is
1946 adjacent to an existing segment. MORECORE normally contiguously
1947 extends the current space, so this space is almost always adjacent,
1948 which is simpler and faster to deal with. (This is why MORECORE is
1949 used preferentially to MMAP when both are available -- see
1950 sys_alloc.) When allocating using MMAP, we don't use any of the
1951 hinting mechanisms (inconsistently) supported in various
1952 implementations of unix mmap, or distinguish reserving from
1953 committing memory. Instead, we just ask for space, and exploit
1954 contiguity when we get it. It is probably possible to do
1955 better than this on some systems, but no general scheme seems
1956 to be significantly better.
1957
1958 Management entails a simpler variant of the consolidation scheme
1959 used for chunks to reduce fragmentation -- new adjacent memory is
1960 normally prepended or appended to an existing segment. However,
1961 there are limitations compared to chunk consolidation that mostly
1962 reflect the fact that segment processing is relatively infrequent
1963 (occurring only when getting memory from system) and that we
1964 don't expect to have huge numbers of segments:
1965
1966 * Segments are not indexed, so traversal requires linear scans. (It
1967 would be possible to index these, but is not worth the extra
1968 overhead and complexity for most programs on most platforms.)
1969 * New segments are only appended to old ones when holding top-most
1970 memory; if they cannot be prepended to others, they are held in
1971 different segments.
1972
1973 Except for the top-most segment of an mstate, each segment record
1974 is kept at the tail of its segment. Segments are added by pushing
1975 segment records onto the list headed by &mstate.seg for the
1976 containing mstate.
1977
1978 Segment flags control allocation/merge/deallocation policies:
1979 * If EXTERN_BIT set, then we did not allocate this segment,
1980 and so should not try to deallocate or merge with others.
1981 (This currently holds only for the initial segment passed
1982 into create_mspace_with_base.)
1983 * If IS_MMAPPED_BIT set, the segment may be merged with
1984 other surrounding mmapped segments and trimmed/de-allocated
1985 using munmap.
1986 * If neither bit is set, then the segment was obtained using
1987 MORECORE so can be merged with surrounding MORECORE'd segments
1988 and deallocated/trimmed using MORECORE with negative arguments.
1989*/
1990
1991struct malloc_segment {
1992 char* base; /* base address */
1993 size_t size; /* allocated size */
1994 struct malloc_segment* next; /* ptr to next segment */
1995 flag_t sflags; /* mmap and extern flag */
1996};
1997
1998#define is_mmapped_segment(S) ((S)->sflags & IS_MMAPPED_BIT)
1999#define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
2000
2001typedef struct malloc_segment msegment;
2002typedef struct malloc_segment* msegmentptr;
2003
2004/* ---------------------------- malloc_state ----------------------------- */
2005
2006/*
2007 A malloc_state holds all of the bookkeeping for a space.
2008 The main fields are:
2009
2010 Top
2011 The topmost chunk of the currently active segment. Its size is
2012 cached in topsize. The actual size of topmost space is
2013 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2014 fenceposts and segment records if necessary when getting more
2015 space from the system. The size at which to autotrim top is
2016 cached from mparams in trim_check, except that it is disabled if
2017 an autotrim fails.
2018
2019 Designated victim (dv)
2020 This is the preferred chunk for servicing small requests that
2021 don't have exact fits. It is normally the chunk split off most
2022 recently to service another small request. Its size is cached in
2023 dvsize. The link fields of this chunk are not maintained since it
2024 is not kept in a bin.
2025
2026 SmallBins
2027 An array of bin headers for free chunks. These bins hold chunks
2028 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2029 chunks of all the same size, spaced 8 bytes apart. To simplify
2030 use in double-linked lists, each bin header acts as a malloc_chunk
2031 pointing to the real first node, if it exists (else pointing to
2032 itself). This avoids special-casing for headers. But to avoid
2033 waste, we allocate only the fd/bk pointers of bins, and then use
2034 repositioning tricks to treat these as the fields of a chunk.
2035
2036 TreeBins
2037 Treebins are pointers to the roots of trees holding a range of
2038 sizes. There are 2 equally spaced treebins for each power of two
2039    from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
2040 larger.
2041
2042 Bin maps
2043 There is one bit map for small bins ("smallmap") and one for
2044    treebins ("treemap"). Each bin sets its bit when non-empty, and
2045 clears the bit when empty. Bit operations are then used to avoid
2046 bin-by-bin searching -- nearly all "search" is done without ever
2047 looking at bins that won't be selected. The bit maps
2048    conservatively use 32 bits per map word, even on a 64-bit system.
2049 For a good description of some of the bit-based techniques used
2050 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2051 supplement at http://hackersdelight.org/). Many of these are
2052 intended to reduce the branchiness of paths through malloc etc, as
2053 well as to reduce the number of memory locations read or written.
2054
2055 Segments
2056 A list of segments headed by an embedded malloc_segment record
2057 representing the initial space.
2058
2059 Address check support
2060 The least_addr field is the least address ever obtained from
2061 MORECORE or MMAP. Attempted frees and reallocs of any address less
2062 than this are trapped (unless INSECURE is defined).
2063
2064 Magic tag
2065 A cross-check field that should always hold same value as mparams.magic.
2066
2067 Flags
2068 Bits recording whether to use MMAP, locks, or contiguous MORECORE
2069
2070 Statistics
2071 Each space keeps track of current and maximum system memory
2072 obtained via MORECORE or MMAP.
2073
2074 Locking
2075 If USE_LOCKS is defined, the "mutex" lock is acquired and released
2076 around every public call using this mspace.
2077*/
2078
2079/* Bin types, widths and sizes */
2080#define NSMALLBINS (32U)
2081#define NTREEBINS (32U)
2082#define SMALLBIN_SHIFT (3U)
2083#define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
2084#define TREEBIN_SHIFT (8U)
2085#define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
2086#define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
2087#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2088
2089struct malloc_state {
2090 binmap_t smallmap;
2091 binmap_t treemap;
2092 size_t dvsize;
2093 size_t topsize;
2094 char* least_addr;
2095 mchunkptr dv;
2096 mchunkptr top;
2097 size_t trim_check;
2098 size_t magic;
2099 mchunkptr smallbins[(NSMALLBINS+1)*2];
2100 tbinptr treebins[NTREEBINS];
2101 size_t footprint;
2102#if USE_MAX_ALLOWED_FOOTPRINT
2103 size_t max_allowed_footprint;
2104#endif
2105 size_t max_footprint;
2106 flag_t mflags;
2107#if USE_LOCKS
2108 MLOCK_T mutex; /* locate lock among fields that rarely change */
2109#endif /* USE_LOCKS */
2110 msegment seg;
2111};
2112
2113typedef struct malloc_state* mstate;
2114
2115/* ------------- Global malloc_state and malloc_params ------------------- */
2116
2117/*
2118 malloc_params holds global properties, including those that can be
2119 dynamically set using mallopt. There is a single instance, mparams,
2120 initialized in init_mparams.
2121*/
2122
2123struct malloc_params {
2124 size_t magic;
2125 size_t page_size;
2126 size_t granularity;
2127 size_t mmap_threshold;
2128 size_t trim_threshold;
2129 flag_t default_mflags;
2130};
2131
2132static struct malloc_params mparams;
2133
2134/* The global malloc_state used for all non-"mspace" calls */
2135static struct malloc_state _gm_
2136#if USE_MAX_ALLOWED_FOOTPRINT
2137 = { .max_allowed_footprint = MAX_SIZE_T };
2138#else
2139 ;
2140#endif
2141
2142#define gm (&_gm_)
2143#define is_global(M) ((M) == &_gm_)
2144#define is_initialized(M) ((M)->top != 0)
2145
2146/* -------------------------- system alloc setup ------------------------- */
2147
2148/* Operations on mflags */
2149
2150#define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2151#define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2152#define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2153
2154#define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2155#define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2156#define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2157
2158#define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2159#define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2160
2161#define set_lock(M,L)\
2162 ((M)->mflags = (L)?\
2163 ((M)->mflags | USE_LOCK_BIT) :\
2164 ((M)->mflags & ~USE_LOCK_BIT))
2165
2166/* page-align a size */
2167#define page_align(S)\
2168 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
2169
2170/* granularity-align a size */
2171#define granularity_align(S)\
2172 (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
2173
2174#define is_page_aligned(S)\
2175 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2176#define is_granularity_aligned(S)\
2177 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2178
2179/* True if segment S holds address A */
2180#define segment_holds(S, A)\
2181 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2182
2183/* Return segment holding given address */
2184static msegmentptr segment_holding(mstate m, char* addr) {
2185 msegmentptr sp = &m->seg;
2186 for (;;) {
2187 if (addr >= sp->base && addr < sp->base + sp->size)
2188 return sp;
2189 if ((sp = sp->next) == 0)
2190 return 0;
2191 }
2192}
2193
2194/* Return true if segment contains a segment link */
2195static int has_segment_link(mstate m, msegmentptr ss) {
2196 msegmentptr sp = &m->seg;
2197 for (;;) {
2198 if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2199 return 1;
2200 if ((sp = sp->next) == 0)
2201 return 0;
2202 }
2203}
2204
2205#ifndef MORECORE_CANNOT_TRIM
2206#define should_trim(M,s) ((s) > (M)->trim_check)
2207#else /* MORECORE_CANNOT_TRIM */
2208#define should_trim(M,s) (0)
2209#endif /* MORECORE_CANNOT_TRIM */
2210
2211/*
2212 TOP_FOOT_SIZE is padding at the end of a segment, including space
2213 that may be needed to place segment records and fenceposts when new
2214 noncontiguous segments are added.
2215*/
2216#define TOP_FOOT_SIZE\
2217 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2218
2219
2220/* ------------------------------- Hooks -------------------------------- */
2221
2222/*
2223 PREACTION should be defined to return 0 on success, and nonzero on
2224 failure. If you are not using locking, you can redefine these to do
2225 anything you like.
2226*/
2227
2228#if USE_LOCKS
2229
2230/* Ensure locks are initialized */
2231#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
2232
2233#define PREACTION(M) ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2234#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2235#else /* USE_LOCKS */
2236
2237#ifndef PREACTION
2238#define PREACTION(M) (0)
2239#endif /* PREACTION */
2240
2241#ifndef POSTACTION
2242#define POSTACTION(M)
2243#endif /* POSTACTION */
2244
2245#endif /* USE_LOCKS */
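
/*
  Illustrative sketch (not compiled): with USE_LOCKS off, a build may
  predefine the hooks before this point, e.g. to trace entry and exit
  of every public call. The helper names are hypothetical.
*/
#if 0
static int  example_enter(void) { fprintf(stderr, "enter\n"); return 0; }
static void example_exit(void)  { fprintf(stderr, "exit\n"); }
#define PREACTION(M)  (example_enter())   /* must return 0 on success */
#define POSTACTION(M) { example_exit(); }
#endif
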
2246
2247/*
2248 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2249 USAGE_ERROR_ACTION is triggered on detected bad frees and
2250 reallocs. The argument p is an address that might have triggered the
2251 fault. It is ignored by the two predefined actions, but might be
2252 useful in custom actions that try to help diagnose errors.
2253*/
2254
2255#if PROCEED_ON_ERROR
2256
2257/* A count of the number of corruption errors causing resets */
2258int malloc_corruption_error_count;
2259
2260/* default corruption action */
2261static void reset_on_error(mstate m);
2262
2263#define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2264#define USAGE_ERROR_ACTION(m, p)
2265
2266#else /* PROCEED_ON_ERROR */
2267
2268#ifndef CORRUPTION_ERROR_ACTION
2269#define CORRUPTION_ERROR_ACTION(m) ABORT
2270#endif /* CORRUPTION_ERROR_ACTION */
2271
2272#ifndef USAGE_ERROR_ACTION
2273#define USAGE_ERROR_ACTION(m,p) ABORT
2274#endif /* USAGE_ERROR_ACTION */
2275
2276#endif /* PROCEED_ON_ERROR */
2277
2278/* -------------------------- Debugging setup ---------------------------- */
2279
2280#if ! DEBUG
2281
2282#define check_free_chunk(M,P)
2283#define check_inuse_chunk(M,P)
2284#define check_malloced_chunk(M,P,N)
2285#define check_mmapped_chunk(M,P)
2286#define check_malloc_state(M)
2287#define check_top_chunk(M,P)
2288
2289#else /* DEBUG */
2290#define check_free_chunk(M,P) do_check_free_chunk(M,P)
2291#define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2292#define check_top_chunk(M,P) do_check_top_chunk(M,P)
2293#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2294#define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2295#define check_malloc_state(M) do_check_malloc_state(M)
2296
2297static void do_check_any_chunk(mstate m, mchunkptr p);
2298static void do_check_top_chunk(mstate m, mchunkptr p);
2299static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2300static void do_check_inuse_chunk(mstate m, mchunkptr p);
2301static void do_check_free_chunk(mstate m, mchunkptr p);
2302static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
2303static void do_check_tree(mstate m, tchunkptr t);
2304static void do_check_treebin(mstate m, bindex_t i);
2305static void do_check_smallbin(mstate m, bindex_t i);
2306static void do_check_malloc_state(mstate m);
2307static int bin_find(mstate m, mchunkptr x);
2308static size_t traverse_and_check(mstate m);
2309#endif /* DEBUG */
2310
2311/* ---------------------------- Indexing Bins ---------------------------- */
2312
2313#define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2314#define small_index(s) ((s) >> SMALLBIN_SHIFT)
2315#define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2316#define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
2317
2318/* addressing by index. See above about smallbin repositioning */
2319#define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2320#define treebin_at(M,i) (&((M)->treebins[i]))
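
/*
  Worked example of the small-bin indexing above (not compiled).
  Assumes a 32-bit build, so MIN_CHUNK_SIZE == 16 and MIN_SMALL_INDEX == 2.
*/
#if 0
static void example_small_bins(void) {
  assert(is_small(16)  && small_index(16)  == 2);   /* the smallest chunk size */
  assert(is_small(248) && small_index(248) == 31);  /* largest small bin */
  assert(!is_small(256));            /* 256-byte chunks go to the tree bins */
  assert(small_index2size(5) == 40); /* bin 5 holds 40-byte chunks */
}
#endif
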
2321
2322/* assign tree index for size S to variable I */
2323#if defined(__GNUC__) && defined(i386)
2324#define compute_tree_index(S, I)\
2325{\
2326 size_t X = S >> TREEBIN_SHIFT;\
2327 if (X == 0)\
2328 I = 0;\
2329 else if (X > 0xFFFF)\
2330 I = NTREEBINS-1;\
2331 else {\
2332 unsigned int K;\
2333 __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm" (X));\
2334 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2335 }\
2336}
2337#else /* GNUC */
2338#define compute_tree_index(S, I)\
2339{\
2340 size_t X = S >> TREEBIN_SHIFT;\
2341 if (X == 0)\
2342 I = 0;\
2343 else if (X > 0xFFFF)\
2344 I = NTREEBINS-1;\
2345 else {\
2346 unsigned int Y = (unsigned int)X;\
2347 unsigned int N = ((Y - 0x100) >> 16) & 8;\
2348 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2349 N += K;\
2350 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2351 K = 14 - N + ((Y <<= K) >> 15);\
2352 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2353 }\
2354}
2355#endif /* GNUC */
2356
2357/* Bit representing maximum resolved size in a treebin at i */
2358#define bit_for_tree_index(i) \
2359 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2360
2361/* Shift placing maximum resolved bit in a treebin at i as sign bit */
2362#define leftshift_for_tree_index(i) \
2363 ((i == NTREEBINS-1)? 0 : \
2364 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2365
2366/* The size of the smallest chunk held in bin with index i */
2367#define minsize_for_tree_index(i) \
2368 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2369 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
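
/*
  Worked example of the tree-bin indexing above (not compiled).
  TREEBIN_SHIFT == 8, so the smallest tree bin starts at 256 bytes and
  each power-of-two range is split into two bins.
*/
#if 0
static void example_tree_index(void) {
  bindex_t i;
  compute_tree_index(256, i);  assert(i == 0);  /* 0x100 <= s < 0x180 */
  compute_tree_index(384, i);  assert(i == 1);  /* 0x180 <= s < 0x200 */
  compute_tree_index(512, i);  assert(i == 2);  /* lower half of next power */
  compute_tree_index(768, i);  assert(i == 3);  /* upper half of next power */
  assert(minsize_for_tree_index(1) == 384);
}
#endif
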
2370
2371
2372/* ------------------------ Operations on bin maps ----------------------- */
2373
2374/* bit corresponding to given index */
2375#define idx2bit(i) ((binmap_t)(1) << (i))
2376
2377/* Mark/Clear bits with given index */
2378#define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2379#define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2380#define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2381
2382#define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2383#define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2384#define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2385
2386/* index corresponding to given bit */
2387
2388#if defined(__GNUC__) && defined(i386)
2389#define compute_bit2idx(X, I)\
2390{\
2391 unsigned int J;\
2392 __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
2393 I = (bindex_t)J;\
2394}
2395
2396#else /* GNUC */
2397#if USE_BUILTIN_FFS
2398#define compute_bit2idx(X, I) I = ffs(X)-1
2399
2400#else /* USE_BUILTIN_FFS */
2401#define compute_bit2idx(X, I)\
2402{\
2403 unsigned int Y = X - 1;\
2404 unsigned int K = Y >> (16-4) & 16;\
2405 unsigned int N = K; Y >>= K;\
2406 N += K = Y >> (8-3) & 8; Y >>= K;\
2407 N += K = Y >> (4-2) & 4; Y >>= K;\
2408 N += K = Y >> (2-1) & 2; Y >>= K;\
2409 N += K = Y >> (1-0) & 1; Y >>= K;\
2410 I = (bindex_t)(N + Y);\
2411}
2412#endif /* USE_BUILTIN_FFS */
2413#endif /* GNUC */
2414
2415/* isolate the least set bit of a bitmap */
2416#define least_bit(x) ((x) & -(x))
2417
2418/* mask with all bits to left of least bit of x on */
2419#define left_bits(x) ((x<<1) | -(x<<1))
2420
2421/* mask with all bits to left of or equal to least bit of x on */
2422#define same_or_left_bits(x) ((x) | -(x))
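
/*
  A small sketch (not compiled) of how these bit tricks locate a
  non-empty bin: isolate the lowest set bit, convert it to an index,
  and mask away bins that are too small.
*/
#if 0
static void example_binmap_ops(void) {
  binmap_t map = idx2bit(3) | idx2bit(5);  /* pretend bins 3 and 5 are non-empty */
  bindex_t i;
  assert(least_bit(map) == idx2bit(3));    /* lowest non-empty bin ... */
  compute_bit2idx(least_bit(map), i);
  assert(i == 3);                          /* ... is bin 3 */
  /* bins strictly above bin 3, used to find the next larger candidate */
  assert((left_bits(idx2bit(3)) & map) == idx2bit(5));
}
#endif
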
2423
2424
2425/* ----------------------- Runtime Check Support ------------------------- */
2426
2427/*
2428 For security, the main invariant is that malloc/free/etc never
2429 writes to a static address other than malloc_state, unless static
2430 malloc_state itself has been corrupted, which cannot occur via
2431 malloc (because of these checks). In essence this means that we
2432 believe all pointers, sizes, maps etc held in malloc_state, but
2433  check all of those linked or offset from other embedded data
2434 structures. These checks are interspersed with main code in a way
2435 that tends to minimize their run-time cost.
2436
2437 When FOOTERS is defined, in addition to range checking, we also
2438  verify footer fields of inuse chunks, which can be used to guarantee
2439 that the mstate controlling malloc/free is intact. This is a
2440 streamlined version of the approach described by William Robertson
2441 et al in "Run-time Detection of Heap-based Overflows" LISA'03
2442 http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2443 of an inuse chunk holds the xor of its mstate and a random seed,
2444 that is checked upon calls to free() and realloc(). This is
2445  (probabilistically) unguessable from outside the program, but can be
2446 computed by any code successfully malloc'ing any chunk, so does not
2447 itself provide protection against code that has already broken
2448 security through some other means. Unlike Robertson et al, we
2449 always dynamically check addresses of all offset chunks (previous,
2450 next, etc). This turns out to be cheaper than relying on hashes.
2451*/
2452
2453#if !INSECURE
2454/* Check if address a is at least as high as any from MORECORE or MMAP */
2455#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2456/* Check if address of next chunk n is higher than base chunk p */
2457#define ok_next(p, n) ((char*)(p) < (char*)(n))
2458/* Check if p has its cinuse bit on */
2459#define ok_cinuse(p) cinuse(p)
2460/* Check if p has its pinuse bit on */
2461#define ok_pinuse(p) pinuse(p)
2462
2463#else /* !INSECURE */
2464#define ok_address(M, a) (1)
2465#define ok_next(b, n) (1)
2466#define ok_cinuse(p) (1)
2467#define ok_pinuse(p) (1)
2468#endif /* !INSECURE */
2469
2470#if (FOOTERS && !INSECURE)
2471/* Check if (alleged) mstate m has expected magic field */
2472#define ok_magic(M) ((M)->magic == mparams.magic)
2473#else /* (FOOTERS && !INSECURE) */
2474#define ok_magic(M) (1)
2475#endif /* (FOOTERS && !INSECURE) */
2476
2477
2478/* In gcc, use __builtin_expect to minimize impact of checks */
2479#if !INSECURE
2480#if defined(__GNUC__) && __GNUC__ >= 3
2481#define RTCHECK(e) __builtin_expect(e, 1)
2482#else /* GNUC */
2483#define RTCHECK(e) (e)
2484#endif /* GNUC */
2485#else /* !INSECURE */
2486#define RTCHECK(e) (1)
2487#endif /* !INSECURE */
2488
2489/* macros to set up inuse chunks with or without footers */
2490
2491#if !FOOTERS
2492
2493#define mark_inuse_foot(M,p,s)
2494
2495/* Set cinuse bit and pinuse bit of next chunk */
2496#define set_inuse(M,p,s)\
2497 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2498 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2499
2500/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2501#define set_inuse_and_pinuse(M,p,s)\
2502 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2503 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2504
2505/* Set size, cinuse and pinuse bit of this chunk */
2506#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2507 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2508
2509#else /* FOOTERS */
2510
2511/* Set foot of inuse chunk to be xor of mstate and seed */
2512#define mark_inuse_foot(M,p,s)\
2513 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2514
2515#define get_mstate_for(p)\
2516 ((mstate)(((mchunkptr)((char*)(p) +\
2517 (chunksize(p))))->prev_foot ^ mparams.magic))
2518
2519#define set_inuse(M,p,s)\
2520 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2521 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2522 mark_inuse_foot(M,p,s))
2523
2524#define set_inuse_and_pinuse(M,p,s)\
2525 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2526 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
2527 mark_inuse_foot(M,p,s))
2528
2529#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2530 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2531 mark_inuse_foot(M, p, s))
2532
2533#endif /* !FOOTERS */
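
/*
  Illustrative sketch (not compiled, FOOTERS builds only) of the footer
  round trip described above: the stored footer is the xor of the owning
  mstate and the random magic, so xoring with the magic recovers the
  mstate. The helper name is hypothetical; p must be an inuse,
  non-mmapped chunk and s its chunksize.
*/
#if 0
static void example_footer_roundtrip(mstate m, mchunkptr p, size_t s) {
  mark_inuse_foot(m, p, s);        /* footer := (size_t)m ^ mparams.magic */
  assert(get_mstate_for(p) == m);  /* xor with the magic gives m back */
}
#endif
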
2534
2535/* ---------------------------- setting mparams -------------------------- */
2536
2537/* Initialize mparams */
2538static int init_mparams(void) {
2539 if (mparams.page_size == 0) {
2540 size_t s;
2541
2542 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2543 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
2544#if MORECORE_CONTIGUOUS
2545 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
2546#else /* MORECORE_CONTIGUOUS */
2547 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
2548#endif /* MORECORE_CONTIGUOUS */
2549
2550#if (FOOTERS && !INSECURE)
2551 {
2552#if USE_DEV_RANDOM
2553 int fd;
2554 unsigned char buf[sizeof(size_t)];
2555 /* Try to use /dev/urandom, else fall back on using time */
2556 if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
2557 read(fd, buf, sizeof(buf)) == sizeof(buf)) {
2558 s = *((size_t *) buf);
2559 close(fd);
2560 }
2561 else
2562#endif /* USE_DEV_RANDOM */
2563 s = (size_t)(time(0) ^ (size_t)0x55555555U);
2564
2565 s |= (size_t)8U; /* ensure nonzero */
2566 s &= ~(size_t)7U; /* improve chances of fault for bad values */
2567
2568 }
2569#else /* (FOOTERS && !INSECURE) */
2570 s = (size_t)0x58585858U;
2571#endif /* (FOOTERS && !INSECURE) */
2572 ACQUIRE_MAGIC_INIT_LOCK();
2573 if (mparams.magic == 0) {
2574 mparams.magic = s;
2575 /* Set up lock for main malloc area */
2576 INITIAL_LOCK(&gm->mutex);
2577 gm->mflags = mparams.default_mflags;
2578 }
2579 RELEASE_MAGIC_INIT_LOCK();
2580
2581#ifndef WIN32
2582 mparams.page_size = malloc_getpagesize;
2583 mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
2584 DEFAULT_GRANULARITY : mparams.page_size);
2585#else /* WIN32 */
2586 {
2587 SYSTEM_INFO system_info;
2588 GetSystemInfo(&system_info);
2589 mparams.page_size = system_info.dwPageSize;
2590 mparams.granularity = system_info.dwAllocationGranularity;
2591 }
2592#endif /* WIN32 */
2593
2594 /* Sanity-check configuration:
2595 size_t must be unsigned and as wide as pointer type.
2596 ints must be at least 4 bytes.
2597 alignment must be at least 8.
2598 Alignment, min chunk size, and page size must all be powers of 2.
2599 */
2600 if ((sizeof(size_t) != sizeof(char*)) ||
2601 (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
2602 (sizeof(int) < 4) ||
2603 (MALLOC_ALIGNMENT < (size_t)8U) ||
2604 ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
2605 ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
2606 ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
2607 ((mparams.page_size & (mparams.page_size-SIZE_T_ONE)) != 0))
2608 ABORT;
2609 }
2610 return 0;
2611}
2612
2613/* support for mallopt */
2614static int change_mparam(int param_number, int value) {
2615 size_t val = (size_t)value;
2616 init_mparams();
2617 switch(param_number) {
2618 case M_TRIM_THRESHOLD:
2619 mparams.trim_threshold = val;
2620 return 1;
2621 case M_GRANULARITY:
2622 if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
2623 mparams.granularity = val;
2624 return 1;
2625 }
2626 else
2627 return 0;
2628 case M_MMAP_THRESHOLD:
2629 mparams.mmap_threshold = val;
2630 return 1;
2631 default:
2632 return 0;
2633 }
2634}
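
/*
  Usage sketch (not compiled) for the mallopt path above, via the public
  entry point (mallopt, or dlmallopt when USE_DL_PREFIX is defined).
*/
#if 0
static void example_mallopt(void) {
  /* Raise the mmap threshold to 1 MiB: smaller requests keep using the
     normal heap, larger ones are served directly by mmap. */
  assert(mallopt(M_MMAP_THRESHOLD, 1024 * 1024) == 1);
  /* Granularity must be a power of two and at least one page. */
  assert(mallopt(M_GRANULARITY, 64 * 1024) == 1);
  assert(mallopt(M_GRANULARITY, 3000) == 0);  /* rejected: not a power of two */
}
#endif
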
2635
2636#if DEBUG
2637/* ------------------------- Debugging Support --------------------------- */
2638
2639/* Check properties of any chunk, whether free, inuse, mmapped etc */
2640static void do_check_any_chunk(mstate m, mchunkptr p) {
2641 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2642 assert(ok_address(m, p));
2643}
2644
2645/* Check properties of top chunk */
2646static void do_check_top_chunk(mstate m, mchunkptr p) {
2647 msegmentptr sp = segment_holding(m, (char*)p);
2648 size_t sz = chunksize(p);
2649 assert(sp != 0);
2650 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2651 assert(ok_address(m, p));
2652 assert(sz == m->topsize);
2653 assert(sz > 0);
2654 assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
2655 assert(pinuse(p));
2656 assert(!next_pinuse(p));
2657}
2658
2659/* Check properties of (inuse) mmapped chunks */
2660static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
2661 size_t sz = chunksize(p);
2662 size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
2663 assert(is_mmapped(p));
2664 assert(use_mmap(m));
2665 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2666 assert(ok_address(m, p));
2667 assert(!is_small(sz));
2668 assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
2669 assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
2670 assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
2671}
2672
2673/* Check properties of inuse chunks */
2674static void do_check_inuse_chunk(mstate m, mchunkptr p) {
2675 do_check_any_chunk(m, p);
2676 assert(cinuse(p));
2677 assert(next_pinuse(p));
2678 /* If not pinuse and not mmapped, previous chunk has OK offset */
2679 assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
2680 if (is_mmapped(p))
2681 do_check_mmapped_chunk(m, p);
2682}
2683
2684/* Check properties of free chunks */
2685static void do_check_free_chunk(mstate m, mchunkptr p) {
2686 size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2687 mchunkptr next = chunk_plus_offset(p, sz);
2688 do_check_any_chunk(m, p);
2689 assert(!cinuse(p));
2690 assert(!next_pinuse(p));
2691 assert (!is_mmapped(p));
2692 if (p != m->dv && p != m->top) {
2693 if (sz >= MIN_CHUNK_SIZE) {
2694 assert((sz & CHUNK_ALIGN_MASK) == 0);
2695 assert(is_aligned(chunk2mem(p)));
2696 assert(next->prev_foot == sz);
2697 assert(pinuse(p));
2698 assert (next == m->top || cinuse(next));
2699 assert(p->fd->bk == p);
2700 assert(p->bk->fd == p);
2701 }
2702 else /* markers are always of size SIZE_T_SIZE */
2703 assert(sz == SIZE_T_SIZE);
2704 }
2705}
2706
2707/* Check properties of malloced chunks at the point they are malloced */
2708static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
2709 if (mem != 0) {
2710 mchunkptr p = mem2chunk(mem);
2711 size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2712 do_check_inuse_chunk(m, p);
2713 assert((sz & CHUNK_ALIGN_MASK) == 0);
2714 assert(sz >= MIN_CHUNK_SIZE);
2715 assert(sz >= s);
2716 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
2717 assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
2718 }
2719}
2720
2721/* Check a tree and its subtrees. */
2722static void do_check_tree(mstate m, tchunkptr t) {
2723 tchunkptr head = 0;
2724 tchunkptr u = t;
2725 bindex_t tindex = t->index;
2726 size_t tsize = chunksize(t);
2727 bindex_t idx;
2728 compute_tree_index(tsize, idx);
2729 assert(tindex == idx);
2730 assert(tsize >= MIN_LARGE_SIZE);
2731 assert(tsize >= minsize_for_tree_index(idx));
2732 assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
2733
2734 do { /* traverse through chain of same-sized nodes */
2735 do_check_any_chunk(m, ((mchunkptr)u));
2736 assert(u->index == tindex);
2737 assert(chunksize(u) == tsize);
2738 assert(!cinuse(u));
2739 assert(!next_pinuse(u));
2740 assert(u->fd->bk == u);
2741 assert(u->bk->fd == u);
2742 if (u->parent == 0) {
2743 assert(u->child[0] == 0);
2744 assert(u->child[1] == 0);
2745 }
2746 else {
2747 assert(head == 0); /* only one node on chain has parent */
2748 head = u;
2749 assert(u->parent != u);
2750 assert (u->parent->child[0] == u ||
2751 u->parent->child[1] == u ||
2752 *((tbinptr*)(u->parent)) == u);
2753 if (u->child[0] != 0) {
2754 assert(u->child[0]->parent == u);
2755 assert(u->child[0] != u);
2756 do_check_tree(m, u->child[0]);
2757 }
2758 if (u->child[1] != 0) {
2759 assert(u->child[1]->parent == u);
2760 assert(u->child[1] != u);
2761 do_check_tree(m, u->child[1]);
2762 }
2763 if (u->child[0] != 0 && u->child[1] != 0) {
2764 assert(chunksize(u->child[0]) < chunksize(u->child[1]));
2765 }
2766 }
2767 u = u->fd;
2768 } while (u != t);
2769 assert(head != 0);
2770}
2771
2772/* Check all the chunks in a treebin. */
2773static void do_check_treebin(mstate m, bindex_t i) {
2774 tbinptr* tb = treebin_at(m, i);
2775 tchunkptr t = *tb;
2776 int empty = (m->treemap & (1U << i)) == 0;
2777 if (t == 0)
2778 assert(empty);
2779 if (!empty)
2780 do_check_tree(m, t);
2781}
2782
2783/* Check all the chunks in a smallbin. */
2784static void do_check_smallbin(mstate m, bindex_t i) {
2785 sbinptr b = smallbin_at(m, i);
2786 mchunkptr p = b->bk;
2787 unsigned int empty = (m->smallmap & (1U << i)) == 0;
2788 if (p == b)
2789 assert(empty);
2790 if (!empty) {
2791 for (; p != b; p = p->bk) {
2792 size_t size = chunksize(p);
2793 mchunkptr q;
2794 /* each chunk claims to be free */
2795 do_check_free_chunk(m, p);
2796 /* chunk belongs in bin */
2797 assert(small_index(size) == i);
2798 assert(p->bk == b || chunksize(p->bk) == chunksize(p));
2799 /* chunk is followed by an inuse chunk */
2800 q = next_chunk(p);
2801 if (q->head != FENCEPOST_HEAD)
2802 do_check_inuse_chunk(m, q);
2803 }
2804 }
2805}
2806
2807/* Find x in a bin. Used in other check functions. */
2808static int bin_find(mstate m, mchunkptr x) {
2809 size_t size = chunksize(x);
2810 if (is_small(size)) {
2811 bindex_t sidx = small_index(size);
2812 sbinptr b = smallbin_at(m, sidx);
2813 if (smallmap_is_marked(m, sidx)) {
2814 mchunkptr p = b;
2815 do {
2816 if (p == x)
2817 return 1;
2818 } while ((p = p->fd) != b);
2819 }
2820 }
2821 else {
2822 bindex_t tidx;
2823 compute_tree_index(size, tidx);
2824 if (treemap_is_marked(m, tidx)) {
2825 tchunkptr t = *treebin_at(m, tidx);
2826 size_t sizebits = size << leftshift_for_tree_index(tidx);
2827 while (t != 0 && chunksize(t) != size) {
2828 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2829 sizebits <<= 1;
2830 }
2831 if (t != 0) {
2832 tchunkptr u = t;
2833 do {
2834 if (u == (tchunkptr)x)
2835 return 1;
2836 } while ((u = u->fd) != t);
2837 }
2838 }
2839 }
2840 return 0;
2841}
2842
2843/* Traverse each chunk and check it; return total */
2844static size_t traverse_and_check(mstate m) {
2845 size_t sum = 0;
2846 if (is_initialized(m)) {
2847 msegmentptr s = &m->seg;
2848 sum += m->topsize + TOP_FOOT_SIZE;
2849 while (s != 0) {
2850 mchunkptr q = align_as_chunk(s->base);
2851 mchunkptr lastq = 0;
2852 assert(pinuse(q));
2853 while (segment_holds(s, q) &&
2854 q != m->top && q->head != FENCEPOST_HEAD) {
2855 sum += chunksize(q);
2856 if (cinuse(q)) {
2857 assert(!bin_find(m, q));
2858 do_check_inuse_chunk(m, q);
2859 }
2860 else {
2861 assert(q == m->dv || bin_find(m, q));
2862 assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
2863 do_check_free_chunk(m, q);
2864 }
2865 lastq = q;
2866 q = next_chunk(q);
2867 }
2868 s = s->next;
2869 }
2870 }
2871 return sum;
2872}
2873
2874/* Check all properties of malloc_state. */
2875static void do_check_malloc_state(mstate m) {
2876 bindex_t i;
2877 size_t total;
2878 /* check bins */
2879 for (i = 0; i < NSMALLBINS; ++i)
2880 do_check_smallbin(m, i);
2881 for (i = 0; i < NTREEBINS; ++i)
2882 do_check_treebin(m, i);
2883
2884 if (m->dvsize != 0) { /* check dv chunk */
2885 do_check_any_chunk(m, m->dv);
2886 assert(m->dvsize == chunksize(m->dv));
2887 assert(m->dvsize >= MIN_CHUNK_SIZE);
2888 assert(bin_find(m, m->dv) == 0);
2889 }
2890
2891 if (m->top != 0) { /* check top chunk */
2892 do_check_top_chunk(m, m->top);
2893 assert(m->topsize == chunksize(m->top));
2894 assert(m->topsize > 0);
2895 assert(bin_find(m, m->top) == 0);
2896 }
2897
2898 total = traverse_and_check(m);
2899 assert(total <= m->footprint);
2900 assert(m->footprint <= m->max_footprint);
2901#if USE_MAX_ALLOWED_FOOTPRINT
2902 //TODO: change these assertions if we allow for shrinking.
2903 assert(m->footprint <= m->max_allowed_footprint);
2904 assert(m->max_footprint <= m->max_allowed_footprint);
2905#endif
2906}
2907#endif /* DEBUG */
2908
2909/* ----------------------------- statistics ------------------------------ */
2910
2911#if !NO_MALLINFO
2912static struct mallinfo internal_mallinfo(mstate m) {
2913 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
2914 if (!PREACTION(m)) {
2915 check_malloc_state(m);
2916 if (is_initialized(m)) {
2917 size_t nfree = SIZE_T_ONE; /* top always free */
2918 size_t mfree = m->topsize + TOP_FOOT_SIZE;
2919 size_t sum = mfree;
2920 msegmentptr s = &m->seg;
2921 while (s != 0) {
2922 mchunkptr q = align_as_chunk(s->base);
2923 while (segment_holds(s, q) &&
2924 q != m->top && q->head != FENCEPOST_HEAD) {
2925 size_t sz = chunksize(q);
2926 sum += sz;
2927 if (!cinuse(q)) {
2928 mfree += sz;
2929 ++nfree;
2930 }
2931 q = next_chunk(q);
2932 }
2933 s = s->next;
2934 }
2935
2936 nm.arena = sum;
2937 nm.ordblks = nfree;
2938 nm.hblkhd = m->footprint - sum;
2939 nm.usmblks = m->max_footprint;
2940 nm.uordblks = m->footprint - mfree;
2941 nm.fordblks = mfree;
2942 nm.keepcost = m->topsize;
2943 }
2944
2945 POSTACTION(m);
2946 }
2947 return nm;
2948}
2949#endif /* !NO_MALLINFO */
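
/*
  Editor's note -- a sketch, not part of the original sources.  As filled in
  above, the mallinfo fields obey  uordblks + fordblks == footprint  and
  footprint == arena + hblkhd, with keepcost equal to the current top size.
  When !ONLY_MSPACES, callers reach this through dlmallinfo(), defined later
  in this file:

    struct mallinfo mi = dlmallinfo();
    size_t in_use = mi.uordblks;   // footprint minus free bytes
    size_t free_b = mi.fordblks;   // bytes in free chunks, including top
    size_t peak   = mi.usmblks;    // max_footprint recorded so far
*/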
2950
2951static void internal_malloc_stats(mstate m) {
2952 if (!PREACTION(m)) {
2953 size_t maxfp = 0;
2954 size_t fp = 0;
2955 size_t used = 0;
2956 check_malloc_state(m);
2957 if (is_initialized(m)) {
2958 msegmentptr s = &m->seg;
2959 maxfp = m->max_footprint;
2960 fp = m->footprint;
2961 used = fp - (m->topsize + TOP_FOOT_SIZE);
2962
2963 while (s != 0) {
2964 mchunkptr q = align_as_chunk(s->base);
2965 while (segment_holds(s, q) &&
2966 q != m->top && q->head != FENCEPOST_HEAD) {
2967 if (!cinuse(q))
2968 used -= chunksize(q);
2969 q = next_chunk(q);
2970 }
2971 s = s->next;
2972 }
2973 }
2974
2975 fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
2976 fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
2977 fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
2978
2979 POSTACTION(m);
2980 }
2981}
2982
2983/* ----------------------- Operations on smallbins ----------------------- */
2984
2985/*
2986 Various forms of linking and unlinking are defined as macros. Even
2987 the ones for trees, which are very long but have very short typical
2988 paths. This is ugly but reduces reliance on inlining support of
2989 compilers.
2990*/
2991
2992/* Link a free chunk into a smallbin */
2993#define insert_small_chunk(M, P, S) {\
2994 bindex_t I = small_index(S);\
2995 mchunkptr B = smallbin_at(M, I);\
2996 mchunkptr F = B;\
2997 assert(S >= MIN_CHUNK_SIZE);\
2998 if (!smallmap_is_marked(M, I))\
2999 mark_smallmap(M, I);\
3000 else if (RTCHECK(ok_address(M, B->fd)))\
3001 F = B->fd;\
3002 else {\
3003 CORRUPTION_ERROR_ACTION(M);\
3004 }\
3005 B->fd = P;\
3006 F->bk = P;\
3007 P->fd = F;\
3008 P->bk = B;\
3009}
3010
3011/* Unlink a chunk from a smallbin
3012 * Added check: if F->bk != P or B->fd != P, the doubly linked list is
3013 * corrupted, and we abort.
3014 */
3015#define unlink_small_chunk(M, P, S) {\
3016 mchunkptr F = P->fd;\
3017 mchunkptr B = P->bk;\
3018 bindex_t I = small_index(S);\
3019 if (__builtin_expect (F->bk != P || B->fd != P, 0))\
3020 CORRUPTION_ERROR_ACTION(M);\
3021 assert(P != B);\
3022 assert(P != F);\
3023 assert(chunksize(P) == small_index2size(I));\
3024 if (F == B)\
3025 clear_smallmap(M, I);\
3026 else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
3027 (B == smallbin_at(M,I) || ok_address(M, B)))) {\
3028 F->bk = B;\
3029 B->fd = F;\
3030 }\
3031 else {\
3032 CORRUPTION_ERROR_ACTION(M);\
3033 }\
3034}
3035
3036/* Unlink the first chunk from a smallbin
3037 * Added check: if F->bk != P or B->fd != P, the doubly linked list is
3038 * corrupted, and we abort.
3039 */
3040#define unlink_first_small_chunk(M, B, P, I) {\
3041 mchunkptr F = P->fd;\
3042 if (__builtin_expect (F->bk != P || B->fd != P, 0))\
3043 CORRUPTION_ERROR_ACTION(M);\
3044 assert(P != B);\
3045 assert(P != F);\
3046 assert(chunksize(P) == small_index2size(I));\
3047 if (B == F)\
3048 clear_smallmap(M, I);\
3049 else if (RTCHECK(ok_address(M, F))) {\
3050 B->fd = F;\
3051 F->bk = B;\
3052 }\
3053 else {\
3054 CORRUPTION_ERROR_ACTION(M);\
3055 }\
3056}
3057
3058/* Replace dv node, binning the old one */
3059/* Used only when dvsize known to be small */
3060#define replace_dv(M, P, S) {\
3061 size_t DVS = M->dvsize;\
3062 if (DVS != 0) {\
3063 mchunkptr DV = M->dv;\
3064 assert(is_small(DVS));\
3065 insert_small_chunk(M, DV, DVS);\
3066 }\
3067 M->dvsize = S;\
3068 M->dv = P;\
3069}
3070
3071/* ------------------------- Operations on trees ------------------------- */
3072
3073/* Insert chunk into tree */
3074#define insert_large_chunk(M, X, S) {\
3075 tbinptr* H;\
3076 bindex_t I;\
3077 compute_tree_index(S, I);\
3078 H = treebin_at(M, I);\
3079 X->index = I;\
3080 X->child[0] = X->child[1] = 0;\
3081 if (!treemap_is_marked(M, I)) {\
3082 mark_treemap(M, I);\
3083 *H = X;\
3084 X->parent = (tchunkptr)H;\
3085 X->fd = X->bk = X;\
3086 }\
3087 else {\
3088 tchunkptr T = *H;\
3089 size_t K = S << leftshift_for_tree_index(I);\
3090 for (;;) {\
3091 if (chunksize(T) != S) {\
3092 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3093 K <<= 1;\
3094 if (*C != 0)\
3095 T = *C;\
3096 else if (RTCHECK(ok_address(M, C))) {\
3097 *C = X;\
3098 X->parent = T;\
3099 X->fd = X->bk = X;\
3100 break;\
3101 }\
3102 else {\
3103 CORRUPTION_ERROR_ACTION(M);\
3104 break;\
3105 }\
3106 }\
3107 else {\
3108 tchunkptr F = T->fd;\
3109 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3110 T->fd = F->bk = X;\
3111 X->fd = F;\
3112 X->bk = T;\
3113 X->parent = 0;\
3114 break;\
3115 }\
3116 else {\
3117 CORRUPTION_ERROR_ACTION(M);\
3118 break;\
3119 }\
3120 }\
3121 }\
3122 }\
3123}
3124
3125/*
3126 Unlink steps:
3127
3128 1. If x is a chained node, unlink it from its same-sized fd/bk links
3129 and choose its bk node as its replacement.
3130 2. If x was the last node of its size, but not a leaf node, it must
3131 be replaced with a leaf node (not merely one with an open left or
3132 right), to make sure that lefts and rights of descendants
3133 correspond properly to bit masks. We use the rightmost descendant
3134 of x. We could use any other leaf, but this is easy to locate and
3135 tends to counteract removal of leftmosts elsewhere, and so keeps
3136 paths shorter than minimally guaranteed. This doesn't loop much
3137 because on average a node in a tree is near the bottom.
3138 3. If x is the base of a chain (i.e., has parent links) relink
3139 x's parent and children to x's replacement (or null if none).
3140
3141 Added check: if F->bk != X or R->fd != X, the doubly linked list is
3142 corrupted, and we abort.
3143*/
3144
3145#define unlink_large_chunk(M, X) {\
3146 tchunkptr XP = X->parent;\
3147 tchunkptr R;\
3148 if (X->bk != X) {\
3149 tchunkptr F = X->fd;\
3150 R = X->bk;\
3151 if (__builtin_expect (F->bk != X || R->fd != X, 0))\
3152 CORRUPTION_ERROR_ACTION(M);\
3153 if (RTCHECK(ok_address(M, F))) {\
3154 F->bk = R;\
3155 R->fd = F;\
3156 }\
3157 else {\
3158 CORRUPTION_ERROR_ACTION(M);\
3159 }\
3160 }\
3161 else {\
3162 tchunkptr* RP;\
3163 if (((R = *(RP = &(X->child[1]))) != 0) ||\
3164 ((R = *(RP = &(X->child[0]))) != 0)) {\
3165 tchunkptr* CP;\
3166 while ((*(CP = &(R->child[1])) != 0) ||\
3167 (*(CP = &(R->child[0])) != 0)) {\
3168 R = *(RP = CP);\
3169 }\
3170 if (RTCHECK(ok_address(M, RP)))\
3171 *RP = 0;\
3172 else {\
3173 CORRUPTION_ERROR_ACTION(M);\
3174 }\
3175 }\
3176 }\
3177 if (XP != 0) {\
3178 tbinptr* H = treebin_at(M, X->index);\
3179 if (X == *H) {\
3180 if ((*H = R) == 0) \
3181 clear_treemap(M, X->index);\
3182 }\
3183 else if (RTCHECK(ok_address(M, XP))) {\
3184 if (XP->child[0] == X) \
3185 XP->child[0] = R;\
3186 else \
3187 XP->child[1] = R;\
3188 }\
3189 else\
3190 CORRUPTION_ERROR_ACTION(M);\
3191 if (R != 0) {\
3192 if (RTCHECK(ok_address(M, R))) {\
3193 tchunkptr C0, C1;\
3194 R->parent = XP;\
3195 if ((C0 = X->child[0]) != 0) {\
3196 if (RTCHECK(ok_address(M, C0))) {\
3197 R->child[0] = C0;\
3198 C0->parent = R;\
3199 }\
3200 else\
3201 CORRUPTION_ERROR_ACTION(M);\
3202 }\
3203 if ((C1 = X->child[1]) != 0) {\
3204 if (RTCHECK(ok_address(M, C1))) {\
3205 R->child[1] = C1;\
3206 C1->parent = R;\
3207 }\
3208 else\
3209 CORRUPTION_ERROR_ACTION(M);\
3210 }\
3211 }\
3212 else\
3213 CORRUPTION_ERROR_ACTION(M);\
3214 }\
3215 }\
3216}
3217
3218/* Relays to large vs small bin operations */
3219
3220#define insert_chunk(M, P, S)\
3221 if (is_small(S)) insert_small_chunk(M, P, S)\
3222 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3223
3224#define unlink_chunk(M, P, S)\
3225 if (is_small(S)) unlink_small_chunk(M, P, S)\
3226 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3227
3228
3229/* Relays to internal calls to malloc/free from realloc, memalign etc */
3230
3231#if ONLY_MSPACES
3232#define internal_malloc(m, b) mspace_malloc(m, b)
3233#define internal_free(m, mem) mspace_free(m,mem);
3234#else /* ONLY_MSPACES */
3235#if MSPACES
3236#define internal_malloc(m, b)\
3237 (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3238#define internal_free(m, mem)\
3239 if (m == gm) dlfree(mem); else mspace_free(m,mem);
3240#else /* MSPACES */
3241#define internal_malloc(m, b) dlmalloc(b)
3242#define internal_free(m, mem) dlfree(mem)
3243#endif /* MSPACES */
3244#endif /* ONLY_MSPACES */
3245
3246/* ----------------------- Direct-mmapping chunks ----------------------- */
3247
3248/*
3249 Directly mmapped chunks are set up with an offset to the start of
3250 the mmapped region stored in the prev_foot field of the chunk. This
3251 allows reconstruction of the required argument to MUNMAP when freed,
3252 and also allows adjustment of the returned chunk to meet alignment
3253 requirements (especially in memalign). There is also enough space
3254 allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
3255 the PINUSE bit so frees can be checked.
3256*/
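
/*
  Editor's sketch (not from the original sources) of the bookkeeping just
  described, written out as the arithmetic that mmap_alloc below and free
  perform on a directly mmapped chunk:

    char*     mm     = (char*)DIRECT_MMAP(mmsize);    // page-aligned region
    size_t    offset = align_offset(chunk2mem(mm));   // alignment fixup
    mchunkptr p      = (mchunkptr)(mm + offset);
    p->prev_foot     = offset | IS_MMAPPED_BIT;       // remember the offset

    // At free time the original region is reconstructed and unmapped:
    size_t off   = p->prev_foot & ~IS_MMAPPED_BIT;
    size_t total = chunksize(p) + off + MMAP_FOOT_PAD;
    CALL_MUNMAP((char*)p - off, total);
*/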
3257
3258/* Malloc using mmap */
3259static void* mmap_alloc(mstate m, size_t nb) {
3260 size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3261#if USE_MAX_ALLOWED_FOOTPRINT
3262 size_t new_footprint = m->footprint + mmsize;
3263 if (new_footprint <= m->footprint || /* Check for wrap around 0 */
3264 new_footprint > m->max_allowed_footprint)
3265 return 0;
3266#endif
3267 if (mmsize > nb) { /* Check for wrap around 0 */
3268 char* mm = (char*)(DIRECT_MMAP(mmsize));
3269 if (mm != CMFAIL) {
3270 size_t offset = align_offset(chunk2mem(mm));
3271 size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3272 mchunkptr p = (mchunkptr)(mm + offset);
3273 p->prev_foot = offset | IS_MMAPPED_BIT;
3274 (p)->head = (psize|CINUSE_BIT);
3275 mark_inuse_foot(m, p, psize);
3276 chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3277 chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3278
3279 if (mm < m->least_addr)
3280 m->least_addr = mm;
3281 if ((m->footprint += mmsize) > m->max_footprint)
3282 m->max_footprint = m->footprint;
3283 assert(is_aligned(chunk2mem(p)));
3284 check_mmapped_chunk(m, p);
3285 return chunk2mem(p);
3286 }
3287 }
3288 return 0;
3289}
3290
3291/* Realloc using mmap */
3292static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3293 size_t oldsize = chunksize(oldp);
3294 if (is_small(nb)) /* Can't shrink mmap regions below small size */
3295 return 0;
3296 /* Keep old chunk if big enough but not too big */
3297 if (oldsize >= nb + SIZE_T_SIZE &&
3298 (oldsize - nb) <= (mparams.granularity << 1))
3299 return oldp;
3300 else {
3301 size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
3302 size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3303 size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
3304 CHUNK_ALIGN_MASK);
3305 char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3306 oldmmsize, newmmsize, 1);
3307 if (cp != CMFAIL) {
3308 mchunkptr newp = (mchunkptr)(cp + offset);
3309 size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3310 newp->head = (psize|CINUSE_BIT);
3311 mark_inuse_foot(m, newp, psize);
3312 chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3313 chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3314
3315 if (cp < m->least_addr)
3316 m->least_addr = cp;
3317 if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3318 m->max_footprint = m->footprint;
3319 check_mmapped_chunk(m, newp);
3320 return newp;
3321 }
3322 }
3323 return 0;
3324}
3325
3326/* -------------------------- mspace management -------------------------- */
3327
3328/* Initialize top chunk and its size */
3329static void init_top(mstate m, mchunkptr p, size_t psize) {
3330 /* Ensure alignment */
3331 size_t offset = align_offset(chunk2mem(p));
3332 p = (mchunkptr)((char*)p + offset);
3333 psize -= offset;
3334
3335 m->top = p;
3336 m->topsize = psize;
3337 p->head = psize | PINUSE_BIT;
3338 /* set size of fake trailing chunk holding overhead space only once */
3339 chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3340 m->trim_check = mparams.trim_threshold; /* reset on each update */
3341}
3342
3343/* Initialize bins for a new mstate that is otherwise zeroed out */
3344static void init_bins(mstate m) {
3345 /* Establish circular links for smallbins */
3346 bindex_t i;
3347 for (i = 0; i < NSMALLBINS; ++i) {
3348 sbinptr bin = smallbin_at(m,i);
3349 bin->fd = bin->bk = bin;
3350 }
3351}
3352
3353#if PROCEED_ON_ERROR
3354
3355/* default corruption action */
3356static void reset_on_error(mstate m) {
3357 int i;
3358 ++malloc_corruption_error_count;
3359 /* Reinitialize fields to forget about all memory */
3360 m->smallbins = m->treebins = 0;
3361 m->dvsize = m->topsize = 0;
3362 m->seg.base = 0;
3363 m->seg.size = 0;
3364 m->seg.next = 0;
3365 m->top = m->dv = 0;
3366 for (i = 0; i < NTREEBINS; ++i)
3367 *treebin_at(m, i) = 0;
3368 init_bins(m);
3369}
3370#endif /* PROCEED_ON_ERROR */
3371
3372/* Allocate a chunk at newbase and prepend the remainder to the first chunk of the successor base (oldbase). */
3373static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3374 size_t nb) {
3375 mchunkptr p = align_as_chunk(newbase);
3376 mchunkptr oldfirst = align_as_chunk(oldbase);
3377 size_t psize = (char*)oldfirst - (char*)p;
3378 mchunkptr q = chunk_plus_offset(p, nb);
3379 size_t qsize = psize - nb;
3380 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3381
3382 assert((char*)oldfirst > (char*)q);
3383 assert(pinuse(oldfirst));
3384 assert(qsize >= MIN_CHUNK_SIZE);
3385
3386 /* consolidate remainder with first chunk of old base */
3387 if (oldfirst == m->top) {
3388 size_t tsize = m->topsize += qsize;
3389 m->top = q;
3390 q->head = tsize | PINUSE_BIT;
3391 check_top_chunk(m, q);
3392 }
3393 else if (oldfirst == m->dv) {
3394 size_t dsize = m->dvsize += qsize;
3395 m->dv = q;
3396 set_size_and_pinuse_of_free_chunk(q, dsize);
3397 }
3398 else {
3399 if (!cinuse(oldfirst)) {
3400 size_t nsize = chunksize(oldfirst);
3401 unlink_chunk(m, oldfirst, nsize);
3402 oldfirst = chunk_plus_offset(oldfirst, nsize);
3403 qsize += nsize;
3404 }
3405 set_free_with_pinuse(q, qsize, oldfirst);
3406 insert_chunk(m, q, qsize);
3407 check_free_chunk(m, q);
3408 }
3409
3410 check_malloced_chunk(m, chunk2mem(p), nb);
3411 return chunk2mem(p);
3412}
3413
3414
3415/* Add a segment to hold a new noncontiguous region */
3416static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3417 /* Determine locations and sizes of segment, fenceposts, old top */
3418 char* old_top = (char*)m->top;
3419 msegmentptr oldsp = segment_holding(m, old_top);
3420 char* old_end = oldsp->base + oldsp->size;
3421 size_t ssize = pad_request(sizeof(struct malloc_segment));
3422 char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3423 size_t offset = align_offset(chunk2mem(rawsp));
3424 char* asp = rawsp + offset;
3425 char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3426 mchunkptr sp = (mchunkptr)csp;
3427 msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3428 mchunkptr tnext = chunk_plus_offset(sp, ssize);
3429 mchunkptr p = tnext;
3430 int nfences = 0;
3431
3432 /* reset top to new space */
3433 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3434
3435 /* Set up segment record */
3436 assert(is_aligned(ss));
3437 set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3438 *ss = m->seg; /* Push current record */
3439 m->seg.base = tbase;
3440 m->seg.size = tsize;
3441 m->seg.sflags = mmapped;
3442 m->seg.next = ss;
3443
3444 /* Insert trailing fenceposts */
3445 for (;;) {
3446 mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3447 p->head = FENCEPOST_HEAD;
3448 ++nfences;
3449 if ((char*)(&(nextp->head)) < old_end)
3450 p = nextp;
3451 else
3452 break;
3453 }
3454 assert(nfences >= 2);
3455
3456 /* Insert the rest of old top into a bin as an ordinary free chunk */
3457 if (csp != old_top) {
3458 mchunkptr q = (mchunkptr)old_top;
3459 size_t psize = csp - old_top;
3460 mchunkptr tn = chunk_plus_offset(q, psize);
3461 set_free_with_pinuse(q, psize, tn);
3462 insert_chunk(m, q, psize);
3463 }
3464
3465 check_top_chunk(m, m->top);
3466}
3467
3468/* -------------------------- System allocation -------------------------- */
3469
3470/* Get memory from system using MORECORE or MMAP */
3471static void* sys_alloc(mstate m, size_t nb) {
3472 char* tbase = CMFAIL;
3473 size_t tsize = 0;
3474 flag_t mmap_flag = 0;
3475
3476 init_mparams();
3477
3478 /* Directly map large chunks */
3479 if (use_mmap(m) && nb >= mparams.mmap_threshold) {
3480 void* mem = mmap_alloc(m, nb);
3481 if (mem != 0)
3482 return mem;
3483 }
3484
3485#if USE_MAX_ALLOWED_FOOTPRINT
3486 /* Make sure the footprint doesn't grow past max_allowed_footprint.
3487 * This covers all cases except the one where we need to page-align, below.
3488 */
3489 {
3490 size_t new_footprint = m->footprint +
3491 granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3492 if (new_footprint <= m->footprint || /* Check for wrap around 0 */
3493 new_footprint > m->max_allowed_footprint)
3494 return 0;
3495 }
3496#endif
3497
3498 /*
3499 Try getting memory in any of three ways (in most-preferred to
3500 least-preferred order):
3501 1. A call to MORECORE that can normally contiguously extend memory.
3502 (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
3503 or main space is mmapped or a previous contiguous call failed)
3504 2. A call to MMAP new space (disabled if not HAVE_MMAP).
3505 Note that under the default settings, if MORECORE is unable to
3506 fulfill a request, and HAVE_MMAP is true, then mmap is
3507 used as a noncontiguous system allocator. This is a useful backup
3508 strategy for systems with holes in address spaces -- in this case
3509 sbrk cannot contiguously expand the heap, but mmap may be able to
3510 find space.
3511 3. A call to MORECORE that cannot usually contiguously extend memory.
3512 (disabled if not HAVE_MORECORE)
3513 */
3514
3515 if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
3516 char* br = CMFAIL;
3517 msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
3518 size_t asize = 0;
3519 ACQUIRE_MORECORE_LOCK();
3520
3521 if (ss == 0) { /* First time through or recovery */
3522 char* base = (char*)CALL_MORECORE(0);
3523 if (base != CMFAIL) {
3524 asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3525 /* Adjust to end on a page boundary */
3526 if (!is_page_aligned(base)) {
3527 asize += (page_align((size_t)base) - (size_t)base);
3528#if USE_MAX_ALLOWED_FOOTPRINT
3529 /* If the alignment pushes us over max_allowed_footprint,
3530 * poison the upcoming call to MORECORE and continue.
3531 */
3532 {
3533 size_t new_footprint = m->footprint + asize;
3534 if (new_footprint <= m->footprint || /* Check for wrap around 0 */
3535 new_footprint > m->max_allowed_footprint) {
3536 asize = HALF_MAX_SIZE_T;
3537 }
3538 }
3539#endif
3540 }
3541 /* Can't call MORECORE if size is negative when treated as signed */
3542 if (asize < HALF_MAX_SIZE_T &&
3543 (br = (char*)(CALL_MORECORE(asize))) == base) {
3544 tbase = base;
3545 tsize = asize;
3546 }
3547 }
3548 }
3549 else {
3550 /* Subtract out existing available top space from MORECORE request. */
3551 asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
3552 /* Use mem here only if it did contiguously extend the old space */
3553 if (asize < HALF_MAX_SIZE_T &&
3554 (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
3555 tbase = br;
3556 tsize = asize;
3557 }
3558 }
3559
3560 if (tbase == CMFAIL) { /* Cope with partial failure */
3561 if (br != CMFAIL) { /* Try to use/extend the space we did get */
3562 if (asize < HALF_MAX_SIZE_T &&
3563 asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
3564 size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
3565 if (esize < HALF_MAX_SIZE_T) {
3566 char* end = (char*)CALL_MORECORE(esize);
3567 if (end != CMFAIL)
3568 asize += esize;
3569 else { /* Can't use; try to release */
3570 CALL_MORECORE(-asize);
3571 br = CMFAIL;
3572 }
3573 }
3574 }
3575 }
3576 if (br != CMFAIL) { /* Use the space we did get */
3577 tbase = br;
3578 tsize = asize;
3579 }
3580 else
3581 disable_contiguous(m); /* Don't try contiguous path in the future */
3582 }
3583
3584 RELEASE_MORECORE_LOCK();
3585 }
3586
3587 if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
3588 size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
3589 size_t rsize = granularity_align(req);
3590 if (rsize > nb) { /* Fail if wraps around zero */
3591 char* mp = (char*)(CALL_MMAP(rsize));
3592 if (mp != CMFAIL) {
3593 tbase = mp;
3594 tsize = rsize;
3595 mmap_flag = IS_MMAPPED_BIT;
3596 }
3597 }
3598 }
3599
3600 if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
3601 size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3602 if (asize < HALF_MAX_SIZE_T) {
3603 char* br = CMFAIL;
3604 char* end = CMFAIL;
3605 ACQUIRE_MORECORE_LOCK();
3606 br = (char*)(CALL_MORECORE(asize));
3607 end = (char*)(CALL_MORECORE(0));
3608 RELEASE_MORECORE_LOCK();
3609 if (br != CMFAIL && end != CMFAIL && br < end) {
3610 size_t ssize = end - br;
3611 if (ssize > nb + TOP_FOOT_SIZE) {
3612 tbase = br;
3613 tsize = ssize;
3614 }
3615 }
3616 }
3617 }
3618
3619 if (tbase != CMFAIL) {
3620
3621 if ((m->footprint += tsize) > m->max_footprint)
3622 m->max_footprint = m->footprint;
3623
3624 if (!is_initialized(m)) { /* first-time initialization */
3625 m->seg.base = m->least_addr = tbase;
3626 m->seg.size = tsize;
3627 m->seg.sflags = mmap_flag;
3628 m->magic = mparams.magic;
3629 init_bins(m);
3630 if (is_global(m))
3631 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3632 else {
3633 /* Offset top by embedded malloc_state */
3634 mchunkptr mn = next_chunk(mem2chunk(m));
3635 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
3636 }
3637 }
3638
3639 else {
3640 /* Try to merge with an existing segment */
3641 msegmentptr sp = &m->seg;
3642 while (sp != 0 && tbase != sp->base + sp->size)
3643 sp = sp->next;
3644 if (sp != 0 &&
3645 !is_extern_segment(sp) &&
3646 (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
3647 segment_holds(sp, m->top)) { /* append */
3648 sp->size += tsize;
3649 init_top(m, m->top, m->topsize + tsize);
3650 }
3651 else {
3652 if (tbase < m->least_addr)
3653 m->least_addr = tbase;
3654 sp = &m->seg;
3655 while (sp != 0 && sp->base != tbase + tsize)
3656 sp = sp->next;
3657 if (sp != 0 &&
3658 !is_extern_segment(sp) &&
3659 (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
3660 char* oldbase = sp->base;
3661 sp->base = tbase;
3662 sp->size += tsize;
3663 return prepend_alloc(m, tbase, oldbase, nb);
3664 }
3665 else
3666 add_segment(m, tbase, tsize, mmap_flag);
3667 }
3668 }
3669
3670 if (nb < m->topsize) { /* Allocate from new or extended top space */
3671 size_t rsize = m->topsize -= nb;
3672 mchunkptr p = m->top;
3673 mchunkptr r = m->top = chunk_plus_offset(p, nb);
3674 r->head = rsize | PINUSE_BIT;
3675 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3676 check_top_chunk(m, m->top);
3677 check_malloced_chunk(m, chunk2mem(p), nb);
3678 return chunk2mem(p);
3679 }
3680 }
3681
3682 MALLOC_FAILURE_ACTION;
3683 return 0;
3684}
3685
3686/* ----------------------- system deallocation -------------------------- */
3687
3688/* Unmap and unlink any mmapped segments that don't contain used chunks */
3689static size_t release_unused_segments(mstate m) {
3690 size_t released = 0;
3691 msegmentptr pred = &m->seg;
3692 msegmentptr sp = pred->next;
3693 while (sp != 0) {
3694 char* base = sp->base;
3695 size_t size = sp->size;
3696 msegmentptr next = sp->next;
3697 if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
3698 mchunkptr p = align_as_chunk(base);
3699 size_t psize = chunksize(p);
3700 /* Can unmap if first chunk holds entire segment and not pinned */
3701 if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
3702 tchunkptr tp = (tchunkptr)p;
3703 assert(segment_holds(sp, (char*)sp));
3704 if (p == m->dv) {
3705 m->dv = 0;
3706 m->dvsize = 0;
3707 }
3708 else {
3709 unlink_large_chunk(m, tp);
3710 }
3711 if (CALL_MUNMAP(base, size) == 0) {
3712 released += size;
3713 m->footprint -= size;
3714 /* unlink obsoleted record */
3715 sp = pred;
3716 sp->next = next;
3717 }
3718 else { /* back out if cannot unmap */
3719 insert_large_chunk(m, tp, psize);
3720 }
3721 }
3722 }
3723 pred = sp;
3724 sp = next;
3725 }
3726 return released;
3727}
3728
3729static int sys_trim(mstate m, size_t pad) {
3730 size_t released = 0;
3731 if (pad < MAX_REQUEST && is_initialized(m)) {
3732 pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
3733
3734 if (m->topsize > pad) {
3735 /* Shrink top space in granularity-size units, keeping at least one */
3736 size_t unit = mparams.granularity;
3737 size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
3738 SIZE_T_ONE) * unit;
3739 msegmentptr sp = segment_holding(m, (char*)m->top);
3740
3741 if (!is_extern_segment(sp)) {
3742 if (is_mmapped_segment(sp)) {
3743 if (HAVE_MMAP &&
3744 sp->size >= extra &&
3745 !has_segment_link(m, sp)) { /* can't shrink if pinned */
3746 size_t newsize = sp->size - extra;
3747 /* Prefer mremap, fall back to munmap */
3748 if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
3749 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
3750 released = extra;
3751 }
3752 }
3753 }
3754 else if (HAVE_MORECORE) {
3755 if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
3756 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
3757 ACQUIRE_MORECORE_LOCK();
3758 {
3759 /* Make sure end of memory is where we last set it. */
3760 char* old_br = (char*)(CALL_MORECORE(0));
3761 if (old_br == sp->base + sp->size) {
3762 char* rel_br = (char*)(CALL_MORECORE(-extra));
3763 char* new_br = (char*)(CALL_MORECORE(0));
3764 if (rel_br != CMFAIL && new_br < old_br)
3765 released = old_br - new_br;
3766 }
3767 }
3768 RELEASE_MORECORE_LOCK();
3769 }
3770 }
3771
3772 if (released != 0) {
3773 sp->size -= released;
3774 m->footprint -= released;
3775 init_top(m, m->top, m->topsize - released);
3776 check_top_chunk(m, m->top);
3777 }
3778 }
3779
3780 /* Unmap any unused mmapped segments */
3781 if (HAVE_MMAP)
3782 released += release_unused_segments(m);
3783
3784 /* On failure, disable autotrim to avoid repeated failed future calls */
3785 if (released == 0)
3786 m->trim_check = MAX_SIZE_T;
3787 }
3788
3789 return (released != 0)? 1 : 0;
3790}
3791
3792/* ---------------------------- malloc support --------------------------- */
3793
3794/* allocate a large request from the best fitting chunk in a treebin */
3795static void* tmalloc_large(mstate m, size_t nb) {
3796 tchunkptr v = 0;
3797 size_t rsize = -nb; /* Unsigned negation */
3798 tchunkptr t;
3799 bindex_t idx;
3800 compute_tree_index(nb, idx);
3801
3802 if ((t = *treebin_at(m, idx)) != 0) {
3803 /* Traverse tree for this bin looking for node with size == nb */
3804 size_t sizebits = nb << leftshift_for_tree_index(idx);
3805 tchunkptr rst = 0; /* The deepest untaken right subtree */
3806 for (;;) {
3807 tchunkptr rt;
3808 size_t trem = chunksize(t) - nb;
3809 if (trem < rsize) {
3810 v = t;
3811 if ((rsize = trem) == 0)
3812 break;
3813 }
3814 rt = t->child[1];
3815 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3816 if (rt != 0 && rt != t)
3817 rst = rt;
3818 if (t == 0) {
3819 t = rst; /* set t to least subtree holding sizes > nb */
3820 break;
3821 }
3822 sizebits <<= 1;
3823 }
3824 }
3825
3826 if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
3827 binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3828 if (leftbits != 0) {
3829 bindex_t i;
3830 binmap_t leastbit = least_bit(leftbits);
3831 compute_bit2idx(leastbit, i);
3832 t = *treebin_at(m, i);
3833 }
3834 }
3835
3836 while (t != 0) { /* find smallest of tree or subtree */
3837 size_t trem = chunksize(t) - nb;
3838 if (trem < rsize) {
3839 rsize = trem;
3840 v = t;
3841 }
3842 t = leftmost_child(t);
3843 }
3844
3845 /* If dv is a better fit, return 0 so malloc will use it */
3846 if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3847 if (RTCHECK(ok_address(m, v))) { /* split */
3848 mchunkptr r = chunk_plus_offset(v, nb);
3849 assert(chunksize(v) == rsize + nb);
3850 if (RTCHECK(ok_next(v, r))) {
3851 unlink_large_chunk(m, v);
3852 if (rsize < MIN_CHUNK_SIZE)
3853 set_inuse_and_pinuse(m, v, (rsize + nb));
3854 else {
3855 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3856 set_size_and_pinuse_of_free_chunk(r, rsize);
3857 insert_chunk(m, r, rsize);
3858 }
3859 return chunk2mem(v);
3860 }
3861 }
3862 CORRUPTION_ERROR_ACTION(m);
3863 }
3864 return 0;
3865}
3866
3867/* allocate a small request from the best fitting chunk in a treebin */
3868static void* tmalloc_small(mstate m, size_t nb) {
3869 tchunkptr t, v;
3870 size_t rsize;
3871 bindex_t i;
3872 binmap_t leastbit = least_bit(m->treemap);
3873 compute_bit2idx(leastbit, i);
3874
3875 v = t = *treebin_at(m, i);
3876 rsize = chunksize(t) - nb;
3877
3878 while ((t = leftmost_child(t)) != 0) {
3879 size_t trem = chunksize(t) - nb;
3880 if (trem < rsize) {
3881 rsize = trem;
3882 v = t;
3883 }
3884 }
3885
3886 if (RTCHECK(ok_address(m, v))) {
3887 mchunkptr r = chunk_plus_offset(v, nb);
3888 assert(chunksize(v) == rsize + nb);
3889 if (RTCHECK(ok_next(v, r))) {
3890 unlink_large_chunk(m, v);
3891 if (rsize < MIN_CHUNK_SIZE)
3892 set_inuse_and_pinuse(m, v, (rsize + nb));
3893 else {
3894 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3895 set_size_and_pinuse_of_free_chunk(r, rsize);
3896 replace_dv(m, r, rsize);
3897 }
3898 return chunk2mem(v);
3899 }
3900 }
3901
3902 CORRUPTION_ERROR_ACTION(m);
3903 return 0;
3904}
3905
3906/* --------------------------- realloc support --------------------------- */
3907
3908static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3909 if (bytes >= MAX_REQUEST) {
3910 MALLOC_FAILURE_ACTION;
3911 return 0;
3912 }
3913 if (!PREACTION(m)) {
3914 mchunkptr oldp = mem2chunk(oldmem);
3915 size_t oldsize = chunksize(oldp);
3916 mchunkptr next = chunk_plus_offset(oldp, oldsize);
3917 mchunkptr newp = 0;
3918 void* extra = 0;
3919
3920 /* Try to either shrink or extend into top. Else malloc-copy-free */
3921
3922 if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3923 ok_next(oldp, next) && ok_pinuse(next))) {
3924 size_t nb = request2size(bytes);
3925 if (is_mmapped(oldp))
3926 newp = mmap_resize(m, oldp, nb);
3927 else if (oldsize >= nb) { /* already big enough */
3928 size_t rsize = oldsize - nb;
3929 newp = oldp;
3930 if (rsize >= MIN_CHUNK_SIZE) {
3931 mchunkptr remainder = chunk_plus_offset(newp, nb);
3932 set_inuse(m, newp, nb);
3933 set_inuse(m, remainder, rsize);
3934 extra = chunk2mem(remainder);
3935 }
3936 }
3937 else if (next == m->top && oldsize + m->topsize > nb) {
3938 /* Expand into top */
3939 size_t newsize = oldsize + m->topsize;
3940 size_t newtopsize = newsize - nb;
3941 mchunkptr newtop = chunk_plus_offset(oldp, nb);
3942 set_inuse(m, oldp, nb);
3943 newtop->head = newtopsize |PINUSE_BIT;
3944 m->top = newtop;
3945 m->topsize = newtopsize;
3946 newp = oldp;
3947 }
3948 }
3949 else {
3950 USAGE_ERROR_ACTION(m, oldmem);
3951 POSTACTION(m);
3952 return 0;
3953 }
3954
3955 POSTACTION(m);
3956
3957 if (newp != 0) {
3958 if (extra != 0) {
3959 internal_free(m, extra);
3960 }
3961 check_inuse_chunk(m, newp);
3962 return chunk2mem(newp);
3963 }
3964 else {
3965 void* newmem = internal_malloc(m, bytes);
3966 if (newmem != 0) {
3967 size_t oc = oldsize - overhead_for(oldp);
3968 memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3969 internal_free(m, oldmem);
3970 }
3971 return newmem;
3972 }
3973 }
3974 return 0;
3975}
3976
3977/* --------------------------- memalign support -------------------------- */
3978
3979static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3980 if (alignment <= MALLOC_ALIGNMENT) /* Can just use malloc */
3981 return internal_malloc(m, bytes);
3982 if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3983 alignment = MIN_CHUNK_SIZE;
3984 if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3985 size_t a = MALLOC_ALIGNMENT << 1;
3986 while (a < alignment) a <<= 1;
3987 alignment = a;
3988 }
3989
3990 if (bytes >= MAX_REQUEST - alignment) {
3991 if (m != 0) { /* Test isn't needed but avoids compiler warning */
3992 MALLOC_FAILURE_ACTION;
3993 }
3994 }
3995 else {
3996 size_t nb = request2size(bytes);
3997 size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3998 char* mem = (char*)internal_malloc(m, req);
3999 if (mem != 0) {
4000 void* leader = 0;
4001 void* trailer = 0;
4002 mchunkptr p = mem2chunk(mem);
4003
4004 if (PREACTION(m)) return 0;
4005 if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
4006 /*
4007 Find an aligned spot inside chunk. Since we need to give
4008 back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4009 the first calculation places us at a spot with less than
4010 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4011 We've allocated enough total room so that this is always
4012 possible.
4013 */
4014 char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
4015 alignment -
4016 SIZE_T_ONE)) &
4017 -alignment));
4018 char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
4019 br : br+alignment;
4020 mchunkptr newp = (mchunkptr)pos;
4021 size_t leadsize = pos - (char*)(p);
4022 size_t newsize = chunksize(p) - leadsize;
4023
4024 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4025 newp->prev_foot = p->prev_foot + leadsize;
4026 newp->head = (newsize|CINUSE_BIT);
4027 }
4028 else { /* Otherwise, give back leader, use the rest */
4029 set_inuse(m, newp, newsize);
4030 set_inuse(m, p, leadsize);
4031 leader = chunk2mem(p);
4032 }
4033 p = newp;
4034 }
4035
4036 /* Give back spare room at the end */
4037 if (!is_mmapped(p)) {
4038 size_t size = chunksize(p);
4039 if (size > nb + MIN_CHUNK_SIZE) {
4040 size_t remainder_size = size - nb;
4041 mchunkptr remainder = chunk_plus_offset(p, nb);
4042 set_inuse(m, p, nb);
4043 set_inuse(m, remainder, remainder_size);
4044 trailer = chunk2mem(remainder);
4045 }
4046 }
4047
4048 assert (chunksize(p) >= nb);
4049 assert((((size_t)(chunk2mem(p))) % alignment) == 0);
4050 check_inuse_chunk(m, p);
4051 POSTACTION(m);
4052 if (leader != 0) {
4053 internal_free(m, leader);
4054 }
4055 if (trailer != 0) {
4056 internal_free(m, trailer);
4057 }
4058 return chunk2mem(p);
4059 }
4060 }
4061 return 0;
4062}
4063
4064/* ------------------------ comalloc/coalloc support --------------------- */
4065
4066static void** ialloc(mstate m,
4067 size_t n_elements,
4068 size_t* sizes,
4069 int opts,
4070 void* chunks[]) {
4071 /*
4072 This provides common support for independent_X routines, handling
4073 all of the combinations that can result.
4074
4075 The opts arg has:
4076 bit 0 set if all elements are same size (using sizes[0])
4077 bit 1 set if elements should be zeroed
4078 */
4079
4080 size_t element_size; /* chunksize of each element, if all same */
4081 size_t contents_size; /* total size of elements */
4082 size_t array_size; /* request size of pointer array */
4083 void* mem; /* malloced aggregate space */
4084 mchunkptr p; /* corresponding chunk */
4085 size_t remainder_size; /* remaining bytes while splitting */
4086 void** marray; /* either "chunks" or malloced ptr array */
4087 mchunkptr array_chunk; /* chunk for malloced ptr array */
4088 flag_t was_enabled; /* to disable mmap */
4089 size_t size;
4090 size_t i;
4091
4092 /* compute array length, if needed */
4093 if (chunks != 0) {
4094 if (n_elements == 0)
4095 return chunks; /* nothing to do */
4096 marray = chunks;
4097 array_size = 0;
4098 }
4099 else {
4100 /* if empty req, must still return chunk representing empty array */
4101 if (n_elements == 0)
4102 return (void**)internal_malloc(m, 0);
4103 marray = 0;
4104 array_size = request2size(n_elements * (sizeof(void*)));
4105 }
4106
4107 /* compute total element size */
4108 if (opts & 0x1) { /* all-same-size */
4109 element_size = request2size(*sizes);
4110 contents_size = n_elements * element_size;
4111 }
4112 else { /* add up all the sizes */
4113 element_size = 0;
4114 contents_size = 0;
4115 for (i = 0; i != n_elements; ++i)
4116 contents_size += request2size(sizes[i]);
4117 }
4118
4119 size = contents_size + array_size;
4120
4121 /*
4122 Allocate the aggregate chunk. First disable direct-mmapping so
4123 malloc won't use it, since we would not be able to later
4124 free/realloc space internal to a segregated mmap region.
4125 */
4126 was_enabled = use_mmap(m);
4127 disable_mmap(m);
4128 mem = internal_malloc(m, size - CHUNK_OVERHEAD);
4129 if (was_enabled)
4130 enable_mmap(m);
4131 if (mem == 0)
4132 return 0;
4133
4134 if (PREACTION(m)) return 0;
4135 p = mem2chunk(mem);
4136 remainder_size = chunksize(p);
4137
4138 assert(!is_mmapped(p));
4139
4140 if (opts & 0x2) { /* optionally clear the elements */
4141 memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
4142 }
4143
4144 /* If not provided, allocate the pointer array as final part of chunk */
4145 if (marray == 0) {
4146 size_t array_chunk_size;
4147 array_chunk = chunk_plus_offset(p, contents_size);
4148 array_chunk_size = remainder_size - contents_size;
4149 marray = (void**) (chunk2mem(array_chunk));
4150 set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
4151 remainder_size = contents_size;
4152 }
4153
4154 /* split out elements */
4155 for (i = 0; ; ++i) {
4156 marray[i] = chunk2mem(p);
4157 if (i != n_elements-1) {
4158 if (element_size != 0)
4159 size = element_size;
4160 else
4161 size = request2size(sizes[i]);
4162 remainder_size -= size;
4163 set_size_and_pinuse_of_inuse_chunk(m, p, size);
4164 p = chunk_plus_offset(p, size);
4165 }
4166 else { /* the final element absorbs any overallocation slop */
4167 set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
4168 break;
4169 }
4170 }
4171
4172#if DEBUG
4173 if (marray != chunks) {
4174 /* final element must have exactly exhausted chunk */
4175 if (element_size != 0) {
4176 assert(remainder_size == element_size);
4177 }
4178 else {
4179 assert(remainder_size == request2size(sizes[i]));
4180 }
4181 check_inuse_chunk(m, mem2chunk(marray));
4182 }
4183 for (i = 0; i != n_elements; ++i)
4184 check_inuse_chunk(m, mem2chunk(marray[i]));
4185
4186#endif /* DEBUG */
4187
4188 POSTACTION(m);
4189 return marray;
4190}
4191
4192
4193/* -------------------------- public routines ---------------------------- */
4194
4195#if !ONLY_MSPACES
4196
4197void* dlmalloc(size_t bytes) {
4198 /*
4199 Basic algorithm:
4200 If a small request (< 256 bytes minus per-chunk overhead):
4201 1. If one exists, use a remainderless chunk in associated smallbin.
4202 (Remainderless means that there are too few excess bytes to
4203 represent as a chunk.)
4204 2. If it is big enough, use the dv chunk, which is normally the
4205 chunk adjacent to the one used for the most recent small request.
4206 3. If one exists, split the smallest available chunk in a bin,
4207 saving remainder in dv.
4208 4. If it is big enough, use the top chunk.
4209 5. If available, get memory from system and use it
4210 Otherwise, for a large request:
4211 1. Find the smallest available binned chunk that fits, and use it
4212 if it is better fitting than dv chunk, splitting if necessary.
4213 2. If better fitting than any binned chunk, use the dv chunk.
4214 3. If it is big enough, use the top chunk.
4215 4. If request size >= mmap threshold, try to directly mmap this chunk.
4216 5. If available, get memory from system and use it
4217
4218 The ugly gotos here ensure that postaction occurs along all paths.
4219 */
4220
4221 if (!PREACTION(gm)) {
4222 void* mem;
4223 size_t nb;
4224 if (bytes <= MAX_SMALL_REQUEST) {
4225 bindex_t idx;
4226 binmap_t smallbits;
4227 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4228 idx = small_index(nb);
4229 smallbits = gm->smallmap >> idx;
4230
4231 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4232 mchunkptr b, p;
4233 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4234 b = smallbin_at(gm, idx);
4235 p = b->fd;
4236 assert(chunksize(p) == small_index2size(idx));
4237 unlink_first_small_chunk(gm, b, p, idx);
4238 set_inuse_and_pinuse(gm, p, small_index2size(idx));
4239 mem = chunk2mem(p);
4240 check_malloced_chunk(gm, mem, nb);
4241 goto postaction;
4242 }
4243
4244 else if (nb > gm->dvsize) {
4245 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4246 mchunkptr b, p, r;
4247 size_t rsize;
4248 bindex_t i;
4249 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4250 binmap_t leastbit = least_bit(leftbits);
4251 compute_bit2idx(leastbit, i);
4252 b = smallbin_at(gm, i);
4253 p = b->fd;
4254 assert(chunksize(p) == small_index2size(i));
4255 unlink_first_small_chunk(gm, b, p, i);
4256 rsize = small_index2size(i) - nb;
4257 /* Fit here cannot be remainderless if 4-byte sizes */
4258 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4259 set_inuse_and_pinuse(gm, p, small_index2size(i));
4260 else {
4261 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4262 r = chunk_plus_offset(p, nb);
4263 set_size_and_pinuse_of_free_chunk(r, rsize);
4264 replace_dv(gm, r, rsize);
4265 }
4266 mem = chunk2mem(p);
4267 check_malloced_chunk(gm, mem, nb);
4268 goto postaction;
4269 }
4270
4271 else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4272 check_malloced_chunk(gm, mem, nb);
4273 goto postaction;
4274 }
4275 }
4276 }
4277 else if (bytes >= MAX_REQUEST)
4278 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4279 else {
4280 nb = pad_request(bytes);
4281 if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4282 check_malloced_chunk(gm, mem, nb);
4283 goto postaction;
4284 }
4285 }
4286
4287 if (nb <= gm->dvsize) {
4288 size_t rsize = gm->dvsize - nb;
4289 mchunkptr p = gm->dv;
4290 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4291 mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4292 gm->dvsize = rsize;
4293 set_size_and_pinuse_of_free_chunk(r, rsize);
4294 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4295 }
4296 else { /* exhaust dv */
4297 size_t dvs = gm->dvsize;
4298 gm->dvsize = 0;
4299 gm->dv = 0;
4300 set_inuse_and_pinuse(gm, p, dvs);
4301 }
4302 mem = chunk2mem(p);
4303 check_malloced_chunk(gm, mem, nb);
4304 goto postaction;
4305 }
4306
4307 else if (nb < gm->topsize) { /* Split top */
4308 size_t rsize = gm->topsize -= nb;
4309 mchunkptr p = gm->top;
4310 mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4311 r->head = rsize | PINUSE_BIT;
4312 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4313 mem = chunk2mem(p);
4314 check_top_chunk(gm, gm->top);
4315 check_malloced_chunk(gm, mem, nb);
4316 goto postaction;
4317 }
4318
4319 mem = sys_alloc(gm, nb);
4320
4321 postaction:
4322 POSTACTION(gm);
4323 return mem;
4324 }
4325
4326 return 0;
4327}
4328
4329void dlfree(void* mem) {
4330 /*
4331 Consolidate freed chunks with preceding or succeeding bordering
4332 free chunks, if they exist, and then place in a bin. Intermixed
4333 with special cases for top, dv, mmapped chunks, and usage errors.
4334 */
4335
4336 if (mem != 0) {
4337 mchunkptr p = mem2chunk(mem);
4338#if FOOTERS
4339 mstate fm = get_mstate_for(p);
4340 if (!ok_magic(fm)) {
4341 USAGE_ERROR_ACTION(fm, p);
4342 return;
4343 }
4344#else /* FOOTERS */
4345#define fm gm
4346#endif /* FOOTERS */
4347 if (!PREACTION(fm)) {
4348 check_inuse_chunk(fm, p);
4349 if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4350 size_t psize = chunksize(p);
4351 mchunkptr next = chunk_plus_offset(p, psize);
4352 if (!pinuse(p)) {
4353 size_t prevsize = p->prev_foot;
4354 if ((prevsize & IS_MMAPPED_BIT) != 0) {
4355 prevsize &= ~IS_MMAPPED_BIT;
4356 psize += prevsize + MMAP_FOOT_PAD;
4357 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4358 fm->footprint -= psize;
4359 goto postaction;
4360 }
4361 else {
4362 mchunkptr prev = chunk_minus_offset(p, prevsize);
4363 psize += prevsize;
4364 p = prev;
4365 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4366 if (p != fm->dv) {
4367 unlink_chunk(fm, p, prevsize);
4368 }
4369 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4370 fm->dvsize = psize;
4371 set_free_with_pinuse(p, psize, next);
4372 goto postaction;
4373 }
4374 }
4375 else
4376 goto erroraction;
4377 }
4378 }
4379
4380 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4381 if (!cinuse(next)) { /* consolidate forward */
4382 if (next == fm->top) {
4383 size_t tsize = fm->topsize += psize;
4384 fm->top = p;
4385 p->head = tsize | PINUSE_BIT;
4386 if (p == fm->dv) {
4387 fm->dv = 0;
4388 fm->dvsize = 0;
4389 }
4390 if (should_trim(fm, tsize))
4391 sys_trim(fm, 0);
4392 goto postaction;
4393 }
4394 else if (next == fm->dv) {
4395 size_t dsize = fm->dvsize += psize;
4396 fm->dv = p;
4397 set_size_and_pinuse_of_free_chunk(p, dsize);
4398 goto postaction;
4399 }
4400 else {
4401 size_t nsize = chunksize(next);
4402 psize += nsize;
4403 unlink_chunk(fm, next, nsize);
4404 set_size_and_pinuse_of_free_chunk(p, psize);
4405 if (p == fm->dv) {
4406 fm->dvsize = psize;
4407 goto postaction;
4408 }
4409 }
4410 }
4411 else
4412 set_free_with_pinuse(p, psize, next);
4413 insert_chunk(fm, p, psize);
4414 check_free_chunk(fm, p);
4415 goto postaction;
4416 }
4417 }
4418 erroraction:
4419 USAGE_ERROR_ACTION(fm, p);
4420 postaction:
4421 POSTACTION(fm);
4422 }
4423 }
4424#if !FOOTERS
4425#undef fm
4426#endif /* FOOTERS */
4427}
4428
4429void* dlcalloc(size_t n_elements, size_t elem_size) {
4430 void *mem;
4431 if (n_elements && MAX_SIZE_T / n_elements < elem_size) {
4432 /* Fail on overflow */
4433 MALLOC_FAILURE_ACTION;
4434 return NULL;
4435 }
4436 elem_size *= n_elements;
4437 mem = dlmalloc(elem_size);
4438 if (mem && calloc_must_clear(mem2chunk(mem)))
4439 memset(mem, 0, elem_size);
4440 return mem;
4441}
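
/*
  Editor's note -- an illustrative figure, assuming a 32-bit size_t.  Without
  the overflow guard above, dlcalloc(65536, 65537) would compute
  65536 * 65537, which wraps to 65536, and a far-too-small buffer would be
  returned as if it held all the elements.  The guard catches this because
  MAX_SIZE_T / 65536 == 65535, which is less than 65537.
*/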
4442
4443void* dlrealloc(void* oldmem, size_t bytes) {
4444 if (oldmem == 0)
4445 return dlmalloc(bytes);
4446#ifdef REALLOC_ZERO_BYTES_FREES
4447 if (bytes == 0) {
4448 dlfree(oldmem);
4449 return 0;
4450 }
4451#endif /* REALLOC_ZERO_BYTES_FREES */
4452 else {
4453#if ! FOOTERS
4454 mstate m = gm;
4455#else /* FOOTERS */
4456 mstate m = get_mstate_for(mem2chunk(oldmem));
4457 if (!ok_magic(m)) {
4458 USAGE_ERROR_ACTION(m, oldmem);
4459 return 0;
4460 }
4461#endif /* FOOTERS */
4462 return internal_realloc(m, oldmem, bytes);
4463 }
4464}
4465
4466void* dlmemalign(size_t alignment, size_t bytes) {
4467 return internal_memalign(gm, alignment, bytes);
4468}
4469
4470void** dlindependent_calloc(size_t n_elements, size_t elem_size,
4471 void* chunks[]) {
4472 size_t sz = elem_size; /* serves as 1-element array */
4473 return ialloc(gm, n_elements, &sz, 3, chunks);
4474}
4475
4476void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
4477 void* chunks[]) {
4478 return ialloc(gm, n_elements, sizes, 0, chunks);
4479}
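
/*
  Editor's sketch of typical use (the struct and names are only illustrative):
  dlindependent_comalloc carves several independently freeable blocks out of
  one internal allocation, so they end up adjacent and with less per-call
  overhead than separate dlmalloc calls.

    struct pool { int* counts; double* weights; };

    int alloc_pool(struct pool* pl, size_t n) {
      size_t sizes[2];
      void*  ptrs[2];
      sizes[0] = n * sizeof(int);
      sizes[1] = n * sizeof(double);
      if (dlindependent_comalloc(2, sizes, ptrs) == 0)
        return -1;                        // allocation failed
      pl->counts  = (int*)ptrs[0];
      pl->weights = (double*)ptrs[1];
      return 0;              // each pointer may later be passed to dlfree()
    }
*/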
4480
4481void* dlvalloc(size_t bytes) {
4482 size_t pagesz;
4483 init_mparams();
4484 pagesz = mparams.page_size;
4485 return dlmemalign(pagesz, bytes);
4486}
4487
4488void* dlpvalloc(size_t bytes) {
4489 size_t pagesz;
4490 init_mparams();
4491 pagesz = mparams.page_size;
4492 return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
4493}
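
/*
  Editor's note (illustrative figures, assuming a 4096-byte page):
  dlvalloc(100) returns a page-aligned block of at least 100 bytes, while
  dlpvalloc(100) first rounds the length up to a whole page:
  (100 + 4096 - 1) & ~(4096 - 1) == 4096.  Other power-of-two alignments go
  through dlmemalign directly:

    void* p = dlmemalign(64, 1000);   // 64-byte aligned, or 0 on failure
    // ((size_t)p % 64) == 0 whenever p != 0
    dlfree(p);
*/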
4494
4495int dlmalloc_trim(size_t pad) {
4496 int result = 0;
4497 if (!PREACTION(gm)) {
4498 result = sys_trim(gm, pad);
4499 POSTACTION(gm);
4500 }
4501 return result;
4502}
4503
4504size_t dlmalloc_footprint(void) {
4505 return gm->footprint;
4506}
4507
4508#if USE_MAX_ALLOWED_FOOTPRINT
4509size_t dlmalloc_max_allowed_footprint(void) {
4510 return gm->max_allowed_footprint;
4511}
4512
4513void dlmalloc_set_max_allowed_footprint(size_t bytes) {
4514 if (bytes > gm->footprint) {
4515 /* Increase the size in multiples of the granularity,
4516 * which is the smallest unit we request from the system.
4517 */
4518 gm->max_allowed_footprint = gm->footprint +
4519 granularity_align(bytes - gm->footprint);
4520 }
4521 else {
4522 //TODO: allow for reducing the max footprint
4523 gm->max_allowed_footprint = gm->footprint;
4524 }
4525}
4526#endif
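
/*
  Editor's sketch, only meaningful when USE_MAX_ALLOWED_FOOTPRINT is defined:
  capping how much memory the allocator may request from the system.  The cap
  is rounded up in granularity units by the setter above, and a request that
  would push the footprint past it simply fails (the allocation returns 0).

    dlmalloc_set_max_allowed_footprint(8 * 1024 * 1024);  // roughly 8 MB cap
    void* big = dlmalloc(32 * 1024 * 1024);   // expected to fail under the cap
    // big is expected to be 0 here;
    // dlmalloc_max_allowed_footprint() >= dlmalloc_footprint() always holds.
*/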
4527
4528size_t dlmalloc_max_footprint(void) {
4529 return gm->max_footprint;
4530}
4531
4532#if !NO_MALLINFO
4533struct mallinfo dlmallinfo(void) {
4534 return internal_mallinfo(gm);
4535}
4536#endif /* NO_MALLINFO */
4537
4538void dlmalloc_stats() {
4539 internal_malloc_stats(gm);
4540}
4541
4542size_t dlmalloc_usable_size(void* mem) {
4543 if (mem != 0) {
4544 mchunkptr p = mem2chunk(mem);
4545 if (cinuse(p))
4546 return chunksize(p) - overhead_for(p);
4547 }
4548 return 0;
4549}
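
/*
  Editor's note -- a sketch: because requests are padded up to a whole chunk,
  dlmalloc_usable_size() may report more space than was asked for, and that
  many bytes are genuinely usable:

    char* p = (char*)dlmalloc(10);
    if (p != 0) {
      size_t n = dlmalloc_usable_size(p);   // n >= 10
      memset(p, 0, n);                      // the whole usable region is valid
      dlfree(p);
    }
*/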
4550
4551int dlmallopt(int param_number, int value) {
4552 return change_mparam(param_number, value);
4553}
4554
4555#endif /* !ONLY_MSPACES */
4556
4557/* ----------------------------- user mspaces ---------------------------- */
4558
4559#if MSPACES
4560
4561static mstate init_user_mstate(char* tbase, size_t tsize) {
4562 size_t msize = pad_request(sizeof(struct malloc_state));
4563 mchunkptr mn;
4564 mchunkptr msp = align_as_chunk(tbase);
4565 mstate m = (mstate)(chunk2mem(msp));
4566 memset(m, 0, msize);
4567 INITIAL_LOCK(&m->mutex);
4568 msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
4569 m->seg.base = m->least_addr = tbase;
4570 m->seg.size = m->footprint = m->max_footprint = tsize;
4571#if USE_MAX_ALLOWED_FOOTPRINT
4572 m->max_allowed_footprint = MAX_SIZE_T;
4573#endif
4574 m->magic = mparams.magic;
4575 m->mflags = mparams.default_mflags;
4576 disable_contiguous(m);
4577 init_bins(m);
4578 mn = next_chunk(mem2chunk(m));
4579 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
4580 check_top_chunk(m, m->top);
4581 return m;
4582}
4583
4584mspace create_mspace(size_t capacity, int locked) {
4585 mstate m = 0;
4586 size_t msize = pad_request(sizeof(struct malloc_state));
4587 init_mparams(); /* Ensure pagesize etc initialized */
4588
4589 if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4590 size_t rs = ((capacity == 0)? mparams.granularity :
4591 (capacity + TOP_FOOT_SIZE + msize));
4592 size_t tsize = granularity_align(rs);
4593 char* tbase = (char*)(CALL_MMAP(tsize));
4594 if (tbase != CMFAIL) {
4595 m = init_user_mstate(tbase, tsize);
4596 m->seg.sflags = IS_MMAPPED_BIT;
4597 set_lock(m, locked);
4598 }
4599 }
4600 return (mspace)m;
4601}
4602
4603mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
4604 mstate m = 0;
4605 size_t msize = pad_request(sizeof(struct malloc_state));
4606 init_mparams(); /* Ensure pagesize etc initialized */
4607
4608 if (capacity > msize + TOP_FOOT_SIZE &&
4609 capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4610 m = init_user_mstate((char*)base, capacity);
4611 m->seg.sflags = EXTERN_BIT;
4612 set_lock(m, locked);
4613 }
4614 return (mspace)m;
4615}
4616
4617size_t destroy_mspace(mspace msp) {
4618 size_t freed = 0;
4619 mstate ms = (mstate)msp;
4620 if (ok_magic(ms)) {
4621 msegmentptr sp = &ms->seg;
4622 while (sp != 0) {
4623 char* base = sp->base;
4624 size_t size = sp->size;
4625 flag_t flag = sp->sflags;
4626 sp = sp->next;
4627 if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
4628 CALL_MUNMAP(base, size) == 0)
4629 freed += size;
4630 }
4631 }
4632 else {
4633 USAGE_ERROR_ACTION(ms,ms);
4634 }
4635 return freed;
4636}
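
/*
  Editor's sketch (only when MSPACES is enabled): a private heap whose memory
  can be reclaimed wholesale, independently of the global space.  The
  mspace_malloc/mspace_free near-clones used here are defined below.

    mspace msp = create_mspace(0, 0);     // default capacity, no locking
    if (msp != 0) {
      void* a = mspace_malloc(msp, 128);
      void* b = mspace_malloc(msp, 4096);
      mspace_free(msp, a);                // individual frees work as usual
      (void)b;
      destroy_mspace(msp);                // unmaps all of the space at once
    }
*/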
4637
4638/*
4639 mspace versions of routines are near-clones of the global
4640 versions. This is not so nice but better than the alternatives.
4641*/
4642
4643
4644void* mspace_malloc(mspace msp, size_t bytes) {
4645 mstate ms = (mstate)msp;
4646 if (!ok_magic(ms)) {
4647 USAGE_ERROR_ACTION(ms,ms);
4648 return 0;
4649 }
4650 if (!PREACTION(ms)) {
4651 void* mem;
4652 size_t nb;
4653 if (bytes <= MAX_SMALL_REQUEST) {
4654 bindex_t idx;
4655 binmap_t smallbits;
4656 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4657 idx = small_index(nb);
4658 smallbits = ms->smallmap >> idx;
4659
4660 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4661 mchunkptr b, p;
4662 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4663 b = smallbin_at(ms, idx);
4664 p = b->fd;
4665 assert(chunksize(p) == small_index2size(idx));
4666 unlink_first_small_chunk(ms, b, p, idx);
4667 set_inuse_and_pinuse(ms, p, small_index2size(idx));
4668 mem = chunk2mem(p);
4669 check_malloced_chunk(ms, mem, nb);
4670 goto postaction;
4671 }
4672
4673 else if (nb > ms->dvsize) {
4674 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4675 mchunkptr b, p, r;
4676 size_t rsize;
4677 bindex_t i;
4678 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4679 binmap_t leastbit = least_bit(leftbits);
4680 compute_bit2idx(leastbit, i);
4681 b = smallbin_at(ms, i);
4682 p = b->fd;
4683 assert(chunksize(p) == small_index2size(i));
4684 unlink_first_small_chunk(ms, b, p, i);
4685 rsize = small_index2size(i) - nb;
4686 /* Fit here cannot be remainderless if 4-byte sizes */
4687 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4688 set_inuse_and_pinuse(ms, p, small_index2size(i));
4689 else {
4690 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4691 r = chunk_plus_offset(p, nb);
4692 set_size_and_pinuse_of_free_chunk(r, rsize);
4693 replace_dv(ms, r, rsize);
4694 }
4695 mem = chunk2mem(p);
4696 check_malloced_chunk(ms, mem, nb);
4697 goto postaction;
4698 }
4699
4700 else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
4701 check_malloced_chunk(ms, mem, nb);
4702 goto postaction;
4703 }
4704 }
4705 }
4706 else if (bytes >= MAX_REQUEST)
4707 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4708 else {
4709 nb = pad_request(bytes);
4710 if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
4711 check_malloced_chunk(ms, mem, nb);
4712 goto postaction;
4713 }
4714 }
4715
4716 if (nb <= ms->dvsize) {
4717 size_t rsize = ms->dvsize - nb;
4718 mchunkptr p = ms->dv;
4719 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4720 mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
4721 ms->dvsize = rsize;
4722 set_size_and_pinuse_of_free_chunk(r, rsize);
4723 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4724 }
4725 else { /* exhaust dv */
4726 size_t dvs = ms->dvsize;
4727 ms->dvsize = 0;
4728 ms->dv = 0;
4729 set_inuse_and_pinuse(ms, p, dvs);
4730 }
4731 mem = chunk2mem(p);
4732 check_malloced_chunk(ms, mem, nb);
4733 goto postaction;
4734 }
4735
4736 else if (nb < ms->topsize) { /* Split top */
4737 size_t rsize = ms->topsize -= nb;
4738 mchunkptr p = ms->top;
4739 mchunkptr r = ms->top = chunk_plus_offset(p, nb);
4740 r->head = rsize | PINUSE_BIT;
4741 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4742 mem = chunk2mem(p);
4743 check_top_chunk(ms, ms->top);
4744 check_malloced_chunk(ms, mem, nb);
4745 goto postaction;
4746 }
4747
4748 mem = sys_alloc(ms, nb);
4749
4750 postaction:
4751 POSTACTION(ms);
4752 return mem;
4753 }
4754
4755 return 0;
4756}
4757
4758void mspace_free(mspace msp, void* mem) {
4759 if (mem != 0) {
4760 mchunkptr p = mem2chunk(mem);
4761#if FOOTERS
4762 mstate fm = get_mstate_for(p);
4763#else /* FOOTERS */
4764 mstate fm = (mstate)msp;
4765#endif /* FOOTERS */
4766 if (!ok_magic(fm)) {
4767 USAGE_ERROR_ACTION(fm, p);
4768 return;
4769 }
4770 if (!PREACTION(fm)) {
4771 check_inuse_chunk(fm, p);
4772 if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4773 size_t psize = chunksize(p);
4774 mchunkptr next = chunk_plus_offset(p, psize);
4775 if (!pinuse(p)) {
4776 size_t prevsize = p->prev_foot;
4777 if ((prevsize & IS_MMAPPED_BIT) != 0) {
4778 prevsize &= ~IS_MMAPPED_BIT;
4779 psize += prevsize + MMAP_FOOT_PAD;
4780 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4781 fm->footprint -= psize;
4782 goto postaction;
4783 }
4784 else {
4785 mchunkptr prev = chunk_minus_offset(p, prevsize);
4786 psize += prevsize;
4787 p = prev;
4788 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4789 if (p != fm->dv) {
4790 unlink_chunk(fm, p, prevsize);
4791 }
4792 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4793 fm->dvsize = psize;
4794 set_free_with_pinuse(p, psize, next);
4795 goto postaction;
4796 }
4797 }
4798 else
4799 goto erroraction;
4800 }
4801 }
4802
4803 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4804 if (!cinuse(next)) { /* consolidate forward */
4805 if (next == fm->top) {
4806 size_t tsize = fm->topsize += psize;
4807 fm->top = p;
4808 p->head = tsize | PINUSE_BIT;
4809 if (p == fm->dv) {
4810 fm->dv = 0;
4811 fm->dvsize = 0;
4812 }
4813 if (should_trim(fm, tsize))
4814 sys_trim(fm, 0);
4815 goto postaction;
4816 }
4817 else if (next == fm->dv) {
4818 size_t dsize = fm->dvsize += psize;
4819 fm->dv = p;
4820 set_size_and_pinuse_of_free_chunk(p, dsize);
4821 goto postaction;
4822 }
4823 else {
4824 size_t nsize = chunksize(next);
4825 psize += nsize;
4826 unlink_chunk(fm, next, nsize);
4827 set_size_and_pinuse_of_free_chunk(p, psize);
4828 if (p == fm->dv) {
4829 fm->dvsize = psize;
4830 goto postaction;
4831 }
4832 }
4833 }
4834 else
4835 set_free_with_pinuse(p, psize, next);
4836 insert_chunk(fm, p, psize);
4837 check_free_chunk(fm, p);
4838 goto postaction;
4839 }
4840 }
4841 erroraction:
4842 USAGE_ERROR_ACTION(fm, p);
4843 postaction:
4844 POSTACTION(fm);
4845 }
4846 }
4847}
4848
4849void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
4850 void *mem;
4851 mstate ms = (mstate)msp;
4852 if (!ok_magic(ms)) {
4853 USAGE_ERROR_ACTION(ms,ms);
4854 return 0;
4855 }
4856 if (n_elements && MAX_SIZE_T / n_elements < elem_size) {
4857 /* Fail on overflow */
4858 MALLOC_FAILURE_ACTION;
4859 return NULL;
4860 }
4861 elem_size *= n_elements;
4862 mem = internal_malloc(ms, elem_size);
4863 if (mem && calloc_must_clear(mem2chunk(mem)))
4864 memset(mem, 0, elem_size);
4865 return mem;
4866}
4867
4868void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
4869 if (oldmem == 0)
4870 return mspace_malloc(msp, bytes);
4871#ifdef REALLOC_ZERO_BYTES_FREES
4872 if (bytes == 0) {
4873 mspace_free(msp, oldmem);
4874 return 0;
4875 }
4876#endif /* REALLOC_ZERO_BYTES_FREES */
4877 else {
4878#if FOOTERS
4879 mchunkptr p = mem2chunk(oldmem);
4880 mstate ms = get_mstate_for(p);
4881#else /* FOOTERS */
4882 mstate ms = (mstate)msp;
4883#endif /* FOOTERS */
4884 if (!ok_magic(ms)) {
4885 USAGE_ERROR_ACTION(ms,ms);
4886 return 0;
4887 }
4888 return internal_realloc(ms, oldmem, bytes);
4889 }
4890}
4891
4892#if ANDROID
4893void* mspace_merge_objects(mspace msp, void* mema, void* memb)
4894{
4895 /* PREACTION/POSTACTION aren't necessary here because we only modify
4896    fields of in-use chunks owned by the calling thread, so no other
4897    malloc operation can touch them.
4898 */
4899 if (mema == NULL || memb == NULL) {
4900 return NULL;
4901 }
4902 mchunkptr pa = mem2chunk(mema);
4903 mchunkptr pb = mem2chunk(memb);
4904
4905#if FOOTERS
4906 mstate fm = get_mstate_for(pa);
4907#else /* FOOTERS */
4908 mstate fm = (mstate)msp;
4909#endif /* FOOTERS */
4910 if (!ok_magic(fm)) {
4911 USAGE_ERROR_ACTION(fm, pa);
4912 return NULL;
4913 }
4914 check_inuse_chunk(fm, pa);
4915 if (RTCHECK(ok_address(fm, pa) && ok_cinuse(pa))) {
4916 if (next_chunk(pa) != pb) {
4917 /* Since pb may not be in fm, we can't check ok_address(fm, pb);
4918 since ok_cinuse(pb) would be unsafe before an address check,
4919 return NULL rather than invoke USAGE_ERROR_ACTION if pb is not
4920 in use or is a bogus address.
4921 */
4922 return NULL;
4923 }
4924 /* Since b follows a, they share the mspace. */
4925#if FOOTERS
4926 assert(fm == get_mstate_for(pb));
4927#endif /* FOOTERS */
4928 check_inuse_chunk(fm, pb);
4929 if (RTCHECK(ok_address(fm, pb) && ok_cinuse(pb))) {
4930 size_t sz = chunksize(pb);
4931 pa->head += sz;
4932 /* Make sure pa still passes. */
4933 check_inuse_chunk(fm, pa);
4934 return mema;
4935 }
4936 else {
4937 USAGE_ERROR_ACTION(fm, pb);
4938 return NULL;
4939 }
4940 }
4941 else {
4942 USAGE_ERROR_ACTION(fm, pa);
4943 return NULL;
4944 }
4945}
4946#endif /* ANDROID */
4947
4948void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
4949 mstate ms = (mstate)msp;
4950 if (!ok_magic(ms)) {
4951 USAGE_ERROR_ACTION(ms,ms);
4952 return 0;
4953 }
4954 return internal_memalign(ms, alignment, bytes);
4955}
4956
4957void** mspace_independent_calloc(mspace msp, size_t n_elements,
4958 size_t elem_size, void* chunks[]) {
4959 size_t sz = elem_size; /* serves as 1-element array */
4960 mstate ms = (mstate)msp;
4961 if (!ok_magic(ms)) {
4962 USAGE_ERROR_ACTION(ms,ms);
4963 return 0;
4964 }
4965 return ialloc(ms, n_elements, &sz, 3, chunks);
4966}
4967
4968void** mspace_independent_comalloc(mspace msp, size_t n_elements,
4969 size_t sizes[], void* chunks[]) {
4970 mstate ms = (mstate)msp;
4971 if (!ok_magic(ms)) {
4972 USAGE_ERROR_ACTION(ms,ms);
4973 return 0;
4974 }
4975 return ialloc(ms, n_elements, sizes, 0, chunks);
4976}
4977
4978int mspace_trim(mspace msp, size_t pad) {
4979 int result = 0;
4980 mstate ms = (mstate)msp;
4981 if (ok_magic(ms)) {
4982 if (!PREACTION(ms)) {
4983 result = sys_trim(ms, pad);
4984 POSTACTION(ms);
4985 }
4986 }
4987 else {
4988 USAGE_ERROR_ACTION(ms,ms);
4989 }
4990 return result;
4991}
4992
4993void mspace_malloc_stats(mspace msp) {
4994 mstate ms = (mstate)msp;
4995 if (ok_magic(ms)) {
4996 internal_malloc_stats(ms);
4997 }
4998 else {
4999 USAGE_ERROR_ACTION(ms,ms);
5000 }
5001}
5002
5003size_t mspace_footprint(mspace msp) {
5004 size_t result;
5005 mstate ms = (mstate)msp;
5006 if (ok_magic(ms)) {
5007 result = ms->footprint;
5008 }
5009 else {
5010 USAGE_ERROR_ACTION(ms,ms);
5011 }
5012 return result;
5013}
5014
5015#if USE_MAX_ALLOWED_FOOTPRINT
5016size_t mspace_max_allowed_footprint(mspace msp) {
5017 size_t result = 0;
5018 mstate ms = (mstate)msp;
5019 if (ok_magic(ms)) {
5020 result = ms->max_allowed_footprint;
5021 }
5022 else {
5023 USAGE_ERROR_ACTION(ms,ms);
5024 }
5025 return result;
5026}
5027
5028void mspace_set_max_allowed_footprint(mspace msp, size_t bytes) {
5029 mstate ms = (mstate)msp;
5030 if (ok_magic(ms)) {
5031 if (bytes > ms->footprint) {
5032 /* Increase the size in multiples of the granularity,
5033 * which is the smallest unit we request from the system.
5034 */
5035 ms->max_allowed_footprint = ms->footprint +
5036 granularity_align(bytes - ms->footprint);
5037 }
5038 else {
5039 //TODO: allow for reducing the max footprint
5040 ms->max_allowed_footprint = ms->footprint;
5041 }
5042 }
5043 else {
5044 USAGE_ERROR_ACTION(ms,ms);
5045 }
5046}
5047#endif /* USE_MAX_ALLOWED_FOOTPRINT */
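
/*
  Illustrative sketch (only meaningful when USE_MAX_ALLOWED_FOOTPRINT is
  defined): cap an mspace roughly 4MB above its current footprint.  The
  helper name and the 4MB figure are arbitrary; as noted above, the setter
  rounds the increase up to a multiple of the granularity.

    void cap_mspace_growth(mspace msp)
    {
      size_t cur = mspace_footprint(msp);
      mspace_set_max_allowed_footprint(msp, cur + 4 * 1024 * 1024);
    }
*/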
5048
5049size_t mspace_max_footprint(mspace msp) {
5050 size_t result = 0;
5051 mstate ms = (mstate)msp;
5052 if (ok_magic(ms)) {
5053 result = ms->max_footprint;
5054 }
5055 else {
5056 USAGE_ERROR_ACTION(ms,ms);
5057 }
5058 return result;
5059}
5060
5061
5062#if !NO_MALLINFO
5063struct mallinfo mspace_mallinfo(mspace msp) {
5064 mstate ms = (mstate)msp;
5065 if (!ok_magic(ms)) {
5066 USAGE_ERROR_ACTION(ms,ms);
5067 }
5068 return internal_mallinfo(ms);
5069}
5070#endif /* NO_MALLINFO */
5071
5072int mspace_mallopt(int param_number, int value) {
5073 return change_mparam(param_number, value);
5074}
5075
5076#endif /* MSPACES */
5077
5078#if MSPACES && ONLY_MSPACES
5079void mspace_walk_free_pages(mspace msp,
5080 void(*handler)(void *start, void *end, void *arg), void *harg)
5081{
5082 mstate m = (mstate)msp;
5083 if (!ok_magic(m)) {
5084 USAGE_ERROR_ACTION(m,m);
5085 return;
5086 }
5087#else
5088void dlmalloc_walk_free_pages(void(*handler)(void *start, void *end, void *arg),
5089 void *harg)
5090{
5091 mstate m = (mstate)gm;
5092#endif /* MSPACES && ONLY_MSPACES */
5093 if (!PREACTION(m)) {
5094 if (is_initialized(m)) {
5095 msegmentptr s = &m->seg;
5096 while (s != 0) {
5097 mchunkptr p = align_as_chunk(s->base);
5098 while (segment_holds(s, p) &&
5099 p != m->top && p->head != FENCEPOST_HEAD) {
5100 void *chunkptr, *userptr;
5101 size_t chunklen, userlen;
5102 chunkptr = p;
5103 chunklen = chunksize(p);
5104 if (!cinuse(p)) {
5105 void *start;
5106 if (is_small(chunklen)) {
5107 start = (void *)(p + 1);
5108 }
5109 else {
5110 start = (void *)((tchunkptr)p + 1);
5111 }
5112 handler(start, next_chunk(p), harg);
5113 }
5114 p = next_chunk(p);
5115 }
5116 if (p == m->top) {
5117 handler((void *)(p + 1), next_chunk(p), harg);
5118 }
5119 s = s->next;
5120 }
5121 }
5122 POSTACTION(m);
5123 }
5124}
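
/*
  Illustrative callback sketch for the free-page walker above (the helper
  names are hypothetical; only the handler signature is taken from the
  code).  It simply sums the free ranges it is handed.

    static void count_free_cb(void* start, void* end, void* arg)
    {
      size_t* total = (size_t*)arg;
      *total += (size_t)((char*)end - (char*)start);
    }

    size_t total_free_bytes(mspace msp)   // MSPACES && ONLY_MSPACES variant
    {
      size_t total = 0;
      mspace_walk_free_pages(msp, count_free_cb, &total);
      return total;
    }
*/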
5125
5126
5127#if MSPACES && ONLY_MSPACES
5128void mspace_walk_heap(mspace msp,
5129 void(*handler)(const void *chunkptr, size_t chunklen,
5130 const void *userptr, size_t userlen,
5131 void *arg),
5132 void *harg)
5133{
5134 msegmentptr s;
5135 mstate m = (mstate)msp;
5136 if (!ok_magic(m)) {
5137 USAGE_ERROR_ACTION(m,m);
5138 return;
5139 }
5140#else
5141void dlmalloc_walk_heap(void(*handler)(const void *chunkptr, size_t chunklen,
5142 const void *userptr, size_t userlen,
5143 void *arg),
5144 void *harg)
5145{
5146 msegmentptr s;
5147 mstate m = (mstate)gm;
5148#endif /* MSPACES && ONLY_MSPACES */
5149
5150 s = &m->seg;
5151 while (s != 0) {
5152 mchunkptr p = align_as_chunk(s->base);
5153 while (segment_holds(s, p) &&
5154 p != m->top && p->head != FENCEPOST_HEAD) {
5155 void *chunkptr, *userptr;
5156 size_t chunklen, userlen;
5157 chunkptr = p;
5158 chunklen = chunksize(p);
5159 if (cinuse(p)) {
5160 userptr = chunk2mem(p);
5161 userlen = chunklen - overhead_for(p);
5162 }
5163 else {
5164 userptr = NULL;
5165 userlen = 0;
5166 }
5167 handler(chunkptr, chunklen, userptr, userlen, harg);
5168 p = next_chunk(p);
5169 }
5170 if (p == m->top) {
5171 /* The top chunk is just a big free chunk for our purposes.
5172 */
5173 handler(m->top, m->topsize, NULL, 0, harg);
5174 }
5175 s = s->next;
5176 }
5177}
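
/*
  Illustrative callback sketch for the heap walker above (names are
  hypothetical; only the handler signature comes from the code).  It
  tallies total chunk bytes and the user bytes of in-use chunks; free
  chunks are reported with a NULL userptr and a userlen of 0.

    struct heap_tally { size_t chunk_bytes; size_t user_bytes; };

    static void tally_cb(const void* chunkptr, size_t chunklen,
                         const void* userptr, size_t userlen, void* arg)
    {
      struct heap_tally* t = (struct heap_tally*)arg;
      (void)chunkptr;
      t->chunk_bytes += chunklen;
      if (userptr != NULL)
        t->user_bytes += userlen;
    }

    void tally_mspace(mspace msp, struct heap_tally* t)  // MSPACES && ONLY_MSPACES variant
    {
      t->chunk_bytes = t->user_bytes = 0;
      mspace_walk_heap(msp, tally_cb, t);
    }
*/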
5178
5179/* -------------------- Alternative MORECORE functions ------------------- */
5180
5181/*
5182 Guidelines for creating a custom version of MORECORE:
5183
5184 * For best performance, MORECORE should allocate in multiples of pagesize.
5185 * MORECORE may allocate more memory than requested. (Or even less,
5186 but this will usually result in a malloc failure.)
5187 * MORECORE must not allocate memory when given argument zero, but
5188 instead return one past the end address of memory from previous
5189 nonzero call.
5190 * For best performance, consecutive calls to MORECORE with positive
5191 arguments should return increasing addresses, indicating that
5192 space has been contiguously extended.
5193 * Even though consecutive calls to MORECORE need not return contiguous
5194 addresses, it must be OK for malloc'ed chunks to span multiple
5195 regions in those cases where they do happen to be contiguous.
5196 * MORECORE need not handle negative arguments -- it may instead
5197 just return MFAIL when given negative arguments.
5198 Negative arguments are always multiples of pagesize. MORECORE
5199 must not misinterpret negative args as large positive unsigned
5200 args. You can suppress all such calls from even occurring by defining
5201 MORECORE_CANNOT_TRIM.
5202
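 Before the fuller example below, here is a deliberately tiny sketch of
 the calling contract (hypothetical; a fixed static arena, no trimming,
 so MORECORE_CANNOT_TRIM should be defined):

    #define ARENA_BYTES (4 * 1024 * 1024)
    static char arena[ARENA_BYTES];
    static size_t arena_used;

    void* arenaMoreCore(int size)
    {
      if (size > 0) {
        void* p;
        if ((size_t)size > ARENA_BYTES - arena_used)
          return (void*) MFAIL;
        p = arena + arena_used;
        arena_used += (size_t)size;
        return p;
      }
      else if (size < 0) {
        return (void*) MFAIL;           // no shrink support
      }
      else {
        return arena + arena_used;      // one past the end of prior memory
      }
    }
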
5203 As an example alternative MORECORE, here is a custom allocator
5204 kindly contributed for pre-OSX macOS. It uses virtually but not
5205 necessarily physically contiguous non-paged memory (locked in,
5206 present and won't get swapped out). You can use it by uncommenting
5207 this section, adding some #includes, and setting up the appropriate
5208 defines above:
5209
5210 #define MORECORE osMoreCore
5211
5212 There is also a shutdown routine that should somehow be called for
5213 cleanup upon program exit.
5214
5215 #define MAX_POOL_ENTRIES 100
5216 #define MINIMUM_MORECORE_SIZE (64 * 1024U)
5217 static int next_os_pool;
5218 void *our_os_pools[MAX_POOL_ENTRIES];
5219
5220 void *osMoreCore(int size)
5221 {
5222 void *ptr = 0;
5223 static void *sbrk_top = 0;
5224
5225 if (size > 0)
5226 {
5227 if (size < MINIMUM_MORECORE_SIZE)
5228 size = MINIMUM_MORECORE_SIZE;
5229 if (CurrentExecutionLevel() == kTaskLevel)
5230 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
5231 if (ptr == 0)
5232 {
5233 return (void *) MFAIL;
5234 }
5235 // save ptrs so they can be freed during cleanup
5236 our_os_pools[next_os_pool] = ptr;
5237 next_os_pool++;
5238 ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
5239 sbrk_top = (char *) ptr + size;
5240 return ptr;
5241 }
5242 else if (size < 0)
5243 {
5244 // we don't currently support shrink behavior
5245 return (void *) MFAIL;
5246 }
5247 else
5248 {
5249 return sbrk_top;
5250 }
5251 }
5252
5253 // cleanup any allocated memory pools
5254 // called as last thing before shutting down driver
5255
5256 void osCleanupMem(void)
5257 {
5258 void **ptr;
5259
5260 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
5261 if (*ptr)
5262 {
5263 PoolDeallocate(*ptr);
5264 *ptr = 0;
5265 }
5266 }
5267
5268*/
5269
5270
5271/* -----------------------------------------------------------------------
5272History:
5273 V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
5274 * Add max_footprint functions
5275 * Ensure all appropriate literals are size_t
5276 * Fix conditional compilation problem for some #define settings
5277 * Avoid concatenating segments with the one provided
5278 in create_mspace_with_base
5279 * Rename some variables to avoid compiler shadowing warnings
5280 * Use explicit lock initialization.
5281 * Better handling of sbrk interference.
5282 * Simplify and fix segment insertion, trimming and mspace_destroy
5283 * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
5284 * Thanks especially to Dennis Flanagan for help on these.
5285
5286 V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
5287 * Fix memalign brace error.
5288
5289 V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
5290 * Fix improper #endif nesting in C++
5291 * Add explicit casts needed for C++
5292
5293 V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
5294 * Use trees for large bins
5295 * Support mspaces
5296 * Use segments to unify sbrk-based and mmap-based system allocation,
5297 removing need for emulation on most platforms without sbrk.
5298 * Default safety checks
5299 * Optional footer checks. Thanks to William Robertson for the idea.
5300 * Internal code refactoring
5301 * Incorporate suggestions and platform-specific changes.
5302 Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
5303 Aaron Bachmann, Emery Berger, and others.
5304 * Speed up non-fastbin processing enough to remove fastbins.
5305 * Remove useless cfree() to avoid conflicts with other apps.
5306 * Remove internal memcpy, memset. Compilers handle builtins better.
5307 * Remove some options that no one ever used and rename others.
5308
5309 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
5310 * Fix malloc_state bitmap array misdeclaration
5311
5312 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
5313 * Allow tuning of FIRST_SORTED_BIN_SIZE
5314 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
5315 * Better detection and support for non-contiguousness of MORECORE.
5316 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
5317 * Bypass most of malloc if no frees. Thanks To Emery Berger.
5318 * Fix freeing of old top non-contiguous chunk im sysmalloc.
5319 * Raised default trim and map thresholds to 256K.
5320 * Fix mmap-related #defines. Thanks to Lubos Lunak.
5321 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
5322 * Branch-free bin calculation
5323 * Default trim and mmap thresholds now 256K.
5324
5325 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
5326 * Introduce independent_comalloc and independent_calloc.
5327 Thanks to Michael Pachos for motivation and help.
5328 * Make optional .h file available
5329 * Allow > 2GB requests on 32bit systems.
5330 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
5331 Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
5332 and Anonymous.
5333 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
5334 helping test this.)
5335 * memalign: check alignment arg
5336 * realloc: don't try to shift chunks backwards, since this
5337 leads to more fragmentation in some programs and doesn't
5338 seem to help in any others.
5339 * Collect all cases in malloc requiring system memory into sysmalloc
5340 * Use mmap as backup to sbrk
5341 * Place all internal state in malloc_state
5342 * Introduce fastbins (although similar to 2.5.1)
5343 * Many minor tunings and cosmetic improvements
5344 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
5345 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
5346 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
5347 * Include errno.h to support default failure action.
5348
5349 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
5350 * return null for negative arguments
5351 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
5352 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
5353 (e.g. WIN32 platforms)
5354 * Cleanup header file inclusion for WIN32 platforms
5355 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
5356 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
5357 memory allocation routines
5358 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
5359 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
5360 usage of 'assert' in non-WIN32 code
5361 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
5362 avoid infinite loop
5363 * Always call 'fREe()' rather than 'free()'
5364
5365 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
5366 * Fixed ordering problem with boundary-stamping
5367
5368 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
5369 * Added pvalloc, as recommended by H.J. Liu
5370 * Added 64bit pointer support mainly from Wolfram Gloger
5371 * Added anonymously donated WIN32 sbrk emulation
5372 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
5373 * malloc_extend_top: fix mask error that caused wastage after
5374 foreign sbrks
5375 * Add linux mremap support code from HJ Liu
5376
5377 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
5378 * Integrated most documentation with the code.
5379 * Add support for mmap, with help from
5380 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5381 * Use last_remainder in more cases.
5382 * Pack bins using idea from colin@nyx10.cs.du.edu
5383 * Use ordered bins instead of best-fit threshhold
5384 * Eliminate block-local decls to simplify tracing and debugging.
5385 * Support another case of realloc via move into top
5386 * Fix error occuring when initial sbrk_base not word-aligned.
5387 * Rely on page size for units instead of SBRK_UNIT to
5388 avoid surprises about sbrk alignment conventions.
5389 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
5390 (raymond@es.ele.tue.nl) for the suggestion.
5391 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
5392 * More precautions for cases where other routines call sbrk,
5393 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5394 * Added macros etc., allowing use in linux libc from
5395 H.J. Lu (hjl@gnu.ai.mit.edu)
5396 * Inverted this history list
5397
5398 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
5399 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
5400 * Removed all preallocation code since under current scheme
5401 the work required to undo bad preallocations exceeds
5402 the work saved in good cases for most test programs.
5403 * No longer use return list or unconsolidated bins since
5404 no scheme using them consistently outperforms those that don't
5405 given above changes.
5406 * Use best fit for very large chunks to prevent some worst-cases.
5407 * Added some support for debugging
5408
5409 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
5410 * Removed footers when chunks are in use. Thanks to
5411 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
5412
5413 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
5414 * Added malloc_trim, with help from Wolfram Gloger
5415 (wmglo@Dent.MED.Uni-Muenchen.DE).
5416
5417 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
5418
5419 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
5420 * realloc: try to expand in both directions
5421 * malloc: swap order of clean-bin strategy;
5422 * realloc: only conditionally expand backwards
5423 * Try not to scavenge used bins
5424 * Use bin counts as a guide to preallocation
5425 * Occasionally bin return list chunks in first scan
5426 * Add a few optimizations from colin@nyx10.cs.du.edu
5427
5428 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
5429 * faster bin computation & slightly different binning
5430 * merged all consolidations to one part of malloc proper
5431 (eliminating old malloc_find_space & malloc_clean_bin)
5432 * Scan 2 returns chunks (not just 1)
5433 * Propagate failure in realloc if malloc returns 0
5434 * Add stuff to allow compilation on non-ANSI compilers
5435 from kpv@research.att.com
5436
5437 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
5438 * removed potential for odd address access in prev_chunk
5439 * removed dependency on getpagesize.h
5440 * misc cosmetics and a bit more internal documentation
5441 * anticosmetics: mangled names in macros to evade debugger strangeness
5442 * tested on sparc, hp-700, dec-mips, rs6000
5443 with gcc & native cc (hp, dec only) allowing
5444 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
5445
5446 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
5447 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
5448 structure of old version, but most details differ.)
5449
5450*/