/*
2 * Copyright (C) 2008 The Android Open Source Project
3 * All rights reserved.
4 *
5 * Redistribution and use in source and binary forms, with or without
6 * modification, are permitted provided that the following conditions
7 * are met:
8 * * Redistributions of source code must retain the above copyright
9 * notice, this list of conditions and the following disclaimer.
10 * * Redistributions in binary form must reproduce the above copyright
11 * notice, this list of conditions and the following disclaimer in
12 * the documentation and/or other materials provided with the
13 * distribution.
14 *
15 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
16 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
17 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
18 * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
19 * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
20 * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
21 * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
22 * OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
23 * AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
24 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
25 * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
26 * SUCH DAMAGE.
27 */
28/*
29 This is a version (aka dlmalloc) of malloc/free/realloc written by
30 Doug Lea and released to the public domain, as explained at
31 http://creativecommons.org/licenses/publicdomain. Send questions,
32 comments, complaints, performance data, etc to dl@cs.oswego.edu
33
34* Version 2.8.3 Thu Sep 22 11:16:15 2005 Doug Lea (dl at gee)
35
36 Note: There may be an updated version of this malloc obtainable at
37 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
38 Check before installing!
39
40* Quickstart
41
42 This library is all in one file to simplify the most common usage:
43 ftp it, compile it (-O3), and link it into another program. All of
44 the compile-time options default to reasonable values for use on
45 most platforms. You might later want to step through various
46 compile-time and dynamic tuning options.
47
48 For convenience, an include file for code using this malloc is at:
49 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
50 You don't really need this .h file unless you call functions not
51 defined in your system include files. The .h file contains only the
52 excerpts from this file needed for using this malloc on ANSI C/C++
53 systems, so long as you haven't changed compile-time options about
54 naming and tuning parameters. If you do, then you can create your
55 own malloc.h that does include all settings by cutting at the point
56 indicated below. Note that you may already by default be using a C
57 library containing a malloc that is based on some version of this
58 malloc (for example in linux). You might still want to use the one
59 in this file to customize settings or to avoid overheads associated
60 with library versions.
61
62* Vital statistics:
63
64 Supported pointer/size_t representation: 4 or 8 bytes
65 size_t MUST be an unsigned type of the same width as
66 pointers. (If you are using an ancient system that declares
67 size_t as a signed type, or need it to be a different width
68 than pointers, you can use a previous release of this malloc
69 (e.g. 2.7.2) supporting these.)
70
71 Alignment: 8 bytes (default)
72 This suffices for nearly all current machines and C compilers.
73 However, you can define MALLOC_ALIGNMENT to be wider than this
74 if necessary (up to 128bytes), at the expense of using more space.
75
76 Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
77 8 or 16 bytes (if 8byte sizes)
78 Each malloced chunk has a hidden word of overhead holding size
79 and status information, and additional cross-check word
80 if FOOTERS is defined.
81
82 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
83 8-byte ptrs: 32 bytes (including overhead)
84
85 Even a request for zero bytes (i.e., malloc(0)) returns a
86 pointer to something of the minimum allocatable size.
  The maximum overhead wastage (i.e., the number of extra bytes
  allocated beyond what was requested in malloc) is less than or equal
89 to the minimum size, except for requests >= mmap_threshold that
90 are serviced via mmap(), where the worst case wastage is about
91 32 bytes plus the remainder from a system page (the minimal
92 mmap unit); typically 4096 or 8192 bytes.
93
94 Security: static-safe; optionally more or less
95 The "security" of malloc refers to the ability of malicious
96 code to accentuate the effects of errors (for example, freeing
97 space that is not currently malloc'ed or overwriting past the
98 ends of chunks) in code that calls malloc. This malloc
99 guarantees not to modify any memory locations below the base of
100 heap, i.e., static variables, even in the presence of usage
101 errors. The routines additionally detect most improper frees
102 and reallocs. All this holds as long as the static bookkeeping
103 for malloc itself is not corrupted by some other means. This
104 is only one aspect of security -- these checks do not, and
105 cannot, detect all possible programming errors.
106
107 If FOOTERS is defined nonzero, then each allocated chunk
108 carries an additional check word to verify that it was malloced
109 from its space. These check words are the same within each
110 execution of a program using malloc, but differ across
111 executions, so externally crafted fake chunks cannot be
112 freed. This improves security by rejecting frees/reallocs that
113 could corrupt heap memory, in addition to the checks preventing
114 writes to statics that are always on. This may further improve
115 security at the expense of time and space overhead. (Note that
116 FOOTERS may also be worth using with MSPACES.)
117
118 By default detected errors cause the program to abort (calling
119 "abort()"). You can override this to instead proceed past
120 errors by defining PROCEED_ON_ERROR. In this case, a bad free
121 has no effect, and a malloc that encounters a bad address
122 caused by user overwrites will ignore the bad address by
123 dropping pointers and indices to all known memory. This may
124 be appropriate for programs that should continue if at all
125 possible in the face of programming errors, although they may
126 run out of memory because dropped memory is never reclaimed.
127
128 If you don't like either of these options, you can define
129 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
  else. And if you are sure that your program using malloc has
131 no errors or vulnerabilities, you can define INSECURE to 1,
132 which might (or might not) provide a small performance improvement.
133
134 Thread-safety: NOT thread-safe unless USE_LOCKS defined
135 When USE_LOCKS is defined, each public call to malloc, free,
136 etc is surrounded with either a pthread mutex or a win32
137 spinlock (depending on WIN32). This is not especially fast, and
138 can be a major bottleneck. It is designed only to provide
139 minimal protection in concurrent environments, and to provide a
140 basis for extensions. If you are using malloc in a concurrent
141 program, consider instead using ptmalloc, which is derived from
142 a version of this malloc. (See http://www.malloc.de).
143
144 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
145 This malloc can use unix sbrk or any emulation (invoked using
146 the CALL_MORECORE macro) and/or mmap/munmap or any emulation
147 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
148 memory. On most unix systems, it tends to work best if both
149 MORECORE and MMAP are enabled. On Win32, it uses emulations
150 based on VirtualAlloc. It also uses common C library functions
151 like memset.
152
153 Compliance: I believe it is compliant with the Single Unix Specification
154 (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
155 others as well.
156
157* Overview of algorithms
158
159 This is not the fastest, most space-conserving, most portable, or
160 most tunable malloc ever written. However it is among the fastest
161 while also being among the most space-conserving, portable and
162 tunable. Consistent balance across these factors results in a good
163 general-purpose allocator for malloc-intensive programs.
164
165 In most ways, this malloc is a best-fit allocator. Generally, it
166 chooses the best-fitting existing chunk for a request, with ties
167 broken in approximately least-recently-used order. (This strategy
168 normally maintains low fragmentation.) However, for requests less
169 than 256bytes, it deviates from best-fit when there is not an
170 exactly fitting available chunk by preferring to use space adjacent
171 to that used for the previous small request, as well as by breaking
172 ties in approximately most-recently-used order. (These enhance
173 locality of series of small allocations.) And for very large requests
174 (>= 256Kb by default), it relies on system memory mapping
175 facilities, if supported. (This helps avoid carrying around and
176 possibly fragmenting memory used only for large chunks.)
177
178 All operations (except malloc_stats and mallinfo) have execution
179 times that are bounded by a constant factor of the number of bits in
180 a size_t, not counting any clearing in calloc or copying in realloc,
181 or actions surrounding MORECORE and MMAP that have times
182 proportional to the number of non-contiguous regions returned by
183 system allocation routines, which is often just 1.
184
185 The implementation is not very modular and seriously overuses
186 macros. Perhaps someday all C compilers will do as good a job
187 inlining modular code as can now be done by brute-force expansion,
188 but now, enough of them seem not to.
189
190 Some compilers issue a lot of warnings about code that is
191 dead/unreachable only on some platforms, and also about intentional
192 uses of negation on unsigned types. All known cases of each can be
193 ignored.
194
195 For a longer but out of date high-level description, see
196 http://gee.cs.oswego.edu/dl/html/malloc.html
197
198* MSPACES
199 If MSPACES is defined, then in addition to malloc, free, etc.,
200 this file also defines mspace_malloc, mspace_free, etc. These
201 are versions of malloc routines that take an "mspace" argument
202 obtained using create_mspace, to control all internal bookkeeping.
203 If ONLY_MSPACES is defined, only these versions are compiled.
204 So if you would like to use this allocator for only some allocations,
205 and your system malloc for others, you can compile with
206 ONLY_MSPACES and then do something like...
207 static mspace mymspace = create_mspace(0,0); // for example
208 #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
209
210 (Note: If you only need one instance of an mspace, you can instead
211 use "USE_DL_PREFIX" to relabel the global malloc.)
212
213 You can similarly create thread-local allocators by storing
214 mspaces as thread-locals. For example:
215 static __thread mspace tlms = 0;
216 void* tlmalloc(size_t bytes) {
217 if (tlms == 0) tlms = create_mspace(0, 0);
218 return mspace_malloc(tlms, bytes);
219 }
220 void tlfree(void* mem) { mspace_free(tlms, mem); }
221
222 Unless FOOTERS is defined, each mspace is completely independent.
223 You cannot allocate from one and free to another (although
224 conformance is only weakly checked, so usage errors are not always
225 caught). If FOOTERS is defined, then each chunk carries around a tag
226 indicating its originating mspace, and frees are directed to their
227 originating spaces.
228
229 ------------------------- Compile-time options ---------------------------
230
231Be careful in setting #define values for numerical constants of type
232size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast.
234
235WIN32 default: defined if _WIN32 defined
236 Defining WIN32 sets up defaults for MS environment and compilers.
237 Otherwise defaults are for unix.
238
239MALLOC_ALIGNMENT default: (size_t)8
240 Controls the minimum alignment for malloc'ed chunks. It must be a
241 power of two and at least 8, even on machines for which smaller
242 alignments would suffice. It may be defined as larger than this
243 though. Note however that code and data structures are optimized for
244 the case of 8-byte alignment.
245
246MSPACES default: 0 (false)
247 If true, compile in support for independent allocation spaces.
248 This is only supported if HAVE_MMAP is true.
249
250ONLY_MSPACES default: 0 (false)
251 If true, only compile in mspace versions, not regular versions.
252
253USE_LOCKS default: 0 (false)
254 Causes each call to each public routine to be surrounded with
255 pthread or WIN32 mutex lock/unlock. (If set true, this can be
256 overridden on a per-mspace basis for mspace versions.)
257
258FOOTERS default: 0
259 If true, provide extra checking and dispatching by placing
260 information in the footers of allocated chunks. This adds
261 space and time overhead.
262
263INSECURE default: 0
264 If true, omit checks for usage errors and heap space overwrites.
265
266USE_DL_PREFIX default: NOT defined
267 Causes compiler to prefix all public routines with the string 'dl'.
268 This can be useful when you only want to use this malloc in one part
269 of a program, using your regular system malloc elsewhere.
270
271ABORT default: defined as abort()
272 Defines how to abort on failed checks. On most systems, a failed
273 check cannot die with an "assert" or even print an informative
274 message, because the underlying print routines in turn call malloc,
275 which will fail again. Generally, the best policy is to simply call
276 abort(). It's not very useful to do more than this because many
277 errors due to overwriting will show up as address faults (null, odd
278 addresses etc) rather than malloc-triggered checks, so will also
279 abort. Also, most compilers know that abort() does not return, so
280 can better optimize code conditionally calling it.
281
282PROCEED_ON_ERROR default: defined as 0 (false)
  Controls whether detected bad addresses cause them to be bypassed
284 rather than aborting. If set, detected bad arguments to free and
285 realloc are ignored. And all bookkeeping information is zeroed out
286 upon a detected overwrite of freed heap space, thus losing the
287 ability to ever return it from malloc again, but enabling the
288 application to proceed. If PROCEED_ON_ERROR is defined, the
289 static variable malloc_corruption_error_count is compiled in
290 and can be examined to see if errors have occurred. This option
291 generates slower code than the default abort policy.
292
293DEBUG default: NOT defined
294 The DEBUG setting is mainly intended for people trying to modify
295 this code or diagnose problems when porting to new platforms.
296 However, it may also be able to better isolate user errors than just
297 using runtime checks. The assertions in the check routines spell
298 out in more detail the assumptions and invariants underlying the
299 algorithms. The checking is fairly extensive, and will slow down
300 execution noticeably. Calling malloc_stats or mallinfo with DEBUG
301 set will attempt to check every non-mmapped allocated and free chunk
302 in the course of computing the summaries.
303
304ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
305 Debugging assertion failures can be nearly impossible if your
306 version of the assert macro causes malloc to be called, which will
307 lead to a cascade of further failures, blowing the runtime stack.
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
309 which will usually make debugging easier.
310
311MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
  The action to take before "return 0" when malloc fails because
  no memory is available.
314
315HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
316 True if this system supports sbrk or an emulation of it.
317
318MORECORE default: sbrk
319 The name of the sbrk-style system routine to call to obtain more
320 memory. See below for guidance on writing custom MORECORE
321 functions. The type of the argument to sbrk/MORECORE varies across
322 systems. It cannot be size_t, because it supports negative
323 arguments, so it is normally the signed type of the same width as
324 size_t (sometimes declared as "intptr_t"). It doesn't much matter
325 though. Internally, we only call it with arguments less than half
326 the max value of a size_t, which should work across all reasonable
327 possibilities, although sometimes generating compiler warnings. See
328 near the end of this file for guidelines for creating a custom
329 version of MORECORE.
330
331MORECORE_CONTIGUOUS default: 1 (true)
332 If true, take advantage of fact that consecutive calls to MORECORE
333 with positive arguments always return contiguous increasing
334 addresses. This is true of unix sbrk. It does not hurt too much to
335 set it true anyway, since malloc copes with non-contiguities.
336 Setting it false when definitely non-contiguous saves time
337 and possibly wasted space it would take to discover this though.
338
339MORECORE_CANNOT_TRIM default: NOT defined
340 True if MORECORE cannot release space back to the system when given
341 negative arguments. This is generally necessary only if you are
342 using a hand-crafted MORECORE function that cannot handle negative
343 arguments.
344
345HAVE_MMAP default: 1 (true)
346 True if this system supports mmap or an emulation of it. If so, and
347 HAVE_MORECORE is not true, MMAP is used for all system
348 allocation. If set and HAVE_MORECORE is true as well, MMAP is
349 primarily used to directly allocate very large blocks. It is also
350 used as a backup strategy in cases where MORECORE fails to provide
351 space from system. Note: A single call to MUNMAP is assumed to be
  able to unmap memory that may have been allocated using multiple calls
353 to MMAP, so long as they are adjacent.
354
355HAVE_MREMAP default: 1 on linux, else 0
  If true, realloc() uses mremap() to re-allocate large blocks and
357 extend or shrink allocation spaces.
358
359MMAP_CLEARS default: 1 on unix
360 True if mmap clears memory so calloc doesn't need to. This is true
361 for standard unix mmap using /dev/zero.
362
363USE_BUILTIN_FFS default: 0 (i.e., not used)
364 Causes malloc to use the builtin ffs() function to compute indices.
365 Some compilers may recognize and intrinsify ffs to be faster than the
366 supplied C version. Also, the case of x86 using gcc is special-cased
367 to an asm instruction, so is already as fast as it can be, and so
368 this setting has no effect. (On most x86s, the asm version is only
369 slightly faster than the C version.)
370
371malloc_getpagesize default: derive from system includes, or 4096.
372 The system page size. To the extent possible, this malloc manages
373 memory from the system in page-size units. This may be (and
374 usually is) a function rather than a constant. This is ignored
375 if WIN32, where page size is determined using getSystemInfo during
376 initialization.
377
378USE_DEV_RANDOM default: 0 (i.e., not used)
379 Causes malloc to use /dev/random to initialize secure magic seed for
380 stamping footers. Otherwise, the current time is used.
381
382NO_MALLINFO default: 0
383 If defined, don't compile "mallinfo". This can be a simple way
384 of dealing with mismatches between system declarations and
385 those in this file.
386
387MALLINFO_FIELD_TYPE default: size_t
388 The type of the fields in the mallinfo struct. This was originally
389 defined as "int" in SVID etc, but is more usefully defined as
390 size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set
391
392REALLOC_ZERO_BYTES_FREES default: not defined
393 This should be set if a call to realloc with zero bytes should
394 be the same as a call to free. Some people think it should. Otherwise,
395 since this malloc returns a unique pointer for malloc(0), so does
396 realloc(p, 0).
397
398LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
399LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
400LACKS_STDLIB_H default: NOT defined unless on WIN32
401 Define these if your system does not have these header files.
402 You might need to manually insert some of the declarations they provide.
403
404DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
405 system_info.dwAllocationGranularity in WIN32,
406 otherwise 64K.
407 Also settable using mallopt(M_GRANULARITY, x)
408 The unit for allocating and deallocating memory from the system. On
409 most systems with contiguous MORECORE, there is no reason to
410 make this more than a page. However, systems with MMAP tend to
411 either require or encourage larger granularities. You can increase
  this value to prevent system allocation functions from being called so
413 often, especially if they are slow. The value must be at least one
414 page and must be a power of two. Setting to 0 causes initialization
415 to either page size or win32 region size. (Note: In previous
416 versions of malloc, the equivalent of this option was called
417 "TOP_PAD")
418
419DEFAULT_TRIM_THRESHOLD default: 2MB
420 Also settable using mallopt(M_TRIM_THRESHOLD, x)
421 The maximum amount of unused top-most memory to keep before
422 releasing via malloc_trim in free(). Automatic trimming is mainly
423 useful in long-lived programs using contiguous MORECORE. Because
424 trimming via sbrk can be slow on some systems, and can sometimes be
425 wasteful (in cases where programs immediately afterward allocate
426 more large chunks) the value should be high enough so that your
427 overall system performance would improve by releasing this much
428 memory. As a rough guide, you might set to a value close to the
429 average size of a process (program) running on your system.
430 Releasing this much memory would allow such a process to run in
431 memory. Generally, it is worth tuning trim thresholds when a
432 program undergoes phases where several large chunks are allocated
433 and released in ways that can reuse each other's storage, perhaps
434 mixed with phases where there are no such chunks at all. The trim
435 value must be greater than page size to have any useful effect. To
436 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
437 some people use of mallocing a huge space and then freeing it at
438 program startup, in an attempt to reserve system memory, doesn't
439 have the intended effect under automatic trimming, since that memory
440 will immediately be returned to the system.
441
442DEFAULT_MMAP_THRESHOLD default: 256K
443 Also settable using mallopt(M_MMAP_THRESHOLD, x)
444 The request size threshold for using MMAP to directly service a
445 request. Requests of at least this size that cannot be allocated
446 using already-existing space will be serviced via mmap. (If enough
447 normal freed space already exists it is used instead.) Using mmap
448 segregates relatively large chunks of memory so that they can be
449 individually obtained and released from the host system. A request
450 serviced through mmap is never reused by any other request (at least
451 not directly; the system may just so happen to remap successive
452 requests to the same locations). Segregating space in this way has
453 the benefits that: Mmapped space can always be individually released
454 back to the system, which helps keep the system level memory demands
455 of a long-lived program low. Also, mapped memory doesn't become
456 `locked' between other chunks, as can happen with normally allocated
457 chunks, which means that even trimming via malloc_trim would not
458 release them. However, it has the disadvantage that the space
459 cannot be reclaimed, consolidated, and then used to service later
460 requests, as happens with normal chunks. The advantages of mmap
461 nearly always outweigh disadvantages for "large" chunks, but the
462 value of "large" may vary across systems. The default is an
463 empirically derived value that works well in most systems. You can
464 disable mmap by setting to MAX_SIZE_T.
465
466*/
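
/*
  Illustrative sketch (not part of this file's configuration): one common
  way to use this allocator alongside the system malloc is to build this
  file with -DUSE_DL_PREFIX (plus -DUSE_LOCKS=1 for threaded programs) and
  call the dl-prefixed entry points directly. The function and variable
  names below are made up for the example.

    #include <string.h>

    extern void* dlmalloc(size_t);
    extern void  dlfree(void*);

    char* dup_string(const char* s) {
      size_t n = strlen(s) + 1;
      char* copy = (char*)dlmalloc(n);   // served by this allocator
      if (copy != 0)
        memcpy(copy, s, n);
      return copy;                       // release with dlfree(), not free()
    }
*/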
467
468#ifndef WIN32
469#ifdef _WIN32
470#define WIN32 1
471#endif /* _WIN32 */
472#endif /* WIN32 */
473#ifdef WIN32
474#define WIN32_LEAN_AND_MEAN
475#include <windows.h>
476#define HAVE_MMAP 1
477#define HAVE_MORECORE 0
478#define LACKS_UNISTD_H
479#define LACKS_SYS_PARAM_H
480#define LACKS_SYS_MMAN_H
481#define LACKS_STRING_H
482#define LACKS_STRINGS_H
483#define LACKS_SYS_TYPES_H
484#define LACKS_ERRNO_H
485#define MALLOC_FAILURE_ACTION
486#define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
487#endif /* WIN32 */
488
489#if defined(DARWIN) || defined(_DARWIN)
490/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
491#ifndef HAVE_MORECORE
492#define HAVE_MORECORE 0
493#define HAVE_MMAP 1
494#endif /* HAVE_MORECORE */
495#endif /* DARWIN */
496
497#ifndef LACKS_SYS_TYPES_H
498#include <sys/types.h> /* For size_t */
499#endif /* LACKS_SYS_TYPES_H */
500
501/* The maximum possible size_t value has all bits set */
502#define MAX_SIZE_T (~(size_t)0)
503
504#ifndef ONLY_MSPACES
505#define ONLY_MSPACES 0
506#endif /* ONLY_MSPACES */
507#ifndef MSPACES
508#if ONLY_MSPACES
509#define MSPACES 1
510#else /* ONLY_MSPACES */
511#define MSPACES 0
512#endif /* ONLY_MSPACES */
513#endif /* MSPACES */
514#ifndef MALLOC_ALIGNMENT
515#define MALLOC_ALIGNMENT ((size_t)8U)
516#endif /* MALLOC_ALIGNMENT */
517#ifndef FOOTERS
518#define FOOTERS 0
519#endif /* FOOTERS */
520#ifndef USE_MAX_ALLOWED_FOOTPRINT
521#define USE_MAX_ALLOWED_FOOTPRINT 0
522#endif
523#ifndef ABORT
524#define ABORT abort()
525#endif /* ABORT */
526#ifndef ABORT_ON_ASSERT_FAILURE
527#define ABORT_ON_ASSERT_FAILURE 1
528#endif /* ABORT_ON_ASSERT_FAILURE */
529#ifndef PROCEED_ON_ERROR
530#define PROCEED_ON_ERROR 0
531#endif /* PROCEED_ON_ERROR */
532#ifndef USE_LOCKS
533#define USE_LOCKS 0
534#endif /* USE_LOCKS */
535#ifndef INSECURE
536#define INSECURE 0
537#endif /* INSECURE */
538#ifndef HAVE_MMAP
539#define HAVE_MMAP 1
540#endif /* HAVE_MMAP */
541#ifndef MMAP_CLEARS
542#define MMAP_CLEARS 1
543#endif /* MMAP_CLEARS */
544#ifndef HAVE_MREMAP
545#ifdef linux
546#define HAVE_MREMAP 1
547#else /* linux */
548#define HAVE_MREMAP 0
549#endif /* linux */
550#endif /* HAVE_MREMAP */
551#ifndef MALLOC_FAILURE_ACTION
552#define MALLOC_FAILURE_ACTION errno = ENOMEM;
553#endif /* MALLOC_FAILURE_ACTION */
554#ifndef HAVE_MORECORE
555#if ONLY_MSPACES
556#define HAVE_MORECORE 0
557#else /* ONLY_MSPACES */
558#define HAVE_MORECORE 1
559#endif /* ONLY_MSPACES */
560#endif /* HAVE_MORECORE */
561#if !HAVE_MORECORE
562#define MORECORE_CONTIGUOUS 0
563#else /* !HAVE_MORECORE */
564#ifndef MORECORE
565#define MORECORE sbrk
566#endif /* MORECORE */
567#ifndef MORECORE_CONTIGUOUS
568#define MORECORE_CONTIGUOUS 1
569#endif /* MORECORE_CONTIGUOUS */
570#endif /* HAVE_MORECORE */
571#ifndef DEFAULT_GRANULARITY
572#if MORECORE_CONTIGUOUS
573#define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
574#else /* MORECORE_CONTIGUOUS */
575#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
576#endif /* MORECORE_CONTIGUOUS */
577#endif /* DEFAULT_GRANULARITY */
578#ifndef DEFAULT_TRIM_THRESHOLD
579#ifndef MORECORE_CANNOT_TRIM
580#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
581#else /* MORECORE_CANNOT_TRIM */
582#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
583#endif /* MORECORE_CANNOT_TRIM */
584#endif /* DEFAULT_TRIM_THRESHOLD */
585#ifndef DEFAULT_MMAP_THRESHOLD
586#if HAVE_MMAP
587#define DEFAULT_MMAP_THRESHOLD ((size_t)64U * (size_t)1024U)
588#else /* HAVE_MMAP */
589#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
590#endif /* HAVE_MMAP */
591#endif /* DEFAULT_MMAP_THRESHOLD */
592#ifndef USE_BUILTIN_FFS
593#define USE_BUILTIN_FFS 0
594#endif /* USE_BUILTIN_FFS */
595#ifndef USE_DEV_RANDOM
596#define USE_DEV_RANDOM 0
597#endif /* USE_DEV_RANDOM */
598#ifndef NO_MALLINFO
599#define NO_MALLINFO 0
600#endif /* NO_MALLINFO */
601#ifndef MALLINFO_FIELD_TYPE
602#define MALLINFO_FIELD_TYPE size_t
603#endif /* MALLINFO_FIELD_TYPE */
604
605/*
606 mallopt tuning options. SVID/XPG defines four standard parameter
607 numbers for mallopt, normally defined in malloc.h. None of these
608 are used in this malloc, so setting them has no effect. But this
609 malloc does support the following options.
610*/
611
612#define M_TRIM_THRESHOLD (-1)
613#define M_GRANULARITY (-2)
614#define M_MMAP_THRESHOLD (-3)
615
616/* ------------------------ Mallinfo declarations ------------------------ */
617
618#if !NO_MALLINFO
619/*
620 This version of malloc supports the standard SVID/XPG mallinfo
621 routine that returns a struct containing usage properties and
622 statistics. It should work on any system that has a
623 /usr/include/malloc.h defining struct mallinfo. The main
624 declaration needed is the mallinfo struct that is returned (by-copy)
  by mallinfo(). The mallinfo struct contains a bunch of fields that
  are not even meaningful in this version of malloc. These fields are
  instead filled by mallinfo() with other numbers that might be of
628 interest.
629
630 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
631 /usr/include/malloc.h file that includes a declaration of struct
632 mallinfo. If so, it is included; else a compliant version is
633 declared below. These must be precisely the same for mallinfo() to
634 work. The original SVID version of this struct, defined on most
635 systems with mallinfo, declares all fields as ints. But some others
636 define as unsigned long. If your system defines the fields using a
637 type of different width than listed here, you MUST #include your
638 system version and #define HAVE_USR_INCLUDE_MALLOC_H.
639*/
640
641/* #define HAVE_USR_INCLUDE_MALLOC_H */
642
643#if !ANDROID
644#ifdef HAVE_USR_INCLUDE_MALLOC_H
645#include "/usr/include/malloc.h"
646#else /* HAVE_USR_INCLUDE_MALLOC_H */
647
648struct mallinfo {
649 MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
650 MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
651 MALLINFO_FIELD_TYPE smblks; /* always 0 */
652 MALLINFO_FIELD_TYPE hblks; /* always 0 */
653 MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
654 MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
655 MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
656 MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
657 MALLINFO_FIELD_TYPE fordblks; /* total free space */
658 MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
659};
660
661#endif /* HAVE_USR_INCLUDE_MALLOC_H */
#endif /* ANDROID */
#endif /* NO_MALLINFO */
664
665#ifdef __cplusplus
666extern "C" {
667#endif /* __cplusplus */
668
669#if !ONLY_MSPACES
670
671/* ------------------- Declarations of public routines ------------------- */
672
673/* Check an additional macro for the five primary functions */
674#if !defined(USE_DL_PREFIX) || !defined(MALLOC_LEAK_CHECK)
675#define dlcalloc calloc
676#define dlfree free
677#define dlmalloc malloc
678#define dlmemalign memalign
679#define dlrealloc realloc
680#endif
681
682#ifndef USE_DL_PREFIX
683#define dlvalloc valloc
684#define dlpvalloc pvalloc
685#define dlmallinfo mallinfo
686#define dlmallopt mallopt
687#define dlmalloc_trim malloc_trim
688#define dlmalloc_walk_free_pages \
689 malloc_walk_free_pages
690#define dlmalloc_walk_heap \
691 malloc_walk_heap
692#define dlmalloc_stats malloc_stats
693#define dlmalloc_usable_size malloc_usable_size
694#define dlmalloc_footprint malloc_footprint
695#define dlmalloc_max_allowed_footprint \
696 malloc_max_allowed_footprint
697#define dlmalloc_set_max_allowed_footprint \
698 malloc_set_max_allowed_footprint
699#define dlmalloc_max_footprint malloc_max_footprint
700#define dlindependent_calloc independent_calloc
701#define dlindependent_comalloc independent_comalloc
702#endif /* USE_DL_PREFIX */
703
704
705/*
706 malloc(size_t n)
707 Returns a pointer to a newly allocated chunk of at least n bytes, or
708 null if no space is available, in which case errno is set to ENOMEM
709 on ANSI C systems.
710
711 If n is zero, malloc returns a minimum-sized chunk. (The minimum
712 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
713 systems.) Note that size_t is an unsigned type, so calls with
714 arguments that would be negative if signed are interpreted as
715 requests for huge amounts of space, which will often fail. The
716 maximum supported value of n differs across systems, but is in all
717 cases less than the maximum representable value of a size_t.
718*/
719void* dlmalloc(size_t);
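
/*
  A minimal usage sketch (names are illustrative): allocate an array and
  handle failure; on ANSI systems errno is ENOMEM when null is returned.

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    int* make_table(size_t count) {
      int* t = (int*)malloc(count * sizeof(int));
      if (t == 0)                        // allocation failed
        fprintf(stderr, "malloc failed%s\n",
                errno == ENOMEM ? ": out of memory" : "");
      return t;
    }
*/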
720
721/*
722 free(void* p)
723 Releases the chunk of memory pointed to by p, that had been previously
724 allocated using malloc or a related routine such as realloc.
725 It has no effect if p is null. If p was not malloced or already
726 freed, free(p) will by default cause the current program to abort.
727*/
728void dlfree(void*);
729
730/*
731 calloc(size_t n_elements, size_t element_size);
732 Returns a pointer to n_elements * element_size bytes, with all locations
733 set to zero.
734*/
735void* dlcalloc(size_t, size_t);
736
737/*
738 realloc(void* p, size_t n)
739 Returns a pointer to a chunk of size n that contains the same data
740 as does chunk p up to the minimum of (n, p's size) bytes, or null
741 if no space is available.
742
743 The returned pointer may or may not be the same as p. The algorithm
744 prefers extending p in most cases when possible, otherwise it
745 employs the equivalent of a malloc-copy-free sequence.
746
747 If p is null, realloc is equivalent to malloc.
748
749 If space is not available, realloc returns null, errno is set (if on
750 ANSI) and p is NOT freed.
751
752 if n is for fewer bytes than already held by p, the newly unused
753 space is lopped off and freed if possible. realloc with a size
754 argument of zero (re)allocates a minimum-sized chunk.
755
756 The old unix realloc convention of allowing the last-free'd chunk
757 to be used as an argument to realloc is not supported.
758*/
759
760void* dlrealloc(void*, size_t);
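
/*
  Because a failed realloc leaves the original block allocated, a common
  idiom (sketched here, not specific to this implementation) is to assign
  the result to a temporary first so the old pointer is not lost:

    #include <stdlib.h>

    int grow_block(void** block, size_t new_size) {
      void* bigger = realloc(*block, new_size);
      if (bigger == 0)
        return 0;           // *block is still valid and still owned by caller
      *block = bigger;      // only overwrite on success
      return 1;
    }
*/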
761
762/*
763 memalign(size_t alignment, size_t n);
764 Returns a pointer to a newly allocated chunk of n bytes, aligned
765 in accord with the alignment argument.
766
767 The alignment argument should be a power of two. If the argument is
768 not a power of two, the nearest greater power is used.
769 8-byte alignment is guaranteed by normal malloc calls, so don't
770 bother calling memalign with an argument of 8 or less.
771
772 Overreliance on memalign is a sure way to fragment space.
773*/
774void* dlmemalign(size_t, size_t);
775
776/*
777 valloc(size_t n);
778 Equivalent to memalign(pagesize, n), where pagesize is the page
779 size of the system. If the pagesize is unknown, 4096 is used.
780*/
781void* dlvalloc(size_t);
782
783/*
784 mallopt(int parameter_number, int parameter_value)
  Sets tunable parameters. The format is to provide a
786 (parameter-number, parameter-value) pair. mallopt then sets the
787 corresponding parameter to the argument value if it can (i.e., so
788 long as the value is meaningful), and returns 1 if successful else
789 0. SVID/XPG/ANSI defines four standard param numbers for mallopt,
  normally defined in malloc.h. None of these are used in this malloc,
791 so setting them has no effect. But this malloc also supports other
792 options in mallopt. See below for details. Briefly, supported
793 parameters are as follows (listed defaults are for "typical"
794 configurations).
795
  Symbol            param #  default     allowed param values
  M_TRIM_THRESHOLD     -1    2*1024*1024 any   (MAX_SIZE_T disables)
  M_GRANULARITY        -2    page size   any power of 2 >= page size
  M_MMAP_THRESHOLD     -3    256*1024    any   (or 0 if no MMAP support)
800*/
801int dlmallopt(int, int);
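
/*
  Sketch of tuning via mallopt; the values are arbitrary examples:

    if (mallopt(M_TRIM_THRESHOLD, 4 * 1024 * 1024) == 0) {
      // value rejected; the previous trim threshold remains in effect
    }
    mallopt(M_MMAP_THRESHOLD, 512 * 1024);  // mmap requests of 512K and up
*/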
802
803/*
804 malloc_footprint();
805 Returns the number of bytes obtained from the system. The total
806 number of bytes allocated by malloc, realloc etc., is less than this
807 value. Unlike mallinfo, this function returns only a precomputed
808 result, so can be called frequently to monitor memory consumption.
809 Even if locks are otherwise defined, this function does not use them,
810 so results might not be up to date.
811*/
812size_t dlmalloc_footprint(void);
813
814#if USE_MAX_ALLOWED_FOOTPRINT
815/*
816 malloc_max_allowed_footprint();
817 Returns the number of bytes that the heap is allowed to obtain
818 from the system. malloc_footprint() should always return a
819 size less than or equal to max_allowed_footprint, unless the
820 max_allowed_footprint was set to a value smaller than the
821 footprint at the time.
822*/
823size_t dlmalloc_max_allowed_footprint();
824
825/*
826 malloc_set_max_allowed_footprint();
827 Set the maximum number of bytes that the heap is allowed to
828 obtain from the system. The size will be rounded up to a whole
829 page, and the rounded number will be returned from future calls
830 to malloc_max_allowed_footprint(). If the new max_allowed_footprint
831 is larger than the current footprint, the heap will never grow
832 larger than max_allowed_footprint. If the new max_allowed_footprint
833 is smaller than the current footprint, the heap will not grow
834 further.
835
836 TODO: try to force the heap to give up memory in the shrink case,
837 and update this comment once that happens.
838*/
839void dlmalloc_set_max_allowed_footprint(size_t bytes);
840#endif /* USE_MAX_ALLOWED_FOOTPRINT */
841
842/*
843 malloc_max_footprint();
844 Returns the maximum number of bytes obtained from the system. This
845 value will be greater than current footprint if deallocated space
846 has been reclaimed by the system. The peak number of bytes allocated
847 by malloc, realloc etc., is less than this value. Unlike mallinfo,
848 this function returns only a precomputed result, so can be called
849 frequently to monitor memory consumption. Even if locks are
850 otherwise defined, this function does not use them, so results might
851 not be up to date.
852*/
853size_t dlmalloc_max_footprint(void);
854
855#if !NO_MALLINFO
856/*
857 mallinfo()
858 Returns (by copy) a struct containing various summary statistics:
859
860 arena: current total non-mmapped bytes allocated from system
861 ordblks: the number of free chunks
862 smblks: always zero.
863 hblks: current number of mmapped regions
864 hblkhd: total bytes held in mmapped regions
865 usmblks: the maximum total allocated space. This will be greater
866 than current total if trimming has occurred.
867 fsmblks: always zero
868 uordblks: current total allocated space (normal or mmapped)
869 fordblks: total free space
870 keepcost: the maximum number of bytes that could ideally be released
871 back to system via malloc_trim. ("ideally" means that
872 it ignores page restrictions etc.)
873
874 Because these fields are ints, but internal bookkeeping may
875 be kept as longs, the reported values may wrap around zero and
876 thus be inaccurate.
877*/
878struct mallinfo dlmallinfo(void);
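
/*
  Sketch of reading the summary statistics; the casts assume the default
  MALLINFO_FIELD_TYPE of size_t and would need adjusting otherwise:

    #include <stdio.h>

    struct mallinfo mi = mallinfo();
    printf("allocated %zu, free %zu, trimmable %zu bytes\n",
           (size_t)mi.uordblks, (size_t)mi.fordblks, (size_t)mi.keepcost);
*/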
879#endif /* NO_MALLINFO */
880
881/*
882 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
883
884 independent_calloc is similar to calloc, but instead of returning a
885 single cleared space, it returns an array of pointers to n_elements
886 independent elements that can hold contents of size elem_size, each
887 of which starts out cleared, and can be independently freed,
888 realloc'ed etc. The elements are guaranteed to be adjacently
889 allocated (this is not guaranteed to occur with multiple callocs or
890 mallocs), which may also improve cache locality in some
891 applications.
892
893 The "chunks" argument is optional (i.e., may be null, which is
894 probably the most typical usage). If it is null, the returned array
895 is itself dynamically allocated and should also be freed when it is
896 no longer needed. Otherwise, the chunks array must be of at least
897 n_elements in length. It is filled in with the pointers to the
898 chunks.
899
900 In either case, independent_calloc returns this pointer array, or
901 null if the allocation failed. If n_elements is zero and "chunks"
902 is null, it returns a chunk representing an array with zero elements
903 (which should be freed if not wanted).
904
905 Each element must be individually freed when it is no longer
906 needed. If you'd like to instead be able to free all at once, you
907 should instead use regular calloc and assign pointers into this
908 space to represent elements. (In this case though, you cannot
909 independently free elements.)
910
911 independent_calloc simplifies and speeds up implementations of many
912 kinds of pools. It may also be useful when constructing large data
913 structures that initially have a fixed number of fixed-sized nodes,
914 but the number is not known at compile time, and some of the nodes
915 may later need to be freed. For example:
916
917 struct Node { int item; struct Node* next; };
918
919 struct Node* build_list() {
920 struct Node** pool;
921 int n = read_number_of_nodes_needed();
922 if (n <= 0) return 0;
    pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
924 if (pool == 0) die();
925 // organize into a linked list...
926 struct Node* first = pool[0];
    for (int i = 0; i < n-1; ++i)
928 pool[i]->next = pool[i+1];
929 free(pool); // Can now free the array (or not, if it is needed later)
930 return first;
931 }
932*/
933void** dlindependent_calloc(size_t, size_t, void**);
934
935/*
936 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
937
938 independent_comalloc allocates, all at once, a set of n_elements
939 chunks with sizes indicated in the "sizes" array. It returns
940 an array of pointers to these elements, each of which can be
941 independently freed, realloc'ed etc. The elements are guaranteed to
942 be adjacently allocated (this is not guaranteed to occur with
943 multiple callocs or mallocs), which may also improve cache locality
944 in some applications.
945
946 The "chunks" argument is optional (i.e., may be null). If it is null
947 the returned array is itself dynamically allocated and should also
948 be freed when it is no longer needed. Otherwise, the chunks array
949 must be of at least n_elements in length. It is filled in with the
950 pointers to the chunks.
951
952 In either case, independent_comalloc returns this pointer array, or
953 null if the allocation failed. If n_elements is zero and chunks is
954 null, it returns a chunk representing an array with zero elements
955 (which should be freed if not wanted).
956
957 Each element must be individually freed when it is no longer
958 needed. If you'd like to instead be able to free all at once, you
959 should instead use a single regular malloc, and assign pointers at
960 particular offsets in the aggregate space. (In this case though, you
961 cannot independently free elements.)
962
  independent_comalloc differs from independent_calloc in that each
964 element may have a different size, and also that it does not
965 automatically clear elements.
966
967 independent_comalloc can be used to speed up allocation in cases
968 where several structs or objects must always be allocated at the
969 same time. For example:
970
971 struct Head { ... }
972 struct Foot { ... }
973
974 void send_message(char* msg) {
975 int msglen = strlen(msg);
976 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
977 void* chunks[3];
978 if (independent_comalloc(3, sizes, chunks) == 0)
979 die();
980 struct Head* head = (struct Head*)(chunks[0]);
981 char* body = (char*)(chunks[1]);
982 struct Foot* foot = (struct Foot*)(chunks[2]);
983 // ...
984 }
985
986 In general though, independent_comalloc is worth using only for
987 larger values of n_elements. For small values, you probably won't
988 detect enough difference from series of malloc calls to bother.
989
990 Overuse of independent_comalloc can increase overall memory usage,
991 since it cannot reuse existing noncontiguous small chunks that
992 might be available for some of the elements.
993*/
994void** dlindependent_comalloc(size_t, size_t*, void**);
995
996
997/*
998 pvalloc(size_t n);
999 Equivalent to valloc(minimum-page-that-holds(n)), that is,
1000 round up n to nearest pagesize.
1001 */
1002void* dlpvalloc(size_t);
1003
1004/*
1005 malloc_trim(size_t pad);
1006
1007 If possible, gives memory back to the system (via negative arguments
1008 to sbrk) if there is unused memory at the `high' end of the malloc
1009 pool or in unused MMAP segments. You can call this after freeing
1010 large blocks of memory to potentially reduce the system-level memory
1011 requirements of a program. However, it cannot guarantee to reduce
1012 memory. Under some allocation patterns, some large free blocks of
1013 memory will be locked between two used chunks, so they cannot be
1014 given back to the system.
1015
1016 The `pad' argument to malloc_trim represents the amount of free
1017 trailing space to leave untrimmed. If this argument is zero, only
1018 the minimum amount of memory to maintain internal data structures
1019 will be left. Non-zero arguments can be supplied to maintain enough
1020 trailing space to service future expected allocations without having
1021 to re-obtain memory from the system.
1022
1023 Malloc_trim returns 1 if it actually released any memory, else 0.
1024*/
1025int dlmalloc_trim(size_t);
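
/*
  Sketch of trimming after releasing a large working set; the sizes and
  the 'big' pointer are arbitrary examples:

    void* big = malloc(8 * 1024 * 1024);
    // ... use the buffer, then release it ...
    free(big);
    if (malloc_trim(1024 * 1024))       // keep ~1MB of slack for future use
      ; // some memory was actually returned to the system
*/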
1026
1027/*
1028 malloc_walk_free_pages(handler, harg)
1029
1030 Calls the provided handler on each free region in the heap. The
  memory between start and end is guaranteed not to contain any
1032 important data, so the handler is free to alter the contents
1033 in any way. This can be used to advise the OS that large free
1034 regions may be swapped out.
1035
1036 The value in harg will be passed to each call of the handler.
1037 */
1038void dlmalloc_walk_free_pages(void(*)(void*, void*, void*), void*);
1039
1040/*
1041 malloc_walk_heap(handler, harg)
1042
1043 Calls the provided handler on each object or free region in the
1044 heap. The handler will receive the chunk pointer and length, the
1045 object pointer and length, and the value in harg on each call.
1046 */
1047void dlmalloc_walk_heap(void(*)(const void*, size_t,
1048 const void*, size_t, void*),
1049 void*);
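
/*
  Sketch of a handler matching the signature above; it just sums the
  user-visible bytes of every object it is shown (names are illustrative):

    static void count_cb(const void* chunkptr, size_t chunklen,
                         const void* userptr, size_t userlen, void* harg) {
      (void)chunkptr; (void)chunklen; (void)userptr;
      *(size_t*)harg += userlen;        // accumulate into the caller's total
    }

    size_t total_user_bytes(void) {
      size_t total = 0;
      malloc_walk_heap(count_cb, &total);
      return total;
    }
*/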
1050
1051/*
1052 malloc_usable_size(void* p);
1053
1054 Returns the number of bytes you can actually use in
1055 an allocated chunk, which may be more than you requested (although
1056 often not) due to alignment and minimum size constraints.
1057 You can use this many bytes without worrying about
1058 overwriting other allocated objects. This is not a particularly great
1059 programming practice. malloc_usable_size can be more useful in
1060 debugging and assertions, for example:
1061
1062 p = malloc(n);
1063 assert(malloc_usable_size(p) >= 256);
1064*/
1065size_t dlmalloc_usable_size(void*);
1066
1067/*
1068 malloc_stats();
1069 Prints on stderr the amount of space obtained from the system (both
1070 via sbrk and mmap), the maximum amount (which may be more than
1071 current if malloc_trim and/or munmap got called), and the current
1072 number of bytes allocated via malloc (or realloc, etc) but not yet
1073 freed. Note that this is the number of bytes allocated, not the
1074 number requested. It will be larger than the number requested
1075 because of alignment and bookkeeping overhead. Because it includes
1076 alignment wastage as being in use, this figure may be greater than
1077 zero even when no user-level chunks are allocated.
1078
1079 The reported current and maximum system memory can be inaccurate if
1080 a program makes other calls to system memory allocation functions
1081 (normally sbrk) outside of malloc.
1082
1083 malloc_stats prints only the most commonly interesting statistics.
1084 More information can be obtained by calling mallinfo.
1085*/
1086void dlmalloc_stats(void);
1087
1088#endif /* ONLY_MSPACES */
1089
1090#if MSPACES
1091
1092/*
1093 mspace is an opaque type representing an independent
1094 region of space that supports mspace_malloc, etc.
1095*/
1096typedef void* mspace;
1097
1098/*
1099 create_mspace creates and returns a new independent space with the
1100 given initial capacity, or, if 0, the default granularity size. It
1101 returns null if there is no system memory available to create the
1102 space. If argument locked is non-zero, the space uses a separate
1103 lock to control access. The capacity of the space will grow
1104 dynamically as needed to service mspace_malloc requests. You can
1105 control the sizes of incremental increases of this space by
1106 compiling with a different DEFAULT_GRANULARITY or dynamically
1107 setting with mallopt(M_GRANULARITY, value).
1108*/
1109mspace create_mspace(size_t capacity, int locked);
1110
1111/*
1112 destroy_mspace destroys the given space, and attempts to return all
1113 of its memory back to the system, returning the total number of
1114 bytes freed. After destruction, the results of access to all memory
1115 used by the space become undefined.
1116*/
1117size_t destroy_mspace(mspace msp);
1118
1119/*
1120 create_mspace_with_base uses the memory supplied as the initial base
1121 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1122 space is used for bookkeeping, so the capacity must be at least this
1123 large. (Otherwise 0 is returned.) When this initial space is
1124 exhausted, additional memory will be obtained from the system.
1125 Destroying this space will deallocate all additionally allocated
1126 space (if possible) but not the initial base.
1127*/
1128mspace create_mspace_with_base(void* base, size_t capacity, int locked);
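
/*
  Sketch of carving an mspace out of a caller-supplied region; the arena
  size is arbitrary and part of it is consumed by bookkeeping:

    static char arena[64 * 1024];

    void demo(void) {
      mspace ms = create_mspace_with_base(arena, sizeof(arena), 0);
      if (ms != 0) {
        void* p = mspace_malloc(ms, 128);
        mspace_free(ms, p);
        destroy_mspace(ms);   // releases extra system memory, never 'arena'
      }
    }
*/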
1129
1130/*
1131 mspace_malloc behaves as malloc, but operates within
1132 the given space.
1133*/
1134void* mspace_malloc(mspace msp, size_t bytes);
1135
1136/*
1137 mspace_free behaves as free, but operates within
1138 the given space.
1139
1140 If compiled with FOOTERS==1, mspace_free is not actually needed.
1141 free may be called instead of mspace_free because freed chunks from
1142 any space are handled by their originating spaces.
1143*/
1144void mspace_free(mspace msp, void* mem);
1145
1146/*
1147 mspace_realloc behaves as realloc, but operates within
1148 the given space.
1149
1150 If compiled with FOOTERS==1, mspace_realloc is not actually
1151 needed. realloc may be called instead of mspace_realloc because
1152 realloced chunks from any space are handled by their originating
1153 spaces.
1154*/
1155void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1156
1157/*
1158 mspace_calloc behaves as calloc, but operates within
1159 the given space.
1160*/
1161void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1162
1163/*
1164 mspace_memalign behaves as memalign, but operates within
1165 the given space.
1166*/
1167void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1168
1169/*
1170 mspace_independent_calloc behaves as independent_calloc, but
1171 operates within the given space.
1172*/
1173void** mspace_independent_calloc(mspace msp, size_t n_elements,
1174 size_t elem_size, void* chunks[]);
1175
1176/*
1177 mspace_independent_comalloc behaves as independent_comalloc, but
1178 operates within the given space.
1179*/
1180void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1181 size_t sizes[], void* chunks[]);
1182
1183/*
1184 mspace_footprint() returns the number of bytes obtained from the
1185 system for this space.
1186*/
1187size_t mspace_footprint(mspace msp);
1188
1189/*
1190 mspace_max_footprint() returns the peak number of bytes obtained from the
1191 system for this space.
1192*/
1193size_t mspace_max_footprint(mspace msp);
1194
1195
1196#if !NO_MALLINFO
1197/*
1198 mspace_mallinfo behaves as mallinfo, but reports properties of
1199 the given space.
1200*/
1201struct mallinfo mspace_mallinfo(mspace msp);
1202#endif /* NO_MALLINFO */
1203
1204/*
1205 mspace_malloc_stats behaves as malloc_stats, but reports
1206 properties of the given space.
1207*/
1208void mspace_malloc_stats(mspace msp);
1209
1210/*
1211 mspace_trim behaves as malloc_trim, but
1212 operates within the given space.
1213*/
1214int mspace_trim(mspace msp, size_t pad);
1215
1216/*
1217 An alias for mallopt.
1218*/
1219int mspace_mallopt(int, int);
1220
1221#endif /* MSPACES */
1222
1223#ifdef __cplusplus
1224}; /* end of extern "C" */
1225#endif /* __cplusplus */
1226
1227/*
1228 ========================================================================
1229 To make a fully customizable malloc.h header file, cut everything
1230 above this line, put into file malloc.h, edit to suit, and #include it
1231 on the next line, as well as in programs that use this malloc.
1232 ========================================================================
1233*/
1234
1235/* #include "malloc.h" */
1236
1237/*------------------------------ internal #includes ---------------------- */
1238
1239#ifdef WIN32
1240#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1241#endif /* WIN32 */
1242
1243#include <stdio.h> /* for printing in malloc_stats */
1244
1245#ifndef LACKS_ERRNO_H
1246#include <errno.h> /* for MALLOC_FAILURE_ACTION */
1247#endif /* LACKS_ERRNO_H */
1248#if FOOTERS
1249#include <time.h> /* for magic initialization */
1250#endif /* FOOTERS */
1251#ifndef LACKS_STDLIB_H
1252#include <stdlib.h> /* for abort() */
1253#endif /* LACKS_STDLIB_H */
1254#ifdef DEBUG
1255#if ABORT_ON_ASSERT_FAILURE
1256#define assert(x) if(!(x)) ABORT
1257#else /* ABORT_ON_ASSERT_FAILURE */
1258#include <assert.h>
1259#endif /* ABORT_ON_ASSERT_FAILURE */
1260#else /* DEBUG */
1261#define assert(x)
1262#endif /* DEBUG */
1263#ifndef LACKS_STRING_H
1264#include <string.h> /* for memset etc */
1265#endif /* LACKS_STRING_H */
1266#if USE_BUILTIN_FFS
1267#ifndef LACKS_STRINGS_H
1268#include <strings.h> /* for ffs */
1269#endif /* LACKS_STRINGS_H */
1270#endif /* USE_BUILTIN_FFS */
1271#if HAVE_MMAP
1272#ifndef LACKS_SYS_MMAN_H
1273#include <sys/mman.h> /* for mmap */
1274#endif /* LACKS_SYS_MMAN_H */
1275#ifndef LACKS_FCNTL_H
1276#include <fcntl.h>
1277#endif /* LACKS_FCNTL_H */
1278#endif /* HAVE_MMAP */
1279#if HAVE_MORECORE
1280#ifndef LACKS_UNISTD_H
1281#include <unistd.h> /* for sbrk */
1282#else /* LACKS_UNISTD_H */
1283#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1284extern void* sbrk(ptrdiff_t);
1285#endif /* FreeBSD etc */
1286#endif /* LACKS_UNISTD_H */
#endif /* HAVE_MORECORE */
1288
1289#ifndef WIN32
1290#ifndef malloc_getpagesize
1291# ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1292# ifndef _SC_PAGE_SIZE
1293# define _SC_PAGE_SIZE _SC_PAGESIZE
1294# endif
1295# endif
1296# ifdef _SC_PAGE_SIZE
1297# define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1298# else
1299# if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1300 extern size_t getpagesize();
1301# define malloc_getpagesize getpagesize()
1302# else
1303# ifdef WIN32 /* use supplied emulation of getpagesize */
1304# define malloc_getpagesize getpagesize()
1305# else
1306# ifndef LACKS_SYS_PARAM_H
1307# include <sys/param.h>
1308# endif
1309# ifdef EXEC_PAGESIZE
1310# define malloc_getpagesize EXEC_PAGESIZE
1311# else
1312# ifdef NBPG
1313# ifndef CLSIZE
1314# define malloc_getpagesize NBPG
1315# else
1316# define malloc_getpagesize (NBPG * CLSIZE)
1317# endif
1318# else
1319# ifdef NBPC
1320# define malloc_getpagesize NBPC
1321# else
1322# ifdef PAGESIZE
1323# define malloc_getpagesize PAGESIZE
1324# else /* just guess */
1325# define malloc_getpagesize ((size_t)4096U)
1326# endif
1327# endif
1328# endif
1329# endif
1330# endif
1331# endif
1332# endif
1333#endif
1334#endif
1335
1336/* ------------------- size_t and alignment properties -------------------- */
1337
1338/* The byte and bit size of a size_t */
1339#define SIZE_T_SIZE (sizeof(size_t))
1340#define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1341
1342/* Some constants coerced to size_t */
/* Annoying but necessary to avoid errors on some platforms */
1344#define SIZE_T_ZERO ((size_t)0)
1345#define SIZE_T_ONE ((size_t)1)
1346#define SIZE_T_TWO ((size_t)2)
1347#define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1348#define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1349#define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1350#define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1351
1352/* The bit mask value corresponding to MALLOC_ALIGNMENT */
1353#define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1354
1355/* True if address a has acceptable alignment */
1356#define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1357
1358/* the number of bytes to offset an address to align it */
1359#define align_offset(A)\
1360 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1361 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
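/*
  Illustrative sketch (not part of the allocator, excluded from
  compilation): how align_offset rounds an arbitrary address up to the
  next MALLOC_ALIGNMENT boundary. Assuming MALLOC_ALIGNMENT == 8 (so
  CHUNK_ALIGN_MASK == 7), an address ending in ...0x13 has low bits 3
  and needs an offset of (8 - 3) & 7 == 5 to reach ...0x18.
*/
#if 0
static void* example_align_up(void* a) {
  /* Already-aligned addresses get offset 0; anything else is bumped
     to the next MALLOC_ALIGNMENT boundary. */
  return (void*)((char*)a + align_offset(a));
}
#endif /* illustrative sketch */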
1362
1363/* -------------------------- MMAP preliminaries ------------------------- */
1364
1365/*
1366 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1367 checks to fail so compiler optimizer can delete code rather than
1368 using so many "#if"s.
1369*/
1370
1371
1372/* MORECORE and MMAP must return MFAIL on failure */
1373#define MFAIL ((void*)(MAX_SIZE_T))
1374#define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1375
1376#if !HAVE_MMAP
1377#define IS_MMAPPED_BIT (SIZE_T_ZERO)
1378#define USE_MMAP_BIT (SIZE_T_ZERO)
1379#define CALL_MMAP(s) MFAIL
1380#define CALL_MUNMAP(a, s) (-1)
1381#define DIRECT_MMAP(s) MFAIL
1382
1383#else /* HAVE_MMAP */
1384#define IS_MMAPPED_BIT (SIZE_T_ONE)
1385#define USE_MMAP_BIT (SIZE_T_ONE)
1386
1387#ifndef WIN32
1388#define CALL_MUNMAP(a, s) munmap((a), (s))
1389#define MMAP_PROT (PROT_READ|PROT_WRITE)
1390#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1391#define MAP_ANONYMOUS MAP_ANON
1392#endif /* MAP_ANON */
1393#ifdef MAP_ANONYMOUS
1394#define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1395#define CALL_MMAP(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1396#else /* MAP_ANONYMOUS */
1397/*
1398 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1399 is unlikely to be needed, but is supplied just in case.
1400*/
1401#define MMAP_FLAGS (MAP_PRIVATE)
1402static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1403#define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
1404 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1405 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1406 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1407#endif /* MAP_ANONYMOUS */
1408
1409#define DIRECT_MMAP(s) CALL_MMAP(s)
1410#else /* WIN32 */
1411
1412/* Win32 MMAP via VirtualAlloc */
1413static void* win32mmap(size_t size) {
1414 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1415 return (ptr != 0)? ptr: MFAIL;
1416}
1417
1418/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1419static void* win32direct_mmap(size_t size) {
1420 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1421 PAGE_READWRITE);
1422 return (ptr != 0)? ptr: MFAIL;
1423}
1424
1425/* This function supports releasing coalesced segments */
1426static int win32munmap(void* ptr, size_t size) {
1427 MEMORY_BASIC_INFORMATION minfo;
1428 char* cptr = ptr;
1429 while (size) {
1430 if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1431 return -1;
1432 if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1433 minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1434 return -1;
1435 if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1436 return -1;
1437 cptr += minfo.RegionSize;
1438 size -= minfo.RegionSize;
1439 }
1440 return 0;
1441}
1442
1443#define CALL_MMAP(s) win32mmap(s)
1444#define CALL_MUNMAP(a, s) win32munmap((a), (s))
1445#define DIRECT_MMAP(s) win32direct_mmap(s)
1446#endif /* WIN32 */
1447#endif /* HAVE_MMAP */
1448
1449#if HAVE_MMAP && HAVE_MREMAP
1450#define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1451#else /* HAVE_MMAP && HAVE_MREMAP */
1452#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1453#endif /* HAVE_MMAP && HAVE_MREMAP */
1454
1455#if HAVE_MORECORE
1456#define CALL_MORECORE(S) MORECORE(S)
1457#else /* HAVE_MORECORE */
1458#define CALL_MORECORE(S) MFAIL
1459#endif /* HAVE_MORECORE */
1460
1461/* mstate bit set if contiguous morecore disabled or failed */
1462#define USE_NONCONTIGUOUS_BIT (4U)
1463
1464/* segment bit set in create_mspace_with_base */
1465#define EXTERN_BIT (8U)
1466
1467
1468/* --------------------------- Lock preliminaries ------------------------ */
1469
1470#if USE_LOCKS
1471
1472/*
1473 When locks are defined, there are up to two global locks:
1474
1475 * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
1476 MORECORE. In many cases sys_alloc requires two calls, which should
1477 not be interleaved with calls by other threads. This does not
1478 protect against direct calls to MORECORE by other threads not
1479 using this lock, so there is still code to cope as best we can with
1480 interference.
1481
1482 * magic_init_mutex ensures that mparams.magic and other
1483 unique mparams values are initialized only once.
1484*/
1485
1486#ifndef WIN32
1487/* By default use posix locks */
1488#include <pthread.h>
1489#define MLOCK_T pthread_mutex_t
1490#define INITIAL_LOCK(l) pthread_mutex_init(l, NULL)
1491#define ACQUIRE_LOCK(l) pthread_mutex_lock(l)
1492#define RELEASE_LOCK(l) pthread_mutex_unlock(l)
1493
1494#if HAVE_MORECORE
1495static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
1496#endif /* HAVE_MORECORE */
1497
1498static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
1499
1500#else /* WIN32 */
1501/*
1502 Because lock-protected regions have bounded times, and there
1503 are no recursive lock calls, we can use simple spinlocks.
1504*/
1505
1506#define MLOCK_T long
1507static int win32_acquire_lock (MLOCK_T *sl) {
1508 for (;;) {
1509#ifdef InterlockedCompareExchangePointer
1510 if (!InterlockedCompareExchange(sl, 1, 0))
1511 return 0;
1512#else /* Use older void* version */
1513 if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
1514 return 0;
1515#endif /* InterlockedCompareExchangePointer */
1516 Sleep (0);
1517 }
1518}
1519
1520static void win32_release_lock (MLOCK_T *sl) {
1521 InterlockedExchange (sl, 0);
1522}
1523
1524#define INITIAL_LOCK(l) *(l)=0
1525#define ACQUIRE_LOCK(l) win32_acquire_lock(l)
1526#define RELEASE_LOCK(l) win32_release_lock(l)
1527#if HAVE_MORECORE
1528static MLOCK_T morecore_mutex;
1529#endif /* HAVE_MORECORE */
1530static MLOCK_T magic_init_mutex;
1531#endif /* WIN32 */
1532
1533#define USE_LOCK_BIT (2U)
1534#else /* USE_LOCKS */
1535#define USE_LOCK_BIT (0U)
1536#define INITIAL_LOCK(l)
1537#endif /* USE_LOCKS */
1538
1539#if USE_LOCKS && HAVE_MORECORE
1540#define ACQUIRE_MORECORE_LOCK() ACQUIRE_LOCK(&morecore_mutex);
1541#define RELEASE_MORECORE_LOCK() RELEASE_LOCK(&morecore_mutex);
1542#else /* USE_LOCKS && HAVE_MORECORE */
1543#define ACQUIRE_MORECORE_LOCK()
1544#define RELEASE_MORECORE_LOCK()
1545#endif /* USE_LOCKS && HAVE_MORECORE */
1546
1547#if USE_LOCKS
1548#define ACQUIRE_MAGIC_INIT_LOCK() ACQUIRE_LOCK(&magic_init_mutex);
1549#define RELEASE_MAGIC_INIT_LOCK() RELEASE_LOCK(&magic_init_mutex);
1550#else /* USE_LOCKS */
1551#define ACQUIRE_MAGIC_INIT_LOCK()
1552#define RELEASE_MAGIC_INIT_LOCK()
1553#endif /* USE_LOCKS */
1554
1555
1556/* ----------------------- Chunk representations ------------------------ */
1557
1558/*
1559 (The following includes lightly edited explanations by Colin Plumb.)
1560
1561 The malloc_chunk declaration below is misleading (but accurate and
1562 necessary). It declares a "view" into memory allowing access to
1563 necessary fields at known offsets from a given base.
1564
1565 Chunks of memory are maintained using a `boundary tag' method as
1566 originally described by Knuth. (See the paper by Paul Wilson
1567 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1568 techniques.) Sizes of free chunks are stored both in the front of
1569 each chunk and at the end. This makes consolidating fragmented
1570 chunks into bigger chunks fast. The head fields also hold bits
1571 representing whether chunks are free or in use.
1572
1573 Here are some pictures to make it clearer. They are "exploded" to
1574 show that the state of a chunk can be thought of as extending from
1575 the high 31 bits of the head field of its header through the
1576 prev_foot and PINUSE_BIT bit of the following chunk header.
1577
1578 A chunk that's in use looks like:
1579
1580 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1581 | Size of previous chunk (if P = 1) |
1582 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1583 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1584 | Size of this chunk 1| +-+
1585 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1586 | |
1587 +- -+
1588 | |
1589 +- -+
1590 | :
1591 +- size - sizeof(size_t) available payload bytes -+
1592 : |
1593 chunk-> +- -+
1594 | |
1595 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1596 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
1597 | Size of next chunk (may or may not be in use) | +-+
1598 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1599
1600 And if it's free, it looks like this:
1601
1602 chunk-> +- -+
1603 | User payload (must be in use, or we would have merged!) |
1604 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1605 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1606 | Size of this chunk 0| +-+
1607 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1608 | Next pointer |
1609 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1610 | Prev pointer |
1611 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1612 | :
1613 +- size - sizeof(struct chunk) unused bytes -+
1614 : |
1615 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1616 | Size of this chunk |
1617 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1618 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
1619 | Size of next chunk (must be in use, or we would have merged)| +-+
1620 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1621 | :
1622 +- User payload -+
1623 : |
1624 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1625 |0|
1626 +-+
1627 Note that since we always merge adjacent free chunks, the chunks
1628 adjacent to a free chunk must be in use.
1629
1630 Given a pointer to a chunk (which can be derived trivially from the
1631 payload pointer) we can, in O(1) time, find out whether the adjacent
1632 chunks are free, and if so, unlink them from the lists that they
1633 are on and merge them with the current chunk.
1634
1635 Chunks always begin on even word boundaries, so the mem portion
1636 (which is returned to the user) is also on an even word boundary, and
1637 thus at least double-word aligned.
1638
1639 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
1640 chunk size (which is always a multiple of two words), is an in-use
1641 bit for the *previous* chunk. If that bit is *clear*, then the
1642 word before the current chunk size contains the previous chunk
1643 size, and can be used to find the front of the previous chunk.
1644 The very first chunk allocated always has this bit set, preventing
1645 access to non-existent (or non-owned) memory. If pinuse is set for
1646 any given chunk, then you CANNOT determine the size of the
1647 previous chunk, and might even get a memory addressing fault when
1648 trying to do so.
1649
1650 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
1651 the chunk size redundantly records whether the current chunk is
1652 inuse. This redundancy enables usage checks within free and realloc,
1653 and reduces indirection when freeing and consolidating chunks.
1654
1655 Each freshly allocated chunk must have both cinuse and pinuse set.
1656 That is, each allocated chunk borders either a previously allocated
1657 and still in-use chunk, or the base of its memory arena. This is
1658 ensured by making all allocations from the `lowest' part of any
1659 found chunk. Further, no free chunk physically borders another one,
1660 so each free chunk is known to be preceded and followed by either
1661 inuse chunks or the ends of memory.
1662
1663 Note that the `foot' of the current chunk is actually represented
1664 as the prev_foot of the NEXT chunk. This makes it easier to
1665 deal with alignments etc but can be very confusing when trying
1666 to extend or adapt this code.
1667
1668 The exceptions to all this are
1669
1670 1. The special chunk `top' is the top-most available chunk (i.e.,
1671 the one bordering the end of available memory). It is treated
1672 specially. Top is never included in any bin, is used only if
1673 no other chunk is available, and is released back to the
1674 system if it is very large (see M_TRIM_THRESHOLD). In effect,
1675 the top chunk is treated as larger (and thus less well
1676 fitting) than any other available chunk. The top chunk
1677 doesn't update its trailing size field since there is no next
1678 contiguous chunk that would have to index off it. However,
1679 space is still allocated for it (TOP_FOOT_SIZE) to enable
1680 separation or merging when space is extended.
1681
1682 2. Chunks allocated via mmap, which have the lowest-order bit
1683 (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
1684 PINUSE_BIT in their head fields. Because they are allocated
1685 one-by-one, each must carry its own prev_foot field, which is
1686 also used to hold the offset this chunk has within its mmapped
1687 region, which is needed to preserve alignment. Each mmapped
1688 chunk is trailed by the first two fields of a fake next-chunk
1689 for sake of usage checks.
1690
1691*/
1692
1693struct malloc_chunk {
1694 size_t prev_foot; /* Size of previous chunk (if free). */
1695 size_t head; /* Size and inuse bits. */
1696 struct malloc_chunk* fd; /* double links -- used only if free. */
1697 struct malloc_chunk* bk;
1698};
1699
1700typedef struct malloc_chunk mchunk;
1701typedef struct malloc_chunk* mchunkptr;
1702typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
1703typedef unsigned int bindex_t; /* Described below */
1704typedef unsigned int binmap_t; /* Described below */
1705typedef unsigned int flag_t; /* The type of various bit flag sets */
1706
1707/* ------------------- Chunks sizes and alignments ----------------------- */
1708
1709#define MCHUNK_SIZE (sizeof(mchunk))
1710
1711#if FOOTERS
1712#define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1713#else /* FOOTERS */
1714#define CHUNK_OVERHEAD (SIZE_T_SIZE)
1715#endif /* FOOTERS */
1716
1717/* MMapped chunks need a second word of overhead ... */
1718#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1719/* ... and additional padding for fake next-chunk at foot */
1720#define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
1721
1722/* The smallest size we can malloc is an aligned minimal chunk */
1723#define MIN_CHUNK_SIZE\
1724 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1725
1726/* conversion from malloc headers to user pointers, and back */
1727#define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
1728#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
1729/* chunk associated with aligned address A */
1730#define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
1731
1732/* Bounds on request (not chunk) sizes. */
1733#define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
1734#define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1735
1736/* pad request bytes into a usable size */
1737#define pad_request(req) \
1738 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1739
1740/* pad request, checking for minimum (but not maximum) */
1741#define request2size(req) \
1742 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
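/*
  Illustrative sketch (not part of the allocator, excluded from
  compilation): concrete request2size values, assuming a 32-bit build
  without FOOTERS, i.e. SIZE_T_SIZE == 4, CHUNK_OVERHEAD == 4,
  MALLOC_ALIGNMENT == 8, MIN_CHUNK_SIZE == 16 and MIN_REQUEST == 11.
*/
#if 0
static void example_request2size(void) {
  assert(request2size(1)  == 16);  /* below MIN_REQUEST -> MIN_CHUNK_SIZE */
  assert(request2size(12) == 16);  /* 12 + 4 bytes overhead fits in 16 */
  assert(request2size(13) == 24);  /* 13 + 4 bytes overhead rounds up to 24 */
  assert(request2size(25) == 32);  /* 25 + 4 bytes overhead rounds up to 32 */
}
#endif /* illustrative sketch */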
1743
1744
1745/* ------------------ Operations on head and foot fields ----------------- */
1746
1747/*
1748 The head field of a chunk is or'ed with PINUSE_BIT when the previous
1749 adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in
1750 use. If the chunk was obtained with mmap, the prev_foot field has
1751 IS_MMAPPED_BIT set, and also holds the offset from the base of the
1752 mmapped region to the base of the chunk.
1753*/
1754
1755#define PINUSE_BIT (SIZE_T_ONE)
1756#define CINUSE_BIT (SIZE_T_TWO)
1757#define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
1758
1759/* Head value for fenceposts */
1760#define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
1761
1762/* extraction of fields from head words */
1763#define cinuse(p) ((p)->head & CINUSE_BIT)
1764#define pinuse(p) ((p)->head & PINUSE_BIT)
1765#define chunksize(p) ((p)->head & ~(INUSE_BITS))
1766
1767#define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
1768#define clear_cinuse(p) ((p)->head &= ~CINUSE_BIT)
1769
1770/* Treat space at ptr +/- offset as a chunk */
1771#define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1772#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1773
1774/* Ptr to next or previous physical malloc_chunk. */
1775#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1776#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1777
1778/* extract next chunk's pinuse bit */
1779#define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
1780
1781/* Get/set size at footer */
1782#define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1783#define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1784
1785/* Set size, pinuse bit, and foot */
1786#define set_size_and_pinuse_of_free_chunk(p, s)\
1787 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1788
1789/* Set size, pinuse bit, foot, and clear next pinuse */
1790#define set_free_with_pinuse(p, s, n)\
1791 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1792
1793#define is_mmapped(p)\
1794 (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1795
1796/* Get the internal overhead associated with chunk p */
1797#define overhead_for(p)\
1798 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1799
1800/* Return true if malloced space is not necessarily cleared */
1801#if MMAP_CLEARS
1802#define calloc_must_clear(p) (!is_mmapped(p))
1803#else /* MMAP_CLEARS */
1804#define calloc_must_clear(p) (1)
1805#endif /* MMAP_CLEARS */
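/*
  Illustrative sketch (not part of the allocator, excluded from
  compilation): how the boundary-tag macros above cooperate for a
  hypothetical chunk p of (aligned) size s. Marking p free records s in
  p's own head and in the prev_foot of the physically following chunk,
  so that chunk can find p again via prev_chunk().
*/
#if 0
static void example_boundary_tags(mchunkptr p, size_t s) {
  mchunkptr n = chunk_plus_offset(p, s);  /* physical successor of p */
  set_free_with_pinuse(p, s, n);          /* mark p free, clear n's pinuse */
  assert(!cinuse(p));                     /* head no longer carries CINUSE_BIT */
  assert(chunksize(p) == s);
  assert(prev_chunk(n) == p);             /* found via n->prev_foot == s */
  assert(!next_pinuse(p));                /* n now records "previous is free" */
}
#endif /* illustrative sketch */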
1806
1807/* ---------------------- Overlaid data structures ----------------------- */
1808
1809/*
1810 When chunks are not in use, they are treated as nodes of either
1811 lists or trees.
1812
1813 "Small" chunks are stored in circular doubly-linked lists, and look
1814 like this:
1815
1816 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1817 | Size of previous chunk |
1818 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1819 `head:' | Size of chunk, in bytes |P|
1820 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1821 | Forward pointer to next chunk in list |
1822 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1823 | Back pointer to previous chunk in list |
1824 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1825 | Unused space (may be 0 bytes long) .
1826 . .
1827 . |
1828nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1829 `foot:' | Size of chunk, in bytes |
1830 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1831
1832 Larger chunks are kept in a form of bitwise digital trees (aka
1833 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
1834 free chunks greater than 256 bytes, their size doesn't impose any
1835 constraints on user chunk sizes. Each node looks like:
1836
1837 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1838 | Size of previous chunk |
1839 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1840 `head:' | Size of chunk, in bytes |P|
1841 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1842 | Forward pointer to next chunk of same size |
1843 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1844 | Back pointer to previous chunk of same size |
1845 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1846 | Pointer to left child (child[0]) |
1847 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1848 | Pointer to right child (child[1]) |
1849 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1850 | Pointer to parent |
1851 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1852 | bin index of this chunk |
1853 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1854 | Unused space .
1855 . |
1856nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1857 `foot:' | Size of chunk, in bytes |
1858 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1859
1860 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
1861 of the same size are arranged in a circularly-linked list, with only
1862 the oldest chunk (the next to be used, in our FIFO ordering)
1863 actually in the tree. (Tree members are distinguished by a non-null
1864 parent pointer.) If a chunk with the same size as an existing node
1865 is inserted, it is linked off the existing node using pointers that
1866 work in the same way as fd/bk pointers of small chunks.
1867
1868 Each tree contains a power of 2 sized range of chunk sizes (the
1869 smallest is 0x100 <= x < 0x180), which is divided in half at each
1870 tree level, with the chunks in the smaller half of the range (0x100
1871 <= x < 0x140 for the top node) in the left subtree and the larger
1872 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
1873 done by inspecting individual bits.
1874
1875 Using these rules, each node's left subtree contains all smaller
1876 sizes than its right subtree. However, the node at the root of each
1877 subtree has no particular ordering relationship to either. (The
1878 dividing line between the subtree sizes is based on trie relation.)
1879 If we remove the last chunk of a given size from the interior of the
1880 tree, we need to replace it with a leaf node. The tree ordering
1881 rules permit a node to be replaced by any leaf below it.
1882
1883 The smallest chunk in a tree (a common operation in a best-fit
1884 allocator) can be found by walking a path to the leftmost leaf in
1885 the tree. Unlike a usual binary tree, where we follow left child
1886 pointers until we reach a null, here we follow the right child
1887 pointer any time the left one is null, until we reach a leaf with
1888 both child pointers null. The smallest chunk in the tree will be
1889 somewhere along that path.
1890
1891 The worst case number of steps to add, find, or remove a node is
1892 bounded by the number of bits differentiating chunks within
1893 bins. Under current bin calculations, this ranges from 6 up to 21
1894 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1895 is of course much better.
1896*/
1897
1898struct malloc_tree_chunk {
1899 /* The first four fields must be compatible with malloc_chunk */
1900 size_t prev_foot;
1901 size_t head;
1902 struct malloc_tree_chunk* fd;
1903 struct malloc_tree_chunk* bk;
1904
1905 struct malloc_tree_chunk* child[2];
1906 struct malloc_tree_chunk* parent;
1907 bindex_t index;
1908};
1909
1910typedef struct malloc_tree_chunk tchunk;
1911typedef struct malloc_tree_chunk* tchunkptr;
1912typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1913
1914/* A little helper macro for trees */
1915#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
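/*
  Illustrative sketch (not part of the allocator, excluded from
  compilation): the leftmost walk described above. Following the left
  child, and the right child only when the left is null, traces the path
  along which the smallest chunk in the tree must lie.
*/
#if 0
static tchunkptr example_smallest_in_tree(tchunkptr t) {
  tchunkptr best = t;
  while ((t = leftmost_child(t)) != 0) {
    if (chunksize(t) < chunksize(best))
      best = t;              /* keep the smallest size seen on the path */
  }
  return best;
}
#endif /* illustrative sketch */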
1916
1917/* ----------------------------- Segments -------------------------------- */
1918
1919/*
1920 Each malloc space may include non-contiguous segments, held in a
1921 list headed by an embedded malloc_segment record representing the
1922 top-most space. Segments also include flags holding properties of
1923 the space. Large chunks that are directly allocated by mmap are not
1924 included in this list. They are instead independently created and
1925 destroyed without otherwise keeping track of them.
1926
1927 Segment management mainly comes into play for spaces allocated by
1928 MMAP. Any call to MMAP might or might not return memory that is
1929 adjacent to an existing segment. MORECORE normally contiguously
1930 extends the current space, so this space is almost always adjacent,
1931 which is simpler and faster to deal with. (This is why MORECORE is
1932 used preferentially to MMAP when both are available -- see
1933 sys_alloc.) When allocating using MMAP, we don't use any of the
1934 hinting mechanisms (inconsistently) supported in various
1935 implementations of unix mmap, or distinguish reserving from
1936 committing memory. Instead, we just ask for space, and exploit
1937 contiguity when we get it. It is probably possible to do
1938 better than this on some systems, but no general scheme seems
1939 to be significantly better.
1940
1941 Management entails a simpler variant of the consolidation scheme
1942 used for chunks to reduce fragmentation -- new adjacent memory is
1943 normally prepended or appended to an existing segment. However,
1944 there are limitations compared to chunk consolidation that mostly
1945 reflect the fact that segment processing is relatively infrequent
1946 (occurring only when getting memory from system) and that we
1947 don't expect to have huge numbers of segments:
1948
1949 * Segments are not indexed, so traversal requires linear scans. (It
1950 would be possible to index these, but is not worth the extra
1951 overhead and complexity for most programs on most platforms.)
1952 * New segments are only appended to old ones when holding top-most
1953 memory; if they cannot be prepended to others, they are held in
1954 different segments.
1955
1956 Except for the top-most segment of an mstate, each segment record
1957 is kept at the tail of its segment. Segments are added by pushing
1958 segment records onto the list headed by &mstate.seg for the
1959 containing mstate.
1960
1961 Segment flags control allocation/merge/deallocation policies:
1962 * If EXTERN_BIT set, then we did not allocate this segment,
1963 and so should not try to deallocate or merge with others.
1964 (This currently holds only for the initial segment passed
1965 into create_mspace_with_base.)
1966 * If IS_MMAPPED_BIT set, the segment may be merged with
1967 other surrounding mmapped segments and trimmed/de-allocated
1968 using munmap.
1969 * If neither bit is set, then the segment was obtained using
1970 MORECORE so can be merged with surrounding MORECORE'd segments
1971 and deallocated/trimmed using MORECORE with negative arguments.
1972*/
1973
1974struct malloc_segment {
1975 char* base; /* base address */
1976 size_t size; /* allocated size */
1977 struct malloc_segment* next; /* ptr to next segment */
1978 flag_t sflags; /* mmap and extern flag */
1979};
1980
1981#define is_mmapped_segment(S) ((S)->sflags & IS_MMAPPED_BIT)
1982#define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
1983
1984typedef struct malloc_segment msegment;
1985typedef struct malloc_segment* msegmentptr;
1986
1987/* ---------------------------- malloc_state ----------------------------- */
1988
1989/*
1990 A malloc_state holds all of the bookkeeping for a space.
1991 The main fields are:
1992
1993 Top
1994 The topmost chunk of the currently active segment. Its size is
1995 cached in topsize. The actual size of topmost space is
1996 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1997 fenceposts and segment records if necessary when getting more
1998 space from the system. The size at which to autotrim top is
1999 cached from mparams in trim_check, except that it is disabled if
2000 an autotrim fails.
2001
2002 Designated victim (dv)
2003 This is the preferred chunk for servicing small requests that
2004 don't have exact fits. It is normally the chunk split off most
2005 recently to service another small request. Its size is cached in
2006 dvsize. The link fields of this chunk are not maintained since it
2007 is not kept in a bin.
2008
2009 SmallBins
2010 An array of bin headers for free chunks. These bins hold chunks
2011 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2012 chunks of all the same size, spaced 8 bytes apart. To simplify
2013 use in double-linked lists, each bin header acts as a malloc_chunk
2014 pointing to the real first node, if it exists (else pointing to
2015 itself). This avoids special-casing for headers. But to avoid
2016 waste, we allocate only the fd/bk pointers of bins, and then use
2017 repositioning tricks to treat these as the fields of a chunk.
2018
2019 TreeBins
2020 Treebins are pointers to the roots of trees holding a range of
2021 sizes. There are 2 equally spaced treebins for each power of two
2022 from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
2023 larger.
2024
2025 Bin maps
2026 There is one bit map for small bins ("smallmap") and one for
2027 treebins ("treemap). Each bin sets its bit when non-empty, and
2028 clears the bit when empty. Bit operations are then used to avoid
2029 bin-by-bin searching -- nearly all "search" is done without ever
2030 looking at bins that won't be selected. The bit maps
2031 conservatively use 32 bits per map word, even on a 64-bit system.
2032 For a good description of some of the bit-based techniques used
2033 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2034 supplement at http://hackersdelight.org/). Many of these are
2035 intended to reduce the branchiness of paths through malloc etc, as
2036 well as to reduce the number of memory locations read or written.
2037
2038 Segments
2039 A list of segments headed by an embedded malloc_segment record
2040 representing the initial space.
2041
2042 Address check support
2043 The least_addr field is the least address ever obtained from
2044 MORECORE or MMAP. Attempted frees and reallocs of any address less
2045 than this are trapped (unless INSECURE is defined).
2046
2047 Magic tag
2048 A cross-check field that should always hold the same value as mparams.magic.
2049
2050 Flags
2051 Bits recording whether to use MMAP, locks, or contiguous MORECORE
2052
2053 Statistics
2054 Each space keeps track of current and maximum system memory
2055 obtained via MORECORE or MMAP.
2056
2057 Locking
2058 If USE_LOCKS is defined, the "mutex" lock is acquired and released
2059 around every public call using this mspace.
2060*/
2061
2062/* Bin types, widths and sizes */
2063#define NSMALLBINS (32U)
2064#define NTREEBINS (32U)
2065#define SMALLBIN_SHIFT (3U)
2066#define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
2067#define TREEBIN_SHIFT (8U)
2068#define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
2069#define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
2070#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
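/*
  Illustrative sketch (not part of the allocator, excluded from
  compilation): the concrete bin boundaries these constants give on a
  32-bit build with MALLOC_ALIGNMENT == 8 and CHUNK_OVERHEAD == 4.
*/
#if 0
static void example_bin_constants(void) {
  assert(SMALLBIN_WIDTH == 8);       /* smallbin sizes are 8 bytes apart */
  assert(MIN_LARGE_SIZE == 256);     /* smallest chunk size kept in treebins */
  assert(MAX_SMALL_SIZE == 255);
  assert(MAX_SMALL_REQUEST == 244);  /* 255 - 7 - 4 */
}
#endif /* illustrative sketch */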
2071
2072struct malloc_state {
2073 binmap_t smallmap;
2074 binmap_t treemap;
2075 size_t dvsize;
2076 size_t topsize;
2077 char* least_addr;
2078 mchunkptr dv;
2079 mchunkptr top;
2080 size_t trim_check;
2081 size_t magic;
2082 mchunkptr smallbins[(NSMALLBINS+1)*2];
2083 tbinptr treebins[NTREEBINS];
2084 size_t footprint;
2085#if USE_MAX_ALLOWED_FOOTPRINT
2086 size_t max_allowed_footprint;
2087#endif
2088 size_t max_footprint;
2089 flag_t mflags;
2090#if USE_LOCKS
2091 MLOCK_T mutex; /* locate lock among fields that rarely change */
2092#endif /* USE_LOCKS */
2093 msegment seg;
2094};
2095
2096typedef struct malloc_state* mstate;
2097
2098/* ------------- Global malloc_state and malloc_params ------------------- */
2099
2100/*
2101 malloc_params holds global properties, including those that can be
2102 dynamically set using mallopt. There is a single instance, mparams,
2103 initialized in init_mparams.
2104*/
2105
2106struct malloc_params {
2107 size_t magic;
2108 size_t page_size;
2109 size_t granularity;
2110 size_t mmap_threshold;
2111 size_t trim_threshold;
2112 flag_t default_mflags;
2113};
2114
2115static struct malloc_params mparams;
2116
2117/* The global malloc_state used for all non-"mspace" calls */
2118static struct malloc_state _gm_
2119#if USE_MAX_ALLOWED_FOOTPRINT
2120 = { .max_allowed_footprint = MAX_SIZE_T };
2121#else
2122 ;
2123#endif
2124
2125#define gm (&_gm_)
2126#define is_global(M) ((M) == &_gm_)
2127#define is_initialized(M) ((M)->top != 0)
2128
2129/* -------------------------- system alloc setup ------------------------- */
2130
2131/* Operations on mflags */
2132
2133#define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2134#define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2135#define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2136
2137#define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2138#define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2139#define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2140
2141#define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2142#define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2143
2144#define set_lock(M,L)\
2145 ((M)->mflags = (L)?\
2146 ((M)->mflags | USE_LOCK_BIT) :\
2147 ((M)->mflags & ~USE_LOCK_BIT))
2148
2149/* page-align a size */
2150#define page_align(S)\
2151 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
2152
2153/* granularity-align a size */
2154#define granularity_align(S)\
2155 (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
2156
2157#define is_page_aligned(S)\
2158 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2159#define is_granularity_aligned(S)\
2160 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2161
2162/* True if segment S holds address A */
2163#define segment_holds(S, A)\
2164 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2165
2166/* Return segment holding given address */
2167static msegmentptr segment_holding(mstate m, char* addr) {
2168 msegmentptr sp = &m->seg;
2169 for (;;) {
2170 if (addr >= sp->base && addr < sp->base + sp->size)
2171 return sp;
2172 if ((sp = sp->next) == 0)
2173 return 0;
2174 }
2175}
2176
2177/* Return true if segment contains a segment link */
2178static int has_segment_link(mstate m, msegmentptr ss) {
2179 msegmentptr sp = &m->seg;
2180 for (;;) {
2181 if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2182 return 1;
2183 if ((sp = sp->next) == 0)
2184 return 0;
2185 }
2186}
2187
2188#ifndef MORECORE_CANNOT_TRIM
2189#define should_trim(M,s) ((s) > (M)->trim_check)
2190#else /* MORECORE_CANNOT_TRIM */
2191#define should_trim(M,s) (0)
2192#endif /* MORECORE_CANNOT_TRIM */
2193
2194/*
2195 TOP_FOOT_SIZE is padding at the end of a segment, including space
2196 that may be needed to place segment records and fenceposts when new
2197 noncontiguous segments are added.
2198*/
2199#define TOP_FOOT_SIZE\
2200 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2201
2202
2203/* ------------------------------- Hooks -------------------------------- */
2204
2205/*
2206 PREACTION should be defined to return 0 on success, and nonzero on
2207 failure. If you are not using locking, you can redefine these to do
2208 anything you like.
2209*/
2210
2211#if USE_LOCKS
2212
2213/* Ensure locks are initialized */
2214#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
2215
2216#define PREACTION(M) ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2217#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2218#else /* USE_LOCKS */
2219
2220#ifndef PREACTION
2221#define PREACTION(M) (0)
2222#endif /* PREACTION */
2223
2224#ifndef POSTACTION
2225#define POSTACTION(M)
2226#endif /* POSTACTION */
2227
2228#endif /* USE_LOCKS */
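/*
  Illustrative sketch (not part of the allocator, excluded from
  compilation): the bracketing pattern the public entry points use.
  PREACTION performs one-time global initialization if needed, acquires
  the mspace lock, and returns nonzero on failure; POSTACTION releases
  the lock. The do_allocation_work() call below is hypothetical and
  stands in for the real body of such a function.
*/
#if 0
static void* example_entry_point(mstate m, size_t bytes) {
  void* mem = 0;
  if (!PREACTION(m)) {                  /* lock held (or locking disabled) */
    mem = do_allocation_work(m, bytes); /* hypothetical body */
    POSTACTION(m);                      /* release before returning */
  }
  return mem;
}
#endif /* illustrative sketch */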
2229
2230/*
2231 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2232 USAGE_ERROR_ACTION is triggered on detected bad frees and
2233 reallocs. The argument p is an address that might have triggered the
2234 fault. It is ignored by the two predefined actions, but might be
2235 useful in custom actions that try to help diagnose errors.
2236*/
2237
2238#if PROCEED_ON_ERROR
2239
2240/* A count of the number of corruption errors causing resets */
2241int malloc_corruption_error_count;
2242
2243/* default corruption action */
2244static void reset_on_error(mstate m);
2245
2246#define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2247#define USAGE_ERROR_ACTION(m, p)
2248
2249#else /* PROCEED_ON_ERROR */
2250
2251#ifndef CORRUPTION_ERROR_ACTION
2252#define CORRUPTION_ERROR_ACTION(m) ABORT
2253#endif /* CORRUPTION_ERROR_ACTION */
2254
2255#ifndef USAGE_ERROR_ACTION
2256#define USAGE_ERROR_ACTION(m,p) ABORT
2257#endif /* USAGE_ERROR_ACTION */
2258
2259#endif /* PROCEED_ON_ERROR */
2260
2261/* -------------------------- Debugging setup ---------------------------- */
2262
2263#if ! DEBUG
2264
2265#define check_free_chunk(M,P)
2266#define check_inuse_chunk(M,P)
2267#define check_malloced_chunk(M,P,N)
2268#define check_mmapped_chunk(M,P)
2269#define check_malloc_state(M)
2270#define check_top_chunk(M,P)
2271
2272#else /* DEBUG */
2273#define check_free_chunk(M,P) do_check_free_chunk(M,P)
2274#define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2275#define check_top_chunk(M,P) do_check_top_chunk(M,P)
2276#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2277#define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2278#define check_malloc_state(M) do_check_malloc_state(M)
2279
2280static void do_check_any_chunk(mstate m, mchunkptr p);
2281static void do_check_top_chunk(mstate m, mchunkptr p);
2282static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2283static void do_check_inuse_chunk(mstate m, mchunkptr p);
2284static void do_check_free_chunk(mstate m, mchunkptr p);
2285static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
2286static void do_check_tree(mstate m, tchunkptr t);
2287static void do_check_treebin(mstate m, bindex_t i);
2288static void do_check_smallbin(mstate m, bindex_t i);
2289static void do_check_malloc_state(mstate m);
2290static int bin_find(mstate m, mchunkptr x);
2291static size_t traverse_and_check(mstate m);
2292#endif /* DEBUG */
2293
2294/* ---------------------------- Indexing Bins ---------------------------- */
2295
2296#define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2297#define small_index(s) ((s) >> SMALLBIN_SHIFT)
2298#define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2299#define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
2300
2301/* addressing by index. See above about smallbin repositioning */
2302#define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2303#define treebin_at(M,i) (&((M)->treebins[i]))
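/*
  Illustrative sketch (not part of the allocator, excluded from
  compilation): smallbin indexing with SMALLBIN_SHIFT == 3. Each
  smallbin holds chunks of exactly one size; anything below
  MIN_LARGE_SIZE (256) counts as "small".
*/
#if 0
static void example_small_index(void) {
  assert(is_small(40));
  assert(small_index(40) == 5);       /* 40 >> 3 */
  assert(small_index2size(5) == 40);  /* 5 << 3  */
  assert(!is_small(256));             /* 256 and up belong to the treebins */
}
#endif /* illustrative sketch */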
2304
2305/* assign tree index for size S to variable I */
2306#if defined(__GNUC__) && defined(i386)
2307#define compute_tree_index(S, I)\
2308{\
2309 size_t X = S >> TREEBIN_SHIFT;\
2310 if (X == 0)\
2311 I = 0;\
2312 else if (X > 0xFFFF)\
2313 I = NTREEBINS-1;\
2314 else {\
2315 unsigned int K;\
2316 __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm" (X));\
2317 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2318 }\
2319}
2320#else /* GNUC */
2321#define compute_tree_index(S, I)\
2322{\
2323 size_t X = S >> TREEBIN_SHIFT;\
2324 if (X == 0)\
2325 I = 0;\
2326 else if (X > 0xFFFF)\
2327 I = NTREEBINS-1;\
2328 else {\
2329 unsigned int Y = (unsigned int)X;\
2330 unsigned int N = ((Y - 0x100) >> 16) & 8;\
2331 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2332 N += K;\
2333 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2334 K = 14 - N + ((Y <<= K) >> 15);\
2335 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2336 }\
2337}
2338#endif /* GNUC */
2339
2340/* Bit representing maximum resolved size in a treebin at i */
2341#define bit_for_tree_index(i) \
2342 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2343
2344/* Shift placing maximum resolved bit in a treebin at i as sign bit */
2345#define leftshift_for_tree_index(i) \
2346 ((i == NTREEBINS-1)? 0 : \
2347 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2348
2349/* The size of the smallest chunk held in bin with index i */
2350#define minsize_for_tree_index(i) \
2351 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2352 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
2353
2354
2355/* ------------------------ Operations on bin maps ----------------------- */
2356
2357/* bit corresponding to given index */
2358#define idx2bit(i) ((binmap_t)(1) << (i))
2359
2360/* Mark/Clear bits with given index */
2361#define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2362#define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2363#define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2364
2365#define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2366#define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2367#define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2368
2369/* index corresponding to given bit */
2370
2371#if defined(__GNUC__) && defined(i386)
2372#define compute_bit2idx(X, I)\
2373{\
2374 unsigned int J;\
2375 __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
2376 I = (bindex_t)J;\
2377}
2378
2379#else /* GNUC */
2380#if USE_BUILTIN_FFS
2381#define compute_bit2idx(X, I) I = ffs(X)-1
2382
2383#else /* USE_BUILTIN_FFS */
2384#define compute_bit2idx(X, I)\
2385{\
2386 unsigned int Y = X - 1;\
2387 unsigned int K = Y >> (16-4) & 16;\
2388 unsigned int N = K; Y >>= K;\
2389 N += K = Y >> (8-3) & 8; Y >>= K;\
2390 N += K = Y >> (4-2) & 4; Y >>= K;\
2391 N += K = Y >> (2-1) & 2; Y >>= K;\
2392 N += K = Y >> (1-0) & 1; Y >>= K;\
2393 I = (bindex_t)(N + Y);\
2394}
2395#endif /* USE_BUILTIN_FFS */
2396#endif /* GNUC */
2397
2398/* isolate the least set bit of a bitmap */
2399#define least_bit(x) ((x) & -(x))
2400
2401/* mask with all bits to left of least bit of x on */
2402#define left_bits(x) ((x<<1) | -(x<<1))
2403
2404/* mask with all bits to left of or equal to least bit of x on */
2405#define same_or_left_bits(x) ((x) | -(x))
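/*
  Illustrative sketch (not part of the allocator, excluded from
  compilation): using the bitmap helpers to locate the nearest non-empty
  smallbin strictly above a given index, the same trick the allocation
  paths use to avoid scanning bins one by one.
*/
#if 0
static bindex_t example_next_nonempty_smallbin(mstate m, bindex_t i) {
  binmap_t candidates = left_bits(idx2bit(i)) & m->smallmap;
  binmap_t b = least_bit(candidates);  /* lowest marked bit above i, or 0 */
  bindex_t j = 0;
  if (b != 0)
    compute_bit2idx(b, j);
  return j;  /* 0 doubles as "none found" in this simplified sketch */
}
#endif /* illustrative sketch */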
2406
2407
2408/* ----------------------- Runtime Check Support ------------------------- */
2409
2410/*
2411 For security, the main invariant is that malloc/free/etc never
2412 writes to a static address other than malloc_state, unless static
2413 malloc_state itself has been corrupted, which cannot occur via
2414 malloc (because of these checks). In essence this means that we
2415 believe all pointers, sizes, maps etc held in malloc_state, but
2416 check all of those linked or offsetted from other embedded data
2417 structures. These checks are interspersed with main code in a way
2418 that tends to minimize their run-time cost.
2419
2420 When FOOTERS is defined, in addition to range checking, we also
2421 verify footer fields of inuse chunks, which can be used to guarantee
2422 that the mstate controlling malloc/free is intact. This is a
2423 streamlined version of the approach described by William Robertson
2424 et al in "Run-time Detection of Heap-based Overflows" LISA'03
2425 http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2426 of an inuse chunk holds the xor of its mstate and a random seed,
2427 which is checked upon calls to free() and realloc(). This is
2428 (probabilistically) unguessable from outside the program, but can be
2429 computed by any code successfully malloc'ing any chunk, so does not
2430 itself provide protection against code that has already broken
2431 security through some other means. Unlike Robertson et al, we
2432 always dynamically check addresses of all offset chunks (previous,
2433 next, etc). This turns out to be cheaper than relying on hashes.
2434*/
2435
2436#if !INSECURE
2437/* Check if address a is at least as high as any from MORECORE or MMAP */
2438#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2439/* Check if address of next chunk n is higher than base chunk p */
2440#define ok_next(p, n) ((char*)(p) < (char*)(n))
2441/* Check if p has its cinuse bit on */
2442#define ok_cinuse(p) cinuse(p)
2443/* Check if p has its pinuse bit on */
2444#define ok_pinuse(p) pinuse(p)
2445
2446#else /* !INSECURE */
2447#define ok_address(M, a) (1)
2448#define ok_next(b, n) (1)
2449#define ok_cinuse(p) (1)
2450#define ok_pinuse(p) (1)
2451#endif /* !INSECURE */
2452
2453#if (FOOTERS && !INSECURE)
2454/* Check if (alleged) mstate m has expected magic field */
2455#define ok_magic(M) ((M)->magic == mparams.magic)
2456#else /* (FOOTERS && !INSECURE) */
2457#define ok_magic(M) (1)
2458#endif /* (FOOTERS && !INSECURE) */
2459
2460
2461/* In gcc, use __builtin_expect to minimize impact of checks */
2462#if !INSECURE
2463#if defined(__GNUC__) && __GNUC__ >= 3
2464#define RTCHECK(e) __builtin_expect(e, 1)
2465#else /* GNUC */
2466#define RTCHECK(e) (e)
2467#endif /* GNUC */
2468#else /* !INSECURE */
2469#define RTCHECK(e) (1)
2470#endif /* !INSECURE */
2471
2472/* macros to set up inuse chunks with or without footers */
2473
2474#if !FOOTERS
2475
2476#define mark_inuse_foot(M,p,s)
2477
2478/* Set cinuse bit and pinuse bit of next chunk */
2479#define set_inuse(M,p,s)\
2480 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2481 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2482
2483/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2484#define set_inuse_and_pinuse(M,p,s)\
2485 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2486 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2487
2488/* Set size, cinuse and pinuse bit of this chunk */
2489#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2490 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2491
2492#else /* FOOTERS */
2493
2494/* Set foot of inuse chunk to be xor of mstate and seed */
2495#define mark_inuse_foot(M,p,s)\
2496 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2497
2498#define get_mstate_for(p)\
2499 ((mstate)(((mchunkptr)((char*)(p) +\
2500 (chunksize(p))))->prev_foot ^ mparams.magic))
2501
2502#define set_inuse(M,p,s)\
2503 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2504 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2505 mark_inuse_foot(M,p,s))
2506
2507#define set_inuse_and_pinuse(M,p,s)\
2508 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2509 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
2510 mark_inuse_foot(M,p,s))
2511
2512#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2513 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2514 mark_inuse_foot(M, p, s))
2515
2516#endif /* !FOOTERS */
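/*
  Illustrative sketch (not part of the allocator, excluded from
  compilation): with FOOTERS enabled, a deallocation path can recover
  the owning mstate from a chunk's footer and cross-check it before
  trusting any of its bookkeeping.
*/
#if 0
static mstate example_owner_of(void* mem) {
  mchunkptr p = mem2chunk(mem);
  mstate fm = get_mstate_for(p);  /* next chunk's prev_foot ^ mparams.magic */
  if (!ok_magic(fm)) {            /* footer does not decode to a valid mstate */
    USAGE_ERROR_ACTION(fm, p);
    return 0;
  }
  return fm;
}
#endif /* illustrative sketch */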
2517
2518/* ---------------------------- setting mparams -------------------------- */
2519
2520/* Initialize mparams */
2521static int init_mparams(void) {
2522 if (mparams.page_size == 0) {
2523 size_t s;
2524
2525 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2526 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
2527#if MORECORE_CONTIGUOUS
2528 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
2529#else /* MORECORE_CONTIGUOUS */
2530 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
2531#endif /* MORECORE_CONTIGUOUS */
2532
2533#if (FOOTERS && !INSECURE)
2534 {
2535#if USE_DEV_RANDOM
2536 int fd;
2537 unsigned char buf[sizeof(size_t)];
2538 /* Try to use /dev/urandom, else fall back on using time */
2539 if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
2540 read(fd, buf, sizeof(buf)) == sizeof(buf)) {
2541 s = *((size_t *) buf);
2542 close(fd);
2543 }
2544 else
2545#endif /* USE_DEV_RANDOM */
2546 s = (size_t)(time(0) ^ (size_t)0x55555555U);
2547
2548 s |= (size_t)8U; /* ensure nonzero */
2549 s &= ~(size_t)7U; /* improve chances of fault for bad values */
2550
2551 }
2552#else /* (FOOTERS && !INSECURE) */
2553 s = (size_t)0x58585858U;
2554#endif /* (FOOTERS && !INSECURE) */
2555 ACQUIRE_MAGIC_INIT_LOCK();
2556 if (mparams.magic == 0) {
2557 mparams.magic = s;
2558 /* Set up lock for main malloc area */
2559 INITIAL_LOCK(&gm->mutex);
2560 gm->mflags = mparams.default_mflags;
2561 }
2562 RELEASE_MAGIC_INIT_LOCK();
2563
2564#ifndef WIN32
2565 mparams.page_size = malloc_getpagesize;
2566 mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
2567 DEFAULT_GRANULARITY : mparams.page_size);
2568#else /* WIN32 */
2569 {
2570 SYSTEM_INFO system_info;
2571 GetSystemInfo(&system_info);
2572 mparams.page_size = system_info.dwPageSize;
2573 mparams.granularity = system_info.dwAllocationGranularity;
2574 }
2575#endif /* WIN32 */
2576
2577 /* Sanity-check configuration:
2578 size_t must be unsigned and as wide as pointer type.
2579 ints must be at least 4 bytes.
2580 alignment must be at least 8.
2581 Alignment, min chunk size, and page size must all be powers of 2.
2582 */
2583 if ((sizeof(size_t) != sizeof(char*)) ||
2584 (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
2585 (sizeof(int) < 4) ||
2586 (MALLOC_ALIGNMENT < (size_t)8U) ||
2587 ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
2588 ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
2589 ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
2590 ((mparams.page_size & (mparams.page_size-SIZE_T_ONE)) != 0))
2591 ABORT;
2592 }
2593 return 0;
2594}
2595
2596/* support for mallopt */
2597static int change_mparam(int param_number, int value) {
2598 size_t val = (size_t)value;
2599 init_mparams();
2600 switch(param_number) {
2601 case M_TRIM_THRESHOLD:
2602 mparams.trim_threshold = val;
2603 return 1;
2604 case M_GRANULARITY:
2605 if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
2606 mparams.granularity = val;
2607 return 1;
2608 }
2609 else
2610 return 0;
2611 case M_MMAP_THRESHOLD:
2612 mparams.mmap_threshold = val;
2613 return 1;
2614 default:
2615 return 0;
2616 }
2617}
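/*
  Illustrative sketch (not part of the allocator, excluded from
  compilation): how mallopt-style tuning flows through change_mparam,
  assuming a 4096-byte page. Granularity must be a power of two no
  smaller than the page size, so the second call below is rejected.
*/
#if 0
static void example_tuning(void) {
  assert(change_mparam(M_MMAP_THRESHOLD, 512*1024) == 1);  /* accepted */
  assert(change_mparam(M_GRANULARITY, 12345) == 0);        /* not a power of 2 */
}
#endif /* illustrative sketch */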
2618
2619#if DEBUG
2620/* ------------------------- Debugging Support --------------------------- */
2621
2622/* Check properties of any chunk, whether free, inuse, mmapped etc */
2623static void do_check_any_chunk(mstate m, mchunkptr p) {
2624 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2625 assert(ok_address(m, p));
2626}
2627
2628/* Check properties of top chunk */
2629static void do_check_top_chunk(mstate m, mchunkptr p) {
2630 msegmentptr sp = segment_holding(m, (char*)p);
2631 size_t sz = chunksize(p);
2632 assert(sp != 0);
2633 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2634 assert(ok_address(m, p));
2635 assert(sz == m->topsize);
2636 assert(sz > 0);
2637 assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
2638 assert(pinuse(p));
2639 assert(!next_pinuse(p));
2640}
2641
2642/* Check properties of (inuse) mmapped chunks */
2643static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
2644 size_t sz = chunksize(p);
2645 size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
2646 assert(is_mmapped(p));
2647 assert(use_mmap(m));
2648 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2649 assert(ok_address(m, p));
2650 assert(!is_small(sz));
2651 assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
2652 assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
2653 assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
2654}
2655
2656/* Check properties of inuse chunks */
2657static void do_check_inuse_chunk(mstate m, mchunkptr p) {
2658 do_check_any_chunk(m, p);
2659 assert(cinuse(p));
2660 assert(next_pinuse(p));
2661 /* If not pinuse and not mmapped, previous chunk has OK offset */
2662 assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
2663 if (is_mmapped(p))
2664 do_check_mmapped_chunk(m, p);
2665}
2666
2667/* Check properties of free chunks */
2668static void do_check_free_chunk(mstate m, mchunkptr p) {
2669 size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2670 mchunkptr next = chunk_plus_offset(p, sz);
2671 do_check_any_chunk(m, p);
2672 assert(!cinuse(p));
2673 assert(!next_pinuse(p));
2674 assert (!is_mmapped(p));
2675 if (p != m->dv && p != m->top) {
2676 if (sz >= MIN_CHUNK_SIZE) {
2677 assert((sz & CHUNK_ALIGN_MASK) == 0);
2678 assert(is_aligned(chunk2mem(p)));
2679 assert(next->prev_foot == sz);
2680 assert(pinuse(p));
2681 assert (next == m->top || cinuse(next));
2682 assert(p->fd->bk == p);
2683 assert(p->bk->fd == p);
2684 }
2685 else /* markers are always of size SIZE_T_SIZE */
2686 assert(sz == SIZE_T_SIZE);
2687 }
2688}
2689
2690/* Check properties of malloced chunks at the point they are malloced */
2691static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
2692 if (mem != 0) {
2693 mchunkptr p = mem2chunk(mem);
2694 size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2695 do_check_inuse_chunk(m, p);
2696 assert((sz & CHUNK_ALIGN_MASK) == 0);
2697 assert(sz >= MIN_CHUNK_SIZE);
2698 assert(sz >= s);
2699 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
2700 assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
2701 }
2702}
2703
2704/* Check a tree and its subtrees. */
2705static void do_check_tree(mstate m, tchunkptr t) {
2706 tchunkptr head = 0;
2707 tchunkptr u = t;
2708 bindex_t tindex = t->index;
2709 size_t tsize = chunksize(t);
2710 bindex_t idx;
2711 compute_tree_index(tsize, idx);
2712 assert(tindex == idx);
2713 assert(tsize >= MIN_LARGE_SIZE);
2714 assert(tsize >= minsize_for_tree_index(idx));
2715 assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
2716
2717 do { /* traverse through chain of same-sized nodes */
2718 do_check_any_chunk(m, ((mchunkptr)u));
2719 assert(u->index == tindex);
2720 assert(chunksize(u) == tsize);
2721 assert(!cinuse(u));
2722 assert(!next_pinuse(u));
2723 assert(u->fd->bk == u);
2724 assert(u->bk->fd == u);
2725 if (u->parent == 0) {
2726 assert(u->child[0] == 0);
2727 assert(u->child[1] == 0);
2728 }
2729 else {
2730 assert(head == 0); /* only one node on chain has parent */
2731 head = u;
2732 assert(u->parent != u);
2733 assert (u->parent->child[0] == u ||
2734 u->parent->child[1] == u ||
2735 *((tbinptr*)(u->parent)) == u);
2736 if (u->child[0] != 0) {
2737 assert(u->child[0]->parent == u);
2738 assert(u->child[0] != u);
2739 do_check_tree(m, u->child[0]);
2740 }
2741 if (u->child[1] != 0) {
2742 assert(u->child[1]->parent == u);
2743 assert(u->child[1] != u);
2744 do_check_tree(m, u->child[1]);
2745 }
2746 if (u->child[0] != 0 && u->child[1] != 0) {
2747 assert(chunksize(u->child[0]) < chunksize(u->child[1]));
2748 }
2749 }
2750 u = u->fd;
2751 } while (u != t);
2752 assert(head != 0);
2753}
2754
2755/* Check all the chunks in a treebin. */
2756static void do_check_treebin(mstate m, bindex_t i) {
2757 tbinptr* tb = treebin_at(m, i);
2758 tchunkptr t = *tb;
2759 int empty = (m->treemap & (1U << i)) == 0;
2760 if (t == 0)
2761 assert(empty);
2762 if (!empty)
2763 do_check_tree(m, t);
2764}
2765
2766/* Check all the chunks in a smallbin. */
2767static void do_check_smallbin(mstate m, bindex_t i) {
2768 sbinptr b = smallbin_at(m, i);
2769 mchunkptr p = b->bk;
2770 unsigned int empty = (m->smallmap & (1U << i)) == 0;
2771 if (p == b)
2772 assert(empty);
2773 if (!empty) {
2774 for (; p != b; p = p->bk) {
2775 size_t size = chunksize(p);
2776 mchunkptr q;
2777 /* each chunk claims to be free */
2778 do_check_free_chunk(m, p);
2779 /* chunk belongs in bin */
2780 assert(small_index(size) == i);
2781 assert(p->bk == b || chunksize(p->bk) == chunksize(p));
2782 /* chunk is followed by an inuse chunk */
2783 q = next_chunk(p);
2784 if (q->head != FENCEPOST_HEAD)
2785 do_check_inuse_chunk(m, q);
2786 }
2787 }
2788}
2789
2790/* Find x in a bin. Used in other check functions. */
2791static int bin_find(mstate m, mchunkptr x) {
2792 size_t size = chunksize(x);
2793 if (is_small(size)) {
2794 bindex_t sidx = small_index(size);
2795 sbinptr b = smallbin_at(m, sidx);
2796 if (smallmap_is_marked(m, sidx)) {
2797 mchunkptr p = b;
2798 do {
2799 if (p == x)
2800 return 1;
2801 } while ((p = p->fd) != b);
2802 }
2803 }
2804 else {
2805 bindex_t tidx;
2806 compute_tree_index(size, tidx);
2807 if (treemap_is_marked(m, tidx)) {
2808 tchunkptr t = *treebin_at(m, tidx);
2809 size_t sizebits = size << leftshift_for_tree_index(tidx);
2810 while (t != 0 && chunksize(t) != size) {
2811 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2812 sizebits <<= 1;
2813 }
2814 if (t != 0) {
2815 tchunkptr u = t;
2816 do {
2817 if (u == (tchunkptr)x)
2818 return 1;
2819 } while ((u = u->fd) != t);
2820 }
2821 }
2822 }
2823 return 0;
2824}
2825
2826/* Traverse each chunk and check it; return total */
2827static size_t traverse_and_check(mstate m) {
2828 size_t sum = 0;
2829 if (is_initialized(m)) {
2830 msegmentptr s = &m->seg;
2831 sum += m->topsize + TOP_FOOT_SIZE;
2832 while (s != 0) {
2833 mchunkptr q = align_as_chunk(s->base);
2834 mchunkptr lastq = 0;
2835 assert(pinuse(q));
2836 while (segment_holds(s, q) &&
2837 q != m->top && q->head != FENCEPOST_HEAD) {
2838 sum += chunksize(q);
2839 if (cinuse(q)) {
2840 assert(!bin_find(m, q));
2841 do_check_inuse_chunk(m, q);
2842 }
2843 else {
2844 assert(q == m->dv || bin_find(m, q));
2845 assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
2846 do_check_free_chunk(m, q);
2847 }
2848 lastq = q;
2849 q = next_chunk(q);
2850 }
2851 s = s->next;
2852 }
2853 }
2854 return sum;
2855}
2856
2857/* Check all properties of malloc_state. */
2858static void do_check_malloc_state(mstate m) {
2859 bindex_t i;
2860 size_t total;
2861 /* check bins */
2862 for (i = 0; i < NSMALLBINS; ++i)
2863 do_check_smallbin(m, i);
2864 for (i = 0; i < NTREEBINS; ++i)
2865 do_check_treebin(m, i);
2866
2867 if (m->dvsize != 0) { /* check dv chunk */
2868 do_check_any_chunk(m, m->dv);
2869 assert(m->dvsize == chunksize(m->dv));
2870 assert(m->dvsize >= MIN_CHUNK_SIZE);
2871 assert(bin_find(m, m->dv) == 0);
2872 }
2873
2874 if (m->top != 0) { /* check top chunk */
2875 do_check_top_chunk(m, m->top);
2876 assert(m->topsize == chunksize(m->top));
2877 assert(m->topsize > 0);
2878 assert(bin_find(m, m->top) == 0);
2879 }
2880
2881 total = traverse_and_check(m);
2882 assert(total <= m->footprint);
2883 assert(m->footprint <= m->max_footprint);
2884#if USE_MAX_ALLOWED_FOOTPRINT
2885 //TODO: change these assertions if we allow for shrinking.
2886 assert(m->footprint <= m->max_allowed_footprint);
2887 assert(m->max_footprint <= m->max_allowed_footprint);
2888#endif
2889}
2890#endif /* DEBUG */
2891
2892/* ----------------------------- statistics ------------------------------ */
2893
2894#if !NO_MALLINFO
2895static struct mallinfo internal_mallinfo(mstate m) {
2896 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
2897 if (!PREACTION(m)) {
2898 check_malloc_state(m);
2899 if (is_initialized(m)) {
2900 size_t nfree = SIZE_T_ONE; /* top always free */
2901 size_t mfree = m->topsize + TOP_FOOT_SIZE;
2902 size_t sum = mfree;
2903 msegmentptr s = &m->seg;
2904 while (s != 0) {
2905 mchunkptr q = align_as_chunk(s->base);
2906 while (segment_holds(s, q) &&
2907 q != m->top && q->head != FENCEPOST_HEAD) {
2908 size_t sz = chunksize(q);
2909 sum += sz;
2910 if (!cinuse(q)) {
2911 mfree += sz;
2912 ++nfree;
2913 }
2914 q = next_chunk(q);
2915 }
2916 s = s->next;
2917 }
2918
2919 nm.arena = sum;
2920 nm.ordblks = nfree;
2921 nm.hblkhd = m->footprint - sum;
2922 nm.usmblks = m->max_footprint;
2923 nm.uordblks = m->footprint - mfree;
2924 nm.fordblks = mfree;
2925 nm.keepcost = m->topsize;
2926 }
2927
2928 POSTACTION(m);
2929 }
2930 return nm;
2931}
2932#endif /* !NO_MALLINFO */
2933
2934static void internal_malloc_stats(mstate m) {
2935 if (!PREACTION(m)) {
2936 size_t maxfp = 0;
2937 size_t fp = 0;
2938 size_t used = 0;
2939 check_malloc_state(m);
2940 if (is_initialized(m)) {
2941 msegmentptr s = &m->seg;
2942 maxfp = m->max_footprint;
2943 fp = m->footprint;
2944 used = fp - (m->topsize + TOP_FOOT_SIZE);
2945
2946 while (s != 0) {
2947 mchunkptr q = align_as_chunk(s->base);
2948 while (segment_holds(s, q) &&
2949 q != m->top && q->head != FENCEPOST_HEAD) {
2950 if (!cinuse(q))
2951 used -= chunksize(q);
2952 q = next_chunk(q);
2953 }
2954 s = s->next;
2955 }
2956 }
2957
2958 fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
2959 fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
2960 fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
2961
2962 POSTACTION(m);
2963 }
2964}
2965
2966/* ----------------------- Operations on smallbins ----------------------- */
2967
2968/*
2969 Various forms of linking and unlinking are defined as macros. Even
2970 the ones for trees, which are very long but have very short typical
2971 paths. This is ugly but reduces reliance on inlining support of
2972 compilers.
2973*/
2974
2975/* Link a free chunk into a smallbin */
2976#define insert_small_chunk(M, P, S) {\
2977 bindex_t I = small_index(S);\
2978 mchunkptr B = smallbin_at(M, I);\
2979 mchunkptr F = B;\
2980 assert(S >= MIN_CHUNK_SIZE);\
2981 if (!smallmap_is_marked(M, I))\
2982 mark_smallmap(M, I);\
2983 else if (RTCHECK(ok_address(M, B->fd)))\
2984 F = B->fd;\
2985 else {\
2986 CORRUPTION_ERROR_ACTION(M);\
2987 }\
2988 B->fd = P;\
2989 F->bk = P;\
2990 P->fd = F;\
2991 P->bk = B;\
2992}
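/*
  Illustrative sketch (not part of dlmalloc, kept disabled): the smallbin
  macros above maintain a circular doubly linked list whose sentinel is the
  bin header itself (init_bins points each bin's fd/bk at the bin). The
  standalone toy below, using a hypothetical node type rather than
  malloc_chunk, restates the same "insert at the front" step that
  insert_small_chunk performs with B, F and P.
*/
#if 0
struct toy_node { struct toy_node *fd, *bk; };

/* Insert n immediately after the sentinel b of a circular list. */
static void toy_insert_front(struct toy_node* b, struct toy_node* n) {
  struct toy_node* f = b->fd;  /* old first element, or b itself if empty */
  b->fd = n;
  f->bk = n;
  n->fd = f;
  n->bk = b;
}
#endif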
2993
2994/* Unlink a chunk from a smallbin
2995 * Added check: if F->bk != P or B->fd != P, we have doubly linked list
2996 * corruption, and abort.
2997 */
2998#define unlink_small_chunk(M, P, S) {\
2999 mchunkptr F = P->fd;\
3000 mchunkptr B = P->bk;\
3001 bindex_t I = small_index(S);\
3002 if (__builtin_expect (F->bk != P || B->fd != P, 0))\
3003 CORRUPTION_ERROR_ACTION(M);\
3004 assert(P != B);\
3005 assert(P != F);\
3006 assert(chunksize(P) == small_index2size(I));\
3007 if (F == B)\
3008 clear_smallmap(M, I);\
3009 else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
3010 (B == smallbin_at(M,I) || ok_address(M, B)))) {\
3011 F->bk = B;\
3012 B->fd = F;\
3013 }\
3014 else {\
3015 CORRUPTION_ERROR_ACTION(M);\
3016 }\
3017}
3018
3019/* Unlink the first chunk from a smallbin
3020 * Added check: if F->bk != P or B->fd != P, we have doubly linked list
3021 * corruption, and abort.
3022 */
3023#define unlink_first_small_chunk(M, B, P, I) {\
3024 mchunkptr F = P->fd;\
3025 if (__builtin_expect (F->bk != P || B->fd != P, 0))\
3026 CORRUPTION_ERROR_ACTION(M);\
3027 assert(P != B);\
3028 assert(P != F);\
3029 assert(chunksize(P) == small_index2size(I));\
3030 if (B == F)\
3031 clear_smallmap(M, I);\
3032 else if (RTCHECK(ok_address(M, F))) {\
3033 B->fd = F;\
3034 F->bk = B;\
3035 }\
3036 else {\
3037 CORRUPTION_ERROR_ACTION(M);\
3038 }\
3039}
3040
3041/* Replace dv node, binning the old one */
3042/* Used only when dvsize known to be small */
3043#define replace_dv(M, P, S) {\
3044 size_t DVS = M->dvsize;\
3045 if (DVS != 0) {\
3046 mchunkptr DV = M->dv;\
3047 assert(is_small(DVS));\
3048 insert_small_chunk(M, DV, DVS);\
3049 }\
3050 M->dvsize = S;\
3051 M->dv = P;\
3052}
3053
3054/* ------------------------- Operations on trees ------------------------- */
3055
3056/* Insert chunk into tree */
3057#define insert_large_chunk(M, X, S) {\
3058 tbinptr* H;\
3059 bindex_t I;\
3060 compute_tree_index(S, I);\
3061 H = treebin_at(M, I);\
3062 X->index = I;\
3063 X->child[0] = X->child[1] = 0;\
3064 if (!treemap_is_marked(M, I)) {\
3065 mark_treemap(M, I);\
3066 *H = X;\
3067 X->parent = (tchunkptr)H;\
3068 X->fd = X->bk = X;\
3069 }\
3070 else {\
3071 tchunkptr T = *H;\
3072 size_t K = S << leftshift_for_tree_index(I);\
3073 for (;;) {\
3074 if (chunksize(T) != S) {\
3075 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3076 K <<= 1;\
3077 if (*C != 0)\
3078 T = *C;\
3079 else if (RTCHECK(ok_address(M, C))) {\
3080 *C = X;\
3081 X->parent = T;\
3082 X->fd = X->bk = X;\
3083 break;\
3084 }\
3085 else {\
3086 CORRUPTION_ERROR_ACTION(M);\
3087 break;\
3088 }\
3089 }\
3090 else {\
3091 tchunkptr F = T->fd;\
3092 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3093 T->fd = F->bk = X;\
3094 X->fd = F;\
3095 X->bk = T;\
3096 X->parent = 0;\
3097 break;\
3098 }\
3099 else {\
3100 CORRUPTION_ERROR_ACTION(M);\
3101 break;\
3102 }\
3103 }\
3104 }\
3105 }\
3106}
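/*
  Illustrative sketch (not part of dlmalloc, kept disabled): each treebin is
  a bitwise trie keyed on chunk size. Once the bin's common prefix has been
  shifted out (K = S << leftshift_for_tree_index(I)), the remaining size
  bits, most significant first, pick child[0] or child[1] at each level,
  which is what the (K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1 step above does.
  The hypothetical helper below restates that selection for a key whose
  prefix has already been shifted to the top of the word.
*/
#if 0
#include <limits.h>
#include <stddef.h>

static unsigned trie_child_index(size_t key_bits, unsigned depth) {
  /* Bit (width-1-depth) of key_bits, i.e. the depth-th bit from the top. */
  return (unsigned)((key_bits >> (sizeof(size_t) * CHAR_BIT - 1u - depth)) & 1u);
}
#endif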
3107
3108/*
3109 Unlink steps:
3110
3111 1. If x is a chained node, unlink it from its same-sized fd/bk links
3112 and choose its bk node as its replacement.
3113 2. If x was the last node of its size, but not a leaf node, it must
3114 be replaced with a leaf node (not merely one with an open left or
3115     right), to make sure that lefts and rights of descendants
3116     correspond properly to bit masks. We use the rightmost descendant
3117 of x. We could use any other leaf, but this is easy to locate and
3118 tends to counteract removal of leftmosts elsewhere, and so keeps
3119 paths shorter than minimally guaranteed. This doesn't loop much
3120 because on average a node in a tree is near the bottom.
3121 3. If x is the base of a chain (i.e., has parent links) relink
3122 x's parent and children to x's replacement (or null if none).
3123
3124  Added check: if F->bk != X or R->fd != X, we have doubly linked list
3125 corruption, and abort.
3126*/
3127
3128#define unlink_large_chunk(M, X) {\
3129 tchunkptr XP = X->parent;\
3130 tchunkptr R;\
3131 if (X->bk != X) {\
3132 tchunkptr F = X->fd;\
3133 R = X->bk;\
3134 if (__builtin_expect (F->bk != X || R->fd != X, 0))\
3135 CORRUPTION_ERROR_ACTION(M);\
3136 if (RTCHECK(ok_address(M, F))) {\
3137 F->bk = R;\
3138 R->fd = F;\
3139 }\
3140 else {\
3141 CORRUPTION_ERROR_ACTION(M);\
3142 }\
3143 }\
3144 else {\
3145 tchunkptr* RP;\
3146 if (((R = *(RP = &(X->child[1]))) != 0) ||\
3147 ((R = *(RP = &(X->child[0]))) != 0)) {\
3148 tchunkptr* CP;\
3149 while ((*(CP = &(R->child[1])) != 0) ||\
3150 (*(CP = &(R->child[0])) != 0)) {\
3151 R = *(RP = CP);\
3152 }\
3153 if (RTCHECK(ok_address(M, RP)))\
3154 *RP = 0;\
3155 else {\
3156 CORRUPTION_ERROR_ACTION(M);\
3157 }\
3158 }\
3159 }\
3160 if (XP != 0) {\
3161 tbinptr* H = treebin_at(M, X->index);\
3162 if (X == *H) {\
3163 if ((*H = R) == 0) \
3164 clear_treemap(M, X->index);\
3165 }\
3166 else if (RTCHECK(ok_address(M, XP))) {\
3167 if (XP->child[0] == X) \
3168 XP->child[0] = R;\
3169 else \
3170 XP->child[1] = R;\
3171 }\
3172 else\
3173 CORRUPTION_ERROR_ACTION(M);\
3174 if (R != 0) {\
3175 if (RTCHECK(ok_address(M, R))) {\
3176 tchunkptr C0, C1;\
3177 R->parent = XP;\
3178 if ((C0 = X->child[0]) != 0) {\
3179 if (RTCHECK(ok_address(M, C0))) {\
3180 R->child[0] = C0;\
3181 C0->parent = R;\
3182 }\
3183 else\
3184 CORRUPTION_ERROR_ACTION(M);\
3185 }\
3186 if ((C1 = X->child[1]) != 0) {\
3187 if (RTCHECK(ok_address(M, C1))) {\
3188 R->child[1] = C1;\
3189 C1->parent = R;\
3190 }\
3191 else\
3192 CORRUPTION_ERROR_ACTION(M);\
3193 }\
3194 }\
3195 else\
3196 CORRUPTION_ERROR_ACTION(M);\
3197 }\
3198 }\
3199}
3200
3201/* Relays to large vs small bin operations */
3202
3203#define insert_chunk(M, P, S)\
3204 if (is_small(S)) insert_small_chunk(M, P, S)\
3205 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3206
3207#define unlink_chunk(M, P, S)\
3208 if (is_small(S)) unlink_small_chunk(M, P, S)\
3209 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3210
3211
3212/* Relays to internal calls to malloc/free from realloc, memalign etc */
3213
3214#if ONLY_MSPACES
3215#define internal_malloc(m, b) mspace_malloc(m, b)
3216#define internal_free(m, mem) mspace_free(m,mem);
3217#else /* ONLY_MSPACES */
3218#if MSPACES
3219#define internal_malloc(m, b)\
3220 (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3221#define internal_free(m, mem)\
3222 if (m == gm) dlfree(mem); else mspace_free(m,mem);
3223#else /* MSPACES */
3224#define internal_malloc(m, b) dlmalloc(b)
3225#define internal_free(m, mem) dlfree(mem)
3226#endif /* MSPACES */
3227#endif /* ONLY_MSPACES */
3228
3229/* ----------------------- Direct-mmapping chunks ----------------------- */
3230
3231/*
3232 Directly mmapped chunks are set up with an offset to the start of
3233 the mmapped region stored in the prev_foot field of the chunk. This
3234 allows reconstruction of the required argument to MUNMAP when freed,
3235 and also allows adjustment of the returned chunk to meet alignment
3236 requirements (especially in memalign). There is also enough space
3237 allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
3238 the PINUSE bit so frees can be checked.
3239*/
3240
3241/* Malloc using mmap */
3242static void* mmap_alloc(mstate m, size_t nb) {
3243 size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3244#if USE_MAX_ALLOWED_FOOTPRINT
3245 size_t new_footprint = m->footprint + mmsize;
3246 if (new_footprint <= m->footprint || /* Check for wrap around 0 */
3247 new_footprint > m->max_allowed_footprint)
3248 return 0;
3249#endif
3250 if (mmsize > nb) { /* Check for wrap around 0 */
3251 char* mm = (char*)(DIRECT_MMAP(mmsize));
3252 if (mm != CMFAIL) {
3253 size_t offset = align_offset(chunk2mem(mm));
3254 size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3255 mchunkptr p = (mchunkptr)(mm + offset);
3256 p->prev_foot = offset | IS_MMAPPED_BIT;
3257 (p)->head = (psize|CINUSE_BIT);
3258 mark_inuse_foot(m, p, psize);
3259 chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3260 chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3261
3262 if (mm < m->least_addr)
3263 m->least_addr = mm;
3264 if ((m->footprint += mmsize) > m->max_footprint)
3265 m->max_footprint = m->footprint;
3266 assert(is_aligned(chunk2mem(p)));
3267 check_mmapped_chunk(m, p);
3268 return chunk2mem(p);
3269 }
3270 }
3271 return 0;
3272}
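/*
  Illustrative sketch (not part of dlmalloc, kept disabled): mmap_alloc
  stores the offset from the start of the mmapped region in prev_foot
  (together with IS_MMAPPED_BIT), so a later free can rebuild the exact
  MUNMAP arguments. The hypothetical helper below restates that arithmetic;
  the real code performs it inline in the free paths.
*/
#if 0
static void recover_mmap_region(mchunkptr p, char** base_out, size_t* len_out) {
  size_t offset = p->prev_foot & ~IS_MMAPPED_BIT;     /* distance back to the mmap base */
  *base_out = (char*)p - offset;                      /* address originally returned by mmap */
  *len_out  = chunksize(p) + offset + MMAP_FOOT_PAD;  /* length to hand back to MUNMAP */
}
#endif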
3273
3274/* Realloc using mmap */
3275static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3276 size_t oldsize = chunksize(oldp);
3277 if (is_small(nb)) /* Can't shrink mmap regions below small size */
3278 return 0;
3279 /* Keep old chunk if big enough but not too big */
3280 if (oldsize >= nb + SIZE_T_SIZE &&
3281 (oldsize - nb) <= (mparams.granularity << 1))
3282 return oldp;
3283 else {
3284 size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
3285 size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3286 size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
3287 CHUNK_ALIGN_MASK);
3288 char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3289 oldmmsize, newmmsize, 1);
3290 if (cp != CMFAIL) {
3291 mchunkptr newp = (mchunkptr)(cp + offset);
3292 size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3293 newp->head = (psize|CINUSE_BIT);
3294 mark_inuse_foot(m, newp, psize);
3295 chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3296 chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3297
3298 if (cp < m->least_addr)
3299 m->least_addr = cp;
3300 if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3301 m->max_footprint = m->footprint;
3302 check_mmapped_chunk(m, newp);
3303 return newp;
3304 }
3305 }
3306 return 0;
3307}
3308
3309/* -------------------------- mspace management -------------------------- */
3310
3311/* Initialize top chunk and its size */
3312static void init_top(mstate m, mchunkptr p, size_t psize) {
3313 /* Ensure alignment */
3314 size_t offset = align_offset(chunk2mem(p));
3315 p = (mchunkptr)((char*)p + offset);
3316 psize -= offset;
3317
3318 m->top = p;
3319 m->topsize = psize;
3320 p->head = psize | PINUSE_BIT;
3321 /* set size of fake trailing chunk holding overhead space only once */
3322 chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3323 m->trim_check = mparams.trim_threshold; /* reset on each update */
3324}
3325
3326/* Initialize bins for a new mstate that is otherwise zeroed out */
3327static void init_bins(mstate m) {
3328 /* Establish circular links for smallbins */
3329 bindex_t i;
3330 for (i = 0; i < NSMALLBINS; ++i) {
3331 sbinptr bin = smallbin_at(m,i);
3332 bin->fd = bin->bk = bin;
3333 }
3334}
3335
3336#if PROCEED_ON_ERROR
3337
3338/* default corruption action */
3339static void reset_on_error(mstate m) {
3340 int i;
3341 ++malloc_corruption_error_count;
3342 /* Reinitialize fields to forget about all memory */
3343 m->smallbins = m->treebins = 0;
3344 m->dvsize = m->topsize = 0;
3345 m->seg.base = 0;
3346 m->seg.size = 0;
3347 m->seg.next = 0;
3348 m->top = m->dv = 0;
3349 for (i = 0; i < NTREEBINS; ++i)
3350 *treebin_at(m, i) = 0;
3351 init_bins(m);
3352}
3353#endif /* PROCEED_ON_ERROR */
3354
3355/* Allocate chunk and prepend remainder with chunk in successor base. */
3356static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3357 size_t nb) {
3358 mchunkptr p = align_as_chunk(newbase);
3359 mchunkptr oldfirst = align_as_chunk(oldbase);
3360 size_t psize = (char*)oldfirst - (char*)p;
3361 mchunkptr q = chunk_plus_offset(p, nb);
3362 size_t qsize = psize - nb;
3363 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3364
3365 assert((char*)oldfirst > (char*)q);
3366 assert(pinuse(oldfirst));
3367 assert(qsize >= MIN_CHUNK_SIZE);
3368
3369 /* consolidate remainder with first chunk of old base */
3370 if (oldfirst == m->top) {
3371 size_t tsize = m->topsize += qsize;
3372 m->top = q;
3373 q->head = tsize | PINUSE_BIT;
3374 check_top_chunk(m, q);
3375 }
3376 else if (oldfirst == m->dv) {
3377 size_t dsize = m->dvsize += qsize;
3378 m->dv = q;
3379 set_size_and_pinuse_of_free_chunk(q, dsize);
3380 }
3381 else {
3382 if (!cinuse(oldfirst)) {
3383 size_t nsize = chunksize(oldfirst);
3384 unlink_chunk(m, oldfirst, nsize);
3385 oldfirst = chunk_plus_offset(oldfirst, nsize);
3386 qsize += nsize;
3387 }
3388 set_free_with_pinuse(q, qsize, oldfirst);
3389 insert_chunk(m, q, qsize);
3390 check_free_chunk(m, q);
3391 }
3392
3393 check_malloced_chunk(m, chunk2mem(p), nb);
3394 return chunk2mem(p);
3395}
3396
3397
3398/* Add a segment to hold a new noncontiguous region */
3399static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3400 /* Determine locations and sizes of segment, fenceposts, old top */
3401 char* old_top = (char*)m->top;
3402 msegmentptr oldsp = segment_holding(m, old_top);
3403 char* old_end = oldsp->base + oldsp->size;
3404 size_t ssize = pad_request(sizeof(struct malloc_segment));
3405 char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3406 size_t offset = align_offset(chunk2mem(rawsp));
3407 char* asp = rawsp + offset;
3408 char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3409 mchunkptr sp = (mchunkptr)csp;
3410 msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3411 mchunkptr tnext = chunk_plus_offset(sp, ssize);
3412 mchunkptr p = tnext;
3413 int nfences = 0;
3414
3415 /* reset top to new space */
3416 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3417
3418 /* Set up segment record */
3419 assert(is_aligned(ss));
3420 set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3421 *ss = m->seg; /* Push current record */
3422 m->seg.base = tbase;
3423 m->seg.size = tsize;
3424 m->seg.sflags = mmapped;
3425 m->seg.next = ss;
3426
3427 /* Insert trailing fenceposts */
3428 for (;;) {
3429 mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3430 p->head = FENCEPOST_HEAD;
3431 ++nfences;
3432 if ((char*)(&(nextp->head)) < old_end)
3433 p = nextp;
3434 else
3435 break;
3436 }
3437 assert(nfences >= 2);
3438
3439 /* Insert the rest of old top into a bin as an ordinary free chunk */
3440 if (csp != old_top) {
3441 mchunkptr q = (mchunkptr)old_top;
3442 size_t psize = csp - old_top;
3443 mchunkptr tn = chunk_plus_offset(q, psize);
3444 set_free_with_pinuse(q, psize, tn);
3445 insert_chunk(m, q, psize);
3446 }
3447
3448 check_top_chunk(m, m->top);
3449}
3450
3451/* -------------------------- System allocation -------------------------- */
3452
3453/* Get memory from system using MORECORE or MMAP */
3454static void* sys_alloc(mstate m, size_t nb) {
3455 char* tbase = CMFAIL;
3456 size_t tsize = 0;
3457 flag_t mmap_flag = 0;
3458
3459 init_mparams();
3460
3461 /* Directly map large chunks */
3462 if (use_mmap(m) && nb >= mparams.mmap_threshold) {
3463 void* mem = mmap_alloc(m, nb);
3464 if (mem != 0)
3465 return mem;
3466 }
3467
3468#if USE_MAX_ALLOWED_FOOTPRINT
3469 /* Make sure the footprint doesn't grow past max_allowed_footprint.
3470 * This covers all cases except for where we need to page align, below.
3471 */
3472 {
3473 size_t new_footprint = m->footprint +
3474 granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3475 if (new_footprint <= m->footprint || /* Check for wrap around 0 */
3476 new_footprint > m->max_allowed_footprint)
3477 return 0;
3478 }
3479#endif
3480
3481 /*
3482 Try getting memory in any of three ways (in most-preferred to
3483 least-preferred order):
3484 1. A call to MORECORE that can normally contiguously extend memory.
3485        (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE
3486        or main space is mmapped or a previous contiguous call failed)
3487 2. A call to MMAP new space (disabled if not HAVE_MMAP).
3488 Note that under the default settings, if MORECORE is unable to
3489 fulfill a request, and HAVE_MMAP is true, then mmap is
3490 used as a noncontiguous system allocator. This is a useful backup
3491 strategy for systems with holes in address spaces -- in this case
3492 sbrk cannot contiguously expand the heap, but mmap may be able to
3493 find space.
3494 3. A call to MORECORE that cannot usually contiguously extend memory.
3495 (disabled if not HAVE_MORECORE)
3496 */
3497
3498 if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
3499 char* br = CMFAIL;
3500 msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
3501 size_t asize = 0;
3502 ACQUIRE_MORECORE_LOCK();
3503
3504 if (ss == 0) { /* First time through or recovery */
3505 char* base = (char*)CALL_MORECORE(0);
3506 if (base != CMFAIL) {
3507 asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3508 /* Adjust to end on a page boundary */
3509 if (!is_page_aligned(base)) {
3510 asize += (page_align((size_t)base) - (size_t)base);
3511#if USE_MAX_ALLOWED_FOOTPRINT
3512 /* If the alignment pushes us over max_allowed_footprint,
3513 * poison the upcoming call to MORECORE and continue.
3514 */
3515 {
3516 size_t new_footprint = m->footprint + asize;
3517 if (new_footprint <= m->footprint || /* Check for wrap around 0 */
3518 new_footprint > m->max_allowed_footprint) {
3519 asize = HALF_MAX_SIZE_T;
3520 }
3521 }
3522#endif
3523 }
3524 /* Can't call MORECORE if size is negative when treated as signed */
3525 if (asize < HALF_MAX_SIZE_T &&
3526 (br = (char*)(CALL_MORECORE(asize))) == base) {
3527 tbase = base;
3528 tsize = asize;
3529 }
3530 }
3531 }
3532 else {
3533 /* Subtract out existing available top space from MORECORE request. */
3534 asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
3535      /* Use mem here only if it did contiguously extend old space */
3536 if (asize < HALF_MAX_SIZE_T &&
3537 (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
3538 tbase = br;
3539 tsize = asize;
3540 }
3541 }
3542
3543 if (tbase == CMFAIL) { /* Cope with partial failure */
3544 if (br != CMFAIL) { /* Try to use/extend the space we did get */
3545 if (asize < HALF_MAX_SIZE_T &&
3546 asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
3547 size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
3548 if (esize < HALF_MAX_SIZE_T) {
3549 char* end = (char*)CALL_MORECORE(esize);
3550 if (end != CMFAIL)
3551 asize += esize;
3552 else { /* Can't use; try to release */
3553 CALL_MORECORE(-asize);
3554 br = CMFAIL;
3555 }
3556 }
3557 }
3558 }
3559 if (br != CMFAIL) { /* Use the space we did get */
3560 tbase = br;
3561 tsize = asize;
3562 }
3563 else
3564 disable_contiguous(m); /* Don't try contiguous path in the future */
3565 }
3566
3567 RELEASE_MORECORE_LOCK();
3568 }
3569
3570 if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
3571 size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
3572 size_t rsize = granularity_align(req);
3573 if (rsize > nb) { /* Fail if wraps around zero */
3574 char* mp = (char*)(CALL_MMAP(rsize));
3575 if (mp != CMFAIL) {
3576 tbase = mp;
3577 tsize = rsize;
3578 mmap_flag = IS_MMAPPED_BIT;
3579 }
3580 }
3581 }
3582
3583 if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
3584 size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3585 if (asize < HALF_MAX_SIZE_T) {
3586 char* br = CMFAIL;
3587 char* end = CMFAIL;
3588 ACQUIRE_MORECORE_LOCK();
3589 br = (char*)(CALL_MORECORE(asize));
3590 end = (char*)(CALL_MORECORE(0));
3591 RELEASE_MORECORE_LOCK();
3592 if (br != CMFAIL && end != CMFAIL && br < end) {
3593 size_t ssize = end - br;
3594 if (ssize > nb + TOP_FOOT_SIZE) {
3595 tbase = br;
3596 tsize = ssize;
3597 }
3598 }
3599 }
3600 }
3601
3602 if (tbase != CMFAIL) {
3603
3604 if ((m->footprint += tsize) > m->max_footprint)
3605 m->max_footprint = m->footprint;
3606
3607 if (!is_initialized(m)) { /* first-time initialization */
3608 m->seg.base = m->least_addr = tbase;
3609 m->seg.size = tsize;
3610 m->seg.sflags = mmap_flag;
3611 m->magic = mparams.magic;
3612 init_bins(m);
3613 if (is_global(m))
3614 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3615 else {
3616 /* Offset top by embedded malloc_state */
3617 mchunkptr mn = next_chunk(mem2chunk(m));
3618 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
3619 }
3620 }
3621
3622 else {
3623 /* Try to merge with an existing segment */
3624 msegmentptr sp = &m->seg;
3625 while (sp != 0 && tbase != sp->base + sp->size)
3626 sp = sp->next;
3627 if (sp != 0 &&
3628 !is_extern_segment(sp) &&
3629 (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
3630 segment_holds(sp, m->top)) { /* append */
3631 sp->size += tsize;
3632 init_top(m, m->top, m->topsize + tsize);
3633 }
3634 else {
3635 if (tbase < m->least_addr)
3636 m->least_addr = tbase;
3637 sp = &m->seg;
3638 while (sp != 0 && sp->base != tbase + tsize)
3639 sp = sp->next;
3640 if (sp != 0 &&
3641 !is_extern_segment(sp) &&
3642 (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
3643 char* oldbase = sp->base;
3644 sp->base = tbase;
3645 sp->size += tsize;
3646 return prepend_alloc(m, tbase, oldbase, nb);
3647 }
3648 else
3649 add_segment(m, tbase, tsize, mmap_flag);
3650 }
3651 }
3652
3653 if (nb < m->topsize) { /* Allocate from new or extended top space */
3654 size_t rsize = m->topsize -= nb;
3655 mchunkptr p = m->top;
3656 mchunkptr r = m->top = chunk_plus_offset(p, nb);
3657 r->head = rsize | PINUSE_BIT;
3658 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3659 check_top_chunk(m, m->top);
3660 check_malloced_chunk(m, chunk2mem(p), nb);
3661 return chunk2mem(p);
3662 }
3663 }
3664
3665 MALLOC_FAILURE_ACTION;
3666 return 0;
3667}
3668
3669/* ----------------------- system deallocation -------------------------- */
3670
3671/* Unmap and unlink any mmapped segments that don't contain used chunks */
3672static size_t release_unused_segments(mstate m) {
3673 size_t released = 0;
3674 msegmentptr pred = &m->seg;
3675 msegmentptr sp = pred->next;
3676 while (sp != 0) {
3677 char* base = sp->base;
3678 size_t size = sp->size;
3679 msegmentptr next = sp->next;
3680 if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
3681 mchunkptr p = align_as_chunk(base);
3682 size_t psize = chunksize(p);
3683 /* Can unmap if first chunk holds entire segment and not pinned */
3684 if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
3685 tchunkptr tp = (tchunkptr)p;
3686 assert(segment_holds(sp, (char*)sp));
3687 if (p == m->dv) {
3688 m->dv = 0;
3689 m->dvsize = 0;
3690 }
3691 else {
3692 unlink_large_chunk(m, tp);
3693 }
3694 if (CALL_MUNMAP(base, size) == 0) {
3695 released += size;
3696 m->footprint -= size;
3697 /* unlink obsoleted record */
3698 sp = pred;
3699 sp->next = next;
3700 }
3701 else { /* back out if cannot unmap */
3702 insert_large_chunk(m, tp, psize);
3703 }
3704 }
3705 }
3706 pred = sp;
3707 sp = next;
3708 }
3709 return released;
3710}
3711
3712static int sys_trim(mstate m, size_t pad) {
3713 size_t released = 0;
3714 if (pad < MAX_REQUEST && is_initialized(m)) {
3715 pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
3716
3717 if (m->topsize > pad) {
3718 /* Shrink top space in granularity-size units, keeping at least one */
3719 size_t unit = mparams.granularity;
3720 size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
3721 SIZE_T_ONE) * unit;
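      /* Worked example (assuming, say, a 64 KiB granularity): if
         m->topsize - pad == 300000 and unit == 65536, then
         (300000 + 65535) / 65536 == 5, so extra == (5 - 1) * 65536 == 262144;
         whole granularity units are released and the remainder stays in top. */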
3722 msegmentptr sp = segment_holding(m, (char*)m->top);
3723
3724 if (!is_extern_segment(sp)) {
3725 if (is_mmapped_segment(sp)) {
3726 if (HAVE_MMAP &&
3727 sp->size >= extra &&
3728 !has_segment_link(m, sp)) { /* can't shrink if pinned */
3729 size_t newsize = sp->size - extra;
3730 /* Prefer mremap, fall back to munmap */
3731 if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
3732 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
3733 released = extra;
3734 }
3735 }
3736 }
3737 else if (HAVE_MORECORE) {
3738 if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
3739 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
3740 ACQUIRE_MORECORE_LOCK();
3741 {
3742 /* Make sure end of memory is where we last set it. */
3743 char* old_br = (char*)(CALL_MORECORE(0));
3744 if (old_br == sp->base + sp->size) {
3745 char* rel_br = (char*)(CALL_MORECORE(-extra));
3746 char* new_br = (char*)(CALL_MORECORE(0));
3747 if (rel_br != CMFAIL && new_br < old_br)
3748 released = old_br - new_br;
3749 }
3750 }
3751 RELEASE_MORECORE_LOCK();
3752 }
3753 }
3754
3755 if (released != 0) {
3756 sp->size -= released;
3757 m->footprint -= released;
3758 init_top(m, m->top, m->topsize - released);
3759 check_top_chunk(m, m->top);
3760 }
3761 }
3762
3763 /* Unmap any unused mmapped segments */
3764 if (HAVE_MMAP)
3765 released += release_unused_segments(m);
3766
3767 /* On failure, disable autotrim to avoid repeated failed future calls */
3768 if (released == 0)
3769 m->trim_check = MAX_SIZE_T;
3770 }
3771
3772 return (released != 0)? 1 : 0;
3773}
3774
3775/* ---------------------------- malloc support --------------------------- */
3776
3777/* allocate a large request from the best fitting chunk in a treebin */
3778static void* tmalloc_large(mstate m, size_t nb) {
3779 tchunkptr v = 0;
3780 size_t rsize = -nb; /* Unsigned negation */
3781 tchunkptr t;
3782 bindex_t idx;
3783 compute_tree_index(nb, idx);
3784
3785 if ((t = *treebin_at(m, idx)) != 0) {
3786 /* Traverse tree for this bin looking for node with size == nb */
3787 size_t sizebits = nb << leftshift_for_tree_index(idx);
3788 tchunkptr rst = 0; /* The deepest untaken right subtree */
3789 for (;;) {
3790 tchunkptr rt;
3791 size_t trem = chunksize(t) - nb;
3792 if (trem < rsize) {
3793 v = t;
3794 if ((rsize = trem) == 0)
3795 break;
3796 }
3797 rt = t->child[1];
3798 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3799 if (rt != 0 && rt != t)
3800 rst = rt;
3801 if (t == 0) {
3802 t = rst; /* set t to least subtree holding sizes > nb */
3803 break;
3804 }
3805 sizebits <<= 1;
3806 }
3807 }
3808
3809 if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
3810 binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3811 if (leftbits != 0) {
3812 bindex_t i;
3813 binmap_t leastbit = least_bit(leftbits);
3814 compute_bit2idx(leastbit, i);
3815 t = *treebin_at(m, i);
3816 }
3817 }
3818
3819 while (t != 0) { /* find smallest of tree or subtree */
3820 size_t trem = chunksize(t) - nb;
3821 if (trem < rsize) {
3822 rsize = trem;
3823 v = t;
3824 }
3825 t = leftmost_child(t);
3826 }
3827
3828 /* If dv is a better fit, return 0 so malloc will use it */
3829 if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3830 if (RTCHECK(ok_address(m, v))) { /* split */
3831 mchunkptr r = chunk_plus_offset(v, nb);
3832 assert(chunksize(v) == rsize + nb);
3833 if (RTCHECK(ok_next(v, r))) {
3834 unlink_large_chunk(m, v);
3835 if (rsize < MIN_CHUNK_SIZE)
3836 set_inuse_and_pinuse(m, v, (rsize + nb));
3837 else {
3838 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3839 set_size_and_pinuse_of_free_chunk(r, rsize);
3840 insert_chunk(m, r, rsize);
3841 }
3842 return chunk2mem(v);
3843 }
3844 }
3845 CORRUPTION_ERROR_ACTION(m);
3846 }
3847 return 0;
3848}
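/*
  Illustrative sketch (not part of dlmalloc, kept disabled): the
  leftbits/least_bit step above locates the next non-empty treebin strictly
  above idx by masking the bin bitmap and isolating its lowest set bit. The
  hypothetical helper below does the same on a plain unsigned map, using a
  GCC builtin for the final bit-to-index conversion.
*/
#if 0
static unsigned next_marked_bin(unsigned map, unsigned idx) {
  unsigned bit = 1u << idx;
  unsigned above = (bit << 1) | (0u - (bit << 1)); /* all bits strictly above idx */
  unsigned masked = map & above;
  return masked ? (unsigned)__builtin_ctz(masked)  /* index of the lowest marked bin */
                : (unsigned)-1;                    /* no larger bin is marked */
}
#endif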
3849
3850/* allocate a small request from the best fitting chunk in a treebin */
3851static void* tmalloc_small(mstate m, size_t nb) {
3852 tchunkptr t, v;
3853 size_t rsize;
3854 bindex_t i;
3855 binmap_t leastbit = least_bit(m->treemap);
3856 compute_bit2idx(leastbit, i);
3857
3858 v = t = *treebin_at(m, i);
3859 rsize = chunksize(t) - nb;
3860
3861 while ((t = leftmost_child(t)) != 0) {
3862 size_t trem = chunksize(t) - nb;
3863 if (trem < rsize) {
3864 rsize = trem;
3865 v = t;
3866 }
3867 }
3868
3869 if (RTCHECK(ok_address(m, v))) {
3870 mchunkptr r = chunk_plus_offset(v, nb);
3871 assert(chunksize(v) == rsize + nb);
3872 if (RTCHECK(ok_next(v, r))) {
3873 unlink_large_chunk(m, v);
3874 if (rsize < MIN_CHUNK_SIZE)
3875 set_inuse_and_pinuse(m, v, (rsize + nb));
3876 else {
3877 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3878 set_size_and_pinuse_of_free_chunk(r, rsize);
3879 replace_dv(m, r, rsize);
3880 }
3881 return chunk2mem(v);
3882 }
3883 }
3884
3885 CORRUPTION_ERROR_ACTION(m);
3886 return 0;
3887}
3888
3889/* --------------------------- realloc support --------------------------- */
3890
3891static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3892 if (bytes >= MAX_REQUEST) {
3893 MALLOC_FAILURE_ACTION;
3894 return 0;
3895 }
3896 if (!PREACTION(m)) {
3897 mchunkptr oldp = mem2chunk(oldmem);
3898 size_t oldsize = chunksize(oldp);
3899 mchunkptr next = chunk_plus_offset(oldp, oldsize);
3900 mchunkptr newp = 0;
3901 void* extra = 0;
3902
3903 /* Try to either shrink or extend into top. Else malloc-copy-free */
3904
3905 if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3906 ok_next(oldp, next) && ok_pinuse(next))) {
3907 size_t nb = request2size(bytes);
3908 if (is_mmapped(oldp))
3909 newp = mmap_resize(m, oldp, nb);
3910 else if (oldsize >= nb) { /* already big enough */
3911 size_t rsize = oldsize - nb;
3912 newp = oldp;
3913 if (rsize >= MIN_CHUNK_SIZE) {
3914 mchunkptr remainder = chunk_plus_offset(newp, nb);
3915 set_inuse(m, newp, nb);
3916 set_inuse(m, remainder, rsize);
3917 extra = chunk2mem(remainder);
3918 }
3919 }
3920 else if (next == m->top && oldsize + m->topsize > nb) {
3921 /* Expand into top */
3922 size_t newsize = oldsize + m->topsize;
3923 size_t newtopsize = newsize - nb;
3924 mchunkptr newtop = chunk_plus_offset(oldp, nb);
3925 set_inuse(m, oldp, nb);
3926 newtop->head = newtopsize |PINUSE_BIT;
3927 m->top = newtop;
3928 m->topsize = newtopsize;
3929 newp = oldp;
3930 }
3931 }
3932 else {
3933 USAGE_ERROR_ACTION(m, oldmem);
3934 POSTACTION(m);
3935 return 0;
3936 }
3937
3938 POSTACTION(m);
3939
3940 if (newp != 0) {
3941 if (extra != 0) {
3942 internal_free(m, extra);
3943 }
3944 check_inuse_chunk(m, newp);
3945 return chunk2mem(newp);
3946 }
3947 else {
3948 void* newmem = internal_malloc(m, bytes);
3949 if (newmem != 0) {
3950 size_t oc = oldsize - overhead_for(oldp);
3951 memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3952 internal_free(m, oldmem);
3953 }
3954 return newmem;
3955 }
3956 }
3957 return 0;
3958}
3959
3960/* --------------------------- memalign support -------------------------- */
3961
3962static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3963 if (alignment <= MALLOC_ALIGNMENT) /* Can just use malloc */
3964 return internal_malloc(m, bytes);
3965 if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3966 alignment = MIN_CHUNK_SIZE;
3967 if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3968 size_t a = MALLOC_ALIGNMENT << 1;
3969 while (a < alignment) a <<= 1;
3970 alignment = a;
3971 }
3972
3973 if (bytes >= MAX_REQUEST - alignment) {
3974 if (m != 0) { /* Test isn't needed but avoids compiler warning */
3975 MALLOC_FAILURE_ACTION;
3976 }
3977 }
3978 else {
3979 size_t nb = request2size(bytes);
3980 size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3981 char* mem = (char*)internal_malloc(m, req);
3982 if (mem != 0) {
3983 void* leader = 0;
3984 void* trailer = 0;
3985 mchunkptr p = mem2chunk(mem);
3986
3987 if (PREACTION(m)) return 0;
3988 if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3989 /*
3990 Find an aligned spot inside chunk. Since we need to give
3991 back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3992 the first calculation places us at a spot with less than
3993 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3994 We've allocated enough total room so that this is always
3995 possible.
3996 */
3997 char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3998 alignment -
3999 SIZE_T_ONE)) &
4000 -alignment));
4001 char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
4002 br : br+alignment;
4003 mchunkptr newp = (mchunkptr)pos;
4004 size_t leadsize = pos - (char*)(p);
4005 size_t newsize = chunksize(p) - leadsize;
4006
4007 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4008 newp->prev_foot = p->prev_foot + leadsize;
4009 newp->head = (newsize|CINUSE_BIT);
4010 }
4011 else { /* Otherwise, give back leader, use the rest */
4012 set_inuse(m, newp, newsize);
4013 set_inuse(m, p, leadsize);
4014 leader = chunk2mem(p);
4015 }
4016 p = newp;
4017 }
4018
4019 /* Give back spare room at the end */
4020 if (!is_mmapped(p)) {
4021 size_t size = chunksize(p);
4022 if (size > nb + MIN_CHUNK_SIZE) {
4023 size_t remainder_size = size - nb;
4024 mchunkptr remainder = chunk_plus_offset(p, nb);
4025 set_inuse(m, p, nb);
4026 set_inuse(m, remainder, remainder_size);
4027 trailer = chunk2mem(remainder);
4028 }
4029 }
4030
4031 assert (chunksize(p) >= nb);
4032 assert((((size_t)(chunk2mem(p))) % alignment) == 0);
4033 check_inuse_chunk(m, p);
4034 POSTACTION(m);
4035 if (leader != 0) {
4036 internal_free(m, leader);
4037 }
4038 if (trailer != 0) {
4039 internal_free(m, trailer);
4040 }
4041 return chunk2mem(p);
4042 }
4043 }
4044 return 0;
4045}
4046
4047/* ------------------------ comalloc/coalloc support --------------------- */
4048
4049static void** ialloc(mstate m,
4050 size_t n_elements,
4051 size_t* sizes,
4052 int opts,
4053 void* chunks[]) {
4054 /*
4055 This provides common support for independent_X routines, handling
4056 all of the combinations that can result.
4057
4058 The opts arg has:
4059 bit 0 set if all elements are same size (using sizes[0])
4060 bit 1 set if elements should be zeroed
4061 */
4062
4063 size_t element_size; /* chunksize of each element, if all same */
4064 size_t contents_size; /* total size of elements */
4065 size_t array_size; /* request size of pointer array */
4066 void* mem; /* malloced aggregate space */
4067 mchunkptr p; /* corresponding chunk */
4068 size_t remainder_size; /* remaining bytes while splitting */
4069 void** marray; /* either "chunks" or malloced ptr array */
4070 mchunkptr array_chunk; /* chunk for malloced ptr array */
4071 flag_t was_enabled; /* to disable mmap */
4072 size_t size;
4073 size_t i;
4074
4075 /* compute array length, if needed */
4076 if (chunks != 0) {
4077 if (n_elements == 0)
4078 return chunks; /* nothing to do */
4079 marray = chunks;
4080 array_size = 0;
4081 }
4082 else {
4083 /* if empty req, must still return chunk representing empty array */
4084 if (n_elements == 0)
4085 return (void**)internal_malloc(m, 0);
4086 marray = 0;
4087 array_size = request2size(n_elements * (sizeof(void*)));
4088 }
4089
4090 /* compute total element size */
4091 if (opts & 0x1) { /* all-same-size */
4092 element_size = request2size(*sizes);
4093 contents_size = n_elements * element_size;
4094 }
4095 else { /* add up all the sizes */
4096 element_size = 0;
4097 contents_size = 0;
4098 for (i = 0; i != n_elements; ++i)
4099 contents_size += request2size(sizes[i]);
4100 }
4101
4102 size = contents_size + array_size;
4103
4104 /*
4105 Allocate the aggregate chunk. First disable direct-mmapping so
4106 malloc won't use it, since we would not be able to later
4107 free/realloc space internal to a segregated mmap region.
4108 */
4109 was_enabled = use_mmap(m);
4110 disable_mmap(m);
4111 mem = internal_malloc(m, size - CHUNK_OVERHEAD);
4112 if (was_enabled)
4113 enable_mmap(m);
4114 if (mem == 0)
4115 return 0;
4116
4117 if (PREACTION(m)) return 0;
4118 p = mem2chunk(mem);
4119 remainder_size = chunksize(p);
4120
4121 assert(!is_mmapped(p));
4122
4123 if (opts & 0x2) { /* optionally clear the elements */
4124 memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
4125 }
4126
4127 /* If not provided, allocate the pointer array as final part of chunk */
4128 if (marray == 0) {
4129 size_t array_chunk_size;
4130 array_chunk = chunk_plus_offset(p, contents_size);
4131 array_chunk_size = remainder_size - contents_size;
4132 marray = (void**) (chunk2mem(array_chunk));
4133 set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
4134 remainder_size = contents_size;
4135 }
4136
4137 /* split out elements */
4138 for (i = 0; ; ++i) {
4139 marray[i] = chunk2mem(p);
4140 if (i != n_elements-1) {
4141 if (element_size != 0)
4142 size = element_size;
4143 else
4144 size = request2size(sizes[i]);
4145 remainder_size -= size;
4146 set_size_and_pinuse_of_inuse_chunk(m, p, size);
4147 p = chunk_plus_offset(p, size);
4148 }
4149 else { /* the final element absorbs any overallocation slop */
4150 set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
4151 break;
4152 }
4153 }
4154
4155#if DEBUG
4156 if (marray != chunks) {
4157 /* final element must have exactly exhausted chunk */
4158 if (element_size != 0) {
4159 assert(remainder_size == element_size);
4160 }
4161 else {
4162 assert(remainder_size == request2size(sizes[i]));
4163 }
4164 check_inuse_chunk(m, mem2chunk(marray));
4165 }
4166 for (i = 0; i != n_elements; ++i)
4167 check_inuse_chunk(m, mem2chunk(marray[i]));
4168
4169#endif /* DEBUG */
4170
4171 POSTACTION(m);
4172 return marray;
4173}
4174
4175
4176/* -------------------------- public routines ---------------------------- */
4177
4178#if !ONLY_MSPACES
4179
4180void* dlmalloc(size_t bytes) {
4181 /*
4182 Basic algorithm:
4183 If a small request (< 256 bytes minus per-chunk overhead):
4184 1. If one exists, use a remainderless chunk in associated smallbin.
4185 (Remainderless means that there are too few excess bytes to
4186 represent as a chunk.)
4187 2. If it is big enough, use the dv chunk, which is normally the
4188 chunk adjacent to the one used for the most recent small request.
4189 3. If one exists, split the smallest available chunk in a bin,
4190 saving remainder in dv.
4191 4. If it is big enough, use the top chunk.
4192 5. If available, get memory from system and use it
4193 Otherwise, for a large request:
4194 1. Find the smallest available binned chunk that fits, and use it
4195 if it is better fitting than dv chunk, splitting if necessary.
4196 2. If better fitting than any binned chunk, use the dv chunk.
4197 3. If it is big enough, use the top chunk.
4198 4. If request size >= mmap threshold, try to directly mmap this chunk.
4199 5. If available, get memory from system and use it
4200
4201 The ugly goto's here ensure that postaction occurs along all paths.
4202 */
4203
4204 if (!PREACTION(gm)) {
4205 void* mem;
4206 size_t nb;
4207 if (bytes <= MAX_SMALL_REQUEST) {
4208 bindex_t idx;
4209 binmap_t smallbits;
4210 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4211 idx = small_index(nb);
4212 smallbits = gm->smallmap >> idx;
4213
4214 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4215 mchunkptr b, p;
4216 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4217 b = smallbin_at(gm, idx);
4218 p = b->fd;
4219 assert(chunksize(p) == small_index2size(idx));
4220 unlink_first_small_chunk(gm, b, p, idx);
4221 set_inuse_and_pinuse(gm, p, small_index2size(idx));
4222 mem = chunk2mem(p);
4223 check_malloced_chunk(gm, mem, nb);
4224 goto postaction;
4225 }
4226
4227 else if (nb > gm->dvsize) {
4228 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4229 mchunkptr b, p, r;
4230 size_t rsize;
4231 bindex_t i;
4232 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4233 binmap_t leastbit = least_bit(leftbits);
4234 compute_bit2idx(leastbit, i);
4235 b = smallbin_at(gm, i);
4236 p = b->fd;
4237 assert(chunksize(p) == small_index2size(i));
4238 unlink_first_small_chunk(gm, b, p, i);
4239 rsize = small_index2size(i) - nb;
4240        /* Fit here cannot be remainderless if 4-byte sizes */
4241 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4242 set_inuse_and_pinuse(gm, p, small_index2size(i));
4243 else {
4244 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4245 r = chunk_plus_offset(p, nb);
4246 set_size_and_pinuse_of_free_chunk(r, rsize);
4247 replace_dv(gm, r, rsize);
4248 }
4249 mem = chunk2mem(p);
4250 check_malloced_chunk(gm, mem, nb);
4251 goto postaction;
4252 }
4253
4254 else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4255 check_malloced_chunk(gm, mem, nb);
4256 goto postaction;
4257 }
4258 }
4259 }
4260 else if (bytes >= MAX_REQUEST)
4261 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4262 else {
4263 nb = pad_request(bytes);
4264 if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4265 check_malloced_chunk(gm, mem, nb);
4266 goto postaction;
4267 }
4268 }
4269
4270 if (nb <= gm->dvsize) {
4271 size_t rsize = gm->dvsize - nb;
4272 mchunkptr p = gm->dv;
4273 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4274 mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4275 gm->dvsize = rsize;
4276 set_size_and_pinuse_of_free_chunk(r, rsize);
4277 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4278 }
4279 else { /* exhaust dv */
4280 size_t dvs = gm->dvsize;
4281 gm->dvsize = 0;
4282 gm->dv = 0;
4283 set_inuse_and_pinuse(gm, p, dvs);
4284 }
4285 mem = chunk2mem(p);
4286 check_malloced_chunk(gm, mem, nb);
4287 goto postaction;
4288 }
4289
4290 else if (nb < gm->topsize) { /* Split top */
4291 size_t rsize = gm->topsize -= nb;
4292 mchunkptr p = gm->top;
4293 mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4294 r->head = rsize | PINUSE_BIT;
4295 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4296 mem = chunk2mem(p);
4297 check_top_chunk(gm, gm->top);
4298 check_malloced_chunk(gm, mem, nb);
4299 goto postaction;
4300 }
4301
4302 mem = sys_alloc(gm, nb);
4303
4304 postaction:
4305 POSTACTION(gm);
4306 return mem;
4307 }
4308
4309 return 0;
4310}
4311
4312void dlfree(void* mem) {
4313 /*
4314     Consolidate freed chunks with preceding or succeeding bordering
4315 free chunks, if they exist, and then place in a bin. Intermixed
4316 with special cases for top, dv, mmapped chunks, and usage errors.
4317 */
4318
4319 if (mem != 0) {
4320 mchunkptr p = mem2chunk(mem);
4321#if FOOTERS
4322 mstate fm = get_mstate_for(p);
4323 if (!ok_magic(fm)) {
4324 USAGE_ERROR_ACTION(fm, p);
4325 return;
4326 }
4327#else /* FOOTERS */
4328#define fm gm
4329#endif /* FOOTERS */
4330 if (!PREACTION(fm)) {
4331 check_inuse_chunk(fm, p);
4332 if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4333 size_t psize = chunksize(p);
4334 mchunkptr next = chunk_plus_offset(p, psize);
4335 if (!pinuse(p)) {
4336 size_t prevsize = p->prev_foot;
4337 if ((prevsize & IS_MMAPPED_BIT) != 0) {
4338 prevsize &= ~IS_MMAPPED_BIT;
4339 psize += prevsize + MMAP_FOOT_PAD;
4340 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4341 fm->footprint -= psize;
4342 goto postaction;
4343 }
4344 else {
4345 mchunkptr prev = chunk_minus_offset(p, prevsize);
4346 psize += prevsize;
4347 p = prev;
4348 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4349 if (p != fm->dv) {
4350 unlink_chunk(fm, p, prevsize);
4351 }
4352 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4353 fm->dvsize = psize;
4354 set_free_with_pinuse(p, psize, next);
4355 goto postaction;
4356 }
4357 }
4358 else
4359 goto erroraction;
4360 }
4361 }
4362
4363 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4364 if (!cinuse(next)) { /* consolidate forward */
4365 if (next == fm->top) {
4366 size_t tsize = fm->topsize += psize;
4367 fm->top = p;
4368 p->head = tsize | PINUSE_BIT;
4369 if (p == fm->dv) {
4370 fm->dv = 0;
4371 fm->dvsize = 0;
4372 }
4373 if (should_trim(fm, tsize))
4374 sys_trim(fm, 0);
4375 goto postaction;
4376 }
4377 else if (next == fm->dv) {
4378 size_t dsize = fm->dvsize += psize;
4379 fm->dv = p;
4380 set_size_and_pinuse_of_free_chunk(p, dsize);
4381 goto postaction;
4382 }
4383 else {
4384 size_t nsize = chunksize(next);
4385 psize += nsize;
4386 unlink_chunk(fm, next, nsize);
4387 set_size_and_pinuse_of_free_chunk(p, psize);
4388 if (p == fm->dv) {
4389 fm->dvsize = psize;
4390 goto postaction;
4391 }
4392 }
4393 }
4394 else
4395 set_free_with_pinuse(p, psize, next);
4396 insert_chunk(fm, p, psize);
4397 check_free_chunk(fm, p);
4398 goto postaction;
4399 }
4400 }
4401 erroraction:
4402 USAGE_ERROR_ACTION(fm, p);
4403 postaction:
4404 POSTACTION(fm);
4405 }
4406 }
4407#if !FOOTERS
4408#undef fm
4409#endif /* FOOTERS */
4410}
4411
4412void* dlcalloc(size_t n_elements, size_t elem_size) {
4413 void *mem;
4414 if (n_elements && MAX_SIZE_T / n_elements < elem_size) {
4415 /* Fail on overflow */
4416 MALLOC_FAILURE_ACTION;
4417 return NULL;
4418 }
4419 elem_size *= n_elements;
4420 mem = dlmalloc(elem_size);
4421 if (mem && calloc_must_clear(mem2chunk(mem)))
4422 memset(mem, 0, elem_size);
4423 return mem;
4424}
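/*
  Worked example of the overflow guard above (assuming 32-bit size_t):
  dlcalloc(0x10000, 0x10000) would need 0x100000000 bytes, which wraps to 0
  in size_t; since MAX_SIZE_T / 0x10000 == 0xFFFF < 0x10000, the check fails
  the request instead of returning an undersized block.
*/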
4425
4426void* dlrealloc(void* oldmem, size_t bytes) {
4427 if (oldmem == 0)
4428 return dlmalloc(bytes);
4429#ifdef REALLOC_ZERO_BYTES_FREES
4430 if (bytes == 0) {
4431 dlfree(oldmem);
4432 return 0;
4433 }
4434#endif /* REALLOC_ZERO_BYTES_FREES */
4435 else {
4436#if ! FOOTERS
4437 mstate m = gm;
4438#else /* FOOTERS */
4439 mstate m = get_mstate_for(mem2chunk(oldmem));
4440 if (!ok_magic(m)) {
4441 USAGE_ERROR_ACTION(m, oldmem);
4442 return 0;
4443 }
4444#endif /* FOOTERS */
4445 return internal_realloc(m, oldmem, bytes);
4446 }
4447}
4448
4449void* dlmemalign(size_t alignment, size_t bytes) {
4450 return internal_memalign(gm, alignment, bytes);
4451}
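/*
  Illustrative usage sketch (not part of dlmalloc, kept disabled): asking
  dlmemalign for an alignment larger than MALLOC_ALIGNMENT;
  internal_memalign above rounds the alignment up to a power of two of at
  least MIN_CHUNK_SIZE and gives back any leader/trailer space.
*/
#if 0
#include <assert.h>
#include <stddef.h>

static void example_memalign(void) {
  void* p = dlmemalign(64, 1000);       /* 64-byte aligned, >= 1000 usable bytes */
  if (p != 0) {
    assert(((size_t)p % 64) == 0);
    dlfree(p);
  }
}
#endif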
4452
4453void** dlindependent_calloc(size_t n_elements, size_t elem_size,
4454 void* chunks[]) {
4455 size_t sz = elem_size; /* serves as 1-element array */
4456 return ialloc(gm, n_elements, &sz, 3, chunks);
4457}
4458
4459void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
4460 void* chunks[]) {
4461 return ialloc(gm, n_elements, sizes, 0, chunks);
4462}
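/*
  Illustrative usage sketch (not part of dlmalloc, kept disabled): carving a
  small header plus a variable-size payload out of one aggregate chunk with
  dlindependent_comalloc, so the two pieces are adjacent in memory but can
  still be freed individually.
*/
#if 0
#include <stddef.h>

struct example_header { size_t len; };

static void* example_comalloc(size_t payload_len) {
  size_t sizes[2] = { sizeof(struct example_header), payload_len };
  void* pieces[2];
  if (dlindependent_comalloc(2, sizes, pieces) == 0)
    return 0;
  ((struct example_header*)pieces[0])->len = payload_len;
  return pieces[1];                     /* either piece may later go to dlfree */
}
#endif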
4463
4464void* dlvalloc(size_t bytes) {
4465 size_t pagesz;
4466 init_mparams();
4467 pagesz = mparams.page_size;
4468 return dlmemalign(pagesz, bytes);
4469}
4470
4471void* dlpvalloc(size_t bytes) {
4472 size_t pagesz;
4473 init_mparams();
4474 pagesz = mparams.page_size;
4475 return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
4476}
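/*
  Worked example of the rounding above: with pagesz == 4096 and bytes == 5000,
  (5000 + 4095) & ~4095 == 8192, so dlpvalloc returns a page-aligned block
  with two whole pages of usable space.
*/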
4477
4478int dlmalloc_trim(size_t pad) {
4479 int result = 0;
4480 if (!PREACTION(gm)) {
4481 result = sys_trim(gm, pad);
4482 POSTACTION(gm);
4483 }
4484 return result;
4485}
4486
4487size_t dlmalloc_footprint(void) {
4488 return gm->footprint;
4489}
4490
4491#if USE_MAX_ALLOWED_FOOTPRINT
4492size_t dlmalloc_max_allowed_footprint(void) {
4493 return gm->max_allowed_footprint;
4494}
4495
4496void dlmalloc_set_max_allowed_footprint(size_t bytes) {
4497 if (bytes > gm->footprint) {
4498 /* Increase the size in multiples of the granularity,
4499 * which is the smallest unit we request from the system.
4500 */
4501 gm->max_allowed_footprint = gm->footprint +
4502 granularity_align(bytes - gm->footprint);
4503 }
4504 else {
4505 //TODO: allow for reducing the max footprint
4506 gm->max_allowed_footprint = gm->footprint;
4507 }
4508}
4509#endif
4510
4511size_t dlmalloc_max_footprint(void) {
4512 return gm->max_footprint;
4513}
4514
4515#if !NO_MALLINFO
4516struct mallinfo dlmallinfo(void) {
4517 return internal_mallinfo(gm);
4518}
4519#endif /* !NO_MALLINFO */
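/*
  Illustrative usage sketch (not part of dlmalloc, kept disabled): reading
  the statistics filled in by internal_mallinfo through the public
  dlmallinfo entry point.
*/
#if 0
#include <stdio.h>

static void example_print_mallinfo(void) {
  struct mallinfo mi = dlmallinfo();
  printf("arena bytes:    %lu\n", (unsigned long)mi.arena);    /* traversed space */
  printf("in-use bytes:   %lu\n", (unsigned long)mi.uordblks);
  printf("free bytes:     %lu\n", (unsigned long)mi.fordblks);
  printf("top (keepcost): %lu\n", (unsigned long)mi.keepcost);
}
#endif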
4520
4521void dlmalloc_stats() {
4522 internal_malloc_stats(gm);
4523}
4524
4525size_t dlmalloc_usable_size(void* mem) {
4526 if (mem != 0) {
4527 mchunkptr p = mem2chunk(mem);
4528 if (cinuse(p))
4529 return chunksize(p) - overhead_for(p);
4530 }
4531 return 0;
4532}
4533
4534int dlmallopt(int param_number, int value) {
4535 return change_mparam(param_number, value);
4536}
4537
4538#endif /* !ONLY_MSPACES */
4539
4540/* ----------------------------- user mspaces ---------------------------- */
4541
4542#if MSPACES
4543
4544static mstate init_user_mstate(char* tbase, size_t tsize) {
4545 size_t msize = pad_request(sizeof(struct malloc_state));
4546 mchunkptr mn;
4547 mchunkptr msp = align_as_chunk(tbase);
4548 mstate m = (mstate)(chunk2mem(msp));
4549 memset(m, 0, msize);
4550 INITIAL_LOCK(&m->mutex);
4551 msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
4552 m->seg.base = m->least_addr = tbase;
4553 m->seg.size = m->footprint = m->max_footprint = tsize;
4554#if USE_MAX_ALLOWED_FOOTPRINT
4555 m->max_allowed_footprint = MAX_SIZE_T;
4556#endif
4557 m->magic = mparams.magic;
4558 m->mflags = mparams.default_mflags;
4559 disable_contiguous(m);
4560 init_bins(m);
4561 mn = next_chunk(mem2chunk(m));
4562 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
4563 check_top_chunk(m, m->top);
4564 return m;
4565}
4566
4567mspace create_mspace(size_t capacity, int locked) {
4568 mstate m = 0;
4569 size_t msize = pad_request(sizeof(struct malloc_state));
4570 init_mparams(); /* Ensure pagesize etc initialized */
4571
4572 if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4573 size_t rs = ((capacity == 0)? mparams.granularity :
4574 (capacity + TOP_FOOT_SIZE + msize));
4575 size_t tsize = granularity_align(rs);
4576 char* tbase = (char*)(CALL_MMAP(tsize));
4577 if (tbase != CMFAIL) {
4578 m = init_user_mstate(tbase, tsize);
4579 m->seg.sflags = IS_MMAPPED_BIT;
4580 set_lock(m, locked);
4581 }
4582 }
4583 return (mspace)m;
4584}
4585
4586mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
4587 mstate m = 0;
4588 size_t msize = pad_request(sizeof(struct malloc_state));
4589 init_mparams(); /* Ensure pagesize etc initialized */
4590
4591 if (capacity > msize + TOP_FOOT_SIZE &&
4592 capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4593 m = init_user_mstate((char*)base, capacity);
4594 m->seg.sflags = EXTERN_BIT;
4595 set_lock(m, locked);
4596 }
4597 return (mspace)m;
4598}
4599
4600size_t destroy_mspace(mspace msp) {
4601 size_t freed = 0;
4602 mstate ms = (mstate)msp;
4603 if (ok_magic(ms)) {
4604 msegmentptr sp = &ms->seg;
4605 while (sp != 0) {
4606 char* base = sp->base;
4607 size_t size = sp->size;
4608 flag_t flag = sp->sflags;
4609 sp = sp->next;
4610 if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
4611 CALL_MUNMAP(base, size) == 0)
4612 freed += size;
4613 }
4614 }
4615 else {
4616 USAGE_ERROR_ACTION(ms,ms);
4617 }
4618 return freed;
4619}
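/*
  Illustrative usage sketch (not part of dlmalloc, kept disabled): a private
  mspace whose entire footprint can be released at once with destroy_mspace,
  independently of the global heap.
*/
#if 0
static void example_mspace(void) {
  mspace arena = create_mspace(0, 0);   /* default capacity, no locking */
  if (arena != 0) {
    void* a = mspace_malloc(arena, 128);
    void* b = mspace_malloc(arena, 4096);
    mspace_free(arena, a);
    (void)b;                            /* b may be left allocated... */
    destroy_mspace(arena);              /* ...everything is released here */
  }
}
#endif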
4620
4621/*
4622 mspace versions of routines are near-clones of the global
4623 versions. This is not so nice but better than the alternatives.
4624*/
4625
4626
4627void* mspace_malloc(mspace msp, size_t bytes) {
4628 mstate ms = (mstate)msp;
4629 if (!ok_magic(ms)) {
4630 USAGE_ERROR_ACTION(ms,ms);
4631 return 0;
4632 }
4633 if (!PREACTION(ms)) {
4634 void* mem;
4635 size_t nb;
4636 if (bytes <= MAX_SMALL_REQUEST) {
4637 bindex_t idx;
4638 binmap_t smallbits;
4639 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4640 idx = small_index(nb);
4641 smallbits = ms->smallmap >> idx;
4642
4643 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4644 mchunkptr b, p;
4645 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4646 b = smallbin_at(ms, idx);
4647 p = b->fd;
4648 assert(chunksize(p) == small_index2size(idx));
4649 unlink_first_small_chunk(ms, b, p, idx);
4650 set_inuse_and_pinuse(ms, p, small_index2size(idx));
4651 mem = chunk2mem(p);
4652 check_malloced_chunk(ms, mem, nb);
4653 goto postaction;
4654 }
4655
4656 else if (nb > ms->dvsize) {
4657 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4658 mchunkptr b, p, r;
4659 size_t rsize;
4660 bindex_t i;
4661 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4662 binmap_t leastbit = least_bit(leftbits);
4663 compute_bit2idx(leastbit, i);
4664 b = smallbin_at(ms, i);
4665 p = b->fd;
4666 assert(chunksize(p) == small_index2size(i));
4667 unlink_first_small_chunk(ms, b, p, i);
4668 rsize = small_index2size(i) - nb;
4669        /* Fit here cannot be remainderless if 4-byte sizes */
4670 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4671 set_inuse_and_pinuse(ms, p, small_index2size(i));
4672 else {
4673 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4674 r = chunk_plus_offset(p, nb);
4675 set_size_and_pinuse_of_free_chunk(r, rsize);
4676 replace_dv(ms, r, rsize);
4677 }
4678 mem = chunk2mem(p);
4679 check_malloced_chunk(ms, mem, nb);
4680 goto postaction;
4681 }
4682
4683 else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
4684 check_malloced_chunk(ms, mem, nb);
4685 goto postaction;
4686 }
4687 }
4688 }
4689 else if (bytes >= MAX_REQUEST)
4690 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4691 else {
4692 nb = pad_request(bytes);
4693 if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
4694 check_malloced_chunk(ms, mem, nb);
4695 goto postaction;
4696 }
4697 }
4698
4699 if (nb <= ms->dvsize) {
4700 size_t rsize = ms->dvsize - nb;
4701 mchunkptr p = ms->dv;
4702 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4703 mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
4704 ms->dvsize = rsize;
4705 set_size_and_pinuse_of_free_chunk(r, rsize);
4706 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4707 }
4708 else { /* exhaust dv */
4709 size_t dvs = ms->dvsize;
4710 ms->dvsize = 0;
4711 ms->dv = 0;
4712 set_inuse_and_pinuse(ms, p, dvs);
4713 }
4714 mem = chunk2mem(p);
4715 check_malloced_chunk(ms, mem, nb);
4716 goto postaction;
4717 }
4718
4719 else if (nb < ms->topsize) { /* Split top */
4720 size_t rsize = ms->topsize -= nb;
4721 mchunkptr p = ms->top;
4722 mchunkptr r = ms->top = chunk_plus_offset(p, nb);
4723 r->head = rsize | PINUSE_BIT;
4724 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4725 mem = chunk2mem(p);
4726 check_top_chunk(ms, ms->top);
4727 check_malloced_chunk(ms, mem, nb);
4728 goto postaction;
4729 }
4730
4731 mem = sys_alloc(ms, nb);
4732
4733 postaction:
4734 POSTACTION(ms);
4735 return mem;
4736 }
4737
4738 return 0;
4739}
4740
4741void mspace_free(mspace msp, void* mem) {
4742 if (mem != 0) {
4743 mchunkptr p = mem2chunk(mem);
4744#if FOOTERS
4745 mstate fm = get_mstate_for(p);
4746#else /* FOOTERS */
4747 mstate fm = (mstate)msp;
4748#endif /* FOOTERS */
4749 if (!ok_magic(fm)) {
4750 USAGE_ERROR_ACTION(fm, p);
4751 return;
4752 }
4753 if (!PREACTION(fm)) {
4754 check_inuse_chunk(fm, p);
4755 if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4756 size_t psize = chunksize(p);
4757 mchunkptr next = chunk_plus_offset(p, psize);
4758 if (!pinuse(p)) {
4759 size_t prevsize = p->prev_foot;
4760 if ((prevsize & IS_MMAPPED_BIT) != 0) {
4761 prevsize &= ~IS_MMAPPED_BIT;
4762 psize += prevsize + MMAP_FOOT_PAD;
4763 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4764 fm->footprint -= psize;
4765 goto postaction;
4766 }
4767 else {
4768 mchunkptr prev = chunk_minus_offset(p, prevsize);
4769 psize += prevsize;
4770 p = prev;
4771 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4772 if (p != fm->dv) {
4773 unlink_chunk(fm, p, prevsize);
4774 }
4775 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4776 fm->dvsize = psize;
4777 set_free_with_pinuse(p, psize, next);
4778 goto postaction;
4779 }
4780 }
4781 else
4782 goto erroraction;
4783 }
4784 }
4785
4786 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4787 if (!cinuse(next)) { /* consolidate forward */
4788 if (next == fm->top) {
4789 size_t tsize = fm->topsize += psize;
4790 fm->top = p;
4791 p->head = tsize | PINUSE_BIT;
4792 if (p == fm->dv) {
4793 fm->dv = 0;
4794 fm->dvsize = 0;
4795 }
4796 if (should_trim(fm, tsize))
4797 sys_trim(fm, 0);
4798 goto postaction;
4799 }
4800 else if (next == fm->dv) {
4801 size_t dsize = fm->dvsize += psize;
4802 fm->dv = p;
4803 set_size_and_pinuse_of_free_chunk(p, dsize);
4804 goto postaction;
4805 }
4806 else {
4807 size_t nsize = chunksize(next);
4808 psize += nsize;
4809 unlink_chunk(fm, next, nsize);
4810 set_size_and_pinuse_of_free_chunk(p, psize);
4811 if (p == fm->dv) {
4812 fm->dvsize = psize;
4813 goto postaction;
4814 }
4815 }
4816 }
4817 else
4818 set_free_with_pinuse(p, psize, next);
4819 insert_chunk(fm, p, psize);
4820 check_free_chunk(fm, p);
4821 goto postaction;
4822 }
4823 }
4824 erroraction:
4825 USAGE_ERROR_ACTION(fm, p);
4826 postaction:
4827 POSTACTION(fm);
4828 }
4829 }
4830}
4831
4832void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
4833 void *mem;
4834 mstate ms = (mstate)msp;
4835 if (!ok_magic(ms)) {
4836 USAGE_ERROR_ACTION(ms,ms);
4837 return 0;
4838 }
4839 if (n_elements && MAX_SIZE_T / n_elements < elem_size) {
4840 /* Fail on overflow */
4841 MALLOC_FAILURE_ACTION;
4842 return NULL;
4843 }
4844 elem_size *= n_elements;
4845 mem = internal_malloc(ms, elem_size);
4846 if (mem && calloc_must_clear(mem2chunk(mem)))
4847 memset(mem, 0, elem_size);
4848 return mem;
4849}
4850
4851void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
4852 if (oldmem == 0)
4853 return mspace_malloc(msp, bytes);
4854#ifdef REALLOC_ZERO_BYTES_FREES
4855 if (bytes == 0) {
4856 mspace_free(msp, oldmem);
4857 return 0;
4858 }
4859#endif /* REALLOC_ZERO_BYTES_FREES */
4860 else {
4861#if FOOTERS
4862 mchunkptr p = mem2chunk(oldmem);
4863 mstate ms = get_mstate_for(p);
4864#else /* FOOTERS */
4865 mstate ms = (mstate)msp;
4866#endif /* FOOTERS */
4867 if (!ok_magic(ms)) {
4868 USAGE_ERROR_ACTION(ms,ms);
4869 return 0;
4870 }
4871 return internal_realloc(ms, oldmem, bytes);
4872 }
4873}
4874
4875void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
4876 mstate ms = (mstate)msp;
4877 if (!ok_magic(ms)) {
4878 USAGE_ERROR_ACTION(ms,ms);
4879 return 0;
4880 }
4881 return internal_memalign(ms, alignment, bytes);
4882}
4883
4884void** mspace_independent_calloc(mspace msp, size_t n_elements,
4885 size_t elem_size, void* chunks[]) {
4886 size_t sz = elem_size; /* serves as 1-element array */
4887 mstate ms = (mstate)msp;
4888 if (!ok_magic(ms)) {
4889 USAGE_ERROR_ACTION(ms,ms);
4890 return 0;
4891 }
4892 return ialloc(ms, n_elements, &sz, 3, chunks);
4893}
4894
4895void** mspace_independent_comalloc(mspace msp, size_t n_elements,
4896 size_t sizes[], void* chunks[]) {
4897 mstate ms = (mstate)msp;
4898 if (!ok_magic(ms)) {
4899 USAGE_ERROR_ACTION(ms,ms);
4900 return 0;
4901 }
4902 return ialloc(ms, n_elements, sizes, 0, chunks);
4903}
4904
4905int mspace_trim(mspace msp, size_t pad) {
4906 int result = 0;
4907 mstate ms = (mstate)msp;
4908 if (ok_magic(ms)) {
4909 if (!PREACTION(ms)) {
4910 result = sys_trim(ms, pad);
4911 POSTACTION(ms);
4912 }
4913 }
4914 else {
4915 USAGE_ERROR_ACTION(ms,ms);
4916 }
4917 return result;
4918}
4919
4920void mspace_malloc_stats(mspace msp) {
4921 mstate ms = (mstate)msp;
4922 if (ok_magic(ms)) {
4923 internal_malloc_stats(ms);
4924 }
4925 else {
4926 USAGE_ERROR_ACTION(ms,ms);
4927 }
4928}
4929
4930size_t mspace_footprint(mspace msp) {
4931 size_t result = 0;
4932 mstate ms = (mstate)msp;
4933 if (ok_magic(ms)) {
4934 result = ms->footprint;
4935 }
4936 else {
4937 USAGE_ERROR_ACTION(ms,ms);
4938 }
4939 return result;
4940}
4941
4942#if USE_MAX_ALLOWED_FOOTPRINT
4943size_t mspace_max_allowed_footprint(mspace msp) {
4944 size_t result = 0;
4945 mstate ms = (mstate)msp;
4946 if (ok_magic(ms)) {
4947 result = ms->max_allowed_footprint;
4948 }
4949 else {
4950 USAGE_ERROR_ACTION(ms,ms);
4951 }
4952 return result;
4953}
4954
4955void mspace_set_max_allowed_footprint(mspace msp, size_t bytes) {
4956 mstate ms = (mstate)msp;
4957 if (ok_magic(ms)) {
4958 if (bytes > ms->footprint) {
4959 /* Increase the size in multiples of the granularity,
4960 * which is the smallest unit we request from the system.
4961 */
4962 ms->max_allowed_footprint = ms->footprint +
4963 granularity_align(bytes - ms->footprint);
4964 }
4965 else {
4966 /* TODO: allow for reducing the max footprint */
4967 ms->max_allowed_footprint = ms->footprint;
4968 }
4969 }
4970 else {
4971 USAGE_ERROR_ACTION(ms,ms);
4972 }
4973}
4974#endif
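/*
  Illustrative sketch (only meaningful when built with
  USE_MAX_ALLOWED_FOOTPRINT): capping an mspace at roughly 1 MB of system
  memory.  The cap is rounded up to the allocation granularity, so the
  effective limit may be somewhat larger than requested; shrinking the cap
  below the current footprint is not supported yet (see the TODO above).

      mspace msp = create_mspace(0, 0);
      mspace_set_max_allowed_footprint(msp, 1024 * 1024);
      // requests that would have to grow the footprint past the cap are
      // expected to fail in sys_alloc rather than expand the heap
*/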
4975
4976size_t mspace_max_footprint(mspace msp) {
4977 size_t result = 0;
4978 mstate ms = (mstate)msp;
4979 if (ok_magic(ms)) {
4980 result = ms->max_footprint;
4981 }
4982 else {
4983 USAGE_ERROR_ACTION(ms,ms);
4984 }
4985 return result;
4986}
4987
4988
4989#if !NO_MALLINFO
4990struct mallinfo mspace_mallinfo(mspace msp) {
4991 mstate ms = (mstate)msp;
4992 if (!ok_magic(ms)) {
4993 USAGE_ERROR_ACTION(ms,ms);
4994 }
4995 return internal_mallinfo(ms);
4996}
4997#endif /* NO_MALLINFO */
4998
4999int mspace_mallopt(int param_number, int value) {
5000 return change_mparam(param_number, value);
5001}
5002
5003#endif /* MSPACES */
5004
5005#if MSPACES && ONLY_MSPACES
5006void mspace_walk_free_pages(mspace msp,
5007 void(*handler)(void *start, void *end, void *arg), void *harg)
5008{
5009 mstate m = (mstate)msp;
5010 if (!ok_magic(m)) {
5011 USAGE_ERROR_ACTION(m,m);
5012 return;
5013 }
5014#else
5015void dlmalloc_walk_free_pages(void(*handler)(void *start, void *end, void *arg),
5016 void *harg)
5017{
5018 mstate m = (mstate)gm;
5019#endif
5020 if (!PREACTION(m)) {
5021 if (is_initialized(m)) {
5022 msegmentptr s = &m->seg;
5023 while (s != 0) {
5024 mchunkptr p = align_as_chunk(s->base);
5025 while (segment_holds(s, p) &&
5026 p != m->top && p->head != FENCEPOST_HEAD) {
5028 size_t chunklen;
5030 chunklen = chunksize(p);
5031 if (!cinuse(p)) {
5032 void *start;
5033 if (is_small(chunklen)) {
5034 start = (void *)(p + 1);
5035 }
5036 else {
5037 start = (void *)((tchunkptr)p + 1);
5038 }
5039 handler(start, next_chunk(p), harg);
5040 }
5041 p = next_chunk(p);
5042 }
5043 if (p == m->top) {
5044 handler((void *)(p + 1), next_chunk(p), harg);
5045 }
5046 s = s->next;
5047 }
5048 }
5049 POSTACTION(m);
5050 }
5051}
5052
5053
5054#if MSPACES && ONLY_MSPACES
5055void mspace_walk_heap(mspace msp,
5056 void(*handler)(const void *chunkptr, size_t chunklen,
5057 const void *userptr, size_t userlen,
5058 void *arg),
5059 void *harg)
5060{
5061 msegmentptr s;
5062 mstate m = (mstate)msp;
5063 if (!ok_magic(m)) {
5064 USAGE_ERROR_ACTION(m,m);
5065 return;
5066 }
5067#else
5068void dlmalloc_walk_heap(void(*handler)(const void *chunkptr, size_t chunklen,
5069 const void *userptr, size_t userlen,
5070 void *arg),
5071 void *harg)
5072{
5073 msegmentptr s;
5074 mstate m = (mstate)gm;
5075#endif
5076
5077 s = &m->seg;
5078 while (s != 0) {
5079 mchunkptr p = align_as_chunk(s->base);
5080 while (segment_holds(s, p) &&
5081 p != m->top && p->head != FENCEPOST_HEAD) {
5082 void *chunkptr, *userptr;
5083 size_t chunklen, userlen;
5084 chunkptr = p;
5085 chunklen = chunksize(p);
5086 if (cinuse(p)) {
5087 userptr = chunk2mem(p);
5088 userlen = chunklen - overhead_for(p);
5089 }
5090 else {
5091 userptr = NULL;
5092 userlen = 0;
5093 }
5094 handler(chunkptr, chunklen, userptr, userlen, harg);
5095 p = next_chunk(p);
5096 }
5097 if (p == m->top) {
5098 /* The top chunk is just a big free chunk for our purposes.
5099 */
5100 handler(m->top, m->topsize, NULL, 0, harg);
5101 }
5102 s = s->next;
5103 }
5104}
5105
5106/* -------------------- Alternative MORECORE functions ------------------- */
5107
5108/*
5109 Guidelines for creating a custom version of MORECORE:
5110
5111 * For best performance, MORECORE should allocate in multiples of pagesize.
5112 * MORECORE may allocate more memory than requested. (Or even less,
5113 but this will usually result in a malloc failure.)
5114 * MORECORE must not allocate memory when given argument zero, but
5115 instead return one past the end address of memory from previous
5116 nonzero call.
5117 * For best performance, consecutive calls to MORECORE with positive
5118 arguments should return increasing addresses, indicating that
5119 space has been contiguously extended.
5120 * Even though consecutive calls to MORECORE need not return contiguous
5121 addresses, it must be OK for malloc'ed chunks to span multiple
5122 regions in those cases where they do happen to be contiguous.
5123 * MORECORE need not handle negative arguments -- it may instead
5124 just return MFAIL when given negative arguments.
5125 Negative arguments are always multiples of pagesize. MORECORE
5126 must not misinterpret negative args as large positive unsigned
5127 args. You can suppress all such calls from even occurring by defining
5128 MORECORE_CANNOT_TRIM.
5129
5130 As an example alternative MORECORE, here is a custom allocator
5131 kindly contributed for pre-OSX macOS. It uses virtually but not
5132 necessarily physically contiguous non-paged memory (locked in,
5133 present and won't get swapped out). You can use it by uncommenting
5134 this section, adding some #includes, and setting up the appropriate
5135 defines above:
5136
5137 #define MORECORE osMoreCore
5138
5139 There is also a shutdown routine that should somehow be called for
5140 cleanup upon program exit.
5141
5142 #define MAX_POOL_ENTRIES 100
5143 #define MINIMUM_MORECORE_SIZE (64 * 1024U)
5144 static int next_os_pool;
5145 void *our_os_pools[MAX_POOL_ENTRIES];
5146
5147 void *osMoreCore(int size)
5148 {
5149 void *ptr = 0;
5150 static void *sbrk_top = 0;
5151
5152 if (size > 0)
5153 {
5154 if (size < MINIMUM_MORECORE_SIZE)
5155 size = MINIMUM_MORECORE_SIZE;
5156 if (CurrentExecutionLevel() == kTaskLevel)
5157 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
5158 if (ptr == 0)
5159 {
5160 return (void *) MFAIL;
5161 }
5162 // save ptrs so they can be freed during cleanup
5163 our_os_pools[next_os_pool] = ptr;
5164 next_os_pool++;
5165 ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
5166 sbrk_top = (char *) ptr + size;
5167 return ptr;
5168 }
5169 else if (size < 0)
5170 {
5171 // we don't currently support shrink behavior
5172 return (void *) MFAIL;
5173 }
5174 else
5175 {
5176 return sbrk_top;
5177 }
5178 }
5179
5180 // cleanup any allocated memory pools
5181 // called as last thing before shutting down driver
5182
5183 void osCleanupMem(void)
5184 {
5185 void **ptr;
5186
5187 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
5188 if (*ptr)
5189 {
5190 PoolDeallocate(*ptr);
5191 *ptr = 0;
5192 }
5193 }
5194
5195*/
5196
5197
5198/* -----------------------------------------------------------------------
5199History:
5200 V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
5201 * Add max_footprint functions
5202 * Ensure all appropriate literals are size_t
5203 * Fix conditional compilation problem for some #define settings
5204 * Avoid concatenating segments with the one provided
5205 in create_mspace_with_base
5206 * Rename some variables to avoid compiler shadowing warnings
5207 * Use explicit lock initialization.
5208 * Better handling of sbrk interference.
5209 * Simplify and fix segment insertion, trimming and mspace_destroy
5210 * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
5211 * Thanks especially to Dennis Flanagan for help on these.
5212
5213 V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
5214 * Fix memalign brace error.
5215
5216 V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
5217 * Fix improper #endif nesting in C++
5218 * Add explicit casts needed for C++
5219
5220 V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
5221 * Use trees for large bins
5222 * Support mspaces
5223 * Use segments to unify sbrk-based and mmap-based system allocation,
5224 removing need for emulation on most platforms without sbrk.
5225 * Default safety checks
5226 * Optional footer checks. Thanks to William Robertson for the idea.
5227 * Internal code refactoring
5228 * Incorporate suggestions and platform-specific changes.
5229 Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
5230 Aaron Bachmann, Emery Berger, and others.
5231 * Speed up non-fastbin processing enough to remove fastbins.
5232 * Remove useless cfree() to avoid conflicts with other apps.
5233 * Remove internal memcpy, memset. Compilers handle builtins better.
5234 * Remove some options that no one ever used and rename others.
5235
5236 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
5237 * Fix malloc_state bitmap array misdeclaration
5238
5239 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
5240 * Allow tuning of FIRST_SORTED_BIN_SIZE
5241 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
5242 * Better detection and support for non-contiguousness of MORECORE.
5243 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
5244 * Bypass most of malloc if no frees. Thanks To Emery Berger.
5245 * Fix freeing of old top non-contiguous chunk in sysmalloc.
5246 * Raised default trim and map thresholds to 256K.
5247 * Fix mmap-related #defines. Thanks to Lubos Lunak.
5248 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
5249 * Branch-free bin calculation
5250 * Default trim and mmap thresholds now 256K.
5251
5252 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
5253 * Introduce independent_comalloc and independent_calloc.
5254 Thanks to Michael Pachos for motivation and help.
5255 * Make optional .h file available
5256 * Allow > 2GB requests on 32bit systems.
5257 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
5258 Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
5259 and Anonymous.
5260 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
5261 helping test this.)
5262 * memalign: check alignment arg
5263 * realloc: don't try to shift chunks backwards, since this
5264 leads to more fragmentation in some programs and doesn't
5265 seem to help in any others.
5266 * Collect all cases in malloc requiring system memory into sysmalloc
5267 * Use mmap as backup to sbrk
5268 * Place all internal state in malloc_state
5269 * Introduce fastbins (although similar to 2.5.1)
5270 * Many minor tunings and cosmetic improvements
5271 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
5272 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
5273 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
5274 * Include errno.h to support default failure action.
5275
5276 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
5277 * return null for negative arguments
5278 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
5279 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
5280 (e.g. WIN32 platforms)
5281 * Cleanup header file inclusion for WIN32 platforms
5282 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
5283 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
5284 memory allocation routines
5285 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
5286 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
5287 usage of 'assert' in non-WIN32 code
5288 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
5289 avoid infinite loop
5290 * Always call 'fREe()' rather than 'free()'
5291
5292 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
5293 * Fixed ordering problem with boundary-stamping
5294
5295 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
5296 * Added pvalloc, as recommended by H.J. Liu
5297 * Added 64bit pointer support mainly from Wolfram Gloger
5298 * Added anonymously donated WIN32 sbrk emulation
5299 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
5300 * malloc_extend_top: fix mask error that caused wastage after
5301 foreign sbrks
5302 * Add linux mremap support code from HJ Liu
5303
5304 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
5305 * Integrated most documentation with the code.
5306 * Add support for mmap, with help from
5307 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5308 * Use last_remainder in more cases.
5309 * Pack bins using idea from colin@nyx10.cs.du.edu
5310 * Use ordered bins instead of best-fit threshold
5311 * Eliminate block-local decls to simplify tracing and debugging.
5312 * Support another case of realloc via move into top
5313 * Fix error occurring when initial sbrk_base not word-aligned.
5314 * Rely on page size for units instead of SBRK_UNIT to
5315 avoid surprises about sbrk alignment conventions.
5316 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
5317 (raymond@es.ele.tue.nl) for the suggestion.
5318 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
5319 * More precautions for cases where other routines call sbrk,
5320 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5321 * Added macros etc., allowing use in linux libc from
5322 H.J. Lu (hjl@gnu.ai.mit.edu)
5323 * Inverted this history list
5324
5325 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
5326 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
5327 * Removed all preallocation code since under current scheme
5328 the work required to undo bad preallocations exceeds
5329 the work saved in good cases for most test programs.
5330 * No longer use return list or unconsolidated bins since
5331 no scheme using them consistently outperforms those that don't
5332 given above changes.
5333 * Use best fit for very large chunks to prevent some worst-cases.
5334 * Added some support for debugging
5335
5336 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
5337 * Removed footers when chunks are in use. Thanks to
5338 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
5339
5340 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
5341 * Added malloc_trim, with help from Wolfram Gloger
5342 (wmglo@Dent.MED.Uni-Muenchen.DE).
5343
5344 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
5345
5346 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
5347 * realloc: try to expand in both directions
5348 * malloc: swap order of clean-bin strategy;
5349 * realloc: only conditionally expand backwards
5350 * Try not to scavenge used bins
5351 * Use bin counts as a guide to preallocation
5352 * Occasionally bin return list chunks in first scan
5353 * Add a few optimizations from colin@nyx10.cs.du.edu
5354
5355 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
5356 * faster bin computation & slightly different binning
5357 * merged all consolidations to one part of malloc proper
5358 (eliminating old malloc_find_space & malloc_clean_bin)
5359 * Scan 2 returns chunks (not just 1)
5360 * Propagate failure in realloc if malloc returns 0
5361 * Add stuff to allow compilation on non-ANSI compilers
5362 from kpv@research.att.com
5363
5364 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
5365 * removed potential for odd address access in prev_chunk
5366 * removed dependency on getpagesize.h
5367 * misc cosmetics and a bit more internal documentation
5368 * anticosmetics: mangled names in macros to evade debugger strangeness
5369 * tested on sparc, hp-700, dec-mips, rs6000
5370 with gcc & native cc (hp, dec only) allowing
5371 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
5372
5373 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
5374 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
5375 structure of old version, but most details differ.)
5376
5377*/