plt-scheme 4.2.1
dlmalloc.c
00001 /*
00002   This is a version (aka dlmalloc) of malloc/free/realloc written by
00003   Doug Lea and released to the public domain, as explained at
00004   http://creativecommons.org/licenses/publicdomain.  Send questions,
00005   comments, complaints, performance data, etc to dl@cs.oswego.edu
00006 
00007 * Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)
00008 
00009    Note: There may be an updated version of this malloc obtainable at
00010            ftp://gee.cs.oswego.edu/pub/misc/malloc.c
00011          Check before installing!
00012 
00013 * Quickstart
00014 
00015   This library is all in one file to simplify the most common usage:
00016   ftp it, compile it (-O3), and link it into another program. All of
00017   the compile-time options default to reasonable values for use on
00018   most platforms.  You might later want to step through various
00019   compile-time and dynamic tuning options.
00020 
00021   For convenience, an include file for code using this malloc is at:
00022      ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
00023   You don't really need this .h file unless you call functions not
00024   defined in your system include files.  The .h file contains only the
00025   excerpts from this file needed for using this malloc on ANSI C/C++
00026   systems, so long as you haven't changed compile-time options about
00027   naming and tuning parameters.  If you do, then you can create your
00028   own malloc.h that does include all settings by cutting at the point
00029   indicated below. Note that you may already by default be using a C
00030   library containing a malloc that is based on some version of this
00031   malloc (for example in linux). You might still want to use the one
00032   in this file to customize settings or to avoid overheads associated
00033   with library versions.
00034 
00035 * Vital statistics:
00036 
00037   Supported pointer/size_t representation:       4 or 8 bytes
00038        size_t MUST be an unsigned type of the same width as
00039        pointers. (If you are using an ancient system that declares
00040        size_t as a signed type, or need it to be a different width
00041        than pointers, you can use a previous release of this malloc
00042        (e.g. 2.7.2) supporting these.)
00043 
00044   Alignment:                                     8 bytes (default)
00045        This suffices for nearly all current machines and C compilers.
00046        However, you can define MALLOC_ALIGNMENT to be wider than this
00047        if necessary (up to 128 bytes), at the expense of using more space.
00048 
00049   Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
00050                                           8 or 16 bytes (if 8byte sizes)
00051        Each malloced chunk has a hidden word of overhead holding size
00052        and status information, and additional cross-check word
00053        if FOOTERS is defined.
00054 
00055   Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
00056                           8-byte ptrs:  32 bytes    (including overhead)
00057 
00058        Even a request for zero bytes (i.e., malloc(0)) returns a
00059        pointer to something of the minimum allocatable size.
00060        The maximum overhead wastage (i.e., the number of extra bytes
00061        allocated beyond what was requested in malloc) is less than or equal
00062        to the minimum size, except for requests >= mmap_threshold that
00063        are serviced via mmap(), where the worst case wastage is about
00064        32 bytes plus the remainder from a system page (the minimal
00065        mmap unit); typically 4096 or 8192 bytes.
00066 
00067   Security: static-safe; optionally more or less
00068        The "security" of malloc refers to the ability of malicious
00069        code to accentuate the effects of errors (for example, freeing
00070        space that is not currently malloc'ed or overwriting past the
00071        ends of chunks) in code that calls malloc.  This malloc
00072        guarantees not to modify any memory locations below the base of
00073        heap, i.e., static variables, even in the presence of usage
00074        errors.  The routines additionally detect most improper frees
00075        and reallocs.  All this holds as long as the static bookkeeping
00076        for malloc itself is not corrupted by some other means.  This
00077        is only one aspect of security -- these checks do not, and
00078        cannot, detect all possible programming errors.
00079 
00080        If FOOTERS is defined nonzero, then each allocated chunk
00081        carries an additional check word to verify that it was malloced
00082        from its space.  These check words are the same within each
00083        execution of a program using malloc, but differ across
00084        executions, so externally crafted fake chunks cannot be
00085        freed. This improves security by rejecting frees/reallocs that
00086        could corrupt heap memory, in addition to the checks preventing
00087        writes to statics that are always on.  This may further improve
00088        security at the expense of time and space overhead.  (Note that
00089        FOOTERS may also be worth using with MSPACES.)
00090 
00091        By default detected errors cause the program to abort (calling
00092        "abort()"). You can override this to instead proceed past
00093        errors by defining PROCEED_ON_ERROR.  In this case, a bad free
00094        has no effect, and a malloc that encounters a bad address
00095        caused by user overwrites will ignore the bad address by
00096        dropping pointers and indices to all known memory. This may
00097        be appropriate for programs that should continue if at all
00098        possible in the face of programming errors, although they may
00099        run out of memory because dropped memory is never reclaimed.
00100 
00101        If you don't like either of these options, you can define
00102        CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
00103   else. And if you are sure that your program using malloc has
00104        no errors or vulnerabilities, you can define INSECURE to 1,
00105        which might (or might not) provide a small performance improvement.
00106 
00107   Thread-safety: NOT thread-safe unless USE_LOCKS defined
00108        When USE_LOCKS is defined, each public call to malloc, free,
00109        etc is surrounded with either a pthread mutex or a win32
00110        spinlock (depending on WIN32). This is not especially fast, and
00111        can be a major bottleneck.  It is designed only to provide
00112        minimal protection in concurrent environments, and to provide a
00113        basis for extensions.  If you are using malloc in a concurrent
00114        program, consider instead using ptmalloc, which is derived from
00115        a version of this malloc. (See http://www.malloc.de).
00116 
00117   System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
00118        This malloc can use unix sbrk or any emulation (invoked using
00119        the CALL_MORECORE macro) and/or mmap/munmap or any emulation
00120        (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
00121        memory.  On most unix systems, it tends to work best if both
00122        MORECORE and MMAP are enabled.  On Win32, it uses emulations
00123        based on VirtualAlloc. It also uses common C library functions
00124        like memset.
00125 
00126   Compliance: I believe it is compliant with the Single Unix Specification
00127        (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
00128        others as well.
00129 
00130 * Overview of algorithms
00131 
00132   This is not the fastest, most space-conserving, most portable, or
00133   most tunable malloc ever written. However it is among the fastest
00134   while also being among the most space-conserving, portable and
00135   tunable.  Consistent balance across these factors results in a good
00136   general-purpose allocator for malloc-intensive programs.
00137 
00138   In most ways, this malloc is a best-fit allocator. Generally, it
00139   chooses the best-fitting existing chunk for a request, with ties
00140   broken in approximately least-recently-used order. (This strategy
00141   normally maintains low fragmentation.) However, for requests less
00142   than 256 bytes, it deviates from best-fit when there is not an
00143   exactly fitting available chunk by preferring to use space adjacent
00144   to that used for the previous small request, as well as by breaking
00145   ties in approximately most-recently-used order. (These enhance
00146   locality of series of small allocations.)  And for very large requests
00147   (>= 256Kb by default), it relies on system memory mapping
00148   facilities, if supported.  (This helps avoid carrying around and
00149   possibly fragmenting memory used only for large chunks.)
00150 
00151   All operations (except malloc_stats and mallinfo) have execution
00152   times that are bounded by a constant factor of the number of bits in
00153   a size_t, not counting any clearing in calloc or copying in realloc,
00154   or actions surrounding MORECORE and MMAP that have times
00155   proportional to the number of non-contiguous regions returned by
00156   system allocation routines, which is often just 1.
00157 
00158   The implementation is not very modular and seriously overuses
00159   macros. Perhaps someday all C compilers will do as good a job
00160   inlining modular code as can now be done by brute-force expansion,
00161   but for now, enough of them seem not to.
00162 
00163   Some compilers issue a lot of warnings about code that is
00164   dead/unreachable only on some platforms, and also about intentional
00165   uses of negation on unsigned types. All known cases of each can be
00166   ignored.
00167 
00168   For a longer but out of date high-level description, see
00169      http://gee.cs.oswego.edu/dl/html/malloc.html
00170 
00171 * MSPACES
00172   If MSPACES is defined, then in addition to malloc, free, etc.,
00173   this file also defines mspace_malloc, mspace_free, etc. These
00174   are versions of malloc routines that take an "mspace" argument
00175   obtained using create_mspace, to control all internal bookkeeping.
00176   If ONLY_MSPACES is defined, only these versions are compiled.
00177   So if you would like to use this allocator for only some allocations,
00178   and your system malloc for others, you can compile with
00179   ONLY_MSPACES and then do something like...
00180     static mspace mymspace = create_mspace(0,0); // for example
00181     #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
00182 
00183   (Note: If you only need one instance of an mspace, you can instead
00184   use "USE_DL_PREFIX" to relabel the global malloc.)
00185 
00186   You can similarly create thread-local allocators by storing
00187   mspaces as thread-locals. For example:
00188     static __thread mspace tlms = 0;
00189     void*  tlmalloc(size_t bytes) {
00190       if (tlms == 0) tlms = create_mspace(0, 0);
00191       return mspace_malloc(tlms, bytes);
00192     }
00193     void  tlfree(void* mem) { mspace_free(tlms, mem); }
00194 
00195   Unless FOOTERS is defined, each mspace is completely independent.
00196   You cannot allocate from one and free to another (although
00197   conformance is only weakly checked, so usage errors are not always
00198   caught). If FOOTERS is defined, then each chunk carries around a tag
00199   indicating its originating mspace, and frees are directed to their
00200   originating spaces.
00201 
00202  -------------------------  Compile-time options ---------------------------
00203 
00204 Be careful in setting #define values for numerical constants of type
00205 size_t. On some systems, literal values are not automatically extended
00206 to size_t precision unless they are explicitly cast.
00207 
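For example (a sketch; the wrapper file name my_dlmalloc.c is only
illustrative), these options can be set either with -D compiler flags or in
a small wrapper file that defines them before including this file:

    // my_dlmalloc.c
    #define USE_LOCKS 1
    #define FOOTERS   1
    #define DEFAULT_TRIM_THRESHOLD ((size_t)1024U * (size_t)1024U)
    #include "dlmalloc.c"
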
00208 WIN32                    default: defined if _WIN32 defined
00209   Defining WIN32 sets up defaults for MS environment and compilers.
00210   Otherwise defaults are for unix.
00211 
00212 MALLOC_ALIGNMENT         default: (size_t)8
00213   Controls the minimum alignment for malloc'ed chunks.  It must be a
00214   power of two and at least 8, even on machines for which smaller
00215   alignments would suffice. It may be defined as larger than this
00216   though. Note however that code and data structures are optimized for
00217   the case of 8-byte alignment.
00218 
00219 MSPACES                  default: 0 (false)
00220   If true, compile in support for independent allocation spaces.
00221   This is only supported if HAVE_MMAP is true.
00222 
00223 ONLY_MSPACES             default: 0 (false)
00224   If true, only compile in mspace versions, not regular versions.
00225 
00226 USE_LOCKS                default: 0 (false)
00227   Causes each call to each public routine to be surrounded with
00228   pthread or WIN32 mutex lock/unlock. (If set true, this can be
00229   overridden on a per-mspace basis for mspace versions.)
00230 
00231 FOOTERS                  default: 0
00232   If true, provide extra checking and dispatching by placing
00233   information in the footers of allocated chunks. This adds
00234   space and time overhead.
00235 
00236 INSECURE                 default: 0
00237   If true, omit checks for usage errors and heap space overwrites.
00238 
00239 USE_DL_PREFIX            default: NOT defined
00240   Causes compiler to prefix all public routines with the string 'dl'.
00241   This can be useful when you only want to use this malloc in one part
00242   of a program, using your regular system malloc elsewhere.
00243 
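  For example (a sketch, assuming this file is compiled with USE_DL_PREFIX
  defined and the dl* names are declared in one of your own headers):

    void* p = dlmalloc(100);     // served by this allocator
    void* q = malloc(100);       // served by the system malloc
    dlfree(p);
    free(q);
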
00244 ABORT                    default: defined as abort()
00245   Defines how to abort on failed checks.  On most systems, a failed
00246   check cannot die with an "assert" or even print an informative
00247   message, because the underlying print routines in turn call malloc,
00248   which will fail again.  Generally, the best policy is to simply call
00249   abort(). It's not very useful to do more than this because many
00250   errors due to overwriting will show up as address faults (null, odd
00251   addresses etc) rather than malloc-triggered checks, so will also
00252   abort.  Also, most compilers know that abort() does not return, so
00253   can better optimize code conditionally calling it.
00254 
00255 PROCEED_ON_ERROR           default: defined as 0 (false)
00256   Controls whether detected bad addresses cause them to be bypassed
00257   rather than aborting. If set, detected bad arguments to free and
00258   realloc are ignored. And all bookkeeping information is zeroed out
00259   upon a detected overwrite of freed heap space, thus losing the
00260   ability to ever return it from malloc again, but enabling the
00261   application to proceed. If PROCEED_ON_ERROR is defined, the
00262   static variable malloc_corruption_error_count is compiled in
00263   and can be examined to see if errors have occurred. This option
00264   generates slower code than the default abort policy.
00265 
00266 DEBUG                    default: NOT defined
00267   The DEBUG setting is mainly intended for people trying to modify
00268   this code or diagnose problems when porting to new platforms.
00269   However, it may also be able to better isolate user errors than just
00270   using runtime checks.  The assertions in the check routines spell
00271   out in more detail the assumptions and invariants underlying the
00272   algorithms.  The checking is fairly extensive, and will slow down
00273   execution noticeably. Calling malloc_stats or mallinfo with DEBUG
00274   set will attempt to check every non-mmapped allocated and free chunk
00275   in the course of computing the summaries.
00276 
00277 ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
00278   Debugging assertion failures can be nearly impossible if your
00279   version of the assert macro causes malloc to be called, which will
00280   lead to a cascade of further failures, blowing the runtime stack.
00281   ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
00282   which will usually make debugging easier.
00283 
00284 MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
00285   The action to take before "return 0" when malloc fails because
00286   no memory is available.
00287 
00288 HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
00289   True if this system supports sbrk or an emulation of it.
00290 
00291 MORECORE                  default: sbrk
00292   The name of the sbrk-style system routine to call to obtain more
00293   memory.  See below for guidance on writing custom MORECORE
00294   functions. The type of the argument to sbrk/MORECORE varies across
00295   systems.  It cannot be size_t, because it supports negative
00296   arguments, so it is normally the signed type of the same width as
00297   size_t (sometimes declared as "intptr_t").  It doesn't much matter
00298   though. Internally, we only call it with arguments less than half
00299   the max value of a size_t, which should work across all reasonable
00300   possibilities, although sometimes generating compiler warnings.  See
00301   near the end of this file for guidelines for creating a custom
00302   version of MORECORE.
00303 
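  As a sketch only (the names are illustrative; fuller guidance appears near
  the end of this file), a hand-crafted MORECORE serving a fixed arena might
  look like:

    #define MY_ARENA_SIZE ((size_t)1024U * (size_t)1024U)
    static char my_arena[MY_ARENA_SIZE];
    static size_t my_top = 0;
    void* my_morecore(ptrdiff_t incr) {
      if (incr < 0 || my_top + (size_t)incr > MY_ARENA_SIZE)
        return (void*)(~(size_t)0);    // MFAIL: cannot trim, or arena exhausted
      char* prev = my_arena + my_top;  // incr == 0 just reports the break
      my_top += (size_t)incr;
      return prev;
    }

  used together with "#define MORECORE my_morecore" and
  MORECORE_CANNOT_TRIM defined.
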
00304 MORECORE_CONTIGUOUS       default: 1 (true)
00305   If true, take advantage of fact that consecutive calls to MORECORE
00306   with positive arguments always return contiguous increasing
00307   addresses.  This is true of unix sbrk. It does not hurt too much to
00308   set it true anyway, since malloc copes with non-contiguities.
00309   Setting it false when MORECORE is definitely non-contiguous, however,
00310   saves the time and space it would otherwise take to discover this.
00311 
00312 MORECORE_CANNOT_TRIM      default: NOT defined
00313   True if MORECORE cannot release space back to the system when given
00314   negative arguments. This is generally necessary only if you are
00315   using a hand-crafted MORECORE function that cannot handle negative
00316   arguments.
00317 
00318 HAVE_MMAP                 default: 1 (true)
00319   True if this system supports mmap or an emulation of it.  If so, and
00320   HAVE_MORECORE is not true, MMAP is used for all system
00321   allocation. If set and HAVE_MORECORE is true as well, MMAP is
00322   primarily used to directly allocate very large blocks. It is also
00323   used as a backup strategy in cases where MORECORE fails to provide
00324   space from system. Note: A single call to MUNMAP is assumed to be
00325   able to unmap memory that may have been allocated using multiple calls
00326   to MMAP, so long as they are adjacent.
00327 
00328 HAVE_MREMAP               default: 1 on linux, else 0
00329   If true, realloc() uses mremap() to re-allocate large blocks and
00330   extend or shrink allocation spaces.
00331 
00332 MMAP_CLEARS               default: 1 on unix
00333   True if mmap clears memory so calloc doesn't need to. This is true
00334   for standard unix mmap using /dev/zero.
00335 
00336 USE_BUILTIN_FFS            default: 0 (i.e., not used)
00337   Causes malloc to use the builtin ffs() function to compute indices.
00338   Some compilers may recognize and intrinsify ffs to be faster than the
00339   supplied C version. Also, the case of x86 using gcc is special-cased
00340   to an asm instruction, so is already as fast as it can be, and so
00341   this setting has no effect. (On most x86s, the asm version is only
00342   slightly faster than the C version.)
00343 
00344 malloc_getpagesize         default: derive from system includes, or 4096.
00345   The system page size. To the extent possible, this malloc manages
00346   memory from the system in page-size units.  This may be (and
00347   usually is) a function rather than a constant. This is ignored
00348   if WIN32, where page size is determined using GetSystemInfo during
00349   initialization.
00350 
00351 USE_DEV_RANDOM             default: 0 (i.e., not used)
00352   Causes malloc to use /dev/random to initialize secure magic seed for
00353   stamping footers. Otherwise, the current time is used.
00354 
00355 NO_MALLINFO                default: 0
00356   If defined, don't compile "mallinfo". This can be a simple way
00357   of dealing with mismatches between system declarations and
00358   those in this file.
00359 
00360 MALLINFO_FIELD_TYPE        default: size_t
00361   The type of the fields in the mallinfo struct. This was originally
00362   defined as "int" in SVID etc, but is more usefully defined as
00363   size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
00364 
00365 REALLOC_ZERO_BYTES_FREES    default: not defined
00366   This should be set if a call to realloc with zero bytes should 
00367   be the same as a call to free. Some people think it should. Otherwise, 
00368   since this malloc returns a unique pointer for malloc(0), so does 
00369   realloc(p, 0).
00370 
00371 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
00372 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
00373 LACKS_STDLIB_H                default: NOT defined unless on WIN32
00374   Define these if your system does not have these header files.
00375   You might need to manually insert some of the declarations they provide.
00376 
00377 DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
00378                                 system_info.dwAllocationGranularity in WIN32,
00379                                 otherwise 64K.
00380       Also settable using mallopt(M_GRANULARITY, x)
00381   The unit for allocating and deallocating memory from the system.  On
00382   most systems with contiguous MORECORE, there is no reason to
00383   make this more than a page. However, systems with MMAP tend to
00384   either require or encourage larger granularities.  You can increase
00385   this value to prevent system allocation functions from being called so
00386   often, especially if they are slow.  The value must be at least one
00387   page and must be a power of two.  Setting to 0 causes initialization
00388   to either page size or win32 region size.  (Note: In previous
00389   versions of malloc, the equivalent of this option was called
00390   "TOP_PAD")
00391 
00392 DEFAULT_TRIM_THRESHOLD    default: 2MB
00393       Also settable using mallopt(M_TRIM_THRESHOLD, x)
00394   The maximum amount of unused top-most memory to keep before
00395   releasing via malloc_trim in free().  Automatic trimming is mainly
00396   useful in long-lived programs using contiguous MORECORE.  Because
00397   trimming via sbrk can be slow on some systems, and can sometimes be
00398   wasteful (in cases where programs immediately afterward allocate
00399   more large chunks) the value should be high enough so that your
00400   overall system performance would improve by releasing this much
00401   memory.  As a rough guide, you might set it to a value close to the
00402   average size of a process (program) running on your system.
00403   Releasing this much memory would allow such a process to run in
00404   memory.  Generally, it is worth tuning trim thresholds when a
00405   program undergoes phases where several large chunks are allocated
00406   and released in ways that can reuse each other's storage, perhaps
00407   mixed with phases where there are no such chunks at all. The trim
00408   value must be greater than page size to have any useful effect.  To
00409   disable trimming completely, you can set it to MAX_SIZE_T. Note that the trick
00410   some people use of mallocing a huge space and then freeing it at
00411   program startup, in an attempt to reserve system memory, doesn't
00412   have the intended effect under automatic trimming, since that memory
00413   will immediately be returned to the system.
00414 
00415 DEFAULT_MMAP_THRESHOLD       default: 256K
00416       Also settable using mallopt(M_MMAP_THRESHOLD, x)
00417   The request size threshold for using MMAP to directly service a
00418   request. Requests of at least this size that cannot be allocated
00419   using already-existing space will be serviced via mmap.  (If enough
00420   normal freed space already exists it is used instead.)  Using mmap
00421   segregates relatively large chunks of memory so that they can be
00422   individually obtained and released from the host system. A request
00423   serviced through mmap is never reused by any other request (at least
00424   not directly; the system may just so happen to remap successive
00425   requests to the same locations).  Segregating space in this way has
00426   the benefits that: Mmapped space can always be individually released
00427   back to the system, which helps keep the system level memory demands
00428   of a long-lived program low.  Also, mapped memory doesn't become
00429   `locked' between other chunks, as can happen with normally allocated
00430   chunks, which means that even trimming via malloc_trim would not
00431   release them.  However, it has the disadvantage that the space
00432   cannot be reclaimed, consolidated, and then used to service later
00433   requests, as happens with normal chunks.  The advantages of mmap
00434   nearly always outweigh disadvantages for "large" chunks, but the
00435   value of "large" may vary across systems.  The default is an
00436   empirically derived value that works well in most systems. You can
00437   disable mmap by setting it to MAX_SIZE_T.
00438 
00439 */
00440 
00441 #ifndef WIN32
00442 #ifdef _WIN32
00443 #define WIN32 1
00444 #endif  /* _WIN32 */
00445 #endif  /* WIN32 */
00446 #ifdef WIN32
00447 #define WIN32_LEAN_AND_MEAN
00448 #include <windows.h>
00449 #define HAVE_MMAP 1
00450 #define HAVE_MORECORE 0
00451 #define LACKS_UNISTD_H
00452 #define LACKS_SYS_PARAM_H
00453 #define LACKS_SYS_MMAN_H
00454 #define LACKS_STRING_H
00455 #define LACKS_STRINGS_H
00456 #define LACKS_SYS_TYPES_H
00457 #define LACKS_ERRNO_H
00458 #define MALLOC_FAILURE_ACTION
00459 #define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
00460 #endif  /* WIN32 */
00461 
00462 #if defined(DARWIN) || defined(_DARWIN)
00463 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
00464 #ifndef HAVE_MORECORE
00465 #define HAVE_MORECORE 0
00466 #define HAVE_MMAP 1
00467 #endif  /* HAVE_MORECORE */
00468 #endif  /* DARWIN */
00469 
00470 #ifndef LACKS_SYS_TYPES_H
00471 #include <sys/types.h>  /* For size_t */
00472 #endif  /* LACKS_SYS_TYPES_H */
00473 
00474 /* The maximum possible size_t value has all bits set */
00475 #define MAX_SIZE_T           (~(size_t)0)
00476 
00477 #ifndef ONLY_MSPACES
00478 #define ONLY_MSPACES 0
00479 #endif  /* ONLY_MSPACES */
00480 #ifndef MSPACES
00481 #if ONLY_MSPACES
00482 #define MSPACES 1
00483 #else   /* ONLY_MSPACES */
00484 #define MSPACES 0
00485 #endif  /* ONLY_MSPACES */
00486 #endif  /* MSPACES */
00487 #ifndef MALLOC_ALIGNMENT
00488 #define MALLOC_ALIGNMENT ((size_t)8U)
00489 #endif  /* MALLOC_ALIGNMENT */
00490 #ifndef FOOTERS
00491 #define FOOTERS 0
00492 #endif  /* FOOTERS */
00493 #ifndef ABORT
00494 #define ABORT  abort()
00495 #endif  /* ABORT */
00496 #ifndef ABORT_ON_ASSERT_FAILURE
00497 #define ABORT_ON_ASSERT_FAILURE 1
00498 #endif  /* ABORT_ON_ASSERT_FAILURE */
00499 #ifndef PROCEED_ON_ERROR
00500 #define PROCEED_ON_ERROR 0
00501 #endif  /* PROCEED_ON_ERROR */
00502 #ifndef USE_LOCKS
00503 #define USE_LOCKS 0
00504 #endif  /* USE_LOCKS */
00505 #ifndef INSECURE
00506 #define INSECURE 0
00507 #endif  /* INSECURE */
00508 #ifndef HAVE_MMAP
00509 #define HAVE_MMAP 1
00510 #endif  /* HAVE_MMAP */
00511 #ifndef MMAP_CLEARS
00512 #define MMAP_CLEARS 1
00513 #endif  /* MMAP_CLEARS */
00514 #ifndef HAVE_MREMAP
00515 #ifdef linux
00516 #define HAVE_MREMAP 1
00517 #else   /* linux */
00518 #define HAVE_MREMAP 0
00519 #endif  /* linux */
00520 #endif  /* HAVE_MREMAP */
00521 #ifndef MALLOC_FAILURE_ACTION
00522 #define MALLOC_FAILURE_ACTION  errno = ENOMEM;
00523 #endif  /* MALLOC_FAILURE_ACTION */
00524 #ifndef HAVE_MORECORE
00525 #if ONLY_MSPACES
00526 #define HAVE_MORECORE 0
00527 #else   /* ONLY_MSPACES */
00528 #define HAVE_MORECORE 1
00529 #endif  /* ONLY_MSPACES */
00530 #endif  /* HAVE_MORECORE */
00531 #if !HAVE_MORECORE
00532 #define MORECORE_CONTIGUOUS 0
00533 #else   /* !HAVE_MORECORE */
00534 #ifndef MORECORE
00535 #define MORECORE sbrk
00536 #endif  /* MORECORE */
00537 #ifndef MORECORE_CONTIGUOUS
00538 #define MORECORE_CONTIGUOUS 1
00539 #endif  /* MORECORE_CONTIGUOUS */
00540 #endif  /* HAVE_MORECORE */
00541 #ifndef DEFAULT_GRANULARITY
00542 #if MORECORE_CONTIGUOUS
00543 #define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
00544 #else   /* MORECORE_CONTIGUOUS */
00545 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
00546 #endif  /* MORECORE_CONTIGUOUS */
00547 #endif  /* DEFAULT_GRANULARITY */
00548 #ifndef DEFAULT_TRIM_THRESHOLD
00549 #ifndef MORECORE_CANNOT_TRIM
00550 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
00551 #else   /* MORECORE_CANNOT_TRIM */
00552 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
00553 #endif  /* MORECORE_CANNOT_TRIM */
00554 #endif  /* DEFAULT_TRIM_THRESHOLD */
00555 #ifndef DEFAULT_MMAP_THRESHOLD
00556 #if HAVE_MMAP
00557 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
00558 #else   /* HAVE_MMAP */
00559 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
00560 #endif  /* HAVE_MMAP */
00561 #endif  /* DEFAULT_MMAP_THRESHOLD */
00562 #ifndef USE_BUILTIN_FFS
00563 #define USE_BUILTIN_FFS 0
00564 #endif  /* USE_BUILTIN_FFS */
00565 #ifndef USE_DEV_RANDOM
00566 #define USE_DEV_RANDOM 0
00567 #endif  /* USE_DEV_RANDOM */
00568 #ifndef NO_MALLINFO
00569 #define NO_MALLINFO 0
00570 #endif  /* NO_MALLINFO */
00571 #ifndef MALLINFO_FIELD_TYPE
00572 #define MALLINFO_FIELD_TYPE size_t
00573 #endif  /* MALLINFO_FIELD_TYPE */
00574 
00575 /*
00576   mallopt tuning options.  SVID/XPG defines four standard parameter
00577   numbers for mallopt, normally defined in malloc.h.  None of these
00578   are used in this malloc, so setting them has no effect. But this
00579   malloc does support the following options.
00580 */
00581 
00582 #define M_TRIM_THRESHOLD     (-1)
00583 #define M_GRANULARITY        (-2)
00584 #define M_MMAP_THRESHOLD     (-3)
00585 
00586 /* ------------------------ Mallinfo declarations ------------------------ */
00587 
00588 #if !NO_MALLINFO
00589 /*
00590   This version of malloc supports the standard SVID/XPG mallinfo
00591   routine that returns a struct containing usage properties and
00592   statistics. It should work on any system that has a
00593   /usr/include/malloc.h defining struct mallinfo.  The main
00594   declaration needed is the mallinfo struct that is returned (by-copy)
00595   by mallinfo().  The mallinfo struct contains a bunch of fields that
00596   are not even meaningful in this version of malloc.  These fields
00597   are instead filled by mallinfo() with other numbers that might be of
00598   interest.
00599 
00600   HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
00601   /usr/include/malloc.h file that includes a declaration of struct
00602   mallinfo.  If so, it is included; else a compliant version is
00603   declared below.  These must be precisely the same for mallinfo() to
00604   work.  The original SVID version of this struct, defined on most
00605   systems with mallinfo, declares all fields as ints. But some others
00606   define them as unsigned long. If your system defines the fields using a
00607   type of different width than listed here, you MUST #include your
00608   system version and #define HAVE_USR_INCLUDE_MALLOC_H.
00609 */
00610 
00611 /* #define HAVE_USR_INCLUDE_MALLOC_H */
00612 
00613 #ifdef HAVE_USR_INCLUDE_MALLOC_H
00614 #include "/usr/include/malloc.h"
00615 #else /* HAVE_USR_INCLUDE_MALLOC_H */
00616 
00617 struct mallinfo {
00618   MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
00619   MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
00620   MALLINFO_FIELD_TYPE smblks;   /* always 0 */
00621   MALLINFO_FIELD_TYPE hblks;    /* always 0 */
00622   MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
00623   MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
00624   MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
00625   MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
00626   MALLINFO_FIELD_TYPE fordblks; /* total free space */
00627   MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
00628 };
00629 
00630 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
00631 #endif /* NO_MALLINFO */
00632 
00633 #ifdef __cplusplus
00634 extern "C" {
00635 #endif /* __cplusplus */
00636 
00637 #if !ONLY_MSPACES
00638 
00639 /* ------------------- Declarations of public routines ------------------- */
00640 
00641 #ifndef USE_DL_PREFIX
00642 #define dlcalloc               calloc
00643 #define dlfree                 free
00644 #define dlmalloc               malloc
00645 #define dlmemalign             memalign
00646 #define dlrealloc              realloc
00647 #define dlvalloc               valloc
00648 #define dlpvalloc              pvalloc
00649 #define dlmallinfo             mallinfo
00650 #define dlmallopt              mallopt
00651 #define dlmalloc_trim          malloc_trim
00652 #define dlmalloc_stats         malloc_stats
00653 #define dlmalloc_usable_size   malloc_usable_size
00654 #define dlmalloc_footprint     malloc_footprint
00655 #define dlmalloc_max_footprint malloc_max_footprint
00656 #define dlindependent_calloc   independent_calloc
00657 #define dlindependent_comalloc independent_comalloc
00658 #endif /* USE_DL_PREFIX */
00659 
00660 
00661 /*
00662   malloc(size_t n)
00663   Returns a pointer to a newly allocated chunk of at least n bytes, or
00664   null if no space is available, in which case errno is set to ENOMEM
00665   on ANSI C systems.
00666 
00667   If n is zero, malloc returns a minimum-sized chunk. (The minimum
00668   size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
00669   systems.)  Note that size_t is an unsigned type, so calls with
00670   arguments that would be negative if signed are interpreted as
00671   requests for huge amounts of space, which will often fail. The
00672   maximum supported value of n differs across systems, but is in all
00673   cases less than the maximum representable value of a size_t.
00674 */
00675 void* dlmalloc(size_t);
00676 
00677 /*
00678   free(void* p)
00679   Releases the chunk of memory pointed to by p, that had been previously
00680   allocated using malloc or a related routine such as realloc.
00681   It has no effect if p is null. If p was not malloced or already
00682   freed, free(p) will by default cause the current program to abort.
00683 */
00684 void  dlfree(void*);
00685 
00686 /*
00687   calloc(size_t n_elements, size_t element_size);
00688   Returns a pointer to n_elements * element_size bytes, with all locations
00689   set to zero.
00690 */
00691 void* dlcalloc(size_t, size_t);
00692 
00693 /*
00694   realloc(void* p, size_t n)
00695   Returns a pointer to a chunk of size n that contains the same data
00696   as does chunk p up to the minimum of (n, p's size) bytes, or null
00697   if no space is available.
00698 
00699   The returned pointer may or may not be the same as p. The algorithm
00700   prefers extending p in most cases when possible, otherwise it
00701   employs the equivalent of a malloc-copy-free sequence.
00702 
00703   If p is null, realloc is equivalent to malloc.
00704 
00705   If space is not available, realloc returns null, errno is set (if on
00706   ANSI) and p is NOT freed.
00707 
00708   if n is for fewer bytes than already held by p, the newly unused
00709   space is lopped off and freed if possible.  realloc with a size
00710   argument of zero (re)allocates a minimum-sized chunk.
00711 
00712   The old unix realloc convention of allowing the last-free'd chunk
00713   to be used as an argument to realloc is not supported.
00714 */
00715 
00716 void* dlrealloc(void*, size_t);
00717 
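/*
  For example (a sketch): because a failed realloc leaves p intact, grow a
  buffer through a temporary rather than assigning the result directly:

    void* tmp = realloc(p, newsize);   // dlrealloc under USE_DL_PREFIX
    if (tmp == 0) {
      // out of memory: p still points to the old, untouched block
    }
    else
      p = tmp;
*/
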
00718 /*
00719   memalign(size_t alignment, size_t n);
00720   Returns a pointer to a newly allocated chunk of n bytes, aligned
00721   in accord with the alignment argument.
00722 
00723   The alignment argument should be a power of two. If the argument is
00724   not a power of two, the nearest greater power is used.
00725   8-byte alignment is guaranteed by normal malloc calls, so don't
00726   bother calling memalign with an argument of 8 or less.
00727 
00728   Overreliance on memalign is a sure way to fragment space.
00729 */
00730 void* dlmemalign(size_t, size_t);
00731 
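/*
  For example (a sketch; the 64-byte figure is only illustrative):

    void* buf = memalign(64, 4096);    // 4096 bytes at a 64-byte boundary
    assert(((size_t)buf & 63) == 0);   // holds for any non-null result
*/
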
00732 /*
00733   valloc(size_t n);
00734   Equivalent to memalign(pagesize, n), where pagesize is the page
00735   size of the system. If the pagesize is unknown, 4096 is used.
00736 */
00737 void* dlvalloc(size_t);
00738 
00739 /*
00740   mallopt(int parameter_number, int parameter_value)
00741   Sets tunable parameters. The format is to provide a
00742   (parameter-number, parameter-value) pair.  mallopt then sets the
00743   corresponding parameter to the argument value if it can (i.e., so
00744   long as the value is meaningful), and returns 1 if successful else
00745   0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
00746   normally defined in malloc.h.  None of these are used in this malloc,
00747   so setting them has no effect. But this malloc also supports other
00748   options in mallopt. See below for details.  Briefly, supported
00749   parameters are as follows (listed defaults are for "typical"
00750   configurations).
00751 
00752   Symbol            param #  default    allowed param values
00753   M_TRIM_THRESHOLD     -1   2*1024*1024   any   (MAX_SIZE_T disables)
00754   M_GRANULARITY        -2     page size   any power of 2 >= page size
00755   M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
00756 */
00757 int dlmallopt(int, int);
00758 
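/*
  A usage sketch (the parameter values are only illustrative):

    mallopt(M_TRIM_THRESHOLD, 128*1024);    // trim more eagerly
    mallopt(M_MMAP_THRESHOLD, 1024*1024);   // mmap only requests >= 1MB
    if (mallopt(M_GRANULARITY, 12345) == 0) {
      // rejected: granularity must be a power of two of at least a page
    }
*/
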
00759 /*
00760   malloc_footprint();
00761   Returns the number of bytes obtained from the system.  The total
00762   number of bytes allocated by malloc, realloc etc., is less than this
00763   value. Unlike mallinfo, this function returns only a precomputed
00764   result, so can be called frequently to monitor memory consumption.
00765   Even if locks are otherwise defined, this function does not use them,
00766   so results might not be up to date.
00767 */
00768 size_t dlmalloc_footprint(void);
00769 
00770 /*
00771   malloc_max_footprint();
00772   Returns the maximum number of bytes obtained from the system. This
00773   value will be greater than current footprint if deallocated space
00774   has been reclaimed by the system. The peak number of bytes allocated
00775   by malloc, realloc etc., is less than this value. Unlike mallinfo,
00776   this function returns only a precomputed result, so can be called
00777   frequently to monitor memory consumption.  Even if locks are
00778   otherwise defined, this function does not use them, so results might
00779   not be up to date.
00780 */
00781 size_t dlmalloc_max_footprint(void);
00782 
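/*
  A monitoring sketch (run_phase is a hypothetical workload):

    size_t before = malloc_footprint();
    run_phase();
    printf("footprint grew by %lu bytes (peak %lu)\n",
           (unsigned long)(malloc_footprint() - before),
           (unsigned long)malloc_max_footprint());
*/
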
00783 #if !NO_MALLINFO
00784 /*
00785   mallinfo()
00786   Returns (by copy) a struct containing various summary statistics:
00787 
00788   arena:     current total non-mmapped bytes allocated from system
00789   ordblks:   the number of free chunks
00790   smblks:    always zero.
00791   hblks:     current number of mmapped regions
00792   hblkhd:    total bytes held in mmapped regions
00793   usmblks:   the maximum total allocated space. This will be greater
00794                 than current total if trimming has occurred.
00795   fsmblks:   always zero
00796   uordblks:  current total allocated space (normal or mmapped)
00797   fordblks:  total free space
00798   keepcost:  the maximum number of bytes that could ideally be released
00799                back to system via malloc_trim. ("ideally" means that
00800                it ignores page restrictions etc.)
00801 
00802   Because these fields are ints, but internal bookkeeping may
00803   be kept as longs, the reported values may wrap around zero and
00804   thus be inaccurate.
00805 */
00806 struct mallinfo dlmallinfo(void);
00807 #endif /* NO_MALLINFO */
00808 
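/*
  For example (a sketch; the fields have type MALLINFO_FIELD_TYPE, size_t by
  default, so they are cast for printing):

    struct mallinfo mi = mallinfo();
    printf("in use %lu, free %lu, trimmable %lu bytes\n",
           (unsigned long)mi.uordblks,
           (unsigned long)mi.fordblks,
           (unsigned long)mi.keepcost);
*/
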
00809 /*
00810   independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
00811 
00812   independent_calloc is similar to calloc, but instead of returning a
00813   single cleared space, it returns an array of pointers to n_elements
00814   independent elements that can hold contents of size elem_size, each
00815   of which starts out cleared, and can be independently freed,
00816   realloc'ed etc. The elements are guaranteed to be adjacently
00817   allocated (this is not guaranteed to occur with multiple callocs or
00818   mallocs), which may also improve cache locality in some
00819   applications.
00820 
00821   The "chunks" argument is optional (i.e., may be null, which is
00822   probably the most typical usage). If it is null, the returned array
00823   is itself dynamically allocated and should also be freed when it is
00824   no longer needed. Otherwise, the chunks array must be of at least
00825   n_elements in length. It is filled in with the pointers to the
00826   chunks.
00827 
00828   In either case, independent_calloc returns this pointer array, or
00829   null if the allocation failed.  If n_elements is zero and "chunks"
00830   is null, it returns a chunk representing an array with zero elements
00831   (which should be freed if not wanted).
00832 
00833   Each element must be individually freed when it is no longer
00834   needed. If you'd like to instead be able to free all at once, you
00835   should instead use regular calloc and assign pointers into this
00836   space to represent elements.  (In this case though, you cannot
00837   independently free elements.)
00838 
00839   independent_calloc simplifies and speeds up implementations of many
00840   kinds of pools.  It may also be useful when constructing large data
00841   structures that initially have a fixed number of fixed-sized nodes,
00842   but the number is not known at compile time, and some of the nodes
00843   may later need to be freed. For example:
00844 
00845   struct Node { int item; struct Node* next; };
00846 
00847   struct Node* build_list() {
00848     struct Node** pool;
00849     int n = read_number_of_nodes_needed();
00850     if (n <= 0) return 0;
00851     pool = (struct Node**)independent_calloc(n, sizeof(struct Node), 0);
00852     if (pool == 0) die();
00853     // organize into a linked list...
00854     struct Node* first = pool[0];
00855     for (int i = 0; i < n-1; ++i)
00856       pool[i]->next = pool[i+1];
00857     free(pool);     // Can now free the array (or not, if it is needed later)
00858     return first;
00859   }
00860 */
00861 void** dlindependent_calloc(size_t, size_t, void**);
00862 
00863 /*
00864   independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
00865 
00866   independent_comalloc allocates, all at once, a set of n_elements
00867   chunks with sizes indicated in the "sizes" array.    It returns
00868   an array of pointers to these elements, each of which can be
00869   independently freed, realloc'ed etc. The elements are guaranteed to
00870   be adjacently allocated (this is not guaranteed to occur with
00871   multiple callocs or mallocs), which may also improve cache locality
00872   in some applications.
00873 
00874   The "chunks" argument is optional (i.e., may be null). If it is null
00875   the returned array is itself dynamically allocated and should also
00876   be freed when it is no longer needed. Otherwise, the chunks array
00877   must be of at least n_elements in length. It is filled in with the
00878   pointers to the chunks.
00879 
00880   In either case, independent_comalloc returns this pointer array, or
00881   null if the allocation failed.  If n_elements is zero and chunks is
00882   null, it returns a chunk representing an array with zero elements
00883   (which should be freed if not wanted).
00884 
00885   Each element must be individually freed when it is no longer
00886   needed. If you'd like to instead be able to free all at once, you
00887   should instead use a single regular malloc, and assign pointers at
00888   particular offsets in the aggregate space. (In this case though, you
00889   cannot independently free elements.)
00890 
00891   independent_comalloc differs from independent_calloc in that each
00892   element may have a different size, and also that it does not
00893   automatically clear elements.
00894 
00895   independent_comalloc can be used to speed up allocation in cases
00896   where several structs or objects must always be allocated at the
00897   same time.  For example:
00898 
00899   struct Head { ... };
00900   struct Foot { ... };
00901 
00902   void send_message(char* msg) {
00903     int msglen = strlen(msg);
00904     size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
00905     void* chunks[3];
00906     if (independent_comalloc(3, sizes, chunks) == 0)
00907       die();
00908     struct Head* head = (struct Head*)(chunks[0]);
00909     char*        body = (char*)(chunks[1]);
00910     struct Foot* foot = (struct Foot*)(chunks[2]);
00911     // ...
00912   }
00913 
00914   In general though, independent_comalloc is worth using only for
00915   larger values of n_elements. For small values, you probably won't
00916   detect enough difference from series of malloc calls to bother.
00917 
00918   Overuse of independent_comalloc can increase overall memory usage,
00919   since it cannot reuse existing noncontiguous small chunks that
00920   might be available for some of the elements.
00921 */
00922 void** dlindependent_comalloc(size_t, size_t*, void**);
00923 
00924 
00925 /*
00926   pvalloc(size_t n);
00927   Equivalent to valloc(minimum-page-that-holds(n)), that is,
00928   round up n to nearest pagesize.
00929  */
00930 void*  dlpvalloc(size_t);
00931 
00932 /*
00933   malloc_trim(size_t pad);
00934 
00935   If possible, gives memory back to the system (via negative arguments
00936   to sbrk) if there is unused memory at the `high' end of the malloc
00937   pool or in unused MMAP segments. You can call this after freeing
00938   large blocks of memory to potentially reduce the system-level memory
00939   requirements of a program. However, it cannot guarantee to reduce
00940   memory. Under some allocation patterns, some large free blocks of
00941   memory will be locked between two used chunks, so they cannot be
00942   given back to the system.
00943 
00944   The `pad' argument to malloc_trim represents the amount of free
00945   trailing space to leave untrimmed. If this argument is zero, only
00946   the minimum amount of memory to maintain internal data structures
00947   will be left. Non-zero arguments can be supplied to maintain enough
00948   trailing space to service future expected allocations without having
00949   to re-obtain memory from the system.
00950 
00951   Malloc_trim returns 1 if it actually released any memory, else 0.
00952 */
00953 int  dlmalloc_trim(size_t);
00954 
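/*
  For example (a sketch; big_buffer is illustrative), after releasing a large
  working set you might return unused memory while keeping 64K of slack for
  upcoming allocations:

    free(big_buffer);
    if (malloc_trim(64 * 1024)) {
      // some memory was handed back to the system
    }
*/
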
00955 /*
00956   malloc_usable_size(void* p);
00957 
00958   Returns the number of bytes you can actually use in
00959   an allocated chunk, which may be more than you requested (although
00960   often not) due to alignment and minimum size constraints.
00961   You can use this many bytes without worrying about
00962   overwriting other allocated objects. This is not a particularly great
00963   programming practice. malloc_usable_size can be more useful in
00964   debugging and assertions, for example:
00965 
00966   p = malloc(n);
00967   assert(malloc_usable_size(p) >= 256);
00968 */
00969 size_t dlmalloc_usable_size(void*);
00970 
00971 /*
00972   malloc_stats();
00973   Prints on stderr the amount of space obtained from the system (both
00974   via sbrk and mmap), the maximum amount (which may be more than
00975   current if malloc_trim and/or munmap got called), and the current
00976   number of bytes allocated via malloc (or realloc, etc) but not yet
00977   freed. Note that this is the number of bytes allocated, not the
00978   number requested. It will be larger than the number requested
00979   because of alignment and bookkeeping overhead. Because it includes
00980   alignment wastage as being in use, this figure may be greater than
00981   zero even when no user-level chunks are allocated.
00982 
00983   The reported current and maximum system memory can be inaccurate if
00984   a program makes other calls to system memory allocation functions
00985   (normally sbrk) outside of malloc.
00986 
00987   malloc_stats prints only the most commonly interesting statistics.
00988   More information can be obtained by calling mallinfo.
00989 */
00990 void  dlmalloc_stats(void);
00991 
00992 #endif /* ONLY_MSPACES */
00993 
00994 #if MSPACES
00995 
00996 /*
00997   mspace is an opaque type representing an independent
00998   region of space that supports mspace_malloc, etc.
00999 */
01000 typedef void* mspace;
01001 
01002 /*
01003   create_mspace creates and returns a new independent space with the
01004   given initial capacity, or, if 0, the default granularity size.  It
01005   returns null if there is no system memory available to create the
01006   space.  If argument locked is non-zero, the space uses a separate
01007   lock to control access. The capacity of the space will grow
01008   dynamically as needed to service mspace_malloc requests.  You can
01009   control the sizes of incremental increases of this space by
01010   compiling with a different DEFAULT_GRANULARITY or dynamically
01011   setting with mallopt(M_GRANULARITY, value).
01012 */
01013 mspace create_mspace(size_t capacity, int locked);
01014 
01015 /*
01016   destroy_mspace destroys the given space, and attempts to return all
01017   of its memory back to the system, returning the total number of
01018   bytes freed. After destruction, the results of access to all memory
01019   used by the space become undefined.
01020 */
01021 size_t destroy_mspace(mspace msp);
01022 
01023 /*
01024   create_mspace_with_base uses the memory supplied as the initial base
01025   of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
01026   space is used for bookkeeping, so the capacity must be at least this
01027   large. (Otherwise 0 is returned.) When this initial space is
01028   exhausted, additional memory will be obtained from the system.
01029   Destroying this space will deallocate all additionally allocated
01030   space (if possible) but not the initial base.
01031 */
01032 mspace create_mspace_with_base(void* base, size_t capacity, int locked);
01033 
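/*
  A lifecycle sketch (the capacity and request size are illustrative):

    mspace ms = create_mspace(0, 0);   // default capacity, no locking
    if (ms != 0) {
      void* p = mspace_malloc(ms, 128);
      // ... use p ...
      mspace_free(ms, p);              // optional here, since destroying
      destroy_mspace(ms);              // the space releases everything
    }
*/
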
01034 /*
01035   mspace_malloc behaves as malloc, but operates within
01036   the given space.
01037 */
01038 void* mspace_malloc(mspace msp, size_t bytes);
01039 
01040 /*
01041   mspace_free behaves as free, but operates within
01042   the given space.
01043 
01044   If compiled with FOOTERS==1, mspace_free is not actually needed.
01045   free may be called instead of mspace_free because freed chunks from
01046   any space are handled by their originating spaces.
01047 */
01048 void mspace_free(mspace msp, void* mem);
01049 
01050 /*
01051   mspace_realloc behaves as realloc, but operates within
01052   the given space.
01053 
01054   If compiled with FOOTERS==1, mspace_realloc is not actually
01055   needed.  realloc may be called instead of mspace_realloc because
01056   realloced chunks from any space are handled by their originating
01057   spaces.
01058 */
01059 void* mspace_realloc(mspace msp, void* mem, size_t newsize);
01060 
01061 /*
01062   mspace_calloc behaves as calloc, but operates within
01063   the given space.
01064 */
01065 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
01066 
01067 /*
01068   mspace_memalign behaves as memalign, but operates within
01069   the given space.
01070 */
01071 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
01072 
01073 /*
01074   mspace_independent_calloc behaves as independent_calloc, but
01075   operates within the given space.
01076 */
01077 void** mspace_independent_calloc(mspace msp, size_t n_elements,
01078                                  size_t elem_size, void* chunks[]);
01079 
01080 /*
01081   mspace_independent_comalloc behaves as independent_comalloc, but
01082   operates within the given space.
01083 */
01084 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
01085                                    size_t sizes[], void* chunks[]);
01086 
01087 /*
01088   mspace_footprint() returns the number of bytes obtained from the
01089   system for this space.
01090 */
01091 size_t mspace_footprint(mspace msp);
01092 
01093 /*
01094   mspace_max_footprint() returns the peak number of bytes obtained from the
01095   system for this space.
01096 */
01097 size_t mspace_max_footprint(mspace msp);
01098 
01099 
01100 #if !NO_MALLINFO
01101 /*
01102   mspace_mallinfo behaves as mallinfo, but reports properties of
01103   the given space.
01104 */
01105 struct mallinfo mspace_mallinfo(mspace msp);
01106 #endif /* NO_MALLINFO */
01107 
01108 /*
01109   mspace_malloc_stats behaves as malloc_stats, but reports
01110   properties of the given space.
01111 */
01112 void mspace_malloc_stats(mspace msp);
01113 
01114 /*
01115   mspace_trim behaves as malloc_trim, but
01116   operates within the given space.
01117 */
01118 int mspace_trim(mspace msp, size_t pad);
01119 
01120 /*
01121   An alias for mallopt.
01122 */
01123 int mspace_mallopt(int, int);
01124 
01125 #endif /* MSPACES */
01126 
01127 #ifdef __cplusplus
01128 };  /* end of extern "C" */
01129 #endif /* __cplusplus */
01130 
01131 /*
01132   ========================================================================
01133   To make a fully customizable malloc.h header file, cut everything
01134   above this line, put into file malloc.h, edit to suit, and #include it
01135   on the next line, as well as in programs that use this malloc.
01136   ========================================================================
01137 */
01138 
01139 /* #include "malloc.h" */
01140 
01141 /*------------------------------ internal #includes ---------------------- */
01142 
01143 #ifdef WIN32
01144 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
01145 #endif /* WIN32 */
01146 
01147 #include <stdio.h>       /* for printing in malloc_stats */
01148 
01149 #ifndef LACKS_ERRNO_H
01150 #include <errno.h>       /* for MALLOC_FAILURE_ACTION */
01151 #endif /* LACKS_ERRNO_H */
01152 #if FOOTERS
01153 #include <time.h>        /* for magic initialization */
01154 #endif /* FOOTERS */
01155 #ifndef LACKS_STDLIB_H
01156 #include <stdlib.h>      /* for abort() */
01157 #endif /* LACKS_STDLIB_H */
01158 #ifdef DEBUG
01159 #if ABORT_ON_ASSERT_FAILURE
01160 #define assert(x) if(!(x)) ABORT
01161 #else /* ABORT_ON_ASSERT_FAILURE */
01162 #include <assert.h>
01163 #endif /* ABORT_ON_ASSERT_FAILURE */
01164 #else  /* DEBUG */
01165 #define assert(x)
01166 #endif /* DEBUG */
01167 #ifndef LACKS_STRING_H
01168 #include <string.h>      /* for memset etc */
01169 #endif  /* LACKS_STRING_H */
01170 #if USE_BUILTIN_FFS
01171 #ifndef LACKS_STRINGS_H
01172 #include <strings.h>     /* for ffs */
01173 #endif /* LACKS_STRINGS_H */
01174 #endif /* USE_BUILTIN_FFS */
01175 #if HAVE_MMAP
01176 #ifndef LACKS_SYS_MMAN_H
01177 #include <sys/mman.h>    /* for mmap */
01178 #endif /* LACKS_SYS_MMAN_H */
01179 #ifndef LACKS_FCNTL_H
01180 #include <fcntl.h>
01181 #endif /* LACKS_FCNTL_H */
01182 #endif /* HAVE_MMAP */
01183 #if HAVE_MORECORE
01184 #ifndef LACKS_UNISTD_H
01185 #include <unistd.h>     /* for sbrk */
01186 #else /* LACKS_UNISTD_H */
01187 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
01188 extern void*     sbrk(ptrdiff_t);
01189 #endif /* FreeBSD etc */
01190 #endif /* LACKS_UNISTD_H */
01191 #endif /* HAVE_MORECORE */
01192 
01193 #ifndef WIN32
01194 #ifndef malloc_getpagesize
01195 #  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
01196 #    ifndef _SC_PAGE_SIZE
01197 #      define _SC_PAGE_SIZE _SC_PAGESIZE
01198 #    endif
01199 #  endif
01200 #  ifdef _SC_PAGE_SIZE
01201 #    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
01202 #  else
01203 #    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
01204        extern size_t getpagesize();
01205 #      define malloc_getpagesize getpagesize()
01206 #    else
01207 #      ifdef WIN32 /* use supplied emulation of getpagesize */
01208 #        define malloc_getpagesize getpagesize()
01209 #      else
01210 #        ifndef LACKS_SYS_PARAM_H
01211 #          include <sys/param.h>
01212 #        endif
01213 #        ifdef EXEC_PAGESIZE
01214 #          define malloc_getpagesize EXEC_PAGESIZE
01215 #        else
01216 #          ifdef NBPG
01217 #            ifndef CLSIZE
01218 #              define malloc_getpagesize NBPG
01219 #            else
01220 #              define malloc_getpagesize (NBPG * CLSIZE)
01221 #            endif
01222 #          else
01223 #            ifdef NBPC
01224 #              define malloc_getpagesize NBPC
01225 #            else
01226 #              ifdef PAGESIZE
01227 #                define malloc_getpagesize PAGESIZE
01228 #              else /* just guess */
01229 #                define malloc_getpagesize ((size_t)4096U)
01230 #              endif
01231 #            endif
01232 #          endif
01233 #        endif
01234 #      endif
01235 #    endif
01236 #  endif
01237 #endif
01238 #endif
01239 
01240 /* ------------------- size_t and alignment properties -------------------- */
01241 
01242 /* The byte and bit size of a size_t */
01243 #define SIZE_T_SIZE         (sizeof(size_t))
01244 #define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
01245 
01246 /* Some constants coerced to size_t */
01247 /* Annoying but necessary to avoid errors on some platforms */
01248 #define SIZE_T_ZERO         ((size_t)0)
01249 #define SIZE_T_ONE          ((size_t)1)
01250 #define SIZE_T_TWO          ((size_t)2)
01251 #define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
01252 #define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
01253 #define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
01254 #define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
01255 
01256 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
01257 #define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
01258 
01259 /* True if address a has acceptable alignment */
01260 #define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
01261 
01262 /* the number of bytes to offset an address to align it */
01263 #define align_offset(A)\
01264  ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
01265   ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
01266 
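/*
  For concreteness, a small check of align_offset (assuming
  MALLOC_ALIGNMENT == 8): the result is the number of bytes to skip so
  that an address becomes suitably aligned.
*/
#if 0
static void show_align_offset(void) {
  assert(align_offset((char*)0x1000) == 0);  /* already 8-byte aligned */
  assert(align_offset((char*)0x1003) == 5);  /* 0x1003 + 5 == 0x1008 */
  assert(align_offset((char*)0x1004) == 4);
}
#endif
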
01267 /* -------------------------- MMAP preliminaries ------------------------- */
01268 
01269 /*
01270    If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
01271    checks to fail so compiler optimizer can delete code rather than
01272    using so many "#if"s.
01273 */
01274 
01275 
01276 /* MORECORE and MMAP must return MFAIL on failure */
01277 #define MFAIL                ((void*)(MAX_SIZE_T))
01278 #define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
01279 
01280 #if !HAVE_MMAP
01281 #define IS_MMAPPED_BIT       (SIZE_T_ZERO)
01282 #define USE_MMAP_BIT         (SIZE_T_ZERO)
01283 #define CALL_MMAP(s)         MFAIL
01284 #define CALL_MUNMAP(a, s)    (-1)
01285 #define DIRECT_MMAP(s)       MFAIL
01286 
01287 #else /* HAVE_MMAP */
01288 #define IS_MMAPPED_BIT       (SIZE_T_ONE)
01289 #define USE_MMAP_BIT         (SIZE_T_ONE)
01290 
01291 #ifndef WIN32
01292 #define CALL_MUNMAP(a, s)    munmap((a), (s))
01293 #define MMAP_PROT            (PROT_READ|PROT_WRITE)
01294 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
01295 #define MAP_ANONYMOUS        MAP_ANON
01296 #endif /* MAP_ANON */
01297 #ifdef MAP_ANONYMOUS
01298 #define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
01299 #define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
01300 #else /* MAP_ANONYMOUS */
01301 /*
01302    Nearly all versions of mmap support MAP_ANONYMOUS, so the following
01303    is unlikely to be needed, but is supplied just in case.
01304 */
01305 #define MMAP_FLAGS           (MAP_PRIVATE)
01306 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
01307 #define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
01308            (dev_zero_fd = open("/dev/zero", O_RDWR), \
01309             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
01310             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
01311 #endif /* MAP_ANONYMOUS */
01312 
01313 #define DIRECT_MMAP(s)       CALL_MMAP(s)
01314 #else /* WIN32 */
01315 
01316 /* Win32 MMAP via VirtualAlloc */
01317 static void* win32mmap(size_t size) {
01318   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
01319   return (ptr != 0)? ptr: MFAIL;
01320 }
01321 
01322 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
01323 static void* win32direct_mmap(size_t size) {
01324   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
01325                            PAGE_READWRITE);
01326   return (ptr != 0)? ptr: MFAIL;
01327 }
01328 
01329 /* This function supports releasing coalesced segments */
01330 static int win32munmap(void* ptr, size_t size) {
01331   MEMORY_BASIC_INFORMATION minfo;
01332   char* cptr = ptr;
01333   while (size) {
01334     if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
01335       return -1;
01336     if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
01337         minfo.State != MEM_COMMIT || minfo.RegionSize > size)
01338       return -1;
01339     if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
01340       return -1;
01341     cptr += minfo.RegionSize;
01342     size -= minfo.RegionSize;
01343   }
01344   return 0;
01345 }
01346 
01347 #define CALL_MMAP(s)         win32mmap(s)
01348 #define CALL_MUNMAP(a, s)    win32munmap((a), (s))
01349 #define DIRECT_MMAP(s)       win32direct_mmap(s)
01350 #endif /* WIN32 */
01351 #endif /* HAVE_MMAP */
01352 
01353 #if HAVE_MMAP && HAVE_MREMAP
01354 #define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
01355 #else  /* HAVE_MMAP && HAVE_MREMAP */
01356 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
01357 #endif /* HAVE_MMAP && HAVE_MREMAP */
01358 
01359 #if HAVE_MORECORE
01360 #define CALL_MORECORE(S)     MORECORE(S)
01361 #else  /* HAVE_MORECORE */
01362 #define CALL_MORECORE(S)     MFAIL
01363 #endif /* HAVE_MORECORE */
01364 
01365 /* mstate bit set if contiguous morecore disabled or failed */
01366 #define USE_NONCONTIGUOUS_BIT (4U)
01367 
01368 /* segment bit set in create_mspace_with_base */
01369 #define EXTERN_BIT            (8U)
01370 
01371 
01372 /* --------------------------- Lock preliminaries ------------------------ */
01373 
01374 #if USE_LOCKS
01375 
01376 /*
01377   When locks are defined, there are up to two global locks:
01378 
01379   * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
01380     MORECORE.  In many cases sys_alloc requires two calls that should
01381     not be interleaved with calls by other threads.  This does not
01382     protect against direct calls to MORECORE by other threads not
01383     using this lock, so there is still code to cope as best we can with
01384     such interference.
01385 
01386   * magic_init_mutex ensures that mparams.magic and other
01387     unique mparams values are initialized only once.
01388 */
01389 
01390 #ifndef WIN32
01391 /* By default use posix locks */
01392 #include <pthread.h>
01393 #define MLOCK_T pthread_mutex_t
01394 #define INITIAL_LOCK(l)      pthread_mutex_init(l, NULL)
01395 #define ACQUIRE_LOCK(l)      pthread_mutex_lock(l)
01396 #define RELEASE_LOCK(l)      pthread_mutex_unlock(l)
01397 
01398 #if HAVE_MORECORE
01399 static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
01400 #endif /* HAVE_MORECORE */
01401 
01402 static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
01403 
01404 #else /* WIN32 */
01405 /*
01406    Because lock-protected regions have bounded times, and there
01407    are no recursive lock calls, we can use simple spinlocks.
01408 */
01409 
01410 #define MLOCK_T long
01411 static int win32_acquire_lock (MLOCK_T *sl) {
01412   for (;;) {
01413 #ifdef InterlockedCompareExchangePointer
01414     if (!InterlockedCompareExchange(sl, 1, 0))
01415       return 0;
01416 #else  /* Use older void* version */
01417     if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
01418       return 0;
01419 #endif /* InterlockedCompareExchangePointer */
01420     Sleep (0);
01421   }
01422 }
01423 
01424 static void win32_release_lock (MLOCK_T *sl) {
01425   InterlockedExchange (sl, 0);
01426 }
01427 
01428 #define INITIAL_LOCK(l)      *(l)=0
01429 #define ACQUIRE_LOCK(l)      win32_acquire_lock(l)
01430 #define RELEASE_LOCK(l)      win32_release_lock(l)
01431 #if HAVE_MORECORE
01432 static MLOCK_T morecore_mutex;
01433 #endif /* HAVE_MORECORE */
01434 static MLOCK_T magic_init_mutex;
01435 #endif /* WIN32 */
01436 
01437 #define USE_LOCK_BIT               (2U)
01438 #else  /* USE_LOCKS */
01439 #define USE_LOCK_BIT               (0U)
01440 #define INITIAL_LOCK(l)
01441 #endif /* USE_LOCKS */
01442 
01443 #if USE_LOCKS && HAVE_MORECORE
01444 #define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
01445 #define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
01446 #else /* USE_LOCKS && HAVE_MORECORE */
01447 #define ACQUIRE_MORECORE_LOCK()
01448 #define RELEASE_MORECORE_LOCK()
01449 #endif /* USE_LOCKS && HAVE_MORECORE */
01450 
01451 #if USE_LOCKS
01452 #define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
01453 #define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
01454 #else  /* USE_LOCKS */
01455 #define ACQUIRE_MAGIC_INIT_LOCK()
01456 #define RELEASE_MAGIC_INIT_LOCK()
01457 #endif /* USE_LOCKS */
01458 
01459 
01460 /* -----------------------  Chunk representations ------------------------ */
01461 
01462 /*
01463   (The following includes lightly edited explanations by Colin Plumb.)
01464 
01465   The malloc_chunk declaration below is misleading (but accurate and
01466   necessary).  It declares a "view" into memory allowing access to
01467   necessary fields at known offsets from a given base.
01468 
01469   Chunks of memory are maintained using a `boundary tag' method as
01470   originally described by Knuth.  (See the paper by Paul Wilson
01471   ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
01472   techniques.)  Sizes of free chunks are stored both in the front of
01473   each chunk and at the end.  This makes consolidating fragmented
01474   chunks into bigger chunks fast.  The head fields also hold bits
01475   representing whether chunks are free or in use.
01476 
01477   Here are some pictures to make it clearer.  They are "exploded" to
01478   show that the state of a chunk can be thought of as extending from
01479   the high 31 bits of the head field of its header through the
01480   prev_foot and PINUSE_BIT bit of the following chunk header.
01481 
01482   A chunk that's in use looks like:
01483 
01484    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01485            | Size of previous chunk (if P = 1)                             |
01486            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01487          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
01488          | Size of this chunk                                         1| +-+
01489    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01490          |                                                               |
01491          +-                                                             -+
01492          |                                                               |
01493          +-                                                             -+
01494          |                                                               :
01495          +-      size - sizeof(size_t) available payload bytes          -+
01496          :                                                               |
01497  chunk-> +-                                                             -+
01498          |                                                               |
01499          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01500        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
01501        | Size of next chunk (may or may not be in use)               | +-+
01502  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01503 
01504     And if it's free, it looks like this:
01505 
01506    chunk-> +-                                                             -+
01507            | User payload (must be in use, or we would have merged!)       |
01508            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01509          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
01510          | Size of this chunk                                         0| +-+
01511    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01512          | Next pointer                                                  |
01513          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01514          | Prev pointer                                                  |
01515          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01516          |                                                               :
01517          +-      size - sizeof(struct chunk) unused bytes               -+
01518          :                                                               |
01519  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01520          | Size of this chunk                                            |
01521          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01522        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
01523        | Size of next chunk (must be in use, or we would have merged)| +-+
01524  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01525        |                                                               :
01526        +- User payload                                                -+
01527        :                                                               |
01528        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01529                                                                      |0|
01530                                                                      +-+
01531   Note that since we always merge adjacent free chunks, the chunks
01532   adjacent to a free chunk must be in use.
01533 
01534   Given a pointer to a chunk (which can be derived trivially from the
01535   payload pointer) we can, in O(1) time, find out whether the adjacent
01536   chunks are free, and if so, unlink them from the lists that they
01537   are on and merge them with the current chunk.
01538 
01539   Chunks always begin on even word boundaries, so the mem portion
01540   (which is returned to the user) is also on an even word boundary, and
01541   thus at least double-word aligned.
01542 
01543   The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
01544   chunk size (which is always a multiple of two words), is an in-use
01545   bit for the *previous* chunk.  If that bit is *clear*, then the
01546   word before the current chunk size contains the previous chunk
01547   size, and can be used to find the front of the previous chunk.
01548   The very first chunk allocated always has this bit set, preventing
01549   access to non-existent (or non-owned) memory. If pinuse is set for
01550   any given chunk, then you CANNOT determine the size of the
01551   previous chunk, and might even get a memory addressing fault when
01552   trying to do so.
01553 
01554   The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
01555   the chunk size redundantly records whether the current chunk is
01556   inuse. This redundancy enables usage checks within free and realloc,
01557   and reduces indirection when freeing and consolidating chunks.
01558 
01559   Each freshly allocated chunk must have both cinuse and pinuse set.
01560   That is, each allocated chunk borders either a previously allocated
01561   and still in-use chunk, or the base of its memory arena. This is
01562   ensured by making all allocations from the `lowest' part of any
01563   found chunk.  Further, no free chunk physically borders another one,
01564   so each free chunk is known to be preceded and followed by either
01565   inuse chunks or the ends of memory.
01566 
01567   Note that the `foot' of the current chunk is actually represented
01568   as the prev_foot of the NEXT chunk. This makes it easier to
01569   deal with alignments etc but can be very confusing when trying
01570   to extend or adapt this code.
01571 
01572   The exceptions to all this are
01573 
01574      1. The special chunk `top' is the top-most available chunk (i.e.,
01575         the one bordering the end of available memory). It is treated
01576         specially.  Top is never included in any bin, is used only if
01577         no other chunk is available, and is released back to the
01578         system if it is very large (see M_TRIM_THRESHOLD).  In effect,
01579         the top chunk is treated as larger (and thus less well
01580         fitting) than any other available chunk.  The top chunk
01581         doesn't update its trailing size field since there is no next
01582         contiguous chunk that would have to index off it. However,
01583         space is still allocated for it (TOP_FOOT_SIZE) to enable
01584         separation or merging when space is extended.
01585 
01586      2. Chunks allocated via mmap, which have the lowest-order bit
01587         (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
01588         PINUSE_BIT in their head fields.  Because they are allocated
01589         one-by-one, each must carry its own prev_foot field, which is
01590         also used to hold the offset this chunk has within its mmapped
01591         region, which is needed to preserve alignment. Each mmapped
01592         chunk is trailed by the first two fields of a fake next-chunk
01593         for sake of usage checks.
01594 
01595 */
01596 
01597 struct malloc_chunk {
01598   size_t               prev_foot;  /* Size of previous chunk (if free).  */
01599   size_t               head;       /* Size and inuse bits. */
01600   struct malloc_chunk* fd;         /* double links -- used only if free. */
01601   struct malloc_chunk* bk;
01602 };
01603 
01604 typedef struct malloc_chunk  mchunk;
01605 typedef struct malloc_chunk* mchunkptr;
01606 typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
01607 typedef unsigned int bindex_t;         /* Described below */
01608 typedef unsigned int binmap_t;         /* Described below */
01609 typedef unsigned int flag_t;           /* The type of various bit flag sets */
01610 
01611 /* ------------------- Chunks sizes and alignments ----------------------- */
01612 
01613 #define MCHUNK_SIZE         (sizeof(mchunk))
01614 
01615 #if FOOTERS
01616 #define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
01617 #else /* FOOTERS */
01618 #define CHUNK_OVERHEAD      (SIZE_T_SIZE)
01619 #endif /* FOOTERS */
01620 
01621 /* MMapped chunks need a second word of overhead ... */
01622 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
01623 /* ... and additional padding for fake next-chunk at foot */
01624 #define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
01625 
01626 /* The smallest size we can malloc is an aligned minimal chunk */
01627 #define MIN_CHUNK_SIZE\
01628   ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
01629 
01630 /* conversion from malloc headers to user pointers, and back */
01631 #define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
01632 #define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
01633 /* chunk associated with aligned address A */
01634 #define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
01635 
01636 /* Bounds on request (not chunk) sizes. */
01637 #define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
01638 #define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
01639 
01640 /* pad request bytes into a usable size */
01641 #define pad_request(req) \
01642    (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
01643 
01644 /* pad request, checking for minimum (but not maximum) */
01645 #define request2size(req) \
01646   (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
01647 
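/*
  Worked example of the padding arithmetic (assuming 4-byte size_t,
  MALLOC_ALIGNMENT == 8, and FOOTERS == 0, so CHUNK_OVERHEAD == 4 and
  MIN_CHUNK_SIZE == 16); other configurations give different numbers.
*/
#if 0
static void show_request_padding(void) {
  assert(request2size(1)   == 16);   /* below MIN_REQUEST: minimum chunk */
  assert(request2size(100) == 104);  /* 100 + 4 overhead, rounded up to 8 */
  assert(request2size(104) == 112);  /* already a multiple of 8, still needs overhead */
}
#endif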
01648 
01649 /* ------------------ Operations on head and foot fields ----------------- */
01650 
01651 /*
01652   The head field of a chunk is or'ed with PINUSE_BIT when previous
01653   adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
01654   use. If the chunk was obtained with mmap, the prev_foot field has
01655   IS_MMAPPED_BIT set, otherwise holding the offset of the base of the
01656   mmapped region to the base of the chunk.
01657 */
01658 
01659 #define PINUSE_BIT          (SIZE_T_ONE)
01660 #define CINUSE_BIT          (SIZE_T_TWO)
01661 #define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
01662 
01663 /* Head value for fenceposts */
01664 #define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
01665 
01666 /* extraction of fields from head words */
01667 #define cinuse(p)           ((p)->head & CINUSE_BIT)
01668 #define pinuse(p)           ((p)->head & PINUSE_BIT)
01669 #define chunksize(p)        ((p)->head & ~(INUSE_BITS))
01670 
01671 #define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
01672 #define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
01673 
01674 /* Treat space at ptr +/- offset as a chunk */
01675 #define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
01676 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
01677 
01678 /* Ptr to next or previous physical malloc_chunk. */
01679 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
01680 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
01681 
01682 /* extract next chunk's pinuse bit */
01683 #define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
01684 
01685 /* Get/set size at footer */
01686 #define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
01687 #define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
01688 
01689 /* Set size, pinuse bit, and foot */
01690 #define set_size_and_pinuse_of_free_chunk(p, s)\
01691   ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
01692 
01693 /* Set size, pinuse bit, foot, and clear next pinuse */
01694 #define set_free_with_pinuse(p, s, n)\
01695   (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
01696 
01697 #define is_mmapped(p)\
01698   (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
01699 
01700 /* Get the internal overhead associated with chunk p */
01701 #define overhead_for(p)\
01702  (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
01703 
01704 /* Return true if malloced space is not necessarily cleared */
01705 #if MMAP_CLEARS
01706 #define calloc_must_clear(p) (!is_mmapped(p))
01707 #else /* MMAP_CLEARS */
01708 #define calloc_must_clear(p) (1)
01709 #endif /* MMAP_CLEARS */
01710 
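/*
  Sketch of how the macros above compose (hypothetical chunk pointer p,
  assumed valid and in use): step to the physical neighbor and inspect
  the status bits, which is the pattern used by the consolidation code
  further below.
*/
#if 0
static void walk_one_chunk(mchunkptr p) {
  size_t    psize = chunksize(p);       /* size with the inuse bits masked out */
  mchunkptr next  = next_chunk(p);      /* (char*)p + psize */
  if (!cinuse(next)) {                  /* next chunk is free ...              */
    size_t nsize = chunksize(next);     /* ... so it could be merged with p    */
    (void)nsize;
  }
  if (!pinuse(p)) {                     /* previous chunk is free, too         */
    mchunkptr prev = prev_chunk(p);     /* found via p->prev_foot              */
    (void)prev;
  }
  (void)psize;
}
#endif
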
01711 /* ---------------------- Overlaid data structures ----------------------- */
01712 
01713 /*
01714   When chunks are not in use, they are treated as nodes of either
01715   lists or trees.
01716 
01717   "Small"  chunks are stored in circular doubly-linked lists, and look
01718   like this:
01719 
01720     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01721             |             Size of previous chunk                            |
01722             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01723     `head:' |             Size of chunk, in bytes                         |P|
01724       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01725             |             Forward pointer to next chunk in list             |
01726             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01727             |             Back pointer to previous chunk in list            |
01728             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01729             |             Unused space (may be 0 bytes long)                .
01730             .                                                               .
01731             .                                                               |
01732 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01733     `foot:' |             Size of chunk, in bytes                           |
01734             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01735 
01736   Larger chunks are kept in a form of bitwise digital trees (aka
01737   tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
01738   free chunks greater than 256 bytes, their size doesn't impose any
01739   constraints on user chunk sizes.  Each node looks like:
01740 
01741     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01742             |             Size of previous chunk                            |
01743             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01744     `head:' |             Size of chunk, in bytes                         |P|
01745       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01746             |             Forward pointer to next chunk of same size        |
01747             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01748             |             Back pointer to previous chunk of same size       |
01749             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01750             |             Pointer to left child (child[0])                  |
01751             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01752             |             Pointer to right child (child[1])                 |
01753             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01754             |             Pointer to parent                                 |
01755             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01756             |             bin index of this chunk                           |
01757             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01758             |             Unused space                                      .
01759             .                                                               |
01760 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01761     `foot:' |             Size of chunk, in bytes                           |
01762             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01763 
01764   Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
01765   of the same size are arranged in a circularly-linked list, with only
01766   the oldest chunk (the next to be used, in our FIFO ordering)
01767   actually in the tree.  (Tree members are distinguished by a non-null
01768   parent pointer.)  If a chunk with the same size as an existing node
01769   is inserted, it is linked off the existing node using pointers that
01770   work in the same way as fd/bk pointers of small chunks.
01771 
01772   Each tree contains a power of 2 sized range of chunk sizes (the
01773   smallest is 0x100 <= x < 0x180), which is divided in half at each
01774   tree level, with the chunks in the smaller half of the range (0x100
01775   <= x < 0x140 for the top node) in the left subtree and the larger
01776   half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
01777   done by inspecting individual bits.
01778 
01779   Using these rules, each node's left subtree contains all smaller
01780   sizes than its right subtree.  However, the node at the root of each
01781   subtree has no particular ordering relationship to either.  (The
01782   dividing line between the subtree sizes is based on trie relation.)
01783   If we remove the last chunk of a given size from the interior of the
01784   tree, we need to replace it with a leaf node.  The tree ordering
01785   rules permit a node to be replaced by any leaf below it.
01786 
01787   The smallest chunk in a tree (a common operation in a best-fit
01788   allocator) can be found by walking a path to the leftmost leaf in
01789   the tree.  Unlike a usual binary tree, where we follow left child
01790   pointers until we reach a null, here we follow the right child
01791   pointer any time the left one is null, until we reach a leaf with
01792   both child pointers null. The smallest chunk in the tree will be
01793   somewhere along that path.
01794 
01795   The worst case number of steps to add, find, or remove a node is
01796   bounded by the number of bits differentiating chunks within
01797   bins. Under current bin calculations, this ranges from 6 up to 21
01798   (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
01799   is of course much better.
01800 */
01801 
01802 struct malloc_tree_chunk {
01803   /* The first four fields must be compatible with malloc_chunk */
01804   size_t                    prev_foot;
01805   size_t                    head;
01806   struct malloc_tree_chunk* fd;
01807   struct malloc_tree_chunk* bk;
01808 
01809   struct malloc_tree_chunk* child[2];
01810   struct malloc_tree_chunk* parent;
01811   bindex_t                  index;
01812 };
01813 
01814 typedef struct malloc_tree_chunk  tchunk;
01815 typedef struct malloc_tree_chunk* tchunkptr;
01816 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
01817 
01818 /* A little helper macro for trees */
01819 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
01820 
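/*
  Sketch of the leftmost-leaf walk described above: follow child[0]
  when present, otherwise child[1], remembering the smallest chunk
  seen along the way.  The large-bin allocation code below follows the
  same shape, with extra bookkeeping.
*/
#if 0
static tchunkptr smallest_in_tree(tchunkptr t) {
  tchunkptr best = t;
  while (t != 0) {
    if (chunksize(t) < chunksize(best))
      best = t;
    t = leftmost_child(t);
  }
  return best;
}
#endif
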
01821 /* ----------------------------- Segments -------------------------------- */
01822 
01823 /*
01824   Each malloc space may include non-contiguous segments, held in a
01825   list headed by an embedded malloc_segment record representing the
01826   top-most space. Segments also include flags holding properties of
01827   the space. Large chunks that are directly allocated by mmap are not
01828   included in this list. They are instead independently created and
01829   destroyed without otherwise keeping track of them.
01830 
01831   Segment management mainly comes into play for spaces allocated by
01832   MMAP.  Any call to MMAP might or might not return memory that is
01833   adjacent to an existing segment.  MORECORE normally contiguously
01834   extends the current space, so this space is almost always adjacent,
01835   which is simpler and faster to deal with. (This is why MORECORE is
01836   used preferentially to MMAP when both are available -- see
01837   sys_alloc.)  When allocating using MMAP, we don't use any of the
01838   hinting mechanisms (inconsistently) supported in various
01839   implementations of unix mmap, or distinguish reserving from
01840   committing memory. Instead, we just ask for space, and exploit
01841   contiguity when we get it.  It is probably possible to do
01842   better than this on some systems, but no general scheme seems
01843   to be significantly better.
01844 
01845   Management entails a simpler variant of the consolidation scheme
01846   used for chunks to reduce fragmentation -- new adjacent memory is
01847   normally prepended or appended to an existing segment. However,
01848   there are limitations compared to chunk consolidation that mostly
01849   reflect the fact that segment processing is relatively infrequent
01850   (occurring only when getting memory from system) and that we
01851   don't expect to have huge numbers of segments:
01852 
01853   * Segments are not indexed, so traversal requires linear scans.  (It
01854     would be possible to index these, but is not worth the extra
01855     overhead and complexity for most programs on most platforms.)
01856   * New segments are only appended to old ones when holding top-most
01857     memory; if they cannot be prepended to others, they are held in
01858     different segments.
01859 
01860   Except for the top-most segment of an mstate, each segment record
01861   is kept at the tail of its segment. Segments are added by pushing
01862   segment records onto the list headed by &mstate.seg for the
01863   containing mstate.
01864 
01865   Segment flags control allocation/merge/deallocation policies:
01866   * If EXTERN_BIT set, then we did not allocate this segment,
01867     and so should not try to deallocate or merge with others.
01868     (This currently holds only for the initial segment passed
01869     into create_mspace_with_base.)
01870   * If IS_MMAPPED_BIT set, the segment may be merged with
01871     other surrounding mmapped segments and trimmed/de-allocated
01872     using munmap.
01873   * If neither bit is set, then the segment was obtained using
01874     MORECORE so can be merged with surrounding MORECORE'd segments
01875     and deallocated/trimmed using MORECORE with negative arguments.
01876 */
01877 
01878 struct malloc_segment {
01879   char*        base;             /* base address */
01880   size_t       size;             /* allocated size */
01881   struct malloc_segment* next;   /* ptr to next segment */
01882 #if FFI_MMAP_EXEC_WRIT
01883   /* The mmap magic is supposed to store the address of the executable
01884      segment at the very end of the requested block.  */
01885 
01886 # define mmap_exec_offset(b,s) (*(ptrdiff_t*)((b)+(s)-sizeof(ptrdiff_t)))
01887 
01888   /* We can only merge segments if their corresponding executable
01889      segments are at identical offsets.  */
01890 # define check_segment_merge(S,b,s) \
01891   (mmap_exec_offset((b),(s)) == (S)->exec_offset)
01892 
01893 # define add_segment_exec_offset(p,S) ((char*)(p) + (S)->exec_offset)
01894 # define sub_segment_exec_offset(p,S) ((char*)(p) - (S)->exec_offset)
01895 
01896   /* The removal of sflags only works with HAVE_MORECORE == 0.  */
01897 
01898 # define get_segment_flags(S)   (IS_MMAPPED_BIT)
01899 # define set_segment_flags(S,v) \
01900   (((v) != IS_MMAPPED_BIT) ? (ABORT, (v)) :                           \
01901    (((S)->exec_offset =                                               \
01902      mmap_exec_offset((S)->base, (S)->size)),                         \
01903     (mmap_exec_offset((S)->base + (S)->exec_offset, (S)->size) !=     \
01904      (S)->exec_offset) ? (ABORT, (v)) :                               \
01905    (mmap_exec_offset((S)->base, (S)->size) = 0), (v)))
01906 
01907   /* We use an offset here, instead of a pointer, because then, when
01908      base changes, we don't have to modify this.  On architectures
01909      with segmented addresses, this might not work.  */
01910   ptrdiff_t    exec_offset;
01911 #else
01912 
01913 # define get_segment_flags(S)   ((S)->sflags)
01914 # define set_segment_flags(S,v) ((S)->sflags = (v))
01915 # define check_segment_merge(S,b,s) (1)
01916 
01917   flag_t       sflags;           /* mmap and extern flag */
01918 #endif
01919 };
01920 
01921 #define is_mmapped_segment(S)  (get_segment_flags(S) & IS_MMAPPED_BIT)
01922 #define is_extern_segment(S)   (get_segment_flags(S) & EXTERN_BIT)
01923 
01924 typedef struct malloc_segment  msegment;
01925 typedef struct malloc_segment* msegmentptr;
01926 
01927 /* ---------------------------- malloc_state ----------------------------- */
01928 
01929 /*
01930    A malloc_state holds all of the bookkeeping for a space.
01931    The main fields are:
01932 
01933   Top
01934     The topmost chunk of the currently active segment. Its size is
01935     cached in topsize.  The actual size of topmost space is
01936     topsize+TOP_FOOT_SIZE, which includes space reserved for adding
01937     fenceposts and segment records if necessary when getting more
01938     space from the system.  The size at which to autotrim top is
01939     cached from mparams in trim_check, except that it is disabled if
01940     an autotrim fails.
01941 
01942   Designated victim (dv)
01943     This is the preferred chunk for servicing small requests that
01944     don't have exact fits.  It is normally the chunk split off most
01945     recently to service another small request.  Its size is cached in
01946     dvsize. The link fields of this chunk are not maintained since it
01947     is not kept in a bin.
01948 
01949   SmallBins
01950     An array of bin headers for free chunks.  These bins hold chunks
01951     with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
01952     chunks of all the same size, spaced 8 bytes apart.  To simplify
01953     use in double-linked lists, each bin header acts as a malloc_chunk
01954     pointing to the real first node, if it exists (else pointing to
01955     itself).  This avoids special-casing for headers.  But to avoid
01956     waste, we allocate only the fd/bk pointers of bins, and then use
01957     repositioning tricks to treat these as the fields of a chunk.
01958 
01959   TreeBins
01960     Treebins are pointers to the roots of trees holding a range of
01961     sizes. There are 2 equally spaced treebins for each power of two
01962     from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
01963     larger.
01964 
01965   Bin maps
01966     There is one bit map for small bins ("smallmap") and one for
01967     treebins ("treemap").  Each bin sets its bit when non-empty, and
01968     clears the bit when empty.  Bit operations are then used to avoid
01969     bin-by-bin searching -- nearly all "search" is done without ever
01970     looking at bins that won't be selected.  The bit maps
01971     conservatively use 32 bits per map word, even on a 64-bit system.
01972     For a good description of some of the bit-based techniques used
01973     here, see Henry S. Warren Jr's book "Hacker's Delight" (and
01974     supplement at http://hackersdelight.org/). Many of these are
01975     intended to reduce the branchiness of paths through malloc etc, as
01976     well as to reduce the number of memory locations read or written.
01977 
01978   Segments
01979     A list of segments headed by an embedded malloc_segment record
01980     representing the initial space.
01981 
01982   Address check support
01983     The least_addr field is the least address ever obtained from
01984     MORECORE or MMAP. Attempted frees and reallocs of any address less
01985     than this are trapped (unless INSECURE is defined).
01986 
01987   Magic tag
01988     A cross-check field that should always hold same value as mparams.magic.
01989 
01990   Flags
01991     Bits recording whether to use MMAP, locks, or contiguous MORECORE
01992 
01993   Statistics
01994     Each space keeps track of current and maximum system memory
01995     obtained via MORECORE or MMAP.
01996 
01997   Locking
01998     If USE_LOCKS is defined, the "mutex" lock is acquired and released
01999     around every public call using this mspace.
02000 */
02001 
02002 /* Bin types, widths and sizes */
02003 #define NSMALLBINS        (32U)
02004 #define NTREEBINS         (32U)
02005 #define SMALLBIN_SHIFT    (3U)
02006 #define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
02007 #define TREEBIN_SHIFT     (8U)
02008 #define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
02009 #define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
02010 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
02011 
02012 struct malloc_state {
02013   binmap_t   smallmap;
02014   binmap_t   treemap;
02015   size_t     dvsize;
02016   size_t     topsize;
02017   char*      least_addr;
02018   mchunkptr  dv;
02019   mchunkptr  top;
02020   size_t     trim_check;
02021   size_t     magic;
02022   mchunkptr  smallbins[(NSMALLBINS+1)*2];
02023   tbinptr    treebins[NTREEBINS];
02024   size_t     footprint;
02025   size_t     max_footprint;
02026   flag_t     mflags;
02027 #if USE_LOCKS
02028   MLOCK_T    mutex;     /* locate lock among fields that rarely change */
02029 #endif /* USE_LOCKS */
02030   msegment   seg;
02031 };
02032 
02033 typedef struct malloc_state*    mstate;
02034 
02035 /* ------------- Global malloc_state and malloc_params ------------------- */
02036 
02037 /*
02038   malloc_params holds global properties, including those that can be
02039   dynamically set using mallopt. There is a single instance, mparams,
02040   initialized in init_mparams.
02041 */
02042 
02043 struct malloc_params {
02044   size_t magic;
02045   size_t page_size;
02046   size_t granularity;
02047   size_t mmap_threshold;
02048   size_t trim_threshold;
02049   flag_t default_mflags;
02050 };
02051 
02052 static struct malloc_params mparams;
02053 
02054 /* The global malloc_state used for all non-"mspace" calls */
02055 static struct malloc_state _gm_;
02056 #define gm                 (&_gm_)
02057 #define is_global(M)       ((M) == &_gm_)
02058 #define is_initialized(M)  ((M)->top != 0)
02059 
02060 /* -------------------------- system alloc setup ------------------------- */
02061 
02062 /* Operations on mflags */
02063 
02064 #define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
02065 #define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
02066 #define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
02067 
02068 #define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
02069 #define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
02070 #define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
02071 
02072 #define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
02073 #define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
02074 
02075 #define set_lock(M,L)\
02076  ((M)->mflags = (L)?\
02077   ((M)->mflags | USE_LOCK_BIT) :\
02078   ((M)->mflags & ~USE_LOCK_BIT))
02079 
02080 /* page-align a size */
02081 #define page_align(S)\
02082  (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
02083 
02084 /* granularity-align a size */
02085 #define granularity_align(S)\
02086   (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
02087 
02088 #define is_page_aligned(S)\
02089    (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
02090 #define is_granularity_aligned(S)\
02091    (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
02092 
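/*
  Example values (assuming mparams.page_size == 4096 after
  init_mparams): note that the macros as written always add a full
  page or granularity unit before masking, so an already-aligned size
  is bumped to the next multiple.
*/
#if 0
static void show_page_align(void) {
  assert(page_align(1)    == 4096);
  assert(page_align(4095) == 4096);
  assert(page_align(4096) == 8192);   /* aligned input still grows by a page */
}
#endif
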
02093 /*  True if segment S holds address A */
02094 #define segment_holds(S, A)\
02095   ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
02096 
02097 /* Return segment holding given address */
02098 static msegmentptr segment_holding(mstate m, char* addr) {
02099   msegmentptr sp = &m->seg;
02100   for (;;) {
02101     if (addr >= sp->base && addr < sp->base + sp->size)
02102       return sp;
02103     if ((sp = sp->next) == 0)
02104       return 0;
02105   }
02106 }
02107 
02108 /* Return true if segment contains a segment link */
02109 static int has_segment_link(mstate m, msegmentptr ss) {
02110   msegmentptr sp = &m->seg;
02111   for (;;) {
02112     if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
02113       return 1;
02114     if ((sp = sp->next) == 0)
02115       return 0;
02116   }
02117 }
02118 
02119 #ifndef MORECORE_CANNOT_TRIM
02120 #define should_trim(M,s)  ((s) > (M)->trim_check)
02121 #else  /* MORECORE_CANNOT_TRIM */
02122 #define should_trim(M,s)  (0)
02123 #endif /* MORECORE_CANNOT_TRIM */
02124 
02125 /*
02126   TOP_FOOT_SIZE is padding at the end of a segment, including space
02127   that may be needed to place segment records and fenceposts when new
02128   noncontiguous segments are added.
02129 */
02130 #define TOP_FOOT_SIZE\
02131   (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
02132 
02133 
02134 /* -------------------------------  Hooks -------------------------------- */
02135 
02136 /*
02137   PREACTION should be defined to return 0 on success, and nonzero on
02138   failure. If you are not using locking, you can redefine these to do
02139   anything you like.
02140 */
02141 
02142 #if USE_LOCKS
02143 
02144 /* Ensure locks are initialized */
02145 #define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
02146 
02147 #define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
02148 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
02149 #else /* USE_LOCKS */
02150 
02151 #ifndef PREACTION
02152 #define PREACTION(M) (0)
02153 #endif  /* PREACTION */
02154 
02155 #ifndef POSTACTION
02156 #define POSTACTION(M)
02157 #endif  /* POSTACTION */
02158 
02159 #endif /* USE_LOCKS */
02160 
02161 /*
02162   CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
02163   USAGE_ERROR_ACTION is triggered on detected bad frees and
02164   reallocs. The argument p is an address that might have triggered the
02165   fault. It is ignored by the two predefined actions, but might be
02166   useful in custom actions that try to help diagnose errors.
02167 */
02168 
02169 #if PROCEED_ON_ERROR
02170 
02171 /* A count of the number of corruption errors causing resets */
02172 int malloc_corruption_error_count;
02173 
02174 /* default corruption action */
02175 static void reset_on_error(mstate m);
02176 
02177 #define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
02178 #define USAGE_ERROR_ACTION(m, p)
02179 
02180 #else /* PROCEED_ON_ERROR */
02181 
02182 #ifndef CORRUPTION_ERROR_ACTION
02183 #define CORRUPTION_ERROR_ACTION(m) ABORT
02184 #endif /* CORRUPTION_ERROR_ACTION */
02185 
02186 #ifndef USAGE_ERROR_ACTION
02187 #define USAGE_ERROR_ACTION(m,p) ABORT
02188 #endif /* USAGE_ERROR_ACTION */
02189 
02190 #endif /* PROCEED_ON_ERROR */
02191 
02192 /* -------------------------- Debugging setup ---------------------------- */
02193 
02194 #if ! DEBUG
02195 
02196 #define check_free_chunk(M,P)
02197 #define check_inuse_chunk(M,P)
02198 #define check_malloced_chunk(M,P,N)
02199 #define check_mmapped_chunk(M,P)
02200 #define check_malloc_state(M)
02201 #define check_top_chunk(M,P)
02202 
02203 #else /* DEBUG */
02204 #define check_free_chunk(M,P)       do_check_free_chunk(M,P)
02205 #define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
02206 #define check_top_chunk(M,P)        do_check_top_chunk(M,P)
02207 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
02208 #define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
02209 #define check_malloc_state(M)       do_check_malloc_state(M)
02210 
02211 static void   do_check_any_chunk(mstate m, mchunkptr p);
02212 static void   do_check_top_chunk(mstate m, mchunkptr p);
02213 static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
02214 static void   do_check_inuse_chunk(mstate m, mchunkptr p);
02215 static void   do_check_free_chunk(mstate m, mchunkptr p);
02216 static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
02217 static void   do_check_tree(mstate m, tchunkptr t);
02218 static void   do_check_treebin(mstate m, bindex_t i);
02219 static void   do_check_smallbin(mstate m, bindex_t i);
02220 static void   do_check_malloc_state(mstate m);
02221 static int    bin_find(mstate m, mchunkptr x);
02222 static size_t traverse_and_check(mstate m);
02223 #endif /* DEBUG */
02224 
02225 /* ---------------------------- Indexing Bins ---------------------------- */
02226 
02227 #define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
02228 #define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
02229 #define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
02230 #define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
02231 
02232 /* addressing by index. See above about smallbin repositioning */
02233 #define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
02234 #define treebin_at(M,i)     (&((M)->treebins[i]))
02235 
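/*
  Example of the small-bin mapping (assuming SMALLBIN_SHIFT == 3 and
  MIN_CHUNK_SIZE == 16): chunk sizes 16, 24, ..., 248 land in bins
  2, 3, ..., 31, and anything of MIN_LARGE_SIZE (256) or more is kept
  in the tree bins instead.
*/
#if 0
static void show_small_bins(void) {
  assert(is_small(16)  && small_index(16)  == 2);
  assert(is_small(248) && small_index(248) == 31);
  assert(!is_small(256));                          /* goes to a treebin */
  assert(small_index2size(small_index(24)) == 24);
}
#endif
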
02236 /* assign tree index for size S to variable I */
02237 #if defined(__GNUC__) && defined(i386)
02238 #define compute_tree_index(S, I)\
02239 {\
02240   size_t X = S >> TREEBIN_SHIFT;\
02241   if (X == 0)\
02242     I = 0;\
02243   else if (X > 0xFFFF)\
02244     I = NTREEBINS-1;\
02245   else {\
02246     unsigned int K;\
02247     __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
02248     I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
02249   }\
02250 }
02251 #else /* GNUC */
02252 #define compute_tree_index(S, I)\
02253 {\
02254   size_t X = S >> TREEBIN_SHIFT;\
02255   if (X == 0)\
02256     I = 0;\
02257   else if (X > 0xFFFF)\
02258     I = NTREEBINS-1;\
02259   else {\
02260     unsigned int Y = (unsigned int)X;\
02261     unsigned int N = ((Y - 0x100) >> 16) & 8;\
02262     unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
02263     N += K;\
02264     N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
02265     K = 14 - N + ((Y <<= K) >> 15);\
02266     I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
02267   }\
02268 }
02269 #endif /* GNUC */
02270 
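/*
  Rough check of the indexing scheme (assuming 32-bit size_t): two bins
  per power of two, so sizes 0x100-0x17f map to treebin 0, 0x180-0x1ff
  to treebin 1, 0x200-0x2ff to treebin 2, and so on.
*/
#if 0
static void show_tree_indices(void) {
  bindex_t i;
  compute_tree_index(((size_t)0x100), i);  /* i == 0 */
  compute_tree_index(((size_t)0x180), i);  /* i == 1 */
  compute_tree_index(((size_t)0x200), i);  /* i == 2 */
  (void)i;
}
#endif
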
02271 /* Bit representing maximum resolved size in a treebin at i */
02272 #define bit_for_tree_index(i) \
02273    (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
02274 
02275 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
02276 #define leftshift_for_tree_index(i) \
02277    ((i == NTREEBINS-1)? 0 : \
02278     ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
02279 
02280 /* The size of the smallest chunk held in bin with index i */
02281 #define minsize_for_tree_index(i) \
02282    ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
02283    (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
02284 
02285 
02286 /* ------------------------ Operations on bin maps ----------------------- */
02287 
02288 /* bit corresponding to given index */
02289 #define idx2bit(i)              ((binmap_t)(1) << (i))
02290 
02291 /* Mark/Clear bits with given index */
02292 #define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
02293 #define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
02294 #define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
02295 
02296 #define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
02297 #define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
02298 #define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
02299 
02300 /* index corresponding to given bit */
02301 
02302 #if defined(__GNUC__) && defined(i386)
02303 #define compute_bit2idx(X, I)\
02304 {\
02305   unsigned int J;\
02306   __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
02307   I = (bindex_t)J;\
02308 }
02309 
02310 #else /* GNUC */
02311 #if  USE_BUILTIN_FFS
02312 #define compute_bit2idx(X, I) I = ffs(X)-1
02313 
02314 #else /* USE_BUILTIN_FFS */
02315 #define compute_bit2idx(X, I)\
02316 {\
02317   unsigned int Y = X - 1;\
02318   unsigned int K = Y >> (16-4) & 16;\
02319   unsigned int N = K;        Y >>= K;\
02320   N += K = Y >> (8-3) &  8;  Y >>= K;\
02321   N += K = Y >> (4-2) &  4;  Y >>= K;\
02322   N += K = Y >> (2-1) &  2;  Y >>= K;\
02323   N += K = Y >> (1-0) &  1;  Y >>= K;\
02324   I = (bindex_t)(N + Y);\
02325 }
02326 #endif /* USE_BUILTIN_FFS */
02327 #endif /* GNUC */
02328 
02329 /* isolate the least set bit of a bitmap */
02330 #define least_bit(x)         ((x) & -(x))
02331 
02332 /* mask with all bits to left of least bit of x on */
02333 #define left_bits(x)         ((x<<1) | -(x<<1))
02334 
02335 /* mask with all bits to left of or equal to least bit of x on */
02336 #define same_or_left_bits(x) ((x) | -(x))
02337 
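/*
  Sketch of the bin-scan idiom these masks support: starting from bin
  index idx, keep only the non-empty bins at or above it, isolate the
  lowest such bit, and convert it back to an index (assumes at least
  one eligible bin is non-empty).
*/
#if 0
static bindex_t next_nonempty_smallbin(binmap_t map, bindex_t idx) {
  binmap_t eligible = map & same_or_left_bits(idx2bit(idx));
  binmap_t lowest   = least_bit(eligible);
  bindex_t i;
  compute_bit2idx(lowest, i);
  return i;
}
#endif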
02338 
02339 /* ----------------------- Runtime Check Support ------------------------- */
02340 
02341 /*
02342   For security, the main invariant is that malloc/free/etc never
02343   writes to a static address other than malloc_state, unless static
02344   malloc_state itself has been corrupted, which cannot occur via
02345   malloc (because of these checks). In essence this means that we
02346   believe all pointers, sizes, maps etc held in malloc_state, but
02347   check all of those linked or offsetted from other embedded data
02348   structures.  These checks are interspersed with main code in a way
02349   that tends to minimize their run-time cost.
02350 
02351   When FOOTERS is defined, in addition to range checking, we also
02352   verify footer fields of inuse chunks, which can be used to guarantee
02353   that the mstate controlling malloc/free is intact.  This is a
02354   streamlined version of the approach described by William Robertson
02355   et al in "Run-time Detection of Heap-based Overflows" LISA'03
02356   http://www.usenix.org/events/lisa03/tech/robertson.html The footer
02357   of an inuse chunk holds the xor of its mstate and a random seed,
02358   that is checked upon calls to free() and realloc().  This is
02359   (probabilistically) unguessable from outside the program, but can be
02360   computed by any code successfully malloc'ing any chunk, so does not
02361   itself provide protection against code that has already broken
02362   security through some other means.  Unlike Robertson et al, we
02363   always dynamically check addresses of all offset chunks (previous,
02364   next, etc). This turns out to be cheaper than relying on hashes.
02365 */
02366 
02367 #if !INSECURE
02368 /* Check if address a is at least as high as any from MORECORE or MMAP */
02369 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
02370 /* Check if address of next chunk n is higher than base chunk p */
02371 #define ok_next(p, n)    ((char*)(p) < (char*)(n))
02372 /* Check if p has its cinuse bit on */
02373 #define ok_cinuse(p)     cinuse(p)
02374 /* Check if p has its pinuse bit on */
02375 #define ok_pinuse(p)     pinuse(p)
02376 
02377 #else /* !INSECURE */
02378 #define ok_address(M, a) (1)
02379 #define ok_next(b, n)    (1)
02380 #define ok_cinuse(p)     (1)
02381 #define ok_pinuse(p)     (1)
02382 #endif /* !INSECURE */
02383 
02384 #if (FOOTERS && !INSECURE)
02385 /* Check if (alleged) mstate m has expected magic field */
02386 #define ok_magic(M)      ((M)->magic == mparams.magic)
02387 #else  /* (FOOTERS && !INSECURE) */
02388 #define ok_magic(M)      (1)
02389 #endif /* (FOOTERS && !INSECURE) */
02390 
02391 
02392 /* In gcc, use __builtin_expect to minimize impact of checks */
02393 #if !INSECURE
02394 #if defined(__GNUC__) && __GNUC__ >= 3
02395 #define RTCHECK(e)  __builtin_expect(e, 1)
02396 #else /* GNUC */
02397 #define RTCHECK(e)  (e)
02398 #endif /* GNUC */
02399 #else /* !INSECURE */
02400 #define RTCHECK(e)  (1)
02401 #endif /* !INSECURE */
02402 
02403 /* macros to set up inuse chunks with or without footers */
02404 
02405 #if !FOOTERS
02406 
02407 #define mark_inuse_foot(M,p,s)
02408 
02409 /* Set cinuse bit and pinuse bit of next chunk */
02410 #define set_inuse(M,p,s)\
02411   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
02412   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
02413 
02414 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
02415 #define set_inuse_and_pinuse(M,p,s)\
02416   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
02417   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
02418 
02419 /* Set size, cinuse and pinuse bit of this chunk */
02420 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
02421   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
02422 
02423 #else /* FOOTERS */
02424 
02425 /* Set foot of inuse chunk to be xor of mstate and seed */
02426 #define mark_inuse_foot(M,p,s)\
02427   (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
02428 
02429 #define get_mstate_for(p)\
02430   ((mstate)(((mchunkptr)((char*)(p) +\
02431     (chunksize(p))))->prev_foot ^ mparams.magic))
02432 
02433 #define set_inuse(M,p,s)\
02434   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
02435   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
02436   mark_inuse_foot(M,p,s))
02437 
02438 #define set_inuse_and_pinuse(M,p,s)\
02439   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
02440   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
02441  mark_inuse_foot(M,p,s))
02442 
02443 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
02444   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
02445   mark_inuse_foot(M, p, s))
02446 
02447 #endif /* !FOOTERS */
02448 
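/*
  Sketch of how the FOOTERS machinery is consumed on the free path:
  recover the owning mstate from the xor-ed footer and verify it
  against mparams.magic before trusting anything else in the chunk.
*/
#if 0
static void check_owner(void* mem) {
  mchunkptr p = mem2chunk(mem);
  mstate fm = get_mstate_for(p);   /* prev_foot of the following chunk ^ magic */
  if (!ok_magic(fm))
    USAGE_ERROR_ACTION(fm, p);
}
#endif
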
02449 /* ---------------------------- setting mparams -------------------------- */
02450 
02451 /* Initialize mparams */
02452 static int init_mparams(void) {
02453   if (mparams.page_size == 0) {
02454     size_t s;
02455 
02456     mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
02457     mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
02458 #if MORECORE_CONTIGUOUS
02459     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
02460 #else  /* MORECORE_CONTIGUOUS */
02461     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
02462 #endif /* MORECORE_CONTIGUOUS */
02463 
02464 #if (FOOTERS && !INSECURE)
02465     {
02466 #if USE_DEV_RANDOM
02467       int fd;
02468       unsigned char buf[sizeof(size_t)];
02469       /* Try to use /dev/urandom, else fall back on using time */
02470       if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
02471           read(fd, buf, sizeof(buf)) == sizeof(buf)) {
02472         s = *((size_t *) buf);
02473         close(fd);
02474       }
02475       else
02476 #endif /* USE_DEV_RANDOM */
02477         s = (size_t)(time(0) ^ (size_t)0x55555555U);
02478 
02479       s |= (size_t)8U;    /* ensure nonzero */
02480       s &= ~(size_t)7U;   /* improve chances of fault for bad values */
02481 
02482     }
02483 #else /* (FOOTERS && !INSECURE) */
02484     s = (size_t)0x58585858U;
02485 #endif /* (FOOTERS && !INSECURE) */
02486     ACQUIRE_MAGIC_INIT_LOCK();
02487     if (mparams.magic == 0) {
02488       mparams.magic = s;
02489       /* Set up lock for main malloc area */
02490       INITIAL_LOCK(&gm->mutex);
02491       gm->mflags = mparams.default_mflags;
02492     }
02493     RELEASE_MAGIC_INIT_LOCK();
02494 
02495 #ifndef WIN32
02496     mparams.page_size = malloc_getpagesize;
02497     mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
02498                            DEFAULT_GRANULARITY : mparams.page_size);
02499 #else /* WIN32 */
02500     {
02501       SYSTEM_INFO system_info;
02502       GetSystemInfo(&system_info);
02503       mparams.page_size = system_info.dwPageSize;
02504       mparams.granularity = system_info.dwAllocationGranularity;
02505     }
02506 #endif /* WIN32 */
02507 
02508     /* Sanity-check configuration:
02509        size_t must be unsigned and as wide as pointer type.
02510        ints must be at least 4 bytes.
02511        alignment must be at least 8.
02512        Alignment, min chunk size, and page size must all be powers of 2.
02513     */
02514     if ((sizeof(size_t) != sizeof(char*)) ||
02515         (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
02516         (sizeof(int) < 4)  ||
02517         (MALLOC_ALIGNMENT < (size_t)8U) ||
02518         ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
02519         ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
02520         ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
02521         ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
02522       ABORT;
02523   }
02524   return 0;
02525 }
02526 
02527 /* support for mallopt */
02528 static int change_mparam(int param_number, int value) {
02529   size_t val = (size_t)value;
02530   init_mparams();
02531   switch(param_number) {
02532   case M_TRIM_THRESHOLD:
02533     mparams.trim_threshold = val;
02534     return 1;
02535   case M_GRANULARITY:
02536     if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
02537       mparams.granularity = val;
02538       return 1;
02539     }
02540     else
02541       return 0;
02542   case M_MMAP_THRESHOLD:
02543     mparams.mmap_threshold = val;
02544     return 1;
02545   default:
02546     return 0;
02547   }
02548 }
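/*
  Illustrative sketch (not part of the original source): change_mparam is
  normally reached through the public mallopt wrapper (dlmallopt when not
  ONLY_MSPACES).  The values below are arbitrary placeholders.
*/
#if 0 /* example only; never compiled */
static void example_tuning(void) {
  dlmallopt(M_TRIM_THRESHOLD, 1024*1024); /* trim when > 1MB of top is unused */
  dlmallopt(M_GRANULARITY,    64*1024);   /* must be a power of 2 >= page size */
  dlmallopt(M_MMAP_THRESHOLD, 256*1024);  /* mmap requests of 256KB and above */
}
#endif /* example only */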
02549 
02550 #if DEBUG
02551 /* ------------------------- Debugging Support --------------------------- */
02552 
02553 /* Check properties of any chunk, whether free, inuse, mmapped etc  */
02554 static void do_check_any_chunk(mstate m, mchunkptr p) {
02555   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
02556   assert(ok_address(m, p));
02557 }
02558 
02559 /* Check properties of top chunk */
02560 static void do_check_top_chunk(mstate m, mchunkptr p) {
02561   msegmentptr sp = segment_holding(m, (char*)p);
02562   size_t  sz = chunksize(p);
02563   assert(sp != 0);
02564   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
02565   assert(ok_address(m, p));
02566   assert(sz == m->topsize);
02567   assert(sz > 0);
02568   assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
02569   assert(pinuse(p));
02570   assert(!next_pinuse(p));
02571 }
02572 
02573 /* Check properties of (inuse) mmapped chunks */
02574 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
02575   size_t  sz = chunksize(p);
02576   size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
02577   assert(is_mmapped(p));
02578   assert(use_mmap(m));
02579   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
02580   assert(ok_address(m, p));
02581   assert(!is_small(sz));
02582   assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
02583   assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
02584   assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
02585 }
02586 
02587 /* Check properties of inuse chunks */
02588 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
02589   do_check_any_chunk(m, p);
02590   assert(cinuse(p));
02591   assert(next_pinuse(p));
02592   /* If not pinuse and not mmapped, previous chunk has OK offset */
02593   assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
02594   if (is_mmapped(p))
02595     do_check_mmapped_chunk(m, p);
02596 }
02597 
02598 /* Check properties of free chunks */
02599 static void do_check_free_chunk(mstate m, mchunkptr p) {
02600   size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
02601   mchunkptr next = chunk_plus_offset(p, sz);
02602   do_check_any_chunk(m, p);
02603   assert(!cinuse(p));
02604   assert(!next_pinuse(p));
02605   assert (!is_mmapped(p));
02606   if (p != m->dv && p != m->top) {
02607     if (sz >= MIN_CHUNK_SIZE) {
02608       assert((sz & CHUNK_ALIGN_MASK) == 0);
02609       assert(is_aligned(chunk2mem(p)));
02610       assert(next->prev_foot == sz);
02611       assert(pinuse(p));
02612       assert (next == m->top || cinuse(next));
02613       assert(p->fd->bk == p);
02614       assert(p->bk->fd == p);
02615     }
02616     else  /* markers are always of size SIZE_T_SIZE */
02617       assert(sz == SIZE_T_SIZE);
02618   }
02619 }
02620 
02621 /* Check properties of malloced chunks at the point they are malloced */
02622 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
02623   if (mem != 0) {
02624     mchunkptr p = mem2chunk(mem);
02625     size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
02626     do_check_inuse_chunk(m, p);
02627     assert((sz & CHUNK_ALIGN_MASK) == 0);
02628     assert(sz >= MIN_CHUNK_SIZE);
02629     assert(sz >= s);
02630     /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
02631     assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
02632   }
02633 }
02634 
02635 /* Check a tree and its subtrees.  */
02636 static void do_check_tree(mstate m, tchunkptr t) {
02637   tchunkptr head = 0;
02638   tchunkptr u = t;
02639   bindex_t tindex = t->index;
02640   size_t tsize = chunksize(t);
02641   bindex_t idx;
02642   compute_tree_index(tsize, idx);
02643   assert(tindex == idx);
02644   assert(tsize >= MIN_LARGE_SIZE);
02645   assert(tsize >= minsize_for_tree_index(idx));
02646   assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
02647 
02648   do { /* traverse through chain of same-sized nodes */
02649     do_check_any_chunk(m, ((mchunkptr)u));
02650     assert(u->index == tindex);
02651     assert(chunksize(u) == tsize);
02652     assert(!cinuse(u));
02653     assert(!next_pinuse(u));
02654     assert(u->fd->bk == u);
02655     assert(u->bk->fd == u);
02656     if (u->parent == 0) {
02657       assert(u->child[0] == 0);
02658       assert(u->child[1] == 0);
02659     }
02660     else {
02661       assert(head == 0); /* only one node on chain has parent */
02662       head = u;
02663       assert(u->parent != u);
02664       assert (u->parent->child[0] == u ||
02665               u->parent->child[1] == u ||
02666               *((tbinptr*)(u->parent)) == u);
02667       if (u->child[0] != 0) {
02668         assert(u->child[0]->parent == u);
02669         assert(u->child[0] != u);
02670         do_check_tree(m, u->child[0]);
02671       }
02672       if (u->child[1] != 0) {
02673         assert(u->child[1]->parent == u);
02674         assert(u->child[1] != u);
02675         do_check_tree(m, u->child[1]);
02676       }
02677       if (u->child[0] != 0 && u->child[1] != 0) {
02678         assert(chunksize(u->child[0]) < chunksize(u->child[1]));
02679       }
02680     }
02681     u = u->fd;
02682   } while (u != t);
02683   assert(head != 0);
02684 }
02685 
02686 /*  Check all the chunks in a treebin.  */
02687 static void do_check_treebin(mstate m, bindex_t i) {
02688   tbinptr* tb = treebin_at(m, i);
02689   tchunkptr t = *tb;
02690   int empty = (m->treemap & (1U << i)) == 0;
02691   if (t == 0)
02692     assert(empty);
02693   if (!empty)
02694     do_check_tree(m, t);
02695 }
02696 
02697 /*  Check all the chunks in a smallbin.  */
02698 static void do_check_smallbin(mstate m, bindex_t i) {
02699   sbinptr b = smallbin_at(m, i);
02700   mchunkptr p = b->bk;
02701   unsigned int empty = (m->smallmap & (1U << i)) == 0;
02702   if (p == b)
02703     assert(empty);
02704   if (!empty) {
02705     for (; p != b; p = p->bk) {
02706       size_t size = chunksize(p);
02707       mchunkptr q;
02708       /* each chunk claims to be free */
02709       do_check_free_chunk(m, p);
02710       /* chunk belongs in bin */
02711       assert(small_index(size) == i);
02712       assert(p->bk == b || chunksize(p->bk) == chunksize(p));
02713       /* chunk is followed by an inuse chunk */
02714       q = next_chunk(p);
02715       if (q->head != FENCEPOST_HEAD)
02716         do_check_inuse_chunk(m, q);
02717     }
02718   }
02719 }
02720 
02721 /* Find x in a bin. Used in other check functions. */
02722 static int bin_find(mstate m, mchunkptr x) {
02723   size_t size = chunksize(x);
02724   if (is_small(size)) {
02725     bindex_t sidx = small_index(size);
02726     sbinptr b = smallbin_at(m, sidx);
02727     if (smallmap_is_marked(m, sidx)) {
02728       mchunkptr p = b;
02729       do {
02730         if (p == x)
02731           return 1;
02732       } while ((p = p->fd) != b);
02733     }
02734   }
02735   else {
02736     bindex_t tidx;
02737     compute_tree_index(size, tidx);
02738     if (treemap_is_marked(m, tidx)) {
02739       tchunkptr t = *treebin_at(m, tidx);
02740       size_t sizebits = size << leftshift_for_tree_index(tidx);
02741       while (t != 0 && chunksize(t) != size) {
02742         t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
02743         sizebits <<= 1;
02744       }
02745       if (t != 0) {
02746         tchunkptr u = t;
02747         do {
02748           if (u == (tchunkptr)x)
02749             return 1;
02750         } while ((u = u->fd) != t);
02751       }
02752     }
02753   }
02754   return 0;
02755 }
02756 
02757 /* Traverse each chunk and check it; return total */
02758 static size_t traverse_and_check(mstate m) {
02759   size_t sum = 0;
02760   if (is_initialized(m)) {
02761     msegmentptr s = &m->seg;
02762     sum += m->topsize + TOP_FOOT_SIZE;
02763     while (s != 0) {
02764       mchunkptr q = align_as_chunk(s->base);
02765       mchunkptr lastq = 0;
02766       assert(pinuse(q));
02767       while (segment_holds(s, q) &&
02768              q != m->top && q->head != FENCEPOST_HEAD) {
02769         sum += chunksize(q);
02770         if (cinuse(q)) {
02771           assert(!bin_find(m, q));
02772           do_check_inuse_chunk(m, q);
02773         }
02774         else {
02775           assert(q == m->dv || bin_find(m, q));
02776           assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
02777           do_check_free_chunk(m, q);
02778         }
02779         lastq = q;
02780         q = next_chunk(q);
02781       }
02782       s = s->next;
02783     }
02784   }
02785   return sum;
02786 }
02787 
02788 /* Check all properties of malloc_state. */
02789 static void do_check_malloc_state(mstate m) {
02790   bindex_t i;
02791   size_t total;
02792   /* check bins */
02793   for (i = 0; i < NSMALLBINS; ++i)
02794     do_check_smallbin(m, i);
02795   for (i = 0; i < NTREEBINS; ++i)
02796     do_check_treebin(m, i);
02797 
02798   if (m->dvsize != 0) { /* check dv chunk */
02799     do_check_any_chunk(m, m->dv);
02800     assert(m->dvsize == chunksize(m->dv));
02801     assert(m->dvsize >= MIN_CHUNK_SIZE);
02802     assert(bin_find(m, m->dv) == 0);
02803   }
02804 
02805   if (m->top != 0) {   /* check top chunk */
02806     do_check_top_chunk(m, m->top);
02807     assert(m->topsize == chunksize(m->top));
02808     assert(m->topsize > 0);
02809     assert(bin_find(m, m->top) == 0);
02810   }
02811 
02812   total = traverse_and_check(m);
02813   assert(total <= m->footprint);
02814   assert(m->footprint <= m->max_footprint);
02815 }
02816 #endif /* DEBUG */
02817 
02818 /* ----------------------------- statistics ------------------------------ */
02819 
02820 #if !NO_MALLINFO
02821 static struct mallinfo internal_mallinfo(mstate m) {
02822   struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
02823   if (!PREACTION(m)) {
02824     check_malloc_state(m);
02825     if (is_initialized(m)) {
02826       size_t nfree = SIZE_T_ONE; /* top always free */
02827       size_t mfree = m->topsize + TOP_FOOT_SIZE;
02828       size_t sum = mfree;
02829       msegmentptr s = &m->seg;
02830       while (s != 0) {
02831         mchunkptr q = align_as_chunk(s->base);
02832         while (segment_holds(s, q) &&
02833                q != m->top && q->head != FENCEPOST_HEAD) {
02834           size_t sz = chunksize(q);
02835           sum += sz;
02836           if (!cinuse(q)) {
02837             mfree += sz;
02838             ++nfree;
02839           }
02840           q = next_chunk(q);
02841         }
02842         s = s->next;
02843       }
02844 
02845       nm.arena    = sum;
02846       nm.ordblks  = nfree;
02847       nm.hblkhd   = m->footprint - sum;
02848       nm.usmblks  = m->max_footprint;
02849       nm.uordblks = m->footprint - mfree;
02850       nm.fordblks = mfree;
02851       nm.keepcost = m->topsize;
02852     }
02853 
02854     POSTACTION(m);
02855   }
02856   return nm;
02857 }
02858 #endif /* !NO_MALLINFO */
02859 
02860 static void internal_malloc_stats(mstate m) {
02861   if (!PREACTION(m)) {
02862     size_t maxfp = 0;
02863     size_t fp = 0;
02864     size_t used = 0;
02865     check_malloc_state(m);
02866     if (is_initialized(m)) {
02867       msegmentptr s = &m->seg;
02868       maxfp = m->max_footprint;
02869       fp = m->footprint;
02870       used = fp - (m->topsize + TOP_FOOT_SIZE);
02871 
02872       while (s != 0) {
02873         mchunkptr q = align_as_chunk(s->base);
02874         while (segment_holds(s, q) &&
02875                q != m->top && q->head != FENCEPOST_HEAD) {
02876           if (!cinuse(q))
02877             used -= chunksize(q);
02878           q = next_chunk(q);
02879         }
02880         s = s->next;
02881       }
02882     }
02883 
02884     fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
02885     fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
02886     fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
02887 
02888     POSTACTION(m);
02889   }
02890 }
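/*
  Illustrative sketch (not part of the original source): both statistics
  routines are normally reached through the public wrappers defined further
  below (dlmallinfo and dlmalloc_stats, when not ONLY_MSPACES).
*/
#if 0 /* example only; never compiled */
static void example_report_usage(void) {
#if !NO_MALLINFO
  struct mallinfo mi = dlmallinfo();
  fprintf(stderr, "in use = %lu free = %lu top = %lu\n",
          (unsigned long)mi.uordblks,
          (unsigned long)mi.fordblks,
          (unsigned long)mi.keepcost);
#endif /* !NO_MALLINFO */
  dlmalloc_stats();   /* prints footprint and in-use totals to stderr */
}
#endif /* example only */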
02891 
02892 /* ----------------------- Operations on smallbins ----------------------- */
02893 
02894 /*
02895   Various forms of linking and unlinking are defined as macros.  Even
02896   the ones for trees, which are very long but have very short typical
02897   paths.  This is ugly but reduces reliance on inlining support of
02898   compilers.
02899 */
02900 
02901 /* Link a free chunk into a smallbin  */
02902 #define insert_small_chunk(M, P, S) {\
02903   bindex_t I  = small_index(S);\
02904   mchunkptr B = smallbin_at(M, I);\
02905   mchunkptr F = B;\
02906   assert(S >= MIN_CHUNK_SIZE);\
02907   if (!smallmap_is_marked(M, I))\
02908     mark_smallmap(M, I);\
02909   else if (RTCHECK(ok_address(M, B->fd)))\
02910     F = B->fd;\
02911   else {\
02912     CORRUPTION_ERROR_ACTION(M);\
02913   }\
02914   B->fd = P;\
02915   F->bk = P;\
02916   P->fd = F;\
02917   P->bk = B;\
02918 }
02919 
02920 /* Unlink a chunk from a smallbin  */
02921 #define unlink_small_chunk(M, P, S) {\
02922   mchunkptr F = P->fd;\
02923   mchunkptr B = P->bk;\
02924   bindex_t I = small_index(S);\
02925   assert(P != B);\
02926   assert(P != F);\
02927   assert(chunksize(P) == small_index2size(I));\
02928   if (F == B)\
02929     clear_smallmap(M, I);\
02930   else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
02931                    (B == smallbin_at(M,I) || ok_address(M, B)))) {\
02932     F->bk = B;\
02933     B->fd = F;\
02934   }\
02935   else {\
02936     CORRUPTION_ERROR_ACTION(M);\
02937   }\
02938 }
02939 
02940 /* Unlink the first chunk from a smallbin */
02941 #define unlink_first_small_chunk(M, B, P, I) {\
02942   mchunkptr F = P->fd;\
02943   assert(P != B);\
02944   assert(P != F);\
02945   assert(chunksize(P) == small_index2size(I));\
02946   if (B == F)\
02947     clear_smallmap(M, I);\
02948   else if (RTCHECK(ok_address(M, F))) {\
02949     B->fd = F;\
02950     F->bk = B;\
02951   }\
02952   else {\
02953     CORRUPTION_ERROR_ACTION(M);\
02954   }\
02955 }
02956 
02957 /* Replace dv node, binning the old one */
02958 /* Used only when dvsize known to be small */
02959 #define replace_dv(M, P, S) {\
02960   size_t DVS = M->dvsize;\
02961   if (DVS != 0) {\
02962     mchunkptr DV = M->dv;\
02963     assert(is_small(DVS));\
02964     insert_small_chunk(M, DV, DVS);\
02965   }\
02966   M->dvsize = S;\
02967   M->dv = P;\
02968 }
02969 
02970 /* ------------------------- Operations on trees ------------------------- */
02971 
02972 /* Insert chunk into tree */
02973 #define insert_large_chunk(M, X, S) {\
02974   tbinptr* H;\
02975   bindex_t I;\
02976   compute_tree_index(S, I);\
02977   H = treebin_at(M, I);\
02978   X->index = I;\
02979   X->child[0] = X->child[1] = 0;\
02980   if (!treemap_is_marked(M, I)) {\
02981     mark_treemap(M, I);\
02982     *H = X;\
02983     X->parent = (tchunkptr)H;\
02984     X->fd = X->bk = X;\
02985   }\
02986   else {\
02987     tchunkptr T = *H;\
02988     size_t K = S << leftshift_for_tree_index(I);\
02989     for (;;) {\
02990       if (chunksize(T) != S) {\
02991         tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
02992         K <<= 1;\
02993         if (*C != 0)\
02994           T = *C;\
02995         else if (RTCHECK(ok_address(M, C))) {\
02996           *C = X;\
02997           X->parent = T;\
02998           X->fd = X->bk = X;\
02999           break;\
03000         }\
03001         else {\
03002           CORRUPTION_ERROR_ACTION(M);\
03003           break;\
03004         }\
03005       }\
03006       else {\
03007         tchunkptr F = T->fd;\
03008         if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
03009           T->fd = F->bk = X;\
03010           X->fd = F;\
03011           X->bk = T;\
03012           X->parent = 0;\
03013           break;\
03014         }\
03015         else {\
03016           CORRUPTION_ERROR_ACTION(M);\
03017           break;\
03018         }\
03019       }\
03020     }\
03021   }\
03022 }
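/*
  Illustrative sketch (not part of the original source): in the loop above, K
  holds the chunk size pre-shifted by leftshift_for_tree_index so that the bit
  choosing left vs right at the current tree depth is the most significant
  bit; each level shifts one more size bit into position.
*/
#if 0 /* example only; never compiled */
static int example_child_index(size_t K) {
  /* 0 selects child[0] (left), 1 selects child[1] (right) */
  return (int)((K >> (SIZE_T_BITSIZE - SIZE_T_ONE)) & 1);
}
#endif /* example only */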
03023 
03024 /*
03025   Unlink steps:
03026 
03027   1. If x is a chained node, unlink it from its same-sized fd/bk links
03028      and choose its bk node as its replacement.
03029   2. If x was the last node of its size, but not a leaf node, it must
03030      be replaced with a leaf node (not merely one with an open left or
03031      right), to make sure that lefts and rights of descendants

03032      correspond properly to bit masks.  We use the rightmost descendent
03033      of x.  We could use any other leaf, but this is easy to locate and
03034      tends to counteract removal of leftmosts elsewhere, and so keeps
03035      paths shorter than minimally guaranteed.  This doesn't loop much
03036      because on average a node in a tree is near the bottom.
03037   3. If x is the base of a chain (i.e., has parent links) relink
03038      x's parent and children to x's replacement (or null if none).
03039 */
03040 
03041 #define unlink_large_chunk(M, X) {\
03042   tchunkptr XP = X->parent;\
03043   tchunkptr R;\
03044   if (X->bk != X) {\
03045     tchunkptr F = X->fd;\
03046     R = X->bk;\
03047     if (RTCHECK(ok_address(M, F))) {\
03048       F->bk = R;\
03049       R->fd = F;\
03050     }\
03051     else {\
03052       CORRUPTION_ERROR_ACTION(M);\
03053     }\
03054   }\
03055   else {\
03056     tchunkptr* RP;\
03057     if (((R = *(RP = &(X->child[1]))) != 0) ||\
03058         ((R = *(RP = &(X->child[0]))) != 0)) {\
03059       tchunkptr* CP;\
03060       while ((*(CP = &(R->child[1])) != 0) ||\
03061              (*(CP = &(R->child[0])) != 0)) {\
03062         R = *(RP = CP);\
03063       }\
03064       if (RTCHECK(ok_address(M, RP)))\
03065         *RP = 0;\
03066       else {\
03067         CORRUPTION_ERROR_ACTION(M);\
03068       }\
03069     }\
03070   }\
03071   if (XP != 0) {\
03072     tbinptr* H = treebin_at(M, X->index);\
03073     if (X == *H) {\
03074       if ((*H = R) == 0) \
03075         clear_treemap(M, X->index);\
03076     }\
03077     else if (RTCHECK(ok_address(M, XP))) {\
03078       if (XP->child[0] == X) \
03079         XP->child[0] = R;\
03080       else \
03081         XP->child[1] = R;\
03082     }\
03083     else\
03084       CORRUPTION_ERROR_ACTION(M);\
03085     if (R != 0) {\
03086       if (RTCHECK(ok_address(M, R))) {\
03087         tchunkptr C0, C1;\
03088         R->parent = XP;\
03089         if ((C0 = X->child[0]) != 0) {\
03090           if (RTCHECK(ok_address(M, C0))) {\
03091             R->child[0] = C0;\
03092             C0->parent = R;\
03093           }\
03094           else\
03095             CORRUPTION_ERROR_ACTION(M);\
03096         }\
03097         if ((C1 = X->child[1]) != 0) {\
03098           if (RTCHECK(ok_address(M, C1))) {\
03099             R->child[1] = C1;\
03100             C1->parent = R;\
03101           }\
03102           else\
03103             CORRUPTION_ERROR_ACTION(M);\
03104         }\
03105       }\
03106       else\
03107         CORRUPTION_ERROR_ACTION(M);\
03108     }\
03109   }\
03110 }
03111 
03112 /* Relays to large vs small bin operations */
03113 
03114 #define insert_chunk(M, P, S)\
03115   if (is_small(S)) insert_small_chunk(M, P, S)\
03116   else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
03117 
03118 #define unlink_chunk(M, P, S)\
03119   if (is_small(S)) unlink_small_chunk(M, P, S)\
03120   else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
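/*
  Illustrative sketch (not part of the original source): the relay macros
  above are how a coalesced free chunk is returned to the appropriate bin,
  as in the free and trim paths further below.
*/
#if 0 /* example only; never compiled */
static void example_bin_free_chunk(mstate m, mchunkptr p, size_t psize) {
  set_size_and_pinuse_of_free_chunk(p, psize); /* mark free, record size */
  insert_chunk(m, p, psize);                   /* smallbin or treebin */
  check_free_chunk(m, p);
}
#endif /* example only */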
03121 
03122 
03123 /* Relays to internal calls to malloc/free from realloc, memalign etc */
03124 
03125 #if ONLY_MSPACES
03126 #define internal_malloc(m, b) mspace_malloc(m, b)
03127 #define internal_free(m, mem) mspace_free(m,mem);
03128 #else /* ONLY_MSPACES */
03129 #if MSPACES
03130 #define internal_malloc(m, b)\
03131    (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
03132 #define internal_free(m, mem)\
03133    if (m == gm) dlfree(mem); else mspace_free(m,mem);
03134 #else /* MSPACES */
03135 #define internal_malloc(m, b) dlmalloc(b)
03136 #define internal_free(m, mem) dlfree(mem)
03137 #endif /* MSPACES */
03138 #endif /* ONLY_MSPACES */
03139 
03140 /* -----------------------  Direct-mmapping chunks ----------------------- */
03141 
03142 /*
03143   Directly mmapped chunks are set up with an offset to the start of
03144   the mmapped region stored in the prev_foot field of the chunk. This
03145   allows reconstruction of the required argument to MUNMAP when freed,
03146   and also allows adjustment of the returned chunk to meet alignment
03147   requirements (especially in memalign).  There is also enough space
03148   allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
03149   the PINUSE bit so frees can be checked.
03150 */
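/*
  Illustrative sketch (not part of the original source): when such a chunk is
  freed, the original mapping base is recovered from prev_foot, mirroring what
  the free path does before calling MUNMAP.
*/
#if 0 /* example only; never compiled */
static char* example_mmap_region_base(mchunkptr p) {
  size_t offset = p->prev_foot & ~IS_MMAPPED_BIT; /* distance into the mapping */
  return (char*)p - offset;   /* address originally returned by DIRECT_MMAP */
}
#endif /* example only */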
03151 
03152 /* Malloc using mmap */
03153 static void* mmap_alloc(mstate m, size_t nb) {
03154   size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
03155   if (mmsize > nb) {     /* Check for wrap around 0 */
03156     char* mm = (char*)(DIRECT_MMAP(mmsize));
03157     if (mm != CMFAIL) {
03158       size_t offset = align_offset(chunk2mem(mm));
03159       size_t psize = mmsize - offset - MMAP_FOOT_PAD;
03160       mchunkptr p = (mchunkptr)(mm + offset);
03161       p->prev_foot = offset | IS_MMAPPED_BIT;
03162       (p)->head = (psize|CINUSE_BIT);
03163       mark_inuse_foot(m, p, psize);
03164       chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
03165       chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
03166 
03167       if (mm < m->least_addr)
03168         m->least_addr = mm;
03169       if ((m->footprint += mmsize) > m->max_footprint)
03170         m->max_footprint = m->footprint;
03171       assert(is_aligned(chunk2mem(p)));
03172       check_mmapped_chunk(m, p);
03173       return chunk2mem(p);
03174     }
03175   }
03176   return 0;
03177 }
03178 
03179 /* Realloc using mmap */
03180 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
03181   size_t oldsize = chunksize(oldp);
03182   if (is_small(nb)) /* Can't shrink mmap regions below small size */
03183     return 0;
03184   /* Keep old chunk if big enough but not too big */
03185   if (oldsize >= nb + SIZE_T_SIZE &&
03186       (oldsize - nb) <= (mparams.granularity << 1))
03187     return oldp;
03188   else {
03189     size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
03190     size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
03191     size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
03192                                          CHUNK_ALIGN_MASK);
03193     char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
03194                                   oldmmsize, newmmsize, 1);
03195     if (cp != CMFAIL) {
03196       mchunkptr newp = (mchunkptr)(cp + offset);
03197       size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
03198       newp->head = (psize|CINUSE_BIT);
03199       mark_inuse_foot(m, newp, psize);
03200       chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
03201       chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
03202 
03203       if (cp < m->least_addr)
03204         m->least_addr = cp;
03205       if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
03206         m->max_footprint = m->footprint;
03207       check_mmapped_chunk(m, newp);
03208       return newp;
03209     }
03210   }
03211   return 0;
03212 }
03213 
03214 /* -------------------------- mspace management -------------------------- */
03215 
03216 /* Initialize top chunk and its size */
03217 static void init_top(mstate m, mchunkptr p, size_t psize) {
03218   /* Ensure alignment */
03219   size_t offset = align_offset(chunk2mem(p));
03220   p = (mchunkptr)((char*)p + offset);
03221   psize -= offset;
03222 
03223   m->top = p;
03224   m->topsize = psize;
03225   p->head = psize | PINUSE_BIT;
03226   /* set size of fake trailing chunk holding overhead space only once */
03227   chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
03228   m->trim_check = mparams.trim_threshold; /* reset on each update */
03229 }
03230 
03231 /* Initialize bins for a new mstate that is otherwise zeroed out */
03232 static void init_bins(mstate m) {
03233   /* Establish circular links for smallbins */
03234   bindex_t i;
03235   for (i = 0; i < NSMALLBINS; ++i) {
03236     sbinptr bin = smallbin_at(m,i);
03237     bin->fd = bin->bk = bin;
03238   }
03239 }
03240 
03241 #if PROCEED_ON_ERROR
03242 
03243 /* default corruption action */
03244 static void reset_on_error(mstate m) {
03245   int i;
03246   ++malloc_corruption_error_count;
03247   /* Reinitialize fields to forget about all memory */
03248   m->smallbins = m->treebins = 0;
03249   m->dvsize = m->topsize = 0;
03250   m->seg.base = 0;
03251   m->seg.size = 0;
03252   m->seg.next = 0;
03253   m->top = m->dv = 0;
03254   for (i = 0; i < NTREEBINS; ++i)
03255     *treebin_at(m, i) = 0;
03256   init_bins(m);
03257 }
03258 #endif /* PROCEED_ON_ERROR */
03259 
03260 /* Allocate a chunk at newbase; consolidate the remainder with the first chunk at oldbase. */
03261 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
03262                            size_t nb) {
03263   mchunkptr p = align_as_chunk(newbase);
03264   mchunkptr oldfirst = align_as_chunk(oldbase);
03265   size_t psize = (char*)oldfirst - (char*)p;
03266   mchunkptr q = chunk_plus_offset(p, nb);
03267   size_t qsize = psize - nb;
03268   set_size_and_pinuse_of_inuse_chunk(m, p, nb);
03269 
03270   assert((char*)oldfirst > (char*)q);
03271   assert(pinuse(oldfirst));
03272   assert(qsize >= MIN_CHUNK_SIZE);
03273 
03274   /* consolidate remainder with first chunk of old base */
03275   if (oldfirst == m->top) {
03276     size_t tsize = m->topsize += qsize;
03277     m->top = q;
03278     q->head = tsize | PINUSE_BIT;
03279     check_top_chunk(m, q);
03280   }
03281   else if (oldfirst == m->dv) {
03282     size_t dsize = m->dvsize += qsize;
03283     m->dv = q;
03284     set_size_and_pinuse_of_free_chunk(q, dsize);
03285   }
03286   else {
03287     if (!cinuse(oldfirst)) {
03288       size_t nsize = chunksize(oldfirst);
03289       unlink_chunk(m, oldfirst, nsize);
03290       oldfirst = chunk_plus_offset(oldfirst, nsize);
03291       qsize += nsize;
03292     }
03293     set_free_with_pinuse(q, qsize, oldfirst);
03294     insert_chunk(m, q, qsize);
03295     check_free_chunk(m, q);
03296   }
03297 
03298   check_malloced_chunk(m, chunk2mem(p), nb);
03299   return chunk2mem(p);
03300 }
03301 
03302 
03303 /* Add a segment to hold a new noncontiguous region */
03304 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
03305   /* Determine locations and sizes of segment, fenceposts, old top */
03306   char* old_top = (char*)m->top;
03307   msegmentptr oldsp = segment_holding(m, old_top);
03308   char* old_end = oldsp->base + oldsp->size;
03309   size_t ssize = pad_request(sizeof(struct malloc_segment));
03310   char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
03311   size_t offset = align_offset(chunk2mem(rawsp));
03312   char* asp = rawsp + offset;
03313   char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
03314   mchunkptr sp = (mchunkptr)csp;
03315   msegmentptr ss = (msegmentptr)(chunk2mem(sp));
03316   mchunkptr tnext = chunk_plus_offset(sp, ssize);
03317   mchunkptr p = tnext;
03318   int nfences = 0;
03319 
03320   /* reset top to new space */
03321   init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
03322 
03323   /* Set up segment record */
03324   assert(is_aligned(ss));
03325   set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
03326   *ss = m->seg; /* Push current record */
03327   m->seg.base = tbase;
03328   m->seg.size = tsize;
03329   set_segment_flags(&m->seg, mmapped);
03330   m->seg.next = ss;
03331 
03332   /* Insert trailing fenceposts */
03333   for (;;) {
03334     mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
03335     p->head = FENCEPOST_HEAD;
03336     ++nfences;
03337     if ((char*)(&(nextp->head)) < old_end)
03338       p = nextp;
03339     else
03340       break;
03341   }
03342   assert(nfences >= 2);
03343 
03344   /* Insert the rest of old top into a bin as an ordinary free chunk */
03345   if (csp != old_top) {
03346     mchunkptr q = (mchunkptr)old_top;
03347     size_t psize = csp - old_top;
03348     mchunkptr tn = chunk_plus_offset(q, psize);
03349     set_free_with_pinuse(q, psize, tn);
03350     insert_chunk(m, q, psize);
03351   }
03352 
03353   check_top_chunk(m, m->top);
03354 }
03355 
03356 /* -------------------------- System allocation -------------------------- */
03357 
03358 /* Get memory from system using MORECORE or MMAP */
03359 static void* sys_alloc(mstate m, size_t nb) {
03360   char* tbase = CMFAIL;
03361   size_t tsize = 0;
03362   flag_t mmap_flag = 0;
03363 
03364   init_mparams();
03365 
03366   /* Directly map large chunks */
03367   if (use_mmap(m) && nb >= mparams.mmap_threshold) {
03368     void* mem = mmap_alloc(m, nb);
03369     if (mem != 0)
03370       return mem;
03371   }
03372 
03373   /*
03374     Try getting memory in any of three ways (in most-preferred to
03375     least-preferred order):
03376     1. A call to MORECORE that can normally contiguously extend memory.
03377        (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
03378        main space is mmapped or a previous contiguous call failed)
03379     2. A call to MMAP new space (disabled if not HAVE_MMAP).
03380        Note that under the default settings, if MORECORE is unable to
03381        fulfill a request, and HAVE_MMAP is true, then mmap is
03382        used as a noncontiguous system allocator. This is a useful backup
03383        strategy for systems with holes in address spaces -- in this case
03384        sbrk cannot contiguously expand the heap, but mmap may be able to
03385        find space.
03386     3. A call to MORECORE that cannot usually contiguously extend memory.
03387        (disabled if not HAVE_MORECORE)
03388   */
03389 
03390   if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
03391     char* br = CMFAIL;
03392     msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
03393     size_t asize = 0;
03394     ACQUIRE_MORECORE_LOCK();
03395 
03396     if (ss == 0) {  /* First time through or recovery */
03397       char* base = (char*)CALL_MORECORE(0);
03398       if (base != CMFAIL) {
03399         asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
03400         /* Adjust to end on a page boundary */
03401         if (!is_page_aligned(base))
03402           asize += (page_align((size_t)base) - (size_t)base);
03403         /* Can't call MORECORE if size is negative when treated as signed */
03404         if (asize < HALF_MAX_SIZE_T &&
03405             (br = (char*)(CALL_MORECORE(asize))) == base) {
03406           tbase = base;
03407           tsize = asize;
03408         }
03409       }
03410     }
03411     else {
03412       /* Subtract out existing available top space from MORECORE request. */
03413       asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
03414       /* Use mem here only if it did contiguously extend old space */
03415       if (asize < HALF_MAX_SIZE_T &&
03416           (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
03417         tbase = br;
03418         tsize = asize;
03419       }
03420     }
03421 
03422     if (tbase == CMFAIL) {    /* Cope with partial failure */
03423       if (br != CMFAIL) {    /* Try to use/extend the space we did get */
03424         if (asize < HALF_MAX_SIZE_T &&
03425             asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
03426           size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
03427           if (esize < HALF_MAX_SIZE_T) {
03428             char* end = (char*)CALL_MORECORE(esize);
03429             if (end != CMFAIL)
03430               asize += esize;
03431             else {            /* Can't use; try to release */
03432               (void)CALL_MORECORE(-asize);
03433               br = CMFAIL;
03434             }
03435           }
03436         }
03437       }
03438       if (br != CMFAIL) {    /* Use the space we did get */
03439         tbase = br;
03440         tsize = asize;
03441       }
03442       else
03443         disable_contiguous(m); /* Don't try contiguous path in the future */
03444     }
03445 
03446     RELEASE_MORECORE_LOCK();
03447   }
03448 
03449   if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
03450     size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
03451     size_t rsize = granularity_align(req);
03452     if (rsize > nb) { /* Fail if wraps around zero */
03453       char* mp = (char*)(CALL_MMAP(rsize));
03454       if (mp != CMFAIL) {
03455         tbase = mp;
03456         tsize = rsize;
03457         mmap_flag = IS_MMAPPED_BIT;
03458       }
03459     }
03460   }
03461 
03462   if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
03463     size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
03464     if (asize < HALF_MAX_SIZE_T) {
03465       char* br = CMFAIL;
03466       char* end = CMFAIL;
03467       ACQUIRE_MORECORE_LOCK();
03468       br = (char*)(CALL_MORECORE(asize));
03469       end = (char*)(CALL_MORECORE(0));
03470       RELEASE_MORECORE_LOCK();
03471       if (br != CMFAIL && end != CMFAIL && br < end) {
03472         size_t ssize = end - br;
03473         if (ssize > nb + TOP_FOOT_SIZE) {
03474           tbase = br;
03475           tsize = ssize;
03476         }
03477       }
03478     }
03479   }
03480 
03481   if (tbase != CMFAIL) {
03482 
03483     if ((m->footprint += tsize) > m->max_footprint)
03484       m->max_footprint = m->footprint;
03485 
03486     if (!is_initialized(m)) { /* first-time initialization */
03487       m->seg.base = m->least_addr = tbase;
03488       m->seg.size = tsize;
03489       set_segment_flags(&m->seg, mmap_flag);
03490       m->magic = mparams.magic;
03491       init_bins(m);
03492       if (is_global(m)) 
03493         init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
03494       else {
03495         /* Offset top by embedded malloc_state */
03496         mchunkptr mn = next_chunk(mem2chunk(m));
03497         init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
03498       }
03499     }
03500 
03501     else {
03502       /* Try to merge with an existing segment */
03503       msegmentptr sp = &m->seg;
03504       while (sp != 0 && tbase != sp->base + sp->size)
03505         sp = sp->next;
03506       if (sp != 0 &&
03507           !is_extern_segment(sp) &&
03508          check_segment_merge(sp, tbase, tsize) &&
03509           (get_segment_flags(sp) & IS_MMAPPED_BIT) == mmap_flag &&
03510           segment_holds(sp, m->top)) { /* append */
03511         sp->size += tsize;
03512         init_top(m, m->top, m->topsize + tsize);
03513       }
03514       else {
03515         if (tbase < m->least_addr)
03516           m->least_addr = tbase;
03517         sp = &m->seg;
03518         while (sp != 0 && sp->base != tbase + tsize)
03519           sp = sp->next;
03520         if (sp != 0 &&
03521             !is_extern_segment(sp) &&
03522            check_segment_merge(sp, tbase, tsize) &&
03523             (get_segment_flags(sp) & IS_MMAPPED_BIT) == mmap_flag) {
03524           char* oldbase = sp->base;
03525           sp->base = tbase;
03526           sp->size += tsize;
03527           return prepend_alloc(m, tbase, oldbase, nb);
03528         }
03529         else
03530           add_segment(m, tbase, tsize, mmap_flag);
03531       }
03532     }
03533 
03534     if (nb < m->topsize) { /* Allocate from new or extended top space */
03535       size_t rsize = m->topsize -= nb;
03536       mchunkptr p = m->top;
03537       mchunkptr r = m->top = chunk_plus_offset(p, nb);
03538       r->head = rsize | PINUSE_BIT;
03539       set_size_and_pinuse_of_inuse_chunk(m, p, nb);
03540       check_top_chunk(m, m->top);
03541       check_malloced_chunk(m, chunk2mem(p), nb);
03542       return chunk2mem(p);
03543     }
03544   }
03545 
03546   MALLOC_FAILURE_ACTION;
03547   return 0;
03548 }
03549 
03550 /* -----------------------  system deallocation -------------------------- */
03551 
03552 /* Unmap and unlink any mmapped segments that don't contain used chunks */
03553 static size_t release_unused_segments(mstate m) {
03554   size_t released = 0;
03555   msegmentptr pred = &m->seg;
03556   msegmentptr sp = pred->next;
03557   while (sp != 0) {
03558     char* base = sp->base;
03559     size_t size = sp->size;
03560     msegmentptr next = sp->next;
03561     if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
03562       mchunkptr p = align_as_chunk(base);
03563       size_t psize = chunksize(p);
03564       /* Can unmap if first chunk holds entire segment and not pinned */
03565       if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
03566         tchunkptr tp = (tchunkptr)p;
03567         assert(segment_holds(sp, (char*)sp));
03568         if (p == m->dv) {
03569           m->dv = 0;
03570           m->dvsize = 0;
03571         }
03572         else {
03573           unlink_large_chunk(m, tp);
03574         }
03575         if (CALL_MUNMAP(base, size) == 0) {
03576           released += size;
03577           m->footprint -= size;
03578           /* unlink obsoleted record */
03579           sp = pred;
03580           sp->next = next;
03581         }
03582         else { /* back out if cannot unmap */
03583           insert_large_chunk(m, tp, psize);
03584         }
03585       }
03586     }
03587     pred = sp;
03588     sp = next;
03589   }
03590   return released;
03591 }
03592 
03593 static int sys_trim(mstate m, size_t pad) {
03594   size_t released = 0;
03595   if (pad < MAX_REQUEST && is_initialized(m)) {
03596     pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
03597 
03598     if (m->topsize > pad) {
03599       /* Shrink top space in granularity-size units, keeping at least one */
03600       size_t unit = mparams.granularity;
03601       size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
03602                       SIZE_T_ONE) * unit;
03603       msegmentptr sp = segment_holding(m, (char*)m->top);
03604 
03605       if (!is_extern_segment(sp)) {
03606         if (is_mmapped_segment(sp)) {
03607           if (HAVE_MMAP &&
03608               sp->size >= extra &&
03609               !has_segment_link(m, sp)) { /* can't shrink if pinned */
03610             size_t newsize = sp->size - extra;
03611             /* Prefer mremap, fall back to munmap */
03612             if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
03613                 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
03614               released = extra;
03615             }
03616           }
03617         }
03618         else if (HAVE_MORECORE) {
03619           if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
03620             extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
03621           ACQUIRE_MORECORE_LOCK();
03622           {
03623             /* Make sure end of memory is where we last set it. */
03624             char* old_br = (char*)(CALL_MORECORE(0));
03625             if (old_br == sp->base + sp->size) {
03626               char* rel_br = (char*)(CALL_MORECORE(-extra));
03627               char* new_br = (char*)(CALL_MORECORE(0));
03628               if (rel_br != CMFAIL && new_br < old_br)
03629                 released = old_br - new_br;
03630             }
03631           }
03632           RELEASE_MORECORE_LOCK();
03633         }
03634       }
03635 
03636       if (released != 0) {
03637         sp->size -= released;
03638         m->footprint -= released;
03639         init_top(m, m->top, m->topsize - released);
03640         check_top_chunk(m, m->top);
03641       }
03642     }
03643 
03644     /* Unmap any unused mmapped segments */
03645     if (HAVE_MMAP) 
03646       released += release_unused_segments(m);
03647 
03648     /* On failure, disable autotrim to avoid repeated failed future calls */
03649     if (released == 0)
03650       m->trim_check = MAX_SIZE_T;
03651   }
03652 
03653   return (released != 0)? 1 : 0;
03654 }
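/*
  Illustrative sketch (not part of the original source): sys_trim is normally
  reached through the public dlmalloc_trim wrapper (when not ONLY_MSPACES), or
  automatically from free when the top size exceeds trim_check.
*/
#if 0 /* example only; never compiled */
static int example_trim_all(void) {
  /* Ask the allocator to give back as much top/segment space as it can,
     keeping no extra pad; returns 1 if anything was released. */
  return dlmalloc_trim((size_t)0);
}
#endif /* example only */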
03655 
03656 /* ---------------------------- malloc support --------------------------- */
03657 
03658 /* allocate a large request from the best fitting chunk in a treebin */
03659 static void* tmalloc_large(mstate m, size_t nb) {
03660   tchunkptr v = 0;
03661   size_t rsize = -nb; /* Unsigned negation */
03662   tchunkptr t;
03663   bindex_t idx;
03664   compute_tree_index(nb, idx);
03665 
03666   if ((t = *treebin_at(m, idx)) != 0) {
03667     /* Traverse tree for this bin looking for node with size == nb */
03668     size_t sizebits = nb << leftshift_for_tree_index(idx);
03669     tchunkptr rst = 0;  /* The deepest untaken right subtree */
03670     for (;;) {
03671       tchunkptr rt;
03672       size_t trem = chunksize(t) - nb;
03673       if (trem < rsize) {
03674         v = t;
03675         if ((rsize = trem) == 0)
03676           break;
03677       }
03678       rt = t->child[1];
03679       t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
03680       if (rt != 0 && rt != t)
03681         rst = rt;
03682       if (t == 0) {
03683         t = rst; /* set t to least subtree holding sizes > nb */
03684         break;
03685       }
03686       sizebits <<= 1;
03687     }
03688   }
03689 
03690   if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
03691     binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
03692     if (leftbits != 0) {
03693       bindex_t i;
03694       binmap_t leastbit = least_bit(leftbits);
03695       compute_bit2idx(leastbit, i);
03696       t = *treebin_at(m, i);
03697     }
03698   }
03699 
03700   while (t != 0) { /* find smallest of tree or subtree */
03701     size_t trem = chunksize(t) - nb;
03702     if (trem < rsize) {
03703       rsize = trem;
03704       v = t;
03705     }
03706     t = leftmost_child(t);
03707   }
03708 
03709   /*  If dv is a better fit, return 0 so malloc will use it */
03710   if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
03711     if (RTCHECK(ok_address(m, v))) { /* split */
03712       mchunkptr r = chunk_plus_offset(v, nb);
03713       assert(chunksize(v) == rsize + nb);
03714       if (RTCHECK(ok_next(v, r))) {
03715         unlink_large_chunk(m, v);
03716         if (rsize < MIN_CHUNK_SIZE)
03717           set_inuse_and_pinuse(m, v, (rsize + nb));
03718         else {
03719           set_size_and_pinuse_of_inuse_chunk(m, v, nb);
03720           set_size_and_pinuse_of_free_chunk(r, rsize);
03721           insert_chunk(m, r, rsize);
03722         }
03723         return chunk2mem(v);
03724       }
03725     }
03726     CORRUPTION_ERROR_ACTION(m);
03727   }
03728   return 0;
03729 }
03730 
03731 /* allocate a small request from the best fitting chunk in a treebin */
03732 static void* tmalloc_small(mstate m, size_t nb) {
03733   tchunkptr t, v;
03734   size_t rsize;
03735   bindex_t i;
03736   binmap_t leastbit = least_bit(m->treemap);
03737   compute_bit2idx(leastbit, i);
03738 
03739   v = t = *treebin_at(m, i);
03740   rsize = chunksize(t) - nb;
03741 
03742   while ((t = leftmost_child(t)) != 0) {
03743     size_t trem = chunksize(t) - nb;
03744     if (trem < rsize) {
03745       rsize = trem;
03746       v = t;
03747     }
03748   }
03749 
03750   if (RTCHECK(ok_address(m, v))) {
03751     mchunkptr r = chunk_plus_offset(v, nb);
03752     assert(chunksize(v) == rsize + nb);
03753     if (RTCHECK(ok_next(v, r))) {
03754       unlink_large_chunk(m, v);
03755       if (rsize < MIN_CHUNK_SIZE)
03756         set_inuse_and_pinuse(m, v, (rsize + nb));
03757       else {
03758         set_size_and_pinuse_of_inuse_chunk(m, v, nb);
03759         set_size_and_pinuse_of_free_chunk(r, rsize);
03760         replace_dv(m, r, rsize);
03761       }
03762       return chunk2mem(v);
03763     }
03764   }
03765 
03766   CORRUPTION_ERROR_ACTION(m);
03767   return 0;
03768 }
03769 
03770 /* --------------------------- realloc support --------------------------- */
03771 
03772 static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
03773   if (bytes >= MAX_REQUEST) {
03774     MALLOC_FAILURE_ACTION;
03775     return 0;
03776   }
03777   if (!PREACTION(m)) {
03778     mchunkptr oldp = mem2chunk(oldmem);
03779     size_t oldsize = chunksize(oldp);
03780     mchunkptr next = chunk_plus_offset(oldp, oldsize);
03781     mchunkptr newp = 0;
03782     void* extra = 0;
03783 
03784     /* Try to either shrink or extend into top. Else malloc-copy-free */
03785 
03786     if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
03787                 ok_next(oldp, next) && ok_pinuse(next))) {
03788       size_t nb = request2size(bytes);
03789       if (is_mmapped(oldp))
03790         newp = mmap_resize(m, oldp, nb);
03791       else if (oldsize >= nb) { /* already big enough */
03792         size_t rsize = oldsize - nb;
03793         newp = oldp;
03794         if (rsize >= MIN_CHUNK_SIZE) {
03795           mchunkptr remainder = chunk_plus_offset(newp, nb);
03796           set_inuse(m, newp, nb);
03797           set_inuse(m, remainder, rsize);
03798           extra = chunk2mem(remainder);
03799         }
03800       }
03801       else if (next == m->top && oldsize + m->topsize > nb) {
03802         /* Expand into top */
03803         size_t newsize = oldsize + m->topsize;
03804         size_t newtopsize = newsize - nb;
03805         mchunkptr newtop = chunk_plus_offset(oldp, nb);
03806         set_inuse(m, oldp, nb);
03807         newtop->head = newtopsize |PINUSE_BIT;
03808         m->top = newtop;
03809         m->topsize = newtopsize;
03810         newp = oldp;
03811       }
03812     }
03813     else {
03814       USAGE_ERROR_ACTION(m, oldmem);
03815       POSTACTION(m);
03816       return 0;
03817     }
03818 
03819     POSTACTION(m);
03820 
03821     if (newp != 0) {
03822       if (extra != 0) {
03823         internal_free(m, extra);
03824       }
03825       check_inuse_chunk(m, newp);
03826       return chunk2mem(newp);
03827     }
03828     else {
03829       void* newmem = internal_malloc(m, bytes);
03830       if (newmem != 0) {
03831         size_t oc = oldsize - overhead_for(oldp);
03832         memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
03833         internal_free(m, oldmem);
03834       }
03835       return newmem;
03836     }
03837   }
03838   return 0;
03839 }
03840 
03841 /* --------------------------- memalign support -------------------------- */
03842 
03843 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
03844   if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
03845     return internal_malloc(m, bytes);
03846   if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
03847     alignment = MIN_CHUNK_SIZE;
03848   if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
03849     size_t a = MALLOC_ALIGNMENT << 1;
03850     while (a < alignment) a <<= 1;
03851     alignment = a;
03852   }
03853   
03854   if (bytes >= MAX_REQUEST - alignment) {
03855     if (m != 0)  { /* Test isn't needed but avoids compiler warning */
03856       MALLOC_FAILURE_ACTION;
03857     }
03858   }
03859   else {
03860     size_t nb = request2size(bytes);
03861     size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
03862     char* mem = (char*)internal_malloc(m, req);
03863     if (mem != 0) {
03864       void* leader = 0;
03865       void* trailer = 0;
03866       mchunkptr p = mem2chunk(mem);
03867 
03868       if (PREACTION(m)) return 0;
03869       if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
03870         /*
03871           Find an aligned spot inside chunk.  Since we need to give
03872           back leading space in a chunk of at least MIN_CHUNK_SIZE, if
03873           the first calculation places us at a spot with less than
03874           MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
03875           We've allocated enough total room so that this is always
03876           possible.
03877         */
03878         char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
03879                                                        alignment -
03880                                                        SIZE_T_ONE)) &
03881                                              -alignment));
03882         char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
03883           br : br+alignment;
03884         mchunkptr newp = (mchunkptr)pos;
03885         size_t leadsize = pos - (char*)(p);
03886         size_t newsize = chunksize(p) - leadsize;
03887 
03888         if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
03889           newp->prev_foot = p->prev_foot + leadsize;
03890           newp->head = (newsize|CINUSE_BIT);
03891         }
03892         else { /* Otherwise, give back leader, use the rest */
03893           set_inuse(m, newp, newsize);
03894           set_inuse(m, p, leadsize);
03895           leader = chunk2mem(p);
03896         }
03897         p = newp;
03898       }
03899 
03900       /* Give back spare room at the end */
03901       if (!is_mmapped(p)) {
03902         size_t size = chunksize(p);
03903         if (size > nb + MIN_CHUNK_SIZE) {
03904           size_t remainder_size = size - nb;
03905           mchunkptr remainder = chunk_plus_offset(p, nb);
03906           set_inuse(m, p, nb);
03907           set_inuse(m, remainder, remainder_size);
03908           trailer = chunk2mem(remainder);
03909         }
03910       }
03911 
03912       assert (chunksize(p) >= nb);
03913       assert((((size_t)(chunk2mem(p))) % alignment) == 0);
03914       check_inuse_chunk(m, p);
03915       POSTACTION(m);
03916       if (leader != 0) {
03917         internal_free(m, leader);
03918       }
03919       if (trailer != 0) {
03920         internal_free(m, trailer);
03921       }
03922       return chunk2mem(p);
03923     }
03924   }
03925   return 0;
03926 }
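/*
  Illustrative sketch (not part of the original source): internal_memalign is
  normally reached through the public dlmemalign wrapper (when not
  ONLY_MSPACES); the 4096 below is just a placeholder alignment.
*/
#if 0 /* example only; never compiled */
static void* example_page_aligned_alloc(size_t bytes) {
  /* alignment is rounded up to a power of 2 of at least MIN_CHUNK_SIZE */
  return dlmemalign((size_t)4096, bytes);
}
#endif /* example only */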
03927 
03928 /* ------------------------ comalloc/coalloc support --------------------- */
03929 
03930 static void** ialloc(mstate m,
03931                      size_t n_elements,
03932                      size_t* sizes,
03933                      int opts,
03934                      void* chunks[]) {
03935   /*
03936     This provides common support for independent_X routines, handling
03937     all of the combinations that can result.
03938 
03939     The opts arg has:
03940     bit 0 set if all elements are same size (using sizes[0])
03941     bit 1 set if elements should be zeroed
03942   */
03943 
03944   size_t    element_size;   /* chunksize of each element, if all same */
03945   size_t    contents_size;  /* total size of elements */
03946   size_t    array_size;     /* request size of pointer array */
03947   void*     mem;            /* malloced aggregate space */
03948   mchunkptr p;              /* corresponding chunk */
03949   size_t    remainder_size; /* remaining bytes while splitting */
03950   void**    marray;         /* either "chunks" or malloced ptr array */
03951   mchunkptr array_chunk;    /* chunk for malloced ptr array */
03952   flag_t    was_enabled;    /* to disable mmap */
03953   size_t    size;
03954   size_t    i;
03955 
03956   /* compute array length, if needed */
03957   if (chunks != 0) {
03958     if (n_elements == 0)
03959       return chunks; /* nothing to do */
03960     marray = chunks;
03961     array_size = 0;
03962   }
03963   else {
03964     /* if empty req, must still return chunk representing empty array */
03965     if (n_elements == 0)
03966       return (void**)internal_malloc(m, 0);
03967     marray = 0;
03968     array_size = request2size(n_elements * (sizeof(void*)));
03969   }
03970 
03971   /* compute total element size */
03972   if (opts & 0x1) { /* all-same-size */
03973     element_size = request2size(*sizes);
03974     contents_size = n_elements * element_size;
03975   }
03976   else { /* add up all the sizes */
03977     element_size = 0;
03978     contents_size = 0;
03979     for (i = 0; i != n_elements; ++i)
03980       contents_size += request2size(sizes[i]);
03981   }
03982 
03983   size = contents_size + array_size;
03984 
03985   /*
03986      Allocate the aggregate chunk.  First disable direct-mmapping so
03987      malloc won't use it, since we would not be able to later
03988      free/realloc space internal to a segregated mmap region.
03989   */
03990   was_enabled = use_mmap(m);
03991   disable_mmap(m);
03992   mem = internal_malloc(m, size - CHUNK_OVERHEAD);
03993   if (was_enabled)
03994     enable_mmap(m);
03995   if (mem == 0)
03996     return 0;
03997 
03998   if (PREACTION(m)) return 0;
03999   p = mem2chunk(mem);
04000   remainder_size = chunksize(p);
04001 
04002   assert(!is_mmapped(p));
04003 
04004   if (opts & 0x2) {       /* optionally clear the elements */
04005     memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
04006   }
04007 
04008   /* If not provided, allocate the pointer array as final part of chunk */
04009   if (marray == 0) {
04010     size_t  array_chunk_size;
04011     array_chunk = chunk_plus_offset(p, contents_size);
04012     array_chunk_size = remainder_size - contents_size;
04013     marray = (void**) (chunk2mem(array_chunk));
04014     set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
04015     remainder_size = contents_size;
04016   }
04017 
04018   /* split out elements */
04019   for (i = 0; ; ++i) {
04020     marray[i] = chunk2mem(p);
04021     if (i != n_elements-1) {
04022       if (element_size != 0)
04023         size = element_size;
04024       else
04025         size = request2size(sizes[i]);
04026       remainder_size -= size;
04027       set_size_and_pinuse_of_inuse_chunk(m, p, size);
04028       p = chunk_plus_offset(p, size);
04029     }
04030     else { /* the final element absorbs any overallocation slop */
04031       set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
04032       break;
04033     }
04034   }
04035 
04036 #if DEBUG
04037   if (marray != chunks) {
04038     /* final element must have exactly exhausted chunk */
04039     if (element_size != 0) {
04040       assert(remainder_size == element_size);
04041     }
04042     else {
04043       assert(remainder_size == request2size(sizes[i]));
04044     }
04045     check_inuse_chunk(m, mem2chunk(marray));
04046   }
04047   for (i = 0; i != n_elements; ++i)
04048     check_inuse_chunk(m, mem2chunk(marray[i]));
04049 
04050 #endif /* DEBUG */
04051 
04052   POSTACTION(m);
04053   return marray;
04054 }
04055 
04056 
04057 /* -------------------------- public routines ---------------------------- */
04058 
04059 #if !ONLY_MSPACES
04060 
04061 void* dlmalloc(size_t bytes) {
04062   /*
04063      Basic algorithm:
04064      If a small request (< 256 bytes minus per-chunk overhead):
04065        1. If one exists, use a remainderless chunk in associated smallbin.
04066           (Remainderless means that there are too few excess bytes to
04067           represent as a chunk.)
04068        2. If it is big enough, use the dv chunk, which is normally the
04069           chunk adjacent to the one used for the most recent small request.
04070        3. If one exists, split the smallest available chunk in a bin,
04071           saving remainder in dv.
04072        4. If it is big enough, use the top chunk.
04073        5. If available, get memory from system and use it
04074      Otherwise, for a large request:
04075        1. Find the smallest available binned chunk that fits, and use it
04076           if it is better fitting than dv chunk, splitting if necessary.
04077        2. If better fitting than any binned chunk, use the dv chunk.
04078        3. If it is big enough, use the top chunk.
04079        4. If request size >= mmap threshold, try to directly mmap this chunk.
04080        5. If available, get memory from system and use it
04081 
04082      The ugly gotos here ensure that postaction occurs along all paths.
04083   */
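  /* Illustrative walk-through (not part of the original source): assuming
     4-byte size_t, no FOOTERS and the default 8-byte alignment, a request
     for 20 bytes yields nb = pad_request(20) = 24 and small_index(24) = 3,
     so step 1 looks for a remainderless 24-byte chunk in smallbin 3. */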
04084 
04085   if (!PREACTION(gm)) {
04086     void* mem;
04087     size_t nb;
04088     if (bytes <= MAX_SMALL_REQUEST) {
04089       bindex_t idx;
04090       binmap_t smallbits;
04091       nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
04092       idx = small_index(nb);
04093       smallbits = gm->smallmap >> idx;
04094 
04095       if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
04096         mchunkptr b, p;
04097         idx += ~smallbits & 1;       /* Uses next bin if idx empty */
04098         b = smallbin_at(gm, idx);
04099         p = b->fd;
04100         assert(chunksize(p) == small_index2size(idx));
04101         unlink_first_small_chunk(gm, b, p, idx);
04102         set_inuse_and_pinuse(gm, p, small_index2size(idx));
04103         mem = chunk2mem(p);
04104         check_malloced_chunk(gm, mem, nb);
04105         goto postaction;
04106       }
04107 
04108       else if (nb > gm->dvsize) {
04109         if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
04110           mchunkptr b, p, r;
04111           size_t rsize;
04112           bindex_t i;
04113           binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
04114           binmap_t leastbit = least_bit(leftbits);
04115           compute_bit2idx(leastbit, i);
04116           b = smallbin_at(gm, i);
04117           p = b->fd;
04118           assert(chunksize(p) == small_index2size(i));
04119           unlink_first_small_chunk(gm, b, p, i);
04120           rsize = small_index2size(i) - nb;
04121           /* Fit here cannot be remainderless if 4byte sizes */
04122           if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
04123             set_inuse_and_pinuse(gm, p, small_index2size(i));
04124           else {
04125             set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
04126             r = chunk_plus_offset(p, nb);
04127             set_size_and_pinuse_of_free_chunk(r, rsize);
04128             replace_dv(gm, r, rsize);
04129           }
04130           mem = chunk2mem(p);
04131           check_malloced_chunk(gm, mem, nb);
04132           goto postaction;
04133         }
04134 
04135         else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
04136           check_malloced_chunk(gm, mem, nb);
04137           goto postaction;
04138         }
04139       }
04140     }
04141     else if (bytes >= MAX_REQUEST)
04142       nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
04143     else {
04144       nb = pad_request(bytes);
04145       if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
04146         check_malloced_chunk(gm, mem, nb);
04147         goto postaction;
04148       }
04149     }
04150 
04151     if (nb <= gm->dvsize) {
04152       size_t rsize = gm->dvsize - nb;
04153       mchunkptr p = gm->dv;
04154       if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
04155         mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
04156         gm->dvsize = rsize;
04157         set_size_and_pinuse_of_free_chunk(r, rsize);
04158         set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
04159       }
04160       else { /* exhaust dv */
04161         size_t dvs = gm->dvsize;
04162         gm->dvsize = 0;
04163         gm->dv = 0;
04164         set_inuse_and_pinuse(gm, p, dvs);
04165       }
04166       mem = chunk2mem(p);
04167       check_malloced_chunk(gm, mem, nb);
04168       goto postaction;
04169     }
04170 
04171     else if (nb < gm->topsize) { /* Split top */
04172       size_t rsize = gm->topsize -= nb;
04173       mchunkptr p = gm->top;
04174       mchunkptr r = gm->top = chunk_plus_offset(p, nb);
04175       r->head = rsize | PINUSE_BIT;
04176       set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
04177       mem = chunk2mem(p);
04178       check_top_chunk(gm, gm->top);
04179       check_malloced_chunk(gm, mem, nb);
04180       goto postaction;
04181     }
04182 
04183     mem = sys_alloc(gm, nb);
04184 
04185   postaction:
04186     POSTACTION(gm);
04187     return mem;
04188   }
04189 
04190   return 0;
04191 }
04192 
04193 void dlfree(void* mem) {
04194   /*
04195      Consolidate freed chunks with preceding or succeeding bordering
04196      free chunks, if they exist, and then place in a bin.  Intermixed
04197      with special cases for top, dv, mmapped chunks, and usage errors.
04198   */
04199 
04200   if (mem != 0) {
04201     mchunkptr p  = mem2chunk(mem);
04202 #if FOOTERS
04203     mstate fm = get_mstate_for(p);
04204     if (!ok_magic(fm)) {
04205       USAGE_ERROR_ACTION(fm, p);
04206       return;
04207     }
04208 #else /* FOOTERS */
04209 #define fm gm
04210 #endif /* FOOTERS */
04211     if (!PREACTION(fm)) {
04212       check_inuse_chunk(fm, p);
04213       if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
04214         size_t psize = chunksize(p);
04215         mchunkptr next = chunk_plus_offset(p, psize);
04216         if (!pinuse(p)) {
04217           size_t prevsize = p->prev_foot;
04218           if ((prevsize & IS_MMAPPED_BIT) != 0) {
04219             prevsize &= ~IS_MMAPPED_BIT;
04220             psize += prevsize + MMAP_FOOT_PAD;
04221             if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
04222               fm->footprint -= psize;
04223             goto postaction;
04224           }
04225           else {
04226             mchunkptr prev = chunk_minus_offset(p, prevsize);
04227             psize += prevsize;
04228             p = prev;
04229             if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
04230               if (p != fm->dv) {
04231                 unlink_chunk(fm, p, prevsize);
04232               }
04233               else if ((next->head & INUSE_BITS) == INUSE_BITS) {
04234                 fm->dvsize = psize;
04235                 set_free_with_pinuse(p, psize, next);
04236                 goto postaction;
04237               }
04238             }
04239             else
04240               goto erroraction;
04241           }
04242         }
04243 
04244         if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
04245           if (!cinuse(next)) {  /* consolidate forward */
04246             if (next == fm->top) {
04247               size_t tsize = fm->topsize += psize;
04248               fm->top = p;
04249               p->head = tsize | PINUSE_BIT;
04250               if (p == fm->dv) {
04251                 fm->dv = 0;
04252                 fm->dvsize = 0;
04253               }
04254               if (should_trim(fm, tsize))
04255                 sys_trim(fm, 0);
04256               goto postaction;
04257             }
04258             else if (next == fm->dv) {
04259               size_t dsize = fm->dvsize += psize;
04260               fm->dv = p;
04261               set_size_and_pinuse_of_free_chunk(p, dsize);
04262               goto postaction;
04263             }
04264             else {
04265               size_t nsize = chunksize(next);
04266               psize += nsize;
04267               unlink_chunk(fm, next, nsize);
04268               set_size_and_pinuse_of_free_chunk(p, psize);
04269               if (p == fm->dv) {
04270                 fm->dvsize = psize;
04271                 goto postaction;
04272               }
04273             }
04274           }
04275           else
04276             set_free_with_pinuse(p, psize, next);
04277           insert_chunk(fm, p, psize);
04278           check_free_chunk(fm, p);
04279           goto postaction;
04280         }
04281       }
04282     erroraction:
04283       USAGE_ERROR_ACTION(fm, p);
04284     postaction:
04285       POSTACTION(fm);
04286     }
04287   }
04288 #if !FOOTERS
04289 #undef fm
04290 #endif /* FOOTERS */
04291 }
04292 
04293 void* dlcalloc(size_t n_elements, size_t elem_size) {
04294   void* mem;
04295   size_t req = 0;
04296   if (n_elements != 0) {
04297     req = n_elements * elem_size;
04298     if (((n_elements | elem_size) & ~(size_t)0xffff) &&
04299         (req / n_elements != elem_size))
04300       req = MAX_SIZE_T; /* force downstream failure on overflow */
04301   }
04302   mem = dlmalloc(req);
04303   if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
04304     memset(mem, 0, req);
04305   return mem;
04306 }
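/* Worked overflow example (illustrative, not part of the original source):
   with a 32-bit size_t, dlcalloc(0x10000, 0x10001) wraps req around to
   0x10000.  Both arguments have bits above 0xffff set and
   req / n_elements == 1 != elem_size, so req is forced to MAX_SIZE_T and
   the underlying dlmalloc call fails instead of returning a short block. */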
04307 
04308 void* dlrealloc(void* oldmem, size_t bytes) {
04309   if (oldmem == 0)
04310     return dlmalloc(bytes);
04311 #ifdef REALLOC_ZERO_BYTES_FREES
04312   if (bytes == 0) {
04313     dlfree(oldmem);
04314     return 0;
04315   }
04316 #endif /* REALLOC_ZERO_BYTES_FREES */
04317   else {
04318 #if ! FOOTERS
04319     mstate m = gm;
04320 #else /* FOOTERS */
04321     mstate m = get_mstate_for(mem2chunk(oldmem));
04322     if (!ok_magic(m)) {
04323       USAGE_ERROR_ACTION(m, oldmem);
04324       return 0;
04325     }
04326 #endif /* FOOTERS */
04327     return internal_realloc(m, oldmem, bytes);
04328   }
04329 }
04330 
04331 void* dlmemalign(size_t alignment, size_t bytes) {
04332   return internal_memalign(gm, alignment, bytes);
04333 }
04334 
04335 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
04336                                  void* chunks[]) {
04337   size_t sz = elem_size; /* serves as 1-element array */
04338   return ialloc(gm, n_elements, &sz, 3, chunks);
04339 }
04340 
04341 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
04342                                    void* chunks[]) {
04343   return ialloc(gm, n_elements, sizes, 0, chunks);
04344 }
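/*
  A minimal usage sketch for dlindependent_comalloc (illustrative only,
  not part of the original source): three arrays carved from one request,
  each of which may still be passed individually to dlfree later.

    size_t n = 100;
    size_t sizes[3];
    void*  chunks[3];
    sizes[0] = n * sizeof(int);
    sizes[1] = n * sizeof(double);
    sizes[2] = n * sizeof(char);
    if (dlindependent_comalloc(3, sizes, chunks) != 0) {
      int*    keys = (int*)    chunks[0];
      double* vals = (double*) chunks[1];
      char*   tags = (char*)   chunks[2];
      // ... use the arrays; free them with dlfree when done ...
    }
*/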
04345 
04346 void* dlvalloc(size_t bytes) {
04347   size_t pagesz;
04348   init_mparams();
04349   pagesz = mparams.page_size;
04350   return dlmemalign(pagesz, bytes);
04351 }
04352 
04353 void* dlpvalloc(size_t bytes) {
04354   size_t pagesz;
04355   init_mparams();
04356   pagesz = mparams.page_size;
04357   return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
04358 }
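/* Illustrative arithmetic (not part of the original source): with a
   4096-byte page size, dlvalloc(5000) returns a page-aligned block for a
   5000-byte request, while dlpvalloc(5000) first rounds the request up to
   a whole number of pages, (5000 + 4095) & ~4095 = 8192, before calling
   dlmemalign. */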
04359 
04360 int dlmalloc_trim(size_t pad) {
04361   int result = 0;
04362   if (!PREACTION(gm)) {
04363     result = sys_trim(gm, pad);
04364     POSTACTION(gm);
04365   }
04366   return result;
04367 }
04368 
04369 size_t dlmalloc_footprint(void) {
04370   return gm->footprint;
04371 }
04372 
04373 size_t dlmalloc_max_footprint(void) {
04374   return gm->max_footprint;
04375 }
04376 
04377 #if !NO_MALLINFO
04378 struct mallinfo dlmallinfo(void) {
04379   return internal_mallinfo(gm);
04380 }
04381 #endif /* NO_MALLINFO */
04382 
04383 void dlmalloc_stats() {
04384   internal_malloc_stats(gm);
04385 }
04386 
04387 size_t dlmalloc_usable_size(void* mem) {
04388   if (mem != 0) {
04389     mchunkptr p = mem2chunk(mem);
04390     if (cinuse(p))
04391       return chunksize(p) - overhead_for(p);
04392   }
04393   return 0;
04394 }
04395 
04396 int dlmallopt(int param_number, int value) {
04397   return change_mparam(param_number, value);
04398 }
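/* A hedged usage sketch (illustrative, not part of the original source;
   it relies on the mallopt parameter symbols defined earlier in this file):

     dlmallopt(M_TRIM_THRESHOLD, 256 * 1024);   // returns 1 on success
     dlmallopt(M_MMAP_THRESHOLD, 1024 * 1024);  // returns 0 if rejected
*/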
04399 
04400 #endif /* !ONLY_MSPACES */
04401 
04402 /* ----------------------------- user mspaces ---------------------------- */
04403 
04404 #if MSPACES
04405 
04406 static mstate init_user_mstate(char* tbase, size_t tsize) {
04407   size_t msize = pad_request(sizeof(struct malloc_state));
04408   mchunkptr mn;
04409   mchunkptr msp = align_as_chunk(tbase);
04410   mstate m = (mstate)(chunk2mem(msp));
04411   memset(m, 0, msize);
04412   INITIAL_LOCK(&m->mutex);
04413   msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
04414   m->seg.base = m->least_addr = tbase;
04415   m->seg.size = m->footprint = m->max_footprint = tsize;
04416   m->magic = mparams.magic;
04417   m->mflags = mparams.default_mflags;
04418   disable_contiguous(m);
04419   init_bins(m);
04420   mn = next_chunk(mem2chunk(m));
04421   init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
04422   check_top_chunk(m, m->top);
04423   return m;
04424 }
04425 
04426 mspace create_mspace(size_t capacity, int locked) {
04427   mstate m = 0;
04428   size_t msize = pad_request(sizeof(struct malloc_state));
04429   init_mparams(); /* Ensure pagesize etc initialized */
04430 
04431   if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
04432     size_t rs = ((capacity == 0)? mparams.granularity :
04433                  (capacity + TOP_FOOT_SIZE + msize));
04434     size_t tsize = granularity_align(rs);
04435     char* tbase = (char*)(CALL_MMAP(tsize));
04436     if (tbase != CMFAIL) {
04437       m = init_user_mstate(tbase, tsize);
04438       set_segment_flags(&m->seg, IS_MMAPPED_BIT);
04439       set_lock(m, locked);
04440     }
04441   }
04442   return (mspace)m;
04443 }
04444 
04445 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
04446   mstate m = 0;
04447   size_t msize = pad_request(sizeof(struct malloc_state));
04448   init_mparams(); /* Ensure pagesize etc initialized */
04449 
04450   if (capacity > msize + TOP_FOOT_SIZE &&
04451       capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
04452     m = init_user_mstate((char*)base, capacity);
04453     set_segment_flags(&m->seg, EXTERN_BIT);
04454     set_lock(m, locked);
04455   }
04456   return (mspace)m;
04457 }
04458 
04459 size_t destroy_mspace(mspace msp) {
04460   size_t freed = 0;
04461   mstate ms = (mstate)msp;
04462   if (ok_magic(ms)) {
04463     msegmentptr sp = &ms->seg;
04464     while (sp != 0) {
04465       char* base = sp->base;
04466       size_t size = sp->size;
04467       flag_t flag = get_segment_flags(sp);
04468       sp = sp->next;
04469       if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
04470           CALL_MUNMAP(base, size) == 0)
04471         freed += size;
04472     }
04473   }
04474   else {
04475     USAGE_ERROR_ACTION(ms,ms);
04476   }
04477   return freed;
04478 }
04479 
04480 /*
04481   mspace versions of routines are near-clones of the global
04482   versions. This is not so nice but better than the alternatives.
04483 */
04484 
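/*
  A minimal mspace usage sketch (illustrative only, not part of the
  original source): create a space, allocate from it, and discard the
  whole space at once.

    mspace msp = create_mspace(0, 0);      // default capacity, no locking
    if (msp != 0) {
      void* p = mspace_malloc(msp, 128);
      void* q = mspace_calloc(msp, 10, sizeof(double));
      mspace_free(msp, p);                 // individual frees are optional
      destroy_mspace(msp);                 // reclaims q and everything else
    }
*/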
04485 
04486 void* mspace_malloc(mspace msp, size_t bytes) {
04487   mstate ms = (mstate)msp;
04488   if (!ok_magic(ms)) {
04489     USAGE_ERROR_ACTION(ms,ms);
04490     return 0;
04491   }
04492   if (!PREACTION(ms)) {
04493     void* mem;
04494     size_t nb;
04495     if (bytes <= MAX_SMALL_REQUEST) {
04496       bindex_t idx;
04497       binmap_t smallbits;
04498       nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
04499       idx = small_index(nb);
04500       smallbits = ms->smallmap >> idx;
04501 
04502       if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
04503         mchunkptr b, p;
04504         idx += ~smallbits & 1;       /* Uses next bin if idx empty */
04505         b = smallbin_at(ms, idx);
04506         p = b->fd;
04507         assert(chunksize(p) == small_index2size(idx));
04508         unlink_first_small_chunk(ms, b, p, idx);
04509         set_inuse_and_pinuse(ms, p, small_index2size(idx));
04510         mem = chunk2mem(p);
04511         check_malloced_chunk(ms, mem, nb);
04512         goto postaction;
04513       }
04514 
04515       else if (nb > ms->dvsize) {
04516         if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
04517           mchunkptr b, p, r;
04518           size_t rsize;
04519           bindex_t i;
04520           binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
04521           binmap_t leastbit = least_bit(leftbits);
04522           compute_bit2idx(leastbit, i);
04523           b = smallbin_at(ms, i);
04524           p = b->fd;
04525           assert(chunksize(p) == small_index2size(i));
04526           unlink_first_small_chunk(ms, b, p, i);
04527           rsize = small_index2size(i) - nb;
04528           /* Fit here cannot be remainderless if 4byte sizes */
04529           if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
04530             set_inuse_and_pinuse(ms, p, small_index2size(i));
04531           else {
04532             set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
04533             r = chunk_plus_offset(p, nb);
04534             set_size_and_pinuse_of_free_chunk(r, rsize);
04535             replace_dv(ms, r, rsize);
04536           }
04537           mem = chunk2mem(p);
04538           check_malloced_chunk(ms, mem, nb);
04539           goto postaction;
04540         }
04541 
04542         else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
04543           check_malloced_chunk(ms, mem, nb);
04544           goto postaction;
04545         }
04546       }
04547     }
04548     else if (bytes >= MAX_REQUEST)
04549       nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
04550     else {
04551       nb = pad_request(bytes);
04552       if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
04553         check_malloced_chunk(ms, mem, nb);
04554         goto postaction;
04555       }
04556     }
04557 
04558     if (nb <= ms->dvsize) {
04559       size_t rsize = ms->dvsize - nb;
04560       mchunkptr p = ms->dv;
04561       if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
04562         mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
04563         ms->dvsize = rsize;
04564         set_size_and_pinuse_of_free_chunk(r, rsize);
04565         set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
04566       }
04567       else { /* exhaust dv */
04568         size_t dvs = ms->dvsize;
04569         ms->dvsize = 0;
04570         ms->dv = 0;
04571         set_inuse_and_pinuse(ms, p, dvs);
04572       }
04573       mem = chunk2mem(p);
04574       check_malloced_chunk(ms, mem, nb);
04575       goto postaction;
04576     }
04577 
04578     else if (nb < ms->topsize) { /* Split top */
04579       size_t rsize = ms->topsize -= nb;
04580       mchunkptr p = ms->top;
04581       mchunkptr r = ms->top = chunk_plus_offset(p, nb);
04582       r->head = rsize | PINUSE_BIT;
04583       set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
04584       mem = chunk2mem(p);
04585       check_top_chunk(ms, ms->top);
04586       check_malloced_chunk(ms, mem, nb);
04587       goto postaction;
04588     }
04589 
04590     mem = sys_alloc(ms, nb);
04591 
04592   postaction:
04593     POSTACTION(ms);
04594     return mem;
04595   }
04596 
04597   return 0;
04598 }
04599 
04600 void mspace_free(mspace msp, void* mem) {
04601   if (mem != 0) {
04602     mchunkptr p  = mem2chunk(mem);
04603 #if FOOTERS
04604     mstate fm = get_mstate_for(p);
04605 #else /* FOOTERS */
04606     mstate fm = (mstate)msp;
04607 #endif /* FOOTERS */
04608     if (!ok_magic(fm)) {
04609       USAGE_ERROR_ACTION(fm, p);
04610       return;
04611     }
04612     if (!PREACTION(fm)) {
04613       check_inuse_chunk(fm, p);
04614       if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
04615         size_t psize = chunksize(p);
04616         mchunkptr next = chunk_plus_offset(p, psize);
04617         if (!pinuse(p)) {
04618           size_t prevsize = p->prev_foot;
04619           if ((prevsize & IS_MMAPPED_BIT) != 0) {
04620             prevsize &= ~IS_MMAPPED_BIT;
04621             psize += prevsize + MMAP_FOOT_PAD;
04622             if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
04623               fm->footprint -= psize;
04624             goto postaction;
04625           }
04626           else {
04627             mchunkptr prev = chunk_minus_offset(p, prevsize);
04628             psize += prevsize;
04629             p = prev;
04630             if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
04631               if (p != fm->dv) {
04632                 unlink_chunk(fm, p, prevsize);
04633               }
04634               else if ((next->head & INUSE_BITS) == INUSE_BITS) {
04635                 fm->dvsize = psize;
04636                 set_free_with_pinuse(p, psize, next);
04637                 goto postaction;
04638               }
04639             }
04640             else
04641               goto erroraction;
04642           }
04643         }
04644 
04645         if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
04646           if (!cinuse(next)) {  /* consolidate forward */
04647             if (next == fm->top) {
04648               size_t tsize = fm->topsize += psize;
04649               fm->top = p;
04650               p->head = tsize | PINUSE_BIT;
04651               if (p == fm->dv) {
04652                 fm->dv = 0;
04653                 fm->dvsize = 0;
04654               }
04655               if (should_trim(fm, tsize))
04656                 sys_trim(fm, 0);
04657               goto postaction;
04658             }
04659             else if (next == fm->dv) {
04660               size_t dsize = fm->dvsize += psize;
04661               fm->dv = p;
04662               set_size_and_pinuse_of_free_chunk(p, dsize);
04663               goto postaction;
04664             }
04665             else {
04666               size_t nsize = chunksize(next);
04667               psize += nsize;
04668               unlink_chunk(fm, next, nsize);
04669               set_size_and_pinuse_of_free_chunk(p, psize);
04670               if (p == fm->dv) {
04671                 fm->dvsize = psize;
04672                 goto postaction;
04673               }
04674             }
04675           }
04676           else
04677             set_free_with_pinuse(p, psize, next);
04678           insert_chunk(fm, p, psize);
04679           check_free_chunk(fm, p);
04680           goto postaction;
04681         }
04682       }
04683     erroraction:
04684       USAGE_ERROR_ACTION(fm, p);
04685     postaction:
04686       POSTACTION(fm);
04687     }
04688   }
04689 }
04690 
04691 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
04692   void* mem;
04693   size_t req = 0;
04694   mstate ms = (mstate)msp;
04695   if (!ok_magic(ms)) {
04696     USAGE_ERROR_ACTION(ms,ms);
04697     return 0;
04698   }
04699   if (n_elements != 0) {
04700     req = n_elements * elem_size;
04701     if (((n_elements | elem_size) & ~(size_t)0xffff) &&
04702         (req / n_elements != elem_size))
04703       req = MAX_SIZE_T; /* force downstream failure on overflow */
04704   }
04705   mem = internal_malloc(ms, req);
04706   if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
04707     memset(mem, 0, req);
04708   return mem;
04709 }
04710 
04711 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
04712   if (oldmem == 0)
04713     return mspace_malloc(msp, bytes);
04714 #ifdef REALLOC_ZERO_BYTES_FREES
04715   if (bytes == 0) {
04716     mspace_free(msp, oldmem);
04717     return 0;
04718   }
04719 #endif /* REALLOC_ZERO_BYTES_FREES */
04720   else {
04721 #if FOOTERS
04722     mchunkptr p  = mem2chunk(oldmem);
04723     mstate ms = get_mstate_for(p);
04724 #else /* FOOTERS */
04725     mstate ms = (mstate)msp;
04726 #endif /* FOOTERS */
04727     if (!ok_magic(ms)) {
04728       USAGE_ERROR_ACTION(ms,ms);
04729       return 0;
04730     }
04731     return internal_realloc(ms, oldmem, bytes);
04732   }
04733 }
04734 
04735 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
04736   mstate ms = (mstate)msp;
04737   if (!ok_magic(ms)) {
04738     USAGE_ERROR_ACTION(ms,ms);
04739     return 0;
04740   }
04741   return internal_memalign(ms, alignment, bytes);
04742 }
04743 
04744 void** mspace_independent_calloc(mspace msp, size_t n_elements,
04745                                  size_t elem_size, void* chunks[]) {
04746   size_t sz = elem_size; /* serves as 1-element array */
04747   mstate ms = (mstate)msp;
04748   if (!ok_magic(ms)) {
04749     USAGE_ERROR_ACTION(ms,ms);
04750     return 0;
04751   }
04752   return ialloc(ms, n_elements, &sz, 3, chunks);
04753 }
04754 
04755 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
04756                                    size_t sizes[], void* chunks[]) {
04757   mstate ms = (mstate)msp;
04758   if (!ok_magic(ms)) {
04759     USAGE_ERROR_ACTION(ms,ms);
04760     return 0;
04761   }
04762   return ialloc(ms, n_elements, sizes, 0, chunks);
04763 }
04764 
04765 int mspace_trim(mspace msp, size_t pad) {
04766   int result = 0;
04767   mstate ms = (mstate)msp;
04768   if (ok_magic(ms)) {
04769     if (!PREACTION(ms)) {
04770       result = sys_trim(ms, pad);
04771       POSTACTION(ms);
04772     }
04773   }
04774   else {
04775     USAGE_ERROR_ACTION(ms,ms);
04776   }
04777   return result;
04778 }
04779 
04780 void mspace_malloc_stats(mspace msp) {
04781   mstate ms = (mstate)msp;
04782   if (ok_magic(ms)) {
04783     internal_malloc_stats(ms);
04784   }
04785   else {
04786     USAGE_ERROR_ACTION(ms,ms);
04787   }
04788 }
04789 
04790 size_t mspace_footprint(mspace msp) {
04791   size_t result = 0;
04792   mstate ms = (mstate)msp;
04793   if (ok_magic(ms)) {
04794     result = ms->footprint;
04795   }
04796   else { USAGE_ERROR_ACTION(ms,ms); }
04797   return result;
04798 }
04799 
04800 
04801 size_t mspace_max_footprint(mspace msp) {
04802   size_t result = 0;
04803   mstate ms = (mstate)msp;
04804   if (ok_magic(ms)) {
04805     result = ms->max_footprint;
04806   }
04807   else { USAGE_ERROR_ACTION(ms,ms); }
04808   return result;
04809 }
04810 
04811 
04812 #if !NO_MALLINFO
04813 struct mallinfo mspace_mallinfo(mspace msp) {
04814   mstate ms = (mstate)msp;
04815   if (!ok_magic(ms)) {
04816     USAGE_ERROR_ACTION(ms,ms);
04817   }
04818   return internal_mallinfo(ms);
04819 }
04820 #endif /* NO_MALLINFO */
04821 
04822 int mspace_mallopt(int param_number, int value) {
04823   return change_mparam(param_number, value);
04824 }
04825 
04826 #endif /* MSPACES */
04827 
04828 /* -------------------- Alternative MORECORE functions ------------------- */
04829 
04830 /*
04831   Guidelines for creating a custom version of MORECORE:
04832 
04833   * For best performance, MORECORE should allocate in multiples of pagesize.
04834   * MORECORE may allocate more memory than requested. (Or even less,
04835       but this will usually result in a malloc failure.)
04836   * MORECORE must not allocate memory when given argument zero, but
04837       instead return one past the end address of memory from previous
04838       nonzero call.
04839   * For best performance, consecutive calls to MORECORE with positive
04840       arguments should return increasing addresses, indicating that
04841       space has been contiguously extended.
04842   * Even though consecutive calls to MORECORE need not return contiguous
04843       addresses, it must be OK for malloc'ed chunks to span multiple
04844       regions in those cases where they do happen to be contiguous.
04845   * MORECORE need not handle negative arguments -- it may instead
04846       just return MFAIL when given negative arguments.
04847       Negative arguments are always multiples of pagesize. MORECORE
04848       must not misinterpret negative args as large positive unsigned
04849       args. You can suppress all such calls from even occurring by defining
04850       MORECORE_CANNOT_TRIM.
04851 
04852   As an example alternative MORECORE, here is a custom allocator
04853   kindly contributed for pre-OSX macOS.  It uses virtually but not
04854   necessarily physically contiguous non-paged memory (locked in,
04855   present and won't get swapped out).  You can use it by uncommenting
04856   this section, adding some #includes, and setting up the appropriate
04857   defines above:
04858 
04859       #define MORECORE osMoreCore
04860 
04861   There is also a shutdown routine that should somehow be called for
04862   cleanup upon program exit.
04863 
04864   #define MAX_POOL_ENTRIES 100
04865   #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
04866   static int next_os_pool;
04867   void *our_os_pools[MAX_POOL_ENTRIES];
04868 
04869   void *osMoreCore(int size)
04870   {
04871     void *ptr = 0;
04872     static void *sbrk_top = 0;
04873 
04874     if (size > 0)
04875     {
04876       if (size < MINIMUM_MORECORE_SIZE)
04877          size = MINIMUM_MORECORE_SIZE;
04878       if (CurrentExecutionLevel() == kTaskLevel)
04879          ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
04880       if (ptr == 0)
04881       {
04882         return (void *) MFAIL;
04883       }
04884       // save ptrs so they can be freed during cleanup
04885       our_os_pools[next_os_pool] = ptr;
04886       next_os_pool++;
04887       ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
04888       sbrk_top = (char *) ptr + size;
04889       return ptr;
04890     }
04891     else if (size < 0)
04892     {
04893       // we don't currently support shrink behavior
04894       return (void *) MFAIL;
04895     }
04896     else
04897     {
04898       return sbrk_top;
04899     }
04900   }
04901 
04902   // cleanup any allocated memory pools
04903   // called as last thing before shutting down driver
04904 
04905   void osCleanupMem(void)
04906   {
04907     void **ptr;
04908 
04909     for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
04910       if (*ptr)
04911       {
04912          PoolDeallocate(*ptr);
04913          *ptr = 0;
04914       }
04915   }
04916 
04917 */
04918 
04919 
04920 /* -----------------------------------------------------------------------
04921 History:
04922     V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
04923       * Add max_footprint functions
04924       * Ensure all appropriate literals are size_t
04925       * Fix conditional compilation problem for some #define settings
04926       * Avoid concatenating segments with the one provided
04927         in create_mspace_with_base
04928       * Rename some variables to avoid compiler shadowing warnings
04929       * Use explicit lock initialization.
04930       * Better handling of sbrk interference.
04931       * Simplify and fix segment insertion, trimming and mspace_destroy
04932       * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
04933       * Thanks especially to Dennis Flanagan for help on these.
04934 
04935     V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
04936       * Fix memalign brace error.
04937 
04938     V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
04939       * Fix improper #endif nesting in C++
04940       * Add explicit casts needed for C++
04941 
04942     V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
04943       * Use trees for large bins
04944       * Support mspaces
04945       * Use segments to unify sbrk-based and mmap-based system allocation,
04946         removing need for emulation on most platforms without sbrk.
04947       * Default safety checks
04948       * Optional footer checks. Thanks to William Robertson for the idea.
04949       * Internal code refactoring
04950       * Incorporate suggestions and platform-specific changes.
04951         Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
04952         Aaron Bachmann,  Emery Berger, and others.
04953       * Speed up non-fastbin processing enough to remove fastbins.
04954       * Remove useless cfree() to avoid conflicts with other apps.
04955       * Remove internal memcpy, memset. Compilers handle builtins better.
04956       * Remove some options that no one ever used and rename others.
04957 
04958     V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
04959       * Fix malloc_state bitmap array misdeclaration
04960 
04961     V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
04962       * Allow tuning of FIRST_SORTED_BIN_SIZE
04963       * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
04964       * Better detection and support for non-contiguousness of MORECORE.
04965         Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
04966       * Bypass most of malloc if no frees. Thanks To Emery Berger.
04967       * Fix freeing of old top non-contiguous chunk in sysmalloc.
04968       * Raised default trim and map thresholds to 256K.
04969       * Fix mmap-related #defines. Thanks to Lubos Lunak.
04970       * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
04971       * Branch-free bin calculation
04972       * Default trim and mmap thresholds now 256K.
04973 
04974     V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
04975       * Introduce independent_comalloc and independent_calloc.
04976         Thanks to Michael Pachos for motivation and help.
04977       * Make optional .h file available
04978       * Allow > 2GB requests on 32bit systems.
04979       * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
04980         Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
04981         and Anonymous.
04982       * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
04983         helping test this.)
04984       * memalign: check alignment arg
04985       * realloc: don't try to shift chunks backwards, since this
04986         leads to  more fragmentation in some programs and doesn't
04987         seem to help in any others.
04988       * Collect all cases in malloc requiring system memory into sysmalloc
04989       * Use mmap as backup to sbrk
04990       * Place all internal state in malloc_state
04991       * Introduce fastbins (although similar to 2.5.1)
04992       * Many minor tunings and cosmetic improvements
04993       * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
04994       * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
04995         Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
04996       * Include errno.h to support default failure action.
04997 
04998     V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
04999       * return null for negative arguments
05000       * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
05001          * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
05002           (e.g. WIN32 platforms)
05003          * Cleanup header file inclusion for WIN32 platforms
05004          * Cleanup code to avoid Microsoft Visual C++ compiler complaints
05005          * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
05006            memory allocation routines
05007          * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
05008          * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
05009            usage of 'assert' in non-WIN32 code
05010          * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
05011            avoid infinite loop
05012       * Always call 'fREe()' rather than 'free()'
05013 
05014     V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
05015       * Fixed ordering problem with boundary-stamping
05016 
05017     V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
05018       * Added pvalloc, as recommended by H.J. Liu
05019       * Added 64bit pointer support mainly from Wolfram Gloger
05020       * Added anonymously donated WIN32 sbrk emulation
05021       * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
05022       * malloc_extend_top: fix mask error that caused wastage after
05023         foreign sbrks
05024       * Add linux mremap support code from HJ Liu
05025 
05026     V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
05027       * Integrated most documentation with the code.
05028       * Add support for mmap, with help from
05029         Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
05030       * Use last_remainder in more cases.
05031       * Pack bins using idea from  colin@nyx10.cs.du.edu
05032       * Use ordered bins instead of best-fit threshold
05033       * Eliminate block-local decls to simplify tracing and debugging.
05034       * Support another case of realloc via move into top
05035       * Fix error occurring when initial sbrk_base not word-aligned.
05036       * Rely on page size for units instead of SBRK_UNIT to
05037         avoid surprises about sbrk alignment conventions.
05038       * Add mallinfo, mallopt. Thanks to Raymond Nijssen
05039         (raymond@es.ele.tue.nl) for the suggestion.
05040       * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
05041       * More precautions for cases where other routines call sbrk,
05042         courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
05043       * Added macros etc., allowing use in linux libc from
05044         H.J. Lu (hjl@gnu.ai.mit.edu)
05045       * Inverted this history list
05046 
05047     V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
05048       * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
05049       * Removed all preallocation code since under current scheme
05050         the work required to undo bad preallocations exceeds
05051         the work saved in good cases for most test programs.
05052       * No longer use return list or unconsolidated bins since
05053         no scheme using them consistently outperforms those that don't
05054         given above changes.
05055       * Use best fit for very large chunks to prevent some worst-cases.
05056       * Added some support for debugging
05057 
05058     V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
05059       * Removed footers when chunks are in use. Thanks to
05060         Paul Wilson (wilson@cs.texas.edu) for the suggestion.
05061 
05062     V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
05063       * Added malloc_trim, with help from Wolfram Gloger
05064         (wmglo@Dent.MED.Uni-Muenchen.DE).
05065 
05066     V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
05067 
05068     V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
05069       * realloc: try to expand in both directions
05070       * malloc: swap order of clean-bin strategy;
05071       * realloc: only conditionally expand backwards
05072       * Try not to scavenge used bins
05073       * Use bin counts as a guide to preallocation
05074       * Occasionally bin return list chunks in first scan
05075       * Add a few optimizations from colin@nyx10.cs.du.edu
05076 
05077     V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
05078       * faster bin computation & slightly different binning
05079       * merged all consolidations to one part of malloc proper
05080          (eliminating old malloc_find_space & malloc_clean_bin)
05081       * Scan 2 returns chunks (not just 1)
05082       * Propagate failure in realloc if malloc returns 0
05083       * Add stuff to allow compilation on non-ANSI compilers
05084           from kpv@research.att.com
05085 
05086     V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
05087       * removed potential for odd address access in prev_chunk
05088       * removed dependency on getpagesize.h
05089       * misc cosmetics and a bit more internal documentation
05090       * anticosmetics: mangled names in macros to evade debugger strangeness
05091       * tested on sparc, hp-700, dec-mips, rs6000
05092           with gcc & native cc (hp, dec only) allowing
05093           Detlefs & Zorn comparison study (in SIGPLAN Notices.)
05094 
05095     Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
05096       * Based loosely on libg++-1.2X malloc. (It retains some of the overall
05097          structure of old version,  but most details differ.)
05098  
05099 */