Back to index

lightning-sunbird  0.9+nobinonly
malloc.c
Go to the documentation of this file.
00001 #define USE_MALLOC_LOCK
00002 #define DEFAULT_TRIM_THRESHOLD (256 * 1024)
00003 
00004 /* ---------- To make a malloc.h, start cutting here ------------ */
00005 
00006 /*
00007   ****************************************************************
00008   * THIS IS A PRERELEASE. It has not yet been tested adequately. *
00009   * If you use it, please send back comments, suggestions,       *
00010   * performance reports, etc.                                    *
00011   ****************************************************************
00012 */
00013 
00014 /*
00015   A version (aka dlmalloc) of malloc/free/realloc written by Doug
00016   Lea and released to the public domain.  Use this code without
00017   permission or acknowledgement in any way you wish.  Send questions,
00018   comments, complaints, performance data, etc to dl@cs.oswego.edu
00019 
00020 * VERSION 2.7.0pre7 Wed Jan 10 13:33:01 2001  Doug Lea  (dl at gee)
00021 
00022    Note: There may be an updated version of this malloc obtainable at
00023            ftp://gee.cs.oswego.edu/pub/misc/malloc.c
00024          Check before installing!
00025 
00026 * Quickstart
00027 
00028   This library is all in one file to simplify the most common usage:
00029   ftp it, compile it (-O), and link it into another program. All
00030   of the compile-time options default to reasonable values for use on
00031   most unix platforms. Compile -DWIN32 for reasonable defaults on windows.
00032   You might later want to step through various compile options.
00033 
00034 * Why use this malloc?
00035 
00036   This is not the fastest, most space-conserving, most portable, or
00037   most tunable malloc ever written. However it is among the fastest
00038   while also being among the most space-conserving, portable and tunable.
00039   Consistent balance across these factors results in a good general-purpose
00040   allocator for malloc-intensive programs.
00041 
00042   The main properties of the algorithms are:
00043   * For large (>= 512 bytes) requests, it is a pure best-fit allocator,
00044     with ties normally decided via FIFO (i.e. least recently used).
00045   * For small (<= 64 bytes by default) requests, it is a caching
00046     allocator, that maintains pools of quickly recycled chunks.
00047   * In between, and for combinations of large and small requests, it does
00048     the best it can trying to meet both goals at once.
00049 
00050   Compared to 2.6.X versions, this version is generally faster,
00051   especially for programs that allocate and free many small chunks.
00052 
00053   For a longer but slightly out of date high-level description, see
00054      http://gee.cs.oswego.edu/dl/html/malloc.html
00055 
00056   You may already by default be using a c library containing a malloc
00057   that is somehow based on some version of this malloc (for example in
00058   linux). You might still want to use the one in this file in order to
00059   customize settings or to avoid overheads associated with library
00060   versions.
00061 
00062 * Synopsis of public routines
00063 
00064   (Much fuller descriptions are contained in the program documentation below.)
00065 
00066   malloc(size_t n);
00067      Return a pointer to a newly allocated chunk of at least n bytes, or null
00068      if no space is available.
00069   free(Void_t* p);
00070      Release the chunk of memory pointed to by p, or no effect if p is null.
00071   realloc(Void_t* p, size_t n);
00072      Return a pointer to a chunk of size n that contains the same data
00073      as does chunk p up to the minimum of (n, p's size) bytes, or null
00074      if no space is available. The returned pointer may or may not be
00075      the same as p. If p is null, equivalent to malloc.  Unless the
00076      #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
00077      size argument of zero (re)allocates a minimum-sized chunk.
00078   memalign(size_t alignment, size_t n);
00079      Return a pointer to a newly allocated chunk of n bytes, aligned
00080      in accord with the alignment argument, which must be a power of
00081      two.
00082   valloc(size_t n);
00083      Equivalent to memalign(pagesize, n), where pagesize is the page
00084      size of the system (or as near to this as can be figured out from
00085      all the includes/defines below.)
00086   pvalloc(size_t n);
00087      Equivalent to valloc(minimum-page-that-holds(n)), that is,
00088      round up n to nearest pagesize.
00089   calloc(size_t unit, size_t quantity);
00090      Returns a pointer to quantity * unit bytes, with all locations
00091      set to zero.
00092   cfree(Void_t* p);
00093      Equivalent to free(p).
00094   malloc_trim(size_t pad);
00095      Release all but pad bytes of freed top-most memory back
00096      to the system. Return 1 if successful, else 0.
00097   malloc_usable_size(Void_t* p);
00098      Report the number usable allocated bytes associated with allocated
00099      chunk p. This may or may not report more bytes than were requested,
00100      due to alignment and minimum size constraints.
00101   malloc_stats();
00102      Prints brief summary statistics on stderr.
00103   mallinfo()
00104      Returns (by copy) a struct containing various summary statistics.
00105   mallopt(int parameter_number, int parameter_value)
00106      Changes one of the tunable parameters described below. Returns
00107      1 if successful in changing the parameter, else 0.
00108 
00109 * Vital statistics:
00110 
00111   Assumed pointer representation:       4 or 8 bytes
00112        (Thanks to Wolfram Gloger for contributing most of the
00113        changes supporting dual 4/8.)
00114 
00115   Assumed size_t  representation:       4 or 8 bytes 
00116        Note that size_t is allowed to be 4 bytes even if pointers are 8.
00117        You can adjust this by defining INTERNAL_SIZE_T
00118 
00119   Alignment:                            2 * sizeof(size_t) 
00120        (i.e., 8 byte alignment with 4byte size_t). This suffices for
00121        nearly all current machines and C compilers. However, you can
00122        define MALLOC_ALIGNMENT to be wider than this if necessary.
00123 
00124   Minimum overhead per allocated chunk: 4 or 8 bytes
00125        Each malloced chunk has a hidden word of overhead holding size
00126        and status information.
00127 
00128   Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
00129                           8-byte ptrs:  24/32 bytes (including, 4/8 overhead)
00130 
00131        When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
00132        ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
00133        needed; 4 (8) for a trailing size field and 8 (16) bytes for
00134        free list pointers. Thus, the minimum allocatable size is
00135        16/24/32 bytes.
00136 
00137        Even a request for zero bytes (i.e., malloc(0)) returns a
00138        pointer to something of the minimum allocatable size.
00139 
00140        The maximum overhead wastage (i.e., number of extra bytes
00141        allocated than were requested in malloc) is less than or equal
00142        to the minimum size, except for requests >= mmap_threshold that
00143        are serviced via mmap(), where the worst case wastage is 2 *
00144        sizeof(size_t) bytes plus the remainder from a system page (the
00145        minimal mmap unit); typically 4096 bytes.
00146 
00147   Maximum allocated size: 4-byte size_t: 2^31 minus about two pages
00148                           8-byte size_t: 2^63 minus about two pages
00149 
00150        It is assumed that (possibly signed) size_t values suffice
00151        to represent chunk sizes. `Possibly signed' is due to the fact
00152        that `size_t' may be defined on a system as either a signed or
00153        an unsigned type. The ISO C standard says that it must be
00154        unsigned, but a few systems are known not to adhere to this.
00155        Additionally, even when size_t is unsigned, sbrk (which is by
00156        default used to obtain memory from system) accepts signed
00157        arguments, and may not be able to handle size_t-wide arguments
00158        with negative sign bit.  To be conservative, values that would
00159        appear as negative after accounting for overhead and alignment
00160        are rejected.
00161 
00162        Requests for sizes outside this range will perform an optional
00163        failure action and then return null. (Requests may also
00164        also fail because a system is out of memory.)
00165 
00166   Thread-safety: NOT thread-safe unless USE_MALLOC_LOCK defined
00167 
00168        When USE_MALLOC_LOCK is defined, wrappers are created to
00169        surround every public call with either a pthread mutex or
00170        a win32 critical section (depending on WIN32). This is not
00171        especially fast, and can be a major bottleneck in programs with
00172        many threads. It is designed only to provide minimal protection
00173        in concurrent environments, and to provide a basis for
00174        extensions.  If you are using malloc in a concurrent program,
00175        you would be far better off obtaining ptmalloc, which is
00176        derived from a version of this malloc, and is well-tuned for
00177        concurrent programs. (See http://www.malloc.de)
00178 
00179   Compliance: I believe it is compliant with the 1997 Single Unix Specification
00180 
00181        (See http://www.opennc.org). Probably other standards as well.
00182 
00183 * Limitations
00184 
00185     Here are some features that are NOT currently supported
00186 
00187     * No automated mechanism for fully checking that all accesses
00188       to malloced memory stay within their bounds. However, there
00189       are several add-ons and adaptations of this or other mallocs
00190       available that do this.
00191     * No support for compaction.
00192 
00193 * Synopsis of compile-time options:
00194 
00195     People have reported using previous versions of this malloc on all
00196     versions of Unix, sometimes by tweaking some of the defines
00197     below. It has been tested most extensively on Solaris and
00198     Linux. It is also reported to work on WIN32 platforms.
00199     People also report using it in stand-alone embedded systems.
00200 
00201     The implementation is in straight, hand-tuned ANSI C.  It is not
00202     at all modular. (Sorry!)  It uses a lot of macros.  To be at all
00203     usable, this code should be compiled using an optimizing compiler
00204     (for example gcc -O3) that can simplify expressions and control
00205     paths. (FAQ: some macros import variables as arguments rather than
00206     declare locals because people reported that some debuggers
00207     otherwise get confused.)
00208 
00209     OPTION                     DEFAULT VALUE
00210 
00211     Compilation Environment options:
00212 
00213     __STD_C                    derived from C compiler defines
00214     WIN32                      NOT defined
00215     HAVE_MEMCPY                defined
00216     USE_MEMCPY                 1 if HAVE_MEMCPY is defined
00217     HAVE_MMAP                  defined as 1 
00218     MMAP_AS_MORECORE_SIZE      (1024 * 1024) 
00219     HAVE_MREMAP                defined as 0 unless linux defined
00220     malloc_getpagesize         derived from system #includes, or 4096 if not
00221     HAVE_USR_INCLUDE_MALLOC_H  NOT defined
00222     LACKS_UNISTD_H             NOT defined unless WIN32
00223     LACKS_SYS_PARAM_H          NOT defined unless WIN32
00224     LACKS_SYS_MMAN_H           NOT defined unless WIN32
00225 
00226     Changing default word sizes:
00227 
00228     INTERNAL_SIZE_T            size_t
00229     MALLOC_ALIGNMENT           2 * sizeof(INTERNAL_SIZE_T)
00230 
00231     Configuration and functionality options:
00232 
00233     USE_DL_PREFIX              NOT defined
00234     USE_PUBLIC_MALLOC_WRAPPERS NOT defined
00235     USE_MALLOC_LOCK            NOT defined
00236     DEBUG                      NOT defined
00237     REALLOC_ZERO_BYTES_FREES   NOT defined
00238     MALLOC_FAILURE_ACTION      errno = ENOMEM, if __STD_C defined, else no-op
00239     TRIM_FASTBINS              0
00240 
00241     Options for customizing MORECORE:
00242 
00243     MORECORE                   sbrk
00244     MORECORE_CONTIGUOUS        1 
00245 
00246     Tuning options that are also dynamically changeable via mallopt:
00247 
00248     DEFAULT_MXFAST             64
00249     DEFAULT_TRIM_THRESHOLD     128 * 1024
00250     DEFAULT_TOP_PAD            0
00251     DEFAULT_MMAP_THRESHOLD     128 * 1024
00252     DEFAULT_MMAP_MAX           256
00253 
00254     There are several other #defined constants and macros that you
00255     probably don't want to touch unless you are extending or adapting malloc.
00256 
00257 */
00258 #include "xpcom-private.h"
00259 
00260 
00261 
00262 /*
00263   WIN32 sets up defaults for MS environment and compilers.
00264   Otherwise defaults are for unix.
00265 */
00266 
00267 /* #define WIN32 */
00268 
00269 #ifdef WIN32
00270 
00271 #include <windows.h>
00272 
00273 /* Win32 doesn't supply or need the following headers */
00274 #define LACKS_UNISTD_H
00275 #define LACKS_SYS_PARAM_H
00276 #define LACKS_SYS_MMAN_H
00277 
00278 /* Use the supplied emulation of sbrk */
00279 #define MORECORE sbrk
00280 #define MORECORE_CONTIGUOUS 1
00281 #define MORECORE_FAILURE    ((void*)(-1))
00282 
00283 /* Use the supplied emulation mmap, munmap */
00284 #define HAVE_MMAP 1
00285 #define MUNMAP_FAILURE  (-1)
00286 /* These values don't really matter in windows mmap emulation */
00287 #define MAP_PRIVATE 1
00288 #define MAP_ANONYMOUS 2
00289 #define PROT_READ 1
00290 #define PROT_WRITE 2
00291 
00292 /* Emulation functions defined at the end of this file */
00293 
00294 /* If USE_MALLOC_LOCK, use supplied critical-section-based lock functions */
00295 #ifdef USE_MALLOC_LOCK
00296 static int slwait(int *sl);
00297 static int slrelease(int *sl);
00298 #endif
00299 
00300 static long getpagesize(void);
00301 static long getregionsize(void);
00302 static void *sbrk(long size);
00303 static void *mmap(void *ptr, long size, long prot, long type, long handle, long arg);
00304 static long munmap(void *ptr, long size);
00305 
00306 static void vminfo (unsigned long *free, unsigned long *reserved, unsigned long *committed);
00307 static int cpuinfo (int whole, unsigned long *kernel, unsigned long *user);
00308 
00309 #endif
00310 
00311 
00312 
00313 /*
00314   __STD_C should be nonzero if using ANSI-standard C compiler, a C++
00315   compiler, or a C compiler sufficiently close to ANSI to get away
00316   with it.
00317 */
00318 
00319 #ifndef __STD_C
00320 #ifdef __STDC__
00321 #define __STD_C     1
00322 #else
00323 #if __cplusplus
00324 #define __STD_C     1
00325 #else
00326 #define __STD_C     0
00327 #endif /*__cplusplus*/
00328 #endif /*__STDC__*/
00329 #endif /*__STD_C*/
00330 
00331 
00332 /*
00333   Void_t* is the pointer type that malloc should say it returns
00334 */
00335 
00336 #ifndef Void_t
00337 #if (__STD_C || defined(WIN32))
00338 #define Void_t      void
00339 #else
00340 #define Void_t      char
00341 #endif
00342 #endif /*Void_t*/
00343 
00344 #if __STD_C
00345 #include <stddef.h>   /* for size_t */
00346 #else
00347 #include <sys/types.h>
00348 #endif
00349 
00350 #ifdef __cplusplus
00351 extern "C" {
00352 #endif
00353 
00354 /* define LACKS_UNISTD_H if your system does not have a <unistd.h>. */
00355 
00356 /* #define  LACKS_UNISTD_H */
00357 
00358 #ifndef LACKS_UNISTD_H
00359 #include <unistd.h>
00360 #endif
00361 
00362 /* define LACKS_SYS_PARAM_H if your system does not have a <sys/param.h>. */
00363 
00364 /* #define  LACKS_SYS_PARAM_H */
00365 
00366 
00367 #include <stdio.h>    /* needed for malloc_stats */
00368 #include <errno.h>    /* needed for optional MALLOC_FAILURE_ACTION */
00369 
00370 
00371 /*
00372   Debugging:
00373 
00374   Because freed chunks may be overwritten with bookkeeping fields, this
00375   malloc will often die when freed memory is overwritten by user
00376   programs.  This can be very effective (albeit in an annoying way)
00377   in helping track down dangling pointers.
00378 
00379   If you compile with -DDEBUG, a number of assertion checks are
00380   enabled that will catch more memory errors. You probably won't be
00381   able to make much sense of the actual assertion errors, but they
00382   should help you locate incorrectly overwritten memory.  The
00383   checking is fairly extensive, and will slow down execution
00384   noticeably. Calling malloc_stats or mallinfo with DEBUG set will
00385   attempt to check every non-mmapped allocated and free chunk in the
00386   course of computing the summmaries. (By nature, mmapped regions
00387   cannot be checked very much automatically.)
00388 
00389   Setting DEBUG may also be helpful if you are trying to modify
00390   this code. The assertions in the check routines spell out in more
00391   detail the assumptions and invariants underlying the algorithms.
00392 
00393 */
00394 
00395 #if DEBUG
00396 #include <assert.h>
00397 #else
00398 #define assert(x) ((void)0)
00399 #endif
00400 
00401 
00402 /*
00403   INTERNAL_SIZE_T is the word-size used for internal bookkeeping
00404   of chunk sizes.
00405 
00406   The default version is the same as size_t.
00407 
00408   While not strictly necessary, it is best to define this as an
00409   unsigned type, even if size_t is a signed type. This may avoid some
00410   artificial size limitations on some systems.
00411 
00412   On a 64-bit machine, you may be able to reduce malloc overhead by
00413   defining INTERNAL_SIZE_T to be a 32 bit `unsigned int' at the
00414   expense of not being able to handle more than 2^32 of malloced
00415   space. If this limitation is acceptable, you are encouraged to set
00416   this unless you are on a platform requiring 16byte alignments. In
00417   this case the alignment requirements turn out to negate any
00418   potential advantages of decreasing size_t word size.
00419 
00420   Note to implementors: To deal with all this, comparisons and
00421   difference computations among INTERNAL_SIZE_Ts should normally cast
00422   INTERNAL_SIZE_T's to long or unsigned long, as appropriate, being
00423   aware of the fact that casting an unsigned int to a wider long does not
00424   sign-extend. (This also makes checking for negative numbers awkward.)
00425 
00426 */
00427 
00428 #ifndef INTERNAL_SIZE_T
00429 #define INTERNAL_SIZE_T size_t
00430 #endif
00431 
00432 /* The corresponding word size */
00433 #define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
00434 
00435 
00436 /*
00437   MALLOC_ALIGNMENT is the minimum alignment for malloc'ed chunks.
00438   It must be a power of two at least 2 * SIZE_SZ, even on machines
00439   for which smaller alignments would suffice. It may be defined as
00440   larger than this though. (Note however that code and data structures
00441   are optimized for the case of 8-byte alignment.)
00442 
00443 */
00444 
00445   /* #define MALLOC_ALIGNMENT 16 */
00446 
00447 #ifndef MALLOC_ALIGNMENT
00448 #define MALLOC_ALIGNMENT       (2 * SIZE_SZ)
00449 #endif
00450 
00451 /* The corresponding bit mask value */
00452 #define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
00453 
00454 
00455 /*
00456   REALLOC_ZERO_BYTES_FREES should be set if a call to
00457   realloc with zero bytes should be the same as a call to free.
00458   Some people think it should. Otherwise, since this malloc
00459   returns a unique pointer for malloc(0), so does realloc(p, 0).
00460 */
00461 
00462 /*   #define REALLOC_ZERO_BYTES_FREES */
00463 
00464 
00465 /*
00466   USE_DL_PREFIX will prefix all public routines with the string 'dl'.
00467   This is necessary when you only want to use this malloc in one part 
00468   of a program, using your regular system malloc elsewhere.
00469 */
00470 
00471 /* #define USE_DL_PREFIX */
00472 
00473 
00474 /*
00475   USE_MALLOC_LOCK causes wrapper functions to surround each
00476   callable routine with pthread mutex lock/unlock.
00477 
00478   USE_MALLOC_LOCK forces USE_PUBLIC_MALLOC_WRAPPERS to be defined
00479 */
00480 
00481 /* #define USE_MALLOC_LOCK */
00482 
00483 
00484 /*
00485   If USE_PUBLIC_MALLOC_WRAPPERS is defined, every public routine is
00486   actually a wrapper function that first calls MALLOC_PREACTION, then
00487   calls the internal routine, and follows it with
00488   MALLOC_POSTACTION. This is needed for locking, but you can also use
00489   this, without USE_MALLOC_LOCK, for purposes of interception,
00490   instrumentation, etc. It is a sad fact that using wrappers often
00491   noticeably degrades performance of malloc-intensive programs.
00492 */
00493 
00494 #ifdef USE_MALLOC_LOCK
00495 #define USE_PUBLIC_MALLOC_WRAPPERS
00496 #else
00497 /* #define USE_PUBLIC_MALLOC_WRAPPERS */
00498 #endif
00499 
00500 
00501 
00502 
00503 /*
00504   HAVE_MEMCPY should be defined if you are not otherwise using
00505   ANSI STD C, but still have memcpy and memset in your C library
00506   and want to use them in calloc and realloc. Otherwise simple
00507   macro versions are defined below.
00508 
00509   USE_MEMCPY should be defined as 1 if you actually want to
00510   have memset and memcpy called. People report that the macro
00511   versions are faster than libc versions on some systems.
00512   
00513   Even if USE_MEMCPY is set to 1, loops to copy/clear small chunks
00514   (of <= 36 bytes) are manually unrolled in realloc and calloc.
00515 */
00516 
00517 #define HAVE_MEMCPY
00518 
00519 #ifndef USE_MEMCPY
00520 #ifdef HAVE_MEMCPY
00521 #define USE_MEMCPY 1
00522 #else
00523 #define USE_MEMCPY 0
00524 #endif
00525 #endif
00526 
00527 
00528 #if (__STD_C || defined(HAVE_MEMCPY))
00529 
00530 #ifdef WIN32
00531   /*
00532     On Win32 platforms, 'memset()' and 'memcpy()' are already declared in
00533     'windows.h'
00534   */
00535 #else
00536 #if __STD_C
00537 void* memset(void*, int, size_t);
00538 void* memcpy(void*, const void*, size_t);
00539 void* memmove(void*, const void*, size_t);
00540 #else
00541 Void_t* memset();
00542 Void_t* memcpy();
00543 Void_t* memmove();
00544 #endif
00545 #endif
00546 #endif
00547 
00548 
00549 /*
00550   MALLOC_FAILURE_ACTION is the action to take before "return 0" when
00551   malloc fails to be able to return memory, either because memory is
00552   exhausted or because of illegal arguments.
00553   
00554   By default, sets errno if running on STD_C platform, else does nothing.  
00555 */
00556 
00557 #ifndef MALLOC_FAILURE_ACTION
00558 #if __STD_C
00559 #define MALLOC_FAILURE_ACTION \
00560    errno = ENOMEM;
00561 
00562 #else
00563 
00564 #define MALLOC_FAILURE_ACTION
00565 #endif
00566 #endif
00567 
00568 /*
00569   Define HAVE_MMAP as true to optionally make malloc() use mmap() to
00570   allocate very large blocks.  These will be returned to the
00571   operating system immediately after a free(). Also, if mmap
00572   is available, it is used as a backup strategy in cases where
00573   MORECORE fails to provide space from system.
00574 
00575   This malloc is best tuned to work with mmap for large requests.
00576   If you do not have mmap, allocation of very large chunks (1MB
00577   or so) may be slower than you'd like.
00578 */
00579 
00580 #ifndef HAVE_MMAP
00581 #define HAVE_MMAP 1
00582 #endif
00583 
00584 /* 
00585    MMAP_AS_MORECORE_SIZE is the minimum mmap size argument to use if
00586    sbrk fails, and mmap is used as a backup (which is done only if
00587    HAVE_MMAP).  The value must be a multiple of page size.  This
00588    backup strategy generally applies only when systems have "holes" in
00589    address space, so sbrk cannot perform contiguous expansion, but
00590    there is still space available on system.  On systems for which
00591    this is known to be useful (i.e. most linux kernels), this occurs
00592    only when programs allocate huge amounts of memory.  Between this,
00593    and the fact that mmap regions tend to be limited, the size should
00594    be large, to avoid too many mmap calls and thus avoid running out
00595    of kernel resources.
00596 */
00597 
00598 #ifndef MMAP_AS_MORECORE_SIZE
00599 #define MMAP_AS_MORECORE_SIZE (1024 * 1024)
00600 #endif
00601 
00602 
00603 
00604 /*
00605   Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
00606   large blocks.  This is currently only possible on Linux with
00607   kernel versions newer than 1.3.77.
00608 */
00609 
00610 #ifndef HAVE_MREMAP
00611 #ifdef linux
00612 #define HAVE_MREMAP 1
00613 #else
00614 #define HAVE_MREMAP 0
00615 #endif
00616 
00617 #endif /* HAVE_MMAP */
00618 
00619 
00620 /*
00621 
00622   This version of malloc supports the standard SVID/XPG mallinfo
00623   routine that returns a struct containing usage properties and
00624   statistics. It should work on any SVID/XPG compliant system that has
00625   a /usr/include/malloc.h defining struct mallinfo. (If you'd like to
00626   install such a thing yourself, cut out the preliminary declarations
00627   as described above and below and save them in a malloc.h file. But
00628   there's no compelling reason to bother to do this.)
00629 
00630   The main declaration needed is the mallinfo struct that is returned
00631   (by-copy) by mallinfo().  The SVID/XPG malloinfo struct contains a
00632   bunch of field that are not even meaningful in this version of
00633   malloc.  These fields are are instead filled by mallinfo() with
00634   other numbers that might be of interest.
00635 
00636   HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
00637   /usr/include/malloc.h file that includes a declaration of struct
00638   mallinfo.  If so, it is included; else an SVID2/XPG2 compliant
00639   version is declared below.  These must be precisely the same for
00640   mallinfo() to work.
00641 
00642 */
00643 
00644 /* #define HAVE_USR_INCLUDE_MALLOC_H */
00645 
00646 #ifdef HAVE_USR_INCLUDE_MALLOC_H
00647 #include "/usr/include/malloc.h"
00648 #else
00649 
00650 /* SVID2/XPG mallinfo structure */
00651 
00652 struct mallinfo {
00653   int arena;    /* non-mmapped space allocated from system */
00654   int ordblks;  /* number of free chunks */
00655   int smblks;   /* number of fastbin blocks */
00656   int hblks;    /* number of mmapped regions */
00657   int hblkhd;   /* space in mmapped regions */
00658   int usmblks;  /* maximum total allocated space */
00659   int fsmblks;  /* space available in freed fastbin blocks */
00660   int uordblks; /* total allocated space */
00661   int fordblks; /* total free space */
00662   int keepcost; /* top-most, releasable (via malloc_trim) space */
00663 };
00664 
00665 /* SVID2/XPG mallopt options */
00666 
00667 #define M_MXFAST  1    /* Set maximum fastbin size */
00668 #define M_NLBLKS  2    /* UNUSED in this malloc */
00669 #define M_GRAIN   3    /* UNUSED in this malloc */
00670 #define M_KEEP    4    /* UNUSED in this malloc */
00671 
00672 
00673 #endif
00674 
00675 
00676 /* Additional mallopt options supported in this malloc */
00677 
00678 #ifndef M_TRIM_THRESHOLD
00679 #define M_TRIM_THRESHOLD    -1
00680 #endif
00681 
00682 #ifndef M_TOP_PAD
00683 #define M_TOP_PAD           -2
00684 #endif
00685 
00686 #ifndef M_MMAP_THRESHOLD
00687 #define M_MMAP_THRESHOLD    -3
00688 #endif
00689 
00690 #ifndef M_MMAP_MAX
00691 #define M_MMAP_MAX          -4
00692 #endif
00693 
00694 
00695 /*
00696   MXFAST is the maximum request size used for "fastbins", special bins
00697   that hold returned chunks without consolidating their spaces. This
00698   enables future requests for chunks of the same size to be handled
00699   very quickly, but can increase fragmentation, and thus increase the
00700   overall memory footprint of a program.
00701 
00702   This malloc manages fastbins very conservatively yet still
00703   efficiently, so fragmentation is rarely a problem for values less
00704   than or equal to the default.  The maximum supported value of MXFAST
00705   is 80. You wouldn't want it any higher than this anyway.  Fastbins
00706   are designed especially for use with many small structs, objects or
00707   strings -- the default handles structs/objects/arrays with sizes up
00708   to 8 4byte fields, or small strings representing words, tokens,
00709   etc. Using fastbins for larger objects normally worsens
00710   fragmentation without improving speed.
00711 
00712   MXFAST is set in REQUEST size units. It is internally used in
00713   chunksize units, which adds padding and alignment.  You can reduce
00714   MXFAST to 0 to disable all use of fastbins.  This causes the malloc
00715   algorithm to be a close approximation of fifo-best-fit in all cases,
00716   not just for larger requests, but will generally cause it to be
00717   slower.
00718 
00719 */
00720 
00721 #ifndef DEFAULT_MXFAST
00722 #define DEFAULT_MXFAST     64
00723 #endif
00724 
00725 
00726 /*
00727   M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
00728   to keep before releasing via malloc_trim in free().
00729 
00730   Automatic trimming is mainly useful in long-lived programs.
00731   Because trimming via sbrk can be slow on some systems, and can
00732   sometimes be wasteful (in cases where programs immediately
00733   afterward allocate more large chunks) the value should be high
00734   enough so that your overall system performance would improve by
00735   releasing.
00736 
00737   The trim threshold and the mmap control parameters (see below)
00738   can be traded off with one another. Trimming and mmapping are
00739   two different ways of releasing unused memory back to the
00740   system. Between these two, it is often possible to keep
00741   system-level demands of a long-lived program down to a bare
00742   minimum. For example, in one test suite of sessions measuring
00743   the XF86 X server on Linux, using a trim threshold of 128K and a
00744   mmap threshold of 192K led to near-minimal long term resource
00745   consumption.
00746 
00747   If you are using this malloc in a long-lived program, it should
00748   pay to experiment with these values.  As a rough guide, you
00749   might set to a value close to the average size of a process
00750   (program) running on your system.  Releasing this much memory
00751   would allow such a process to run in memory.  Generally, it's
00752   worth it to tune for trimming rather tham memory mapping when a
00753   program undergoes phases where several large chunks are
00754   allocated and released in ways that can reuse each other's
00755   storage, perhaps mixed with phases where there are no such
00756   chunks at all.  And in well-behaved long-lived programs,
00757   controlling release of large blocks via trimming versus mapping
00758   is usually faster.
00759 
00760   However, in most programs, these parameters serve mainly as
00761   protection against the system-level effects of carrying around
00762   massive amounts of unneeded memory. Since frequent calls to
00763   sbrk, mmap, and munmap otherwise degrade performance, the default
00764   parameters are set to relatively high values that serve only as
00765   safeguards.
00766 
00767   The default trim value is high enough to cause trimming only in
00768   fairly extreme (by current memory consumption standards) cases.
00769   It must be greater than page size to have any useful effect.  To
00770   disable trimming completely, you can set to (unsigned long)(-1);
00771 
00772   Trim settings interact with fastbin (MXFAST) settings: Unless
00773   TRIM_FASTBINS is defined, automatic trimming never takes place upon
00774   freeing a chunk with size less than or equal to MXFAST. Trimming is
00775   instead delayed until subsequent freeing of larger chunks. However,
00776   you can still force an attempted trim by calling malloc_trim.
00777 
00778   Also, trimming is not generally possible in cases where
00779   the main arena is obtained via mmap.
00780 
00781 */
00782 
00783 
00784 #ifndef DEFAULT_TRIM_THRESHOLD
00785 #define DEFAULT_TRIM_THRESHOLD (128 * 1024)
00786 #endif
00787 
00788 
00789 
00790 /*
00791   M_TOP_PAD is the amount of extra `padding' space to allocate or
00792   retain whenever sbrk is called. It is used in two ways internally:
00793 
00794   * When sbrk is called to extend the top of the arena to satisfy
00795   a new malloc request, this much padding is added to the sbrk
00796   request.
00797 
00798   * When malloc_trim is called automatically from free(),
00799   it is used as the `pad' argument.
00800 
00801   In both cases, the actual amount of padding is rounded
00802   so that the end of the arena is always a system page boundary.
00803 
00804   The main reason for using padding is to avoid calling sbrk so
00805   often. Having even a small pad greatly reduces the likelihood
00806   that nearly every malloc request during program start-up (or
00807   after trimming) will invoke sbrk, which needlessly wastes
00808   time.
00809 
00810   Automatic rounding-up to page-size units is normally sufficient
00811   to avoid measurable overhead, so the default is 0.  However, in
00812   systems where sbrk is relatively slow, it can pay to increase
00813   this value, at the expense of carrying around more memory than
00814   the program needs.
00815 
00816 */
00817 
00818 #ifndef DEFAULT_TOP_PAD
00819 #define DEFAULT_TOP_PAD        (0)
00820 #endif
00821 
00822 /*
00823 
00824   M_MMAP_THRESHOLD is the request size threshold for using mmap()
00825   to service a request. Requests of at least this size that cannot
00826   be allocated using already-existing space will be serviced via mmap.
00827   (If enough normal freed space already exists it is used instead.)
00828 
00829   Using mmap segregates relatively large chunks of memory so that
00830   they can be individually obtained and released from the host
00831   system. A request serviced through mmap is never reused by any
00832   other request (at least not directly; the system may just so
00833   happen to remap successive requests to the same locations).
00834 
00835   Segregating space in this way has the benefit that mmapped space
00836   can ALWAYS be individually released back to the system, which
00837   helps keep the system level memory demands of a long-lived
00838   program low. Mapped memory can never become `locked' between
00839   other chunks, as can happen with normally allocated chunks, which
00840   means that even trimming via malloc_trim would not release them.
00841 
00842   However, it has the disadvantages that:
00843 
00844    1. The space cannot be reclaimed, consolidated, and then
00845       used to service later requests, as happens with normal chunks.
00846    2. It can lead to more wastage because of mmap page alignment
00847       requirements
00848    3. It causes malloc performance to be more dependent on host
00849       system memory management support routines which may vary in
00850       implementation quality and may impose arbitrary
00851       limitations. Generally, servicing a request via normal
00852       malloc steps is faster than going through a system's mmap.
00853 
00854   All together, these considerations should lead you to use mmap
00855   only for relatively large requests.
00856 
00857 */
00858 
00859 
00860 #ifndef DEFAULT_MMAP_THRESHOLD
00861 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
00862 #endif
00863 
00864 /*
00865   M_MMAP_MAX is the maximum number of requests to simultaneously
00866   service using mmap. This parameter exists because:
00867 
00868   1. Some systems have a limited number of internal tables for
00869      use by mmap.
00870   2. In most systems, overreliance on mmap can degrade overall
00871      performance.
00872   3. If a program allocates many large regions, it is probably
00873      better off using normal sbrk-based allocation routines that
00874      can reclaim and reallocate normal heap memory. 
00875 
00876   Setting to 0 disables use of mmap for servicing large requests.  If
00877   HAVE_MMAP is not set, the default value is 0, and attempts to set it
00878   to non-zero values in mallopt will fail.
00879 */
00880 
00881 
00882 
00883 #ifndef DEFAULT_MMAP_MAX
00884 #if HAVE_MMAP
00885 #define DEFAULT_MMAP_MAX       (256)
00886 #else
00887 #define DEFAULT_MMAP_MAX       (0)
00888 #endif
00889 #endif
00890 
00891 
00892 /*
00893   TRIM_FASTBINS controls whether free() of a very small chunk can
00894   immediately lead to trimming. Setting to true (1) can reduce memory
00895   footprint, but will almost always slow down (by a few percent)
00896   programs that use a lot of small chunks.
00897 
00898   Define this only if you are willing to give up some speed to more
00899   aggressively reduce system-level memory footprint when releasing
00900   memory in programs that use many small chunks.  You can get
00901   essentially the same effect by setting MXFAST to 0, but this can
00902   lead to even greater slowdowns in programs using many small chunks.
00903   TRIM_FASTBINS is an in-between compile-time option, that disables
00904   only those chunks bordering topmost memory from being placed in
00905   fastbins.
00906 
00907 */
00908 
00909 
00910 #ifndef TRIM_FASTBINS
00911 #define TRIM_FASTBINS  0
00912 #endif
00913 
00914 
00915 /*
00916   MORECORE-related declarations. By default, rely on sbrk
00917 */
00918 
00919 
00920 #ifdef LACKS_UNISTD_H
00921 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
00922 #if __STD_C
00923 extern Void_t*     sbrk(ptrdiff_t);
00924 #else
00925 extern Void_t*     sbrk();
00926 #endif
00927 #endif
00928 #endif
00929 
00930 /*
00931   MORECORE is the name of the routine to call to obtain more memory
00932   from the system.  See below for general guidance on writing
00933   alternative MORECORE functions, as well as a version for WIN32 and a
00934   sample version for pre-OSX macos.
00935 */
00936 
00937 #ifndef MORECORE
00938 #define MORECORE sbrk
00939 #endif
00940 
00941 
00942 /*
00943   MORECORE_FAILURE is the value returned upon failure of MORECORE
00944   as well as mmap. Since it cannot be an otherwise valid memory address,
00945   and must reflect values of standard sys calls, you probably ought not
00946   try to redefine it.
00947 */
00948 
00949 #ifndef MORECORE_FAILURE
00950 #define MORECORE_FAILURE (-1)
00951 #endif
00952 
00953 /*
00954   If MORECORE_CONTIGUOUS is true, take advantage of fact that
00955   consecutive calls to MORECORE with positive arguments always return
00956   contiguous increasing addresses.  This is true of unix sbrk.  Even
00957   if not defined, when regions happen to be contiguous, malloc will
00958   permit allocations spanning regions obtained from different
00959   calls. But defining this when applicable enables some stronger
00960   consistency checks and space efficiencies.
00961 */
00962 
00963 
00964 #ifndef MORECORE_CONTIGUOUS
00965 #define MORECORE_CONTIGUOUS 1
00966 #endif
00967 
00968 
00969 /*
00970   The system page size. To the extent possible, this malloc manages
00971   memory from the system in page-size units.  Note that this value is
00972   cached during initialization into a field of malloc_state. So even
00973   if malloc_getpagesize is a function, it is only called once.
00974 
00975   The following mechanics for getpagesize were adapted from bsd/gnu
00976   getpagesize.h. If none of the system-probes here apply, a value of
00977   4096 is used, which should be OK: If they don't apply, then using
00978   the actual value probably doesn't impact performance.
00979 */
00980 
00981 #ifndef malloc_getpagesize
00982 
00983 #ifndef LACKS_UNISTD_H
00984 #  include <unistd.h>
00985 #endif
00986 
00987 #  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
00988 #    ifndef _SC_PAGE_SIZE
00989 #      define _SC_PAGE_SIZE _SC_PAGESIZE
00990 #    endif
00991 #  endif
00992 
00993 #  ifdef _SC_PAGE_SIZE
00994 #    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
00995 #  else
00996 #    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
00997        extern size_t getpagesize();
00998 #      define malloc_getpagesize getpagesize()
00999 #    else
01000 #      ifdef WIN32 /* use supplied emulation of getpagesize */
01001 #        define malloc_getpagesize getpagesize() 
01002 #      else
01003 #        ifndef LACKS_SYS_PARAM_H
01004 #          include <sys/param.h>
01005 #        endif
01006 #        ifdef EXEC_PAGESIZE
01007 #          define malloc_getpagesize EXEC_PAGESIZE
01008 #        else
01009 #          ifdef NBPG
01010 #            ifndef CLSIZE
01011 #              define malloc_getpagesize NBPG
01012 #            else
01013 #              define malloc_getpagesize (NBPG * CLSIZE)
01014 #            endif
01015 #          else
01016 #            ifdef NBPC
01017 #              define malloc_getpagesize NBPC
01018 #            else
01019 #              ifdef PAGESIZE
01020 #                define malloc_getpagesize PAGESIZE
01021 #              else /* just guess */
01022 #                define malloc_getpagesize (4096) 
01023 #              endif
01024 #            endif
01025 #          endif
01026 #        endif
01027 #      endif
01028 #    endif
01029 #  endif
01030 #endif
01031 
01032 
01033 /* Two-phase Name mangling */
01034 
01035 #ifndef USE_PUBLIC_MALLOC_WRAPPERS
01036 #define cALLOc      public_cALLOc
01037 #define fREe        public_fREe
01038 #define cFREe       public_cFREe
01039 #define mALLOc      public_mALLOc
01040 #define mEMALIGn    public_mEMALIGn
01041 #define rEALLOc     public_rEALLOc
01042 #define vALLOc      public_vALLOc
01043 #define pVALLOc     public_pVALLOc
01044 #define mALLINFo    public_mALLINFo
01045 #define mALLOPt     public_mALLOPt
01046 #define mTRIm       public_mTRIm
01047 #define mSTATs      public_mSTATs
01048 #define mUSABLe     public_mUSABLe
01049 #endif
01050 
01051 #ifdef USE_DL_PREFIX
01052 #define public_cALLOc    dlcalloc
01053 #define public_fREe      dlfree
01054 #define public_cFREe     dlcfree
01055 #define public_mALLOc    dlmalloc
01056 #define public_mEMALIGn  dlmemalign
01057 #define public_rEALLOc   dlrealloc
01058 #define public_vALLOc    dlvalloc
01059 #define public_pVALLOc   dlpvalloc
01060 #define public_mALLINFo  dlmallinfo
01061 #define public_mALLOPt   dlmallopt
01062 #define public_mTRIm     dlmalloc_trim
01063 #define public_mSTATs    dlmalloc_stats
01064 #define public_mUSABLe   dlmalloc_usable_size
01065 #else /* USE_DL_PREFIX */
01066 #define public_cALLOc    calloc
01067 #define public_fREe      free
01068 #define public_cFREe     cfree
01069 #define public_mALLOc    malloc
01070 #define public_mEMALIGn  memalign
01071 #define public_rEALLOc   realloc
01072 #define public_vALLOc    valloc
01073 #define public_pVALLOc   pvalloc
01074 #define public_mALLINFo  mallinfo
01075 #define public_mALLOPt   mallopt
01076 #define public_mTRIm     malloc_trim
01077 #define public_mSTATs    malloc_stats
01078 #define public_mUSABLe   malloc_usable_size
01079 #endif /* USE_DL_PREFIX */
01080 
01081 #if __STD_C
01082 
01083 Void_t* public_mALLOc(size_t);
01084 void    public_fREe(Void_t*);
01085 Void_t* public_rEALLOc(Void_t*, size_t);
01086 Void_t* public_mEMALIGn(size_t, size_t);
01087 Void_t* public_vALLOc(size_t);
01088 Void_t* public_pVALLOc(size_t);
01089 Void_t* public_cALLOc(size_t, size_t);
01090 void    public_cFREe(Void_t*);
01091 int     public_mTRIm(size_t);
01092 size_t  public_mUSABLe(Void_t*);
01093 void    public_mSTATs();
01094 int     public_mALLOPt(int, int);
01095 struct mallinfo public_mALLINFo(void);
01096 #else
01097 Void_t* public_mALLOc();
01098 void    public_fREe();
01099 Void_t* public_rEALLOc();
01100 Void_t* public_mEMALIGn();
01101 Void_t* public_vALLOc();
01102 Void_t* public_pVALLOc();
01103 Void_t* public_cALLOc();
01104 void    public_cFREe();
01105 int     public_mTRIm();
01106 size_t  public_mUSABLe();
01107 void    public_mSTATs();
01108 int     public_mALLOPt();
01109 struct mallinfo public_mALLINFo();
01110 #endif
01111 
01112 
01113 #ifdef __cplusplus
01114 };  /* end of extern "C" */
01115 #endif
01116 
01117 
01118 
01119 /* ---------- To make a malloc.h, end cutting here ------------ */
01120 
01121 
01122 /* Declarations of internal utilities defined below  */
01123 
01124 
01125 
01126 
01127 #ifdef USE_PUBLIC_MALLOC_WRAPPERS
01128 #if __STD_C
01129 
01130 static Void_t* mALLOc(size_t);
01131 static void    fREe(Void_t*);
01132 static Void_t* rEALLOc(Void_t*, size_t);
01133 static Void_t* mEMALIGn(size_t, size_t);
01134 static Void_t* vALLOc(size_t);
01135 static Void_t* pVALLOc(size_t);
01136 static Void_t* cALLOc(size_t, size_t);
01137 static void    cFREe(Void_t*);
01138 static int     mTRIm(size_t);
01139 static size_t  mUSABLe(Void_t*);
01140 static void    mSTATs();
01141 static int     mALLOPt(int, int);
01142 static struct mallinfo mALLINFo(void);
01143 #else
01144 static Void_t* mALLOc();
01145 static void    fREe();
01146 static Void_t* rEALLOc();
01147 static Void_t* mEMALIGn();
01148 static Void_t* vALLOc();
01149 static Void_t* pVALLOc();
01150 static Void_t* cALLOc();
01151 static void    cFREe();
01152 static int     mTRIm();
01153 static size_t  mUSABLe();
01154 static void    mSTATs();
01155 static int     mALLOPt();
01156 static struct mallinfo mALLINFo();
01157 #endif
01158 #endif
01159 
01160 
01161 
01162 /* ---------- public wrappers --------------- */
01163 
01164 #ifdef USE_PUBLIC_MALLOC_WRAPPERS
01165 
01166 /*
01167   MALLOC_PREACTION and MALLOC_POSTACTION should be
01168   defined to return 0 on success, and nonzero on failure.
01169   The return value of MALLOC_POSTACTION is currently ignored
01170   in wrapper functions since there is no reasonable default
01171   action to take on failure.
01172 */
01173 
01174 
01175 #ifdef USE_MALLOC_LOCK
01176 
01177 #ifdef WIN32
01178 
01179 static int mALLOC_MUTEx;
01180 
01181 #define MALLOC_PREACTION   slwait(&mALLOC_MUTEx)
01182 #define MALLOC_POSTACTION  slrelease(&mALLOC_MUTEx)
01183 
01184 #else
01185 
01186 #include <pthread.h>
01187 
01188 static pthread_mutex_t mALLOC_MUTEx = PTHREAD_MUTEX_INITIALIZER;
01189 
01190 #define MALLOC_PREACTION   pthread_mutex_lock(&mALLOC_MUTEx)
01191 #define MALLOC_POSTACTION  pthread_mutex_unlock(&mALLOC_MUTEx)
01192 
01193 #endif /* USE_MALLOC_LOCK */
01194 
01195 #else
01196 
01197 /* Substitute anything you like for these */
01198 
01199 #define MALLOC_PREACTION   (0)
01200 #define MALLOC_POSTACTION  (0)
01201 
01202 #endif
01203 
01204 Void_t* public_mALLOc(size_t bytes) {
01205   Void_t* m;
01206   if (MALLOC_PREACTION != 0) {
01207     return 0;
01208   }
01209   m = mALLOc(bytes);
01210   if (MALLOC_POSTACTION != 0) {
01211   }
01212   return m;
01213 }
01214 
01215 void public_fREe(Void_t* m) {
01216   if (MALLOC_PREACTION != 0) {
01217     return;
01218   }
01219   fREe(m);
01220   if (MALLOC_POSTACTION != 0) {
01221   }
01222 }
01223 
01224 Void_t* public_rEALLOc(Void_t* m, size_t bytes) {
01225   if (MALLOC_PREACTION != 0) {
01226     return 0;
01227   }
01228   m = rEALLOc(m, bytes);
01229   if (MALLOC_POSTACTION != 0) {
01230   }
01231   return m;
01232 }
01233 
01234 Void_t* public_mEMALIGn(size_t alignment, size_t bytes) {
01235   Void_t* m;
01236   if (MALLOC_PREACTION != 0) {
01237     return 0;
01238   }
01239   m = mEMALIGn(alignment, bytes);
01240   if (MALLOC_POSTACTION != 0) {
01241   }
01242   return m;
01243 }
01244 
01245 Void_t* public_vALLOc(size_t bytes) {
01246   Void_t* m;
01247   if (MALLOC_PREACTION != 0) {
01248     return 0;
01249   }
01250   m = vALLOc(bytes);
01251   if (MALLOC_POSTACTION != 0) {
01252   }
01253   return m;
01254 }
01255 
01256 Void_t* public_pVALLOc(size_t bytes) {
01257   Void_t* m;
01258   if (MALLOC_PREACTION != 0) {
01259     return 0;
01260   }
01261   m = pVALLOc(bytes);
01262   if (MALLOC_POSTACTION != 0) {
01263   }
01264   return m;
01265 }
01266 
01267 Void_t* public_cALLOc(size_t n, size_t elem_size) {
01268   Void_t* m;
01269   if (MALLOC_PREACTION != 0) {
01270     return 0;
01271   }
01272   m = cALLOc(n, elem_size);
01273   if (MALLOC_POSTACTION != 0) {
01274   }
01275   return m;
01276 }
01277 
01278 void public_cFREe(Void_t* m) {
01279   if (MALLOC_PREACTION != 0) {
01280     return;
01281   }
01282   cFREe(m);
01283   if (MALLOC_POSTACTION != 0) {
01284   }
01285 }
01286 
01287 int public_mTRIm(size_t s) {
01288   int result;
01289   if (MALLOC_PREACTION != 0) {
01290     return 0;
01291   }
01292   result = mTRIm(s);
01293   if (MALLOC_POSTACTION != 0) {
01294   }
01295   return result;
01296 }
01297 
01298 
01299 size_t public_mUSABLe(Void_t* m) {
01300   size_t result;
01301   if (MALLOC_PREACTION != 0) {
01302     return 0;
01303   }
01304   result = mUSABLe(m);
01305   if (MALLOC_POSTACTION != 0) {
01306   }
01307   return result;
01308 }
01309 
01310 
01311 void public_mSTATs() {
01312   if (MALLOC_PREACTION != 0) {
01313     return;
01314   }
01315   mSTATs();
01316   if (MALLOC_POSTACTION != 0) {
01317   }
01318 }
01319 
01320 struct mallinfo public_mALLINFo() {
01321   struct mallinfo m;
01322   if (MALLOC_PREACTION != 0) {
01323     return m;
01324   }
01325   m = mALLINFo();
01326   if (MALLOC_POSTACTION != 0) {
01327   }
01328   return m;
01329 }
01330 
01331 int public_mALLOPt(int p, int v) {
01332   int result;
01333   if (MALLOC_PREACTION != 0) {
01334     return 0;
01335   }
01336   result = mALLOPt(p, v);
01337   if (MALLOC_POSTACTION != 0) {
01338   }
01339   return result;
01340 }
01341 
01342 #endif
01343 
01344 
01345 
01346 /* ------------- Optional versions of memcopy ---------------- */
01347 
01348 
01349 #if USE_MEMCPY
01350 
01351 #define MALLOC_COPY(dest, src, nbytes, overlap) \
01352  ((overlap) ? memmove(dest, src, nbytes) : memcpy(dest, src, nbytes))
01353 #define MALLOC_ZERO(dest, nbytes)       memset(dest, 0,   nbytes)
01354 
01355 #else /* !USE_MEMCPY */
01356 
01357 /* Use Duff's device for good zeroing/copying performance. */
01358 
01359 #define MALLOC_ZERO(charp, nbytes)                                            \
01360 do {                                                                          \
01361   INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
01362   long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
01363   if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
01364   switch (mctmp) {                                                            \
01365     case 0: for(;;) { *mzp++ = 0;                                             \
01366     case 7:           *mzp++ = 0;                                             \
01367     case 6:           *mzp++ = 0;                                             \
01368     case 5:           *mzp++ = 0;                                             \
01369     case 4:           *mzp++ = 0;                                             \
01370     case 3:           *mzp++ = 0;                                             \
01371     case 2:           *mzp++ = 0;                                             \
01372     case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
01373   }                                                                           \
01374 } while(0)
01375 
01376 /* For overlapping case, dest is always _below_ src. */
01377 
01378 #define MALLOC_COPY(dest,src,nbytes,overlap)                                  \
01379 do {                                                                          \
01380   INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
01381   INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
01382   long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
01383   if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
01384   switch (mctmp) {                                                            \
01385     case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
01386     case 7:           *mcdst++ = *mcsrc++;                                    \
01387     case 6:           *mcdst++ = *mcsrc++;                                    \
01388     case 5:           *mcdst++ = *mcsrc++;                                    \
01389     case 4:           *mcdst++ = *mcsrc++;                                    \
01390     case 3:           *mcdst++ = *mcsrc++;                                    \
01391     case 2:           *mcdst++ = *mcsrc++;                                    \
01392     case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
01393   }                                                                           \
01394 } while(0)
01395 
01396 #endif
01397 
01398 /* ------------------ MMAP support ------------------  */
01399 
01400 
01401 #if HAVE_MMAP
01402 
01403 #include <fcntl.h>
01404 #ifndef LACKS_SYS_MMAN_H
01405 #include <sys/mman.h>
01406 #endif
01407 
01408 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
01409 #define MAP_ANONYMOUS MAP_ANON
01410 #endif
01411 
01412 
01413 /* 
01414    Nearly all versions of mmap support MAP_ANONYMOUS, 
01415    so the following is unlikely to be needed, but is
01416    supplied just in case.
01417 */
01418 
01419 #ifndef MAP_ANONYMOUS
01420 
01421 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
01422 
01423 #define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \
01424  (dev_zero_fd = open("/dev/zero", O_RDWR), \
01425   mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \
01426    mmap((addr), (size), (prot), (flags), dev_zero_fd, 0))
01427 
01428 #else
01429 
01430 #define MMAP(addr, size, prot, flags) \
01431  (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0))
01432 
01433 #endif
01434 
01435 #endif /* HAVE_MMAP */
01436 
01437 
01438 /* ---------- Alternative MORECORE functions ------------ */
01439 
01440 
01441 /*
01442   General Requirements for MORECORE.
01443 
01444   The MORECORE function must have the following properties:
01445 
01446   If MORECORE_CONTIGUOUS is false:
01447 
01448     * MORECORE must allocate in multiples of pagesize. It will
01449       only be called with arguments that are multiples of pagesize.
01450 
01451     * MORECORE must page-align. That is, MORECORE(0) must
01452       return an address at a page boundary.
01453 
01454   else (i.e. If MORECORE_CONTIGUOUS is true):
01455 
01456     * Consecutive calls to MORECORE with positive arguments
01457       return increasing addresses, indicating that space has been
01458       contiguously extended.
01459 
01460     * MORECORE need not allocate in multiples of pagesize.
01461       Calls to MORECORE need not have args of multiples of pagesize.
01462 
01463     * MORECORE need not page-align.
01464 
01465   In either case:
01466 
01467     * MORECORE may allocate more memory than requested. (Or even less,
01468       but this will generally result in a malloc failure.)
01469 
01470     * MORECORE must not allocate memory when given argument zero, but
01471       instead return one past the end address of memory from previous
01472       nonzero call. This malloc does NOT call MORECORE(0)
01473       until at least one call with positive arguments is made, so
01474       the initial value returned is not important.
01475 
01476     * Even though consecutive calls to MORECORE need not return contiguous
01477       addresses, it must be OK for malloc'ed chunks to span multiple
01478       regions in those cases where they do happen to be contiguous.
01479 
01480     * MORECORE need not handle negative arguments -- it may instead
01481       just return MORECORE_FAILURE when given negative arguments.
01482       Negative arguments are always multiples of pagesize. MORECORE
01483       must not misinterpret negative args as large positive unsigned
01484       args.
01485 
01486   There is some variation across systems about the type of the
01487   argument to sbrk/MORECORE. If size_t is unsigned, then it cannot
01488   actually be size_t, because sbrk supports negative args, so it is
01489   normally the signed type of the same width as size_t (sometimes
01490   declared as "intptr_t", and sometimes "ptrdiff_t").  It doesn't much
01491   matter though. Internally, we use "long" as arguments, which should
01492   work across all reasonable possibilities.
01493 
01494   Additionally, if MORECORE ever returns failure for a positive
01495   request, and HAVE_MMAP is true, then mmap is used as a noncontiguous
01496   system allocator. This is a useful backup strategy for systems with
01497   holes in address spaces -- in this case sbrk cannot contiguously
01498   expand the heap, but mmap may be able to map noncontiguous space.
01499   If you'd like mmap to ALWAYS be used, you can define MORECORE to be
01500   a function that always returns MORECORE_FAILURE.
01501 
01502   If you are using this malloc with something other than unix sbrk to
01503   supply memory regions, you probably want to set MORECORE_CONTIGUOUS
01504   as false.  As an example, here is a custom allocator kindly
01505   contributed for pre-OSX macOS.  It uses virtually but not
01506   necessarily physically contiguous non-paged memory (locked in,
01507   present and won't get swapped out).  You can use it by uncommenting
01508   this section, adding some #includes, and setting up the appropriate
01509   defines above:
01510 
01511       #define MORECORE osMoreCore
01512       #define MORECORE_CONTIGUOUS 0
01513 
01514   There is also a shutdown routine that should somehow be called for
01515   cleanup upon program exit.
01516 
01517   #define MAX_POOL_ENTRIES 100
01518   #define MINIMUM_MORECORE_SIZE  (64 * 1024)
01519   static int next_os_pool;
01520   void *our_os_pools[MAX_POOL_ENTRIES];
01521 
01522   void *osMoreCore(int size)
01523   {
01524     void *ptr = 0;
01525     static void *sbrk_top = 0;
01526 
01527     if (size > 0)
01528     {
01529       if (size < MINIMUM_MORECORE_SIZE)
01530          size = MINIMUM_MORECORE_SIZE;
01531       if (CurrentExecutionLevel() == kTaskLevel)
01532          ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
01533       if (ptr == 0)
01534       {
01535         return (void *) MORECORE_FAILURE;
01536       }
01537       // save ptrs so they can be freed during cleanup
01538       our_os_pools[next_os_pool] = ptr;
01539       next_os_pool++;
01540       ptr = (void *) ((((unsigned long) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
01541       sbrk_top = (char *) ptr + size;
01542       return ptr;
01543     }
01544     else if (size < 0)
01545     {
01546       // we don't currently support shrink behavior
01547       return (void *) MORECORE_FAILURE;
01548     }
01549     else
01550     {
01551       return sbrk_top;
01552     }
01553   }
01554 
01555   // cleanup any allocated memory pools
01556   // called as last thing before shutting down driver
01557 
01558   void osCleanupMem(void)
01559   {
01560     void **ptr;
01561 
01562     for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
01563       if (*ptr)
01564       {
01565          PoolDeallocate(*ptr);
01566          *ptr = 0;
01567       }
01568   }
01569 
01570 */
01571 
01572 
01573 
01574 
01575 
01576 /*
01577   -----------------------  Chunk representations -----------------------
01578 */
01579 
01580 
01581 /*
01582   This struct declaration is misleading (but accurate and necessary).
01583   It declares a "view" into memory allowing access to necessary
01584   fields at known offsets from a given base. See explanation below.
01585 */
01586 
01587 struct malloc_chunk {
01588 
01589   INTERNAL_SIZE_T      prev_size;  /* Size of previous chunk (if free).  */
01590   INTERNAL_SIZE_T      size;       /* Size in bytes, including overhead. */
01591 
01592   struct malloc_chunk* fd;         /* double links -- used only if free. */
01593   struct malloc_chunk* bk;
01594 };
01595 
01596 
01597 typedef struct malloc_chunk* mchunkptr;
01598 
01599 /*
01600 
01601    malloc_chunk details:
01602 
01603     (The following includes lightly edited explanations by Colin Plumb.)
01604 
01605     Chunks of memory are maintained using a `boundary tag' method as
01606     described in e.g., Knuth or Standish.  (See the paper by Paul
01607     Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
01608     survey of such techniques.)  Sizes of free chunks are stored both
01609     in the front of each chunk and at the end.  This makes
01610     consolidating fragmented chunks into bigger chunks very fast.  The
01611     size fields also hold bits representing whether chunks are free or
01612     in use.
01613 
01614     An allocated chunk looks like this:
01615 
01616 
01617     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01618             |             Size of previous chunk, if allocated            | |
01619             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01620             |             Size of chunk, in bytes                         |P|
01621       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01622             |             User data starts here...                          .
01623             .                                                               .
01624             .             (malloc_usable_space() bytes)                     .
01625             .                                                               |
01626 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01627             |             Size of chunk                                     |
01628             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01629 
01630 
01631     Where "chunk" is the front of the chunk for the purpose of most of
01632     the malloc code, but "mem" is the pointer that is returned to the
01633     user.  "Nextchunk" is the beginning of the next contiguous chunk.
01634 
01635     Chunks always begin on even word boundries, so the mem portion
01636     (which is returned to the user) is also on an even word boundary, and
01637     thus double-word aligned.
01638 
01639     Free chunks are stored in circular doubly-linked lists, and look like this:
01640 
01641     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01642             |             Size of previous chunk                            |
01643             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01644     `head:' |             Size of chunk, in bytes                         |P|
01645       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01646             |             Forward pointer to next chunk in list             |
01647             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01648             |             Back pointer to previous chunk in list            |
01649             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01650             |             Unused space (may be 0 bytes long)                .
01651             .                                                               .
01652             .                                                               |
01653 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01654     `foot:' |             Size of chunk, in bytes                           |
01655             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
01656 
01657     The P (PREV_INUSE) bit, stored in the unused low-order bit of the
01658     chunk size (which is always a multiple of two words), is an in-use
01659     bit for the *previous* chunk.  If that bit is *clear*, then the
01660     word before the current chunk size contains the previous chunk
01661     size, and can be used to find the front of the previous chunk.
01662     The very first chunk allocated always has this bit set,
01663     preventing access to non-existent (or non-owned) memory. If
01664     prev_inuse is set for any given chunk, then you CANNOT determine
01665     the size of the previous chunk, and might even get a memory
01666     addressing fault when trying to do so.
01667 
01668     Note that the `foot' of the current chunk is actually represented
01669     as the prev_size of the NEXT chunk. (This makes it easier to
01670     deal with alignments etc).
01671 
01672     The two exceptions to all this are
01673 
01674      1. The special chunk `top' doesn't bother using the
01675         trailing size field since there is no next contiguous chunk
01676         that would have to index off it. After initialization, `top'
01677         is forced to always exist.  If it would become less than
01678         MINSIZE bytes long, it is replenished.
01679 
01680      2. Chunks allocated via mmap, which have the second-lowest-order
01681         bit (IS_MMAPPED) set in their size fields.  Because they are
01682         allocated one-by-one, each must contain its own trailing size field.
01683 
01684 */
01685 
01686 
01687 
01688 /*
01689   Size and alignment checks and conversions
01690 */
01691 
01692 /* conversion from malloc headers to user pointers, and back */
01693 
01694 #define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
01695 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
01696 
01697 /* The smallest possible chunk */
01698 #define MIN_CHUNK_SIZE        (sizeof(struct malloc_chunk))
01699 
01700 /* The smallest size we can malloc is an aligned minimal chunk */
01701 
01702 #define MINSIZE   ((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)
01703 
01704 /* Check if m has acceptable alignment */
01705 
01706 #define aligned_OK(m)  (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
01707 
01708 /* 
01709    Check for negative/huge sizes.
01710    This cannot just test for < 0 because argument might
01711    be an unsigned type of uncertain width.
01712 */
01713 
01714 #define IS_NEGATIVE(x) \
01715   ((unsigned long)x >= \
01716    (unsigned long)((((INTERNAL_SIZE_T)(1)) << ((SIZE_SZ)*8 - 1))))
01717 
01718 
01719 /* pad request bytes into a usable size -- internal version */
01720 
01721 #define request2size(req)                                        \
01722   (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE)  ?            \
01723    MINSIZE :                                                     \
01724    ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)
01725 
01726 
01727 /*
01728    Same, except check for negative/huge arguments.
01729    This lets through args that are positive but wrap into
01730    negatives when padded. However, these are trapped
01731    elsewhere.
01732 */
01733 
01734 #define checked_request2size(req, sz)                                    \
01735    if (IS_NEGATIVE(req)) {                                               \
01736      MALLOC_FAILURE_ACTION;                                              \
01737      return 0;                                                           \
01738    }                                                                     \
01739    (sz) = request2size(req);
01740 
01741 
01742 
01743 
01744 /*
01745   Physical chunk operations
01746 */
01747 
01748 
01749 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
01750 
01751 #define PREV_INUSE 0x1
01752 
01753 
01754 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
01755 
01756 #define IS_MMAPPED 0x2
01757 
01758 
01759 /* Bits to mask off when extracting size */
01760 
01761 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
01762 
01763 
01764 /* Ptr to next physical malloc_chunk. */
01765 
01766 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
01767 
01768 
01769 /* Ptr to previous physical malloc_chunk */
01770 
01771 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
01772 
01773 
01774 /* Treat space at ptr + offset as a chunk */
01775 
01776 #define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
01777 
01778 
01779 
01780 
01781 /*
01782   Dealing with use bits
01783 
01784   Note: IS_MMAPPED is intentionally not masked off from size field in
01785   macros for which mmapped chunks should never be seen. This should
01786   cause helpful core dumps to occur if it is tried by accident by
01787   people extending or adapting this malloc.
01788 
01789 */
01790 
01791 
01792 /* extract p's inuse bit */
01793 
01794 #define inuse(p)\
01795 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
01796 
01797 
01798 /* extract inuse bit of previous chunk */
01799 
01800 #define prev_inuse(p)       ((p)->size & PREV_INUSE)
01801 
01802 
01803 /* check for mmap()'ed chunk */
01804 
01805 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
01806 
01807 
01808 /* set/clear chunk as being inuse without otherwise disturbing */
01809 
01810 #define set_inuse(p)\
01811 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
01812 
01813 #define clear_inuse(p)\
01814 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
01815 
01816 
01817 /* check/set/clear inuse bits in known places */
01818 
01819 #define inuse_bit_at_offset(p, s)\
01820  (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
01821 
01822 #define set_inuse_bit_at_offset(p, s)\
01823  (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
01824 
01825 #define clear_inuse_bit_at_offset(p, s)\
01826  (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
01827 
01828 
01829 
01830 
01831 /*
01832   Dealing with size fields
01833 */
01834 
01835 /* Get size, ignoring use bits */
01836 
01837 #define chunksize(p)         ((p)->size & ~(SIZE_BITS))
01838 
01839 /* Set size at head, without disturbing its use bit */
01840 
01841 #define set_head_size(p, s)  ((p)->size = (((p)->size & PREV_INUSE) | (s)))
01842 
01843 /* Set size/use field */
01844 
01845 #define set_head(p, s)       ((p)->size = (s))
01846 
01847 /* Set size at footer (only when chunk is not in use) */
01848 
01849 #define set_foot(p, s)       (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
01850 
01851 
01852 
01853 
01854 
01855 /*
01856   ------------------  Internal data structures --------------------
01857 
01858    All internal state is held in an instance of malloc_state defined
01859    below. There are no other static variables, except in two optional
01860    cases: 
01861    * If USE_MALLOC_LOCK is defined, the mALLOC_MUTEx declared
01862       above. 
01863    * If HAVE_MMAP is true, but mmap doesn't support
01864      MAP_ANONYMOUS, a dummy file descriptor for mmap.
01865 
01866    Beware of lots of tricks that minimize the total space
01867    requirements. The result is a little over 1K bytes (for 4byte
01868    pointers and size_t.)
01869 
01870 */
01871 
01872 /*
01873 
01874   Bins
01875 
01876     An array of bin headers for free chunks. Each bin is doubly
01877     linked.  The bins are approximately proportionally (log) spaced.
01878     There are a lot of these bins (128). This may look excessive, but
01879     works very well in practice.  Most bins hold sizes that are
01880     unusual as malloc request sizes, but are more usual for fragments
01881     and consolidated sets of chunks, which is what these bins hold, so
01882     they can be found quickly.  All procedures maintain the invariant
01883     that no consolidated chunk physically borders another one, so each
01884     chunk in a list is known to be preceeded and followed by either
01885     inuse chunks or the ends of memory.
01886 
01887     Chunks in bins are kept in size order, with ties going to the
01888     approximately least recently used chunk. Ordering is irrelevant
01889     for the small bins, which all contain the same-sized chunks, but
01890     facilitates best-fit allocation for larger chunks. (These lists
01891     are just sequential. Keeping them in order almost never requires
01892     enough traversal to warrant using fancier ordered data
01893     structures.)  Chunks of the same size are linked with the most
01894     recently freed at the front, and allocations are taken from the
01895     back.  This results in LRU (FIFO) allocation order, which tends
01896     to give each chunk an equal opportunity to be consolidated with
01897     adjacent freed chunks, resulting in larger free chunks and less
01898     fragmentation.
01899 
01900     To simplify use in double-linked lists, each bin header acts
01901     as a malloc_chunk. This avoids special-casing for headers.
01902     But to conserve space and (mainly) improve locality, we allocate
01903     only the fd/bk pointers of bins, and then use repositioning tricks
01904     to treat these as the fields of a malloc_chunk*.  
01905 */
01906 
01907 typedef struct malloc_chunk* mbinptr;
01908 
01909 #define NBINS        128
01910 
01911 
01912 /* addressing -- note that bin_at(0) does not exist */
01913 
01914 #define bin_at(m, i) ((mbinptr)((char*)&((m)->bins[(i)<<1]) - (SIZE_SZ<<1)))
01915 
01916 
01917 /* analog of ++bin */
01918 
01919 #define next_bin(b)  ((mbinptr)((char*)(b) + (sizeof(mchunkptr)<<1)))
01920 
01921 
01922 /* Reminders about list directionality within bins */
01923 
01924 #define first(b)     ((b)->fd)
01925 #define last(b)      ((b)->bk)
01926 
01927 /*
01928    Take a chunk off a bin list
01929 */
01930 
01931 #define unlink(P, BK, FD) {                                            \
01932   FD = P->fd;                                                          \
01933   BK = P->bk;                                                          \
01934   FD->bk = BK;                                                         \
01935   BK->fd = FD;                                                         \
01936 }
01937 
01938 /*
01939   Indexing bins
01940 
01941     Bins for sizes < 512 bytes contain chunks of all the same size, spaced
01942     8 bytes apart. Larger bins are approximately logarithmically spaced:
01943 
01944     64 bins of size       8
01945     32 bins of size      64
01946     16 bins of size     512
01947      8 bins of size    4096
01948      4 bins of size   32768
01949      2 bins of size  262144
01950      1 bin  of size what's left
01951 
01952     There is actually a little bit of slop in the numbers in bin_index
01953     for the sake of speed. This makes no difference elsewhere.
01954 
01955     The bins top out at around 1mb because we expect to service large
01956     chunks via mmap.
01957 
01958 */
01959 
01960 /* The first NSMALLBIN bins (and fastbins) hold only one size */
01961 #define NSMALLBINS         64
01962 #define SMALLBIN_WIDTH      8
01963 #define MIN_LARGE_SIZE    512
01964 
01965 #define in_smallbin_range(sz)  ((sz) <  MIN_LARGE_SIZE)
01966 
01967 #define smallbin_index(sz)     (((unsigned)(sz)) >> 3)
01968 
01969 #define largebin_index(sz)                                                   \
01970 (((((unsigned long)(sz)) >>  6) <= 32)?  56 + (((unsigned long)(sz)) >>  6): \
01971  ((((unsigned long)(sz)) >>  9) <= 20)?  91 + (((unsigned long)(sz)) >>  9): \
01972  ((((unsigned long)(sz)) >> 12) <= 10)? 110 + (((unsigned long)(sz)) >> 12): \
01973  ((((unsigned long)(sz)) >> 15) <=  4)? 119 + (((unsigned long)(sz)) >> 15): \
01974  ((((unsigned long)(sz)) >> 18) <=  2)? 124 + (((unsigned long)(sz)) >> 18): \
01975                                         126)
01976 
01977 #define bin_index(sz) \
01978  ((in_smallbin_range(sz)) ? smallbin_index(sz) : largebin_index(sz))
01979 
01980 
01981 /*
01982   Unsorted chunks
01983 
01984     All remainders from chunk splits, as well as all returned chunks,
01985     are first placed in the "unsorted" bin. They are then placed
01986     in regular bins after malloc gives them ONE chance to be used before
01987     binning. So, basically, the unsorted_chunks list acts as a queue,
01988     with chunks being placed on it in free (and malloc_consolidate),
01989     and taken off (to be either used or placed in bins) in malloc.
01990     
01991 */
01992 
01993 
01994 /* The otherwise unindexable 1-bin is used to hold unsorted chunks. */
01995 
01996 #define unsorted_chunks(M)          (bin_at(M, 1))
01997 
01998 
01999 /*
02000   Top
02001 
02002     The top-most available chunk (i.e., the one bordering the end of
02003     available memory) is treated specially. It is never included in
02004     any bin, is used only if no other chunk is available, and is
02005     released back to the system if it is very large (see
02006     M_TRIM_THRESHOLD).  `top' is never properly linked to its bin
02007     since it is always handled specially.  Because top initially
02008     points to its own bin with initial zero size, thus forcing
02009     extension on the first malloc request, we avoid having any special
02010     code in malloc to check whether it even exists yet. But we still
02011     need to do so when getting memory from system, so we make
02012     initial_top treat the bin as a legal but unusable chunk during the
02013     interval between initialization and the first call to
02014     sYSMALLOc. (This is somewhat delicate, since it relies on
02015     the 2 preceding words to be zero during this interval as well.)
02016 */
02017 
02018 
02019 /* Conveniently, the unsorted bin can be used as dummy top on first call */
02020 #define initial_top(M)              (unsorted_chunks(M))
02021 
02022 /*
02023   Binmap
02024 
02025     To help compensate for the large number of bins, a one-level index
02026     structure is used for bin-by-bin searching.  `binmap' is a
02027     bitvector recording whether bins are definitely empty so they can
02028     be skipped over during during traversals.  The bits are NOT always
02029     cleared as soon as bins are empty, but instead only
02030     when they are noticed to be empty during traversal in malloc.
02031 
02032 */
02033 
02034 /* Conservatively use 32 bits per map word, even if on 64bit system */
02035 #define BINMAPSHIFT      5
02036 #define BITSPERMAP       (1U << BINMAPSHIFT)
02037 #define BINMAPSIZE       (NBINS / BITSPERMAP)
02038 
02039 #define idx2block(i)     ((i) >> BINMAPSHIFT)
02040 #define idx2bit(i)       ((1U << ((i) & ((1U << BINMAPSHIFT)-1))))
02041 
02042 #define mark_bin(m,i)    ((m)->binmap[idx2block(i)] |=  idx2bit(i))
02043 #define unmark_bin(m,i)  ((m)->binmap[idx2block(i)] &= ~(idx2bit(i)))
02044 #define get_binmap(m,i)  ((m)->binmap[idx2block(i)] &   idx2bit(i))
02045 
02046 
02047 /*
02048   Fastbins
02049 
02050     An array of lists holding recently freed small chunks.  Fastbins
02051     are not doubly linked.  It is faster to single-link them, and
02052     since chunks are never removed from the middles of these lists,
02053     double linking is not necessary.
02054 
02055     Chunks in fastbins keep their inuse bit set, so they cannot
02056     be consolidated with other free chunks. malloc_consolidate
02057     releases all chunks in fastbins and consolidates them with
02058     other free chunks.
02059 */
02060 
02061 typedef struct malloc_chunk* mfastbinptr;
02062 
02063 /* offset 2 to use otherwise unindexable first 2 bins */
02064 #define fastbin_index(sz)        ((((unsigned int)(sz)) >> 3) - 2)
02065 
02066 /* The maximum fastbin request size we support */
02067 #define MAX_FAST_SIZE     80
02068 
02069 #define NFASTBINS  (fastbin_index(request2size(MAX_FAST_SIZE))+1)
02070 
02071 
02072 
02073 /*
02074   Flag bit held in max_fast indicating that there probably are some
02075   fastbin chunks . It is set true on entering a chunk into any fastbin,
02076   and cleared only in malloc_consolidate.
02077 
02078   The truth value is inverted so that have_fastchunks will be true
02079   upon startup (since statics are zero-filled).
02080 */
02081 
02082 
02083 #define have_fastchunks(M)    (((M)->max_fast &  1U) == 0)
02084 #define clear_fastchunks(M)   ((M)->max_fast |=  1U)
02085 #define set_fastchunks(M)     ((M)->max_fast &= ~1U)
02086 
02087 /* 
02088    Initialization value of max_fast. 
02089    Use impossibly small value if 0.
02090    Value also has flag bit clear.
02091 */
02092 #define req2max_fast(s) (((((s) == 0)? SMALLBIN_WIDTH: request2size(s))) | 1U)
02093 
02094 
02095 /*
02096   NONCONTIGUOUS_REGIONS is a special value for sbrk_base indicating that
02097   MORECORE does not return contiguous regions. In this case, we do not check
02098   or assume that the address of each chunk is at least sbrk_base.  Otherwise,
02099   contiguity is exploited in merging together, when possible, results
02100   from consecutive MORECORE calls.
02101 
02102   The possible values for sbrk_base are:
02103     MORECORE_FAILURE:   
02104        MORECORE has not yet been called, but we expect contiguous space
02105     NONCONTIGUOUS_REGIONS: 
02106        we don't expect or rely on contiguous space
02107     any other legal address: 
02108        the first address returned by MORECORE when contiguous
02109 */
02110 
02111 #define NONCONTIGUOUS_REGIONS ((char*)(-3))
02112 
02113 
02114 /*
02115    ----------- Internal state representation and initialization -----------
02116 */
02117 
02118 
02119 struct malloc_state {
02120 
02121   /* The maximum chunk size to be eligible for fastbin */
02122   INTERNAL_SIZE_T  max_fast;   /* low bit used as fastbin flag */
02123 
02124   /* Base of the topmost chunk -- not otherwise kept in a bin */
02125   mchunkptr        top;
02126 
02127   /* The remainder from the most recent split of a small request */
02128   mchunkptr        last_remainder;
02129 
02130   /* Fastbins */
02131   mfastbinptr      fastbins[NFASTBINS];
02132 
02133   /* Normal bins packed as described above */
02134   mchunkptr        bins[NBINS * 2];
02135 
02136   /* Bitmap of bins */
02137   unsigned int     binmap[BINMAPSIZE];
02138 
02139   /* Tunable parameters */
02140   unsigned long    trim_threshold;
02141   INTERNAL_SIZE_T  top_pad;
02142   INTERNAL_SIZE_T  mmap_threshold;
02143 
02144   /* Memory map support */
02145   int              n_mmaps;
02146   int              n_mmaps_max;
02147   int              max_n_mmaps;
02148 
02149   /* Bookkeeping for sbrk */
02150   unsigned int     pagesize;    /* Cache malloc_getpagesize */
02151   char*            sbrk_base;   /* first address returned by sbrk, 
02152                                    or NONCONTIGUOUS_REGIONS */
02153   /* Statistics */
02154 
02155   INTERNAL_SIZE_T  mmapped_mem;
02156   INTERNAL_SIZE_T  sbrked_mem;
02157 
02158   INTERNAL_SIZE_T  max_sbrked_mem;
02159   INTERNAL_SIZE_T  max_mmapped_mem;
02160   INTERNAL_SIZE_T  max_total_mem;
02161 };
02162 
02163 typedef struct malloc_state *mstate;
02164 
02165 /* 
02166    There is exactly one instance of this struct in this malloc.
02167 
02168    If you are adapting this malloc in a way that does NOT use a static
02169    malloc_state, you MUST explicitly zero-fill it before using. This
02170    malloc relies on the property that malloc_state is initialized to
02171    all zeroes (as is true of C statics).
02172 
02173 */
02174 
02175 static struct malloc_state av_;  /* never directly referenced */
02176 
02177 /*
02178    All uses of av_ are via get_malloc_state().
02179    This simplifies construction of multithreaded, etc extensions.
02180 
02181    At most one call to get_malloc_state is made per invocation of
02182    the public versions of malloc, free, and all other routines
02183    except realloc, valloc, and vpalloc. Also, it is called
02184    in check* routines if DEBUG is set.
02185 */
02186 
02187 #define get_malloc_state() (&(av_))
02188 
02189 /*
02190   Initialize a malloc_state struct.
02191 
02192   This is called only from within malloc_consolidate, which needs
02193   be called in the same contexts anyway.  It is never called directly
02194   outside of malloc_consolidate because some optimizing compilers try
02195   to inline it at all call points, which turns out not to be an
02196   optimization at all. (Inlining it only in malloc_consolidate is fine though.)
02197 */
02198 
02199 #if __STD_C
02200 static void malloc_init_state(mstate av)
02201 #else
02202 static void malloc_init_state(av) mstate av;
02203 #endif
02204 {
02205   int     i;
02206   mbinptr bin;
02207 
02208   
02209   /* Uncomment this if you are not using a static av */
02210   /* MALLOC_ZERO(av, sizeof(struct malloc_state); */
02211 
02212   /* Establish circular links for normal bins */
02213   for (i = 1; i < NBINS; ++i) { 
02214     bin = bin_at(av,i);
02215     bin->fd = bin->bk = bin;
02216   }
02217 
02218   av->max_fast       = req2max_fast(DEFAULT_MXFAST);
02219 
02220   av->top_pad        = DEFAULT_TOP_PAD;
02221   av->n_mmaps_max    = DEFAULT_MMAP_MAX;
02222   av->mmap_threshold = DEFAULT_MMAP_THRESHOLD;
02223 
02224 #if MORECORE_CONTIGUOUS
02225   av->trim_threshold = DEFAULT_TRIM_THRESHOLD;
02226   av->sbrk_base      = (char*)MORECORE_FAILURE;
02227 #else
02228   av->trim_threshold = (unsigned long)(-1);
02229   av->sbrk_base      = NONCONTIGUOUS_REGIONS;
02230 #endif
02231 
02232   av->top            = initial_top(av);
02233   av->pagesize       = malloc_getpagesize;
02234 }
02235 
02236 /* 
02237    Other internal utilities operating on mstates
02238 */
02239 
02240 #if __STD_C
02241 static Void_t* sYSMALLOc(INTERNAL_SIZE_T, mstate);
02242 static int  sYSTRIm(size_t, mstate);
02243 static void malloc_consolidate(mstate);
02244 #else
02245 static Void_t* sYSMALLOc();
02246 static int  sYSTRIm();
02247 static void malloc_consolidate();
02248 #endif
02249 
02250 
02251 /*
02252   Debugging support
02253 
02254   These routines make a number of assertions about the states
02255   of data structures that should be true at all times. If any
02256   are not true, it's very likely that a user program has somehow
02257   trashed memory. (It's also possible that there is a coding error
02258   in malloc. In which case, please report it!)
02259 
02260 */
02261 
02262 #if ! DEBUG
02263 
02264 #define check_chunk(P)
02265 #define check_free_chunk(P)
02266 #define check_inuse_chunk(P)
02267 #define check_remalloced_chunk(P,N)
02268 #define check_malloced_chunk(P,N)
02269 #define check_malloc_state()
02270 
02271 #else
02272 #define check_chunk(P)              do_check_chunk(P)
02273 #define check_free_chunk(P)         do_check_free_chunk(P)
02274 #define check_inuse_chunk(P)        do_check_inuse_chunk(P)
02275 #define check_remalloced_chunk(P,N) do_check_remalloced_chunk(P,N)
02276 #define check_malloced_chunk(P,N)   do_check_malloced_chunk(P,N)
02277 #define check_malloc_state()        do_check_malloc_state()
02278 
02279 
02280 /*
02281   Properties of all chunks
02282 */
02283 
02284 #if __STD_C
02285 static void do_check_chunk(mchunkptr p)
02286 #else
02287 static void do_check_chunk(p) mchunkptr p;
02288 #endif
02289 {
02290 
02291   mstate av = get_malloc_state();
02292   unsigned long sz = chunksize(p);
02293 
02294   if (!chunk_is_mmapped(p)) {
02295     
02296     /* Has legal address ... */
02297     if (av->sbrk_base != NONCONTIGUOUS_REGIONS) {
02298       assert(((char*)p) >= ((char*)(av->sbrk_base)));
02299     }
02300 
02301     if (p != av->top) {
02302       if (av->sbrk_base != NONCONTIGUOUS_REGIONS) {
02303         assert(((char*)p + sz) <= ((char*)(av->top)));
02304       }
02305     }
02306     else {
02307       if (av->sbrk_base != NONCONTIGUOUS_REGIONS) {
02308         assert(((char*)p + sz) <= ((char*)(av->sbrk_base) + av->sbrked_mem));
02309       }
02310       /* top size is always at least MINSIZE */
02311       assert((long)(sz) >= (long)(MINSIZE));
02312       /* top predecessor always marked inuse */
02313       assert(prev_inuse(p));
02314     }
02315       
02316   }
02317   else {
02318 #if HAVE_MMAP
02319     /* address is outside main heap  */
02320     /* unless mmap has been used as sbrk backup */
02321     if (av->sbrk_base != NONCONTIGUOUS_REGIONS) { 
02322       assert(! (((char*)p) >= ((char*)av->sbrk_base) &&
02323                 ((char*)p) <  ((char*)(av->sbrk_base) + av->sbrked_mem)));
02324     }
02325     /* chunk is page-aligned */
02326     assert(((p->prev_size + sz) & (av->pagesize-1)) == 0);
02327     /* mem is aligned */
02328     assert(aligned_OK(chunk2mem(p)));
02329 #else
02330     /* force an appropriate assert violation if debug set */
02331     assert(!chunk_is_mmapped(p));
02332 #endif
02333   }
02334 
02335 }
02336 
02337 /*
02338   Properties of free chunks
02339 */
02340 
02341 
02342 #if __STD_C
02343 static void do_check_free_chunk(mchunkptr p)
02344 #else
02345 static void do_check_free_chunk(p) mchunkptr p;
02346 #endif
02347 {
02348   mstate av = get_malloc_state();
02349 
02350   INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
02351   mchunkptr next = chunk_at_offset(p, sz);
02352 
02353   do_check_chunk(p);
02354 
02355   /* Chunk must claim to be free ... */
02356   assert(!inuse(p));
02357   assert (!chunk_is_mmapped(p));
02358 
02359   /* Unless a special marker, must have OK fields */
02360   if ((unsigned long)sz >= (unsigned long)MINSIZE)
02361   {
02362     assert((sz & MALLOC_ALIGN_MASK) == 0);
02363     assert(aligned_OK(chunk2mem(p)));
02364     /* ... matching footer field */
02365     assert(next->prev_size == sz);
02366     /* ... and is fully consolidated */
02367     assert(prev_inuse(p));
02368     assert (next == av->top || inuse(next));
02369 
02370     /* ... and has minimally sane links */
02371     assert(p->fd->bk == p);
02372     assert(p->bk->fd == p);
02373   }
02374   else /* markers are always of size SIZE_SZ */
02375     assert(sz == SIZE_SZ);
02376 }
02377 
02378 /*
02379   Properties of inuse chunks
02380 */
02381 
02382 
02383 #if __STD_C
02384 static void do_check_inuse_chunk(mchunkptr p)
02385 #else
02386 static void do_check_inuse_chunk(p) mchunkptr p;
02387 #endif
02388 {
02389   mstate av = get_malloc_state();
02390   mchunkptr next;
02391   do_check_chunk(p);
02392 
02393   if (chunk_is_mmapped(p))
02394     return; /* mmapped chunks have no next/prev */
02395 
02396   /* Check whether it claims to be in use ... */
02397   assert(inuse(p));
02398 
02399   next = next_chunk(p);
02400 
02401   /* ... and is surrounded by OK chunks.
02402     Since more things can be checked with free chunks than inuse ones,
02403     if an inuse chunk borders them and debug is on, it's worth doing them.
02404   */
02405   if (!prev_inuse(p))  {
02406     /* Note that we cannot even look at prev unless it is not inuse */
02407     mchunkptr prv = prev_chunk(p);
02408     assert(next_chunk(prv) == p);
02409     do_check_free_chunk(prv);
02410   }
02411 
02412   if (next == av->top) {
02413     assert(prev_inuse(next));
02414     assert(chunksize(next) >= MINSIZE);
02415   }
02416   else if (!inuse(next))
02417     do_check_free_chunk(next);
02418 
02419 }
02420 
02421 /*
02422   Properties of chunks recycled from fastbins
02423 */
02424 
02425 #if __STD_C
02426 static void do_check_remalloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
02427 #else
02428 static void do_check_remalloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
02429 #endif
02430 {
02431 
02432   INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
02433 
02434   do_check_inuse_chunk(p);
02435 
02436   /* Legal size ... */
02437   assert((sz & MALLOC_ALIGN_MASK) == 0);
02438   assert((long)sz - (long)MINSIZE >= 0);
02439   assert((long)sz - (long)s >= 0);
02440   assert((long)sz - (long)(s + MINSIZE) < 0);
02441 
02442   /* ... and alignment */
02443   assert(aligned_OK(chunk2mem(p)));
02444 
02445 }
02446 
02447 /*
02448   Properties of nonrecycled chunks at the point they are malloced
02449 */
02450 
02451 #if __STD_C
02452 static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
02453 #else
02454 static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
02455 #endif
02456 {
02457   /* same as recycled case ... */
02458   do_check_remalloced_chunk(p, s);
02459 
02460   /*
02461     ... plus,  must obey implementation invariant that prev_inuse is
02462     always true of any allocated chunk; i.e., that each allocated
02463     chunk borders either a previously allocated and still in-use
02464     chunk, or the base of its memory arena. This is ensured
02465     by making all allocations from the the `lowest' part of any found
02466     chunk.  This does not necessarily hold however for chunks
02467     recycled via fastbins.
02468   */
02469 
02470   assert(prev_inuse(p));
02471 }
02472 
02473 
02474 /*
02475   Properties of malloc_state.
02476 
02477   This may be useful for debugging malloc, as well as detecting user
02478   programmer errors that somehow write into malloc_state.
02479 */
02480 
02481 static void do_check_malloc_state()
02482 {
02483   mstate av = get_malloc_state();
02484   int i;
02485   mchunkptr p;
02486   mchunkptr q;
02487   mbinptr b;
02488   unsigned int biton;
02489   int empty;
02490   unsigned int idx;
02491   INTERNAL_SIZE_T size;
02492   unsigned long total = 0;
02493   int max_fast_bin;
02494 
02495 
02496   /* internal size_t must be no wider than pointer type */
02497   assert(sizeof(INTERNAL_SIZE_T) <= sizeof(char*));
02498 
02499   /* alignment is a power of 2 */
02500   assert((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-1)) == 0);
02501 
02502   /* cannot run remaining checks until fully initialized */
02503   if (av->top == 0 || av->top == initial_top(av))
02504     return;
02505 
02506   /* pagesize is a power of 2 */
02507   assert((av->pagesize & (av->pagesize-1)) == 0);
02508 
02509   /* properties of fastbins */
02510 
02511   /* max_fast is in allowed range */
02512 
02513   assert((av->max_fast & ~1) <= request2size(MAX_FAST_SIZE));
02514 
02515   max_fast_bin = fastbin_index(av->max_fast);
02516 
02517   for (i = 0; i < NFASTBINS; ++i) {
02518     p = av->fastbins[i];
02519 
02520     /* all bins past max_fast are empty */
02521     if (i > max_fast_bin)
02522       assert(p == 0);
02523 
02524     while (p != 0) {
02525       /* each chunk claims to be inuse */
02526       do_check_inuse_chunk(p);
02527       total += chunksize(p);
02528       /* chunk belongs in this bin */
02529       assert(fastbin_index(chunksize(p)) == i);
02530       p = p->fd;
02531     }
02532   }
02533 
02534   if (total != 0)
02535     assert(have_fastchunks(av));
02536 
02537   /* check normal bins */
02538   for (i = 1; i < NBINS; ++i) {
02539     b = bin_at(av,i);
02540 
02541     /* binmap is accurate (except for bin 1 == unsorted_chunks) */
02542     if (i >= 2) {
02543       biton = get_binmap(av,i);
02544       empty = last(b) == b;
02545       if (!biton)
02546         assert(empty);
02547       else if (!empty)
02548         assert(biton);
02549     }
02550 
02551     for (p = last(b); p != b; p = p->bk) {
02552       /* each chunk claims to be free */
02553       do_check_free_chunk(p);
02554       size = chunksize(p);
02555       total += size;
02556       if (i >= 2) {
02557         /* chunk belongs in bin */
02558         idx = bin_index(size);
02559         assert(idx == i);
02560         /* lists are sorted */
02561         assert(p->bk == b || chunksize(p->bk) >= chunksize(p));
02562       }
02563       /* chunk is followed by a legal chain of inuse chunks */
02564       for (q = next_chunk(p);
02565            q != av->top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
02566            q = next_chunk(q))
02567         do_check_inuse_chunk(q);
02568 
02569     }
02570   }
02571 
02572   /* top chunk is OK */
02573   check_chunk(av->top);
02574 
02575   /* sanity checks for statistics */
02576 
02577   assert(total <= (unsigned long)(av->max_total_mem));
02578   assert(av->n_mmaps >= 0);
02579   assert(av->n_mmaps <= av->n_mmaps_max);
02580   assert(av->n_mmaps <= av->max_n_mmaps);
02581   assert(av->max_n_mmaps <= av->n_mmaps_max);
02582 
02583   assert((unsigned long)(av->sbrked_mem) <=
02584          (unsigned long)(av->max_sbrked_mem));
02585 
02586   assert((unsigned long)(av->mmapped_mem) <=
02587          (unsigned long)(av->max_mmapped_mem));
02588 
02589   assert((unsigned long)(av->max_total_mem) >=
02590          (unsigned long)(av->mmapped_mem) + (unsigned long)(av->sbrked_mem));
02591 
02592 }
02593 
02594 
02595 #endif
02596 
02597 
02598 
02599 
02600 
02601 /* ----------- Routines dealing with system allocation -------------- */
02602 
02603 /*
02604   Handle malloc cases requiring more memory from system.
02605   malloc relays to sYSMALLOc if it cannot allocate out of
02606   existing memory.
02607 
02608   On entry, it is assumed that av->top does not have enough
02609   space to service request for nb bytes, thus requiring more meory
02610   from system.
02611 */
02612 
02613 #if __STD_C
02614 static Void_t* sYSMALLOc(INTERNAL_SIZE_T nb, mstate av)
02615 #else
02616 static Void_t* sYSMALLOc(nb, av) INTERNAL_SIZE_T nb; mstate av;
02617 #endif
02618 {
02619   mchunkptr       old_top;        /* incoming value of av->top */
02620   INTERNAL_SIZE_T old_size;       /* its size */
02621   char*           old_end;        /* its end address */
02622 
02623   long            size;           /* arg to first MORECORE or mmap call */
02624   char*           brk;            /* return value from MORECORE */
02625   char*           mm;             /* return value from mmap call*/
02626 
02627   long            correction;     /* arg to 2nd MORECORE call */
02628   char*           snd_brk;        /* 2nd return val */
02629 
02630   INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
02631   INTERNAL_SIZE_T end_misalign;   /* partial page left at end of new space */
02632   char*           aligned_brk;    /* aligned offset into brk */
02633 
02634   mchunkptr       p;              /* the allocated/returned chunk */
02635   mchunkptr       remainder;      /* remainder from allocation */
02636   long            remainder_size; /* its size */
02637 
02638   unsigned long   sum;            /* for updating stats */
02639 
02640   size_t          pagemask  = av->pagesize - 1;
02641 
02642   /*
02643     If have mmap, and the request size meets the mmap threshold, and
02644     the system supports mmap, and there are few enough currently
02645     allocated mmapped regions, and a call to mmap succeeds, try to
02646     directly map this request rather than expanding top.
02647   */
02648 
02649 #if HAVE_MMAP
02650   if ((unsigned long)nb >= (unsigned long)(av->mmap_threshold) &&
02651       (av->n_mmaps < av->n_mmaps_max)) {
02652 
02653     /*
02654       Round up size to nearest page.  For mmapped chunks, the overhead
02655       is one SIZE_SZ unit larger than for normal chunks, because there
02656       is no following chunk whose prev_size field could be used.
02657     */
02658     size = (nb + SIZE_SZ + MALLOC_ALIGN_MASK + pagemask) & ~pagemask;
02659     
02660     mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
02661 
02662     if (mm != (char*)(MORECORE_FAILURE)) {
02663 
02664       /*
02665         The offset to the start of the mmapped region is stored
02666         in the prev_size field of the chunk. This allows us to adjust
02667         returned start address to meet alignment requirements here 
02668         and in memalign(), and still be able to compute proper
02669         address argument for later munmap in free() and realloc().
02670       */
02671 
02672       front_misalign = (INTERNAL_SIZE_T)chunk2mem(mm) & MALLOC_ALIGN_MASK;
02673       if (front_misalign > 0) {
02674         correction = MALLOC_ALIGNMENT - front_misalign;
02675         p = (mchunkptr)(mm + correction);
02676         p->prev_size = correction;
02677         set_head(p, (size - correction) |IS_MMAPPED);
02678       }
02679       else {
02680         p = (mchunkptr)mm;
02681         set_head(p, size|IS_MMAPPED);
02682       }
02683 
02684       check_chunk(p);
02685       
02686       /* update statistics */
02687 
02688       if (++av->n_mmaps > av->max_n_mmaps) 
02689         av->max_n_mmaps = av->n_mmaps;
02690 
02691       sum = av->mmapped_mem += size;
02692       if (sum > (unsigned long)(av->max_mmapped_mem)) 
02693         av->max_mmapped_mem = sum;
02694       sum += av->sbrked_mem;
02695       if (sum > (unsigned long)(av->max_total_mem)) 
02696         av->max_total_mem = sum;
02697       
02698       return chunk2mem(p);
02699     }
02700   }
02701 #endif
02702 
02703   /* record incoming configuration of top */
02704 
02705   old_top  = av->top;
02706   old_size = chunksize(old_top);
02707   old_end  = (char*)(chunk_at_offset(old_top, old_size));
02708 
02709   brk = snd_brk = (char*)(MORECORE_FAILURE); 
02710 
02711   /* 
02712      If not the first time through, we require old_size to be
02713      at least MINSIZE and to have prev_inuse set.
02714   */
02715 
02716   assert(old_top == initial_top(av) || 
02717          ((unsigned long) (old_size) >= (unsigned long)(MINSIZE) &&
02718           prev_inuse(old_top)));
02719 
02720 
02721   /* Request enough space for nb + pad + overhead */
02722 
02723   size = nb + av->top_pad + MINSIZE;
02724 
02725   /*
02726     If contiguous, we can subtract out existing space that we hope to
02727     combine with new space. We add it back later only if
02728     we don't actually get contiguous space.
02729   */
02730 
02731   if (av->sbrk_base != NONCONTIGUOUS_REGIONS)
02732     size -= old_size;
02733 
02734   /*
02735     Round to a multiple of page size.
02736     If MORECORE is not contiguous, this ensures that we only call it
02737     with whole-page arguments.  And if MORECORE is contiguous and
02738     this is not first time through, this preserves page-alignment of
02739     previous calls. Otherwise, we re-correct anyway to page-align below.
02740   */
02741 
02742   size = (size + pagemask) & ~pagemask;
02743 
02744   /*
02745     Don't try to call MORECORE if argument is so big as to appear
02746     negative. Note that since mmap takes size_t arg, it may succeed
02747     below even if we cannot call MORECORE.
02748   */
02749 
02750   if (size > 0) 
02751     brk = (char*)(MORECORE(size));
02752 
02753   /*
02754     If have mmap, try using it as a backup when MORECORE fails. This
02755     is worth doing on systems that have "holes" in address space, so
02756     sbrk cannot extend to give contiguous space, but space is available
02757     elsewhere.  Note that we ignore mmap max count and threshold limits,
02758     since there is no reason to artificially limit use here.
02759   */
02760 
02761 #if HAVE_MMAP
02762   if (brk == (char*)(MORECORE_FAILURE)) {
02763 
02764     /* Cannot merge with old top, so add its size back in */
02765 
02766     if (av->sbrk_base != NONCONTIGUOUS_REGIONS)
02767       size = (size + old_size + pagemask) & ~pagemask;
02768 
02769     /* If we are relying on mmap as backup, then use larger units */
02770 
02771     if ((unsigned long)size < (unsigned long)MMAP_AS_MORECORE_SIZE)
02772       size = MMAP_AS_MORECORE_SIZE;
02773 
02774     brk = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
02775 
02776     if (brk != (char*)(MORECORE_FAILURE)) {
02777 
02778       /* We do not need, and cannot use, another sbrk call to find end */
02779       snd_brk = brk + size;
02780 
02781       /* 
02782          Record that we no longer have a contiguous sbrk region. 
02783          After the first time mmap is used as backup, we cannot
02784          ever rely on contiguous space.
02785       */
02786       av->sbrk_base = NONCONTIGUOUS_REGIONS; 
02787     }
02788   }
02789 #endif
02790 
02791   if (brk != (char*)(MORECORE_FAILURE)) {
02792 
02793     av->sbrked_mem += size;
02794 
02795     /*
02796       If MORECORE extends previous space, we can likewise extend top size.
02797     */
02798     
02799     if (brk == old_end && snd_brk == (char*)(MORECORE_FAILURE)) {
02800       set_head(old_top, (size + old_size) | PREV_INUSE);
02801     }
02802     
02803     /*
02804       Otherwise, make adjustments guided by the special values of 
02805       av->sbrk_base (MORECORE_FAILURE or NONCONTIGUOUS_REGIONS):
02806       
02807       * If the first time through or noncontiguous, we need to call sbrk
02808         just to find out where the end of memory lies.
02809 
02810       * We need to ensure that all returned chunks from malloc will meet
02811         MALLOC_ALIGNMENT
02812 
02813       * If there was an intervening foreign sbrk, we need to adjust sbrk
02814         request size to account for fact that we will not be able to
02815         combine new space with existing space in old_top.
02816 
02817       * Almost all systems internally allocate whole pages at a time, in
02818         which case we might as well use the whole last page of request.
02819         So we allocate enough more memory to hit a page boundary now,
02820         which in turn causes future contiguous calls to page-align.
02821 
02822     */
02823     
02824     else {
02825       front_misalign = 0;
02826       end_misalign = 0;
02827       correction = 0;
02828       aligned_brk = brk;
02829       
02830       /* handle contiguous cases */
02831       if (av->sbrk_base != NONCONTIGUOUS_REGIONS) { 
02832         
02833         /* Guarantee alignment of first new chunk made from this space */
02834 
02835         front_misalign = (INTERNAL_SIZE_T)chunk2mem(brk) & MALLOC_ALIGN_MASK;
02836         if (front_misalign > 0) {
02837 
02838           /*
02839             Skip over some bytes to arrive at an aligned position.
02840             We don't need to specially mark these wasted front bytes.
02841             They will never be accessed anyway because
02842             prev_inuse of av->top (and any chunk created from its start)
02843             is always true after initialization.
02844           */
02845 
02846           correction = MALLOC_ALIGNMENT - front_misalign;
02847           aligned_brk += correction;
02848         }
02849         
02850         /*
02851           If this isn't adjacent to a previous sbrk, then we will not
02852           be able to merge with old_top space, so must add to 2nd request.
02853         */
02854         
02855         correction += old_size;
02856         
02857         /* Pad out to hit a page boundary */
02858 
02859         end_misalign = (INTERNAL_SIZE_T)(brk + size + correction);
02860         correction += ((end_misalign + pagemask) & ~pagemask) - end_misalign;
02861         
02862         assert(correction >= 0);
02863         
02864         snd_brk = (char*)(MORECORE(correction));
02865         
02866         /*
02867           If can't allocate correction, try to at least find out current
02868           brk.  It might be enough to proceed without failing.
02869           
02870           Note that if second sbrk did NOT fail, we assume that space
02871           is contiguous with first sbrk. This is a safe assumption unless
02872           program is multithreaded but doesn't use locks and a foreign sbrk
02873           occurred between our first and second calls.
02874         */
02875         
02876         if (snd_brk == (char*)(MORECORE_FAILURE)) {
02877           correction = 0;
02878           snd_brk = (char*)(MORECORE(0));
02879         }
02880       }
02881       
02882       /* handle non-contiguous cases */
02883       else { 
02884         
02885         /* MORECORE/mmap must correctly align etc */
02886         assert(((unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK) == 0);
02887         
02888         /* Find out current end of memory */
02889         if (snd_brk == (char*)(MORECORE_FAILURE)) {
02890           snd_brk = (char*)(MORECORE(0));
02891         }
02892         
02893         /* This must lie on a page boundary */
02894         if (snd_brk != (char*)(MORECORE_FAILURE)) {
02895           assert(((INTERNAL_SIZE_T)(snd_brk) & pagemask) == 0);
02896         }
02897       }
02898       
02899       /* Adjust top based on results of second sbrk */
02900       if (snd_brk != (char*)(MORECORE_FAILURE)) {
02901        
02902         av->top = (mchunkptr)aligned_brk;
02903         set_head(av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);
02904         
02905         av->sbrked_mem += correction;
02906         
02907         /* If first time through and contiguous, record base */
02908         if (old_top == initial_top(av)) {
02909           if (av->sbrk_base == (char*)(MORECORE_FAILURE)) 
02910             av->sbrk_base = brk;
02911         }
02912         
02913         /*
02914           Otherwise, we either have a gap due to foreign sbrk or a
02915           non-contiguous region.  Insert a double fencepost at old_top
02916           to prevent consolidation with space we don't own. These
02917           fenceposts are artificial chunks that are marked as inuse
02918           and are in any case too small to use.  We need two to make
02919           sizes and alignments work out.
02920         */
02921 
02922         else {
02923           
02924           /* 
02925              Shrink old_top to insert fenceposts, keeping size a
02926              multiple of MALLOC_ALIGNMENT. 
02927           */
02928           old_size = (old_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
02929           set_head(old_top, old_size | PREV_INUSE);
02930           
02931           /*
02932             Note that the following assignments overwrite old_top when
02933             old_size was previously MINSIZE.  This is intentional. We
02934             need the fencepost, even if old_top otherwise gets lost.
02935           */
02936           chunk_at_offset(old_top, old_size          )->size =
02937             SIZE_SZ|PREV_INUSE;
02938 
02939           chunk_at_offset(old_top, old_size + SIZE_SZ)->size =
02940             SIZE_SZ|PREV_INUSE;
02941           
02942           /* If possible, release the rest. */
02943           if (old_size >= MINSIZE) 
02944             fREe(chunk2mem(old_top));
02945 
02946         }
02947       }
02948     }
02949     
02950     /* Update statistics */
02951     
02952     sum = av->sbrked_mem;
02953     if (sum > (unsigned long)(av->max_sbrked_mem))
02954       av->max_sbrked_mem = sum;
02955     
02956     sum += av->mmapped_mem;
02957     if (sum > (unsigned long)(av->max_total_mem))
02958       av->max_total_mem = sum;
02959 
02960     check_malloc_state();
02961     
02962     /* finally, do the allocation */
02963     
02964     p = av->top;
02965     size = chunksize(p);
02966     remainder_size = (long)size - (long)nb;
02967     
02968     /* check that one of the above allocation paths succeeded */
02969     if (remainder_size >= (long)MINSIZE) {
02970       remainder = chunk_at_offset(p, nb);
02971       av->top = remainder;
02972       set_head(p, nb | PREV_INUSE);
02973       set_head(remainder, remainder_size | PREV_INUSE);
02974       
02975       check_malloced_chunk(p, nb);
02976       return chunk2mem(p);
02977     }
02978   }
02979 
02980   /* catch all failure paths */
02981   MALLOC_FAILURE_ACTION;
02982   return 0;
02983 }
02984 
02985 
02986 /*
02987   sYSTRIm is an inverse of sorts to sYSMALLOc.
02988   It gives memory back to the system (via negative
02989   arguments to sbrk) if there is unused memory at the `high' end of
02990   the malloc pool. It is called automatically by free()
02991   when top space exceeds the trim threshold.
02992   returns 1 if it actually released any memory, else 0.
02993 */
02994 
02995 #if __STD_C
02996 static int sYSTRIm(size_t pad, mstate av)
02997 #else
02998 static int sYSTRIm(pad, av) size_t pad; mstate av;
02999 #endif
03000 {
03001   long  top_size;        /* Amount of top-most memory */
03002   long  extra;           /* Amount to release */
03003   long  released;        /* Amount actually released */
03004   char* current_brk;     /* address returned by pre-check sbrk call */
03005   char* new_brk;         /* address returned by post-check sbrk call */
03006   size_t pagesz;
03007 
03008   /* Don't bother trying if sbrk doesn't provide contiguous regions */
03009   if (av->sbrk_base != NONCONTIGUOUS_REGIONS) {
03010 
03011     pagesz = av->pagesize;
03012     top_size = chunksize(av->top);
03013     
03014     /* Release in pagesize units, keeping at least one page */
03015     extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
03016     
03017     if (extra > 0) {
03018       
03019       /*
03020         Only proceed if end of memory is where we last set it.
03021         This avoids problems if there were foreign sbrk calls.
03022       */
03023       current_brk = (char*)(MORECORE(0));
03024       if (current_brk == (char*)(av->top) + top_size) {
03025         
03026         /*
03027           Attempt to release memory. We ignore return value,
03028           and instead call again to find out where new end of memory is.
03029           This avoids problems if first call releases less than we asked,
03030           of if failure somehow altered brk value. (We could still
03031           encounter problems if it altered brk in some very bad way,
03032           but the only thing we can do is adjust anyway, which will cause
03033           some downstream failure.)
03034         */
03035         
03036         MORECORE(-extra);
03037         new_brk = (char*)(MORECORE(0));
03038         
03039         if (new_brk != (char*)MORECORE_FAILURE) {
03040           released = (long)(current_brk - new_brk);
03041 
03042           if (released != 0) {
03043             /* Success. Adjust top. */
03044             av->sbrked_mem -= released;
03045             set_head(av->top, (top_size - released) | PREV_INUSE);
03046             check_malloc_state();
03047             return 1;
03048           }
03049         }
03050       }
03051     }
03052   }
03053 
03054   return 0;
03055 }
03056 
03057 /* ----------------------- Main public routines ----------------------- */
03058 
03059 
03060 /*
03061   Malloc routine. See running comments for algorithm description.
03062 */
03063 
03064 #if __STD_C
03065 Void_t* mALLOc(size_t bytes)
03066 #else
03067   Void_t* mALLOc(bytes) size_t bytes;
03068 #endif
03069 {
03070   mstate av = get_malloc_state();
03071 
03072   INTERNAL_SIZE_T nb;               /* normalized request size */
03073   unsigned int    idx;              /* associated bin index */
03074   mbinptr         bin;              /* associated bin */
03075   mfastbinptr*    fb;               /* associated fastbin */
03076 
03077   mchunkptr       victim;           /* inspected/selected chunk */
03078   INTERNAL_SIZE_T size;             /* its size */
03079   int             victim_index;     /* its bin index */
03080 
03081   mchunkptr       remainder;        /* remainder from a split */
03082   long            remainder_size;   /* its size */
03083 
03084   unsigned int    block;            /* bit map traverser */
03085   unsigned int    bit;              /* bit map traverser */
03086   unsigned int    map;              /* current word of binmap */
03087 
03088   mchunkptr       fwd;              /* misc temp for linking */
03089   mchunkptr       bck;              /* misc temp for linking */
03090 
03091 
03092   /*
03093     Check request for legality and convert to internal form, nb.
03094     This rejects negative arguments when size_t is treated as
03095     signed. It also rejects arguments that are so large that the size
03096     appears negative when aligned and padded.  The converted form
03097     adds SIZE_T bytes overhead plus possibly more to obtain necessary
03098     alignment and/or to obtain a size of at least MINSIZE, the
03099     smallest allocatable size.
03100   */
03101 
03102   checked_request2size(bytes, nb);
03103 
03104   /*
03105     If the size qualifies as a fastbin, first check corresponding bin.
03106     This code is safe to execute even if av not yet initialized, so we
03107     can try it first, which saves some time on this fast path.
03108   */
03109 
03110   if (nb <= av->max_fast) { 
03111     fb = &(av->fastbins[(fastbin_index(nb))]);
03112     if ( (victim = *fb) != 0) {
03113       *fb = victim->fd;
03114       check_remalloced_chunk(victim, nb);
03115       return chunk2mem(victim);
03116     }
03117   }
03118 
03119   /*
03120     If a small request, check regular bin.  Since these "smallbins"
03121     hold one size each, no searching within bins is necessary.
03122     
03123     (If a large request, we need to wait until unsorted chunks are
03124     processed to find best fit. But for small ones, fits are exact
03125     anyway, so we can check now, which is faster.)
03126   */
03127 
03128   if (in_smallbin_range(nb)) {
03129     idx = smallbin_index(nb);
03130     bin = bin_at(av,idx);
03131 
03132     if ( (victim = last(bin)) != bin) {
03133       if (victim == 0) /* initialization check */
03134         malloc_consolidate(av);
03135       else {
03136         bck = victim->bk;
03137         set_inuse_bit_at_offset(victim, nb);
03138         bin->bk = bck;
03139         bck->fd = bin;
03140         
03141         check_malloced_chunk(victim, nb);
03142         return chunk2mem(victim);
03143       }
03144     }
03145   }
03146 
03147   /* 
03148      If a large request, consolidate fastbins before continuing.
03149      While it might look excessive to kill all fastbins before
03150      even seeing if there is space available, this avoids
03151      fragmentation problems normally associated with fastbins.
03152      Also, in practice, programs tend to have runs of either small or
03153      large requests, but less often mixtures, so consolidation is not 
03154      usually invoked all that often.
03155   */
03156 
03157   else {
03158     idx = largebin_index(nb);
03159     if (have_fastchunks(av)) /* consolidation/initialization check */
03160       malloc_consolidate(av);
03161   }
03162 
03163 
03164   /*
03165     Process recently freed or remaindered chunks, taking one only if
03166     it is exact fit, or, if a small request, it is the remainder from
03167     the most recent non-exact fit.  Place other traversed chunks in
03168     bins.  Note that this step is the only place in any routine where
03169     chunks are placed in bins.
03170 
03171     The outer loop here is needed because we might not realize until
03172     near the end of malloc that we should have consolidated, so must
03173     do so and retry. This happens at most once, and only when we would
03174     otherwise need to expand memory to service a "small" request.
03175   */
03176 
03177     
03178   for(;;) {    
03179     
03180     while ( (victim = unsorted_chunks(av)->bk) != unsorted_chunks(av)) {
03181       bck = victim->bk;
03182       size = chunksize(victim);
03183 
03184       /* 
03185          If a small request, try to use last remainder if it is the
03186          only chunk in unsorted bin.  This helps promote locality for
03187          runs of consecutive small requests. This is the only
03188          exception to best-fit.
03189       */
03190 
03191       if (in_smallbin_range(nb) && 
03192           victim == av->last_remainder &&
03193           bck == unsorted_chunks(av) &&
03194           (remainder_size = (long)size - (long)nb) >= (long)MINSIZE) {
03195 
03196         /* split and reattach remainder */
03197         remainder = chunk_at_offset(victim, nb);
03198         unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
03199         av->last_remainder = remainder; 
03200         remainder->bk = remainder->fd = unsorted_chunks(av);
03201         
03202         set_head(victim, nb | PREV_INUSE);
03203         set_head(remainder, remainder_size | PREV_INUSE);
03204         set_foot(remainder, remainder_size);
03205         
03206         check_malloced_chunk(victim, nb);
03207         return chunk2mem(victim);
03208       }
03209       
03210       /* remove from unsorted list */
03211       unsorted_chunks(av)->bk = bck;
03212       bck->fd = unsorted_chunks(av);
03213       
03214       /* Take now instead of binning if exact fit */
03215       
03216       if (size == nb) {
03217         set_inuse_bit_at_offset(victim, size);
03218         check_malloced_chunk(victim, nb);
03219         return chunk2mem(victim);
03220       }
03221       
03222       /* place chunk in bin */
03223       
03224       if (in_smallbin_range(size)) {
03225         victim_index = smallbin_index(size);
03226         bck = bin_at(av, victim_index);
03227         fwd = bck->fd;
03228       }
03229       else {
03230         victim_index = largebin_index(size);
03231         bck = bin_at(av, victim_index);
03232         fwd = bck->fd;
03233 
03234         /* maintain large bins in sorted order */
03235         if (fwd != bck) {
03236           /* if smaller than smallest, bypass loop below */
03237           if ((unsigned long)size <= 
03238               (unsigned long)(chunksize(bck->bk))) {
03239             fwd = bck;
03240             bck = bck->bk;
03241           }
03242           else {
03243             while (fwd != bck && 
03244                    (unsigned long)size < (unsigned long)(chunksize(fwd))) {
03245               fwd = fwd->fd;
03246             }
03247             bck = fwd->bk;
03248           }
03249         }
03250       }
03251       
03252       mark_bin(av, victim_index);
03253       victim->bk = bck;
03254       victim->fd = fwd;
03255       fwd->bk = victim;
03256       bck->fd = victim;
03257     }
03258    
03259     /*
03260       If a large request, scan through the chunks of current bin in
03261       sorted order to find smallest that fits.  This is the only step
03262       where an unbounded number of chunks might be scanned without doing
03263       anything useful with them. However the lists tend to be very
03264       short.
03265     */
03266       
03267     if (!in_smallbin_range(nb)) {
03268       bin = bin_at(av, idx);
03269 
03270       /* skip scan if largest chunk is too small */
03271       if ((victim = last(bin)) != bin &&
03272           (long)(chunksize(first(bin))) - (long)(nb) >= 0) {
03273         do {
03274           size = chunksize(victim);
03275           remainder_size = (long)size - (long)nb;
03276           
03277           if (remainder_size >= 0)  {
03278             unlink(victim, bck, fwd);
03279             
03280             /* Exhaust */
03281             if (remainder_size < (long)MINSIZE)  {
03282               set_inuse_bit_at_offset(victim, size);
03283               check_malloced_chunk(victim, nb);
03284               return chunk2mem(victim);
03285             }
03286             /* Split */
03287             else {
03288               remainder = chunk_at_offset(victim, nb);
03289               unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
03290               remainder->bk = remainder->fd = unsorted_chunks(av);
03291               set_head(victim, nb | PREV_INUSE);
03292               set_head(remainder, remainder_size | PREV_INUSE);
03293               set_foot(remainder, remainder_size);
03294               check_malloced_chunk(victim, nb);
03295               return chunk2mem(victim);
03296             }
03297           }
03298         } while ( (victim = victim->bk) != bin);
03299       }
03300     }    
03301 
03302     /*
03303       Search for a chunk by scanning bins, starting with next largest
03304       bin. This search is strictly by best-fit; i.e., the smallest
03305       (with ties going to approximately the least recently used) chunk
03306       that fits is selected.
03307       
03308       The bitmap avoids needing to check that most blocks are nonempty.
03309       The particular case of skipping all bins during warm-up phases
03310       when no chunks have been returned yet is faster than it might look.
03311     */
03312     
03313     ++idx;
03314     bin = bin_at(av,idx);
03315     block = idx2block(idx);
03316     map = av->binmap[block];
03317     bit = idx2bit(idx);
03318     
03319     for (;;) {
03320       /*
03321         Skip rest of block if there are no more set bits in this block.
03322       */
03323       
03324       if (bit > map || bit == 0) {
03325         for (;;) {
03326           if (++block >= BINMAPSIZE)  /* out of bins */
03327             break;
03328 
03329           else if ( (map = av->binmap[block]) != 0) {
03330             bin = bin_at(av, (block << BINMAPSHIFT));
03331             bit = 1;
03332             break;
03333           }
03334         }
03335         /* Optimizers seem to like this double-break better than goto */
03336         if (block >= BINMAPSIZE) 
03337           break;
03338       }
03339       
03340       /* Advance to bin with set bit. There must be one. */
03341       while ((bit & map) == 0) {
03342         bin = next_bin(bin);
03343         bit <<= 1;
03344       }
03345       
03346       victim = last(bin);
03347       
03348       /*  False alarm -- the bin is empty. Clear the bit. */
03349       if (victim == bin) {
03350         av->binmap[block] = map &= ~bit; /* Write through */
03351         bin = next_bin(bin);
03352         bit <<= 1;
03353       }
03354       
03355       /*  We know the first chunk in this bin is big enough to use. */
03356       else {
03357         size = chunksize(victim);
03358         remainder_size = (long)size - (long)nb;
03359         
03360         assert(remainder_size >= 0);
03361         
03362         /* unlink */
03363         bck = victim->bk;
03364         bin->bk = bck;
03365         bck->fd = bin;
03366         
03367         
03368         /* Exhaust */
03369         if (remainder_size < (long)MINSIZE) {
03370           set_inuse_bit_at_offset(victim, size);
03371           check_malloced_chunk(victim, nb);
03372           return chunk2mem(victim);
03373         }
03374         
03375         /* Split */
03376         else {
03377           remainder = chunk_at_offset(victim, nb);
03378           
03379           unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
03380           remainder->bk = remainder->fd = unsorted_chunks(av);
03381           /* advertise as last remainder */
03382           if (in_smallbin_range(nb)) 
03383             av->last_remainder = remainder; 
03384           
03385           set_head(victim, nb | PREV_INUSE);
03386           set_head(remainder, remainder_size | PREV_INUSE);
03387           set_foot(remainder, remainder_size);
03388           check_malloced_chunk(victim, nb);
03389           return chunk2mem(victim);
03390         }
03391       }
03392     }
03393     
03394     /*
03395       If large enough, split off the chunk bordering the end of memory
03396       ("top"). Note that this use of top is in accord with the best-fit search
03397       rule.  In effect, top is treated as larger (and thus less well
03398       fitting) than any other available chunk since it can be extended
03399       to be as large as necessary (up to system limitations).
03400 
03401       We require that "top" always exists (i.e., has size >= MINSIZE)
03402       after initialization, so if it would otherwise be exhuasted by
03403       current request, it is replenished. (Among the reasons for
03404       ensuring it exists is that we may need MINSIZE space to put in
03405       fenceposts in sysmalloc.)
03406     */
03407 
03408     victim = av->top;
03409     size = chunksize(victim);
03410     remainder_size = (long)size - (long)nb;
03411    
03412     if (remainder_size >= (long)MINSIZE) {
03413       remainder = chunk_at_offset(victim, nb);
03414       av->top = remainder;
03415       set_head(victim, nb | PREV_INUSE);
03416       set_head(remainder, remainder_size | PREV_INUSE);
03417       
03418       check_malloced_chunk(victim, nb);
03419       return chunk2mem(victim);
03420     }
03421     
03422     /*
03423       If there is space available in fastbins, consolidate and retry,
03424       to possibly avoid expanding memory. This can occur only if nb is
03425       in smallbin range so we didn't consolidate upon entry.
03426     */
03427 
03428     else if (have_fastchunks(av)) {
03429       assert(in_smallbin_range(nb));
03430       idx = smallbin_index(nb); /* restore original bin index */
03431       malloc_consolidate(av);
03432     }
03433 
03434     /* 
03435        Otherwise, relay to handle system-dependent cases 
03436     */
03437     else 
03438       return sYSMALLOc(nb, av);    
03439   }
03440 }
03441 
03442 
03443 
03444 /*
03445   Free routine. See running comments for algorithm description.
03446 */
03447 
03448 #if __STD_C
03449 void fREe(Void_t* mem)
03450 #else
03451 void fREe(mem) Void_t* mem;
03452 #endif
03453 {
03454   mstate av = get_malloc_state();
03455 
03456   mchunkptr       p;           /* chunk corresponding to mem */
03457   INTERNAL_SIZE_T size;        /* its size */
03458   mfastbinptr*    fb;          /* associated fastbin */
03459   mchunkptr       nextchunk;   /* next contiguous chunk */
03460   INTERNAL_SIZE_T nextsize;    /* its size */
03461   int             nextinuse;   /* true if nextchunk is used */
03462   INTERNAL_SIZE_T prevsize;    /* size of previous contiguous chunk */
03463   mchunkptr       bck;         /* misc temp for linking */
03464   mchunkptr       fwd;         /* misc temp for linking */
03465 
03466 
03467   /* free(0) has no effect */
03468   if (mem != 0) {
03469 
03470     p = mem2chunk(mem);
03471     check_inuse_chunk(p);
03472 
03473     size = chunksize(p);
03474 
03475     /*
03476       If eligible, place chunk on a fastbin so it can be found
03477       and used quickly in malloc.
03478     */
03479 
03480     if ((unsigned long)size <= (unsigned long)av->max_fast
03481 
03482 #if TRIM_FASTBINS
03483         /* 
03484            If TRIM_FASTBINS set, don't place chunks
03485            bordering top into fastbins
03486         */
03487         && (chunk_at_offset(p, size) != av->top)
03488 #endif
03489         ) {
03490 
03491       set_fastchunks(av);
03492       fb = &(av->fastbins[fastbin_index(size)]);
03493       p->fd = *fb;
03494       *fb = p;
03495     }
03496 
03497     /*
03498        Consolidate non-mmapped chunks as they arrive.
03499     */
03500 
03501     else if (!chunk_is_mmapped(p)) {
03502 
03503       nextchunk = chunk_at_offset(p, size);
03504 
03505       /* consolidate backward */
03506       if (!prev_inuse(p)) {
03507         prevsize = p->prev_size;
03508         size += prevsize;
03509         p = chunk_at_offset(p, -((long) prevsize));
03510         unlink(p, bck, fwd);
03511       }
03512 
03513       nextsize = chunksize(nextchunk);
03514 
03515       if (nextchunk != av->top) {
03516 
03517         /* get and clear inuse bit */
03518         nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
03519         set_head(nextchunk, nextsize);
03520 
03521         /* consolidate forward */
03522         if (!nextinuse) {
03523           unlink(nextchunk, bck, fwd);
03524           size += nextsize;
03525         }
03526 
03527         /*
03528           Place chunk in unsorted chunk list. Chunks are
03529           not placed into regular bins until after they have
03530           been given one chance to be used in malloc.
03531         */
03532 
03533         bck = unsorted_chunks(av);
03534         fwd = bck->fd;
03535         p->bk = bck;
03536         p->fd = fwd;
03537         bck->fd = p;
03538         fwd->bk = p;
03539 
03540         set_head(p, size | PREV_INUSE);
03541         set_foot(p, size);
03542       }
03543 
03544       /*
03545          If the chunk borders the current high end of memory,
03546          consolidate into top
03547       */
03548 
03549       else {
03550         size += nextsize;
03551         set_head(p, size | PREV_INUSE);
03552         av->top = p;
03553 
03554         /*
03555           If the total unused topmost memory exceeds trim
03556           threshold, ask malloc_trim to reduce top. 
03557 
03558           Unless max_fast is 0, we don't know if there are fastbins
03559           bordering top, so we cannot tell for sure whether threshold has
03560           been reached unless fastbins are consolidated.  But we don't
03561           want to consolidate on each free.  As a compromise,
03562           consolidation is performed if half the threshold is
03563           reached.
03564 
03565         */
03566 
03567         if ((unsigned long)(size) > (unsigned long)(av->trim_threshold / 2)) {
03568           if (have_fastchunks(av)) {
03569             malloc_consolidate(av);
03570             size = chunksize(av->top);
03571           }
03572 
03573           if ((unsigned long)(size) > (unsigned long)(av->trim_threshold)) 
03574             sYSTRIm(av->top_pad, av);
03575         }
03576       }
03577     }
03578 
03579     /*
03580       If the chunk was allocated via mmap, release via munmap()
03581       Note that if HAVE_MMAP is false but chunk_is_mmapped is
03582       true, then user must have overwritten memory. There's nothing
03583       we can do to catch this error unless DEBUG is set, in which case
03584       check_inuse_chunk (above) will have triggered error.
03585     */
03586 
03587     else {
03588 #if HAVE_MMAP
03589       int ret;
03590       INTERNAL_SIZE_T offset = p->prev_size;
03591       av->n_mmaps--;
03592       av->mmapped_mem -= (size + offset);
03593       ret = munmap((char*)p - offset, size + offset);
03594       /* munmap returns non-zero on failure */
03595       assert(ret == 0);
03596 #endif
03597     }
03598   }
03599 }
03600 
03601 
03602 
03603 /*
03604   malloc_consolidate is a specialized version of free() that tears
03605   down chunks held in fastbins.  Free itself cannot be used for this
03606   purpose since, among other things, it might place chunks back onto
03607   fastbins.  So, instead, we need to use a minor variant of the same
03608   code.
03609   
03610   Also, because this routine needs to be called the first time through
03611   malloc anyway, it turns out to be the perfect place to bury
03612   initialization code.
03613 */
03614 
03615 #if __STD_C
03616 static void malloc_consolidate(mstate av)
03617 #else
03618 static void malloc_consolidate(av) mstate av;
03619 #endif
03620 {
03621   mfastbinptr*    fb;
03622   mfastbinptr*    maxfb;
03623   mchunkptr       p;
03624   mchunkptr       nextp;
03625   mchunkptr       unsorted_bin;
03626   mchunkptr       first_unsorted;
03627 
03628   /* These have same use as in free() */
03629   mchunkptr       nextchunk;
03630   INTERNAL_SIZE_T size;
03631   INTERNAL_SIZE_T nextsize;
03632   INTERNAL_SIZE_T prevsize;
03633   int             nextinuse;
03634   mchunkptr       bck;
03635   mchunkptr       fwd;
03636 
03637   /*
03638     If max_fast is 0, we know that malloc hasn't
03639     yet been initialized, in which case do so.
03640   */
03641 
03642   if (av->max_fast == 0) {
03643     malloc_init_state(av);
03644     check_malloc_state();
03645   }
03646   else if (have_fastchunks(av)) {
03647     clear_fastchunks(av);
03648     
03649     unsorted_bin = unsorted_chunks(av);
03650     
03651     /*
03652       Remove each chunk from fast bin and consolidate it, placing it
03653       then in unsorted bin. Among other reasons for doing this,
03654       placing in unsorted bin avoids needing to calculate actual bins
03655       until malloc is sure that chunks aren't immediately going to be
03656       reused anyway.
03657     */
03658     
03659     maxfb = &(av->fastbins[fastbin_index(av->max_fast)]);
03660     fb = &(av->fastbins[0]);
03661     do {
03662       if ( (p = *fb) != 0) {
03663         *fb = 0;
03664         
03665         do {
03666           check_inuse_chunk(p);
03667           nextp = p->fd;
03668           
03669           /* Slightly streamlined version of consolidation code in free() */
03670           size = p->size & ~PREV_INUSE;
03671           nextchunk = chunk_at_offset(p, size);
03672           
03673           if (!prev_inuse(p)) {
03674             prevsize = p->prev_size;
03675             size += prevsize;
03676             p = chunk_at_offset(p, -((long) prevsize));
03677             unlink(p, bck, fwd);
03678           }
03679           
03680           nextsize = chunksize(nextchunk);
03681           
03682           if (nextchunk != av->top) {
03683             
03684             nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
03685             set_head(nextchunk, nextsize);
03686             
03687             if (!nextinuse) {
03688               size += nextsize;
03689               unlink(nextchunk, bck, fwd);
03690             }
03691             
03692             first_unsorted = unsorted_bin->fd;
03693             unsorted_bin->fd = p;
03694             first_unsorted->bk = p;
03695             
03696             set_head(p, size | PREV_INUSE);
03697             p->bk = unsorted_bin;
03698             p->fd = first_unsorted;
03699             set_foot(p, size);
03700           }
03701           
03702           else {
03703             size += nextsize;
03704             set_head(p, size | PREV_INUSE);
03705             av->top = p;
03706           }
03707           
03708         } while ( (p = nextp) != 0);
03709         
03710       }
03711     } while (fb++ != maxfb);
03712   }
03713 }
03714 
03715 
03716 
03717 
03718 /*
03719   Realloc algorithm cases:
03720 
03721   * Chunks that were obtained via mmap cannot be extended or shrunk
03722     unless HAVE_MREMAP is defined, in which case mremap is used.
03723     Otherwise, if the reallocation is for additional space, they are
03724     copied.  If for less, they are just left alone.
03725 
03726   * Otherwise, if the reallocation is for additional space, and the
03727     chunk can be extended, it is, else a malloc-copy-free sequence is
03728     taken.  There are several different ways that a chunk could be
03729     extended. All are tried:
03730 
03731        * Extending forward into following adjacent free chunk.
03732        * Shifting backwards, joining preceding adjacent space
03733        * Both shifting backwards and extending forward.
03734        * Extending into newly sbrked space
03735 
03736   * If there is not enough memory available to realloc, realloc
03737     returns null, but does NOT free the existing space.
03738 
03739   * If the reallocation is for less space, the newly unused space is
03740     lopped off and freed. Unless the #define REALLOC_ZERO_BYTES_FREES
03741     is set, realloc with a size argument of zero (re)allocates a
03742     minimum-sized chunk.
03743 
03744 
03745   The old unix realloc convention of allowing the last-free'd chunk
03746   to be used as an argument to realloc is no longer supported.
03747   I don't know of any programs still relying on this feature,
03748   and allowing it would also allow too many other incorrect
03749   usages of realloc to be sensible.
03750 */
03751 
03752 #if __STD_C
03753 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
03754 #else
03755 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
03756 #endif
03757 {
03758   mstate av = get_malloc_state();
03759 
03760   INTERNAL_SIZE_T  nb;              /* padded request size */
03761 
03762   mchunkptr        oldp;            /* chunk corresponding to oldmem */
03763   INTERNAL_SIZE_T  oldsize;         /* its size */
03764 
03765   mchunkptr        newp;            /* chunk to return */
03766   INTERNAL_SIZE_T  newsize;         /* its size */
03767   Void_t*          newmem;          /* corresponding user mem */
03768 
03769   mchunkptr        next;            /* next contiguous chunk after oldp */
03770   mchunkptr        prev;            /* previous contiguous chunk before oldp */
03771 
03772   mchunkptr        remainder;       /* extra space at end of newp */
03773   long             remainder_size;  /* its size */
03774 
03775   mchunkptr        bck;             /* misc temp for linking */
03776   mchunkptr        fwd;             /* misc temp for linking */
03777 
03778   INTERNAL_SIZE_T  copysize;        /* bytes to copy */
03779   int              ncopies;         /* INTERNAL_SIZE_T words to copy */
03780   INTERNAL_SIZE_T* s;               /* copy source */ 
03781   INTERNAL_SIZE_T* d;               /* copy destination */
03782 
03783 
03784 #ifdef REALLOC_ZERO_BYTES_FREES
03785   if (bytes == 0) {
03786     fREe(oldmem);
03787     return 0;
03788   }
03789 #endif
03790 
03791   /* realloc of null is supposed to be same as malloc */
03792   if (oldmem == 0) return mALLOc(bytes);
03793 
03794   checked_request2size(bytes, nb);
03795 
03796   oldp    = mem2chunk(oldmem);
03797   oldsize = chunksize(oldp);
03798 
03799   check_inuse_chunk(oldp);
03800 
03801   if (!chunk_is_mmapped(oldp)) {
03802 
03803     if ((unsigned long)(oldsize) >= (unsigned long)(nb)) {
03804       /* already big enough; split below */
03805       newp = oldp;
03806       newsize = oldsize;
03807     }
03808 
03809     else {
03810       newp = 0;
03811       newsize = 0;
03812 
03813       next = chunk_at_offset(oldp, oldsize);
03814 
03815       if (next == av->top) {            /* Expand forward into top */
03816         newsize = oldsize + chunksize(next);
03817 
03818         if ((unsigned long)(newsize) >= (unsigned long)(nb + MINSIZE)) {
03819           set_head_size(oldp, nb);
03820           av->top = chunk_at_offset(oldp, nb);
03821           set_head(av->top, (newsize - nb) | PREV_INUSE);
03822           return chunk2mem(oldp);
03823         }
03824 
03825         else if (!prev_inuse(oldp)) {   /* Shift backwards + top */
03826           prev = prev_chunk(oldp);
03827           newsize += chunksize(prev);
03828 
03829           if ((unsigned long)(newsize) >= (unsigned long)(nb + MINSIZE)) {
03830             newp = prev;
03831             unlink(prev, bck, fwd);
03832             av->top = chunk_at_offset(newp, nb);
03833             set_head(av->top, (newsize - nb) | PREV_INUSE);
03834             newsize = nb; 
03835           }
03836         }
03837       }
03838 
03839       else if (!inuse(next)) {          /* Forward into next chunk */
03840         newsize = oldsize + chunksize(next);
03841         
03842         if (((unsigned long)(newsize) >= (unsigned long)(nb))) {
03843           newp = oldp;
03844           unlink(next, bck, fwd);
03845         }
03846         
03847         else if (!prev_inuse(oldp)) {   /* Forward + backward */
03848           prev = prev_chunk(oldp);
03849           newsize += chunksize(prev);
03850           
03851           if (((unsigned long)(newsize) >= (unsigned long)(nb))) {
03852             newp = prev;
03853             unlink(prev, bck, fwd);
03854             unlink(next, bck, fwd);
03855           }
03856         }
03857       }
03858       
03859       else if (!prev_inuse(oldp)) {     /* Backward only */
03860         prev = prev_chunk(oldp);
03861         newsize = oldsize + chunksize(prev);
03862         
03863         if ((unsigned long)(newsize) >= (unsigned long)(nb)) {
03864           newp = prev;
03865           unlink(prev, bck, fwd);
03866         }
03867       }
03868       
03869       if (newp != 0) {
03870         if (newp != oldp) {
03871           /* Backward copies are not worth unrolling */
03872           MALLOC_COPY(chunk2mem(newp), oldmem, oldsize - SIZE_SZ, 1);
03873         }
03874       }
03875 
03876       /* Must allocate */
03877       else {                  
03878         newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
03879         if (newmem == 0)
03880           return 0; /* propagate failure */
03881 
03882         newp = mem2chunk(newmem);
03883         newsize = chunksize(newp);
03884 
03885         /*
03886           Avoid copy if newp is next chunk after oldp.
03887         */
03888         if (newp == next) {
03889           newsize += oldsize;
03890           newp = oldp;
03891         }
03892         else {
03893 
03894           /*
03895             Unroll copy of <= 36 bytes (72 if 8byte sizes)
03896             We know that contents have an odd number of
03897             INTERNAL_SIZE_T-sized words; minimally 3.
03898           */
03899           
03900           copysize = oldsize - SIZE_SZ;
03901           s = (INTERNAL_SIZE_T*)oldmem;
03902           d = (INTERNAL_SIZE_T*)(chunk2mem(newp));
03903           ncopies = copysize / sizeof(INTERNAL_SIZE_T);
03904           assert(ncopies >= 3);
03905           
03906           if (ncopies > 9)
03907             MALLOC_COPY(d, s, copysize, 0);
03908           
03909           else {
03910             *(d+0) = *(s+0);
03911             *(d+1) = *(s+1);
03912             *(d+2) = *(s+2);
03913             if (ncopies > 4) {
03914               *(d+3) = *(s+3);
03915               *(d+4) = *(s+4);
03916               if (ncopies > 6) {
03917                 *(d+5) = *(s+5);
03918                 *(d+6) = *(s+6);
03919                 if (ncopies > 8) {
03920                   *(d+7) = *(s+7);
03921                   *(d+8) = *(s+8);
03922                 }
03923               }
03924             }
03925           }
03926 
03927           fREe(oldmem);
03928           check_inuse_chunk(newp);
03929           return chunk2mem(newp);
03930         }
03931       }
03932     }
03933 
03934 
03935     /* If possible, free extra space in old or extended chunk */
03936 
03937     remainder_size = (long)newsize - (long)nb;
03938     assert(remainder_size >= 0);
03939 
03940     if (remainder_size >= (long)MINSIZE) { /* split remainder */
03941       remainder = chunk_at_offset(newp, nb);
03942       set_head_size(newp, nb);
03943       set_head(remainder, remainder_size | PREV_INUSE);
03944       /* Mark remainder as inuse so free() won't complain */
03945       set_inuse_bit_at_offset(remainder, remainder_size);
03946       fREe(chunk2mem(remainder)); 
03947     }
03948 
03949     else { /* not enough extra to split off */
03950       set_head_size(newp, newsize);
03951       set_inuse_bit_at_offset(newp, newsize);
03952     }
03953 
03954     check_inuse_chunk(newp);
03955     return chunk2mem(newp);
03956   }
03957 
03958   /*
03959     Handle mmap cases
03960   */
03961 
03962   else {
03963 #if HAVE_MMAP
03964 
03965 #if HAVE_MREMAP
03966     INTERNAL_SIZE_T offset = oldp->prev_size;
03967     size_t pagemask = av->pagesize - 1;
03968     char *cp;
03969     unsigned long sum;
03970     
03971     /* Note the extra SIZE_SZ overhead */
03972     newsize = (nb + offset + SIZE_SZ + pagemask) & ~pagemask;
03973 
03974     /* don't need to remap if still within same page */
03975     if (oldsize == newsize - offset) 
03976       return oldmem;
03977 
03978     cp = (char*)mremap((char*)oldp - offset, oldsize + offset, newsize, 1);
03979     
03980     if (cp != (char*)MORECORE_FAILURE) {
03981 
03982       newp = (mchunkptr)(cp + offset);
03983       set_head(newp, (newsize - offset)|IS_MMAPPED);
03984       
03985       assert(aligned_OK(chunk2mem(newp)));
03986       assert((newp->prev_size == offset));
03987       
03988       /* update statistics */
03989       sum = av->mmapped_mem += newsize - oldsize;
03990       if (sum > (unsigned long)(av->max_mmapped_mem)) 
03991         av->max_mmapped_mem = sum;
03992       sum += av->sbrked_mem;
03993       if (sum > (unsigned long)(av->max_total_mem)) 
03994         av->max_total_mem = sum;
03995       
03996       return chunk2mem(newp);
03997     }
03998 
03999 #endif
04000 
04001     /* Note the extra SIZE_SZ overhead. */
04002     if ((long)oldsize - (long)SIZE_SZ >= (long)nb)
04003       newmem = oldmem; /* do nothing */
04004     else {
04005       /* Must alloc, copy, free. */
04006       newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
04007       if (newmem != 0) {
04008         MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ, 0);
04009         fREe(oldmem);
04010       }
04011     }
04012     return newmem;
04013 
04014 #else 
04015     /* If !HAVE_MMAP, but chunk_is_mmapped, user must have overwritten mem */
04016     check_malloc_state();
04017     MALLOC_FAILURE_ACTION;
04018     return 0;
04019 #endif
04020   }
04021 
04022 }
04023 
04024 
04025 
04026 /*
04027   memalign requests more than enough space from malloc, finds a spot
04028   within that chunk that meets the alignment request, and then
04029   possibly frees the leading and trailing space.
04030   
04031   Alignments must be powers of two. If the argument is not a
04032   power of two, the nearest greater power is used.
04033   
04034   8-byte alignment is guaranteed by normal malloc calls, so don't
04035   bother calling memalign with an argument of 8 or less.
04036   
04037   Overreliance on memalign is a sure way to fragment space.
04038 */
04039 
04040 
04041 #if __STD_C
04042 Void_t* mEMALIGn(size_t alignment, size_t bytes)
04043 #else
04044 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
04045 #endif
04046 {
04047   INTERNAL_SIZE_T nb;             /* padded  request size */
04048   char*           m;              /* memory returned by malloc call */
04049   mchunkptr       p;              /* corresponding chunk */
04050   char*           brk;            /* alignment point within p */
04051   mchunkptr       newp;           /* chunk to return */
04052   INTERNAL_SIZE_T newsize;        /* its size */
04053   INTERNAL_SIZE_T leadsize;       /* leading space befor alignment point */
04054   mchunkptr       remainder;      /* spare room at end to split off */
04055   long            remainder_size; /* its size */
04056 
04057 
04058   /* If need less alignment than we give anyway, just relay to malloc */
04059 
04060   if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
04061 
04062   /* Otherwise, ensure that it is at least a minimum chunk size */
04063 
04064   if (alignment <  MINSIZE) alignment = MINSIZE;
04065 
04066   /* Make sure alignment is power of 2 (in case MINSIZE is not).  */
04067   if ((alignment & (alignment - 1)) != 0) {
04068     size_t a = MALLOC_ALIGNMENT * 2;
04069     while ((unsigned long)a < (unsigned long)alignment) a <<= 1;
04070     alignment = a;
04071   }
04072 
04073   checked_request2size(bytes, nb);
04074 
04075   /* Call malloc with worst case padding to hit alignment. */
04076 
04077   m  = (char*)(mALLOc(nb + alignment + MINSIZE));
04078 
04079   if (m == 0) return 0; /* propagate failure */
04080 
04081   p = mem2chunk(m);
04082 
04083   if ((((unsigned long)(m)) % alignment) != 0) { /* misaligned */
04084 
04085     /*
04086       Find an aligned spot inside chunk.  Since we need to give back
04087       leading space in a chunk of at least MINSIZE, if the first
04088       calculation places us at a spot with less than MINSIZE leader,
04089       we can move to the next aligned spot -- we've allocated enough
04090       total room so that this is always possible.
04091     */
04092 
04093     brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) &
04094                            -((signed long) alignment));
04095     if ((long)(brk - (char*)(p)) < (long)MINSIZE)
04096       brk = brk + alignment;
04097 
04098     newp = (mchunkptr)brk;
04099     leadsize = brk - (char*)(p);
04100     newsize = chunksize(p) - leadsize;
04101 
04102     /* For mmapped chunks, just adjust offset */
04103     if (chunk_is_mmapped(p)) {
04104       newp->prev_size = p->prev_size + leadsize;
04105       set_head(newp, newsize|IS_MMAPPED);
04106       return chunk2mem(newp);
04107     }
04108 
04109     /* Otherwise, give back leader, use the rest */
04110 
04111     set_head(newp, newsize | PREV_INUSE);
04112     set_inuse_bit_at_offset(newp, newsize);
04113     set_head_size(p, leadsize);
04114     fREe(chunk2mem(p));
04115     p = newp;
04116 
04117     assert (newsize >= nb &&
04118             (((unsigned long)(chunk2mem(p))) % alignment) == 0);
04119   }
04120 
04121   /* Also give back spare room at the end */
04122   if (!chunk_is_mmapped(p)) {
04123 
04124     remainder_size = (long)(chunksize(p)) - (long)nb;
04125 
04126     if (remainder_size >= (long)MINSIZE) {
04127       remainder = chunk_at_offset(p, nb);
04128       set_head(remainder, remainder_size | PREV_INUSE);
04129       set_head_size(p, nb);
04130       fREe(chunk2mem(remainder));
04131     }
04132   }
04133 
04134   check_inuse_chunk(p);
04135   return chunk2mem(p);
04136 
04137 }
04138 
04139 
04140 
04141 
04142 /*
04143   calloc calls malloc, then zeroes out the allocated chunk.
04144 */
04145 
04146 #if __STD_C
04147 Void_t* cALLOc(size_t n_elements, size_t elem_size)
04148 #else
04149 Void_t* cALLOc(n_elements, elem_size) size_t n_elements; size_t elem_size;
04150 #endif
04151 {
04152   mchunkptr p;
04153   INTERNAL_SIZE_T clearsize;
04154   int nclears;
04155   INTERNAL_SIZE_T* d;
04156 
04157   Void_t* mem = mALLOc(n_elements * elem_size);
04158 
04159   if (mem != 0) {
04160     p = mem2chunk(mem);
04161     if (!chunk_is_mmapped(p)) {  /* don't need to clear mmapped space */
04162 
04163       /*
04164         Unroll clear of <= 36 bytes (72 if 8byte sizes)
04165         We know that contents have an odd number of
04166         INTERNAL_SIZE_T-sized words; minimally 3.
04167       */
04168 
04169       d = (INTERNAL_SIZE_T*)mem;
04170       clearsize = chunksize(p) - SIZE_SZ;
04171       nclears = clearsize / sizeof(INTERNAL_SIZE_T);
04172       assert(nclears >= 3);
04173 
04174       if (nclears > 9)
04175         MALLOC_ZERO(d, clearsize);
04176 
04177       else {
04178         *(d+0) = 0;
04179         *(d+1) = 0;
04180         *(d+2) = 0;
04181         if (nclears > 4) {
04182           *(d+3) = 0;
04183           *(d+4) = 0;
04184           if (nclears > 6) {
04185             *(d+5) = 0;
04186             *(d+6) = 0;
04187             if (nclears > 8) {
04188               *(d+7) = 0;
04189               *(d+8) = 0;
04190             }
04191           }
04192         }
04193       }
04194     }
04195   }
04196   return mem;
04197 }
04198 
04199 
04200 /*
04201   cfree just calls free. It is needed/defined on some systems
04202   that pair it with calloc, presumably for odd historical reasons
04203   (such as: cfree is used in example code in first edition of K&R).
04204 */
04205 
04206 #if __STD_C
04207 void cFREe(Void_t *mem)
04208 #else
04209 void cFREe(mem) Void_t *mem;
04210 #endif
04211 {
04212   fREe(mem);
04213 }
04214 
04215 
04216 
04217 
04218 
04219 /*
04220   valloc just invokes memalign with alignment argument equal
04221   to the page size of the system (or as near to this as can
04222   be figured out from all the includes/defines above.)
04223 */
04224 
04225 #if __STD_C
04226 Void_t* vALLOc(size_t bytes)
04227 #else
04228 Void_t* vALLOc(bytes) size_t bytes;
04229 #endif
04230 {
04231   /* Ensure initialization/consolidation */
04232   mstate av = get_malloc_state();
04233   malloc_consolidate(av);
04234   return mEMALIGn(av->pagesize, bytes);
04235 }
04236 
04237 /*
04238   pvalloc just invokes valloc for the nearest pagesize
04239   that will accommodate request
04240 */
04241 
04242 
04243 #if __STD_C
04244 Void_t* pVALLOc(size_t bytes)
04245 #else
04246 Void_t* pVALLOc(bytes) size_t bytes;
04247 #endif
04248 {
04249   mstate av = get_malloc_state();
04250   size_t pagesz;
04251 
04252   /* Ensure initialization/consolidation */
04253   malloc_consolidate(av);
04254 
04255   pagesz = av->pagesize;
04256   return mEMALIGn(pagesz, (bytes + pagesz - 1) & ~(pagesz - 1));
04257 }
04258 
04259 
04260 /*
04261   Malloc_Trim  gives memory back to the system (via negative
04262   arguments to sbrk) if there is unused memory at the `high' end of
04263   the malloc pool. You can call this after freeing large blocks of
04264   memory to potentially reduce the system-level memory requirements
04265   of a program. However, it cannot guarantee to reduce memory. Under
04266   some allocation patterns, some large free blocks of memory will be
04267   locked between two used chunks, so they cannot be given back to
04268   the system.
04269   
04270   The `pad' argument to malloc_trim represents the amount of free
04271   trailing space to leave untrimmed. If this argument is zero,
04272   only the minimum amount of memory to maintain internal data
04273   structures will be left (one page or less). Non-zero arguments
04274   can be supplied to maintain enough trailing space to service
04275   future expected allocations without having to re-obtain memory
04276   from the system.
04277   
04278   Malloc_trim returns 1 if it actually released any memory, else 0.
04279 */
04280 
04281 #if __STD_C
04282 int mTRIm(size_t pad)
04283 #else
04284 int mTRIm(pad) size_t pad;
04285 #endif
04286 {
04287   mstate av = get_malloc_state();
04288   /* Ensure initialization/consolidation */
04289   malloc_consolidate(av);
04290 
04291   return sYSTRIm(pad, av);
04292 }
04293 
04294 /*
04295   malloc_usable_size tells you how many bytes you can actually use in
04296   an allocated chunk, which may be more than you requested (although
04297   often not). You can use this many bytes without worrying about
04298   overwriting other allocated objects. Not a particularly great
04299   programming practice, but still sometimes useful.
04300 */
04301 
04302 #if __STD_C
04303 size_t mUSABLe(Void_t* mem)
04304 #else
04305 size_t mUSABLe(mem) Void_t* mem;
04306 #endif
04307 {
04308   mchunkptr p;
04309   if (mem != 0) {
04310     p = mem2chunk(mem);
04311     if (chunk_is_mmapped(p))
04312       return chunksize(p) - 2*SIZE_SZ;
04313     else if (inuse(p))
04314       return chunksize(p) - SIZE_SZ;
04315   }
04316   return 0;
04317 }
04318 
04319 
04320 
04321 
04322 /*
04323   mallinfo returns a copy of updated current mallinfo.
04324 */
04325 
04326 struct mallinfo mALLINFo()
04327 {
04328   mstate av = get_malloc_state();
04329   struct mallinfo mi;
04330   int i;
04331   mbinptr b;
04332   mchunkptr p;
04333   INTERNAL_SIZE_T avail;
04334   int navail;
04335   int nfastblocks;
04336   int fastbytes;
04337 
04338   /* Ensure initialization */
04339   if (av->top == 0)  malloc_consolidate(av);
04340 
04341   check_malloc_state();
04342 
04343   /* Account for top */
04344   avail = chunksize(av->top);
04345   navail = 1;  /* top always exists */
04346 
04347   /* traverse fastbins */
04348   nfastblocks = 0;
04349   fastbytes = 0;
04350 
04351   for (i = 0; i < NFASTBINS; ++i) {
04352     for (p = av->fastbins[i]; p != 0; p = p->fd) {
04353       ++nfastblocks;
04354       fastbytes += chunksize(p);
04355     }
04356   }
04357 
04358   avail += fastbytes;
04359 
04360   /* traverse regular bins */
04361   for (i = 1; i < NBINS; ++i) {
04362     b = bin_at(av, i);
04363     for (p = last(b); p != b; p = p->bk) {
04364       avail += chunksize(p);
04365       navail++;
04366     }
04367   }
04368 
04369   mi.smblks = nfastblocks;
04370   mi.ordblks = navail;
04371   mi.fordblks = avail;
04372   mi.uordblks = av->sbrked_mem - avail;
04373   mi.arena = av->sbrked_mem;
04374   mi.hblks = av->n_mmaps;
04375   mi.hblkhd = av->mmapped_mem;
04376   mi.fsmblks = fastbytes;
04377   mi.keepcost = chunksize(av->top);
04378   mi.usmblks = av->max_total_mem;
04379   return mi;
04380 }
04381 
04382 
04383 
04384 /*
04385   malloc_stats prints on stderr the amount of space obtained from the
04386   system (both via sbrk and mmap), the maximum amount (which may be
04387   more than current if malloc_trim and/or munmap got called), and the
04388   current number of bytes allocated via malloc (or realloc, etc) but
04389   not yet freed. Note that this is the number of bytes allocated, not
04390   the number requested. It will be larger than the number requested
04391   because of alignment and bookkeeping overhead. Because it includes
04392   alignment wastage as being in use, this figure may be greater than zero 
04393   even when no user-level chunks are allocated.
04394 
04395   The reported current and maximum system memory can be inaccurate if
04396   a program makes other calls to system memory allocation functions
04397   (normally sbrk) outside of malloc.
04398 
04399   malloc_stats prints only the most commonly interesting statistics.
04400   More information can be obtained by calling mallinfo.
04401 */
04402 
04403 void mSTATs()
04404 {
04405   struct mallinfo mi = mALLINFo();
04406 
04407 #ifdef WIN32
04408   {
04409     unsigned long free, reserved, committed;
04410     vminfo (&free, &reserved, &committed);
04411     fprintf(stderr, "free bytes       = %10lu\n", 
04412             free);
04413     fprintf(stderr, "reserved bytes   = %10lu\n", 
04414             reserved);
04415     fprintf(stderr, "committed bytes  = %10lu\n", 
04416             committed);
04417   }
04418 #endif
04419 
04420 
04421   fprintf(stderr, "max system bytes = %10lu\n",
04422           (unsigned long)(mi.usmblks));
04423   fprintf(stderr, "system bytes     = %10lu\n",
04424           (unsigned long)(mi.arena + mi.hblkhd));
04425   fprintf(stderr, "in use bytes     = %10lu\n",
04426           (unsigned long)(mi.uordblks + mi.hblkhd));
04427 
04428 #ifdef WIN32 
04429   {
04430     unsigned long kernel, user;
04431     if (cpuinfo (TRUE, &kernel, &user)) {
04432       fprintf(stderr, "kernel ms        = %10lu\n", 
04433               kernel);
04434       fprintf(stderr, "user ms          = %10lu\n", 
04435               user);
04436     }
04437   }
04438 #endif
04439 }
04440 
04441 
04442 
04443 /*
04444   mallopt is the general SVID/XPG interface to tunable parameters.
04445   The format is to provide a (parameter-number, parameter-value)
04446   pair.  mallopt then sets the corresponding parameter to the
04447   argument value if it can (i.e., so long as the value is
04448   meaningful), and returns 1 if successful else 0.  See descriptions
04449   of tunable parameters above for meanings.
04450 */
04451 
04452 #if __STD_C
04453 int mALLOPt(int param_number, int value)
04454 #else
04455 int mALLOPt(param_number, value) int param_number; int value;
04456 #endif
04457 {
04458   mstate av = get_malloc_state();
04459   /* Ensure initialization/consolidation */
04460   malloc_consolidate(av);
04461 
04462   switch(param_number) {
04463   case M_MXFAST:
04464     if (value >= 0 && value <= MAX_FAST_SIZE) {
04465       av->max_fast = req2max_fast(value);
04466       return 1;
04467     }
04468     else
04469       return 0;
04470 
04471   case M_TRIM_THRESHOLD:
04472     av->trim_threshold = value;
04473     return 1;
04474 
04475   case M_TOP_PAD:
04476     av->top_pad = value;
04477     return 1;
04478 
04479   case M_MMAP_THRESHOLD:
04480     av->mmap_threshold = value;
04481     return 1;
04482 
04483   case M_MMAP_MAX:
04484 #if HAVE_MMAP
04485     av->n_mmaps_max = value;
04486     return 1;
04487 #else
04488     if (value != 0)
04489       return 0;
04490     else {
04491       av->n_mmaps_max = value;
04492       return 1;
04493     }
04494 #endif
04495 
04496   default:
04497     return 0;
04498   }
04499 }
04500 
04501 
04502 /* -------------------------------------------------------------- */
04503 
04504 /* 
04505    Emulation of sbrk for win32. 
04506    Donated by J. Walter <Walter@GeNeSys-e.de>.
04507    For additional information about this code, and malloc on Win32, see 
04508      http://www.genesys-e.de/jwalter/
04509 
04510 */
04511 
04512 
04513 #ifdef WIN32
04514 
04515 #ifdef _DEBUG
04516 /* #define TRACE */
04517 #endif
04518 
04519 /* Support for USE_MALLOC_LOCK */
04520 #ifdef USE_MALLOC_LOCK
04521 
04522 /* Wait for spin lock */
04523 static int slwait (int *sl) {
04524     while (InterlockedCompareExchange ((void **) sl, (void *) 1, (void *) 0) != 0) 
04525            Sleep (0);
04526     return 0;
04527 }
04528 
04529 /* Release spin lock */
04530 static int slrelease (int *sl) {
04531     InterlockedExchange (sl, 0);
04532     return 0;
04533 }
04534 
04535 #ifdef NEEDED
04536 /* Spin lock for emulation code */
04537 static int g_sl;
04538 #endif
04539 
04540 #endif /* USE_MALLOC_LOCK */
04541 
04542 /* getpagesize for windows */
04543 static long getpagesize (void) {
04544     static long g_pagesize = 0;
04545     if (! g_pagesize) {
04546         SYSTEM_INFO system_info;
04547         GetSystemInfo (&system_info);
04548         g_pagesize = system_info.dwPageSize;
04549     }
04550     return g_pagesize;
04551 }
04552 static long getregionsize (void) {
04553     static long g_regionsize = 0;
04554     if (! g_regionsize) {
04555         SYSTEM_INFO system_info;
04556         GetSystemInfo (&system_info);
04557         g_regionsize = system_info.dwAllocationGranularity;
04558     }
04559     return g_regionsize;
04560 }
04561 
04562 /* A region list entry */
04563 typedef struct _region_list_entry {
04564     void *top_allocated;
04565     void *top_committed;
04566     void *top_reserved;
04567     long reserve_size;
04568     struct _region_list_entry *previous;
04569 } region_list_entry;
04570 
04571 /* Allocate and link a region entry in the region list */
04572 static int region_list_append (region_list_entry **last, void *base_reserved, long reserve_size) {
04573     region_list_entry *next = HeapAlloc (GetProcessHeap (), 0, sizeof (region_list_entry));
04574     if (! next)
04575         return FALSE;
04576     next->top_allocated = (char *) base_reserved;
04577     next->top_committed = (char *) base_reserved;
04578     next->top_reserved = (char *) base_reserved + reserve_size;
04579     next->reserve_size = reserve_size;
04580     next->previous = *last;
04581     *last = next;
04582     return TRUE;
04583 }
04584 /* Free and unlink the last region entry from the region list */
04585 static int region_list_remove (region_list_entry **last) {
04586     region_list_entry *previous = (*last)->previous;
04587     if (! HeapFree (GetProcessHeap (), sizeof (region_list_entry), *last))
04588         return FALSE;
04589     *last = previous;
04590     return TRUE;
04591 }
04592 
04593 #define CEIL(size,to)       (((size)+(to)-1)&~((to)-1))
04594 #define FLOOR(size,to)      ((size)&~((to)-1))
04595 
04596 #define SBRK_SCALE  0
04597 /* #define SBRK_SCALE  1 */
04598 /* #define SBRK_SCALE  2 */
04599 /* #define SBRK_SCALE  4  */
04600 
04601 /* sbrk for windows */
04602 static void *sbrk (long size) {
04603     static long g_pagesize, g_my_pagesize;
04604     static long g_regionsize, g_my_regionsize;
04605     static region_list_entry *g_last;
04606     void *result = (void *) MORECORE_FAILURE;
04607 #ifdef TRACE
04608     printf ("sbrk %d\n", size);
04609 #endif
04610 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
04611     /* Wait for spin lock */
04612     slwait (&g_sl);
04613 #endif
04614     /* First time initialization */
04615     if (! g_pagesize) {
04616         g_pagesize = getpagesize ();
04617         g_my_pagesize = g_pagesize << SBRK_SCALE;
04618     }
04619     if (! g_regionsize) {
04620         g_regionsize = getregionsize ();
04621         g_my_regionsize = g_regionsize << SBRK_SCALE;
04622     }
04623     if (! g_last) {
04624         if (! region_list_append (&g_last, 0, 0)) 
04625            goto sbrk_exit;
04626     }
04627     /* Assert invariants */
04628     assert (g_last);
04629     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
04630             g_last->top_allocated <= g_last->top_committed);
04631     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
04632             g_last->top_committed <= g_last->top_reserved &&
04633             (unsigned) g_last->top_committed % g_pagesize == 0);
04634     assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
04635     assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
04636     /* Allocation requested? */
04637     if (size >= 0) {
04638         /* Allocation size is the requested size */
04639         long allocate_size = size;
04640         /* Compute the size to commit */
04641         long to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
04642         /* Do we reach the commit limit? */
04643         if (to_commit > 0) {
04644             /* Round size to commit */
04645             long commit_size = CEIL (to_commit, g_my_pagesize);
04646             /* Compute the size to reserve */
04647             long to_reserve = (char *) g_last->top_committed + commit_size - (char *) g_last->top_reserved;
04648             /* Do we reach the reserve limit? */
04649             if (to_reserve > 0) {
04650                 /* Compute the remaining size to commit in the current region */
04651                 long remaining_commit_size = (char *) g_last->top_reserved - (char *) g_last->top_committed;
04652                 if (remaining_commit_size > 0) {
04653                     /* Assert preconditions */
04654                     assert ((unsigned) g_last->top_committed % g_pagesize == 0);
04655                     assert (0 < remaining_commit_size && remaining_commit_size % g_pagesize == 0); {
04656                         /* Commit this */
04657                         void *base_committed = VirtualAlloc (g_last->top_committed, remaining_commit_size,
04658                                                                                   MEM_COMMIT, PAGE_READWRITE);
04659                         /* Check returned pointer for consistency */
04660                         if (base_committed != g_last->top_committed)
04661                             goto sbrk_exit;
04662                         /* Assert postconditions */
04663                         assert ((unsigned) base_committed % g_pagesize == 0);
04664 #ifdef TRACE
04665                         printf ("Commit %p %d\n", base_committed, remaining_commit_size);
04666 #endif
04667                         /* Adjust the regions commit top */
04668                         g_last->top_committed = (char *) base_committed + remaining_commit_size;
04669                     }
04670                 } {
04671                     /* Now we are going to search and reserve. */
04672                     int contiguous = -1;
04673                     int found = FALSE;
04674                     MEMORY_BASIC_INFORMATION memory_info;
04675                     void *base_reserved;
04676                     long reserve_size;
04677                     do {
04678                         /* Assume contiguous memory */
04679                         contiguous = TRUE;
04680                         /* Round size to reserve */
04681                         reserve_size = CEIL (to_reserve, g_my_regionsize);
04682                         /* Start with the current region's top */
04683                         memory_info.BaseAddress = g_last->top_reserved;
04684                         /* Assert preconditions */
04685                         assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
04686                         assert (0 < reserve_size && reserve_size % g_regionsize == 0);
04687                         while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
04688                             /* Assert postconditions */
04689                             assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
04690 #ifdef TRACE
04691                             printf ("Query %p %d %s\n", memory_info.BaseAddress, memory_info.RegionSize, 
04692                                     memory_info.State == MEM_FREE ? "FREE": 
04693                                     (memory_info.State == MEM_RESERVE ? "RESERVED":
04694                                      (memory_info.State == MEM_COMMIT ? "COMMITTED": "?")));
04695 #endif
04696                             /* Region is free, well aligned and big enough: we are done */
04697                             if (memory_info.State == MEM_FREE &&
04698                                 (unsigned) memory_info.BaseAddress % g_regionsize == 0 &&
04699                                 memory_info.RegionSize >= (unsigned) reserve_size) {
04700                                 found = TRUE;
04701                                 break;
04702                             }
04703                             /* From now on we can't get contiguous memory! */
04704                             contiguous = FALSE;
04705                             /* Recompute size to reserve */
04706                             reserve_size = CEIL (allocate_size, g_my_regionsize);
04707                             memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
04708                             /* Assert preconditions */
04709                             assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
04710                             assert (0 < reserve_size && reserve_size % g_regionsize == 0);
04711                         }
04712                         /* Search failed? */
04713                         if (! found) 
04714                             goto sbrk_exit;
04715                         /* Assert preconditions */
04716                         assert ((unsigned) memory_info.BaseAddress % g_regionsize == 0);
04717                         assert (0 < reserve_size && reserve_size % g_regionsize == 0);
04718                         /* Try to reserve this */
04719                         base_reserved = VirtualAlloc (memory_info.BaseAddress, reserve_size, 
04720                                                                      MEM_RESERVE, PAGE_NOACCESS);
04721                         if (! base_reserved) {
04722                             int rc = GetLastError ();
04723                             if (rc != ERROR_INVALID_ADDRESS) 
04724                                 goto sbrk_exit;
04725                         }
04726                         /* A null pointer signals (hopefully) a race condition with another thread. */
04727                         /* In this case, we try again. */
04728                     } while (! base_reserved);
04729                     /* Check returned pointer for consistency */
04730                     if (memory_info.BaseAddress && base_reserved != memory_info.BaseAddress)
04731                         goto sbrk_exit;
04732                     /* Assert postconditions */
04733                     assert ((unsigned) base_reserved % g_regionsize == 0);
04734 #ifdef TRACE
04735                     printf ("Reserve %p %d\n", base_reserved, reserve_size);
04736 #endif
04737                     /* Did we get contiguous memory? */
04738                     if (contiguous) {
04739                         long start_size = (char *) g_last->top_committed - (char *) g_last->top_allocated;
04740                         /* Adjust allocation size */
04741                         allocate_size -= start_size;
04742                         /* Adjust the regions allocation top */
04743                         g_last->top_allocated = g_last->top_committed;
04744                         /* Recompute the size to commit */
04745                         to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
04746                         /* Round size to commit */
04747                         commit_size = CEIL (to_commit, g_my_pagesize);
04748                     } 
04749                     /* Append the new region to the list */
04750                     if (! region_list_append (&g_last, base_reserved, reserve_size))
04751                         goto sbrk_exit;
04752                     /* Didn't we get contiguous memory? */
04753                     if (! contiguous) {
04754                         /* Recompute the size to commit */
04755                         to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
04756                         /* Round size to commit */
04757                         commit_size = CEIL (to_commit, g_my_pagesize);
04758                     }
04759                 }
04760             } 
04761             /* Assert preconditions */
04762             assert ((unsigned) g_last->top_committed % g_pagesize == 0);
04763             assert (0 < commit_size && commit_size % g_pagesize == 0); {
04764                 /* Commit this */
04765                 void *base_committed = VirtualAlloc (g_last->top_committed, commit_size, 
04766                                                                       MEM_COMMIT, PAGE_READWRITE);
04767                 /* Check returned pointer for consistency */
04768                 if (base_committed != g_last->top_committed)
04769                     goto sbrk_exit;
04770                 /* Assert postconditions */
04771                 assert ((unsigned) base_committed % g_pagesize == 0);
04772 #ifdef TRACE
04773                 printf ("Commit %p %d\n", base_committed, commit_size);
04774 #endif
04775                 /* Adjust the regions commit top */
04776                 g_last->top_committed = (char *) base_committed + commit_size;
04777             }
04778         } 
04779         /* Adjust the regions allocation top */
04780         g_last->top_allocated = (char *) g_last->top_allocated + allocate_size;
04781         result = (char *) g_last->top_allocated - size;
04782     /* Deallocation requested? */
04783     } else if (size < 0) {
04784         long deallocate_size = - size;
04785         /* As long as we have a region to release */
04786         while ((char *) g_last->top_allocated - deallocate_size < (char *) g_last->top_reserved - g_last->reserve_size) {
04787             /* Get the size to release */
04788             long release_size = g_last->reserve_size;
04789             /* Get the base address */
04790             void *base_reserved = (char *) g_last->top_reserved - release_size;
04791             /* Assert preconditions */
04792             assert ((unsigned) base_reserved % g_regionsize == 0); 
04793             assert (0 < release_size && release_size % g_regionsize == 0); {
04794                 /* Release this */
04795                 int rc = VirtualFree (base_reserved, 0, 
04796                                       MEM_RELEASE);
04797                 /* Check returned code for consistency */
04798                 if (! rc)
04799                     goto sbrk_exit;
04800 #ifdef TRACE
04801                 printf ("Release %p %d\n", base_reserved, release_size);
04802 #endif
04803             }
04804             /* Adjust deallocation size */
04805             deallocate_size -= (char *) g_last->top_allocated - (char *) base_reserved;
04806             /* Remove the old region from the list */
04807             if (! region_list_remove (&g_last))
04808                 goto sbrk_exit;
04809         } {
04810             /* Compute the size to decommit */
04811             long to_decommit = (char *) g_last->top_committed - ((char *) g_last->top_allocated - deallocate_size);
04812             if (to_decommit >= g_my_pagesize) {
04813                 /* Compute the size to decommit */
04814                 long decommit_size = FLOOR (to_decommit, g_my_pagesize);
04815                 /*  Compute the base address */
04816                 void *base_committed = (char *) g_last->top_committed - decommit_size;
04817                 /* Assert preconditions */
04818                 assert ((unsigned) base_committed % g_pagesize == 0);
04819                 assert (0 < decommit_size && decommit_size % g_pagesize == 0); {
04820                     /* Decommit this */
04821                     int rc = VirtualFree ((char *) base_committed, decommit_size, 
04822                                           MEM_DECOMMIT);
04823                     /* Check returned code for consistency */
04824                     if (! rc)
04825                         goto sbrk_exit;
04826 #ifdef TRACE
04827                     printf ("Decommit %p %d\n", base_committed, decommit_size);
04828 #endif
04829                 }
04830                 /* Adjust deallocation size and regions commit and allocate top */
04831                 deallocate_size -= (char *) g_last->top_allocated - (char *) base_committed;
04832                 g_last->top_committed = base_committed;
04833                 g_last->top_allocated = base_committed;
04834             }
04835         }
04836         /* Adjust regions allocate top */
04837         g_last->top_allocated = (char *) g_last->top_allocated - deallocate_size;
04838         /* Check for underflow */
04839         if ((char *) g_last->top_reserved - g_last->reserve_size > (char *) g_last->top_allocated ||
04840             g_last->top_allocated > g_last->top_committed) {
04841             /* Adjust regions allocate top */
04842             g_last->top_allocated = (char *) g_last->top_reserved - g_last->reserve_size;
04843             goto sbrk_exit;
04844         }
04845         result = g_last->top_allocated;
04846     }
04847     /* Assert invariants */
04848     assert (g_last);
04849     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
04850             g_last->top_allocated <= g_last->top_committed);
04851     assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
04852             g_last->top_committed <= g_last->top_reserved &&
04853             (unsigned) g_last->top_committed % g_pagesize == 0);
04854     assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
04855     assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
04856 
04857 sbrk_exit:
04858 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
04859     /* Release spin lock */
04860     slrelease (&g_sl);
04861 #endif
04862     return result;
04863 }
04864 
04865 /* mmap for windows */
04866 static void *mmap (void *ptr, long size, long prot, long type, long handle, long arg) {
04867     static long g_pagesize;
04868     static long g_regionsize;
04869 #ifdef TRACE
04870     printf ("mmap %d\n", size);
04871 #endif
04872 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
04873     /* Wait for spin lock */
04874     slwait (&g_sl);
04875 #endif
04876     /* First time initialization */
04877     if (! g_pagesize) 
04878         g_pagesize = getpagesize ();
04879     if (! g_regionsize) 
04880         g_regionsize = getregionsize ();
04881     /* Assert preconditions */
04882     assert ((unsigned) ptr % g_regionsize == 0);
04883     assert (size % g_pagesize == 0);
04884     /* Allocate this */
04885     ptr = VirtualAlloc (ptr, size,
04886                                        MEM_RESERVE | MEM_COMMIT | MEM_TOP_DOWN, PAGE_READWRITE);
04887     if (! ptr) {
04888         ptr = (void *) MORECORE_FAILURE;
04889         goto mmap_exit;
04890     }
04891     /* Assert postconditions */
04892     assert ((unsigned) ptr % g_regionsize == 0);
04893 #ifdef TRACE
04894     printf ("Commit %p %d\n", ptr, size);
04895 #endif
04896 mmap_exit:
04897 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
04898     /* Release spin lock */
04899     slrelease (&g_sl);
04900 #endif
04901     return ptr;
04902 }
04903 
04904 /* munmap for windows */
04905 static long munmap (void *ptr, long size) {
04906     static long g_pagesize;
04907     static long g_regionsize;
04908     int rc = MUNMAP_FAILURE;
04909 #ifdef TRACE
04910     printf ("munmap %p %d\n", ptr, size);
04911 #endif
04912 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
04913     /* Wait for spin lock */
04914     slwait (&g_sl);
04915 #endif
04916     /* First time initialization */
04917     if (! g_pagesize) 
04918         g_pagesize = getpagesize ();
04919     if (! g_regionsize) 
04920         g_regionsize = getregionsize ();
04921     /* Assert preconditions */
04922     assert ((unsigned) ptr % g_regionsize == 0);
04923     assert (size % g_pagesize == 0);
04924     /* Free this */
04925     if (! VirtualFree (ptr, 0, 
04926                        MEM_RELEASE))
04927         goto munmap_exit;
04928     rc = 0;
04929 #ifdef TRACE
04930     printf ("Release %p %d\n", ptr, size);
04931 #endif
04932 munmap_exit:
04933 #if defined (USE_MALLOC_LOCK) && defined (NEEDED)
04934     /* Release spin lock */
04935     slrelease (&g_sl);
04936 #endif
04937     return rc;
04938 }
04939 
04940 static void vminfo (unsigned long *free, unsigned long *reserved, unsigned long *committed) {
04941     MEMORY_BASIC_INFORMATION memory_info;
04942     memory_info.BaseAddress = 0;
04943     *free = *reserved = *committed = 0;
04944     while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
04945         switch (memory_info.State) {
04946         case MEM_FREE:
04947             *free += memory_info.RegionSize;
04948             break;
04949         case MEM_RESERVE:
04950             *reserved += memory_info.RegionSize;
04951             break;
04952         case MEM_COMMIT:
04953             *committed += memory_info.RegionSize;
04954             break;
04955         }
04956         memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
04957     }
04958 }
04959 
04960 static int cpuinfo (int whole, unsigned long *kernel, unsigned long *user) {
04961     if (whole) {
04962         __int64 creation64, exit64, kernel64, user64;
04963         int rc = GetProcessTimes (GetCurrentProcess (), 
04964                                   (FILETIME *) &creation64,  
04965                                   (FILETIME *) &exit64, 
04966                                   (FILETIME *) &kernel64, 
04967                                   (FILETIME *) &user64);
04968         if (! rc) {
04969             *kernel = 0;
04970             *user = 0;
04971             return FALSE;
04972         } 
04973         *kernel = (unsigned long) (kernel64 / 10000);
04974         *user = (unsigned long) (user64 / 10000);
04975         return TRUE;
04976     } else {
04977         __int64 creation64, exit64, kernel64, user64;
04978         int rc = GetThreadTimes (GetCurrentThread (), 
04979                                  (FILETIME *) &creation64,  
04980                                  (FILETIME *) &exit64, 
04981                                  (FILETIME *) &kernel64, 
04982                                  (FILETIME *) &user64);
04983         if (! rc) {
04984             *kernel = 0;
04985             *user = 0;
04986             return FALSE;
04987         } 
04988         *kernel = (unsigned long) (kernel64 / 10000);
04989         *user = (unsigned long) (user64 / 10000);
04990         return TRUE;
04991     }
04992 }
04993 
04994 #endif /* WIN32 */
04995 
04996 /*
04997 
04998 History:
04999 
05000     V2.7.0
05001       * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
05002       * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for 
05003         helping test this.)
05004       * memalign: check alignment arg
05005       * realloc: use memmove when regions may overlap.
05006       * Collect all cases in malloc requiring system memory into sYSMALLOc
05007       * Use mmap as backup to sbrk, if available; fold these mmap-related 
05008         operations into others.
05009       * Place all internal state in malloc_state
05010       * Introduce fastbins (although similar to 2.5.1)
05011       * Many minor tunings and cosmetic improvements
05012       * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK 
05013       * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
05014         Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
05015       * Adjust request2size to fit with MALLOC_FAILURE_ACTION.
05016       * Include errno.h to support default failure action.
05017       * Further improve WIN32 'sbrk()' emulation's 'findRegion()' routine
05018         to avoid infinite loop when allocating initial memory larger
05019         than RESERVED_SIZE and/or subsequent memory larger than
05020         NEXT_SIZE.  Thanks to Andreas Mueller <a.mueller at paradatec.de>
05021         for finding this one.
05022 
05023     V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
05024       * return null for negative arguments
05025       * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
05026          * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
05027           (e.g. WIN32 platforms)
05028          * Cleanup header file inclusion for WIN32 platforms
05029          * Cleanup code to avoid Microsoft Visual C++ compiler complaints
05030          * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
05031            memory allocation routines
05032          * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
05033          * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
05034            usage of 'assert' in non-WIN32 code
05035          * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
05036            avoid infinite loop
05037       * Always call 'fREe()' rather than 'free()'
05038 
05039     V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
05040       * Fixed ordering problem with boundary-stamping
05041 
05042     V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
05043       * Added pvalloc, as recommended by H.J. Liu
05044       * Added 64bit pointer support mainly from Wolfram Gloger
05045       * Added anonymously donated WIN32 sbrk emulation
05046       * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
05047       * malloc_extend_top: fix mask error that caused wastage after
05048         foreign sbrks
05049       * Add linux mremap support code from HJ Liu
05050 
05051     V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
05052       * Integrated most documentation with the code.
05053       * Add support for mmap, with help from
05054         Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
05055       * Use last_remainder in more cases.
05056       * Pack bins using idea from  colin@nyx10.cs.du.edu
05057       * Use ordered bins instead of best-fit threshhold
05058       * Eliminate block-local decls to simplify tracing and debugging.
05059       * Support another case of realloc via move into top
05060       * Fix error occuring when initial sbrk_base not word-aligned.
05061       * Rely on page size for units instead of SBRK_UNIT to
05062         avoid surprises about sbrk alignment conventions.
05063       * Add mallinfo, mallopt. Thanks to Raymond Nijssen
05064         (raymond@es.ele.tue.nl) for the suggestion.
05065       * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
05066       * More precautions for cases where other routines call sbrk,
05067         courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
05068       * Added macros etc., allowing use in linux libc from
05069         H.J. Lu (hjl@gnu.ai.mit.edu)
05070       * Inverted this history list
05071 
05072     V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
05073       * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
05074       * Removed all preallocation code since under current scheme
05075         the work required to undo bad preallocations exceeds
05076         the work saved in good cases for most test programs.
05077       * No longer use return list or unconsolidated bins since
05078         no scheme using them consistently outperforms those that don't
05079         given above changes.
05080       * Use best fit for very large chunks to prevent some worst-cases.
05081       * Added some support for debugging
05082 
05083     V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
05084       * Removed footers when chunks are in use. Thanks to
05085         Paul Wilson (wilson@cs.texas.edu) for the suggestion.
05086 
05087     V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
05088       * Added malloc_trim, with help from Wolfram Gloger
05089         (wmglo@Dent.MED.Uni-Muenchen.DE).
05090 
05091     V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
05092 
05093     V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
05094       * realloc: try to expand in both directions
05095       * malloc: swap order of clean-bin strategy;
05096       * realloc: only conditionally expand backwards
05097       * Try not to scavenge used bins
05098       * Use bin counts as a guide to preallocation
05099       * Occasionally bin return list chunks in first scan
05100       * Add a few optimizations from colin@nyx10.cs.du.edu
05101 
05102     V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
05103       * faster bin computation & slightly different binning
05104       * merged all consolidations to one part of malloc proper
05105          (eliminating old malloc_find_space & malloc_clean_bin)
05106       * Scan 2 returns chunks (not just 1)
05107       * Propagate failure in realloc if malloc returns 0
05108       * Add stuff to allow compilation on non-ANSI compilers
05109           from kpv@research.att.com
05110 
05111     V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
05112       * removed potential for odd address access in prev_chunk
05113       * removed dependency on getpagesize.h
05114       * misc cosmetics and a bit more internal documentation
05115       * anticosmetics: mangled names in macros to evade debugger strangeness
05116       * tested on sparc, hp-700, dec-mips, rs6000
05117           with gcc & native cc (hp, dec only) allowing
05118           Detlefs & Zorn comparison study (in SIGPLAN Notices.)
05119 
05120     Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
05121       * Based loosely on libg++-1.2X malloc. (It retains some of the overall
05122          structure of old version,  but most details differ.)
05123 
05124 */
05125 
05126