#if 0	/* Moved to malloc.h */
/* ---------- To make a malloc.h, start cutting here ------------ */

/*
  A version of malloc/free/realloc written by Doug Lea and released to the
  public domain.  Send questions/comments/complaints/performance data
  to dl@cs.oswego.edu

* VERSION 2.6.6  Sun Mar  5 19:10:03 2000  Doug Lea  (dl at gee)

   Note: There may be an updated version of this malloc obtainable at
           ftp://g.oswego.edu/pub/misc/malloc.c
         Check before installing!

* Why use this malloc?

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and tunable.
  Consistent balance across these factors results in a good general-purpose
  allocator. For a high-level description, see
     http://g.oswego.edu/dl/html/malloc.html

* Synopsis of public routines

  (Much fuller descriptions are contained in the program documentation below.)

  malloc(size_t n);
     Return a pointer to a newly allocated chunk of at least n bytes, or null
     if no space is available.
  free(Void_t* p);
     Release the chunk of memory pointed to by p, or no effect if p is null.
  realloc(Void_t* p, size_t n);
     Return a pointer to a chunk of size n that contains the same data
     as does chunk p up to the minimum of (n, p's size) bytes, or null
     if no space is available. The returned pointer may or may not be
     the same as p. If p is null, equivalent to malloc.  Unless the
     #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
     size argument of zero (re)allocates a minimum-sized chunk.
  memalign(size_t alignment, size_t n);
     Return a pointer to a newly allocated chunk of n bytes, aligned
     in accord with the alignment argument, which must be a power of
     two.
  valloc(size_t n);
     Equivalent to memalign(pagesize, n), where pagesize is the page
     size of the system (or as near to this as can be figured out from
     all the includes/defines below.)
  pvalloc(size_t n);
     Equivalent to valloc(minimum-page-that-holds(n)), that is,
     round up n to nearest pagesize.
  calloc(size_t unit, size_t quantity);
     Returns a pointer to quantity * unit bytes, with all locations
     set to zero.
  cfree(Void_t* p);
     Equivalent to free(p).
  malloc_trim(size_t pad);
     Release all but pad bytes of freed top-most memory back
     to the system. Return 1 if successful, else 0.
  malloc_usable_size(Void_t* p);
     Report the number of usable allocated bytes associated with allocated
     chunk p. This may or may not report more bytes than were requested,
     due to alignment and minimum size constraints.
  malloc_stats();
     Prints brief summary statistics.
  mallinfo()
     Returns (by copy) a struct containing various summary statistics.
  mallopt(int parameter_number, int parameter_value)
     Changes one of the tunable parameters described below. Returns
     1 if successful in changing the parameter, else 0.

* Vital statistics:

  Alignment:                            8-byte
       8 byte alignment is currently hardwired into the design.  This
       seems to suffice for all current machines and C compilers.

  Assumed pointer representation:       4 or 8 bytes
       Code for 8-byte pointers is untested by me but has been reported
       to work reliably by Wolfram Gloger, who contributed most of the
       changes supporting this.

  Assumed size_t  representation:       4 or 8 bytes
       Note that size_t is allowed to be 4 bytes even if pointers are 8.

  Minimum overhead per allocated chunk: 4 or 8 bytes
       Each malloced chunk has a hidden overhead of 4 bytes holding size
       and status information.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
                          8-byte ptrs:  24/32 bytes (including 4/8 overhead)

       When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
       ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
       needed; 4 (8) for a trailing size field
       and 8 (16) bytes for free list pointers. Thus, the minimum
       allocatable size is 16/24/32 bytes.

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.

  Maximum allocated size: 4-byte size_t: 2^31 -  8 bytes
                          8-byte size_t: 2^63 - 16 bytes

       It is assumed that (possibly signed) size_t bit values suffice to
       represent chunk sizes. `Possibly signed' is due to the fact
       that `size_t' may be defined on a system as either a signed or
       an unsigned type. To be conservative, values that would appear
       as negative numbers are avoided.
       Requests for sizes with a negative sign bit when the request
       size is treated as a long will return null.

  Maximum overhead wastage per allocated chunk: normally 15 bytes

       Alignment demands, plus the minimum allocatable size restriction
       make the normal worst-case wastage 15 bytes (i.e., up to 15
       more bytes will be allocated than were requested in malloc), with
       two exceptions:
         1. Because requests for zero bytes allocate non-zero space,
            the worst case wastage for a request of zero bytes is 24 bytes.
         2. For requests >= mmap_threshold that are serviced via
            mmap(), the worst case wastage is 8 bytes plus the remainder
            from a system page (the minimal mmap unit); typically 4096 bytes.

* Limitations

    Here are some features that are NOT currently supported

    * No user-definable hooks for callbacks and the like.
    * No automated mechanism for fully checking that all accesses
      to malloced memory stay within their bounds.
    * No support for compaction.

* Synopsis of compile-time options:

    People have reported using previous versions of this malloc on all
    versions of Unix, sometimes by tweaking some of the defines
    below. It has been tested most extensively on Solaris and
    Linux. It is also reported to work on WIN32 platforms.
    People have also reported adapting this malloc for use in
    stand-alone embedded systems.

    The implementation is in straight, hand-tuned ANSI C.  Among other
    consequences, it uses a lot of macros.  Because of this, to be at
    all usable, this code should be compiled using an optimizing compiler
    (for example gcc -O2) that can simplify expressions and control
    paths.

  __STD_C                  (default: derived from C compiler defines)
     Nonzero if using ANSI-standard C compiler, a C++ compiler, or
     a C compiler sufficiently close to ANSI to get away with it.
  DEBUG                    (default: NOT defined)
     Define to enable debugging. Adds fairly extensive assertion-based
     checking to help track down memory errors, but noticeably slows down
     execution.
  REALLOC_ZERO_BYTES_FREES (default: NOT defined)
     Define this if you think that realloc(p, 0) should be equivalent
     to free(p). Otherwise, since malloc returns a unique pointer for
     malloc(0), so does realloc(p, 0).
  HAVE_MEMCPY               (default: defined)
     Define if you are not otherwise using ANSI STD C, but still
     have memcpy and memset in your C library and want to use them.
     Otherwise, simple internal versions are supplied.
  USE_MEMCPY               (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
     Define as 1 if you want the C library versions of memset and
     memcpy called in realloc and calloc (otherwise macro versions are used).
     At least on some platforms, the simple macro versions usually
     outperform libc versions.
  HAVE_MMAP                 (default: defined as 1)
     Define to non-zero to optionally make malloc() use mmap() to
     allocate very large blocks.
  HAVE_MREMAP                 (default: defined as 0 unless Linux libc set)
     Define to non-zero to optionally make realloc() use mremap() to
     reallocate very large blocks.
  malloc_getpagesize        (default: derived from system #includes)
     Either a constant or routine call returning the system page size.
  HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
     Optionally define if you are on a system with a /usr/include/malloc.h
     that declares struct mallinfo. It is not at all necessary to
     define this even if you do, but will ensure consistency.
  INTERNAL_SIZE_T           (default: size_t)
     Define to a 32-bit type (probably `unsigned int') if you are on a
     64-bit machine, yet do not want or need to allow malloc requests of
     greater than 2^31 to be handled. This saves space, especially for
     very small chunks.
  INTERNAL_LINUX_C_LIB      (default: NOT defined)
     Defined only when compiled as part of Linux libc.
     Also note that there is some odd internal name-mangling via defines
     (for example, internally, `malloc' is named `mALLOc') needed
     when compiling in this case. These look funny but don't otherwise
     affect anything.
  WIN32                     (default: undefined)
     Define this on MS win (95, nt) platforms to compile in sbrk emulation.
  LACKS_UNISTD_H            (default: undefined if not WIN32)
     Define this if your system does not have a <unistd.h>.
  LACKS_SYS_PARAM_H         (default: undefined if not WIN32)
     Define this if your system does not have a <sys/param.h>.
  MORECORE                  (default: sbrk)
     The name of the routine to call to obtain more memory from the system.
  MORECORE_FAILURE          (default: -1)
     The value returned upon failure of MORECORE.
  MORECORE_CLEARS           (default 1)
     True (1) if the routine mapped to MORECORE zeroes out memory (which
     holds for sbrk).
  DEFAULT_TRIM_THRESHOLD
  DEFAULT_TOP_PAD
  DEFAULT_MMAP_THRESHOLD
  DEFAULT_MMAP_MAX
     Default values of tunable parameters (described in detail below)
     controlling interaction with host system routines (sbrk, mmap, etc).
     These values may also be changed dynamically via mallopt(). The
     preset defaults are those that give best performance for typical
     programs/systems.
  USE_DL_PREFIX             (default: undefined)
     Prefix all public routines with the string 'dl'.  Useful to
     quickly avoid procedure declaration conflicts and linker symbol
     conflicts with existing memory allocation routines.


*/
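
/*
  For illustration only: a minimal sketch of how the routines in the
  synopsis above are typically driven.  The calls are exactly those
  documented there; the function name example_usage is hypothetical.
*/
#if 0	/* illustrative sketch, kept out of the build */
#include <string.h>

static void example_usage(void)
{
  char *p = malloc(100);           /* at least 100 usable bytes, or null */
  void *a = memalign(64, 256);     /* 256 bytes on a 64-byte boundary */

  if (p != NULL) {
    char *q;
    strcpy(p, "hello");
    q = realloc(p, 200);           /* may move; data copied up to min size */
    if (q != NULL)
      p = q;
  }
  free(p);                         /* free(null) has no effect */
  free(a);
}
#endif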




/* Preliminaries */

#ifndef __STD_C
#ifdef __STDC__
#define __STD_C     1
#else
#if __cplusplus
#define __STD_C     1
#else
#define __STD_C     0
#endif /*__cplusplus*/
#endif /*__STDC__*/
#endif /*__STD_C*/

#ifndef Void_t
#if (__STD_C || defined(WIN32))
#define Void_t      void
#else
#define Void_t      char
#endif
#endif /*Void_t*/

#if __STD_C
#include <stddef.h>   /* for size_t */
#else
#include <sys/types.h>
#endif

#ifdef __cplusplus
extern "C" {
#endif

#include <stdio.h>    /* needed for malloc_stats */


/*
  Compile-time options
*/


/*
    Debugging:

    Because freed chunks may be overwritten with link fields, this
    malloc will often die when freed memory is overwritten by user
    programs.  This can be very effective (albeit in an annoying way)
    in helping track down dangling pointers.

    If you compile with -DDEBUG, a number of assertion checks are
    enabled that will catch more memory errors. You probably won't be
    able to make much sense of the actual assertion errors, but they
    should help you locate incorrectly overwritten memory.  The
    checking is fairly extensive, and will slow down execution
    noticeably. Calling malloc_stats or mallinfo with DEBUG set will
    attempt to check every non-mmapped allocated and free chunk in the
    course of computing the summaries. (By nature, mmapped regions
    cannot be checked very much automatically.)

    Setting DEBUG may also be helpful if you are trying to modify
    this code. The assertions in the check routines spell out in more
    detail the assumptions and invariants underlying the algorithms.

*/

#ifdef DEBUG
#include <assert.h>
#else
#define assert(x) ((void)0)
#endif
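
/*
  A hypothetical sketch of the kind of error the DEBUG checks are for:
  the overwrite below clobbers the boundary tags of neighboring chunks,
  so with -DDEBUG a later malloc/free/malloc_stats call will usually
  trip an assertion near the corruption rather than failing mysteriously
  much later.
*/
#if 0	/* illustrative sketch, kept out of the build */
static void corrupting_caller(void)
{
  char *p = malloc(16);
  if (p)
    memset(p, 0xAA, 64);  /* writes far past the 16 requested bytes */
  free(p);                /* with DEBUG set, an assertion should fire here */
}
#endif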


/*
  INTERNAL_SIZE_T is the word-size used for internal bookkeeping
  of chunk sizes. On a 64-bit machine, you can reduce malloc
  overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
  at the expense of not being able to handle requests greater than
  2^31. This limitation is hardly ever a concern; you are encouraged
  to set this. However, the default version is the same as size_t.
*/

#ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif

/*
  REALLOC_ZERO_BYTES_FREES should be set if a call to
  realloc with zero bytes should be the same as a call to free.
  Some people think it should. Otherwise, since this malloc
  returns a unique pointer for malloc(0), so does realloc(p, 0).
*/


/*   #define REALLOC_ZERO_BYTES_FREES */
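
/*
  The difference is observable as in this sketch (names hypothetical):
*/
#if 0	/* illustrative sketch, kept out of the build */
static void realloc_zero_demo(void)
{
  void *p = malloc(10);
  void *q = realloc(p, 0);
  /* Default: q is a valid minimum-sized chunk and must still be freed.
     With REALLOC_ZERO_BYTES_FREES: p has been freed and q is null. */
  free(q);
}
#endif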


/*
  WIN32 causes an emulation of sbrk to be compiled in.
  mmap-based options are not currently supported in WIN32.
*/

/* #define WIN32 */
#ifdef WIN32
#define MORECORE wsbrk
#define HAVE_MMAP 0

#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H

/*
  Include 'windows.h' to get the necessary declarations for the
  Microsoft Visual C++ data structures and routines used in the 'sbrk'
  emulation.

  Define WIN32_LEAN_AND_MEAN so that only the essential Microsoft
  Visual C++ header files are included.
*/
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#endif


/*
  HAVE_MEMCPY should be defined if you are not otherwise using
  ANSI STD C, but still have memcpy and memset in your C library
  and want to use them in calloc and realloc. Otherwise simple
  macro versions are defined here.

  USE_MEMCPY should be defined as 1 if you actually want to
  have memset and memcpy called. People report that the macro
  versions are often enough faster than libc versions on many
  systems that it is better to use them.

*/

#define HAVE_MEMCPY

#ifndef USE_MEMCPY
#ifdef HAVE_MEMCPY
#define USE_MEMCPY 1
#else
#define USE_MEMCPY 0
#endif
#endif

#if (__STD_C || defined(HAVE_MEMCPY))

#if __STD_C
void* memset(void*, int, size_t);
void* memcpy(void*, const void*, size_t);
#else
#ifdef WIN32
/* On Win32 platforms, 'memset()' and 'memcpy()' are already declared in
   'windows.h' */
#else
Void_t* memset();
Void_t* memcpy();
#endif
#endif
#endif

#if USE_MEMCPY

/* The following macros are only invoked with (2n+1)-multiples of
   INTERNAL_SIZE_T units, with a positive integer n. This is exploited
   for fast inline execution when n is small. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T mzsz = (nbytes);                                            \
  if(mzsz <= 9*sizeof(mzsz)) {                                                \
    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                         \
    if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
      if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0;                               \
                                     *mz++ = 0;                               \
        if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                               \
                                     *mz++ = 0; }}}                           \
                                     *mz++ = 0;                               \
                                     *mz++ = 0;                               \
                                     *mz   = 0;                               \
  } else memset((charp), 0, mzsz);                                            \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T mcsz = (nbytes);                                            \
  if(mcsz <= 9*sizeof(mcsz)) {                                                \
    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                        \
    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                       \
    if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
      if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
        if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++; }}}                 \
                                     *mcdst++ = *mcsrc++;                     \
                                     *mcdst++ = *mcsrc++;                     \
                                     *mcdst   = *mcsrc  ;                     \
  } else memcpy(dest, src, mcsz);                                             \
} while(0)

#else /* !USE_MEMCPY */

/* Use Duff's device for good zeroing/copying performance. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mzp++ = 0;                                             \
    case 7:           *mzp++ = 0;                                             \
    case 6:           *mzp++ = 0;                                             \
    case 5:           *mzp++ = 0;                                             \
    case 4:           *mzp++ = 0;                                             \
    case 3:           *mzp++ = 0;                                             \
    case 2:           *mzp++ = 0;                                             \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
  }                                                                           \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
    case 7:           *mcdst++ = *mcsrc++;                                    \
    case 6:           *mcdst++ = *mcsrc++;                                    \
    case 5:           *mcdst++ = *mcsrc++;                                    \
    case 4:           *mcdst++ = *mcsrc++;                                    \
    case 3:           *mcdst++ = *mcsrc++;                                    \
    case 2:           *mcdst++ = *mcsrc++;                                    \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
  }                                                                           \
} while(0)

#endif
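
/*
  Either variant is invoked the same way by calloc and realloc below;
  a sketch (remember the callers guarantee that nbytes is an odd
  multiple of sizeof(INTERNAL_SIZE_T)):
*/
#if 0	/* illustrative sketch, kept out of the build */
static void zero_copy_demo(void)
{
  INTERNAL_SIZE_T src[7];
  INTERNAL_SIZE_T dst[7];

  MALLOC_ZERO((char*)src, 5 * sizeof(INTERNAL_SIZE_T));             /* n = 2 */
  MALLOC_COPY((char*)dst, (char*)src, 7 * sizeof(INTERNAL_SIZE_T)); /* n = 3 */
}
#endif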


/*
  Define HAVE_MMAP to optionally make malloc() use mmap() to
  allocate very large blocks.  These will be returned to the
  operating system immediately after a free().
*/

#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif

/*
  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
  large blocks.  This is currently only possible on Linux with
  kernel versions newer than 1.3.77.
*/

#ifndef HAVE_MREMAP
#ifdef INTERNAL_LINUX_C_LIB
#define HAVE_MREMAP 1
#else
#define HAVE_MREMAP 0
#endif
#endif

#if HAVE_MMAP

#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif

#endif /* HAVE_MMAP */

/*
  Access to system page size. To the extent possible, this malloc
  manages memory from the system in page-size units.

  The following mechanics for getpagesize were adapted from
  bsd/gnu getpagesize.h
*/

#ifndef LACKS_UNISTD_H
#  include <unistd.h>
#endif

#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32
#        define malloc_getpagesize (4096) /* TBD: Use 'GetSystemInfo' instead */
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else
#                define malloc_getpagesize (4096) /* just guess */
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif
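
/*
  Whichever branch of the cascade above is taken, the rest of the code
  can use malloc_getpagesize uniformly, e.g. (sketch):
*/
#if 0	/* illustrative sketch, kept out of the build */
static void show_pagesize(void)
{
  size_t pagesize = malloc_getpagesize;  /* constant or sysconf() call */
  printf("page size: %lu\n", (unsigned long)pagesize);
}
#endif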



/*

  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing the same kind of
  information you can get from malloc_stats. It should work on
  any SVID/XPG compliant system that has a /usr/include/malloc.h
  defining struct mallinfo. (If you'd like to install such a thing
  yourself, cut out the preliminary declarations as described above
  and below and save them in a malloc.h file. But there's no
  compelling reason to bother to do this.)

  The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo().  The SVID/XPG mallinfo struct contains a
  bunch of fields, most of which are not even meaningful in this
  version of malloc. Some of these fields are instead filled by
  mallinfo() with other numbers that might possibly be of interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo.  If so, it is included; else an SVID2/XPG2 compliant
  version is declared below.  These must be precisely the same for
  mallinfo() to work.

*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#if HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else

/* SVID2/XPG mallinfo structure */

struct mallinfo {
  int arena;    /* total space allocated from system */
  int ordblks;  /* number of non-inuse chunks */
  int smblks;   /* unused -- always zero */
  int hblks;    /* number of mmapped regions */
  int hblkhd;   /* total space in mmapped regions */
  int usmblks;  /* unused -- always zero */
  int fsmblks;  /* unused -- always zero */
  int uordblks; /* total allocated space */
  int fordblks; /* total non-inuse space */
  int keepcost; /* top-most, releasable (via malloc_trim) space */
};

/* SVID2/XPG mallopt options */

#define M_MXFAST  1    /* UNUSED in this malloc */
#define M_NLBLKS  2    /* UNUSED in this malloc */
#define M_GRAIN   3    /* UNUSED in this malloc */
#define M_KEEP    4    /* UNUSED in this malloc */

#endif

/* mallopt options that actually do something */

#define M_TRIM_THRESHOLD    -1
#define M_TOP_PAD           -2
#define M_MMAP_THRESHOLD    -3
#define M_MMAP_MAX          -4



#ifndef DEFAULT_TRIM_THRESHOLD
#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
#endif

/*
    M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
      to keep before releasing via malloc_trim in free().

      Automatic trimming is mainly useful in long-lived programs.
      Because trimming via sbrk can be slow on some systems, and can
      sometimes be wasteful (in cases where programs immediately
      afterward allocate more large chunks) the value should be high
      enough so that your overall system performance would improve by
      releasing.

      The trim threshold and the mmap control parameters (see below)
      can be traded off with one another. Trimming and mmapping are
      two different ways of releasing unused memory back to the
      system. Between these two, it is often possible to keep
      system-level demands of a long-lived program down to a bare
      minimum. For example, in one test suite of sessions measuring
      the XF86 X server on Linux, using a trim threshold of 128K and a
      mmap threshold of 192K led to near-minimal long term resource
      consumption.

      If you are using this malloc in a long-lived program, it should
      pay to experiment with these values.  As a rough guide, you
      might set it to a value close to the average size of a process
      (program) running on your system.  Releasing this much memory
      would allow such a process to run in memory.  Generally, it's
      worth it to tune for trimming rather than memory mapping when a
      program undergoes phases where several large chunks are
      allocated and released in ways that can reuse each other's
      storage, perhaps mixed with phases where there are no such
      chunks at all.  And in well-behaved long-lived programs,
      controlling release of large blocks via trimming versus mapping
      is usually faster.

      However, in most programs, these parameters serve mainly as
      protection against the system-level effects of carrying around
      massive amounts of unneeded memory. Since frequent calls to
      sbrk, mmap, and munmap otherwise degrade performance, the default
      parameters are set to relatively high values that serve only as
      safeguards.

      The default trim value is high enough to cause trimming only in
      fairly extreme (by current memory consumption standards) cases.
      It must be greater than page size to have any useful effect.  To
      disable trimming completely, you can set it to (unsigned long)(-1).


*/


#ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD        (0)
#endif

/*
    M_TOP_PAD is the amount of extra `padding' space to allocate or
      retain whenever sbrk is called. It is used in two ways internally:

      * When sbrk is called to extend the top of the arena to satisfy
        a new malloc request, this much padding is added to the sbrk
        request.

      * When malloc_trim is called automatically from free(),
        it is used as the `pad' argument.

      In both cases, the actual amount of padding is rounded
      so that the end of the arena is always a system page boundary.

      The main reason for using padding is to avoid calling sbrk so
      often. Having even a small pad greatly reduces the likelihood
      that nearly every malloc request during program start-up (or
      after trimming) will invoke sbrk, which needlessly wastes
      time.

      Automatic rounding-up to page-size units is normally sufficient
      to avoid measurable overhead, so the default is 0.  However, in
      systems where sbrk is relatively slow, it can pay to increase
      this value, at the expense of carrying around more memory than
      the program needs.

*/


#ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
#endif

/*

    M_MMAP_THRESHOLD is the request size threshold for using mmap()
      to service a request. Requests of at least this size that cannot
      be allocated using already-existing space will be serviced via mmap.
      (If enough normal freed space already exists it is used instead.)

      Using mmap segregates relatively large chunks of memory so that
      they can be individually obtained and released from the host
      system. A request serviced through mmap is never reused by any
      other request (at least not directly; the system may just so
      happen to remap successive requests to the same locations).

      Segregating space in this way has the benefit that mmapped space
      can ALWAYS be individually released back to the system, which
      helps keep the system level memory demands of a long-lived
      program low. Mapped memory can never become `locked' between
      other chunks, as can happen with normally allocated chunks, which
      means that even trimming via malloc_trim would not release them.

      However, it has the disadvantages that:

         1. The space cannot be reclaimed, consolidated, and then
            used to service later requests, as happens with normal chunks.
         2. It can lead to more wastage because of mmap page alignment
            requirements.
         3. It causes malloc performance to be more dependent on host
            system memory management support routines which may vary in
            implementation quality and may impose arbitrary
            limitations. Generally, servicing a request via normal
            malloc steps is faster than going through a system's mmap.

      All together, these considerations should lead you to use mmap
      only for relatively large requests.


*/



#ifndef DEFAULT_MMAP_MAX
#if HAVE_MMAP
#define DEFAULT_MMAP_MAX       (64)
#else
#define DEFAULT_MMAP_MAX       (0)
#endif
#endif

/*
    M_MMAP_MAX is the maximum number of requests to simultaneously
      service using mmap. This parameter exists because:

         1. Some systems have a limited number of internal tables for
            use by mmap.
         2. In most systems, overreliance on mmap can degrade overall
            performance.
         3. If a program allocates many large regions, it is probably
            better off using normal sbrk-based allocation routines that
            can reclaim and reallocate normal heap memory. Using a
            small value allows transition into this mode after the
            first few allocations.

      Setting to 0 disables all use of mmap.  If HAVE_MMAP is not set,
      the default value is 0, and attempts to set it to non-zero values
      in mallopt will fail.
*/
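
/*
  A sketch of tuning all four parameters at run time via mallopt();
  the values shown are arbitrary examples, not recommendations:
*/
#if 0	/* illustrative sketch, kept out of the build */
static void tune_allocator(void)
{
  mallopt(M_TRIM_THRESHOLD, 256 * 1024);  /* trim top less eagerly */
  mallopt(M_TOP_PAD,         64 * 1024);  /* over-ask sbrk by 64K */
  mallopt(M_MMAP_THRESHOLD, 512 * 1024);  /* mmap only very large requests */
  mallopt(M_MMAP_MAX,        16);         /* at most 16 mmapped chunks */
}
#endif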




/*
    USE_DL_PREFIX will prefix all public routines with the string 'dl'.
      Useful to quickly avoid procedure declaration conflicts and linker
      symbol conflicts with existing memory allocation routines.

*/

/* #define USE_DL_PREFIX */




/*

  Special defines for linux libc

  Except when compiled using these special defines for Linux libc
  using weak aliases, this malloc is NOT designed to work in
  multithreaded applications.  No semaphores or other concurrency
  control are provided to ensure that multiple malloc or free calls
  don't run at the same time, which could be disastrous. A single
  semaphore could be used across malloc, realloc, and free (which is
  essentially the effect of the linux weak alias approach). It would
  be hard to obtain finer granularity.

*/


#ifdef INTERNAL_LINUX_C_LIB

#if __STD_C

Void_t * __default_morecore_init (ptrdiff_t);
Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;

#else

Void_t * __default_morecore_init ();
Void_t *(*__morecore)() = __default_morecore_init;

#endif

#define MORECORE (*__morecore)
#define MORECORE_FAILURE 0
#define MORECORE_CLEARS 1

#else /* INTERNAL_LINUX_C_LIB */

#if __STD_C
extern Void_t*     sbrk(ptrdiff_t);
#else
extern Void_t*     sbrk();
#endif

#ifndef MORECORE
#define MORECORE sbrk
#endif

#ifndef MORECORE_FAILURE
#define MORECORE_FAILURE -1
#endif

#ifndef MORECORE_CLEARS
#define MORECORE_CLEARS 1
#endif

#endif /* INTERNAL_LINUX_C_LIB */
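
/*
  On a bare-metal target, MORECORE is typically pointed at a trivial
  sbrk over a fixed arena.  A minimal sketch, assuming a hypothetical
  arena reserved by the board code:
*/
#if 0	/* illustrative sketch, kept out of the build */
static char  example_arena[64 * 1024];   /* hypothetical heap region */
static char *example_brk = example_arena;

Void_t *example_sbrk(ptrdiff_t increment)
{
  char *old = example_brk;

  if (example_brk + increment < example_arena ||
      example_brk + increment > example_arena + sizeof(example_arena))
    return (Void_t *)MORECORE_FAILURE;   /* out of arena */
  example_brk += increment;
  return (Void_t *)old;                  /* previous break, like sbrk() */
}
/* built with e.g.:  -DMORECORE=example_sbrk -DMORECORE_CLEARS=0 */
#endif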

#if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)

#define cALLOc		__libc_calloc
#define fREe		__libc_free
#define mALLOc		__libc_malloc
#define mEMALIGn	__libc_memalign
#define rEALLOc		__libc_realloc
#define vALLOc		__libc_valloc
#define pvALLOc		__libc_pvalloc
#define mALLINFo	__libc_mallinfo
#define mALLOPt		__libc_mallopt

#pragma weak calloc = __libc_calloc
#pragma weak free = __libc_free
#pragma weak cfree = __libc_free
#pragma weak malloc = __libc_malloc
#pragma weak memalign = __libc_memalign
#pragma weak realloc = __libc_realloc
#pragma weak valloc = __libc_valloc
#pragma weak pvalloc = __libc_pvalloc
#pragma weak mallinfo = __libc_mallinfo
#pragma weak mallopt = __libc_mallopt

#else

#ifdef USE_DL_PREFIX
#define cALLOc		dlcalloc
#define fREe		dlfree
#define mALLOc		dlmalloc
#define mEMALIGn	dlmemalign
#define rEALLOc		dlrealloc
#define vALLOc		dlvalloc
#define pvALLOc		dlpvalloc
#define mALLINFo	dlmallinfo
#define mALLOPt		dlmallopt
#else /* USE_DL_PREFIX */
#define cALLOc		calloc
#define fREe		free
#define mALLOc		malloc
#define mEMALIGn	memalign
#define rEALLOc		realloc
#define vALLOc		valloc
#define pvALLOc		pvalloc
#define mALLINFo	mallinfo
#define mALLOPt		mallopt
#endif /* USE_DL_PREFIX */

#endif
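
/*
  With USE_DL_PREFIX defined, this allocator can coexist with the
  system's; each pointer must be released by the allocator that
  produced it.  A sketch:
*/
#if 0	/* illustrative sketch, kept out of the build */
static void two_allocators(void)
{
  void *a = malloc(32);      /* system allocator */
  void *b = dlmalloc(32);    /* this allocator */
  dlfree(b);
  free(a);
}
#endif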

/* Public routines */

#if __STD_C

Void_t* mALLOc(size_t);
void    fREe(Void_t*);
Void_t* rEALLOc(Void_t*, size_t);
Void_t* mEMALIGn(size_t, size_t);
Void_t* vALLOc(size_t);
Void_t* pvALLOc(size_t);
Void_t* cALLOc(size_t, size_t);
void    cfree(Void_t*);
int     malloc_trim(size_t);
size_t  malloc_usable_size(Void_t*);
void    malloc_stats();
int     mALLOPt(int, int);
struct mallinfo mALLINFo(void);
#else
Void_t* mALLOc();
void    fREe();
Void_t* rEALLOc();
Void_t* mEMALIGn();
Void_t* vALLOc();
Void_t* pvALLOc();
Void_t* cALLOc();
void    cfree();
int     malloc_trim();
size_t  malloc_usable_size();
void    malloc_stats();
int     mALLOPt();
struct mallinfo mALLINFo();
#endif


#ifdef __cplusplus
}  /* end of extern "C" */
#endif

/* ---------- To make a malloc.h, end cutting here ------------ */
#else				/* Moved to malloc.h */

#include <malloc.h>
#if 0
#if __STD_C
static void malloc_update_mallinfo (void);
void malloc_stats (void);
#else
static void malloc_update_mallinfo ();
void malloc_stats();
#endif
#endif	/* 0 */

#endif	/* 0 */			/* Moved to malloc.h */
#include <common.h>

/*
  Emulation of sbrk for WIN32
  All code within the ifdef WIN32 is untested by me.

  Thanks to Martin Fong and others for supplying this.
*/


#ifdef WIN32

#define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
~(malloc_getpagesize-1))
#define AlignPage64K(add) (((add) + (0x10000 - 1)) & ~(0x10000 - 1))

/* reserve 64MB to ensure large contiguous space */
#define RESERVED_SIZE (1024*1024*64)
#define NEXT_SIZE (2048*1024)
#define TOP_MEMORY ((unsigned long)2*1024*1024*1024)

struct GmListElement;
typedef struct GmListElement GmListElement;

struct GmListElement
{
	GmListElement* next;
	void* base;
};

static GmListElement* head = 0;
static unsigned int gNextAddress = 0;
static unsigned int gAddressBase = 0;
static unsigned int gAllocatedSize = 0;

static
GmListElement* makeGmListElement (void* bas)
{
	GmListElement* this;
	this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
	assert (this);
	if (this)
	{
		this->base = bas;
		this->next = head;
		head = this;
	}
	return this;
}

void gcleanup ()
{
	BOOL rval;
	assert ( (head == NULL) || (head->base == (void*)gAddressBase));
	if (gAddressBase && (gNextAddress - gAddressBase))
	{
		rval = VirtualFree ((void*)gAddressBase,
							gNextAddress - gAddressBase,
							MEM_DECOMMIT);
		assert (rval);
	}
	while (head)
	{
		GmListElement* next = head->next;
		rval = VirtualFree (head->base, 0, MEM_RELEASE);
		assert (rval);
		LocalFree (head);
		head = next;
	}
}

static
void* findRegion (void* start_address, unsigned long size)
{
	MEMORY_BASIC_INFORMATION info;
	if (size >= TOP_MEMORY) return NULL;

	while ((unsigned long)start_address + size < TOP_MEMORY)
	{
		VirtualQuery (start_address, &info, sizeof (info));
		if ((info.State == MEM_FREE) && (info.RegionSize >= size))
			return start_address;
		else
		{
			/* Requested region is not available so see if the
			   next region is available.  Set 'start_address'
			   to the next region and call 'VirtualQuery()'
			   again. */

			start_address = (char*)info.BaseAddress + info.RegionSize;

			/* Make sure we start looking for the next region
			   on the *next* 64K boundary.  Otherwise, even if
			   the new region is free according to
			   'VirtualQuery()', the subsequent call to
			   'VirtualAlloc()' (which follows the call to
			   this routine in 'wsbrk()') will round *down*
			   the requested address to a 64K boundary which
			   we already know is an address in the
			   unavailable region.  Thus, the subsequent call
			   to 'VirtualAlloc()' will fail and bring us back
			   here, causing us to go into an infinite loop. */

			start_address =
				(void *) AlignPage64K((unsigned long) start_address);
		}
	}
	return NULL;

}


void* wsbrk (long size)
{
	void* tmp;
	if (size > 0)
	{
		if (gAddressBase == 0)
		{
			gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
			gNextAddress = gAddressBase =
				(unsigned int)VirtualAlloc (NULL, gAllocatedSize,
											MEM_RESERVE, PAGE_NOACCESS);
		} else if (AlignPage (gNextAddress + size) > (gAddressBase +
gAllocatedSize))
		{
			long new_size = max (NEXT_SIZE, AlignPage (size));
			void* new_address = (void*)(gAddressBase+gAllocatedSize);
			do
			{
				new_address = findRegion (new_address, new_size);

				if (new_address == 0)
					return (void*)-1;

				gAddressBase = gNextAddress =
					(unsigned int)VirtualAlloc (new_address, new_size,
												MEM_RESERVE, PAGE_NOACCESS);
				/* repeat in case of race condition:
				   the region that we found has been snagged
				   by another thread */
			}
			while (gAddressBase == 0);

			assert (new_address == (void*)gAddressBase);

			gAllocatedSize = new_size;

			if (!makeGmListElement ((void*)gAddressBase))
				return (void*)-1;
		}
		if ((size + gNextAddress) > AlignPage (gNextAddress))
		{
			void* res;
			res = VirtualAlloc ((void*)AlignPage (gNextAddress),
								(size + gNextAddress -
								 AlignPage (gNextAddress)),
								MEM_COMMIT, PAGE_READWRITE);
			if (res == 0)
				return (void*)-1;
		}
		tmp = (void*)gNextAddress;
		gNextAddress = (unsigned int)tmp + size;
		return tmp;
	}
	else if (size < 0)
	{
		unsigned int alignedGoal = AlignPage (gNextAddress + size);
		/* Trim by releasing the virtual memory */
		if (alignedGoal >= gAddressBase)
		{
			VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
						 MEM_DECOMMIT);
			gNextAddress = gNextAddress + size;
			return (void*)gNextAddress;
		}
		else
		{
			VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
						 MEM_DECOMMIT);
			gNextAddress = gAddressBase;
			return (void*)-1;
		}
	}
	else
	{
		return (void*)gNextAddress;
	}
}

#endif



/*
  Type declarations
*/


struct malloc_chunk
{
  INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
  INTERNAL_SIZE_T size;      /* Size in bytes, including overhead. */
  struct malloc_chunk* fd;   /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk* mchunkptr;

/*

   malloc_chunk details:

    (The following includes lightly edited explanations by Colin Plumb.)

    Chunks of memory are maintained using a `boundary tag' method as
    described in e.g., Knuth or Standish.  (See the paper by Paul
    Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
    survey of such techniques.)  Sizes of free chunks are stored both
    in the front of each chunk and at the end.  This makes
    consolidating fragmented chunks into bigger chunks very fast.  The
    size fields also hold bits representing whether chunks are free or
    in use.

    An allocated chunk looks like this:


    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk, if allocated            | |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             User data starts here...                          .
            .                                                               .
            .             (malloc_usable_space() bytes)                     .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk                                     |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+


    Where "chunk" is the front of the chunk for the purpose of most of
    the malloc code, but "mem" is the pointer that is returned to the
    user.  "Nextchunk" is the beginning of the next contiguous chunk.

    Chunks always begin on even word boundaries, so the mem portion
    (which is returned to the user) is also on an even word boundary, and
    thus double-word aligned.

    Free chunks are stored in circular doubly-linked lists, and look like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    The P (PREV_INUSE) bit, stored in the unused low-order bit of the
    chunk size (which is always a multiple of two words), is an in-use
    bit for the *previous* chunk.  If that bit is *clear*, then the
    word before the current chunk size contains the previous chunk
    size, and can be used to find the front of the previous chunk.
    (The very first chunk allocated always has this bit set,
    preventing access to non-existent (or non-owned) memory.)

    Note that the `foot' of the current chunk is actually represented
    as the prev_size of the NEXT chunk. (This makes it easier to
    deal with alignments etc).

    The two exceptions to all this are

     1. The special chunk `top', which doesn't bother using the
        trailing size field since there is no
        next contiguous chunk that would have to index off it. (After
        initialization, `top' is forced to always exist.  If it would
        become less than MINSIZE bytes long, it is replenished via
        malloc_extend_top.)

     2. Chunks allocated via mmap, which have the second-lowest-order
        bit (IS_MMAPPED) set in their size fields.  Because they are
        never merged or traversed from any other chunk, they have no
        foot size or inuse information.

    Available chunks are kept in any of several places (all declared below):

    * `av': An array of chunks serving as bin headers for consolidated
       chunks. Each bin is doubly linked.  The bins are approximately
       proportionally (log) spaced.  There are a lot of these bins
       (128). This may look excessive, but works very well in
       practice.  All procedures maintain the invariant that no
       consolidated chunk physically borders another one. Chunks in
       bins are kept in size order, with ties going to the
       approximately least recently used chunk.

       The chunks in each bin are maintained in decreasing sorted order by
       size.  This is irrelevant for the small bins, which all contain
       the same-sized chunks, but facilitates best-fit allocation for
       larger chunks. (These lists are just sequential. Keeping them in
       order almost never requires enough traversal to warrant using
       fancier ordered data structures.)  Chunks of the same size are
       linked with the most recently freed at the front, and allocations
       are taken from the back.  This results in LRU or FIFO allocation
       order, which tends to give each chunk an equal opportunity to be
       consolidated with adjacent freed chunks, resulting in larger free
       chunks and less fragmentation.

    * `top': The top-most available chunk (i.e., the one bordering the
       end of available memory) is treated specially. It is never
       included in any bin, is used only if no other chunk is
       available, and is released back to the system if it is very
       large (see M_TRIM_THRESHOLD).

    * `last_remainder': A bin holding only the remainder of the
       most recently split (non-top) chunk. This bin is checked
       before other non-fitting chunks, so as to provide better
       locality for runs of sequentially allocated chunks.

    *  Implicitly, through the host system's memory mapping tables.
       If supported, requests greater than a threshold are usually
       serviced via calls to mmap, and then later released via munmap.

*/






/*  sizes, alignments */

#define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
#define MALLOC_ALIGNMENT       (SIZE_SZ + SIZE_SZ)
#define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
#define MINSIZE                (sizeof(struct malloc_chunk))

/* conversion from malloc headers to user pointers, and back */

#define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))

/* pad request bytes into a usable size */

#define request2size(req) \
 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
  (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
   (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))

/* Check if m has acceptable alignment */

#define aligned_OK(m)    (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
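
/*
  A worked sketch of the conversions above, assuming 4-byte
  INTERNAL_SIZE_T (so SIZE_SZ == 4, MALLOC_ALIGNMENT == 8 and
  MINSIZE == 16):

     request2size(0)  -> 16   (MINSIZE; even malloc(0) gets a real chunk)
     request2size(5)  -> 16   (5 + 4 overhead, rounded up to 16)
     request2size(13) -> 24   (13 + 4 = 17, rounded up to 24)

  while chunk2mem/mem2chunk just hop over the two leading size fields:
*/
#if 0	/* illustrative sketch, kept out of the build */
static void header_demo(mchunkptr p)
{
  Void_t *mem = chunk2mem(p);        /* what the user sees */
  mchunkptr back = mem2chunk(mem);   /* recover the chunk header */

  assert(back == p);
  assert(aligned_OK(mem));
}
#endif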




/*
  Physical chunk operations
*/


/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */

#define PREV_INUSE 0x1

/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */

#define IS_MMAPPED 0x2

/* Bits to mask off when extracting size */

#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)


/* Ptr to next physical malloc_chunk. */

#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))

/* Ptr to previous physical malloc_chunk */

#define prev_chunk(p)\
   ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))


/* Treat space at ptr + offset as a chunk */

#define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))




/*
  Dealing with use bits
*/

/* extract p's inuse bit */

#define inuse(p)\
((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)

/* extract inuse bit of previous chunk */

#define prev_inuse(p)  ((p)->size & PREV_INUSE)

/* check for mmap()'ed chunk */

#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)

/* set/clear chunk as in use without otherwise disturbing */

#define set_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE

#define clear_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)

/* check/set/clear inuse bits in known places */

#define inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)

#define set_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)

#define clear_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))




/*
  Dealing with size fields
*/

/* Get size, ignoring use bits */

#define chunksize(p)          ((p)->size & ~(SIZE_BITS))

/* Set size at head, without disturbing its use bit */

#define set_head_size(p, s)   ((p)->size = (((p)->size & PREV_INUSE) | (s)))

/* Set size/use ignoring previous bits in header */

#define set_head(p, s)        ((p)->size = (s))

/* Set size at footer (only when chunk is not in use) */

#define set_foot(p, s)   (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
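
/*
  Taken together, these macros maintain the boundary tags.  A toy
  sketch that carves a single free chunk out of a scratch buffer and
  exercises the head/foot/use-bit macros (buffer and sizes are
  hypothetical):
*/
#if 0	/* illustrative sketch, kept out of the build */
static void boundary_tag_demo(void)
{
  static INTERNAL_SIZE_T scratch[16];   /* aligned scratch space */
  mchunkptr p = (mchunkptr)scratch;

  set_head(p, 32 | PREV_INUSE);  /* 32-byte chunk; previous chunk in use */
  set_foot(p, 32);               /* mirror the size at the chunk's foot */
  assert(chunksize(p) == 32);    /* use bits masked off */

  set_inuse(p);                  /* the bit lives in the NEXT chunk's head */
  assert(inuse(p));
  clear_inuse(p);
  assert(!inuse(p));
}
#endif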
1414 
1415 
1416 
1417 
1418 
1419 /*
1420    Bins
1421 
1422     The bins, `av_' are an array of pairs of pointers serving as the
1423     heads of (initially empty) doubly-linked lists of chunks, laid out
1424     in a way so that each pair can be treated as if it were in a
1425     malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1426     and chunks are the same).
1427 
1428     Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1429     8 bytes apart. Larger bins are approximately logarithmically
1430     spaced. (See the table below.) The `av_' array is never mentioned
1431     directly in the code, but instead via bin access macros.
1432 
1433     Bin layout:
1434 
1435     64 bins of size       8
1436     32 bins of size      64
1437     16 bins of size     512
1438      8 bins of size    4096
1439      4 bins of size   32768
1440      2 bins of size  262144
1441      1 bin  of size what's left
1442 
1443     There is actually a little bit of slop in the numbers in bin_index
1444     for the sake of speed. This makes no difference elsewhere.
1445 
1446     The special chunks `top' and `last_remainder' get their own bins,
1447     (this is implemented via yet more trickery with the av_ array),
1448     although `top' is never properly linked to its bin since it is
1449     always handled specially.
1450 
1451 */
1452 
1453 #define NAV             128   /* number of bins */
1454 
1455 typedef struct malloc_chunk* mbinptr;
1456 
1457 /* access macros */
1458 
1459 #define bin_at(i)      ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
1460 #define next_bin(b)    ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1461 #define prev_bin(b)    ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1462 
1463 /*
1464    The first 2 bins are never indexed. The corresponding av_ cells are instead
1465    used for bookkeeping. This is not to save space, but to simplify
1466    indexing, maintain locality, and avoid some initialization tests.
1467 */
1468 
1469 #define top            (bin_at(0)->fd)   /* The topmost chunk */
1470 #define last_remainder (bin_at(1))       /* remainder from last split */
1471 
1472 
1473 /*
1474    Because top initially points to its own bin with initial
1475    zero size, thus forcing extension on the first malloc request,
1476    we avoid having any special code in malloc to check whether
   it even exists yet. But we still need to check for this in
   malloc_extend_top.
1478 */
1479 
1480 #define initial_top    ((mchunkptr)(bin_at(0)))
1481 
1482 /* Helper macro to initialize bins */
1483 
1484 #define IAV(i)  bin_at(i), bin_at(i)
1485 
1486 static mbinptr av_[NAV * 2 + 2] = {
1487  0, 0,
1488  IAV(0),   IAV(1),   IAV(2),   IAV(3),   IAV(4),   IAV(5),   IAV(6),   IAV(7),
1489  IAV(8),   IAV(9),   IAV(10),  IAV(11),  IAV(12),  IAV(13),  IAV(14),  IAV(15),
1490  IAV(16),  IAV(17),  IAV(18),  IAV(19),  IAV(20),  IAV(21),  IAV(22),  IAV(23),
1491  IAV(24),  IAV(25),  IAV(26),  IAV(27),  IAV(28),  IAV(29),  IAV(30),  IAV(31),
1492  IAV(32),  IAV(33),  IAV(34),  IAV(35),  IAV(36),  IAV(37),  IAV(38),  IAV(39),
1493  IAV(40),  IAV(41),  IAV(42),  IAV(43),  IAV(44),  IAV(45),  IAV(46),  IAV(47),
1494  IAV(48),  IAV(49),  IAV(50),  IAV(51),  IAV(52),  IAV(53),  IAV(54),  IAV(55),
1495  IAV(56),  IAV(57),  IAV(58),  IAV(59),  IAV(60),  IAV(61),  IAV(62),  IAV(63),
1496  IAV(64),  IAV(65),  IAV(66),  IAV(67),  IAV(68),  IAV(69),  IAV(70),  IAV(71),
1497  IAV(72),  IAV(73),  IAV(74),  IAV(75),  IAV(76),  IAV(77),  IAV(78),  IAV(79),
1498  IAV(80),  IAV(81),  IAV(82),  IAV(83),  IAV(84),  IAV(85),  IAV(86),  IAV(87),
1499  IAV(88),  IAV(89),  IAV(90),  IAV(91),  IAV(92),  IAV(93),  IAV(94),  IAV(95),
1500  IAV(96),  IAV(97),  IAV(98),  IAV(99),  IAV(100), IAV(101), IAV(102), IAV(103),
1501  IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1502  IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1503  IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1504 };
1505 
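/*
  U-Boot note: the image first runs from flash and later relocates
  itself into RAM, so the link-time pointer values in the statically
  initialized av_[] above become stale.  This helper walks the array
  and adds the relocation offset (gd->reloc_off) to each bin pointer.
  The first two cells are skipped: they overlay the prev_size/size
  fields of bin 0 (the size word doubles as the `binblocks' bitvector
  defined below) and hold no pointers.
*/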
1506 void malloc_bin_reloc (void)
1507 {
1508 	DECLARE_GLOBAL_DATA_PTR;
1509 
1510 	unsigned long *p = (unsigned long *)(&av_[2]);
1511 	int i;
1512 	for (i=2; i<(sizeof(av_)/sizeof(mbinptr)); ++i) {
1513 		*p++ += gd->reloc_off;
1514 	}
1515 }
1516 
1517 
1518 /* field-extraction macros */
1519 
1520 #define first(b) ((b)->fd)
1521 #define last(b)  ((b)->bk)
1522 
1523 /*
1524   Indexing into bins
1525 */
1526 
1527 #define bin_index(sz)                                                          \
1528 (((((unsigned long)(sz)) >> 9) ==    0) ?       (((unsigned long)(sz)) >>  3): \
1529  ((((unsigned long)(sz)) >> 9) <=    4) ?  56 + (((unsigned long)(sz)) >>  6): \
1530  ((((unsigned long)(sz)) >> 9) <=   20) ?  91 + (((unsigned long)(sz)) >>  9): \
1531  ((((unsigned long)(sz)) >> 9) <=   84) ? 110 + (((unsigned long)(sz)) >> 12): \
1532  ((((unsigned long)(sz)) >> 9) <=  340) ? 119 + (((unsigned long)(sz)) >> 15): \
1533  ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
1534                                           126)
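
/*
  A few worked values of bin_index under the layout above.  This is
  an illustrative self-check, kept disabled like the other #if 0
  blocks; it is not part of the allocator.
*/
#if 0
static void bin_index_examples(void)
{
  assert(bin_index(40)    ==   5);   /* small bin: 40 >> 3 */
  assert(bin_index(504)   ==  63);   /* last 8-byte-spaced small bin */
  assert(bin_index(512)   ==  64);   /* 56 + (512 >> 6) */
  assert(bin_index(1024)  ==  72);   /* 56 + (1024 >> 6) */
  assert(bin_index(10240) == 111);   /* 91 + (10240 >> 9) */
}
#endif	/* 0 */
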
1535 /*
1536   bins for chunks < 512 are all spaced 8 bytes apart, and hold
1537   identically sized chunks. This is exploited in malloc.
1538 */
1539 
1540 #define MAX_SMALLBIN         63
1541 #define MAX_SMALLBIN_SIZE   512
1542 #define SMALLBIN_WIDTH        8
1543 
1544 #define smallbin_index(sz)  (((unsigned long)(sz)) >> 3)
1545 
1546 /*
1547    Requests are `small' if both the corresponding and the next bin are small
1548 */
1549 
#define is_small_request(nb) ((nb) < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
1551 
1552 
1553 
1554 /*
1555     To help compensate for the large number of bins, a one-level index
1556     structure is used for bin-by-bin searching.  `binblocks' is a
1557     one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1558     have any (possibly) non-empty bins, so they can be skipped over
1559     all at once during during traversals. The bits are NOT always
    all at once during traversals. The bits are NOT always
1561     when all are noticed to be empty during traversal in malloc.
1562 */
1563 
1564 #define BINBLOCKWIDTH     4   /* bins per block */
1565 
1566 #define binblocks      (bin_at(0)->size) /* bitvector of nonempty blocks */
1567 
1568 /* bin<->block macros */
1569 
#define idx2binblock(ix)    ((unsigned)1 << ((ix) / BINBLOCKWIDTH))
1571 #define mark_binblock(ii)   (binblocks |= idx2binblock(ii))
1572 #define clear_binblock(ii)  (binblocks &= ~(idx2binblock(ii)))
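
/*
  For example, with BINBLOCKWIDTH == 4, bins 0-3 share bit 0 of
  binblocks, bins 4-7 share bit 1, and so on: idx2binblock(10)
  evaluates to ((unsigned)1 << 2).
*/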
1573 
1574 
1575 
1576 
1577 
1578 /*  Other static bookkeeping data */
1579 
1580 /* variables holding tunable values */
1581 
1582 static unsigned long trim_threshold   = DEFAULT_TRIM_THRESHOLD;
1583 static unsigned long top_pad          = DEFAULT_TOP_PAD;
1584 static unsigned int  n_mmaps_max      = DEFAULT_MMAP_MAX;
1585 static unsigned long mmap_threshold   = DEFAULT_MMAP_THRESHOLD;
1586 
1587 /* The first value returned from sbrk */
1588 static char* sbrk_base = (char*)(-1);
1589 
1590 /* The maximum memory obtained from system via sbrk */
1591 static unsigned long max_sbrked_mem = 0;
1592 
1593 /* The maximum via either sbrk or mmap */
1594 static unsigned long max_total_mem = 0;
1595 
1596 /* internal working copy of mallinfo */
1597 static struct mallinfo current_mallinfo = {  0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
1598 
1599 /* The total memory obtained from system via sbrk */
1600 #define sbrked_mem  (current_mallinfo.arena)
1601 
1602 /* Tracking mmaps */
1603 
#if HAVE_MMAP
static unsigned int n_mmaps = 0;
#endif	/* HAVE_MMAP */
1607 static unsigned long mmapped_mem = 0;
1608 #if HAVE_MMAP
1609 static unsigned int max_n_mmaps = 0;
1610 static unsigned long max_mmapped_mem = 0;
1611 #endif
1612 
1613 
1614 
1615 /*
1616   Debugging support
1617 */
1618 
1619 #ifdef DEBUG
1620 
1621 
1622 /*
1623   These routines make a number of assertions about the states
1624   of data structures that should be true at all times. If any
1625   are not true, it's very likely that a user program has somehow
1626   trashed memory. (It's also possible that there is a coding error
1627   in malloc. In which case, please report it!)
1628 */
1629 
1630 #if __STD_C
1631 static void do_check_chunk(mchunkptr p)
1632 #else
1633 static void do_check_chunk(p) mchunkptr p;
1634 #endif
1635 {
1636 #if 0	/* causes warnings because assert() is off */
1637   INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1638 #endif	/* 0 */
1639 
1640   /* No checkable chunk is mmapped */
1641   assert(!chunk_is_mmapped(p));
1642 
1643   /* Check for legal address ... */
1644   assert((char*)p >= sbrk_base);
1645   if (p != top)
1646     assert((char*)p + sz <= (char*)top);
1647   else
1648     assert((char*)p + sz <= sbrk_base + sbrked_mem);
1649 
1650 }
1651 
1652 
1653 #if __STD_C
1654 static void do_check_free_chunk(mchunkptr p)
1655 #else
1656 static void do_check_free_chunk(p) mchunkptr p;
1657 #endif
1658 {
1659   INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1660 #if 0	/* causes warnings because assert() is off */
1661   mchunkptr next = chunk_at_offset(p, sz);
1662 #endif	/* 0 */
1663 
1664   do_check_chunk(p);
1665 
1666   /* Check whether it claims to be free ... */
1667   assert(!inuse(p));
1668 
1669   /* Unless a special marker, must have OK fields */
1670   if ((long)sz >= (long)MINSIZE)
1671   {
1672     assert((sz & MALLOC_ALIGN_MASK) == 0);
1673     assert(aligned_OK(chunk2mem(p)));
1674     /* ... matching footer field */
1675     assert(next->prev_size == sz);
1676     /* ... and is fully consolidated */
1677     assert(prev_inuse(p));
1678     assert (next == top || inuse(next));
1679 
1680     /* ... and has minimally sane links */
1681     assert(p->fd->bk == p);
1682     assert(p->bk->fd == p);
1683   }
1684   else /* markers are always of size SIZE_SZ */
1685     assert(sz == SIZE_SZ);
1686 }
1687 
1688 #if __STD_C
1689 static void do_check_inuse_chunk(mchunkptr p)
1690 #else
1691 static void do_check_inuse_chunk(p) mchunkptr p;
1692 #endif
1693 {
1694   mchunkptr next = next_chunk(p);
1695   do_check_chunk(p);
1696 
1697   /* Check whether it claims to be in use ... */
1698   assert(inuse(p));
1699 
1700   /* ... and is surrounded by OK chunks.
1701     Since more things can be checked with free chunks than inuse ones,
1702     if an inuse chunk borders them and debug is on, it's worth doing them.
1703   */
1704   if (!prev_inuse(p))
1705   {
1706     mchunkptr prv = prev_chunk(p);
1707     assert(next_chunk(prv) == p);
1708     do_check_free_chunk(prv);
1709   }
1710   if (next == top)
1711   {
1712     assert(prev_inuse(next));
1713     assert(chunksize(next) >= MINSIZE);
1714   }
1715   else if (!inuse(next))
1716     do_check_free_chunk(next);
1717 
1718 }
1719 
1720 #if __STD_C
1721 static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
1722 #else
1723 static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
1724 #endif
1725 {
1726 #if 0	/* causes warnings because assert() is off */
1727   INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1728   long room = sz - s;
1729 #endif	/* 0 */
1730 
1731   do_check_inuse_chunk(p);
1732 
1733   /* Legal size ... */
1734   assert((long)sz >= (long)MINSIZE);
1735   assert((sz & MALLOC_ALIGN_MASK) == 0);
1736   assert(room >= 0);
1737   assert(room < (long)MINSIZE);
1738 
1739   /* ... and alignment */
1740   assert(aligned_OK(chunk2mem(p)));
1741 
1742 
1743   /* ... and was allocated at front of an available chunk */
1744   assert(prev_inuse(p));
1745 
1746 }
1747 
1748 
1749 #define check_free_chunk(P)  do_check_free_chunk(P)
1750 #define check_inuse_chunk(P) do_check_inuse_chunk(P)
1751 #define check_chunk(P) do_check_chunk(P)
1752 #define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
1753 #else
1754 #define check_free_chunk(P)
1755 #define check_inuse_chunk(P)
1756 #define check_chunk(P)
1757 #define check_malloced_chunk(P,N)
1758 #endif
1759 
1760 
1761 
1762 /*
1763   Macro-based internal utilities
1764 */
1765 
1766 
1767 /*
1768   Linking chunks in bin lists.
1769   Call these only with variables, not arbitrary expressions, as arguments.
1770 */
1771 
1772 /*
1773   Place chunk p of size s in its bin, in size order,
1774   putting it ahead of others of same size.
1775 */
1776 
1777 
1778 #define frontlink(P, S, IDX, BK, FD)                                          \
1779 {                                                                             \
1780   if (S < MAX_SMALLBIN_SIZE)                                                  \
1781   {                                                                           \
1782     IDX = smallbin_index(S);                                                  \
1783     mark_binblock(IDX);                                                       \
1784     BK = bin_at(IDX);                                                         \
1785     FD = BK->fd;                                                              \
1786     P->bk = BK;                                                               \
1787     P->fd = FD;                                                               \
1788     FD->bk = BK->fd = P;                                                      \
1789   }                                                                           \
1790   else                                                                        \
1791   {                                                                           \
1792     IDX = bin_index(S);                                                       \
1793     BK = bin_at(IDX);                                                         \
1794     FD = BK->fd;                                                              \
1795     if (FD == BK) mark_binblock(IDX);                                         \
1796     else                                                                      \
1797     {                                                                         \
1798       while (FD != BK && S < chunksize(FD)) FD = FD->fd;                      \
1799       BK = FD->bk;                                                            \
1800     }                                                                         \
1801     P->bk = BK;                                                               \
1802     P->fd = FD;                                                               \
1803     FD->bk = BK->fd = P;                                                      \
1804   }                                                                           \
1805 }
1806 
1807 
1808 /* take a chunk off a list */
1809 
1810 #define unlink(P, BK, FD)                                                     \
1811 {                                                                             \
1812   BK = P->bk;                                                                 \
1813   FD = P->fd;                                                                 \
1814   FD->bk = BK;                                                                \
1815   BK->fd = FD;                                                                \
}
1817 
1818 /* Place p as the last remainder */
1819 
1820 #define link_last_remainder(P)                                                \
1821 {                                                                             \
1822   last_remainder->fd = last_remainder->bk =  P;                               \
1823   P->fd = P->bk = last_remainder;                                             \
1824 }
1825 
1826 /* Clear the last_remainder bin */
1827 
1828 #define clear_last_remainder \
1829   (last_remainder->fd = last_remainder->bk = last_remainder)
1830 
1831 
1832 
1833 
1834 
1835 
1836 /* Routines dealing with mmap(). */
1837 
1838 #if HAVE_MMAP
1839 
1840 #if __STD_C
1841 static mchunkptr mmap_chunk(size_t size)
1842 #else
1843 static mchunkptr mmap_chunk(size) size_t size;
1844 #endif
1845 {
1846   size_t page_mask = malloc_getpagesize - 1;
1847   mchunkptr p;
1848 
1849 #ifndef MAP_ANONYMOUS
1850   static int fd = -1;
1851 #endif
1852 
1853   if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
1854 
1855   /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
1856    * there is no following chunk whose prev_size field could be used.
1857    */
1858   size = (size + SIZE_SZ + page_mask) & ~page_mask;
1859 
1860 #ifdef MAP_ANONYMOUS
1861   p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
1862 		      MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
1863 #else /* !MAP_ANONYMOUS */
1864   if (fd < 0)
1865   {
1866     fd = open("/dev/zero", O_RDWR);
1867     if(fd < 0) return 0;
1868   }
1869   p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
1870 #endif
1871 
1872   if(p == (mchunkptr)-1) return 0;
1873 
1874   n_mmaps++;
1875   if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1876 
  /* We demand that the address eight bytes into a page be 8-byte aligned. */
1878   assert(aligned_OK(chunk2mem(p)));
1879 
1880   /* The offset to the start of the mmapped region is stored
1881    * in the prev_size field of the chunk; normally it is zero,
1882    * but that can be changed in memalign().
1883    */
1884   p->prev_size = 0;
1885   set_head(p, size|IS_MMAPPED);
1886 
1887   mmapped_mem += size;
1888   if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1889     max_mmapped_mem = mmapped_mem;
1890   if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1891     max_total_mem = mmapped_mem + sbrked_mem;
1892   return p;
1893 }
1894 
1895 #if __STD_C
1896 static void munmap_chunk(mchunkptr p)
1897 #else
1898 static void munmap_chunk(p) mchunkptr p;
1899 #endif
1900 {
1901   INTERNAL_SIZE_T size = chunksize(p);
1902   int ret;
1903 
1904   assert (chunk_is_mmapped(p));
1905   assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1906   assert((n_mmaps > 0));
1907   assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
1908 
1909   n_mmaps--;
1910   mmapped_mem -= (size + p->prev_size);
1911 
1912   ret = munmap((char *)p - p->prev_size, size + p->prev_size);
1913 
1914   /* munmap returns non-zero on failure */
1915   assert(ret == 0);
1916 }
1917 
1918 #if HAVE_MREMAP
1919 
1920 #if __STD_C
1921 static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
1922 #else
1923 static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
1924 #endif
1925 {
1926   size_t page_mask = malloc_getpagesize - 1;
1927   INTERNAL_SIZE_T offset = p->prev_size;
1928   INTERNAL_SIZE_T size = chunksize(p);
1929   char *cp;
1930 
1931   assert (chunk_is_mmapped(p));
1932   assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1933   assert((n_mmaps > 0));
1934   assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1935 
1936   /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
1937   new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
1938 
1939   cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
1940 
1941   if (cp == (char *)-1) return 0;
1942 
1943   p = (mchunkptr)(cp + offset);
1944 
1945   assert(aligned_OK(chunk2mem(p)));
1946 
1947   assert((p->prev_size == offset));
1948   set_head(p, (new_size - offset)|IS_MMAPPED);
1949 
1950   mmapped_mem -= size + offset;
1951   mmapped_mem += new_size;
1952   if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1953     max_mmapped_mem = mmapped_mem;
1954   if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1955     max_total_mem = mmapped_mem + sbrked_mem;
1956   return p;
1957 }
1958 
1959 #endif /* HAVE_MREMAP */
1960 
1961 #endif /* HAVE_MMAP */
1962 
1963 
1964 
1965 
1966 /*
1967   Extend the top-most chunk by obtaining memory from system.
1968   Main interface to sbrk (but see also malloc_trim).
1969 */
1970 
1971 #if __STD_C
1972 static void malloc_extend_top(INTERNAL_SIZE_T nb)
1973 #else
1974 static void malloc_extend_top(nb) INTERNAL_SIZE_T nb;
1975 #endif
1976 {
1977   char*     brk;                  /* return value from sbrk */
1978   INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
1979   INTERNAL_SIZE_T correction;     /* bytes for 2nd sbrk call */
1980   char*     new_brk;              /* return of 2nd sbrk call */
1981   INTERNAL_SIZE_T top_size;       /* new size of top chunk */
1982 
1983   mchunkptr old_top     = top;  /* Record state of old top */
1984   INTERNAL_SIZE_T old_top_size = chunksize(old_top);
1985   char*     old_end      = (char*)(chunk_at_offset(old_top, old_top_size));
1986 
1987   /* Pad request with top_pad plus minimal overhead */
1988 
1989   INTERNAL_SIZE_T    sbrk_size     = nb + top_pad + MINSIZE;
1990   unsigned long pagesz    = malloc_getpagesize;
1991 
1992   /* If not the first time through, round to preserve page boundary */
1993   /* Otherwise, we need to correct to a page size below anyway. */
  /* (We also correct below if an intervening foreign sbrk call occurred.) */
1995 
1996   if (sbrk_base != (char*)(-1))
1997     sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
1998 
1999   brk = (char*)(MORECORE (sbrk_size));
2000 
2001   /* Fail if sbrk failed or if a foreign sbrk call killed our space */
2002   if (brk == (char*)(MORECORE_FAILURE) ||
2003       (brk < old_end && old_top != initial_top))
2004     return;
2005 
2006   sbrked_mem += sbrk_size;
2007 
2008   if (brk == old_end) /* can just add bytes to current top */
2009   {
2010     top_size = sbrk_size + old_top_size;
2011     set_head(top, top_size | PREV_INUSE);
2012   }
2013   else
2014   {
2015     if (sbrk_base == (char*)(-1))  /* First time through. Record base */
2016       sbrk_base = brk;
2017     else  /* Someone else called sbrk().  Count those bytes as sbrked_mem. */
2018       sbrked_mem += brk - (char*)old_end;
2019 
2020     /* Guarantee alignment of first new chunk made from this space */
2021     front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
2022     if (front_misalign > 0)
2023     {
2024       correction = (MALLOC_ALIGNMENT) - front_misalign;
2025       brk += correction;
2026     }
2027     else
2028       correction = 0;
2029 
2030     /* Guarantee the next brk will be at a page boundary */
2031 
2032     correction += ((((unsigned long)(brk + sbrk_size))+(pagesz-1)) &
2033                    ~(pagesz - 1)) - ((unsigned long)(brk + sbrk_size));
2034 
2035     /* Allocate correction */
2036     new_brk = (char*)(MORECORE (correction));
2037     if (new_brk == (char*)(MORECORE_FAILURE)) return;
2038 
2039     sbrked_mem += correction;
2040 
2041     top = (mchunkptr)brk;
2042     top_size = new_brk - brk + correction;
2043     set_head(top, top_size | PREV_INUSE);
2044 
2045     if (old_top != initial_top)
2046     {
2047 
2048       /* There must have been an intervening foreign sbrk call. */
2049       /* A double fencepost is necessary to prevent consolidation */
2050 
2051       /* If not enough space to do this, then user did something very wrong */
2052       if (old_top_size < MINSIZE)
2053       {
2054         set_head(top, PREV_INUSE); /* will force null return from malloc */
2055         return;
2056       }
2057 
2058       /* Also keep size a multiple of MALLOC_ALIGNMENT */
2059       old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
2060       set_head_size(old_top, old_top_size);
2061       chunk_at_offset(old_top, old_top_size          )->size =
2062         SIZE_SZ|PREV_INUSE;
2063       chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
2064         SIZE_SZ|PREV_INUSE;
2065       /* If possible, release the rest. */
2066       if (old_top_size >= MINSIZE)
2067         fREe(chunk2mem(old_top));
2068     }
2069   }
2070 
2071   if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
2072     max_sbrked_mem = sbrked_mem;
2073   if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
2074     max_total_mem = mmapped_mem + sbrked_mem;
2075 
2076   /* We always land on a page boundary */
2077   assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
2078 }
2079 
2080 
2081 
2082 
2083 /* Main public routines */
2084 
2085 
2086 /*
  Malloc Algorithm:
2088 
2089     The requested size is first converted into a usable form, `nb'.
2090     This currently means to add 4 bytes overhead plus possibly more to
2091     obtain 8-byte alignment and/or to obtain a size of at least
2092     MINSIZE (currently 16 bytes), the smallest allocatable size.
2093     (All fits are considered `exact' if they are within MINSIZE bytes.)
2094 
    From there, the first of the following steps that succeeds is taken:
2096 
2097       1. The bin corresponding to the request size is scanned, and if
2098          a chunk of exactly the right size is found, it is taken.
2099 
2100       2. The most recently remaindered chunk is used if it is big
2101          enough.  This is a form of (roving) first fit, used only in
2102          the absence of exact fits. Runs of consecutive requests use
2103          the remainder of the chunk used for the previous such request
2104          whenever possible. This limited use of a first-fit style
2105          allocation strategy tends to give contiguous chunks
2106          coextensive lifetimes, which improves locality and can reduce
2107          fragmentation in the long run.
2108 
2109       3. Other bins are scanned in increasing size order, using a
2110          chunk big enough to fulfill the request, and splitting off
2111          any remainder.  This search is strictly by best-fit; i.e.,
2112          the smallest (with ties going to approximately the least
2113          recently used) chunk that fits is selected.
2114 
2115       4. If large enough, the chunk bordering the end of memory
2116          (`top') is split off. (This use of `top' is in accord with
2117          the best-fit search rule.  In effect, `top' is treated as
2118          larger (and thus less well fitting) than any other available
2119          chunk since it can be extended to be as large as necessary
         (up to system limitations).)
2121 
2122       5. If the request size meets the mmap threshold and the
2123          system supports mmap, and there are few enough currently
2124          allocated mmapped regions, and a call to mmap succeeds,
2125          the request is allocated via direct memory mapping.
2126 
2127       6. Otherwise, the top of memory is extended by
2128          obtaining more space from the system (normally using sbrk,
2129          but definable to anything else via the MORECORE macro).
2130          Memory is gathered from the system (in system page-sized
2131          units) in a way that allows chunks obtained across different
2132          sbrk calls to be consolidated, but does not require
2133          contiguous memory. Thus, it should be safe to intersperse
2134          mallocs with other sbrk calls.
2135 
2136 
      All allocations are made from the `lowest' part of any found
2138       chunk. (The implementation invariant is that prev_inuse is
2139       always true of any allocated chunk; i.e., that each allocated
2140       chunk borders either a previously allocated and still in-use chunk,
2141       or the base of its memory arena.)
2142 
2143 */
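
/*
  A worked example of the size conversion described above, assuming
  4-byte size fields and 8-byte alignment as stated.  This is an
  illustrative check, kept disabled like the other #if 0 blocks; it
  is not part of the allocator.
*/
#if 0
static void request2size_examples(void)
{
  assert(request2size(1)  == 16);   /* below MINSIZE: rounded up to 16 */
  assert(request2size(12) == 16);   /* 12 + 4 overhead fits in 16 */
  assert(request2size(13) == 24);   /* 13 + 4 = 17, aligned up to 24 */
  assert(request2size(24) == 32);   /* 24 + 4 = 28, aligned up to 32 */
}
#endif	/* 0 */
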
2144 
2145 #if __STD_C
2146 Void_t* mALLOc(size_t bytes)
2147 #else
2148 Void_t* mALLOc(bytes) size_t bytes;
2149 #endif
2150 {
2151   mchunkptr victim;                  /* inspected/selected chunk */
2152   INTERNAL_SIZE_T victim_size;       /* its size */
2153   int       idx;                     /* index for bin traversal */
2154   mbinptr   bin;                     /* associated bin */
2155   mchunkptr remainder;               /* remainder from a split */
2156   long      remainder_size;          /* its size */
2157   int       remainder_index;         /* its bin index */
2158   unsigned long block;               /* block traverser bit */
2159   int       startidx;                /* first bin of a traversed block */
2160   mchunkptr fwd;                     /* misc temp for linking */
2161   mchunkptr bck;                     /* misc temp for linking */
2162   mbinptr q;                         /* misc temp */
2163 
2164   INTERNAL_SIZE_T nb;
2165 
2166   if ((long)bytes < 0) return 0;
2167 
2168   nb = request2size(bytes);  /* padded request size; */
2169 
2170   /* Check for exact match in a bin */
2171 
2172   if (is_small_request(nb))  /* Faster version for small requests */
2173   {
2174     idx = smallbin_index(nb);
2175 
2176     /* No traversal or size check necessary for small bins.  */
2177 
2178     q = bin_at(idx);
2179     victim = last(q);
2180 
    /* Also scan the next bin: a chunk from it would leave a
       remainder < MINSIZE, so it too counts as an exact fit */
2182     if (victim == q)
2183     {
2184       q = next_bin(q);
2185       victim = last(q);
2186     }
2187     if (victim != q)
2188     {
2189       victim_size = chunksize(victim);
2190       unlink(victim, bck, fwd);
2191       set_inuse_bit_at_offset(victim, victim_size);
2192       check_malloced_chunk(victim, nb);
2193       return chunk2mem(victim);
2194     }
2195 
2196     idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2197 
2198   }
2199   else
2200   {
2201     idx = bin_index(nb);
2202     bin = bin_at(idx);
2203 
2204     for (victim = last(bin); victim != bin; victim = victim->bk)
2205     {
2206       victim_size = chunksize(victim);
2207       remainder_size = victim_size - nb;
2208 
2209       if (remainder_size >= (long)MINSIZE) /* too big */
2210       {
2211         --idx; /* adjust to rescan below after checking last remainder */
2212         break;
2213       }
2214 
2215       else if (remainder_size >= 0) /* exact fit */
2216       {
2217         unlink(victim, bck, fwd);
2218         set_inuse_bit_at_offset(victim, victim_size);
2219         check_malloced_chunk(victim, nb);
2220         return chunk2mem(victim);
2221       }
2222     }
2223 
2224     ++idx;
2225 
2226   }
2227 
2228   /* Try to use the last split-off remainder */
2229 
2230   if ( (victim = last_remainder->fd) != last_remainder)
2231   {
2232     victim_size = chunksize(victim);
2233     remainder_size = victim_size - nb;
2234 
2235     if (remainder_size >= (long)MINSIZE) /* re-split */
2236     {
2237       remainder = chunk_at_offset(victim, nb);
2238       set_head(victim, nb | PREV_INUSE);
2239       link_last_remainder(remainder);
2240       set_head(remainder, remainder_size | PREV_INUSE);
2241       set_foot(remainder, remainder_size);
2242       check_malloced_chunk(victim, nb);
2243       return chunk2mem(victim);
2244     }
2245 
2246     clear_last_remainder;
2247 
2248     if (remainder_size >= 0)  /* exhaust */
2249     {
2250       set_inuse_bit_at_offset(victim, victim_size);
2251       check_malloced_chunk(victim, nb);
2252       return chunk2mem(victim);
2253     }
2254 
2255     /* Else place in bin */
2256 
2257     frontlink(victim, victim_size, remainder_index, bck, fwd);
2258   }
2259 
2260   /*
2261      If there are any possibly nonempty big-enough blocks,
2262      search for best fitting chunk by scanning bins in blockwidth units.
2263   */
2264 
2265   if ( (block = idx2binblock(idx)) <= binblocks)
2266   {
2267 
2268     /* Get to the first marked block */
2269 
2270     if ( (block & binblocks) == 0)
2271     {
2272       /* force to an even block boundary */
2273       idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2274       block <<= 1;
2275       while ((block & binblocks) == 0)
2276       {
2277         idx += BINBLOCKWIDTH;
2278         block <<= 1;
2279       }
2280     }
2281 
2282     /* For each possibly nonempty block ... */
2283     for (;;)
2284     {
2285       startidx = idx;          /* (track incomplete blocks) */
2286       q = bin = bin_at(idx);
2287 
2288       /* For each bin in this block ... */
2289       do
2290       {
2291         /* Find and use first big enough chunk ... */
2292 
2293         for (victim = last(bin); victim != bin; victim = victim->bk)
2294         {
2295           victim_size = chunksize(victim);
2296           remainder_size = victim_size - nb;
2297 
2298           if (remainder_size >= (long)MINSIZE) /* split */
2299           {
2300             remainder = chunk_at_offset(victim, nb);
2301             set_head(victim, nb | PREV_INUSE);
2302             unlink(victim, bck, fwd);
2303             link_last_remainder(remainder);
2304             set_head(remainder, remainder_size | PREV_INUSE);
2305             set_foot(remainder, remainder_size);
2306             check_malloced_chunk(victim, nb);
2307             return chunk2mem(victim);
2308           }
2309 
2310           else if (remainder_size >= 0)  /* take */
2311           {
2312             set_inuse_bit_at_offset(victim, victim_size);
2313             unlink(victim, bck, fwd);
2314             check_malloced_chunk(victim, nb);
2315             return chunk2mem(victim);
2316           }
2317 
2318         }
2319 
2320        bin = next_bin(bin);
2321 
2322       } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2323 
2324       /* Clear out the block bit. */
2325 
2326       do   /* Possibly backtrack to try to clear a partial block */
2327       {
2328         if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2329         {
2330           binblocks &= ~block;
2331           break;
2332         }
2333         --startidx;
2334        q = prev_bin(q);
2335       } while (first(q) == q);
2336 
2337       /* Get to the next possibly nonempty block */
2338 
2339       if ( (block <<= 1) <= binblocks && (block != 0) )
2340       {
2341         while ((block & binblocks) == 0)
2342         {
2343           idx += BINBLOCKWIDTH;
2344           block <<= 1;
2345         }
2346       }
2347       else
2348         break;
2349     }
2350   }
2351 
2352 
2353   /* Try to use top chunk */
2354 
2355   /* Require that there be a remainder, ensuring top always exists  */
2356   if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2357   {
2358 
2359 #if HAVE_MMAP
2360     /* If big and would otherwise need to extend, try to use mmap instead */
2361     if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2362         (victim = mmap_chunk(nb)) != 0)
2363       return chunk2mem(victim);
2364 #endif
2365 
2366     /* Try to extend */
2367     malloc_extend_top(nb);
2368     if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2369       return 0; /* propagate failure */
2370   }
2371 
2372   victim = top;
2373   set_head(victim, nb | PREV_INUSE);
2374   top = chunk_at_offset(victim, nb);
2375   set_head(top, remainder_size | PREV_INUSE);
2376   check_malloced_chunk(victim, nb);
2377   return chunk2mem(victim);
2378 
2379 }
2380 
2381 
2382 
2383 
2384 /*
2385 
  free() algorithm:
2387 
2388     cases:
2389 
2390        1. free(0) has no effect.
2391 
       2. If the chunk was allocated via mmap, it is released via munmap().
2393 
2394        3. If a returned chunk borders the current high end of memory,
2395           it is consolidated into the top, and if the total unused
2396           topmost memory exceeds the trim threshold, malloc_trim is
2397           called.
2398 
2399        4. Other chunks are consolidated as they arrive, and
2400           placed in corresponding bins. (This includes the case of
2401           consolidating with the current `last_remainder').
2402 
2403 */
2404 
2405 
2406 #if __STD_C
2407 void fREe(Void_t* mem)
2408 #else
2409 void fREe(mem) Void_t* mem;
2410 #endif
2411 {
2412   mchunkptr p;         /* chunk corresponding to mem */
2413   INTERNAL_SIZE_T hd;  /* its head field */
2414   INTERNAL_SIZE_T sz;  /* its size */
2415   int       idx;       /* its bin index */
2416   mchunkptr next;      /* next contiguous chunk */
2417   INTERNAL_SIZE_T nextsz; /* its size */
2418   INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2419   mchunkptr bck;       /* misc temp for linking */
2420   mchunkptr fwd;       /* misc temp for linking */
2421   int       islr;      /* track whether merging with last_remainder */
2422 
2423   if (mem == 0)                              /* free(0) has no effect */
2424     return;
2425 
2426   p = mem2chunk(mem);
2427   hd = p->size;
2428 
2429 #if HAVE_MMAP
2430   if (hd & IS_MMAPPED)                       /* release mmapped memory. */
2431   {
2432     munmap_chunk(p);
2433     return;
2434   }
2435 #endif
2436 
2437   check_inuse_chunk(p);
2438 
2439   sz = hd & ~PREV_INUSE;
2440   next = chunk_at_offset(p, sz);
2441   nextsz = chunksize(next);
2442 
2443   if (next == top)                            /* merge with top */
2444   {
2445     sz += nextsz;
2446 
2447     if (!(hd & PREV_INUSE))                    /* consolidate backward */
2448     {
2449       prevsz = p->prev_size;
2450       p = chunk_at_offset(p, -((long) prevsz));
2451       sz += prevsz;
2452       unlink(p, bck, fwd);
2453     }
2454 
2455     set_head(p, sz | PREV_INUSE);
2456     top = p;
2457     if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
2458       malloc_trim(top_pad);
2459     return;
2460   }
2461 
2462   set_head(next, nextsz);                    /* clear inuse bit */
2463 
2464   islr = 0;
2465 
2466   if (!(hd & PREV_INUSE))                    /* consolidate backward */
2467   {
2468     prevsz = p->prev_size;
2469     p = chunk_at_offset(p, -((long) prevsz));
2470     sz += prevsz;
2471 
2472     if (p->fd == last_remainder)             /* keep as last_remainder */
2473       islr = 1;
2474     else
2475       unlink(p, bck, fwd);
2476   }
2477 
2478   if (!(inuse_bit_at_offset(next, nextsz)))   /* consolidate forward */
2479   {
2480     sz += nextsz;
2481 
2482     if (!islr && next->fd == last_remainder)  /* re-insert last_remainder */
2483     {
2484       islr = 1;
2485       link_last_remainder(p);
2486     }
2487     else
2488       unlink(next, bck, fwd);
2489   }
2490 
2491 
2492   set_head(p, sz | PREV_INUSE);
2493   set_foot(p, sz);
2494   if (!islr)
2495     frontlink(p, sz, idx, bck, fwd);
2496 }
2497 
2498 
2499 
2500 
2501 
2502 /*
2503 
2504   Realloc algorithm:
2505 
2506     Chunks that were obtained via mmap cannot be extended or shrunk
2507     unless HAVE_MREMAP is defined, in which case mremap is used.
2508     Otherwise, if their reallocation is for additional space, they are
2509     copied.  If for less, they are just left alone.
2510 
2511     Otherwise, if the reallocation is for additional space, and the
2512     chunk can be extended, it is, else a malloc-copy-free sequence is
2513     taken.  There are several different ways that a chunk could be
2514     extended. All are tried:
2515 
       * Extending forward into following adjacent free chunk.
       * Shifting backwards, joining preceding adjacent space.
       * Both shifting backwards and extending forward.
       * Extending into newly sbrked space.
2520 
2521     Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
2522     size argument of zero (re)allocates a minimum-sized chunk.
2523 
2524     If the reallocation is for less space, and the new request is for
2525     a `small' (<512 bytes) size, then the newly unused space is lopped
2526     off and freed.
2527 
2528     The old unix realloc convention of allowing the last-free'd chunk
2529     to be used as an argument to realloc is no longer supported.
2530     I don't know of any programs still relying on this feature,
2531     and allowing it would also allow too many other incorrect
2532     usages of realloc to be sensible.
2533 
2534 
2535 */
2536 
2537 
2538 #if __STD_C
2539 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
2540 #else
2541 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
2542 #endif
2543 {
2544   INTERNAL_SIZE_T    nb;      /* padded request size */
2545 
2546   mchunkptr oldp;             /* chunk corresponding to oldmem */
2547   INTERNAL_SIZE_T    oldsize; /* its size */
2548 
2549   mchunkptr newp;             /* chunk to return */
2550   INTERNAL_SIZE_T    newsize; /* its size */
2551   Void_t*   newmem;           /* corresponding user mem */
2552 
2553   mchunkptr next;             /* next contiguous chunk after oldp */
2554   INTERNAL_SIZE_T  nextsize;  /* its size */
2555 
2556   mchunkptr prev;             /* previous contiguous chunk before oldp */
2557   INTERNAL_SIZE_T  prevsize;  /* its size */
2558 
2559   mchunkptr remainder;        /* holds split off extra space from newp */
2560   INTERNAL_SIZE_T  remainder_size;   /* its size */
2561 
2562   mchunkptr bck;              /* misc temp for linking */
2563   mchunkptr fwd;              /* misc temp for linking */
2564 
2565 #ifdef REALLOC_ZERO_BYTES_FREES
2566   if (bytes == 0) { fREe(oldmem); return 0; }
2567 #endif
2568 
2569   if ((long)bytes < 0) return 0;
2570 
2571   /* realloc of null is supposed to be same as malloc */
2572   if (oldmem == 0) return mALLOc(bytes);
2573 
2574   newp    = oldp    = mem2chunk(oldmem);
2575   newsize = oldsize = chunksize(oldp);
2576 
2577 
2578   nb = request2size(bytes);
2579 
2580 #if HAVE_MMAP
2581   if (chunk_is_mmapped(oldp))
2582   {
2583 #if HAVE_MREMAP
2584     newp = mremap_chunk(oldp, nb);
2585     if(newp) return chunk2mem(newp);
2586 #endif
2587     /* Note the extra SIZE_SZ overhead. */
2588     if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
2589     /* Must alloc, copy, free. */
2590     newmem = mALLOc(bytes);
2591     if (newmem == 0) return 0; /* propagate failure */
2592     MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
2593     munmap_chunk(oldp);
2594     return newmem;
2595   }
2596 #endif
2597 
2598   check_inuse_chunk(oldp);
2599 
2600   if ((long)(oldsize) < (long)(nb))
2601   {
2602 
2603     /* Try expanding forward */
2604 
2605     next = chunk_at_offset(oldp, oldsize);
2606     if (next == top || !inuse(next))
2607     {
2608       nextsize = chunksize(next);
2609 
2610       /* Forward into top only if a remainder */
2611       if (next == top)
2612       {
2613         if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
2614         {
2615           newsize += nextsize;
2616           top = chunk_at_offset(oldp, nb);
2617           set_head(top, (newsize - nb) | PREV_INUSE);
2618           set_head_size(oldp, nb);
2619           return chunk2mem(oldp);
2620         }
2621       }
2622 
2623       /* Forward into next chunk */
2624       else if (((long)(nextsize + newsize) >= (long)(nb)))
2625       {
2626         unlink(next, bck, fwd);
2627         newsize  += nextsize;
2628         goto split;
2629       }
2630     }
2631     else
2632     {
2633       next = 0;
2634       nextsize = 0;
2635     }
2636 
2637     /* Try shifting backwards. */
2638 
2639     if (!prev_inuse(oldp))
2640     {
2641       prev = prev_chunk(oldp);
2642       prevsize = chunksize(prev);
2643 
2644       /* try forward + backward first to save a later consolidation */
2645 
2646       if (next != 0)
2647       {
2648         /* into top */
2649         if (next == top)
2650         {
2651           if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
2652           {
2653             unlink(prev, bck, fwd);
2654             newp = prev;
2655             newsize += prevsize + nextsize;
2656             newmem = chunk2mem(newp);
2657             MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2658             top = chunk_at_offset(newp, nb);
2659             set_head(top, (newsize - nb) | PREV_INUSE);
2660             set_head_size(newp, nb);
2661             return newmem;
2662           }
2663         }
2664 
2665         /* into next chunk */
2666         else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
2667         {
2668           unlink(next, bck, fwd);
2669           unlink(prev, bck, fwd);
2670           newp = prev;
2671           newsize += nextsize + prevsize;
2672           newmem = chunk2mem(newp);
2673           MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2674           goto split;
2675         }
2676       }
2677 
2678       /* backward only */
2679       if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
2680       {
2681         unlink(prev, bck, fwd);
2682         newp = prev;
2683         newsize += prevsize;
2684         newmem = chunk2mem(newp);
2685         MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2686         goto split;
2687       }
2688     }
2689 
2690     /* Must allocate */
2691 
2692     newmem = mALLOc (bytes);
2693 
2694     if (newmem == 0)  /* propagate failure */
2695       return 0;
2696 
2697     /* Avoid copy if newp is next chunk after oldp. */
2698     /* (This can only happen when new chunk is sbrk'ed.) */
2699 
2700     if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
2701     {
2702       newsize += chunksize(newp);
2703       newp = oldp;
2704       goto split;
2705     }
2706 
2707     /* Otherwise copy, free, and exit */
2708     MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2709     fREe(oldmem);
2710     return newmem;
2711   }
2712 
2713 
2714  split:  /* split off extra room in old or expanded chunk */
2715 
2716   if (newsize - nb >= MINSIZE) /* split off remainder */
2717   {
2718     remainder = chunk_at_offset(newp, nb);
2719     remainder_size = newsize - nb;
2720     set_head_size(newp, nb);
2721     set_head(remainder, remainder_size | PREV_INUSE);
2722     set_inuse_bit_at_offset(remainder, remainder_size);
2723     fREe(chunk2mem(remainder)); /* let free() deal with it */
2724   }
2725   else
2726   {
2727     set_head_size(newp, newsize);
2728     set_inuse_bit_at_offset(newp, newsize);
2729   }
2730 
2731   check_inuse_chunk(newp);
2732   return chunk2mem(newp);
2733 }
2734 
2735 
2736 
2737 
2738 /*
2739 
2740   memalign algorithm:
2741 
2742     memalign requests more than enough space from malloc, finds a spot
2743     within that chunk that meets the alignment request, and then
2744     possibly frees the leading and trailing space.
2745 
2746     The alignment argument must be a power of two. This property is not
2747     checked by memalign, so misuse may result in random runtime errors.
2748 
2749     8-byte alignment is guaranteed by normal malloc calls, so don't
2750     bother calling memalign with an argument of 8 or less.
2751 
2752     Overreliance on memalign is a sure way to fragment space.
2753 
2754 */
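
/*
  Illustrative usage sketch (hypothetical, kept disabled like the
  other #if 0 blocks): obtain a buffer aligned to a power-of-two
  boundary, e.g. for a DMA descriptor ring.
*/
#if 0
static void memalign_example(void)
{
  Void_t* ring = mEMALIGn(64, 1024);	/* 64 is a power of two */

  if (ring != 0)
    fREe(ring);
}
#endif	/* 0 */
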
2755 
2756 
2757 #if __STD_C
2758 Void_t* mEMALIGn(size_t alignment, size_t bytes)
2759 #else
2760 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
2761 #endif
2762 {
2763   INTERNAL_SIZE_T    nb;      /* padded  request size */
2764   char*     m;                /* memory returned by malloc call */
2765   mchunkptr p;                /* corresponding chunk */
2766   char*     brk;              /* alignment point within p */
2767   mchunkptr newp;             /* chunk to return */
2768   INTERNAL_SIZE_T  newsize;   /* its size */
  INTERNAL_SIZE_T  leadsize;  /* leading space before alignment point */
2770   mchunkptr remainder;        /* spare room at end to split off */
2771   long      remainder_size;   /* its size */
2772 
2773   if ((long)bytes < 0) return 0;
2774 
  /* If malloc already gives enough alignment, just relay to malloc */
2776 
2777   if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
2778 
2779   /* Otherwise, ensure that it is at least a minimum chunk size */
2780 
2781   if (alignment <  MINSIZE) alignment = MINSIZE;
2782 
2783   /* Call malloc with worst case padding to hit alignment. */
2784 
2785   nb = request2size(bytes);
2786   m  = (char*)(mALLOc(nb + alignment + MINSIZE));
2787 
2788   if (m == 0) return 0; /* propagate failure */
2789 
2790   p = mem2chunk(m);
2791 
2792   if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
2793   {
2794 #if HAVE_MMAP
2795     if(chunk_is_mmapped(p))
2796       return chunk2mem(p); /* nothing more to do */
2797 #endif
2798   }
2799   else /* misaligned */
2800   {
2801     /*
2802       Find an aligned spot inside chunk.
2803       Since we need to give back leading space in a chunk of at
2804       least MINSIZE, if the first calculation places us at
2805       a spot with less than MINSIZE leader, we can move to the
2806       next aligned spot -- we've allocated enough total room so that
2807       this is always possible.
2808     */
2809 
2810     brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -((signed) alignment));
2811     if ((long)(brk - (char*)(p)) < MINSIZE) brk = brk + alignment;
2812 
2813     newp = (mchunkptr)brk;
2814     leadsize = brk - (char*)(p);
2815     newsize = chunksize(p) - leadsize;
2816 
2817 #if HAVE_MMAP
2818     if(chunk_is_mmapped(p))
2819     {
2820       newp->prev_size = p->prev_size + leadsize;
2821       set_head(newp, newsize|IS_MMAPPED);
2822       return chunk2mem(newp);
2823     }
2824 #endif
2825 
2826     /* give back leader, use the rest */
2827 
2828     set_head(newp, newsize | PREV_INUSE);
2829     set_inuse_bit_at_offset(newp, newsize);
2830     set_head_size(p, leadsize);
2831     fREe(chunk2mem(p));
2832     p = newp;
2833 
2834     assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
2835   }
2836 
2837   /* Also give back spare room at the end */
2838 
2839   remainder_size = chunksize(p) - nb;
2840 
2841   if (remainder_size >= (long)MINSIZE)
2842   {
2843     remainder = chunk_at_offset(p, nb);
2844     set_head(remainder, remainder_size | PREV_INUSE);
2845     set_head_size(p, nb);
2846     fREe(chunk2mem(remainder));
2847   }
2848 
2849   check_inuse_chunk(p);
2850   return chunk2mem(p);
2851 
2852 }
2853 
2854 
2855 
2856 
2857 /*
2858     valloc just invokes memalign with alignment argument equal
2859     to the page size of the system (or as near to this as can
2860     be figured out from all the includes/defines above.)
2861 */
2862 
2863 #if __STD_C
2864 Void_t* vALLOc(size_t bytes)
2865 #else
2866 Void_t* vALLOc(bytes) size_t bytes;
2867 #endif
2868 {
2869   return mEMALIGn (malloc_getpagesize, bytes);
2870 }
2871 
2872 /*
  pvalloc just invokes valloc on the request rounded up to the
  nearest multiple of the page size
2875 */
2876 
2877 
2878 #if __STD_C
2879 Void_t* pvALLOc(size_t bytes)
2880 #else
2881 Void_t* pvALLOc(bytes) size_t bytes;
2882 #endif
2883 {
2884   size_t pagesize = malloc_getpagesize;
2885   return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
2886 }
2887 
2888 /*
2889 
2890   calloc calls malloc, then zeroes out the allocated chunk.
2891 
2892 */
2893 
2894 #if __STD_C
2895 Void_t* cALLOc(size_t n, size_t elem_size)
2896 #else
2897 Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
2898 #endif
2899 {
2900   mchunkptr p;
2901   INTERNAL_SIZE_T csz;
2902 
2903   INTERNAL_SIZE_T sz = n * elem_size;
2904 
2905 
  /* check if malloc_extend_top was called, in which case we needn't clear */
2907 #if MORECORE_CLEARS
2908   mchunkptr oldtop = top;
2909   INTERNAL_SIZE_T oldtopsize = chunksize(top);
2910 #endif
2911   Void_t* mem = mALLOc (sz);
2912 
2913   if ((long)n < 0) return 0;
2914 
2915   if (mem == 0)
2916     return 0;
2917   else
2918   {
2919     p = mem2chunk(mem);
2920 
    /* Two optional cases in which clearing is not necessary */
2922 
2923 
2924 #if HAVE_MMAP
2925     if (chunk_is_mmapped(p)) return mem;
2926 #endif
2927 
2928     csz = chunksize(p);
2929 
2930 #if MORECORE_CLEARS
2931     if (p == oldtop && csz > oldtopsize)
2932     {
2933       /* clear only the bytes from non-freshly-sbrked memory */
2934       csz = oldtopsize;
2935     }
2936 #endif
2937 
2938     MALLOC_ZERO(mem, csz - SIZE_SZ);
2939     return mem;
2940   }
2941 }
2942 
2943 /*
2944 
2945   cfree just calls free. It is needed/defined on some systems
2946   that pair it with calloc, presumably for odd historical reasons.
2947 
2948 */
2949 
2950 #if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
2951 #if __STD_C
2952 void cfree(Void_t *mem)
2953 #else
2954 void cfree(mem) Void_t *mem;
2955 #endif
2956 {
2957   fREe(mem);
2958 }
2959 #endif
2960 
2961 
2962 
2963 /*
2964 
2965     Malloc_trim gives memory back to the system (via negative
2966     arguments to sbrk) if there is unused memory at the `high' end of
2967     the malloc pool. You can call this after freeing large blocks of
2968     memory to potentially reduce the system-level memory requirements
2969     of a program. However, it cannot guarantee to reduce memory. Under
2970     some allocation patterns, some large free blocks of memory will be
2971     locked between two used chunks, so they cannot be given back to
2972     the system.
2973 
2974     The `pad' argument to malloc_trim represents the amount of free
2975     trailing space to leave untrimmed. If this argument is zero,
2976     only the minimum amount of memory to maintain internal data
2977     structures will be left (one page or less). Non-zero arguments
2978     can be supplied to maintain enough trailing space to service
2979     future expected allocations without having to re-obtain memory
2980     from the system.
2981 
2982     Malloc_trim returns 1 if it actually released any memory, else 0.
2983 
2984 */
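
/*
  Illustrative usage sketch (hypothetical, kept disabled): after
  freeing a large buffer, return the unused tail of the heap to the
  system, keeping some headroom for upcoming allocations.
*/
#if 0
static void trim_example(void)
{
  Void_t* big = mALLOc(1024 * 1024);

  fREe(big);
  (void) malloc_trim(4096);	/* returns 1 if memory was released */
}
#endif	/* 0 */
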
2985 
2986 #if __STD_C
2987 int malloc_trim(size_t pad)
2988 #else
2989 int malloc_trim(pad) size_t pad;
2990 #endif
2991 {
2992   long  top_size;        /* Amount of top-most memory */
2993   long  extra;           /* Amount to release */
2994   char* current_brk;     /* address returned by pre-check sbrk call */
2995   char* new_brk;         /* address returned by negative sbrk call */
2996 
2997   unsigned long pagesz = malloc_getpagesize;
2998 
2999   top_size = chunksize(top);
3000   extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
3001 
3002   if (extra < (long)pagesz)  /* Not enough memory to release */
3003     return 0;
3004 
3005   else
3006   {
3007     /* Test to make sure no one else called sbrk */
3008     current_brk = (char*)(MORECORE (0));
3009     if (current_brk != (char*)(top) + top_size)
3010       return 0;     /* Apparently we don't own memory; must fail */
3011 
3012     else
3013     {
3014       new_brk = (char*)(MORECORE (-extra));
3015 
3016       if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
3017       {
3018         /* Try to figure out what we have */
3019         current_brk = (char*)(MORECORE (0));
3020         top_size = current_brk - (char*)top;
3021         if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
3022         {
3023           sbrked_mem = current_brk - sbrk_base;
3024           set_head(top, top_size | PREV_INUSE);
3025         }
3026         check_chunk(top);
3027         return 0;
3028       }
3029 
3030       else
3031       {
3032         /* Success. Adjust top accordingly. */
3033         set_head(top, (top_size - extra) | PREV_INUSE);
3034         sbrked_mem -= extra;
3035         check_chunk(top);
3036         return 1;
3037       }
3038     }
3039   }
3040 }
3041 
3042 
3043 
3044 /*
3045   malloc_usable_size:
3046 
3047     This routine tells you how many bytes you can actually use in an
3048     allocated chunk, which may be more than you requested (although
3049     often not). You can use this many bytes without worrying about
3050     overwriting other allocated objects. Not a particularly great
3051     programming practice, but still sometimes useful.
3052 
3053 */
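
/*
  Illustrative usage sketch (hypothetical, kept disabled): the usable
  size may exceed the request because of alignment and minimum-size
  padding, e.g. 12 usable bytes for a 10-byte request with 4-byte
  size fields.
*/
#if 0
static void usable_size_example(void)
{
  Void_t* p = mALLOc(10);

  if (p != 0)
  {
    size_t n = malloc_usable_size(p);	/* >= 10 */
    /* the first n bytes at p may be used safely */
    fREe(p);
  }
}
#endif	/* 0 */
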
3054 
3055 #if __STD_C
3056 size_t malloc_usable_size(Void_t* mem)
3057 #else
3058 size_t malloc_usable_size(mem) Void_t* mem;
3059 #endif
3060 {
3061   mchunkptr p;
3062   if (mem == 0)
3063     return 0;
3064   else
3065   {
3066     p = mem2chunk(mem);
3067     if(!chunk_is_mmapped(p))
3068     {
3069       if (!inuse(p)) return 0;
3070       check_inuse_chunk(p);
3071       return chunksize(p) - SIZE_SZ;
3072     }
3073     return chunksize(p) - 2*SIZE_SZ;
3074   }
3075 }
3076 
3077 
3078 
3079 
3080 /* Utility to update current_mallinfo for malloc_stats and mallinfo() */
3081 
3082 #if 0
3083 static void malloc_update_mallinfo()
3084 {
3085   int i;
3086   mbinptr b;
3087   mchunkptr p;
3088 #ifdef DEBUG
3089   mchunkptr q;
3090 #endif
3091 
3092   INTERNAL_SIZE_T avail = chunksize(top);
3093   int   navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
3094 
3095   for (i = 1; i < NAV; ++i)
3096   {
3097     b = bin_at(i);
3098     for (p = last(b); p != b; p = p->bk)
3099     {
3100 #ifdef DEBUG
3101       check_free_chunk(p);
3102       for (q = next_chunk(p);
3103            q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
3104            q = next_chunk(q))
3105         check_inuse_chunk(q);
3106 #endif
3107       avail += chunksize(p);
3108       navail++;
3109     }
3110   }
3111 
3112   current_mallinfo.ordblks = navail;
3113   current_mallinfo.uordblks = sbrked_mem - avail;
3114   current_mallinfo.fordblks = avail;
3115   current_mallinfo.hblks = n_mmaps;
3116   current_mallinfo.hblkhd = mmapped_mem;
3117   current_mallinfo.keepcost = chunksize(top);
3118 
3119 }
3120 #endif	/* 0 */
3121 
3122 
3123 
3124 /*
3125 
3126   malloc_stats:
3127 
    Prints the amount of space obtained from the system (both
3129     via sbrk and mmap), the maximum amount (which may be more than
3130     current if malloc_trim and/or munmap got called), the maximum
3131     number of simultaneous mmap regions used, and the current number
3132     of bytes allocated via malloc (or realloc, etc) but not yet
3133     freed. (Note that this is the number of bytes allocated, not the
3134     number requested. It will be larger than the number requested
3135     because of alignment and bookkeeping overhead.)
3136 
3137 */
3138 
3139 #if 0
3140 void malloc_stats()
3141 {
3142   malloc_update_mallinfo();
3143   printf("max system bytes = %10u\n",
3144           (unsigned int)(max_total_mem));
3145   printf("system bytes     = %10u\n",
3146           (unsigned int)(sbrked_mem + mmapped_mem));
3147   printf("in use bytes     = %10u\n",
3148           (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
3149 #if HAVE_MMAP
3150   printf("max mmap regions = %10u\n",
3151           (unsigned int)max_n_mmaps);
3152 #endif
3153 }
3154 #endif	/* 0 */
3155 
3156 /*
  mallinfo returns a copy of the updated current_mallinfo structure.
3158 */
3159 
3160 #if 0
3161 struct mallinfo mALLINFo()
3162 {
3163   malloc_update_mallinfo();
3164   return current_mallinfo;
3165 }
3166 #endif	/* 0 */




/*
  mallopt:

    mallopt is the general SVID/XPG interface to tunable parameters.
    The format is to provide a (parameter-number, parameter-value) pair.
    mallopt then sets the corresponding parameter to the argument
    value if it can (i.e., so long as the value is meaningful),
    and returns 1 if successful else 0.

    See descriptions of tunable parameters above.

*/

#if __STD_C
int mALLOPt(int param_number, int value)
#else
int mALLOPt(param_number, value) int param_number; int value;
#endif
{
  switch(param_number)
  {
    case M_TRIM_THRESHOLD:
      trim_threshold = value; return 1;
    case M_TOP_PAD:
      top_pad = value; return 1;
    case M_MMAP_THRESHOLD:
      mmap_threshold = value; return 1;
    case M_MMAP_MAX:
#if HAVE_MMAP
      n_mmaps_max = value; return 1;
#else
      if (value != 0) return 0;
      n_mmaps_max = value; return 1;
#endif

    default:
      return 0;
  }
}
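
/*
  A minimal usage sketch (the helper name and the values are
  illustrative choices, not recommendations from this file): route
  only requests of 256*1024 bytes or more to mmap, and raise the trim
  threshold so trimming effectively never happens.  As with any
  mallopt call, the return value should be checked, since a parameter
  may be rejected.
*/
#if 0
static void mallopt_example(void)
{
  if (mALLOPt(M_MMAP_THRESHOLD, 256 * 1024) == 0)
    printf("mmap threshold not changed\n");
  if (mALLOPt(M_TRIM_THRESHOLD, -1) == 0)  /* (unsigned)-1: never trim */
    printf("trim threshold not changed\n");
}
#endif /* 0 */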

/*

History:

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added several WIN32 cleanups from Martin C. Fong <mcfong@yahoo.com>
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
           (e.g. WIN32 platforms)
         * Clean up header file inclusion for WIN32 platforms
         * Clean up code to avoid Microsoft Visual C++ compiler complaints
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
           memory allocation routines
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
           usage of 'assert' in non-WIN32 code
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
           avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Lu
      * Added 64-bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from H.J. Lu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices).

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the
        overall structure of the old version, but most details differ.)

*/

