#include <common.h>

#if 0	/* Moved to malloc.h */
/* ---------- To make a malloc.h, start cutting here ------------ */

/*
  A version of malloc/free/realloc written by Doug Lea and released to the
  public domain.  Send questions/comments/complaints/performance data
  to dl@cs.oswego.edu

* VERSION 2.6.6  Sun Mar  5 19:10:03 2000  Doug Lea  (dl at gee)

   Note: There may be an updated version of this malloc obtainable at
	   ftp://g.oswego.edu/pub/misc/malloc.c
	 Check before installing!

* Why use this malloc?

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and tunable.
  Consistent balance across these factors results in a good general-purpose
  allocator. For a high-level description, see
     http://g.oswego.edu/dl/html/malloc.html

* Synopsis of public routines

  (Much fuller descriptions are contained in the program documentation below.)

  malloc(size_t n);
     Return a pointer to a newly allocated chunk of at least n bytes, or null
     if no space is available.
  free(Void_t* p);
     Release the chunk of memory pointed to by p, or no effect if p is null.
  realloc(Void_t* p, size_t n);
     Return a pointer to a chunk of size n that contains the same data
     as does chunk p up to the minimum of (n, p's size) bytes, or null
     if no space is available. The returned pointer may or may not be
     the same as p. If p is null, equivalent to malloc. Unless the
     #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
     size argument of zero (re)allocates a minimum-sized chunk.
  memalign(size_t alignment, size_t n);
     Return a pointer to a newly allocated chunk of n bytes, aligned
     in accord with the alignment argument, which must be a power of
     two.
  valloc(size_t n);
     Equivalent to memalign(pagesize, n), where pagesize is the page
     size of the system (or as near to this as can be figured out from
     all the includes/defines below.)
  pvalloc(size_t n);
     Equivalent to valloc(minimum-page-that-holds(n)), that is,
     round up n to nearest pagesize.
  calloc(size_t unit, size_t quantity);
     Returns a pointer to quantity * unit bytes, with all locations
     set to zero.
  cfree(Void_t* p);
     Equivalent to free(p).
  malloc_trim(size_t pad);
     Release all but pad bytes of freed top-most memory back
     to the system. Return 1 if successful, else 0.
  malloc_usable_size(Void_t* p);
     Report the number of usable allocated bytes associated with allocated
     chunk p. This may or may not report more bytes than were requested,
     due to alignment and minimum size constraints.
  malloc_stats();
     Prints brief summary statistics.
  mallinfo()
     Returns (by copy) a struct containing various summary statistics.
  mallopt(int parameter_number, int parameter_value)
     Changes one of the tunable parameters described below. Returns
     1 if successful in changing the parameter, else 0.

* Vital statistics:

  Alignment:                            8-byte
       8-byte alignment is currently hardwired into the design. This
       seems to suffice for all current machines and C compilers.

  Assumed pointer representation:       4 or 8 bytes
       Code for 8-byte pointers is untested by me but is reported to
       work reliably by Wolfram Gloger, who contributed most of the
       changes supporting this.

  Assumed size_t representation:        4 or 8 bytes
       Note that size_t is allowed to be 4 bytes even if pointers are 8.
  Minimum overhead per allocated chunk: 4 or 8 bytes
       Each malloced chunk has a hidden overhead of 4 bytes holding size
       and status information.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
			  8-byte ptrs:  24/32 bytes (including 4/8 overhead)

       When a chunk is freed, 12 (for 4-byte ptrs) or 20 (for 8-byte
       ptrs but 4-byte size) or 24 (for 8/8) additional bytes are
       needed; 4 (8) for a trailing size field
       and 8 (16) bytes for free list pointers. Thus, the minimum
       allocatable size is 16/24/32 bytes.

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.

  Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
			  8-byte size_t: 2^63 - 16 bytes

       It is assumed that (possibly signed) size_t bit values suffice to
       represent chunk sizes. `Possibly signed' is due to the fact
       that `size_t' may be defined on a system as either a signed or
       an unsigned type. To be conservative, values that would appear
       as negative numbers are avoided.
       Requests for sizes with a negative sign bit when the request
       size is treated as a long will return null.

  Maximum overhead wastage per allocated chunk: normally 15 bytes

       Alignment demands, plus the minimum allocatable size restriction
       make the normal worst-case wastage 15 bytes (i.e., up to 15
       more bytes will be allocated than were requested in malloc), with
       two exceptions:
	 1. Because requests for zero bytes allocate non-zero space,
	    the worst case wastage for a request of zero bytes is 24 bytes.
	 2. For requests >= mmap_threshold that are serviced via
	    mmap(), the worst case wastage is 8 bytes plus the remainder
	    from a system page (the minimal mmap unit); typically 4096 bytes.

* Limitations

    Here are some features that are NOT currently supported

    * No user-definable hooks for callbacks and the like.
    * No automated mechanism for fully checking that all accesses
      to malloced memory stay within their bounds.
    * No support for compaction.

* Synopsis of compile-time options:

    People have reported using previous versions of this malloc on all
    versions of Unix, sometimes by tweaking some of the defines
    below. It has been tested most extensively on Solaris and
    Linux. It is also reported to work on WIN32 platforms.
    People have also reported adapting this malloc for use in
    stand-alone embedded systems.

    The implementation is in straight, hand-tuned ANSI C. Among other
    consequences, it uses a lot of macros. Because of this, to be at
    all usable, this code should be compiled using an optimizing compiler
    (for example gcc -O2) that can simplify expressions and control
    paths.

  __STD_C                  (default: derived from C compiler defines)
     Nonzero if using ANSI-standard C compiler, a C++ compiler, or
     a C compiler sufficiently close to ANSI to get away with it.
  DEBUG                    (default: NOT defined)
     Define to enable debugging. Adds fairly extensive assertion-based
     checking to help track down memory errors, but noticeably slows down
     execution.
  REALLOC_ZERO_BYTES_FREES (default: NOT defined)
     Define this if you think that realloc(p, 0) should be equivalent
     to free(p). Otherwise, since malloc returns a unique pointer for
     malloc(0), so does realloc(p, 0).
  HAVE_MEMCPY               (default: defined)
     Define if you are not otherwise using ANSI STD C, but still
     have memcpy and memset in your C library and want to use them.
     Otherwise, simple internal versions are supplied.
  USE_MEMCPY               (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
     Define as 1 if you want the C library versions of memset and
     memcpy called in realloc and calloc (otherwise macro versions are used).
     At least on some platforms, the simple macro versions usually
     outperform libc versions.
  HAVE_MMAP                 (default: defined as 1)
     Define to non-zero to optionally make malloc() use mmap() to
     allocate very large blocks.
  HAVE_MREMAP               (default: defined as 0 unless Linux libc set)
     Define to non-zero to optionally make realloc() use mremap() to
     reallocate very large blocks.
  malloc_getpagesize        (default: derived from system #includes)
     Either a constant or routine call returning the system page size.
  HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
     Optionally define if you are on a system with a /usr/include/malloc.h
     that declares struct mallinfo. It is not at all necessary to
     define this even if you do, but doing so ensures consistency.
  INTERNAL_SIZE_T           (default: size_t)
     Define to a 32-bit type (probably `unsigned int') if you are on a
     64-bit machine, yet do not want or need to allow malloc requests of
     greater than 2^31 to be handled. This saves space, especially for
     very small chunks.
  INTERNAL_LINUX_C_LIB      (default: NOT defined)
     Defined only when compiled as part of Linux libc.
     Also note that there is some odd internal name-mangling via defines
     (for example, internally, `malloc' is named `mALLOc') needed
     when compiling in this case. These look funny but don't otherwise
     affect anything.
  WIN32                     (default: undefined)
     Define this on MS win (95, nt) platforms to compile in sbrk emulation.
  LACKS_UNISTD_H            (default: undefined if not WIN32)
     Define this if your system does not have a <unistd.h>.
  LACKS_SYS_PARAM_H         (default: undefined if not WIN32)
     Define this if your system does not have a <sys/param.h>.
  MORECORE                  (default: sbrk)
     The name of the routine to call to obtain more memory from the system.
  MORECORE_FAILURE          (default: -1)
     The value returned upon failure of MORECORE.
  MORECORE_CLEARS           (default: 1)
     True (1) if the routine mapped to MORECORE zeroes out memory (which
     holds for sbrk).
  DEFAULT_TRIM_THRESHOLD
  DEFAULT_TOP_PAD
  DEFAULT_MMAP_THRESHOLD
  DEFAULT_MMAP_MAX
     Default values of tunable parameters (described in detail below)
     controlling interaction with host system routines (sbrk, mmap, etc).
     These values may also be changed dynamically via mallopt(). The
     preset defaults are those that give best performance for typical
     programs/systems.
  USE_DL_PREFIX             (default: undefined)
     Prefix all public routines with the string 'dl'. Useful to
     quickly avoid procedure declaration conflicts and linker symbol
     conflicts with existing memory allocation routines.

*/
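
/*
  A minimal usage sketch of the public routines above. This is
  illustrative only (it sits inside the surrounding `#if 0' region and
  is never compiled), and it assumes the allocator has already been
  initialized.
*/
static void malloc_usage_sketch(void)
{
  Void_t *aligned;
  char *p, *q;

  p = malloc(100);              /* at least 100 usable bytes */
  if (p != 0)
  {
    memset(p, 'x', 100);
    q = realloc(p, 200);        /* first 100 bytes preserved */
    if (q != 0) p = q;
  }

  aligned = memalign(64, 512);  /* alignment must be a power of two */

  free(aligned);                /* free(0) would be a no-op */
  free(p);
}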
/* Preliminaries */

#ifndef __STD_C
#ifdef __STDC__
#define __STD_C     1
#else
#if __cplusplus
#define __STD_C     1
#else
#define __STD_C     0
#endif /*__cplusplus*/
#endif /*__STDC__*/
#endif /*__STD_C*/

#ifndef Void_t
#if (__STD_C || defined(WIN32))
#define Void_t      void
#else
#define Void_t      char
#endif
#endif /*Void_t*/

#if __STD_C
#include <stddef.h>   /* for size_t */
#else
#include <sys/types.h>
#endif

#ifdef __cplusplus
extern "C" {
#endif

#include <stdio.h>    /* needed for malloc_stats */


/*
  Compile-time options
*/


/*
    Debugging:

    Because freed chunks may be overwritten with link fields, this
    malloc will often die when freed memory is overwritten by user
    programs. This can be very effective (albeit in an annoying way)
    in helping track down dangling pointers.

    If you compile with -DDEBUG, a number of assertion checks are
    enabled that will catch more memory errors. You probably won't be
    able to make much sense of the actual assertion errors, but they
    should help you locate incorrectly overwritten memory. The
    checking is fairly extensive, and will slow down execution
    noticeably. Calling malloc_stats or mallinfo with DEBUG set will
    attempt to check every non-mmapped allocated and free chunk in the
    course of computing the summaries. (By nature, mmapped regions
    cannot be checked very much automatically.)

    Setting DEBUG may also be helpful if you are trying to modify
    this code. The assertions in the check routines spell out in more
    detail the assumptions and invariants underlying the algorithms.

*/

#ifdef DEBUG
#include <assert.h>
#else
#define assert(x) ((void)0)
#endif


/*
  INTERNAL_SIZE_T is the word-size used for internal bookkeeping
  of chunk sizes. On a 64-bit machine, you can reduce malloc
  overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
  at the expense of not being able to handle requests greater than
  2^31. This limitation is hardly ever a concern; you are encouraged
  to set this. However, the default version is the same as size_t.
*/

#ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif

/*
  REALLOC_ZERO_BYTES_FREES should be set if a call to
  realloc with zero bytes should be the same as a call to free.
  Some people think it should. Otherwise, since this malloc
  returns a unique pointer for malloc(0), so does realloc(p, 0).
*/


/*   #define REALLOC_ZERO_BYTES_FREES */


/*
  WIN32 causes an emulation of sbrk to be compiled in;
  mmap-based options are not currently supported in WIN32.
*/

/* #define WIN32 */
#ifdef WIN32
#define MORECORE wsbrk
#define HAVE_MMAP 0

#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H

/*
  Include 'windows.h' to get the necessary declarations for the
  Microsoft Visual C++ data structures and routines used in the 'sbrk'
  emulation.

  Define WIN32_LEAN_AND_MEAN so that only the essential Microsoft
  Visual C++ header files are included.
 */
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#endif


/*
  HAVE_MEMCPY should be defined if you are not otherwise using
  ANSI STD C, but still have memcpy and memset in your C library
  and want to use them in calloc and realloc. Otherwise simple
  macro versions are defined here.

  USE_MEMCPY should be defined as 1 if you actually want to
  have memset and memcpy called. People report that the macro
  versions are often enough faster than libc versions on many
  systems that it is better to use them.

*/

#define HAVE_MEMCPY

#ifndef USE_MEMCPY
#ifdef HAVE_MEMCPY
#define USE_MEMCPY 1
#else
#define USE_MEMCPY 0
#endif
#endif

#if (__STD_C || defined(HAVE_MEMCPY))

#if __STD_C
void* memset(void*, int, size_t);
void* memcpy(void*, const void*, size_t);
#else
#ifdef WIN32
/* On Win32 platforms, 'memset()' and 'memcpy()' are already declared in */
/* 'windows.h' */
#else
Void_t* memset();
Void_t* memcpy();
#endif
#endif
#endif

#if USE_MEMCPY

/* The following macros are only invoked with (2n+1)-multiples of
   INTERNAL_SIZE_T units, with a positive integer n. This is exploited
   for fast inline execution when n is small. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T mzsz = (nbytes);                                            \
  if(mzsz <= 9*sizeof(mzsz)) {                                                \
    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                         \
    if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0;                               \
				     *mz++ = 0;                               \
      if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0;                               \
				     *mz++ = 0;                               \
	if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                               \
				     *mz++ = 0; }}}                           \
				     *mz++ = 0;                               \
				     *mz++ = 0;                               \
				     *mz   = 0;                               \
  } else memset((charp), 0, mzsz);                                            \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T mcsz = (nbytes);                                            \
  if(mcsz <= 9*sizeof(mcsz)) {                                                \
    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                        \
    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                       \
    if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++;                     \
				     *mcdst++ = *mcsrc++;                     \
      if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++;                     \
				     *mcdst++ = *mcsrc++;                     \
	if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                     \
				     *mcdst++ = *mcsrc++; }}}                 \
				     *mcdst++ = *mcsrc++;                     \
				     *mcdst++ = *mcsrc++;                     \
				     *mcdst   = *mcsrc  ;                     \
  } else memcpy(dest, src, mcsz);                                             \
} while(0)
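
/*
  Illustration of how the macros above are meant to be invoked (hedged
  sketch, never compiled). Sizes are always odd multiples of
  sizeof(INTERNAL_SIZE_T), so the small-size fast path unrolls to
  exactly the right number of word stores.
*/
#if 0
static void malloc_copy_sketch(void)
{
  INTERNAL_SIZE_T src[9], dst[9];

  /* 9 units is within the inline limit: fully unrolled, no libc call */
  MALLOC_ZERO(src, 9*sizeof(INTERNAL_SIZE_T));
  MALLOC_COPY(dst, src, 9*sizeof(INTERNAL_SIZE_T));

  /* anything larger falls through to memset()/memcpy() */
}
#endif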
#else /* !USE_MEMCPY */

/* Use Duff's device for good zeroing/copying performance. */

#define MALLOC_ZERO(charp, nbytes)                                            \
do {                                                                          \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mzp++ = 0;                                             \
    case 7:           *mzp++ = 0;                                             \
    case 6:           *mzp++ = 0;                                             \
    case 5:           *mzp++ = 0;                                             \
    case 4:           *mzp++ = 0;                                             \
    case 3:           *mzp++ = 0;                                             \
    case 2:           *mzp++ = 0;                                             \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
  }                                                                           \
} while(0)

#define MALLOC_COPY(dest,src,nbytes)                                          \
do {                                                                          \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
  switch (mctmp) {                                                            \
    case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
    case 7:           *mcdst++ = *mcsrc++;                                    \
    case 6:           *mcdst++ = *mcsrc++;                                    \
    case 5:           *mcdst++ = *mcsrc++;                                    \
    case 4:           *mcdst++ = *mcsrc++;                                    \
    case 3:           *mcdst++ = *mcsrc++;                                    \
    case 2:           *mcdst++ = *mcsrc++;                                    \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }      \
  }                                                                           \
} while(0)

#endif


/*
  Define HAVE_MMAP to optionally make malloc() use mmap() to
  allocate very large blocks. These will be returned to the
  operating system immediately after a free().
*/

#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif

/*
  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
  large blocks. This is currently only possible on Linux with
  kernel versions newer than 1.3.77.
*/

#ifndef HAVE_MREMAP
#ifdef INTERNAL_LINUX_C_LIB
#define HAVE_MREMAP 1
#else
#define HAVE_MREMAP 0
#endif
#endif

#if HAVE_MMAP

#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif

#endif /* HAVE_MMAP */

/*
  Access to system page size. To the extent possible, this malloc
  manages memory from the system in page-size units.
  The following mechanics for getpagesize were adapted from
  bsd/gnu getpagesize.h
*/

#ifndef LACKS_UNISTD_H
#  include <unistd.h>
#endif

#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32
#        define malloc_getpagesize (4096) /* TBD: Use 'GetSystemInfo' instead */
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else
#                define malloc_getpagesize (4096) /* just guess */
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif


/*

  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing the same kind of
  information you can get from malloc_stats. It should work on
  any SVID/XPG compliant system that has a /usr/include/malloc.h
  defining struct mallinfo. (If you'd like to install such a thing
  yourself, cut out the preliminary declarations as described above
  and below and save them in a malloc.h file. But there's no
  compelling reason to bother to do this.)

  The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
  bunch of fields, most of which are not even meaningful in this
  version of malloc. Some of these fields are instead filled by
  mallinfo() with other numbers that might possibly be of interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo. If so, it is included; else an SVID2/XPG2 compliant
  version is declared below. These must be precisely the same for
  mallinfo() to work.

*/
/* #define HAVE_USR_INCLUDE_MALLOC_H */

#if HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else

/* SVID2/XPG mallinfo structure */

struct mallinfo {
  int arena;    /* total space allocated from system */
  int ordblks;  /* number of non-inuse chunks */
  int smblks;   /* unused -- always zero */
  int hblks;    /* number of mmapped regions */
  int hblkhd;   /* total space in mmapped regions */
  int usmblks;  /* unused -- always zero */
  int fsmblks;  /* unused -- always zero */
  int uordblks; /* total allocated space */
  int fordblks; /* total non-inuse space */
  int keepcost; /* top-most, releasable (via malloc_trim) space */
};

/* SVID2/XPG mallopt options */

#define M_MXFAST  1    /* UNUSED in this malloc */
#define M_NLBLKS  2    /* UNUSED in this malloc */
#define M_GRAIN   3    /* UNUSED in this malloc */
#define M_KEEP    4    /* UNUSED in this malloc */

#endif

/* mallopt options that actually do something */

#define M_TRIM_THRESHOLD    -1
#define M_TOP_PAD           -2
#define M_MMAP_THRESHOLD    -3
#define M_MMAP_MAX          -4


#ifndef DEFAULT_TRIM_THRESHOLD
#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
#endif

/*
    M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
    to keep before releasing via malloc_trim in free().

    Automatic trimming is mainly useful in long-lived programs.
    Because trimming via sbrk can be slow on some systems, and can
    sometimes be wasteful (in cases where programs immediately
    afterward allocate more large chunks) the value should be high
    enough so that your overall system performance would improve by
    releasing.

    The trim threshold and the mmap control parameters (see below)
    can be traded off with one another. Trimming and mmapping are
    two different ways of releasing unused memory back to the
    system. Between these two, it is often possible to keep
    system-level demands of a long-lived program down to a bare
    minimum. For example, in one test suite of sessions measuring
    the XF86 X server on Linux, using a trim threshold of 128K and a
    mmap threshold of 192K led to near-minimal long term resource
    consumption.

    If you are using this malloc in a long-lived program, it should
    pay to experiment with these values. As a rough guide, you
    might set to a value close to the average size of a process
    (program) running on your system. Releasing this much memory
    would allow such a process to run in memory. Generally, it's
    worth it to tune for trimming rather than memory mapping when a
    program undergoes phases where several large chunks are
    allocated and released in ways that can reuse each other's
    storage, perhaps mixed with phases where there are no such
    chunks at all. And in well-behaved long-lived programs,
    controlling release of large blocks via trimming versus mapping
    is usually faster.

    However, in most programs, these parameters serve mainly as
    protection against the system-level effects of carrying around
    massive amounts of unneeded memory. Since frequent calls to
    sbrk, mmap, and munmap otherwise degrade performance, the default
    parameters are set to relatively high values that serve only as
    safeguards.

    The default trim value is high enough to cause trimming only in
    fairly extreme (by current memory consumption standards) cases.
    It must be greater than page size to have any useful effect. To
    disable trimming completely, you can set it to (unsigned long)(-1).

*/
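
/*
  A hedged sketch of how the statistics and tunables above are used at
  run time (illustrative only; the function name is hypothetical):
*/
#if 0
static void malloc_tuning_sketch(void)
{
  struct mallinfo mi;

  /* trim back to the system once 256K of top-most memory is unused */
  mallopt(M_TRIM_THRESHOLD, 256 * 1024);

  mi = mallinfo();
  printf("arena=%d inuse=%d free=%d releasable=%d\n",
	 mi.arena, mi.uordblks, mi.fordblks, mi.keepcost);
}
#endif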
#ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD (0)
#endif

/*
    M_TOP_PAD is the amount of extra `padding' space to allocate or
    retain whenever sbrk is called. It is used in two ways internally:

    * When sbrk is called to extend the top of the arena to satisfy
      a new malloc request, this much padding is added to the sbrk
      request.

    * When malloc_trim is called automatically from free(),
      it is used as the `pad' argument.

    In both cases, the actual amount of padding is rounded
    so that the end of the arena is always a system page boundary.

    The main reason for using padding is to avoid calling sbrk so
    often. Having even a small pad greatly reduces the likelihood
    that nearly every malloc request during program start-up (or
    after trimming) will invoke sbrk, which needlessly wastes
    time.

    Automatic rounding-up to page-size units is normally sufficient
    to avoid measurable overhead, so the default is 0. However, in
    systems where sbrk is relatively slow, it can pay to increase
    this value, at the expense of carrying around more memory than
    the program needs.

*/


#ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
#endif

/*

    M_MMAP_THRESHOLD is the request size threshold for using mmap()
    to service a request. Requests of at least this size that cannot
    be allocated using already-existing space will be serviced via mmap.
    (If enough normal freed space already exists it is used instead.)

    Using mmap segregates relatively large chunks of memory so that
    they can be individually obtained and released from the host
    system. A request serviced through mmap is never reused by any
    other request (at least not directly; the system may just so
    happen to remap successive requests to the same locations).

    Segregating space in this way has the benefit that mmapped space
    can ALWAYS be individually released back to the system, which
    helps keep the system level memory demands of a long-lived
    program low. Mapped memory can never become `locked' between
    other chunks, as can happen with normally allocated chunks, which
    means that even trimming via malloc_trim would not release them.

    However, it has the disadvantages that:

      1. The space cannot be reclaimed, consolidated, and then
	 used to service later requests, as happens with normal chunks.
      2. It can lead to more wastage because of mmap page alignment
	 requirements.
      3. It causes malloc performance to be more dependent on host
	 system memory management support routines which may vary in
	 implementation quality and may impose arbitrary
	 limitations. Generally, servicing a request via normal
	 malloc steps is faster than going through a system's mmap.

    All together, these considerations should lead you to use mmap
    only for relatively large requests.


*/


#ifndef DEFAULT_MMAP_MAX
#if HAVE_MMAP
#define DEFAULT_MMAP_MAX (64)
#else
#define DEFAULT_MMAP_MAX (0)
#endif
#endif

/*
    M_MMAP_MAX is the maximum number of requests to simultaneously
    service using mmap. This parameter exists because:

      1. Some systems have a limited number of internal tables for
	 use by mmap.
      2. In most systems, overreliance on mmap can degrade overall
	 performance.
      3. If a program allocates many large regions, it is probably
	 better off using normal sbrk-based allocation routines that
	 can reclaim and reallocate normal heap memory. Using a
	 small value allows transition into this mode after the
	 first few allocations.

    Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
    the default value is 0, and attempts to set it to non-zero values
    in mallopt will fail.
*/
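
/*
  A hedged sketch of the mmap tunables above (illustrative only;
  assumes HAVE_MMAP and the default thresholds):
*/
#if 0
static void mmap_tuning_sketch(void)
{
  /* 256K >= DEFAULT_MMAP_THRESHOLD: serviced via mmap() if possible */
  Void_t *big = malloc(256 * 1024);

  /* an mmapped chunk goes back to the OS immediately via munmap() */
  free(big);

  /* setting M_MMAP_MAX to 0 disables all use of mmap */
  mallopt(M_MMAP_MAX, 0);
}
#endif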
/*
    USE_DL_PREFIX will prefix all public routines with the string 'dl'.
    Useful to quickly avoid procedure declaration conflicts and linker
    symbol conflicts with existing memory allocation routines.

*/

/* #define USE_DL_PREFIX */


/*

  Special defines for linux libc

  Except when compiled using these special defines for Linux libc
  using weak aliases, this malloc is NOT designed to work in
  multithreaded applications. No semaphores or other concurrency
  control are provided to ensure that multiple malloc or free calls
  don't run at the same time, which could be disastrous. A single
  semaphore could be used across malloc, realloc, and free (which is
  essentially the effect of the linux weak alias approach). It would
  be hard to obtain finer granularity.

*/


#ifdef INTERNAL_LINUX_C_LIB

#if __STD_C

Void_t * __default_morecore_init (ptrdiff_t);
Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;

#else

Void_t * __default_morecore_init ();
Void_t *(*__morecore)() = __default_morecore_init;

#endif

#define MORECORE (*__morecore)
#define MORECORE_FAILURE 0
#define MORECORE_CLEARS 1

#else /* INTERNAL_LINUX_C_LIB */

#if __STD_C
extern Void_t*     sbrk(ptrdiff_t);
#else
extern Void_t*     sbrk();
#endif

#ifndef MORECORE
#define MORECORE sbrk
#endif

#ifndef MORECORE_FAILURE
#define MORECORE_FAILURE -1
#endif

#ifndef MORECORE_CLEARS
#define MORECORE_CLEARS 1
#endif

#endif /* INTERNAL_LINUX_C_LIB */

#if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)

#define cALLOc		__libc_calloc
#define fREe		__libc_free
#define mALLOc		__libc_malloc
#define mEMALIGn	__libc_memalign
#define rEALLOc		__libc_realloc
#define vALLOc		__libc_valloc
#define pvALLOc		__libc_pvalloc
#define mALLINFo	__libc_mallinfo
#define mALLOPt		__libc_mallopt

#pragma weak calloc = __libc_calloc
#pragma weak free = __libc_free
#pragma weak cfree = __libc_free
#pragma weak malloc = __libc_malloc
#pragma weak memalign = __libc_memalign
#pragma weak realloc = __libc_realloc
#pragma weak valloc = __libc_valloc
#pragma weak pvalloc = __libc_pvalloc
#pragma weak mallinfo = __libc_mallinfo
#pragma weak mallopt = __libc_mallopt

#else

#ifdef USE_DL_PREFIX
#define cALLOc		dlcalloc
#define fREe		dlfree
#define mALLOc		dlmalloc
#define mEMALIGn	dlmemalign
#define rEALLOc		dlrealloc
#define vALLOc		dlvalloc
#define pvALLOc		dlpvalloc
#define mALLINFo	dlmallinfo
#define mALLOPt		dlmallopt
#else /* USE_DL_PREFIX */
#define cALLOc		calloc
#define fREe		free
#define mALLOc		malloc
#define mEMALIGn	memalign
#define rEALLOc		realloc
#define vALLOc		valloc
#define pvALLOc		pvalloc
#define mALLINFo	mallinfo
#define mALLOPt		mallopt
#endif /* USE_DL_PREFIX */

#endif
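
/*
  A hedged illustration of the name mangling above: if USE_DL_PREFIX
  were defined, the public entry points become dl-prefixed, so this
  allocator can coexist with a system allocator (sketch only):
*/
#if 0
static void prefix_sketch(void)
{
  Void_t *p = dlmalloc(64);   /* this allocator */
  Void_t *q = malloc(64);     /* whatever malloc the libc provides */
  dlfree(p);
  free(q);
}
#endif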
/* Public routines */

#if __STD_C

Void_t* mALLOc(size_t);
void    fREe(Void_t*);
Void_t* rEALLOc(Void_t*, size_t);
Void_t* mEMALIGn(size_t, size_t);
Void_t* vALLOc(size_t);
Void_t* pvALLOc(size_t);
Void_t* cALLOc(size_t, size_t);
void    cfree(Void_t*);
int     malloc_trim(size_t);
size_t  malloc_usable_size(Void_t*);
void    malloc_stats();
int     mALLOPt(int, int);
struct mallinfo mALLINFo(void);
#else
Void_t* mALLOc();
void    fREe();
Void_t* rEALLOc();
Void_t* mEMALIGn();
Void_t* vALLOc();
Void_t* pvALLOc();
Void_t* cALLOc();
void    cfree();
int     malloc_trim();
size_t  malloc_usable_size();
void    malloc_stats();
int     mALLOPt();
struct mallinfo mALLINFo();
#endif


#ifdef __cplusplus
}  /* end of extern "C" */
#endif

/* ---------- To make a malloc.h, end cutting here ------------ */
#endif	/* 0 */			/* Moved to malloc.h */

#include <malloc.h>
#ifdef DEBUG
#if __STD_C
static void malloc_update_mallinfo (void);
void malloc_stats (void);
#else
static void malloc_update_mallinfo ();
void malloc_stats();
#endif
#endif	/* DEBUG */

DECLARE_GLOBAL_DATA_PTR;

/*
  Emulation of sbrk for WIN32
  All code within the ifdef WIN32 is untested by me.

  Thanks to Martin Fong and others for supplying this.
*/


#ifdef WIN32

#define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
~(malloc_getpagesize-1))
#define AlignPage64K(add) (((add) + (0x10000 - 1)) & ~(0x10000 - 1))

/* reserve 64MB to ensure large contiguous space */
#define RESERVED_SIZE (1024*1024*64)
#define NEXT_SIZE (2048*1024)
#define TOP_MEMORY ((unsigned long)2*1024*1024*1024)

struct GmListElement;
typedef struct GmListElement GmListElement;

struct GmListElement
{
	GmListElement* next;
	void* base;
};

static GmListElement* head = 0;
static unsigned int gNextAddress = 0;
static unsigned int gAddressBase = 0;
static unsigned int gAllocatedSize = 0;

static
GmListElement* makeGmListElement (void* bas)
{
	GmListElement* this;
	this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
	assert (this);
	if (this)
	{
		this->base = bas;
		this->next = head;
		head = this;
	}
	return this;
}

void gcleanup ()
{
	BOOL rval;
	assert ( (head == NULL) || (head->base == (void*)gAddressBase));
	if (gAddressBase && (gNextAddress - gAddressBase))
	{
		rval = VirtualFree ((void*)gAddressBase,
					gNextAddress - gAddressBase,
					MEM_DECOMMIT);
		assert (rval);
	}
	while (head)
	{
		GmListElement* next = head->next;
		rval = VirtualFree (head->base, 0, MEM_RELEASE);
		assert (rval);
		LocalFree (head);
		head = next;
	}
}
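
/*
  Worked example of the alignment helpers above (hedged; assumes the
  default 4096-byte malloc_getpagesize):
*/
#if 0
static void align_sketch(void)
{
	assert(AlignPage(5000)    == 8192);    /* next 4K page boundary */
	assert(AlignPage64K(5000) == 0x10000); /* next 64K boundary, the
						  VirtualAlloc() reserve
						  granularity */
}
#endif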
static
void* findRegion (void* start_address, unsigned long size)
{
	MEMORY_BASIC_INFORMATION info;
	if (size >= TOP_MEMORY) return NULL;

	while ((unsigned long)start_address + size < TOP_MEMORY)
	{
		VirtualQuery (start_address, &info, sizeof (info));
		if ((info.State == MEM_FREE) && (info.RegionSize >= size))
			return start_address;
		else
		{
			/* Requested region is not available so see if the */
			/* next region is available. Set 'start_address'   */
			/* to the next region and call 'VirtualQuery()'    */
			/* again.                                          */

			start_address = (char*)info.BaseAddress + info.RegionSize;

			/* Make sure we start looking for the next region  */
			/* on the *next* 64K boundary. Otherwise, even if  */
			/* the new region is free according to             */
			/* 'VirtualQuery()', the subsequent call to        */
			/* 'VirtualAlloc()' (which follows the call to     */
			/* this routine in 'wsbrk()') will round *down*    */
			/* the requested address to a 64K boundary which   */
			/* we already know is an address in the            */
			/* unavailable region. Thus, the subsequent call   */
			/* to 'VirtualAlloc()' will fail and bring us back */
			/* here, causing us to go into an infinite loop.   */

			start_address =
				(void *) AlignPage64K((unsigned long) start_address);
		}
	}
	return NULL;

}


void* wsbrk (long size)
{
	void* tmp;
	if (size > 0)
	{
		if (gAddressBase == 0)
		{
			gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
			gNextAddress = gAddressBase =
				(unsigned int)VirtualAlloc (NULL, gAllocatedSize,
							    MEM_RESERVE, PAGE_NOACCESS);
		} else if (AlignPage (gNextAddress + size) > (gAddressBase +
							      gAllocatedSize))
		{
			long new_size = max (NEXT_SIZE, AlignPage (size));
			void* new_address = (void*)(gAddressBase+gAllocatedSize);
			do
			{
				new_address = findRegion (new_address, new_size);

				if (new_address == 0)
					return (void*)-1;

				gAddressBase = gNextAddress =
					(unsigned int)VirtualAlloc (new_address, new_size,
								    MEM_RESERVE, PAGE_NOACCESS);
				/* repeat in case of race condition */
				/* The region that we found has been snagged */
				/* by another thread */
			}
			while (gAddressBase == 0);

			assert (new_address == (void*)gAddressBase);

			gAllocatedSize = new_size;

			if (!makeGmListElement ((void*)gAddressBase))
				return (void*)-1;
		}
		if ((size + gNextAddress) > AlignPage (gNextAddress))
		{
			void* res;
			res = VirtualAlloc ((void*)AlignPage (gNextAddress),
					    (size + gNextAddress -
					     AlignPage (gNextAddress)),
					    MEM_COMMIT, PAGE_READWRITE);
			if (res == 0)
				return (void*)-1;
		}
		tmp = (void*)gNextAddress;
		gNextAddress = (unsigned int)tmp + size;
		return tmp;
	}
	else if (size < 0)
	{
		unsigned int alignedGoal = AlignPage (gNextAddress + size);
		/* Trim by releasing the virtual memory */
		if (alignedGoal >= gAddressBase)
		{
			VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
				     MEM_DECOMMIT);
			gNextAddress = gNextAddress + size;
			return (void*)gNextAddress;
		}
		else
		{
			VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
				     MEM_DECOMMIT);
			gNextAddress = gAddressBase;
			return (void*)-1;
		}
	}
	else
	{
		return (void*)gNextAddress;
	}
}

#endif



/*
  Type declarations
*/


struct malloc_chunk
{
  INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
  INTERNAL_SIZE_T size;      /* Size in bytes, including overhead. */
  struct malloc_chunk* fd;   /* double links -- used only if free. */
  struct malloc_chunk* bk;
} __attribute__((__may_alias__)) ;

typedef struct malloc_chunk* mchunkptr;

/*

   malloc_chunk details:

    (The following includes lightly edited explanations by Colin Plumb.)

    Chunks of memory are maintained using a `boundary tag' method as
    described in e.g., Knuth or Standish. (See the paper by Paul
    Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
    survey of such techniques.) Sizes of free chunks are stored both
    in the front of each chunk and at the end. This makes
    consolidating fragmented chunks into bigger chunks very fast. The
    size fields also hold bits representing whether chunks are free or
    in use.

    An allocated chunk looks like this:


    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Size of previous chunk, if allocated            | |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             User data starts here...                          .
	    .                                                               .
	    .             (malloc_usable_space() bytes)                     .
	    .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Size of chunk                                     |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+


    Where "chunk" is the front of the chunk for the purpose of most of
    the malloc code, but "mem" is the pointer that is returned to the
    user. "Nextchunk" is the beginning of the next contiguous chunk.

    Chunks always begin on even word boundaries, so the mem portion
    (which is returned to the user) is also on an even word boundary, and
    thus double-word aligned.

    Free chunks are stored in circular doubly-linked lists, and look like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Size of previous chunk                            |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Forward pointer to next chunk in list             |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Back pointer to previous chunk in list            |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
	    |             Unused space (may be 0 bytes long)                .
	    .                                                               .
	    .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
	    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    The P (PREV_INUSE) bit, stored in the unused low-order bit of the
    chunk size (which is always a multiple of two words), is an in-use
    bit for the *previous* chunk. If that bit is *clear*, then the
    word before the current chunk size contains the previous chunk
    size, and can be used to find the front of the previous chunk.
    (The very first chunk allocated always has this bit set,
    preventing access to non-existent (or non-owned) memory.)

    Note that the `foot' of the current chunk is actually represented
    as the prev_size of the NEXT chunk. (This makes it easier to
    deal with alignments etc).

    The two exceptions to all this are

     1. The special chunk `top', which doesn't bother using the
	trailing size field since there is no
	next contiguous chunk that would have to index off it. After
	initialization, `top' is forced to always exist. If it would
	become less than MINSIZE bytes long, it is replenished via
	malloc_extend_top.

     2. Chunks allocated via mmap, which have the second-lowest-order
	bit (IS_MMAPPED) set in their size fields. Because they are
	never merged or traversed from any other chunk, they have no
	foot size or inuse information.

    Available chunks are kept in any of several places (all declared below):

    * `av': An array of chunks serving as bin headers for consolidated
       chunks. Each bin is doubly linked. The bins are approximately
       proportionally (log) spaced. There are a lot of these bins
       (128). This may look excessive, but works very well in
       practice. All procedures maintain the invariant that no
       consolidated chunk physically borders another one. Chunks in
       bins are kept in size order, with ties going to the
       approximately least recently used chunk.

       The chunks in each bin are maintained in decreasing sorted order by
       size. This is irrelevant for the small bins, which all contain
       the same-sized chunks, but facilitates best-fit allocation for
       larger chunks. (These lists are just sequential. Keeping them in
       order almost never requires enough traversal to warrant using
       fancier ordered data structures.) Chunks of the same size are
       linked with the most recently freed at the front, and allocations
       are taken from the back. This results in LRU or FIFO allocation
       order, which tends to give each chunk an equal opportunity to be
       consolidated with adjacent freed chunks, resulting in larger free
       chunks and less fragmentation.

    * `top': The top-most available chunk (i.e., the one bordering the
       end of available memory) is treated specially. It is never
       included in any bin, is used only if no other chunk is
       available, and is released back to the system if it is very
       large (see M_TRIM_THRESHOLD).

    * `last_remainder': A bin holding only the remainder of the
       most recently split (non-top) chunk. This bin is checked
       before other non-fitting chunks, so as to provide better
       locality for runs of sequentially allocated chunks.

    * Implicitly, through the host system's memory mapping tables.
       If supported, requests greater than a threshold are usually
       serviced via calls to mmap, and then later released via munmap.

*/
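
/*
  A hedged sketch of the boundary-tag navigation just described, using
  only the struct fields declared above (illustrative; the real code
  uses the macros defined below):
*/
#if 0
static void boundary_tag_sketch(mchunkptr p)
{
  INTERNAL_SIZE_T sz = p->size & ~(INTERNAL_SIZE_T)0x3; /* strip status bits */
  mchunkptr next = (mchunkptr)((char*)p + sz);

  if (!(p->size & 0x1))       /* PREV_INUSE clear: prev_size is valid */
  {
    mchunkptr prev = (mchunkptr)((char*)p - p->prev_size);
    (void)prev;               /* front of the previous (free) chunk */
  }
  (void)next;                 /* the `foot' of p is next->prev_size */
}
#endif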
/*  sizes, alignments */

#define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))
#define MALLOC_ALIGNMENT       (SIZE_SZ + SIZE_SZ)
#define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)
#define MINSIZE                (sizeof(struct malloc_chunk))

/* conversion from malloc headers to user pointers, and back */

#define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))

/* pad request bytes into a usable size */

#define request2size(req) \
 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
  (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
   (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
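
/*
  Worked example of request2size (hedged; assumes a 4-byte
  INTERNAL_SIZE_T, so SIZE_SZ = 4, MALLOC_ALIGN_MASK = 7, MINSIZE = 16):

    request2size(0)  -> 16  (below MINSIZE, so the minimum is used)
    request2size(10) -> 16  ((10 + 4 + 7) & ~7)
    request2size(20) -> 24  ((20 + 4 + 7) & ~7)

  i.e. SIZE_SZ bytes of overhead are added for the size field, then
  the total is rounded up to a multiple of 8, with MINSIZE as a floor.
*/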
/* Check if m has acceptable alignment */

#define aligned_OK(m)    (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)




/*
  Physical chunk operations
*/


/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */

#define PREV_INUSE 0x1

/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */

#define IS_MMAPPED 0x2

/* Bits to mask off when extracting size */

#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)


/* Ptr to next physical malloc_chunk. */

#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))

/* Ptr to previous physical malloc_chunk */

#define prev_chunk(p)\
   ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))


/* Treat space at ptr + offset as a chunk */

#define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))




/*
  Dealing with use bits
*/

/* extract p's inuse bit */

#define inuse(p)\
((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)

/* extract inuse bit of previous chunk */

#define prev_inuse(p)  ((p)->size & PREV_INUSE)

/* check for mmap()'ed chunk */

#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)

/* set/clear chunk as in use without otherwise disturbing */

#define set_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE

#define clear_inuse(p)\
((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)

/* check/set/clear inuse bits in known places */

#define inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)

#define set_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)

#define clear_inuse_bit_at_offset(p, s)\
 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))




/*
  Dealing with size fields
*/

/* Get size, ignoring use bits */

#define chunksize(p)          ((p)->size & ~(SIZE_BITS))

/* Set size at head, without disturbing its use bit */

#define set_head_size(p, s)   ((p)->size = (((p)->size & PREV_INUSE) | (s)))

/* Set size/use ignoring previous bits in header */

#define set_head(p, s)        ((p)->size = (s))

/* Set size at footer (only when chunk is not in use) */

#define set_foot(p, s)   (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))





/*
   Bins

    The bins, `av_' are an array of pairs of pointers serving as the
    heads of (initially empty) doubly-linked lists of chunks, laid out
    in a way so that each pair can be treated as if it were in a
    malloc_chunk. (This way, the fd/bk offsets for linking bin heads
    and chunks are the same).

    Bins for sizes < 512 bytes contain chunks of all the same size, spaced
    8 bytes apart. Larger bins are approximately logarithmically
    spaced. (See the table below.) The `av_' array is never mentioned
    directly in the code, but instead via bin access macros.

    Bin layout:

    64 bins of size        8
    32 bins of size       64
    16 bins of size      512
     8 bins of size     4096
     4 bins of size    32768
     2 bins of size   262144
     1 bin  of size what's left

    There is actually a little bit of slop in the numbers in bin_index
    for the sake of speed. This makes no difference elsewhere.

    The special chunks `top' and `last_remainder' get their own bins,
    (this is implemented via yet more trickery with the av_ array),
    although `top' is never properly linked to its bin since it is
    always handled specially.

*/

#define NAV             128   /* number of bins */

typedef struct malloc_chunk* mbinptr;

/* access macros */

#define bin_at(i)      ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
#define next_bin(b)    ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
#define prev_bin(b)    ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))

/*
   The first 2 bins are never indexed. The corresponding av_ cells are instead
   used for bookkeeping. This is not to save space, but to simplify
   indexing, maintain locality, and avoid some initialization tests.
*/

#define top            (av_[2])          /* The topmost chunk */
#define last_remainder (bin_at(1))       /* remainder from last split */


/*
   Because top initially points to its own bin with initial
   zero size, thus forcing extension on the first malloc request,
   we avoid having any special code in malloc to check whether
   it even exists yet. But we still need to in malloc_extend_top.
*/

#define initial_top    ((mchunkptr)(bin_at(0)))

/* Helper macro to initialize bins */

#define IAV(i)  bin_at(i), bin_at(i)

static mbinptr av_[NAV * 2 + 2] = {
 0, 0,
 IAV(0),   IAV(1),   IAV(2),   IAV(3),   IAV(4),   IAV(5),   IAV(6),   IAV(7),
 IAV(8),   IAV(9),   IAV(10),  IAV(11),  IAV(12),  IAV(13),  IAV(14),  IAV(15),
 IAV(16),  IAV(17),  IAV(18),  IAV(19),  IAV(20),  IAV(21),  IAV(22),  IAV(23),
 IAV(24),  IAV(25),  IAV(26),  IAV(27),  IAV(28),  IAV(29),  IAV(30),  IAV(31),
 IAV(32),  IAV(33),  IAV(34),  IAV(35),  IAV(36),  IAV(37),  IAV(38),  IAV(39),
 IAV(40),  IAV(41),  IAV(42),  IAV(43),  IAV(44),  IAV(45),  IAV(46),  IAV(47),
 IAV(48),  IAV(49),  IAV(50),  IAV(51),  IAV(52),  IAV(53),  IAV(54),  IAV(55),
 IAV(56),  IAV(57),  IAV(58),  IAV(59),  IAV(60),  IAV(61),  IAV(62),  IAV(63),
 IAV(64),  IAV(65),  IAV(66),  IAV(67),  IAV(68),  IAV(69),  IAV(70),  IAV(71),
 IAV(72),  IAV(73),  IAV(74),  IAV(75),  IAV(76),  IAV(77),  IAV(78),  IAV(79),
 IAV(80),  IAV(81),  IAV(82),  IAV(83),  IAV(84),  IAV(85),  IAV(86),  IAV(87),
 IAV(88),  IAV(89),  IAV(90),  IAV(91),  IAV(92),  IAV(93),  IAV(94),  IAV(95),
 IAV(96),  IAV(97),  IAV(98),  IAV(99),  IAV(100), IAV(101), IAV(102), IAV(103),
 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
};

#ifdef CONFIG_NEEDS_MANUAL_RELOC
void malloc_bin_reloc (void)
{
	unsigned long *p = (unsigned long *)(&av_[2]);
	int i;
	for (i = 2; i < (sizeof(av_) / sizeof(mbinptr)); ++i) {
		*p++ += gd->reloc_off;
	}
}
#endif

ulong mem_malloc_start = 0;
ulong mem_malloc_end = 0;
ulong mem_malloc_brk = 0;

void *sbrk(ptrdiff_t increment)
{
	ulong old = mem_malloc_brk;
	ulong new = old + increment;

	/*
	 * If we are giving memory back, make sure we clear it out since
	 * we set MORECORE_CLEARS to 1.
	 */
	if (increment < 0)
		memset((void *)new, 0, -increment);

	if ((new < mem_malloc_start) || (new > mem_malloc_end))
		return (void *)MORECORE_FAILURE;

	mem_malloc_brk = new;

	return (void *)old;
}

void mem_malloc_init(ulong start, ulong size)
{
	mem_malloc_start = start;
	mem_malloc_end = start + size;
	mem_malloc_brk = start;

	memset((void *)mem_malloc_start, 0, size);
}
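
/*
  A hedged sketch of how board/arch initialization code is expected to
  set up this arena before the first malloc() call (the address, size,
  and function name here are illustrative assumptions):
*/
#if 0
void board_malloc_setup_sketch(void)
{
	ulong malloc_base = 0x84000000;		/* hypothetical RAM address */
	ulong malloc_len  = 1024 * 1024;	/* hypothetical 1MB arena */

	/* zeroes [base, base+len) and hands it to the allocator */
	mem_malloc_init(malloc_base, malloc_len);
}
#endif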
/* field-extraction macros */

#define first(b) ((b)->fd)
#define last(b)  ((b)->bk)

/*
  Indexing into bins
*/

#define bin_index(sz)                                                         \
(((((unsigned long)(sz)) >> 9) ==    0) ?      (((unsigned long)(sz)) >>  3):\
 ((((unsigned long)(sz)) >> 9) <=    4) ? 56 + (((unsigned long)(sz)) >>  6):\
 ((((unsigned long)(sz)) >> 9) <=   20) ? 91 + (((unsigned long)(sz)) >>  9):\
 ((((unsigned long)(sz)) >> 9) <=   84) ? 110 + (((unsigned long)(sz)) >> 12):\
 ((((unsigned long)(sz)) >> 9) <=  340) ? 119 + (((unsigned long)(sz)) >> 15):\
 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18):\
					  126)
/*
  bins for chunks < 512 are all spaced 8 bytes apart, and hold
  identically sized chunks. This is exploited in malloc.
*/

#define MAX_SMALLBIN         63
#define MAX_SMALLBIN_SIZE   512
#define SMALLBIN_WIDTH        8

#define smallbin_index(sz)  (((unsigned long)(sz)) >> 3)

/*
   Requests are `small' if both the corresponding and the next bin are small
*/

#define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)



/*
    To help compensate for the large number of bins, a one-level index
    structure is used for bin-by-bin searching. `binblocks' is a
    one-word bitvector recording whether groups of BINBLOCKWIDTH bins
    have any (possibly) non-empty bins, so they can be skipped over
    all at once during traversals. The bits are NOT always cleared as
    soon as all bins in a block are empty, but instead only
    when all are noticed to be empty during traversal in malloc.
*/
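
/*
  Worked examples of the bin indexing above (hedged, computed by hand
  from bin_index/smallbin_index):

    size     40 -> bin   5  (small bin: 40 >> 3)
    size    600 -> bin  65  (56 + (600 >> 6))
    size   5000 -> bin 100  (91 + (5000 >> 9))
    size 100000 -> bin 122  (119 + (100000 >> 15))
*/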
#define BINBLOCKWIDTH     4   /* bins per block */

#define binblocks_r     ((INTERNAL_SIZE_T)av_[1]) /* bitvector of nonempty blocks */
#define binblocks_w     (av_[1])

/* bin<->block macros */

#define idx2binblock(ix)    ((unsigned)1 << (ix / BINBLOCKWIDTH))
#define mark_binblock(ii)   (binblocks_w = (mbinptr)(binblocks_r | idx2binblock(ii)))
#define clear_binblock(ii)  (binblocks_w = (mbinptr)(binblocks_r & ~(idx2binblock(ii))))





/*  Other static bookkeeping data */

/* variables holding tunable values */

static unsigned long trim_threshold   = DEFAULT_TRIM_THRESHOLD;
static unsigned long top_pad          = DEFAULT_TOP_PAD;
static unsigned int  n_mmaps_max      = DEFAULT_MMAP_MAX;
static unsigned long mmap_threshold   = DEFAULT_MMAP_THRESHOLD;

/* The first value returned from sbrk */
static char* sbrk_base = (char*)(-1);

/* The maximum memory obtained from system via sbrk */
static unsigned long max_sbrked_mem = 0;

/* The maximum via either sbrk or mmap */
static unsigned long max_total_mem = 0;

/* internal working copy of mallinfo */
static struct mallinfo current_mallinfo = {  0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };

/* The total memory obtained from system via sbrk */
#define sbrked_mem  (current_mallinfo.arena)

/* Tracking mmaps */

#ifdef DEBUG
static unsigned int n_mmaps = 0;
#endif	/* DEBUG */
static unsigned long mmapped_mem = 0;
#if HAVE_MMAP
static unsigned int max_n_mmaps = 0;
static unsigned long max_mmapped_mem = 0;
#endif



/*
  Debugging support
*/

#ifdef DEBUG


/*
  These routines make a number of assertions about the states
  of data structures that should be true at all times. If any
  are not true, it's very likely that a user program has somehow
  trashed memory. (It's also possible that there is a coding error
  in malloc. In which case, please report it!)
*/

#if __STD_C
static void do_check_chunk(mchunkptr p)
#else
static void do_check_chunk(p) mchunkptr p;
#endif
{
#if 0 /* causes warnings because assert() is off */
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
#endif /* 0 */

  /* No checkable chunk is mmapped */
  assert(!chunk_is_mmapped(p));

  /* Check for legal address ... */
  assert((char*)p >= sbrk_base);
  if (p != top)
    assert((char*)p + sz <= (char*)top);
  else
    assert((char*)p + sz <= sbrk_base + sbrked_mem);

}


#if __STD_C
static void do_check_free_chunk(mchunkptr p)
#else
static void do_check_free_chunk(p) mchunkptr p;
#endif
{
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
#if 0 /* causes warnings because assert() is off */
  mchunkptr next = chunk_at_offset(p, sz);
#endif /* 0 */

  do_check_chunk(p);

  /* Check whether it claims to be free ... */
  assert(!inuse(p));

  /* Unless a special marker, must have OK fields */
  if ((long)sz >= (long)MINSIZE)
  {
    assert((sz & MALLOC_ALIGN_MASK) == 0);
    assert(aligned_OK(chunk2mem(p)));
    /* ... matching footer field */
    assert(next->prev_size == sz);
    /* ... and is fully consolidated */
    assert(prev_inuse(p));
    assert (next == top || inuse(next));

    /* ... and has minimally sane links */
    assert(p->fd->bk == p);
    assert(p->bk->fd == p);
  }
  else /* markers are always of size SIZE_SZ */
    assert(sz == SIZE_SZ);
}
and has minimally sane links */ 1702 assert(p->fd->bk == p); 1703 assert(p->bk->fd == p); 1704 } 1705 else /* markers are always of size SIZE_SZ */ 1706 assert(sz == SIZE_SZ); 1707 } 1708 1709 #if __STD_C 1710 static void do_check_inuse_chunk(mchunkptr p) 1711 #else 1712 static void do_check_inuse_chunk(p) mchunkptr p; 1713 #endif 1714 { 1715 mchunkptr next = next_chunk(p); 1716 do_check_chunk(p); 1717 1718 /* Check whether it claims to be in use ... */ 1719 assert(inuse(p)); 1720 1721 /* ... and is surrounded by OK chunks. 1722 Since more things can be checked with free chunks than inuse ones, 1723 if an inuse chunk borders them and debug is on, it's worth doing them. 1724 */ 1725 if (!prev_inuse(p)) 1726 { 1727 mchunkptr prv = prev_chunk(p); 1728 assert(next_chunk(prv) == p); 1729 do_check_free_chunk(prv); 1730 } 1731 if (next == top) 1732 { 1733 assert(prev_inuse(next)); 1734 assert(chunksize(next) >= MINSIZE); 1735 } 1736 else if (!inuse(next)) 1737 do_check_free_chunk(next); 1738 1739 } 1740 1741 #if __STD_C 1742 static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s) 1743 #else 1744 static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s; 1745 #endif 1746 { 1747 #if 0 /* causes warnings because assert() is off */ 1748 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE; 1749 long room = sz - s; 1750 #endif /* 0 */ 1751 1752 do_check_inuse_chunk(p); 1753 1754 /* Legal size ... */ 1755 assert((long)sz >= (long)MINSIZE); 1756 assert((sz & MALLOC_ALIGN_MASK) == 0); 1757 assert(room >= 0); 1758 assert(room < (long)MINSIZE); 1759 1760 /* ... and alignment */ 1761 assert(aligned_OK(chunk2mem(p))); 1762 1763 1764 /* ... and was allocated at front of an available chunk */ 1765 assert(prev_inuse(p)); 1766 1767 } 1768 1769 1770 #define check_free_chunk(P) do_check_free_chunk(P) 1771 #define check_inuse_chunk(P) do_check_inuse_chunk(P) 1772 #define check_chunk(P) do_check_chunk(P) 1773 #define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N) 1774 #else 1775 #define check_free_chunk(P) 1776 #define check_inuse_chunk(P) 1777 #define check_chunk(P) 1778 #define check_malloced_chunk(P,N) 1779 #endif 1780 1781 1782 1783 /* 1784 Macro-based internal utilities 1785 */ 1786 1787 1788 /* 1789 Linking chunks in bin lists. 1790 Call these only with variables, not arbitrary expressions, as arguments. 1791 */ 1792 1793 /* 1794 Place chunk p of size s in its bin, in size order, 1795 putting it ahead of others of same size. 
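   (See the function-style sketch just below.)
*/

/*
  The frontlink() macro below, rewritten as a function purely for
  readability. Illustrative only, never compiled; the live code uses
  the macro form. Small bins hold one size each, so insertion is a
  plain push at the front; big bins are kept sorted by decreasing
  size, so larger chunks are skipped over first.
*/
#if 0
static void frontlink_sketch(mchunkptr p, INTERNAL_SIZE_T s)
{
  int idx;
  mchunkptr bk, fd;

  if (s < MAX_SMALLBIN_SIZE)
  {
    idx = smallbin_index(s);
    mark_binblock(idx);
    bk = bin_at(idx);
    fd = bk->fd;
  }
  else
  {
    idx = bin_index(s);
    bk = bin_at(idx);
    fd = bk->fd;
    if (fd == bk) mark_binblock(idx);          /* bin was empty */
    else
    {
      while (fd != bk && s < chunksize(fd)) fd = fd->fd;
      bk = fd->bk;
    }
  }
  p->bk = bk;                                  /* splice p in ahead of fd */
  p->fd = fd;
  fd->bk = bk->fd = p;
}
#endif

/*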
1796 */ 1797 1798 1799 #define frontlink(P, S, IDX, BK, FD) \ 1800 { \ 1801 if (S < MAX_SMALLBIN_SIZE) \ 1802 { \ 1803 IDX = smallbin_index(S); \ 1804 mark_binblock(IDX); \ 1805 BK = bin_at(IDX); \ 1806 FD = BK->fd; \ 1807 P->bk = BK; \ 1808 P->fd = FD; \ 1809 FD->bk = BK->fd = P; \ 1810 } \ 1811 else \ 1812 { \ 1813 IDX = bin_index(S); \ 1814 BK = bin_at(IDX); \ 1815 FD = BK->fd; \ 1816 if (FD == BK) mark_binblock(IDX); \ 1817 else \ 1818 { \ 1819 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \ 1820 BK = FD->bk; \ 1821 } \ 1822 P->bk = BK; \ 1823 P->fd = FD; \ 1824 FD->bk = BK->fd = P; \ 1825 } \ 1826 } 1827 1828 1829 /* take a chunk off a list */ 1830 1831 #define unlink(P, BK, FD) \ 1832 { \ 1833 BK = P->bk; \ 1834 FD = P->fd; \ 1835 FD->bk = BK; \ 1836 BK->fd = FD; \ 1837 } \ 1838 1839 /* Place p as the last remainder */ 1840 1841 #define link_last_remainder(P) \ 1842 { \ 1843 last_remainder->fd = last_remainder->bk = P; \ 1844 P->fd = P->bk = last_remainder; \ 1845 } 1846 1847 /* Clear the last_remainder bin */ 1848 1849 #define clear_last_remainder \ 1850 (last_remainder->fd = last_remainder->bk = last_remainder) 1851 1852 1853 1854 1855 1856 /* Routines dealing with mmap(). */ 1857 1858 #if HAVE_MMAP 1859 1860 #if __STD_C 1861 static mchunkptr mmap_chunk(size_t size) 1862 #else 1863 static mchunkptr mmap_chunk(size) size_t size; 1864 #endif 1865 { 1866 size_t page_mask = malloc_getpagesize - 1; 1867 mchunkptr p; 1868 1869 #ifndef MAP_ANONYMOUS 1870 static int fd = -1; 1871 #endif 1872 1873 if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */ 1874 1875 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because 1876 * there is no following chunk whose prev_size field could be used. 1877 */ 1878 size = (size + SIZE_SZ + page_mask) & ~page_mask; 1879 1880 #ifdef MAP_ANONYMOUS 1881 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, 1882 MAP_PRIVATE|MAP_ANONYMOUS, -1, 0); 1883 #else /* !MAP_ANONYMOUS */ 1884 if (fd < 0) 1885 { 1886 fd = open("/dev/zero", O_RDWR); 1887 if(fd < 0) return 0; 1888 } 1889 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0); 1890 #endif 1891 1892 if(p == (mchunkptr)-1) return 0; 1893 1894 n_mmaps++; 1895 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps; 1896 1897 /* We demand that eight bytes into a page must be 8-byte aligned. */ 1898 assert(aligned_OK(chunk2mem(p))); 1899 1900 /* The offset to the start of the mmapped region is stored 1901 * in the prev_size field of the chunk; normally it is zero, 1902 * but that can be changed in memalign(). 1903 */ 1904 p->prev_size = 0; 1905 set_head(p, size|IS_MMAPPED); 1906 1907 mmapped_mem += size; 1908 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem) 1909 max_mmapped_mem = mmapped_mem; 1910 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem) 1911 max_total_mem = mmapped_mem + sbrked_mem; 1912 return p; 1913 } 1914 1915 #if __STD_C 1916 static void munmap_chunk(mchunkptr p) 1917 #else 1918 static void munmap_chunk(p) mchunkptr p; 1919 #endif 1920 { 1921 INTERNAL_SIZE_T size = chunksize(p); 1922 int ret; 1923 1924 assert (chunk_is_mmapped(p)); 1925 assert(! 
((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem)); 1926 assert((n_mmaps > 0)); 1927 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0); 1928 1929 n_mmaps--; 1930 mmapped_mem -= (size + p->prev_size); 1931 1932 ret = munmap((char *)p - p->prev_size, size + p->prev_size); 1933 1934 /* munmap returns non-zero on failure */ 1935 assert(ret == 0); 1936 } 1937 1938 #if HAVE_MREMAP 1939 1940 #if __STD_C 1941 static mchunkptr mremap_chunk(mchunkptr p, size_t new_size) 1942 #else 1943 static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size; 1944 #endif 1945 { 1946 size_t page_mask = malloc_getpagesize - 1; 1947 INTERNAL_SIZE_T offset = p->prev_size; 1948 INTERNAL_SIZE_T size = chunksize(p); 1949 char *cp; 1950 1951 assert (chunk_is_mmapped(p)); 1952 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem)); 1953 assert((n_mmaps > 0)); 1954 assert(((size + offset) & (malloc_getpagesize-1)) == 0); 1955 1956 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */ 1957 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask; 1958 1959 cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1); 1960 1961 if (cp == (char *)-1) return 0; 1962 1963 p = (mchunkptr)(cp + offset); 1964 1965 assert(aligned_OK(chunk2mem(p))); 1966 1967 assert((p->prev_size == offset)); 1968 set_head(p, (new_size - offset)|IS_MMAPPED); 1969 1970 mmapped_mem -= size + offset; 1971 mmapped_mem += new_size; 1972 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem) 1973 max_mmapped_mem = mmapped_mem; 1974 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem) 1975 max_total_mem = mmapped_mem + sbrked_mem; 1976 return p; 1977 } 1978 1979 #endif /* HAVE_MREMAP */ 1980 1981 #endif /* HAVE_MMAP */ 1982 1983 1984 1985 1986 /* 1987 Extend the top-most chunk by obtaining memory from system. 1988 Main interface to sbrk (but see also malloc_trim). 1989 */ 1990 1991 #if __STD_C 1992 static void malloc_extend_top(INTERNAL_SIZE_T nb) 1993 #else 1994 static void malloc_extend_top(nb) INTERNAL_SIZE_T nb; 1995 #endif 1996 { 1997 char* brk; /* return value from sbrk */ 1998 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */ 1999 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */ 2000 char* new_brk; /* return of 2nd sbrk call */ 2001 INTERNAL_SIZE_T top_size; /* new size of top chunk */ 2002 2003 mchunkptr old_top = top; /* Record state of old top */ 2004 INTERNAL_SIZE_T old_top_size = chunksize(old_top); 2005 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size)); 2006 2007 /* Pad request with top_pad plus minimal overhead */ 2008 2009 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE; 2010 unsigned long pagesz = malloc_getpagesize; 2011 2012 /* If not the first time through, round to preserve page boundary */ 2013 /* Otherwise, we need to correct to a page size below anyway. */ 2014 /* (We also correct below if an intervening foreign sbrk call.) 
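*/

/*
  The page-rounding idiom used just below, written out as a
  stand-alone helper (illustrative only, never compiled; the names
  are hypothetical and pagesz must be a power of two).
*/
#if 0
static unsigned long round_up_to_page(unsigned long n, unsigned long pagesz)
{
  /* e.g. n = 5000, pagesz = 4096  ->  returns 8192 */
  return (n + (pagesz - 1)) & ~(pagesz - 1);
}
#endif

/*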
*/ 2015 2016 if (sbrk_base != (char*)(-1)) 2017 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1); 2018 2019 brk = (char*)(MORECORE (sbrk_size)); 2020 2021 /* Fail if sbrk failed or if a foreign sbrk call killed our space */ 2022 if (brk == (char*)(MORECORE_FAILURE) || 2023 (brk < old_end && old_top != initial_top)) 2024 return; 2025 2026 sbrked_mem += sbrk_size; 2027 2028 if (brk == old_end) /* can just add bytes to current top */ 2029 { 2030 top_size = sbrk_size + old_top_size; 2031 set_head(top, top_size | PREV_INUSE); 2032 } 2033 else 2034 { 2035 if (sbrk_base == (char*)(-1)) /* First time through. Record base */ 2036 sbrk_base = brk; 2037 else /* Someone else called sbrk(). Count those bytes as sbrked_mem. */ 2038 sbrked_mem += brk - (char*)old_end; 2039 2040 /* Guarantee alignment of first new chunk made from this space */ 2041 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK; 2042 if (front_misalign > 0) 2043 { 2044 correction = (MALLOC_ALIGNMENT) - front_misalign; 2045 brk += correction; 2046 } 2047 else 2048 correction = 0; 2049 2050 /* Guarantee the next brk will be at a page boundary */ 2051 2052 correction += ((((unsigned long)(brk + sbrk_size))+(pagesz-1)) & 2053 ~(pagesz - 1)) - ((unsigned long)(brk + sbrk_size)); 2054 2055 /* Allocate correction */ 2056 new_brk = (char*)(MORECORE (correction)); 2057 if (new_brk == (char*)(MORECORE_FAILURE)) return; 2058 2059 sbrked_mem += correction; 2060 2061 top = (mchunkptr)brk; 2062 top_size = new_brk - brk + correction; 2063 set_head(top, top_size | PREV_INUSE); 2064 2065 if (old_top != initial_top) 2066 { 2067 2068 /* There must have been an intervening foreign sbrk call. */ 2069 /* A double fencepost is necessary to prevent consolidation */ 2070 2071 /* If not enough space to do this, then user did something very wrong */ 2072 if (old_top_size < MINSIZE) 2073 { 2074 set_head(top, PREV_INUSE); /* will force null return from malloc */ 2075 return; 2076 } 2077 2078 /* Also keep size a multiple of MALLOC_ALIGNMENT */ 2079 old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK; 2080 set_head_size(old_top, old_top_size); 2081 chunk_at_offset(old_top, old_top_size )->size = 2082 SIZE_SZ|PREV_INUSE; 2083 chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size = 2084 SIZE_SZ|PREV_INUSE; 2085 /* If possible, release the rest. */ 2086 if (old_top_size >= MINSIZE) 2087 fREe(chunk2mem(old_top)); 2088 } 2089 } 2090 2091 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem) 2092 max_sbrked_mem = sbrked_mem; 2093 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem) 2094 max_total_mem = mmapped_mem + sbrked_mem; 2095 2096 /* We always land on a page boundary */ 2097 assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0); 2098 } 2099 2100 2101 2102 2103 /* Main public routines */ 2104 2105 2106 /* 2107 Malloc Algorithm: 2108 2109 The requested size is first converted into a usable form, `nb'. 2110 This currently means to add 4 bytes overhead plus possibly more to 2111 obtain 8-byte alignment and/or to obtain a size of at least 2112 MINSIZE (currently 16 bytes), the smallest allocatable size. 2113 (All fits are considered `exact' if they are within MINSIZE bytes.) 2114 2115 From there, the first of the following steps that succeeds is taken: 2116 2117 1. The bin corresponding to the request size is scanned, and if 2118 a chunk of exactly the right size is found, it is taken. 2119 2120 2. The most recently remaindered chunk is used if it is big 2121 enough.
This is a form of (roving) first fit, used only in 2122 the absence of exact fits. Runs of consecutive requests use 2123 the remainder of the chunk used for the previous such request 2124 whenever possible. This limited use of a first-fit style 2125 allocation strategy tends to give contiguous chunks 2126 coextensive lifetimes, which improves locality and can reduce 2127 fragmentation in the long run. 2128 2129 3. Other bins are scanned in increasing size order, using a 2130 chunk big enough to fulfill the request, and splitting off 2131 any remainder. This search is strictly by best-fit; i.e., 2132 the smallest (with ties going to approximately the least 2133 recently used) chunk that fits is selected. 2134 2135 4. If large enough, the chunk bordering the end of memory 2136 (`top') is split off. (This use of `top' is in accord with 2137 the best-fit search rule. In effect, `top' is treated as 2138 larger (and thus less well fitting) than any other available 2139 chunk since it can be extended to be as large as necessary 2140 (up to system limitations).) 2141 2142 5. If the request size meets the mmap threshold and the 2143 system supports mmap, and there are few enough currently 2144 allocated mmapped regions, and a call to mmap succeeds, 2145 the request is allocated via direct memory mapping. 2146 2147 6. Otherwise, the top of memory is extended by 2148 obtaining more space from the system (normally using sbrk, 2149 but definable to anything else via the MORECORE macro). 2150 Memory is gathered from the system (in system page-sized 2151 units) in a way that allows chunks obtained across different 2152 sbrk calls to be consolidated, but does not require 2153 contiguous memory. Thus, it should be safe to intersperse 2154 mallocs with other sbrk calls. 2155 2156 2157 All allocations are made from the `lowest' part of any found 2158 chunk. (The implementation invariant is that prev_inuse is 2159 always true of any allocated chunk; i.e., that each allocated 2160 chunk borders either a previously allocated and still in-use chunk, 2161 or the base of its memory arena.) 2162 2163 */ 2164 2165 #if __STD_C 2166 Void_t* mALLOc(size_t bytes) 2167 #else 2168 Void_t* mALLOc(bytes) size_t bytes; 2169 #endif 2170 { 2171 mchunkptr victim; /* inspected/selected chunk */ 2172 INTERNAL_SIZE_T victim_size; /* its size */ 2173 int idx; /* index for bin traversal */ 2174 mbinptr bin; /* associated bin */ 2175 mchunkptr remainder; /* remainder from a split */ 2176 long remainder_size; /* its size */ 2177 int remainder_index; /* its bin index */ 2178 unsigned long block; /* block traverser bit */ 2179 int startidx; /* first bin of a traversed block */ 2180 mchunkptr fwd; /* misc temp for linking */ 2181 mchunkptr bck; /* misc temp for linking */ 2182 mbinptr q; /* misc temp */ 2183 2184 INTERNAL_SIZE_T nb; 2185 2186 /* check if mem_malloc_init() was run */ 2187 if ((mem_malloc_start == 0) && (mem_malloc_end == 0)) { 2188 /* not initialized yet */ 2189 return 0; 2190 } 2191 2192 if ((long)bytes < 0) return 0; 2193 2194 nb = request2size(bytes); /* padded request size */ 2195 2196 /* Check for exact match in a bin */ 2197 2198 if (is_small_request(nb)) /* Faster version for small requests */ 2199 { 2200 idx = smallbin_index(nb); 2201 2202 /* No traversal or size check necessary for small bins.
*/ 2203 2204 q = bin_at(idx); 2205 victim = last(q); 2206 2207 /* Also scan the next one, since it would have a remainder < MINSIZE */ 2208 if (victim == q) 2209 { 2210 q = next_bin(q); 2211 victim = last(q); 2212 } 2213 if (victim != q) 2214 { 2215 victim_size = chunksize(victim); 2216 unlink(victim, bck, fwd); 2217 set_inuse_bit_at_offset(victim, victim_size); 2218 check_malloced_chunk(victim, nb); 2219 return chunk2mem(victim); 2220 } 2221 2222 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */ 2223 2224 } 2225 else 2226 { 2227 idx = bin_index(nb); 2228 bin = bin_at(idx); 2229 2230 for (victim = last(bin); victim != bin; victim = victim->bk) 2231 { 2232 victim_size = chunksize(victim); 2233 remainder_size = victim_size - nb; 2234 2235 if (remainder_size >= (long)MINSIZE) /* too big */ 2236 { 2237 --idx; /* adjust to rescan below after checking last remainder */ 2238 break; 2239 } 2240 2241 else if (remainder_size >= 0) /* exact fit */ 2242 { 2243 unlink(victim, bck, fwd); 2244 set_inuse_bit_at_offset(victim, victim_size); 2245 check_malloced_chunk(victim, nb); 2246 return chunk2mem(victim); 2247 } 2248 } 2249 2250 ++idx; 2251 2252 } 2253 2254 /* Try to use the last split-off remainder */ 2255 2256 if ( (victim = last_remainder->fd) != last_remainder) 2257 { 2258 victim_size = chunksize(victim); 2259 remainder_size = victim_size - nb; 2260 2261 if (remainder_size >= (long)MINSIZE) /* re-split */ 2262 { 2263 remainder = chunk_at_offset(victim, nb); 2264 set_head(victim, nb | PREV_INUSE); 2265 link_last_remainder(remainder); 2266 set_head(remainder, remainder_size | PREV_INUSE); 2267 set_foot(remainder, remainder_size); 2268 check_malloced_chunk(victim, nb); 2269 return chunk2mem(victim); 2270 } 2271 2272 clear_last_remainder; 2273 2274 if (remainder_size >= 0) /* exhaust */ 2275 { 2276 set_inuse_bit_at_offset(victim, victim_size); 2277 check_malloced_chunk(victim, nb); 2278 return chunk2mem(victim); 2279 } 2280 2281 /* Else place in bin */ 2282 2283 frontlink(victim, victim_size, remainder_index, bck, fwd); 2284 } 2285 2286 /* 2287 If there are any possibly nonempty big-enough blocks, 2288 search for best fitting chunk by scanning bins in blockwidth units. 2289 */ 2290 2291 if ( (block = idx2binblock(idx)) <= binblocks_r) 2292 { 2293 2294 /* Get to the first marked block */ 2295 2296 if ( (block & binblocks_r) == 0) 2297 { 2298 /* force to an even block boundary */ 2299 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH; 2300 block <<= 1; 2301 while ((block & binblocks_r) == 0) 2302 { 2303 idx += BINBLOCKWIDTH; 2304 block <<= 1; 2305 } 2306 } 2307 2308 /* For each possibly nonempty block ... */ 2309 for (;;) 2310 { 2311 startidx = idx; /* (track incomplete blocks) */ 2312 q = bin = bin_at(idx); 2313 2314 /* For each bin in this block ... */ 2315 do 2316 { 2317 /* Find and use first big enough chunk ... 
*/ 2318 2319 for (victim = last(bin); victim != bin; victim = victim->bk) 2320 { 2321 victim_size = chunksize(victim); 2322 remainder_size = victim_size - nb; 2323 2324 if (remainder_size >= (long)MINSIZE) /* split */ 2325 { 2326 remainder = chunk_at_offset(victim, nb); 2327 set_head(victim, nb | PREV_INUSE); 2328 unlink(victim, bck, fwd); 2329 link_last_remainder(remainder); 2330 set_head(remainder, remainder_size | PREV_INUSE); 2331 set_foot(remainder, remainder_size); 2332 check_malloced_chunk(victim, nb); 2333 return chunk2mem(victim); 2334 } 2335 2336 else if (remainder_size >= 0) /* take */ 2337 { 2338 set_inuse_bit_at_offset(victim, victim_size); 2339 unlink(victim, bck, fwd); 2340 check_malloced_chunk(victim, nb); 2341 return chunk2mem(victim); 2342 } 2343 2344 } 2345 2346 bin = next_bin(bin); 2347 2348 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0); 2349 2350 /* Clear out the block bit. */ 2351 2352 do /* Possibly backtrack to try to clear a partial block */ 2353 { 2354 if ((startidx & (BINBLOCKWIDTH - 1)) == 0) 2355 { 2356 av_[1] = (mbinptr)(binblocks_r & ~block); 2357 break; 2358 } 2359 --startidx; 2360 q = prev_bin(q); 2361 } while (first(q) == q); 2362 2363 /* Get to the next possibly nonempty block */ 2364 2365 if ( (block <<= 1) <= binblocks_r && (block != 0) ) 2366 { 2367 while ((block & binblocks_r) == 0) 2368 { 2369 idx += BINBLOCKWIDTH; 2370 block <<= 1; 2371 } 2372 } 2373 else 2374 break; 2375 } 2376 } 2377 2378 2379 /* Try to use top chunk */ 2380 2381 /* Require that there be a remainder, ensuring top always exists */ 2382 if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE) 2383 { 2384 2385 #if HAVE_MMAP 2386 /* If big and would otherwise need to extend, try to use mmap instead */ 2387 if ((unsigned long)nb >= (unsigned long)mmap_threshold && 2388 (victim = mmap_chunk(nb)) != 0) 2389 return chunk2mem(victim); 2390 #endif 2391 2392 /* Try to extend */ 2393 malloc_extend_top(nb); 2394 if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE) 2395 return 0; /* propagate failure */ 2396 } 2397 2398 victim = top; 2399 set_head(victim, nb | PREV_INUSE); 2400 top = chunk_at_offset(victim, nb); 2401 set_head(top, remainder_size | PREV_INUSE); 2402 check_malloced_chunk(victim, nb); 2403 return chunk2mem(victim); 2404 2405 } 2406 2407 2408 2409 2410 /* 2411 2412 free() algorithm: 2413 2414 cases: 2415 2416 1. free(0) has no effect. 2417 2418 2. If the chunk was allocated via mmap, it is released via munmap(). 2419 2420 3. If a returned chunk borders the current high end of memory, 2421 it is consolidated into the top, and if the total unused 2422 topmost memory exceeds the trim threshold, malloc_trim is 2423 called. 2424 2425 4. Other chunks are consolidated as they arrive, and 2426 placed in corresponding bins. (This includes the case of 2427 consolidating with the current `last_remainder').
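*/

/*
  Hypothetical illustration of cases 3 and 4 above (not part of the
  allocator; sizes are made up and assume the default trim_threshold).
*/
#if 0
static void free_example(void)
{
  void *a = malloc(64);          /* small chunk, ends up in a bin */
  void *b = malloc(512 * 1024);  /* large chunk, typically carved from top */

  free(b);  /* case 3: borders top, so it is merged back into top;
               if the unused top now exceeds trim_threshold, free()
               calls malloc_trim(top_pad) */
  free(a);  /* case 4: consolidated with free neighbors, then binned */
}
#endif

/*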
2428 2429 */ 2430 2431 2432 #if __STD_C 2433 void fREe(Void_t* mem) 2434 #else 2435 void fREe(mem) Void_t* mem; 2436 #endif 2437 { 2438 mchunkptr p; /* chunk corresponding to mem */ 2439 INTERNAL_SIZE_T hd; /* its head field */ 2440 INTERNAL_SIZE_T sz; /* its size */ 2441 int idx; /* its bin index */ 2442 mchunkptr next; /* next contiguous chunk */ 2443 INTERNAL_SIZE_T nextsz; /* its size */ 2444 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */ 2445 mchunkptr bck; /* misc temp for linking */ 2446 mchunkptr fwd; /* misc temp for linking */ 2447 int islr; /* track whether merging with last_remainder */ 2448 2449 if (mem == 0) /* free(0) has no effect */ 2450 return; 2451 2452 p = mem2chunk(mem); 2453 hd = p->size; 2454 2455 #if HAVE_MMAP 2456 if (hd & IS_MMAPPED) /* release mmapped memory. */ 2457 { 2458 munmap_chunk(p); 2459 return; 2460 } 2461 #endif 2462 2463 check_inuse_chunk(p); 2464 2465 sz = hd & ~PREV_INUSE; 2466 next = chunk_at_offset(p, sz); 2467 nextsz = chunksize(next); 2468 2469 if (next == top) /* merge with top */ 2470 { 2471 sz += nextsz; 2472 2473 if (!(hd & PREV_INUSE)) /* consolidate backward */ 2474 { 2475 prevsz = p->prev_size; 2476 p = chunk_at_offset(p, -((long) prevsz)); 2477 sz += prevsz; 2478 unlink(p, bck, fwd); 2479 } 2480 2481 set_head(p, sz | PREV_INUSE); 2482 top = p; 2483 if ((unsigned long)(sz) >= (unsigned long)trim_threshold) 2484 malloc_trim(top_pad); 2485 return; 2486 } 2487 2488 set_head(next, nextsz); /* clear inuse bit */ 2489 2490 islr = 0; 2491 2492 if (!(hd & PREV_INUSE)) /* consolidate backward */ 2493 { 2494 prevsz = p->prev_size; 2495 p = chunk_at_offset(p, -((long) prevsz)); 2496 sz += prevsz; 2497 2498 if (p->fd == last_remainder) /* keep as last_remainder */ 2499 islr = 1; 2500 else 2501 unlink(p, bck, fwd); 2502 } 2503 2504 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */ 2505 { 2506 sz += nextsz; 2507 2508 if (!islr && next->fd == last_remainder) /* re-insert last_remainder */ 2509 { 2510 islr = 1; 2511 link_last_remainder(p); 2512 } 2513 else 2514 unlink(next, bck, fwd); 2515 } 2516 2517 2518 set_head(p, sz | PREV_INUSE); 2519 set_foot(p, sz); 2520 if (!islr) 2521 frontlink(p, sz, idx, bck, fwd); 2522 } 2523 2524 2525 2526 2527 2528 /* 2529 2530 Realloc algorithm: 2531 2532 Chunks that were obtained via mmap cannot be extended or shrunk 2533 unless HAVE_MREMAP is defined, in which case mremap is used. 2534 Otherwise, if their reallocation is for additional space, they are 2535 copied. If for less, they are just left alone. 2536 2537 Otherwise, if the reallocation is for additional space, and the 2538 chunk can be extended, it is, else a malloc-copy-free sequence is 2539 taken. There are several different ways that a chunk could be 2540 extended. All are tried: 2541 2542 * Extending forward into following adjacent free chunk. 2543 * Shifting backwards, joining preceding adjacent space 2544 * Both shifting backwards and extending forward. 2545 * Extending into newly sbrked space 2546 2547 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a 2548 size argument of zero (re)allocates a minimum-sized chunk. 2549 2550 If the reallocation is for less space, and the new request is for 2551 a `small' (<512 bytes) size, then the newly unused space is lopped 2552 off and freed. 2553 2554 The old unix realloc convention of allowing the last-free'd chunk 2555 to be used as an argument to realloc is no longer supported. 
2556 I don't know of any programs still relying on this feature, 2557 and allowing it would also allow too many other incorrect 2558 usages of realloc to be sensible. 2559 2560 2561 */ 2562 2563 2564 #if __STD_C 2565 Void_t* rEALLOc(Void_t* oldmem, size_t bytes) 2566 #else 2567 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes; 2568 #endif 2569 { 2570 INTERNAL_SIZE_T nb; /* padded request size */ 2571 2572 mchunkptr oldp; /* chunk corresponding to oldmem */ 2573 INTERNAL_SIZE_T oldsize; /* its size */ 2574 2575 mchunkptr newp; /* chunk to return */ 2576 INTERNAL_SIZE_T newsize; /* its size */ 2577 Void_t* newmem; /* corresponding user mem */ 2578 2579 mchunkptr next; /* next contiguous chunk after oldp */ 2580 INTERNAL_SIZE_T nextsize; /* its size */ 2581 2582 mchunkptr prev; /* previous contiguous chunk before oldp */ 2583 INTERNAL_SIZE_T prevsize; /* its size */ 2584 2585 mchunkptr remainder; /* holds split off extra space from newp */ 2586 INTERNAL_SIZE_T remainder_size; /* its size */ 2587 2588 mchunkptr bck; /* misc temp for linking */ 2589 mchunkptr fwd; /* misc temp for linking */ 2590 2591 #ifdef REALLOC_ZERO_BYTES_FREES 2592 if (bytes == 0) { fREe(oldmem); return 0; } 2593 #endif 2594 2595 if ((long)bytes < 0) return 0; 2596 2597 /* realloc of null is supposed to be same as malloc */ 2598 if (oldmem == 0) return mALLOc(bytes); 2599 2600 newp = oldp = mem2chunk(oldmem); 2601 newsize = oldsize = chunksize(oldp); 2602 2603 2604 nb = request2size(bytes); 2605 2606 #if HAVE_MMAP 2607 if (chunk_is_mmapped(oldp)) 2608 { 2609 #if HAVE_MREMAP 2610 newp = mremap_chunk(oldp, nb); 2611 if(newp) return chunk2mem(newp); 2612 #endif 2613 /* Note the extra SIZE_SZ overhead. */ 2614 if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */ 2615 /* Must alloc, copy, free. */ 2616 newmem = mALLOc(bytes); 2617 if (newmem == 0) return 0; /* propagate failure */ 2618 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ); 2619 munmap_chunk(oldp); 2620 return newmem; 2621 } 2622 #endif 2623 2624 check_inuse_chunk(oldp); 2625 2626 if ((long)(oldsize) < (long)(nb)) 2627 { 2628 2629 /* Try expanding forward */ 2630 2631 next = chunk_at_offset(oldp, oldsize); 2632 if (next == top || !inuse(next)) 2633 { 2634 nextsize = chunksize(next); 2635 2636 /* Forward into top only if a remainder */ 2637 if (next == top) 2638 { 2639 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE)) 2640 { 2641 newsize += nextsize; 2642 top = chunk_at_offset(oldp, nb); 2643 set_head(top, (newsize - nb) | PREV_INUSE); 2644 set_head_size(oldp, nb); 2645 return chunk2mem(oldp); 2646 } 2647 } 2648 2649 /* Forward into next chunk */ 2650 else if (((long)(nextsize + newsize) >= (long)(nb))) 2651 { 2652 unlink(next, bck, fwd); 2653 newsize += nextsize; 2654 goto split; 2655 } 2656 } 2657 else 2658 { 2659 next = 0; 2660 nextsize = 0; 2661 } 2662 2663 /* Try shifting backwards. 
*/ 2664 2665 if (!prev_inuse(oldp)) 2666 { 2667 prev = prev_chunk(oldp); 2668 prevsize = chunksize(prev); 2669 2670 /* try forward + backward first to save a later consolidation */ 2671 2672 if (next != 0) 2673 { 2674 /* into top */ 2675 if (next == top) 2676 { 2677 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE)) 2678 { 2679 unlink(prev, bck, fwd); 2680 newp = prev; 2681 newsize += prevsize + nextsize; 2682 newmem = chunk2mem(newp); 2683 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ); 2684 top = chunk_at_offset(newp, nb); 2685 set_head(top, (newsize - nb) | PREV_INUSE); 2686 set_head_size(newp, nb); 2687 return newmem; 2688 } 2689 } 2690 2691 /* into next chunk */ 2692 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb))) 2693 { 2694 unlink(next, bck, fwd); 2695 unlink(prev, bck, fwd); 2696 newp = prev; 2697 newsize += nextsize + prevsize; 2698 newmem = chunk2mem(newp); 2699 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ); 2700 goto split; 2701 } 2702 } 2703 2704 /* backward only */ 2705 if (prev != 0 && (long)(prevsize + newsize) >= (long)nb) 2706 { 2707 unlink(prev, bck, fwd); 2708 newp = prev; 2709 newsize += prevsize; 2710 newmem = chunk2mem(newp); 2711 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ); 2712 goto split; 2713 } 2714 } 2715 2716 /* Must allocate */ 2717 2718 newmem = mALLOc (bytes); 2719 2720 if (newmem == 0) /* propagate failure */ 2721 return 0; 2722 2723 /* Avoid copy if newp is next chunk after oldp. */ 2724 /* (This can only happen when new chunk is sbrk'ed.) */ 2725 2726 if ( (newp = mem2chunk(newmem)) == next_chunk(oldp)) 2727 { 2728 newsize += chunksize(newp); 2729 newp = oldp; 2730 goto split; 2731 } 2732 2733 /* Otherwise copy, free, and exit */ 2734 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ); 2735 fREe(oldmem); 2736 return newmem; 2737 } 2738 2739 2740 split: /* split off extra room in old or expanded chunk */ 2741 2742 if (newsize - nb >= MINSIZE) /* split off remainder */ 2743 { 2744 remainder = chunk_at_offset(newp, nb); 2745 remainder_size = newsize - nb; 2746 set_head_size(newp, nb); 2747 set_head(remainder, remainder_size | PREV_INUSE); 2748 set_inuse_bit_at_offset(remainder, remainder_size); 2749 fREe(chunk2mem(remainder)); /* let free() deal with it */ 2750 } 2751 else 2752 { 2753 set_head_size(newp, newsize); 2754 set_inuse_bit_at_offset(newp, newsize); 2755 } 2756 2757 check_inuse_chunk(newp); 2758 return chunk2mem(newp); 2759 } 2760 2761 2762 2763 2764 /* 2765 2766 memalign algorithm: 2767 2768 memalign requests more than enough space from malloc, finds a spot 2769 within that chunk that meets the alignment request, and then 2770 possibly frees the leading and trailing space. 2771 2772 The alignment argument must be a power of two. This property is not 2773 checked by memalign, so misuse may result in random runtime errors. 2774 2775 8-byte alignment is guaranteed by normal malloc calls, so don't 2776 bother calling memalign with an argument of 8 or less. 2777 2778 Overreliance on memalign is a sure way to fragment space. 
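*/

/*
  Hypothetical usage (not part of this file): a page-aligned buffer,
  e.g. for a DMA descriptor ring. The alignment must be a power of
  two; the 4096/1024 figures are made up.
*/
#if 0
static void memalign_example(void)
{
  void *ring = memalign(4096, 1024);  /* 4KB-aligned, 1KB long */

  if (ring != 0)
    free(ring);                       /* memalign'd chunks free normally */
}
#endif

/*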
2779 2780 */ 2781 2782 2783 #if __STD_C 2784 Void_t* mEMALIGn(size_t alignment, size_t bytes) 2785 #else 2786 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes; 2787 #endif 2788 { 2789 INTERNAL_SIZE_T nb; /* padded request size */ 2790 char* m; /* memory returned by malloc call */ 2791 mchunkptr p; /* corresponding chunk */ 2792 char* brk; /* alignment point within p */ 2793 mchunkptr newp; /* chunk to return */ 2794 INTERNAL_SIZE_T newsize; /* its size */ 2795 INTERNAL_SIZE_T leadsize; /* leading space before alignment point */ 2796 mchunkptr remainder; /* spare room at end to split off */ 2797 long remainder_size; /* its size */ 2798 2799 if ((long)bytes < 0) return 0; 2800 2801 /* If need less alignment than we give anyway, just relay to malloc */ 2802 2803 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes); 2804 2805 /* Otherwise, ensure that it is at least a minimum chunk size */ 2806 2807 if (alignment < MINSIZE) alignment = MINSIZE; 2808 2809 /* Call malloc with worst case padding to hit alignment. */ 2810 2811 nb = request2size(bytes); 2812 m = (char*)(mALLOc(nb + alignment + MINSIZE)); 2813 2814 if (m == 0) return 0; /* propagate failure */ 2815 2816 p = mem2chunk(m); 2817 2818 if ((((unsigned long)(m)) % alignment) == 0) /* aligned */ 2819 { 2820 #if HAVE_MMAP 2821 if(chunk_is_mmapped(p)) 2822 return chunk2mem(p); /* nothing more to do */ 2823 #endif 2824 } 2825 else /* misaligned */ 2826 { 2827 /* 2828 Find an aligned spot inside chunk. 2829 Since we need to give back leading space in a chunk of at 2830 least MINSIZE, if the first calculation places us at 2831 a spot with less than MINSIZE leader, we can move to the 2832 next aligned spot -- we've allocated enough total room so that 2833 this is always possible. 2834 */ 2835 2836 brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -((signed) alignment)); 2837 if ((long)(brk - (char*)(p)) < MINSIZE) brk = brk + alignment; 2838 2839 newp = (mchunkptr)brk; 2840 leadsize = brk - (char*)(p); 2841 newsize = chunksize(p) - leadsize; 2842 2843 #if HAVE_MMAP 2844 if(chunk_is_mmapped(p)) 2845 { 2846 newp->prev_size = p->prev_size + leadsize; 2847 set_head(newp, newsize|IS_MMAPPED); 2848 return chunk2mem(newp); 2849 } 2850 #endif 2851 2852 /* give back leader, use the rest */ 2853 2854 set_head(newp, newsize | PREV_INUSE); 2855 set_inuse_bit_at_offset(newp, newsize); 2856 set_head_size(p, leadsize); 2857 fREe(chunk2mem(p)); 2858 p = newp; 2859 2860 assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0); 2861 } 2862 2863 /* Also give back spare room at the end */ 2864 2865 remainder_size = chunksize(p) - nb; 2866 2867 if (remainder_size >= (long)MINSIZE) 2868 { 2869 remainder = chunk_at_offset(p, nb); 2870 set_head(remainder, remainder_size | PREV_INUSE); 2871 set_head_size(p, nb); 2872 fREe(chunk2mem(remainder)); 2873 } 2874 2875 check_inuse_chunk(p); 2876 return chunk2mem(p); 2877 2878 } 2879 2880 2881 2882 2883 /* 2884 valloc just invokes memalign with alignment argument equal 2885 to the page size of the system (or as near to this as can 2886 be figured out from all the includes/defines above.)
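*/

/*
  Illustrative only (never compiled): what the two wrappers below
  boil down to.
*/
#if 0
static Void_t* valloc_sketch(size_t n)
{
  return memalign(malloc_getpagesize, n);
}

static Void_t* pvalloc_sketch(size_t n)
{
  size_t pagesize = malloc_getpagesize;
  return memalign(pagesize, (n + pagesize - 1) & ~(pagesize - 1));
}
#endif

/*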
2887 */ 2888 2889 #if __STD_C 2890 Void_t* vALLOc(size_t bytes) 2891 #else 2892 Void_t* vALLOc(bytes) size_t bytes; 2893 #endif 2894 { 2895 return mEMALIGn (malloc_getpagesize, bytes); 2896 } 2897 2898 /* 2899 pvalloc just invokes valloc for the nearest pagesize 2900 that will accommodate request 2901 */ 2902 2903 2904 #if __STD_C 2905 Void_t* pvALLOc(size_t bytes) 2906 #else 2907 Void_t* pvALLOc(bytes) size_t bytes; 2908 #endif 2909 { 2910 size_t pagesize = malloc_getpagesize; 2911 return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1)); 2912 } 2913 2914 /* 2915 2916 calloc calls malloc, then zeroes out the allocated chunk. 2917 2918 */ 2919 2920 #if __STD_C 2921 Void_t* cALLOc(size_t n, size_t elem_size) 2922 #else 2923 Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size; 2924 #endif 2925 { 2926 mchunkptr p; 2927 INTERNAL_SIZE_T csz; 2928 2929 INTERNAL_SIZE_T sz = n * elem_size; 2930 2931 2932 /* check if expand_top called, in which case don't need to clear */ 2933 #if MORECORE_CLEARS 2934 mchunkptr oldtop = top; 2935 INTERNAL_SIZE_T oldtopsize = chunksize(top); 2936 #endif 2937 Void_t* mem; 2938 2939 if ((long)n < 0) return 0; /* reject before allocating, so nothing can leak */ 2940 2941 mem = mALLOc (sz); 2942 2943 if (mem == 0) 2944 return 0; 2945 else 2946 { 2947 p = mem2chunk(mem); 2948 2949 /* Two optional cases in which clearing not necessary */ 2950 #if HAVE_MMAP 2951 if (chunk_is_mmapped(p)) return mem; 2952 #endif 2953 2954 csz = chunksize(p); 2955 2956 #if MORECORE_CLEARS 2957 if (p == oldtop && csz > oldtopsize) 2958 { 2959 /* clear only the bytes from non-freshly-sbrked memory */ 2960 csz = oldtopsize; 2961 } 2962 #endif 2963 2964 MALLOC_ZERO(mem, csz - SIZE_SZ); 2965 return mem; 2966 } 2967 } 2968 2969 /* 2970 2971 cfree just calls free. It is needed/defined on some systems 2972 that pair it with calloc, presumably for odd historical reasons. 2973 2974 */ 2975 2976 #if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__) 2977 #if __STD_C 2978 void cfree(Void_t *mem) 2979 #else 2980 void cfree(mem) Void_t *mem; 2981 #endif 2982 { 2983 fREe(mem); 2984 } 2985 #endif 2986 2987 2988 2989 /* 2990 2991 Malloc_trim gives memory back to the system (via negative 2992 arguments to sbrk) if there is unused memory at the `high' end of 2993 the malloc pool. You can call this after freeing large blocks of 2994 memory to potentially reduce the system-level memory requirements 2995 of a program. However, it cannot guarantee to reduce memory. Under 2996 some allocation patterns, some large free blocks of memory will be 2997 locked between two used chunks, so they cannot be given back to 2998 the system. 2999 3000 The `pad' argument to malloc_trim represents the amount of free 3001 trailing space to leave untrimmed. If this argument is zero, 3002 only the minimum amount of memory to maintain internal data 3003 structures will be left (one page or less). Non-zero arguments 3004 can be supplied to maintain enough trailing space to service 3005 future expected allocations without having to re-obtain memory 3006 from the system. 3007 3008 Malloc_trim returns 1 if it actually released any memory, else 0.
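*/

/*
  Hypothetical usage (sizes made up): release unused top-of-pool
  memory after dropping a large temporary buffer, keeping one page
  of slack for future requests.
*/
#if 0
static void trim_example(void)
{
  void *tmp = malloc(1024 * 1024);

  /* ... use tmp ... */
  free(tmp);

  /* Returns 1 only if memory was actually given back. */
  (void)malloc_trim(4096);
}
#endif

/*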
3009 3010 */ 3011 3012 #if __STD_C 3013 int malloc_trim(size_t pad) 3014 #else 3015 int malloc_trim(pad) size_t pad; 3016 #endif 3017 { 3018 long top_size; /* Amount of top-most memory */ 3019 long extra; /* Amount to release */ 3020 char* current_brk; /* address returned by pre-check sbrk call */ 3021 char* new_brk; /* address returned by negative sbrk call */ 3022 3023 unsigned long pagesz = malloc_getpagesize; 3024 3025 top_size = chunksize(top); 3026 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz; 3027 3028 if (extra < (long)pagesz) /* Not enough memory to release */ 3029 return 0; 3030 3031 else 3032 { 3033 /* Test to make sure no one else called sbrk */ 3034 current_brk = (char*)(MORECORE (0)); 3035 if (current_brk != (char*)(top) + top_size) 3036 return 0; /* Apparently we don't own memory; must fail */ 3037 3038 else 3039 { 3040 new_brk = (char*)(MORECORE (-extra)); 3041 3042 if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */ 3043 { 3044 /* Try to figure out what we have */ 3045 current_brk = (char*)(MORECORE (0)); 3046 top_size = current_brk - (char*)top; 3047 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */ 3048 { 3049 sbrked_mem = current_brk - sbrk_base; 3050 set_head(top, top_size | PREV_INUSE); 3051 } 3052 check_chunk(top); 3053 return 0; 3054 } 3055 3056 else 3057 { 3058 /* Success. Adjust top accordingly. */ 3059 set_head(top, (top_size - extra) | PREV_INUSE); 3060 sbrked_mem -= extra; 3061 check_chunk(top); 3062 return 1; 3063 } 3064 } 3065 } 3066 } 3067 3068 3069 3070 /* 3071 malloc_usable_size: 3072 3073 This routine tells you how many bytes you can actually use in an 3074 allocated chunk, which may be more than you requested (although 3075 often not). You can use this many bytes without worrying about 3076 overwriting other allocated objects. Not a particularly great 3077 programming practice, but still sometimes useful. 3078 3079 */ 3080 3081 #if __STD_C 3082 size_t malloc_usable_size(Void_t* mem) 3083 #else 3084 size_t malloc_usable_size(mem) Void_t* mem; 3085 #endif 3086 { 3087 mchunkptr p; 3088 if (mem == 0) 3089 return 0; 3090 else 3091 { 3092 p = mem2chunk(mem); 3093 if(!chunk_is_mmapped(p)) 3094 { 3095 if (!inuse(p)) return 0; 3096 check_inuse_chunk(p); 3097 return chunksize(p) - SIZE_SZ; 3098 } 3099 return chunksize(p) - 2*SIZE_SZ; 3100 } 3101 } 3102 3103 3104 3105 3106 /* Utility to update current_mallinfo for malloc_stats and mallinfo() */ 3107 3108 #ifdef DEBUG 3109 static void malloc_update_mallinfo() 3110 { 3111 int i; 3112 mbinptr b; 3113 mchunkptr p; 3114 #ifdef DEBUG 3115 mchunkptr q; 3116 #endif 3117 3118 INTERNAL_SIZE_T avail = chunksize(top); 3119 int navail = ((long)(avail) >= (long)MINSIZE)? 
1 : 0; 3120 3121 for (i = 1; i < NAV; ++i) 3122 { 3123 b = bin_at(i); 3124 for (p = last(b); p != b; p = p->bk) 3125 { 3126 #ifdef DEBUG 3127 check_free_chunk(p); 3128 for (q = next_chunk(p); 3129 q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE; 3130 q = next_chunk(q)) 3131 check_inuse_chunk(q); 3132 #endif 3133 avail += chunksize(p); 3134 navail++; 3135 } 3136 } 3137 3138 current_mallinfo.ordblks = navail; 3139 current_mallinfo.uordblks = sbrked_mem - avail; 3140 current_mallinfo.fordblks = avail; 3141 current_mallinfo.hblks = n_mmaps; 3142 current_mallinfo.hblkhd = mmapped_mem; 3143 current_mallinfo.keepcost = chunksize(top); 3144 3145 } 3146 #endif /* DEBUG */ 3147 3148 3149 3150 /* 3151 3152 malloc_stats: 3153 3154 Prints the amount of space obtained from the system (both 3155 via sbrk and mmap), the maximum amount (which may be more than 3156 current if malloc_trim and/or munmap got called), the maximum 3157 number of simultaneous mmap regions used, and the current number 3158 of bytes allocated via malloc (or realloc, etc) but not yet 3159 freed. (Note that this is the number of bytes allocated, not the 3160 number requested. It will be larger than the number requested 3161 because of alignment and bookkeeping overhead.) 3162 3163 */ 3164 3165 #ifdef DEBUG 3166 void malloc_stats() 3167 { 3168 malloc_update_mallinfo(); 3169 printf("max system bytes = %10u\n", 3170 (unsigned int)(max_total_mem)); 3171 printf("system bytes = %10u\n", 3172 (unsigned int)(sbrked_mem + mmapped_mem)); 3173 printf("in use bytes = %10u\n", 3174 (unsigned int)(current_mallinfo.uordblks + mmapped_mem)); 3175 #if HAVE_MMAP 3176 printf("max mmap regions = %10u\n", 3177 (unsigned int)max_n_mmaps); 3178 #endif 3179 } 3180 #endif /* DEBUG */ 3181 3182 /* 3183 mallinfo returns a copy of updated current mallinfo. 3184 */ 3185 3186 #ifdef DEBUG 3187 struct mallinfo mALLINFo() 3188 { 3189 malloc_update_mallinfo(); 3190 return current_mallinfo; 3191 } 3192 #endif /* DEBUG */ 3193 3194 3195 3196 3197 /* 3198 mallopt: 3199 3200 mallopt is the general SVID/XPG interface to tunable parameters. 3201 The format is to provide a (parameter-number, parameter-value) pair. 3202 mallopt then sets the corresponding parameter to the argument 3203 value if it can (i.e., so long as the value is meaningful), 3204 and returns 1 if successful else 0. 3205 3206 See descriptions of tunable parameters above. 3207 3208 */ 3209 3210 #if __STD_C 3211 int mALLOPt(int param_number, int value) 3212 #else 3213 int mALLOPt(param_number, value) int param_number; int value; 3214 #endif 3215 { 3216 switch(param_number) 3217 { 3218 case M_TRIM_THRESHOLD: 3219 trim_threshold = value; return 1; 3220 case M_TOP_PAD: 3221 top_pad = value; return 1; 3222 case M_MMAP_THRESHOLD: 3223 mmap_threshold = value; return 1; 3224 case M_MMAP_MAX: 3225 #if HAVE_MMAP 3226 n_mmaps_max = value; return 1; 3227 #else 3228 if (value != 0) return 0; else n_mmaps_max = value; return 1; 3229 #endif 3230 3231 default: 3232 return 0; 3233 } 3234 } 3235 3236 /* 3237 3238 History: 3239 3240 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee) 3241 * return null for negative arguments 3242 * Added several WIN32 cleanups from Martin C. Fong <mcfong@yahoo.com> 3243 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h' 3244 (e.g.
WIN32 platforms) 3245 * Clean up header file inclusion for WIN32 platforms 3246 * Clean up code to avoid Microsoft Visual C++ compiler complaints 3247 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing 3248 memory allocation routines 3249 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work) 3250 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to 3251 usage of 'assert' in non-WIN32 code 3252 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to 3253 avoid infinite loop 3254 * Always call 'fREe()' rather than 'free()' 3255 3256 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee) 3257 * Fixed ordering problem with boundary-stamping 3258 3259 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee) 3260 * Added pvalloc, as recommended by H.J. Liu 3261 * Added 64bit pointer support mainly from Wolfram Gloger 3262 * Added anonymously donated WIN32 sbrk emulation 3263 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen 3264 * malloc_extend_top: fix mask error that caused wastage after 3265 foreign sbrks 3266 * Add linux mremap support code from HJ Liu 3267 3268 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee) 3269 * Integrated most documentation with the code. 3270 * Add support for mmap, with help from 3271 Wolfram Gloger (Gloger@lrz.uni-muenchen.de). 3272 * Use last_remainder in more cases. 3273 * Pack bins using idea from colin@nyx10.cs.du.edu 3274 * Use ordered bins instead of best-fit threshold 3275 * Eliminate block-local decls to simplify tracing and debugging. 3276 * Support another case of realloc via move into top 3277 * Fix error occurring when initial sbrk_base not word-aligned. 3278 * Rely on page size for units instead of SBRK_UNIT to 3279 avoid surprises about sbrk alignment conventions. 3280 * Add mallinfo, mallopt. Thanks to Raymond Nijssen 3281 (raymond@es.ele.tue.nl) for the suggestion. 3282 * Add `pad' argument to malloc_trim and top_pad mallopt parameter. 3283 * More precautions for cases where other routines call sbrk, 3284 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de). 3285 * Added macros etc., allowing use in linux libc from 3286 H.J. Lu (hjl@gnu.ai.mit.edu) 3287 * Inverted this history list 3288 3289 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee) 3290 * Re-tuned and fixed to behave more nicely with V2.6.0 changes. 3291 * Removed all preallocation code since under current scheme 3292 the work required to undo bad preallocations exceeds 3293 the work saved in good cases for most test programs. 3294 * No longer use return list or unconsolidated bins since 3295 no scheme using them consistently outperforms those that don't 3296 given above changes. 3297 * Use best fit for very large chunks to prevent some worst-cases. 3298 * Added some support for debugging 3299 3300 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee) 3301 * Removed footers when chunks are in use. Thanks to 3302 Paul Wilson (wilson@cs.texas.edu) for the suggestion. 3303 3304 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee) 3305 * Added malloc_trim, with help from Wolfram Gloger 3306 (wmglo@Dent.MED.Uni-Muenchen.DE).
3307 3308 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g) 3309 3310 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g) 3311 * realloc: try to expand in both directions 3312 * malloc: swap order of clean-bin strategy; 3313 * realloc: only conditionally expand backwards 3314 * Try not to scavenge used bins 3315 * Use bin counts as a guide to preallocation 3316 * Occasionally bin return list chunks in first scan 3317 * Add a few optimizations from colin@nyx10.cs.du.edu 3318 3319 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g) 3320 * faster bin computation & slightly different binning 3321 * merged all consolidations to one part of malloc proper 3322 (eliminating old malloc_find_space & malloc_clean_bin) 3323 * Scan 2 returns chunks (not just 1) 3324 * Propagate failure in realloc if malloc returns 0 3325 * Add stuff to allow compilation on non-ANSI compilers 3326 from kpv@research.att.com 3327 3328 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu) 3329 * removed potential for odd address access in prev_chunk 3330 * removed dependency on getpagesize.h 3331 * misc cosmetics and a bit more internal documentation 3332 * anticosmetics: mangled names in macros to evade debugger strangeness 3333 * tested on sparc, hp-700, dec-mips, rs6000 3334 with gcc & native cc (hp, dec only) allowing 3335 Detlefs & Zorn comparison study (in SIGPLAN Notices.) 3336 3337 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu) 3338 * Based loosely on libg++-1.2X malloc. (It retains some of the overall 3339 structure of old version, but most details differ.) 3340 3341 */ 3342
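/*
  Hypothetical bring-up sketch (not part of this file): mALLOc()
  returns NULL until mem_malloc_init() has been called, so the pool
  must be initialized first. The address and size below are made up.
*/
#if 0
static void heap_init_example(void)
{
  void *p;

  mem_malloc_init(0x80100000UL, 0x100000UL);  /* 1 MiB pool, zeroed */

  p = malloc(128);
  free(p);
}
#endif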