
1 /* ---------- To make a malloc.h, start cutting here ------------ */
4 A version of malloc/free/realloc written by Doug Lea and released to the
10 Note: There may be an updated version of this malloc obtainable at
11 ftp://g.oswego.edu/pub/misc/malloc.c
14 * Why use this malloc?
16 This is not the fastest, most space-conserving, most portable, or
17 most tunable malloc ever written. However it is among the fastest
18 while also being among the most space-conserving, portable and tunable.
19 Consistent balance across these factors results in a good general-purpose
20 allocator. For a high-level description, see
21 http://g.oswego.edu/dl/html/malloc.html
27 malloc(size_t n);
36 the same as p. If p is null, equivalent to malloc. Unless the
38 size argument of zero (re)allocates a minimum-sized chunk.
48 Equivalent to valloc(minimum-page-that-holds(n)), that is,
56 Release all but pad bytes of freed top-most memory back
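
The corner cases spelled out above can be exercised directly. A small demonstration, assuming the default build (where realloc(p, 0) (re)allocates a minimum-sized chunk rather than freeing p) and the standard public entry-point names:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *p = malloc(0);          /* returns a unique, minimum-sized chunk  */
    void *q = realloc(NULL, 32);  /* a null old pointer behaves like malloc */
    void *r = realloc(q, 0);      /* (re)allocates a minimum-sized chunk    */

    printf("malloc(0)         -> %p\n", p);
    printf("realloc(NULL, 32) -> %p\n", q);
    printf("realloc(q, 0)     -> %p\n", r);

    free(p);
    free(r);                      /* q was consumed by the realloc above    */
    return 0;
}
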
72 Alignment: 8-byte
77 Code for 8-byte pointers is untested by me but has worked
88 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
89 8-byte ptrs: 24/32 bytes (including 4/8 overhead)
97 Even a request for zero bytes (i.e., malloc(0)) returns a
100 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
101 8-byte size_t: 2^63 - 16 bytes
114 make the normal worst-case wastage 15 bytes (i.e., up to 15
115 more bytes will be allocated than were requested in malloc), with
117 1. Because requests for zero bytes allocate non-zero space,
127 * No user-definable hooks for callbacks and the like.
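
The alignment, overhead, and minimum-size figures above imply a simple request-rounding rule. A stand-alone sketch of it for the 4-byte size_t case (constants and helper name are illustrative; the allocator's own internal rounding macro is the authoritative version):

#include <stddef.h>

#define EX_SIZE_SZ     4u      /* per-chunk size-field overhead  */
#define EX_ALIGN_MASK  7u      /* 8-byte alignment               */
#define EX_MINSIZE    16u      /* smallest chunk ever handed out */

static size_t example_request2size(size_t req)
{
    size_t sz = (req + EX_SIZE_SZ + EX_ALIGN_MASK) & ~(size_t)EX_ALIGN_MASK;
    return sz < EX_MINSIZE ? EX_MINSIZE : sz;
}
/* e.g. request  0 -> 16-byte chunk, request 13 -> 24, request 25 -> 32 */
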
132 * Synopsis of compile-time options:
134 People have reported using previous versions of this malloc on all
138 People have also reported adapting this malloc for use in
139 stand-alone embedded systems.
141 The implementation is in straight, hand-tuned ANSI C. Among other
144 (for example gcc -O2) that can simplify expressions and control
148 Nonzero if using an ANSI-standard C compiler, a C++ compiler, or
151 Define to enable debugging. Adds fairly extensive assertion-based
156 to free(p). Otherwise, since malloc returns a unique pointer for
157 malloc(0), so does realloc(p, 0).
168 Define to non-zero to optionally make malloc() use mmap() to
171 Define to non-zero to optionally make realloc() use mremap() to
176 Optionally define if you are on a system with a /usr/include/malloc.h
180 Define to a 32-bit type (probably `unsigned int') if you are on a
181 64-bit machine, yet do not want or need to allow malloc requests of
186 Also note that there is some odd internal name-mangling via defines
187 (for example, internally, `malloc' is named `mALLOc') needed
198 MORECORE_FAILURE (default: -1)
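
MORECORE is the routine this allocator calls to obtain more memory (by default sbrk, with MORECORE_FAILURE as its error value); on a system without sbrk it can be pointed at a small replacement. A minimal sketch backed by a static arena (my_morecore, arena, and ARENA_SIZE are illustrative names, not part of this file):

#include <stddef.h>

#define ARENA_SIZE (256 * 1024)
static char   arena[ARENA_SIZE];
static size_t arena_top;

/* sbrk-like: positive increments grow the break, negative ones shrink it
   (as malloc_trim does); returns the old break, or (void*)-1 on failure  */
void *my_morecore(long increment)
{
    size_t old = arena_top;

    if (increment >= 0) {
        if ((size_t)increment > ARENA_SIZE - arena_top)
            return (void *)-1;                    /* MORECORE_FAILURE */
        arena_top += (size_t)increment;
    } else {
        if ((size_t)-increment > arena_top)
            return (void *)-1;
        arena_top -= (size_t)-increment;
    }
    return arena + old;
}

/* built with something like -DMORECORE=my_morecore -DHAVE_MMAP=0 so that
   all heap memory comes from the arena above */
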
259 Compile-time options
267 malloc will often die when freed memory is overwritten by user
271 If you compile with -DDEBUG, a number of assertion checks are
277 attempt to check every non-mmapped allocated and free chunk in the
295 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
296 of chunk sizes. On a 64-bit machine, you can reduce malloc
310 Some people think it should. Otherwise, since this malloc
311 returns a unique pointer for malloc(0), so does realloc(p, 0).
320 mmap-based options are not currently supported in WIN32.
385 /* The following macros are only invoked with (2n+1)-multiples of
432 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
441 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
450 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
459 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
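
The fragments above come from hand-unrolled zero/copy macros that work a word at a time in groups of eight (Duff's-device style). Written out as a stand-alone function, the copy side looks roughly like this sketch (an illustration, not the macro itself):

#include <stddef.h>

static void copy_words(size_t *dst, const size_t *src, size_t nwords)
{
    size_t rounds;

    if (nwords == 0)
        return;
    rounds = (nwords - 1) / 8;    /* extra passes after the first entry */
    switch (nwords % 8) {
        case 0: do { *dst++ = *src++;
        case 7:      *dst++ = *src++;
        case 6:      *dst++ = *src++;
        case 5:      *dst++ = *src++;
        case 4:      *dst++ = *src++;
        case 3:      *dst++ = *src++;
        case 2:      *dst++ = *src++;
        case 1:      *dst++ = *src++;
                } while (rounds-- > 0);
    }
}
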
467 Define HAVE_MMAP to optionally make malloc() use mmap() to
477 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
503 Access to system page size. To the extent possible, this malloc
504 manages memory from the system in page-size units.
562 This version of malloc supports the standard SVID/XPG mallinfo
565 any SVID/XPG compliant system that has a /usr/include/malloc.h
568 and below and save them in a malloc.h file. But there's no
572 (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
574 version of malloc. Some of these fields are instead filled by
578 /usr/include/malloc.h file that includes a declaration of struct
588 #include "/usr/include/malloc.h"
595 int ordblks; /* number of non-inuse chunks */
596 int smblks; /* unused -- always zero */
599 int usmblks; /* unused -- always zero */
600 int fsmblks; /* unused -- always zero */
602 int fordblks; /* total non-inuse space */
603 int keepcost; /* top-most, releasable (via malloc_trim) space */
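
A short usage sketch of the mallinfo interface described above (assumes a malloc.h that declares struct mallinfo and mallinfo(), as discussed earlier):

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>

int main(void)
{
    void *p = malloc(100 * 1024);
    struct mallinfo mi = mallinfo();

    printf("ordblks  (free chunks)         : %d\n", mi.ordblks);
    printf("uordblks (space in use)        : %d\n", mi.uordblks);
    printf("fordblks (free space)          : %d\n", mi.fordblks);
    printf("keepcost (trimmable top space) : %d\n", mi.keepcost);

    free(p);
    return 0;
}
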
608 #define M_MXFAST 1 /* UNUSED in this malloc */
609 #define M_NLBLKS 2 /* UNUSED in this malloc */
610 #define M_GRAIN 3 /* UNUSED in this malloc */
611 #define M_KEEP 4 /* UNUSED in this malloc */
617 #define M_TRIM_THRESHOLD -1
618 #define M_TOP_PAD -2
619 #define M_MMAP_THRESHOLD -3
620 #define M_MMAP_MAX -4
628 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
631 Automatic trimming is mainly useful in long-lived programs.
638 The trim threshold and the mmap control parameters (see below)
642 system-level demands of a long-lived program down to a bare
644 the XF86 X server on Linux, using a trim threshold of 128K and a
645 mmap threshold of 192K led to near-minimal long term resource
648 If you are using this malloc in a long-lived program, it should
657 chunks at all. And in well-behaved long-lived programs,
662 protection against the system-level effects of carrying around
668 The default trim value is high enough to cause trimming only in
671 disable trimming completely, you can set to (unsigned long)(-1);
686 a new malloc request, this much padding is added to the sbrk
697 that nearly every malloc request during program start-up (or
701 Automatic rounding-up to page-size units is normally sufficient
718 be allocated using already-existing space will be serviced via mmap.
729 helps keep the system level memory demands of a long-lived
740 3. It causes malloc performance to be more dependent on host
744 malloc steps is faster than going through a system's mmap.
770 better off using normal sbrk-based allocation routines that
776 the default value is 0, and attempts to set it to non-zero values
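
Putting the tunables above together, a long-lived program can adjust them through mallopt(). The values below are just the example figures quoted for the X server case, not recommendations:

#include <malloc.h>

int tune_for_long_lived_program(void)
{
    int ok = 1;

    ok &= mallopt(M_TRIM_THRESHOLD, 128 * 1024);  /* trim when >128K of top is free */
    ok &= mallopt(M_MMAP_THRESHOLD, 192 * 1024);  /* mmap requests of 192K and up   */

    return ok;  /* mallopt returns nonzero on success */
}
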
796 using weak aliases, this malloc is NOT designed to work in
798 control are provided to ensure that multiple malloc or free calls
800 semaphore could be used across malloc, realloc, and free (which is
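
One way to follow the suggestion above is to route every call through a single lock. A sketch using POSIX threads (the wrapper names are illustrative):

#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t malloc_lock = PTHREAD_MUTEX_INITIALIZER;

void *locked_malloc(size_t n)
{
    void *p;
    pthread_mutex_lock(&malloc_lock);
    p = malloc(n);
    pthread_mutex_unlock(&malloc_lock);
    return p;
}

void locked_free(void *p)
{
    pthread_mutex_lock(&malloc_lock);
    free(p);
    pthread_mutex_unlock(&malloc_lock);
}

/* realloc, calloc, memalign, etc. would be wrapped the same way,
   all sharing malloc_lock, as the text suggests */
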
838 #define MORECORE_FAILURE -1
851 #define mALLOc __libc_malloc
862 #pragma weak malloc = __libc_malloc
875 #define mALLOc dlmalloc
885 #define mALLOc malloc
900 Void_t* mALLOc(size_t);
914 Void_t* mALLOc();
934 /* ---------- To make a malloc.h, end cutting here ------------ */
947 #define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
948 ~(malloc_getpagesize-1))
949 #define AlignPage64K(add) (((add) + (0x10000 - 1)) & ~(0x10000 - 1))
978 this->base = bas;
979 this->next = head;
988 assert ( (head == NULL) || (head->base == (void*)gAddressBase));
989 if (gAddressBase && (gNextAddress - gAddressBase))
992 gNextAddress - gAddressBase,
998 GmListElement* next = head->next;
999 rval = VirtualFree (head->base, 0, MEM_RELEASE);
1068 return (void*)-1;
1084 return (void*)-1;
1090 (size + gNextAddress -
1094 return (void*)-1;
1103 /* Trim by releasing the virtual memory */
1106 VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
1113 VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
1116 return (void*)-1;
1138 struct malloc_chunk* fd; /* double links -- used only if free. */
1162-1173 [diagram of an allocated chunk: prev_size word; size word with the P bit; user data starting at `mem'; then the next chunk's header]
1177 the malloc code, but "mem" is the pointer that is returned to the
1182 thus double-word aligned.
1184 Free chunks are stored in circular doubly-linked lists, and look like this:
1186-1200 [diagram of a free chunk: prev_size word; size word with the P bit; fd and bk pointers starting at `mem'; unused space; the size repeated at the foot; then the next chunk's header]
1202 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1203 chunk size (which is always a multiple of two words), is an in-use
1208 preventing access to non-existent (or non-owned) memory.)
1223 2. Chunks allocated via mmap, which have the second-lowest-order
1241 the same-sized chunks, but facilitates best-fit allocation for
1251 * `top': The top-most available chunk (i.e., the one bordering the
1258 most recently split (non-top) chunk. This bin is checked
1259 before other non-fitting chunks, so as to provide better
1276 #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
1279 /* conversion from malloc headers to user pointers, and back */
1282 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1318 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1323 ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1340 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1344 #define prev_inuse(p) ((p)->size & PREV_INUSE)
1348 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1353 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1356 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1361 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1364 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1367 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
1378 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
1382 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1386 #define set_head(p, s) ((p)->size = (s))
1390 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
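
A sketch of how the macros above compose when walking chunks (it relies on the surrounding definitions of mchunkptr, INTERNAL_SIZE_T, chunksize, next_chunk, inuse and chunk2mem; illustrative only, not part of the original file):

static void inspect_chunk(mchunkptr p)
{
    INTERNAL_SIZE_T sz  = chunksize(p);   /* size with status bits masked off */
    mchunkptr       nxt = next_chunk(p);  /* next contiguous chunk in memory  */

    if (inuse(p)) {
        /* allocated: user data starts at chunk2mem(p); the prev_size word
           of nxt is not meaningful -- that space belongs to the user of p */
    } else {
        /* free: p->fd / p->bk link p into its bin, and nxt->prev_size
           holds a copy of the size (the boundary tag used when coalescing) */
    }
    (void)sz;
    (void)nxt;
}
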
1400 heads of (initially empty) doubly-linked lists of chunks, laid out
1436 #define bin_at(i) ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
1438 #define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1446 #define top (bin_at(0)->fd) /* The topmost chunk */
1452 zero size, thus forcing extension on the first malloc request,
1453 we avoid having any special code in malloc to check whether
1485 /* field-extraction macros */
1487 #define first(b) ((b)->fd)
1488 #define last(b) ((b)->bk)
1504 identically sized chunks. This is exploited in malloc.
1517 #define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
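
Because small bins are spaced SMALLBIN_WIDTH (8) bytes apart and each holds chunks of exactly one size, mapping a small chunk size to its bin is just a shift. An illustrative macro (hypothetical name; the file defines its own indexing macros):

/* one bin per 8-byte size class below MAX_SMALLBIN_SIZE */
#define EX_SMALLBIN_INDEX(sz)  (((unsigned long)(sz)) >> 3)
/* e.g. a 16-byte chunk maps to bin 2, a 24-byte chunk to bin 3, ... */
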
1522 To help compensate for the large number of bins, a one-level index
1523 structure is used for bin-by-bin searching. `binblocks' is a
1524 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1525 have any (possibly) non-empty bins, so they can be skipped over
1528 when all are noticed to be empty during traversal in malloc.
1533 #define binblocks (bin_at(0)->size) /* bitvector of nonempty blocks */
1535 /* bin<->block macros */
1555 static char* sbrk_base = (char*)(-1);
1590 in malloc. In which case, please report it!)
1599 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1620 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1634 assert(next->prev_size == sz);
1640 assert(p->fd->bk == p);
1641 assert(p->bk->fd == p);
1685 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1686 long room = sz - s;
1720 Macro-based internal utilities
1742 FD = BK->fd; \
1743 P->bk = BK; \
1744 P->fd = FD; \
1745 FD->bk = BK->fd = P; \
1751 FD = BK->fd; \
1755 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
1756 BK = FD->bk; \
1758 P->bk = BK; \
1759 P->fd = FD; \
1760 FD->bk = BK->fd = P; \
1769 BK = P->bk; \
1770 FD = P->fd; \
1771 FD->bk = BK; \
1772 BK->fd = FD; \
1779 last_remainder->fd = last_remainder->bk = P; \
1780 P->fd = P->bk = last_remainder; \
1786 (last_remainder->fd = last_remainder->bk = last_remainder)
1802 size_t page_mask = malloc_getpagesize - 1;
1806 static int fd = -1;
1818 MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
1828 if(p == (mchunkptr)-1) return 0;
1833 /* We demand that the address eight bytes into a page be 8-byte aligned. */
1840 p->prev_size = 0;
1863 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
1865 n_mmaps--;
1866 mmapped_mem -= (size + p->prev_size);
1868 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
1870 /* munmap returns non-zero on failure */
1882 size_t page_mask = malloc_getpagesize - 1;
1883 INTERNAL_SIZE_T offset = p->prev_size;
1890 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1895 cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
1897 if (cp == (char *)-1) return 0;
1903 assert((p->prev_size == offset));
1904 set_head(p, (new_size - offset)|IS_MMAPPED);
1906 mmapped_mem -= size + offset;
1923 Extend the top-most chunk by obtaining memory from the system.
1952 if (sbrk_base != (char*)(-1))
1953 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
1971 if (sbrk_base == (char*)(-1)) /* First time through. Record base */
1974 sbrked_mem += brk - (char*)old_end;
1980 correction = (MALLOC_ALIGNMENT) - front_misalign;
1988 correction += ((((unsigned long)(brk + sbrk_size))+(pagesz-1)) &
1989 ~(pagesz - 1)) - ((unsigned long)(brk + sbrk_size));
1998 top_size = new_brk - brk + correction;
2010 set_head(top, PREV_INUSE); /* will force null return from malloc */
2015 old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
2017 chunk_at_offset(old_top, old_top_size )->size =
2019 chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
2033 assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
2043 Malloc Algorithm:
2047 obtain 8-byte alignment and/or to obtain a size of at least
2060 whenever possible. This limited use of a first-fit style
2067 any remainder. This search is strictly by best-fit; i.e.,
2073 the best-fit search rule. In effect, `top' is treated as
2086 Memory is gathered from the system (in system page-sized
2096 chunk borders either a previously allocated and still in-use chunk,
2102 Void_t* mALLOc(size_t bytes)
2104 Void_t* mALLOc(bytes) size_t bytes;
2160 for (victim = last(bin); victim != bin; victim = victim->bk)
2163 remainder_size = victim_size - nb;
2167 --idx; /* adjust to rescan below after checking last remainder */
2184 /* Try to use the last split-off remainder */
2186 if ( (victim = last_remainder->fd) != last_remainder)
2189 remainder_size = victim_size - nb;
2191 if (remainder_size >= (long)MINSIZE) /* re-split */
2217 If there are any possibly nonempty big-enough blocks,
2229 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2249 for (victim = last(bin); victim != bin; victim = victim->bk)
2252 remainder_size = victim_size - nb;
2278 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2284 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2289 --startidx;
2312 if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2324 if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2352 topmost memory exceeds the trim threshold, malloc_trim is
2383 hd = p->size;
2405 prevsz = p->prev_size;
2406 p = chunk_at_offset(p, -((long) prevsz));
2424 prevsz = p->prev_size;
2425 p = chunk_at_offset(p, -((long) prevsz));
2428 if (p->fd == last_remainder) /* keep as last_remainder */
2438 if (!islr && next->fd == last_remainder) /* re-insert last_remainder */
2468 chunk can be extended, it is, else a malloc-copy-free sequence is
2478 size argument of zero (re)allocates a minimum-sized chunk.
2484 The old unix realloc convention of allowing the last-free'd chunk
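
From the caller's side, the in-place extension described above is what makes the usual grow-a-buffer pattern cheap. A usage sketch:

#include <stdlib.h>
#include <string.h>

char *append_bytes(char *buf, size_t *cap, size_t used, const char *src, size_t n)
{
    if (used + n > *cap) {
        size_t newcap = *cap ? *cap * 2 : 64;
        char *p;

        while (newcap < used + n)
            newcap *= 2;
        p = realloc(buf, newcap);   /* may extend in place, grow into top, or copy */
        if (p == NULL)
            return NULL;            /* the original buf is still valid             */
        buf = p;
        *cap = newcap;
    }
    memcpy(buf + used, src, n);
    return buf;
}
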
2527 /* realloc of null is supposed to be same as malloc */
2528 if (oldmem == 0) return mALLOc(bytes);
2544 if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
2546 newmem = mALLOc(bytes);
2548 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
2573 set_head(top, (newsize - nb) | PREV_INUSE);
2613 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2615 set_head(top, (newsize - nb) | PREV_INUSE);
2629 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2641 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2648 newmem = mALLOc (bytes);
2664 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2672 if (newsize - nb >= MINSIZE) /* split off remainder */
2675 remainder_size = newsize - nb;
2698 memalign requests more than enough space from malloc, finds a spot
2705 8-byte alignment is guaranteed by normal malloc calls, so don't
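
A usage sketch for memalign (declared in <malloc.h> where this allocator is the system malloc; the aligned block is released with an ordinary free):

#include <malloc.h>
#include <stdlib.h>
#include <assert.h>

int main(void)
{
    void *p = memalign(64, 1000);   /* 64-byte-aligned block of 1000 bytes */

    assert(p != NULL && ((unsigned long)p % 64) == 0);
    free(p);
    return 0;
}
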
2720 char* m; /* memory returned by malloc call */
2731 /* If need less alignment than we give anyway, just relay to malloc */
2733 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
2739 /* Call malloc with worst case padding to hit alignment. */
2742 m = (char*)(mALLOc(nb + alignment + MINSIZE));
2762 next aligned spot -- we've allocated enough total room so that
2766 brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -((signed) alignment));
2767 if ((long)(brk - (char*)(p)) < MINSIZE) brk = brk + alignment;
2770 leadsize = brk - (char*)(p);
2771 newsize = chunksize(p) - leadsize;
2776 newp->prev_size = p->prev_size + leadsize;
2795 remainder_size = chunksize(p) - nb;
2841 return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
2846 calloc calls malloc, then zeroes out the allocated chunk.
2867 Void_t* mem = mALLOc (sz);
2889 /* clear only the bytes from non-freshly-sbrked memory */
2894 MALLOC_ZERO(mem, csz - SIZE_SZ);
2923 the malloc pool. You can call this after freeing large blocks of
2924 memory to potentially reduce the system-level memory requirements
2933 structures will be left (one page or less). Non-zero arguments
2935 future expected allocations without having to re-obtain memory
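
Typical use of malloc_trim, per the description above: call it after a phase that freed a lot of memory, keeping a little slack for whatever comes next (the 64K pad is only an example value):

#include <malloc.h>

void release_after_burst(void)
{
    /* ... large temporary buffers have just been free()d ... */
    malloc_trim(64 * 1024);   /* give back the top of the heap,
                                 keeping 64K for upcoming requests */
}
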
2948 long top_size; /* Amount of top-most memory */
2950 char* current_brk; /* address returned by pre-check sbrk call */
2956 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
2970 new_brk = (char*)(MORECORE (-extra));
2976 top_size = current_brk - (char*)top;
2979 sbrked_mem = current_brk - sbrk_base;
2989 set_head(top, (top_size - extra) | PREV_INUSE);
2990 sbrked_mem -= extra;
3027 return chunksize(p) - SIZE_SZ;
3029 return chunksize(p) - 2*SIZE_SZ;
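
malloc_usable_size reports the usable bytes in an allocated block, which (because of the rounding discussed earlier) can exceed the original request. A usage sketch:

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>

int main(void)
{
    void *p = malloc(100);

    printf("requested 100, usable %lu\n",
           (unsigned long)malloc_usable_size(p));
    free(p);
    return 0;
}
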
3053 for (p = last(b); p != b; p = p->bk)
3068 current_mallinfo.uordblks = sbrked_mem - avail;
3086 of bytes allocated via malloc (or realloc, etc) but not yet
3125 The format is to provide a (parameter-number, parameter-value) pair.
3171 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
3175 usage of 'assert' in non-WIN32 code
3181 * Fixed ordering problem with boundary-stamping
3187 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
3195 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3198 * Use ordered bins instead of best-fit threshold
3199 * Eliminate block-local decls to simplify tracing and debugging.
3201 * Fix error occurring when initial sbrk_base not word-aligned.
3208 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3214 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
3221 * Use best fit for very large chunks to prevent some worst-cases.
3230 (wmglo@Dent.MED.Uni-Muenchen.DE).
3236 * malloc: swap order of clean-bin strategy;
3245 * merged all consolidations to one part of malloc proper
3248 * Propagate failure in realloc if malloc returns 0
3249 * Add stuff to allow compilation on non-ANSI compilers
3257 * tested on sparc, hp-700, dec-mips, rs6000
3262 * Based loosely on libg++-1.2X malloc. (It retains some of the overall