----+---------------------------------------------------
1GB | Kernel Space
----+---------------------------------------------------
    | Random stack offset
    +---------------------------------------------------
3GB | Stack                                  (grows down)
    +---------------------------------------------------
    |
    +---------------------------------------------------
    | Memory Mapping Segment                 (grows down)
    | file mappings and anonymous mappings
    +---------------------------------------------------
    |
    +------------- brk ---------------------------------
    | Heap                                     (grows up)
    +------------- start_brk ---------------------------
    | Random brk offset
    +---------------------------------------------------
    | BSS (Block Started by Symbol) segment
    | uninitialized static variables, zero-filled
    +---------------------------------------------------
    | Data segment
    | static variables initialized by the programmer
    +---------------------------------------------------
    | Text segment
    | instructions
----+---------------------------------------------------
malloc_chunk
Every requested memory block is represented by a malloc_chunk, regardless of its size and regardless of whether it is currently allocated or free.
/*
  This struct declaration is misleading (but accurate and necessary).
  It declares a "view" into memory allowing access to necessary
  fields at known offsets from a given base. See explanation below.
*/
struct malloc_chunk {

  INTERNAL_SIZE_T      mchunk_prev_size;  /* Size of previous chunk (if free).  */
  INTERNAL_SIZE_T      mchunk_size;       /* Size in bytes, including overhead. */

  struct malloc_chunk* fd;                /* double links -- used only if free. */
  struct malloc_chunk* bk;

  /* Only used for large blocks: pointer to next larger size.  */
  struct malloc_chunk* fd_nextsize;       /* double links -- used only if free. */
  struct malloc_chunk* bk_nextsize;
};
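As a concrete illustration of this layout, here is a minimal sketch of how a chunk header relates to the pointer malloc returns, assuming a typical 64-bit glibc build (SIZE_SZ == 8). The chunk2mem/mem2chunk conversions mirror the ones in malloc.c; poking at a live allocation's header like this is for illustration only.

#include <stdio.h>
#include <stdlib.h>

/* The user pointer returned by malloc points just past the two size
   fields.  These conversions mirror glibc's chunk2mem/mem2chunk,
   specialized here for a 64-bit build (SIZE_SZ == 8). */
#define SIZE_SZ        (sizeof (size_t))
#define chunk2mem(p)   ((void *) ((char *) (p) + 2 * SIZE_SZ))
#define mem2chunk(mem) ((void *) ((char *) (mem) - 2 * SIZE_SZ))

int main (void)
{
  void *mem = malloc (24);
  size_t *chunk = mem2chunk (mem);

  /* chunk[0] is mchunk_prev_size, chunk[1] is mchunk_size.  The low
     three bits of the size word are flags (PREV_INUSE = 0x1,
     IS_MMAPPED = 0x2, NON_MAIN_ARENA = 0x4), so mask them off.
     This relies on glibc's internal layout. */
  printf ("chunk size = %zu, flags = %zx\n",
          chunk[1] & ~(size_t) 7, chunk[1] & 7);

  free (mem);
  return 0;
}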
malloc_state
Each arena's state is kept in a struct malloc_state:

/*
   have_fastchunks indicates that there are probably some fastbin chunks.
   It is set true on entering a chunk into any fastbin, and cleared early in
   malloc_consolidate.  The value is approximate since it may be set when there
   are no fastbin chunks, or it may be clear even if there are fastbin chunks
   available.  Given its sole purpose is to reduce number of redundant calls to
   malloc_consolidate, it does not affect correctness.  As a result we can safely
   use relaxed atomic accesses.
 */

struct malloc_state
{
  /* Serialize access.  */
  __libc_lock_define (, mutex);

  /* Flags (formerly in max_fast).  */
  int flags;

  /* Set if the fastbin chunks contain recently inserted free blocks.  */
  /* Note this is a bool but not all targets support atomics on booleans.  */
  int have_fastchunks;

  /* Fastbins */
  mfastbinptr fastbinsY[NFASTBINS];

  /* Base of the topmost chunk -- not otherwise kept in a bin */
  mchunkptr top;

  /* The remainder from the most recent split of a small request */
  mchunkptr last_remainder;

  /* Normal bins packed as described above */
  mchunkptr bins[NBINS * 2 - 2];           /* default NBINS = 128 */

  /* Bitmap of bins */
  unsigned int binmap[BINMAPSIZE];

  /* Linked list */
  struct malloc_state *next;

  /* Linked list for free arenas.  Access to this field is serialized
     by free_list_lock in arena.c.  */
  struct malloc_state *next_free;

  /* Number of threads attached to this arena.  0 if the arena is on
     the free list.  Access to this field is serialized by
     free_list_lock in arena.c.  */
  INTERNAL_SIZE_T attached_threads;

  /* Memory allocated from the system in this arena.  */
  INTERNAL_SIZE_T system_mem;
  INTERNAL_SIZE_T max_system_mem;
};
All of the allocator's internal state is held in instances of malloc_state; the one exception is the lock mALLOC_MUTEx, declared above the struct when USE_MALLOC_LOCK is defined.
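The fastbinsY array above is indexed by chunk size. A minimal sketch of that mapping, specializing glibc's fastbin_index macro for a 64-bit build (assumed: SIZE_SZ == 8, so chunk sizes are 16-byte spaced and the smallest 32-byte chunk lands in slot 0):

#include <stdio.h>

/* glibc's fastbin_index, specialized for a 64-bit build (SIZE_SZ == 8).
   Requests up to 24 bytes become 32-byte chunks, which map to
   fastbinsY[0]; each subsequent 16-byte size step advances one slot. */
#define fastbin_index(sz) ((((unsigned int) (sz)) >> 4) - 2)

int main (void)
{
  for (unsigned int sz = 32; sz <= 128; sz += 16)
    printf ("chunk size %3u -> fastbinsY[%u]\n", sz, fastbin_index (sz));
  return 0;
}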
On systems where mmap does not support anonymous mappings, a fake file descriptor (opened on /dev/zero) is used in their place.
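The fallback looks roughly like the MMAP wrapper in glibc's malloc.c: when MAP_ANONYMOUS is unavailable, a cached descriptor for /dev/zero plays the role of the anonymous mapping, since mapping /dev/zero privately yields the same zero-filled pages.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>

/* Paraphrase of glibc's MMAP wrapper: without MAP_ANONYMOUS, the
   cached dev_zero_fd acts as the "fake" file descriptor. */
#ifndef MAP_ANONYMOUS
static int dev_zero_fd = -1;    /* Cached file descriptor for /dev/zero.  */
#define MMAP(addr, size, prot, flags) \
  ((dev_zero_fd < 0) \
   ? (dev_zero_fd = open ("/dev/zero", O_RDWR), \
      mmap ((addr), (size), (prot), (flags), dev_zero_fd, 0)) \
   : mmap ((addr), (size), (prot), (flags), dev_zero_fd, 0))
#else
#define MMAP(addr, size, prot, flags) \
  (mmap ((addr), (size), (prot), (flags) | MAP_ANONYMOUS, -1, 0))
#endif

int main (void)
{
  void *p = MMAP (NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE);
  printf ("mapped page at %p\n", p);
  munmap (p, 4096);
  return 0;
}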
bin
Overview of the glibc allocation algorithm, from the comments in malloc.c:
The main properties of the algorithms are:
For large (>= 512 bytes) requests, it is a pure best-fit allocator, with ties normally decided via FIFO (i.e. least recently used).
For small (<= 64 bytes by default) requests, it is a caching allocator, that maintains pools of quickly recycled chunks.
In between, and for combinations of large and small requests, it does the best it can trying to meet both goals at once.
For very large requests (>= 128KB by default), it relies on system memory mapping facilities, if supported.
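The last property is easy to observe. In the following small experiment (assuming default glibc tuning, i.e. a 128KB mmap threshold), the mmapped chunk's size word carries the IS_MMAPPED flag (0x2) instead of PREV_INUSE (0x1):

#include <stdio.h>
#include <stdlib.h>

int main (void)
{
  void *small = malloc (100);          /* served from the heap/bins */
  void *large = malloc (256 * 1024);   /* >= 128KB: served via mmap */

  /* The size word sits just before the user pointer; its low three
     bits are flags.  Illustration only: relies on glibc's layout. */
  printf ("small flags: %zx\n", ((size_t *) small)[-1] & 7);
  printf ("large flags: %zx\n", ((size_t *) large)[-1] & 7);

  free (small);
  free (large);
  return 0;
}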
Some default values:
OPTION DEFAULT VALUE
Compilation Environment options:
HAVE_MREMAP 0
Changing default word sizes:
INTERNAL_SIZE_T size_t
Configuration and functionality options:
USE_PUBLIC_MALLOC_WRAPPERS NOT defined
USE_MALLOC_LOCK NOT defined
MALLOC_DEBUG NOT defined
REALLOC_ZERO_BYTES_FREES 1
TRIM_FASTBINS 0
Options for customizing MORECORE:
MORECORE sbrk
MORECORE_FAILURE -1
MORECORE_CONTIGUOUS 1
MORECORE_CANNOT_TRIM NOT defined
MORECORE_CLEARS 1
MMAP_AS_MORECORE_SIZE (1024 * 1024)
Tuning options that are also dynamically changeable via mallopt:
DEFAULT_MXFAST 64 (for 32bit), 128 (for 64bit)
DEFAULT_TRIM_THRESHOLD 128 * 1024
DEFAULT_TOP_PAD 0
DEFAULT_MMAP_THRESHOLD 128 * 1024
DEFAULT_MMAP_MAX 65536
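The tunables in the last group can also be changed at runtime through mallopt. A minimal example (the values chosen here are arbitrary):

#include <malloc.h>
#include <stdio.h>

int main (void)
{
  /* Raise the fastbin size ceiling and lower the mmap threshold.
     mallopt returns 1 on success and 0 on error. */
  if (mallopt (M_MXFAST, 128) != 1)
    fprintf (stderr, "mallopt(M_MXFAST) failed\n");
  if (mallopt (M_MMAP_THRESHOLD, 64 * 1024) != 1)
    fprintf (stderr, "mallopt(M_MMAP_THRESHOLD) failed\n");
  return 0;
}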
/*
   Bins

    An array of bin headers for free chunks. Each bin is doubly
    linked. The bins are approximately proportionally (log) spaced.
    There are a lot of these bins (128). This may look excessive, but
    works very well in practice. Most bins hold sizes that are
    unusual as malloc request sizes, but are more usual for fragments
    and consolidated sets of chunks, which is what these bins hold, so
    they can be found quickly. All procedures maintain the invariant
    that no consolidated chunk physically borders another one, so each
    chunk in a list is known to be preceded and followed by either
    inuse chunks or the ends of memory.

    Chunks in bins are kept in size order, with ties going to the
    approximately least recently used chunk. Ordering isn't needed
    for the small bins, which all contain the same-sized chunks, but
    facilitates best-fit allocation for larger chunks. These lists
    are just sequential. Keeping them in order almost never requires
    enough traversal to warrant using fancier ordered data
    structures.

    Chunks of the same size are linked with the most
    recently freed at the front, and allocations are taken from the
    back. This results in LRU (FIFO) allocation order, which tends
    to give each chunk an equal opportunity to be consolidated with
    adjacent freed chunks, resulting in larger free chunks and less
    fragmentation.

    To simplify use in double-linked lists, each bin header acts
    as a malloc_chunk. This avoids special-casing for headers.
    But to conserve space and improve locality, we allocate
    only the fd/bk pointers of bins, and then use repositioning tricks
    to treat these as the fields of a malloc_chunk*.
 */
typedef struct malloc_chunk *mbinptr;   /* a bin is a linked list of chunks */

/* addressing -- note that bin_at(0) does not exist */
#define bin_at(m, i) \
  (mbinptr) (((char *) &((m)->bins[((i) - 1) * 2])) \
             - offsetof (struct malloc_chunk, fd))

/* analog of ++bin */
#define next_bin(b)  ((mbinptr) ((char *) (b) + (sizeof (mchunkptr) << 1)))

/* Reminders about list directionality within bins */
#define first(b)     ((b)->fd)
#define last(b)      ((b)->bk)

/*
   Indexing

    Bins for sizes < 512 bytes contain chunks of all the same size, spaced
    8 bytes apart. Larger bins are approximately logarithmically spaced:

    64 bins of size       8
    32 bins of size      64
    16 bins of size     512
     8 bins of size    4096
     4 bins of size   32768
     2 bins of size  262144
     1 bin  of size what's left

    There is actually a little bit of slop in the numbers in bin_index
    for the sake of speed. This makes no difference elsewhere.

    The bins top out around 1MB because we expect to service large
    requests via mmap.

    Bin 0 does not exist.  Bin 1 is the unordered list; if that would be
    a valid chunk size the small bins are bumped up one.
 */

#define NBINS             128
#define NSMALLBINS         64
#define SMALLBIN_WIDTH    MALLOC_ALIGNMENT
#define SMALLBIN_CORRECTION (MALLOC_ALIGNMENT > 2 * SIZE_SZ)
#define MIN_LARGE_SIZE    ((NSMALLBINS - SMALLBIN_CORRECTION) * SMALLBIN_WIDTH)

#define in_smallbin_range(sz)  \
  ((unsigned long) (sz) < (unsigned long) MIN_LARGE_SIZE)

#define smallbin_index(sz) \
  ((SMALLBIN_WIDTH == 16 ? (((unsigned) (sz)) >> 4) : (((unsigned) (sz)) >> 3))\
   + SMALLBIN_CORRECTION)

#define largebin_index_32(sz)                                                \
  (((((unsigned long) (sz)) >> 6) <= 38) ?  56 + (((unsigned long) (sz)) >> 6) :\
   ((((unsigned long) (sz)) >> 9) <= 20) ?  91 + (((unsigned long) (sz)) >> 9) :\
   ((((unsigned long) (sz)) >> 12) <= 10) ? 110 + (((unsigned long) (sz)) >> 12) :\
   ((((unsigned long) (sz)) >> 15) <= 4) ? 119 + (((unsigned long) (sz)) >> 15) :\
   ((((unsigned long) (sz)) >> 18) <= 2) ? 124 + (((unsigned long) (sz)) >> 18) :\
   126)

#define largebin_index_32_big(sz)                                            \
  (((((unsigned long) (sz)) >> 6) <= 45) ?  49 + (((unsigned long) (sz)) >> 6) :\
   ((((unsigned long) (sz)) >> 9) <= 20) ?  91 + (((unsigned long) (sz)) >> 9) :\
   ((((unsigned long) (sz)) >> 12) <= 10) ? 110 + (((unsigned long) (sz)) >> 12) :\
   ((((unsigned long) (sz)) >> 15) <= 4) ? 119 + (((unsigned long) (sz)) >> 15) :\
   ((((unsigned long) (sz)) >> 18) <= 2) ? 124 + (((unsigned long) (sz)) >> 18) :\
   126)

// XXX It remains to be seen whether it is good to keep the widths of
// XXX the buckets the same or whether it should be scaled by a factor
// XXX of two as well.
#define largebin_index_64(sz)                                                \
  (((((unsigned long) (sz)) >> 6) <= 48) ?  48 + (((unsigned long) (sz)) >> 6) :\
   ((((unsigned long) (sz)) >> 9) <= 20) ?  91 + (((unsigned long) (sz)) >> 9) :\
   ((((unsigned long) (sz)) >> 12) <= 10) ? 110 + (((unsigned long) (sz)) >> 12) :\
   ((((unsigned long) (sz)) >> 15) <= 4) ? 119 + (((unsigned long) (sz)) >> 15) :\
   ((((unsigned long) (sz)) >> 18) <= 2) ? 124 + (((unsigned long) (sz)) >> 18) :\
   126)

#define largebin_index(sz) \
  (SIZE_SZ == 8 ? largebin_index_64 (sz)                                     \
   : MALLOC_ALIGNMENT == 16 ? largebin_index_32_big (sz)                     \
   : largebin_index_32 (sz))

#define bin_index(sz) \
  ((in_smallbin_range (sz)) ? smallbin_index (sz) : largebin_index (sz))
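To get a feel for the spacing, the standalone sketch below re-expresses the macros above as a plain function, specialized for a 64-bit build (assumed: SIZE_SZ == 8 and MALLOC_ALIGNMENT == 16, so SMALLBIN_CORRECTION == 0 and MIN_LARGE_SIZE == 1024). The bin_index helper here is a hypothetical stand-in for the macro chain, not glibc code:

#include <stdio.h>

/* 64-bit specialization: small bins are 16-byte spaced, and large
   bins follow largebin_index_64's logarithmic buckets. */
#define MIN_LARGE_SIZE 1024UL

static unsigned int bin_index (unsigned long sz)
{
  if (sz < MIN_LARGE_SIZE)                       /* small bins */
    return sz >> 4;
  if ((sz >> 6) <= 48)  return  48 + (sz >> 6);  /* 64-byte spaced  */
  if ((sz >> 9) <= 20)  return  91 + (sz >> 9);  /* 512-byte spaced */
  if ((sz >> 12) <= 10) return 110 + (sz >> 12); /* 4KB spaced      */
  if ((sz >> 15) <= 4)  return 119 + (sz >> 15); /* 32KB spaced     */
  if ((sz >> 18) <= 2)  return 124 + (sz >> 18); /* 256KB spaced    */
  return 126;                                    /* everything else */
}

int main (void)
{
  unsigned long sizes[] = { 32, 512, 1024, 4096, 65536, 1UL << 20 };
  for (int i = 0; i < 6; i++)
    printf ("chunk size %7lu -> bin %u\n", sizes[i], bin_index (sizes[i]));
  return 0;
}

For example, a 1024-byte chunk maps to bin 64 (the first large bin), while a 4096-byte chunk lands in bin 99 of the 512-byte-spaced group.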