Go memory allocator


Reading notes on draveness's summary of memory management in Go

  1. Process
    The user program obtains new memory from the heap through the allocator and returns space through the collector

  2. Allocator
    Go uses a free-list allocator with the segregated-fit strategy

  3. Free-list allocator
    When the user program requests memory, the free-list allocator walks the free blocks in order until it finds one that is large enough, then satisfies the request from it and updates the list

    In other words, the allocated memory must come from one contiguous block: if the current block is not big enough, the allocator moves on to the next one until it finds a block that is large enough; it never combines multiple smaller blocks to satisfy a single request (in short, contiguous memory)
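The traversal described above is the classic first-fit strategy. A minimal sketch in Go, assuming an illustrative `block` node and `firstFit` helper (not the runtime's actual structures):

```go
package main

import "fmt"

// block is a node in a singly linked free list; size is in bytes.
type block struct {
	size int
	next *block
}

// firstFit walks the free list and returns the first block whose size
// is at least n, unlinking it from the list. It returns nil if no single
// block is large enough: blocks are never combined to satisfy a request.
func firstFit(head **block, n int) *block {
	for p := head; *p != nil; p = &(*p).next {
		if (*p).size >= n {
			b := *p
			*p = b.next // unlink the chosen block
			b.next = nil
			return b
		}
	}
	return nil
}

func main() {
	free := &block{8, &block{32, &block{16, nil}}}
	b := firstFit(&free, 16)
	fmt.Println(b.size) // 32: the first block that fits, not the best fit
}
```

Note that a request for 16 bytes skips the 8-byte block and takes the 32-byte one, exactly the "keep looking until one block is big enough" behaviour described above.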

  4. Segregated fit
    Memory is organized into multiple linked lists, where all blocks in a given list have the same size. To serve a request, the allocator first finds the list whose block size fits, then takes a suitable block from that list
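The first step of segregated fit is mapping a request size to a size class. A small sketch, assuming a made-up `classes` table standing in for Go's 67 size classes:

```go
package main

import "fmt"

// classes holds the fixed block size of each free list, in ascending
// order (a tiny stand-in for Go's 67 size classes).
var classes = []int{8, 16, 32, 48, 64}

// sizeToClass returns the index of the smallest class that fits n,
// or -1 if n is larger than every class (a "large" allocation that
// bypasses the class lists).
func sizeToClass(n int) int {
	for i, c := range classes {
		if n <= c {
			return i
		}
	}
	return -1
}

func main() {
	fmt.Println(sizeToClass(20))  // 2: rounded up to the 32-byte class
	fmt.Println(sizeToClass(100)) // -1: larger than any class
}
```

Rounding a 20-byte request up to a 32-byte block wastes some space, but it keeps every list homogeneous, which is what makes the lookup fast.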

  5. Layers of the allocator

    1. Thread cache (mcache)
    2. Central cache (mcentral)
    3. Page heap (mheap)
      The thread cache belongs only to the current thread (P) and needs no locking, so there is no contention
      Memory requests above 32 KB are allocated directly from the page heap
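The three layers above can be summarized as a routing decision on the request size. A sketch, assuming a hypothetical `tier` helper; the 16 B cutoff is Go's `maxTinySize` for small pointer-free objects, and the 32 KB cutoff is the one stated in the notes:

```go
package main

import "fmt"

// tier reports which layer of the allocator would serve a request of
// n bytes: tiny no-pointer objects up to 16 B use mcache's tiny
// allocator, objects up to 32 KB go through mcache (refilled from
// mcentral), and anything larger goes straight to the page heap.
func tier(n int) string {
	switch {
	case n <= 16:
		return "tiny (mcache tiny allocator)"
	case n <= 32*1024:
		return "small (mcache / mcentral)"
	default:
		return "large (mheap page heap)"
	}
}

func main() {
	fmt.Println(tier(8))         // tiny (mcache tiny allocator)
	fmt.Println(tier(1024))      // small (mcache / mcentral)
	fmt.Println(tier(64 * 1024)) // large (mheap page heap)
}
```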
  6. The basic unit of memory management (mspan)
    mspan is the structure through which Go's memory manager implements segregated fit
    The size class that each mspan can serve is fixed
    Each mspan has pointer fields to the previous and next mspan, linking spans of the same size class into a doubly linked list
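The prev/next fields make each span list a doubly linked list. A sketch of just that linking behaviour, with an illustrative `span` type and `insertAfter` helper (the real `mspan` has many more fields):

```go
package main

import "fmt"

// span mimics the linked-list part of the runtime's mspan: each span
// knows its neighbours and the size class it serves.
type span struct {
	next, prev *span
	spanClass  uint8 // every object in this span has this class's size
}

// insertAfter links s into the list right after head, keeping both
// directions of the doubly linked list consistent.
func insertAfter(head, s *span) {
	s.next = head.next
	s.prev = head
	if head.next != nil {
		head.next.prev = s
	}
	head.next = s
}

func main() {
	head := &span{spanClass: 3}
	s := &span{spanClass: 3}
	insertAfter(head, s)
	fmt.Println(head.next == s && s.prev == head) // true
}
```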

  7. Thread cache (mcache)

    type mcache struct {
        alloc [numSpanClasses]*mspan // numSpanClasses = 67*2 = 134
    }

    mcache is bound to the P (processor) of the GMP scheduler
    alloc is an array of 134 mspan pointers (67 size classes, each in a pointer and a no-pointer variant) that starts out empty; during use, mspans of the required size class are requested on demand from the corresponding central cache
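The "empty at the beginning, filled on demand" behaviour is lazy population. A sketch under simplified assumptions: `cache.refill` is a made-up stand-in for the runtime's refill path, and the array holds placeholder values instead of real spans:

```go
package main

import "fmt"

const numSpanClasses = 67 * 2 // 67 size classes x {pointer, no-pointer} = 134

// cache is an illustrative stand-in for mcache: every slot is nil
// until its span class is first requested.
type cache struct {
	alloc [numSpanClasses]*int
}

// refill simulates fetching a span from the central cache for class cl
// the first time that class is requested, then reusing it afterwards.
func (c *cache) refill(cl int) *int {
	if c.alloc[cl] == nil {
		v := cl // pretend this value came from the matching mcentral
		c.alloc[cl] = &v
	}
	return c.alloc[cl]
}

func main() {
	var c cache
	fmt.Println(c.alloc[5] == nil) // true: empty at the start
	c.refill(5)
	fmt.Println(c.alloc[5] != nil) // true: populated on first use
}
```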

  8. Central cache

    type mcentral struct {
        lock      mutex
        spanclass spanClass // the single span class this mcentral manages
        nonempty  mSpanList // spans that still have free objects
        empty     mSpanList // spans taken by an mcache and not yet returned are kept here
        nmalloc   uint64    // cumulative count of objects allocated from this mcentral
    }
    1. The central cache is a shared area: multiple mcaches request space from it, so access must be locked
    2. Each central cache manages mspans of only one size (spanclass), so there are at most 67 * 2 = 134 mcentrals
    3. When an mcache comes to request an mspan, the mcentral first looks in nonempty for something usable; if nothing is found there, it looks in empty, and failing that it requests new space from the page heap
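That lookup order can be sketched as a small state machine. This is a simplification, not the runtime's `cacheSpan`: it uses string IDs instead of spans and skips sweeping (in the real runtime, spans on `empty` may still yield free objects after a sweep before the heap is asked):

```go
package main

import "fmt"

// central is an illustrative stand-in for mcentral: two lists of span
// IDs plus a fallback to the page heap.
type central struct {
	nonempty []string // spans with free objects
	empty    []string // spans handed out and not yet returned
}

// cacheSpan follows the order from the notes: try nonempty first,
// then fall back to growing from the (simulated) page heap. Whatever
// is handed out moves onto the empty list.
func (c *central) cacheSpan() string {
	if len(c.nonempty) > 0 {
		s := c.nonempty[0]
		c.nonempty = c.nonempty[1:]
		c.empty = append(c.empty, s) // now owned by an mcache
		return s
	}
	s := "span-from-mheap" // placeholder for a fresh span from the heap
	c.empty = append(c.empty, s)
	return s
}

func main() {
	c := &central{nonempty: []string{"s1"}}
	fmt.Println(c.cacheSpan()) // s1
	fmt.Println(c.cacheSpan()) // span-from-mheap
}
```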
  9. Page heap (mheap)
    Each heapArena manages 64 MB of memory
    There can be up to 4M heapArenas (on 64-bit Linux)
    So the heap can address up to 4M * 64 MB = 256 TB
    Each *heapArena pointer is 8 B
    So the metadata takes 4M * 8 B = 32 MB (that is, 32 MB of memory records the allocation state of 256 TB)
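The arithmetic above checks out directly; here it is as constants (the arena count and size match 64-bit Linux as stated in the notes):

```go
package main

import "fmt"

const (
	arenaBytes = 64 << 20 // each heapArena manages 64 MB
	numArenas  = 1 << 22  // up to 4M arenas
	ptrBytes   = 8        // size of one *heapArena pointer
)

func main() {
	// total addressable heap: 4M arenas * 64 MB each = 256 TB
	fmt.Println(int64(numArenas) * int64(arenaBytes) / (1 << 40)) // 256
	// metadata: 4M pointers * 8 B each = 32 MB
	fmt.Println(numArenas * ptrBytes / (1 << 20)) // 32
}
```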


  1. Can a single large object exceed the size managed by a single heapArena (64 MB)?

    Yes: a large object is not limited to one arena, because it can be stored across multiple pages

  2. Within an mspan for large objects, are the managed objects of different sizes??? (refer to point 6 — is it still correct?)

    Objects over 32 KB are represented by a special class with ID 0, and each such span contains only one object, so point 6 still holds
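The class-0 convention shows up in how a span class is encoded: the runtime packs the size class together with a "contains no pointers" (noscan) bit. The sketch below mirrors that packing (`class<<1 | noscan`); treat the exact encoding as an implementation detail that may change between Go versions:

```go
package main

import "fmt"

// makeSpanClass packs a size class and a noscan bit into one value,
// mirroring the runtime's spanClass encoding: class<<1 | noscan.
// Size class 0 is reserved for large (>32 KB) objects, one per span.
func makeSpanClass(sizeclass uint8, noscan bool) uint8 {
	b := uint8(0)
	if noscan {
		b = 1
	}
	return sizeclass<<1 | b
}

func main() {
	fmt.Println(makeSpanClass(0, true))  // 1: large object without pointers
	fmt.Println(makeSpanClass(5, false)) // 10: class 5, may contain pointers
}
```

The x2 in "67 * 2 = 134" earlier is exactly this noscan bit: every size class exists in a pointer and a no-pointer variant.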

  3. Are there only 134 kinds of mspan (one per span class)?

  4. Can the pages in one mspan come from different heapArenas?

This work is licensed under a CC license. Reprints must credit the author and link to the original article