Copyright 2009 The Go Authors. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.
Page heap. See malloc.go for overview.

package runtime

import (
	"internal/cpu"
	"runtime/internal/atomic"
	"runtime/internal/sys"
	"unsafe"
)

const (
minPhysPageSize is a lower-bound on the physical page size. The true physical page size may be larger than this. In contrast, sys.PhysPageSize is an upper-bound on the physical page size.
	minPhysPageSize = 4096
maxPhysPageSize is the maximum page size the runtime supports.
	maxPhysPageSize = 512 << 10
maxPhysHugePageSize sets an upper-bound on the maximum huge page size that the runtime supports.
	maxPhysHugePageSize = pallocChunkBytes
pagesPerReclaimerChunk indicates how many pages to scan from the pageInUse bitmap at a time. Used by the page reclaimer. Higher values reduce contention on scanning indexes (such as h.reclaimIndex), but increase the minimum latency of the operation. The time required to scan this many pages can vary a lot depending on how many spans are actually freed. Experimentally, it can scan for pages at ~300 GB/ms on a 2.6GHz Core i7, but can only free spans at ~32 MB/ms. Using 512 pages bounds this at roughly 100µs; a rough check of that figure follows the const block below. Must be a multiple of the pageInUse bitmap element size and must also evenly divide pagesPerArena.
	pagesPerReclaimerChunk = 512
physPageAlignedStacks indicates whether stack allocations must be physical page aligned. This is a requirement for MAP_STACK on OpenBSD.
	physPageAlignedStacks = GOOS == "openbsd"
)
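As a quick sanity check of the 100µs bound claimed above, the worst case is a chunk in which every page gets freed, so the span-freeing rate dominates. A minimal standalone sketch of that arithmetic (assuming the 8 KiB runtime page size, and taking the ~32 MB/ms figure from the comment as given):

package main

import "fmt"

func main() {
	const (
		pageSize       = 8 << 10 // runtime page size assumed here
		chunkPages     = 512     // pagesPerReclaimerChunk
		freeMBPerMilli = 32      // span-freeing throughput quoted above
	)
	chunkBytes := chunkPages * pageSize // 4 MiB of pages per reclaimer chunk
	ms := float64(chunkBytes) / float64(freeMBPerMilli<<20)
	fmt.Printf("worst-case chunk latency ≈ %.0f µs\n", ms*1000) // ≈ 125 µs, i.e. on the order of 100µs
}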
Main malloc heap. The heap itself is the "free" and "scav" treaps, but all the other global data is here too. mheap must not be heap-allocated because it contains mSpanLists, which must not be heap-allocated.
go:notinheap
type mheap struct {
lock must only be acquired on the system stack, otherwise a g could self-deadlock if its stack grows with the lock held.
	lock      mutex
	pages     pageAlloc // page allocation data structure
	sweepgen  uint32    // sweep generation, see comment in mspan; written during STW
	sweepdone uint32    // all spans are swept
	sweepers  uint32    // number of active sweepone calls
allspans is a slice of all mspans ever created. Each mspan appears exactly once. The memory for allspans is manually managed and can be reallocated and move as the heap grows. In general, allspans is protected by mheap_.lock, which prevents concurrent access as well as freeing the backing store. Accesses during STW might not hold the lock, but must ensure that allocation cannot happen around the access (since that may free the backing store).
	allspans []*mspan // all spans out there

	_ uint32 // align uint64 fields on 32-bit for atomics
Proportional sweep. These parameters represent a linear function from heap_live to page sweep count. The proportional sweep system works to stay in the black by keeping the current page sweep count above this line at the current heap_live. The line has slope sweepPagesPerByte and passes through a basis point at (sweepHeapLiveBasis, pagesSweptBasis). At any given time, the system is at (memstats.heap_live, pagesSwept) in this space. It's important that the line pass through a point we control rather than simply starting at a (0,0) origin because that lets us adjust sweep pacing at any time while accounting for current progress. If we could only adjust the slope, it would create a discontinuity in debt if any progress has already been made. (A standalone sketch of this pacing line follows the mheap declaration below.)
	pagesInUse         uint64  // pages of spans in stats mSpanInUse; updated atomically
	pagesSwept         uint64  // pages swept this cycle; updated atomically
	pagesSweptBasis    uint64  // pagesSwept to use as the origin of the sweep ratio; updated atomically
	sweepHeapLiveBasis uint64  // value of heap_live to use as the origin of sweep ratio; written with lock, read without
TODO(austin): pagesInUse should be a uintptr, but the 386 compiler can't 8-byte align fields.
scavengeGoal is the amount of total retained heap memory (measured by heapRetained) that the runtime will try to maintain by returning memory to the OS.
Page reclaimer state
reclaimIndex is the page index in allArenas of next page to reclaim. Specifically, it refers to page (i % pagesPerArena) of arena allArenas[i / pagesPerArena]. If this is >= 1<<63, the page reclaimer is done scanning the page marks. This is accessed atomically.
reclaimCredit is spare credit for extra pages swept. Since the page reclaimer works in large chunks, it may reclaim more than requested. Any spare pages released go to this credit pool. This is accessed atomically.
arenas is the heap arena map. It points to the metadata for the heap for every arena frame of the entire usable virtual address space. Use arenaIndex to compute indexes into this array. For regions of the address space that are not backed by the Go heap, the arena map contains nil. Modifications are protected by mheap_.lock. Reads can be performed without locking; however, a given entry can transition from nil to non-nil at any time when the lock isn't held. (Entries never transition back to nil.) In general, this is a two-level mapping consisting of an L1 map and possibly many L2 maps. This saves space when there are a huge number of arena frames. However, on many platforms (even 64-bit), arenaL1Bits is 0, making this effectively a single-level map. In this case, arenas[0] will never be nil.
heapArenaAlloc is pre-reserved space for allocating heapArena objects. This is only used on 32-bit, where we pre-reserve this space to avoid interleaving it with the heap itself.
arenaHints is a list of addresses at which to attempt to add more heap arenas. This is initially populated with a set of general hint addresses, and grown with the bounds of actual heap arena ranges.
arena is a pre-reserved space for allocating heap arenas (the actual arenas). This is only used on 32-bit.
allArenas is the arenaIndex of every mapped arena. This can be used to iterate through the address space. Access is protected by mheap_.lock. However, since this is append-only and old backing arrays are never freed, it is safe to acquire mheap_.lock, copy the slice header, and then release mheap_.lock.
sweepArenas is a snapshot of allArenas taken at the beginning of the sweep cycle. This can be read safely by simply blocking GC (by disabling preemption).
markArenas is a snapshot of allArenas taken at the beginning of the mark cycle. Because allArenas is append-only, neither this slice nor its contents will change during the mark, so it can be read safely.
curArena is the arena that the heap is currently growing into. This should always be physPageSize-aligned.
	curArena struct {
		base, end uintptr
	}

	_ uint32 // ensure 64-bit alignment of central
Central free lists for small size classes. The padding makes sure that the mcentrals are spaced CacheLinePadSize bytes apart, so that each mcentral.lock gets its own cache line. central is indexed by spanClass.
	central [numSpanClasses]struct {
		mcentral mcentral
		pad      [cpu.CacheLinePadSize - unsafe.Sizeof(mcentral{})%cpu.CacheLinePadSize]byte
	}

	spanalloc             fixalloc // allocator for span*
	cachealloc            fixalloc // allocator for mcache*
	specialfinalizeralloc fixalloc // allocator for specialfinalizer*
	specialprofilealloc   fixalloc // allocator for specialprofile*
	speciallock           mutex    // lock for special record allocators.
	arenaHintAlloc        fixalloc // allocator for arenaHints

	unused *specialfinalizer // never set, just here to force the specialfinalizer type into DWARF
}

var mheap_ mheap
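To make the pacing line described above concrete, here is a minimal, self-contained sketch of the "sweep debt" computation it implies. The function and variable names are illustrative stand-ins, not the runtime's actual fields, and the numbers in main are made up:

package main

import "fmt"

// sweepDebt returns how many more pages must be swept for the sweeper to get
// back onto the pacing line at the given heapLive; a non-positive result means
// sweeping is ahead of allocation.
func sweepDebt(heapLive, sweepHeapLiveBasis uint64, sweepPagesPerByte float64, pagesSwept, pagesSweptBasis uint64) int64 {
	// The line through (sweepHeapLiveBasis, pagesSweptBasis) with slope sweepPagesPerByte.
	target := float64(pagesSweptBasis) + sweepPagesPerByte*float64(heapLive-sweepHeapLiveBasis)
	return int64(target) - int64(pagesSwept)
}

func main() {
	// With a slope of one page per 8 KiB allocated, 10 MiB allocated past the
	// basis point means 1280 more pages should have been swept; 1000 of them
	// have been, so 280 pages of sweep debt remain.
	fmt.Println(sweepDebt(110<<20, 100<<20, 1.0/8192, 3000+1000, 3000)) // 280
}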
A heapArena stores metadata for a heap arena. heapArenas are stored outside of the Go heap and accessed via the mheap_.arenas index.
go:notinheap
type heapArena struct {
bitmap stores the pointer/scalar bitmap for the words in this arena. See mbitmap.go for a description. Use the heapBits type to access this.
	bitmap [heapArenaBitmapBytes]byte
spans maps from virtual address page ID within this arena to *mspan. For allocated spans, their pages map to the span itself. For free spans, only the lowest and highest pages map to the span itself. Internal pages map to an arbitrary span. For pages that have never been allocated, spans entries are nil. Modifications are protected by mheap.lock. Reads can be performed without locking, but ONLY from indexes that are known to contain in-use or stack spans. This means there must not be a safe-point between establishing that an address is live and looking it up in the spans array.
	spans [pagesPerArena]*mspan
pageInUse is a bitmap that indicates which spans are in state mSpanInUse. This bitmap is indexed by page number, but only the bit corresponding to the first page in each span is used. Reads and writes are atomic.
	pageInUse [pagesPerArena / 8]uint8
pageMarks is a bitmap that indicates which spans have any marked objects on them. Like pageInUse, only the bit corresponding to the first page in each span is used. Writes are done atomically during marking. Reads are non-atomic and lock-free since they only occur during sweeping (and hence never race with writes). This is used to quickly find whole spans that can be freed. TODO(austin): It would be nice if this was uint64 for faster scanning, but we don't have 64-bit atomic bit operations.
	pageMarks [pagesPerArena / 8]uint8
pageSpecials is a bitmap that indicates which spans have specials (finalizers or other). Like pageInUse, only the bit corresponding to the first page in each span is used. Writes are done atomically whenever a special is added to a span and whenever the last special is removed from a span. Reads are done atomically to find spans containing specials during marking.
	pageSpecials [pagesPerArena / 8]uint8
checkmarks stores the debug.gccheckmark state. It is only used if debug.gccheckmark > 0.
	checkmarks *checkmarksMap
zeroedBase marks the first byte of the first page in this arena which hasn't been used yet and is therefore already zero. zeroedBase is relative to the arena base. Increases monotonically until it hits heapArenaBytes. This field is sufficient to determine if an allocation needs to be zeroed because the page allocator follows an address-ordered first-fit policy. Read atomically and written with an atomic CAS.
	zeroedBase uintptr
}
arenaHint is a hint for where to grow the heap arenas. See mheap_.arenaHints.
go:notinheap
type arenaHint struct {
	addr uintptr
	down bool
	next *arenaHint
}
An mspan is a run of pages. When an mspan is in the heap free treap, state == mSpanFree and heapmap(s->start) == span, heapmap(s->start+s->npages-1) == span. If the mspan is in the heap scav treap, then in addition to the above scavenged == true. scavenged == false in all other cases. When an mspan is allocated, state == mSpanInUse or mSpanManual and heapmap(i) == span for all s->start <= i < s->start+s->npages.
Every mspan is in one doubly-linked list, either in the mheap's busy list or one of the mcentral's span lists.
An mspan representing actual memory has state mSpanInUse, mSpanManual, or mSpanFree. Transitions between these states are constrained as follows: * A span may transition from free to in-use or manual during any GC phase. * During sweeping (gcphase == _GCoff), a span may transition from in-use to free (as a result of sweeping) or manual to free (as a result of stacks being freed). * During GC (gcphase != _GCoff), a span *must not* transition from manual or in-use to free. Because concurrent GC may read a pointer and then look up its span, the span state must be monotonic. Setting mspan.state to mSpanInUse or mSpanManual must be done atomically and only after all other span fields are valid. Likewise, if inspecting a span is contingent on it being mSpanInUse, the state should be loaded atomically and checked before depending on other fields. This allows the garbage collector to safely deal with potentially invalid pointers, since resolving such pointers may race with a span being allocated.
type mSpanState uint8

const (
	mSpanDead   mSpanState = iota
	mSpanInUse             // allocated for garbage collected heap
	mSpanManual            // allocated for manual management (e.g., stack allocator)
)
mSpanStateNames are the names of the span states, indexed by mSpanState.
var mSpanStateNames = []string{
	"mSpanDead",
	"mSpanInUse",
	"mSpanManual",
	"mSpanFree",
}
mSpanStateBox holds an mSpanState and provides atomic operations on it. This is a separate type to disallow accidental comparison or assignment with mSpanState.
type mSpanStateBox struct {
	s mSpanState
}

func (b *mSpanStateBox) set(s mSpanState) {
	atomic.Store8((*uint8)(&b.s), uint8(s))
}

func (b *mSpanStateBox) get() mSpanState {
	return mSpanState(atomic.Load8((*uint8)(&b.s)))
}
mSpanList heads a linked list of spans.
go:notinheap
type mSpanList struct {
	first *mspan // first span in list, or nil if none
	last  *mspan // last span in list, or nil if none
}
go:notinheap
type mspan struct {
	next *mspan     // next span in list, or nil if none
	prev *mspan     // previous span in list, or nil if none
	list *mSpanList // For debugging. TODO: Remove.

	startAddr uintptr // address of first byte of span aka s.base()
	npages    uintptr // number of pages in span

	manualFreeList gclinkptr // list of free objects in mSpanManual spans
freeindex is the slot index between 0 and nelems at which to begin scanning for the next free object in this span. Each allocation scans allocBits starting at freeindex until it encounters a 0 indicating a free object. freeindex is then adjusted so that subsequent scans begin just past the newly discovered free object. If freeindex == nelem, this span has no free objects. allocBits is a bitmap of objects in this span. If n >= freeindex and allocBits[n/8] & (1<<(n%8)) is 0 then object n is free; otherwise, object n is allocated. Bits starting at nelem are undefined and should never be referenced. Object n starts at address n*elemsize + (start << pageShift).
	freeindex uintptr
TODO: Look up nelems from sizeclass and remove this field if it helps performance.
	nelems uintptr // number of objects in the span.
Cache of the allocBits at freeindex. allocCache is shifted such that the lowest bit corresponds to the bit freeindex. allocCache holds the complement of allocBits, thus allowing ctz (count trailing zero) to use it directly. allocCache may contain bits beyond s.nelems; the caller must ignore these. (A standalone sketch of this scan follows the struct below.)
	allocCache uint64
allocBits and gcmarkBits hold pointers to a span's mark and allocation bits. The pointers are 8 byte aligned. There are three arenas where this data is held.
free: Dirty arenas that are no longer accessed and can be reused.
next: Holds information to be used in the next GC cycle.
current: Information being used during this GC cycle.
previous: Information being used during the last GC cycle.
A new GC cycle starts with the call to finishsweep_m. finishsweep_m moves the previous arena to the free arena, the current arena to the previous arena, and the next arena to the current arena. The next arena is populated as the spans request memory to hold gcmarkBits for the next GC cycle as well as allocBits for newly allocated spans. The pointer arithmetic is done "by hand" instead of using arrays to avoid bounds checks along critical performance paths. The sweep will free the old allocBits and set allocBits to the gcmarkBits. The gcmarkBits are replaced with a fresh zeroed out memory.
	allocBits  *gcBits
	gcmarkBits *gcBits
sweep generation:
if sweepgen == h->sweepgen - 2, the span needs sweeping
if sweepgen == h->sweepgen - 1, the span is currently being swept
if sweepgen == h->sweepgen, the span is swept and ready to use
if sweepgen == h->sweepgen + 1, the span was cached before sweep began and is still cached, and needs sweeping
if sweepgen == h->sweepgen + 3, the span was swept and then cached and is still cached
h->sweepgen is incremented by 2 after every GC

	sweepgen    uint32
	divMul      uint16        // for divide by elemsize - divMagic.mul
	baseMask    uint16        // if non-0, elemsize is a power of 2, & this will get object allocation base
	allocCount  uint16        // number of allocated objects
	spanclass   spanClass     // size class and noscan (uint8)
	state       mSpanStateBox // mSpanInUse etc; accessed atomically (get/set methods)
	needzero    uint8         // needs to be zeroed before allocation
	divShift    uint8         // for divide by elemsize - divMagic.shift
	divShift2   uint8         // for divide by elemsize - divMagic.shift2
	elemsize    uintptr       // computed from sizeclass or from npages
	limit       uintptr       // end of data in span
	speciallock mutex         // guards specials list
	specials    *special      // linked list of special records sorted by offset.
}
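The freeindex/allocCache scan described in the comments above can be illustrated with a small standalone program. This is a sketch of the idea only: the bit widths are real (allocCache is a 64-bit complement of allocBits, shifted so bit 0 corresponds to freeindex), but the function and values are illustrative, not the runtime's:

package main

import (
	"fmt"
	"math/bits"
)

// nextFree finds the next free object index in a toy span. cache holds the
// complement of the allocation bits at and after freeindex, so a set bit means
// "free" and TrailingZeros64 jumps straight to the first one.
func nextFree(cache uint64, freeindex, nelems uint) (uint, bool) {
	tz := uint(bits.TrailingZeros64(cache))
	idx := freeindex + tz
	if idx >= nelems {
		return 0, false // no free object in this span
	}
	return idx, true
}

func main() {
	// Objects 0 and 1 are allocated, object 2 is free: allocBits = 0b011,
	// so the cache (its complement) is ...11111100 and the next free index is 2.
	cache := ^uint64(0b011)
	idx, ok := nextFree(cache, 0, 8)
	fmt.Println(idx, ok) // 2 true
}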

func (s *mspan) base() uintptr {
	return s.startAddr
}

func (s *mspan) layout() (size, n, total uintptr) {
	total = s.npages << _PageShift
	size = s.elemsize
	if size > 0 {
		n = total / size
	}
	return
}
recordspan adds a newly allocated span to h.allspans. This only happens the first time a span is allocated from mheap.spanalloc (it is not called when a span is reused). Write barriers are disallowed here because it can be called from gcWork when allocating new workbufs. However, because it's an indirect call from the fixalloc initializer, the compiler can't see this. The heap lock must be held.
go:nowritebarrierrec
func ( unsafe.Pointer,  unsafe.Pointer) {
	 := (*mheap)()
	 := (*mspan)()

	assertLockHeld(&.lock)

	if len(.allspans) >= cap(.allspans) {
		 := 64 * 1024 / sys.PtrSize
		if  < cap(.allspans)*3/2 {
			 = cap(.allspans) * 3 / 2
		}
		var  []*mspan
		 := (*slice)(unsafe.Pointer(&))
		.array = sysAlloc(uintptr()*sys.PtrSize, &memstats.other_sys)
		if .array == nil {
			throw("runtime: cannot allocate memory")
		}
		.len = len(.allspans)
		.cap = 
		if len(.allspans) > 0 {
			copy(, .allspans)
		}
		 := .allspans
		*(*notInHeapSlice)(unsafe.Pointer(&.allspans)) = *(*notInHeapSlice)(unsafe.Pointer(&))
		if len() != 0 {
			sysFree(unsafe.Pointer(&[0]), uintptr(cap())*unsafe.Sizeof([0]), &memstats.other_sys)
		}
	}
	.allspans = .allspans[:len(.allspans)+1]
	.allspans[len(.allspans)-1] = 
}
A spanClass represents the size class and noscan-ness of a span. Each size class has a noscan spanClass and a scan spanClass. The noscan spanClass contains only noscan objects, which do not contain pointers and thus do not need to be scanned by the garbage collector.
type spanClass uint8

const (
	numSpanClasses = _NumSizeClasses << 1
	tinySpanClass  = spanClass(tinySizeClass<<1 | 1)
)

func makeSpanClass(sizeclass uint8, noscan bool) spanClass {
	return spanClass(sizeclass<<1) | spanClass(bool2int(noscan))
}

func (sc spanClass) sizeclass() int8 {
	return int8(sc >> 1)
}

func (sc spanClass) noscan() bool {
	return sc&1 != 0
}
arenaIndex returns the index into mheap_.arenas of the arena containing metadata for p. This index combines an index into the L1 map and an index into the L2 map and should be used as mheap_.arenas[ai.l1()][ai.l2()]. If p is outside the range of valid heap addresses, either l1() or l2() will be out of bounds. It is nosplit because it's called by spanOf and several other nosplit functions.
go:nosplit
func arenaIndex(p uintptr) arenaIdx {
	return arenaIdx((p - arenaBaseOffset) / heapArenaBytes)
}
arenaBase returns the low address of the region covered by heap arena i.
func arenaBase(i arenaIdx) uintptr {
	return uintptr(i)*heapArenaBytes + arenaBaseOffset
}

type arenaIdx uint

func (i arenaIdx) l1() uint {
	if arenaL1Bits == 0 {
Let the compiler optimize this away if there's no L1 map.
		return 0
	} else {
		return uint(i) >> arenaL1Shift
	}
}

func (i arenaIdx) l2() uint {
	if arenaL1Bits == 0 {
		return uint(i)
	} else {
		return uint(i) & (1<<arenaL2Bits - 1)
	}
}
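The arithmetic behind the two-level lookup can be shown with a small standalone sketch. The constants below are assumptions chosen to resemble a common 64-bit layout (64 MiB arenas, no L1 level, and a base offset simplified to zero); they are illustrative, not imported from the runtime:

package main

import "fmt"

// Illustrative constants: 64 MiB arenas and a single-level arena map
// (arenaL1Bits == 0), so l1() is always 0 and l2() is the whole index.
// arenaBaseOffset is simplified to 0 here; the real runtime offsets the
// address space before dividing.
const (
	heapArenaBytes  = 64 << 20
	arenaBaseOffset = 0
	arenaL1Bits     = 0
	arenaL2Bits     = 22
	arenaL1Shift    = arenaL2Bits
)

type arenaIdx uint

// arenaIndex maps an address to its slot in the (conceptual) arenas map.
func arenaIndex(p uintptr) arenaIdx { return arenaIdx((p - arenaBaseOffset) / heapArenaBytes) }

func (i arenaIdx) l1() uint {
	if arenaL1Bits == 0 {
		return 0 // no L1 level: the whole index is the L2 index
	}
	return uint(i) >> arenaL1Shift
}

func (i arenaIdx) l2() uint {
	if arenaL1Bits == 0 {
		return uint(i)
	}
	return uint(i) & (1<<arenaL2Bits - 1)
}

func main() {
	p := uintptr(0xc000100000) // a typical 64-bit Go heap address
	ai := arenaIndex(p)
	fmt.Println(ai.l1(), ai.l2()) // the slot arenas[l1][l2] that describes p
}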
inheap reports whether b is a pointer into a (potentially dead) heap object. It returns false for pointers into mSpanManual spans. Non-preemptible because it is used by write barriers.
go:nowritebarrier
go:nosplit
func inheap(b uintptr) bool {
	return spanOfHeap(b) != nil
}
inHeapOrStack is a variant of inheap that returns true for pointers into any allocated heap span.
go:nowritebarrier
go:nosplit
func inHeapOrStack(b uintptr) bool {
	s := spanOf(b)
	if s == nil || b < s.base() {
		return false
	}
	switch s.state.get() {
	case mSpanInUse, mSpanManual:
		return b < s.limit
	default:
		return false
	}
}
spanOf returns the span of p. If p does not point into the heap arena or no span has ever contained p, spanOf returns nil. If p does not point to allocated memory, this may return a non-nil span that does *not* contain p. If this is a possibility, the caller should either call spanOfHeap or check the span bounds explicitly. Must be nosplit because it has callers that are nosplit.
go:nosplit
func spanOf(p uintptr) *mspan {
This function looks big, but we use a lot of constant folding around arenaL1Bits to get it under the inlining budget. Also, many of the checks here are safety checks that Go needs to do anyway, so the generated code is quite short.
	ri := arenaIndex(p)
	if arenaL1Bits == 0 {
If there's no L1, then ri.l1() can't be out of bounds but ri.l2() can.
		if ri.l2() >= uint(len(mheap_.arenas[0])) {
			return nil
		}
	} else {
If there's an L1, then ri.l1() can be out of bounds but ri.l2() can't.
		if ri.l1() >= uint(len(mheap_.arenas)) {
			return nil
		}
	}
	l2 := mheap_.arenas[ri.l1()]
	if arenaL1Bits != 0 && l2 == nil { // Should never happen if there's no L1.
		return nil
	}
	ha := l2[ri.l2()]
	if ha == nil {
		return nil
	}
	return ha.spans[(p/pageSize)%pagesPerArena]
}
spanOfUnchecked is equivalent to spanOf, but the caller must ensure that p points into an allocated heap arena. Must be nosplit because it has callers that are nosplit.
go:nosplit
func spanOfUnchecked(p uintptr) *mspan {
	ai := arenaIndex(p)
	return mheap_.arenas[ai.l1()][ai.l2()].spans[(p/pageSize)%pagesPerArena]
}
spanOfHeap is like spanOf, but returns nil if p does not point to a heap object. Must be nosplit because it has callers that are nosplit.
go:nosplit
func spanOfHeap(p uintptr) *mspan {
	s := spanOf(p)
s is nil if it's never been allocated. Otherwise, we check its state first because we don't trust this pointer, so we have to synchronize with span initialization. Then, it's still possible we picked up a stale span pointer, so we have to check the span's bounds.
	if s == nil || s.state.get() != mSpanInUse || p < s.base() || p >= s.limit {
		return nil
	}
	return s
}
pageIndexOf returns the arena, page index, and page mask for pointer p. The caller must ensure p is in the heap.
func pageIndexOf(p uintptr) (arena *heapArena, pageIdx uintptr, pageMask uint8) {
	ai := arenaIndex(p)
	arena = mheap_.arenas[ai.l1()][ai.l2()]
	pageIdx = ((p / pageSize) / 8) % uintptr(len(arena.pageInUse))
	pageMask = byte(1 << ((p / pageSize) % 8))
	return
}
Don't zero mspan allocations. Background sweeping can inspect a span concurrently with allocating it, so it's important that the span's sweepgen survive across freeing and re-allocating a span to prevent background sweeping from improperly cas'ing it from 0. This is safe because mspan contains no heap pointers.
h->mapcache needs no init

	for  := range .central {
		.central[].mcentral.init(spanClass())
	}

	.pages.init(&.lock, &memstats.gcMiscSys)
}
reclaim sweeps and reclaims at least npage pages into the heap. It is called before allocating npage pages to keep growth in check. reclaim implements the page-reclaimer half of the sweeper. h.lock must NOT be held.
TODO(austin): Half of the time spent freeing spans is in locking/unlocking the heap (even with low contention). We could make the slow path here several times faster by batching heap frees.
Bail early if there's no more reclaim work.
	if atomic.Load64(&.reclaimIndex) >= 1<<63 {
		return
	}
Disable preemption so the GC can't start while we're sweeping, so we can read h.sweepArenas, and so traceGCSweepStart/Done pair on the P.
	 := acquirem()

	if trace.enabled {
		traceGCSweepStart()
	}

	 := .sweepArenas
	 := false
Pull from accumulated credit first. (A standalone sketch of this credit hand-off follows the function.)
		if  := atomic.Loaduintptr(&.reclaimCredit);  > 0 {
			 := 
Take only what we need.
				 = 
			}
			if atomic.Casuintptr(&.reclaimCredit, , -) {
				 -= 
			}
			continue
		}
Page reclaiming is done.
			atomic.Store64(&.reclaimIndex, 1<<63)
			break
		}

Lock the heap for reclaimChunk.
			lock(&.lock)
			 = true
		}
Scan this chunk.
		 := .reclaimChunk(, , pagesPerReclaimerChunk)
		if  <=  {
			 -= 
Put spare pages toward global credit.
			atomic.Xadduintptr(&.reclaimCredit, -)
			 = 0
		}
	}
	if  {
		unlock(&.lock)
	}

	if trace.enabled {
		traceGCSweepDone()
	}
	releasem()
}
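The credit hand-off at the top of the loop above, stripped of the surrounding sweeper machinery, is an atomic take-min-from-a-shared-pool pattern. A minimal sketch with illustrative names (the runtime re-enters its outer loop instead of looping here when it loses the race):

package main

import (
	"fmt"
	"sync/atomic"
)

// takeCredit atomically draws up to need pages from a shared credit pool and
// returns how many it actually took.
func takeCredit(credit *uint64, need uint64) uint64 {
	for {
		have := atomic.LoadUint64(credit)
		if have == 0 {
			return 0
		}
		take := need
		if take > have {
			take = have // take only what's available
		}
		if atomic.CompareAndSwapUint64(credit, have, have-take) {
			return take
		}
		// Lost a race with another reclaimer; reload and retry.
	}
}

func main() {
	pool := uint64(300)
	got := takeCredit(&pool, 512)
	fmt.Println(got, pool) // 300 0
}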
reclaimChunk sweeps unmarked spans that start at page indexes [pageIdx, pageIdx+n). It returns the number of pages returned to the heap. h.lock must be held and the caller must be non-preemptible. Note: h.lock may be temporarily unlocked and re-locked in order to do sweeping or if tracing is enabled.
The heap lock must be held because this accesses the heapArena.spans arrays using potentially non-live pointers. In particular, if a span were freed and merged concurrently with this probing heapArena.spans, it would be possible to observe arbitrary, stale span pointers.
	assertLockHeld(&.lock)

	 := 
	var  uintptr
	 := .sweepgen
	for  > 0 {
		 := [/pagesPerArena]
		 := .arenas[.l1()][.l2()]
Get a chunk of the bitmap to work on.
		 := uint( % pagesPerArena)
		 := .pageInUse[/8:]
		 := .pageMarks[/8:]
		if uintptr(len()) > /8 {
			 = [:/8]
			 = [:/8]
		}
Scan this bitmap chunk for spans that are in-use but have no marked objects on them.
		for  := range  {
			 := atomic.Load8(&[]) &^ []
			if  == 0 {
				continue
			}

			for  := uint(0);  < 8; ++ {
				if &(1<<) != 0 {
					 := .spans[+uint()*8+]
					if atomic.Load(&.sweepgen) == -2 && atomic.Cas(&.sweepgen, -2, -1) {
						 := .npages
						unlock(&.lock)
						if .sweep(false) {
							 += 
						}
Reload inUse. It's possible nearby spans were freed when we dropped the lock and we don't want to get stale pointers from the spans array.
						 = atomic.Load8(&[]) &^ []
					}
				}
			}
		}
Advance.
		 += uintptr(len() * 8)
		 -= uintptr(len() * 8)
	}
	if trace.enabled {
Account for pages scanned but not reclaimed.
		traceGCSweepSpan(( - ) * pageSize)
		lock(&.lock)
	}

	assertLockHeld(&.lock) // Must be locked on return.
	return 
}
spanAllocType represents the type of allocation to make, or the type of allocation to be freed.
type spanAllocType uint8

const (
	spanAllocHeap          spanAllocType = iota // heap span
	spanAllocStack                              // stack span
	spanAllocPtrScalarBits                      // unrolled GC prog bitmap span
	spanAllocWorkBuf                            // work buf span
)
manual returns true if the span allocation is manually managed.
func (typ spanAllocType) manual() bool {
	return typ != spanAllocHeap
}
alloc allocates a new span of npage pages from the GC'd heap. spanclass indicates the span's size class and scannability. If needzero is true, the memory for the returned span will be zeroed.
Don't do any operations that lock the heap on the G stack. It might trigger stack growth, and the stack growth code needs to be able to allocate heap.
	var  *mspan
To prevent excessive heap growth, before allocating n pages we need to sweep and reclaim at least n pages.
		if .sweepdone == 0 {
			.reclaim()
		}
		 = .allocSpan(, spanAllocHeap, )
	})

	if  != nil {
		if  && .needzero != 0 {
			memclrNoHeapPointers(unsafe.Pointer(.base()), .npages<<_PageShift)
		}
		.needzero = 0
	}
	return 
}
allocManual allocates a manually-managed span of npage pages. allocManual returns nil if allocation fails. allocManual adds the bytes used to *stat, which should be a memstats in-use field. Unlike allocations in the GC'd heap, the allocation does *not* count toward heap_inuse or heap_sys. The memory backing the returned span may not be zeroed if span.needzero is set. allocManual must be called on the system stack because it may acquire the heap lock via allocSpan. See mheap for details. If new code is written to call allocManual, do NOT use an existing spanAllocType value and instead declare a new one.
go:systemstack
func (h *mheap) allocManual(npages uintptr, typ spanAllocType) *mspan {
	if !typ.manual() {
		throw("manual span allocation called with non-manually-managed type")
	}
	return h.allocSpan(npages, typ, 0)
}
setSpans modifies the span map so [spanOf(base), spanOf(base+npage*pageSize)) is s.
func (h *mheap) setSpans(base, npage uintptr, s *mspan) {
	p := base / pageSize
	ai := arenaIndex(base)
	ha := h.arenas[ai.l1()][ai.l2()]
	for n := uintptr(0); n < npage; n++ {
		i := (p + n) % pagesPerArena
		if i == 0 {
			ai = arenaIndex(base + n*pageSize)
			ha = h.arenas[ai.l1()][ai.l2()]
		}
		ha.spans[i] = s
	}
}
allocNeedsZero checks if the region of address space [base, base+npage*pageSize), assumed to be allocated, needs to be zeroed, updating heap arena metadata for future allocations. This must be called each time pages are allocated from the heap, even if the page allocator can otherwise prove the memory it's allocating is already zero because they're fresh from the operating system. It updates heapArena metadata that is critical for future page allocations. There are no locking constraints on this method.
func ( *mheap) (,  uintptr) ( bool) {
	for  > 0 {
		 := arenaIndex()
		 := .arenas[.l1()][.l2()]

		 := atomic.Loaduintptr(&.zeroedBase)
		 :=  % heapArenaBytes
We extended into the non-zeroed part of the arena, so this region needs to be zeroed before use. zeroedBase is monotonically increasing, so if we see this now then we can be sure we need to zero this memory region. We still need to update zeroedBase for this arena, and potentially more arenas.
			 = true
We may observe arenaBase > zeroedBase if we're racing with one or more allocations which are acquiring memory directly before us in the address space. But, because we know no one else is acquiring *this* memory, it's still safe to not zero.
Compute how far into the arena we extend into, capped at heapArenaBytes.
		 :=  + *pageSize
		if  > heapArenaBytes {
			 = heapArenaBytes
Increase ha.zeroedBase so it's >= arenaLimit. We may be racing with other updates.
		for  >  {
			if atomic.Casuintptr(&.zeroedBase, , ) {
				break
			}
Sanity check zeroedBase.
The zeroedBase moved into the space we were trying to claim. That's very bad, and indicates someone allocated the same region we did.
				throw("potentially overlapping in-use allocations detected")
			}
		}
Move base forward and subtract from npage to move into the next arena, or finish.
		 +=  - 
		 -= ( - ) / pageSize
	}
	return
}
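allocNeedsZero boils down to a monotonic high-water mark per arena: everything below zeroedBase has been handed out before (so may be dirty), and everything at or above it is fresh from the OS (so is already zero). A simplified single-arena sketch of that rule, with illustrative names rather than the runtime's:

package main

import (
	"fmt"
	"sync/atomic"
)

// needsZero reports whether the allocation [off, off+size) must be zeroed and
// advances the arena's zeroedBase high-water mark past it. Simplified to one
// arena; the real allocNeedsZero walks every arena the allocation touches and
// also sanity-checks for overlapping allocations.
func needsZero(zeroedBase *uintptr, off, size uintptr) bool {
	zb := atomic.LoadUintptr(zeroedBase)
	needs := off < zb // starts in previously used, possibly dirty memory
	limit := off + size
	for limit > zb {
		if atomic.CompareAndSwapUintptr(zeroedBase, zb, limit) {
			break
		}
		zb = atomic.LoadUintptr(zeroedBase) // raced with a neighboring allocation; retry
	}
	return needs
}

func main() {
	var zb uintptr = 8192                   // the first 8 KiB of this arena was used before
	fmt.Println(needsZero(&zb, 4096, 4096)) // true: begins below the mark, may be dirty
	fmt.Println(needsZero(&zb, 8192, 4096)) // false: fresh memory, already zero
}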
tryAllocMSpan attempts to allocate an mspan object from the P-local cache, but may fail. h.lock need not be held. The caller must ensure that its P won't change underneath it during this function. Currently we enforce that by requiring the function to run on the system stack, because that's the only place it is used now. In the future, this requirement may be relaxed if its use is necessary elsewhere.
go:systemstack
func ( *mheap) () *mspan {
If we don't have a p or the cache is empty, we can't do anything here.
	if  == nil || .mspancache.len == 0 {
		return nil
Pull off the last entry in the cache.
	 := .mspancache.buf[.mspancache.len-1]
	.mspancache.len--
	return 
}
allocMSpanLocked allocates an mspan object. h.lock must be held. allocMSpanLocked must be called on the system stack because its caller holds the heap lock. See mheap for details. Running on the system stack also ensures that we won't switch Ps during this function. See tryAllocMSpan for details.
go:systemstack
func ( *mheap) () *mspan {
	assertLockHeld(&.lock)

	 := getg().m.p.ptr()
We don't have a p so just do the normal thing.
		return (*mspan)(.spanalloc.alloc())
Refill the cache if necessary.
	if .mspancache.len == 0 {
		const  = len(.mspancache.buf) / 2
		for  := 0;  < ; ++ {
			.mspancache.buf[] = (*mspan)(.spanalloc.alloc())
		}
		.mspancache.len = 
Pull off the last entry in the cache.
	 := .mspancache.buf[.mspancache.len-1]
	.mspancache.len--
	return 
}
freeMSpanLocked frees an mspan object. h.lock must be held. freeMSpanLocked must be called on the system stack because its caller holds the heap lock. See mheap for details. Running on the system stack also ensures that we won't switch Ps during this function. See tryAllocMSpan for details.
go:systemstack
First try to free the mspan directly to the cache.
	if  != nil && .mspancache.len < len(.mspancache.buf) {
		.mspancache.buf[.mspancache.len] = 
		.mspancache.len++
		return
Failing that (or if we don't have a p), just free it to the heap.
allocSpan allocates an mspan which owns npages worth of memory. If typ.manual() == false, allocSpan allocates a heap span of class spanclass and updates heap accounting. If typ.manual() == true, allocSpan allocates a manually-managed span (spanclass is ignored), and the caller is responsible for any accounting related to its use of the span. Either way, allocSpan will atomically add the bytes in the newly allocated span to *sysStat. The returned span is fully initialized. h.lock must not be held. allocSpan must be called on the system stack both because it acquires the heap lock and because it must block GC transitions.
go:systemstack
Function-global state.
	 := getg()
	,  := uintptr(0), uintptr(0)
On some platforms we need to provide physical page aligned stack allocations. Where the page size is less than the physical page size, we already manage to do this by default.
	 := physPageAlignedStacks &&  == spanAllocStack && pageSize < physPageSize
If the allocation is small enough, try the page cache! The page cache does not support aligned allocations, so we cannot use it if we need to provide a physical page aligned stack allocation.
	 := .m.p.ptr()
	if ! &&  != nil &&  < pageCachePages/4 {
		 := &.pcache
If the cache is empty, refill it.
		if .empty() {
			lock(&.lock)
			* = .pages.allocToCache()
			unlock(&.lock)
		}
Try to allocate from the cache.
		,  = .alloc()
		if  != 0 {
			 = .tryAllocMSpan()
			if  != nil {
				goto 
We have a base but no mspan, so we need to lock the heap.
		}
	}
For one reason or another, we couldn't get the whole job done without the heap lock.
	lock(&.lock)

Overallocate by a physical page to allow for later alignment. (A standalone sketch of this trimming follows the function.)
		 += physPageSize / pageSize
	}

Try to acquire a base address.
		,  = .pages.alloc()
		if  == 0 {
			if !.grow() {
				unlock(&.lock)
				return nil
			}
			,  = .pages.alloc()
			if  == 0 {
				throw("grew heap, but no adequate free space found")
			}
		}
	}
We failed to get an mspan earlier, so grab one now that we have the heap lock.
		 = .allocMSpanLocked()
	}

	if  {
		,  := , 
		 = alignUp(, physPageSize)
		 -= physPageSize / pageSize
Return memory around the aligned allocation.
		 :=  - 
		if  > 0 {
			.pages.free(, /pageSize)
		}
		 := (-)*pageSize - 
		if  > 0 {
			.pages.free(+*pageSize, /pageSize)
		}
	}

	unlock(&.lock)

At this point, both s != nil and base != 0, and the heap lock is no longer held. Initialize the span.
	.init(, )
	if .allocNeedsZero(, ) {
		.needzero = 1
	}
	 :=  * pageSize
	if .manual() {
		.manualFreeList = 0
		.nelems = 0
		.limit = .base() + .npages*pageSize
		.state.set(mSpanManual)
We must set span properties before the span is published anywhere since we're not holding the heap lock.
		.spanclass = 
		if  := .sizeclass();  == 0 {
			.elemsize = 
			.nelems = 1

			.divShift = 0
			.divMul = 0
			.divShift2 = 0
			.baseMask = 0
		} else {
			.elemsize = uintptr(class_to_size[])
			.nelems =  / .elemsize

			 := &class_to_divmagic[]
			.divShift = .shift
			.divMul = .mul
			.divShift2 = .shift2
			.baseMask = .baseMask
		}
Initialize mark and allocation structures.
		.freeindex = 0
		.allocCache = ^uint64(0) // all 1s indicating all free.
		.gcmarkBits = newMarkBits(.nelems)
		.allocBits = newAllocBits(.nelems)
It's safe to access h.sweepgen without the heap lock because it's only ever updated with the world stopped and we run on the systemstack which blocks a STW transition.
Now that the span is filled in, set its state. This is a publication barrier for the other fields in the span. While valid pointers into this span should never be visible until the span is returned, if the garbage collector finds an invalid pointer, access to the span may race with initialization of the span. We resolve this race by atomically setting the state after the span is fully initialized, and atomically checking the state in any situation where a pointer is suspect.
Commit and account for any scavenged memory that the span now owns.
sysUsed all the pages that are actually available in the span since some of them might be scavenged.
Update stats.
	if  == spanAllocHeap {
		atomic.Xadd64(&memstats.heap_inuse, int64())
	}
Manually managed memory doesn't count toward heap_sys.
Update consistent stats.
Publish the span in various locations.
This is safe to call without the lock held because the slots related to this span will only ever be read or modified by this thread until pointers into the span are published (and we execute a publication barrier at the end of this function before that happens) or pageInUse is updated.
	.setSpans(.base(), , )

Mark in-use span in arena page bitmap. This publishes the span to the page sweeper, so it's imperative that the span be completely initialized prior to this line.
		, ,  := pageIndexOf(.base())
		atomic.Or8(&.pageInUse[], )
Update related page sweeper stats.
		atomic.Xadd64(&.pagesInUse, int64())
	}
Make sure the newly allocated span will be observed by the GC before pointers into the span are published.
	publicationBarrier()

	return 
}
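The physical-page alignment path above over-allocates by one physical page worth of runtime pages, slides the base up to a physical page boundary, and returns the leading and trailing runs to the page allocator. A standalone sketch of that arithmetic, assuming illustrative sizes (8 KiB runtime pages and 16 KiB physical pages):

package main

import "fmt"

func alignUp(n, a uintptr) uintptr { return (n + a - 1) &^ (a - 1) }

// alignedRange reports the aligned base plus the leading/trailing page runs
// that get handed back after over-allocating for alignment.
func alignedRange(base, npages uintptr) (alignedBase, leadPages, trailPages uintptr) {
	const pageSize, physPageSize = 8 << 10, 16 << 10
	allocPages := npages + physPageSize/pageSize // what was actually requested
	alignedBase = alignUp(base, physPageSize)
	leadPages = (alignedBase - base) / pageSize
	trailPages = allocPages - leadPages - npages
	return
}

func main() {
	// A 4-page stack span whose raw base lands mid-way through a physical page:
	// the base slides up to the next 16 KiB boundary and one page is trimmed
	// off each end of the over-allocation.
	fmt.Println(alignedRange(0xa000, 4)) // 49152 1 1
}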
Try to add at least npage pages of memory to the heap, returning whether it worked. h.lock must be held.
func ( *mheap) ( uintptr) bool {
	assertLockHeld(&.lock)
We must grow the heap in whole palloc chunks.
	 := alignUp(, pallocChunkPages) * pageSize

This may overflow because ask could be very large and is otherwise unrelated to h.curArena.base.
	 := .curArena.base + 
	 := alignUp(, physPageSize)
Not enough room in the current arena. Allocate more arena space. This may not be contiguous with the current arena, so we have to request the full ask.
		,  := .sysAlloc()
		if  == nil {
			print("runtime: out of memory: cannot allocate ", , "-byte block (", memstats.heap_sys, " in use)\n")
			return false
		}

The new space is contiguous with the old space, so just extend the current space.
			.curArena.end = uintptr() + 
The new space is discontiguous. Track what remains of the current space and switch to the new space. This should be rare.
			if  := .curArena.end - .curArena.base;  != 0 {
				.pages.grow(.curArena.base, )
				 += 
Switch to the new space.
			.curArena.base = uintptr()
			.curArena.end = uintptr() + 
		}
The memory just allocated counts as both released and idle, even though it's not yet backed by spans. The allocation is always aligned to the heap arena size which is always > physPageSize, so it's safe to just add directly to heap_released.
Recalculate nBase. We know this won't overflow, because sysAlloc returned a valid region starting at h.curArena.base which is at least ask bytes in size.
		 = alignUp(.curArena.base+, physPageSize)
	}
Grow into the current arena.
	 := .curArena.base
	.curArena.base = 
	.pages.grow(, -)
	 +=  - 
We just caused a heap growth, so scavenge down what will soon be used. By scavenging inline we deal with the failure to allocate out of memory fragments by scavenging the memory fragments that are least likely to be re-used.
	if  := heapRetained(); +uint64() > .scavengeGoal {
		 := 
		if  := uintptr( + uint64() - .scavengeGoal);  >  {
			 = 
		}
		.pages.scavenge(, false)
	}
	return true
}
Free the span back into the heap.
func ( *mheap) ( *mspan) {
	systemstack(func() {
		lock(&.lock)
Tell msan that this entire span is no longer in use.
			 := unsafe.Pointer(.base())
			 := .npages << _PageShift
			msanfree(, )
		}
		.freeSpanLocked(, spanAllocHeap)
		unlock(&.lock)
	})
}
freeManual frees a manually-managed span returned by allocManual. typ must be the same as the spanAllocType passed to the allocManual that allocated s. This must only be called when gcphase == _GCoff. See mSpanState for an explanation. freeManual must be called on the system stack because it acquires the heap lock. See mheap for details.
go:systemstack
func (h *mheap) freeManual(s *mspan, typ spanAllocType) {
	s.needzero = 1
	lock(&h.lock)
	h.freeSpanLocked(s, typ)
	unlock(&h.lock)
}

func ( *mheap) ( *mspan,  spanAllocType) {
	assertLockHeld(&.lock)

	switch .state.get() {
	case mSpanManual:
		if .allocCount != 0 {
			throw("mheap.freeSpanLocked - invalid stack free")
		}
	case mSpanInUse:
		if .allocCount != 0 || .sweepgen != .sweepgen {
			print("mheap.freeSpanLocked - span ", , " ptr ", hex(.base()), " allocCount ", .allocCount, " sweepgen ", .sweepgen, "/", .sweepgen, "\n")
			throw("mheap.freeSpanLocked - invalid free")
		}
		atomic.Xadd64(&.pagesInUse, -int64(.npages))
Clear in-use bit in arena page bitmap.
		, ,  := pageIndexOf(.base())
		atomic.And8(&.pageInUse[], ^)
	default:
		throw("mheap.freeSpanLocked - invalid span state")
	}
Update stats. Mirrors the code in allocSpan.
	 := .npages * pageSize
	if  == spanAllocHeap {
		atomic.Xadd64(&memstats.heap_inuse, -int64())
	}
Manually managed memory doesn't count toward heap_sys, so add it back.
Update consistent stats.
Mark the space as free.
	.pages.free(.base(), .npages)
Free the span structure. We no longer have a use for it.
scavengeAll acquires the heap lock (blocking any additional manipulation of the page allocator) and iterates over the whole heap, scavenging every free page available.
Disallow malloc or panic while holding the heap lock. We do this here because this is a non-mallocgc entry-point to the mheap API.
	 := getg()
	.m.mallocing++
Start a new scavenge generation so we have a chance to walk over the whole heap.
	.pages.scavengeStartGen()
	 := .pages.scavenge(^uintptr(0), false)
	 := .pages.scav.gen
	unlock(&.lock)
	.m.mallocing--

	if debug.scavtrace > 0 {
		printScavTrace(, , true)
	}
}
go:linkname runtime_debug_freeOSMemory runtime/debug.freeOSMemory
Initialize a new span with the given start and npages.
span is *not* zeroed.
	.next = nil
	.prev = nil
	.list = nil
	.startAddr = 
	.npages = 
	.allocCount = 0
	.spanclass = 0
	.elemsize = 0
	.speciallock.key = 0
	.specials = nil
	.needzero = 0
	.freeindex = 0
	.allocBits = nil
	.gcmarkBits = nil
	.state.set(mSpanDead)
	lockInit(&.speciallock, lockRankMspanSpecial)
}

func (span *mspan) inList() bool {
	return span.list != nil
}
Initialize an empty doubly-linked list.
func (list *mSpanList) init() {
	list.first = nil
	list.last = nil
}

func ( *mSpanList) ( *mspan) {
	if .list !=  {
		print("runtime: failed mSpanList.remove span.npages=", .npages,
			" span=", , " prev=", .prev, " span.list=", .list, " list=", , "\n")
		throw("mSpanList.remove")
	}
	if .first ==  {
		.first = .next
	} else {
		.prev.next = .next
	}
	if .last ==  {
		.last = .prev
	} else {
		.next.prev = .prev
	}
	.next = nil
	.prev = nil
	.list = nil
}

func ( *mSpanList) () bool {
	return .first == nil
}

func ( *mSpanList) ( *mspan) {
	if .next != nil || .prev != nil || .list != nil {
		println("runtime: failed mSpanList.insert", , .next, .prev, .list)
		throw("mSpanList.insert")
	}
	.next = .first
The list contains at least one span; link it in. The last span in the list doesn't change.
		.first.prev = 
The list contains no spans, so this is also the last span.
		.last = 
	}
	.first = 
	.list = 
}

func ( *mSpanList) ( *mspan) {
	if .next != nil || .prev != nil || .list != nil {
		println("runtime: failed mSpanList.insertBack", , .next, .prev, .list)
		throw("mSpanList.insertBack")
	}
	.prev = .last
The list contains at least one span.
		.last.next = 
The list contains no spans, so this is also the first span.
		.first = 
	}
	.last = 
	.list = 
}
takeAll removes all spans from other and inserts them at the front of list.
func ( *mSpanList) ( *mSpanList) {
	if .isEmpty() {
		return
	}
Reparent everything in other to list.
	for  := .first;  != nil;  = .next {
		.list = 
	}
Concatenate the lists.
	if .isEmpty() {
		* = *
Neither list is empty. Put other before list.
		.last.next = .first
		.first.prev = .last
		.first = .first
	}

	.first, .last = nil, nil
}

const (
	_KindSpecialFinalizer = 1
	_KindSpecialProfile   = 2
Note: The finalizer special must be first because if we're freeing an object, a finalizer special will cause the freeing operation to abort, and we want to keep the other special records around if that happens.
)
go:notinheap
type special struct {
	next   *special // linked list in span
	offset uint16   // span offset of object
	kind   byte     // kind of special
}
spanHasSpecials marks a span as having specials in the arena bitmap.
func spanHasSpecials(s *mspan) {
	arenaPage := (s.base() / pageSize) % pagesPerArena
	ai := arenaIndex(s.base())
	ha := mheap_.arenas[ai.l1()][ai.l2()]
	atomic.Or8(&ha.pageSpecials[arenaPage/8], uint8(1)<<(arenaPage%8))
}
spanHasNoSpecials marks a span as having no specials in the arena bitmap.
func spanHasNoSpecials(s *mspan) {
	arenaPage := (s.base() / pageSize) % pagesPerArena
	ai := arenaIndex(s.base())
	ha := mheap_.arenas[ai.l1()][ai.l2()]
	atomic.And8(&ha.pageSpecials[arenaPage/8], ^(uint8(1) << (arenaPage % 8)))
}
Adds the special record s to the list of special records for the object p. All fields of s should be filled in except for offset & next, which this routine will fill in. Returns true if the special was successfully added, false otherwise. (The add will fail only if a record with the same p and s->kind already exists.)
func ( unsafe.Pointer,  *special) bool {
	 := spanOfHeap(uintptr())
	if  == nil {
		throw("addspecial on invalid pointer")
	}
Ensure that the span is swept. Sweeping accesses the specials list w/o locks, so we have to synchronize with it. And it's just much safer.
	 := acquirem()
	.ensureSwept()

	 := uintptr() - .base()
	 := .kind

	lock(&.speciallock)
Find splice point, check for existing record.
	 := &.specials
	for {
		 := *
		if  == nil {
			break
		}
		if  == uintptr(.offset) &&  == .kind {
			unlock(&.speciallock)
			releasem()
			return false // already exists
		}
		if  < uintptr(.offset) || ( == uintptr(.offset) &&  < .kind) {
			break
		}
		 = &.next
	}
Splice in record, fill in offset.
	.offset = uint16()
	.next = *
	* = 
	spanHasSpecials()
	unlock(&.speciallock)
	releasem()

	return true
}
Removes the Special record of the given kind for the object p. Returns the record if the record existed, nil otherwise. The caller must FixAlloc_Free the result.
func ( unsafe.Pointer,  uint8) *special {
	 := spanOfHeap(uintptr())
	if  == nil {
		throw("removespecial on invalid pointer")
	}
Ensure that the span is swept. Sweeping accesses the specials list w/o locks, so we have to synchronize with it. And it's just much safer.
	 := acquirem()
	.ensureSwept()

	 := uintptr() - .base()

	var  *special
	lock(&.speciallock)
	 := &.specials
	for {
		 := *
		if  == nil {
			break
This function is used for finalizers only, so we don't check for "interior" specials (p must be exactly equal to s->offset).
		if  == uintptr(.offset) &&  == .kind {
			* = .next
			 = 
			break
		}
		 = &.next
	}
	if .specials == nil {
		spanHasNoSpecials()
	}
	unlock(&.speciallock)
	releasem()
	return 
}
The described object has a finalizer set for it. specialfinalizer is allocated from non-GC'd memory, so any heap pointers must be specially handled.
go:notinheap
type specialfinalizer struct {
	special special
	fn      *funcval // May be a heap pointer.
	nret    uintptr
	fint    *_type   // May be a heap pointer, but always live.
	ot      *ptrtype // May be a heap pointer, but always live.
}
Adds a finalizer to the object p. Returns true if it succeeded.
This is responsible for maintaining the same GC-related invariants as markrootSpans in any situation where it's possible that markrootSpans has already run but mark termination hasn't yet.
		if gcphase != _GCoff {
			, ,  := findObject(uintptr(), 0, 0)
			 := acquirem()
Mark everything reachable from the object so it's retained for the finalizer.
Mark the finalizer itself, since the special isn't part of the GC'd heap.
			scanblock(uintptr(unsafe.Pointer(&.fn)), sys.PtrSize, &oneptrmask[0], , nil)
			releasem()
		}
		return true
	}
Removes the finalizer (if any) from the object p.
The described object is being heap profiled.
go:notinheap
type specialprofile struct {
	special special
	b       *bucket
}
Set the heap profile bucket associated with addr to b.
func ( unsafe.Pointer,  *bucket) {
	lock(&mheap_.speciallock)
	 := (*specialprofile)(mheap_.specialprofilealloc.alloc())
	unlock(&mheap_.speciallock)
	.special.kind = _KindSpecialProfile
	.b = 
	if !addspecial(, &.special) {
		throw("setprofilebucket: profile already set")
	}
}
Do whatever cleanup needs to be done to deallocate s. It has already been unlinked from the mspan specials list.
gcBits is an alloc/mark bitmap. This is always used as *gcBits.
go:notinheap
type gcBits uint8
bytep returns a pointer to the n'th byte of b.
func (b *gcBits) bytep(n uintptr) *uint8 {
	return addb((*uint8)(b), n)
}
bitp returns a pointer to the byte containing bit n and a mask for selecting that bit from *bytep.
func (b *gcBits) bitp(n uintptr) (bytep *uint8, mask uint8) {
	return b.bytep(n / 8), 1 << (n % 8)
}
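Bit n lives in byte n/8 under mask 1<<(n%8). A standalone toy bitmap using the same indexing (illustrative only; the runtime's gcBits lives in manually managed memory and is addressed through addb):

package main

import "fmt"

type bitmap []uint8

// bitp mirrors the byte/mask indexing above on an ordinary slice.
func (b bitmap) bitp(n uintptr) (bytep *uint8, mask uint8) {
	return &b[n/8], 1 << (n % 8)
}

func main() {
	b := make(bitmap, 4) // room for 32 objects
	bytep, mask := b.bitp(10)
	*bytep |= mask                        // mark object 10
	fmt.Println(*bytep, b[1]&(1<<2) != 0) // 4 true: bit 2 of byte 1
}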

const gcBitsChunkBytes = uintptr(64 << 10)
const gcBitsHeaderBytes = unsafe.Sizeof(gcBitsHeader{})

type gcBitsHeader struct {
	free uintptr // free is the index into bits of the next free byte.
	next uintptr // *gcBits triggers recursive type bug. (issue 14620)
}
go:notinheap
type gcBitsArena struct {
gcBitsHeader // side step recursive type bug (issue 14620) by including fields by hand.
	free uintptr // free is the index into bits of the next free byte; read/write atomically
	next *gcBitsArena
	bits [gcBitsChunkBytes - gcBitsHeaderBytes]gcBits
}

var gcBitsArenas struct {
	lock     mutex
	free     *gcBitsArena
	next     *gcBitsArena // Read atomically. Write atomically under lock.
	current  *gcBitsArena
	previous *gcBitsArena
}
tryAlloc allocates from b or returns nil if b does not have enough room. This is safe to call concurrently.
func (b *gcBitsArena) tryAlloc(bytes uintptr) *gcBits {
	if b == nil || atomic.Loaduintptr(&b.free)+bytes > uintptr(len(b.bits)) {
		return nil
	}
Try to allocate from this block.
	end := atomic.Xadduintptr(&b.free, bytes)
	if end > uintptr(len(b.bits)) {
		return nil
	}
There was enough room.
	start := end - bytes
	return &b.bits[start]
}
newMarkBits returns a pointer to 8 byte aligned bytes to be used for a span's mark bits.
func ( uintptr) *gcBits {
	 := uintptr(( + 63) / 64)
	 :=  * 8
Try directly allocating from the current head arena.
	 := (*gcBitsArena)(atomic.Loadp(unsafe.Pointer(&gcBitsArenas.next)))
	if  := .tryAlloc();  != nil {
		return 
	}
There's not enough room in the head arena. We may need to allocate a new arena.
Try the head arena again, since it may have changed. Now that we hold the lock, the list head can't change, but its free position still can.
	if  := gcBitsArenas.next.tryAlloc();  != nil {
		unlock(&gcBitsArenas.lock)
		return 
	}
Allocate a new arena. This may temporarily drop the lock.
If newArenaMayUnlock dropped the lock, another thread may have put a fresh arena on the "next" list. Try allocating from next again.
Put fresh back on the free list. TODO: Mark it "already zeroed"
Allocate from the fresh arena. We haven't linked it in yet, so this cannot race and is guaranteed to succeed.
	 := .tryAlloc()
	if  == nil {
		throw("markBits overflow")
	}
Add the fresh arena to the "next" list.
newAllocBits returns a pointer to 8 byte aligned bytes to be used for this span's alloc bits. newAllocBits is used to provide newly initialized spans allocation bits. For spans not being initialized the mark bits are repurposed as allocation bits when the span is swept.
func newAllocBits(nelems uintptr) *gcBits {
	return newMarkBits(nelems)
}
nextMarkBitArenaEpoch establishes a new epoch for the arenas holding the mark bits. The arenas are named relative to the current GC cycle which is demarcated by the call to finishsweep_m. All current spans have been swept. During that sweep each span allocated room for its gcmarkBits in gcBitsArenas.next block. gcBitsArenas.next becomes the gcBitsArenas.current where the GC will mark objects and after each span is swept these bits will be used to allocate objects. gcBitsArenas.current becomes gcBitsArenas.previous where the span's gcAllocBits live until all the spans have been swept during this GC cycle. The span's sweep extinguishes all the references to gcBitsArenas.previous by pointing gcAllocBits into the gcBitsArenas.current. The gcBitsArenas.previous is released to the gcBitsArenas.free list.
Find end of previous arenas.
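The epoch hand-off described above is a four-way rotation of the free/next/current/previous lists. A minimal sketch of just that rotation on plain slices (the runtime does this under gcBitsArenas.lock on linked lists of gcBitsArena blocks):

package main

import "fmt"

type epochs struct{ free, next, current, previous []string }

// rotate performs the hand-off: previous is prepended to the free list,
// current becomes previous, next becomes current, and next is cleared so the
// next cycle allocates fresh arenas on demand.
func (e *epochs) rotate() {
	e.free = append(e.previous, e.free...)
	e.previous = e.current
	e.current = e.next
	e.next = nil
}

func main() {
	e := epochs{previous: []string{"p1"}, current: []string{"c1"}, next: []string{"n1"}}
	e.rotate()
	fmt.Printf("%+v\n", e) // {free:[p1] next:[] current:[n1] previous:[c1]}
}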
newArenaMayUnlock allocates and zeroes a gcBits arena. The caller must hold gcBitsArena.lock. This may temporarily release it.
If result.bits is not 8 byte aligned adjust index so that &result.bits[result.free] is 8 byte aligned.
	if uintptr(unsafe.Offsetof(gcBitsArena{}.bits))&7 == 0 {
		.free = 0
	} else {
		.free = 8 - (uintptr(unsafe.Pointer(&.bits[0])) & 7)
	}
	return