Copyright 2009 The Go Authors. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.
Garbage collector: sweeping
The sweeper consists of two different algorithms:

* The object reclaimer finds and frees unmarked slots in spans. It can free a whole span if none of the objects are marked, but that isn't its goal. This can be driven either synchronously by mcentral.cacheSpan for mcentral spans, or asynchronously by sweepone, which looks at all the mcentral lists.

* The span reclaimer looks for spans that contain no marked objects and frees whole spans. This is a separate algorithm because freeing whole spans is the hardest task for the object reclaimer, but is critical when allocating new spans. The entry point for this is mheap_.reclaim and it's driven by a sequential scan of the page marks bitmap in the heap arenas.

Both algorithms ultimately call mspan.sweep, which sweeps a single heap span.
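A rough sketch of how the object reclaimer is driven asynchronously (illustration only, not the actual background sweeper, which also frees work buffers and parks between GC cycles): sweepone reports ^uintptr(0) once nothing is left to sweep, so a driver simply loops until then.

	// Illustrative driver loop: sweep one span at a time, yielding
	// between spans so the work stays in the background.
	for sweepone() != ^uintptr(0) {
		Gosched()
	}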

package runtime

import (
	"runtime/internal/atomic"
	"unsafe"
)

var sweep sweepdata
State of background sweep.
centralIndex is the current unswept span class. It represents an index into the mcentral span sets. Accessed and updated via its load and update methods. Not protected by a lock. Reset at mark termination. Used by mheap.nextSpanForSweep.
sweepClass is a spanClass and one bit to represent whether we're currently sweeping partial or full spans.
func (s *sweepClass) update(sNew sweepClass) {
Only update *s if its current value is less than sNew, since *s increases monotonically.
	sOld := s.load()
	for sOld < sNew && !atomic.Cas((*uint32)(s), uint32(sOld), uint32(sNew)) {
		sOld = s.load()
	}
TODO(mknyszek): This isn't the only place we have an atomic monotonically increasing counter. It would be nice to have an "atomic max" which is just implemented as the above on most architectures. Some architectures like RISC-V however have native support for an atomic max.
}
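The TODO above can be made concrete: an "atomic max" is just the same CAS loop in one place. A sketch only; atomicMaxUint32 is hypothetical and not a runtime function.

// Hypothetical helper: atomically raise *addr to at least v.
func atomicMaxUint32(addr *uint32, v uint32) {
	for {
		old := atomic.Load(addr)
		if old >= v || atomic.Cas(addr, old, v) {
			return
		}
	}
}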

func (s *sweepClass) clear() {
	atomic.Store((*uint32)(s), 0)
}
split returns the underlying span class as well as whether we're interested in the full or partial unswept lists for that class, indicated as a boolean (true means "full").
func (s sweepClass) split() (spc spanClass, full bool) {
	return spanClass(s >> 1), s&1 == 0
}
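To make the encoding concrete, the inverse packing that split undoes would look like the sketch below; join is hypothetical and exists only for illustration (bit 0 clear selects the full unswept list, bit 0 set the partial one).

// Hypothetical inverse of split, for illustration of the encoding only.
func join(spc spanClass, full bool) sweepClass {
	sc := sweepClass(spc) << 1
	if !full {
		sc |= 1 // bit 0 set means "partial"
	}
	return sc
}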
nextSpanForSweep finds and pops the next span for sweeping from the central sweep buffers. It returns ownership of the span to the caller. Returns nil if no such span exists.
func (h *mheap) nextSpanForSweep() *mspan {
	sg := h.sweepgen
	for sc := sweep.centralIndex.load(); sc < numSweepClasses; sc++ {
		spc, full := sc.split()
		c := &h.central[spc].mcentral
		var s *mspan
		if full {
			s = c.fullUnswept(sg).pop()
		} else {
			s = c.partialUnswept(sg).pop()
		}
		if s != nil {
Write down that we found something so future sweepers can start from here.
			sweep.centralIndex.update(sc)
			return s
		}
	}
Write down that we found nothing.
	sweep.centralIndex.update(sweepClassDone)
	return nil
}
finishsweep_m ensures that all spans are swept. The world must be stopped. This ensures there are no sweeps in progress.
go:nowritebarrier
func finishsweep_m() {
Sweeping must be complete before marking commences, so sweep any unswept spans. If this is a concurrent GC, there shouldn't be any spans left to sweep, so this should finish instantly. If GC was forced before the concurrent sweep finished, there may be spans to sweep.
	for sweepone() != ^uintptr(0) {
		sweep.npausesweep++
	}

Reset all the unswept buffers, which should be empty. Do this in sweep termination as opposed to mark termination so that we can catch unswept spans and reclaim blocks as soon as possible.
	sg := mheap_.sweepgen
	for i := range mheap_.central {
		c := &mheap_.central[i].mcentral
		c.partialUnswept(sg).reset()
		c.fullUnswept(sg).reset()
	}
Sweeping is done, so if the scavenger isn't already awake, wake it up. There's definitely work for it to do at this point.
	wakeScavenger()

	nextMarkBitArenaEpoch()
}
This can happen if a GC runs between sweepone returning ^0 above and the lock being acquired.
sweepone sweeps some unswept heap span and returns the number of pages returned to the heap, or ^uintptr(0) if there was nothing to sweep.
func sweepone() uintptr {
	gp := getg()
	sweepRatio := mheap_.sweepPagesPerByte // For debugging
Increment locks to ensure that the goroutine is not preempted in the middle of sweep, thus leaving the span in an inconsistent state for the next GC.
	gp.m.locks++
	if atomic.Load(&mheap_.sweepdone) != 0 {
		gp.m.locks--
		return ^uintptr(0)
	}
	atomic.Xadd(&mheap_.sweepers, +1)
Find a span to sweep.
	var s *mspan
	sg := mheap_.sweepgen
	for {
		s = mheap_.nextSpanForSweep()
		if  == nil {
			atomic.Store(&mheap_.sweepdone, 1)
			break
		}
		if state := s.state.get(); state != mSpanInUse {
This can happen if direct sweeping already swept this span, but in that case the sweep generation should always be up-to-date.
			if !(s.sweepgen == sg || s.sweepgen == sg+3) {
				print("runtime: bad span s.state=", state, " s.sweepgen=", s.sweepgen, " sweepgen=", sg, "\n")
				throw("non in-use span in unswept list")
			}
			continue
		}
		if s.sweepgen == sg-2 && atomic.Cas(&s.sweepgen, sg-2, sg-1) {
			break
		}
	}
Sweep the span we found.
	npages := ^uintptr(0)
	if s != nil {
		npages = s.npages
		if s.sweep(false) {
Whole span was freed. Count it toward the page reclaimer credit since these pages can now be used for span allocation.
			atomic.Xadduintptr(&mheap_.reclaimCredit, npages)
		} else {
Span is still in-use, so this returned no pages to the heap and the span needs to move to the swept in-use list.
			npages = 0
		}
	}
Decrement the number of active sweepers and if this is the last one print trace information.
	if atomic.Xadd(&mheap_.sweepers, -1) == 0 && atomic.Load(&mheap_.sweepdone) != 0 {
Since the sweeper is done, move the scavenge gen forward (signalling that there's new work to do) and wake the scavenger. The scavenger is signaled by the last sweeper because once sweeping is done, we will definitely have useful work for the scavenger to do, since the scavenger only runs over the heap once per GC cycle. This update is not done during sweep termination because in some cases there may be a long delay between sweep done and sweep termination (e.g. not enough allocations to trigger a GC) which would be nice to fill in with scavenging work.
Since we might sweep in an allocation path, it's not possible for us to wake the scavenger directly via wakeScavenger, since it could allocate. Ask sysmon to do it for us instead.
		readyForScavenger()

		if debug.gcpacertrace > 0 {
			print("pacer: sweep done at heap size ", memstats.heap_live>>20, "MB; allocated ", (memstats.heap_live-mheap_.sweepHeapLiveBasis)>>20, "MB during sweep; swept ", mheap_.pagesSwept, " pages at ", sweepRatio, " pages/byte\n")
		}
	}
	gp.m.locks--
	return npages
}
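For readers following the sg-2/sg-1/sg+3 arithmetic here and in the rest of this file: the sweepgen encoding is documented on mspan in mheap.go. The hypothetical helper below (illustration only, not a runtime function) spells it out.

// Illustration only: the meaning of a span's sweepgen relative to the
// heap's current sweepgen sg, per the mspan documentation in mheap.go.
func sweepgenState(spangen, sg uint32) string {
	switch spangen {
	case sg - 2:
		return "span needs sweeping"
	case sg - 1:
		return "span is being swept"
	case sg:
		return "span is swept"
	case sg + 1:
		return "span was cached before sweep began, still needs sweeping"
	case sg + 3:
		return "span was swept, then cached, and is still cached"
	}
	return "unexpected sweepgen"
}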
isSweepDone reports whether all spans are swept or currently being swept. Note that this condition may transition from false to true at any time as the sweeper runs. It may transition from true to false if a GC runs; to prevent that the caller must be non-preemptible or must somehow block GC progress.
func isSweepDone() bool {
	return mheap_.sweepdone != 0
}
Returns only when span s has been swept.
go:nowritebarrier
func (s *mspan) ensureSwept() {
Caller must disable preemption. Otherwise when this function returns the span can become unswept again (if GC is triggered on another goroutine).
	gp := getg()
	if gp.m.locks == 0 && gp.m.mallocing == 0 && gp != gp.m.g0 {
		throw("mspan.ensureSwept: m is not locked")
	}

	sg := mheap_.sweepgen
	spangen := atomic.Load(&s.sweepgen)
	if spangen == sg || spangen == sg+3 {
		return
	}
The caller must be sure that the span is a mSpanInUse span.
	if atomic.Cas(&s.sweepgen, sg-2, sg-1) {
		s.sweep(false)
		return
	}
Unfortunate condition, and we don't have efficient means to wait.
	for {
		spangen := atomic.Load(&s.sweepgen)
		if spangen == sg || spangen == sg+3 {
			break
		}
		osyield()
	}
}
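A minimal sketch of how a caller inside the runtime satisfies the preemption requirement above, using the usual acquirem/releasem pairing (which pins the goroutine to its M and bumps m.locks); span here stands for some *mspan the caller holds.

	// Sketch: keep preemption disabled across ensureSwept.
	mp := acquirem()
	span.ensureSwept()
	releasem(mp)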
Sweep frees or collects finalizers for blocks not marked in the mark phase. It clears the mark bits in preparation for the next GC round. Returns true if the span was returned to the heap. If preserve=true, don't return it to the heap or relink it in the mcentral lists; the caller takes care of it.
func (s *mspan) sweep(preserve bool) bool {
It's critical that we enter this function with preemption disabled; GC must not start while we are in the middle of this function.
	gp := getg()
	if gp.m.locks == 0 && gp.m.mallocing == 0 && gp != gp.m.g0 {
		throw("mspan.sweep: m is not locked")
	}
	sweepgen := mheap_.sweepgen
	if state := s.state.get(); state != mSpanInUse || s.sweepgen != sweepgen-1 {
		print("mspan.sweep: state=", state, " sweepgen=", s.sweepgen, " mheap.sweepgen=", sweepgen, "\n")
		throw("mspan.sweep: bad span state")
	}

	if trace.enabled {
		traceGCSweepSpan(s.npages * _PageSize)
	}

	atomic.Xadd64(&mheap_.pagesSwept, int64(s.npages))

	spc := s.spanclass
	size := s.elemsize
The allocBits indicate which unmarked objects don't need to be processed since they were free at the end of the last GC cycle and were not allocated since then. If the allocBits index is >= s.freeindex and the bit is not marked then the object remains unallocated since the last GC. This situation is analogous to being on a freelist.
Unlink & free special records for any objects we're about to free. Two complications here: 1. An object can have both finalizer and profile special records. In such a case we need to queue the finalizer for execution, mark the object as live, and preserve the profile special. 2. A tiny object can have several finalizers set up for different offsets. If such an object is not marked, we need to queue all finalizers at once. Both 1 and 2 are possible at the same time.
	hadSpecials := s.specials != nil
	specialp := &s.specials
	special := *specialp
	for special != nil {
A finalizer can be set for an inner byte of an object, find object beginning.
		objIndex := uintptr(special.offset) / size
		p := s.base() + objIndex*size
		mbits := s.markBitsForIndex(objIndex)
		if !mbits.isMarked() {
This object is not marked and has at least one special record. Pass 1: see if it has at least one finalizer.
			hasFin := false
			endOffset := p - s.base() + size
			for tmp := special; tmp != nil && uintptr(tmp.offset) < endOffset; tmp = tmp.next {
				if tmp.kind == _KindSpecialFinalizer {
Stop freeing of object if it has a finalizer.
					mbits.setMarkedNonAtomic()
					hasFin = true
					break
				}
			}
Pass 2: queue all finalizers _or_ handle profile record.
			for special != nil && uintptr(special.offset) < endOffset {
Find the exact byte for which the special was set up (as opposed to object beginning).
				p := s.base() + uintptr(special.offset)
				if special.kind == _KindSpecialFinalizer || !hasFin {
Splice out special record.
					y := special
					special = special.next
					*specialp = special
					freespecial(y, unsafe.Pointer(p), size)
				} else {
This is a profile record, but the object has finalizers (so it is kept alive). Keep the special record.
					specialp = &special.next
					special = *specialp
				}
			}
		} else {
Object is still live: keep the special record.
			specialp = &special.next
			special = *specialp
		}
	}
	if hadSpecials && s.specials == nil {
		spanHasNoSpecials(s)
	}

	if debug.allocfreetrace != 0 || debug.clobberfree != 0 || raceenabled || msanenabled {
Find all newly freed objects. This doesn't have to be efficient; allocfreetrace has massive overhead.
		mbits := s.markBitsForBase()
		abits := s.allocBitsForIndex(0)
		for i := uintptr(0); i < s.nelems; i++ {
			if !mbits.isMarked() && (abits.index < s.freeindex || abits.isMarked()) {
				x := s.base() + i*s.elemsize
				if debug.allocfreetrace != 0 {
					tracefree(unsafe.Pointer(x), size)
				}
				if debug.clobberfree != 0 {
					clobberfree(unsafe.Pointer(x), size)
				}
				if raceenabled {
					racefree(unsafe.Pointer(x), size)
				}
				if msanenabled {
					msanfree(unsafe.Pointer(x), size)
				}
			}
			mbits.advance()
			abits.advance()
		}
	}
Check for zombie objects (a toy illustration of this bit test appears after this function).
	if s.freeindex < s.nelems {
Everything < freeindex is allocated and hence cannot be zombies. Check the first bitmap byte, where we have to be careful with freeindex.
		obj := s.freeindex
		if (*s.gcmarkBits.bytep(obj/8)&^*s.allocBits.bytep(obj/8))>>(obj%8) != 0 {
			s.reportZombies()
		}
Check the remaining bitmap bytes.
		for i := obj/8 + 1; i < divRoundUp(s.nelems, 8); i++ {
			if *s.gcmarkBits.bytep(i)&^*s.allocBits.bytep(i) != 0 {
				s.reportZombies()
			}
		}
	}
Count the number of free objects in this span.
	nalloc := uint16(s.countAlloc())
	nfreed := s.allocCount - nalloc
	if nalloc > s.allocCount {
The zombie check above should have caught this in more detail.
		print("runtime: nelems=", s.nelems, " nalloc=", nalloc, " previous allocCount=", s.allocCount, " nfreed=", nfreed, "\n")
		throw("sweep increased allocation count")
	}

	s.allocCount = nalloc
	s.freeindex = 0 // reset allocation index to start of span.
	if trace.enabled {
		getg().m.p.ptr().traceReclaimed += uintptr(nfreed) * s.elemsize
	}

gcmarkBits becomes the allocBits. Get a fresh, cleared gcmarkBits in preparation for the next GC.
	s.allocBits = s.gcmarkBits
	s.gcmarkBits = newMarkBits(s.nelems)

Initialize the alloc bits cache.
	s.refillAllocCache(0)

The span must be in our exclusive ownership until we update sweepgen; check for potential races.
	if state := s.state.get(); state != mSpanInUse || s.sweepgen != sweepgen-1 {
		print("mspan.sweep: state=", state, " sweepgen=", s.sweepgen, " mheap.sweepgen=", sweepgen, "\n")
		throw("mspan.sweep: bad span state after sweep")
	}
	if s.sweepgen == sweepgen+1 || s.sweepgen == sweepgen+3 {
		throw("swept cached span")
	}
We need to set s.sweepgen = h.sweepgen only when all blocks are swept, because of the potential for a concurrent free/SetFinalizer. But we need to set it before we make the span available for allocation (return it to the heap or mcentral), because allocation code assumes that a span is already swept if it is available for allocation. Serialization point: at this point the mark bits are cleared and allocation is ready to go, so release the span.
	atomic.Store(&s.sweepgen, sweepgen)

	if spc.sizeclass() != 0 {
Handle spans for small objects.
		if nfreed > 0 {
Only mark the span as needing zeroing if we've freed any objects, because a fresh span that had been allocated into, wasn't totally filled, but then swept, still has all of its free slots zeroed.
			s.needzero = 1
			stats := memstats.heapStats.acquire()
			atomic.Xadduintptr(&stats.smallFreeCount[spc.sizeclass()], uintptr(nfreed))
			memstats.heapStats.release()
		}
		if !preserve {
The caller may not have removed this span from whatever unswept set it's on, but taken ownership of the span for sweeping by updating sweepgen. If this span still is in an unswept set, then the mcentral will pop it off the set, check its sweepgen, and ignore it.
			if nalloc == 0 {
Free totally free span directly back to the heap.
				mheap_.freeSpan(s)
				return true
			}
Return span back to the right mcentral list.
			if uintptr(nalloc) == s.nelems {
				mheap_.central[spc].mcentral.fullSwept(sweepgen).push(s)
			} else {
				mheap_.central[spc].mcentral.partialSwept(sweepgen).push(s)
			}
		}
	} else if !preserve {
Handle spans for large objects.
		if nfreed != 0 {
Free large object span to heap.
NOTE(rsc,dvyukov): The original implementation of efence in CL 22060046 used sysFree instead of sysFault, so that the operating system would eventually give the memory back to us again, so that an efence program could run longer without running out of memory. Unfortunately, calling sysFree here without any kind of adjustment of the heap data structures means that when the memory does come back to us, we have the wrong metadata for it, either in the mspan structures or in the garbage collection bitmap. Using sysFault here means that the program will run out of memory fairly quickly in efence mode, but at least it won't have mysterious crashes due to confused memory reuse. It should be possible to switch back to sysFree if we also implement and then call some kind of mheap.deleteSpan.
			if debug.efence > 0 {
				s.limit = 0 // prevent mlookup from finding this span
				sysFault(unsafe.Pointer(s.base()), size)
			} else {
				mheap_.freeSpan(s)
			}
			stats := memstats.heapStats.acquire()
			atomic.Xadduintptr(&stats.largeFreeCount, 1)
			atomic.Xadduintptr(&stats.largeFree, size)
			memstats.heapStats.release()
			return true
		}
Add a large span directly onto the full+swept list.
		mheap_.central[spc].mcentral.fullSwept(sweepgen).push(s)
	}
	return false
}
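A standalone toy illustration of the zombie bit test used in the check above; the bitmap values are made up, and gcmarkBits/allocBits here are plain bytes rather than the runtime's bitmap type.

package main

import "fmt"

func main() {
	// One byte of each bitmap covers eight object slots.
	var allocBits byte = 0b0000_1111   // slots 0-3 are allocated
	var gcmarkBits byte = 0b0001_0101  // slots 0 and 2 marked, plus slot 4
	zombies := gcmarkBits &^ allocBits // marked but not allocated
	fmt.Printf("zombie bits: %08b\n", zombies) // 00010000: slot 4 is a zombie
}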
reportZombies reports any marked but free objects in s and throws. This generally means one of the following:

1. User code converted a pointer to a uintptr and then back unsafely, and a GC ran while the uintptr was the only reference to an object.

2. User code (or a compiler bug) constructed a bad pointer that points to a free slot, often a past-the-end pointer.

3. The GC two cycles ago missed a pointer and freed a live object, but it was still live in the last cycle, so this GC cycle found a pointer to that object and marked it.
func (s *mspan) reportZombies() {
	printlock()
	print("runtime: marked free object in span ", s, ", elemsize=", s.elemsize, " freeindex=", s.freeindex, " (bad use of unsafe.Pointer? try -d=checkptr)\n")
	mbits := s.markBitsForBase()
	abits := s.allocBitsForIndex(0)
	for i := uintptr(0); i < s.nelems; i++ {
		addr := s.base() + i*s.elemsize
		print(hex(addr))
		alloc := i < s.freeindex || abits.isMarked()
		if alloc {
			print(" alloc")
		} else {
			print(" free ")
		}
		if mbits.isMarked() {
			print(" marked  ")
		} else {
			print(" unmarked")
		}
		zombie := mbits.isMarked() && !alloc
		if zombie {
			print(" zombie")
		}
		print("\n")
		if zombie {
			length := s.elemsize
			if length > 1024 {
				length = 1024
			}
			hexdumpWords(addr, addr+length, nil)
		}
		mbits.advance()
		abits.advance()
	}
	throw("found pointer to free object")
}
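A user-level illustration of cause 1 above, with deliberately incorrect code (the names are made up). Keeping an object's only reference as a uintptr hides it from the garbage collector, so a collection between the conversion and the later use can free the object, which is exactly the marked-free-object situation this function reports; the diagnostic printed above suggests trying -d=checkptr for this family of bugs.

package main

import (
	"runtime"
	"unsafe"
)

func main() {
	p := new([64]byte)
	addr := uintptr(unsafe.Pointer(p)) // the GC does not trace uintptrs
	p = nil                            // the object is now unreachable
	runtime.GC()                       // the object may be freed here
	q := (*[64]byte)(unsafe.Pointer(addr)) // invalid: may point at a freed (or reused) slot
	_ = q
}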
deductSweepCredit deducts sweep credit for allocating a span of size spanBytes. This must be performed *before* the span is allocated to ensure the system has enough credit. If necessary, it performs sweeping to prevent going into debt. If the caller will also sweep pages (e.g., for a large allocation), it can pass a non-zero callerSweepPages to leave that many pages unswept.

deductSweepCredit makes a worst-case assumption that all spanBytes bytes of the ultimately allocated span will be available for object allocation.

deductSweepCredit is the core of the "proportional sweep" system. It uses statistics gathered by the garbage collector to perform enough sweeping so that all pages are swept during the concurrent sweep phase between GC cycles.

mheap_ must NOT be locked.
func deductSweepCredit(spanBytes uintptr, callerSweepPages uintptr) {
	if mheap_.sweepPagesPerByte == 0 {
Proportional sweep is done or disabled.
		return
	}

	if trace.enabled {
		traceGCSweepStart()
	}

retry:
	sweptBasis := atomic.Load64(&mheap_.pagesSweptBasis)
Fix debt if necessary.
	newHeapLive := uintptr(atomic.Load64(&memstats.heap_live)-mheap_.sweepHeapLiveBasis) + spanBytes
	pagesTarget := int64(mheap_.sweepPagesPerByte*float64(newHeapLive)) - int64(callerSweepPages)
	for pagesTarget > int64(atomic.Load64(&mheap_.pagesSwept)-sweptBasis) {
		if sweepone() == ^uintptr(0) {
			mheap_.sweepPagesPerByte = 0
			break
		}
		if atomic.Load64(&mheap_.pagesSweptBasis) != sweptBasis {
Sweep pacing changed. Recompute debt.
			goto retry
		}
	}

	if trace.enabled {
		traceGCSweepDone()
	}
}
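To make the debt computation above concrete, a standalone toy calculation with made-up numbers; the identifiers are local stand-ins for the runtime fields of the same names.

package main

import "fmt"

func main() {
	// Made-up pacing: 10 pages of sweeping owed per MiB of new allocation.
	sweepPagesPerByte := 10.0 / float64(1<<20)
	heapLive := uintptr(52 << 20)           // current heap_live
	sweepHeapLiveBasis := uintptr(50 << 20) // heap_live when the pacing was set
	spanBytes := uintptr(1 << 20)           // the span about to be allocated
	callerSweepPages := uintptr(0)          // this caller sweeps no pages itself

	newHeapLive := heapLive - sweepHeapLiveBasis + spanBytes
	pagesTarget := int64(sweepPagesPerByte*float64(newHeapLive)) - int64(callerSweepPages)
	fmt.Println("pages that must already be swept past the basis:", pagesTarget) // prints 30
}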
clobberfree sets the memory content at x to bad content, for debugging purposes.
func clobberfree(x unsafe.Pointer, size uintptr) {
size (span.elemsize) is always a multiple of 4.
	for i := uintptr(0); i < size; i += 4 {
		*(*uint32)(add(x, i)) = 0xdeadbeef
	}