// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Time-related runtime and pieces of package time.

package runtime

import (
	"runtime/internal/atomic"
	"runtime/internal/sys"
	"unsafe"
)

// Package time knows the layout of this structure.
// If this struct changes, adjust ../time/sleep.go:/runtimeTimer.
type timer struct {
	// If this timer is on a heap, which P's heap it is on.
	// puintptr rather than *p to match uintptr in the versions
	// of this struct defined in other packages.
	pp puintptr

	// Timer wakes up at when, and then at when+period, ... (period > 0 only)
	// each time calling f(arg, now) in the timer goroutine, so f must be
	// a well-behaved function and not block.
	//
	// when must be positive on an active timer.
	when   int64
	period int64
	f      func(interface{}, uintptr)
	arg    interface{}
	seq    uintptr

	// What to set the when field to in timerModifiedXX status.
	nextwhen int64

	// The status field holds one of the values below.
	status uint32
}

// Code outside this file has to be careful in using a timer value.
//
// The pp, status, and nextwhen fields may only be used by code in this file.
//
// Code that creates a new timer value can set the when, period, f,
// arg, and seq fields.
// A new timer value may be passed to addtimer (called by time.startTimer).
// After doing that no fields may be touched.
//
// An active timer (one that has been passed to addtimer) may be
// passed to deltimer (time.stopTimer), after which it is no longer an
// active timer. It is an inactive timer.
// In an inactive timer the period, f, arg, and seq fields may be modified,
// but not the when field.
// It's OK to just drop an inactive timer and let the GC collect it.
// It's not OK to pass an inactive timer to addtimer.
// Only newly allocated timer values may be passed to addtimer.
//
// An active timer may be passed to modtimer. No fields may be touched.
// It remains an active timer.
//
// An inactive timer may be passed to resettimer to turn into an
// active timer with an updated when field.
// It's OK to pass a newly allocated timer value to resettimer.
//
// Timer operations are addtimer, deltimer, modtimer, resettimer,
// cleantimers, adjusttimers, and runtimer.
//
// We don't permit calling addtimer/deltimer/modtimer/resettimer simultaneously,
// but adjusttimers and runtimer can be called at the same time as any of those.
//
// Active timers live in heaps attached to P, in the timers field.
// Inactive timers live there too temporarily, until they are removed.
//
// addtimer:
//   timerNoStatus   -> timerWaiting
//   anything else   -> panic: invalid value
// deltimer:
//   timerWaiting         -> timerModifying -> timerDeleted
//   timerModifiedEarlier -> timerModifying -> timerDeleted
//   timerModifiedLater   -> timerModifying -> timerDeleted
//   timerNoStatus        -> do nothing
//   timerDeleted         -> do nothing
//   timerRemoving        -> do nothing
//   timerRemoved         -> do nothing
//   timerRunning         -> wait until status changes
//   timerMoving          -> wait until status changes
//   timerModifying       -> wait until status changes
// modtimer:
//   timerWaiting    -> timerModifying -> timerModifiedXX
//   timerModifiedXX -> timerModifying -> timerModifiedYY
//   timerNoStatus   -> timerModifying -> timerWaiting
//   timerRemoved    -> timerModifying -> timerWaiting
//   timerDeleted    -> timerModifying -> timerModifiedXX
//   timerRunning    -> wait until status changes
//   timerMoving     -> wait until status changes
//   timerRemoving   -> wait until status changes
//   timerModifying  -> wait until status changes
// cleantimers (looks in P's timer heap):
//   timerDeleted    -> timerRemoving -> timerRemoved
//   timerModifiedXX -> timerMoving -> timerWaiting
// adjusttimers (looks in P's timer heap):
//   timerDeleted    -> timerRemoving -> timerRemoved
//   timerModifiedXX -> timerMoving -> timerWaiting
// runtimer (looks in P's timer heap):
//   timerNoStatus   -> panic: uninitialized timer
//   timerWaiting    -> timerWaiting or
//   timerWaiting    -> timerRunning -> timerNoStatus or
//   timerWaiting    -> timerRunning -> timerWaiting
//   timerModifying  -> wait until status changes
//   timerModifiedXX -> timerMoving -> timerWaiting
//   timerDeleted    -> timerRemoving -> timerRemoved
//   timerRunning    -> panic: concurrent runtimer calls
//   timerRemoved    -> panic: inconsistent timer heap
//   timerRemoving   -> panic: inconsistent timer heap
//   timerMoving     -> panic: inconsistent timer heap
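
// As a rough illustration of the rules above (when, f, arg, and when2 are
// placeholders for caller-supplied values), a timer used the way package
// time uses one typically goes through:
//
//	t := new(timer)
//	t.when, t.f, t.arg = when, f, arg
//	addtimer(t)          // timerNoStatus -> timerWaiting; t is now active
//	deltimer(t)          // t is now inactive, but may still sit in a P's heap
//	resettimer(t, when2) // inactive -> active again with an updated when
//
// Passing t to addtimer a second time, or writing t.when while t is
// inactive, violates the rules above.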

// Values for the timer status field.
const (
	// Timer has no status set yet.
	timerNoStatus = iota

	// Waiting for timer to fire.
	// The timer is in some P's heap.
	timerWaiting

	// Running the timer function.
	// A timer will only have this status briefly.
	timerRunning

	// The timer is deleted and should be removed.
	// It should not be run, but it is still in some P's heap.
	timerDeleted

	// The timer is being removed.
	// The timer will only have this status briefly.
	timerRemoving

	// The timer has been stopped.
	// It is not in any P's heap.
	timerRemoved

	// The timer is being modified.
	// The timer will only have this status briefly.
	timerModifying

	// The timer has been modified to an earlier time.
	// The new when value is in the nextwhen field.
	// The timer is in some P's heap, possibly in the wrong place.
	timerModifiedEarlier

	// The timer has been modified to the same or a later time.
	// The new when value is in the nextwhen field.
	// The timer is in some P's heap, possibly in the wrong place.
	timerModifiedLater

	// The timer has been modified and is being moved.
	// The timer will only have this status briefly.
	timerMoving
)

// maxWhen is the maximum value for timer's when field.
const maxWhen = 1<<63 - 1
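
// For scale, 1<<63 - 1 nanoseconds is roughly 292 years, so a when value can
// only overflow for absurdly distant deadlines; the overflow checks in
// timeSleep and runOneTimer clamp such values to maxWhen.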

// verifyTimers can be set to true to add debugging checks that the
// timer heaps are valid.
const verifyTimers = false

// Package time APIs.
// Godoc uses the comments in package time, not these.

// time.now is implemented in assembly.

// timeSleep puts the current goroutine to sleep for at least ns nanoseconds.
//go:linkname timeSleep time.Sleep
func timeSleep(ns int64) {
	if ns <= 0 {
		return
	}

	gp := getg()
	t := gp.timer
	if t == nil {
		t = new(timer)
		gp.timer = t
	}
	t.f = goroutineReady
	t.arg = gp
	t.nextwhen = nanotime() + ns
	if t.nextwhen < 0 { // check for overflow.
		t.nextwhen = maxWhen
	}
	gopark(resetForSleep, unsafe.Pointer(t), waitReasonSleep, traceEvGoSleep, 1)
}
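
// As a minimal illustration of how user code reaches timeSleep (via the
// go:linkname wiring above), a sketch of an ordinary program:
//
//	package main
//
//	import "time"
//
//	func main() {
//		// time.Sleep is linknamed to runtime.timeSleep: the calling
//		// goroutine parks on a runtime timer whose callback is
//		// goroutineReady, which makes the goroutine runnable again.
//		time.Sleep(50 * time.Millisecond)
//	}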

// resetForSleep is called after the goroutine is parked for timeSleep.
// We can't call resettimer in timeSleep itself because if this is a short
// sleep and there are many goroutines then the P can wind up running the
// timer function, goroutineReady, before the goroutine has been parked.
func resetForSleep(gp *g, ut unsafe.Pointer) bool {
	t := (*timer)(ut)
	resettimer(t, t.nextwhen)
	return true
}

// startTimer adds t to the timer heap.
//go:linkname startTimer time.startTimer
func startTimer(t *timer) {
	if raceenabled {
		racerelease(unsafe.Pointer(t))
	}
	addtimer(t)
}

// stopTimer stops a timer.
// It reports whether t was stopped before being run.
//go:linkname stopTimer time.stopTimer
func stopTimer(t *timer) bool {
	return deltimer(t)
}

// resetTimer resets an inactive timer, adding it to the heap.
// It reports whether the timer was modified before it was run.
//go:linkname resetTimer time.resetTimer
func resetTimer(t *timer, when int64) bool {
	if raceenabled {
		racerelease(unsafe.Pointer(t))
	}
	return resettimer(t, when)
}

// modTimer modifies an existing timer.
//go:linkname modTimer time.modTimer
func modTimer(t *timer, when, period int64, f func(interface{}, uintptr), arg interface{}, seq uintptr) {
	modtimer(t, when, period, f, arg, seq)
}

// Go runtime.

// Ready the goroutine arg.
func goroutineReady(arg interface{}, seq uintptr) {
	goready(arg.(*g), 0)
}

// addtimer adds a timer to the current P.
// This should only be called with a newly created timer.
// That avoids the risk of changing the when field of a timer in some P's heap,
// which could cause the heap to become unsorted.
func addtimer(t *timer) {
	// when must be positive. A negative value will cause runtimer to
	// overflow during its delta calculation and never expire other runtime
	// timers. Zero will cause checkTimers to fail to notice the timer.
	if t.when <= 0 {
		throw("timer when must be positive")
	}
	if t.period < 0 {
		throw("timer period must be non-negative")
	}
	if t.status != timerNoStatus {
		throw("addtimer called with initialized timer")
	}
	t.status = timerWaiting

	when := t.when

	// Disable preemption while using pp to avoid changing another P's heap.
	mp := acquirem()

	pp := getg().m.p.ptr()
	lock(&pp.timersLock)
	cleantimers(pp)
	doaddtimer(pp, t)
	unlock(&pp.timersLock)

	wakeNetPoller(when)

	releasem(mp)
}

// doaddtimer adds t to the current P's heap.
// The caller must have locked the timers for pp.
func doaddtimer(pp *p, t *timer) {
	// Timers rely on the network poller, so make sure the poller
	// has started.
	if netpollInited == 0 {
		netpollGenericInit()
	}

	if t.pp != 0 {
		throw("doaddtimer: P already set in timer")
	}
	t.pp.set(pp)
	i := len(pp.timers)
	pp.timers = append(pp.timers, t)
	siftupTimer(pp.timers, i)
	if t == pp.timers[0] {
		atomic.Store64(&pp.timer0When, uint64(t.when))
	}
	atomic.Xadd(&pp.numTimers, 1)
}

// deltimer deletes the timer t. It may be on some other P, so we can't
// actually remove it from the timers heap. We can only mark it as deleted.
// It will be removed in due course by the P whose heap it is on.
// Reports whether the timer was removed before it was run.
func deltimer(t *timer) bool {
	for {
		switch s := atomic.Load(&t.status); s {
		case timerWaiting, timerModifiedLater:
			// Prevent preemption while the timer is in timerModifying.
			// This could lead to a self-deadlock. See #38070.
			mp := acquirem()
			if atomic.Cas(&t.status, s, timerModifying) {
				// Must fetch t.pp before changing status,
				// as cleantimers in another goroutine
				// can clear t.pp of a timerDeleted timer.
				tpp := t.pp.ptr()
				if !atomic.Cas(&t.status, timerModifying, timerDeleted) {
					badTimer()
				}
				releasem(mp)
				atomic.Xadd(&tpp.deletedTimers, 1)
				// Timer was not yet run.
				return true
			} else {
				releasem(mp)
			}
		case timerModifiedEarlier:
			// Prevent preemption while the timer is in timerModifying.
			// This could lead to a self-deadlock. See #38070.
			mp := acquirem()
			if atomic.Cas(&t.status, s, timerModifying) {
				// Must fetch t.pp before setting status to timerDeleted.
				tpp := t.pp.ptr()
				atomic.Xadd(&tpp.adjustTimers, -1)
				if !atomic.Cas(&t.status, timerModifying, timerDeleted) {
					badTimer()
				}
				releasem(mp)
				atomic.Xadd(&tpp.deletedTimers, 1)
				// Timer was not yet run.
				return true
			} else {
				releasem(mp)
			}
		case timerDeleted, timerRemoving, timerRemoved:
			// Timer was already run.
			return false
		case timerRunning, timerMoving:
			// The timer is being run or moved, by a different P.
			// Wait for it to complete.
			osyield()
		case timerNoStatus:
			// Removing timer that was never added or
			// has already been run. Also see issue 21874.
			return false
		case timerModifying:
			// Simultaneous calls to deltimer and modtimer.
			// Wait for the other call to complete.
			osyield()
		default:
			badTimer()
		}
	}
}

// dodeltimer removes timer i from the current P's heap.
// We are locked on the P when this is called.
// The caller must have locked the timers for pp.
func dodeltimer(pp *p, i int) {
	if t := pp.timers[i]; t.pp.ptr() != pp {
		throw("dodeltimer: wrong P")
	} else {
		t.pp = 0
	}
	last := len(pp.timers) - 1
	if i != last {
		pp.timers[i] = pp.timers[last]
	}
	pp.timers[last] = nil
	pp.timers = pp.timers[:last]
	if i != last {
		// Moving to i may have moved the last timer to a new parent,
		// so sift up to preserve the heap guarantee.
		siftupTimer(pp.timers, i)
		siftdownTimer(pp.timers, i)
	}
	if i == 0 {
		updateTimer0When(pp)
	}
	atomic.Xadd(&pp.numTimers, -1)
}

// dodeltimer0 removes timer 0 from the current P's heap.
// We are locked on the P when this is called.
// The caller must have locked the timers for pp.
func dodeltimer0(pp *p) {
	if t := pp.timers[0]; t.pp.ptr() != pp {
		throw("dodeltimer0: wrong P")
	} else {
		t.pp = 0
	}
	last := len(pp.timers) - 1
	if last > 0 {
		pp.timers[0] = pp.timers[last]
	}
	pp.timers[last] = nil
	pp.timers = pp.timers[:last]
	if last > 0 {
		siftdownTimer(pp.timers, 0)
	}
	updateTimer0When(pp)
	atomic.Xadd(&pp.numTimers, -1)
}

// modtimer modifies an existing timer.
// This is called by the netpoll code or time.Ticker.Reset or time.Timer.Reset.
// Reports whether the timer was modified before it was run.
func modtimer(t *timer, when, period int64, f func(interface{}, uintptr), arg interface{}, seq uintptr) bool {
	if when <= 0 {
		throw("timer when must be positive")
	}
	if period < 0 {
		throw("timer period must be non-negative")
	}

	status := uint32(timerNoStatus)
	wasRemoved := false
	var pending bool
	var mp *m
loop:
	for {
		switch status = atomic.Load(&t.status); status {
		case timerWaiting, timerModifiedEarlier, timerModifiedLater:
			// Prevent preemption while the timer is in timerModifying.
			// This could lead to a self-deadlock. See #38070.
			mp = acquirem()
			if atomic.Cas(&t.status, status, timerModifying) {
				pending = true // timer not yet run
				break loop
			}
			releasem(mp)
		case timerNoStatus, timerRemoved:
			// Prevent preemption while the timer is in timerModifying.
			// This could lead to a self-deadlock. See #38070.
			mp = acquirem()

			// Timer was already run and t is no longer in a heap.
			// Act like addtimer.
			if atomic.Cas(&t.status, status, timerModifying) {
				wasRemoved = true
				pending = false // timer already run or stopped
				break loop
			}
			releasem(mp)
		case timerDeleted:
			// Prevent preemption while the timer is in timerModifying.
			// This could lead to a self-deadlock. See #38070.
			mp = acquirem()
			if atomic.Cas(&t.status, status, timerModifying) {
				atomic.Xadd(&t.pp.ptr().deletedTimers, -1)
				pending = false // timer already stopped
				break loop
			}
			releasem(mp)
		case timerRunning, timerRemoving, timerMoving:
			// The timer is being run or moved, by a different P.
			// Wait for it to complete.
			osyield()
		case timerModifying:
			// Multiple simultaneous calls to modtimer.
			// Wait for the other call to complete.
			osyield()
		default:
			badTimer()
		}
	}

	t.period = period
	t.f = f
	t.arg = arg
	t.seq = seq

	if wasRemoved {
		t.when = when
		pp := getg().m.p.ptr()
		lock(&pp.timersLock)
		doaddtimer(pp, t)
		unlock(&pp.timersLock)
		if !atomic.Cas(&t.status, timerModifying, timerWaiting) {
			badTimer()
		}
		releasem(mp)
		wakeNetPoller(when)
	} else {
		// The timer is in some other P's heap, so we can't change
		// the when field. If we did, the other P's heap would
		// be out of order. So we put the new when value in the
		// nextwhen field, and let the other P set the when field
		// when it is prepared to resort the heap.
		t.nextwhen = when

		newStatus := uint32(timerModifiedLater)
		if when < t.when {
			newStatus = timerModifiedEarlier
		}

		tpp := t.pp.ptr()

		// Update the adjustTimers field. Subtract one if we
		// are removing a timerModifiedEarlier, add one if we
		// are adding a timerModifiedEarlier.
		adjust := int32(0)
		if status == timerModifiedEarlier {
			adjust--
		}
		if newStatus == timerModifiedEarlier {
			adjust++
			updateTimerModifiedEarliest(tpp, when)
		}
		if adjust != 0 {
			atomic.Xadd(&tpp.adjustTimers, adjust)
		}

		// Set the new status of the timer.
		if !atomic.Cas(&t.status, timerModifying, newStatus) {
			badTimer()
		}
		releasem(mp)

		// If the new status is earlier, wake up the poller.
		if newStatus == timerModifiedEarlier {
			wakeNetPoller(when)
		}
	}

	return pending
}

// resettimer resets the time when a timer should fire.
// If used for an inactive timer, the timer will become an active timer.
// This should be called instead of addtimer if the timer value has been,
// or may have been, used previously.
// Reports whether the timer was modified before it was run.
func resettimer(t *timer, when int64) bool {
	return modtimer(t, when, t.period, t.f, t.arg, t.seq)
}

// cleantimers cleans up the head of the timer queue. This speeds up
// programs that create and delete timers; leaving them in the heap
// slows down addtimer.
// The caller must have locked the timers for pp.
func cleantimers(pp *p) {
	gp := getg()
	for {
		if len(pp.timers) == 0 {
			return
		}

		// This loop can theoretically run for a while, and because
		// it is holding timersLock it cannot be preempted.
		// If someone is trying to preempt us, just return.
		// We can clean the timers later.
		if gp.preemptStop {
			return
		}

		t := pp.timers[0]
		if t.pp.ptr() != pp {
			throw("cleantimers: bad p")
		}
		switch s := atomic.Load(&t.status); s {
		case timerDeleted:
			if !atomic.Cas(&t.status, s, timerRemoving) {
				continue
			}
			dodeltimer0(pp)
			if !atomic.Cas(&t.status, timerRemoving, timerRemoved) {
				badTimer()
			}
			atomic.Xadd(&pp.deletedTimers, -1)
		case timerModifiedEarlier, timerModifiedLater:
			if !atomic.Cas(&t.status, s, timerMoving) {
				continue
			}
			// Now we can change the when field.
			t.when = t.nextwhen
			// Move t to the right position.
			dodeltimer0(pp)
			doaddtimer(pp, t)
			if s == timerModifiedEarlier {
				atomic.Xadd(&pp.adjustTimers, -1)
			}
			if !atomic.Cas(&t.status, timerMoving, timerWaiting) {
				badTimer()
			}
		default:
			// Head of timers does not need adjustment.
			return
		}
	}
}

// moveTimers moves a slice of timers to pp. The slice has been taken
// from a different P.
// This is currently called when the world is stopped, but the caller
// is expected to have locked the timers for pp.
func moveTimers(pp *p, timers []*timer) {
	for _, t := range timers {
	loop:
		for {
			switch s := atomic.Load(&t.status); s {
			case timerWaiting:
				if !atomic.Cas(&t.status, s, timerMoving) {
					continue
				}
				t.pp = 0
				doaddtimer(pp, t)
				if !atomic.Cas(&t.status, timerMoving, timerWaiting) {
					badTimer()
				}
				break loop
			case timerModifiedEarlier, timerModifiedLater:
				if !atomic.Cas(&t.status, s, timerMoving) {
					continue
				}
				t.when = t.nextwhen
				t.pp = 0
				doaddtimer(pp, t)
				if !atomic.Cas(&t.status, timerMoving, timerWaiting) {
					badTimer()
				}
				break loop
			case timerDeleted:
				if !atomic.Cas(&t.status, s, timerRemoved) {
					continue
				}
				t.pp = 0
				// We no longer need this timer in the heap.
				break loop
			case timerModifying:
				// Loop until the modification is complete.
				osyield()
			case timerNoStatus, timerRemoved:
				// We should not see these status values in a timers heap.
				badTimer()
			case timerRunning, timerRemoving, timerMoving:
				// Some other P thinks it owns this timer, which should not happen.
				badTimer()
			default:
				badTimer()
			}
		}
	}
}

// adjusttimers looks through the timers in the current P's heap for
// any timers that have been modified to run earlier, and puts them in
// the correct place in the heap. While looking for those timers,
// it also moves timers that have been modified to run later,
// and removes deleted timers. The caller must have locked the timers for pp.
func adjusttimers(pp *p, now int64) {
	if atomic.Load(&pp.adjustTimers) == 0 {
		if verifyTimers {
			verifyTimerHeap(pp)
		}
		// There are no timers to adjust, so it is safe to clear
		// timerModifiedEarliest. Do so in case it is stale.
		// Everything will work if we don't do this,
		// but clearing here may save future calls to adjusttimers.
		atomic.Store64(&pp.timerModifiedEarliest, 0)
		return
	}

	// If we haven't yet reached the time of the first timerModifiedEarlier
	// timer, don't do anything. This speeds up programs that adjust
	// a lot of timers back and forth if the timers rarely expire.
	// We'll postpone looking through all the adjusted timers until
	// one would actually expire.
	if first := atomic.Load64(&pp.timerModifiedEarliest); first != 0 {
		if int64(first) > now {
			if verifyTimers {
				verifyTimerHeap(pp)
			}
			return
		}

		// We are going to clear all timerModifiedEarlier timers.
		atomic.Store64(&pp.timerModifiedEarliest, 0)
	}

	var moved []*timer
loop:
	for i := 0; i < len(pp.timers); i++ {
		t := pp.timers[i]
		if t.pp.ptr() != pp {
			throw("adjusttimers: bad p")
		}
		switch s := atomic.Load(&t.status); s {
		case timerDeleted:
			if atomic.Cas(&t.status, s, timerRemoving) {
				dodeltimer(pp, i)
				if !atomic.Cas(&t.status, timerRemoving, timerRemoved) {
					badTimer()
				}
				atomic.Xadd(&pp.deletedTimers, -1)
				// Look at this heap position again.
				i--
			}
		case timerModifiedEarlier, timerModifiedLater:
			if atomic.Cas(&t.status, s, timerMoving) {
				// Now we can change the when field.
				t.when = t.nextwhen
				// Take t off the heap, and hold onto it.
				// We don't add it back yet because the
				// heap manipulation could cause our
				// loop to skip some other timer.
				dodeltimer(pp, i)
				moved = append(moved, t)
				if s == timerModifiedEarlier {
					if n := atomic.Xadd(&pp.adjustTimers, -1); int32(n) <= 0 {
						break loop
					}
				}
				// Look at this heap position again.
				i--
			}
		case timerNoStatus, timerRunning, timerRemoving, timerRemoved, timerMoving:
			badTimer()
		case timerWaiting:
			// OK, nothing to do.
		case timerModifying:
			// Check again after modification is complete.
			osyield()
			i--
		default:
			badTimer()
		}
	}

	if len(moved) > 0 {
		addAdjustedTimers(pp, moved)
	}

	if verifyTimers {
		verifyTimerHeap(pp)
	}
}

// addAdjustedTimers adds any timers we adjusted in adjusttimers
// back to the timer heap.
func addAdjustedTimers(pp *p, moved []*timer) {
	for _, t := range moved {
		doaddtimer(pp, t)
		if !atomic.Cas(&t.status, timerMoving, timerWaiting) {
			badTimer()
		}
	}
}

// nobarrierWakeTime looks at P's timers and returns the time when we
// should wake up the netpoller. It returns 0 if there are no timers.
// This function is invoked when dropping a P, and must run without
// any write barriers.
//go:nowritebarrierrec
func nobarrierWakeTime(pp *p) int64 {
	next := int64(atomic.Load64(&pp.timer0When))
	nextAdj := int64(atomic.Load64(&pp.timerModifiedEarliest))
	if next == 0 || (nextAdj != 0 && nextAdj < next) {
		next = nextAdj
	}
	return next
}

// runtimer examines the first timer in timers. If it is ready based on now,
// it runs the timer and removes or updates it.
// Returns 0 if it ran a timer, -1 if there are no more timers, or the time
// when the first timer should run.
// The caller must have locked the timers for pp.
// If a timer is run, this will temporarily unlock the timers.
//go:systemstack
func runtimer(pp *p, now int64) int64 {
	for {
		t := pp.timers[0]
		if t.pp.ptr() != pp {
			throw("runtimer: bad p")
		}
		switch s := atomic.Load(&t.status); s {
		case timerWaiting:
			if t.when > now {
				// Not ready to run.
				return t.when
			}

			if !atomic.Cas(&t.status, s, timerRunning) {
				continue
			}
			// Note that runOneTimer may temporarily unlock
			// pp.timersLock.
			runOneTimer(pp, t, now)
			return 0

		case timerDeleted:
			if !atomic.Cas(&t.status, s, timerRemoving) {
				continue
			}
			dodeltimer0(pp)
			if !atomic.Cas(&t.status, timerRemoving, timerRemoved) {
				badTimer()
			}
			atomic.Xadd(&pp.deletedTimers, -1)
			if len(pp.timers) == 0 {
				return -1
			}

		case timerModifiedEarlier, timerModifiedLater:
			if !atomic.Cas(&t.status, s, timerMoving) {
				continue
			}
			t.when = t.nextwhen
			dodeltimer0(pp)
			doaddtimer(pp, t)
			if s == timerModifiedEarlier {
				atomic.Xadd(&pp.adjustTimers, -1)
			}
			if !atomic.Cas(&t.status, timerMoving, timerWaiting) {
				badTimer()
			}

		case timerModifying:
			// Wait for modification to complete.
			osyield()

		case timerNoStatus, timerRemoved:
			// Should not see a new or inactive timer on the heap.
			badTimer()
		case timerRunning, timerRemoving, timerMoving:
			// These should only be set when timers are locked,
			// and we didn't do it.
			badTimer()
		default:
			badTimer()
		}
	}
}

// runOneTimer runs a single timer.
// The caller must have locked the timers for pp.
// This will temporarily unlock the timers while running the timer function.
//go:systemstack
func runOneTimer(pp *p, t *timer, now int64) {
	if raceenabled {
		ppcur := getg().m.p.ptr()
		if ppcur.timerRaceCtx == 0 {
			ppcur.timerRaceCtx = racegostart(funcPC(runtimer) + sys.PCQuantum)
		}
		raceacquirectx(ppcur.timerRaceCtx, unsafe.Pointer(t))
	}

	f := t.f
	arg := t.arg
	seq := t.seq

	if t.period > 0 {
		// Leave in heap but adjust next time to fire.
		delta := t.when - now
		t.when += t.period * (1 + -delta/t.period)
		if t.when < 0 { // check for overflow.
			t.when = maxWhen
		}
		siftdownTimer(pp.timers, 0)
		if !atomic.Cas(&t.status, timerRunning, timerWaiting) {
			badTimer()
		}
		updateTimer0When(pp)
	} else {
		// Remove from heap.
		dodeltimer0(pp)
		if !atomic.Cas(&t.status, timerRunning, timerNoStatus) {
			badTimer()
		}
	}

	if raceenabled {
		// Temporarily use the current P's racectx for g0.
		gp := getg()
		if gp.racectx != 0 {
			throw("runOneTimer: unexpected racectx")
		}
		gp.racectx = gp.m.p.ptr().timerRaceCtx
	}

	unlock(&pp.timersLock)

	f(arg, seq)

	lock(&pp.timersLock)

	if raceenabled {
		gp := getg()
		gp.racectx = 0
	}
}
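
// A worked example of the periodic rescheduling arithmetic above, with
// illustrative numbers: if t.when = 100, t.period = 10 and now = 137, then
// delta = 100 - 137 = -37 and t.when += 10 * (1 + 37/10) = 40, giving
// t.when = 140, the first tick strictly after now that stays in phase with
// the original schedule. Missed ticks (110, 120, 130) are skipped rather
// than delivered late.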

// clearDeletedTimers removes all deleted timers from the P's timer heap.
// This is used to avoid clogging up the heap if the program
// starts a lot of long-running timers and then stops them.
// For example, this can happen via context.WithTimeout.
//
// This is the only function that walks through the entire timer heap,
// other than moveTimers which only runs when the world is stopped.
//
// The caller must have locked the timers for pp.
func clearDeletedTimers(pp *p) {
	// We are going to clear all timerModifiedEarlier timers.
	// Do this now in case new ones show up while we are looping.
	atomic.Store64(&pp.timerModifiedEarliest, 0)

	cdel := int32(0)
	cearlier := int32(0)
	to := 0
	changedHeap := false
	timers := pp.timers
nextTimer:
	for _, t := range timers {
		for {
			switch s := atomic.Load(&t.status); s {
			case timerWaiting:
				if changedHeap {
					timers[to] = t
					siftupTimer(timers, to)
				}
				to++
				continue nextTimer
			case timerModifiedEarlier, timerModifiedLater:
				if atomic.Cas(&t.status, s, timerMoving) {
					t.when = t.nextwhen
					timers[to] = t
					siftupTimer(timers, to)
					to++
					changedHeap = true
					if !atomic.Cas(&t.status, timerMoving, timerWaiting) {
						badTimer()
					}
					if s == timerModifiedEarlier {
						cearlier++
					}
					continue nextTimer
				}
			case timerDeleted:
				if atomic.Cas(&t.status, s, timerRemoving) {
					t.pp = 0
					cdel++
					if !atomic.Cas(&t.status, timerRemoving, timerRemoved) {
						badTimer()
					}
					changedHeap = true
					continue nextTimer
				}
			case timerModifying:
				// Loop until modification complete.
				osyield()
			case timerNoStatus, timerRemoved:
				// We should not see these status values in a timer heap.
				badTimer()
			case timerRunning, timerRemoving, timerMoving:
				// Some other P thinks it owns this timer, which should not happen.
				badTimer()
			default:
				badTimer()
			}
		}
	}

	// Set remaining slots in timers slice to nil,
	// so that the timer values can be garbage collected.
	for i := to; i < len(timers); i++ {
		timers[i] = nil
	}

	atomic.Xadd(&pp.deletedTimers, -cdel)
	atomic.Xadd(&pp.numTimers, -cdel)
	atomic.Xadd(&pp.adjustTimers, -cearlier)

	timers = timers[:to]
	pp.timers = timers
	updateTimer0When(pp)

	if verifyTimers {
		verifyTimerHeap(pp)
	}
}

// verifyTimerHeap verifies that the timer heap is in a valid state.
// This is only for debugging, and is only called if verifyTimers is true.
// The caller must have locked the timers.
func verifyTimerHeap(pp *p) {
	for i, t := range pp.timers {
		if i == 0 {
			// First timer has no parent.
			continue
		}

		// The heap is 4-ary. See siftupTimer and siftdownTimer.
		p := (i - 1) / 4
		if t.when < pp.timers[p].when {
			print("bad timer heap at ", i, ": ", p, ": ", pp.timers[p].when, ", ", i, ": ", t.when, "\n")
			throw("bad timer heap")
		}
	}
	if numTimers := int(atomic.Load(&pp.numTimers)); len(pp.timers) != numTimers {
		println("timer heap len", len(pp.timers), "!= numTimers", numTimers)
		throw("bad timer heap len")
	}
}

// updateTimer0When sets the P's timer0When field.
// The caller must have locked the timers for pp.
func updateTimer0When(pp *p) {
	if len(pp.timers) == 0 {
		atomic.Store64(&pp.timer0When, 0)
	} else {
		atomic.Store64(&pp.timer0When, uint64(pp.timers[0].when))
	}
}

// updateTimerModifiedEarliest updates pp.timerModifiedEarliest, which records
// the earliest nextwhen of any timerModifiedEarlier timer on pp, if nextwhen
// is earlier than the current value.
// The timers for pp will not be locked.
func updateTimerModifiedEarliest(pp *p, nextwhen int64) {
	for {
		old := atomic.Load64(&pp.timerModifiedEarliest)
		if old != 0 && int64(old) < nextwhen {
			return
		}
		if atomic.Cas64(&pp.timerModifiedEarliest, old, uint64(nextwhen)) {
			return
		}
	}
}

// timeSleepUntil returns the time when the next timer should fire,
// and the P that holds the timer heap that that timer is on.
// This is only called by sysmon and checkdead.
func timeSleepUntil() (int64, *p) {
	next := int64(maxWhen)
	var pret *p

	// Prevent allp slice changes. This is like retake.
	lock(&allpLock)
	for _, pp := range allp {
		if pp == nil {
			// This can happen if procresize has grown
			// allp but not yet created new Ps.
			continue
		}

		w := int64(atomic.Load64(&pp.timer0When))
		if w != 0 && w < next {
			next = w
			pret = pp
		}

		w = int64(atomic.Load64(&pp.timerModifiedEarliest))
		if w != 0 && w < next {
			next = w
			pret = pp
		}
	}
	unlock(&allpLock)

	return next, pret
}

// Heap maintenance algorithms.
// These algorithms check for slice index errors manually.
// Slice index error can happen if the program is using racy
// access to timers. We don't want to panic here, because it will cause the
// program to crash with a mysterious "panic holding locks" message.
// Instead, we panic while not holding a lock.

func siftupTimer(t []*timer, i int) {
	if i >= len(t) {
		badTimer()
	}
	when := t[i].when
	if when <= 0 {
		badTimer()
	}
	tmp := t[i]
	for i > 0 {
		p := (i - 1) / 4 // parent
		if when >= t[p].when {
			break
		}
		t[i] = t[p]
		i = p
	}
	if tmp != t[i] {
		t[i] = tmp
	}
}

func siftdownTimer(t []*timer, i int) {
	n := len(t)
	if i >= n {
		badTimer()
	}
	when := t[i].when
	if when <= 0 {
		badTimer()
	}
	tmp := t[i]
	for {
		c := i*4 + 1 // left child
		c3 := c + 2  // mid child
		if c >= n {
			break
		}
		w := t[c].when
		if c+1 < n && t[c+1].when < w {
			w = t[c+1].when
			c++
		}
		if c3 < n {
			w3 := t[c3].when
			if c3+1 < n && t[c3+1].when < w3 {
				w3 = t[c3+1].when
				c3++
			}
			if w3 < w {
				w = w3
				c = c3
			}
		}
		if w >= when {
			break
		}
		t[i] = t[c]
		i = c
	}
	if tmp != t[i] {
		t[i] = tmp
	}
}
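
// For reference, the index arithmetic used by the quaternary heap above:
// the children of the node at index i are at indices 4*i+1 through 4*i+4,
// and the parent of index i (for i > 0) is (i-1)/4. For example, index 0
// has children 1, 2, 3, 4, while index 2 has children 9, 10, 11, 12 and
// parent (2-1)/4 = 0.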

// badTimer is called if the timer data structures have been corrupted,
// presumably due to racy use by the program. We panic here rather than
// panicking due to invalid slice access while holding locks.
// See issue #25686.
func badTimer() {
	throw("timer data corruption")
}