Involved Source Files: allocs.go, benchmark.go, cover.go, example.go, match.go, run_example.go
Package testing provides support for automated testing of Go packages.
It is intended to be used in concert with the "go test" command, which automates
execution of any function of the form
func TestXxx(*testing.T)
where Xxx does not start with a lowercase letter. The function name
serves to identify the test routine.
Within these functions, use the Error, Fail or related methods to signal failure.
To write a new test suite, create a file whose name ends _test.go that
contains the TestXxx functions as described here. Put the file in the same
package as the one being tested. The file will be excluded from regular
package builds but will be included when the "go test" command is run.
For more detail, run "go help test" and "go help testflag".
A simple test function looks like this:
func TestAbs(t *testing.T) {
got := Abs(-1)
if got != 1 {
t.Errorf("Abs(-1) = %d; want 1", got)
}
}
Benchmarks
Functions of the form
func BenchmarkXxx(*testing.B)
are considered benchmarks, and are executed by the "go test" command when
its -bench flag is provided. Benchmarks are run sequentially.
For a description of the testing flags, see
https://golang.org/cmd/go/#hdr-Testing_flags
A sample benchmark function looks like this:
func BenchmarkRandInt(b *testing.B) {
for i := 0; i < b.N; i++ {
rand.Int()
}
}
The benchmark function must run the target code b.N times.
During benchmark execution, b.N is adjusted until the benchmark function lasts
long enough to be timed reliably. The output
BenchmarkRandInt-8 68453040 17.8 ns/op
means that the loop ran 68453040 times at a speed of 17.8 ns per loop.
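The same numbers can also be obtained programmatically: the package's Benchmark function runs a benchmark function outside of "go test" and returns a BenchmarkResult. A minimal sketch (the loop body here is an arbitrary stand-in for the code under test):

```go
package main

import (
	"fmt"
	"testing"
)

// benchRandomWork measures an arbitrary loop body with testing.Benchmark,
// which grows b.N until the run is long enough to time reliably, exactly
// as "go test -bench" would.
func benchRandomWork() testing.BenchmarkResult {
	return testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			_ = fmt.Sprintf("%d", i) // stand-in for the code under test
		}
	})
}

func main() {
	res := benchRandomWork()
	// res.N is the final iteration count; res.NsPerOp() the per-loop cost.
	fmt.Printf("%d iterations, %d ns/op\n", res.N, res.NsPerOp())
}
```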
If a benchmark needs some expensive setup before running, the timer
may be reset:
func BenchmarkBigLen(b *testing.B) {
big := NewBig()
b.ResetTimer()
for i := 0; i < b.N; i++ {
big.Len()
}
}
If a benchmark needs to test performance in a parallel setting, it may use
the RunParallel helper function; such benchmarks are intended to be used with
the go test -cpu flag:
func BenchmarkTemplateParallel(b *testing.B) {
templ := template.Must(template.New("test").Parse("Hello, {{.}}!"))
b.RunParallel(func(pb *testing.PB) {
var buf bytes.Buffer
for pb.Next() {
buf.Reset()
templ.Execute(&buf, "World")
}
})
}
Examples
The package also runs and verifies example code. Example functions may
include a concluding line comment that begins with "Output:" and is compared with
the standard output of the function when the tests are run. (The comparison
ignores leading and trailing space.) These are examples of an example:
func ExampleHello() {
fmt.Println("hello")
// Output: hello
}
func ExampleSalutations() {
fmt.Println("hello, and")
fmt.Println("goodbye")
// Output:
// hello, and
// goodbye
}
The comment prefix "Unordered output:" is like "Output:", but matches any
line order:
func ExamplePerm() {
for _, value := range Perm(5) {
fmt.Println(value)
}
// Unordered output: 4
// 2
// 1
// 3
// 0
}
Example functions without output comments are compiled but not executed.
The naming conventions to declare examples for the package, a function F, a type T and
method M on type T are:
func Example() { ... }
func ExampleF() { ... }
func ExampleT() { ... }
func ExampleT_M() { ... }
Multiple example functions for a package/type/function/method may be provided by
appending a distinct suffix to the name. The suffix must start with a
lower-case letter.
func Example_suffix() { ... }
func ExampleF_suffix() { ... }
func ExampleT_suffix() { ... }
func ExampleT_M_suffix() { ... }
The entire test file is presented as the example when it contains a single
example function, at least one other function, type, variable, or constant
declaration, and no test or benchmark functions.
Skipping
Tests or benchmarks may be skipped at run time with a call to
the Skip method of *T or *B:
func TestTimeConsuming(t *testing.T) {
if testing.Short() {
t.Skip("skipping test in short mode.")
}
...
}
Subtests and Sub-benchmarks
The Run methods of T and B allow defining subtests and sub-benchmarks,
without having to define separate functions for each. This enables uses
like table-driven benchmarks and creating hierarchical tests.
It also provides a way to share common setup and tear-down code:
func TestFoo(t *testing.T) {
// <setup code>
t.Run("A=1", func(t *testing.T) { ... })
t.Run("A=2", func(t *testing.T) { ... })
t.Run("B=1", func(t *testing.T) { ... })
// <tear-down code>
}
Each subtest and sub-benchmark has a unique name: the combination of the name
of the top-level test and the sequence of names passed to Run, separated by
slashes, with an optional trailing sequence number for disambiguation.
The argument to the -run and -bench command-line flags is an unanchored regular
expression that matches the test's name. For tests with multiple slash-separated
elements, such as subtests, the argument is itself slash-separated, with
expressions matching each name element in turn. Because it is unanchored, an
empty expression matches any string.
For example, using "matching" to mean "whose name contains":
go test -run '' # Run all tests.
go test -run Foo # Run top-level tests matching "Foo", such as "TestFooBar".
go test -run Foo/A= # For top-level tests matching "Foo", run subtests matching "A=".
go test -run /A=1 # For all top-level tests, run subtests matching "A=1".
Subtests can also be used to control parallelism. A parent test will only
complete once all of its subtests complete. In this example, all tests are
run in parallel with each other, and only with each other, regardless of
other top-level tests that may be defined:
func TestGroupedParallel(t *testing.T) {
for _, tc := range tests {
tc := tc // capture range variable
t.Run(tc.Name, func(t *testing.T) {
t.Parallel()
...
})
}
}
The race detector kills the program if it exceeds 8192 concurrent goroutines,
so use care when running parallel tests with the -race flag set.
Run does not return until parallel subtests have completed, providing a way
to clean up after a group of parallel tests:
func TestTeardownParallel(t *testing.T) {
// This Run will not return until the parallel tests finish.
t.Run("group", func(t *testing.T) {
t.Run("Test1", parallelTest1)
t.Run("Test2", parallelTest2)
t.Run("Test3", parallelTest3)
})
// <tear-down code>
}
Main
It is sometimes necessary for a test program to do extra setup or teardown
before or after testing. It is also sometimes necessary for a test to control
which code runs on the main thread. To support these and other cases,
if a test file contains a function:
func TestMain(m *testing.M)
then the generated test will call TestMain(m) instead of running the tests
directly. TestMain runs in the main goroutine and can do whatever setup
and teardown is necessary around a call to m.Run. m.Run will return an exit
code that may be passed to os.Exit. If TestMain returns, the test wrapper
will pass the result of m.Run to os.Exit itself.
When TestMain is called, flag.Parse has not been run. If TestMain depends on
command-line flags, including those of the testing package, it should call
flag.Parse explicitly. Command line flags are always parsed by the time test
or benchmark functions run.
A simple implementation of TestMain is:
func TestMain(m *testing.M) {
// call flag.Parse() here if TestMain uses flags
os.Exit(m.Run())
}
Package-Level Type Names (total 22, of which 11 are exported)
B is a type passed to Benchmark functions to manage benchmark
timing and to specify the number of iterations to run.
A benchmark ends when its Benchmark function returns or calls any of the methods
FailNow, Fatal, Fatalf, SkipNow, Skip, or Skipf. Those methods must be called
only from the goroutine running the Benchmark function.
The other reporting methods, such as the variations of Log and Error,
may be called simultaneously from multiple goroutines.
Like in tests, benchmark logs are accumulated during execution
and dumped to standard output when done. Unlike in tests, benchmark logs
are always printed, so as not to hide output whose existence may be
affecting benchmark results.
N int
benchFunc func(b *B)
benchTime benchTimeFlag
bytes int64
common common
// To signal parallel subtests they may start.
// Whether the current test is a benchmark.
// A copy of chattyPrinter, if the chatty flag is set.
// Name of the cleanup function.
// The stack trace at the point where Cleanup was called.
// optional functions to be called at the end of the test
// If level > 0, the stack trace at the point where the parent called t.Run.
// Test is finished and all subtests have completed.
common.duration time.Duration
// Test or benchmark has failed.
// Test function has completed.
// Written atomically.
// helperPCs converted to function names
// functions to be skipped when writing file/line info
// Nesting depth of test or benchmark.
// guards this group of fields
// Name of test or benchmark.
// Output generated by test or benchmark.
common.parent *common
// Number of races detected during test.
// Test or benchmark (or one of its subtests) was executed.
// Function name of tRunner running the test.
// To signal a test is done.
// Test or benchmark has been skipped.
// Time test or benchmark started
// Queue of subtests to be run in parallel.
common.tempDir string
common.tempDirErr error
common.tempDirMu sync.Mutex
common.tempDirSeq int32
// For flushToParent.
context *benchContext
Extra metrics collected by ReportMetric.
// import path of the package containing the benchmark
// one of the subbenchmarks does not have bytes set.
The net total of this test after being run.
netBytes uint64
// RunParallel creates parallelism*GOMAXPROCS goroutines
// total duration of the previous run
// number of iterations in the previous run
result BenchmarkResult
showAllocResult bool
The initial states of memStats.Mallocs and memStats.TotalAlloc.
startBytes uint64
timerOn bool
Cleanup registers a function to be called when the test and all its
subtests complete. Cleanup functions will be called in last added,
first called order.
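The last-added, first-called ordering can be observed directly. This sketch uses the package's own Benchmark runner (the only way to drive a *B outside "go test"); the cleanup labels are illustrative:

```go
package main

import (
	"fmt"
	"testing"
)

// cleanupOrder registers two cleanups and records the order they run in.
// Cleanup functions run last-added-first, so "second" is always recorded
// before "first".
func cleanupOrder() []string {
	var order []string
	testing.Benchmark(func(b *testing.B) {
		b.Cleanup(func() { order = append(order, "first") })  // added first, runs last
		b.Cleanup(func() { order = append(order, "second") }) // added second, runs first
		for i := 0; i < b.N; i++ {
		}
	})
	return order
}

func main() {
	fmt.Println(cleanupOrder())
}
```

Note that the benchmark function may be invoked several times while b.N is calibrated, so the cleanups can fire more than once; the relative order within each run is what the example demonstrates.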
Error is equivalent to Log followed by Fail.
Errorf is equivalent to Logf followed by Fail.
Fail marks the function as having failed but continues execution.
FailNow marks the function as having failed and stops its execution
by calling runtime.Goexit (which then runs all deferred calls in the
current goroutine).
Execution will continue at the next test or benchmark.
FailNow must be called from the goroutine running the
test or benchmark function, not from other goroutines
created during the test. Calling FailNow does not stop
those other goroutines.
Failed reports whether the function has failed.
Fatal is equivalent to Log followed by FailNow.
Fatalf is equivalent to Logf followed by FailNow.
Helper marks the calling function as a test helper function.
When printing file and line information, that function will be skipped.
Helper may be called simultaneously from multiple goroutines.
Log formats its arguments using default formatting, analogous to Println,
and records the text in the error log. For tests, the text will be printed only if
the test fails or the -test.v flag is set. For benchmarks, the text is always
printed to avoid having performance depend on the value of the -test.v flag.
Logf formats its arguments according to the format, analogous to Printf, and
records the text in the error log. A final newline is added if not provided. For
tests, the text will be printed only if the test fails or the -test.v flag is
set. For benchmarks, the text is always printed to avoid having performance
depend on the value of the -test.v flag.
Name returns the name of the running test or benchmark.
ReportAllocs enables malloc statistics for this benchmark.
It is equivalent to setting -test.benchmem, but it only affects the
benchmark function that calls ReportAllocs.
ReportMetric adds "n unit" to the reported benchmark results.
If the metric is per-iteration, the caller should divide by b.N,
and by convention units should end in "/op".
ReportMetric overrides any previously reported value for the same unit.
ReportMetric panics if unit is the empty string or if unit contains
any whitespace.
If unit is a unit normally reported by the benchmark framework itself
(such as "allocs/op"), ReportMetric will override that metric.
Setting "ns/op" to 0 will suppress that built-in metric.
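A custom metric recorded with ReportMetric shows up in the Extra map of the BenchmarkResult. A minimal sketch, using testing.Benchmark to run outside "go test"; the "items/op" unit and the per-iteration arithmetic are illustrative:

```go
package main

import (
	"fmt"
	"testing"
)

// benchWithMetric reports a hypothetical per-iteration metric, "items/op".
// Any non-empty, whitespace-free unit string is accepted.
func benchWithMetric() testing.BenchmarkResult {
	return testing.Benchmark(func(b *testing.B) {
		items := 0
		for i := 0; i < b.N; i++ {
			items += 3 // pretend each operation handles 3 items
		}
		// Per-iteration metric: divide by b.N, as the convention suggests.
		b.ReportMetric(float64(items)/float64(b.N), "items/op")
	})
}

func main() {
	res := benchWithMetric()
	fmt.Println(res.Extra["items/op"]) // the value recorded by ReportMetric
}
```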
ResetTimer zeroes the elapsed benchmark time and memory allocation counters
and deletes user-reported metrics.
It does not affect whether the timer is running.
Run benchmarks f as a subbenchmark with the given name. It reports
whether there were any failures.
A subbenchmark is like any other benchmark. A benchmark that calls Run at
least once will not be measured itself and will be called once with N=1.
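A table-driven set of sub-benchmarks can be sketched with testing.Benchmark; per the note above, the parent is called once with N=1 and its result aggregates the sub-benchmark timings. The sizes and names here are illustrative:

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// benchSizes runs one sub-benchmark per input size via b.Run. The parent
// benchmark is not measured itself; it reports N=1 with the summed time.
func benchSizes() testing.BenchmarkResult {
	return testing.Benchmark(func(b *testing.B) {
		for _, size := range []int{1, 16} {
			b.Run(fmt.Sprintf("size=%d", size), func(b *testing.B) {
				for i := 0; i < b.N; i++ {
					_ = strings.Repeat("x", size)
				}
			})
		}
	})
}

func main() {
	res := benchSizes()
	fmt.Println(res.N) // the aggregate simulates a single sequential iteration
}
```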
RunParallel runs a benchmark in parallel.
It creates multiple goroutines and distributes b.N iterations among them.
The number of goroutines defaults to GOMAXPROCS. To increase parallelism for
non-CPU-bound benchmarks, call SetParallelism before RunParallel.
RunParallel is usually used with the go test -cpu flag.
The body function will be run in each goroutine. It should set up any
goroutine-local state and then iterate until pb.Next returns false.
It should not use the StartTimer, StopTimer, or ResetTimer functions,
because they have global effect. It should also not call Run.
SetBytes records the number of bytes processed in a single operation.
If this is called, the benchmark will report ns/op and MB/s.
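The effect of SetBytes can be seen in the returned BenchmarkResult: its Bytes field carries the per-iteration byte count and String() then includes a throughput figure. A sketch with an arbitrary 1 KiB buffer:

```go
package main

import (
	"fmt"
	"testing"
)

// benchThroughput marks each iteration as processing 1024 bytes so the
// result can report MB/s alongside ns/op.
func benchThroughput() testing.BenchmarkResult {
	buf := make([]byte, 1024)
	return testing.Benchmark(func(b *testing.B) {
		b.SetBytes(int64(len(buf)))
		for i := 0; i < b.N; i++ {
			for j := range buf { // stand-in for real per-byte work
				buf[j] = byte(j)
			}
		}
	})
}

func main() {
	res := benchThroughput()
	fmt.Println(res.Bytes)    // 1024: bytes processed per iteration
	fmt.Println(res.String()) // includes an MB/s figure because SetBytes was called
}
```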
SetParallelism sets the number of goroutines used by RunParallel to p*GOMAXPROCS.
There is usually no need to call SetParallelism for CPU-bound benchmarks.
If p is less than 1, this call will have no effect.
Skip is equivalent to Log followed by SkipNow.
SkipNow marks the test as having been skipped and stops its execution
by calling runtime.Goexit.
If a test fails (see Error, Errorf, Fail) and is then skipped,
it is still considered to have failed.
Execution will continue at the next test or benchmark. See also FailNow.
SkipNow must be called from the goroutine running the test, not from
other goroutines created during the test. Calling SkipNow does not stop
those other goroutines.
Skipf is equivalent to Logf followed by SkipNow.
Skipped reports whether the test was skipped.
StartTimer starts timing a test. This function is called automatically
before a benchmark starts, but it can also be used to resume timing after
a call to StopTimer.
StopTimer stops timing a test. This can be used to pause the timer
while performing complex initialization that you don't
want to measure.
TempDir returns a temporary directory for the test to use.
The directory is automatically removed by Cleanup when the test and
all its subtests complete.
Each subsequent call to t.TempDir returns a unique directory;
if the directory creation fails, TempDir terminates the test by calling Fatal.
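The lifetime guarantee can be checked directly: the directory exists while the benchmark runs and is gone once it completes, because Cleanup removes it. A sketch driven by testing.Benchmark:

```go
package main

import (
	"fmt"
	"os"
	"testing"
)

// tempDirLifetime captures the directory handed out by TempDir and checks,
// after the benchmark completes, that Cleanup has removed it.
func tempDirLifetime() (dir string, existedDuring, existsAfter bool) {
	testing.Benchmark(func(b *testing.B) {
		dir = b.TempDir()
		_, err := os.Stat(dir)
		existedDuring = err == nil
		for i := 0; i < b.N; i++ {
		}
	})
	_, err := os.Stat(dir)
	existsAfter = err == nil
	return
}

func main() {
	dir, during, after := tempDirLifetime()
	fmt.Println(dir != "", during, after)
}
```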
add simulates running benchmarks in sequence in a single iteration. It is
used to give some meaningful results in case func Benchmark is used in
combination with Run.
decorate prefixes the string with the file and line of the call site
and inserts the final newline if needed and indentation spaces for formatting.
This function must be called with c.mu held.
(*B) doBench() BenchmarkResult
flushToParent writes c.output to the parent after first writing the header
with the given format and arguments.
frameSkip searches, starting after skip frames, for the first caller frame
in a function not marked as a helper and returns that frame.
The search stops if it finds a tRunner function that
was the entry point into the test and the test is not a subtest.
This function must be called with c.mu held.
launch launches the benchmark function. It gradually increases the number
of benchmark iterations until the benchmark runs for the requested benchtime.
launch is run by the doBench function as a separate goroutine.
run1 must have been called on b.
log generates the output. It's always at the same stack depth.
logDepth generates the output at an arbitrary stack depth.
(*B) private()
run executes the benchmark in a separate goroutine, including all of its
subbenchmarks. b must not have subbenchmarks.
run1 runs the first iteration of benchFunc. It reports whether more
iterations of this benchmark should be run.
runCleanup is called at the end of the test.
If catchPanic is true, this will catch panics, and return the recovered
value if any.
runN runs a single benchmark for the specified number of iterations.
(*B) setRan()
(*B) skip()
trimOutput shortens the output from a benchmark, which can be very long.
*B : TB
*B : github.com/aws/aws-sdk-go/aws.Logger
BenchmarkResult contains the results of a benchmark run.
// Bytes processed in one iteration.
Extra records additional metrics reported by ReportMetric.
// The total number of memory allocations.
// The total number of bytes allocated.
// The number of iterations.
// The total time taken.
AllocedBytesPerOp returns the "B/op" metric,
which is calculated as r.MemBytes / r.N.
AllocsPerOp returns the "allocs/op" metric,
which is calculated as r.MemAllocs / r.N.
MemString returns r.AllocedBytesPerOp and r.AllocsPerOp in the same format as 'go test'.
NsPerOp returns the "ns/op" metric.
String returns a summary of the benchmark results.
It follows the benchmark result line format from
https://golang.org/design/14313-benchmark-format, not including the
benchmark name.
Extra metrics override built-in metrics of the same name.
String does not include allocs/op or B/op, since those are reported
by MemString.
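The two formatting methods split the report exactly as 'go test -benchmem' does: String() gives the timing line and MemString() the allocation line. A sketch, pairing them with ReportAllocs; the allocation in the loop is deliberate so the memory columns are non-zero:

```go
package main

import (
	"fmt"
	"testing"
)

var sink []byte // package-level sink forces the allocation to escape to the heap

// benchAllocs enables allocation reporting and formats the result the way
// "go test -benchmem" would: String() for timing, MemString() for memory.
func benchAllocs() testing.BenchmarkResult {
	return testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			sink = make([]byte, 64) // one 64-byte heap allocation per iteration
		}
	})
}

func main() {
	res := benchAllocs()
	fmt.Println(res.String())      // timing line: iterations and ns/op
	fmt.Println(res.MemString())   // memory line: B/op and allocs/op
	fmt.Println(res.AllocsPerOp()) // MemAllocs / N
}
```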
mbPerSec returns the "MB/s" metric.
T : expvar.Var
T : fmt.Stringer
T : context.stringer
T : runtime.stringer
func Benchmark(f func(b *B)) BenchmarkResult
func (*B).doBench() BenchmarkResult
func (*B).add(other BenchmarkResult)
CoverBlock records the coverage data for a single basic block.
The fields are 1-indexed, as in an editor: The opening line of
the file is number 1, for example. Columns are measured
in bytes.
NOTE: This struct is internal to the testing infrastructure and may change.
It is not covered (yet) by the Go 1 compatibility guidelines.
// Column number for block start.
// Column number for block end.
// Line number for block start.
// Line number for block end.
// Number of statements included in this block.
F func()
Name string
Output string
Unordered bool
processRunResult computes a summary and status of the result of running an example test.
stdout is the captured output from stdout of the test.
recovered is the result of invoking recover after running the test, in case it panicked.
If stdout doesn't match the expected output or if recovered is non-nil, it'll print the cause of failure to stdout.
If the test is chatty/verbose, it'll print a success message to stdout.
If recovered is non-nil, it'll panic with that value.
If the test panicked with nil, or invoked runtime.Goexit, it'll be
made to fail and panic with errNilPanicOrGoexit.
func Main(matchString func(pat, str string) (bool, error), tests []InternalTest, benchmarks []InternalBenchmark, examples []InternalExample)
func MainStart(deps testDeps, tests []InternalTest, benchmarks []InternalBenchmark, examples []InternalExample) *M
func RunExamples(matchString func(pat, str string) (bool, error), examples []InternalExample) (ok bool)
func listTests(matchString func(pat, str string) (bool, error), tests []InternalTest, benchmarks []InternalBenchmark, examples []InternalExample)
func runExample(eg InternalExample) (ok bool)
func runExamples(matchString func(pat, str string) (bool, error), examples []InternalExample) (ran, ok bool)
A PB is used by RunParallel for running parallel benchmarks.
// total number of iterations to execute (b.N)
// local cache of acquired iterations
// shared between all worker goroutines iteration counter
// acquire that many iterations from globalN at once
Next reports whether there are more iterations to execute.
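The pb.Next contract means the worker goroutines collectively execute exactly b.N iterations. A sketch that counts them with an atomic counter, using testing.Benchmark as the driver:

```go
package main

import (
	"fmt"
	"sync/atomic"
	"testing"
)

// benchParallel distributes b.N iterations across GOMAXPROCS goroutines.
// The atomic counter records how many iterations pb.Next handed out in
// the final calibration run.
func benchParallel() (res testing.BenchmarkResult, lastCount int64) {
	var count int64
	res = testing.Benchmark(func(b *testing.B) {
		atomic.StoreInt64(&count, 0) // reset for each calibration run
		b.RunParallel(func(pb *testing.PB) {
			for pb.Next() {
				atomic.AddInt64(&count, 1)
			}
		})
	})
	lastCount = atomic.LoadInt64(&count)
	return
}

func main() {
	res, count := benchParallel()
	fmt.Println(res.N > 0, count == int64(res.N))
}
```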
T is a type passed to Test functions to manage test state and support formatted test logs.
A test ends when its Test function returns or calls any of the methods
FailNow, Fatal, Fatalf, SkipNow, Skip, or Skipf. Those methods, as well as
the Parallel method, must be called only from the goroutine running the
Test function.
The other reporting methods, such as the variations of Log and Error,
may be called simultaneously from multiple goroutines.
common common
// To signal parallel subtests they may start.
// Whether the current test is a benchmark.
// A copy of chattyPrinter, if the chatty flag is set.
// Name of the cleanup function.
// The stack trace at the point where Cleanup was called.
// optional functions to be called at the end of the test
// If level > 0, the stack trace at the point where the parent called t.Run.
// Test is finished and all subtests have completed.
common.duration time.Duration
// Test or benchmark has failed.
// Test function has completed.
// Written atomically.
// helperPCs converted to function names
// functions to be skipped when writing file/line info
// Nesting depth of test or benchmark.
// guards this group of fields
// Name of test or benchmark.
// Output generated by test or benchmark.
common.parent *common
// Number of races detected during test.
// Test or benchmark (or one of its subtests) was executed.
// Function name of tRunner running the test.
// To signal a test is done.
// Test or benchmark has been skipped.
// Time test or benchmark started
// Queue of subtests to be run in parallel.
common.tempDir string
common.tempDirErr error
common.tempDirMu sync.Mutex
common.tempDirSeq int32
// For flushToParent.
// For running tests and subtests.
isParallel bool
Cleanup registers a function to be called when the test and all its
subtests complete. Cleanup functions will be called in last added,
first called order.
Deadline reports the time at which the test binary will have
exceeded the timeout specified by the -timeout flag.
The ok result is false if the -timeout flag indicates “no timeout” (0).
Error is equivalent to Log followed by Fail.
Errorf is equivalent to Logf followed by Fail.
Fail marks the function as having failed but continues execution.
FailNow marks the function as having failed and stops its execution
by calling runtime.Goexit (which then runs all deferred calls in the
current goroutine).
Execution will continue at the next test or benchmark.
FailNow must be called from the goroutine running the
test or benchmark function, not from other goroutines
created during the test. Calling FailNow does not stop
those other goroutines.
Failed reports whether the function has failed.
Fatal is equivalent to Log followed by FailNow.
Fatalf is equivalent to Logf followed by FailNow.
Helper marks the calling function as a test helper function.
When printing file and line information, that function will be skipped.
Helper may be called simultaneously from multiple goroutines.
Log formats its arguments using default formatting, analogous to Println,
and records the text in the error log. For tests, the text will be printed only if
the test fails or the -test.v flag is set. For benchmarks, the text is always
printed to avoid having performance depend on the value of the -test.v flag.
Logf formats its arguments according to the format, analogous to Printf, and
records the text in the error log. A final newline is added if not provided. For
tests, the text will be printed only if the test fails or the -test.v flag is
set. For benchmarks, the text is always printed to avoid having performance
depend on the value of the -test.v flag.
Name returns the name of the running test or benchmark.
Parallel signals that this test is to be run in parallel with (and only with)
other parallel tests. When a test is run multiple times due to use of
-test.count or -test.cpu, multiple instances of a single test never run in
parallel with each other.
Run runs f as a subtest of t called name. It runs f in a separate goroutine
and blocks until f returns or calls t.Parallel to become a parallel test.
Run reports whether f succeeded (or at least did not fail before calling t.Parallel).
Run may be called simultaneously from multiple goroutines, but all such calls
must return before the outer test function for t returns.
Skip is equivalent to Log followed by SkipNow.
SkipNow marks the test as having been skipped and stops its execution
by calling runtime.Goexit.
If a test fails (see Error, Errorf, Fail) and is then skipped,
it is still considered to have failed.
Execution will continue at the next test or benchmark. See also FailNow.
SkipNow must be called from the goroutine running the test, not from
other goroutines created during the test. Calling SkipNow does not stop
those other goroutines.
Skipf is equivalent to Logf followed by SkipNow.
Skipped reports whether the test was skipped.
TempDir returns a temporary directory for the test to use.
The directory is automatically removed by Cleanup when the test and
all its subtests complete.
Each subsequent call to t.TempDir returns a unique directory;
if the directory creation fails, TempDir terminates the test by calling Fatal.
decorate prefixes the string with the file and line of the call site
and inserts the final newline if needed and indentation spaces for formatting.
This function must be called with c.mu held.
flushToParent writes c.output to the parent after first writing the header
with the given format and arguments.
frameSkip searches, starting after skip frames, for the first caller frame
in a function not marked as a helper and returns that frame.
The search stops if it finds a tRunner function that
was the entry point into the test and the test is not a subtest.
This function must be called with c.mu held.
log generates the output. It's always at the same stack depth.
logDepth generates the output at an arbitrary stack depth.
(*T) private()
(*T) report()
runCleanup is called at the end of the test.
If catchPanic is true, this will catch panics, and return the recovered
value if any.
(*T) setRan()
(*T) skip()
*T : TB
*T : github.com/aws/aws-sdk-go/aws.Logger
func golang.org/x/pkgsite/internal/index.SetupTestIndex(t *T, versions []*internal.IndexVersion) (*index.Client, func())
func golang.org/x/pkgsite/internal/postgres.GetFromSearchDocuments(ctx context.Context, t *T, db *postgres.DB, packagePath string) (modulePath, version string, found bool)
func golang.org/x/pkgsite/internal/postgres.InsertSampleDirectoryTree(ctx context.Context, t *T, testDB *postgres.DB)
func golang.org/x/pkgsite/internal/postgres.MustInsertModule(ctx context.Context, t *T, db *postgres.DB, m *internal.Module)
func golang.org/x/pkgsite/internal/postgres.MustInsertModuleGoMod(ctx context.Context, t *T, db *postgres.DB, m *internal.Module, goMod string)
func golang.org/x/pkgsite/internal/postgres.MustInsertModuleNotLatest(ctx context.Context, t *T, db *postgres.DB, m *internal.Module)
func golang.org/x/pkgsite/internal/postgres.ResetTestDB(db *postgres.DB, t *T)
func golang.org/x/pkgsite/internal/proxy.SetupTestClient(t *T, modules []*proxy.Module) (*proxy.Client, func())
func golang.org/x/pkgsite/internal/testing/testhelper.CompareWithGolden(t *T, got, filename string, update bool)
func tRunner(t *T, fn func(t *T))
func golang.org/x/pkgsite/internal/postgres.addLatest(ctx context.Context, t *T, db *postgres.DB, modulePath, version, modFile string) *internal.LatestModuleVersions
func golang.org/x/pkgsite/internal/postgres.mustInsertModule(ctx context.Context, t *T, db *postgres.DB, m *internal.Module, goMod string, latest bool)
func golang.org/x/pkgsite/internal/testing/testhelper.readGolden(t *T, name string) string
func golang.org/x/pkgsite/internal/testing/testhelper.writeGolden(t *T, name string, data string)
TB is the interface common to T and B.
( T) Cleanup(func())
( T) Error(args ...interface{})
( T) Errorf(format string, args ...interface{})
( T) Fail()
( T) FailNow()
( T) Failed() bool
( T) Fatal(args ...interface{})
( T) Fatalf(format string, args ...interface{})
( T) Helper()
( T) Log(args ...interface{})
( T) Logf(format string, args ...interface{})
( T) Name() string
( T) Skip(args ...interface{})
( T) SkipNow()
( T) Skipf(format string, args ...interface{})
( T) Skipped() bool
( T) TempDir() string
A private method to prevent users implementing the
interface and so future additions to it will not
violate Go 1 compatibility.
*B
*T
*common
T : github.com/aws/aws-sdk-go/aws.Logger
// Maximum extension length.
match *matcher
// The largest recorded benchmark name.
processBench runs bench b for the configured CPU counts and prints the results.
// last printed test name in chatty mode
// guards lastName
w io.Writer
Printf prints a message, generated by the named test, that does not
necessarily mention that test's name itself.
Updatef prints a message about the status of the named test to w.
The formatted message must include the test name itself.
func newChattyPrinter(w io.Writer) *chattyPrinter
common holds the elements common between T and B and
captures common methods such as Errorf.
// To signal parallel subtests they may start.
// Whether the current test is a benchmark.
// A copy of chattyPrinter, if the chatty flag is set.
// Name of the cleanup function.
// The stack trace at the point where Cleanup was called.
// optional functions to be called at the end of the test
// If level > 0, the stack trace at the point where the parent called t.Run.
// Test is finished and all subtests have completed.
duration time.Duration
// Test or benchmark has failed.
// Test function has completed.
// Written atomically.
// helperPCs converted to function names
// functions to be skipped when writing file/line info
// Nesting depth of test or benchmark.
// guards this group of fields
// Name of test or benchmark.
// Output generated by test or benchmark.
parent *common
// Number of races detected during test.
// Test or benchmark (or one of its subtests) was executed.
// Function name of tRunner running the test.
// To signal a test is done.
// Test or benchmark has been skipped.
// Time test or benchmark started
// Queue of subtests to be run in parallel.
tempDir string
tempDirErr error
tempDirMu sync.Mutex
tempDirSeq int32
// For flushToParent.
Cleanup registers a function to be called when the test and all its
subtests complete. Cleanup functions will be called in last added,
first called order.
Error is equivalent to Log followed by Fail.
Errorf is equivalent to Logf followed by Fail.
Fail marks the function as having failed but continues execution.
FailNow marks the function as having failed and stops its execution
by calling runtime.Goexit (which then runs all deferred calls in the
current goroutine).
Execution will continue at the next test or benchmark.
FailNow must be called from the goroutine running the
test or benchmark function, not from other goroutines
created during the test. Calling FailNow does not stop
those other goroutines.
Failed reports whether the function has failed.
Fatal is equivalent to Log followed by FailNow.
Fatalf is equivalent to Logf followed by FailNow.
Helper marks the calling function as a test helper function.
When printing file and line information, that function will be skipped.
Helper may be called simultaneously from multiple goroutines.
Log formats its arguments using default formatting, analogous to Println,
and records the text in the error log. For tests, the text will be printed only if
the test fails or the -test.v flag is set. For benchmarks, the text is always
printed to avoid having performance depend on the value of the -test.v flag.
Logf formats its arguments according to the format, analogous to Printf, and
records the text in the error log. A final newline is added if not provided. For
tests, the text will be printed only if the test fails or the -test.v flag is
set. For benchmarks, the text is always printed to avoid having performance
depend on the value of the -test.v flag.
Name returns the name of the running test or benchmark.
Skip is equivalent to Log followed by SkipNow.
SkipNow marks the test as having been skipped and stops its execution
by calling runtime.Goexit.
If a test fails (see Error, Errorf, Fail) and is then skipped,
it is still considered to have failed.
Execution will continue at the next test or benchmark. See also FailNow.
SkipNow must be called from the goroutine running the test, not from
other goroutines created during the test. Calling SkipNow does not stop
those other goroutines.
Skipf is equivalent to Logf followed by SkipNow.
Skipped reports whether the test was skipped.
TempDir returns a temporary directory for the test to use.
The directory is automatically removed by Cleanup when the test and
all its subtests complete.
Each subsequent call to t.TempDir returns a unique directory;
if the directory creation fails, TempDir terminates the test by calling Fatal.
decorate prefixes the string with the file and line of the call site
and inserts the final newline if needed and indentation spaces for formatting.
This function must be called with c.mu held.
flushToParent writes c.output to the parent after first writing the header
with the given format and arguments.
frameSkip searches, starting after skip frames, for the first caller frame
in a function not marked as a helper and returns that frame.
The search stops if it finds a tRunner function that
was the entry point into the test and the test is not a subtest.
This function must be called with c.mu held.
log generates the output. It's always at the same stack depth.
logDepth generates the output at an arbitrary stack depth.
(*T) private()
runCleanup is called at the end of the test.
If catchPanic is true, this will catch panics, and return the recovered
value if any.
(*T) setRan()
(*T) skip()
*T : TB
*T : github.com/aws/aws-sdk-go/aws.Logger
( T) Write(b []byte) (n int, err error)
T : github.com/go-git/go-git/v5/plumbing/protocol/packp/sideband.Progress
T : github.com/jbenet/go-context/io.Writer
T : io.Writer
c *common
( T) Write(b []byte) (n int, err error)
T : github.com/go-git/go-git/v5/plumbing/protocol/packp/sideband.Progress
T : github.com/jbenet/go-context/io.Writer
T : io.Writer
testContext holds all fields that are common to all tests. This includes
synchronization primitives to run at most *parallel tests.
deadline time.Time
match *matcher
maxParallel is a copy of the parallel flag.
musync.Mutex
numWaiting is the number of tests waiting to be run in parallel.
running is the number of tests currently running in parallel.
This does not include tests that are waiting for subtests to complete.
Channel used to signal tests that are ready to be run in parallel.
(*testContext) release()
(*testContext) waitParallel()
func newTestContext(maxParallel int, m *matcher) *testContext
Package-Level Functions (total 39, of which 13 are exported)
AllocsPerRun returns the average number of allocations during calls to f.
Although the return value has type float64, it will always be an integral value.
To compute the number of allocations, the function will first be run once as
a warm-up. The average number of allocations over the specified number of
runs will then be measured and returned.
AllocsPerRun sets GOMAXPROCS to 1 during its measurement and will restore
it before returning.
Benchmark benchmarks a single function. It is useful for creating
custom benchmarks that do not use the "go test" command.
If f depends on testing flags, then Init must be used to register
those flags before calling Benchmark and before calling flag.Parse.
If f calls Run, the result will be an estimate of running all its
subbenchmarks that don't call Run in sequence in a single benchmark.
Coverage reports the current code coverage as a fraction in the range [0, 1].
If coverage is not enabled, Coverage returns 0.
When running a large set of sequential test cases, checking Coverage after each one
can be useful for identifying which test cases exercise new code paths.
It is not a replacement for the reports generated by 'go test -cover' and
'go tool cover'.
CoverMode reports what the test coverage mode is set to. The
values are "set", "count", or "atomic". The return value will be
empty if test coverage is not enabled.
Init registers testing flags. These flags are automatically registered by
the "go test" command before running test functions, so Init is only needed
when calling functions such as Benchmark without using "go test".
Init has no effect if it was already called.
Main is an internal function, part of the implementation of the "go test" command.
It was exported because it is cross-package and predates "internal" packages.
It is no longer used by "go test" but preserved, as much as possible, for other
systems that simulate "go test" using Main, but Main sometimes cannot be updated as
new functionality is added to the testing package.
Systems simulating "go test" should be updated to use MainStart.
MainStart is meant for use by tests generated by 'go test'.
It is not meant to be called directly and is not subject to the Go 1 compatibility document.
It may change signature from release to release.
RegisterCover records the coverage data accumulators for the tests.
NOTE: This function is internal to the testing infrastructure and may change.
It is not covered (yet) by the Go 1 compatibility guidelines.
RunBenchmarks is an internal function but exported because it is cross-package;
it is part of the implementation of the "go test" command.
RunExamples is an internal function but exported because it is cross-package;
it is part of the implementation of the "go test" command.
RunTests is an internal function but exported because it is cross-package;
it is part of the implementation of the "go test" command.
Short reports whether the -test.short flag is set.
Verbose reports whether the -test.v flag is set.
benchmarkName returns the full name of the benchmark, including the procs suffix.
callerName gives the function name (qualified with a package path)
for the caller after skip frames (where 0 means the current function).
coverReport reports the coverage percentage and writes a coverage profile if requested.
fmtDuration returns a string representing d in the form "87.00s".
The pages are generated with Golds v0.3.2-preview. (GOOS=darwin GOARCH=amd64)
Golds is a Go 101 project developed by Tapir Liu.