Package-Level Type Names (total 207, of which 187 are exported)
Describes how to combine multiple time series to provide a different view of
the data. Aggregation of time series is done in two steps. First, each time
series in the set is _aligned_ to the same time interval boundaries, then the
set of time series is optionally _reduced_ in number.
Alignment consists of applying the `per_series_aligner` operation
to each time series after its data has been divided into regular
`alignment_period` time intervals. This process takes _all_ of the data
points in an alignment period, applies a mathematical transformation such as
averaging, minimum, maximum, delta, etc., and converts them into a single
data point per period.
Reduction is when the aligned and transformed time series can optionally be
combined, reducing the number of time series through similar mathematical
transformations. Reduction involves applying a `cross_series_reducer` to
all the time series, optionally sorting the time series into subsets with
`group_by_fields`, and applying the reducer to each subset.
The raw time series data can contain a huge amount of information from
multiple sources. Alignment and reduction transforms this mass of data into
a more manageable and representative collection of data, for example "the
95% latency across the average of all tasks in a cluster". This
representative data can be more easily graphed and comprehended, and the
individual time series data is still available for later drilldown. For more
details, see [Filtering and
aggregation](https://cloud.google.com/monitoring/api/v3/aggregation).
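The two steps above can be sketched in plain Go. This is a minimal illustration, not the service's implementation: the `point` type and the mean-based aligner/reducer are stand-ins for the real `TimeSeries` protos and the `ALIGN_MEAN`/`REDUCE_MEAN` operations.

```go
package main

import "fmt"

// point is a hypothetical in-memory data point; the real API uses TimeSeries protos.
type point struct {
	ts    int64 // seconds since epoch
	value float64
}

// align buckets one series into alignment-period windows and applies an
// ALIGN_MEAN-style transform: all points in a period become one averaged point.
func align(series []point, period int64) map[int64]float64 {
	sums := map[int64]float64{}
	counts := map[int64]int{}
	for _, p := range series {
		bucket := p.ts / period // points in the same period share a bucket
		sums[bucket] += p.value
		counts[bucket]++
	}
	out := map[int64]float64{}
	for b, s := range sums {
		out[b] = s / float64(counts[b])
	}
	return out
}

// reduce applies a REDUCE_MEAN-style cross-series reducer: for each aligned
// period, the values from all input series collapse into a single value.
func reduce(aligned []map[int64]float64) map[int64]float64 {
	sums := map[int64]float64{}
	counts := map[int64]int{}
	for _, series := range aligned {
		for b, v := range series {
			sums[b] += v
			counts[b]++
		}
	}
	out := map[int64]float64{}
	for b, s := range sums {
		out[b] = s / float64(counts[b])
	}
	return out
}

func main() {
	a := []point{{0, 1}, {30, 3}, {60, 5}} // period 0 mean: 2; period 1 mean: 5
	b := []point{{10, 4}, {70, 7}}         // period 0: 4; period 1: 7
	combined := reduce([]map[int64]float64{align(a, 60), align(b, 60)})
	fmt.Println(combined[0], combined[1]) // period 0: (2+4)/2 = 3; period 1: (5+7)/2 = 6
}
```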
The `alignment_period` specifies a time interval, in seconds, that is used
to divide the data in all the
[time series][google.monitoring.v3.TimeSeries] into consistent blocks of
time. This division is done before the per-series aligner is applied to
the data.
The value must be at least 60 seconds. If a per-series aligner other than
`ALIGN_NONE` is specified, this field is required or an error is returned.
If no per-series aligner is specified, or the aligner `ALIGN_NONE` is
specified, then this field is ignored.
The reduction operation to be used to combine time series into a single
time series, where the value of each data point in the resulting series is
a function of all the already aligned values in the input time series.
Not all reducer operations can be applied to all time series. The valid
choices depend on the `metric_kind` and the `value_type` of the original
time series. Reduction can yield a time series with a different
`metric_kind` or `value_type` than the input time series.
Time series data must first be aligned (see `per_series_aligner`) in order
to perform cross-time series reduction. If `cross_series_reducer` is
specified, then `per_series_aligner` must be specified, and must not be
`ALIGN_NONE`. An `alignment_period` must also be specified; otherwise, an
error is returned.
The set of fields to preserve when `cross_series_reducer` is
specified. The `group_by_fields` determine how the time series are
partitioned into subsets prior to applying the aggregation
operation. Each subset contains time series that have the same
value for each of the grouping fields. Each individual time
series is a member of exactly one subset. The
`cross_series_reducer` is applied to each subset of time series.
It is not possible to reduce across different resource types, so
this field implicitly contains `resource.type`. Fields not
specified in `group_by_fields` are aggregated away. If
`group_by_fields` is not specified and all the time series have
the same resource type, then the time series are aggregated into
a single output time series. If `cross_series_reducer` is not
defined, this field is ignored.
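The partitioning behavior can be sketched as follows. `labeledSeries` is a hypothetical stand-in for a real time series with its resource and metric labels; note how `resource.type` is always included in the grouping key, mirroring the implicit behavior described above.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// labeledSeries is a hypothetical labeled time series.
type labeledSeries struct {
	labels map[string]string
	name   string
}

// partition splits series into subsets keyed by the values of the grouping
// fields; resource.type is implicitly part of every key, so series of
// different resource types never share a subset.
func partition(series []labeledSeries, groupByFields []string) map[string][]labeledSeries {
	fields := append([]string{"resource.type"}, groupByFields...)
	out := map[string][]labeledSeries{}
	for _, s := range series {
		parts := make([]string, 0, len(fields))
		for _, f := range fields {
			parts = append(parts, f+"="+s.labels[f])
		}
		sort.Strings(parts) // stable key regardless of field order
		key := strings.Join(parts, ",")
		out[key] = append(out[key], s)
	}
	return out
}

func main() {
	series := []labeledSeries{
		{map[string]string{"resource.type": "gce_instance", "resource.zone": "us-east1-b"}, "a"},
		{map[string]string{"resource.type": "gce_instance", "resource.zone": "us-east1-b"}, "b"},
		{map[string]string{"resource.type": "gce_instance", "resource.zone": "europe-west1-c"}, "c"},
	}
	groups := partition(series, []string{"resource.zone"})
	fmt.Println(len(groups)) // one subset per zone
}
```

Each series lands in exactly one subset, and the `cross_series_reducer` would then be applied per subset.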
An `Aligner` describes how to bring the data points in a single
time series into temporal alignment. Except for `ALIGN_NONE`, all
alignments cause all the data points in an `alignment_period` to be
mathematically grouped together, resulting in a single data point for
each `alignment_period` with end timestamp at the end of the period.
Not all alignment operations may be applied to all time series. The valid
choices depend on the `metric_kind` and `value_type` of the original time
series. Alignment can change the `metric_kind` or the `value_type` of
the time series.
Time series data must be aligned in order to perform cross-time
series reduction. If `cross_series_reducer` is specified, then
`per_series_aligner` must be specified and not equal to `ALIGN_NONE`
and `alignment_period` must be specified; otherwise, an error is
returned.
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use Aggregation.ProtoReflect.Descriptor instead.
(*T) GetAlignmentPeriod() *durationpb.Duration
(*T) GetCrossSeriesReducer() Aggregation_Reducer
(*T) GetGroupByFields() []string
(*T) GetPerSeriesAligner() Aggregation_Aligner
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
func (*AlertPolicy_Condition_MetricAbsence).GetAggregations() []*Aggregation
func (*AlertPolicy_Condition_MetricThreshold).GetAggregations() []*Aggregation
func (*AlertPolicy_Condition_MetricThreshold).GetDenominatorAggregations() []*Aggregation
func (*ListTimeSeriesRequest).GetAggregation() *Aggregation
A condition is a true/false test that determines when an alerting policy
should open an incident. If a condition evaluates to true, it signifies
that something is wrong.
Only one of the following condition types will be specified.
Types that are assignable to Condition:
*AlertPolicy_Condition_ConditionThreshold
*AlertPolicy_Condition_ConditionAbsent
A short name or phrase used to identify the condition in dashboards,
notifications, and incidents. To avoid confusion, don't use the same
display name for multiple conditions in the same policy.
Required if the condition exists. The unique resource name for this
condition. Its format is:
projects/[PROJECT_ID_OR_NUMBER]/alertPolicies/[POLICY_ID]/conditions/[CONDITION_ID]
`[CONDITION_ID]` is assigned by Stackdriver Monitoring when the
condition is created as part of a new or updated alerting policy.
When calling the
[alertPolicies.create][google.monitoring.v3.AlertPolicyService.CreateAlertPolicy]
method, do not include the `name` field in the conditions of the
requested alerting policy. Stackdriver Monitoring creates the
condition identifiers and includes them in the new policy.
When calling the
[alertPolicies.update][google.monitoring.v3.AlertPolicyService.UpdateAlertPolicy]
method to update a policy, including a condition `name` causes the
existing condition to be updated. Conditions without names are added to
the updated policy. Existing conditions are deleted if they are not
updated.
Best practice is to preserve `[CONDITION_ID]` if you make only small
changes, such as those to condition thresholds, durations, or trigger
values. Otherwise, treat the change as a new condition and let the
existing condition be deleted.
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use AlertPolicy_Condition.ProtoReflect.Descriptor instead.
(*T) GetCondition() isAlertPolicy_Condition_Condition
(*T) GetConditionAbsent() *AlertPolicy_Condition_MetricAbsence
(*T) GetConditionThreshold() *AlertPolicy_Condition_MetricThreshold
(*T) GetDisplayName() string
(*T) GetName() string
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
func (*AlertPolicy).GetConditions() []*AlertPolicy_Condition
A condition type that checks that monitored resources
are reporting data. The configuration defines a metric and
a set of monitored resources. The predicate is considered in violation
when a time series for the specified metric of a monitored
resource does not include any data in the specified `duration`.
Specifies the alignment of data points in individual time series as
well as how to combine the retrieved time series together (such as
when aggregating multiple streams on each resource to a single
stream for each resource or when aggregating streams across all
members of a group of resources). Multiple aggregations
are applied in the order specified.
This field is similar to the one in the [`ListTimeSeries`
request](https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/list).
It is advisable to use the `ListTimeSeries` method when debugging this
field.
The amount of time that a time series must fail to report new
data to be considered failing. Currently, only values that
are a multiple of a minute--e.g. 60, 120, or 300
seconds--are supported. If an invalid value is given, an
error will be returned. The `Duration.nanos` field is
ignored.
A [filter](https://cloud.google.com/monitoring/api/v3/filters) that
identifies which time series should be compared with the threshold.
The filter is similar to the one that is specified in the
[`ListTimeSeries`
request](https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/list)
(that call is useful to verify the time series that will be retrieved /
processed) and must specify the metric type and optionally may contain
restrictions on resource type, resource labels, and metric labels.
This field may not exceed 2048 Unicode characters in length.
The number/percent of time series for which the comparison must hold
in order for the condition to trigger. If unspecified, then the
condition will trigger if the comparison is true for any of the
time series that have been identified by `filter` and `aggregations`.
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use AlertPolicy_Condition_MetricAbsence.ProtoReflect.Descriptor instead.
(*T) GetAggregations() []*Aggregation
(*T) GetDuration() *durationpb.Duration
(*T) GetFilter() string
(*T) GetTrigger() *AlertPolicy_Condition_Trigger
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
func (*AlertPolicy_Condition).GetConditionAbsent() *AlertPolicy_Condition_MetricAbsence
A condition type that compares a collection of time series
against a threshold.
Specifies the alignment of data points in individual time series as
well as how to combine the retrieved time series together (such as
when aggregating multiple streams on each resource to a single
stream for each resource or when aggregating streams across all
members of a group of resources). Multiple aggregations
are applied in the order specified.
This field is similar to the one in the [`ListTimeSeries`
request](https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/list).
It is advisable to use the `ListTimeSeries` method when debugging this
field.
The comparison to apply between the time series (indicated by `filter`
and `aggregation`) and the threshold (indicated by `threshold_value`).
The comparison is applied on each time series, with the time series
on the left-hand side and the threshold on the right-hand side.
Only `COMPARISON_LT` and `COMPARISON_GT` are supported currently.
Specifies the alignment of data points in individual time series
selected by `denominatorFilter` as
well as how to combine the retrieved time series together (such as
when aggregating multiple streams on each resource to a single
stream for each resource or when aggregating streams across all
members of a group of resources).
When computing ratios, the `aggregations` and
`denominator_aggregations` fields must use the same alignment period
and produce time series that have the same periodicity and labels.
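Once both sides are aligned to the same periods, the ratio is a point-by-point division. This is an illustrative sketch of that idea only; the service computes the ratio internally, and the bucket-keyed maps here are hypothetical.

```go
package main

import "fmt"

// ratio divides aligned numerator points by aligned denominator points for
// each period where both series have a value; periods missing from either
// side (or with a zero denominator) produce no output point.
func ratio(num, den map[int64]float64) map[int64]float64 {
	out := map[int64]float64{}
	for b, n := range num {
		if d, ok := den[b]; ok && d != 0 {
			out[b] = n / d
		}
	}
	return out
}

func main() {
	errCounts := map[int64]float64{0: 5, 1: 2}   // aligned error counts per period
	requests := map[int64]float64{0: 100, 1: 50} // aligned request counts per period
	fmt.Println(ratio(errCounts, requests)[0]) // 0.05 error rate in period 0
}
```

This is why both sides must use the same alignment period: if the bucket keys do not line up, the pointwise division is meaningless.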
A [filter](https://cloud.google.com/monitoring/api/v3/filters) that
identifies a time series that should be used as the denominator of a
ratio that will be compared with the threshold. If a
`denominator_filter` is specified, the time series specified by the
`filter` field will be used as the numerator.
The filter must specify the metric type and optionally may contain
restrictions on resource type, resource labels, and metric labels.
This field may not exceed 2048 Unicode characters in length.
The amount of time that a time series must violate the
threshold to be considered failing. Currently, only values
that are a multiple of a minute--e.g., 0, 60, 120, or 300
seconds--are supported. If an invalid value is given, an
error will be returned. When choosing a duration, it is useful to
keep in mind the frequency of the underlying time series data
(which may also be affected by any alignments specified in the
`aggregations` field); a good duration is long enough so that a single
outlier does not generate spurious alerts, but short enough that
unhealthy states are detected and alerted on quickly.
A [filter](https://cloud.google.com/monitoring/api/v3/filters) that
identifies which time series should be compared with the threshold.
The filter is similar to the one that is specified in the
[`ListTimeSeries`
request](https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/list)
(that call is useful to verify the time series that will be retrieved /
processed) and must specify the metric type and optionally may contain
restrictions on resource type, resource labels, and metric labels.
This field may not exceed 2048 Unicode characters in length.
A value against which to compare the time series.
The number/percent of time series for which the comparison must hold
in order for the condition to trigger. If unspecified, then the
condition will trigger if the comparison is true for any of the
time series that have been identified by `filter` and `aggregations`,
or by the ratio, if `denominator_filter` and `denominator_aggregations`
are specified.
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use AlertPolicy_Condition_MetricThreshold.ProtoReflect.Descriptor instead.
(*T) GetAggregations() []*Aggregation
(*T) GetComparison() ComparisonType
(*T) GetDenominatorAggregations() []*Aggregation
(*T) GetDenominatorFilter() string
(*T) GetDuration() *durationpb.Duration
(*T) GetFilter() string
(*T) GetThresholdValue() float64
(*T) GetTrigger() *AlertPolicy_Condition_Trigger
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
func (*AlertPolicy_Condition).GetConditionThreshold() *AlertPolicy_Condition_MetricThreshold
A content string and a MIME type that describes the content string's
format.
The text of the documentation, interpreted according to `mime_type`.
The content may not exceed 8,192 Unicode characters and may not exceed
10,240 bytes when encoded in UTF-8 format, whichever limit is smaller.
The format of the `content` field. Presently, only the value
`"text/markdown"` is supported. See
[Markdown](https://en.wikipedia.org/wiki/Markdown) for more information.
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use AlertPolicy_Documentation.ProtoReflect.Descriptor instead.
(*T) GetContent() string
(*T) GetMimeType() string
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
func (*AlertPolicy).GetDocumentation() *AlertPolicy_Documentation
AlertPolicyServiceClient is the client API for AlertPolicyService service.
For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
Creates a new alerting policy.
Deletes an alerting policy.
Gets a single alerting policy.
Lists the existing alerting policies for the project.
Updates an alerting policy. You can either replace the entire policy with
a new one or replace only certain fields in the current alerting policy by
specifying the fields to be updated via `updateMask`. Returns the
updated alerting policy.
*alertPolicyServiceClient
func NewAlertPolicyServiceClient(cc grpc.ClientConnInterface) AlertPolicyServiceClient
AlertPolicyServiceServer is the server API for AlertPolicyService service.
Creates a new alerting policy.
Deletes an alerting policy.
Gets a single alerting policy.
Lists the existing alerting policies for the project.
Updates an alerting policy. You can either replace the entire policy with
a new one or replace only certain fields in the current alerting policy by
specifying the fields to be updated via `updateMask`. Returns the
updated alerting policy.
*UnimplementedAlertPolicyServiceServer
func RegisterAlertPolicyServiceServer(s *grpc.Server, srv AlertPolicyServiceServer)
An SLI measuring performance on a well-known service type. Performance will
be computed on the basis of pre-defined metrics. The type of the
`service_resource` determines the metrics to use and the
`service_resource.labels` and `metric_labels` are used to construct a
monitoring filter to filter that metric down to just the data relevant to
this service.
OPTIONAL: The set of locations to which this SLI is relevant. Telemetry
from other locations will not be used to calculate performance for this
SLI. If omitted, this SLI applies to all locations in which the Service has
activity. For service types that don't support breaking down by location,
setting this field will result in an error.
OPTIONAL: The set of RPCs to which this SLI is relevant. Telemetry from
other methods will not be used to calculate performance for this SLI. If
omitted, this SLI applies to all the Service's methods. For service types
that don't support breaking down by method, setting this field will result
in an error.
This SLI can be evaluated on the basis of availability or latency.
Types that are assignable to SliCriteria:
*BasicSli_Availability
*BasicSli_Latency
OPTIONAL: The set of API versions to which this SLI is relevant. Telemetry
from other API versions will not be used to calculate performance for this
SLI. If omitted, this SLI applies to all API versions. For service types
that don't support breaking down by version, setting this field will result
in an error.
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use BasicSli.ProtoReflect.Descriptor instead.
(*T) GetAvailability() *BasicSli_AvailabilityCriteria
(*T) GetLatency() *BasicSli_LatencyCriteria
(*T) GetLocation() []string
(*T) GetMethod() []string
(*T) GetSliCriteria() isBasicSli_SliCriteria
(*T) GetVersion() []string
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
func (*ServiceLevelIndicator).GetBasicSli() *BasicSli
func (*WindowsBasedSli_PerformanceThreshold).GetBasicSliPerformance() *BasicSli
Good service is defined to be the count of requests made to this service
that are fast enough with respect to `latency.threshold`.
(*T) isBasicSli_SliCriteria()
*T : isBasicSli_SliCriteria
A `DistributionCut` defines a `TimeSeries` and thresholds used for measuring
good service and total service. The `TimeSeries` must have `ValueType =
DISTRIBUTION` and `MetricKind = DELTA` or `MetricKind = CUMULATIVE`. The
computed `good_service` will be the count of values x in the `Distribution`
such that `range.min <= x < range.max`.
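The half-open cut can be sketched directly. This is an assumption-laden simplification: a flat slice of values stands in for the real bucketed `Distribution`, which would be counted bucket-by-bucket rather than value-by-value.

```go
package main

import "fmt"

// goodService counts the values x with rangeMin <= x < rangeMax, matching
// the half-open `range.min <= x < range.max` cut described above.
func goodService(values []float64, rangeMin, rangeMax float64) int {
	n := 0
	for _, x := range values {
		if rangeMin <= x && x < rangeMax {
			n++
		}
	}
	return n
}

func main() {
	latencies := []float64{0.05, 0.12, 0.30, 0.90, 1.50} // hypothetical latencies, seconds
	// "good" = requests faster than 0.5s; for a one-sided range,
	// one bound would be set to an infinite value.
	fmt.Println(goodService(latencies, 0, 0.5)) // 3 of 5 requests are good
}
```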
A [monitoring filter](https://cloud.google.com/monitoring/api/v3/filters)
specifying a `TimeSeries` aggregating values. Must have `ValueType =
DISTRIBUTION` and `MetricKind = DELTA` or `MetricKind = CUMULATIVE`.
Range of values considered "good." For a one-sided range, set one bound to
an infinite value.
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use DistributionCut.ProtoReflect.Descriptor instead.
(*T) GetDistributionFilter() string
(*T) GetRange() *Range
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
func (*RequestBasedSli).GetDistributionCut() *DistributionCut
A set of (label, value) pairs which were dropped during aggregation, attached
to google.api.Distribution.Exemplars in google.api.Distribution values during
aggregation.
These values are used in combination with the label values that remain on the
aggregated Distribution timeseries to construct the full label set for the
exemplar values. The resulting full label set may be used to identify the
specific task/job/instance (for example) which may be contributing to a
long-tail, while allowing the storage savings of only storing aggregated
distribution values for a large group.
Note that there are no guarantees on ordering of the labels from
exemplar-to-exemplar and from distribution-to-distribution in the same
stream, and there may be duplicates. It is up to clients to resolve any
ambiguities.
Map from label to its value, for all labels dropped in any aggregation.
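Reconstructing an exemplar's full label set is a simple map merge. A minimal sketch, assuming hypothetical label values; the real attachment arrives as a `DroppedLabels` message on `google.api.Distribution.Exemplars`.

```go
package main

import "fmt"

// fullLabels combines the labels kept on the aggregated series with the
// DroppedLabels attachment to recover the exemplar's complete label set.
func fullLabels(kept, dropped map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range dropped {
		out[k] = v
	}
	for k, v := range kept { // kept labels win on any (unexpected) overlap
		out[k] = v
	}
	return out
}

func main() {
	kept := map[string]string{"zone": "us-east1-b"}              // survives aggregation
	dropped := map[string]string{"instance_id": "1234567890"}    // hypothetical dropped label
	fmt.Println(fullLabels(kept, dropped)["instance_id"])        // identifies the specific task
}
```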
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use DroppedLabels.ProtoReflect.Descriptor instead.
(*T) GetLabel() map[string]string
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
GroupServiceClient is the client API for GroupService service.
For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
Creates a new group.
Deletes an existing group.
Gets a single group.
Lists the monitored resources that are members of a group.
Lists the existing groups.
Updates an existing group.
You can change any group attributes except `name`.
*groupServiceClient
func NewGroupServiceClient(cc grpc.ClientConnInterface) GroupServiceClient
GroupServiceServer is the server API for GroupService service.
Creates a new group.
Deletes an existing group.
Gets a single group.
Lists the monitored resources that are members of a group.
Lists the existing groups.
Updates an existing group.
You can change any group attributes except `name`.
*UnimplementedGroupServiceServer
func RegisterGroupServiceServer(s *grpc.Server, srv GroupServiceServer)
An internal checker allows Uptime checks to run on private/internal GCP
resources.
Deprecated: Do not use.
The checker's human-readable name. The display name
should be unique within a Stackdriver Workspace in order to make it easier
to identify; however, uniqueness is not enforced.
The GCP zone the Uptime check should egress from. Only respected for
internal Uptime checks, where internal_network is specified.
A unique resource name for this InternalChecker. The format is:
projects/[PROJECT_ID_OR_NUMBER]/internalCheckers/[INTERNAL_CHECKER_ID]
`[PROJECT_ID_OR_NUMBER]` is the Stackdriver Workspace project for the
Uptime check config associated with the internal checker.
The [GCP VPC network](https://cloud.google.com/vpc/docs/vpc) where the
internal resource lives (ex: "default").
The GCP project ID where the internal checker lives. Not necessarily
the same as the Workspace project.
The current operational state of the internal checker.
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use InternalChecker.ProtoReflect.Descriptor instead.
(*T) GetDisplayName() string
(*T) GetGcpZone() string
(*T) GetName() string
(*T) GetNetwork() string
(*T) GetPeerProjectId() string
(*T) GetState() InternalChecker_State
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
func (*UptimeCheckConfig).GetInternalCheckers() []*InternalChecker
A group name. The format is:
projects/[PROJECT_ID_OR_NUMBER]/groups/[GROUP_ID]
Returns groups that are ancestors of the specified group.
The groups are returned in order, starting with the immediate parent and
ending with the most distant ancestor. If the specified group has no
immediate parent, the results are empty.
(*T) isListGroupsRequest_Filter()
*T : isListGroupsRequest_Filter
A group name. The format is:
projects/[PROJECT_ID_OR_NUMBER]/groups/[GROUP_ID]
Returns groups whose `parent_name` field contains the group
name. If no groups have this parent, the results are empty.
(*T) isListGroupsRequest_Filter()
*T : isListGroupsRequest_Filter
A group name. The format is:
projects/[PROJECT_ID_OR_NUMBER]/groups/[GROUP_ID]
Returns the descendants of the specified group. This is a superset of
the results returned by the `children_of_group` filter, and includes
children-of-children, and so forth.
(*T) isListGroupsRequest_Filter()
*T : isListGroupsRequest_Filter
The `ListTimeSeries` request.
Specifies the alignment of data points in individual time series as
well as how to combine the retrieved time series across specified labels.
By default (if no `aggregation` is explicitly specified), the raw time
series data is returned.
Required. A [monitoring filter](https://cloud.google.com/monitoring/api/v3/filters)
that specifies which time series should be returned. The filter must
specify a single metric type, and can additionally specify metric labels
and other information. For example:
metric.type = "compute.googleapis.com/instance/cpu/usage_time" AND
metric.labels.instance_name = "my-instance-name"
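A filter string like the one above can be assembled programmatically. A small sketch, with hypothetical helper and label values; the filter grammar itself is documented at the monitoring filters page linked above.

```go
package main

import "fmt"

// buildFilter assembles a monitoring filter string: the metric type is
// required, and label restrictions are optional AND-ed clauses.
func buildFilter(metricType string, labels map[string]string) string {
	f := fmt.Sprintf("metric.type = %q", metricType)
	for k, v := range labels {
		f += fmt.Sprintf(" AND metric.labels.%s = %q", k, v)
	}
	return f
}

func main() {
	fmt.Println(buildFilter(
		"compute.googleapis.com/instance/cpu/usage_time",
		map[string]string{"instance_name": "my-instance-name"},
	))
}
```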
Required. The time interval for which results should be returned. Only time series
that contain data points in the specified interval are included
in the response.
Required. The project on which to execute the request. The format is:
projects/[PROJECT_ID_OR_NUMBER]
Unsupported: must be left blank. The points in each time series are
currently returned in reverse time order (most recent to oldest).
A positive number that is the maximum number of results to return. If
`page_size` is empty or more than 100,000 results, the effective
`page_size` is 100,000 results. If `view` is set to `FULL`, this is the
maximum number of `Points` returned. If `view` is set to `HEADERS`, this is
the maximum number of `TimeSeries` returned.
If this field is not empty then it must contain the `nextPageToken` value
returned by a previous call to this method. Using this field causes the
method to return additional results from the previous method call.
Required. Specifies which information is returned about the time series.
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use ListTimeSeriesRequest.ProtoReflect.Descriptor instead.
(*T) GetAggregation() *Aggregation
(*T) GetFilter() string
(*T) GetInterval() *TimeInterval
(*T) GetName() string
(*T) GetOrderBy() string
(*T) GetPageSize() int32
(*T) GetPageToken() string
(*T) GetView() ListTimeSeriesRequest_TimeSeriesView
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
func MetricServiceClient.ListTimeSeries(ctx context.Context, in *ListTimeSeriesRequest, opts ...grpc.CallOption) (*ListTimeSeriesResponse, error)
func MetricServiceServer.ListTimeSeries(context.Context, *ListTimeSeriesRequest) (*ListTimeSeriesResponse, error)
func (*UnimplementedMetricServiceServer).ListTimeSeries(context.Context, *ListTimeSeriesRequest) (*ListTimeSeriesResponse, error)
func cloud.google.com/go/monitoring/apiv3.(*MetricClient).ListTimeSeries(ctx context.Context, req *ListTimeSeriesRequest, opts ...gax.CallOption) *monitoring.TimeSeriesIterator
MetricServiceClient is the client API for MetricService service.
For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
Creates a new metric descriptor.
User-created metric descriptors define
[custom metrics](https://cloud.google.com/monitoring/custom-metrics).
Creates or adds data to one or more time series.
The response is empty if all time series in the request were written.
If any time series could not be written, a corresponding failure message is
included in the error response.
Deletes a metric descriptor. Only user-created
[custom metrics](https://cloud.google.com/monitoring/custom-metrics) can be
deleted.
Gets a single metric descriptor. This method does not require a Workspace.
Gets a single monitored resource descriptor. This method does not require a Workspace.
Lists metric descriptors that match a filter. This method does not require a Workspace.
Lists monitored resource descriptors that match a filter. This method does not require a Workspace.
Lists time series that match a filter. This method does not require a Workspace.
*metricServiceClient
func NewMetricServiceClient(cc grpc.ClientConnInterface) MetricServiceClient
MetricServiceServer is the server API for MetricService service.
Creates a new metric descriptor.
User-created metric descriptors define
[custom metrics](https://cloud.google.com/monitoring/custom-metrics).
Creates or adds data to one or more time series.
The response is empty if all time series in the request were written.
If any time series could not be written, a corresponding failure message is
included in the error response.
Deletes a metric descriptor. Only user-created
[custom metrics](https://cloud.google.com/monitoring/custom-metrics) can be
deleted.
Gets a single metric descriptor. This method does not require a Workspace.
Gets a single monitored resource descriptor. This method does not require a Workspace.
Lists metric descriptors that match a filter. This method does not require a Workspace.
Lists monitored resource descriptors that match a filter. This method does not require a Workspace.
Lists time series that match a filter. This method does not require a Workspace.
*UnimplementedMetricServiceServer
func RegisterMetricServiceServer(s *grpc.Server, srv MetricServiceServer)
NotificationChannelServiceClient is the client API for NotificationChannelService service.
For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
Creates a new notification channel, representing a single notification
endpoint such as an email address, SMS number, or PagerDuty service.
Deletes a notification channel.
Gets a single notification channel. The channel includes the relevant
configuration details with which the channel was created. However, the
response may truncate or omit passwords, API keys, or other private key
matter and thus the response may not be 100% identical to the information
that was supplied in the call to the create method.
Gets a single channel descriptor. The descriptor indicates which fields
are expected / permitted for a notification channel of the given type.
Requests a verification code for an already verified channel that can then
be used in a call to VerifyNotificationChannel() on a different channel
with an equivalent identity in the same or in a different project. This
makes it possible to copy a channel between projects without requiring
manual reverification of the channel. If the channel is not in the
verified state, this method will fail (in other words, this may only be
used if the SendNotificationChannelVerificationCode and
VerifyNotificationChannel paths have already been used to put the given
channel into the verified state).
There is no guarantee that the verification codes returned by this method
will be of a similar structure or form as the ones that are delivered
to the channel via SendNotificationChannelVerificationCode; while
VerifyNotificationChannel() will recognize both the codes delivered via
SendNotificationChannelVerificationCode() and returned from
GetNotificationChannelVerificationCode(), it is typically the case that
the verification codes delivered via
SendNotificationChannelVerificationCode() will be shorter and also
have a shorter expiration (e.g. codes such as "G-123456"), whereas
GetNotificationChannelVerificationCode() will typically return a much
longer, websafe base64-encoded string that has a longer expiration time.
Lists the descriptors for supported channel types. The use of descriptors
makes it possible for new channel types to be dynamically added.
Lists the notification channels that have been created for the project.
Causes a verification code to be delivered to the channel. The code
can then be supplied in `VerifyNotificationChannel` to verify the channel.
Updates a notification channel. Fields not specified in the field mask
remain unchanged.
Verifies a `NotificationChannel` by proving receipt of the code
delivered to the channel as a result of calling
`SendNotificationChannelVerificationCode`.
*notificationChannelServiceClient
func NewNotificationChannelServiceClient(cc grpc.ClientConnInterface) NotificationChannelServiceClient
NotificationChannelServiceServer is the server API for NotificationChannelService service.
Creates a new notification channel, representing a single notification
endpoint such as an email address, SMS number, or PagerDuty service.
Deletes a notification channel.
Gets a single notification channel. The channel includes the relevant
configuration details with which the channel was created. However, the
response may truncate or omit passwords, API keys, or other private key
matter and thus the response may not be 100% identical to the information
that was supplied in the call to the create method.
Gets a single channel descriptor. The descriptor indicates which fields
are expected / permitted for a notification channel of the given type.
Requests a verification code for an already verified channel that can then
be used in a call to VerifyNotificationChannel() on a different channel
with an equivalent identity in the same or in a different project. This
makes it possible to copy a channel between projects without requiring
manual reverification of the channel. If the channel is not in the
verified state, this method will fail (in other words, this may only be
used if the SendNotificationChannelVerificationCode and
VerifyNotificationChannel paths have already been used to put the given
channel into the verified state).
There is no guarantee that the verification codes returned by this method
will be of a similar structure or form as the ones that are delivered
to the channel via SendNotificationChannelVerificationCode; while
VerifyNotificationChannel() will recognize both the codes delivered via
SendNotificationChannelVerificationCode() and returned from
GetNotificationChannelVerificationCode(), it is typically the case that
the verification codes delivered via
SendNotificationChannelVerificationCode() will be shorter and also
have a shorter expiration (e.g. codes such as "G-123456"), whereas
GetNotificationChannelVerificationCode() will typically return a much
longer, websafe base64-encoded string that has a longer expiration time.
Lists the descriptors for supported channel types. The use of descriptors
makes it possible for new channel types to be dynamically added.
Lists the notification channels that have been created for the project.
Causes a verification code to be delivered to the channel. The code
can then be supplied in `VerifyNotificationChannel` to verify the channel.
Updates a notification channel. Fields not specified in the field mask
remain unchanged.
Verifies a `NotificationChannel` by proving receipt of the code
delivered to the channel as a result of calling
`SendNotificationChannelVerificationCode`.
*UnimplementedNotificationChannelServiceServer
func RegisterNotificationChannelServiceServer(s *grpc.Server, srv NotificationChannelServiceServer)
The `QueryTimeSeries` request.
Required. The project on which to execute the request. The format is:
projects/[PROJECT_ID_OR_NUMBER]
A positive number that is the maximum number of time_series_data to return.
If this field is not empty then it must contain the `nextPageToken` value
returned by a previous call to this method. Using this field causes the
method to return additional results from the previous method call.
Required. The query in the Monitoring Query Language format. The default
time zone is UTC.
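The `page_token`/`nextPageToken` handshake described above is the standard list-pagination loop. A minimal stdlib sketch follows; the `page` type and `fetch` function are stand-ins for `QueryTimeSeriesResponse` and the `QueryTimeSeries` RPC, not names from this package:

```go
package main

import "fmt"

// page is a stand-in for a QueryTimeSeriesResponse: one batch of
// results plus the nextPageToken to pass on the following call.
type page struct {
	data      []string
	nextToken string
}

// fetch simulates QueryTimeSeries. In real use this would be a gRPC
// call repeating the same `query` and `page_size`, with `page_token`
// set from the previous response's nextPageToken.
func fetch(pageToken string) page {
	pages := map[string]page{
		"":   {data: []string{"a", "b"}, nextToken: "t1"},
		"t1": {data: []string{"c"}, nextToken: ""},
	}
	return pages[pageToken]
}

// collectAll drains every page: start with an empty token and keep
// calling until the returned nextPageToken comes back empty.
func collectAll() []string {
	var out []string
	token := ""
	for {
		p := fetch(token)
		out = append(out, p.data...)
		if p.nextToken == "" {
			return out
		}
		token = p.nextToken
	}
}

func main() {
	fmt.Println(collectAll())
}
```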
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use QueryTimeSeriesRequest.ProtoReflect.Descriptor instead.
(*T) GetName() string
(*T) GetPageSize() int32
(*T) GetPageToken() string
(*T) GetQuery() string
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
`distribution_cut` is used when `good_service` is a count of values
aggregated in a `Distribution` that fall into a good range. The
`total_service` is the total count of all values aggregated in the
`Distribution`.
(*T) isRequestBasedSli_Method()
*T : isRequestBasedSli_Method
Istio service scoped to a single Kubernetes cluster. Learn more at
http://istio.io.
Deprecated: Do not use.
The name of the Kubernetes cluster in which this Istio service is
defined. Corresponds to the `cluster_name` resource label in
`k8s_cluster` resources.
The location of the Kubernetes cluster in which this Istio service is
defined. Corresponds to the `location` resource label in `k8s_cluster`
resources.
The name of the Istio service underlying this service. Corresponds to the
`destination_service_name` metric label in Istio metrics.
The namespace of the Istio service underlying this service. Corresponds
to the `destination_service_namespace` metric label in Istio metrics.
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use Service_ClusterIstio.ProtoReflect.Descriptor instead.
(*T) GetClusterName() string
(*T) GetLocation() string
(*T) GetServiceName() string
(*T) GetServiceNamespace() string
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
func (*Service).GetClusterIstio() *Service_ClusterIstio
A Service-Level Indicator (SLI) describes the "performance" of a service. For
some services, the SLI is well-defined. In such cases, the SLI can be
described easily by referencing the well-known SLI and providing the needed
parameters. Alternatively, a "custom" SLI can be defined with a query to the
underlying metric store. An SLI is defined to be `good_service /
total_service` over any queried time interval. The value of performance
always falls into the range `0 <= performance <= 1`. A custom SLI describes
how to compute this ratio, whether this is by dividing values from a pair of
time series, cutting a `Distribution` into good and bad counts, or counting
time windows in which the service complies with a criterion. For separation
of concerns, a single Service-Level Indicator measures performance for only
one aspect of service quality, such as fraction of successful queries or
fast-enough queries.
Service-level indicators can be grouped by whether the "unit" of service
being measured is based on counts of good requests or on counts of good
time windows.
Types that are assignable to Type:
*ServiceLevelIndicator_BasicSli
*ServiceLevelIndicator_RequestBased
*ServiceLevelIndicator_WindowsBased
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use ServiceLevelIndicator.ProtoReflect.Descriptor instead.
(*T) GetBasicSli() *BasicSli
(*T) GetRequestBased() *RequestBasedSli
(*T) GetType() isServiceLevelIndicator_Type
(*T) GetWindowsBased() *WindowsBasedSli
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
func (*ServiceLevelObjective).GetServiceLevelIndicator() *ServiceLevelIndicator
ServiceMonitoringServiceClient is the client API for ServiceMonitoringService service.
For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
Create a `Service`.
Create a `ServiceLevelObjective` for the given `Service`.
Soft delete this `Service`.
Delete the given `ServiceLevelObjective`.
Get the named `Service`.
Get a `ServiceLevelObjective` by name.
List the `ServiceLevelObjective`s for the given `Service`.
List `Service`s for this workspace.
Update this `Service`.
Update the given `ServiceLevelObjective`.
*serviceMonitoringServiceClient
func NewServiceMonitoringServiceClient(cc grpc.ClientConnInterface) ServiceMonitoringServiceClient
ServiceMonitoringServiceServer is the server API for ServiceMonitoringService service.
Create a `Service`.
Create a `ServiceLevelObjective` for the given `Service`.
Soft delete this `Service`.
Delete the given `ServiceLevelObjective`.
Get the named `Service`.
Get a `ServiceLevelObjective` by name.
List the `ServiceLevelObjective`s for the given `Service`.
List `Service`s for this workspace.
Update this `Service`.
Update the given `ServiceLevelObjective`.
*UnimplementedServiceMonitoringServiceServer
func RegisterServiceMonitoringServiceServer(s *grpc.Server, srv ServiceMonitoringServiceServer)
The context of a span, attached to
[Exemplars][google.api.Distribution.Exemplars]
in [Distribution][google.api.Distribution] values during aggregation.
It contains the name of a span with format:
projects/[PROJECT_ID_OR_NUMBER]/traces/[TRACE_ID]/spans/[SPAN_ID]
The resource name of the span. The format is:
projects/[PROJECT_ID_OR_NUMBER]/traces/[TRACE_ID]/spans/[SPAN_ID]
`[TRACE_ID]` is a unique identifier for a trace within a project;
it is a 32-character hexadecimal encoding of a 16-byte array.
`[SPAN_ID]` is a unique identifier for a span within a trace; it
is a 16-character hexadecimal encoding of an 8-byte array.
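The span-name format above is fully regular, so it can be validated mechanically. A stdlib sketch, assuming lowercase hex as is conventional for trace and span IDs (`isValidSpanName` is an illustrative helper, not part of this package):

```go
package main

import (
	"fmt"
	"regexp"
)

// spanNameRE matches the documented format:
//   projects/[PROJECT_ID_OR_NUMBER]/traces/[TRACE_ID]/spans/[SPAN_ID]
// where TRACE_ID is 32 hex characters (16 bytes) and SPAN_ID is
// 16 hex characters (8 bytes).
var spanNameRE = regexp.MustCompile(
	`^projects/[^/]+/traces/[0-9a-f]{32}/spans/[0-9a-f]{16}$`)

func isValidSpanName(name string) bool {
	return spanNameRE.MatchString(name)
}

func main() {
	name := "projects/my-project/traces/" +
		"0123456789abcdef0123456789abcdef/spans/0123456789abcdef"
	fmt.Println(isValidSpanName(name))
}
```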
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use SpanContext.ProtoReflect.Descriptor instead.
(*T) GetSpanName() string
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
A locator for text. Indicates a particular part of the text of a request or
of an object referenced in the request.
For example, suppose the request field `text` contains:
text: "The quick brown fox jumps over the lazy dog."
Then the locator:
source: "text"
start_position {
  line: 1
  column: 17
}
end_position {
  line: 1
  column: 19
}
refers to the part of the text: "fox".
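The positions in the example are 1-based and inclusive on both ends. A minimal stdlib sketch of resolving such a single-line locator (`extract` is an illustrative helper, not part of this package):

```go
package main

import (
	"fmt"
	"strings"
)

// extract returns the text between 1-based (line, column) start and
// end positions, inclusive on both ends, as in the locator example.
func extract(text string, startLine, startCol, endLine, endCol int) string {
	lines := strings.Split(text, "\n")
	if startLine != endLine { // multi-line spans omitted for brevity
		return ""
	}
	// Convert 1-based inclusive columns to a Go slice expression:
	// startCol-1 is the first index, endCol is one past the last.
	return lines[startLine-1][startCol-1 : endCol]
}

func main() {
	text := "The quick brown fox jumps over the lazy dog."
	fmt.Println(extract(text, 1, 17, 1, 19)) // "fox"
}
```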
The position of the last byte within the text.
If `source`, `start_position`, and `end_position` describe a call on
some object (e.g. a macro in the time series query language text) and a
location is to be designated in that object's text, `nested_locator`
identifies the location within that object.
When `nested_locator` is set, this field gives the reason for the nesting.
Usually, the reason is a macro invocation. In that case, the macro name
(including the leading '@') signals the location of the macro call
in the text and a macro argument name (including the leading '$') signals
the location of the macro argument inside the macro body that got
substituted away.
The source of the text. The source may be a field in the request, in which
case its format is the format of the
google.rpc.BadRequest.FieldViolation.field field in
https://cloud.google.com/apis/design/errors#error_details. It may also
be a source other than the request field (e.g. a macro definition
referenced in the text of the query), in which case this is the name of
the source (e.g. the macro name).
The position of the first byte within the text.
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use TextLocator.ProtoReflect.Descriptor instead.
(*T) GetEndPosition() *TextLocator_Position
(*T) GetNestedLocator() *TextLocator
(*T) GetNestingReason() string
(*T) GetSource() string
(*T) GetStartPosition() *TextLocator_Position
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
func (*QueryError).GetLocator() *TextLocator
func (*TextLocator).GetNestedLocator() *TextLocator
A closed time interval. It extends from the start time to the end time, and
includes both: `[startTime, endTime]`. Valid time intervals depend on the
[`MetricKind`](https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.metricDescriptors#MetricKind)
of the metric value. In no case can the end time be earlier than the start
time.
* For a `GAUGE` metric, the `startTime` value is technically optional; if
no value is specified, the start time defaults to the value of the
end time, and the interval represents a single point in time. If both
start and end times are specified, they must be identical. Such an
interval is valid only for `GAUGE` metrics, which are point-in-time
measurements.
* For `DELTA` and `CUMULATIVE` metrics, the start time must be earlier
than the end time.
* In all cases, the start time of the next interval must be
at least a millisecond after the end time of the previous interval.
Because the interval is closed, if the start time of a new interval
is the same as the end time of the previous interval, data written
at the new start time could overwrite data written at the previous
end time.
Required. The end of the time interval.
Optional. The beginning of the time interval. The default value
for the start time is the end time. The start time must not be
later than the end time.
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use TimeInterval.ProtoReflect.Descriptor instead.
(*T) GetEndTime() *timestamppb.Timestamp
(*T) GetStartTime() *timestamppb.Timestamp
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
func (*ListGroupMembersRequest).GetInterval() *TimeInterval
func (*ListTimeSeriesRequest).GetInterval() *TimeInterval
func (*Point).GetInterval() *TimeInterval
func (*TimeSeriesData_PointData).GetTimeInterval() *TimeInterval
A collection of data points that describes the time-varying values
of a metric. A time series is identified by a combination of a
fully-specified monitored resource and a fully-specified metric.
This type is used for both listing and creating time series.
Output only. The associated monitored resource metadata. When reading a
time series, this field will include metadata labels that are explicitly
named in the reduction. When creating a time series, this field is ignored.
The associated metric. A fully-specified metric used to identify the time
series.
The metric kind of the time series. When listing time series, this metric
kind might be different from the metric kind of the associated metric if
this time series is an alignment or reduction of other time series.
When creating a time series, this field is optional. If present, it must be
the same as the metric kind of the associated metric. If the associated
metric's descriptor must be auto-created, then this field specifies the
metric kind of the new descriptor and must be either `GAUGE` (the default)
or `CUMULATIVE`.
The data points of this time series. When listing time series, points are
returned in reverse time order.
When creating a time series, this field must contain exactly one point and
the point's type must be the same as the value type of the associated
metric. If the associated metric's descriptor must be auto-created, then
the value type of the descriptor is determined by the point's type, which
must be `BOOL`, `INT64`, `DOUBLE`, or `DISTRIBUTION`.
The associated monitored resource. Custom metrics can use only certain
monitored resource types in their time series data.
The value type of the time series. When listing time series, this value
type might be different from the value type of the associated metric if
this time series is an alignment or reduction of other time series.
When creating a time series, this field is optional. If present, it must be
the same as the type of the data in the `points` field.
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use TimeSeries.ProtoReflect.Descriptor instead.
(*T) GetMetadata() *monitoredres.MonitoredResourceMetadata
(*T) GetMetric() *metric.Metric
(*T) GetMetricKind() metric.MetricDescriptor_MetricKind
(*T) GetPoints() []*Point
(*T) GetResource() *monitoredres.MonitoredResource
(*T) GetValueType() metric.MetricDescriptor_ValueType
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
func (*CreateTimeSeriesError).GetTimeSeries() *TimeSeries
func (*CreateTimeSeriesRequest).GetTimeSeries() []*TimeSeries
func (*ListTimeSeriesResponse).GetTimeSeries() []*TimeSeries
func cloud.google.com/go/monitoring/apiv3.(*TimeSeriesIterator).Next() (*TimeSeries, error)
A `TimeSeriesRatio` specifies two `TimeSeries` to use for computing the
`good_service / total_service` ratio. The specified `TimeSeries` must have
`ValueType = DOUBLE` or `ValueType = INT64` and must have `MetricKind =
DELTA` or `MetricKind = CUMULATIVE`. The `TimeSeriesRatio` must specify
exactly two of good, bad, and total, and the relationship `good_service +
bad_service = total_service` will be assumed.
A [monitoring filter](https://cloud.google.com/monitoring/api/v3/filters)
specifying a `TimeSeries` quantifying bad service, either demanded service
that was not provided or demanded service that was of inadequate quality.
Must have `ValueType = DOUBLE` or `ValueType = INT64` and must have
`MetricKind = DELTA` or `MetricKind = CUMULATIVE`.
A [monitoring filter](https://cloud.google.com/monitoring/api/v3/filters)
specifying a `TimeSeries` quantifying good service provided. Must have
`ValueType = DOUBLE` or `ValueType = INT64` and must have `MetricKind =
DELTA` or `MetricKind = CUMULATIVE`.
A [monitoring filter](https://cloud.google.com/monitoring/api/v3/filters)
specifying a `TimeSeries` quantifying total demanded service. Must have
`ValueType = DOUBLE` or `ValueType = INT64` and must have `MetricKind =
DELTA` or `MetricKind = CUMULATIVE`.
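Because exactly two of good, bad, and total are specified and `good_service + bad_service = total_service` is assumed, the third quantity and the ratio are always derivable. A stdlib sketch of that arithmetic (the `ratio` helper is illustrative, not part of this package):

```go
package main

import "fmt"

// ratio computes good_service / total_service given exactly two of
// the three quantities; a nil pointer marks the omitted one. The
// missing value is derived from good + bad = total.
func ratio(good, bad, total *float64) float64 {
	switch {
	case good == nil:
		return (*total - *bad) / *total
	case total == nil:
		return *good / (*good + *bad)
	default: // bad omitted
		return *good / *total
	}
}

func main() {
	good, bad := 95.0, 5.0
	fmt.Println(ratio(&good, &bad, nil)) // good + bad = 100 -> 0.95
}
```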
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use TimeSeriesRatio.ProtoReflect.Descriptor instead.
(*T) GetBadServiceFilter() string
(*T) GetGoodServiceFilter() string
(*T) GetTotalServiceFilter() string
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
func (*RequestBasedSli).GetGoodTotalRatio() *TimeSeriesRatio
The protocol for the `UpdateAlertPolicy` request.
Required. The updated alerting policy or the updated values for the
fields listed in `update_mask`.
If `update_mask` is not empty, any fields in this policy that are
not in `update_mask` are ignored.
Optional. A list of alerting policy field names. If this field is not
empty, each listed field in the existing alerting policy is set to the
value of the corresponding field in the supplied policy (`alert_policy`),
or to the field's default value if the field is not in the supplied
alerting policy. Fields not listed retain their previous value.
Examples of valid field masks include `display_name`, `documentation`,
`documentation.content`, `documentation.mime_type`, `user_labels`,
`user_label.nameofkey`, `enabled`, `conditions`, `combiner`, etc.
If this field is empty, then the supplied alerting policy replaces the
existing policy. It is the same as deleting the existing policy and
adding the supplied policy, except for the following:
+ The new policy will have the same `[ALERT_POLICY_ID]` as the former
policy. This gives you continuity with the former policy in your
notifications and incidents.
+ Conditions in the new policy will keep their former `[CONDITION_ID]` if
the supplied condition includes the `name` field with that
`[CONDITION_ID]`. If the supplied condition omits the `name` field,
then a new `[CONDITION_ID]` is created.
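The field-mask semantics above (listed fields copied from the supplied policy, or reset to their defaults if omitted there; an empty mask meaning full replacement) can be illustrated on a flat key/value map. A stdlib sketch under that simplification (`applyMask` is illustrative; real `FieldMask` paths can be nested, e.g. `documentation.content`):

```go
package main

import "fmt"

// applyMask mimics update_mask semantics on a flat map: each field
// named in the mask takes the update's value (the zero value "" if
// the update omits it); unlisted fields keep their previous value.
// An empty mask replaces the existing object entirely.
func applyMask(existing, update map[string]string, mask []string) map[string]string {
	if len(mask) == 0 {
		return update // full replacement, like delete-then-add
	}
	out := map[string]string{}
	for k, v := range existing {
		out[k] = v // fields not listed retain their previous value
	}
	for _, f := range mask {
		out[f] = update[f] // default ("") if not in the update
	}
	return out
}

func main() {
	existing := map[string]string{"display_name": "old", "combiner": "AND"}
	update := map[string]string{"display_name": "new"}
	fmt.Println(applyMask(existing, update, []string{"display_name"}))
}
```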
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use UpdateAlertPolicyRequest.ProtoReflect.Descriptor instead.
(*T) GetAlertPolicy() *AlertPolicy
(*T) GetUpdateMask() *fieldmaskpb.FieldMask
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
func AlertPolicyServiceClient.UpdateAlertPolicy(ctx context.Context, in *UpdateAlertPolicyRequest, opts ...grpc.CallOption) (*AlertPolicy, error)
func AlertPolicyServiceServer.UpdateAlertPolicy(context.Context, *UpdateAlertPolicyRequest) (*AlertPolicy, error)
func (*UnimplementedAlertPolicyServiceServer).UpdateAlertPolicy(context.Context, *UpdateAlertPolicyRequest) (*AlertPolicy, error)
func cloud.google.com/go/monitoring/apiv3.(*AlertPolicyClient).UpdateAlertPolicy(ctx context.Context, req *UpdateAlertPolicyRequest, opts ...gax.CallOption) (*AlertPolicy, error)
Information involved in an HTTP/HTTPS Uptime check request.
The authentication information. Optional when creating an HTTP check;
defaults to empty.
The request body associated with the HTTP request. If `content_type` is
`URL_ENCODED`, the body passed in must be URL-encoded. Users can provide
a `Content-Length` header via the `headers` field or the API will do
so. The maximum byte size is 1 megabyte. Note: as with all `bytes` fields,
JSON representations are base64 encoded.
The content type to use for the check.
The list of headers to send as part of the Uptime check request.
If two headers have the same key and different values, they should
be entered as a single header, with the value being a comma-separated
list of all the desired values as described at
https://www.w3.org/Protocols/rfc2616/rfc2616.txt (page 31).
Entering two separate headers with the same key in a Create call will
cause the first to be overwritten by the second.
The maximum number of headers allowed is 100.
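The "single header with comma-separated values" convention above is the RFC 2616 section 4.2 folding rule. A stdlib sketch of pre-merging duplicate keys before populating the `headers` map (`mergeHeaders` is an illustrative helper, not part of this package):

```go
package main

import (
	"fmt"
	"strings"
)

// mergeHeaders folds repeated values for the same key into one
// comma-separated header value, so the result is safe to use as the
// single-valued headers map described above.
func mergeHeaders(pairs [][2]string) map[string]string {
	out := map[string]string{}
	for _, p := range pairs {
		if existing, ok := out[p[0]]; ok {
			out[p[0]] = existing + "," + p[1]
		} else {
			out[p[0]] = p[1]
		}
	}
	return out
}

func main() {
	h := mergeHeaders([][2]string{
		{"Accept", "text/html"},
		{"Accept", "application/json"},
	})
	fmt.Println(strings.Contains(h["Accept"], ","))
}
```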
Boolean specifying whether to encrypt the header information.
Encryption should be specified for any headers related to authentication
that you do not wish to be seen when retrieving the configuration. The
server will be responsible for encrypting the headers.
On Get/List calls, if `mask_headers` is set to `true`, then the headers
will be obscured with `******`.
Optional (defaults to "/"). The path to the page against which to run
the check. Will be combined with the `host` (specified within the
`monitored_resource`) and `port` to construct the full URL. If the
provided path does not begin with "/", a "/" will be prepended
automatically.
Optional (defaults to 80 when `use_ssl` is `false`, and 443 when
`use_ssl` is `true`). The TCP port on the HTTP server against which to
run the check. Will be combined with host (specified within the
`monitored_resource`) and `path` to construct the full URL.
The HTTP request method to use for the check.
If `true`, use HTTPS instead of HTTP to run the check.
Boolean specifying whether to include SSL certificate validation as a
part of the Uptime check. Only applies to checks where
`monitored_resource` is set to `uptime_url`. If `use_ssl` is `false`,
setting `validate_ssl` to `true` has no effect.
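The full check URL is assembled from `use_ssl`, the host in `monitored_resource`, `port`, and `path` per the default rules stated above (port 80/443 by scheme, `/` prepended to the path). A stdlib sketch of that assembly (`checkURL` is an illustrative helper, not part of this package):

```go
package main

import (
	"fmt"
	"strings"
)

// checkURL builds the URL the way the HttpCheck doc describes:
// scheme chosen by use_ssl, port defaulting to 443/80 by scheme
// when unset (0 here), path defaulting to "/" and getting a "/"
// prepended if missing.
func checkURL(host string, port int, path string, useSSL bool) string {
	scheme := "http"
	if useSSL {
		scheme = "https"
	}
	if port == 0 {
		if useSSL {
			port = 443
		} else {
			port = 80
		}
	}
	if path == "" {
		path = "/"
	}
	if !strings.HasPrefix(path, "/") {
		path = "/" + path
	}
	return fmt.Sprintf("%s://%s:%d%s", scheme, host, port, path)
}

func main() {
	fmt.Println(checkURL("example.com", 0, "status", true))
}
```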
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use UptimeCheckConfig_HttpCheck.ProtoReflect.Descriptor instead.
(*T) GetAuthInfo() *UptimeCheckConfig_HttpCheck_BasicAuthentication
(*T) GetBody() []byte
(*T) GetContentType() UptimeCheckConfig_HttpCheck_ContentType
(*T) GetHeaders() map[string]string
(*T) GetMaskHeaders() bool
(*T) GetPath() string
(*T) GetPort() int32
(*T) GetRequestMethod() UptimeCheckConfig_HttpCheck_RequestMethod
(*T) GetUseSsl() bool
(*T) GetValidateSsl() bool
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
func (*UptimeCheckConfig).GetHttpCheck() *UptimeCheckConfig_HttpCheck
The [monitored
resource](https://cloud.google.com/monitoring/api/resources) associated
with the configuration.
The following monitored resource types are supported for Uptime checks:
`uptime_url`,
`gce_instance`,
`gae_app`,
`aws_ec2_instance`,
`aws_elb_load_balancer`
(*T) isUptimeCheckConfig_Resource()
*T : isUptimeCheckConfig_Resource
Contains the region, location, and list of IP
addresses where checkers in the location run from.
The IP address from which the Uptime check originates. This is a fully
specified IP address (not an IP address range). Most IP addresses, as of
this publication, are in IPv4 format; however, one should not rely on the
IP addresses being in IPv4 format indefinitely, and should support
interpreting this field in either IPv4 or IPv6 format.
A more specific location within the region that typically encodes
a particular city/town/metro (and its containing state/province or country)
within the broader umbrella region category.
A broad region category in which the IP address is located.
sizeCache protoimpl.SizeCache
state protoimpl.MessageState
unknownFields protoimpl.UnknownFields
Deprecated: Use UptimeCheckIp.ProtoReflect.Descriptor instead.
(*T) GetIpAddress() string
(*T) GetLocation() string
(*T) GetRegion() UptimeCheckRegion
(*T) ProtoMessage()
(*T) ProtoReflect() protoreflect.Message
(*T) Reset()
(*T) String() string
*T : google.golang.org/protobuf/reflect/protoreflect.ProtoMessage
*T : google.golang.org/protobuf/runtime/protoiface.MessageV1
*T : expvar.Var
*T : fmt.Stringer
*T : google.golang.org/protobuf/internal/impl.messageV1
*T : context.stringer
*T : runtime.stringer
func (*ListUptimeCheckIpsResponse).GetUptimeCheckIps() []*UptimeCheckIp
func cloud.google.com/go/monitoring/apiv3.(*UptimeCheckIpIterator).Next() (*UptimeCheckIp, error)
UptimeCheckServiceClient is the client API for UptimeCheckService service.
For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
Creates a new Uptime check configuration.
Deletes an Uptime check configuration. Note that this method will fail
if the Uptime check configuration is referenced by an alert policy or
other dependent configs that would be rendered invalid by the deletion.
Gets a single Uptime check configuration.
Lists the existing valid Uptime check configurations for the project
(leaving out any invalid configurations).
Returns the list of IP addresses that checkers run from.
Updates an Uptime check configuration. You can either replace the entire
configuration with a new one or replace only certain fields in the current
configuration by specifying the fields to be updated via `updateMask`.
Returns the updated configuration.
*uptimeCheckServiceClient
func NewUptimeCheckServiceClient(cc grpc.ClientConnInterface) UptimeCheckServiceClient
UptimeCheckServiceServer is the server API for UptimeCheckService service.
Creates a new Uptime check configuration.
Deletes an Uptime check configuration. Note that this method will fail
if the Uptime check configuration is referenced by an alert policy or
other dependent configs that would be rendered invalid by the deletion.
Gets a single Uptime check configuration.
Lists the existing valid Uptime check configurations for the project
(leaving out any invalid configurations).
Returns the list of IP addresses that checkers run from.
Updates an Uptime check configuration. You can either replace the entire
configuration with a new one or replace only certain fields in the current
configuration by specifying the fields to be updated via `updateMask`.
Returns the updated configuration.
*UnimplementedUptimeCheckServiceServer
func RegisterUptimeCheckServiceServer(s *grpc.Server, srv UptimeCheckServiceServer)
A [monitoring filter](https://cloud.google.com/monitoring/api/v3/filters)
specifying a `TimeSeries` with `ValueType = BOOL`. The window is good if
any `true` values appear in the window.
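The boolean window criterion reduces to "any `true` sample makes the window good". A stdlib sketch over one window's samples (`goodWindow` is an illustrative helper, not part of this package):

```go
package main

import "fmt"

// goodWindow implements the criterion above: a window counts as good
// if any true value appears among its samples.
func goodWindow(samples []bool) bool {
	for _, s := range samples {
		if s {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(goodWindow([]bool{false, true, false}))
}
```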
(*T) isWindowsBasedSli_WindowCriterion()
*T : isWindowsBasedSli_WindowCriterion
Package-Level Constants (total 76, all are exported)
Align the time series by returning the number of values in each alignment
period. This aligner is valid for `GAUGE` and `DELTA` metrics with
numeric or Boolean values. The `value_type` of the aligned result is
`INT64`.
Align the time series by returning the number of `False` values in
each alignment period. This aligner is valid for `GAUGE` metrics with
Boolean values. The `value_type` of the output is `INT64`.
Align the time series by returning the number of `True` values in
each alignment period. This aligner is valid for `GAUGE` metrics with
Boolean values. The `value_type` of the output is `INT64`.
Align and convert to
[DELTA][google.api.MetricDescriptor.MetricKind.DELTA].
The output is `delta = y1 - y0`.
This alignment is valid for
[CUMULATIVE][google.api.MetricDescriptor.MetricKind.CUMULATIVE] and
`DELTA` metrics. If the selected alignment period results in periods
with no data, then the aligned value for such a period is created by
interpolation. The `value_type` of the aligned result is the same as
the `value_type` of the input.
Align the time series by returning the ratio of the number of `True`
values to the total number of values in each alignment period. This
aligner is valid for `GAUGE` metrics with Boolean values. The output
value is in the range [0.0, 1.0] and has `value_type` `DOUBLE`.
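The `ALIGN_FRACTION_TRUE` computation can be sketched as follows, assuming the Boolean points of one alignment period have already been collected into a slice (a minimal illustration, not the service's implementation):

```go
package main

import "fmt"

// fractionTrue returns the ratio of `true` values to the total number
// of values in one alignment period, a result in [0.0, 1.0].
func fractionTrue(points []bool) float64 {
	if len(points) == 0 {
		return 0 // hypothetical choice; an empty period produces no aligned point
	}
	trues := 0
	for _, p := range points {
		if p {
			trues++
		}
	}
	return float64(trues) / float64(len(points))
}

func main() {
	// Three of four points are true, so the aligned value is 0.75.
	fmt.Println(fractionTrue([]bool{true, false, true, true})) // 0.75
}
```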
Align by interpolating between adjacent points around the alignment
period boundary. This aligner is valid for `GAUGE` metrics with
numeric values. The `value_type` of the aligned result is the same as the
`value_type` of the input.
Align the time series by returning the maximum value in each alignment
period. This aligner is valid for `GAUGE` and `DELTA` metrics with
numeric values. The `value_type` of the aligned result is the same as
the `value_type` of the input.
Align the time series by returning the mean value in each alignment
period. This aligner is valid for `GAUGE` and `DELTA` metrics with
numeric values. The `value_type` of the aligned result is `DOUBLE`.
Align the time series by returning the minimum value in each alignment
period. This aligner is valid for `GAUGE` and `DELTA` metrics with
numeric values. The `value_type` of the aligned result is the same as
the `value_type` of the input.
Align by moving the most recent data point before the end of the
alignment period to the boundary at the end of the alignment
period. This aligner is valid for `GAUGE` metrics. The `value_type` of
the aligned result is the same as the `value_type` of the input.
No alignment. Raw data is returned. Not valid if cross-series reduction
is requested. The `value_type` of the result is the same as the
`value_type` of the input.
Align and convert to a percentage change. This aligner is valid for
`GAUGE` and `DELTA` metrics with numeric values. This alignment returns
`((current - previous)/previous) * 100`, where the value of `previous` is
determined based on the `alignment_period`.
If the values of `current` and `previous` are both 0, then the returned
value is 0. If only `previous` is 0, the returned value is infinity.
A 10-minute moving mean is computed at each point of the alignment period
prior to the above calculation to smooth the metric and prevent false
positives from very short-lived spikes. The moving mean is only
applicable for data whose values are `>= 0`. Any values `< 0` are
treated as a missing datapoint, and are ignored. While `DELTA`
metrics are accepted by this alignment, special care should be taken that
the values for the metric will always be positive. The output is a
`GAUGE` metric with `value_type` `DOUBLE`.
Align the time series by using [percentile
aggregation](https://en.wikipedia.org/wiki/Percentile). The resulting
data point in each alignment period is the 5th percentile of all data
points in the period. This aligner is valid for `GAUGE` and `DELTA`
metrics with distribution values. The output is a `GAUGE` metric with
`value_type` `DOUBLE`.
Align the time series by using [percentile
aggregation](https://en.wikipedia.org/wiki/Percentile). The resulting
data point in each alignment period is the 50th percentile of all data
points in the period. This aligner is valid for `GAUGE` and `DELTA`
metrics with distribution values. The output is a `GAUGE` metric with
`value_type` `DOUBLE`.
Align the time series by using [percentile
aggregation](https://en.wikipedia.org/wiki/Percentile). The resulting
data point in each alignment period is the 95th percentile of all data
points in the period. This aligner is valid for `GAUGE` and `DELTA`
metrics with distribution values. The output is a `GAUGE` metric with
`value_type` `DOUBLE`.
Align the time series by using [percentile
aggregation](https://en.wikipedia.org/wiki/Percentile). The resulting
data point in each alignment period is the 99th percentile of all data
points in the period. This aligner is valid for `GAUGE` and `DELTA`
metrics with distribution values. The output is a `GAUGE` metric with
`value_type` `DOUBLE`.
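The four percentile aligners above can be sketched with a simple nearest-rank selection over the points of one alignment period (the service aggregates distribution values, so its exact interpolation may differ):

```go
package main

import (
	"fmt"
	"sort"
)

// percentile returns the p-th percentile of the points in one
// alignment period using nearest-rank selection on a sorted copy.
func percentile(points []float64, p float64) float64 {
	sorted := append([]float64(nil), points...)
	sort.Float64s(sorted)
	idx := int(float64(len(sorted)-1) * p / 100)
	return sorted[idx]
}

func main() {
	latencies := []float64{12, 15, 20, 22, 30, 41, 50, 63, 71, 80, 95}
	fmt.Println(percentile(latencies, 50)) // 41 (the median)
	fmt.Println(percentile(latencies, 95)) // 80
}
```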
Align and convert to a rate. The result is computed as
`rate = (y1 - y0)/(t1 - t0)`, or "delta over time".
Think of this aligner as providing the slope of the line that passes
through the value at the start and at the end of the `alignment_period`.
This aligner is valid for `CUMULATIVE`
and `DELTA` metrics with numeric values. If the selected alignment
period results in periods with no data, then the aligned value for
such a period is created by interpolation. The output is a `GAUGE`
metric with `value_type` `DOUBLE`.
If, by "rate", you mean "percentage change", see the
`ALIGN_PERCENT_CHANGE` aligner instead.
Align the time series by returning the standard deviation of the values
in each alignment period. This aligner is valid for `GAUGE` and
`DELTA` metrics with numeric values. The `value_type` of the output is
`DOUBLE`.
Align the time series by returning the sum of the values in each
alignment period. This aligner is valid for `GAUGE` and `DELTA`
metrics with numeric and distribution values. The `value_type` of the
aligned result is the same as the `value_type` of the input.
Reduce by computing the number of data points across time series
for each alignment period. This reducer is valid for `DELTA` and
`GAUGE` metrics of numeric, Boolean, distribution, and string
`value_type`. The `value_type` of the output is `INT64`.
Reduce by computing the number of `False`-valued data points across time
series for each alignment period. This reducer is valid for `DELTA` and
`GAUGE` metrics of Boolean `value_type`. The `value_type` of the output
is `INT64`.
Reduce by computing the number of `True`-valued data points across time
series for each alignment period. This reducer is valid for `DELTA` and
`GAUGE` metrics of Boolean `value_type`. The `value_type` of the output
is `INT64`.
Reduce by computing the ratio of the number of `True`-valued data points
to the total number of data points for each alignment period. This
reducer is valid for `DELTA` and `GAUGE` metrics of Boolean `value_type`.
The output value is in the range [0.0, 1.0] and has `value_type`
`DOUBLE`.
Reduce by computing the maximum value across time series for each
alignment period. This reducer is valid for `DELTA` and `GAUGE` metrics
with numeric values. The `value_type` of the output is the same as the
`value_type` of the input.
Reduce by computing the mean value across time series for each
alignment period. This reducer is valid for
[DELTA][google.api.MetricDescriptor.MetricKind.DELTA] and
[GAUGE][google.api.MetricDescriptor.MetricKind.GAUGE] metrics with
numeric or distribution values. The `value_type` of the output is
[DOUBLE][google.api.MetricDescriptor.ValueType.DOUBLE].
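Cross-series reduction with `REDUCE_MEAN` can be sketched as follows, assuming each input series has already been aligned to the same periods (so all slices have equal length):

```go
package main

import "fmt"

// reduceMean collapses N aligned time series into one by averaging,
// per alignment period, the values across series.
func reduceMean(alignedSeries [][]float64) []float64 {
	if len(alignedSeries) == 0 {
		return nil
	}
	periods := len(alignedSeries[0])
	out := make([]float64, periods)
	for i := 0; i < periods; i++ {
		sum := 0.0
		for _, series := range alignedSeries {
			sum += series[i]
		}
		out[i] = sum / float64(len(alignedSeries))
	}
	return out
}

func main() {
	// Three aligned series, two alignment periods each.
	series := [][]float64{
		{10, 20},
		{30, 40},
		{20, 30},
	}
	fmt.Println(reduceMean(series)) // [20 30]
}
```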
Reduce by computing the minimum value across time series for each
alignment period. This reducer is valid for `DELTA` and `GAUGE` metrics
with numeric values. The `value_type` of the output is the same as the
`value_type` of the input.
No cross-time series reduction. The output of the `Aligner` is
returned.
Reduce by computing the [5th
percentile](https://en.wikipedia.org/wiki/Percentile) of data points
across time series for each alignment period. This reducer is valid for
`GAUGE` and `DELTA` metrics of numeric and distribution type. The
`value_type` of the output is `DOUBLE`.
Reduce by computing the [50th
percentile](https://en.wikipedia.org/wiki/Percentile) of data points
across time series for each alignment period. This reducer is valid for
`GAUGE` and `DELTA` metrics of numeric and distribution type. The
`value_type` of the output is `DOUBLE`.
Reduce by computing the [95th
percentile](https://en.wikipedia.org/wiki/Percentile) of data points
across time series for each alignment period. This reducer is valid for
`GAUGE` and `DELTA` metrics of numeric and distribution type. The
`value_type` of the output is `DOUBLE`.
Reduce by computing the [99th
percentile](https://en.wikipedia.org/wiki/Percentile) of data points
across time series for each alignment period. This reducer is valid for
`GAUGE` and `DELTA` metrics of numeric and distribution type. The
`value_type` of the output is `DOUBLE`.
Reduce by computing the standard deviation across time series
for each alignment period. This reducer is valid for `DELTA` and
`GAUGE` metrics with numeric or distribution values. The `value_type`
of the output is `DOUBLE`.
Reduce by computing the sum across time series for each
alignment period. This reducer is valid for `DELTA` and `GAUGE` metrics
with numeric and distribution values. The `value_type` of the output is
the same as the `value_type` of the input.
Combine conditions using the logical `AND` operator. An
incident is created only if all the conditions are met
simultaneously. This combiner is satisfied if all conditions are
met, even if they are met on completely different resources.
Combine conditions using the logical `AND` operator, but unlike the regular
`AND` option, an incident is created only if all conditions are met
simultaneously on at least one resource.
An unspecified combiner.
Combine conditions using the logical `OR` operator. An incident
is created if any of the listed conditions is met.
True if the left argument is equal to the right argument.
True if the left argument is greater than or equal to the right argument.
True if the left argument is greater than the right argument.
True if the left argument is less than or equal to the right argument.
True if the left argument is less than the right argument.
True if the left argument is not equal to the right argument.
No ordering relationship is specified.
A group of Amazon ELB load balancers.
A group of instances from Google Cloud Platform (GCP) or
Amazon Web Services (AWS).
Default value (not valid).
The checker is being created, provisioned, and configured. A checker in
this state can be returned by `ListInternalCheckers` or
`GetInternalChecker`, as well as by examining the [long running
Operation](https://cloud.google.com/apis/design/design_patterns#long_running_operations)
that created it.
The checker is running and available for use. A checker in this state
can be returned by `ListInternalCheckers` or `GetInternalChecker` as
well as by examining the [long running
Operation](https://cloud.google.com/apis/design/design_patterns#long_running_operations)
that created it.
If a checker is being torn down, it is neither visible nor usable, so
there is no "deleting" or "down" state.
An internal checker should never be in the unspecified state.
Returns the identity of the metric(s), the time series,
and the time series data.
Returns the identity of the metric and the time series resource,
but not the time series data.
The channel has yet to be verified and requires verification to function.
Note that this state also applies to the case where the verification
process has been initiated by sending a verification code but where
the verification code has not been submitted to complete the process.
Sentinel value used to indicate that the state is unknown, omitted, or
is not applicable (as in the case of channels that neither support
nor require verification in order to function).
It has been proven that notifications can be received on this
notification channel and that someone on the project has access
to messages that are delivered to that channel.
For `ServiceLevelIndicator`s using `BasicSli` articulation, instead
return the `ServiceLevelIndicator` with its mode of computation fully
spelled out as a `RequestBasedSli`. For `ServiceLevelIndicator`s using
`RequestBasedSli` or `WindowsBasedSli`, return the
`ServiceLevelIndicator` as it was provided.
Return the embedded `ServiceLevelIndicator` in the form in which it was
defined. If it was defined using a `BasicSli`, return that `BasicSli`.
Same as FULL.
The Stackdriver Basic tier, a free tier of service that provides basic
features, a moderate allotment of logs, and access to built-in metrics.
A number of features are not available in this tier. For more details,
see [the service tiers
documentation](https://cloud.google.com/monitoring/workspaces/tiers).
The Stackdriver Premium tier, a higher, more expensive tier of service
that provides access to all Stackdriver features, lets you use Stackdriver
with AWS accounts, and has larger allotments for logs and metrics. For
more details, see [the service tiers
documentation](https://cloud.google.com/monitoring/workspaces/tiers).
An invalid sentinel value, used to indicate that a tier has not
been provided explicitly.
Selects substring matching. The match succeeds if the output contains
the `content` string. This is the default value for checks without
a `matcher` option, or where the value of `matcher` is
`CONTENT_MATCHER_OPTION_UNSPECIFIED`.
No content matcher type specified (maintained for backward
compatibility, but deprecated for future use).
Treated as `CONTAINS_STRING`.
Selects regular-expression matching. The match succeeds if the output
matches the regular expression specified in the `content` string.
Selects negation of substring matching. The match succeeds if the
output does _NOT_ contain the `content` string.
Selects negation of regular-expression matching. The match succeeds if
the output does _NOT_ match the regular expression specified in the
`content` string.
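The four content matcher options above can be sketched against a check's response body (assumed semantics for illustration; this is not the checker's own code):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// matches evaluates one of the documented content matcher options
// against a check's output, using Go's strings and regexp packages.
func matches(option, content, output string) bool {
	switch option {
	case "CONTAINS_STRING":
		return strings.Contains(output, content)
	case "NOT_CONTAINS_STRING":
		return !strings.Contains(output, content)
	case "MATCHES_REGEX":
		return regexp.MustCompile(content).MatchString(output)
	case "NOT_MATCHES_REGEX":
		return !regexp.MustCompile(content).MatchString(output)
	}
	return false
}

func main() {
	body := `{"status":"ok","uptime":3600}`
	fmt.Println(matches("CONTAINS_STRING", `"status":"ok"`, body)) // true
	fmt.Println(matches("MATCHES_REGEX", `"uptime":\d+`, body))    // true
}
```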
GET request.
No request method specified.
POST request.
No content type specified. If the request method is POST, an
unspecified content type results in a check creation rejection.
`body` is in URL-encoded form. Equivalent to setting the `Content-Type`
to `application/x-www-form-urlencoded` in the HTTP request.
Allows checks to run from locations within the Asia Pacific area (for
example, Singapore).
Allows checks to run from locations within the continent of Europe.
Default value if no region is specified. Will result in Uptime checks
running from all regions.
Allows checks to run from locations within the continent of South
America.
Allows checks to run from locations within the United States of America.