Updates README, docs/walkthrough and deploy/

Signed-off-by: JoaoBraveCoding <jmarcal@redhat.com>
Joao Marcal 2022-09-08 11:13:57 +01:00, committed by JoaoBraveCoding
parent 3afe2c74bc
commit 372dfc9d3a
GPG key ID: 7F3A705256E2C828 (no known key found for this signature in database)
17 changed files with 114 additions and 97 deletions


@@ -1,9 +1,9 @@
 # Prometheus Adapter for Kubernetes Metrics APIs
 
 This repository contains an implementation of the Kubernetes
-[resource metrics](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md),
+[resource metrics](https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/resource-metrics-api.md),
-[custom metrics](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md), and
+[custom metrics](https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/custom-metrics-api.md), and
-[external metrics](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/external-metrics-api.md) APIs.
+[external metrics](https://github.com/kubernetes/design-proposals-archive/blob/main/instrumentation/external-metrics-api.md) APIs.
 
 This adapter is therefore suitable for use with the autoscaling/v2 Horizontal Pod Autoscaler in Kubernetes 1.6+.
 It can also replace the [metrics server](https://github.com/kubernetes-incubator/metrics-server) on clusters that already run Prometheus and collect the appropriate metrics.
@@ -51,7 +51,7 @@ will attempt to using [Kubernetes in-cluster
 config](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod)
 to connect to the cluster.
-It takes the following addition arguments specific to configuring how the
+It takes the following additional arguments specific to configuring how the
 adapter talks to Prometheus and the main Kubernetes cluster:
 
 - `--lister-kubeconfig=<path-to-kubeconfig>`: This configures


@@ -1,20 +1,11 @@
 Example Deployment
 ==================
 
-1. Make sure you've built the included Dockerfile with `TAG=latest make container`. The image should be tagged as `gcr.io/k8s-staging-prometheus-adapter:latest`.
+1. Make sure you've built the included Dockerfile with `TAG=latest make container`. The image should be tagged as `registry.k8s.io/prometheus-adapter/staging-prometheus-adapter:latest`.
-2. Create a secret called `cm-adapter-serving-certs` with two values:
-   `serving.crt` and `serving.key`. These are the serving certificates used
-   by the adapter for serving HTTPS traffic. For more information on how to
-   generate these certificates, see the [auth concepts
-   documentation](https://github.com/kubernetes-incubator/apiserver-builder/blob/master/docs/concepts/auth.md)
-   in the apiserver-builder repository.
-   The kube-prometheus project published two scripts [gencerts.sh](https://github.com/prometheus-operator/kube-prometheus/blob/62fff622e9900fade8aecbd02bc9c557b736ef85/experimental/custom-metrics-api/gencerts.sh)
-   and [deploy.sh](https://github.com/prometheus-operator/kube-prometheus/blob/62fff622e9900fade8aecbd02bc9c557b736ef85/experimental/custom-metrics-api/deploy.sh) to create the `cm-adapter-serving-certs` secret.
-3. `kubectl create namespace custom-metrics` to ensure that the namespace that we're installing
+2. `kubectl create namespace monitoring` to ensure that the namespace that we're installing
    the custom metrics adapter in exists.
-4. `kubectl create -f manifests/`, modifying the Deployment as necessary to
+3. `kubectl create -f manifests/`, modifying the Deployment as necessary to
    point to your Prometheus server, and the ConfigMap to contain your desired
    metrics discovery configuration.


@@ -4,7 +4,7 @@ metadata:
   labels:
     app.kubernetes.io/component: metrics-adapter
     app.kubernetes.io/name: prometheus-adapter
-    app.kubernetes.io/version: 0.9.1
+    app.kubernetes.io/version: 0.10.0
   name: v1beta1.metrics.k8s.io
 spec:
   group: metrics.k8s.io


@@ -4,7 +4,7 @@ metadata:
   labels:
     app.kubernetes.io/component: metrics-adapter
     app.kubernetes.io/name: prometheus-adapter
-    app.kubernetes.io/version: 0.9.1
+    app.kubernetes.io/version: 0.10.0
     rbac.authorization.k8s.io/aggregate-to-admin: "true"
     rbac.authorization.k8s.io/aggregate-to-edit: "true"
     rbac.authorization.k8s.io/aggregate-to-view: "true"


@@ -4,7 +4,7 @@ metadata:
   labels:
     app.kubernetes.io/component: metrics-adapter
     app.kubernetes.io/name: prometheus-adapter
-    app.kubernetes.io/version: 0.9.1
+    app.kubernetes.io/version: 0.10.0
   name: resource-metrics:system:auth-delegator
   namespace: monitoring
 roleRef:


@@ -4,7 +4,7 @@ metadata:
   labels:
     app.kubernetes.io/component: metrics-adapter
     app.kubernetes.io/name: prometheus-adapter
-    app.kubernetes.io/version: 0.9.1
+    app.kubernetes.io/version: 0.10.0
   name: prometheus-adapter
   namespace: monitoring
 roleRef:


@@ -4,7 +4,7 @@ metadata:
   labels:
     app.kubernetes.io/component: metrics-adapter
     app.kubernetes.io/name: prometheus-adapter
-    app.kubernetes.io/version: 0.9.1
+    app.kubernetes.io/version: 0.10.0
   name: resource-metrics-server-resources
 rules:
 - apiGroups:


@@ -4,7 +4,7 @@ metadata:
   labels:
     app.kubernetes.io/component: metrics-adapter
     app.kubernetes.io/name: prometheus-adapter
-    app.kubernetes.io/version: 0.9.1
+    app.kubernetes.io/version: 0.10.0
   name: prometheus-adapter
 rules:
 - apiGroups:


@@ -12,16 +12,8 @@ data:
       )
     "nodeQuery": |
       sum by (<<.GroupBy>>) (
-        1 - irate(
-          node_cpu_seconds_total{mode="idle"}[4m]
-        )
-        * on(namespace, pod) group_left(node) (
-          node_namespace_pod:kube_pod_info:{<<.LabelMatchers>>}
-        )
-      )
-      or sum by (<<.GroupBy>>) (
-        1 - irate(
-          windows_cpu_time_total{mode="idle", job="windows-exporter",<<.LabelMatchers>>}[4m]
-        )
+        irate(
+          container_cpu_usage_seconds_total{<<.LabelMatchers>>,id='/'}[4m]
+        )
       )
     "resources":
@@ -40,18 +32,11 @@ data:
       )
     "nodeQuery": |
       sum by (<<.GroupBy>>) (
-        node_memory_MemTotal_bytes{job="node-exporter",<<.LabelMatchers>>}
-        -
-        node_memory_MemAvailable_bytes{job="node-exporter",<<.LabelMatchers>>}
-      )
-      or sum by (<<.GroupBy>>) (
-        windows_cs_physical_memory_bytes{job="windows-exporter",<<.LabelMatchers>>}
-        -
-        windows_memory_available_bytes{job="windows-exporter",<<.LabelMatchers>>}
+        container_memory_working_set_bytes{<<.LabelMatchers>>,id='/'}
       )
     "resources":
       "overrides":
-        "instance":
+        "node":
           "resource": "node"
         "namespace":
           "resource": "namespace"
@@ -63,6 +48,6 @@ metadata:
   labels:
     app.kubernetes.io/component: metrics-adapter
     app.kubernetes.io/name: prometheus-adapter
-    app.kubernetes.io/version: 0.9.1
+    app.kubernetes.io/version: 0.10.0
   name: adapter-config
   namespace: monitoring


@@ -4,7 +4,7 @@ metadata:
   labels:
     app.kubernetes.io/component: metrics-adapter
     app.kubernetes.io/name: prometheus-adapter
-    app.kubernetes.io/version: 0.9.1
+    app.kubernetes.io/version: 0.10.0
   name: prometheus-adapter
   namespace: monitoring
 spec:
@@ -22,7 +22,7 @@ spec:
       labels:
         app.kubernetes.io/component: metrics-adapter
         app.kubernetes.io/name: prometheus-adapter
-        app.kubernetes.io/version: 0.9.1
+        app.kubernetes.io/version: 0.10.0
     spec:
       automountServiceAccountToken: true
       containers:
@@ -31,7 +31,7 @@ spec:
         - --config=/etc/adapter/config.yaml
         - --logtostderr=true
        - --metrics-relist-interval=1m
-        - --prometheus-url=https://setup-monit-prometheus.monitoring.svc:9090/
+        - --prometheus-url=https://prometheus.monitoring.svc:9090/
         - --secure-port=6443
         - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
         image: registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.10.0


@@ -4,7 +4,7 @@ metadata:
   labels:
     app.kubernetes.io/component: metrics-adapter
     app.kubernetes.io/name: prometheus-adapter
-    app.kubernetes.io/version: 0.9.1
+    app.kubernetes.io/version: 0.10.0
   name: prometheus-adapter
   namespace: monitoring
 spec:


@@ -4,7 +4,7 @@ metadata:
   labels:
     app.kubernetes.io/component: metrics-adapter
     app.kubernetes.io/name: prometheus-adapter
-    app.kubernetes.io/version: 0.9.1
+    app.kubernetes.io/version: 0.10.0
   name: prometheus-adapter
   namespace: monitoring
 spec:


@@ -4,7 +4,7 @@ metadata:
   labels:
     app.kubernetes.io/component: metrics-adapter
     app.kubernetes.io/name: prometheus-adapter
-    app.kubernetes.io/version: 0.9.1
+    app.kubernetes.io/version: 0.10.0
   name: resource-metrics-auth-reader
   namespace: kube-system
 roleRef:


@@ -5,6 +5,6 @@ metadata:
   labels:
     app.kubernetes.io/component: metrics-adapter
     app.kubernetes.io/name: prometheus-adapter
-    app.kubernetes.io/version: 0.9.1
+    app.kubernetes.io/version: 0.10.0
   name: prometheus-adapter
   namespace: monitoring


@@ -1,26 +0,0 @@
-apiVersion: monitoring.coreos.com/v1
-kind: ServiceMonitor
-metadata:
-  labels:
-    app.kubernetes.io/component: metrics-adapter
-    app.kubernetes.io/name: prometheus-adapter
-    app.kubernetes.io/version: 0.9.1
-  name: prometheus-adapter
-  namespace: monitoring
-spec:
-  endpoints:
-  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
-    interval: 30s
-    metricRelabelings:
-    - action: drop
-      regex: (apiserver_client_certificate_.*|apiserver_envelope_.*|apiserver_flowcontrol_.*|apiserver_storage_.*|apiserver_webhooks_.*|workqueue_.*)
-      sourceLabels:
-      - __name__
-    port: https
-    scheme: https
-    tlsConfig:
-      insecureSkipVerify: true
-  selector:
-    matchLabels:
-      app.kubernetes.io/component: metrics-adapter
-      app.kubernetes.io/name: prometheus-adapter


@@ -4,7 +4,7 @@ metadata:
   labels:
     app.kubernetes.io/component: metrics-adapter
     app.kubernetes.io/name: prometheus-adapter
-    app.kubernetes.io/version: 0.9.1
+    app.kubernetes.io/version: 0.10.0
   name: prometheus-adapter
   namespace: monitoring
 spec:


@@ -142,11 +142,11 @@ a HorizontalPodAutoscaler like this to accomplish the autoscaling:
 
 <details>
-<summary>sample-app-hpa.yaml</summary>
+<summary>sample-app.hpa.yaml</summary>
 
 ```yaml
 kind: HorizontalPodAutoscaler
-apiVersion: autoscaling/v2beta1
+apiVersion: autoscaling/v2
 metadata:
   name: sample-app
 spec:
@@ -165,10 +165,13 @@ spec:
   - type: Pods
     pods:
       # use the metric that you used above: pods/http_requests
-      metricName: http_requests
+      metric:
+        name: http_requests
       # target 500 milli-requests per second,
       # which is 1 request every two seconds
-      targetAverageValue: 500m
+      target:
+        type: Value
+        averageValue: 500m
 ```
 
 </details>
@@ -176,7 +179,7 @@ spec:
 
 If you try creating that now (and take a look at your controller-manager
 logs), you'll see that the that the HorizontalPodAutoscaler controller is
 attempting to fetch metrics from
-`/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app`,
+`/apis/custom.metrics.k8s.io/v1beta2/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app`,
 but right now, nothing's serving that API.
 
 Before you can autoscale your application, you'll need to make sure that
@@ -197,11 +200,11 @@ First, you'll need to deploy the Prometheus Operator. Check out the
 guide](https://github.com/prometheus-operator/prometheus-operator#quickstart)
 for the Operator to deploy a copy of Prometheus.
 
-This walkthrough assumes that Prometheus is deployed in the `prom`
+This walkthrough assumes that Prometheus is deployed in the `monitoring`
 namespace. Most of the sample commands and files are namespace-agnostic,
 but there are a few commands or pieces of configuration that rely on that
 namespace. If you're using a different namespace, simply substitute that
-in for `prom` when it appears.
+in for `monitoring` when it appears.
 
 ### Monitoring Your Application
@@ -213,7 +216,7 @@ service:
 
 <details>
-<summary>service-monitor.yaml</summary>
+<summary>sample-app.monitor.yaml</summary>
 
 ```yaml
 kind: ServiceMonitor
@@ -233,12 +236,12 @@ spec:
 
 </details>
 
 ```shell
-$ kubectl create -f service-monitor.yaml
+$ kubectl create -f sample-app.monitor.yaml
 ```
 
-Now, you should see your metrics appear in your Prometheus instance. Look
+Now, you should see your metrics (`http_requests_total`) appear in your Prometheus instance. Look
 them up via the dashboard, and make sure they have the `namespace` and
-`pod` labels.
+`pod` labels. If not, check the labels on the service monitor match the ones on the Prometheus CRD.
 
 ### Launching the Adapter
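(Editor's aside: the label check suggested in the hunk above can be done directly in the Prometheus expression browser. This is only a sketch; the metric name comes from the walkthrough's sample app, and your label names may differ.)

```promql
# Group the sample app's series by the labels the adapter relies on.
# An empty result means the labels are missing or named differently.
count by (namespace, pod) (http_requests_total)
```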
@@ -256,7 +259,46 @@ the steps to deploy the adapter. Note that if you're deploying on
 a non-x86_64 (amd64) platform, you'll need to change the `image` field in
 the Deployment to be the appropriate image for your platform.
 
-The default adapter configuration should work for this walkthrough and
+However an update to the adapter config is necessary in order to
+expose custom metrics.
+
+<details>
+<summary>prom-adapter.config.yaml</summary>
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: adapter-config
+  namespace: monitoring
+data:
+  config.yaml: |-
+    "rules":
+    - "seriesQuery": |
+        {namespace!="",__name__!~"^container_.*"}
+      "resources":
+        "template": "<<.Resource>>"
+      "name":
+        "matches": "^(.*)_total"
+        "as": ""
+      "metricsQuery": |
+        sum by (<<.GroupBy>>) (
+          irate (
+            <<.Series>>{<<.LabelMatchers>>}[1m]
+          )
+        )
+```
+</details>
+
+```shell
+$ kubectl apply -f prom-adapter.config.yaml
+# Restart prom-adapter pods
+$ kubectl rollout restart deployment prometheus-adapter -n monitoring
+```
+
+This adapter configuration should work for this walkthrough together with
 a standard Prometheus Operator configuration, but if you've got custom
 relabelling rules, or your labels above weren't exactly `namespace` and
 `pod`, you may need to edit the configuration in the ConfigMap. The
@@ -265,11 +307,36 @@ overview of how configuration works.
 
 ### The Registered API
 
-As part of the creation of the adapter Deployment and associated objects
-(performed above), we registered the API with the API aggregator (part of
-the main Kubernetes API server).
+We also need to register the custom metrics API with the API aggregator (part of
+the main Kubernetes API server). For that we need to create an APIService resource
+
+<details>
+<summary>api-service.yaml</summary>
+
+```yaml
+apiVersion: apiregistration.k8s.io/v1
+kind: APIService
+metadata:
+  name: v1beta2.custom.metrics.k8s.io
+spec:
+  group: custom.metrics.k8s.io
+  groupPriorityMinimum: 100
+  insecureSkipTLSVerify: true
+  service:
+    name: prometheus-adapter
+    namespace: monitoring
+  version: v1beta2
+  versionPriority: 100
+```
+</details>
+
+```shell
+$ kubectl create -f api-service.yaml
+```
+
-The API is registered as `custom.metrics.k8s.io/v1beta1`, and you can find
+The API is registered as `custom.metrics.k8s.io/v1beta2`, and you can find
 more information about aggregation at [Concepts:
 Aggregation](https://github.com/kubernetes-incubator/apiserver-builder/blob/master/docs/concepts/aggregation.md).
@@ -280,7 +347,7 @@ With that all set, your custom metrics API should show up in discovery.
 
 Try fetching the discovery information for it:
 
 ```shell
-$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
+$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta2
 ```
 
 Since you've set up Prometheus to collect your app's metrics, you should
@@ -294,12 +361,12 @@ sends a raw GET request to the Kubernetes API server, automatically
 injecting auth information:
 
 ```shell
-$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app"
+$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app"
 ```
 
 Because of the adapter's configuration, the cumulative metric
 `http_requests_total` has been converted into a rate metric,
-`pods/http_requests`, which measures requests per second over a 2 minute
+`pods/http_requests`, which measures requests per second over a 1 minute
 interval. The value should currently be close to zero, since there's no
 traffic to your app, except for the regular metrics collection from
 Prometheus.
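(Editor's aside: for intuition, the commit's `metricsQuery` template expands into ordinary PromQL when the HPA asks for `pods/http_requests`. Under the walkthrough's assumptions (`default` namespace, `app=sample-app` selector), the generated query looks roughly like this; the pod regex is illustrative, not what the adapter emits verbatim.)

```promql
# <<.Series>> -> http_requests_total, <<.LabelMatchers>> -> the namespace
# and pod matchers from the request, <<.GroupBy>> -> pod
sum by (pod) (
  irate(
    http_requests_total{namespace="default",pod=~"sample-app-.*"}[1m]
  )
)
```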
@@ -350,7 +417,7 @@ and make decisions based on it.
 
 If you didn't create the HorizontalPodAutoscaler above, create it now:
 
 ```shell
-$ kubectl create -f sample-app-hpa.yaml
+$ kubectl create -f sample-app.hpa.yaml
 ```
 
 Wait a little bit, and then examine the HPA:
@@ -396,4 +463,4 @@ setting different labels or using the `Object` metric source type.
 
 For more information on how metrics are exposed by the Prometheus adapter,
 see [config documentation](/docs/config.md), and check the [default
-configuration](/deploy/manifests/custom-metrics-config-map.yaml).
+configuration](/deploy/manifests/config-map.yaml).