[docs] update to v1.9 API versions

This updates the README and walkthrough to use v1.9 API versions,
and to use Prometheus v2.
Solly Ross 2018-02-13 13:10:22 -05:00
parent 61b071c186
commit 79f9248ded
2 changed files with 28 additions and 23 deletions


@@ -5,7 +5,7 @@ Kubernetes Custom Metrics Adapter for Prometheus
 This repository contains an implementation of the Kubernetes custom
 metrics API
-([custom-metrics.metrics.k8s.io/v1alpha1](https://github.com/kubernetes/metrics/tree/master/pkg/apis/custom_metrics)),
+([custom.metrics.k8s.io/v1beta1](https://github.com/kubernetes/metrics/tree/master/pkg/apis/custom_metrics)),
 suitable for use with the autoscaling/v2 Horizontal Pod Autoscaler in
 Kubernetes 1.6+.
@@ -65,7 +65,7 @@ Additionally, [@luxas](https://github.com/luxas) has an excellent example
 deployment of Prometheus, this adapter, and a demo pod which serves
 a metric `http_requests_total`, which becomes the custom metrics API
 metric `pods/http_requests`. It also autoscales on that metric using the
-`autoscaling/v2alpha1` HorizontalPodAutoscaler.
+`autoscaling/v2beta1` HorizontalPodAutoscaler.
 It can be found at https://github.com/luxas/kubeadm-workshop. Pay special
 attention to:


@@ -20,9 +20,13 @@ Detailed instructions can be found in the Kubernetes documentation under
 [Horizontal Pod
 Autoscaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics).
-Make sure that you've properly configured Heapster with the `--api-server`
-flag, otherwise enabling custom metrics autoscaling support will disable
-CPU autoscaling support.
+Make sure that you've properly configured metrics-server (as is default in
+Kubernetes 1.9+), or enabling custom metrics autoscaling support will
+disable CPU autoscaling support.
+
+Note that most of the API versions in this walkthrough target Kubernetes
+1.9. It should still work with 1.7 and 1.8, but you might have to change
+some minor details.
 
 ### Binaries and Images ###
@@ -142,7 +146,7 @@ ConfigMap from above, and proceed from there:
 <summary>prom-adapter.deployment.yaml [Prometheus only]</summary>
 
 ```yaml
-apiVersion: apps/v1beta1
+apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: prometheus
@@ -156,15 +160,13 @@ spec:
       labels:
         app: prometheus
     spec:
       serviceAccountName: prom-cm-adapter
       containers:
-      - image: prom/prometheus:v1.6.1
+      - image: prom/prometheus:v2.2.0-rc.0
         name: prometheus
         args:
-        - -storage.local.retention=6h
-        - -storage.local.memory-chunks=500000
         # point prometheus at the configuration that you mount in below
-        - -config.file=/etc/prometheus/prometheus.yml
+        - --config.file=/etc/prometheus/prometheus.yml
         ports:
         # this port exposes the dashboard and the HTTP API
         - containerPort: 9090
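
The dropped `-storage.local.*` flags have no direct equivalents in Prometheus 2.x, which replaced the local-storage engine and renamed every flag to long `--` form. If you still want the 6h retention window the old args configured, a minimal sketch of the v2-style args list would be (the 6h value here is just carried over from the old flag, not part of this commit):

```yaml
      args:
        # v2 stores retention under the TSDB flag family
        - --storage.tsdb.retention=6h
        # point prometheus at the configuration that you mount in below
        - --config.file=/etc/prometheus/prometheus.yml
```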
@@ -289,16 +291,16 @@ $ kubectl -n prom create service clusterip prometheus --tcp=443:6443
 Now that you have a running deployment of Prometheus and the adapter,
 you'll need to register it as providing the
-`custom-metrics.metrics.k8s.io/v1alpha1` API.
+`custom.metrics.k8s.io/v1beta1` API.
 For more information on how this works, see [Concepts:
 Aggregation](https://github.com/kubernetes-incubator/apiserver-builder/blob/master/docs/concepts/aggregation.md).
 
 You'll need to create an API registration record for the
-`custom-metrics.metrics.k8s.io/v1alpha1` API. In order to do this, you'll
-need the base64 encoded version of the CA certificate used to sign the
-serving certificates you created above. If the CA certificate is stored
-in `/tmp/ca.crt`, you can get the base64-encoded form like this:
+`custom.metrics.k8s.io/v1beta1` API. In order to do this, you'll need the
+base64 encoded version of the CA certificate used to sign the serving
+certificates you created above. If the CA certificate is stored in
+`/tmp/ca.crt`, you can get the base64-encoded form like this:
 
 ```shell
 $ base64 --w 0 < /tmp/ca.crt
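
As a sketch of what that command produces (using a stand-in file rather than a real CA certificate), note that `--w 0` is GNU `base64`'s abbreviated `--wrap=0`, which disables line wrapping so the value can be pasted into a single YAML field:

```shell
# stand-in for the real CA certificate (illustrative only)
printf 'dummy-ca-data' > /tmp/ca.crt
# --wrap=0 keeps the encoded output on one line for the caBundle field
base64 --wrap=0 < /tmp/ca.crt  # prints ZHVtbXktY2EtZGF0YQ==
```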
@@ -310,19 +312,22 @@ Take the resulting value, and place it into the following file:
 <summary>cm-registration.yaml</summary>
 
+*Note that apiregistration moved to stable in 1.10, so you'll need to use
+the `apiregistration.k8s.io/v1` API version there*.
+
 ```yaml
 apiVersion: apiregistration.k8s.io/v1beta1
 kind: APIService
 metadata:
-  name: v1alpha1.custom-metrics.metrics.k8s.io
+  name: v1beta1.custom.metrics.k8s.io
 spec:
   # this tells the aggregator how to verify that your API server is
   # actually who it claims to be
   caBundle: <base-64-value-from-above>
   # these specify which group and version you're registering the API
   # server for
-  group: custom-metrics.metrics.k8s.io
-  version: v1alpha1
+  group: custom.metrics.k8s.io
+  version: v1beta1
   # these control how the aggregator prioritizes your registration.
   # it's not particularly relevant in this case.
   groupPriorityMinimum: 1000
@@ -349,7 +354,7 @@ With that all set, your custom metrics API should show up in discovery.
 Try fetching the discovery information for it:
 
 ```shell
-$ kubectl get --raw /apis/custom-metrics.metrics.k8s.io/v1alpha1
+$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
 ```
 
 Since you don't have any metrics collected yet, you shouldn't see any
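
Once pods do report the metric, that discovery endpoint returns a standard `APIResourceList`; a response would be shaped roughly like the sketch below (the `pods/http_requests` entry is assumed from the walkthrough's example, not quoted from a real cluster):

```yaml
kind: APIResourceList
apiVersion: v1
groupVersion: custom.metrics.k8s.io/v1beta1
resources:
# namespaced pod metric from the walkthrough's sample app
- name: pods/http_requests
  namespaced: true
  kind: MetricValueList
  verbs:
  - get
```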
@@ -417,7 +422,7 @@ Your Work](#double-checking-your-work). The cumulative Prometheus metric
 metric `pods/http_requests`. Check out its value:
 
 ```shell
-$ kubectl get --raw "/apis/custom-metrics.metrics.k8s.io/v1alpha1/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app"
+$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app"
 ```
 
 It should be zero, since you're not currently accessing it. Now, create
@@ -451,14 +456,14 @@ Create a description for the HorizontalPodAutoscaler (HPA):
 ```yaml
 kind: HorizontalPodAutoscaler
-apiVersion: autoscaling/v2alpha1
+apiVersion: autoscaling/v2beta1
 metadata:
   name: sample-app
 spec:
   scaleTargetRef:
     # point the HPA at the sample application
     # you created above
-    apiVersion: apps/v1beta1
+    apiVersion: apps/v1
     kind: Deployment
     name: sample-app
   # autoscale between 1 and 10 replicas
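
The hunk is truncated at the replica comment; in `autoscaling/v2beta1` the rest of the spec pairs the replica bounds with a pods-type metric stanza, sketched here under the walkthrough's metric name (the `targetAverageValue` threshold is an assumption, not taken from the diff):

```yaml
  minReplicas: 1
  maxReplicas: 10
  metrics:
  # scale on the custom metric exposed as pods/http_requests
  - type: Pods
    pods:
      metricName: http_requests
      targetAverageValue: 500m
```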