Compare commits


No commits in common. "master" and "v0.8.2" have entirely different histories.

4034 changed files with 1291604 additions and 6302 deletions

@@ -1,52 +0,0 @@
---
name: Bug report
about: Report a bug encountered while running prometheus-adapter
title: ''
labels: kind/bug
assignees: ''
---
<!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately; see https://github.com/kubernetes/kube-state-metrics/blob/master/SECURITY.md
-->
**What happened?**:
**What did you expect to happen?**:
**Please provide the prometheus-adapter config**:
<details open>
<summary>prometheus-adapter config</summary>
<!--- INSERT config HERE --->
</details>
**Please provide the HPA resource used for autoscaling**:
<details open>
<summary>HPA yaml</summary>
<!--- INSERT yaml HERE --->
</details>
**Please provide the HPA status**:
**Please provide the prometheus-adapter logs with -v=6 around the time the issue happened**:
<details open>
<summary>prometheus-adapter logs</summary>
<!--- INSERT logs HERE --->
</details>
**Anything else we need to know?**:
**Environment**:
- prometheus-adapter version:
- prometheus version:
- Kubernetes version (use `kubectl version`):
- Cloud provider or hardware configuration:
- Other info:

.gitignore
@@ -1,5 +1,4 @@
*.swp
*~
/vendor
/adapter
.e2e
_output
deploy/adapter

@@ -1,39 +0,0 @@
run:
deadline: 5m
linters:
disable-all: true
enable:
- bodyclose
- dogsled
- dupl
- errcheck
- exportloopref
- gocritic
- gocyclo
- gofmt
- goimports
- gosec
- goprintffuncname
- gosimple
- govet
- ineffassign
- misspell
- nakedret
- nolintlint
- revive
- staticcheck
- stylecheck
- typecheck
- unconvert
- unused
- whitespace
linters-settings:
goimports:
local-prefixes: sigs.k8s.io/prometheus-adapter
revive:
rules:
- name: exported
arguments:
- disableStutteringCheck

.travis-deploy.sh (new executable file)
@@ -0,0 +1,9 @@
#!/bin/bash
set -x
docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD"
if [[ -n $TRAVIS_TAG ]]; then
make push VERSION=${TRAVIS_TAG}
else
make push-amd64
fi

.travis.yml (new file)
@@ -0,0 +1,26 @@
language: go
go:
- '1.13'
# blech, Travis downloads with capitals in DirectXMan12, which confuses go
go_import_path: github.com/directxman12/k8s-prometheus-adapter
script: make verify && git diff --exit-code
sudo: required
services:
- docker
env:
- GO111MODULE=on
deploy:
- provider: script
script: bash .travis-deploy.sh
on:
branch: master
- provider: script
script: bash .travis-deploy.sh
on:
tags: true

@@ -1,22 +0,0 @@
ARG ARCH
ARG GO_VERSION
FROM golang:${GO_VERSION} as build
WORKDIR /go/src/sigs.k8s.io/prometheus-adapter
COPY go.mod .
COPY go.sum .
RUN go mod download
COPY pkg pkg
COPY cmd cmd
COPY Makefile Makefile
ARG ARCH
RUN make prometheus-adapter
FROM gcr.io/distroless/static:latest-$ARCH
COPY --from=build /go/src/sigs.k8s.io/prometheus-adapter/adapter /
USER 65534
ENTRYPOINT ["/adapter"]

Makefile
@@ -1,118 +1,90 @@
REGISTRY?=gcr.io/k8s-staging-prometheus-adapter
IMAGE=prometheus-adapter
REGISTRY?=directxman12
IMAGE?=k8s-prometheus-adapter
ARCH?=$(shell go env GOARCH)
ALL_ARCH=amd64 arm arm64 ppc64le s390x
GOPATH:=$(shell go env GOPATH)
ML_PLATFORMS=linux/amd64,linux/arm,linux/arm64,linux/ppc64le,linux/s390x
OUT_DIR?=$(PWD)/_output
VERSION=$(shell cat VERSION)
TAG_PREFIX=v
TAG?=$(TAG_PREFIX)$(VERSION)
OPENAPI_PATH=./vendor/k8s.io/kube-openapi
GO_VERSION?=1.22.5
GOLANGCI_VERSION?=1.56.2
VERSION?=latest
GOIMAGE=golang:1.13
GO111MODULE=on
export GO111MODULE
.PHONY: all
all: prometheus-adapter
# Build
# -----
SRC_DEPS=$(shell find pkg cmd -type f -name "*.go")
prometheus-adapter: $(SRC_DEPS)
CGO_ENABLED=0 GOARCH=$(ARCH) go build sigs.k8s.io/prometheus-adapter/cmd/adapter
.PHONY: container
container:
docker build -t $(REGISTRY)/$(IMAGE)-$(ARCH):$(TAG) --build-arg ARCH=$(ARCH) --build-arg GO_VERSION=$(GO_VERSION) .
# Container push
# --------------
PUSH_ARCH_TARGETS=$(addprefix push-,$(ALL_ARCH))
.PHONY: push
push: container
docker push $(REGISTRY)/$(IMAGE)-$(ARCH):$(TAG)
push-all: $(PUSH_ARCH_TARGETS) push-multi-arch;
.PHONY: $(PUSH_ARCH_TARGETS)
$(PUSH_ARCH_TARGETS): push-%:
ARCH=$* $(MAKE) push
.PHONY: push-multi-arch
push-multi-arch: export DOCKER_CLI_EXPERIMENTAL = enabled
push-multi-arch:
docker manifest create --amend $(REGISTRY)/$(IMAGE):$(TAG) $(shell echo $(ALL_ARCH) | sed -e "s~[^ ]*~$(REGISTRY)/$(IMAGE)\-&:$(TAG)~g")
@for arch in $(ALL_ARCH); do docker manifest annotate --arch $${arch} $(REGISTRY)/$(IMAGE):$(TAG) $(REGISTRY)/$(IMAGE)-$${arch}:$(TAG); done
docker manifest push --purge $(REGISTRY)/$(IMAGE):$(TAG)
# Test
# ----
.PHONY: test
test:
CGO_ENABLED=0 go test ./cmd/... ./pkg/...
.PHONY: test-e2e
test-e2e:
./test/run-e2e-tests.sh
# Static analysis
# ---------------
.PHONY: verify
verify: verify-lint verify-deps verify-generated
.PHONY: update
update: update-lint update-generated
# Format and lint
# ---------------
HAS_GOLANGCI_VERSION:=$(shell $(GOPATH)/bin/golangci-lint version --format=short)
.PHONY: golangci
golangci:
ifneq ($(HAS_GOLANGCI_VERSION), $(GOLANGCI_VERSION))
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(GOPATH)/bin v$(GOLANGCI_VERSION)
ifeq ($(ARCH),amd64)
BASEIMAGE?=busybox
endif
ifeq ($(ARCH),arm)
BASEIMAGE?=armhf/busybox
endif
ifeq ($(ARCH),arm64)
BASEIMAGE?=aarch64/busybox
endif
ifeq ($(ARCH),ppc64le)
BASEIMAGE?=ppc64le/busybox
endif
ifeq ($(ARCH),s390x)
BASEIMAGE?=s390x/busybox
endif
.PHONY: verify-lint
verify-lint: golangci
$(GOPATH)/bin/golangci-lint run --modules-download-mode=readonly || (echo 'Run "make update-lint"' && exit 1)
.PHONY: all docker-build push-% push test verify-gofmt gofmt verify build-local-image
.PHONY: update-lint
update-lint: golangci
$(GOPATH)/bin/golangci-lint run --fix --modules-download-mode=readonly
all: $(OUT_DIR)/$(ARCH)/adapter
src_deps=$(shell find pkg cmd -type f -name "*.go")
$(OUT_DIR)/%/adapter: $(src_deps)
CGO_ENABLED=0 GOARCH=$* go build -tags netgo -o $(OUT_DIR)/$*/adapter github.com/directxman12/k8s-prometheus-adapter/cmd/adapter
# Dependencies
# ------------
docker-build: $(OUT_DIR)/Dockerfile
docker run -it -v $(OUT_DIR):/build -v $(PWD):/go/src/github.com/directxman12/k8s-prometheus-adapter -e GOARCH=$(ARCH) $(GOIMAGE) /bin/bash -c "\
CGO_ENABLED=0 go build -tags netgo -o /build/$(ARCH)/adapter github.com/directxman12/k8s-prometheus-adapter/cmd/adapter"
.PHONY: verify-deps
verify-deps:
go mod verify
docker build -t $(REGISTRY)/$(IMAGE)-$(ARCH):$(VERSION) --build-arg ARCH=$(ARCH) --build-arg BASEIMAGE=$(BASEIMAGE) $(OUT_DIR)
$(OUT_DIR)/Dockerfile: deploy/Dockerfile
mkdir -p $(OUT_DIR)
cp deploy/Dockerfile $(OUT_DIR)/Dockerfile
build-local-image: $(OUT_DIR)/Dockerfile $(OUT_DIR)/$(ARCH)/adapter
docker build -t $(REGISTRY)/$(IMAGE)-$(ARCH):$(VERSION) --build-arg ARCH=$(ARCH) --build-arg BASEIMAGE=scratch $(OUT_DIR)
push-%:
$(MAKE) ARCH=$* docker-build
docker push $(REGISTRY)/$(IMAGE)-$*:$(VERSION)
push: ./manifest-tool $(addprefix push-,$(ALL_ARCH))
./manifest-tool push from-args --platforms $(ML_PLATFORMS) --template $(REGISTRY)/$(IMAGE)-ARCH:$(VERSION) --target $(REGISTRY)/$(IMAGE):$(VERSION)
./manifest-tool:
curl -sSL https://github.com/estesp/manifest-tool/releases/download/v0.5.0/manifest-tool-linux-amd64 > manifest-tool
chmod +x manifest-tool
vendor:
go mod tidy
@git diff --exit-code -- go.mod go.sum
go mod vendor
# Generation
# ----------
test:
CGO_ENABLED=0 go test ./pkg/...
generated_files=pkg/api/generated/openapi/zz_generated.openapi.go
verify-gofmt:
./hack/gofmt-all.sh -v
.PHONY: verify-generated
verify-generated: update-generated
@git diff --exit-code -- $(generated_files)
gofmt:
./hack/gofmt-all.sh
.PHONY: update-generated
update-generated:
go install -mod=readonly k8s.io/kube-openapi/cmd/openapi-gen
$(GOPATH)/bin/openapi-gen --logtostderr \
--go-header-file ./hack/boilerplate.go.txt \
--output-pkg ./pkg/api/generated/openapi \
--output-file zz_generated.openapi.go \
--output-dir ./pkg/api/generated/openapi \
-r /dev/null \
"k8s.io/metrics/pkg/apis/custom_metrics" "k8s.io/metrics/pkg/apis/custom_metrics/v1beta1" "k8s.io/metrics/pkg/apis/custom_metrics/v1beta2" "k8s.io/metrics/pkg/apis/external_metrics" "k8s.io/metrics/pkg/apis/external_metrics/v1beta1" "k8s.io/metrics/pkg/apis/metrics" "k8s.io/metrics/pkg/apis/metrics/v1beta1" "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/api/resource" "k8s.io/apimachinery/pkg/version" "k8s.io/api/core/v1"
go-mod:
go mod tidy
go mod vendor
go mod verify
verify: verify-gofmt go-mod test
pkg/api/generated/openapi/zz_generated.openapi.go: go.mod go.sum
go run $(OPENAPI_PATH)/cmd/openapi-gen/openapi-gen.go --logtostderr \
-i k8s.io/metrics/pkg/apis/custom_metrics,k8s.io/metrics/pkg/apis/custom_metrics/v1beta1,k8s.io/metrics/pkg/apis/custom_metrics/v1beta2,k8s.io/metrics/pkg/apis/external_metrics,k8s.io/metrics/pkg/apis/external_metrics/v1beta1,k8s.io/metrics/pkg/apis/metrics,k8s.io/metrics/pkg/apis/metrics/v1beta1,k8s.io/apimachinery/pkg/apis/meta/v1,k8s.io/apimachinery/pkg/api/resource,k8s.io/apimachinery/pkg/version,k8s.io/api/core/v1 \
-h ./hack/boilerplate.go.txt \
-p ./pkg/api/generated/openapi \
-O zz_generated.openapi \
-o ./ \
-r /dev/null

NOTICE
@@ -1,16 +0,0 @@
When donating the k8s-prometheus-adapter project to the CNCF, we could not
reach all the contributors to make them sign the CNCF CLA. As such, according
to the CNCF rules to donate a repository, we must add a NOTICE referencing
section 7 of the CLA with a list of developers who could not be reached.
`7. Should You wish to submit work that is not Your original creation, You may
submit it to the Foundation separately from any Contribution, identifying the
complete details of its source and of any license or other restriction
(including, but not limited to, related patents, trademarks, and license
agreements) of which you are personally aware, and conspicuously marking the
work as "Submitted on behalf of a third-party: [named here]".`
Submitted on behalf of a third-party: Andrew "thisisamurray" Murray
Submitted on behalf of a third-party: Duane "duane-ibm" D'Souza
Submitted on behalf of a third-party: John "john-delivuk" Delivuk
Submitted on behalf of a third-party: Richard "rrtaylor" Taylor

@@ -1,17 +1,15 @@
# See the OWNERS docs at https://go.k8s.io/owners
owners:
- s-urbaniak
- lilic
approvers:
- dgrisonnet
- logicalhan
- dashpole
- prometheus-adapter-approvers
reviewers:
- dgrisonnet
- olivierlemasle
- logicalhan
- dashpole
- prometheus-adapter-approvers
- prometheus-adapter-reviewers
emeritus_approvers:
- brancz
- directxman12
- lilic
- s-urbaniak

OWNERS_ALIASES.txt (new file)
@@ -0,0 +1,7 @@
# See the OWNERS docs at https://go.k8s.io/owners#owners_aliases
aliases:
prometheus-adapter-approvers:
- s-urbaniak
- lilic
prometheus-adapter-reviewers: []

@@ -1,7 +1,11 @@
# Prometheus Adapter for Kubernetes Metrics APIs
This repository contains an implementation of the Kubernetes Custom, Resource and External
[Metric APIs](https://github.com/kubernetes/metrics).
[![Build Status](https://travis-ci.org/DirectXMan12/k8s-prometheus-adapter.svg?branch=master)](https://travis-ci.org/DirectXMan12/k8s-prometheus-adapter)
This repository contains an implementation of the Kubernetes
[resource metrics](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md),
[custom metrics](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md), and
[external metrics](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/external-metrics-api.md) APIs.
This adapter is therefore suitable for use with the autoscaling/v2 Horizontal Pod Autoscaler in Kubernetes 1.6+.
It can also replace the [metrics server](https://github.com/kubernetes-incubator/metrics-server) on clusters that already run Prometheus and collect the appropriate metrics.
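To make the use case concrete, here is a hypothetical `autoscaling/v2` HPA manifest (not part of this diff; the workload and metric names are placeholders) that scales on a custom metric the adapter could expose from Prometheus:
```yaml
# Hypothetical HPA consuming a custom metric served through the adapter;
# metric name, target value, and workload are illustrative only.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second  # exposed via the custom metrics API
      target:
        type: AverageValue
        averageValue: "500m"
```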
@@ -19,26 +23,11 @@ If you're a helm user, a helm chart is listed on prometheus-community repository
To install it with the release name `my-release`, run this Helm command:
For Helm2
```console
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install --name my-release prometheus-community/prometheus-adapter
```
For Helm3 (as the release name is mandatory)
```console
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install my-release prometheus-community/prometheus-adapter
```
Official images
---
All official images for releases after v0.8.4 are available in `registry.k8s.io/prometheus-adapter/prometheus-adapter:$VERSION`. The project also maintains a [staging registry](https://console.cloud.google.com/gcr/images/k8s-staging-prometheus-adapter/GLOBAL/) where images for each commit from the master branch are published. You can use this registry if you need to test a version from a specific commit, or if you need to deploy a patch while waiting for a new release.
Images for versions v0.8.4 and prior are only available in unofficial registries:
* https://quay.io/repository/coreos/k8s-prometheus-adapter-amd64
* https://hub.docker.com/r/directxman12/k8s-prometheus-adapter/
Configuration
-------------
@@ -49,7 +38,7 @@ will attempt to use [Kubernetes in-cluster
config](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod)
to connect to the cluster.
It takes the following additional arguments specific to configuring how the
It takes the following addition arguments specific to configuring how the
adapter talks to Prometheus and the main Kubernetes cluster:
- `--lister-kubeconfig=<path-to-kubeconfig>`: This configures
@@ -58,26 +47,11 @@ adapter talks to Prometheus and the main Kubernetes cluster:
in-cluster config.
- `--metrics-relist-interval=<duration>`: This is the interval at which to
update the cache of available metrics from Prometheus. By default, this
value is set to 10 minutes.
- `--metrics-max-age=<duration>`: This is the max age of the metrics to be
loaded from Prometheus. For example, when set to `10m`, it will query
Prometheus for metrics since 10m ago, and only those that have datapoints
within the time period will appear in the adapter. Therefore, metrics-max-age
should be equal to or larger than your Prometheus scrape interval,
or your metrics will occasionally disappear from the adapter.
By default, this is set to be the same as metrics-relist-interval to avoid
some confusing behavior (see this [PR](https://github.com/kubernetes-sigs/prometheus-adapter/pull/230)).
Note: we recommend setting this only if you understand what is happening.
For example, this setting can be useful when the scrape happens over a
network call, e.g. pulling metrics from AWS CloudWatch or Google Monitoring;
Google Monitoring in particular sometimes has delays in when sampled data
shows up in its system. This means that even if you scrape data frequently,
it might not show up promptly. If you configure the relist interval to a
short period without also configuring this, you might not be able to see
your metrics in the adapter in certain scenarios.
update the cache of available metrics from Prometheus. Since the adapter
only lists metrics during discovery that exist between the current time and
the last discovery query, your relist interval should be equal to or larger
than your Prometheus scrape interval, otherwise your metrics will
occasionally disappear from the adapter.
- `--prometheus-url=<url>`: This is the URL used to connect to Prometheus.
It will eventually contain query parameters to configure the connection.
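As a rough sketch of how these flags fit together, here is a hypothetical fragment of the adapter's container spec (not from this diff; the image tag, URL, and intervals are placeholders):
```yaml
# Hypothetical container spec fragment wiring up the flags described above.
containers:
- name: prometheus-adapter
  image: registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.12.0  # assumed tag
  args:
  - --prometheus-url=http://prometheus.monitoring.svc:9090/
  - --config=/etc/adapter/config.yaml
  - --metrics-relist-interval=1m
  - --metrics-max-age=2m  # keep >= the Prometheus scrape interval
```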

@@ -4,10 +4,6 @@ prometheus-adapter is released on an as-needed basis. The process is as follows:
1. An issue is proposing a new release with a changelog since the last release
1. At least one [OWNERS](OWNERS) must LGTM this release
1. A PR that bumps version hardcoded in code is created and merged
1. An OWNER creates a draft Github release
1. An OWNER creates a release tag using `git tag -s $VERSION`, inserts the changelog and pushes the tag with `git push $VERSION`. Then waits for [prow.k8s.io](https://prow.k8s.io) to build and push new images to [gcr.io/k8s-staging-prometheus-adapter](https://gcr.io/k8s-staging-prometheus-adapter)
1. A PR in [kubernetes/k8s.io](https://github.com/kubernetes/k8s.io/blob/main/k8s.gcr.io/images/k8s-staging-prometheus-adapter/images.yaml) is created to release images to `k8s.gcr.io`
1. An OWNER publishes the GitHub release
1. An announcement email is sent to `kubernetes-sig-instrumentation@googlegroups.com` with the subject `[ANNOUNCE] prometheus-adapter $VERSION is released`
1. An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION`
1. The release issue is closed
1. An announcement email is sent to `kubernetes-sig-instrumentation@googlegroups.com` with the subject `[ANNOUNCE] prometheus-adapter $VERSION is released`

@@ -10,5 +10,5 @@
# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
# INSTRUCTIONS AT https://kubernetes.io/security/
dgrisonnet
s-urbaniak
lilic

@@ -1 +0,0 @@
0.12.0

@@ -1,11 +0,0 @@
# See https://cloud.google.com/cloud-build/docs/build-config
timeout: 3600s
options:
substitution_option: ALLOW_LOOSE
steps:
- name: 'gcr.io/k8s-staging-test-infra/gcb-docker-gcloud:v20211118-2f2d816b90'
entrypoint: make
env:
- TAG=$_PULL_BASE_REF
args:
- push-all

@@ -19,38 +19,36 @@ package main
import (
"crypto/tls"
"crypto/x509"
"flag"
"fmt"
"io/ioutil"
"net/http"
"net/url"
"os"
"strings"
"time"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
openapinamer "k8s.io/apiserver/pkg/endpoints/openapi"
genericapiserver "k8s.io/apiserver/pkg/server"
"k8s.io/client-go/metadata"
"k8s.io/client-go/metadata/metadatainformer"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/transport"
"k8s.io/component-base/logs"
"k8s.io/klog/v2"
"k8s.io/sample-apiserver/pkg/apiserver"
customexternalmetrics "sigs.k8s.io/custom-metrics-apiserver/pkg/apiserver"
basecmd "sigs.k8s.io/custom-metrics-apiserver/pkg/cmd"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider"
basecmd "github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/cmd"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider"
"sigs.k8s.io/metrics-server/pkg/api"
generatedopenapi "sigs.k8s.io/prometheus-adapter/pkg/api/generated/openapi"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
mprom "sigs.k8s.io/prometheus-adapter/pkg/client/metrics"
adaptercfg "sigs.k8s.io/prometheus-adapter/pkg/config"
cmprov "sigs.k8s.io/prometheus-adapter/pkg/custom-provider"
extprov "sigs.k8s.io/prometheus-adapter/pkg/external-provider"
"sigs.k8s.io/prometheus-adapter/pkg/naming"
resprov "sigs.k8s.io/prometheus-adapter/pkg/resourceprovider"
generatedopenapi "github.com/directxman12/k8s-prometheus-adapter/pkg/api/generated/openapi"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
mprom "github.com/directxman12/k8s-prometheus-adapter/pkg/client/metrics"
adaptercfg "github.com/directxman12/k8s-prometheus-adapter/pkg/config"
cmprov "github.com/directxman12/k8s-prometheus-adapter/pkg/custom-provider"
extprov "github.com/directxman12/k8s-prometheus-adapter/pkg/external-provider"
"github.com/directxman12/k8s-prometheus-adapter/pkg/naming"
resprov "github.com/directxman12/k8s-prometheus-adapter/pkg/resourceprovider"
)
type PrometheusAdapter struct {
@@ -64,24 +62,15 @@ type PrometheusAdapter struct {
PrometheusAuthConf string
// PrometheusCAFile points to the file containing the ca-root for connecting with Prometheus
PrometheusCAFile string
// PrometheusClientTLSCertFile points to the file containing the client TLS cert for connecting with Prometheus
PrometheusClientTLSCertFile string
// PrometheusClientTLSKeyFile points to the file containing the client TLS key for connecting with Prometheus
PrometheusClientTLSKeyFile string
// PrometheusTokenFile points to the file that contains the bearer token when connecting with Prometheus
PrometheusTokenFile string
// PrometheusHeaders is a k=v list of headers to set on requests to PrometheusURL
PrometheusHeaders []string
// PrometheusVerb is a verb to set on requests to PrometheusURL
PrometheusVerb string
// AdapterConfigFile points to the file containing the metrics discovery configuration.
AdapterConfigFile string
// MetricsRelistInterval is the interval at which to relist the set of available metrics
MetricsRelistInterval time.Duration
// MetricsMaxAge is the period to query available metrics for
MetricsMaxAge time.Duration
// DisableHTTP2 indicates that http2 should not be enabled.
DisableHTTP2 bool
metricsConfig *adaptercfg.MetricsDiscoveryConfig
}
@@ -91,14 +80,10 @@ func (cmd *PrometheusAdapter) makePromClient() (prom.Client, error) {
return nil, fmt.Errorf("invalid Prometheus URL %q: %v", baseURL, err)
}
if cmd.PrometheusVerb != http.MethodGet && cmd.PrometheusVerb != http.MethodPost {
return nil, fmt.Errorf("unsupported Prometheus HTTP verb %q; supported verbs: \"GET\" and \"POST\"", cmd.PrometheusVerb)
}
var httpClient *http.Client
if cmd.PrometheusCAFile != "" {
prometheusCAClient, err := makePrometheusCAClient(cmd.PrometheusCAFile, cmd.PrometheusClientTLSCertFile, cmd.PrometheusClientTLSKeyFile)
prometheusCAClient, err := makePrometheusCAClient(cmd.PrometheusCAFile)
if err != nil {
return nil, err
}
@@ -114,19 +99,16 @@ func (cmd *PrometheusAdapter) makePromClient() (prom.Client, error) {
}
if cmd.PrometheusTokenFile != "" {
data, err := os.ReadFile(cmd.PrometheusTokenFile)
data, err := ioutil.ReadFile(cmd.PrometheusTokenFile)
if err != nil {
return nil, fmt.Errorf("failed to read prometheus-token-file: %v", err)
}
wrappedTransport := http.DefaultTransport
if httpClient.Transport != nil {
wrappedTransport = httpClient.Transport
}
httpClient.Transport = transport.NewBearerAuthRoundTripper(string(data), wrappedTransport)
httpClient.Transport = transport.NewBearerAuthRoundTripper(string(data), httpClient.Transport)
}
genericPromClient := prom.NewGenericAPIClient(httpClient, baseURL, parseHeaderArgs(cmd.PrometheusHeaders))
genericPromClient := prom.NewGenericAPIClient(httpClient, baseURL)
instrumentedGenericPromClient := mprom.InstrumentGenericAPIClient(genericPromClient, baseURL.String())
return prom.NewClientForAPI(instrumentedGenericPromClient, cmd.PrometheusVerb), nil
return prom.NewClientForAPI(instrumentedGenericPromClient), nil
}
func (cmd *PrometheusAdapter) addFlags() {
@@ -138,28 +120,15 @@ func (cmd *PrometheusAdapter) addFlags() {
"kubeconfig file used to configure auth when connecting to Prometheus.")
cmd.Flags().StringVar(&cmd.PrometheusCAFile, "prometheus-ca-file", cmd.PrometheusCAFile,
"Optional CA file to use when connecting with Prometheus")
cmd.Flags().StringVar(&cmd.PrometheusClientTLSCertFile, "prometheus-client-tls-cert-file", cmd.PrometheusClientTLSCertFile,
"Optional client TLS cert file to use when connecting with Prometheus, auto-renewal is not supported")
cmd.Flags().StringVar(&cmd.PrometheusClientTLSKeyFile, "prometheus-client-tls-key-file", cmd.PrometheusClientTLSKeyFile,
"Optional client TLS key file to use when connecting with Prometheus, auto-renewal is not supported")
cmd.Flags().StringVar(&cmd.PrometheusTokenFile, "prometheus-token-file", cmd.PrometheusTokenFile,
"Optional file containing the bearer token to use when connecting with Prometheus")
cmd.Flags().StringArrayVar(&cmd.PrometheusHeaders, "prometheus-header", cmd.PrometheusHeaders,
"Optional header to set on requests to prometheus-url. Can be repeated")
cmd.Flags().StringVar(&cmd.PrometheusVerb, "prometheus-verb", cmd.PrometheusVerb,
"HTTP verb to set on requests to Prometheus. Possible values: \"GET\", \"POST\"")
cmd.Flags().StringVar(&cmd.AdapterConfigFile, "config", cmd.AdapterConfigFile,
"Configuration file containing details of how to transform between Prometheus metrics "+
"and custom metrics API resources")
cmd.Flags().DurationVar(&cmd.MetricsRelistInterval, "metrics-relist-interval", cmd.MetricsRelistInterval,
cmd.Flags().DurationVar(&cmd.MetricsRelistInterval, "metrics-relist-interval", cmd.MetricsRelistInterval, ""+
"interval at which to re-list the set of all available metrics from Prometheus")
cmd.Flags().DurationVar(&cmd.MetricsMaxAge, "metrics-max-age", cmd.MetricsMaxAge,
cmd.Flags().DurationVar(&cmd.MetricsMaxAge, "metrics-max-age", cmd.MetricsMaxAge, ""+
"period for which to query the set of available metrics from Prometheus")
cmd.Flags().BoolVar(&cmd.DisableHTTP2, "disable-http2", cmd.DisableHTTP2,
"Disable HTTP/2 support")
// Add logging flags
logs.AddFlags(cmd.Flags())
}
func (cmd *PrometheusAdapter) loadConfig() error {
@@ -227,13 +196,13 @@ func (cmd *PrometheusAdapter) makeExternalProvider(promClient prom.Client, stopC
}
// construct the provider and start it
emProvider, runner := extprov.NewExternalPrometheusProvider(promClient, namers, cmd.MetricsRelistInterval, cmd.MetricsMaxAge)
emProvider, runner := extprov.NewExternalPrometheusProvider(promClient, namers, cmd.MetricsRelistInterval)
runner.RunUntil(stopCh)
return emProvider, nil
}
func (cmd *PrometheusAdapter) addResourceMetricsAPI(promClient prom.Client, stopCh <-chan struct{}) error {
func (cmd *PrometheusAdapter) addResourceMetricsAPI(promClient prom.Client) error {
if cmd.metricsConfig.ResourceRules == nil {
// bail if we don't have rules for setting things up
return nil
@@ -249,48 +218,19 @@ return fmt.Errorf("unable to construct resource metrics API provider: %v", err)
return fmt.Errorf("unable to construct resource metrics API provider: %v", err)
}
rest, err := cmd.ClientConfig()
informers, err := cmd.Informers()
if err != nil {
return err
}
client, err := metadata.NewForConfig(rest)
if err != nil {
return err
}
podInformerFactory := metadatainformer.NewFilteredSharedInformerFactory(client, 0, corev1.NamespaceAll, func(options *metav1.ListOptions) {
options.FieldSelector = "status.phase=Running"
})
podInformer := podInformerFactory.ForResource(corev1.SchemeGroupVersion.WithResource("pods"))
informer, err := cmd.Informers()
if err != nil {
return err
}
config, err := cmd.Config()
if err != nil {
return err
}
config.GenericConfig.EnableMetrics = false
server, err := cmd.Server()
if err != nil {
return err
}
metricsHandler, err := mprom.MetricsHandler()
if err != nil {
if err := api.Install(provider, informers.Core().V1(), server.GenericAPIServer); err != nil {
return err
}
server.GenericAPIServer.Handler.NonGoRestfulMux.HandleFunc("/metrics", metricsHandler)
if err := api.Install(provider, podInformer.Lister(), informer.Core().V1().Nodes().Lister(), server.GenericAPIServer, nil); err != nil {
return err
}
go podInformer.Informer().Run(stopCh)
return nil
}
@@ -302,28 +242,20 @@ func main() {
// set up flags
cmd := &PrometheusAdapter{
PrometheusURL: "https://localhost",
PrometheusVerb: http.MethodGet,
MetricsRelistInterval: 10 * time.Minute,
}
cmd.Name = "prometheus-metrics-adapter"
cmd.OpenAPIConfig = genericapiserver.DefaultOpenAPIConfig(generatedopenapi.GetOpenAPIDefinitions, openapinamer.NewDefinitionNamer(apiserver.Scheme))
cmd.OpenAPIConfig.Info.Title = "prometheus-metrics-adapter"
cmd.OpenAPIConfig.Info.Version = "1.0.0"
cmd.addFlags()
cmd.Flags().AddGoFlagSet(flag.CommandLine) // make sure we get the klog flags
if err := cmd.Flags().Parse(os.Args); err != nil {
klog.Fatalf("unable to parse flags: %v", err)
}
if cmd.OpenAPIConfig == nil {
cmd.OpenAPIConfig = genericapiserver.DefaultOpenAPIConfig(generatedopenapi.GetOpenAPIDefinitions, openapinamer.NewDefinitionNamer(api.Scheme, customexternalmetrics.Scheme))
cmd.OpenAPIConfig.Info.Title = "prometheus-metrics-adapter"
cmd.OpenAPIConfig.Info.Version = "1.0.0"
}
if cmd.OpenAPIV3Config == nil {
cmd.OpenAPIV3Config = genericapiserver.DefaultOpenAPIV3Config(generatedopenapi.GetOpenAPIDefinitions, openapinamer.NewDefinitionNamer(api.Scheme, customexternalmetrics.Scheme))
cmd.OpenAPIV3Config.Info.Title = "prometheus-metrics-adapter"
cmd.OpenAPIV3Config.Info.Version = "1.0.0"
}
// if --metrics-max-age is not set, make it equal to --metrics-relist-interval
if cmd.MetricsMaxAge == 0*time.Second {
cmd.MetricsMaxAge = cmd.MetricsRelistInterval
@@ -340,11 +272,8 @@ func main() {
klog.Fatalf("unable to load metrics discovery config: %v", err)
}
// stop channel closed on SIGTERM and SIGINT
stopCh := genericapiserver.SetupSignalHandler()
// construct the provider
cmProvider, err := cmd.makeProvider(promClient, stopCh)
cmProvider, err := cmd.makeProvider(promClient, wait.NeverStop)
if err != nil {
klog.Fatalf("unable to construct custom metrics provider: %v", err)
}
@@ -355,7 +284,7 @@ }
}
// construct the external provider
emProvider, err := cmd.makeExternalProvider(promClient, stopCh)
emProvider, err := cmd.makeExternalProvider(promClient, wait.NeverStop)
if err != nil {
klog.Fatalf("unable to construct external metrics provider: %v", err)
}
@@ -366,20 +295,12 @@ }
}
// attach resource metrics support, if it's needed
if err := cmd.addResourceMetricsAPI(promClient, stopCh); err != nil {
if err := cmd.addResourceMetricsAPI(promClient); err != nil {
klog.Fatalf("unable to install resource metrics API: %v", err)
}
// disable HTTP/2 to mitigate CVE-2023-44487 until the Go standard library
// and golang.org/x/net are fully fixed.
server, err := cmd.Server()
if err != nil {
klog.Fatalf("unable to fetch server: %v", err)
}
server.GenericAPIServer.SecureServingInfo.DisableHTTP2 = cmd.DisableHTTP2
// run the server
if err := cmd.Run(stopCh); err != nil {
if err := cmd.Run(wait.NeverStop); err != nil {
klog.Fatalf("unable to run custom metrics adapter: %v", err)
}
}
@@ -403,7 +324,7 @@ func makeKubeconfigHTTPClient(inClusterAuth bool, kubeConfigPath string) (*http.
loader := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loadingRules, &clientcmd.ConfigOverrides{})
authConf, err = loader.ClientConfig()
if err != nil {
return nil, fmt.Errorf("unable to construct auth configuration from %q for connecting to Prometheus: %v", kubeConfigPath, err)
return nil, fmt.Errorf("unable to construct auth configuration from %q for connecting to Prometheus: %v", kubeConfigPath, err)
}
} else {
var err error
@@ -419,8 +340,8 @@ func makeKubeconfigHTTPClient(inClusterAuth bool, kubeConfigPath string) (*http.
return &http.Client{Transport: tr}, nil
}
func makePrometheusCAClient(caFilePath string, tlsCertFilePath string, tlsKeyFilePath string) (*http.Client, error) {
data, err := os.ReadFile(caFilePath)
func makePrometheusCAClient(caFilename string) (*http.Client, error) {
data, err := ioutil.ReadFile(caFilename)
if err != nil {
return nil, fmt.Errorf("failed to read prometheus-ca-file: %v", err)
}
@@ -430,41 +351,11 @@ func makePrometheusCAClient(caFilePath string, tlsCertFilePath string, tlsKeyFil
return nil, fmt.Errorf("no certs found in prometheus-ca-file")
}
if (tlsCertFilePath != "") && (tlsKeyFilePath != "") {
tlsClientCerts, err := tls.LoadX509KeyPair(tlsCertFilePath, tlsKeyFilePath)
if err != nil {
return nil, fmt.Errorf("failed to read TLS key pair: %v", err)
}
return &http.Client{
Transport: &http.Transport{
TLSClientConfig: &tls.Config{
RootCAs: pool,
Certificates: []tls.Certificate{tlsClientCerts},
MinVersion: tls.VersionTLS12,
},
},
}, nil
}
return &http.Client{
Transport: &http.Transport{
TLSClientConfig: &tls.Config{
RootCAs: pool,
MinVersion: tls.VersionTLS12,
RootCAs: pool,
},
},
}, nil
}
func parseHeaderArgs(args []string) http.Header {
headers := make(http.Header, len(args))
for _, h := range args {
parts := strings.SplitN(h, "=", 2)
value := ""
if len(parts) > 1 {
value = parts[1]
}
headers.Add(parts[0], value)
}
return headers
}
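Since `--prometheus-header` is a repeatable `k=v` flag handled by `parseHeaderArgs` above, a hypothetical set of arguments (header names and values are placeholders) would look like:
```yaml
# Hypothetical args fragment: each --prometheus-header value becomes one HTTP
# header on requests to --prometheus-url; the flag splits on the first '=' only.
args:
- --prometheus-url=https://query.example.svc:9090/
- --prometheus-header=X-Scope-OrgID=tenant-a
- --prometheus-header=X-Custom-Note=retention=30d  # value keeps its own '='
```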

@@ -1,211 +0,0 @@
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"net/http"
"os"
"path/filepath"
"reflect"
"testing"
)
const certsDir = "testdata"
func TestMakeKubeconfigHTTPClient(t *testing.T) {
tests := []struct {
kubeconfigPath string
inClusterAuth bool
success bool
}{
{
kubeconfigPath: filepath.Join(certsDir, "kubeconfig"),
inClusterAuth: false,
success: true,
},
{
kubeconfigPath: filepath.Join(certsDir, "kubeconfig"),
inClusterAuth: true,
success: false,
},
{
kubeconfigPath: filepath.Join(certsDir, "kubeconfig-error"),
inClusterAuth: false,
success: false,
},
{
kubeconfigPath: "",
inClusterAuth: false,
success: true,
},
}
os.Setenv("KUBERNETES_SERVICE_HOST", "prometheus")
os.Setenv("KUBERNETES_SERVICE_PORT", "8080")
for _, test := range tests {
t.Logf("Running test for: inClusterAuth %v, kubeconfigPath %v", test.inClusterAuth, test.kubeconfigPath)
kubeconfigHTTPClient, err := makeKubeconfigHTTPClient(test.inClusterAuth, test.kubeconfigPath)
if test.success {
if err != nil {
t.Errorf("Error is %v, expected nil", err)
continue
}
if kubeconfigHTTPClient.Transport == nil {
if test.inClusterAuth || test.kubeconfigPath != "" {
t.Error("HTTP client Transport is nil, expected http.RoundTripper")
}
}
} else if err == nil {
t.Errorf("Error is nil, expected %v", err)
}
}
}
func TestMakePrometheusCAClient(t *testing.T) {
tests := []struct {
caFilePath string
tlsCertFilePath string
tlsKeyFilePath string
success bool
tlsUsed bool
}{
{
caFilePath: filepath.Join(certsDir, "ca.pem"),
tlsCertFilePath: filepath.Join(certsDir, "tlscert.crt"),
tlsKeyFilePath: filepath.Join(certsDir, "tlskey.key"),
success: true,
tlsUsed: true,
},
{
caFilePath: filepath.Join(certsDir, "ca-error.pem"),
tlsCertFilePath: filepath.Join(certsDir, "tlscert.crt"),
tlsKeyFilePath: filepath.Join(certsDir, "tlskey.key"),
success: false,
tlsUsed: true,
},
{
caFilePath: filepath.Join(certsDir, "ca.pem"),
tlsCertFilePath: filepath.Join(certsDir, "tlscert-error.crt"),
tlsKeyFilePath: filepath.Join(certsDir, "tlskey.key"),
success: false,
tlsUsed: true,
},
{
caFilePath: filepath.Join(certsDir, "ca.pem"),
tlsCertFilePath: "",
tlsKeyFilePath: "",
success: true,
tlsUsed: false,
},
}
for _, test := range tests {
t.Logf("Running test for: caFilePath %v, tlsCertFilePath %v, tlsKeyFilePath %v", test.caFilePath, test.tlsCertFilePath, test.tlsKeyFilePath)
prometheusCAClient, err := makePrometheusCAClient(test.caFilePath, test.tlsCertFilePath, test.tlsKeyFilePath)
if test.success {
if err != nil {
t.Errorf("Error is %v, expected nil", err)
continue
}
if prometheusCAClient.Transport.(*http.Transport).TLSClientConfig.RootCAs == nil {
t.Error("RootCAs is nil, expected *x509.CertPool")
continue
}
if test.tlsUsed {
if prometheusCAClient.Transport.(*http.Transport).TLSClientConfig.Certificates == nil {
t.Error("TLS certificates is nil, expected []tls.Certificate")
continue
}
} else {
if prometheusCAClient.Transport.(*http.Transport).TLSClientConfig.Certificates != nil {
t.Errorf("TLS certificates is %+v, expected nil", prometheusCAClient.Transport.(*http.Transport).TLSClientConfig.Certificates)
}
}
} else if err == nil {
t.Errorf("Error is nil, expected %v", err)
}
}
}
func TestParseHeaderArgs(t *testing.T) {
tests := []struct {
args []string
headers http.Header
}{
{
headers: http.Header{},
},
{
args: []string{"foo=bar"},
headers: http.Header{
"Foo": []string{"bar"},
},
},
{
args: []string{"foo"},
headers: http.Header{
"Foo": []string{""},
},
},
{
args: []string{"foo=bar", "foo=baz", "bux=baz=23"},
headers: http.Header{
"Foo": []string{"bar", "baz"},
"Bux": []string{"baz=23"},
},
},
}
for _, test := range tests {
got := parseHeaderArgs(test.args)
if !reflect.DeepEqual(got, test.headers) {
t.Errorf("Expected %#v but got %#v", test.headers, got)
}
}
}
func TestFlags(t *testing.T) {
cmd := &PrometheusAdapter{
PrometheusURL: "https://localhost",
}
cmd.addFlags()
flags := cmd.FlagSet
if flags == nil {
t.Fatalf("FlagSet should not be nil")
}
expectedFlags := []struct {
flag string
defaultValue string
}{
{flag: "v", defaultValue: "0"}, // logging flag (klog)
{flag: "prometheus-url", defaultValue: "https://localhost"}, // default is set in cmd
}
for _, e := range expectedFlags {
flag := flags.Lookup(e.flag)
if flag == nil {
t.Errorf("Flag %q expected to be present, was absent", e.flag)
continue
}
if flag.DefValue != e.defaultValue {
t.Errorf("Expected default value %q for flag %q, got %q", e.defaultValue, e.flag, flag.DefValue)
}
}
}

@@ -1,16 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIDdjCCAl4CCQDdbOsYxSKoeDANBgkqhkiG9w0BAQsFADB9MQswCQYDVQQGEwJV
UzENMAsGA1UEBwwEVGVzdDEaMBgGA1UECgwRUHJvbWV0aGV1c0FkYXB0ZXIxJDAi
BgNVBAMMG2s4cy1wcm9tZXRoZXVzLWFkYXB0ZXIudGVzdDEdMBsGCSqGSIb3DQEJ
ARYOdGVzdEB0ZXN0LnRlc3QwHhcNMjEwMjA5MTE0NzUwWhcNMjYwMjA4MTE0NzUw
WjB9MQswCQYDVQQGEwJVUzENMAsGA1UEBwwEVGVzdDEaMBgGA1UECgwRUHJvbWV0
aGV1c0FkYXB0ZXIxJDAiBgNVBAMMG2s4cy1wcm9tZXRoZXVzLWFkYXB0ZXIudGVz
dDEdMBsGCSqGSIb3DQEJARYOdGVzdEB0ZXN0LnRlc3QwggEiMA0GCSqGSIb3DQEB
AQUAA4IBDwAwggEKAoIBAQC24TDfTWLtYZPLDXqEjF7yn4K7oBOltX5Nngsk7LNd
AQELBQADggEBAD/bbeAZuyvtuEwdJ+4wkhBsHYXQ4OPxff1f3t4buIQFtnilWTXE
S60K3SEaQS8rOw8V9eHmzCsh3mPuVCoM7WsgKhp2mVhbGVZoBWBZ8kPQXqtsw+v4
tqTuJXnFPiF4clXb6Wp96Rc7nxzRAfn/6uVbSWds4JwRToUVszVOxe+yu0I84vuB
SHrRa077b1V+UT8otm+C5tC3jBZ0/IPRNWoT/rVcSoVLouX0fkbtxNF7c9v+PYg6
849A9T8cGKWKpKPGNEwBL9HYwtK6W0tTJr8A8pnAJ/UlniHA6u7SMHN+NoqBfi6M
bqq9lQ4QhjFrN2B1z9r3ak+EzQX1711TQ8w=
-----END CERTIFICATE-----

@@ -1,20 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIDLjCCAhYCCQDlnNCOw7JHFDANBgkqhkiG9w0BAQsFADBYMQswCQYDVQQGEwJV
UzENMAsGA1UECgwEVGVzdDEbMBkGA1UEAwwScHJvbWV0aGV1cy1hZGFwdGVyMR0w
GwYJKoZIhvcNAQkBFg50ZXN0QHRlc3QudGVzdDAgFw0yMTAyMjIyMDMxNTBaGA80
NzU5MDEyMDIwMzE1MFowWDELMAkGA1UEBhMCVVMxDTALBgNVBAoMBFRlc3QxGzAZ
BgNVBAMMEnByb21ldGhldXMtYWRhcHRlcjEdMBsGCSqGSIb3DQEJARYOdGVzdEB0
ZXN0LnRlc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCvn9HQlfhw
qDH77+eEFU+N+ztqCtat54neVez+sFa4dfYxuvVYK+nc+oh4E7SS4u+17eKV+QFb
ZhRhrOTNI+fmuO+xDPKyU1MuYUDfwasRfMDcUpssea2fO/SrsxHmX9oOam0kgefJ
8aSwI9TYw4N4kIpG+EGatogDlR2KXhrqsRfx5PUB4npFaCrdoglyvvAQig83Iq5L
+bCknSe6NUMiqtL9CcuLzzRKB3DMOrvbB0tJdb4uv/gS26sx/Hp/1ri73/tv4I9z
GLLoUUoff7vfvxrhiGR9i+qBOda7THbbmYBD54y+SR0dBa2uuDDX0JbgNNfXtjiG
52hvAnc1/wv7AgMBAAEwDQYJKoZIhvcNAQELBQADggEBACCysIzT9NKaniEvXtnx
Yx/jRxpiEEUGl8kg83a95X4f13jdPpUSwcn3/iK5SAE/7ntGVM+ajtlXrHGxwjB7
ER0w4WC6Ozypzoh/yI/VXs+DRJTJu8CBJOBRQEpzkK4r64HU8iN2c9lPp1+6b3Vy
jfbf3yfnRUbJztSjOFDUeA2t3FThVddhqif/oxj65s5R8p9HEurcwhA3Q6lE53yx
jgee8qV9HXAqa4V0qQQJ0tjcpajhQahDTtThRr+Z2H4TzQuwHa3dM7IIF6EPWsCo
DtbUXEPL7zT3EBH7THOdvNsFlD/SFmT2RwiQ5606bRAHwAzzxjxjxFTMl7r4tX5W
Ldc=
-----END CERTIFICATE-----

@@ -1,17 +0,0 @@
apiVersion: v1
kind: Config
clusters:
- name: test
cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURMakNDQWhZQ0NRRGxuTkNPdzdKSEZEQU5CZ2txaGtpRzl3MEJBUXNGQURCWU1Rc3dDUVlEVlFRR0V3SlYKVXpFTk1Bc0dBMVVFQ2d3RVZHVnpkREViTUJrR0ExVUVBd3dTY0hKdmJXVjBhR1YxY3kxaFpHRndkR1Z5TVIwdwpHd1lKS29aSWh2Y05BUWtCRmc1MFpYTjBRSFJsYzNRdWRHVnpkREFnRncweU1UQXlNakl5TURNeE5UQmFHQTgwCk56VTVNREV5TURJd016RTFNRm93V0RFTE1Ba0dBMVVFQmhNQ1ZWTXhEVEFMQmdOVkJBb01CRlJsYzNReEd6QVoKQmdOVkJBTU1FbkJ5YjIxbGRHaGxkWE10WVdSaGNIUmxjakVkTUJzR0NTcUdTSWIzRFFFSkFSWU9kR1Z6ZEVCMApaWE4wTG5SbGMzUXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDdm45SFFsZmh3CnFESDc3K2VFRlUrTit6dHFDdGF0NTRuZVZleitzRmE0ZGZZeHV2VllLK25jK29oNEU3U1M0dSsxN2VLVitRRmIKWmhSaHJPVE5JK2ZtdU8reERQS3lVMU11WVVEZndhc1JmTURjVXBzc2VhMmZPL1Nyc3hIbVg5b09hbTBrZ2VmSgo4YVN3STlUWXc0TjRrSXBHK0VHYXRvZ0RsUjJLWGhycXNSZng1UFVCNG5wRmFDcmRvZ2x5dnZBUWlnODNJcTVMCitiQ2tuU2U2TlVNaXF0TDlDY3VMenpSS0IzRE1PcnZiQjB0SmRiNHV2L2dTMjZzeC9IcC8xcmk3My90djRJOXoKR0xMb1VVb2ZmN3ZmdnhyaGlHUjlpK3FCT2RhN1RIYmJtWUJENTR5K1NSMGRCYTJ1dUREWDBKYmdOTmZYdGppRwo1Mmh2QW5jMS93djdBZ01CQUFFd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDQ3lzSXpUOU5LYW5pRXZYdG54Cll4L2pSeHBpRUVVR2w4a2c4M2E5NVg0ZjEzamRQcFVTd2NuMy9pSzVTQUUvN250R1ZNK2FqdGxYckhHeHdqQjcKRVIwdzRXQzZPenlwem9oL3lJL1ZYcytEUkpUSnU4Q0JKT0JSUUVwemtLNHI2NEhVOGlOMmM5bFBwMSs2YjNWeQpqZmJmM3lmblJVYkp6dFNqT0ZEVWVBMnQzRlRoVmRkaHFpZi9veGo2NXM1UjhwOUhFdXJjd2hBM1E2bEU1M3l4CmpnZWU4cVY5SFhBcWE0VjBxUVFKMHRqY3BhamhRYWhEVHRUaFJyK1oySDRUelF1d0hhM2RNN0lJRjZFUFdzQ28KRHRiVVhFUEw3elQzRUJIN1RIT2R2TnNGbEQvU0ZtVDJSd2lRNTYwNmJSQUh3QXp6eGp4anhGVE1sN3I0dFg1VwpMZGM9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
server: test.test
contexts:
- name: test
context:
cluster: test
user: test-user
current-context: test
users:
- name: test-user
user:
token: abcde12345

@@ -1,18 +0,0 @@
apiVersion: v1
kind: Config
clusters:
- name: test
cluster:
certificate-authority-data: abcde12345
server: test.test
contexts:
- name: test
context:
cluster: test
user: test-user
current-context: test
users:
- name: test-user
user:
token: abcde12345

@@ -1,24 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIFdjCCA14CCQC+svUhDVv51DANBgkqhkiG9w0BAQsFADB9MQswCQYDVQQGEwJV
UzENMAsGA1UEBwwEVGVzdDEaMBgGA1UECgwRUHJvbWV0aGV1c0FkYXB0ZXIxJDAi
BgNVBAMMG2s4cy1wcm9tZXRoZXVzLWFkYXB0ZXIudGVzdDEdMBsGCSqGSIb3DQEJ
ARYOdGVzdEB0ZXN0LnRlc3QwHhcNMjEwMjA5MTE0NDMyWhcNMjIwMjA5MTE0NDMy
WjB9MQswCQYDVQQGEwJVUzENMAsGA1UEBwwEVGVzdDEaMBgGA1UECgwRUHJvbWV0
aGV1c0FkYXB0ZXIxJDAiBgNVBAMMG2s4cy1wcm9tZXRoZXVzLWFkYXB0ZXIudGVz
dDEdMBsGCSqGSIb3DQEJARYOdGVzdEB0ZXN0LnRlc3QwggIiMA0GCSqGSIb3DQEB
AQUAA4ICDwAwggIKAoICAQDtLqKuIJqRETLOUSMDBtUDmIBaD2pG9Qv+cOBhQbVS
apZRWk8uKZKxqBOxgQ3UxY1szeVkx1Dphe3RN6ndmofiRc23ns1qncbDllgbtflk
GFvLKGcVBa6Z/lZ6FCZDWn6K6mJb0a7jtkOMG6+J/5eJHfZ23u/GYL1RKxH+qPPc
AwIDAQABMA0GCSqGSIb3DQEBCwUAA4ICAQC0oE4/Yel5GHeuKJ5+U9KEBsLCrBjj
jUW8or7SIZ1Gl4J2ak3p3WabhZjqOe10rsIXCNTaC2rEMvkNiP5Om0aeE6bx14jV
e9FIfJ7ytAL/PISaZXINgml05m4Su3bUaxCQpajBgqneNp4w57jfeFhcPt35j0H4
bxGk/hnIY1MmRULSOFBstmxNZSDNsGlTcZoN3+0KtuqLg6vTNuuJIyx1zd9/QT8t
RJ4fgrffJcPJscvq6hEdWmtcJhaDLWOEblsbFfN0J+zK07hHhqRavQrnwaBZgFWa
OIqAo6NfZONhCFy9mWFxLvQky1NXr60y220+N1GkEiLRQES7+p1pcKgn0v+f2EfW
uN6+LCppWX7FqtkB3OhZkHM6nbE/9GP5T76Kj30Fed/nHkTJ3QORRMQUTs4J6LNk
BD1i14MZMCn3UBZh8wX+d63xJHtfMvfac7L655GwLEnWW8JM8h8DDfRYM7JuEURG
pSbvoaygyvddT0FKRLcFGhfI7aBSWGcJH5rHdEcUQ+mnloD1RioQqTC+kxUSddJI
QNjgYivl9kwW9cJV1jzmKd8GQfg+j1X+jR9icNT5cacvclwnL0Mim0w/ZLfWQYmJ
q2ud+GS9+5RtPzWwHR60+Qs3dr8oQGh5wO12qUJ8d5MI+4YGWRjKRyYdio6g1Bhi
9WInD4va9cC7fw==
-----END CERTIFICATE-----

@@ -1,30 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIFLjCCAxYCCQDMlabDYYlDKzANBgkqhkiG9w0BAQsFADBYMQswCQYDVQQGEwJV
UzENMAsGA1UECgwEVGVzdDEbMBkGA1UEAwwScHJvbWV0aGV1cy1hZGFwdGVyMR0w
GwYJKoZIhvcNAQkBFg50ZXN0QHRlc3QudGVzdDAgFw0yMTAyMjIyMDMwMTNaGA80
NzU5MDEyMDIwMzAxM1owWDELMAkGA1UEBhMCVVMxDTALBgNVBAoMBFRlc3QxGzAZ
BgNVBAMMEnByb21ldGhldXMtYWRhcHRlcjEdMBsGCSqGSIb3DQEJARYOdGVzdEB0
ZXN0LnRlc3QwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDIJOO8apS3
84bssvnTVHp1VAiPg1tX+E6wPjayVPx+S4LgMKA4QM2kNKoPLQtvGV+lyqfhYp0H
cdGzCPVQ6aBlbHuOhInusJaXjgOlNTalThgigzky0t1jFxqaNjFtXqv6ME1Zcb9H
VrGMreEfNj8/Ijp7cfsBe7Jv2rBc06aidx9j66+oPTC98XcNnURUGO95UF8SjTQt
oi10m17uA7z/JUUSBvDJAg5Z62myZPU2stz38cuthROyEyXRBWimHh7bD17rwqhc
WRCfkFORWvwz9GMV5KfFCfWm2D2pm1f3ZWm5/FQbQrlxgxUagwDMoma+F6hQp84a
/sYPqqkWDRUK0NGZzWwxjfra8r8H2xFab+5ZFVr9+FhMgy6eelZ1JJc860s35Qpk
ZrSRH8RNMqLRG1cnDwHn9Md6joCZgLJhEW9L5xjpCVWkLXK59yA9ry5Jau9/2zDs
wlRzYI4TNazbVa84KliEjt3nZ6DgQh3PRtxHDrqJIQSSYu1MtUmPArLtEDDP/BqD
fGWCayc/SdxSWW9qU/aOq+D4KMKQXV44qc22f6rd/LKt/fcvDpbfcexXbeeNABQg
x1rAnhA8L/rYc4WTTbTrb8jwhaUoqJve6XOsHPVbk/L4CS9ReP1UwvhMM1C+Ast6
rr/a2bZkoMK+jmkA2QTwUsjvt/8G4dtVOwIDAQABMA0GCSqGSIb3DQEBCwUAA4IC
AQAvrWslCR21l8XRGI+l4z6bpZ7+089KQRemYcHWOKZ/nDTYcydDWQMdMtDZS43d
B2Wuu4UfOnK27YwuP4Ojf2hAzeaDBgv7xOcKRZ1K+zOCm+VqbtWV49c/Ow0Rc5KU
N7rApohdpXeJBp5TB1qQJsKcBv3gveLAivCFTeD0LiLMVdxjRRl9ZbMXtD3PABDC
KKFtE/n2MV/0wroMD9hHs9LckcNjHSrIFaQEy9cESn8q3kngFf3wvc2oM77yCOZ1
5y0AN+9ZXyMHHlMjye7GuW0Mpiwo1O4tW2brC0boqSmvSFNW9KRogKvu6Oij9Pm6
jJpuUsM0KOnID8m9jJ+Xb+DGC9cgLGHRJc+zw74X2KMQnH4/pZDNbIGG7d8xEoPn
RS/EbCoALmUbI2kqflVN88kN4ZUchsoHly5gIdidfo9yjeOihTF0xEEou/tzGW+K
AYxwy9uIYhz4lmH894H5nqJWPY/aLxD4M9nFW0yxczCQ8tpjwVYmP3/dCKp1IUXy
0h9TjyBRPv9O3JrTLTYBPLisLNqiU+YOZM6wgqmZTPtTCxKMmNlhGWKa8prAhMdb
GRxwkO6ylyL/j3J3HgcDHEC22/685L21HVFv8z/DMuj/eba4yyn1FBVXOU9hgLWS
LVLoVFFp7RaGSIECcqTyXldoZZpZrA89XDVuqSvHDiCOrg==
-----END CERTIFICATE-----

@@ -1,52 +0,0 @@
-----BEGIN PRIVATE KEY-----
MIIJQwIBADANBgkqhkiG9w0BAQEFAASCCS0wggkpAgEAAoICAQDIJOO8apS384bs
svnTVHp1VAiPg1tX+E6wPjayVPx+S4LgMKA4QM2kNKoPLQtvGV+lyqfhYp0HcdGz
CPVQ6aBlbHuOhInusJaXjgOlNTalThgigzky0t1jFxqaNjFtXqv6ME1Zcb9HVrGM
reEfNj8/Ijp7cfsBe7Jv2rBc06aidx9j66+oPTC98XcNnURUGO95UF8SjTQtoi10
m17uA7z/JUUSBvDJAg5Z62myZPU2stz38cuthROyEyXRBWimHh7bD17rwqhcWRCf
kFORWvwz9GMV5KfFCfWm2D2pm1f3ZWm5/FQbQrlxgxUagwDMoma+F6hQp84a/sYP
qqkWDRUK0NGZzWwxjfra8r8H2xFab+5ZFVr9+FhMgy6eelZ1JJc860s35QpkZrSR
H8RNMqLRG1cnDwHn9Md6joCZgLJhEW9L5xjpCVWkLXK59yA9ry5Jau9/2zDswlRz
YI4TNazbVa84KliEjt3nZ6DgQh3PRtxHDrqJIQSSYu1MtUmPArLtEDDP/BqDfGWC
ayc/SdxSWW9qU/aOq+D4KMKQXV44qc22f6rd/LKt/fcvDpbfcexXbeeNABQgx1rA
nhA8L/rYc4WTTbTrb8jwhaUoqJve6XOsHPVbk/L4CS9ReP1UwvhMM1C+Ast6rr/a
2bZkoMK+jmkA2QTwUsjvt/8G4dtVOwIDAQABAoICAQCFd9RG+exjH4uCnXfsbhGb
3IY47igj6frPnS1sjzAyKLkGOGcgHFcGgfhGVouhcxJNxW9e5hxBsq1c70RoyOOl
v0pGKCyzeB90wce8jFf8tK9zlH64XdY1FlsvK6Sagt+84Ck01J3yPOX6IppV7h8P
Qwws9j2lJ5A+919VB++/uCC+yZVCZEv03um9snq2ekp4ZBiCjpeVNumJMXOE1glb
PMdq1iYMZcqcPFkoFhtQdsbUsfJZrL0Nq6c0VJ8M6Fk7TGzIW+9aZiqnvd98t2go
XXkWSH148MNYmCvGx0lKOd7foF2WMFDqWbfhDiuiS0qoya3822qepff+ypgnlGHK
vr+9pLsWT7TG8pBfbXj47a7TwYAXkRMi+78vFQwoaeiKdehJM1YXZg9vBVS8BV3r
+0wYNE4WpdxUvX3aAnJO6ntRU6KCz3/D1+fxUT/w1rKX2Z1uTH5x2UxB6UUGDSF9
HiJfDp6RRtXHbQMR6uowM6UYBn0dl9Aso21oc2K4Gpx5QlsZaPi9M6BBMbPUhFcx
QH+w7fLmccwneJVGxjHkYOcLVLF7nuH5C2DsffrMubrgwuhSw2b8zy7ZpZ0eJ83D
CjJN9EgqwbmH0Or5N91YyVdR0Zm4EtODAo615O1kEMCKasKjpolOx/t9cgtbdkiq
pbLruOS+8jEG1erA7nYkQQKCAQEA4yba38hSkfIMUzfrlgF7AkXHbU4iINgKpHti
A9QrvEL9W4VHRiA5UTezzblyfMck9w/Hhx74pQjjGLj76L+8ZssCFI8ingNo3ItL
/AX3MN68NT4heiy8EvKRwRNWV05WEehZg9tTUKexIDRcDSr/9E+qG/cW5KOIQpYl
RIsKW2RUNFd3TVCQVUIzwe/0n6LuO2b7Btow+nfJ7U3mWQmHGYu7+SYjVlvIoQ68
jFGviGRineu/J7EiPND7qQzj78AtnXkULf+mjK2JdapRcn2EBNL34QepVCyjfXZf
QWm/ykI9nVOKRy1F38OhRHKrBICfWhN2Bgyvw3PPhGcb8EdknwKCAQEA4Y/2bpiz
S0H8LPUUsOLZWCadpp8yzvTerB/vuigkQiHM8w4QUwEpL2iXSF36MD8yV/c4ilVN
8m1p5prp1YtasTxsOWv7FDEs4oZfum1w3QsAvlrFRhctsACsZ1i4i3mvxQWJ955q
zZxs5vhO5CL24rVoQYGVQj/uCSHlyK7ko9AA8XkejTlZMJ5h0Mip+oWNxz3M/VTa
sJlYkQrbP0cWxCjKJLEmtVlVSCMeHoILGZzLcol6RVPbaAb57i27SRwY9YIFt1A+
OMpHFs4fgDa4A1IlobBwhhd1dAw3CL5QJN+ylDnBYsm1bwBRHx/AKUjpRv+7ZXQb
H9ngSivFHrXN5QKCAQBAqzUw9LUdO83qe0ck47MDiJ4oLlBlDVyqSz4yXNs+s8ux
nJYYDuCCkNstvJgtkfyiIenqPBUJ1yfgR/nf34ZhtXYYKE/wsIPQFhBB5ejkDuWC
OvgI8mdw9YItd7XjEThLzNx/P5fOpI823fE/BnjsMyn44DWyTiRi4KAnjXYbYsre
Q/CBIGiW/UwC8K+yKw6r9ruMzd2X0Ta5yq3Dt4Sw7ylK22LAGU1bHPjs8eyJZhr1
XsKDKFjY+55KGJNkFFBoPqpSFjByaI1z5FNfxwAo528Or8GzZyn8dBDWbKbfjFBC
VCBP90GnXOiytfqeQ4gaeuPlAQOhH3168mfv1kN9AoIBABOZzgFYVaRBjKdfeLfS
Tq7BVEvJY8HmN39fmxZjLJtukn/AhhygajLLdPH98KLGqxpHymsC9K4PYfd/GLjM
zkm+hW0L/BqKF2tr39+0aO1camkgPCpWE0tLE7A7XnYIUgTd8VpKMt/BKxl7FGfw
veF/gBrJJu5F3ep/PpeM0yOFDL/vFX+SLzTxXnClL1gsyOA6d5jACez0tmSMO/co
t0q+fKpploKFy8pj+tcN1+cW3/sJBU4G9nb4vDk9UhwNTAHxlYuTdoS61yidKtGa
b60iM1D0oyKT4Un/Ubz5xL8fjUYiKrLp8lE+Bs6clLdBtbvMtz0etMi0xy/K0+tS
Qx0CggEBALfe2TUfAt9aMqpcidaFwgNFTr61wgOeoLWLt559OeRIeZWKAEB81bnz
EJLxDF51Y2tLc/pEXrc0zJzzrFIfk/drYe0uD5RnJjRxE3+spwin6D32ZOZW3KSX
1zReW1On80o/LJU6nyDJrNJvay2eL9PyWi47nBdO7MRZi53im72BmmwxaAKXf40l
StykjloyFdI+eyGyQUqcs4nFHd3WWmV+lLIDhGDlF5EBUgueCJz1xO54oPj1PKGl
vDs7JXdJiS3HDf20GREGwvL1y1kewX+KqdO7aBZhLN3Rx/fZnS/UFC3xCtbikuG4
LeU1NmvuCRmWmrgEkqiKs3jgjbEPVQI=
-----END PRIVATE KEY-----

@@ -8,7 +8,7 @@ import (
"github.com/spf13/cobra"
yaml "gopkg.in/yaml.v2"
"sigs.k8s.io/prometheus-adapter/cmd/config-gen/utils"
"github.com/directxman12/k8s-prometheus-adapter/cmd/config-gen/utils"
)
func main() {

@@ -4,66 +4,65 @@ import (
"fmt"
"time"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
. "github.com/directxman12/k8s-prometheus-adapter/pkg/config"
pmodel "github.com/prometheus/common/model"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/config"
)
// DefaultConfig returns a configuration equivalent to the former
// pre-advanced-config settings. This means that "normal" series labels
// will be of the form `<prefix><<.Resource>>`, cadvisor series will be
// of the form `container_`, and have the label `pod`. Any series ending
// of the form `container_`, and have the label `pod_name`. Any series ending
// in total will be treated as a rate metric.
func DefaultConfig(rateInterval time.Duration, labelPrefix string) *config.MetricsDiscoveryConfig {
return &config.MetricsDiscoveryConfig{
Rules: []config.DiscoveryRule{
func DefaultConfig(rateInterval time.Duration, labelPrefix string) *MetricsDiscoveryConfig {
return &MetricsDiscoveryConfig{
Rules: []DiscoveryRule{
// container seconds rate metrics
{
SeriesQuery: string(prom.MatchSeries("", prom.NameMatches("^container_.*"), prom.LabelNeq("container", "POD"), prom.LabelNeq("namespace", ""), prom.LabelNeq("pod", ""))),
Resources: config.ResourceMapping{
Overrides: map[string]config.GroupResource{
SeriesQuery: string(prom.MatchSeries("", prom.NameMatches("^container_.*"), prom.LabelNeq("container_name", "POD"), prom.LabelNeq("namespace", ""), prom.LabelNeq("pod_name", ""))),
Resources: ResourceMapping{
Overrides: map[string]GroupResource{
"namespace": {Resource: "namespace"},
"pod": {Resource: "pod"},
"pod_name": {Resource: "pod"},
},
},
Name: config.NameMapping{Matches: "^container_(.*)_seconds_total$"},
MetricsQuery: fmt.Sprintf(`sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[%s])) by (<<.GroupBy>>)`, pmodel.Duration(rateInterval).String()),
Name: NameMapping{Matches: "^container_(.*)_seconds_total$"},
MetricsQuery: fmt.Sprintf(`sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[%s])) by (<<.GroupBy>>)`, pmodel.Duration(rateInterval).String()),
},
// container rate metrics
{
SeriesQuery: string(prom.MatchSeries("", prom.NameMatches("^container_.*"), prom.LabelNeq("container", "POD"), prom.LabelNeq("namespace", ""), prom.LabelNeq("pod", ""))),
SeriesFilters: []config.RegexFilter{{IsNot: "^container_.*_seconds_total$"}},
Resources: config.ResourceMapping{
Overrides: map[string]config.GroupResource{
SeriesQuery: string(prom.MatchSeries("", prom.NameMatches("^container_.*"), prom.LabelNeq("container_name", "POD"), prom.LabelNeq("namespace", ""), prom.LabelNeq("pod_name", ""))),
SeriesFilters: []RegexFilter{{IsNot: "^container_.*_seconds_total$"}},
Resources: ResourceMapping{
Overrides: map[string]GroupResource{
"namespace": {Resource: "namespace"},
"pod": {Resource: "pod"},
"pod_name": {Resource: "pod"},
},
},
Name: config.NameMapping{Matches: "^container_(.*)_total$"},
MetricsQuery: fmt.Sprintf(`sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[%s])) by (<<.GroupBy>>)`, pmodel.Duration(rateInterval).String()),
Name: NameMapping{Matches: "^container_(.*)_total$"},
MetricsQuery: fmt.Sprintf(`sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[%s])) by (<<.GroupBy>>)`, pmodel.Duration(rateInterval).String()),
},
// container non-cumulative metrics
{
SeriesQuery: string(prom.MatchSeries("", prom.NameMatches("^container_.*"), prom.LabelNeq("container", "POD"), prom.LabelNeq("namespace", ""), prom.LabelNeq("pod", ""))),
SeriesFilters: []config.RegexFilter{{IsNot: "^container_.*_total$"}},
Resources: config.ResourceMapping{
Overrides: map[string]config.GroupResource{
SeriesQuery: string(prom.MatchSeries("", prom.NameMatches("^container_.*"), prom.LabelNeq("container_name", "POD"), prom.LabelNeq("namespace", ""), prom.LabelNeq("pod_name", ""))),
SeriesFilters: []RegexFilter{{IsNot: "^container_.*_total$"}},
Resources: ResourceMapping{
Overrides: map[string]GroupResource{
"namespace": {Resource: "namespace"},
"pod": {Resource: "pod"},
"pod_name": {Resource: "pod"},
},
},
Name: config.NameMapping{Matches: "^container_(.*)$"},
MetricsQuery: `sum(<<.Series>>{<<.LabelMatchers>>,container!="POD"}) by (<<.GroupBy>>)`,
Name: NameMapping{Matches: "^container_(.*)$"},
MetricsQuery: `sum(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}) by (<<.GroupBy>>)`,
},
// normal non-cumulative metrics
{
SeriesQuery: string(prom.MatchSeries("", prom.LabelNeq(fmt.Sprintf("%snamespace", labelPrefix), ""), prom.NameNotMatches("^container_.*"))),
SeriesFilters: []config.RegexFilter{{IsNot: ".*_total$"}},
Resources: config.ResourceMapping{
SeriesFilters: []RegexFilter{{IsNot: ".*_total$"}},
Resources: ResourceMapping{
Template: fmt.Sprintf("%s<<.Resource>>", labelPrefix),
},
MetricsQuery: "sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)",
@@ -72,9 +71,9 @@ func DefaultConfig(rateInterval time.Duration, labelPrefix string) *config.Metri
// normal rate metrics
{
SeriesQuery: string(prom.MatchSeries("", prom.LabelNeq(fmt.Sprintf("%snamespace", labelPrefix), ""), prom.NameNotMatches("^container_.*"))),
SeriesFilters: []config.RegexFilter{{IsNot: ".*_seconds_total"}},
Name: config.NameMapping{Matches: "^(.*)_total$"},
Resources: config.ResourceMapping{
SeriesFilters: []RegexFilter{{IsNot: ".*_seconds_total"}},
Name: NameMapping{Matches: "^(.*)_total$"},
Resources: ResourceMapping{
Template: fmt.Sprintf("%s<<.Resource>>", labelPrefix),
},
MetricsQuery: fmt.Sprintf("sum(rate(<<.Series>>{<<.LabelMatchers>>}[%s])) by (<<.GroupBy>>)", pmodel.Duration(rateInterval).String()),
@@ -83,38 +82,38 @@ func DefaultConfig(rateInterval time.Duration, labelPrefix string) *config.Metri
// seconds rate metrics
{
SeriesQuery: string(prom.MatchSeries("", prom.LabelNeq(fmt.Sprintf("%snamespace", labelPrefix), ""), prom.NameNotMatches("^container_.*"))),
Name: config.NameMapping{Matches: "^(.*)_seconds_total$"},
Resources: config.ResourceMapping{
Name: NameMapping{Matches: "^(.*)_seconds_total$"},
Resources: ResourceMapping{
Template: fmt.Sprintf("%s<<.Resource>>", labelPrefix),
},
MetricsQuery: fmt.Sprintf("sum(rate(<<.Series>>{<<.LabelMatchers>>}[%s])) by (<<.GroupBy>>)", pmodel.Duration(rateInterval).String()),
},
},
ResourceRules: &config.ResourceRules{
CPU: config.ResourceRule{
ResourceRules: &ResourceRules{
CPU: ResourceRule{
ContainerQuery: fmt.Sprintf("sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>}[%s])) by (<<.GroupBy>>)", pmodel.Duration(rateInterval).String()),
NodeQuery: fmt.Sprintf("sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>, id='/'}[%s])) by (<<.GroupBy>>)", pmodel.Duration(rateInterval).String()),
Resources: config.ResourceMapping{
Overrides: map[string]config.GroupResource{
Resources: ResourceMapping{
Overrides: map[string]GroupResource{
"namespace": {Resource: "namespace"},
"pod": {Resource: "pod"},
"pod_name": {Resource: "pod"},
"instance": {Resource: "node"},
},
},
ContainerLabel: fmt.Sprintf("%scontainer", labelPrefix),
ContainerLabel: fmt.Sprintf("%scontainer_name", labelPrefix),
},
Memory: config.ResourceRule{
Memory: ResourceRule{
ContainerQuery: "sum(container_memory_working_set_bytes{<<.LabelMatchers>>}) by (<<.GroupBy>>)",
NodeQuery: "sum(container_memory_working_set_bytes{<<.LabelMatchers>>,id='/'}) by (<<.GroupBy>>)",
Resources: config.ResourceMapping{
Overrides: map[string]config.GroupResource{
Resources: ResourceMapping{
Overrides: map[string]GroupResource{
"namespace": {Resource: "namespace"},
"pod": {Resource: "pod"},
"pod_name": {Resource: "pod"},
"instance": {Resource: "node"},
},
},
ContainerLabel: fmt.Sprintf("%scontainer", labelPrefix),
ContainerLabel: fmt.Sprintf("%scontainer_name", labelPrefix),
},
Window: pmodel.Duration(rateInterval),
},

deploy/Dockerfile (new file)
@@ -0,0 +1,6 @@
ARG BASEIMAGE
FROM ${BASEIMAGE}
ARG ARCH
COPY ${ARCH}/adapter /
USER 1001:1001
ENTRYPOINT ["/adapter"]

@@ -1,11 +1,20 @@
Example Deployment
==================
1. Make sure you've built the included Dockerfile with `TAG=latest make container`. The image should be tagged as `registry.k8s.io/prometheus-adapter/staging-prometheus-adapter:latest`.
1. Make sure you've built the included Dockerfile with `make docker-build`. The image should be tagged as `directxman12/k8s-prometheus-adapter:latest`.
2. `kubectl create namespace monitoring` to ensure that the namespace that we're installing
2. Create a secret called `cm-adapter-serving-certs` with two values:
`serving.crt` and `serving.key`. These are the serving certificates used
by the adapter for serving HTTPS traffic. For more information on how to
generate these certificates, see the [auth concepts
documentation](https://github.com/kubernetes-incubator/apiserver-builder/blob/master/docs/concepts/auth.md)
in the apiserver-builder repository.
The kube-prometheus project publishes two scripts, [gencerts.sh](https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus/experimental/custom-metrics-api/gencerts.sh)
and [deploy.sh](https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus/experimental/custom-metrics-api/deploy.sh), which create the `cm-adapter-serving-certs` secret; a minimal hand-rolled alternative is sketched after this list.
3. `kubectl create namespace custom-metrics` to ensure that the namespace that we're installing
the custom metrics adapter in exists.
3. `kubectl create -f manifests/`, modifying the Deployment as necessary to
4. `kubectl create -f manifests/`, modifying the Deployment as necessary to
point to your Prometheus server, and the ConfigMap to contain your desired
metrics discovery configuration.
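
If you'd rather not rely on those scripts, here is a minimal sketch of creating the secret by hand. The self-signed certificate, file names, and the `custom-metrics` namespace are assumptions; match them to wherever you deploy the adapter, and prefer a CA-signed certificate for anything beyond experimentation.

```shell
# Generate a throwaway self-signed serving certificate (sketch only).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout serving.key -out serving.crt \
  -subj "/CN=custom-metrics-apiserver.custom-metrics.svc"

# Store it in the secret that the adapter Deployment mounts.
kubectl -n custom-metrics create secret generic cm-adapter-serving-certs \
  --from-file=serving.crt=serving.crt \
  --from-file=serving.key=serving.key
```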

View file

@ -1,17 +0,0 @@
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: prometheus-adapter
namespace: monitoring
version: v1beta1
versionPriority: 100

View file

@ -1,22 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
namespace: monitoring
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch

View file

@ -1,17 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: resource-metrics:system:auth-delegator
namespace: monitoring
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: prometheus-adapter
namespace: monitoring

View file

@ -1,17 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: prometheus-adapter
namespace: monitoring
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus-adapter
subjects:
- kind: ServiceAccount
name: prometheus-adapter
namespace: monitoring

View file

@ -1,15 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: resource-metrics-server-resources
rules:
- apiGroups:
- metrics.k8s.io
resources:
- '*'
verbs:
- '*'

View file

@ -1,20 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: prometheus-adapter
rules:
- apiGroups:
- ""
resources:
- nodes
- namespaces
- pods
- services
verbs:
- get
- list
- watch

View file

@ -1,53 +0,0 @@
apiVersion: v1
data:
config.yaml: |-
"resourceRules":
"cpu":
"containerLabel": "container"
"containerQuery": |
sum by (<<.GroupBy>>) (
irate (
container_cpu_usage_seconds_total{<<.LabelMatchers>>,container!="",pod!=""}[4m]
)
)
"nodeQuery": |
sum by (<<.GroupBy>>) (
irate(
node_cpu_usage_seconds_total{<<.LabelMatchers>>}[4m]
)
)
"resources":
"overrides":
"namespace":
"resource": "namespace"
"node":
"resource": "node"
"pod":
"resource": "pod"
"memory":
"containerLabel": "container"
"containerQuery": |
sum by (<<.GroupBy>>) (
container_memory_working_set_bytes{<<.LabelMatchers>>,container!="",pod!=""}
)
"nodeQuery": |
sum by (<<.GroupBy>>) (
node_memory_working_set_bytes{<<.LabelMatchers>>}
)
"resources":
"overrides":
"node":
"resource": "node"
"namespace":
"resource": "namespace"
"pod":
"resource": "pod"
"window": "5m"
kind: ConfigMap
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: adapter-config
namespace: monitoring

View file

@ -1,12 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: prometheus
name: custom-metrics:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: prometheus
namespace: prometheus-adapter-e2e
name: custom-metrics-apiserver
namespace: custom-metrics

View file

@ -1,11 +1,7 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: resource-metrics-auth-reader
name: custom-metrics-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
@ -13,5 +9,5 @@ roleRef:
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: prometheus-adapter
namespace: monitoring
name: custom-metrics-apiserver
namespace: custom-metrics

View file

@ -0,0 +1,51 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: custom-metrics-apiserver
name: custom-metrics-apiserver
namespace: custom-metrics
spec:
replicas: 1
selector:
matchLabels:
app: custom-metrics-apiserver
template:
metadata:
labels:
app: custom-metrics-apiserver
name: custom-metrics-apiserver
spec:
serviceAccountName: custom-metrics-apiserver
containers:
- name: custom-metrics-apiserver
image: directxman12/k8s-prometheus-adapter-amd64
args:
- --secure-port=6443
- --tls-cert-file=/var/run/serving-cert/serving.crt
- --tls-private-key-file=/var/run/serving-cert/serving.key
- --logtostderr=true
- --prometheus-url=http://prometheus.prom.svc:9090/
- --metrics-relist-interval=1m
- --v=10
- --config=/etc/adapter/config.yaml
ports:
- containerPort: 6443
volumeMounts:
- mountPath: /var/run/serving-cert
name: volume-serving-cert
readOnly: true
- mountPath: /etc/adapter/
name: config
readOnly: true
- mountPath: /tmp
name: tmp-vol
volumes:
- name: volume-serving-cert
secret:
secretName: cm-adapter-serving-certs
- name: config
configMap:
name: adapter-config
- name: tmp-vol
emptyDir: {}

View file

@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: custom-metrics-resource-reader
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: custom-metrics-resource-reader
subjects:
- kind: ServiceAccount
name: custom-metrics-apiserver
namespace: custom-metrics

View file

@ -0,0 +1,5 @@
kind: ServiceAccount
apiVersion: v1
metadata:
name: custom-metrics-apiserver
namespace: custom-metrics

View file

@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
name: custom-metrics-apiserver
namespace: custom-metrics
spec:
ports:
- port: 443
targetPort: 6443
selector:
app: custom-metrics-apiserver

View file

@ -0,0 +1,42 @@
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
name: v1beta1.custom.metrics.k8s.io
spec:
service:
name: custom-metrics-apiserver
namespace: custom-metrics
group: custom.metrics.k8s.io
version: v1beta1
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 100
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
name: v1beta2.custom.metrics.k8s.io
spec:
service:
name: custom-metrics-apiserver
namespace: custom-metrics
group: custom.metrics.k8s.io
version: v1beta2
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 200
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
name: v1beta1.external.metrics.k8s.io
spec:
service:
name: custom-metrics-apiserver
namespace: custom-metrics
group: external.metrics.k8s.io
version: v1beta1
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 100
---

View file

@ -0,0 +1,10 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: custom-metrics-server-resources
rules:
- apiGroups:
- custom.metrics.k8s.io
- external.metrics.k8s.io
resources: ["*"]
verbs: ["*"]

View file

@ -0,0 +1,117 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: adapter-config
namespace: custom-metrics
data:
config.yaml: |
rules:
- seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}'
seriesFilters: []
resources:
overrides:
namespace:
resource: namespace
pod_name:
resource: pod
name:
matches: ^container_(.*)_seconds_total$
as: ""
metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[1m])) by (<<.GroupBy>>)
- seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}'
seriesFilters:
- isNot: ^container_.*_seconds_total$
resources:
overrides:
namespace:
resource: namespace
pod_name:
resource: pod
name:
matches: ^container_(.*)_total$
as: ""
metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[1m])) by (<<.GroupBy>>)
- seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}'
seriesFilters:
- isNot: ^container_.*_total$
resources:
overrides:
namespace:
resource: namespace
pod_name:
resource: pod
name:
matches: ^container_(.*)$
as: ""
metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}) by (<<.GroupBy>>)
- seriesQuery: '{namespace!="",__name__!~"^container_.*"}'
seriesFilters:
- isNot: .*_total$
resources:
template: <<.Resource>>
name:
matches: ""
as: ""
metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)
- seriesQuery: '{namespace!="",__name__!~"^container_.*"}'
seriesFilters:
- isNot: .*_seconds_total
resources:
template: <<.Resource>>
name:
matches: ^(.*)_total$
as: ""
metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>)
- seriesQuery: '{namespace!="",__name__!~"^container_.*"}'
seriesFilters: []
resources:
template: <<.Resource>>
name:
matches: ^(.*)_seconds_total$
as: ""
metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>)
resourceRules:
cpu:
containerQuery: sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>)
nodeQuery: sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>, id='/'}[1m])) by (<<.GroupBy>>)
resources:
overrides:
instance:
resource: node
namespace:
resource: namespace
pod_name:
resource: pod
containerLabel: container_name
memory:
containerQuery: sum(container_memory_working_set_bytes{<<.LabelMatchers>>}) by (<<.GroupBy>>)
nodeQuery: sum(container_memory_working_set_bytes{<<.LabelMatchers>>,id='/'}) by (<<.GroupBy>>)
resources:
overrides:
instance:
resource: node
namespace:
resource: namespace
pod_name:
resource: pod
containerLabel: container_name
window: 1m
externalRules:
- seriesQuery: '{__name__=~"^.*_queue_(length|size)$",namespace!=""}'
resources:
overrides:
namespace:
resource: namespace
name:
matches: ^.*_queue_(length|size)$
as: "$0"
metricsQuery: max(<<.Series>>{<<.LabelMatchers>>})
- seriesQuery: '{__name__=~"^.*_queue$",namespace!=""}'
resources:
overrides:
namespace:
resource: namespace
name:
matches: ^.*_queue$
as: "$0"
metricsQuery: max(<<.Series>>{<<.LabelMatchers>>})

View file

@ -0,0 +1,15 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: custom-metrics-resource-reader
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
verbs:
- get
- list
- watch

View file

@ -1,89 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: prometheus-adapter
namespace: monitoring
spec:
replicas: 2
selector:
matchLabels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
template:
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
spec:
automountServiceAccountToken: true
containers:
- args:
- --cert-dir=/var/run/serving-cert
- --config=/etc/adapter/config.yaml
- --metrics-relist-interval=1m
- --prometheus-url=https://prometheus.monitoring.svc:9090/
- --secure-port=6443
- --tls-cipher-suites=TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
image: registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.12.0
livenessProbe:
failureThreshold: 5
httpGet:
path: /livez
port: https
scheme: HTTPS
initialDelaySeconds: 30
periodSeconds: 5
name: prometheus-adapter
ports:
- containerPort: 6443
name: https
readinessProbe:
failureThreshold: 5
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 30
periodSeconds: 5
resources:
requests:
cpu: 102m
memory: 180Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /tmp
name: tmpfs
readOnly: false
- mountPath: /var/run/serving-cert
name: volume-serving-cert
readOnly: false
- mountPath: /etc/adapter
name: config
readOnly: false
nodeSelector:
kubernetes.io/os: linux
securityContext: {}
serviceAccountName: prometheus-adapter
volumes:
- emptyDir: {}
name: tmpfs
- emptyDir: {}
name: volume-serving-cert
- configMap:
name: adapter-config
name: config

View file

@ -2,9 +2,6 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: hpa-controller-custom-metrics
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole

View file

@ -1,21 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: prometheus-adapter
namespace: monitoring
spec:
egress:
- {}
ingress:
- {}
podSelector:
matchLabels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
policyTypes:
- Egress
- Ingress

View file

@ -1,15 +0,0 @@
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: prometheus-adapter
namespace: monitoring
spec:
minAvailable: 1
selector:
matchLabels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter

View file

@ -1,10 +0,0 @@
apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: prometheus-adapter
namespace: monitoring

View file

@ -1,17 +0,0 @@
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: prometheus-adapter
namespace: monitoring
spec:
ports:
- name: https
port: 443
targetPort: 6443
selector:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter

View file

@ -31,13 +31,13 @@ might look like:
```yaml
rules:
# this rule matches cumulative cAdvisor metrics measured in seconds
- seriesQuery: '{__name__=~"^container_.*",container!="POD",namespace!="",pod!=""}'
- seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}'
resources:
# skip specifying generic resource<->label mappings, and just
# attach only pod and namespace resources by mapping label names to group-resources
overrides:
namespace: {resource: "namespace"}
pod: {resource: "pod"}
namespace: {resource: "namespace"},
pod_name: {resource: "pod"},
# specify that the `container_` and `_seconds_total` suffixes should be removed.
# this also introduces an implicit filter on metric family names
name:
@ -48,7 +48,7 @@ rules:
# This is a Go template where the `.Series` and `.LabelMatchers` string values
# are available, and the delimiters are `<<` and `>>` to avoid conflicts with
# the prometheus query language
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[2m])) by (<<.GroupBy>>)"
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[2m])) by (<<.GroupBy>>)"
```
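
For a concrete sense of the expansion, consider a query for the resulting `cpu_usage` metric on pods `pod1` and `pod2` in namespace `somens` (the pod and namespace names are illustrative, and the label names follow the master-side variant of the rule above):

```shell
# Illustrative expansion of the metricsQuery template for the rule above:
#   <<.Series>>        -> container_cpu_usage_seconds_total
#   <<.LabelMatchers>> -> namespace="somens",pod=~"pod1|pod2"
#   <<.GroupBy>>       -> pod
# You can try the resulting PromQL against a port-forwarded Prometheus:
curl -s 'http://localhost:9090/api/v1/query' --data-urlencode \
  'query=sum(rate(container_cpu_usage_seconds_total{namespace="somens",pod=~"pod1|pod2",container!="POD"}[2m])) by (pod)'
```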
Discovery
@ -83,7 +83,7 @@ For example:
```yaml
# match all cAdvisor metrics that aren't measured in seconds
seriesQuery: '{__name__=~"^container_.*_total",container!="POD",namespace!="",pod!=""}'
seriesQuery: '{__name__=~"^container_.*_total",container_name!="POD",namespace!="",pod_name!=""}'
seriesFilters:
- isNot: "^container_.*_seconds_total"
```
@ -211,5 +211,5 @@ For example:
```yaml
# convert cumulative cAdvisor metrics into rates calculated over 2 minutes
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[2m])) by (<<.GroupBy>>)"
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[2m])) by (<<.GroupBy>>)"
```

View file

@ -1,84 +0,0 @@
External Metrics
===========
It's possible to configure [Autoscaling on metrics not related to Kubernetes objects](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects) in Kubernetes. This is done through the dedicated External Metrics API. Using external metrics with the adapter requires you to configure special `external` rules in its configuration.
The configuration for `external` metrics rules is almost identical to the normal `rules`:
```yaml
externalRules:
- seriesQuery: '{__name__="queue_consumer_lag",name!=""}'
metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (name)
resources:
overrides: { namespace: {resource: "namespace"} }
```
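
With such a rule in place, an HPA can consume the metric via the `External` source type. Here's a minimal sketch using the `autoscaling/v2` syntax (all names are illustrative; the example further below uses the older annotation-based form):

```shell
# Sketch: an autoscaling/v2 HPA scaling a Deployment on the external metric above.
kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-consumer
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-consumer
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: queue_consumer_lag
        selector:
          matchLabels:
            name: my-queue
      target:
        type: AverageValue
        averageValue: "30"
EOF
```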
Namespacing
-----------
All Kubernetes Horizontal Pod Autoscaler (HPA) resources are namespaced, so when you create an HPA that
references an external metric, the adapter automatically adds a `namespace` label to the `seriesQuery` you have configured.
This is done because the External Metrics API specification *requires* a namespace component in the URL:
```shell
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/queue_consumer_lag"
```
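For the rule above, the adapter roughly translates that request into a PromQL query with the HPA's namespace injected as a label matcher (an illustration, not verbatim adapter output; `jq` is assumed to be installed):

```shell
# The request above is (roughly) evaluated as:
#   sum(queue_consumer_lag{namespace="default"}) by (name)
# Pipe through jq to make the returned items readable:
kubectl get --raw \
  "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/queue_consumer_lag" | jq '.items'
```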
Cross-Namespace or No Namespace Queries
---------------------------------------
A fairly common scenario is a `workload` in one namespace that needs to scale based on a metric from a different namespace. This is normally not
possible with `external` rules because the `namespace` label is set to match that of the source `workload`.
However, you can explicitly disable the automatic addition of the HPA's namespace to the query, and instead opt to set no namespace at all, or to target a different namespace.
This is done by setting `namespaced: false` in the `resources` section of the `external` rule:
```yaml
# rules: ...
externalRules:
- seriesQuery: '{__name__="queue_depth",name!=""}'
metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (name)
resources:
namespaced: false
```
Given the `external` rule defined above, any `External` metric query for `queue_depth` will simply ignore the source `namespace` of the HPA. This allows you to explicitly omit the namespace from an external query, or to set it to a namespace different from that of the HPA.
```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: external-queue-scaler
# the HPA and scaleTargetRef must exist in a namespace
namespace: default
annotations:
# The "External" metric below targets a metricName that has namespaced=false
# and this allows the metric to explicitly query a different
# namespace than that of the HPA and scaleTargetRef
autoscaling.alpha.kubernetes.io/metrics: |
[
{
"type": "External",
"external": {
"metricName": "queue_depth",
"metricSelector": {
"matchLabels": {
"namespace": "queue",
"name": "my-sample-queue"
}
},
"targetAverageValue": "50"
}
}
]
spec:
maxReplicas: 5
minReplicas: 1
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: my-app
```
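
Since the rule opts out of namespace injection, you can sanity-check it by querying the external metrics API directly with an explicit selector (the metric and label names follow the example above):

```shell
# With namespaced: false, the 'default' segment is required by the URL shape
# but is not injected into the PromQL query; the labelSelector below is.
kubectl get --raw \
  "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/queue_depth?labelSelector=name%3Dmy-sample-queue"
```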

View file

@ -10,13 +10,13 @@ rules:
# can be found in pkg/config/default.go
# this rule matches cumulative cAdvisor metrics measured in seconds
- seriesQuery: '{__name__=~"^container_.*",container!="POD",namespace!="",pod!=""}'
- seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}'
resources:
# skip specifying generic resource<->label mappings, and just
# attach only pod and namespace resources by mapping label names to group-resources
overrides:
namespace: {resource: "namespace"}
pod: {resource: "pod"}
namespace: {resource: "namespace"},
pod_name: {resource: "pod"},
# specify that the `container_` and `_seconds_total` suffixes should be removed.
# this also introduces an implicit filter on metric family names
name:
@ -27,19 +27,19 @@ rules:
# This is a Go template where the `.Series` and `.LabelMatchers` string values
# are available, and the delimiters are `<<` and `>>` to avoid conflicts with
# the prometheus query language
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[2m])) by (<<.GroupBy>>)"
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[2m])) by (<<.GroupBy>>)"
# this rule matches cumulative cAdvisor metrics not measured in seconds
- seriesQuery: '{__name__=~"^container_.*_total",container!="POD",namespace!="",pod!=""}'
- seriesQuery: '{__name__=~"^container_.*_total",container_name!="POD",namespace!="",pod_name!=""}'
resources:
overrides:
namespace: {resource: "namespace"}
pod: {resource: "pod"}
namespace: {resource: "namespace"},
pod_name: {resource: "pod"},
seriesFilters:
# since this is a superset of the query above, we introduce an additional filter here
- isNot: "^container_.*_seconds_total$"
name: {matches: "^container_(.*)_total$"}
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[2m])) by (<<.GroupBy>>)"
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[2m])) by (<<.GroupBy>>)"
# this rule matches cumulative non-cAdvisor metrics
- seriesQuery: '{namespace!="",__name__!="^container_.*"}'
@ -52,7 +52,7 @@ rules:
# Group will be converted to a form acceptable for use as a label automatically.
template: "<<.Resource>>"
# if we wanted to, we could also specify overrides here
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[2m])) by (<<.GroupBy>>)"
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[2m])) by (<<.GroupBy>>)"
# this rule matches only a single metric, explicitly naming it something else
# Its series query *must* return only a single metric family
@ -63,21 +63,7 @@ rules:
overrides:
# this should still resolve in our cluster
brand: {group: "cheese.io", resource: "brand"}
metricsQuery: 'count(cheddar{sharp="true"})'
# external rules are not tied to a Kubernetes resource and can reference any metric
# https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects
externalRules:
- seriesQuery: '{__name__="queue_consumer_lag",name!=""}'
metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (name)
- seriesQuery: '{__name__="queue_depth",topic!=""}'
metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (name)
# Kubernetes metric queries include a namespace in the query by default
# but you can explicitly disable namespaces if needed with "namespaced: false"
# this is useful if you have an HPA with an external metric in namespace A
# but want to query for metrics from namespace B
resources:
namespaced: false
metricQuery: 'count(cheddar{sharp="true"})'
# TODO: should we be able to map to a constant instance of a resource
# (e.g. `resources: {constant: [{resource: "namespace", name: "kube-system"}]}`)?

View file

@ -34,24 +34,20 @@ significantly different.
In order to follow this walkthrough, you'll need container images for
Prometheus and the custom metrics adapter.
The [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator),
The [Prometheus Operator](https://coreos.com/operators/prometheus/docs/latest/),
makes it easy to get up and running with Prometheus. This walkthrough
will assume you're planning on doing that -- if you've deployed it by hand
instead, you'll need to make a few adjustments to the way you expose
metrics to Prometheus.
The adapter has different images for each arch, which can be found at
`gcr.io/k8s-staging-prometheus-adapter/prometheus-adapter-${ARCH}`. For
instance, if you're on an x86_64 machine, use
`gcr.io/k8s-staging-prometheus-adapter/prometheus-adapter-amd64` image.
`directxman12/k8s-prometheus-adapter-${ARCH}`. For instance, if you're on
an x86_64 machine, use the `directxman12/k8s-prometheus-adapter-amd64`
image.
There is also an official multi arch image available at
`registry.k8s.io/prometheus-adapter/prometheus-adapter:${VERSION}`.
If you're feeling adventurous, you can build the latest version of
prometheus-adapter by running `make container` or get the latest image from the
staging registry `gcr.io/k8s-staging-prometheus-adapter/prometheus-adapter`.
If you're feeling adventurous, you can build the latest version of the
custom metrics adapter by running `make docker-build` or `make
build-local-image`.
Special thanks to [@luxas](https://github.com/luxas) for providing the
demo application for this walkthrough.
@ -98,33 +94,9 @@ spec:
</details>
<details>
<summary>sample-app.service.yaml</summary>
```yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: sample-app
name: sample-app
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
selector:
app: sample-app
type: ClusterIP
```
</details>
```shell
$ kubectl create -f sample-app.deploy.yaml
$ kubectl create -f sample-app.service.yaml
$ kubectl create service clusterip sample-app --tcp=80:8080
```
Now, check your app, which exposes metrics and counts the number of
@ -142,11 +114,11 @@ a HorizontalPodAutoscaler like this to accomplish the autoscaling:
<details>
<summary>sample-app.hpa.yaml</summary>
<summary>sample-app-hpa.yaml</summary>
```yaml
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2
apiVersion: autoscaling/v2beta1
metadata:
name: sample-app
spec:
@ -165,13 +137,10 @@ spec:
- type: Pods
pods:
# use the metric that you used above: pods/http_requests
metric:
name: http_requests
metricName: http_requests
# target 500 milli-requests per second,
# which is 1 request every two seconds
target:
type: Value
averageValue: 500m
targetAverageValue: 500m
```
</details>
@ -179,7 +148,7 @@ spec:
If you try creating that now (and take a look at your controller-manager
logs), you'll see that the HorizontalPodAutoscaler controller is
attempting to fetch metrics from
`/apis/custom.metrics.k8s.io/v1beta2/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app`,
`/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app`,
but right now, nothing's serving that API.
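
You can reproduce what the controller sees yourself (a sketch; the exact error text varies by Kubernetes version, and the API version in the path should match whichever one you register later):

```shell
# With no APIService registered yet, the request the HPA controller makes 404s:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app"
# Error from server (NotFound): the server could not find the requested resource
```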
Before you can autoscale your application, you'll need to make sure that
@ -196,15 +165,15 @@ Prometheus adapter to serve metrics out of Prometheus.
### Launching Prometheus
First, you'll need to deploy the Prometheus Operator. Check out the
[quick start
guide](https://github.com/prometheus-operator/prometheus-operator#quickstart)
[getting started
guide](https://coreos.com/operators/prometheus/docs/latest/user-guides/getting-started.html)
for the Operator to deploy a copy of Prometheus.
This walkthrough assumes that Prometheus is deployed in the `monitoring`
This walkthrough assumes that Prometheus is deployed in the `prom`
namespace. Most of the sample commands and files are namespace-agnostic,
but there are a few commands or pieces of configuration that rely on that
namespace. If you're using a different namespace, simply substitute that
in for `monitoring` when it appears.
in for `prom` when it appears.
### Monitoring Your Application
@ -216,7 +185,7 @@ service:
<details>
<summary>sample-app.monitor.yaml</summary>
<summary>service-monitor.yaml</summary>
```yaml
kind: ServiceMonitor
@ -236,12 +205,12 @@ spec:
</details>
```shell
$ kubectl create -f sample-app.monitor.yaml
$ kubectl create -f service-monitor.yaml
```
Now, you should see your metrics (`http_requests_total`) appear in your Prometheus instance. Look
Now, you should see your metrics appear in your Prometheus instance. Look
them up via the dashboard, and make sure they have the `namespace` and
`pod` labels. If not, check that the labels on the ServiceMonitor match the ones on the Prometheus CRD.
`pod` labels.
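
As a quick sanity check outside the dashboard, you can query Prometheus over a port-forward (the service name and namespace below are assumptions based on a stock Prometheus Operator install; adjust them to your cluster):

```shell
# Port-forward the Prometheus service locally...
kubectl -n monitoring port-forward svc/prometheus-k8s 9090 &
# ...then confirm the series carries namespace and pod labels:
curl -s 'http://localhost:9090/api/v1/query' --data-urlencode \
  'query=sum(rate(http_requests_total[2m])) by (namespace, pod)'
```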
### Launching the Adapter
@ -259,46 +228,7 @@ the steps to deploy the adapter. Note that if you're deploying on
a non-x86_64 (amd64) platform, you'll need to change the `image` field in
the Deployment to be the appropriate image for your platform.
However, an update to the adapter config is necessary in order to
expose custom metrics.
<details>
<summary>prom-adapter.config.yaml</summary>
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: adapter-config
namespace: monitoring
data:
config.yaml: |-
"rules":
- "seriesQuery": |
{namespace!="",__name__!~"^container_.*"}
"resources":
"template": "<<.Resource>>"
"name":
"matches": "^(.*)_total"
"as": ""
"metricsQuery": |
sum by (<<.GroupBy>>) (
irate (
<<.Series>>{<<.LabelMatchers>>}[1m]
)
)
```
</details>
```shell
$ kubectl apply -f prom-adapter.config.yaml
# Restart prom-adapter pods
$ kubectl rollout restart deployment prometheus-adapter -n monitoring
```
This adapter configuration should work for this walkthrough together with
The default adapter configuration should work for this walkthrough and
a standard Prometheus Operator configuration, but if you've got custom
relabelling rules, or your labels above weren't exactly `namespace` and
`pod`, you may need to edit the configuration in the ConfigMap. The
@ -307,36 +237,11 @@ overview of how configuration works.
### The Registered API
We also need to register the custom metrics API with the API aggregator (part of
the main Kubernetes API server). For that, we need to create an APIService resource:
As part of the creation of the adapter Deployment and associated objects
(performed above), we registered the API with the API aggregator (part of
the main Kubernetes API server).
<details>
<summary>api-service.yaml</summary>
```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
name: v1beta2.custom.metrics.k8s.io
spec:
group: custom.metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: prometheus-adapter
namespace: monitoring
version: v1beta2
versionPriority: 100
```
</details>
```shell
$ kubectl create -f api-service.yaml
```
The API is registered as `custom.metrics.k8s.io/v1beta2`, and you can find
The API is registered as `custom.metrics.k8s.io/v1beta1`, and you can find
more information about aggregation at [Concepts:
Aggregation](https://github.com/kubernetes-incubator/apiserver-builder/blob/master/docs/concepts/aggregation.md).
@ -347,7 +252,7 @@ With that all set, your custom metrics API should show up in discovery.
Try fetching the discovery information for it:
```shell
$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta2
$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
```
Since you've set up Prometheus to collect your app's metrics, you should
@ -361,12 +266,12 @@ sends a raw GET request to the Kubernetes API server, automatically
injecting auth information:
```shell
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app"
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app"
```
Because of the adapter's configuration, the cumulative metric
`http_requests_total` has been converted into a rate metric,
`pods/http_requests`, which measures requests per second over a 1 minute
`pods/http_requests`, which measures requests per second over a 2 minute
interval. The value should currently be close to zero, since there's no
traffic to your app, except for the regular metrics collection from
Prometheus.
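
For reference, a successful response is a `MetricValueList`; it might look roughly like this (pod names, timestamps, and values are illustrative, and `jq` is assumed to be installed):

```shell
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app" | jq .
# {
#   "kind": "MetricValueList",
#   "apiVersion": "custom.metrics.k8s.io/v1beta2",
#   "items": [
#     {
#       "describedObject": {"kind": "Pod", "namespace": "default", "name": "sample-app-..."},
#       "metric": {"name": "http_requests"},
#       "timestamp": "...",
#       "value": "66m"
#     }
#   ]
# }
```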
@ -417,7 +322,7 @@ and make decisions based on it.
If you didn't create the HorizontalPodAutoscaler above, create it now:
```shell
$ kubectl create -f sample-app.hpa.yaml
$ kubectl create -f sample-app-hpa.yaml
```
Wait a little bit, and then examine the HPA:
@ -463,4 +368,4 @@ setting different labels or using the `Object` metric source type.
For more information on how metrics are exposed by the Prometheus adapter,
see [config documentation](/docs/config.md), and check the [default
configuration](/deploy/manifests/config-map.yaml).
configuration](/deploy/manifests/custom-metrics-config-map.yaml).

138
go.mod
View file

@ -1,118 +1,30 @@
module sigs.k8s.io/prometheus-adapter
module github.com/directxman12/k8s-prometheus-adapter
go 1.22.1
toolchain go1.22.2
go 1.13
require (
github.com/onsi/ginkgo v1.16.5
github.com/onsi/gomega v1.33.1
github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.73.2
github.com/prometheus-operator/prometheus-operator/pkg/client v0.73.2
github.com/prometheus/client_golang v1.18.0
github.com/prometheus/common v0.46.0
github.com/spf13/cobra v1.8.0
github.com/stretchr/testify v1.9.0
gopkg.in/yaml.v2 v2.4.0
k8s.io/api v0.30.0
k8s.io/apimachinery v0.30.0
k8s.io/apiserver v0.30.0
k8s.io/client-go v0.30.0
k8s.io/component-base v0.30.0
k8s.io/klog/v2 v2.120.1
k8s.io/kube-openapi v0.0.0-20240430033511-f0e62f92d13f
k8s.io/metrics v0.30.0
sigs.k8s.io/custom-metrics-apiserver v1.30.0
sigs.k8s.io/metrics-server v0.7.1
github.com/NYTimes/gziphandler v1.0.1 // indirect
github.com/go-openapi/spec v0.19.8
github.com/imdario/mergo v0.3.8 // indirect
github.com/kubernetes-sigs/custom-metrics-apiserver v0.0.0-20201110135240-8c12d6d92362
github.com/onsi/ginkgo v1.11.0
github.com/onsi/gomega v1.7.0
github.com/prometheus/client_golang v1.7.1
github.com/prometheus/common v0.10.0
github.com/spf13/cobra v1.0.0
github.com/stretchr/testify v1.6.1
gopkg.in/yaml.v2 v2.2.8
k8s.io/api v0.19.3
k8s.io/apimachinery v0.19.3
k8s.io/apiserver v0.19.3
k8s.io/client-go v0.19.3
k8s.io/component-base v0.19.3
k8s.io/klog/v2 v2.3.0
k8s.io/kube-openapi v0.0.0-20200805222855-6aeccd4b50c6
k8s.io/metrics v0.19.3
k8s.io/sample-apiserver v0.18.5
sigs.k8s.io/metrics-server v0.3.7-0.20201028092756-2a1d1385123b
)
require (
github.com/NYTimes/gziphandler v1.1.1 // indirect
github.com/antlr/antlr4/runtime/Go/antlr/v4 v4.0.0-20230305170008-8188dc5388df // indirect
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cenkalti/backoff/v4 v4.2.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/coreos/go-semver v0.3.1 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/emicklei/go-restful/v3 v3.12.0 // indirect
github.com/evanphx/json-patch v5.9.0+incompatible // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/go-logr/logr v1.4.1 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.21.0 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/cel-go v0.17.8 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.18.1 // indirect
github.com/imdario/mergo v0.3.16 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/nxadm/tail v1.4.8 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_model v0.5.0 // indirect
github.com/prometheus/procfs v0.12.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/stoewer/go-strcase v1.3.0 // indirect
go.etcd.io/etcd/api/v3 v3.5.11 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.11 // indirect
go.etcd.io/etcd/client/v3 v3.5.11 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.46.1 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.46.1 // indirect
go.opentelemetry.io/otel v1.21.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0 // indirect
go.opentelemetry.io/otel/metric v1.21.0 // indirect
go.opentelemetry.io/otel/sdk v1.21.0 // indirect
go.opentelemetry.io/otel/trace v1.21.0 // indirect
go.opentelemetry.io/proto/otlp v1.0.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.26.0 // indirect
golang.org/x/crypto v0.31.0 // indirect
golang.org/x/exp v0.0.0-20231226003508-02704c960a9b // indirect
golang.org/x/mod v0.17.0 // indirect
golang.org/x/net v0.25.0 // indirect
golang.org/x/oauth2 v0.18.0 // indirect
golang.org/x/sync v0.10.0 // indirect
golang.org/x/sys v0.28.0 // indirect
golang.org/x/term v0.27.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/time v0.5.0 // indirect
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d // indirect
google.golang.org/appengine v1.6.8 // indirect
google.golang.org/genproto v0.0.0-20231212172506-995d672761c0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20231212172506-995d672761c0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20231212172506-995d672761c0 // indirect
google.golang.org/grpc v1.60.1 // indirect
google.golang.org/protobuf v1.33.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/apiextensions-apiserver v0.29.3 // indirect
k8s.io/gengo/v2 v2.0.0-20240228010128-51d4e06bde70 // indirect
k8s.io/kms v0.30.0 // indirect
k8s.io/utils v0.0.0-20240423183400-0849a56e8f22 // indirect
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.29.0 // indirect
sigs.k8s.io/controller-runtime v0.17.2 // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
sigs.k8s.io/yaml v1.4.0 // indirect
)
// forced by sigs.k8s.io/metrics-server's use of this module in its go.mod
replace k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1 => ./localvendor/k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1

967
go.sum

File diff suppressed because it is too large

41
hack/gofmt-all.sh Executable file
View file

@ -0,0 +1,41 @@
#!/bin/bash
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
verify=0
if [[ ${1:-} = "--verify" || ${1:-} = "-v" ]]; then
verify=1
fi
find_files() {
find . -not \( \( \
-wholename './_output' \
-o -wholename './vendor' \
\) -prune \) -name '*.go'
}
if [[ $verify -eq 1 ]]; then
diff=$(find_files | xargs gofmt -s -d 2>&1)
if [[ -n "${diff}" ]]; then
echo "gofmt -s -w $(echo "${diff}" | awk '/^diff / { print $2 }' | tr '\n' ' ')"
exit 1
fi
else
find_files | xargs gofmt -s -w
fi

View file

@ -0,0 +1 @@
module k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1

View file

@ -0,0 +1,332 @@
/*
Copyright 2015 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// Summary is a top-level container for holding NodeStats and PodStats.
type Summary struct {
// Overall node stats.
Node NodeStats `json:"node"`
// Per-pod stats.
Pods []PodStats `json:"pods"`
}
// NodeStats holds node-level unprocessed sample stats.
type NodeStats struct {
// Reference to the measured Node.
NodeName string `json:"nodeName"`
// Stats of system daemons tracked as raw containers.
// The system containers are named according to the SystemContainer* constants.
// +optional
// +patchMergeKey=name
// +patchStrategy=merge
SystemContainers []ContainerStats `json:"systemContainers,omitempty" patchStrategy:"merge" patchMergeKey:"name"`
// The time at which data collection for the node-scoped (i.e. aggregate) stats was (re)started.
StartTime metav1.Time `json:"startTime"`
// Stats pertaining to CPU resources.
// +optional
CPU *CPUStats `json:"cpu,omitempty"`
// Stats pertaining to memory (RAM) resources.
// +optional
Memory *MemoryStats `json:"memory,omitempty"`
// Stats pertaining to network resources.
// +optional
Network *NetworkStats `json:"network,omitempty"`
// Stats pertaining to total usage of filesystem resources on the rootfs used by node k8s components.
// NodeFs.Used is the total bytes used on the filesystem.
// +optional
Fs *FsStats `json:"fs,omitempty"`
// Stats about the underlying container runtime.
// +optional
Runtime *RuntimeStats `json:"runtime,omitempty"`
// Stats about the rlimit of system.
// +optional
Rlimit *RlimitStats `json:"rlimit,omitempty"`
}
// RlimitStats are rlimit stats of the OS.
type RlimitStats struct {
Time metav1.Time `json:"time"`
// The max PID of OS.
MaxPID *int64 `json:"maxpid,omitempty"`
// The number of running processes in the OS.
NumOfRunningProcesses *int64 `json:"curproc,omitempty"`
}
// RuntimeStats are stats pertaining to the underlying container runtime.
type RuntimeStats struct {
// Stats about the underlying filesystem where container images are stored.
// This filesystem could be the same as the primary (root) filesystem.
// Usage here refers to the total number of bytes occupied by images on the filesystem.
// +optional
ImageFs *FsStats `json:"imageFs,omitempty"`
}
const (
// SystemContainerKubelet is the container name for the system container tracking Kubelet usage.
SystemContainerKubelet = "kubelet"
// SystemContainerRuntime is the container name for the system container tracking the runtime (e.g. docker) usage.
SystemContainerRuntime = "runtime"
// SystemContainerMisc is the container name for the system container tracking non-kubernetes processes.
SystemContainerMisc = "misc"
// SystemContainerPods is the container name for the system container tracking user pods.
SystemContainerPods = "pods"
)
// PodStats holds pod-level unprocessed sample stats.
type PodStats struct {
// Reference to the measured Pod.
PodRef PodReference `json:"podRef"`
// The time at which data collection for the pod-scoped (e.g. network) stats was (re)started.
StartTime metav1.Time `json:"startTime"`
// Stats of containers in the measured pod.
// +patchMergeKey=name
// +patchStrategy=merge
Containers []ContainerStats `json:"containers" patchStrategy:"merge" patchMergeKey:"name"`
// Stats pertaining to CPU resources consumed by pod cgroup (which includes all containers' resource usage and pod overhead).
// +optional
CPU *CPUStats `json:"cpu,omitempty"`
// Stats pertaining to memory (RAM) resources consumed by pod cgroup (which includes all containers' resource usage and pod overhead).
// +optional
Memory *MemoryStats `json:"memory,omitempty"`
// Stats pertaining to network resources.
// +optional
Network *NetworkStats `json:"network,omitempty"`
// Stats pertaining to volume usage of filesystem resources.
// VolumeStats.UsedBytes is the number of bytes used by the Volume
// +optional
// +patchMergeKey=name
// +patchStrategy=merge
VolumeStats []VolumeStats `json:"volume,omitempty" patchStrategy:"merge" patchMergeKey:"name"`
// EphemeralStorage reports the total filesystem usage for the containers and emptyDir-backed volumes in the measured Pod.
// +optional
EphemeralStorage *FsStats `json:"ephemeral-storage,omitempty"`
}
// ContainerStats holds container-level unprocessed sample stats.
type ContainerStats struct {
// Reference to the measured container.
Name string `json:"name"`
// The time at which data collection for this container was (re)started.
StartTime metav1.Time `json:"startTime"`
// Stats pertaining to CPU resources.
// +optional
CPU *CPUStats `json:"cpu,omitempty"`
// Stats pertaining to memory (RAM) resources.
// +optional
Memory *MemoryStats `json:"memory,omitempty"`
// Metrics for Accelerators. Each Accelerator corresponds to one element in the array.
Accelerators []AcceleratorStats `json:"accelerators,omitempty"`
// Stats pertaining to container rootfs usage of filesystem resources.
// Rootfs.UsedBytes is the number of bytes used for the container write layer.
// +optional
Rootfs *FsStats `json:"rootfs,omitempty"`
// Stats pertaining to container logs usage of filesystem resources.
// Logs.UsedBytes is the number of bytes used for the container logs.
// +optional
Logs *FsStats `json:"logs,omitempty"`
// User defined metrics that are exposed by containers in the pod. Typically, we expect only one container in the pod to be exposing user defined metrics. In the event of multiple containers exposing metrics, they will be combined here.
// +patchMergeKey=name
// +patchStrategy=merge
UserDefinedMetrics []UserDefinedMetric `json:"userDefinedMetrics,omitempty" patchStrategy:"merge" patchMergeKey:"name"`
}
// PodReference contains enough information to locate the referenced pod.
type PodReference struct {
Name string `json:"name"`
Namespace string `json:"namespace"`
UID string `json:"uid"`
}
// InterfaceStats contains resource value data about a network interface.
type InterfaceStats struct {
// The name of the interface
Name string `json:"name"`
// Cumulative count of bytes received.
// +optional
RxBytes *uint64 `json:"rxBytes,omitempty"`
// Cumulative count of receive errors encountered.
// +optional
RxErrors *uint64 `json:"rxErrors,omitempty"`
// Cumulative count of bytes transmitted.
// +optional
TxBytes *uint64 `json:"txBytes,omitempty"`
// Cumulative count of transmit errors encountered.
// +optional
TxErrors *uint64 `json:"txErrors,omitempty"`
}
// NetworkStats contains data about network resources.
type NetworkStats struct {
// The time at which these stats were updated.
Time metav1.Time `json:"time"`
// Stats for the default interface, if found
InterfaceStats `json:",inline"`
Interfaces []InterfaceStats `json:"interfaces,omitempty"`
}
// CPUStats contains data about CPU usage.
type CPUStats struct {
// The time at which these stats were updated.
Time metav1.Time `json:"time"`
// Total CPU usage (sum of all cores) averaged over the sample window.
// The "core" unit can be interpreted as CPU core-nanoseconds per second.
// +optional
UsageNanoCores *uint64 `json:"usageNanoCores,omitempty"`
// Cumulative CPU usage (sum of all cores) since object creation.
// +optional
UsageCoreNanoSeconds *uint64 `json:"usageCoreNanoSeconds,omitempty"`
}
// MemoryStats contains data about memory usage.
type MemoryStats struct {
// The time at which these stats were updated.
Time metav1.Time `json:"time"`
// Available memory for use. This is defined as the memory limit - workingSetBytes.
// If memory limit is undefined, the available bytes is omitted.
// +optional
AvailableBytes *uint64 `json:"availableBytes,omitempty"`
// Total memory in use. This includes all memory regardless of when it was accessed.
// +optional
UsageBytes *uint64 `json:"usageBytes,omitempty"`
// The amount of working set memory. This includes recently accessed memory,
// dirty memory, and kernel memory. WorkingSetBytes is <= UsageBytes
// +optional
WorkingSetBytes *uint64 `json:"workingSetBytes,omitempty"`
// The amount of anonymous and swap cache memory (includes transparent
// hugepages).
// +optional
RSSBytes *uint64 `json:"rssBytes,omitempty"`
// Cumulative number of minor page faults.
// +optional
PageFaults *uint64 `json:"pageFaults,omitempty"`
// Cumulative number of major page faults.
// +optional
MajorPageFaults *uint64 `json:"majorPageFaults,omitempty"`
}
// AcceleratorStats contains stats for accelerators attached to the container.
type AcceleratorStats struct {
// Make of the accelerator (nvidia, amd, google etc.)
Make string `json:"make"`
// Model of the accelerator (tesla-p100, tesla-k80 etc.)
Model string `json:"model"`
// ID of the accelerator.
ID string `json:"id"`
// Total accelerator memory.
// unit: bytes
MemoryTotal uint64 `json:"memoryTotal"`
// Total accelerator memory allocated.
// unit: bytes
MemoryUsed uint64 `json:"memoryUsed"`
// Percent of time over the past sample period (10s) during which
// the accelerator was actively processing.
DutyCycle uint64 `json:"dutyCycle"`
}
// VolumeStats contains data about Volume filesystem usage.
type VolumeStats struct {
// Embedded FsStats
FsStats
// Name is the name given to the Volume
// +optional
Name string `json:"name,omitempty"`
// Reference to the PVC, if one exists
// +optional
PVCRef *PVCReference `json:"pvcRef,omitempty"`
}
// PVCReference contains enough information to describe the referenced PVC.
type PVCReference struct {
Name string `json:"name"`
Namespace string `json:"namespace"`
}
// FsStats contains data about filesystem usage.
type FsStats struct {
// The time at which these stats were updated.
Time metav1.Time `json:"time"`
// AvailableBytes represents the storage space available (bytes) for the filesystem.
// +optional
AvailableBytes *uint64 `json:"availableBytes,omitempty"`
// CapacityBytes represents the total capacity (bytes) of the filesystem's underlying storage.
// +optional
CapacityBytes *uint64 `json:"capacityBytes,omitempty"`
// UsedBytes represents the bytes used for a specific task on the filesystem.
// This may differ from the total bytes used on the filesystem and may not equal CapacityBytes - AvailableBytes.
// e.g. For ContainerStats.Rootfs this is the bytes used by the container rootfs on the filesystem.
// +optional
UsedBytes *uint64 `json:"usedBytes,omitempty"`
// InodesFree represents the free inodes in the filesystem.
// +optional
InodesFree *uint64 `json:"inodesFree,omitempty"`
// Inodes represents the total inodes in the filesystem.
// +optional
Inodes *uint64 `json:"inodes,omitempty"`
// InodesUsed represents the inodes used by the filesystem
// This may not equal Inodes - InodesFree because this filesystem may share inodes with other "filesystems"
// e.g. For ContainerStats.Rootfs, this is the inodes used only by that container, and does not count inodes used by other containers.
InodesUsed *uint64 `json:"inodesUsed,omitempty"`
}
// UserDefinedMetricType defines how the metric should be interpreted by the user.
type UserDefinedMetricType string
const (
// MetricGauge is an instantaneous value. May increase or decrease.
MetricGauge UserDefinedMetricType = "gauge"
// MetricCumulative is a counter-like value that is only expected to increase.
MetricCumulative UserDefinedMetricType = "cumulative"
// MetricDelta is a rate over a time period.
MetricDelta UserDefinedMetricType = "delta"
)
// UserDefinedMetricDescriptor contains metadata that describes a user defined metric.
type UserDefinedMetricDescriptor struct {
// The name of the metric.
Name string `json:"name"`
// Type of the metric.
Type UserDefinedMetricType `json:"type"`
// Display Units for the stats.
Units string `json:"units"`
// Metadata labels associated with this metric.
// +optional
Labels map[string]string `json:"labels,omitempty"`
}
// UserDefinedMetric represents a metric defined and generated by users.
type UserDefinedMetric struct {
UserDefinedMetricDescriptor `json:",inline"`
// The time at which these stats were updated.
Time metav1.Time `json:"time"`
// Value of the metric. Float64s have 53 bit precision.
// We do not foresee any metrics exceeding that value.
Value float64 `json:"value"`
}

File diff suppressed because it is too large

View file

@ -1,4 +1,3 @@
//go:build codegen
// +build codegen
/*

View file

@ -21,10 +21,10 @@ import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"path"
"strings"
"time"
"github.com/prometheus/common/model"
@ -47,31 +47,17 @@ type GenericAPIClient interface {
type httpAPIClient struct {
client *http.Client
baseURL *url.URL
headers http.Header
}
func (c *httpAPIClient) Do(ctx context.Context, verb, endpoint string, query url.Values) (APIResponse, error) {
u := *c.baseURL
u.Path = path.Join(c.baseURL.Path, endpoint)
var reqBody io.Reader
if verb == http.MethodGet {
u.RawQuery = query.Encode()
} else if verb == http.MethodPost {
reqBody = strings.NewReader(query.Encode())
}
req, err := http.NewRequestWithContext(ctx, verb, u.String(), reqBody)
u.RawQuery = query.Encode()
req, err := http.NewRequest(verb, u.String(), nil)
if err != nil {
return APIResponse{}, fmt.Errorf("error constructing HTTP request to Prometheus: %v", err)
}
for key, values := range c.headers {
for _, value := range values {
req.Header.Add(key, value)
}
}
if verb == http.MethodPost {
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
}
req.WithContext(ctx)
resp, err := c.client.Do(req)
defer func() {
@ -100,7 +86,7 @@ func (c *httpAPIClient) Do(ctx context.Context, verb, endpoint string, query url
var body io.Reader = resp.Body
if klog.V(8).Enabled() {
data, err := io.ReadAll(body)
data, err := ioutil.ReadAll(body)
if err != nil {
return APIResponse{}, fmt.Errorf("unable to log response body: %v", err)
}
@ -127,11 +113,10 @@ func (c *httpAPIClient) Do(ctx context.Context, verb, endpoint string, query url
}
// NewGenericAPIClient builds a new generic Prometheus API client for the given base URL and HTTP Client.
func NewGenericAPIClient(client *http.Client, baseURL *url.URL, headers http.Header) GenericAPIClient {
func NewGenericAPIClient(client *http.Client, baseURL *url.URL) GenericAPIClient {
return &httpAPIClient{
client: client,
baseURL: baseURL,
headers: headers,
}
}
@ -143,22 +128,20 @@ const (
// queryClient is a Client that connects to the Prometheus HTTP API.
type queryClient struct {
api GenericAPIClient
verb string
api GenericAPIClient
}
// NewClientForAPI creates a Client for the given generic Prometheus API client.
func NewClientForAPI(client GenericAPIClient, verb string) Client {
func NewClientForAPI(client GenericAPIClient) Client {
return &queryClient{
api: client,
verb: verb,
api: client,
}
}
// NewClient creates a Client for the given HTTP client and base URL (the location of the Prometheus server).
func NewClient(client *http.Client, baseURL *url.URL, headers http.Header, verb string) Client {
genericClient := NewGenericAPIClient(client, baseURL, headers)
return NewClientForAPI(genericClient, verb)
func NewClient(client *http.Client, baseURL *url.URL) Client {
genericClient := NewGenericAPIClient(client, baseURL)
return NewClientForAPI(genericClient)
}
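A minimal usage sketch for the master-side constructors above, assuming the import path sigs.k8s.io/prometheus-adapter/pkg/client; the Prometheus address, header, and query are illustrative only:

package main

import (
	"context"
	"net/http"
	"net/url"

	pmodel "github.com/prometheus/common/model"

	prom "sigs.k8s.io/prometheus-adapter/pkg/client"
)

func main() {
	// Illustrative Prometheus address; adjust for your environment.
	baseURL, err := url.Parse("http://localhost:9090")
	if err != nil {
		panic(err)
	}
	// Optional extra headers forwarded on every request (hypothetical tenant header).
	headers := http.Header{}
	headers.Set("X-Scope-OrgID", "tenant-1")
	// Master-side signature: headers plus a configurable HTTP verb.
	c := prom.NewClient(http.DefaultClient, baseURL, headers, http.MethodGet)
	// Instant query at the current time.
	res, err := c.Query(context.Background(), pmodel.Now(), prom.Selector(`up`))
	if err != nil {
		panic(err)
	}
	_ = res // res.Type / res.Vector / res.Scalar carry the typed result
}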
func (h *queryClient) Series(ctx context.Context, interval model.Interval, selectors ...Selector) ([]Series, error) {
@ -174,7 +157,7 @@ func (h *queryClient) Series(ctx context.Context, interval model.Interval, selec
vals.Add("match[]", string(selector))
}
res, err := h.api.Do(ctx, h.verb, seriesURL, vals)
res, err := h.api.Do(ctx, "GET", seriesURL, vals)
if err != nil {
return nil, err
}
@ -194,7 +177,7 @@ func (h *queryClient) Query(ctx context.Context, t model.Time, query Selector) (
vals.Set("timeout", model.Duration(timeout).String())
}
res, err := h.api.Do(ctx, h.verb, queryURL, vals)
res, err := h.api.Do(ctx, "GET", queryURL, vals)
if err != nil {
return QueryResult{}, err
}
@ -221,7 +204,7 @@ func (h *queryClient) QueryRange(ctx context.Context, r Range, query Selector) (
vals.Set("timeout", model.Duration(timeout).String())
}
res, err := h.api.Do(ctx, h.verb, queryRangeURL, vals)
res, err := h.api.Do(ctx, "GET", queryRangeURL, vals)
if err != nil {
return QueryResult{}, err
}
@ -235,7 +218,7 @@ func (h *queryClient) QueryRange(ctx context.Context, r Range, query Selector) (
// when present
func timeoutFromContext(ctx context.Context) (time.Duration, bool) {
if deadline, hasDeadline := ctx.Deadline(); hasDeadline {
return time.Since(deadline), true
return time.Now().Sub(deadline), true
}
return time.Duration(0), false
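Continuing the client sketch above (adding time and fmt to its imports): a deadline on the request context is what timeoutFromContext picks up, and it is forwarded to Prometheus as the `timeout` form value on the query.

	// Bound the query to five seconds; the deadline propagates to Prometheus.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	res, err := c.Query(ctx, pmodel.Now(), prom.Selector(`up`))
	if err != nil {
		// a deadline overrun surfaces here as a context or API error
		fmt.Println("query failed:", err)
	}
	_ = res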

View file

@ -20,9 +20,8 @@ import (
"context"
"fmt"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
pmodel "github.com/prometheus/common/model"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
)
// FakePrometheusClient is a fake instance of prom.Client

View file

@ -13,7 +13,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package client
import (

View file

@ -13,7 +13,6 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package client
import (

View file

@ -13,51 +13,34 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package metrics
import (
"context"
"net/http"
"net/url"
"time"
"github.com/prometheus/client_golang/prometheus"
apimetrics "k8s.io/apiserver/pkg/endpoints/metrics"
"k8s.io/component-base/metrics"
"k8s.io/component-base/metrics/legacyregistry"
"sigs.k8s.io/prometheus-adapter/pkg/client"
"github.com/directxman12/k8s-prometheus-adapter/pkg/client"
)
var (
// queryLatency is the total latency of any query going through the
// various endpoints (query, range-query, series). It includes some deserialization
// overhead and HTTP overhead.
queryLatency = metrics.NewHistogramVec(
&metrics.HistogramOpts{
Namespace: "prometheus_adapter",
Subsystem: "prometheus_client",
Name: "request_duration_seconds",
Help: "Prometheus client query latency in seconds. Broken down by target prometheus endpoint and target server",
Buckets: prometheus.DefBuckets,
queryLatency = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "cmgateway_prometheus_query_latency_seconds",
Help: "Prometheus client query latency in seconds. Broken down by target prometheus endpoint and target server",
Buckets: prometheus.ExponentialBuckets(0.0001, 2, 10),
},
[]string{"path", "server"},
[]string{"endpoint", "server"},
)
)
func MetricsHandler() (http.HandlerFunc, error) {
registry := metrics.NewKubeRegistry()
err := registry.Register(queryLatency)
if err != nil {
return nil, err
}
apimetrics.Register()
return func(w http.ResponseWriter, req *http.Request) {
legacyregistry.Handler().ServeHTTP(w, req)
metrics.HandlerFor(registry, metrics.HandlerOpts{}).ServeHTTP(w, req)
}, nil
func init() {
prometheus.MustRegister(queryLatency)
}
// instrumentedClient is a client.GenericAPIClient which instruments calls to Do,
@ -79,7 +62,7 @@ func (c *instrumentedGenericClient) Do(ctx context.Context, verb, endpoint strin
return
}
}
queryLatency.With(prometheus.Labels{"path": endpoint, "server": c.serverName}).Observe(endTime.Sub(startTime).Seconds())
queryLatency.With(prometheus.Labels{"endpoint": endpoint, "server": c.serverName}).Observe(endTime.Sub(startTime).Seconds())
}()
var resp client.APIResponse
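A hedged sketch of exposing these self-metrics over plain HTTP using the master-side MetricsHandler above; the import path and listen address are assumptions, and the real adapter wires this handler into its API server instead:

package main

import (
	"net/http"

	"sigs.k8s.io/prometheus-adapter/pkg/metrics" // assumed import path for this package
)

func main() {
	handler, err := metrics.MetricsHandler()
	if err != nil {
		panic(err)
	}
	mux := http.NewServeMux()
	// Serves both the legacy apiserver registry and the adapter's own registry.
	mux.Handle("/metrics", handler)
	if err := http.ListenAndServe(":9913", mux); err != nil { // illustrative port
		panic(err)
	}
}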

View file

@ -25,10 +25,10 @@ type ErrorType string
const (
ErrBadData ErrorType = "bad_data"
ErrTimeout ErrorType = "timeout"
ErrCanceled ErrorType = "canceled"
ErrExec ErrorType = "execution"
ErrBadResponse ErrorType = "bad_response"
ErrTimeout = "timeout"
ErrCanceled = "canceled"
ErrExec = "execution"
ErrBadResponse = "bad_response"
)
// Error is an error returned by the API.
@ -46,7 +46,7 @@ type ResponseStatus string
const (
ResponseSucceeded ResponseStatus = "succeeded"
ResponseError ResponseStatus = "error"
ResponseError = "error"
)
// APIResponse represents the raw response returned by the API.

View file

@ -9,9 +9,9 @@ type MetricsDiscoveryConfig struct {
// custom metrics API resources. The rules are applied independently,
// and thus must be mutually exclusive. Rules with the same SeriesQuery
// will make only a single API call.
Rules []DiscoveryRule `json:"rules" yaml:"rules"`
ResourceRules *ResourceRules `json:"resourceRules,omitempty" yaml:"resourceRules,omitempty"`
ExternalRules []DiscoveryRule `json:"externalRules,omitempty" yaml:"externalRules,omitempty"`
Rules []DiscoveryRule `yaml:"rules"`
ResourceRules *ResourceRules `yaml:"resourceRules,omitempty"`
ExternalRules []DiscoveryRule `yaml:"externalRules,omitempty"`
}
// DiscoveryRule describes a set of rules for transforming Prometheus metrics to/from
@ -19,32 +19,32 @@ type MetricsDiscoveryConfig struct {
type DiscoveryRule struct {
// SeriesQuery specifies which metrics this rule should consider via a Prometheus query
// series selector query.
SeriesQuery string `json:"seriesQuery" yaml:"seriesQuery"`
SeriesQuery string `yaml:"seriesQuery"`
// SeriesFilters specifies additional regular expressions to be applied on
// the series names returned from the query. This is useful for constraints
// that can't be represented in the SeriesQuery (e.g. series matching `container_.+`
// not matching `container_.+_total`). A filter will be automatically appended to
// match the form specified in Name.
SeriesFilters []RegexFilter `json:"seriesFilters" yaml:"seriesFilters"`
SeriesFilters []RegexFilter `yaml:"seriesFilters"`
// Resources specifies how associated Kubernetes resources should be discovered for
// the given metrics.
Resources ResourceMapping `json:"resources" yaml:"resources"`
Resources ResourceMapping `yaml:"resources"`
// Name specifies how the metric name should be transformed between custom metric
// API resources, and Prometheus metric names.
Name NameMapping `json:"name" yaml:"name"`
Name NameMapping `yaml:"name"`
// MetricsQuery specifies modifications to the metrics query, such as converting
// cumulative metrics to rate metrics. It is a template where `.LabelMatchers` is
// the comma-separated base label matchers and `.Series` is the series name, and
// `.GroupBy` is the comma-separated expected group-by label names. The delimiters
// are `<<` and `>>`.
MetricsQuery string `json:"metricsQuery,omitempty" yaml:"metricsQuery,omitempty"`
MetricsQuery string `yaml:"metricsQuery,omitempty"`
}
// RegexFilter is a filter that matches positively or negatively against a regex.
// Only one field may be set at a time.
type RegexFilter struct {
Is string `json:"is,omitempty" yaml:"is,omitempty"`
IsNot string `json:"isNot,omitempty" yaml:"isNot,omitempty"`
Is string `yaml:"is,omitempty"`
IsNot string `yaml:"isNot,omitempty"`
}
// ResourceMapping specifies how to map Kubernetes resources to Prometheus labels
@ -54,18 +54,16 @@ type ResourceMapping struct {
// the `.Group` and `.Resource` fields. The `.Group` field will have
// dots replaced with underscores, and the `.Resource` field will be
// singularized. The delimiters are `<<` and `>>`.
Template string `json:"template,omitempty" yaml:"template,omitempty"`
Template string `yaml:"template,omitempty"`
// Overrides specifies exceptions to the above template, mapping label names
// to group-resources
Overrides map[string]GroupResource `json:"overrides,omitempty" yaml:"overrides,omitempty"`
// Namespaced ignores the source namespace of the requester and requires one in the query
Namespaced *bool `json:"namespaced,omitempty" yaml:"namespaced,omitempty"`
Overrides map[string]GroupResource `yaml:"overrides,omitempty"`
}
// GroupResource represents a Kubernetes group-resource.
type GroupResource struct {
Group string `json:"group,omitempty" yaml:"group,omitempty"`
Resource string `json:"resource" yaml:"resource"`
Group string `yaml:"group,omitempty"`
Resource string `yaml:"resource"`
}
// NameMapping specifies how to convert Prometheus metrics
@ -74,38 +72,38 @@ type NameMapping struct {
// Matches is a regular expression that is used to match
// Prometheus series names. It may be left blank, in which
// case it is equivalent to `.*`.
Matches string `json:"matches" yaml:"matches"`
Matches string `yaml:"matches"`
// As is the name used in the API. Captures from Matches
// are available for use here. If not specified, it defaults
// to $0 if no capture groups are present in Matches, or $1
// if only one is present, and will error if multiple are.
As string `json:"as" yaml:"as"`
As string `yaml:"as"`
}
// ResourceRules describe the rules for querying resource metrics
// API results. It's assumed that the same metrics can be used
// to aggregate across different resources.
type ResourceRules struct {
CPU ResourceRule `json:"cpu" yaml:"cpu"`
Memory ResourceRule `json:"memory" yaml:"memory"`
CPU ResourceRule `yaml:"cpu"`
Memory ResourceRule `yaml:"memory"`
// Window is the window size reported by the resource metrics API. It should match the value used
// in your containerQuery and nodeQuery if you use a `rate` function.
Window pmodel.Duration `json:"window" yaml:"window"`
Window pmodel.Duration `yaml:"window"`
}
// ResourceRule describes how to query metrics for some particular
// system resource metric.
type ResourceRule struct {
// Container is the query used to fetch the metrics for containers.
ContainerQuery string `json:"containerQuery" yaml:"containerQuery"`
ContainerQuery string `yaml:"containerQuery"`
// NodeQuery is the query used to fetch the metrics for nodes
// (for instance, simply aggregating by node label is insufficient for
// cadvisor metrics -- you need to select the `/` container).
NodeQuery string `json:"nodeQuery" yaml:"nodeQuery"`
NodeQuery string `yaml:"nodeQuery"`
// Resources specifies how associated Kubernetes resources should be discovered for
// the given metrics.
Resources ResourceMapping `json:"resources" yaml:"resources"`
Resources ResourceMapping `yaml:"resources"`
// ContainerLabel indicates the name of the Prometheus label containing the container name
// (since "container" is not a resource, this can't go in the `resources` block, but is similar).
ContainerLabel string `json:"containerLabel" yaml:"containerLabel"`
ContainerLabel string `yaml:"containerLabel"`
}
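To tie the config types above together, here is a hedged sketch of parsing a discovery rule into MetricsDiscoveryConfig; the rule itself (an HTTP requests counter exposed as a per-second rate) is illustrative and the import path is assumed:

package main

import (
	"fmt"

	yaml "gopkg.in/yaml.v2"

	"sigs.k8s.io/prometheus-adapter/pkg/config" // assumed import path
)

const sampleConfig = `
rules:
- seriesQuery: 'http_requests_total{namespace!="",pod!=""}'
  seriesFilters:
  - isNot: '^.*_seconds_total$'
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  name:
    matches: '^(.*)_total$'
    as: '${1}_per_second'
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
`

func main() {
	var cfg config.MetricsDiscoveryConfig
	if err := yaml.Unmarshal([]byte(sampleConfig), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("loaded %d rule(s); first maps to %q\n", len(cfg.Rules), cfg.Rules[0].Name.As)
}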

View file

@ -2,7 +2,7 @@ package config
import (
"fmt"
"io"
"io/ioutil"
"os"
yaml "gopkg.in/yaml.v2"
@ -11,11 +11,11 @@ import (
// FromFile loads the configuration from a particular file.
func FromFile(filename string) (*MetricsDiscoveryConfig, error) {
file, err := os.Open(filename)
defer file.Close()
if err != nil {
return nil, fmt.Errorf("unable to load metrics discovery config file: %v", err)
}
defer file.Close()
contents, err := io.ReadAll(file)
contents, err := ioutil.ReadAll(file)
if err != nil {
return nil, fmt.Errorf("unable to load metrics discovery config file: %v", err)
}

View file

@ -22,8 +22,9 @@ import (
"math"
"time"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider/helpers"
pmodel "github.com/prometheus/common/model"
apierr "k8s.io/apimachinery/pkg/api/errors"
apimeta "k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/api/resource"
@ -36,11 +37,8 @@ import (
"k8s.io/klog/v2"
"k8s.io/metrics/pkg/apis/custom_metrics"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider/helpers"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/naming"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/directxman12/k8s-prometheus-adapter/pkg/naming"
)
// Runnable represents something that can be run until told to stop.
@ -80,7 +78,7 @@ func NewPrometheusProvider(mapper apimeta.RESTMapper, kubeClient dynamic.Interfa
}, lister
}
func (p *prometheusProvider) metricFor(value pmodel.SampleValue, name types.NamespacedName, info provider.CustomMetricInfo, metricSelector labels.Selector) (*custom_metrics.MetricValue, error) {
func (p *prometheusProvider) metricFor(value pmodel.SampleValue, name types.NamespacedName, info provider.CustomMetricInfo) (*custom_metrics.MetricValue, error) {
ref, err := helpers.ReferenceFor(p.mapper, name, info)
if err != nil {
return nil, err
@ -92,29 +90,18 @@ func (p *prometheusProvider) metricFor(value pmodel.SampleValue, name types.Name
} else {
q = resource.NewMilliQuantity(int64(value*1000.0), resource.DecimalSI)
}
metric := &custom_metrics.MetricValue{
return &custom_metrics.MetricValue{
DescribedObject: ref,
Metric: custom_metrics.MetricIdentifier{
Name: info.Metric,
},
// TODO(directxman12): use the right timestamp
Timestamp: metav1.Time{Time: time.Now()},
Timestamp: metav1.Time{time.Now()},
Value: *q,
}
if !metricSelector.Empty() {
sel, err := metav1.ParseToLabelSelector(metricSelector.String())
if err != nil {
return nil, err
}
metric.Metric.Selector = sel
}
return metric, nil
}, nil
}
func (p *prometheusProvider) metricsFor(valueSet pmodel.Vector, namespace string, names []string, info provider.CustomMetricInfo, metricSelector labels.Selector) (*custom_metrics.MetricValueList, error) {
func (p *prometheusProvider) metricsFor(valueSet pmodel.Vector, info provider.CustomMetricInfo, namespace string, names []string) (*custom_metrics.MetricValueList, error) {
values, found := p.MatchValuesToNames(info, valueSet)
if !found {
return nil, provider.NewMetricNotFoundError(info.GroupResource, info.Metric)
@ -126,7 +113,7 @@ func (p *prometheusProvider) metricsFor(valueSet pmodel.Vector, namespace string
continue
}
value, err := p.metricFor(values[name], types.NamespacedName{Namespace: namespace, Name: name}, info, metricSelector)
value, err := p.metricFor(values[name], types.NamespacedName{Namespace: namespace, Name: name}, info)
if err != nil {
return nil, err
}
@ -138,14 +125,14 @@ func (p *prometheusProvider) metricsFor(valueSet pmodel.Vector, namespace string
}, nil
}
func (p *prometheusProvider) buildQuery(ctx context.Context, info provider.CustomMetricInfo, namespace string, metricSelector labels.Selector, names ...string) (pmodel.Vector, error) {
func (p *prometheusProvider) buildQuery(info provider.CustomMetricInfo, namespace string, metricSelector labels.Selector, names ...string) (pmodel.Vector, error) {
query, found := p.QueryForMetric(info, namespace, metricSelector, names...)
if !found {
return nil, provider.NewMetricNotFoundError(info.GroupResource, info.Metric)
}
// TODO: use an actual context
queryResults, err := p.promClient.Query(ctx, pmodel.Now(), query)
queryResults, err := p.promClient.Query(context.TODO(), pmodel.Now(), query)
if err != nil {
klog.Errorf("unable to fetch metrics from prometheus: %v", err)
// don't leak implementation details to the user
@ -160,9 +147,9 @@ func (p *prometheusProvider) buildQuery(ctx context.Context, info provider.Custo
return *queryResults.Vector, nil
}
func (p *prometheusProvider) GetMetricByName(ctx context.Context, name types.NamespacedName, info provider.CustomMetricInfo, metricSelector labels.Selector) (*custom_metrics.MetricValue, error) {
func (p *prometheusProvider) GetMetricByName(name types.NamespacedName, info provider.CustomMetricInfo, metricSelector labels.Selector) (*custom_metrics.MetricValue, error) {
// construct a query
queryResults, err := p.buildQuery(ctx, info, name.Namespace, metricSelector, name.Name)
queryResults, err := p.buildQuery(info, name.Namespace, metricSelector, name.Name)
if err != nil {
return nil, err
}
@ -188,10 +175,10 @@ func (p *prometheusProvider) GetMetricByName(ctx context.Context, name types.Nam
}
// return the resulting metric
return p.metricFor(resultValue, name, info, metricSelector)
return p.metricFor(resultValue, name, info)
}
func (p *prometheusProvider) GetMetricBySelector(ctx context.Context, namespace string, selector labels.Selector, info provider.CustomMetricInfo, metricSelector labels.Selector) (*custom_metrics.MetricValueList, error) {
func (p *prometheusProvider) GetMetricBySelector(namespace string, selector labels.Selector, info provider.CustomMetricInfo, metricSelector labels.Selector) (*custom_metrics.MetricValueList, error) {
// fetch a list of relevant resource names
resourceNames, err := helpers.ListObjectNames(p.mapper, p.kubeClient, namespace, selector, info)
if err != nil {
@ -201,13 +188,13 @@ func (p *prometheusProvider) GetMetricBySelector(ctx context.Context, namespace
}
// construct the actual query
queryResults, err := p.buildQuery(ctx, info, namespace, metricSelector, resourceNames...)
queryResults, err := p.buildQuery(info, namespace, metricSelector, resourceNames...)
if err != nil {
return nil, err
}
// return the resulting metrics
return p.metricsFor(queryResults, namespace, resourceNames, info, metricSelector)
return p.metricsFor(queryResults, info, namespace, resourceNames)
}
type cachingMetricsLister struct {
@ -256,7 +243,7 @@ func (l *cachingMetricsLister) updateMetrics() error {
}
selectors[sel] = struct{}{}
go func() {
series, err := l.promClient.Series(context.TODO(), pmodel.Interval{Start: startTime, End: 0}, sel)
series, err := l.promClient.Series(context.TODO(), pmodel.Interval{startTime, 0}, sel)
if err != nil {
errs <- fmt.Errorf("unable to fetch metrics for query %q: %v", sel, err)
return
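One detail worth pulling out of metricFor above: fractional Prometheus sample values survive the trip into the Kubernetes API as milli-quantities. A small sketch of that arithmetic (only the fractional branch is visible in this hunk):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// A sample value of 0.25 becomes the quantity "250m": multiply by 1000
	// and store as a milli-quantity, exactly as metricFor does above.
	value := 0.25
	q := resource.NewMilliQuantity(int64(value*1000.0), resource.DecimalSI)
	fmt.Println(q.String()) // 250m
}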

View file

@ -19,19 +19,17 @@ package provider
import (
"time"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
pmodel "github.com/prometheus/common/model"
"k8s.io/apimachinery/pkg/runtime/schema"
fakedyn "k8s.io/client-go/dynamic/fake"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider"
config "sigs.k8s.io/prometheus-adapter/cmd/config-gen/utils"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
fakeprom "sigs.k8s.io/prometheus-adapter/pkg/client/fake"
"sigs.k8s.io/prometheus-adapter/pkg/naming"
config "github.com/directxman12/k8s-prometheus-adapter/cmd/config-gen/utils"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
fakeprom "github.com/directxman12/k8s-prometheus-adapter/pkg/client/fake"
"github.com/directxman12/k8s-prometheus-adapter/pkg/naming"
pmodel "github.com/prometheus/common/model"
)
const fakeProviderUpdateInterval = 2 * time.Second
@ -47,13 +45,13 @@ func setupPrometheusProvider() (provider.CustomMetricsProvider, *fakeprom.FakePr
prov, _ := NewPrometheusProvider(restMapper(), fakeKubeClient, fakeProm, namers, fakeProviderUpdateInterval, fakeProviderStartDuration)
containerSel := prom.MatchSeries("", prom.NameMatches("^container_.*"), prom.LabelNeq("container", "POD"), prom.LabelNeq("namespace", ""), prom.LabelNeq("pod", ""))
containerSel := prom.MatchSeries("", prom.NameMatches("^container_.*"), prom.LabelNeq("container_name", "POD"), prom.LabelNeq("namespace", ""), prom.LabelNeq("pod_name", ""))
namespacedSel := prom.MatchSeries("", prom.LabelNeq("namespace", ""), prom.NameNotMatches("^container_.*"))
fakeProm.SeriesResults = map[prom.Selector][]prom.Series{
containerSel: {
{
Name: "container_some_usage",
Labels: pmodel.LabelSet{"pod": "somepod", "namespace": "somens", "container": "somecont"},
Labels: pmodel.LabelSet{"pod_name": "somepod", "namespace": "somens", "container_name": "somecont"},
},
},
namespacedSel: {
@ -87,7 +85,7 @@ var _ = Describe("Custom Metrics Provider", func() {
By("ensuring that no metrics are present before we start listing")
Expect(prov.ListAllMetrics()).To(BeEmpty())
By("setting the acceptable interval to now until the next update, with a bit of wiggle room")
By("setting the acceptible interval to now until the next update, with a bit of wiggle room")
startTime := pmodel.Now().Add(-1*fakeProviderUpdateInterval - fakeProviderUpdateInterval/10)
fakeProm.AcceptableInterval = pmodel.Interval{Start: startTime, End: 0}
@ -98,16 +96,16 @@ var _ = Describe("Custom Metrics Provider", func() {
By("listing all metrics, and checking that they contain the expected results")
Expect(prov.ListAllMetrics()).To(ConsistOf(
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "services"}, Namespaced: true, Metric: "ingress_hits"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Group: "extensions", Resource: "ingresses"}, Namespaced: true, Metric: "ingress_hits"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "ingress_hits"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "ingress_hits"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "services"}, Namespaced: true, Metric: "service_proxy_packets"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "service_proxy_packets"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Group: "extensions", Resource: "deployments"}, Namespaced: true, Metric: "work_queue_wait"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "work_queue_wait"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "some_usage"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_usage"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "services"}, true, "ingress_hits"},
provider.CustomMetricInfo{schema.GroupResource{Group: "extensions", Resource: "ingresses"}, true, "ingress_hits"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "ingress_hits"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "ingress_hits"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "services"}, true, "service_proxy_packets"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "service_proxy_packets"},
provider.CustomMetricInfo{schema.GroupResource{Group: "extensions", Resource: "deployments"}, true, "work_queue_wait"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "work_queue_wait"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "some_usage"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_usage"},
))
})
})

View file

@ -20,16 +20,14 @@ import (
"fmt"
"sync"
pmodel "github.com/prometheus/common/model"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider"
apimeta "k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/labels"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/directxman12/k8s-prometheus-adapter/pkg/naming"
pmodel "github.com/prometheus/common/model"
"k8s.io/klog/v2"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/naming"
)
// NB: container metrics sourced from cAdvisor don't consistently follow naming conventions,

View file

@ -20,11 +20,10 @@ import (
"fmt"
"time"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
pmodel "github.com/prometheus/common/model"
coreapi "k8s.io/api/core/v1"
extapi "k8s.io/api/extensions/v1beta1"
apimeta "k8s.io/apimachinery/pkg/api/meta"
@ -32,11 +31,9 @@ import (
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/selection"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider"
config "sigs.k8s.io/prometheus-adapter/cmd/config-gen/utils"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/naming"
config "github.com/directxman12/k8s-prometheus-adapter/cmd/config-gen/utils"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/directxman12/k8s-prometheus-adapter/pkg/naming"
)
// restMapper creates a RESTMapper with just the types we need for
@ -68,23 +65,23 @@ var seriesRegistryTestSeries = [][]prom.Series{
{
{
Name: "container_some_time_seconds_total",
Labels: pmodel.LabelSet{"pod": "somepod", "namespace": "somens", "container": "somecont"},
Labels: pmodel.LabelSet{"pod_name": "somepod", "namespace": "somens", "container_name": "somecont"},
},
},
{
{
Name: "container_some_count_total",
Labels: pmodel.LabelSet{"pod": "somepod", "namespace": "somens", "container": "somecont"},
Labels: pmodel.LabelSet{"pod_name": "somepod", "namespace": "somens", "container_name": "somecont"},
},
},
{
{
Name: "container_some_usage",
Labels: pmodel.LabelSet{"pod": "somepod", "namespace": "somens", "container": "somecont"},
Labels: pmodel.LabelSet{"pod_name": "somepod", "namespace": "somens", "container_name": "somecont"},
},
},
{
// gauge metrics
// guage metrics
{
Name: "node_gigawatts",
Labels: pmodel.LabelSet{"kube_node": "somenode"},
@ -159,35 +156,35 @@ var _ = Describe("Series Registry", func() {
// container metrics
{
title: "container metrics gauge / multiple resource names",
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_usage"},
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_usage"},
namespace: "somens",
resourceNames: []string{"somepod1", "somepod2"},
metricSelector: labels.Everything(),
expectedQuery: "sum(container_some_usage{namespace=\"somens\",pod=~\"somepod1|somepod2\",container!=\"POD\"}) by (pod)",
expectedQuery: "sum(container_some_usage{namespace=\"somens\",pod_name=~\"somepod1|somepod2\",container_name!=\"POD\"}) by (pod_name)",
},
{
title: "container metrics counter",
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_count"},
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_count"},
namespace: "somens",
resourceNames: []string{"somepod1", "somepod2"},
metricSelector: labels.Everything(),
expectedQuery: "sum(rate(container_some_count_total{namespace=\"somens\",pod=~\"somepod1|somepod2\",container!=\"POD\"}[1m])) by (pod)",
expectedQuery: "sum(rate(container_some_count_total{namespace=\"somens\",pod_name=~\"somepod1|somepod2\",container_name!=\"POD\"}[1m])) by (pod_name)",
},
{
title: "container metrics seconds counter",
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_time"},
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_time"},
namespace: "somens",
resourceNames: []string{"somepod1", "somepod2"},
metricSelector: labels.Everything(),
expectedQuery: "sum(rate(container_some_time_seconds_total{namespace=\"somens\",pod=~\"somepod1|somepod2\",container!=\"POD\"}[1m])) by (pod)",
expectedQuery: "sum(rate(container_some_time_seconds_total{namespace=\"somens\",pod_name=~\"somepod1|somepod2\",container_name!=\"POD\"}[1m])) by (pod_name)",
},
// namespaced metrics
{
title: "namespaced metrics counter / multidimensional (service)",
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "service"}, Namespaced: true, Metric: "ingress_hits"},
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "service"}, true, "ingress_hits"},
namespace: "somens",
resourceNames: []string{"somesvc"},
metricSelector: labels.Everything(),
@ -196,7 +193,7 @@ var _ = Describe("Series Registry", func() {
},
{
title: "namespaced metrics counter / multidimensional (service) / selection using labels",
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "service"}, Namespaced: true, Metric: "ingress_hits"},
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "service"}, true, "ingress_hits"},
namespace: "somens",
resourceNames: []string{"somesvc"},
metricSelector: labels.NewSelector().Add(
@ -206,7 +203,7 @@ var _ = Describe("Series Registry", func() {
},
{
title: "namespaced metrics counter / multidimensional (ingress)",
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Group: "extensions", Resource: "ingress"}, Namespaced: true, Metric: "ingress_hits"},
info: provider.CustomMetricInfo{schema.GroupResource{Group: "extensions", Resource: "ingress"}, true, "ingress_hits"},
namespace: "somens",
resourceNames: []string{"someingress"},
metricSelector: labels.Everything(),
@ -215,7 +212,7 @@ var _ = Describe("Series Registry", func() {
},
{
title: "namespaced metrics counter / multidimensional (pod)",
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pod"}, Namespaced: true, Metric: "ingress_hits"},
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "pod"}, true, "ingress_hits"},
namespace: "somens",
resourceNames: []string{"somepod"},
metricSelector: labels.Everything(),
@ -224,7 +221,7 @@ var _ = Describe("Series Registry", func() {
},
{
title: "namespaced metrics gauge",
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "service"}, Namespaced: true, Metric: "service_proxy_packets"},
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "service"}, true, "service_proxy_packets"},
namespace: "somens",
resourceNames: []string{"somesvc"},
metricSelector: labels.Everything(),
@ -233,7 +230,7 @@ var _ = Describe("Series Registry", func() {
},
{
title: "namespaced metrics seconds counter",
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Group: "extensions", Resource: "deployment"}, Namespaced: true, Metric: "work_queue_wait"},
info: provider.CustomMetricInfo{schema.GroupResource{Group: "extensions", Resource: "deployment"}, true, "work_queue_wait"},
namespace: "somens",
resourceNames: []string{"somedep"},
metricSelector: labels.Everything(),
@ -243,7 +240,7 @@ var _ = Describe("Series Registry", func() {
// non-namespaced series
{
title: "root scoped metrics gauge",
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "node"}, Namespaced: false, Metric: "node_gigawatts"},
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "node"}, false, "node_gigawatts"},
resourceNames: []string{"somenode"},
metricSelector: labels.Everything(),
@ -251,7 +248,7 @@ var _ = Describe("Series Registry", func() {
},
{
title: "root scoped metrics counter",
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "persistentvolume"}, Namespaced: false, Metric: "volume_claims"},
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "persistentvolume"}, false, "volume_claims"},
resourceNames: []string{"somepv"},
metricSelector: labels.Everything(),
@ -259,7 +256,7 @@ var _ = Describe("Series Registry", func() {
},
{
title: "root scoped metrics seconds counter",
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "node"}, Namespaced: false, Metric: "node_fan"},
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "node"}, false, "node_fan"},
resourceNames: []string{"somenode"},
metricSelector: labels.Everything(),
@ -281,23 +278,23 @@ var _ = Describe("Series Registry", func() {
It("should list all metrics", func() {
Expect(registry.ListAllMetrics()).To(ConsistOf(
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_count"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "some_count"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_time"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "some_time"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_usage"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "some_usage"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "services"}, Namespaced: true, Metric: "ingress_hits"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Group: "extensions", Resource: "ingresses"}, Namespaced: true, Metric: "ingress_hits"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "ingress_hits"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "ingress_hits"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "services"}, Namespaced: true, Metric: "service_proxy_packets"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "service_proxy_packets"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Group: "extensions", Resource: "deployments"}, Namespaced: true, Metric: "work_queue_wait"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "work_queue_wait"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "nodes"}, Namespaced: false, Metric: "node_gigawatts"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "persistentvolumes"}, Namespaced: false, Metric: "volume_claims"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "nodes"}, Namespaced: false, Metric: "node_fan"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_count"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "some_count"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_time"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "some_time"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_usage"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "some_usage"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "services"}, true, "ingress_hits"},
provider.CustomMetricInfo{schema.GroupResource{Group: "extensions", Resource: "ingresses"}, true, "ingress_hits"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "ingress_hits"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "ingress_hits"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "services"}, true, "service_proxy_packets"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "service_proxy_packets"},
provider.CustomMetricInfo{schema.GroupResource{Group: "extensions", Resource: "deployments"}, true, "work_queue_wait"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "work_queue_wait"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "nodes"}, false, "node_gigawatts"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "persistentvolumes"}, false, "volume_claims"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "nodes"}, false, "node_fan"},
))
})
})

View file

@ -21,12 +21,11 @@ import (
"fmt"
"time"
pmodel "github.com/prometheus/common/model"
"k8s.io/klog/v2"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/naming"
pmodel "github.com/prometheus/common/model"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/directxman12/k8s-prometheus-adapter/pkg/naming"
)
// Runnable represents something that can be run until told to stop.
@ -100,7 +99,7 @@ func (l *basicMetricLister) ListAllMetrics() (MetricUpdateResult, error) {
}
selectors[sel] = struct{}{}
go func() {
series, err := l.promClient.Series(context.TODO(), pmodel.Interval{Start: startTime, End: 0}, sel)
series, err := l.promClient.Series(context.TODO(), pmodel.Interval{startTime, 0}, sel)
if err != nil {
errs <- fmt.Errorf("unable to fetch metrics for query %q: %v", sel, err)
return

View file

@ -16,13 +16,12 @@ package provider
import (
"sync"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/klog/v2"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/naming"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/directxman12/k8s-prometheus-adapter/pkg/naming"
)
// ExternalSeriesRegistry acts as the top-level converter for transforming Kubernetes requests
@ -103,6 +102,7 @@ func (r *externalSeriesRegistry) filterAndStoreMetrics(result MetricUpdateResult
r.metrics = apiMetricsCache
r.metricsInfo = rawMetricsCache
}
func (r *externalSeriesRegistry) ListAllMetrics() []provider.ExternalMetricInfo {

View file

@ -17,15 +17,12 @@ import (
"errors"
"fmt"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider"
"github.com/prometheus/common/model"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/metrics/pkg/apis/external_metrics"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
)
// MetricConverter provides a unified interface for converting the results of
@ -61,7 +58,7 @@ func (c *metricConverter) convertSample(info provider.ExternalMetricInfo, sample
singleMetric := external_metrics.ExternalMetricValue{
MetricName: info.Metric,
Timestamp: metav1.Time{
Time: sample.Timestamp.Time(),
sample.Timestamp.Time(),
},
Value: *resource.NewMilliQuantity(int64(sample.Value*1000.0), resource.DecimalSI),
MetricLabels: labels,
@ -133,7 +130,7 @@ func (c *metricConverter) convertScalar(info provider.ExternalMetricInfo, queryR
{
MetricName: info.Metric,
Timestamp: metav1.Time{
Time: toConvert.Timestamp.Time(),
toConvert.Timestamp.Time(),
},
Value: *resource.NewMilliQuantity(int64(toConvert.Value*1000.0), resource.DecimalSI),
},

View file

@ -69,9 +69,9 @@ func (l *periodicMetricLister) updateMetrics() error {
return err
}
// Cache the result.
//Cache the result.
l.mostRecentResult = result
// Let our listeners know we've got new data ready for them.
//Let our listeners know we've got new data ready for them.
l.notifyListeners()
return nil
}
@ -85,7 +85,5 @@ func (l *periodicMetricLister) notifyListeners() {
}
func (l *periodicMetricLister) UpdateNow() {
if err := l.updateMetrics(); err != nil {
utilruntime.HandleError(err)
}
l.updateMetrics()
}

View file

@ -17,8 +17,7 @@ import (
"testing"
"time"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/stretchr/testify/require"
)

View file

@ -18,18 +18,19 @@ import (
"fmt"
"time"
"k8s.io/apimachinery/pkg/runtime/schema"
pmodel "github.com/prometheus/common/model"
"k8s.io/klog/v2"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider"
apierr "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/klog/v2"
"k8s.io/metrics/pkg/apis/external_metrics"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/naming"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/directxman12/k8s-prometheus-adapter/pkg/naming"
)
type externalPrometheusProvider struct {
@ -39,7 +40,7 @@ type externalPrometheusProvider struct {
seriesRegistry ExternalSeriesRegistry
}
func (p *externalPrometheusProvider) GetExternalMetric(ctx context.Context, namespace string, metricSelector labels.Selector, info provider.ExternalMetricInfo) (*external_metrics.ExternalMetricValueList, error) {
func (p *externalPrometheusProvider) GetExternalMetric(namespace string, metricSelector labels.Selector, info provider.ExternalMetricInfo) (*external_metrics.ExternalMetricValueList, error) {
selector, found, err := p.seriesRegistry.QueryForMetric(namespace, info.Metric, metricSelector)
if err != nil {
@ -51,7 +52,7 @@ func (p *externalPrometheusProvider) GetExternalMetric(ctx context.Context, name
return nil, provider.NewMetricNotFoundError(p.selectGroupResource(namespace), info.Metric)
}
// This is where the external metric query is actually issued against Prometheus.
queryResults, err := p.promClient.Query(ctx, pmodel.Now(), selector)
queryResults, err := p.promClient.Query(context.TODO(), pmodel.Now(), selector)
if err != nil {
klog.Errorf("unable to fetch metrics from prometheus: %v", err)
@ -77,9 +78,9 @@ func (p *externalPrometheusProvider) selectGroupResource(namespace string) schem
}
// NewExternalPrometheusProvider creates an ExternalMetricsProvider capable of responding to Kubernetes requests for external metric data
func NewExternalPrometheusProvider(promClient prom.Client, namers []naming.MetricNamer, updateInterval time.Duration, maxAge time.Duration) (provider.ExternalMetricsProvider, Runnable) {
func NewExternalPrometheusProvider(promClient prom.Client, namers []naming.MetricNamer, updateInterval time.Duration) (provider.ExternalMetricsProvider, Runnable) {
metricConverter := NewMetricConverter()
basicLister := NewBasicMetricLister(promClient, namers, maxAge)
basicLister := NewBasicMetricLister(promClient, namers, updateInterval)
periodicLister, _ := NewPeriodicMetricLister(basicLister, updateInterval)
seriesRegistry := NewExternalSeriesRegistry(periodicLister)
return &externalPrometheusProvider{
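A hedged wiring sketch for the master-side constructor above; promClient and namers are assumed to exist already (for example from prom.NewClient and naming.NamersFromConfig), and both durations are illustrative:

	// updateInterval controls how often the metric list is refreshed; maxAge
	// is the lookback used when listing series (an assumption based on its
	// role as the basic lister's start duration).
	extProvider, runner := NewExternalPrometheusProvider(
		promClient,     // prom.Client
		namers,         // []naming.MetricNamer built from the externalRules config
		30*time.Second, // updateInterval
		10*time.Minute, // maxAge
	)
	go runner.Run() // keep the cached metric list fresh in the background
	_ = extProvider // registered with the custom-metrics API server in main()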

View file

@ -22,6 +22,7 @@ import (
"regexp"
"text/template"
apimeta "k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/runtime/schema"
pmodel "github.com/prometheus/common/model"
@ -33,6 +34,7 @@ type labelGroupResExtractor struct {
resourceInd int
groupInd *int
mapper apimeta.RESTMapper
}
// newLabelGroupResExtractor creates a new labelGroupResExtractor for labels whose form
@ -40,10 +42,7 @@ type labelGroupResExtractor struct {
// so anything in the template which limits resource or group name length will cause issues.
func newLabelGroupResExtractor(labelTemplate *template.Template) (*labelGroupResExtractor, error) {
labelRegexBuff := new(bytes.Buffer)
if err := labelTemplate.Execute(labelRegexBuff, schema.GroupResource{
Group: "(?P<group>.+?)",
Resource: "(?P<resource>.+?)"},
); err != nil {
if err := labelTemplate.Execute(labelRegexBuff, schema.GroupResource{"(?P<group>.+?)", "(?P<resource>.+?)"}); err != nil {
return nil, fmt.Errorf("unable to convert label template to matcher: %v", err)
}
if labelRegexBuff.Len() == 0 {
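To make the regex construction above concrete, a hedged sketch of executing an illustrative resource-label template (kube_<<.Group>>_<<.Resource>>) against the capture-group placeholders, then using the result to recover a group-resource from a label name:

package main

import (
	"bytes"
	"fmt"
	"regexp"
	"text/template"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// Resource-label templates use << >> delimiters, as in the adapter config.
	tmpl := template.Must(template.New("label").Delims("<<", ">>").Parse("kube_<<.Group>>_<<.Resource>>"))

	// Execute with the same placeholders newLabelGroupResExtractor uses above.
	buf := new(bytes.Buffer)
	if err := tmpl.Execute(buf, schema.GroupResource{Group: "(?P<group>.+?)", Resource: "(?P<resource>.+?)"}); err != nil {
		panic(err)
	}
	fmt.Println(buf.String()) // kube_(?P<group>.+?)_(?P<resource>.+?)

	// The generated regex pulls group and resource back out of a label name.
	re := regexp.MustCompile("^" + buf.String() + "$")
	fmt.Println(re.FindStringSubmatch("kube_apps_deployment")) // [kube_apps_deployment apps deployment]
}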

View file

@ -24,8 +24,8 @@ import (
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime/schema"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/config"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/directxman12/k8s-prometheus-adapter/pkg/config"
)
// MetricNamer knows how to convert Prometheus series names and label names to
@ -131,6 +131,8 @@ func (n *metricNamer) QueryForSeries(series string, resource schema.GroupResourc
}
func (n *metricNamer) QueryForExternalSeries(series string, namespace string, metricSelector labels.Selector) (prom.Selector, error) {
return n.metricsQuery.BuildExternal(series, namespace, "", []string{}, metricSelector)
}
@ -153,13 +155,7 @@ func NamersFromConfig(cfg []config.DiscoveryRule, mapper apimeta.RESTMapper) ([]
return nil, err
}
// queries are namespaced by default unless the rule specifically disables it
namespaced := true
if rule.Resources.Namespaced != nil {
namespaced = *rule.Resources.Namespaced
}
metricsQuery, err := NewExternalMetricsQuery(rule.MetricsQuery, resConv, namespaced)
metricsQuery, err := NewMetricsQuery(rule.MetricsQuery, resConv)
if err != nil {
return nil, fmt.Errorf("unable to construct metrics query associated with series query %q: %v", rule.SeriesQuery, err)
}
@ -194,14 +190,13 @@ func NamersFromConfig(cfg []config.DiscoveryRule, mapper apimeta.RESTMapper) ([]
if nameAs == "" {
// check if we have an obvious default
subexpNames := nameMatches.SubexpNames()
switch len(subexpNames) {
case 1:
if len(subexpNames) == 1 {
// no capture groups, use the whole thing
nameAs = "$0"
case 2:
} else if len(subexpNames) == 2 {
// one capture group, use that
nameAs = "$1"
default:
} else {
return nil, fmt.Errorf("must specify an 'as' value for name matcher %q associated with series query %q", rule.Name.Matches, rule.SeriesQuery)
}
}
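The `as` defaulting above is pure regexp bookkeeping: SubexpNames has length 1 when there are no capture groups and length 2 when there is one. A small sketch (metric names are illustrative):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// One capture group -> SubexpNames() has length 2 -> `as` defaults to "$1".
	one := regexp.MustCompile("^(.*)_total$")
	fmt.Println(len(one.SubexpNames()))                            // 2
	fmt.Println(one.ReplaceAllString("http_requests_total", "$1")) // http_requests

	// No capture groups -> length 1 -> `as` defaults to "$0" (the whole match).
	none := regexp.MustCompile("^node_gigawatts$")
	fmt.Println(len(none.SubexpNames()))                       // 1
	fmt.Println(none.ReplaceAllString("node_gigawatts", "$0")) // node_gigawatts
}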

View file

@ -27,7 +27,7 @@ import (
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/selection"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
)
// MetricsQuery represents a compiled metrics query for some set of
@ -59,27 +59,6 @@ func NewMetricsQuery(queryTemplate string, resourceConverter ResourceConverter)
return &metricsQuery{
resConverter: resourceConverter,
template: templ,
namespaced: true,
}, nil
}
// NewExternalMetricsQuery constructs a new MetricsQuery by compiling the given Go template.
// The delimiters on the template are `<<` and `>>`, and it may use the following fields:
// - Series: the series in question
// - LabelMatchers: a pre-stringified form of the label matchers for the resources in the query
// - LabelMatchersByName: the raw map-form of the above matchers
// - GroupBy: the group-by clause to use for the resources in the query (stringified)
// - GroupBySlice: the raw slice form of the above group-by clause
func NewExternalMetricsQuery(queryTemplate string, resourceConverter ResourceConverter, namespaced bool) (MetricsQuery, error) {
templ, err := template.New("metrics-query").Delims("<<", ">>").Parse(queryTemplate)
if err != nil {
return nil, fmt.Errorf("unable to parse metrics query template %q: %v", queryTemplate, err)
}
return &metricsQuery{
resConverter: resourceConverter,
template: templ,
namespaced: namespaced,
}, nil
}
@ -89,7 +68,6 @@ func NewExternalMetricsQuery(queryTemplate string, resourceConverter ResourceCon
type metricsQuery struct {
resConverter ResourceConverter
template *template.Template
namespaced bool
}
// queryTemplateArgs contains the arguments for the template used in metricsQuery.
@ -172,7 +150,7 @@ func (q *metricsQuery) BuildExternal(seriesName string, namespace string, groupB
// Build up the query parts from the selector.
queryParts = append(queryParts, q.createQueryPartsFromSelector(metricSelector)...)
if q.namespaced && namespace != "" {
if namespace != "" {
namespaceLbl, err := q.resConverter.LabelForResource(NsGroupResource)
if err != nil {
return "", err
@ -283,8 +261,9 @@ func (q *metricsQuery) processQueryParts(queryParts []queryPart) ([]string, map[
}
func (q *metricsQuery) selectMatcher(operator selection.Operator, values []string) (func(string, string) string, error) {
switch len(values) {
case 0:
numValues := len(values)
if numValues == 0 {
switch operator {
case selection.Exists:
return prom.LabelNeq, nil
@ -293,7 +272,7 @@ func (q *metricsQuery) selectMatcher(operator selection.Operator, values []strin
case selection.Equals, selection.DoubleEquals, selection.NotEquals, selection.In, selection.NotIn:
return nil, ErrMalformedQuery
}
case 1:
} else if numValues == 1 {
switch operator {
case selection.Equals, selection.DoubleEquals:
return prom.LabelEq, nil
@ -304,7 +283,7 @@ func (q *metricsQuery) selectMatcher(operator selection.Operator, values []strin
case selection.DoesNotExist, selection.NotIn:
return prom.LabelNotMatches, nil
}
default:
} else {
// Since labels can only have one value, providing multiple
// values results in a regex match, even if that's not what the user
// asked for.
@ -320,8 +299,8 @@ func (q *metricsQuery) selectMatcher(operator selection.Operator, values []strin
}
func (q *metricsQuery) selectTargetValue(operator selection.Operator, values []string) (string, error) {
switch len(values) {
case 0:
numValues := len(values)
if numValues == 0 {
switch operator {
case selection.Exists, selection.DoesNotExist:
// Return an empty string when values are equal to 0
@ -333,7 +312,7 @@ func (q *metricsQuery) selectTargetValue(operator selection.Operator, values []s
case selection.Equals, selection.DoubleEquals, selection.NotEquals, selection.In, selection.NotIn:
return "", ErrMalformedQuery
}
case 1:
} else if numValues == 1 {
switch operator {
case selection.Equals, selection.DoubleEquals, selection.NotEquals, selection.In, selection.NotIn:
// Pass the value through as-is.
@ -346,7 +325,7 @@ func (q *metricsQuery) selectTargetValue(operator selection.Operator, values []s
case selection.Exists, selection.DoesNotExist:
return "", ErrQueryUnsupportedValues
}
default:
} else {
switch operator {
case selection.Equals, selection.DoubleEquals, selection.NotEquals, selection.In, selection.NotIn:
// Pass the value through as-is.
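The net effect of selectMatcher and selectTargetValue is easiest to see on concrete inputs; a hedged sketch using the prom label helpers the code returns above (label names, values, and printed forms are illustrative):

package main

import (
	"fmt"

	prom "sigs.k8s.io/prometheus-adapter/pkg/client"
)

func main() {
	// A single In/Equals value collapses to an exact match...
	fmt.Println(prom.LabelEq("verb", "GET")) // e.g. verb="GET"
	// ...while multiple values become a regex alternation, since a label can
	// only ever hold one value (see the comment in selectMatcher above).
	fmt.Println(prom.LabelMatches("verb", "GET|POST")) // e.g. verb=~"GET|POST"
	// Exists (zero values) is expressed as "label is not empty".
	fmt.Println(prom.LabelNeq("verb", "")) // e.g. verb!=""
}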

View file

@ -20,13 +20,11 @@ import (
"fmt"
"testing"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
pmodel "github.com/prometheus/common/model"
labels "k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/selection"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
pmodel "github.com/prometheus/common/model"
)
type resourceConverterMock struct {
@ -274,15 +272,7 @@ func TestBuildSelector(t *testing.T) {
func TestBuildExternalSelector(t *testing.T) {
mustNewQuery := func(queryTemplate string) MetricsQuery {
mq, err := NewExternalMetricsQuery(queryTemplate, &resourceConverterMock{true}, true)
if err != nil {
t.Fatal(err)
}
return mq
}
mustNewNonNamespacedQuery := func(queryTemplate string) MetricsQuery {
mq, err := NewExternalMetricsQuery(queryTemplate, &resourceConverterMock{true}, false)
mq, err := NewMetricsQuery(queryTemplate, &resourceConverterMock{true})
if err != nil {
t.Fatal(err)
}
@ -358,19 +348,6 @@ func TestBuildExternalSelector(t *testing.T) {
hasSelector("default [foo bar]"),
),
},
{
name: "multiple GroupBySlice values with namespace disabled",
mq: mustNewNonNamespacedQuery(`<<index .LabelValuesByName "namespaces">> <<.GroupBySlice>>`),
namespace: "default",
groupBySlice: []string{"foo", "bar"},
metricSelector: labels.NewSelector(),
check: checks(
hasError(nil),
hasSelector(" [foo bar]"),
),
},
{
name: "single LabelMatchers value",

View file

@ -21,7 +21,7 @@ import (
"github.com/stretchr/testify/require"
"sigs.k8s.io/prometheus-adapter/pkg/config"
"github.com/directxman12/k8s-prometheus-adapter/pkg/config"
)
func TestReMatcherIs(t *testing.T) {

View file

@ -23,16 +23,14 @@ import (
"sync"
"text/template"
pmodel "github.com/prometheus/common/model"
apimeta "k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/runtime/schema"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/directxman12/k8s-prometheus-adapter/pkg/config"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider"
pmodel "github.com/prometheus/common/model"
"k8s.io/klog/v2"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/config"
)
var (

View file

@ -19,30 +19,28 @@ package resourceprovider
import (
"context"
"fmt"
"math"
"sync"
"time"
corev1 "k8s.io/api/core/v1"
apimeta "k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime/schema"
apitypes "k8s.io/apimachinery/pkg/types"
"k8s.io/klog/v2"
metrics "k8s.io/metrics/pkg/apis/metrics"
"sigs.k8s.io/metrics-server/pkg/api"
"sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/config"
"sigs.k8s.io/prometheus-adapter/pkg/naming"
"github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/directxman12/k8s-prometheus-adapter/pkg/config"
"github.com/directxman12/k8s-prometheus-adapter/pkg/naming"
pmodel "github.com/prometheus/common/model"
)
var (
nodeResource = schema.GroupResource{Resource: "nodes"}
nsResource = schema.GroupResource{Resource: "ns"}
podResource = schema.GroupResource{Resource: "pods"}
)
@ -71,6 +69,7 @@ func newResourceQuery(cfg config.ResourceRule, mapper apimeta.RESTMapper) (resou
nodeQuery: nodeQuery,
containerLabel: cfg.ContainerLabel,
}, nil
}
// resourceQuery represents query information for querying resource metrics for some resource,
@ -120,12 +119,10 @@ type nsQueryResults struct {
err error
}
// GetPodMetrics implements the api.MetricsProvider interface.
func (p *resourceProvider) GetPodMetrics(pods ...*metav1.PartialObjectMetadata) ([]metrics.PodMetrics, error) {
resMetrics := make([]metrics.PodMetrics, 0, len(pods))
// GetContainerMetrics implements the api.MetricsProvider interface. It may return nil, nil, nil.
func (p *resourceProvider) GetContainerMetrics(pods ...apitypes.NamespacedName) ([]api.TimeInfo, [][]metrics.ContainerMetrics) {
if len(pods) == 0 {
return resMetrics, nil
return nil, nil
}
// TODO(directxman12): figure out how well this scales if we go to list 1000+ pods
@ -165,40 +162,39 @@ func (p *resourceProvider) GetPodMetrics(pods ...*metav1.PartialObjectMetadata)
// convert the unorganized per-container results into results grouped
// together by namespace, pod, and container
for _, pod := range pods {
podMetric := p.assignForPod(pod, resultsByNs)
if podMetric != nil {
resMetrics = append(resMetrics, *podMetric)
}
resTimes := make([]api.TimeInfo, len(pods))
resMetrics := make([][]metrics.ContainerMetrics, len(pods))
for i, pod := range pods {
p.assignForPod(pod, resultsByNs, &resMetrics[i], &resTimes[i])
}
return resMetrics, nil
return resTimes, resMetrics
}
// assignForPod takes the resource metrics for all containers in the given pod
// from resultsByNs, and places them in MetricsProvider response format in resMetrics,
// also recording the earliest time in resTime. It will return without operating if
// any data is missing.
func (p *resourceProvider) assignForPod(pod *metav1.PartialObjectMetadata, resultsByNs map[string]nsQueryResults) *metrics.PodMetrics {
func (p *resourceProvider) assignForPod(pod apitypes.NamespacedName, resultsByNs map[string]nsQueryResults, resMetrics *[]metrics.ContainerMetrics, resTime *api.TimeInfo) {
// check to make sure everything is present
nsRes, nsResPresent := resultsByNs[pod.Namespace]
if !nsResPresent {
klog.Errorf("unable to fetch metrics for pods in namespace %q, skipping pod %s", pod.Namespace, pod.String())
return nil
return
}
cpuRes, hasResult := nsRes.cpu[pod.Name]
if !hasResult {
klog.Errorf("unable to fetch CPU metrics for pod %s, skipping", pod.String())
return nil
return
}
memRes, hasResult := nsRes.mem[pod.Name]
if !hasResult {
klog.Errorf("unable to fetch memory metrics for pod %s, skipping", pod.String())
return nil
return
}
earliestTs := pmodel.Latest
containerMetrics := make(map[string]metrics.ContainerMetrics)
earliestTS := pmodel.Latest
// organize all the CPU results
for _, cpu := range cpuRes {
@ -210,8 +206,8 @@ func (p *resourceProvider) assignForPod(pod *metav1.PartialObjectMetadata, resul
}
}
containerMetrics[containerName].Usage[corev1.ResourceCPU] = *resource.NewMilliQuantity(int64(cpu.Value*1000.0), resource.DecimalSI)
if cpu.Timestamp.Before(earliestTS) {
earliestTS = cpu.Timestamp
if cpu.Timestamp.Before(earliestTs) {
earliestTs = cpu.Timestamp
}
}
@ -225,8 +221,8 @@ func (p *resourceProvider) assignForPod(pod *metav1.PartialObjectMetadata, resul
}
}
containerMetrics[containerName].Usage[corev1.ResourceMemory] = *resource.NewMilliQuantity(int64(mem.Value*1000.0), resource.BinarySI)
if mem.Timestamp.Before(earliestTS) {
earliestTS = mem.Timestamp
if mem.Timestamp.Before(earliestTs) {
earliestTs = mem.Timestamp
}
}
@ -241,50 +237,40 @@ func (p *resourceProvider) assignForPod(pod *metav1.PartialObjectMetadata, resul
}
}
podMetric := &metrics.PodMetrics{
ObjectMeta: metav1.ObjectMeta{
Name: pod.Name,
Namespace: pod.Namespace,
Labels: pod.Labels,
CreationTimestamp: metav1.Now(),
},
// store the time in the final format
Timestamp: metav1.NewTime(earliestTS.Time()),
Window: metav1.Duration{Duration: p.window},
// store the time in the final format
*resTime = api.TimeInfo{
Timestamp: earliestTs.Time(),
Window: p.window,
}
// store the container metrics in the final format
podMetric.Containers = make([]metrics.ContainerMetrics, 0, len(containerMetrics))
containerMetricsList := make([]metrics.ContainerMetrics, 0, len(containerMetrics))
for _, containerMetric := range containerMetrics {
podMetric.Containers = append(podMetric.Containers, containerMetric)
containerMetricsList = append(containerMetricsList, containerMetric)
}
return podMetric
*resMetrics = containerMetricsList
}
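Both loops above store usage via `resource.NewMilliQuantity`, scaling the raw float sample by 1000. A standalone sketch of that conversion (hypothetical helper name; CPU uses decimal notation, memory binary, both at milli scale as in the code above):

```go
// Hypothetical helper, not part of this diff: mirrors the conversion in
// assignForPod. Prometheus returns float64 samples (CPU cores, memory
// bytes), so 0.25 cores becomes the quantity 250m.
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func sampleToUsage(cpuCores, memBytes float64) corev1.ResourceList {
	return corev1.ResourceList{
		corev1.ResourceCPU:    *resource.NewMilliQuantity(int64(cpuCores*1000.0), resource.DecimalSI),
		corev1.ResourceMemory: *resource.NewMilliQuantity(int64(memBytes*1000.0), resource.BinarySI),
	}
}
```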
// GetNodeMetrics implements the api.MetricsProvider interface.
func (p *resourceProvider) GetNodeMetrics(nodes ...*corev1.Node) ([]metrics.NodeMetrics, error) {
resMetrics := make([]metrics.NodeMetrics, 0, len(nodes))
// GetNodeMetrics implements the api.MetricsProvider interface. It may return nil, nil.
func (p *resourceProvider) GetNodeMetrics(nodes ...string) ([]api.TimeInfo, []corev1.ResourceList) {
if len(nodes) == 0 {
return resMetrics, nil
return nil, nil
}
now := pmodel.Now()
nodeNames := make([]string, 0, len(nodes))
for _, node := range nodes {
nodeNames = append(nodeNames, node.Name)
}
// run the actual query
qRes := p.queryBoth(now, nodeResource, "", nodeNames...)
qRes := p.queryBoth(now, nodeResource, "", nodes...)
if qRes.err != nil {
klog.Errorf("failed querying node metrics: %v", qRes.err)
return resMetrics, nil
return nil, nil
}
resTimes := make([]api.TimeInfo, len(nodes))
resMetrics := make([]corev1.ResourceList, len(nodes))
// organize the results
for i, nodeName := range nodeNames {
for i, nodeName := range nodes {
// skip if any data is missing
rawCPUs, gotResult := qRes.cpu[nodeName]
if !gotResult {
@ -300,30 +286,28 @@ func (p *resourceProvider) GetNodeMetrics(nodes ...*corev1.Node) ([]metrics.Node
rawMem := rawMems[0]
rawCPU := rawCPUs[0]
// use the earliest timestamp available (in order to be conservative
// when determining if metrics are tainted by startup)
ts := rawCPU.Timestamp.Time()
if ts.After(rawMem.Timestamp.Time()) {
ts = rawMem.Timestamp.Time()
// store the results
resMetrics[i] = corev1.ResourceList{
corev1.ResourceCPU: *resource.NewMilliQuantity(int64(rawCPU.Value*1000.0), resource.DecimalSI),
corev1.ResourceMemory: *resource.NewMilliQuantity(int64(rawMem.Value*1000.0), resource.BinarySI),
}
// store the results
resMetrics = append(resMetrics, metrics.NodeMetrics{
ObjectMeta: metav1.ObjectMeta{
Name: nodes[i].Name,
Labels: nodes[i].Labels,
CreationTimestamp: metav1.Now(),
},
Usage: corev1.ResourceList{
corev1.ResourceCPU: *resource.NewMilliQuantity(int64(rawCPU.Value*1000.0), resource.DecimalSI),
corev1.ResourceMemory: *resource.NewMilliQuantity(int64(rawMem.Value*1000.0), resource.BinarySI),
},
Timestamp: metav1.NewTime(ts),
Window: metav1.Duration{Duration: p.window},
})
// use the earliest timestamp available (in order to be conservative
// when determining if metrics are tainted by startup)
if rawMem.Timestamp.Before(rawCPU.Timestamp) {
resTimes[i] = api.TimeInfo{
Timestamp: rawMem.Timestamp.Time(),
Window: p.window,
}
} else {
resTimes[i] = api.TimeInfo{
Timestamp: rawCPU.Timestamp.Time(),
Window: p.window,
}
}
}
return resMetrics, nil
return resTimes, resMetrics
}
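Both branches implement one rule: report the earlier of the CPU and memory sample timestamps, so a window tainted by startup is not hidden behind the fresher of the two series. A standalone sketch of the same rule (hypothetical helper; `pmodel` is `github.com/prometheus/common/model`):

```go
// Hypothetical helper, not part of this diff: picks the earlier of two
// sample timestamps, the conservative choice used in GetNodeMetrics.
package main

import (
	"time"

	pmodel "github.com/prometheus/common/model"
)

func earliestTime(cpuTS, memTS pmodel.Time) time.Time {
	if memTS.Before(cpuTS) {
		return memTS.Time()
	}
	return cpuTS.Time()
}
```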
// queryBoth queries for both CPU and memory metrics on the given
@ -403,17 +387,13 @@ func (p *resourceProvider) runQuery(now pmodel.Time, queryInfo resourceQuery, re
// associate the results back to each given pod or node
res := make(queryResults, len(*rawRes.Vector))
for _, sample := range *rawRes.Vector {
// skip empty samples
if sample == nil {
for _, val := range *rawRes.Vector {
if val == nil {
// skip empty values
continue
}
// replace NaN and negative values by zero
if math.IsNaN(float64(sample.Value)) || sample.Value < 0 {
sample.Value = 0
}
resKey := string(sample.Metric[resourceLbl])
res[resKey] = append(res[resKey], sample)
resKey := string(val.Metric[resourceLbl])
res[resKey] = append(res[resKey], val)
}
return res, nil
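The master side of this hunk also clamps NaN and negative sample values to zero before grouping by the name label. A self-contained sketch of that sanitize-and-group step (hypothetical function name; `pmodel` is `github.com/prometheus/common/model`):

```go
// Hypothetical standalone form of the loop above: skip nil samples, clamp
// NaN/negative values to zero (master-side behaviour), then group samples
// by the pod/node name label.
package main

import (
	"math"

	pmodel "github.com/prometheus/common/model"
)

func groupByName(vec pmodel.Vector, nameLbl pmodel.LabelName) map[string][]*pmodel.Sample {
	res := make(map[string][]*pmodel.Sample, len(vec))
	for _, sample := range vec {
		if sample == nil {
			continue
		}
		if math.IsNaN(float64(sample.Value)) || sample.Value < 0 {
			sample.Value = 0
		}
		name := string(sample.Metric[nameLbl])
		res[name] = append(res[name], sample)
	}
	return res
}
```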

View file

@ -17,25 +17,22 @@ limitations under the License.
package resourceprovider
import (
"math"
"time"
corev1 "k8s.io/api/core/v1"
apimeta "k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/metrics/pkg/apis/metrics"
"sigs.k8s.io/metrics-server/pkg/api"
config "sigs.k8s.io/prometheus-adapter/cmd/config-gen/utils"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
fakeprom "sigs.k8s.io/prometheus-adapter/pkg/client/fake"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
corev1 "k8s.io/api/core/v1"
apimeta "k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/metrics/pkg/apis/metrics"
"sigs.k8s.io/metrics-server/pkg/api"
config "github.com/directxman12/k8s-prometheus-adapter/cmd/config-gen/utils"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
fakeprom "github.com/directxman12/k8s-prometheus-adapter/pkg/client/fake"
pmodel "github.com/prometheus/common/model"
)
@ -52,9 +49,9 @@ func restMapper() apimeta.RESTMapper {
func buildPodSample(namespace, pod, container string, val float64, ts int64) *pmodel.Sample {
return &pmodel.Sample{
Metric: pmodel.Metric{
"namespace": pmodel.LabelValue(namespace),
"pod": pmodel.LabelValue(pod),
"container": pmodel.LabelValue(container),
"namespace": pmodel.LabelValue(namespace),
"pod_name": pmodel.LabelValue(pod),
"container_name": pmodel.LabelValue(container),
},
Value: pmodel.SampleValue(val),
Timestamp: pmodel.Time(ts),
@ -122,10 +119,10 @@ var _ = Describe("Resource Metrics Provider", func() {
})
It("should be able to list metrics pods across different namespaces", func() {
pods := []*metav1.PartialObjectMetadata{
{ObjectMeta: metav1.ObjectMeta{Namespace: "some-ns", Name: "pod1"}},
{ObjectMeta: metav1.ObjectMeta{Namespace: "some-ns", Name: "pod3"}},
{ObjectMeta: metav1.ObjectMeta{Namespace: "other-ns", Name: "pod27"}},
pods := []types.NamespacedName{
{Namespace: "some-ns", Name: "pod1"},
{Namespace: "some-ns", Name: "pod3"},
{Namespace: "other-ns", Name: "pod27"},
}
fakeProm.QueryResults = map[prom.Selector]prom.QueryResult{
mustBuild(cpuQueries.contQuery.Build("", podResource, "some-ns", []string{cpuQueries.containerLabel}, labels.Everything(), "pod1", "pod3")): buildQueryRes("container_cpu_usage_seconds_total",
@ -149,34 +146,27 @@ var _ = Describe("Resource Metrics Provider", func() {
}
By("querying for metrics for some pods")
podMetrics, err := prov.GetPodMetrics(pods...)
Expect(err).NotTo(HaveOccurred())
By("verifying that metrics have been fetched for all the pods")
Expect(podMetrics).To(HaveLen(3))
times, metricVals := prov.GetContainerMetrics(pods...)
By("verifying that the reported times for each are the earliest times for each pod")
Expect(podMetrics[0].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
Expect(podMetrics[0].Window.Duration).To(Equal(time.Minute))
Expect(podMetrics[1].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
Expect(podMetrics[1].Window.Duration).To(Equal(time.Minute))
Expect(podMetrics[2].Timestamp.Time).To(Equal(pmodel.Time(270).Time()))
Expect(podMetrics[2].Window.Duration).To(Equal(time.Minute))
Expect(times).To(Equal([]api.TimeInfo{
{Timestamp: pmodel.Time(10).Time(), Window: 1 * time.Minute},
{Timestamp: pmodel.Time(10).Time(), Window: 1 * time.Minute},
{Timestamp: pmodel.Time(270).Time(), Window: 1 * time.Minute},
}))
By("verifying that the right metrics were fetched")
Expect(podMetrics).To(HaveLen(3))
Expect(podMetrics[0].Containers).To(ConsistOf(
Expect(metricVals).To(HaveLen(3))
Expect(metricVals[0]).To(ConsistOf(
metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(1100.0, 3100.0)},
metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(1110.0, 3110.0)},
))
Expect(podMetrics[1].Containers).To(ConsistOf(
Expect(metricVals[1]).To(ConsistOf(
metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(1300.0, 3300.0)},
metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(1310.0, 3310.0)},
))
Expect(podMetrics[2].Containers).To(ConsistOf(
Expect(metricVals[2]).To(ConsistOf(
metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(2200.0, 4200.0)},
))
})
@ -194,65 +184,22 @@ var _ = Describe("Resource Metrics Provider", func() {
}
By("querying for metrics for some pods, one of which is missing")
podMetrics, err := prov.GetPodMetrics(
&metav1.PartialObjectMetadata{ObjectMeta: metav1.ObjectMeta{Namespace: "some-ns", Name: "pod1"}},
&metav1.PartialObjectMetadata{ObjectMeta: metav1.ObjectMeta{Namespace: "some-ns", Name: "pod-nonexistant"}},
times, metricVals := prov.GetContainerMetrics(
types.NamespacedName{Namespace: "some-ns", Name: "pod1"},
types.NamespacedName{Namespace: "some-ns", Name: "pod-nonexistant"},
)
Expect(err).NotTo(HaveOccurred())
By("verifying that the missing pod had no metrics")
Expect(podMetrics).To(HaveLen(1))
By("verifying that the missing pod had nil metrics")
Expect(metricVals).To(HaveLen(2))
Expect(metricVals[1]).To(BeNil())
By("verifying that the rest of time metrics and times are correct")
Expect(podMetrics[0].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
Expect(podMetrics[0].Window.Duration).To(Equal(time.Minute))
Expect(podMetrics[0].Containers).To(ConsistOf(
Expect(metricVals[0]).To(ConsistOf(
metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(1100.0, 3100.0)},
metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(1110.0, 3110.0)},
))
})
It("should return metrics of value zero when pod metrics have NaN or negative values", func() {
fakeProm.QueryResults = map[prom.Selector]prom.QueryResult{
mustBuild(cpuQueries.contQuery.Build("", podResource, "some-ns", []string{cpuQueries.containerLabel}, labels.Everything(), "pod1", "pod3")): buildQueryRes("container_cpu_usage_seconds_total",
buildPodSample("some-ns", "pod1", "cont1", -1100.0, 10),
buildPodSample("some-ns", "pod1", "cont2", math.NaN(), 20),
buildPodSample("some-ns", "pod3", "cont1", -1300.0, 10),
buildPodSample("some-ns", "pod3", "cont2", 1310.0, 20),
),
mustBuild(memQueries.contQuery.Build("", podResource, "some-ns", []string{cpuQueries.containerLabel}, labels.Everything(), "pod1", "pod3")): buildQueryRes("container_memory_working_set_bytes",
buildPodSample("some-ns", "pod1", "cont1", 3100.0, 11),
buildPodSample("some-ns", "pod1", "cont2", -3110.0, 21),
buildPodSample("some-ns", "pod3", "cont1", math.NaN(), 11),
buildPodSample("some-ns", "pod3", "cont2", -3310.0, 21),
),
}
By("querying for metrics for some pods")
podMetrics, err := prov.GetPodMetrics(
&metav1.PartialObjectMetadata{ObjectMeta: metav1.ObjectMeta{Namespace: "some-ns", Name: "pod1"}},
&metav1.PartialObjectMetadata{ObjectMeta: metav1.ObjectMeta{Namespace: "some-ns", Name: "pod3"}},
)
Expect(err).NotTo(HaveOccurred())
By("verifying that metrics have been fetched for all the pods")
Expect(podMetrics).To(HaveLen(2))
By("verifying that the reported times for each are the earliest times for each pod")
Expect(podMetrics[0].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
Expect(podMetrics[0].Window.Duration).To(Equal(time.Minute))
Expect(podMetrics[1].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
Expect(podMetrics[1].Window.Duration).To(Equal(time.Minute))
By("verifying that NaN and negative values were replaced by zero")
Expect(podMetrics[0].Containers).To(ConsistOf(
metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(0, 3100.0)},
metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(0, 0)},
))
Expect(podMetrics[1].Containers).To(ConsistOf(
metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(0, 0)},
metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(1310.0, 0)},
))
Expect(times).To(HaveLen(2))
Expect(times[0]).To(Equal(api.TimeInfo{Timestamp: pmodel.Time(10).Time(), Window: 1 * time.Minute}))
})
It("should be able to list metrics for nodes", func() {
@ -267,24 +214,19 @@ var _ = Describe("Resource Metrics Provider", func() {
),
}
By("querying for metrics for some nodes")
nodeMetrics, err := prov.GetNodeMetrics(
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node1"}},
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node2"}},
)
Expect(err).NotTo(HaveOccurred())
times, metricVals := prov.GetNodeMetrics("node1", "node2")
By("verifying that metrics have been fetched for all the nodes")
Expect(nodeMetrics).To(HaveLen(2))
By("verifying that the reported times for each are the earliest times for each node")
Expect(nodeMetrics[0].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
Expect(nodeMetrics[0].Window.Duration).To(Equal(time.Minute))
Expect(nodeMetrics[1].Timestamp.Time).To(Equal(pmodel.Time(12).Time()))
Expect(nodeMetrics[1].Window.Duration).To(Equal(time.Minute))
By("verifying that the reported times for each are the earliest times for each pod")
Expect(times).To(Equal([]api.TimeInfo{
{Timestamp: pmodel.Time(10).Time(), Window: 1 * time.Minute},
{Timestamp: pmodel.Time(12).Time(), Window: 1 * time.Minute},
}))
By("verifying that the right metrics were fetched")
Expect(nodeMetrics[0].Usage).To(Equal(buildResList(1100.0, 2100.0)))
Expect(nodeMetrics[1].Usage).To(Equal(buildResList(1200.0, 2200.0)))
Expect(metricVals).To(Equal([]corev1.ResourceList{
buildResList(1100.0, 2100.0),
buildResList(1200.0, 2200.0),
}))
})
It("should return nil metrics for missing nodes, but still return partial results", func() {
@ -299,54 +241,22 @@ var _ = Describe("Resource Metrics Provider", func() {
),
}
By("querying for metrics for some nodes, one of which is missing")
nodeMetrics, err := prov.GetNodeMetrics(
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node1"}},
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node2"}},
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node3"}},
)
Expect(err).NotTo(HaveOccurred())
times, metricVals := prov.GetNodeMetrics("node1", "node2", "node3")
By("verifying that the missing pod had no metrics")
Expect(nodeMetrics).To(HaveLen(2))
By("verifying that the missing pod had nil metrics")
Expect(metricVals).To(HaveLen(3))
Expect(metricVals[2]).To(BeNil())
By("verifying that the rest of time metrics and times are correct")
Expect(nodeMetrics[0].Usage).To(Equal(buildResList(1100.0, 2100.0)))
Expect(nodeMetrics[0].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
Expect(nodeMetrics[0].Window.Duration).To(Equal(time.Minute))
Expect(nodeMetrics[1].Usage).To(Equal(buildResList(1200.0, 2200.0)))
Expect(nodeMetrics[1].Timestamp.Time).To(Equal(pmodel.Time(12).Time()))
Expect(nodeMetrics[1].Window.Duration).To(Equal(time.Minute))
})
It("should return metrics of value zero when node metrics have NaN or negative values", func() {
fakeProm.QueryResults = map[prom.Selector]prom.QueryResult{
mustBuild(cpuQueries.nodeQuery.Build("", nodeResource, "", nil, labels.Everything(), "node1", "node2")): buildQueryRes("container_cpu_usage_seconds_total",
buildNodeSample("node1", -1100.0, 10),
buildNodeSample("node2", 1200.0, 14),
),
mustBuild(memQueries.nodeQuery.Build("", nodeResource, "", nil, labels.Everything(), "node1", "node2")): buildQueryRes("container_memory_working_set_bytes",
buildNodeSample("node1", 2100.0, 11),
buildNodeSample("node2", math.NaN(), 12),
),
}
By("querying for metrics for some nodes")
nodeMetrics, err := prov.GetNodeMetrics(
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node1"}},
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node2"}},
)
Expect(err).NotTo(HaveOccurred())
By("verifying that metrics have been fetched for all the nodes")
Expect(nodeMetrics).To(HaveLen(2))
By("verifying that the reported times for each are the earliest times for each pod")
Expect(nodeMetrics[0].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
Expect(nodeMetrics[0].Window.Duration).To(Equal(time.Minute))
Expect(nodeMetrics[1].Timestamp.Time).To(Equal(pmodel.Time(12).Time()))
Expect(nodeMetrics[1].Window.Duration).To(Equal(time.Minute))
By("verifying that NaN and negative values were replaced by zero")
Expect(nodeMetrics[0].Usage).To(Equal(buildResList(0, 2100.0)))
Expect(nodeMetrics[1].Usage).To(Equal(buildResList(1200.0, 0)))
Expect(metricVals).To(Equal([]corev1.ResourceList{
buildResList(1100.0, 2100.0),
buildResList(1200.0, 2200.0),
nil,
}))
Expect(times).To(Equal([]api.TimeInfo{
{Timestamp: pmodel.Time(10).Time(), Window: 1 * time.Minute},
{Timestamp: pmodel.Time(12).Time(), Window: 1 * time.Minute},
{},
}))
})
})

View file

@ -1,41 +0,0 @@
# End-to-end tests
## With [kind](https://kind.sigs.k8s.io/)
[`kind`](https://kind.sigs.k8s.io/) and `kubectl` are automatically downloaded
unless `SKIP_INSTALL=true` is set.
A `kind` cluster is automatically created before the tests, and deleted after
the tests.
The `prometheus-adapter` container image is built locally and imported
into the cluster.
```bash
KIND_E2E=true make test-e2e
```
## With an existing Kubernetes cluster
If you already have a Kubernetes cluster, you can use:
```bash
KUBECONFIG="/path/to/kube/config" REGISTRY="my.registry/prefix" make test-e2e
```
- The cluster should not have a namespace `prometheus-adapter-e2e`.
The namespace will be created and deleted as part of the E2E tests.
- `KUBECONFIG` is the path of the [`kubeconfig` file].
**Optional**, defaults to `${HOME}/.kube/config`
- `REGISTRY` is the image registry where the container image should be pushed.
**Required**.
[`kubeconfig` file]: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
## Additional environment variables
These environment variables may also be used (with any non-empty value):
- `SKIP_INSTALL`: skip the installation of `kind` and `kubectl` binaries;
- `SKIP_CLEAN_AFTER`: skip the deletion of resources (`Kind` cluster or
Kubernetes namespace) and of the temporary directory `.e2e`;
- `CLEAN_BEFORE`: clean before running the tests, e.g. if `SKIP_CLEAN_AFTER`
was used on the previous run.
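For example (combining the variables above), to clean leftovers from a previous aborted run, skip re-downloading the binaries, and keep this run's resources for inspection:

```bash
CLEAN_BEFORE=true SKIP_INSTALL=true SKIP_CLEAN_AFTER=true KIND_E2E=true make test-e2e
```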

View file

@ -1,213 +0,0 @@
/*
Copyright 2022 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package e2e
import (
"context"
"fmt"
"log"
"os"
"testing"
"time"
monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
monitoring "github.com/prometheus-operator/prometheus-operator/pkg/client/versioned"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
clientset "k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
metricsv1beta1 "k8s.io/metrics/pkg/apis/metrics/v1beta1"
metrics "k8s.io/metrics/pkg/client/clientset/versioned"
)
const (
ns = "prometheus-adapter-e2e"
prometheusInstance = "prometheus"
deployment = "prometheus-adapter"
)
var (
client clientset.Interface
promOpClient monitoring.Interface
metricsClient metrics.Interface
)
func TestMain(m *testing.M) {
kubeconfig := os.Getenv("KUBECONFIG")
if len(kubeconfig) == 0 {
log.Fatal("KUBECONFIG not provided")
}
var err error
client, promOpClient, metricsClient, err = initializeClients(kubeconfig)
if err != nil {
log.Fatalf("Cannot create clients: %v", err)
}
ctx := context.Background()
err = waitForPrometheusReady(ctx, ns, prometheusInstance)
if err != nil {
log.Fatalf("Prometheus instance 'prometheus' not ready: %v", err)
}
err = waitForDeploymentReady(ctx, ns, deployment)
if err != nil {
log.Fatalf("Deployment prometheus-adapter not ready: %v", err)
}
exitVal := m.Run()
os.Exit(exitVal)
}
func initializeClients(kubeconfig string) (clientset.Interface, monitoring.Interface, metrics.Interface, error) {
cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
if err != nil {
return nil, nil, nil, fmt.Errorf("Error during client configuration with %v", err)
}
clientSet, err := clientset.NewForConfig(cfg)
if err != nil {
return nil, nil, nil, fmt.Errorf("Error during client creation with %v", err)
}
promOpClient, err := monitoring.NewForConfig(cfg)
if err != nil {
return nil, nil, nil, fmt.Errorf("Error during dynamic client creation with %v", err)
}
metricsClientSet, err := metrics.NewForConfig(cfg)
if err != nil {
return nil, nil, nil, fmt.Errorf("Error during metrics client creation with %v", err)
}
return clientSet, promOpClient, metricsClientSet, nil
}
func waitForPrometheusReady(ctx context.Context, namespace string, name string) error {
return wait.PollUntilContextTimeout(ctx, 5*time.Second, 120*time.Second, true, func(ctx context.Context) (bool, error) {
prom, err := promOpClient.MonitoringV1().Prometheuses(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return false, err
}
var reconciled, available *monitoringv1.Condition
for _, condition := range prom.Status.Conditions {
cond := condition
if cond.Type == monitoringv1.Reconciled {
reconciled = &cond
} else if cond.Type == monitoringv1.Available {
available = &cond
}
}
if reconciled == nil {
log.Printf("Prometheus instance '%s': Waiting for reconciliation status...", name)
return false, nil
}
if reconciled.Status != monitoringv1.ConditionTrue {
log.Printf("Prometheus instance '%s': Reconciiled = %v. Waiting for reconciliation (reason %s, %q)...", name, reconciled.Status, reconciled.Reason, reconciled.Message)
return false, nil
}
specReplicas := *prom.Spec.Replicas
availableReplicas := prom.Status.AvailableReplicas
if specReplicas != availableReplicas {
log.Printf("Prometheus instance '%s': %v/%v pods are ready. Waiting for all pods to be ready...", name, availableReplicas, specReplicas)
return false, nil
}
if available == nil {
log.Printf("Prometheus instance '%s': Waiting for Available status...", name)
return false, nil
}
if available.Status != monitoringv1.ConditionTrue {
log.Printf("Prometheus instance '%s': Available = %v. Waiting for Available status... (reason %s, %q)", name, available.Status, available.Reason, available.Message)
return false, nil
}
log.Printf("Prometheus instance '%s': Ready.", name)
return true, nil
})
}
func waitForDeploymentReady(ctx context.Context, namespace string, name string) error {
return wait.PollUntilContextTimeout(ctx, 5*time.Second, 30*time.Second, true, func(ctx context.Context) (bool, error) {
deploy, err := client.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return false, err
}
if deploy.Status.ReadyReplicas == *deploy.Spec.Replicas {
log.Printf("Deployment %s: %v/%v pods are ready.", name, deploy.Status.ReadyReplicas, *deploy.Spec.Replicas)
return true, nil
}
log.Printf("Deployment %s: %v/%v pods are ready. Waiting for all pods to be ready...", name, deploy.Status.ReadyReplicas, *deploy.Spec.Replicas)
return false, nil
})
}
func TestNodeMetrics(t *testing.T) {
ctx := context.Background()
var nodeMetrics *metricsv1beta1.NodeMetricsList
err := wait.PollUntilContextTimeout(ctx, 2*time.Second, 30*time.Second, true, func(ctx context.Context) (bool, error) {
var err error
nodeMetrics, err = metricsClient.MetricsV1beta1().NodeMetricses().List(ctx, metav1.ListOptions{})
if err != nil {
return false, err
}
nonEmptyNodeMetrics := len(nodeMetrics.Items) > 0
if !nonEmptyNodeMetrics {
t.Logf("Node metrics empty... Retrying.")
}
return nonEmptyNodeMetrics, nil
})
require.NoErrorf(t, err, "Node metrics should not be empty")
for _, nodeMetric := range nodeMetrics.Items {
positiveMemory := nodeMetric.Usage.Memory().CmpInt64(0)
assert.Positivef(t, positiveMemory, "Memory usage for node %s is %v, should be > 0", nodeMetric.Name, nodeMetric.Usage.Memory())
positiveCPU := nodeMetric.Usage.Cpu().CmpInt64(0)
assert.Positivef(t, positiveCPU, "CPU usage for node %s is %v, should be > 0", nodeMetric.Name, nodeMetric.Usage.Cpu())
}
}
func TestPodMetrics(t *testing.T) {
ctx := context.Background()
var podMetrics *metricsv1beta1.PodMetricsList
err := wait.PollUntilContextTimeout(ctx, 2*time.Second, 30*time.Second, true, func(ctx context.Context) (bool, error) {
var err error
podMetrics, err = metricsClient.MetricsV1beta1().PodMetricses(ns).List(ctx, metav1.ListOptions{})
if err != nil {
return false, err
}
nonEmptyPodMetrics := len(podMetrics.Items) > 0
if !nonEmptyPodMetrics {
t.Logf("Pod metrics empty... Retrying.")
}
return nonEmptyPodMetrics, nil
})
require.NoErrorf(t, err, "Pod metrics should not be empty")
for _, pod := range podMetrics.Items {
for _, containerMetric := range pod.Containers {
positiveMemory := containerMetric.Usage.Memory().CmpInt64(0)
assert.Positivef(t, positiveMemory, "Memory usage for pod %s/%s is %v, should be > 0", pod.Name, containerMetric.Name, containerMetric.Usage.Memory())
}
}
}
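Assuming the adapter and Prometheus are already deployed in the `prometheus-adapter-e2e` namespace, these tests can also be invoked directly, mirroring the final step of the e2e script later in this diff (`KUBECONFIG` is required by `TestMain`):

```bash
KUBECONFIG="${HOME}/.kube/config" go test sigs.k8s.io/prometheus-adapter/test/e2e/ -v -count=1
```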

View file

@ -1,24 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: prometheus
rules:
- apiGroups: [""]
resources:
- nodes
- nodes/metrics
- services
- endpoints
- pods
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- configmaps
verbs: ["get"]
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]

View file

@ -1,9 +0,0 @@
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
name: prometheus
namespace: prometheus-adapter-e2e
spec:
replicas: 2
serviceAccountName: prometheus
serviceMonitorSelector: {}

View file

@ -1,5 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus
namespace: prometheus-adapter-e2e

View file

@ -1,25 +0,0 @@
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
app.kubernetes.io/name: kubelet
name: kubelet
namespace: prometheus-adapter-e2e
spec:
endpoints:
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
honorLabels: true
honorTimestamps: false
interval: 10s
path: /metrics/resource
port: https-metrics
scheme: https
tlsConfig:
insecureSkipVerify: true
jobLabel: app.kubernetes.io/name
namespaceSelector:
matchNames:
- kube-system
selector:
matchLabels:
app.kubernetes.io/name: kubelet

View file

@ -1,14 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: prometheus
namespace: prometheus-adapter-e2e
spec:
ports:
- name: web
port: 9090
targetPort: web
selector:
app.kubernetes.io/instance: prometheus
app.kubernetes.io/name: prometheus
sessionAffinity: ClientIP

View file

@ -1,134 +0,0 @@
#!/usr/bin/env bash
# Copyright 2022 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -x
set -o errexit
set -o nounset
# Tool versions
K8S_VERSION=${KUBERNETES_VERSION:-v1.30.0} # cf https://hub.docker.com/r/kindest/node/tags
KIND_VERSION=${KIND_VERSION:-v0.23.0} # cf https://github.com/kubernetes-sigs/kind/releases
PROM_OPERATOR_VERSION=${PROM_OPERATOR_VERSION:-v0.73.2} # cf https://github.com/prometheus-operator/prometheus-operator/releases
# Variables; set to empty if unbound/empty
REGISTRY=${REGISTRY:-}
KIND_E2E=${KIND_E2E:-}
SKIP_INSTALL=${SKIP_INSTALL:-}
SKIP_CLEAN_AFTER=${SKIP_CLEAN_AFTER:-}
CLEAN_BEFORE=${CLEAN_BEFORE:-}
# KUBECONFIG - will be overridden if a cluster is deployed with Kind
KUBECONFIG=${KUBECONFIG:-"${HOME}/.kube/config"}
# A temporary directory used by the tests
E2E_DIR="${PWD}/.e2e"
# The namespace where prometheus-adapter is deployed
NAMESPACE="prometheus-adapter-e2e"
if [[ -z "${REGISTRY}" && -z "${KIND_E2E}" ]]; then
echo -e "Either REGISTRY or KIND_E2E should be set."
exit 1
fi
function clean {
if [[ -n "${KIND_E2E}" ]]; then
kind delete cluster || true
else
kubectl delete -f ./deploy/manifests || true
kubectl delete -f ./test/prometheus-manifests || true
kubectl delete namespace "${NAMESPACE}" || true
fi
rm -rf "${E2E_DIR}"
}
if [[ -n "${CLEAN_BEFORE}" ]]; then
clean
fi
function on_exit {
local error_code="$?"
echo "Obtaining prometheus-adapter pod logs..."
kubectl logs -l app.kubernetes.io/name=prometheus-adapter -n "${NAMESPACE}" || true
if [[ -z "${SKIP_CLEAN_AFTER}" ]]; then
clean
fi
test "${error_code}" == 0 && return;
}
trap on_exit EXIT
if [[ -d "${E2E_DIR}" ]]; then
echo -e "${E2E_DIR} already exists."
exit 1
fi
mkdir -p "${E2E_DIR}"
if [[ -n "${KIND_E2E}" ]]; then
# Install kubectl and kind, if we did not set SKIP_INSTALL
if [[ -z "${SKIP_INSTALL}" ]]; then
BIN="${E2E_DIR}/bin"
mkdir -p "${BIN}"
curl -Lo "${BIN}/kubectl" "https://dl.k8s.io/release/${K8S_VERSION}/bin/linux/amd64/kubectl" && chmod +x "${BIN}/kubectl"
curl -Lo "${BIN}/kind" "https://kind.sigs.k8s.io/dl/${KIND_VERSION}/kind-linux-amd64" && chmod +x "${BIN}/kind"
export PATH="${BIN}:${PATH}"
fi
kind create cluster --image "kindest/node:${K8S_VERSION}"
REGISTRY="localhost"
KUBECONFIG="${E2E_DIR}/kubeconfig"
kind get kubeconfig > "${KUBECONFIG}"
fi
# Create the test namespace
kubectl create namespace "${NAMESPACE}"
export REGISTRY
IMAGE_NAME="${REGISTRY}/prometheus-adapter-$(go env GOARCH)"
IMAGE_TAG="v$(cat VERSION)"
if [[ -n "${KIND_E2E}" ]]; then
make container
kind load docker-image "${IMAGE_NAME}:${IMAGE_TAG}"
else
make push
fi
# Install prometheus-operator
kubectl apply -f "https://github.com/prometheus-operator/prometheus-operator/releases/download/${PROM_OPERATOR_VERSION}/bundle.yaml" --server-side
# Install and setup prometheus
kubectl apply -f ./test/prometheus-manifests --server-side
# Customize prometheus-adapter manifests
# TODO: use Kustomize or generate manifests from Jsonnet
cp -r ./deploy/manifests "${E2E_DIR}/manifests"
prom_url="http://prometheus.${NAMESPACE}.svc:9090/"
sed -i -e "s|--prometheus-url=.*$|--prometheus-url=${prom_url}|g" "${E2E_DIR}/manifests/deployment.yaml"
sed -i -e "s|image: .*$|image: ${IMAGE_NAME}:${IMAGE_TAG}|g" "${E2E_DIR}/manifests/deployment.yaml"
find "${E2E_DIR}/manifests" -type f -exec sed -i -e "s|namespace: monitoring|namespace: ${NAMESPACE}|g" {} \;
# Deploy prometheus-adapter
kubectl apply -f "${E2E_DIR}/manifests" --server-side
PROJECT_PREFIX="sigs.k8s.io/prometheus-adapter"
export KUBECONFIG
go test "${PROJECT_PREFIX}/test/e2e/" -v -count=1

1
vendor/github.com/NYTimes/gziphandler/.gitignore generated vendored Normal file
View file

@ -0,0 +1 @@
*.swp

6
vendor/github.com/NYTimes/gziphandler/.travis.yml generated vendored Normal file
View file

@ -0,0 +1,6 @@
language: go
go:
- 1.7
- 1.8
- tip

View file

@ -0,0 +1,75 @@
---
layout: code-of-conduct
version: v1.0
---
This code of conduct outlines our expectations for participants within the **NYTimes/gziphandler** community, as well as steps to reporting unacceptable behavior. We are committed to providing a welcoming and inspiring community for all and expect our code of conduct to be honored. Anyone who violates this code of conduct may be banned from the community.
Our open source community strives to:
* **Be friendly and patient.**
* **Be welcoming**: We strive to be a community that welcomes and supports people of all backgrounds and identities. This includes, but is not limited to members of any race, ethnicity, culture, national origin, colour, immigration status, social and economic class, educational level, sex, sexual orientation, gender identity and expression, age, size, family status, political belief, religion, and mental and physical ability.
* **Be considerate**: Your work will be used by other people, and you in turn will depend on the work of others. Any decision you take will affect users and colleagues, and you should take those consequences into account when making decisions. Remember that we're a world-wide community, so you might not be communicating in someone else's primary language.
* **Be respectful**: Not all of us will agree all the time, but disagreement is no excuse for poor behavior and poor manners. We might all experience some frustration now and then, but we cannot allow that frustration to turn into a personal attack. It's important to remember that a community where people feel uncomfortable or threatened is not a productive one.
* **Be careful in the words that we choose**: we are a community of professionals, and we conduct ourselves professionally. Be kind to others. Do not insult or put down other participants. Harassment and other exclusionary behavior aren't acceptable.
* **Try to understand why we disagree**: Disagreements, both social and technical, happen all the time. It is important that we resolve disagreements and differing views constructively. Remember that we're different. The strength of our community comes from its diversity, people from a wide range of backgrounds. Different people have different perspectives on issues. Being unable to understand why someone holds a viewpoint doesn't mean that they're wrong. Don't forget that it is human to err and blaming each other doesn't get us anywhere. Instead, focus on helping to resolve issues and learning from mistakes.
## Definitions
Harassment includes, but is not limited to:
- Offensive comments related to gender, gender identity and expression, sexual orientation, disability, mental illness, neuro(a)typicality, physical appearance, body size, race, age, regional discrimination, political or religious affiliation
- Unwelcome comments regarding a person's lifestyle choices and practices, including those related to food, health, parenting, drugs, and employment
- Deliberate misgendering. This includes deadnaming or persistently using a pronoun that does not correctly reflect a person's gender identity. You must address people by the name they give you when not addressing them by their username or handle
- Physical contact and simulated physical contact (eg, textual descriptions like “*hug*” or “*backrub*”) without consent or after a request to stop
- Threats of violence, both physical and psychological
- Incitement of violence towards any individual, including encouraging a person to commit suicide or to engage in self-harm
- Deliberate intimidation
- Stalking or following
- Harassing photography or recording, including logging online activity for harassment purposes
- Sustained disruption of discussion
- Unwelcome sexual attention, including gratuitous or off-topic sexual images or behaviour
- Pattern of inappropriate social contact, such as requesting/assuming inappropriate levels of intimacy with others
- Continued one-on-one communication after requests to cease
- Deliberate “outing” of any aspect of a person's identity without their consent except as necessary to protect others from intentional abuse
- Publication of non-harassing private communication
Our open source community prioritizes marginalized people's safety over privileged people's comfort. We will not act on complaints regarding:
- Reverse -isms, including reverse racism, reverse sexism, and cisphobia
- Reasonable communication of boundaries, such as “leave me alone,” “go away,” or “I'm not discussing this with you”
- Refusal to explain or debate social justice concepts
- Communicating in a tone you don't find congenial
- Criticizing racist, sexist, cissexist, or otherwise oppressive behavior or assumptions
### Diversity Statement
We encourage everyone to participate and are committed to building a community for all. Although we will fail at times, we seek to treat everyone both as fairly and equally as possible. Whenever a participant has made a mistake, we expect them to take responsibility for it. If someone has been harmed or offended, it is our responsibility to listen carefully and respectfully, and do our best to right the wrong.
Although this list cannot be exhaustive, we explicitly honor diversity in age, gender, gender identity or expression, culture, ethnicity, language, national origin, political beliefs, profession, race, religion, sexual orientation, socioeconomic status, and technical ability. We will not tolerate discrimination based on any of the protected
characteristics above, including participants with disabilities.
### Reporting Issues
If you experience or witness unacceptable behavior—or have any other concerns—please report it by contacting us via **code@nytimes.com**. All reports will be handled with discretion. In your report please include:
- Your contact information.
- Names (real, nicknames, or pseudonyms) of any individuals involved. If there are additional witnesses, please include them as well.
- Your account of what occurred, and if you believe the incident is ongoing. If there is a publicly available record (e.g. a mailing list archive or a public IRC logger), please include a link.
- Any additional information that may be helpful.
After filing a report, a representative will contact you personally, review the incident, follow up with any additional questions, and make a decision as to how to respond. If the person who is harassing you is part of the response team, they will recuse themselves from handling your incident. If the complaint originates from a member of the response team, it will be handled by a different member of the response team. We will respect confidentiality requests for the purpose of protecting victims of abuse.
### Attribution & Acknowledgements
We all stand on the shoulders of giants across many open source communities. We'd like to thank the communities and projects that established codes of conduct and diversity statements as our inspiration:
* [Django](https://www.djangoproject.com/conduct/reporting/)
* [Python](https://www.python.org/community/diversity/)
* [Ubuntu](http://www.ubuntu.com/about/about-ubuntu/conduct)
* [Contributor Covenant](http://contributor-covenant.org/)
* [Geek Feminism](http://geekfeminism.org/about/code-of-conduct/)
* [Citizen Code of Conduct](http://citizencodeofconduct.org/)
This Code of Conduct was based on https://github.com/todogroup/opencodeofconduct

30
vendor/github.com/NYTimes/gziphandler/CONTRIBUTING.md generated vendored Normal file
View file

@ -0,0 +1,30 @@
# Contributing to NYTimes/gziphandler
This is an open source project started by handful of developers at The New York Times and open to the entire Go community.
We really appreciate your help!
## Filing issues
When filing an issue, make sure to answer these five questions:
1. What version of Go are you using (`go version`)?
2. What operating system and processor architecture are you using?
3. What did you do?
4. What did you expect to see?
5. What did you see instead?
## Contributing code
Before submitting changes, please follow these guidelines:
1. Check the open issues and pull requests for existing discussions.
2. Open an issue to discuss a new feature.
3. Write tests.
4. Make sure code follows the ['Go Code Review Comments'](https://github.com/golang/go/wiki/CodeReviewComments).
5. Make sure your changes pass `go test`.
6. Make sure the entire test suite passes locally and on Travis CI.
7. Open a Pull Request.
8. [Squash your commits](http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html) after receiving feedback and add a [great commit message](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html).
Unless otherwise noted, the gziphandler source files are distributed under the Apache 2.0-style license found in the LICENSE.md file.

Some files were not shown because too many files have changed in this diff.