Compare commits


132 commits

Author SHA1 Message Date
Kubernetes Prow Robot
01919d0ef1
Merge pull request #686 from kubernetes-sigs/dependabot/go_modules/golang.org/x/crypto-0.31.0
build(deps): bump golang.org/x/crypto from 0.22.0 to 0.31.0
2025-04-01 03:34:37 -07:00
dependabot[bot]
21ea0ab279
build(deps): bump golang.org/x/crypto from 0.22.0 to 0.31.0
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.22.0 to 0.31.0.
- [Commits](https://github.com/golang/crypto/compare/v0.22.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-12 00:01:25 +00:00
Kubernetes Prow Robot
c2ae4cdaf1
Merge pull request #672 from chc5/go-upgrade-cve-fix
Upgrade Go version to 1.22.5 to fix CVEs.
2024-07-23 10:06:51 -07:00
Chieh Chen
26d05b7ae9 Upgrade Go version to 1.22.5 to fix CVEs. 2024-07-17 20:13:25 +00:00
Damien Grisonnet
17cef511b1
Merge pull request #660 from dgrisonnet/cut-0.12.0
Cut release 0.12.0
2024-05-16 20:14:56 +02:00
Damien Grisonnet
9988fd3e91 *: cut release 0.12.0
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2024-05-16 20:01:59 +02:00
Kubernetes Prow Robot
06e1d3913e
Merge pull request #659 from dgrisonnet/bump-1.30
Bump to Kubernetes 1.30
2024-05-16 09:57:38 -07:00
Damien Grisonnet
39ef9fa0e7 test: use official image of kind
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2024-05-16 14:26:37 +02:00
Damien Grisonnet
01b29a6578 *: fix openapi-gen options
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2024-05-16 14:26:37 +02:00
Damien Grisonnet
1d31a46aa1 *: update-lint
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2024-05-16 14:26:37 +02:00
Damien Grisonnet
d3784c5725 test: bump test dependencies
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2024-05-16 14:26:37 +02:00
Damien Grisonnet
aba25ac4aa cmd: fix OpenAPI
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2024-05-16 14:26:37 +02:00
Damien Grisonnet
fdde189945 *: bump deps
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2024-05-16 14:26:36 +02:00
Kubernetes Prow Robot
63bd3e8d44
Merge pull request #653 from logicalhan/patch-1
Update OWNERS (add myself and dashpole to OWNERs)
2024-04-19 10:20:08 -07:00
Kubernetes Prow Robot
1692f124d3
Merge pull request #651 from kubernetes-sigs/dependabot/go_modules/google.golang.org/grpc-1.58.3
build(deps): bump google.golang.org/grpc from 1.58.2 to 1.58.3
2024-04-19 10:10:45 -07:00
Han Kang
11d7d2bb05
Update OWNERS (add myself and dashpole to OWNERs) 2024-04-18 10:11:01 -07:00
dependabot[bot]
b224085e86
build(deps): bump google.golang.org/grpc from 1.58.2 to 1.58.3
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.58.2 to 1.58.3.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.58.2...v1.58.3)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-05 15:31:29 +00:00
Kubernetes Prow Robot
5d9b01a57a
Merge pull request #649 from machine424/uppps
deps: upgrade github.com/golang/protobuf to v1.5.4 for better compati…
2024-04-05 08:30:48 -07:00
machine424
4d5c98d364
deps: upgrade github.com/golang/protobuf to v1.5.4 for better compatibility, see https://github.com/golang/protobuf/issues/1596#issuecomment-1981208282
upgrade go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp to v0.49.0 to address CVE-2023-45142 even though prometheus-adapter isn't using it directly and isn't exposing any traces.
2024-04-04 09:48:53 +02:00
Kubernetes Prow Robot
b48bff400e
Merge pull request #599 from bogo-y/fix
Fix metric unregistered
2024-03-26 09:15:19 -07:00
Kubernetes Prow Robot
9156bf3fbc
Merge pull request #614 from kubernetes-sigs/dependabot/go_modules/google.golang.org/grpc-1.56.3
build(deps): bump google.golang.org/grpc from 1.53.0 to 1.56.3
2023-11-28 18:40:38 +01:00
Kubernetes Prow Robot
27cf936f32
Merge pull request #608 from jaybooth4/release-112
Cut release v0.11.2
2023-11-13 15:58:49 +01:00
Kubernetes Prow Robot
ed795c1ae2
Merge pull request #620 from jaybooth4/master
Update prometheus-adapter go patch versions
2023-11-08 18:22:35 +01:00
Jason
a64d132d91 Update prometheus-adapter go patch versions 2023-11-07 02:48:54 +00:00
dependabot[bot]
a01b094a63
build(deps): bump google.golang.org/grpc from 1.53.0 to 1.56.3
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.53.0 to 1.56.3.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.53.0...v1.56.3)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-11-06 21:56:01 +00:00
Kubernetes Prow Robot
f588141f08
Merge pull request #609 from kubernetes-sigs/dependabot/go_modules/golang.org/x/net-0.17.0
build(deps): bump golang.org/x/net from 0.8.0 to 0.17.0
2023-11-06 22:54:46 +01:00
Kubernetes Prow Robot
98e716c7d3
Merge pull request #618 from machine424/d-http2
Add a toggle to disable HTTP/2 on the server to mitigate CVE-2023-44487
2023-10-31 17:16:11 +01:00
machine424
ba77337ae4
Add a toggle to disable HTTP/2 on the server to mitigate CVE-2023-44487
until the Go standard library and golang.org/x/net are fully fixed.
2023-10-30 09:48:56 +01:00
bogo
a5bcb39046
replace "endpoint" with "path" 2023-10-23 11:15:51 +08:00
bogo
2a4a4316dd run make update-lint and set EnabledMetrics=false in the server config 2023-10-13 20:14:34 +08:00
dependabot[bot]
f82ee9d1dc
build(deps): bump golang.org/x/net from 0.8.0 to 0.17.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.8.0 to 0.17.0.
- [Commits](https://github.com/golang/net/compare/v0.8.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-11 23:39:51 +00:00
bogo
a53ee9eed1
replace "endpoint" with "path"
Co-authored-by: Simon Pasquier <spasquie@redhat.com>
2023-10-07 10:40:52 +08:00
Jason Booth
7ba3c13bb6 Cut release v0.11.2 2023-09-08 13:26:38 -04:00
Kubernetes Prow Robot
891c52fe00
Merge pull request #606 from jaybooth4/upversion
Update prometheus-adapter go patch version to 1.20.7
2023-09-08 10:12:15 -07:00
bogo
e772844ed8
normalize import order 2023-09-04 11:01:25 +08:00
Jason Booth
6a1ba321da Update prometheus-adapter go patch versions
Upgrade to 1.20.7 to address multiple CVE security findings
    CVE-2023-29404
    CVE-2023-29402
    CVE-2023-29405
    CVE-2023-29403
    CVE-2023-39533
    CVE-2023-29409
    CVE-2023-29406
2023-08-30 13:53:44 -04:00
bogo
0032610ace change the latency metric and dependency inject prometheus registry 2023-08-29 17:19:47 +08:00
bogo
fda3dad49b
Merge pull request #1 from kubernetes-sigs/master
update my dev branch
2023-08-29 15:32:10 +08:00
Damien Grisonnet
4cc5de93cb
Merge pull request #604 from dgrisonnet/cut-release-0.11.1
Cut release v0.11.1
2023-08-24 15:30:25 +02:00
Damien Grisonnet
a4100f047a Cut release v0.11.1
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2023-08-24 15:13:50 +02:00
Damien Grisonnet
cb883fb789
Merge pull request #601 from dgrisonnet/fix-multiarch
Fix multiarch image build
2023-08-22 18:18:47 +02:00
Damien Grisonnet
198c469805 Fix multiarch image build
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2023-08-22 18:01:46 +02:00
bogo
7cf3ac5d90
Update metric buckets 2023-08-17 20:57:47 +08:00
bogo
966ef227fe Fix metric unregistered 2023-08-17 20:23:01 +08:00
Kubernetes Prow Robot
74ba84b76e
Merge pull request #592 from kubernetes-sigs/dependabot/go_modules/google.golang.org/grpc-1.53.0
build(deps): bump google.golang.org/grpc from 1.40.0 to 1.53.0
2023-07-27 05:56:09 -07:00
Damien Grisonnet
36fbcc78f1
Merge pull request #596 from dgrisonnet/release-0.11.0
Cut release 0.11.0
2023-07-25 14:23:33 +02:00
Damien Grisonnet
0a6e74a5b3 *: cut release 0.11.0
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2023-07-25 14:06:46 +02:00
dependabot[bot]
147dacee4a
build(deps): bump google.golang.org/grpc from 1.40.0 to 1.53.0
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.40.0 to 1.53.0.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.40.0...v1.53.0)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-25 09:38:23 +00:00
Damien Grisonnet
f733e2f74d
Merge pull request #586 from dgrisonnet/k8s-1.27
*: bump go to 1.20 and k8s deps to 0.27.2
2023-07-25 11:37:17 +02:00
Damien Grisonnet
b50333c035 cmd/adapter: add support for openapi v3
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2023-06-22 13:52:06 +02:00
Damien Grisonnet
f69aae4c78
Merge pull request #587 from dgrisonnet/update-lint
Update golangci-lint to 1.53.2
2023-06-20 18:42:42 +02:00
Damien Grisonnet
8579be6c7b manifests: remove deprecated klog flag
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2023-06-20 18:39:51 +02:00
Damien Grisonnet
e69388346f *: bump go to 1.20 and k8s deps to 0.27.2
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2023-06-20 18:39:50 +02:00
Damien Grisonnet
86efb37019 Update golangci-lint to 1.53.2
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2023-06-13 15:02:46 +02:00
Kubernetes Prow Robot
c8caa11da1
Merge pull request #583 from brcorey/master
fix: use dl.k8s.io, not kubernetes-release bucket
2023-05-17 22:30:33 -07:00
Corey
b3a3d97596 fix: use dl.k8s.io, not kubernetes-release bucket
Signed-off-by: Corey <brant742@gmail.com>
2023-05-11 20:55:13 +00:00
Kubernetes Prow Robot
27eb607509
Merge pull request #565 from kubernetes-sigs/dependabot/go_modules/golang.org/x/net-0.7.0
build(deps): bump golang.org/x/net from 0.4.0 to 0.7.0
2023-03-16 07:15:22 -07:00
dependabot[bot]
d58cdcee93
build(deps): bump golang.org/x/net from 0.4.0 to 0.7.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.4.0 to 0.7.0.
- [Release notes](https://github.com/golang/net/releases)
- [Commits](https://github.com/golang/net/compare/v0.4.0...v0.7.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-09 19:18:02 +00:00
Kubernetes Prow Robot
aab718746b
Merge pull request #569 from kubernetes-sigs/dependabot/go_modules/golang.org/x/crypto-0.1.0
build(deps): bump golang.org/x/crypto from 0.0.0-20220214200702-86341886e292 to 0.1.0
2023-03-09 11:13:52 -08:00
dependabot[bot]
8528f29516
build(deps): bump golang.org/x/crypto
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.0.0-20220214200702-86341886e292 to 0.1.0.
- [Release notes](https://github.com/golang/crypto/releases)
- [Commits](https://github.com/golang/crypto/commits/v0.1.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-07 01:39:44 +00:00
Kubernetes Prow Robot
9a1ffb7b17
Merge pull request #559 from asherf/docs-yaml
Fix yaml in sample config & docs
2023-01-26 02:54:25 -08:00
Asher Foa
6a7f2b5ce1 Fix yaml in sample config & docs 2023-01-25 12:31:52 -05:00
Kubernetes Prow Robot
f607905cf6
Merge pull request #539 from olivierlemasle/e2e
Add initial e2e tests
2023-01-19 03:30:34 -08:00
Olivier Lemasle
7bdb7f14b9 manifests: use node_ metrics 2023-01-19 12:20:42 +01:00
Olivier Lemasle
1145dbfe93 Add initial e2e tests 2023-01-19 12:20:42 +01:00
Kubernetes Prow Robot
307795482f
Merge pull request #553 from gburton1/patch-golang-1.18.9
Patch upgrade of Golang to 1.18.9
2023-01-02 03:09:31 -08:00
Greg Burton
d341e8f67b Patch upgrade of Golang to 1.18.9
Signed-off-by: Greg Burton <9094087+gburton1@users.noreply.github.com>
2022-12-27 16:15:18 -08:00
Kubernetes Prow Robot
b03cc3e7c8
Merge pull request #551 from olivierlemasle/update-golang.org-x-net
Bump golang.org/x/net to v0.4.0 for GO-2022-1144
2022-12-16 08:16:18 -08:00
Kubernetes Prow Robot
b233597358
Merge pull request #518 from sillyfrog/master
Fix broken links in README.md
2022-12-13 11:19:34 -08:00
Olivier Lemasle
5d24df6353 Bump golang.org/x/net to v0.4.0 for GO-2022-1144 2022-12-12 12:11:21 +01:00
Sillyfrog
411763b355 Fix broken links in README.md 2022-12-11 07:09:03 +10:00
Kubernetes Prow Robot
062c42eccc
Merge pull request #550 from olivierlemasle/stylecheck
golangci-lint: Add stylecheck linter
2022-12-09 06:58:19 -08:00
Kubernetes Prow Robot
e18cc18201
Merge pull request #546 from olivierlemasle/log-flags
Refactor adding logging flags
2022-12-09 06:58:12 -08:00
Kubernetes Prow Robot
70604d2f54
Merge pull request #547 from olivierlemasle/GO-2022-0969
Fix GO-2022-0969
2022-12-09 06:52:13 -08:00
Olivier Lemasle
03cd31007e Add stylecheck linter 2022-12-08 23:12:26 +01:00
Olivier Lemasle
3d590269aa Update golang.org/x/net - Fix GO-2022-0969 2022-12-06 21:52:17 +01:00
Olivier Lemasle
09cc27e609 Refactor adding logging flags 2022-12-05 10:10:51 +01:00
Kubernetes Prow Robot
a5faf9f920
Merge pull request #544 from olivierlemasle/minversion
Set MinVersion: tls.VersionTLS12 in prometheus client's TLSClientConfig
2022-11-29 09:11:24 -08:00
Olivier Lemasle
dc0c0058d0 Set MinVersion: tls.VersionTLS12 in prometheus client's TLSClientConfig
Having no explicit MinVersion is reported by [gosec] as G402 (CWE-295):
`TLS MinVersion too low`

Using MinVersion: tls.VersionTLS12 because it's what client-go uses:
cf 1ac8d45935/transport/transport.go (L92)

That way, the Kubernetes API client and the Prometheus client in
prometheus-adapter use the same TLS config MinVersion.

[gosec]: https://github.com/securego/gosec
2022-11-29 17:24:50 +01:00
Kubernetes Prow Robot
8958457968
Merge pull request #540 from olivierlemasle/verify
Use golangci-lint
2022-11-29 08:15:24 -08:00
Olivier Lemasle
0ea1c1b8d3 Use Golangci-lint 2022-11-28 23:17:16 +01:00
Kubernetes Prow Robot
85e2d2052d
Merge pull request #542 from dgrisonnet/add-olivierlemasle
Add olivierlemasle as reviewer
2022-11-28 06:22:08 -08:00
Damien Grisonnet
d5c45b27b0 Add olivierlemasle as reviewer
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2022-11-28 15:10:40 +01:00
Kubernetes Prow Robot
fdfecc8d7f
Merge pull request #538 from olivierlemasle/fix-token-file
Fix segfault when using --prometheus-token-file
2022-11-22 04:12:13 -08:00
Olivier Lemasle
e740fee947 Fix segfault when using --prometheus-token-file 2022-11-09 18:06:38 +01:00
Kubernetes Prow Robot
268b2a8ec2
Merge pull request #531 from JoaoBraveCoding/426
Updates deploy/manifest to latest version in sync with kube-prom
2022-11-08 22:44:13 -08:00
Joao Marcal
372dfc9d3a
Updates README, docs/walkthrough and deploy/
Signed-off-by: JoaoBraveCoding <jmarcal@redhat.com>
2022-11-08 15:36:35 +00:00
Joao Marcal
3afe2c74bc
Updates deploy/manifest to latest version in sync with kube-prom
Issue https://github.com/kubernetes-sigs/prometheus-adapter/issues/426
2022-09-02 17:16:33 +01:00
Damien Grisonnet
dd75b55557
Merge pull request #529 from dgrisonnet/update-registry-location
Update registry location to registry.k8s.io
2022-08-31 14:54:01 +02:00
Kubernetes Prow Robot
4767a63a67
Merge pull request #526 from dgrisonnet/update-owners
Update OWNERS
2022-08-31 05:45:01 -07:00
Damien Grisonnet
204d5996a4 *: update registry location to registry.k8s.io
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2022-08-31 14:32:12 +02:00
Damien Grisonnet
d4d0a69514
Merge pull request #528 from dgrisonnet/fix-image
Fix image location in manifests
2022-08-31 14:28:38 +02:00
Damien Grisonnet
e5ad3d8903 deploy: fix image location
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2022-08-31 14:16:23 +02:00
Damien Grisonnet
465e4153f9 *: update OWNERS
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2022-08-23 17:29:11 +02:00
Damien Grisonnet
f23e67113a
Merge pull request #524 from dgrisonnet/recover-klog-flags
cmd/adapter: recover klog flags
2022-08-12 18:53:30 +02:00
Damien Grisonnet
303ac6fd45 cmd/adapter: recover klog flags
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2022-08-12 18:50:41 +02:00
Damien Grisonnet
7b4ba08b5d
Merge pull request #523 from dgrisonnet/cut-release-0.10
Cut release 0.10.0
2022-08-12 17:52:11 +02:00
Damien Grisonnet
56b57a0b0e VERSION: update to v0.10.0
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2022-08-12 17:45:28 +02:00
Kubernetes Prow Robot
dd85956fbf
Merge pull request #509 from ksauzz/feature/query-verb
Add --prometheus-verb to support POST requests to prometheus servers
2022-08-12 05:34:43 -07:00
Kazuhiro Suzuki
65abf73917 Update help about --prometheus-verb option 2022-08-12 12:45:22 +09:00
Damien Grisonnet
47ca16ef50
Merge pull request #521 from dgrisonnet/bump-k8s-deps-1.24
Update dependencies
2022-08-11 16:30:52 +02:00
Damien Grisonnet
cca107d97c *: support new MetricsGetter interface
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2022-08-11 15:04:45 +02:00
Damien Grisonnet
9321bf0162 zz_generated.openapi.go: regenerate
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2022-08-11 15:04:45 +02:00
Damien Grisonnet
d2ae4c1569 go.mod: bump golang and k8s deps to 0.24.3
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2022-08-11 15:04:43 +02:00
Kubernetes Prow Robot
c6e518beac
Merge pull request #491 from Ruwan-Ranganath/master
Update README.md with Helm-3 Command
2022-07-07 07:11:34 -07:00
Kubernetes Prow Robot
508b82b712
Merge pull request #494 from grzesuav/patch-2
Change apiregistration.k8s.io to v1
2022-07-07 07:07:34 -07:00
Kazuhiro Suzuki
a8742cff28 Add --prometheus-verb to support POST requests to prometheus servers 2022-06-28 18:23:52 +09:00
Kubernetes Prow Robot
9008b12a01
Merge pull request #498 from lokichoggio/master
fix: close file
2022-04-27 07:38:11 -07:00
lokichoggio
df3080de31
fix: close file 2022-04-11 17:54:51 +08:00
Grzegorz Głąb
00920756a4
Change apiregistration.k8s.io to v1 2022-03-23 20:48:46 +01:00
Ruwan Ranganath
e85e426ee0
Update README.md 2022-03-10 14:38:05 +05:30
Kubernetes Prow Robot
bf33cafefc
Merge pull request #482 from peizhouyu/Validate_OWNERS_files
Validate OWNERS files
2022-01-26 01:56:26 -08:00
peizhouyu
0aaf002fbc Validate OWNERS files 2022-01-25 11:21:29 +08:00
Kubernetes Prow Robot
2cc6362964
Merge pull request #476 from dims/drop-unused-alias-in-owners-aliases
Drop unused alias in OWNERS_ALIASES
2022-01-10 08:25:13 -08:00
Davanum Srinivas
8441ee2f74
Drop unused alias in OWNERS_ALIASES
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
2021-12-24 17:03:30 -05:00
Kubernetes Prow Robot
c9e69613d3
Merge pull request #472 from spiffxp/use-k8s-infra-for-gcb-image
images: use k8s-staging-test-infra/gcb-docker-gcloud
2021-12-01 00:09:17 -08:00
Aaron Crickenberger
b877e9d1bb images: use k8s-staging-test-infra/gcb-docker-gcloud 2021-11-30 13:04:55 -08:00
Kubernetes Prow Robot
bd568beea0
Merge pull request #461 from dgrisonnet/version-v0.9.1
*: merge changes from v0.9.1
2021-11-09 08:39:47 -08:00
Kubernetes Prow Robot
57a6fda6b1
Merge pull request #465 from mbutkereit/typo-sample-config
Add s to metricQuery
2021-11-09 06:23:47 -08:00
mbutkereit
6720d67d3a Add s to metricQuery 2021-10-29 08:13:50 +02:00
Damien Grisonnet
4f58885c9a *: merge changes from v0.9.1
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-10-15 18:30:08 +02:00
Kubernetes Prow Robot
3206c65b47
Merge pull request #455 from leoskyrocker/master
Fix external metrics provider not respecting metrics-max-age
2021-10-06 08:12:34 -07:00
Leo Lei
bb4722e38b Fix external metrics provider not respecting metrics-max-age 2021-09-24 18:42:24 +08:00
Kubernetes Prow Robot
dd107a714b
Merge pull request #454 from spiffxp/follow-k8sio-default-branch
docs: follow kubernetes/k8s.io branch rename:
2021-09-16 02:19:45 -07:00
Aaron Crickenberger
d76d3eaa49 docs: follow kubernetes/k8s.io branch rename: 2021-09-15 15:32:26 -07:00
Kubernetes Prow Robot
12309c9d1d
Merge pull request #438 from fpetkovski/bug-template
Add bug template
2021-09-13 03:40:07 -07:00
fpetkovski
3288fb9d41 Add collapsible blocks 2021-08-23 10:44:30 +02:00
Kubernetes Prow Robot
56df87890c
Merge pull request #447 from dgrisonnet/gcr.k8s.io
README: improve gcr.k8s.io instructions
2021-08-18 05:18:09 -07:00
Kubernetes Prow Robot
ae458c4464
Merge pull request #448 from aw1cks-forks/master
v0.9.0: Bump version file to reflect new release
2021-08-18 01:32:08 -07:00
Alex Wicks
1ef79d0a86 Bump VERSION file to reflect latest release 2021-08-17 17:36:07 +01:00
Damien Grisonnet
7040f70905 README: improve gcr.k8s.io instructions
Images hosted on gcr.k8s.io aren't browsable to the users, so linking to
the website results in a 403 HTTP error which is confusing.

Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-08-17 17:54:25 +02:00
fpetkovski
ac814833e1 Add bug template 2021-07-27 13:50:42 +02:00
74 changed files with 4173 additions and 2378 deletions

.github/ISSUE_TEMPLATE/bug_report.md (new file, 52 lines)

@@ -0,0 +1,52 @@
---
name: Bug report
about: Report a bug encountered while running prometheus-adapter
title: ''
labels: kind/bug
assignees: ''
---

<!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately, see https://github.com/kubernetes/kube-state-metrics/blob/master/SECURITY.md
-->

**What happened?**:

**What did you expect to happen?**:

**Please provide the prometheus-adapter config**:

<details open>
<summary>prometheus-adapter config</summary>

<!--- INSERT config HERE --->

</details>

**Please provide the HPA resource used for autoscaling**:

<details open>
<summary>HPA yaml</summary>

<!--- INSERT yaml HERE --->

</details>

**Please provide the HPA status**:

**Please provide the prometheus-adapter logs with -v=6 around the time the issue happened**:

<details open>
<summary>prometheus-adapter logs</summary>

<!--- INSERT logs HERE --->

</details>

**Anything else we need to know?**:

**Environment**:
- prometheus-adapter version:
- prometheus version:
- Kubernetes version (use `kubectl version`):
- Cloud provider or hardware configuration:
- Other info:

.gitignore (1 line changed)

@@ -2,3 +2,4 @@
 *~
 /vendor
 /adapter
+.e2e

.golangci.yml (new file, 39 lines)

@ -0,0 +1,39 @@
run:
deadline: 5m
linters:
disable-all: true
enable:
- bodyclose
- dogsled
- dupl
- errcheck
- exportloopref
- gocritic
- gocyclo
- gofmt
- goimports
- gosec
- goprintffuncname
- gosimple
- govet
- ineffassign
- misspell
- nakedret
- nolintlint
- revive
- staticcheck
- stylecheck
- typecheck
- unconvert
- unused
- whitespace
linters-settings:
goimports:
local-prefixes: sigs.k8s.io/prometheus-adapter
revive:
rules:
- name: exported
arguments:
- disableStutteringCheck

Dockerfile

@@ -1,3 +1,4 @@
+ARG ARCH
 ARG GO_VERSION
 FROM golang:${GO_VERSION} as build
@@ -14,7 +15,7 @@ COPY Makefile Makefile
 ARG ARCH
 RUN make prometheus-adapter
-FROM gcr.io/distroless/static:latest
+FROM gcr.io/distroless/static:latest-$ARCH
 COPY --from=build /go/src/sigs.k8s.io/prometheus-adapter/adapter /
 USER 65534

Makefile

@@ -2,12 +2,14 @@ REGISTRY?=gcr.io/k8s-staging-prometheus-adapter
 IMAGE=prometheus-adapter
 ARCH?=$(shell go env GOARCH)
 ALL_ARCH=amd64 arm arm64 ppc64le s390x
+GOPATH:=$(shell go env GOPATH)
 VERSION=$(shell cat VERSION)
 TAG_PREFIX=v
 TAG?=$(TAG_PREFIX)$(VERSION)
-GO_VERSION?=1.16.4
+GO_VERSION?=1.22.5
+GOLANGCI_VERSION?=1.56.2
 .PHONY: all
 all: prometheus-adapter
@@ -53,25 +55,38 @@ push-multi-arch:
 test:
 	CGO_ENABLED=0 go test ./cmd/... ./pkg/...
+.PHONY: test-e2e
+test-e2e:
+	./test/run-e2e-tests.sh
 # Static analysis
 # ---------------
 .PHONY: verify
-verify: verify-gofmt verify-deps verify-generated test
+verify: verify-lint verify-deps verify-generated
 .PHONY: update
-update: update-generated
+update: update-lint update-generated
-# Format
-# ------
+# Format and lint
+# ---------------
-.PHONY: verify-gofmt
-verify-gofmt:
-	./hack/gofmt-all.sh -v
+HAS_GOLANGCI_VERSION:=$(shell $(GOPATH)/bin/golangci-lint version --format=short)
+.PHONY: golangci
+golangci:
+ifneq ($(HAS_GOLANGCI_VERSION), $(GOLANGCI_VERSION))
+	curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(GOPATH)/bin v$(GOLANGCI_VERSION)
+endif
+.PHONY: verify-lint
+verify-lint: golangci
+	$(GOPATH)/bin/golangci-lint run --modules-download-mode=readonly || (echo 'Run "make update-lint"' && exit 1)
+.PHONY: update-lint
+update-lint: golangci
+	$(GOPATH)/bin/golangci-lint run --fix --modules-download-mode=readonly
+.PHONY: gofmt
+gofmt:
+	./hack/gofmt-all.sh
 # Dependencies
 # ------------
@@ -88,10 +103,16 @@ verify-deps:
 generated_files=pkg/api/generated/openapi/zz_generated.openapi.go
 .PHONY: verify-generated
-verify-generated:
+verify-generated: update-generated
 	@git diff --exit-code -- $(generated_files)
 .PHONY: update-generated
 update-generated:
 	go install -mod=readonly k8s.io/kube-openapi/cmd/openapi-gen
-	$(GOPATH)/bin/openapi-gen --logtostderr -i k8s.io/metrics/pkg/apis/custom_metrics,k8s.io/metrics/pkg/apis/custom_metrics/v1beta1,k8s.io/metrics/pkg/apis/custom_metrics/v1beta2,k8s.io/metrics/pkg/apis/external_metrics,k8s.io/metrics/pkg/apis/external_metrics/v1beta1,k8s.io/metrics/pkg/apis/metrics,k8s.io/metrics/pkg/apis/metrics/v1beta1,k8s.io/apimachinery/pkg/apis/meta/v1,k8s.io/apimachinery/pkg/api/resource,k8s.io/apimachinery/pkg/version,k8s.io/api/core/v1 -h ./hack/boilerplate.go.txt -p ./pkg/api/generated/openapi -O zz_generated.openapi -o ./ -r /dev/null
+	$(GOPATH)/bin/openapi-gen --logtostderr \
+		--go-header-file ./hack/boilerplate.go.txt \
+		--output-pkg ./pkg/api/generated/openapi \
+		--output-file zz_generated.openapi.go \
+		--output-dir ./pkg/api/generated/openapi \
+		-r /dev/null \
+		"k8s.io/metrics/pkg/apis/custom_metrics" "k8s.io/metrics/pkg/apis/custom_metrics/v1beta1" "k8s.io/metrics/pkg/apis/custom_metrics/v1beta2" "k8s.io/metrics/pkg/apis/external_metrics" "k8s.io/metrics/pkg/apis/external_metrics/v1beta1" "k8s.io/metrics/pkg/apis/metrics" "k8s.io/metrics/pkg/apis/metrics/v1beta1" "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/api/resource" "k8s.io/apimachinery/pkg/version" "k8s.io/api/core/v1"

OWNERS (15 lines changed)

@@ -1,16 +1,17 @@
 # See the OWNERS docs at https://go.k8s.io/owners
-owners:
-- dgrisonnet
-- s-urbaniak
 approvers:
-- prometheus-adapter-approvers
+- dgrisonnet
+- logicalhan
+- dashpole
 reviewers:
-- prometheus-adapter-approvers
-- prometheus-adapter-reviewers
+- dgrisonnet
+- olivierlemasle
+- logicalhan
+- dashpole
 emeritus_approvers:
 - brancz
 - directxman12
 - lilic
+- s-urbaniak

OWNERS_ALIASES (deleted)

@@ -1,7 +0,0 @@
-# See the OWNERS docs at https://go.k8s.io/owners#owners_aliases
-
-aliases:
-  prometheus-adapter-approvers:
-  - dgrisonnet
-  - s-urbaniak
-  prometheus-adapter-reviewers: []

README.md

@@ -1,9 +1,7 @@
 # Prometheus Adapter for Kubernetes Metrics APIs
-This repository contains an implementation of the Kubernetes
-[resource metrics](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md),
-[custom metrics](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md), and
-[external metrics](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/external-metrics-api.md) APIs.
+This repository contains an implementation of the Kubernetes Custom, Resource and External
+[Metric APIs](https://github.com/kubernetes/metrics).
 This adapter is therefore suitable for use with the autoscaling/v2 Horizontal Pod Autoscaler in Kubernetes 1.6+.
 It can also replace the [metrics server](https://github.com/kubernetes-incubator/metrics-server) on clusters that already run Prometheus and collect the appropriate metrics.
@@ -21,15 +19,22 @@ If you're a helm user, a helm chart is listed on prometheus-community repository
 To install it with the release name `my-release`, run this Helm command:
+For Helm2
 ```console
 $ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
 $ helm repo update
 $ helm install --name my-release prometheus-community/prometheus-adapter
 ```
+For Helm3 ( as name is mandatory )
+```console
+$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+$ helm repo update
+$ helm install my-release prometheus-community/prometheus-adapter
+```
 Official images
 ---
-All official images for releases after v0.8.4 are available in [gcr.io](http://k8s.gcr.io/prometheus-adapter/prometheus-adapter). The project also maintains a [staging registry](https://console.cloud.google.com/gcr/images/k8s-staging-prometheus-adapter/GLOBAL/) where images for each commit from the master branch are published. You can use this registry if you need to test a version from a specific commit, or if you need to deploy a patch while waiting for a new release.
+All official images for releases after v0.8.4 are available in `registry.k8s.io/prometheus-adapter/prometheus-adapter:$VERSION`. The project also maintains a [staging registry](https://console.cloud.google.com/gcr/images/k8s-staging-prometheus-adapter/GLOBAL/) where images for each commit from the master branch are published. You can use this registry if you need to test a version from a specific commit, or if you need to deploy a patch while waiting for a new release.
 Images for versions v0.8.4 and prior are only available in unofficial registries:
 * https://quay.io/repository/coreos/k8s-prometheus-adapter-amd64
@@ -44,7 +49,7 @@ will attempt to using [Kubernetes in-cluster
 config](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod)
 to connect to the cluster.
-It takes the following addition arguments specific to configuring how the
+It takes the following additional arguments specific to configuring how the
 adapter talks to Prometheus and the main Kubernetes cluster:
 - `--lister-kubeconfig=<path-to-kubeconfig>`: This configures

RELEASE.md

@@ -7,7 +7,7 @@ prometheus-adapter is released on an as-needed basis. The process is as follows:
 1. A PR that bumps version hardcoded in code is created and merged
 1. An OWNER creates a draft Github release
 1. An OWNER creates a release tag using `git tag -s $VERSION`, inserts the changelog and pushes the tag with `git push $VERSION`. Then waits for [prow.k8s.io](https://prow.k8s.io) to build and push new images to [gcr.io/k8s-staging-prometheus-adapter](https://gcr.io/k8s-staging-prometheus-adapter)
-1. A PR in [kubernetes/k8s.io](https://github.com/kubernetes/k8s.io/blob/master/k8s.gcr.io/images/k8s-staging-prometheus-adapter/images.yaml) is created to release images to `k8s.gcr.io`
+1. A PR in [kubernetes/k8s.io](https://github.com/kubernetes/k8s.io/blob/main/k8s.gcr.io/images/k8s-staging-prometheus-adapter/images.yaml) is created to release images to `k8s.gcr.io`
 1. An OWNER publishes the GitHub release
 1. An announcement email is sent to `kubernetes-sig-instrumentation@googlegroups.com` with the subject `[ANNOUNCE] prometheus-adapter $VERSION is released`
 1. The release issue is closed


@@ -1 +1 @@
-0.8.4
+0.12.0


@@ -3,7 +3,7 @@ timeout: 3600s
 options:
   substitution_option: ALLOW_LOOSE
 steps:
-- name: 'gcr.io/k8s-testimages/gcb-docker-gcloud:v20210622-762366a'
+- name: 'gcr.io/k8s-staging-test-infra/gcb-docker-gcloud:v20211118-2f2d816b90'
   entrypoint: make
   env:
   - TAG=$_PULL_BASE_REF


@@ -19,9 +19,7 @@ package main
 import (
     "crypto/tls"
     "crypto/x509"
-    "flag"
     "fmt"
-    "io/ioutil"
     "net/http"
     "net/url"
     "os"
@@ -32,8 +30,8 @@ import (
     metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
     openapinamer "k8s.io/apiserver/pkg/endpoints/openapi"
     genericapiserver "k8s.io/apiserver/pkg/server"
-    "k8s.io/client-go/informers"
-    "k8s.io/client-go/kubernetes"
+    "k8s.io/client-go/metadata"
+    "k8s.io/client-go/metadata/metadatainformer"
     "k8s.io/client-go/rest"
     "k8s.io/client-go/tools/clientcmd"
     "k8s.io/client-go/transport"
@@ -74,13 +72,16 @@ type PrometheusAdapter struct {
     PrometheusTokenFile string
     // PrometheusHeaders is a k=v list of headers to set on requests to PrometheusURL
     PrometheusHeaders []string
+    // PrometheusVerb is a verb to set on requests to PrometheusURL
+    PrometheusVerb string
     // AdapterConfigFile points to the file containing the metrics discovery configuration.
     AdapterConfigFile string
     // MetricsRelistInterval is the interval at which to relist the set of available metrics
     MetricsRelistInterval time.Duration
     // MetricsMaxAge is the period to query available metrics for
     MetricsMaxAge time.Duration
+    // DisableHTTP2 indicates that http2 should not be enabled.
+    DisableHTTP2 bool
     metricsConfig *adaptercfg.MetricsDiscoveryConfig
 }
@@ -90,6 +91,10 @@ func (cmd *PrometheusAdapter) makePromClient() (prom.Client, error) {
         return nil, fmt.Errorf("invalid Prometheus URL %q: %v", baseURL, err)
     }
+    if cmd.PrometheusVerb != http.MethodGet && cmd.PrometheusVerb != http.MethodPost {
+        return nil, fmt.Errorf("unsupported Prometheus HTTP verb %q; supported verbs: \"GET\" and \"POST\"", cmd.PrometheusVerb)
+    }
     var httpClient *http.Client
     if cmd.PrometheusCAFile != "" {
@@ -109,15 +114,19 @@ func (cmd *PrometheusAdapter) makePromClient() (prom.Client, error) {
     }
     if cmd.PrometheusTokenFile != "" {
-        data, err := ioutil.ReadFile(cmd.PrometheusTokenFile)
+        data, err := os.ReadFile(cmd.PrometheusTokenFile)
         if err != nil {
             return nil, fmt.Errorf("failed to read prometheus-token-file: %v", err)
         }
-        httpClient.Transport = transport.NewBearerAuthRoundTripper(string(data), httpClient.Transport)
+        wrappedTransport := http.DefaultTransport
+        if httpClient.Transport != nil {
+            wrappedTransport = httpClient.Transport
+        }
+        httpClient.Transport = transport.NewBearerAuthRoundTripper(string(data), wrappedTransport)
     }
     genericPromClient := prom.NewGenericAPIClient(httpClient, baseURL, parseHeaderArgs(cmd.PrometheusHeaders))
     instrumentedGenericPromClient := mprom.InstrumentGenericAPIClient(genericPromClient, baseURL.String())
-    return prom.NewClientForAPI(instrumentedGenericPromClient), nil
+    return prom.NewClientForAPI(instrumentedGenericPromClient, cmd.PrometheusVerb), nil
 }

 func (cmd *PrometheusAdapter) addFlags() {
@@ -137,13 +146,20 @@ func (cmd *PrometheusAdapter) addFlags() {
         "Optional file containing the bearer token to use when connecting with Prometheus")
     cmd.Flags().StringArrayVar(&cmd.PrometheusHeaders, "prometheus-header", cmd.PrometheusHeaders,
         "Optional header to set on requests to prometheus-url. Can be repeated")
+    cmd.Flags().StringVar(&cmd.PrometheusVerb, "prometheus-verb", cmd.PrometheusVerb,
+        "HTTP verb to set on requests to Prometheus. Possible values: \"GET\", \"POST\"")
     cmd.Flags().StringVar(&cmd.AdapterConfigFile, "config", cmd.AdapterConfigFile,
         "Configuration file containing details of how to transform between Prometheus metrics "+
             "and custom metrics API resources")
-    cmd.Flags().DurationVar(&cmd.MetricsRelistInterval, "metrics-relist-interval", cmd.MetricsRelistInterval, ""+
+    cmd.Flags().DurationVar(&cmd.MetricsRelistInterval, "metrics-relist-interval", cmd.MetricsRelistInterval,
         "interval at which to re-list the set of all available metrics from Prometheus")
-    cmd.Flags().DurationVar(&cmd.MetricsMaxAge, "metrics-max-age", cmd.MetricsMaxAge, ""+
+    cmd.Flags().DurationVar(&cmd.MetricsMaxAge, "metrics-max-age", cmd.MetricsMaxAge,
         "period for which to query the set of available metrics from Prometheus")
+    cmd.Flags().BoolVar(&cmd.DisableHTTP2, "disable-http2", cmd.DisableHTTP2,
+        "Disable HTTP/2 support")
+    // Add logging flags
+    logs.AddFlags(cmd.Flags())
 }

 func (cmd *PrometheusAdapter) loadConfig() error {
@@ -211,7 +227,7 @@ func (cmd *PrometheusAdapter) makeExternalProvider(promClient prom.Client, stopC
     }
     // construct the provider and start it
-    emProvider, runner := extprov.NewExternalPrometheusProvider(promClient, namers, cmd.MetricsRelistInterval)
+    emProvider, runner := extprov.NewExternalPrometheusProvider(promClient, namers, cmd.MetricsRelistInterval, cmd.MetricsMaxAge)
     runner.RunUntil(stopCh)
     return emProvider, nil
@@ -238,27 +254,39 @@ func (cmd *PrometheusAdapter) addResourceMetricsAPI(promClient prom.Client, stop
         return err
     }
-    client, err := kubernetes.NewForConfig(rest)
+    client, err := metadata.NewForConfig(rest)
     if err != nil {
         return err
     }
-    podInformerFactory := informers.NewFilteredSharedInformerFactory(client, 0, corev1.NamespaceAll, func(options *metav1.ListOptions) {
+    podInformerFactory := metadatainformer.NewFilteredSharedInformerFactory(client, 0, corev1.NamespaceAll, func(options *metav1.ListOptions) {
         options.FieldSelector = "status.phase=Running"
     })
-    podInformer := podInformerFactory.Core().V1().Pods()
+    podInformer := podInformerFactory.ForResource(corev1.SchemeGroupVersion.WithResource("pods"))
     informer, err := cmd.Informers()
     if err != nil {
         return err
     }
+    config, err := cmd.Config()
+    if err != nil {
+        return err
+    }
+    config.GenericConfig.EnableMetrics = false
     server, err := cmd.Server()
     if err != nil {
         return err
     }
-    if err := api.Install(provider, podInformer.Lister(), informer.Core().V1().Nodes().Lister(), server.GenericAPIServer); err != nil {
+    metricsHandler, err := mprom.MetricsHandler()
+    if err != nil {
+        return err
+    }
+    server.GenericAPIServer.Handler.NonGoRestfulMux.HandleFunc("/metrics", metricsHandler)
+    if err := api.Install(provider, podInformer.Lister(), informer.Core().V1().Nodes().Lister(), server.GenericAPIServer, nil); err != nil {
         return err
     }
@@ -274,18 +302,26 @@ func main() {
     // set up flags
     cmd := &PrometheusAdapter{
         PrometheusURL:         "https://localhost",
+        PrometheusVerb:        http.MethodGet,
         MetricsRelistInterval: 10 * time.Minute,
     }
     cmd.Name = "prometheus-metrics-adapter"
+    cmd.addFlags()
+    if err := cmd.Flags().Parse(os.Args); err != nil {
+        klog.Fatalf("unable to parse flags: %v", err)
+    }
+    if cmd.OpenAPIConfig == nil {
         cmd.OpenAPIConfig = genericapiserver.DefaultOpenAPIConfig(generatedopenapi.GetOpenAPIDefinitions, openapinamer.NewDefinitionNamer(api.Scheme, customexternalmetrics.Scheme))
         cmd.OpenAPIConfig.Info.Title = "prometheus-metrics-adapter"
         cmd.OpenAPIConfig.Info.Version = "1.0.0"
+    }
-    cmd.addFlags()
-    cmd.Flags().AddGoFlagSet(flag.CommandLine) // make sure we get the klog flags
-    if err := cmd.Flags().Parse(os.Args); err != nil {
-        klog.Fatalf("unable to parse flags: %v", err)
+    if cmd.OpenAPIV3Config == nil {
+        cmd.OpenAPIV3Config = genericapiserver.DefaultOpenAPIV3Config(generatedopenapi.GetOpenAPIDefinitions, openapinamer.NewDefinitionNamer(api.Scheme, customexternalmetrics.Scheme))
+        cmd.OpenAPIV3Config.Info.Title = "prometheus-metrics-adapter"
+        cmd.OpenAPIV3Config.Info.Version = "1.0.0"
     }
     // if --metrics-max-age is not set, make it equal to --metrics-relist-interval
@@ -334,6 +370,14 @@
         klog.Fatalf("unable to install resource metrics API: %v", err)
     }
+    // disable HTTP/2 to mitigate CVE-2023-44487 until the Go standard library
+    // and golang.org/x/net are fully fixed.
+    server, err := cmd.Server()
+    if err != nil {
+        klog.Fatalf("unable to fetch server: %v", err)
+    }
+    server.GenericAPIServer.SecureServingInfo.DisableHTTP2 = cmd.DisableHTTP2
     // run the server
     if err := cmd.Run(stopCh); err != nil {
         klog.Fatalf("unable to run custom metrics adapter: %v", err)
@@ -376,7 +420,7 @@ func makeKubeconfigHTTPClient(inClusterAuth bool, kubeConfigPath string) (*http.
 }

 func makePrometheusCAClient(caFilePath string, tlsCertFilePath string, tlsKeyFilePath string) (*http.Client, error) {
-    data, err := ioutil.ReadFile(caFilePath)
+    data, err := os.ReadFile(caFilePath)
     if err != nil {
         return nil, fmt.Errorf("failed to read prometheus-ca-file: %v", err)
     }
@@ -396,6 +440,7 @@ func makePrometheusCAClient(caFilePath string, tlsCertFilePath string, tlsKeyFil
         TLSClientConfig: &tls.Config{
             RootCAs:      pool,
             Certificates: []tls.Certificate{tlsClientCerts},
+            MinVersion:   tls.VersionTLS12,
         },
     },
 }, nil
@@ -405,6 +450,7 @@ func makePrometheusCAClient(caFilePath string, tlsCertFilePath string, tlsKeyFil
     Transport: &http.Transport{
         TLSClientConfig: &tls.Config{
             RootCAs:    pool,
+            MinVersion: tls.VersionTLS12,
         },
     },
 }, nil


@@ -27,7 +27,6 @@ import (
 const certsDir = "testdata"

 func TestMakeKubeconfigHTTPClient(t *testing.T) {
     tests := []struct {
         kubeconfigPath string
         inClusterAuth  bool
@@ -71,16 +70,13 @@ func TestMakeKubeconfigHTTPClient(t *testing.T) {
                     t.Error("HTTP client Transport is nil, expected http.RoundTripper")
                 }
             }
-        } else {
-            if err == nil {
+        } else if err == nil {
             t.Errorf("Error is nil, expected %v", err)
         }
     }
-    }
 }

 func TestMakePrometheusCAClient(t *testing.T) {
     tests := []struct {
         caFilePath      string
         tlsCertFilePath string
@@ -140,16 +136,13 @@ func TestMakePrometheusCAClient(t *testing.T) {
                     t.Errorf("TLS certificates is %+v, expected nil", prometheusCAClient.Transport.(*http.Transport).TLSClientConfig.Certificates)
                 }
             }
-        } else {
-            if err == nil {
+        } else if err == nil {
             t.Errorf("Error is nil, expected %v", err)
         }
     }
-    }
 }

 func TestParseHeaderArgs(t *testing.T) {
     tests := []struct {
         args    []string
         headers http.Header
@@ -185,3 +178,34 @@ func TestParseHeaderArgs(t *testing.T) {
         }
     }
 }
+
+func TestFlags(t *testing.T) {
+    cmd := &PrometheusAdapter{
+        PrometheusURL: "https://localhost",
+    }
+    cmd.addFlags()
+
+    flags := cmd.FlagSet
+    if flags == nil {
+        t.Fatalf("FlagSet should not be nil")
+    }
+
+    expectedFlags := []struct {
+        flag         string
+        defaultValue string
+    }{
+        {flag: "v", defaultValue: "0"},                              // logging flag (klog)
+        {flag: "prometheus-url", defaultValue: "https://localhost"}, // default is set in cmd
+    }
+
+    for _, e := range expectedFlags {
+        flag := flags.Lookup(e.flag)
+        if flag == nil {
+            t.Errorf("Flag %q expected to be present, was absent", e.flag)
+            continue
+        }
+        if flag.DefValue != e.defaultValue {
+            t.Errorf("Expected default value %q for flag %q, got %q", e.defaultValue, e.flag, flag.DefValue)
+        }
+    }
+}


@@ -7,7 +7,7 @@ import (
     pmodel "github.com/prometheus/common/model"

     prom "sigs.k8s.io/prometheus-adapter/pkg/client"
-    . "sigs.k8s.io/prometheus-adapter/pkg/config"
+    "sigs.k8s.io/prometheus-adapter/pkg/config"
 )

 // DefaultConfig returns a configuration equivalent to the former
@@ -15,55 +15,55 @@ import (
 // will be of the form `<prefix><<.Resource>>`, cadvisor series will be
 // of the form `container_`, and have the label `pod`. Any series ending
 // in total will be treated as a rate metric.
-func DefaultConfig(rateInterval time.Duration, labelPrefix string) *MetricsDiscoveryConfig {
-    return &MetricsDiscoveryConfig{
-        Rules: []DiscoveryRule{
+func DefaultConfig(rateInterval time.Duration, labelPrefix string) *config.MetricsDiscoveryConfig {
+    return &config.MetricsDiscoveryConfig{
+        Rules: []config.DiscoveryRule{
             // container seconds rate metrics
             {
                 SeriesQuery: string(prom.MatchSeries("", prom.NameMatches("^container_.*"), prom.LabelNeq("container", "POD"), prom.LabelNeq("namespace", ""), prom.LabelNeq("pod", ""))),
-                Resources: ResourceMapping{
-                    Overrides: map[string]GroupResource{
+                Resources: config.ResourceMapping{
+                    Overrides: map[string]config.GroupResource{
                         "namespace": {Resource: "namespace"},
                         "pod":       {Resource: "pod"},
                     },
                 },
-                Name:         NameMapping{Matches: "^container_(.*)_seconds_total$"},
+                Name:         config.NameMapping{Matches: "^container_(.*)_seconds_total$"},
                 MetricsQuery: fmt.Sprintf(`sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[%s])) by (<<.GroupBy>>)`, pmodel.Duration(rateInterval).String()),
             },
             // container rate metrics
             {
                 SeriesQuery: string(prom.MatchSeries("", prom.NameMatches("^container_.*"), prom.LabelNeq("container", "POD"), prom.LabelNeq("namespace", ""), prom.LabelNeq("pod", ""))),
-                SeriesFilters: []RegexFilter{{IsNot: "^container_.*_seconds_total$"}},
-                Resources: ResourceMapping{
-                    Overrides: map[string]GroupResource{
+                SeriesFilters: []config.RegexFilter{{IsNot: "^container_.*_seconds_total$"}},
+                Resources: config.ResourceMapping{
+                    Overrides: map[string]config.GroupResource{
                         "namespace": {Resource: "namespace"},
                         "pod":       {Resource: "pod"},
                     },
                 },
-                Name:         NameMapping{Matches: "^container_(.*)_total$"},
+                Name:         config.NameMapping{Matches: "^container_(.*)_total$"},
                 MetricsQuery: fmt.Sprintf(`sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[%s])) by (<<.GroupBy>>)`, pmodel.Duration(rateInterval).String()),
             },
             // container non-cumulative metrics
             {
                 SeriesQuery: string(prom.MatchSeries("", prom.NameMatches("^container_.*"), prom.LabelNeq("container", "POD"), prom.LabelNeq("namespace", ""), prom.LabelNeq("pod", ""))),
-                SeriesFilters: []RegexFilter{{IsNot: "^container_.*_total$"}},
-                Resources: ResourceMapping{
-                    Overrides: map[string]GroupResource{
+                SeriesFilters: []config.RegexFilter{{IsNot: "^container_.*_total$"}},
+                Resources: config.ResourceMapping{
+                    Overrides: map[string]config.GroupResource{
                         "namespace": {Resource: "namespace"},
                         "pod":       {Resource: "pod"},
                     },
                 },
-                Name:         NameMapping{Matches: "^container_(.*)$"},
+                Name:         config.NameMapping{Matches: "^container_(.*)$"},
                 MetricsQuery: `sum(<<.Series>>{<<.LabelMatchers>>,container!="POD"}) by (<<.GroupBy>>)`,
             },
             // normal non-cumulative metrics
             {
                 SeriesQuery: string(prom.MatchSeries("", prom.LabelNeq(fmt.Sprintf("%snamespace", labelPrefix), ""), prom.NameNotMatches("^container_.*"))),
-                SeriesFilters: []RegexFilter{{IsNot: ".*_total$"}},
-                Resources: ResourceMapping{
+                SeriesFilters: []config.RegexFilter{{IsNot: ".*_total$"}},
+                Resources: config.ResourceMapping{
                     Template: fmt.Sprintf("%s<<.Resource>>", labelPrefix),
                 },
                 MetricsQuery: "sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)",
@@ -72,9 +72,9 @@ func DefaultConfig(rateInterval time.Duration, labelPrefix string) *MetricsDisco
             // normal rate metrics
             {
                 SeriesQuery: string(prom.MatchSeries("", prom.LabelNeq(fmt.Sprintf("%snamespace", labelPrefix), ""), prom.NameNotMatches("^container_.*"))),
-                SeriesFilters: []RegexFilter{{IsNot: ".*_seconds_total"}},
-                Name:          NameMapping{Matches: "^(.*)_total$"},
-                Resources: ResourceMapping{
+                SeriesFilters: []config.RegexFilter{{IsNot: ".*_seconds_total"}},
+                Name:          config.NameMapping{Matches: "^(.*)_total$"},
+                Resources: config.ResourceMapping{
                     Template: fmt.Sprintf("%s<<.Resource>>", labelPrefix),
                 },
                 MetricsQuery: fmt.Sprintf("sum(rate(<<.Series>>{<<.LabelMatchers>>}[%s])) by (<<.GroupBy>>)", pmodel.Duration(rateInterval).String()),
@@ -83,20 +83,20 @@
             // seconds rate metrics
             {
                 SeriesQuery: string(prom.MatchSeries("", prom.LabelNeq(fmt.Sprintf("%snamespace", labelPrefix), ""), prom.NameNotMatches("^container_.*"))),
-                Name: NameMapping{Matches: "^(.*)_seconds_total$"},
-                Resources: ResourceMapping{
+                Name: config.NameMapping{Matches: "^(.*)_seconds_total$"},
+                Resources: config.ResourceMapping{
                     Template: fmt.Sprintf("%s<<.Resource>>", labelPrefix),
                 },
                 MetricsQuery: fmt.Sprintf("sum(rate(<<.Series>>{<<.LabelMatchers>>}[%s])) by (<<.GroupBy>>)", pmodel.Duration(rateInterval).String()),
             },
         },
-        ResourceRules: &ResourceRules{
-            CPU: ResourceRule{
+        ResourceRules: &config.ResourceRules{
+            CPU: config.ResourceRule{
                 ContainerQuery: fmt.Sprintf("sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>}[%s])) by (<<.GroupBy>>)", pmodel.Duration(rateInterval).String()),
                 NodeQuery:      fmt.Sprintf("sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>, id='/'}[%s])) by (<<.GroupBy>>)", pmodel.Duration(rateInterval).String()),
-                Resources: ResourceMapping{
-                    Overrides: map[string]GroupResource{
+                Resources: config.ResourceMapping{
+                    Overrides: map[string]config.GroupResource{
                         "namespace": {Resource: "namespace"},
                         "pod":       {Resource: "pod"},
                         "instance":  {Resource: "node"},
@@ -104,11 +104,11 @@
                 },
                 ContainerLabel: fmt.Sprintf("%scontainer", labelPrefix),
             },
-            Memory: ResourceRule{
+            Memory: config.ResourceRule{
                 ContainerQuery: "sum(container_memory_working_set_bytes{<<.LabelMatchers>>}) by (<<.GroupBy>>)",
                 NodeQuery:      "sum(container_memory_working_set_bytes{<<.LabelMatchers>>,id='/'}) by (<<.GroupBy>>)",
-                Resources: ResourceMapping{
-                    Overrides: map[string]GroupResource{
+                Resources: config.ResourceMapping{
+                    Overrides: map[string]config.GroupResource{
                         "namespace": {Resource: "namespace"},
                         "pod":       {Resource: "pod"},
                         "instance":  {Resource: "node"},


@@ -1,20 +1,11 @@
 Example Deployment
 ==================
-1. Make sure you've built the included Dockerfile with `TAG=latest make container`. The image should be tagged as `gcr.io/k8s-staging-prometheus-adapter:latest`.
+1. Make sure you've built the included Dockerfile with `TAG=latest make container`. The image should be tagged as `registry.k8s.io/prometheus-adapter/staging-prometheus-adapter:latest`.
-2. Create a secret called `cm-adapter-serving-certs` with two values:
-   `serving.crt` and `serving.key`. These are the serving certificates used
-   by the adapter for serving HTTPS traffic. For more information on how to
-   generate these certificates, see the [auth concepts
-   documentation](https://github.com/kubernetes-incubator/apiserver-builder/blob/master/docs/concepts/auth.md)
-   in the apiserver-builder repository.
-   The kube-prometheus project published two scripts [gencerts.sh](https://github.com/prometheus-operator/kube-prometheus/blob/62fff622e9900fade8aecbd02bc9c557b736ef85/experimental/custom-metrics-api/gencerts.sh)
-   and [deploy.sh](https://github.com/prometheus-operator/kube-prometheus/blob/62fff622e9900fade8aecbd02bc9c557b736ef85/experimental/custom-metrics-api/deploy.sh) to create the `cm-adapter-serving-certs` secret.
-3. `kubectl create namespace custom-metrics` to ensure that the namespace that we're installing
+2. `kubectl create namespace monitoring` to ensure that the namespace that we're installing
    the custom metrics adapter in exists.
-4. `kubectl create -f manifests/`, modifying the Deployment as necessary to
+3. `kubectl create -f manifests/`, modifying the Deployment as necessary to
    point to your Prometheus server, and the ConfigMap to contain your desired
    metrics discovery configuration.


@@ -0,0 +1,17 @@
+apiVersion: apiregistration.k8s.io/v1
+kind: APIService
+metadata:
+  labels:
+    app.kubernetes.io/component: metrics-adapter
+    app.kubernetes.io/name: prometheus-adapter
+    app.kubernetes.io/version: 0.12.0
+  name: v1beta1.metrics.k8s.io
+spec:
+  group: metrics.k8s.io
+  groupPriorityMinimum: 100
+  insecureSkipTLSVerify: true
+  service:
+    name: prometheus-adapter
+    namespace: monitoring
+  version: v1beta1
+  versionPriority: 100


@@ -0,0 +1,22 @@
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  labels:
+    app.kubernetes.io/component: metrics-adapter
+    app.kubernetes.io/name: prometheus-adapter
+    app.kubernetes.io/version: 0.12.0
+    rbac.authorization.k8s.io/aggregate-to-admin: "true"
+    rbac.authorization.k8s.io/aggregate-to-edit: "true"
+    rbac.authorization.k8s.io/aggregate-to-view: "true"
+  name: system:aggregated-metrics-reader
+  namespace: monitoring
+rules:
+- apiGroups:
+  - metrics.k8s.io
+  resources:
+  - pods
+  - nodes
+  verbs:
+  - get
+  - list
+  - watch


@@ -0,0 +1,17 @@
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  labels:
+    app.kubernetes.io/component: metrics-adapter
+    app.kubernetes.io/name: prometheus-adapter
+    app.kubernetes.io/version: 0.12.0
+  name: resource-metrics:system:auth-delegator
+  namespace: monitoring
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: system:auth-delegator
+subjects:
+- kind: ServiceAccount
+  name: prometheus-adapter
+  namespace: monitoring


@@ -2,6 +2,9 @@ apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: hpa-controller-custom-metrics
+  labels:
+    app.kubernetes.io/component: metrics-adapter
+    app.kubernetes.io/name: prometheus-adapter
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole


@@ -0,0 +1,17 @@
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  labels:
+    app.kubernetes.io/component: metrics-adapter
+    app.kubernetes.io/name: prometheus-adapter
+    app.kubernetes.io/version: 0.12.0
+  name: prometheus-adapter
+  namespace: monitoring
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: prometheus-adapter
+subjects:
+- kind: ServiceAccount
+  name: prometheus-adapter
+  namespace: monitoring


@@ -0,0 +1,15 @@
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  labels:
+    app.kubernetes.io/component: metrics-adapter
+    app.kubernetes.io/name: prometheus-adapter
+    app.kubernetes.io/version: 0.12.0
+  name: resource-metrics-server-resources
+rules:
+- apiGroups:
+  - metrics.k8s.io
+  resources:
+  - '*'
+  verbs:
+  - '*'


@@ -0,0 +1,20 @@
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  labels:
+    app.kubernetes.io/component: metrics-adapter
+    app.kubernetes.io/name: prometheus-adapter
+    app.kubernetes.io/version: 0.12.0
+  name: prometheus-adapter
+rules:
+- apiGroups:
+  - ""
+  resources:
+  - nodes
+  - namespaces
+  - pods
+  - services
+  verbs:
+  - get
+  - list
+  - watch


@@ -0,0 +1,53 @@
+apiVersion: v1
+data:
+  config.yaml: |-
+    "resourceRules":
+      "cpu":
+        "containerLabel": "container"
+        "containerQuery": |
+          sum by (<<.GroupBy>>) (
+            irate (
+              container_cpu_usage_seconds_total{<<.LabelMatchers>>,container!="",pod!=""}[4m]
+            )
+          )
+        "nodeQuery": |
+          sum by (<<.GroupBy>>) (
+            irate(
+              node_cpu_usage_seconds_total{<<.LabelMatchers>>}[4m]
+            )
+          )
+        "resources":
+          "overrides":
+            "namespace":
+              "resource": "namespace"
+            "node":
+              "resource": "node"
+            "pod":
+              "resource": "pod"
+      "memory":
+        "containerLabel": "container"
+        "containerQuery": |
+          sum by (<<.GroupBy>>) (
+            container_memory_working_set_bytes{<<.LabelMatchers>>,container!="",pod!=""}
+          )
+        "nodeQuery": |
+          sum by (<<.GroupBy>>) (
+            node_memory_working_set_bytes{<<.LabelMatchers>>}
+          )
+        "resources":
+          "overrides":
+            "node":
+              "resource": "node"
+            "namespace":
+              "resource": "namespace"
+            "pod":
+              "resource": "pod"
+      "window": "5m"
+kind: ConfigMap
+metadata:
+  labels:
+    app.kubernetes.io/component: metrics-adapter
+    app.kubernetes.io/name: prometheus-adapter
+    app.kubernetes.io/version: 0.12.0
+  name: adapter-config
+  namespace: monitoring


@@ -1,51 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  labels:
-    app: custom-metrics-apiserver
-  name: custom-metrics-apiserver
-  namespace: custom-metrics
-spec:
-  replicas: 1
-  selector:
-    matchLabels:
-      app: custom-metrics-apiserver
-  template:
-    metadata:
-      labels:
-        app: custom-metrics-apiserver
-      name: custom-metrics-apiserver
-    spec:
-      serviceAccountName: custom-metrics-apiserver
-      containers:
-      - name: custom-metrics-apiserver
-        image: gcr.io/k8s-staging-prometheus-adapter-amd64
-        args:
-        - --secure-port=6443
-        - --tls-cert-file=/var/run/serving-cert/serving.crt
-        - --tls-private-key-file=/var/run/serving-cert/serving.key
-        - --logtostderr=true
-        - --prometheus-url=http://prometheus.prom.svc:9090/
-        - --metrics-relist-interval=1m
-        - --v=10
-        - --config=/etc/adapter/config.yaml
-        ports:
-        - containerPort: 6443
-        volumeMounts:
-        - mountPath: /var/run/serving-cert
-          name: volume-serving-cert
-          readOnly: true
-        - mountPath: /etc/adapter/
-          name: config
-          readOnly: true
-        - mountPath: /tmp
-          name: tmp-vol
-      volumes:
-      - name: volume-serving-cert
-        secret:
-          secretName: cm-adapter-serving-certs
-      - name: config
-        configMap:
-          name: adapter-config
-      - name: tmp-vol
-        emptyDir: {}


@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: custom-metrics-resource-reader
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: custom-metrics-resource-reader
subjects:
- kind: ServiceAccount
name: custom-metrics-apiserver
namespace: custom-metrics


@@ -1,5 +0,0 @@
kind: ServiceAccount
apiVersion: v1
metadata:
name: custom-metrics-apiserver
namespace: custom-metrics


@@ -1,11 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: custom-metrics-apiserver
namespace: custom-metrics
spec:
ports:
- port: 443
targetPort: 6443
selector:
app: custom-metrics-apiserver


@@ -1,42 +0,0 @@
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
name: v1beta1.custom.metrics.k8s.io
spec:
service:
name: custom-metrics-apiserver
namespace: custom-metrics
group: custom.metrics.k8s.io
version: v1beta1
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 100
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
name: v1beta2.custom.metrics.k8s.io
spec:
service:
name: custom-metrics-apiserver
namespace: custom-metrics
group: custom.metrics.k8s.io
version: v1beta2
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 200
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
name: v1beta1.external.metrics.k8s.io
spec:
service:
name: custom-metrics-apiserver
namespace: custom-metrics
group: external.metrics.k8s.io
version: v1beta1
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 100
---


@@ -1,10 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: custom-metrics-server-resources
rules:
- apiGroups:
- custom.metrics.k8s.io
- external.metrics.k8s.io
resources: ["*"]
verbs: ["*"]


@@ -1,117 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: adapter-config
namespace: custom-metrics
data:
config.yaml: |
rules:
- seriesQuery: '{__name__=~"^container_.*",container!="POD",namespace!="",pod!=""}'
seriesFilters: []
resources:
overrides:
namespace:
resource: namespace
pod:
resource: pod
name:
matches: ^container_(.*)_seconds_total$
as: ""
metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[1m])) by (<<.GroupBy>>)
- seriesQuery: '{__name__=~"^container_.*",container!="POD",namespace!="",pod!=""}'
seriesFilters:
- isNot: ^container_.*_seconds_total$
resources:
overrides:
namespace:
resource: namespace
pod:
resource: pod
name:
matches: ^container_(.*)_total$
as: ""
metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[1m])) by (<<.GroupBy>>)
- seriesQuery: '{__name__=~"^container_.*",container!="POD",namespace!="",pod!=""}'
seriesFilters:
- isNot: ^container_.*_total$
resources:
overrides:
namespace:
resource: namespace
pod:
resource: pod
name:
matches: ^container_(.*)$
as: ""
metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>,container!="POD"}) by (<<.GroupBy>>)
- seriesQuery: '{namespace!="",__name__!~"^container_.*"}'
seriesFilters:
- isNot: .*_total$
resources:
template: <<.Resource>>
name:
matches: ""
as: ""
metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)
- seriesQuery: '{namespace!="",__name__!~"^container_.*"}'
seriesFilters:
- isNot: .*_seconds_total
resources:
template: <<.Resource>>
name:
matches: ^(.*)_total$
as: ""
metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>)
- seriesQuery: '{namespace!="",__name__!~"^container_.*"}'
seriesFilters: []
resources:
template: <<.Resource>>
name:
matches: ^(.*)_seconds_total$
as: ""
metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>)
resourceRules:
cpu:
containerQuery: sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>)
nodeQuery: sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>, id='/'}[1m])) by (<<.GroupBy>>)
resources:
overrides:
instance:
resource: node
namespace:
resource: namespace
pod:
resource: pod
containerLabel: container
memory:
containerQuery: sum(container_memory_working_set_bytes{<<.LabelMatchers>>}) by (<<.GroupBy>>)
nodeQuery: sum(container_memory_working_set_bytes{<<.LabelMatchers>>,id='/'}) by (<<.GroupBy>>)
resources:
overrides:
instance:
resource: node
namespace:
resource: namespace
pod:
resource: pod
containerLabel: container
window: 1m
externalRules:
- seriesQuery: '{__name__=~"^.*_queue_(length|size)$",namespace!=""}'
resources:
overrides:
namespace:
resource: namespace
name:
matches: ^.*_queue_(length|size)$
as: "$0"
metricsQuery: max(<<.Series>>{<<.LabelMatchers>>})
- seriesQuery: '{__name__=~"^.*_queue$",namespace!=""}'
resources:
overrides:
namespace:
resource: namespace
name:
matches: ^.*_queue$
as: "$0"
metricsQuery: max(<<.Series>>{<<.LabelMatchers>>})


@@ -1,15 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: custom-metrics-resource-reader
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
verbs:
- get
- list
- watch


@@ -0,0 +1,89 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: prometheus-adapter
namespace: monitoring
spec:
replicas: 2
selector:
matchLabels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
template:
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
spec:
automountServiceAccountToken: true
containers:
- args:
- --cert-dir=/var/run/serving-cert
- --config=/etc/adapter/config.yaml
- --metrics-relist-interval=1m
- --prometheus-url=https://prometheus.monitoring.svc:9090/
- --secure-port=6443
- --tls-cipher-suites=TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
image: registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.12.0
livenessProbe:
failureThreshold: 5
httpGet:
path: /livez
port: https
scheme: HTTPS
initialDelaySeconds: 30
periodSeconds: 5
name: prometheus-adapter
ports:
- containerPort: 6443
name: https
readinessProbe:
failureThreshold: 5
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 30
periodSeconds: 5
resources:
requests:
cpu: 102m
memory: 180Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /tmp
name: tmpfs
readOnly: false
- mountPath: /var/run/serving-cert
name: volume-serving-cert
readOnly: false
- mountPath: /etc/adapter
name: config
readOnly: false
nodeSelector:
kubernetes.io/os: linux
securityContext: {}
serviceAccountName: prometheus-adapter
volumes:
- emptyDir: {}
name: tmpfs
- emptyDir: {}
name: volume-serving-cert
- configMap:
name: adapter-config
name: config


@@ -0,0 +1,21 @@
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: prometheus-adapter
namespace: monitoring
spec:
egress:
- {}
ingress:
- {}
podSelector:
matchLabels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
policyTypes:
- Egress
- Ingress


@@ -0,0 +1,15 @@
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: prometheus-adapter
namespace: monitoring
spec:
minAvailable: 1
selector:
matchLabels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter


@@ -1,7 +1,11 @@
 apiVersion: rbac.authorization.k8s.io/v1
 kind: RoleBinding
 metadata:
-  name: custom-metrics-auth-reader
+  labels:
+    app.kubernetes.io/component: metrics-adapter
+    app.kubernetes.io/name: prometheus-adapter
+    app.kubernetes.io/version: 0.12.0
+  name: resource-metrics-auth-reader
   namespace: kube-system
 roleRef:
   apiGroup: rbac.authorization.k8s.io
@@ -9,5 +13,5 @@ roleRef:
   name: extension-apiserver-authentication-reader
 subjects:
 - kind: ServiceAccount
-  name: custom-metrics-apiserver
-  namespace: custom-metrics
+  name: prometheus-adapter
+  namespace: monitoring


@@ -0,0 +1,10 @@
apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: prometheus-adapter
namespace: monitoring


@@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: prometheus-adapter
namespace: monitoring
spec:
ports:
- name: https
port: 443
targetPort: 6443
selector:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter


@@ -36,8 +36,8 @@ rules:
     # skip specifying generic resource<->label mappings, and just
     # attach only pod and namespace resources by mapping label names to group-resources
     overrides:
-      namespace: {resource: "namespace"},
-      pod: {resource: "pod"},
+      namespace: {resource: "namespace"}
+      pod: {resource: "pod"}
     # specify that the `container_` and `_seconds_total` suffixes should be removed.
     # this also introduces an implicit filter on metric family names
     name:


@@ -15,8 +15,8 @@ rules:
     # skip specifying generic resource<->label mappings, and just
     # attach only pod and namespace resources by mapping label names to group-resources
     overrides:
-      namespace: {resource: "namespace"},
-      pod: {resource: "pod"},
+      namespace: {resource: "namespace"}
+      pod: {resource: "pod"}
     # specify that the `container_` and `_seconds_total` suffixes should be removed.
     # this also introduces an implicit filter on metric family names
     name:
@@ -33,8 +33,8 @@ rules:
 - seriesQuery: '{__name__=~"^container_.*_total",container!="POD",namespace!="",pod!=""}'
   resources:
     overrides:
-      namespace: {resource: "namespace"},
-      pod: {resource: "pod"},
+      namespace: {resource: "namespace"}
+      pod: {resource: "pod"}
   seriesFilters:
     # since this is a superset of the query above, we introduce an additional filter here
     - isNot: "^container_.*_seconds_total$"
@@ -63,7 +63,7 @@ rules:
     overrides:
       # this should still resolve in our cluster
      brand: {group: "cheese.io", resource: "brand"}
-  metricQuery: 'count(cheddar{sharp="true"})'
+  metricsQuery: 'count(cheddar{sharp="true"})'
 # external rules are not tied to a Kubernetes resource and can reference any metric
 # https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects


@@ -46,7 +46,8 @@ instance, if you're on an x86_64 machine, use
 `gcr.io/k8s-staging-prometheus-adapter/prometheus-adapter-amd64` image.
 There is also an official multi arch image available at
-`k8s.gcr.io/prometheus-adapter/prometheus-adapter:${VERSION}`.
+`registry.k8s.io/prometheus-adapter/prometheus-adapter:${VERSION}`.
 
 If you're feeling adventurous, you can build the latest version of
 prometheus-adapter by running `make container` or get the latest image from the
@@ -141,11 +142,11 @@ a HorizontalPodAutoscaler like this to accomplish the autoscaling:
 <details>
-<summary>sample-app-hpa.yaml</summary>
+<summary>sample-app.hpa.yaml</summary>
 
 ```yaml
 kind: HorizontalPodAutoscaler
-apiVersion: autoscaling/v2beta1
+apiVersion: autoscaling/v2
 metadata:
   name: sample-app
 spec:
@@ -164,10 +165,13 @@ spec:
   - type: Pods
     pods:
       # use the metric that you used above: pods/http_requests
-      metricName: http_requests
+      metric:
+        name: http_requests
       # target 500 milli-requests per second,
       # which is 1 request every two seconds
-      targetAverageValue: 500m
+      target:
+        type: Value
+        averageValue: 500m
 ```
 </details>
@@ -175,7 +179,7 @@ spec:
 If you try creating that now (and take a look at your controller-manager
 logs), you'll see that the HorizontalPodAutoscaler controller is
 attempting to fetch metrics from
-`/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app`,
+`/apis/custom.metrics.k8s.io/v1beta2/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app`,
 but right now, nothing's serving that API.
 
 Before you can autoscale your application, you'll need to make sure that
@@ -196,11 +200,11 @@ First, you'll need to deploy the Prometheus Operator. Check out the
 guide](https://github.com/prometheus-operator/prometheus-operator#quickstart)
 for the Operator to deploy a copy of Prometheus.
 
-This walkthrough assumes that Prometheus is deployed in the `prom`
+This walkthrough assumes that Prometheus is deployed in the `monitoring`
 namespace. Most of the sample commands and files are namespace-agnostic,
 but there are a few commands or pieces of configuration that rely on that
 namespace. If you're using a different namespace, simply substitute that
-in for `prom` when it appears.
+in for `monitoring` when it appears.
 
 ### Monitoring Your Application
@@ -212,7 +216,7 @@ service:
 <details>
-<summary>service-monitor.yaml</summary>
+<summary>sample-app.monitor.yaml</summary>
 
 ```yaml
 kind: ServiceMonitor
@@ -232,12 +236,12 @@ spec:
 </details>
 
 ```shell
-$ kubectl create -f service-monitor.yaml
+$ kubectl create -f sample-app.monitor.yaml
 ```
 
-Now, you should see your metrics appear in your Prometheus instance. Look
+Now, you should see your metrics (`http_requests_total`) appear in your Prometheus instance. Look
 them up via the dashboard, and make sure they have the `namespace` and
-`pod` labels.
+`pod` labels. If not, check the labels on the service monitor match the ones on the Prometheus CRD.
 
 ### Launching the Adapter
@@ -255,7 +259,46 @@ the steps to deploy the adapter. Note that if you're deploying on
 a non-x86_64 (amd64) platform, you'll need to change the `image` field in
 the Deployment to be the appropriate image for your platform.
 
-The default adapter configuration should work for this walkthrough and
+However an update to the adapter config is necessary in order to
+expose custom metrics.
+
+<details>
+<summary>prom-adapter.config.yaml</summary>
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: adapter-config
+  namespace: monitoring
+data:
+  config.yaml: |-
+    "rules":
+    - "seriesQuery": |
+        {namespace!="",__name__!~"^container_.*"}
+      "resources":
+        "template": "<<.Resource>>"
+      "name":
+        "matches": "^(.*)_total"
+        "as": ""
+      "metricsQuery": |
+        sum by (<<.GroupBy>>) (
+          irate (
+            <<.Series>>{<<.LabelMatchers>>}[1m]
+          )
+        )
+```
+</details>
+
+```shell
+$ kubectl apply -f prom-adapter.config.yaml
+
+# Restart prom-adapter pods
+$ kubectl rollout restart deployment prometheus-adapter -n monitoring
+```
+
+This adapter configuration should work for this walkthrough together with
 a standard Prometheus Operator configuration, but if you've got custom
 relabelling rules, or your labels above weren't exactly `namespace` and
 `pod`, you may need to edit the configuration in the ConfigMap. The
@@ -264,11 +307,36 @@ overview of how configuration works.
 ### The Registered API
 
-As part of the creation of the adapter Deployment and associated objects
-(performed above), we registered the API with the API aggregator (part of
-the main Kubernetes API server).
+We also need to register the custom metrics API with the API aggregator (part of
+the main Kubernetes API server). For that we need to create an APIService resource
+
+<details>
+<summary>api-service.yaml</summary>
+
+```yaml
+apiVersion: apiregistration.k8s.io/v1
+kind: APIService
+metadata:
+  name: v1beta2.custom.metrics.k8s.io
+spec:
+  group: custom.metrics.k8s.io
+  groupPriorityMinimum: 100
+  insecureSkipTLSVerify: true
+  service:
+    name: prometheus-adapter
+    namespace: monitoring
+  version: v1beta2
+  versionPriority: 100
+```
+</details>
+
+```shell
+$ kubectl create -f api-service.yaml
+```
 
-The API is registered as `custom.metrics.k8s.io/v1beta1`, and you can find
+The API is registered as `custom.metrics.k8s.io/v1beta2`, and you can find
 more information about aggregation at [Concepts:
 Aggregation](https://github.com/kubernetes-incubator/apiserver-builder/blob/master/docs/concepts/aggregation.md).
@@ -279,7 +347,7 @@ With that all set, your custom metrics API should show up in discovery.
 Try fetching the discovery information for it:
 
 ```shell
-$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
+$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta2
 ```
 
 Since you've set up Prometheus to collect your app's metrics, you should
@@ -293,12 +361,12 @@ sends a raw GET request to the Kubernetes API server, automatically
 injecting auth information:
 
 ```shell
-$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app"
+$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app"
 ```
 
 Because of the adapter's configuration, the cumulative metric
 `http_requests_total` has been converted into a rate metric,
-`pods/http_requests`, which measures requests per second over a 2 minute
+`pods/http_requests`, which measures requests per second over a 1 minute
 interval. The value should currently be close to zero, since there's no
 traffic to your app, except for the regular metrics collection from
 Prometheus.
@@ -349,7 +417,7 @@ and make decisions based on it.
 If you didn't create the HorizontalPodAutoscaler above, create it now:
 
 ```shell
-$ kubectl create -f sample-app-hpa.yaml
+$ kubectl create -f sample-app.hpa.yaml
 ```
 
 Wait a little bit, and then examine the HPA:
@@ -395,4 +463,4 @@ setting different labels or using the `Object` metric source type.
 For more information on how metrics are exposed by the Prometheus adapter,
 see [config documentation](/docs/config.md), and check the [default
-configuration](/deploy/manifests/custom-metrics-config-map.yaml).
+configuration](/deploy/manifests/config-map.yaml).

go.mod (129 lines changed)

@@ -1,23 +1,118 @@
 module sigs.k8s.io/prometheus-adapter
 
-go 1.16
+go 1.22.1
+
+toolchain go1.22.2
 
 require (
-	github.com/onsi/ginkgo v1.16.4
-	github.com/onsi/gomega v1.15.0
-	github.com/prometheus/client_golang v1.11.0
-	github.com/prometheus/common v0.26.0
-	github.com/spf13/cobra v1.2.1
-	github.com/stretchr/testify v1.7.0
+	github.com/onsi/ginkgo v1.16.5
+	github.com/onsi/gomega v1.33.1
+	github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.73.2
+	github.com/prometheus-operator/prometheus-operator/pkg/client v0.73.2
+	github.com/prometheus/client_golang v1.18.0
+	github.com/prometheus/common v0.46.0
+	github.com/spf13/cobra v1.8.0
+	github.com/stretchr/testify v1.9.0
 	gopkg.in/yaml.v2 v2.4.0
-	k8s.io/api v0.22.0
-	k8s.io/apimachinery v0.22.0
-	k8s.io/apiserver v0.22.0
-	k8s.io/client-go v0.22.0
-	k8s.io/component-base v0.22.0
-	k8s.io/klog/v2 v2.9.0
-	k8s.io/kube-openapi v0.0.0-20210421082810-95288971da7e
-	k8s.io/metrics v0.22.0
-	sigs.k8s.io/custom-metrics-apiserver v1.22.0
-	sigs.k8s.io/metrics-server v0.5.0
+	k8s.io/api v0.30.0
+	k8s.io/apimachinery v0.30.0
+	k8s.io/apiserver v0.30.0
+	k8s.io/client-go v0.30.0
+	k8s.io/component-base v0.30.0
+	k8s.io/klog/v2 v2.120.1
+	k8s.io/kube-openapi v0.0.0-20240430033511-f0e62f92d13f
+	k8s.io/metrics v0.30.0
+	sigs.k8s.io/custom-metrics-apiserver v1.30.0
+	sigs.k8s.io/metrics-server v0.7.1
 )
require (
github.com/NYTimes/gziphandler v1.1.1 // indirect
github.com/antlr/antlr4/runtime/Go/antlr/v4 v4.0.0-20230305170008-8188dc5388df // indirect
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cenkalti/backoff/v4 v4.2.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/coreos/go-semver v0.3.1 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/emicklei/go-restful/v3 v3.12.0 // indirect
github.com/evanphx/json-patch v5.9.0+incompatible // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/go-logr/logr v1.4.1 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.21.0 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/cel-go v0.17.8 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.18.1 // indirect
github.com/imdario/mergo v0.3.16 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/nxadm/tail v1.4.8 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_model v0.5.0 // indirect
github.com/prometheus/procfs v0.12.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/stoewer/go-strcase v1.3.0 // indirect
go.etcd.io/etcd/api/v3 v3.5.11 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.11 // indirect
go.etcd.io/etcd/client/v3 v3.5.11 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.46.1 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.46.1 // indirect
go.opentelemetry.io/otel v1.21.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0 // indirect
go.opentelemetry.io/otel/metric v1.21.0 // indirect
go.opentelemetry.io/otel/sdk v1.21.0 // indirect
go.opentelemetry.io/otel/trace v1.21.0 // indirect
go.opentelemetry.io/proto/otlp v1.0.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.26.0 // indirect
golang.org/x/crypto v0.31.0 // indirect
golang.org/x/exp v0.0.0-20231226003508-02704c960a9b // indirect
golang.org/x/mod v0.17.0 // indirect
golang.org/x/net v0.25.0 // indirect
golang.org/x/oauth2 v0.18.0 // indirect
golang.org/x/sync v0.10.0 // indirect
golang.org/x/sys v0.28.0 // indirect
golang.org/x/term v0.27.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/time v0.5.0 // indirect
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d // indirect
google.golang.org/appengine v1.6.8 // indirect
google.golang.org/genproto v0.0.0-20231212172506-995d672761c0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20231212172506-995d672761c0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20231212172506-995d672761c0 // indirect
google.golang.org/grpc v1.60.1 // indirect
google.golang.org/protobuf v1.33.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/apiextensions-apiserver v0.29.3 // indirect
k8s.io/gengo/v2 v2.0.0-20240228010128-51d4e06bde70 // indirect
k8s.io/kms v0.30.0 // indirect
k8s.io/utils v0.0.0-20240423183400-0849a56e8f22 // indirect
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.29.0 // indirect
sigs.k8s.io/controller-runtime v0.17.2 // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
sigs.k8s.io/yaml v1.4.0 // indirect
)

go.sum (1304 lines changed)

File diff suppressed because it is too large.


@@ -1,41 +0,0 @@
#!/bin/bash
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
verify=0
if [[ ${1:-} = "--verify" || ${1:-} = "-v" ]]; then
verify=1
fi
find_files() {
find . -not \( \( \
-wholename './_output' \
-o -wholename './vendor' \
\) -prune \) -name '*.go'
}
if [[ $verify -eq 1 ]]; then
diff=$(find_files | xargs gofmt -s -d 2>&1)
if [[ -n "${diff}" ]]; then
echo "gofmt -s -w $(echo "${diff}" | awk '/^diff / { print $2 }' | tr '\n' ' ')"
exit 1
fi
else
find_files | xargs gofmt -s -w
fi


@@ -12,6 +12,7 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
 
+//go:build tools
 // +build tools
 
 // Package tools tracks dependencies for tools that used in the build process.

File diff suppressed because it is too large.


@@ -1,3 +1,4 @@
+//go:build codegen
 // +build codegen
 
 /*


@ -21,10 +21,10 @@ import (
"encoding/json" "encoding/json"
"fmt" "fmt"
"io" "io"
"io/ioutil"
"net/http" "net/http"
"net/url" "net/url"
"path" "path"
"strings"
"time" "time"
"github.com/prometheus/common/model" "github.com/prometheus/common/model"
@@ -53,13 +53,25 @@ type httpAPIClient struct {
 func (c *httpAPIClient) Do(ctx context.Context, verb, endpoint string, query url.Values) (APIResponse, error) {
 	u := *c.baseURL
 	u.Path = path.Join(c.baseURL.Path, endpoint)
+	var reqBody io.Reader
+	if verb == http.MethodGet {
 		u.RawQuery = query.Encode()
-	req, err := http.NewRequest(verb, u.String(), nil)
+	} else if verb == http.MethodPost {
+		reqBody = strings.NewReader(query.Encode())
+	}
+	req, err := http.NewRequestWithContext(ctx, verb, u.String(), reqBody)
 	if err != nil {
 		return APIResponse{}, fmt.Errorf("error constructing HTTP request to Prometheus: %v", err)
 	}
-	req.WithContext(ctx)
-	req.Header = c.headers
+	for key, values := range c.headers {
+		for _, value := range values {
+			req.Header.Add(key, value)
+		}
+	}
+	if verb == http.MethodPost {
+		req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
+	}
 	resp, err := c.client.Do(req)
 	defer func() {
@@ -88,7 +100,7 @@ func (c *httpAPIClient) Do(ctx context.Context, verb, endpoint string, query url
 	var body io.Reader = resp.Body
 	if klog.V(8).Enabled() {
-		data, err := ioutil.ReadAll(body)
+		data, err := io.ReadAll(body)
 		if err != nil {
 			return APIResponse{}, fmt.Errorf("unable to log response body: %v", err)
 		}
@@ -132,19 +144,21 @@ const (
 // queryClient is a Client that connects to the Prometheus HTTP API.
 type queryClient struct {
 	api  GenericAPIClient
+	verb string
 }
 // NewClientForAPI creates a Client for the given generic Prometheus API client.
-func NewClientForAPI(client GenericAPIClient) Client {
+func NewClientForAPI(client GenericAPIClient, verb string) Client {
 	return &queryClient{
 		api:  client,
+		verb: verb,
 	}
 }
 // NewClient creates a Client for the given HTTP client and base URL (the location of the Prometheus server).
-func NewClient(client *http.Client, baseURL *url.URL, headers http.Header) Client {
+func NewClient(client *http.Client, baseURL *url.URL, headers http.Header, verb string) Client {
 	genericClient := NewGenericAPIClient(client, baseURL, headers)
-	return NewClientForAPI(genericClient)
+	return NewClientForAPI(genericClient, verb)
 }
 func (h *queryClient) Series(ctx context.Context, interval model.Interval, selectors ...Selector) ([]Series, error) {
@@ -160,7 +174,7 @@ func (h *queryClient) Series(ctx context.Context, interval model.Interval, selec
 		vals.Add("match[]", string(selector))
 	}
-	res, err := h.api.Do(ctx, "GET", seriesURL, vals)
+	res, err := h.api.Do(ctx, h.verb, seriesURL, vals)
 	if err != nil {
 		return nil, err
 	}
@@ -180,7 +194,7 @@ func (h *queryClient) Query(ctx context.Context, t model.Time, query Selector) (
 		vals.Set("timeout", model.Duration(timeout).String())
 	}
-	res, err := h.api.Do(ctx, "GET", queryURL, vals)
+	res, err := h.api.Do(ctx, h.verb, queryURL, vals)
 	if err != nil {
 		return QueryResult{}, err
 	}
@@ -207,7 +221,7 @@ func (h *queryClient) QueryRange(ctx context.Context, r Range, query Selector) (
 		vals.Set("timeout", model.Duration(timeout).String())
 	}
-	res, err := h.api.Do(ctx, "GET", queryRangeURL, vals)
+	res, err := h.api.Do(ctx, h.verb, queryRangeURL, vals)
 	if err != nil {
 		return QueryResult{}, err
 	}
@@ -221,7 +235,7 @@ func (h *queryClient) QueryRange(ctx context.Context, r Range, query Selector) (
 // when present
 func timeoutFromContext(ctx context.Context) (time.Duration, bool) {
 	if deadline, hasDeadline := ctx.Deadline(); hasDeadline {
-		return time.Now().Sub(deadline), true
+		return time.Since(deadline), true
 	}
 	return time.Duration(0), false

View file

@@ -13,6 +13,7 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
 */
 package client
 import (

View file

@@ -13,6 +13,7 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
 */
 package client
 import (

View file

@@ -13,15 +13,21 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
 */
 package metrics
 import (
 	"context"
+	"net/http"
 	"net/url"
 	"time"
 	"github.com/prometheus/client_golang/prometheus"
+	apimetrics "k8s.io/apiserver/pkg/endpoints/metrics"
+	"k8s.io/component-base/metrics"
+	"k8s.io/component-base/metrics/legacyregistry"
 	"sigs.k8s.io/prometheus-adapter/pkg/client"
 )
@@ -29,18 +35,29 @@ var (
 	// queryLatency is the total latency of any query going through the
 	// various endpoints (query, range-query, series). It includes some deserialization
 	// overhead and HTTP overhead.
-	queryLatency = prometheus.NewHistogramVec(
-		prometheus.HistogramOpts{
-			Name: "cmgateway_prometheus_query_latency_seconds",
+	queryLatency = metrics.NewHistogramVec(
+		&metrics.HistogramOpts{
+			Namespace: "prometheus_adapter",
+			Subsystem: "prometheus_client",
+			Name:      "request_duration_seconds",
 			Help: "Prometheus client query latency in seconds. Broken down by target prometheus endpoint and target server",
-			Buckets: prometheus.ExponentialBuckets(0.0001, 2, 10),
+			Buckets: prometheus.DefBuckets,
 		},
-		[]string{"endpoint", "server"},
+		[]string{"path", "server"},
 	)
 )
-func init() {
-	prometheus.MustRegister(queryLatency)
+func MetricsHandler() (http.HandlerFunc, error) {
+	registry := metrics.NewKubeRegistry()
+	err := registry.Register(queryLatency)
+	if err != nil {
+		return nil, err
+	}
+	apimetrics.Register()
+	return func(w http.ResponseWriter, req *http.Request) {
+		legacyregistry.Handler().ServeHTTP(w, req)
+		metrics.HandlerFor(registry, metrics.HandlerOpts{}).ServeHTTP(w, req)
+	}, nil
 }
 // instrumentedClient is a client.GenericAPIClient which instruments calls to Do,
@@ -62,7 +79,7 @@ func (c *instrumentedGenericClient) Do(ctx context.Context, verb, endpoint strin
 			return
 		}
-		queryLatency.With(prometheus.Labels{"endpoint": endpoint, "server": c.serverName}).Observe(endTime.Sub(startTime).Seconds())
+		queryLatency.With(prometheus.Labels{"path": endpoint, "server": c.serverName}).Observe(endTime.Sub(startTime).Seconds())
 	}()
 	var resp client.APIResponse

View file

@@ -25,10 +25,10 @@ type ErrorType string
 const (
 	ErrBadData ErrorType = "bad_data"
-	ErrTimeout = "timeout"
-	ErrCanceled = "canceled"
-	ErrExec = "execution"
-	ErrBadResponse = "bad_response"
+	ErrTimeout ErrorType = "timeout"
+	ErrCanceled ErrorType = "canceled"
+	ErrExec ErrorType = "execution"
+	ErrBadResponse ErrorType = "bad_response"
 )
// Error is an error returned by the API. // Error is an error returned by the API.
@@ -46,7 +46,7 @@ type ResponseStatus string
 const (
 	ResponseSucceeded ResponseStatus = "succeeded"
-	ResponseError = "error"
+	ResponseError ResponseStatus = "error"
 )
 // APIResponse represents the raw response returned by the API.

View file

@@ -2,7 +2,7 @@ package config
 import (
 	"fmt"
-	"io/ioutil"
+	"io"
 	"os"
 	yaml "gopkg.in/yaml.v2"
@@ -11,11 +11,11 @@ import (
 // FromFile loads the configuration from a particular file.
 func FromFile(filename string) (*MetricsDiscoveryConfig, error) {
 	file, err := os.Open(filename)
-	defer file.Close()
 	if err != nil {
 		return nil, fmt.Errorf("unable to load metrics discovery config file: %v", err)
 	}
-	contents, err := ioutil.ReadAll(file)
+	defer file.Close()
+	contents, err := io.ReadAll(file)
 	if err != nil {
 		return nil, fmt.Errorf("unable to load metrics discovery config file: %v", err)
 	}

View file

@@ -99,7 +99,7 @@ func (p *prometheusProvider) metricFor(value pmodel.SampleValue, name types.Name
 			Name: info.Metric,
 		},
 		// TODO(directxman12): use the right timestamp
-		Timestamp: metav1.Time{time.Now()},
+		Timestamp: metav1.Time{Time: time.Now()},
 		Value: *q,
 	}
@@ -256,7 +256,7 @@ func (l *cachingMetricsLister) updateMetrics() error {
 		}
 		selectors[sel] = struct{}{}
 		go func() {
-			series, err := l.promClient.Series(context.TODO(), pmodel.Interval{startTime, 0}, sel)
+			series, err := l.promClient.Series(context.TODO(), pmodel.Interval{Start: startTime, End: 0}, sel)
 			if err != nil {
 				errs <- fmt.Errorf("unable to fetch metrics for query %q: %v", sel, err)
 				return

View file

@@ -87,7 +87,7 @@ var _ = Describe("Custom Metrics Provider", func() {
 		By("ensuring that no metrics are present before we start listing")
 		Expect(prov.ListAllMetrics()).To(BeEmpty())
-		By("setting the acceptible interval to now until the next update, with a bit of wiggle room")
+		By("setting the acceptable interval to now until the next update, with a bit of wiggle room")
 		startTime := pmodel.Now().Add(-1*fakeProviderUpdateInterval - fakeProviderUpdateInterval/10)
 		fakeProm.AcceptableInterval = pmodel.Interval{Start: startTime, End: 0}
@@ -98,16 +98,16 @@ var _ = Describe("Custom Metrics Provider", func() {
 		By("listing all metrics, and checking that they contain the expected results")
 		Expect(prov.ListAllMetrics()).To(ConsistOf(
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "services"}, true, "ingress_hits"},
-			provider.CustomMetricInfo{schema.GroupResource{Group: "extensions", Resource: "ingresses"}, true, "ingress_hits"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "ingress_hits"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "ingress_hits"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "services"}, true, "service_proxy_packets"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "service_proxy_packets"},
-			provider.CustomMetricInfo{schema.GroupResource{Group: "extensions", Resource: "deployments"}, true, "work_queue_wait"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "work_queue_wait"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "some_usage"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_usage"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "services"}, Namespaced: true, Metric: "ingress_hits"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Group: "extensions", Resource: "ingresses"}, Namespaced: true, Metric: "ingress_hits"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "ingress_hits"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "ingress_hits"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "services"}, Namespaced: true, Metric: "service_proxy_packets"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "service_proxy_packets"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Group: "extensions", Resource: "deployments"}, Namespaced: true, Metric: "work_queue_wait"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "work_queue_wait"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "some_usage"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_usage"},
 		))
 	})
 })

View file

@@ -84,7 +84,7 @@ var seriesRegistryTestSeries = [][]prom.Series{
 		},
 	},
 	{
-		// guage metrics
+		// gauge metrics
 		{
 			Name: "node_gigawatts",
 			Labels: pmodel.LabelSet{"kube_node": "somenode"},
@@ -159,7 +159,7 @@ var _ = Describe("Series Registry", func() {
 		// container metrics
 		{
 			title: "container metrics gauge / multiple resource names",
-			info: provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_usage"},
+			info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_usage"},
 			namespace: "somens",
 			resourceNames: []string{"somepod1", "somepod2"},
 			metricSelector: labels.Everything(),
@@ -168,7 +168,7 @@ var _ = Describe("Series Registry", func() {
 		},
 		{
 			title: "container metrics counter",
-			info: provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_count"},
+			info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_count"},
 			namespace: "somens",
 			resourceNames: []string{"somepod1", "somepod2"},
 			metricSelector: labels.Everything(),
@@ -177,7 +177,7 @@ var _ = Describe("Series Registry", func() {
 		},
 		{
 			title: "container metrics seconds counter",
-			info: provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_time"},
+			info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_time"},
 			namespace: "somens",
 			resourceNames: []string{"somepod1", "somepod2"},
 			metricSelector: labels.Everything(),
@@ -187,7 +187,7 @@ var _ = Describe("Series Registry", func() {
 		// namespaced metrics
 		{
 			title: "namespaced metrics counter / multidimensional (service)",
-			info: provider.CustomMetricInfo{schema.GroupResource{Resource: "service"}, true, "ingress_hits"},
+			info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "service"}, Namespaced: true, Metric: "ingress_hits"},
 			namespace: "somens",
 			resourceNames: []string{"somesvc"},
 			metricSelector: labels.Everything(),
@@ -196,7 +196,7 @@ var _ = Describe("Series Registry", func() {
 		},
 		{
 			title: "namespaced metrics counter / multidimensional (service) / selection using labels",
-			info: provider.CustomMetricInfo{schema.GroupResource{Resource: "service"}, true, "ingress_hits"},
+			info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "service"}, Namespaced: true, Metric: "ingress_hits"},
 			namespace: "somens",
 			resourceNames: []string{"somesvc"},
 			metricSelector: labels.NewSelector().Add(
@@ -206,7 +206,7 @@ var _ = Describe("Series Registry", func() {
 		},
 		{
 			title: "namespaced metrics counter / multidimensional (ingress)",
-			info: provider.CustomMetricInfo{schema.GroupResource{Group: "extensions", Resource: "ingress"}, true, "ingress_hits"},
+			info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Group: "extensions", Resource: "ingress"}, Namespaced: true, Metric: "ingress_hits"},
 			namespace: "somens",
 			resourceNames: []string{"someingress"},
 			metricSelector: labels.Everything(),
@@ -215,7 +215,7 @@ var _ = Describe("Series Registry", func() {
 		},
 		{
 			title: "namespaced metrics counter / multidimensional (pod)",
-			info: provider.CustomMetricInfo{schema.GroupResource{Resource: "pod"}, true, "ingress_hits"},
+			info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pod"}, Namespaced: true, Metric: "ingress_hits"},
 			namespace: "somens",
 			resourceNames: []string{"somepod"},
 			metricSelector: labels.Everything(),
@@ -224,7 +224,7 @@ var _ = Describe("Series Registry", func() {
 		},
 		{
 			title: "namespaced metrics gauge",
-			info: provider.CustomMetricInfo{schema.GroupResource{Resource: "service"}, true, "service_proxy_packets"},
+			info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "service"}, Namespaced: true, Metric: "service_proxy_packets"},
 			namespace: "somens",
 			resourceNames: []string{"somesvc"},
 			metricSelector: labels.Everything(),
@@ -233,7 +233,7 @@ var _ = Describe("Series Registry", func() {
 		},
 		{
 			title: "namespaced metrics seconds counter",
-			info: provider.CustomMetricInfo{schema.GroupResource{Group: "extensions", Resource: "deployment"}, true, "work_queue_wait"},
+			info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Group: "extensions", Resource: "deployment"}, Namespaced: true, Metric: "work_queue_wait"},
 			namespace: "somens",
 			resourceNames: []string{"somedep"},
 			metricSelector: labels.Everything(),
@@ -243,7 +243,7 @@ var _ = Describe("Series Registry", func() {
 		// non-namespaced series
 		{
 			title: "root scoped metrics gauge",
-			info: provider.CustomMetricInfo{schema.GroupResource{Resource: "node"}, false, "node_gigawatts"},
+			info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "node"}, Namespaced: false, Metric: "node_gigawatts"},
 			resourceNames: []string{"somenode"},
 			metricSelector: labels.Everything(),
@@ -251,7 +251,7 @@ var _ = Describe("Series Registry", func() {
 		},
 		{
 			title: "root scoped metrics counter",
-			info: provider.CustomMetricInfo{schema.GroupResource{Resource: "persistentvolume"}, false, "volume_claims"},
+			info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "persistentvolume"}, Namespaced: false, Metric: "volume_claims"},
 			resourceNames: []string{"somepv"},
 			metricSelector: labels.Everything(),
@@ -259,7 +259,7 @@ var _ = Describe("Series Registry", func() {
 		},
 		{
 			title: "root scoped metrics seconds counter",
-			info: provider.CustomMetricInfo{schema.GroupResource{Resource: "node"}, false, "node_fan"},
+			info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "node"}, Namespaced: false, Metric: "node_fan"},
 			resourceNames: []string{"somenode"},
 			metricSelector: labels.Everything(),
@@ -281,23 +281,23 @@ var _ = Describe("Series Registry", func() {
 	It("should list all metrics", func() {
 		Expect(registry.ListAllMetrics()).To(ConsistOf(
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_count"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "some_count"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_time"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "some_time"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_usage"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "some_usage"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "services"}, true, "ingress_hits"},
-			provider.CustomMetricInfo{schema.GroupResource{Group: "extensions", Resource: "ingresses"}, true, "ingress_hits"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "ingress_hits"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "ingress_hits"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "services"}, true, "service_proxy_packets"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "service_proxy_packets"},
-			provider.CustomMetricInfo{schema.GroupResource{Group: "extensions", Resource: "deployments"}, true, "work_queue_wait"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "work_queue_wait"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "nodes"}, false, "node_gigawatts"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "persistentvolumes"}, false, "volume_claims"},
-			provider.CustomMetricInfo{schema.GroupResource{Resource: "nodes"}, false, "node_fan"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_count"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "some_count"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_time"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "some_time"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_usage"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "some_usage"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "services"}, Namespaced: true, Metric: "ingress_hits"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Group: "extensions", Resource: "ingresses"}, Namespaced: true, Metric: "ingress_hits"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "ingress_hits"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "ingress_hits"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "services"}, Namespaced: true, Metric: "service_proxy_packets"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "service_proxy_packets"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Group: "extensions", Resource: "deployments"}, Namespaced: true, Metric: "work_queue_wait"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "work_queue_wait"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "nodes"}, Namespaced: false, Metric: "node_gigawatts"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "persistentvolumes"}, Namespaced: false, Metric: "volume_claims"},
+			provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "nodes"}, Namespaced: false, Metric: "node_fan"},
 		))
 	})
 })

View file

@@ -100,7 +100,7 @@ func (l *basicMetricLister) ListAllMetrics() (MetricUpdateResult, error) {
 		}
 		selectors[sel] = struct{}{}
 		go func() {
-			series, err := l.promClient.Series(context.TODO(), pmodel.Interval{startTime, 0}, sel)
+			series, err := l.promClient.Series(context.TODO(), pmodel.Interval{Start: startTime, End: 0}, sel)
 			if err != nil {
 				errs <- fmt.Errorf("unable to fetch metrics for query %q: %v", sel, err)
 				return

View file

@@ -103,7 +103,6 @@ func (r *externalSeriesRegistry) filterAndStoreMetrics(result MetricUpdateResult
 	r.metrics = apiMetricsCache
 	r.metricsInfo = rawMetricsCache
 }
-
 func (r *externalSeriesRegistry) ListAllMetrics() []provider.ExternalMetricInfo {

View file

@@ -61,7 +61,7 @@ func (c *metricConverter) convertSample(info provider.ExternalMetricInfo, sample
 	singleMetric := external_metrics.ExternalMetricValue{
 		MetricName: info.Metric,
 		Timestamp: metav1.Time{
-			sample.Timestamp.Time(),
+			Time: sample.Timestamp.Time(),
 		},
 		Value: *resource.NewMilliQuantity(int64(sample.Value*1000.0), resource.DecimalSI),
 		MetricLabels: labels,
@@ -133,7 +133,7 @@ func (c *metricConverter) convertScalar(info provider.ExternalMetricInfo, queryR
 		{
 			MetricName: info.Metric,
 			Timestamp: metav1.Time{
-				toConvert.Timestamp.Time(),
+				Time: toConvert.Timestamp.Time(),
 			},
 			Value: *resource.NewMilliQuantity(int64(toConvert.Value*1000.0), resource.DecimalSI),
 		},

View file

@@ -69,9 +69,9 @@ func (l *periodicMetricLister) updateMetrics() error {
 		return err
 	}
-	//Cache the result.
+	// Cache the result.
 	l.mostRecentResult = result
-	//Let our listeners know we've got new data ready for them.
+	// Let our listeners know we've got new data ready for them.
 	l.notifyListeners()
 	return nil
 }
@@ -85,5 +85,7 @@ func (l *periodicMetricLister) notifyListeners() {
 }
 func (l *periodicMetricLister) UpdateNow() {
-	l.updateMetrics()
+	if err := l.updateMetrics(); err != nil {
+		utilruntime.HandleError(err)
+	}
 }
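The `UpdateNow` change stops silently discarding the error from `updateMetrics`. A minimal sketch of that shape, with a stand-in for `utilruntime.HandleError` (hypothetical names `updateNow`/`handleError`; the real helper lives in k8s.io/apimachinery and logs rather than crashing):

```go
package main

import (
	"errors"
	"fmt"
)

// handleError stands in for utilruntime.HandleError from the diff: surface
// the failure without taking the process down.
func handleError(err error) {
	fmt.Println("error during metrics update:", err)
}

// updateNow mirrors the fixed UpdateNow: the previously ignored error from
// the update function is now routed to the error handler.
func updateNow(updateMetrics func() error) {
	if err := updateMetrics(); err != nil {
		handleError(err)
	}
}

func main() {
	updateNow(func() error { return nil })                          // nothing logged
	updateNow(func() error { return errors.New("scrape failed") }) // logged
}
```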

View file

@@ -77,9 +77,9 @@ func (p *externalPrometheusProvider) selectGroupResource(namespace string) schem
 }
 // NewExternalPrometheusProvider creates an ExternalMetricsProvider capable of responding to Kubernetes requests for external metric data
-func NewExternalPrometheusProvider(promClient prom.Client, namers []naming.MetricNamer, updateInterval time.Duration) (provider.ExternalMetricsProvider, Runnable) {
+func NewExternalPrometheusProvider(promClient prom.Client, namers []naming.MetricNamer, updateInterval time.Duration, maxAge time.Duration) (provider.ExternalMetricsProvider, Runnable) {
 	metricConverter := NewMetricConverter()
-	basicLister := NewBasicMetricLister(promClient, namers, updateInterval)
+	basicLister := NewBasicMetricLister(promClient, namers, maxAge)
 	periodicLister, _ := NewPeriodicMetricLister(basicLister, updateInterval)
 	seriesRegistry := NewExternalSeriesRegistry(periodicLister)
 	return &externalPrometheusProvider{

View file

@@ -22,7 +22,6 @@ import (
 	"regexp"
 	"text/template"
-	apimeta "k8s.io/apimachinery/pkg/api/meta"
 	"k8s.io/apimachinery/pkg/runtime/schema"
 	pmodel "github.com/prometheus/common/model"
@@ -34,7 +33,6 @@ type labelGroupResExtractor struct {
 	resourceInd int
 	groupInd *int
-	mapper apimeta.RESTMapper
 }
 // newLabelGroupResExtractor creates a new labelGroupResExtractor for labels whose form
@@ -42,7 +40,10 @@ type labelGroupResExtractor struct {
 // so anything in the template which limits resource or group name length will cause issues.
 func newLabelGroupResExtractor(labelTemplate *template.Template) (*labelGroupResExtractor, error) {
 	labelRegexBuff := new(bytes.Buffer)
-	if err := labelTemplate.Execute(labelRegexBuff, schema.GroupResource{"(?P<group>.+?)", "(?P<resource>.+?)"}); err != nil {
+	if err := labelTemplate.Execute(labelRegexBuff, schema.GroupResource{
+		Group:    "(?P<group>.+?)",
+		Resource: "(?P<resource>.+?)"},
+	); err != nil {
 		return nil, fmt.Errorf("unable to convert label template to matcher: %v", err)
 	}
 	if labelRegexBuff.Len() == 0 {

View file

@@ -194,13 +194,14 @@ func NamersFromConfig(cfg []config.DiscoveryRule, mapper apimeta.RESTMapper) ([]
 	if nameAs == "" {
 		// check if we have an obvious default
 		subexpNames := nameMatches.SubexpNames()
-		if len(subexpNames) == 1 {
+		switch len(subexpNames) {
+		case 1:
 			// no capture groups, use the whole thing
 			nameAs = "$0"
-		} else if len(subexpNames) == 2 {
+		case 2:
 			// one capture group, use that
 			nameAs = "$1"
-		} else {
+		default:
 			return nil, fmt.Errorf("must specify an 'as' value for name matcher %q associated with series query %q", rule.Name.Matches, rule.SeriesQuery)
 		}
 	}


@@ -283,9 +283,8 @@ func (q *metricsQuery) processQueryParts(queryParts []queryPart) ([]string, map[
} }
func (q *metricsQuery) selectMatcher(operator selection.Operator, values []string) (func(string, string) string, error) { func (q *metricsQuery) selectMatcher(operator selection.Operator, values []string) (func(string, string) string, error) {
switch len(values) {
numValues := len(values) case 0:
if numValues == 0 {
switch operator { switch operator {
case selection.Exists: case selection.Exists:
return prom.LabelNeq, nil return prom.LabelNeq, nil
@@ -294,7 +293,7 @@ func (q *metricsQuery) selectMatcher(operator selection.Operator, values []strin
case selection.Equals, selection.DoubleEquals, selection.NotEquals, selection.In, selection.NotIn: case selection.Equals, selection.DoubleEquals, selection.NotEquals, selection.In, selection.NotIn:
return nil, ErrMalformedQuery return nil, ErrMalformedQuery
} }
} else if numValues == 1 { case 1:
switch operator { switch operator {
case selection.Equals, selection.DoubleEquals: case selection.Equals, selection.DoubleEquals:
return prom.LabelEq, nil return prom.LabelEq, nil
@@ -305,7 +304,7 @@ func (q *metricsQuery) selectMatcher(operator selection.Operator, values []strin
case selection.DoesNotExist, selection.NotIn: case selection.DoesNotExist, selection.NotIn:
return prom.LabelNotMatches, nil return prom.LabelNotMatches, nil
} }
} else { default:
// Since labels can only have one value, providing multiple // Since labels can only have one value, providing multiple
// values results in a regex match, even if that's not what the user // values results in a regex match, even if that's not what the user
// asked for. // asked for.
@@ -321,8 +320,8 @@ func (q *metricsQuery) selectMatcher(operator selection.Operator, values []strin
} }
func (q *metricsQuery) selectTargetValue(operator selection.Operator, values []string) (string, error) { func (q *metricsQuery) selectTargetValue(operator selection.Operator, values []string) (string, error) {
numValues := len(values) switch len(values) {
if numValues == 0 { case 0:
switch operator { switch operator {
case selection.Exists, selection.DoesNotExist: case selection.Exists, selection.DoesNotExist:
// Return an empty string when values are equal to 0 // Return an empty string when values are equal to 0
@@ -334,7 +333,7 @@ func (q *metricsQuery) selectTargetValue(operator selection.Operator, values []s
case selection.Equals, selection.DoubleEquals, selection.NotEquals, selection.In, selection.NotIn: case selection.Equals, selection.DoubleEquals, selection.NotEquals, selection.In, selection.NotIn:
return "", ErrMalformedQuery return "", ErrMalformedQuery
} }
} else if numValues == 1 { case 1:
switch operator { switch operator {
case selection.Equals, selection.DoubleEquals, selection.NotEquals, selection.In, selection.NotIn: case selection.Equals, selection.DoubleEquals, selection.NotEquals, selection.In, selection.NotIn:
// Pass the value through as-is. // Pass the value through as-is.
@@ -347,7 +346,7 @@ func (q *metricsQuery) selectTargetValue(operator selection.Operator, values []s
case selection.Exists, selection.DoesNotExist: case selection.Exists, selection.DoesNotExist:
return "", ErrQueryUnsupportedValues return "", ErrQueryUnsupportedValues
} }
} else { default:
switch operator { switch operator {
case selection.Equals, selection.DoubleEquals, selection.NotEquals, selection.In, selection.NotIn: case selection.Equals, selection.DoubleEquals, selection.NotEquals, selection.In, selection.NotIn:
// Pass the value through as-is. // Pass the value through as-is.


@@ -26,9 +26,9 @@ import (
corev1 "k8s.io/api/core/v1" corev1 "k8s.io/api/core/v1"
apimeta "k8s.io/apimachinery/pkg/api/meta" apimeta "k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/api/resource" "k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels" "k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime/schema" "k8s.io/apimachinery/pkg/runtime/schema"
apitypes "k8s.io/apimachinery/pkg/types"
"k8s.io/klog/v2" "k8s.io/klog/v2"
metrics "k8s.io/metrics/pkg/apis/metrics" metrics "k8s.io/metrics/pkg/apis/metrics"
@@ -43,7 +42,6 @@ import (
var ( var (
nodeResource = schema.GroupResource{Resource: "nodes"} nodeResource = schema.GroupResource{Resource: "nodes"}
nsResource = schema.GroupResource{Resource: "ns"}
podResource = schema.GroupResource{Resource: "pods"} podResource = schema.GroupResource{Resource: "pods"}
) )
@@ -72,7 +71,6 @@ func newResourceQuery(cfg config.ResourceRule, mapper apimeta.RESTMapper) (resou
nodeQuery: nodeQuery, nodeQuery: nodeQuery,
containerLabel: cfg.ContainerLabel, containerLabel: cfg.ContainerLabel,
}, nil }, nil
} }
// resourceQuery represents query information for querying resource metrics for some resource, // resourceQuery represents query information for querying resource metrics for some resource,
@@ -123,12 +121,11 @@ type nsQueryResults struct {
} }
// GetPodMetrics implements the api.MetricsProvider interface. // GetPodMetrics implements the api.MetricsProvider interface.
func (p *resourceProvider) GetPodMetrics(pods ...apitypes.NamespacedName) ([]api.TimeInfo, [][]metrics.ContainerMetrics, error) { func (p *resourceProvider) GetPodMetrics(pods ...*metav1.PartialObjectMetadata) ([]metrics.PodMetrics, error) {
resTimes := make([]api.TimeInfo, len(pods)) resMetrics := make([]metrics.PodMetrics, 0, len(pods))
resMetrics := make([][]metrics.ContainerMetrics, len(pods))
if len(pods) == 0 { if len(pods) == 0 {
return resTimes, resMetrics, nil return resMetrics, nil
} }
// TODO(directxman12): figure out how well this scales if we go to list 1000+ pods // TODO(directxman12): figure out how well this scales if we go to list 1000+ pods
@@ -168,37 +165,40 @@ func (p *resourceProvider) GetPodMetrics(pods ...apitypes.NamespacedName) ([]api
// convert the unorganized per-container results into results grouped // convert the unorganized per-container results into results grouped
// together by namespace, pod, and container // together by namespace, pod, and container
for i, pod := range pods { for _, pod := range pods {
p.assignForPod(pod, resultsByNs, &resMetrics[i], &resTimes[i]) podMetric := p.assignForPod(pod, resultsByNs)
if podMetric != nil {
resMetrics = append(resMetrics, *podMetric)
}
} }
return resTimes, resMetrics, nil return resMetrics, nil
} }
// assignForPod takes the resource metrics for all containers in the given pod // assignForPod takes the resource metrics for all containers in the given pod
// from resultsByNs, and places them in MetricsProvider response format in resMetrics, // from resultsByNs, and places them in MetricsProvider response format in resMetrics,
// also recording the earliest time in resTime. It will return without operating if // also recording the earliest time in resTime. It will return without operating if
// any data is missing. // any data is missing.
func (p *resourceProvider) assignForPod(pod apitypes.NamespacedName, resultsByNs map[string]nsQueryResults, resMetrics *[]metrics.ContainerMetrics, resTime *api.TimeInfo) { func (p *resourceProvider) assignForPod(pod *metav1.PartialObjectMetadata, resultsByNs map[string]nsQueryResults) *metrics.PodMetrics {
// check to make sure everything is present // check to make sure everything is present
nsRes, nsResPresent := resultsByNs[pod.Namespace] nsRes, nsResPresent := resultsByNs[pod.Namespace]
if !nsResPresent { if !nsResPresent {
klog.Errorf("unable to fetch metrics for pods in namespace %q, skipping pod %s", pod.Namespace, pod.String()) klog.Errorf("unable to fetch metrics for pods in namespace %q, skipping pod %s", pod.Namespace, pod.String())
return return nil
} }
cpuRes, hasResult := nsRes.cpu[pod.Name] cpuRes, hasResult := nsRes.cpu[pod.Name]
if !hasResult { if !hasResult {
klog.Errorf("unable to fetch CPU metrics for pod %s, skipping", pod.String()) klog.Errorf("unable to fetch CPU metrics for pod %s, skipping", pod.String())
return return nil
} }
memRes, hasResult := nsRes.mem[pod.Name] memRes, hasResult := nsRes.mem[pod.Name]
if !hasResult { if !hasResult {
klog.Errorf("unable to fetch memory metrics for pod %s, skipping", pod.String()) klog.Errorf("unable to fetch memory metrics for pod %s, skipping", pod.String())
return return nil
} }
earliestTs := pmodel.Latest
containerMetrics := make(map[string]metrics.ContainerMetrics) containerMetrics := make(map[string]metrics.ContainerMetrics)
earliestTS := pmodel.Latest
// organize all the CPU results // organize all the CPU results
for _, cpu := range cpuRes { for _, cpu := range cpuRes {
@@ -210,8 +210,8 @@ func (p *resourceProvider) assignForPod(pod apitypes.NamespacedName, resultsByNs
} }
} }
containerMetrics[containerName].Usage[corev1.ResourceCPU] = *resource.NewMilliQuantity(int64(cpu.Value*1000.0), resource.DecimalSI) containerMetrics[containerName].Usage[corev1.ResourceCPU] = *resource.NewMilliQuantity(int64(cpu.Value*1000.0), resource.DecimalSI)
if cpu.Timestamp.Before(earliestTs) { if cpu.Timestamp.Before(earliestTS) {
earliestTs = cpu.Timestamp earliestTS = cpu.Timestamp
} }
} }
@@ -225,8 +225,8 @@ func (p *resourceProvider) assignForPod(pod apitypes.NamespacedName, resultsByNs
} }
} }
containerMetrics[containerName].Usage[corev1.ResourceMemory] = *resource.NewMilliQuantity(int64(mem.Value*1000.0), resource.BinarySI) containerMetrics[containerName].Usage[corev1.ResourceMemory] = *resource.NewMilliQuantity(int64(mem.Value*1000.0), resource.BinarySI)
if mem.Timestamp.Before(earliestTs) { if mem.Timestamp.Before(earliestTS) {
earliestTs = mem.Timestamp earliestTS = mem.Timestamp
} }
} }
@@ -241,40 +241,50 @@ func (p *resourceProvider) assignForPod(pod apitypes.NamespacedName, resultsByNs
} }
} }
podMetric := &metrics.PodMetrics{
ObjectMeta: metav1.ObjectMeta{
Name: pod.Name,
Namespace: pod.Namespace,
Labels: pod.Labels,
CreationTimestamp: metav1.Now(),
},
// store the time in the final format // store the time in the final format
*resTime = api.TimeInfo{ Timestamp: metav1.NewTime(earliestTS.Time()),
Timestamp: earliestTs.Time(), Window: metav1.Duration{Duration: p.window},
Window: p.window,
} }
// store the container metrics in the final format // store the container metrics in the final format
containerMetricsList := make([]metrics.ContainerMetrics, 0, len(containerMetrics)) podMetric.Containers = make([]metrics.ContainerMetrics, 0, len(containerMetrics))
for _, containerMetric := range containerMetrics { for _, containerMetric := range containerMetrics {
containerMetricsList = append(containerMetricsList, containerMetric) podMetric.Containers = append(podMetric.Containers, containerMetric)
} }
*resMetrics = containerMetricsList
return podMetric
} }
// GetNodeMetrics implements the api.MetricsProvider interface. // GetNodeMetrics implements the api.MetricsProvider interface.
func (p *resourceProvider) GetNodeMetrics(nodes ...string) ([]api.TimeInfo, []corev1.ResourceList, error) { func (p *resourceProvider) GetNodeMetrics(nodes ...*corev1.Node) ([]metrics.NodeMetrics, error) {
resTimes := make([]api.TimeInfo, len(nodes)) resMetrics := make([]metrics.NodeMetrics, 0, len(nodes))
resMetrics := make([]corev1.ResourceList, len(nodes))
if len(nodes) == 0 { if len(nodes) == 0 {
return resTimes, resMetrics, nil return resMetrics, nil
} }
now := pmodel.Now() now := pmodel.Now()
nodeNames := make([]string, 0, len(nodes))
for _, node := range nodes {
nodeNames = append(nodeNames, node.Name)
}
// run the actual query // run the actual query
qRes := p.queryBoth(now, nodeResource, "", nodes...) qRes := p.queryBoth(now, nodeResource, "", nodeNames...)
if qRes.err != nil { if qRes.err != nil {
klog.Errorf("failed querying node metrics: %v", qRes.err) klog.Errorf("failed querying node metrics: %v", qRes.err)
return resTimes, resMetrics, nil return resMetrics, nil
} }
// organize the results // organize the results
for i, nodeName := range nodes { for i, nodeName := range nodeNames {
// skip if any data is missing // skip if any data is missing
rawCPUs, gotResult := qRes.cpu[nodeName] rawCPUs, gotResult := qRes.cpu[nodeName]
if !gotResult { if !gotResult {
@@ -290,28 +300,30 @@ func (p *resourceProvider) GetNodeMetrics(nodes ...string) ([]api.TimeInfo, []co
rawMem := rawMems[0] rawMem := rawMems[0]
rawCPU := rawCPUs[0] rawCPU := rawCPUs[0]
// store the results
resMetrics[i] = corev1.ResourceList{
corev1.ResourceCPU: *resource.NewMilliQuantity(int64(rawCPU.Value*1000.0), resource.DecimalSI),
corev1.ResourceMemory: *resource.NewMilliQuantity(int64(rawMem.Value*1000.0), resource.BinarySI),
}
// use the earliest timestamp available (in order to be conservative // use the earliest timestamp available (in order to be conservative
// when determining if metrics are tainted by startup) // when determining if metrics are tainted by startup)
if rawMem.Timestamp.Before(rawCPU.Timestamp) { ts := rawCPU.Timestamp.Time()
resTimes[i] = api.TimeInfo{ if ts.After(rawMem.Timestamp.Time()) {
Timestamp: rawMem.Timestamp.Time(), ts = rawMem.Timestamp.Time()
Window: p.window,
}
} else {
resTimes[i] = api.TimeInfo{
Timestamp: rawCPU.Timestamp.Time(),
Window: 1 * time.Minute,
}
}
} }
return resTimes, resMetrics, nil // store the results
resMetrics = append(resMetrics, metrics.NodeMetrics{
ObjectMeta: metav1.ObjectMeta{
Name: nodes[i].Name,
Labels: nodes[i].Labels,
CreationTimestamp: metav1.Now(),
},
Usage: corev1.ResourceList{
corev1.ResourceCPU: *resource.NewMilliQuantity(int64(rawCPU.Value*1000.0), resource.DecimalSI),
corev1.ResourceMemory: *resource.NewMilliQuantity(int64(rawMem.Value*1000.0), resource.BinarySI),
},
Timestamp: metav1.NewTime(ts),
Window: metav1.Duration{Duration: p.window},
})
}
return resMetrics, nil
} }
// queryBoth queries for both CPU and memory metrics on the given // queryBoth queries for both CPU and memory metrics on the given


@@ -23,9 +23,9 @@ import (
corev1 "k8s.io/api/core/v1" corev1 "k8s.io/api/core/v1"
apimeta "k8s.io/apimachinery/pkg/api/meta" apimeta "k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/api/resource" "k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels" "k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime/schema" "k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/metrics/pkg/apis/metrics" "k8s.io/metrics/pkg/apis/metrics"
"sigs.k8s.io/metrics-server/pkg/api" "sigs.k8s.io/metrics-server/pkg/api"
@@ -122,10 +122,10 @@ var _ = Describe("Resource Metrics Provider", func() {
}) })
It("should be able to list metrics pods across different namespaces", func() { It("should be able to list metrics pods across different namespaces", func() {
pods := []types.NamespacedName{ pods := []*metav1.PartialObjectMetadata{
{Namespace: "some-ns", Name: "pod1"}, {ObjectMeta: metav1.ObjectMeta{Namespace: "some-ns", Name: "pod1"}},
{Namespace: "some-ns", Name: "pod3"}, {ObjectMeta: metav1.ObjectMeta{Namespace: "some-ns", Name: "pod3"}},
{Namespace: "other-ns", Name: "pod27"}, {ObjectMeta: metav1.ObjectMeta{Namespace: "other-ns", Name: "pod27"}},
} }
fakeProm.QueryResults = map[prom.Selector]prom.QueryResult{ fakeProm.QueryResults = map[prom.Selector]prom.QueryResult{
mustBuild(cpuQueries.contQuery.Build("", podResource, "some-ns", []string{cpuQueries.containerLabel}, labels.Everything(), "pod1", "pod3")): buildQueryRes("container_cpu_usage_seconds_total", mustBuild(cpuQueries.contQuery.Build("", podResource, "some-ns", []string{cpuQueries.containerLabel}, labels.Everything(), "pod1", "pod3")): buildQueryRes("container_cpu_usage_seconds_total",
@@ -149,28 +149,34 @@ var _ = Describe("Resource Metrics Provider", func() {
} }
By("querying for metrics for some pods") By("querying for metrics for some pods")
times, metricVals, err := prov.GetPodMetrics(pods...) podMetrics, err := prov.GetPodMetrics(pods...)
Expect(err).NotTo(HaveOccurred()) Expect(err).NotTo(HaveOccurred())
By("verifying that metrics have been fetched for all the pods")
Expect(podMetrics).To(HaveLen(3))
By("verifying that the reported times for each are the earliest times for each pod") By("verifying that the reported times for each are the earliest times for each pod")
Expect(times).To(Equal([]api.TimeInfo{ Expect(podMetrics[0].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
{Timestamp: pmodel.Time(10).Time(), Window: 1 * time.Minute}, Expect(podMetrics[0].Window.Duration).To(Equal(time.Minute))
{Timestamp: pmodel.Time(10).Time(), Window: 1 * time.Minute},
{Timestamp: pmodel.Time(270).Time(), Window: 1 * time.Minute}, Expect(podMetrics[1].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
})) Expect(podMetrics[1].Window.Duration).To(Equal(time.Minute))
Expect(podMetrics[2].Timestamp.Time).To(Equal(pmodel.Time(270).Time()))
Expect(podMetrics[2].Window.Duration).To(Equal(time.Minute))
By("verifying that the right metrics were fetched") By("verifying that the right metrics were fetched")
Expect(metricVals).To(HaveLen(3)) Expect(podMetrics).To(HaveLen(3))
Expect(metricVals[0]).To(ConsistOf( Expect(podMetrics[0].Containers).To(ConsistOf(
metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(1100.0, 3100.0)}, metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(1100.0, 3100.0)},
metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(1110.0, 3110.0)}, metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(1110.0, 3110.0)},
)) ))
Expect(metricVals[1]).To(ConsistOf( Expect(podMetrics[1].Containers).To(ConsistOf(
metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(1300.0, 3300.0)}, metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(1300.0, 3300.0)},
metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(1310.0, 3310.0)}, metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(1310.0, 3310.0)},
)) ))
Expect(metricVals[2]).To(ConsistOf( Expect(podMetrics[2].Containers).To(ConsistOf(
metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(2200.0, 4200.0)}, metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(2200.0, 4200.0)},
)) ))
}) })
@@ -188,23 +194,22 @@ var _ = Describe("Resource Metrics Provider", func() {
} }
By("querying for metrics for some pods, one of which is missing") By("querying for metrics for some pods, one of which is missing")
times, metricVals, err := prov.GetPodMetrics( podMetrics, err := prov.GetPodMetrics(
types.NamespacedName{Namespace: "some-ns", Name: "pod1"}, &metav1.PartialObjectMetadata{ObjectMeta: metav1.ObjectMeta{Namespace: "some-ns", Name: "pod1"}},
types.NamespacedName{Namespace: "some-ns", Name: "pod-nonexistant"}, &metav1.PartialObjectMetadata{ObjectMeta: metav1.ObjectMeta{Namespace: "some-ns", Name: "pod-nonexistant"}},
) )
Expect(err).NotTo(HaveOccurred()) Expect(err).NotTo(HaveOccurred())
By("verifying that the missing pod had nil metrics") By("verifying that the missing pod had no metrics")
Expect(metricVals).To(HaveLen(2)) Expect(podMetrics).To(HaveLen(1))
Expect(metricVals[1]).To(BeNil())
By("verifying that the rest of time metrics and times are correct") By("verifying that the rest of time metrics and times are correct")
Expect(metricVals[0]).To(ConsistOf( Expect(podMetrics[0].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
Expect(podMetrics[0].Window.Duration).To(Equal(time.Minute))
Expect(podMetrics[0].Containers).To(ConsistOf(
metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(1100.0, 3100.0)}, metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(1100.0, 3100.0)},
metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(1110.0, 3110.0)}, metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(1110.0, 3110.0)},
)) ))
Expect(times).To(HaveLen(2))
Expect(times[0]).To(Equal(api.TimeInfo{Timestamp: pmodel.Time(10).Time(), Window: 1 * time.Minute}))
}) })
It("should return metrics of value zero when pod metrics have NaN or negative values", func() { It("should return metrics of value zero when pod metrics have NaN or negative values", func() {
@@ -224,25 +229,27 @@ var _ = Describe("Resource Metrics Provider", func() {
} }
By("querying for metrics for some pods") By("querying for metrics for some pods")
times, metricVals, err := prov.GetPodMetrics( podMetrics, err := prov.GetPodMetrics(
types.NamespacedName{Namespace: "some-ns", Name: "pod1"}, &metav1.PartialObjectMetadata{ObjectMeta: metav1.ObjectMeta{Namespace: "some-ns", Name: "pod1"}},
types.NamespacedName{Namespace: "some-ns", Name: "pod3"}, &metav1.PartialObjectMetadata{ObjectMeta: metav1.ObjectMeta{Namespace: "some-ns", Name: "pod3"}},
) )
Expect(err).NotTo(HaveOccurred()) Expect(err).NotTo(HaveOccurred())
By("verifying that metrics have been fetched for all the pods")
Expect(podMetrics).To(HaveLen(2))
By("verifying that the reported times for each are the earliest times for each pod") By("verifying that the reported times for each are the earliest times for each pod")
Expect(times).To(Equal([]api.TimeInfo{ Expect(podMetrics[0].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
{Timestamp: pmodel.Time(10).Time(), Window: 1 * time.Minute}, Expect(podMetrics[0].Window.Duration).To(Equal(time.Minute))
{Timestamp: pmodel.Time(10).Time(), Window: 1 * time.Minute}, Expect(podMetrics[1].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
})) Expect(podMetrics[1].Window.Duration).To(Equal(time.Minute))
By("verifying that NaN and negative values were replaced by zero") By("verifying that NaN and negative values were replaced by zero")
Expect(metricVals).To(HaveLen(2)) Expect(podMetrics[0].Containers).To(ConsistOf(
Expect(metricVals[0]).To(ConsistOf(
metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(0, 3100.0)}, metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(0, 3100.0)},
metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(0, 0)}, metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(0, 0)},
)) ))
Expect(metricVals[1]).To(ConsistOf( Expect(podMetrics[1].Containers).To(ConsistOf(
metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(0, 0)}, metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(0, 0)},
metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(1310.0, 0)}, metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(1310.0, 0)},
)) ))
@@ -260,20 +267,24 @@ var _ = Describe("Resource Metrics Provider", func() {
), ),
} }
By("querying for metrics for some nodes") By("querying for metrics for some nodes")
times, metricVals, err := prov.GetNodeMetrics("node1", "node2") nodeMetrics, err := prov.GetNodeMetrics(
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node1"}},
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node2"}},
)
Expect(err).NotTo(HaveOccurred()) Expect(err).NotTo(HaveOccurred())
By("verifying that the reported times for each are the earliest times for each pod") By("verifying that metrics have been fetched for all the nodes")
Expect(times).To(Equal([]api.TimeInfo{ Expect(nodeMetrics).To(HaveLen(2))
{Timestamp: pmodel.Time(10).Time(), Window: 1 * time.Minute},
{Timestamp: pmodel.Time(12).Time(), Window: 1 * time.Minute}, By("verifying that the reported times for each are the earliest times for each node")
})) Expect(nodeMetrics[0].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
Expect(nodeMetrics[0].Window.Duration).To(Equal(time.Minute))
Expect(nodeMetrics[1].Timestamp.Time).To(Equal(pmodel.Time(12).Time()))
Expect(nodeMetrics[1].Window.Duration).To(Equal(time.Minute))
By("verifying that the right metrics were fetched") By("verifying that the right metrics were fetched")
Expect(metricVals).To(Equal([]corev1.ResourceList{ Expect(nodeMetrics[0].Usage).To(Equal(buildResList(1100.0, 2100.0)))
buildResList(1100.0, 2100.0), Expect(nodeMetrics[1].Usage).To(Equal(buildResList(1200.0, 2200.0)))
buildResList(1200.0, 2200.0),
}))
}) })
It("should return nil metrics for missing nodes, but still return partial results", func() { It("should return nil metrics for missing nodes, but still return partial results", func() {
@@ -288,24 +299,23 @@ var _ = Describe("Resource Metrics Provider", func() {
), ),
} }
By("querying for metrics for some nodes, one of which is missing") By("querying for metrics for some nodes, one of which is missing")
times, metricVals, err := prov.GetNodeMetrics("node1", "node2", "node3") nodeMetrics, err := prov.GetNodeMetrics(
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node1"}},
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node2"}},
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node3"}},
)
Expect(err).NotTo(HaveOccurred()) Expect(err).NotTo(HaveOccurred())
By("verifying that the missing pod had nil metrics") By("verifying that the missing node had no metrics")
Expect(metricVals).To(HaveLen(3)) Expect(nodeMetrics).To(HaveLen(2))
Expect(metricVals[2]).To(BeNil())
By("verifying that the rest of time metrics and times are correct") By("verifying that the rest of time metrics and times are correct")
Expect(metricVals).To(Equal([]corev1.ResourceList{ Expect(nodeMetrics[0].Usage).To(Equal(buildResList(1100.0, 2100.0)))
buildResList(1100.0, 2100.0), Expect(nodeMetrics[0].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
buildResList(1200.0, 2200.0), Expect(nodeMetrics[0].Window.Duration).To(Equal(time.Minute))
nil, Expect(nodeMetrics[1].Usage).To(Equal(buildResList(1200.0, 2200.0)))
})) Expect(nodeMetrics[1].Timestamp.Time).To(Equal(pmodel.Time(12).Time()))
Expect(times).To(Equal([]api.TimeInfo{ Expect(nodeMetrics[1].Window.Duration).To(Equal(time.Minute))
{Timestamp: pmodel.Time(10).Time(), Window: 1 * time.Minute},
{Timestamp: pmodel.Time(12).Time(), Window: 1 * time.Minute},
{},
}))
}) })
It("should return metrics of value zero when node metrics have NaN or negative values", func() { It("should return metrics of value zero when node metrics have NaN or negative values", func() {
@@ -320,19 +330,23 @@ var _ = Describe("Resource Metrics Provider", func() {
), ),
} }
By("querying for metrics for some nodes") By("querying for metrics for some nodes")
times, metricVals, err := prov.GetNodeMetrics("node1", "node2") nodeMetrics, err := prov.GetNodeMetrics(
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node1"}},
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node2"}},
)
Expect(err).NotTo(HaveOccurred()) Expect(err).NotTo(HaveOccurred())
By("verifying that metrics have been fetched for all the nodes")
Expect(nodeMetrics).To(HaveLen(2))
By("verifying that the reported times for each are the earliest times for each pod") By("verifying that the reported times for each are the earliest times for each pod")
Expect(times).To(Equal([]api.TimeInfo{ Expect(nodeMetrics[0].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
{Timestamp: pmodel.Time(10).Time(), Window: 1 * time.Minute}, Expect(nodeMetrics[0].Window.Duration).To(Equal(time.Minute))
{Timestamp: pmodel.Time(12).Time(), Window: 1 * time.Minute}, Expect(nodeMetrics[1].Timestamp.Time).To(Equal(pmodel.Time(12).Time()))
})) Expect(nodeMetrics[1].Window.Duration).To(Equal(time.Minute))
By("verifying that NaN and negative values were replaced by zero") By("verifying that NaN and negative values were replaced by zero")
Expect(metricVals).To(Equal([]corev1.ResourceList{ Expect(nodeMetrics[0].Usage).To(Equal(buildResList(0, 2100.0)))
buildResList(0, 2100.0), Expect(nodeMetrics[1].Usage).To(Equal(buildResList(1200.0, 0)))
buildResList(1200.0, 0),
}))
}) })
}) })

test/README.md Normal file

@@ -0,0 +1,41 @@
# End-to-end tests
## With [kind](https://kind.sigs.k8s.io/)
[`kind`](https://kind.sigs.k8s.io/) and `kubectl` are automatically downloaded
unless `SKIP_INSTALL=true` is set.
A `kind` cluster is automatically created before the tests, and deleted after
the tests.
The `prometheus-adapter` container image is built locally and imported
into the cluster.
```bash
KIND_E2E=true make test-e2e
```
## With an existing Kubernetes cluster
If you already have a Kubernetes cluster, you can use:
```bash
KUBECONFIG="/path/to/kube/config" REGISTRY="my.registry/prefix" make test-e2e
```
- The cluster should not have a namespace `prometheus-adapter-e2e`.
The namespace will be created and deleted as part of the E2E tests.
- `KUBECONFIG` is the path of the [`kubeconfig` file].
**Optional**, defaults to `${HOME}/.kube/config`.
- `REGISTRY` is the image registry where the container image should be pushed.
**Required**.
[`kubeconfig` file]: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
## Additional environment variables
These environment variables may also be used (with any non-empty value):
- `SKIP_INSTALL`: skip the installation of `kind` and `kubectl` binaries;
- `SKIP_CLEAN_AFTER`: skip the deletion of resources (`Kind` cluster or
Kubernetes namespace) and of the temporary directory `.e2e`;
- `CLEAN_BEFORE`: clean before running the tests, e.g. if `SKIP_CLEAN_AFTER`
was used on the previous run.
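These variables can be combined with the `KIND_E2E` flow above. For example, a run that first cleans up leftovers from a previous run, keeps the cluster and `.e2e` directory afterwards for inspection, and skips re-downloading the binaries (assuming `kind` and `kubectl` are already installed) might look like:

```shell
# Clean leftovers first, keep the kind cluster and .e2e directory after the
# tests, and reuse already-installed kind/kubectl binaries.
CLEAN_BEFORE=true SKIP_CLEAN_AFTER=true SKIP_INSTALL=true KIND_E2E=true make test-e2e
```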

test/e2e/e2e_test.go Normal file

@@ -0,0 +1,213 @@
/*
Copyright 2022 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package e2e
import (
"context"
"fmt"
"log"
"os"
"testing"
"time"
monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
monitoring "github.com/prometheus-operator/prometheus-operator/pkg/client/versioned"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
clientset "k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
metricsv1beta1 "k8s.io/metrics/pkg/apis/metrics/v1beta1"
metrics "k8s.io/metrics/pkg/client/clientset/versioned"
)
const (
ns = "prometheus-adapter-e2e"
prometheusInstance = "prometheus"
deployment = "prometheus-adapter"
)
var (
client clientset.Interface
promOpClient monitoring.Interface
metricsClient metrics.Interface
)
func TestMain(m *testing.M) {
kubeconfig := os.Getenv("KUBECONFIG")
if len(kubeconfig) == 0 {
log.Fatal("KUBECONFIG not provided")
}
var err error
client, promOpClient, metricsClient, err = initializeClients(kubeconfig)
if err != nil {
log.Fatalf("Cannot create clients: %v", err)
}
ctx := context.Background()
err = waitForPrometheusReady(ctx, ns, prometheusInstance)
if err != nil {
log.Fatalf("Prometheus instance 'prometheus' not ready: %v", err)
}
err = waitForDeploymentReady(ctx, ns, deployment)
if err != nil {
log.Fatalf("Deployment prometheus-adapter not ready: %v", err)
}
exitVal := m.Run()
os.Exit(exitVal)
}
func initializeClients(kubeconfig string) (clientset.Interface, monitoring.Interface, metrics.Interface, error) {
cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
if err != nil {
return nil, nil, nil, fmt.Errorf("error during client configuration: %v", err)
}
clientSet, err := clientset.NewForConfig(cfg)
if err != nil {
return nil, nil, nil, fmt.Errorf("error during client creation: %v", err)
}
promOpClient, err := monitoring.NewForConfig(cfg)
if err != nil {
return nil, nil, nil, fmt.Errorf("error during monitoring client creation: %v", err)
}
metricsClientSet, err := metrics.NewForConfig(cfg)
if err != nil {
return nil, nil, nil, fmt.Errorf("Error during metrics client creation with %v", err)
}
return clientSet, promOpClient, metricsClientSet, nil
}
func waitForPrometheusReady(ctx context.Context, namespace string, name string) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 120*time.Second, true, func(ctx context.Context) (bool, error) {
		prom, err := promOpClient.MonitoringV1().Prometheuses(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}

		var reconciled, available *monitoringv1.Condition
		for _, condition := range prom.Status.Conditions {
			cond := condition
			if cond.Type == monitoringv1.Reconciled {
				reconciled = &cond
			} else if cond.Type == monitoringv1.Available {
				available = &cond
			}
		}

		if reconciled == nil {
			log.Printf("Prometheus instance '%s': Waiting for reconciliation status...", name)
			return false, nil
		}
		if reconciled.Status != monitoringv1.ConditionTrue {
			log.Printf("Prometheus instance '%s': Reconciled = %v. Waiting for reconciliation (reason %s, %q)...", name, reconciled.Status, reconciled.Reason, reconciled.Message)
			return false, nil
		}

		specReplicas := *prom.Spec.Replicas
		availableReplicas := prom.Status.AvailableReplicas
		if specReplicas != availableReplicas {
			log.Printf("Prometheus instance '%s': %v/%v pods are ready. Waiting for all pods to be ready...", name, availableReplicas, specReplicas)
			return false, nil
		}

		if available == nil {
			log.Printf("Prometheus instance '%s': Waiting for Available status...", name)
			return false, nil
		}
		if available.Status != monitoringv1.ConditionTrue {
			log.Printf("Prometheus instance '%s': Available = %v. Waiting for Available status... (reason %s, %q)", name, available.Status, available.Reason, available.Message)
			return false, nil
		}

		log.Printf("Prometheus instance '%s': Ready.", name)
		return true, nil
	})
}
func waitForDeploymentReady(ctx context.Context, namespace string, name string) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 30*time.Second, true, func(ctx context.Context) (bool, error) {
		deploy, err := client.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if deploy.Status.ReadyReplicas == *deploy.Spec.Replicas {
			log.Printf("Deployment %s: %v/%v pods are ready.", name, deploy.Status.ReadyReplicas, *deploy.Spec.Replicas)
			return true, nil
		}
		log.Printf("Deployment %s: %v/%v pods are ready. Waiting for all pods to be ready...", name, deploy.Status.ReadyReplicas, *deploy.Spec.Replicas)
		return false, nil
	})
}
func TestNodeMetrics(t *testing.T) {
	ctx := context.Background()

	var nodeMetrics *metricsv1beta1.NodeMetricsList
	err := wait.PollUntilContextTimeout(ctx, 2*time.Second, 30*time.Second, true, func(ctx context.Context) (bool, error) {
		var err error
		nodeMetrics, err = metricsClient.MetricsV1beta1().NodeMetricses().List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		nonEmptyNodeMetrics := len(nodeMetrics.Items) > 0
		if !nonEmptyNodeMetrics {
			t.Logf("Node metrics empty... Retrying.")
		}
		return nonEmptyNodeMetrics, nil
	})
	require.NoErrorf(t, err, "Node metrics should not be empty")

	for _, nodeMetric := range nodeMetrics.Items {
		positiveMemory := nodeMetric.Usage.Memory().CmpInt64(0)
		assert.Positivef(t, positiveMemory, "Memory usage for node %s is %v, should be > 0", nodeMetric.Name, nodeMetric.Usage.Memory())
		positiveCPU := nodeMetric.Usage.Cpu().CmpInt64(0)
		assert.Positivef(t, positiveCPU, "CPU usage for node %s is %v, should be > 0", nodeMetric.Name, nodeMetric.Usage.Cpu())
	}
}
func TestPodMetrics(t *testing.T) {
	ctx := context.Background()

	var podMetrics *metricsv1beta1.PodMetricsList
	err := wait.PollUntilContextTimeout(ctx, 2*time.Second, 30*time.Second, true, func(ctx context.Context) (bool, error) {
		var err error
		podMetrics, err = metricsClient.MetricsV1beta1().PodMetricses(ns).List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		nonEmptyPodMetrics := len(podMetrics.Items) > 0
		if !nonEmptyPodMetrics {
			t.Logf("Pod metrics empty... Retrying.")
		}
		return nonEmptyPodMetrics, nil
	})
	require.NoErrorf(t, err, "Pod metrics should not be empty")

	for _, pod := range podMetrics.Items {
		for _, containerMetric := range pod.Containers {
			positiveMemory := containerMetric.Usage.Memory().CmpInt64(0)
			assert.Positivef(t, positiveMemory, "Memory usage for container %s in pod %s is %v, should be > 0", containerMetric.Name, pod.Name, containerMetric.Usage.Memory())
		}
	}
}


@@ -1,12 +1,12 @@
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
-  name: custom-metrics:system:auth-delegator
+  name: prometheus
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
-  name: system:auth-delegator
+  name: prometheus
 subjects:
 - kind: ServiceAccount
-  name: custom-metrics-apiserver
-  namespace: custom-metrics
+  name: prometheus
+  namespace: prometheus-adapter-e2e


@@ -0,0 +1,24 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/metrics
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]


@@ -0,0 +1,9 @@
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: prometheus-adapter-e2e
spec:
  replicas: 2
  serviceAccountName: prometheus
  serviceMonitorSelector: {}


@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: prometheus-adapter-e2e


@@ -0,0 +1,25 @@
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/name: kubelet
  name: kubelet
  namespace: prometheus-adapter-e2e
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    honorLabels: true
    honorTimestamps: false
    interval: 10s
    path: /metrics/resource
    port: https-metrics
    scheme: https
    tlsConfig:
      insecureSkipVerify: true
  jobLabel: app.kubernetes.io/name
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      app.kubernetes.io/name: kubelet


@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: prometheus-adapter-e2e
spec:
  ports:
  - name: web
    port: 9090
    targetPort: web
  selector:
    app.kubernetes.io/instance: prometheus
    app.kubernetes.io/name: prometheus
  sessionAffinity: ClientIP

test/run-e2e-tests.sh Executable file

@@ -0,0 +1,134 @@
#!/usr/bin/env bash
# Copyright 2022 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -x
set -o errexit
set -o nounset
# Tool versions
K8S_VERSION=${KUBERNETES_VERSION:-v1.30.0} # cf https://hub.docker.com/r/kindest/node/tags
KIND_VERSION=${KIND_VERSION:-v0.23.0} # cf https://github.com/kubernetes-sigs/kind/releases
PROM_OPERATOR_VERSION=${PROM_OPERATOR_VERSION:-v0.73.2} # cf https://github.com/prometheus-operator/prometheus-operator/releases
# Variables; set to empty if unbound/empty
REGISTRY=${REGISTRY:-}
KIND_E2E=${KIND_E2E:-}
SKIP_INSTALL=${SKIP_INSTALL:-}
SKIP_CLEAN_AFTER=${SKIP_CLEAN_AFTER:-}
CLEAN_BEFORE=${CLEAN_BEFORE:-}
# KUBECONFIG - will be overridden if a cluster is deployed with Kind
KUBECONFIG=${KUBECONFIG:-"${HOME}/.kube/config"}
# A temporary directory used by the tests
E2E_DIR="${PWD}/.e2e"
# The namespace where prometheus-adapter is deployed
NAMESPACE="prometheus-adapter-e2e"
if [[ -z "${REGISTRY}" && -z "${KIND_E2E}" ]]; then
  echo "Either REGISTRY or KIND_E2E should be set."
  exit 1
fi
function clean {
  if [[ -n "${KIND_E2E}" ]]; then
    kind delete cluster || true
  else
    kubectl delete -f ./deploy/manifests || true
    kubectl delete -f ./test/prometheus-manifests || true
    kubectl delete namespace "${NAMESPACE}" || true
  fi
  rm -rf "${E2E_DIR}"
}

if [[ -n "${CLEAN_BEFORE}" ]]; then
  clean
fi
function on_exit {
  local error_code="$?"
  echo "Obtaining prometheus-adapter pod logs..."
  kubectl logs -l app.kubernetes.io/name=prometheus-adapter -n "${NAMESPACE}" || true
  if [[ -z "${SKIP_CLEAN_AFTER}" ]]; then
    clean
  fi
  test "${error_code}" == 0 && return
}
trap on_exit EXIT

if [[ -d "${E2E_DIR}" ]]; then
  echo "${E2E_DIR} already exists."
  exit 1
fi
mkdir -p "${E2E_DIR}"
if [[ -n "${KIND_E2E}" ]]; then
  # Install kubectl and kind, unless SKIP_INSTALL is set
  if [[ -z "${SKIP_INSTALL}" ]]; then
    BIN="${E2E_DIR}/bin"
    mkdir -p "${BIN}"
    curl -Lo "${BIN}/kubectl" "https://dl.k8s.io/release/${K8S_VERSION}/bin/linux/amd64/kubectl" && chmod +x "${BIN}/kubectl"
    curl -Lo "${BIN}/kind" "https://kind.sigs.k8s.io/dl/${KIND_VERSION}/kind-linux-amd64" && chmod +x "${BIN}/kind"
    export PATH="${BIN}:${PATH}"
  fi
  kind create cluster --image "kindest/node:${K8S_VERSION}"
  REGISTRY="localhost"
  KUBECONFIG="${E2E_DIR}/kubeconfig"
  kind get kubeconfig > "${KUBECONFIG}"
fi
# Create the test namespace
kubectl create namespace "${NAMESPACE}"
export REGISTRY
IMAGE_NAME="${REGISTRY}/prometheus-adapter-$(go env GOARCH)"
IMAGE_TAG="v$(cat VERSION)"
if [[ -n "${KIND_E2E}" ]]; then
  make container
  kind load docker-image "${IMAGE_NAME}:${IMAGE_TAG}"
else
  make push
fi
# Install prometheus-operator
kubectl apply -f "https://github.com/prometheus-operator/prometheus-operator/releases/download/${PROM_OPERATOR_VERSION}/bundle.yaml" --server-side
# Install and setup prometheus
kubectl apply -f ./test/prometheus-manifests --server-side
# Customize prometheus-adapter manifests
# TODO: use Kustomize or generate manifests from Jsonnet
cp -r ./deploy/manifests "${E2E_DIR}/manifests"
prom_url="http://prometheus.${NAMESPACE}.svc:9090/"
sed -i -e "s|--prometheus-url=.*$|--prometheus-url=${prom_url}|g" "${E2E_DIR}/manifests/deployment.yaml"
sed -i -e "s|image: .*$|image: ${IMAGE_NAME}:${IMAGE_TAG}|g" "${E2E_DIR}/manifests/deployment.yaml"
find "${E2E_DIR}/manifests" -type f -exec sed -i -e "s|namespace: monitoring|namespace: ${NAMESPACE}|g" {} \;
# Deploy prometheus-adapter
kubectl apply -f "${E2E_DIR}/manifests" --server-side
PROJECT_PREFIX="sigs.k8s.io/prometheus-adapter"
export KUBECONFIG
go test "${PROJECT_PREFIX}/test/e2e/" -v -count=1